sentences
sequence
labels
sequence
[ "Recent pretrained language models solved many reading comprehension benchmarks, where questions are written with the access to the evidence document.", "However, datasets containing information-seeking queries where evidence documents are provided after the queries are written independently remain challenging.", "We analyze why answering information-seeking queries is more challenging and where their prevalent unanswerabili-ties arise, on Natural Questions and TyDi QA.", "Our controlled experiments suggest two headrooms paragraph selection and answerability prediction, i.e. whether the paired evidence document contains the answer to the query or not.", "When provided with a gold paragraph and knowing when to abstain from answering, existing models easily outperform a human annotator.", "However, predicting answerability itself remains challenging.", "We manually annotate 800 unanswerable examples across six languages on what makes them challenging to answer.", "With this new data, we conduct per-category answerability prediction, revealing issues in the current dataset collection as well as task formulation.", "Together, our study points to avenues for future research in information-seeking question answering, both for dataset creation and model development.", "1 1 Introduction Addressing the information needs of users by answering their questions can serve a variety of practical applications.", "To answer such information-seeking queries where users pose a question because they do not know the answer in an unconstrained setting is challenging for annotators as they have to exhaustively search over the web.", "1 Our code and annotated data is publicly available at https://github.com/AkariAsai/ unanswerable_qa .", "To reduce annotator burden, the task has been sim-plified as reading comprehension: annotators are tasked with finding an answer in a single document.", "Recent pretrained language models surpassed estimated human performance (Liu et al., 2019; Devlin et al., 2019) in many reading comprehension datasets such as SQuAD (Rajpurkar et al., 2016) and CoQA (Reddy et al., 2019), where questions are posed with an answer in mind.", "However, those state-of-the-art models have difficulty answering information-seeking questions (Kwiatkowski et al., 2019; Choi et al., 2018).", "In this work, we investigate what makes information-seeking question answering (QA) more challenging, focusing on the Natural Questions (NQ; Kwiatkowski et al., 2019) and TyDi QA (Clark et al., 2020) datasets.", "Our experimental results from four different models over six languages on NQ and TyDi QA show that most of their headroom can be explained by two subproblems: selecting a paragraph that is relevant to a question and deciding whether the paragraph contains an answer.", "The datasets are annotated at the document level, with dozens of paragraphs, and finding the correct paragraph is nontrivial.", "When provided with a gold paragraph and an answer type (i.e., if the question is answerable or not), the performance improves significantly (up to 10% F1 in NQ), surpassing that of a single human annotator.", "After identifying the importance of answerability prediction, in Section 4, we compare a question only baseline, state-of-the-art QA models, and human agreement on this task.", "For comparison, we also evaluate unanswerability prediction in a reading comprehension dataset including unanswerable questions (Rajpurkar et al., 2018).", "While all datasets contain a large proportion of unanswerable questions (33-59%), they differ in how easily models can detect them.", "This motivates us to further investigate the source of unanswerability.", "To this end, we quantify the sources of unanswerability by annotating unanswerable questions from NQ and TyDi QA; we first classify unanswerable questions into six categories and then further annotate answers and alternative knowledge sources when we can find the answers to the unanswerable questions.", "Despite the difficulty of annotating questions from the web and crowdsourcing bilingual speakers, we annotated 800 examples across six typologically diverse languages.", "Our analysis shows that why questions are unanswerable differs based on the dataset or language.", "We conduct per-category answerability prediction on those annotated data, and found unanswerable questions from some categories are particularly hard to be identified.", "We provide a detailed analysis for alternative sources of an answer beyond Wikipedia.", "Grounded in our analysis, we suggest avenues for future research, both for dataset creation and model development based on the analysis.", "Our contributions are summarized as follows: We provide in-depth analysis on information-seeking QA datasets, namely on Natural Questions and TyDi QA to identify the remaining headrooms.", "We show that answerability prediction and paragraph retrieval remain challenging even for state-of-the-art models through controlled experiments using four different models.", "We manually annotate reasons for unanswerability for 800 examples across six languages, and suggest potential improvements for dataset collections and task design.", "We first define the terminology used in this paper.", "In this work, we focus on a reading comprehension setting, where reference documents (context) are given and thus retrieval is unnecessary, unlike open retrieval QA (Chen et al., 2021).", "Information-seeking QA datasets contain questions written by a human who wants to know the answer but doesn't know it yet.", "In particular, NQ is a collection of English Google Search Engine queries (anonymized) and TyDi QA is a collection of questions authored by native speakers of 11 languages.", "The answers are annotated post hoc by another annotator, who selects a paragraph with sufficient information to answer ( long answer ).", "Alternatively, the annotator can select unanswerable Data % Answerable % Avg.", "if there is no answer on the page, or if the information required to answer the question is spread across more than one paragraph.", "If they have identified the long answer, then the annotators are tasked to choose the short answer , a span or set of spans within the chosen paragraph, if there is any.", "Questions are collected independently from existing documents, so those datasets tend to have limited lexical overlap between questions and context, which is a common artifact in prior reading comprehension datasets (Sugawara et al., 2018).", "Reading comprehension datasets such as SQuAD (Rajpurkar et al., 2016), by contrast, have been created by asking annotators to write question and answer pairs based on a single provided paragraph.", "SQuAD 2.0 (Rajpurkar et al., 2018) includes unanswerable questions that are written by annotators who try to write confusing questions based on the single paragraph.", "As shown in Table 1, while unanswerable questions are very common in NQ, TyDi QA and SQuAD 2.0, there are some major differences between the first two datasets and the last: First, NQ and TyDi QA unanswerable questions arise naturally, while SQuAD 2.0 unanswerable questions are artificially created by annotators (e.g. changing an entity name).", "Prior work (Kwiatkowski et al., 2019) suggests that those questions can be identified as such with little reasoning.", "Second, while NQ or TyDi QA models have to select the evidence paragraph (long answer) from dozens of paragraphs, SQuAD 2.0 provides a single reference paragraph.", "That lengthy context provided in NQ and TyDi QA requires systems to select and focus on relevant information to answer.", "As of January 2021, the best models on NQ or TyDi QA lag behind humans, while several models surpass human performance on SQuAD and SQuAD 2.0.", "2 2 https://rajpurkar.github.io/ In the following sections, we focus on information-seeking QA datasets, investigating how to improve the answer coverage of those questions that are currently labeled as unanswerable through several controlled experiments and manual analysis.", "3 QA performances with Gold Answer Type and Gold Paragraph We quantify how the two aforementioned subproblems in information-seeking QA deciding answer type, also referred to as answer calibrations (Kamath et al., 2020) or answerability prediction, and finding a paragraph containing the answer affect the final QA performance.", "We conduct oracle analysis on existing models given two pieces of key information: Gold Paragraph and Gold Type .", "In the Gold Paragraph setting, we provide the long answer to limit the answer space.", "In the Gold Type setting, a model outputs the final answer following the gold answer type t i { short , long only , unanswerable } , which correspond to the questions with short answers, 3 questions with long answers only, and questions without any answers, respectively.", "This lifts the burden of answer calibration from the model.", "QA models.", "For NQ, we use RikiNet (Liu et al., 2020) 4 and ETC (Ainslie et al., 2020).", "These systems are within 3% of the best-performing systems on the long answer and short answer prediction tasks as of January 2021.", "We use the original mBERT (Devlin et al., 2019) baseline for TyDi QA.", "RikiNet uses an answer type predictor whose predicted scores are used as biases to the predicted long and short answers.", "ETC and mBERT jointly predict short answer spans and answer types, following Alberti et al. (2019).", "Human.", "The NQ authors provide upper-bound performance by estimating the performance of a single annotator (Single), and one of the aggregates of 25 annotators (Super).", "Super-annotator performance is considered as an NQ upper bound.", "See complete distinction in Kwiatkowski et al. (2019).", "SQuAD-explorer/ 3 The short answer is found inside the long answer, so long answer is also provided.", "4 We contacted authors of RikiNet for the prediction files.", "We appreciate their help.", "The final metric of NQ is based on precision, recall and F1 among the examples where more than one annotators select NON-NULL answers and a model predicts a NON-NULL answer (Kwiatkowski et al., 2019), to prevent a model always outputting unanswerable for achieving high scores.", "TyDi QA evaluation is based on recall, precision and byte-level F1 scores among the examples with answer annotations.", "The final score is calculated by taking a macro-average score of the results on 11 target languages.", "Table 2 presents oracle analysis on NQ.", "Having access to gold answer type and gold paragraph is almost equally crucial for short answer performance on NQ.", "For long answers, we observe that the models rank the paragraphs correctly but struggle to decide when to abstain from answering.", "When the gold type is given, ETC reaches 84.6 F1 for the long answer task, which is only 2.6 points behind the upper bound, and significantly outperforms single annotator performance.", "Provided both gold paragraph and answer type (Gold T&P), the model's short answer F1 score reaches 10% above that of a single annotator, while slightly behind super human performance.", "For short answers, providing gold paragraph can improve ETC's performance by 5 points, gaining mostly in recall.", "Having the gold answer type information also significantly improves recall at a small cost of precision.", "Table 3 shows that a similar pattern holds in TyDi QA: answerability prediction is a remaining challenge for TyDi QA model.", "5 Given the gold type information, the long answer F1 score is only 1.4 points below the human performance.", "These results suggest that our models performed well when selecting plausible answers and would benefit from improved answerability prediction.", "We first quantitatively analyze how easy it is to estimate answerability from the question alone, and then we test the state-of-the-art models' performance to see how well our complex models given question and the gold context perform on this task.", "We conduct the same experiments on SQuAD 2.0, to highlight the unique challenges of the information-seeking queries.", "Each example consists of a question q i , a list of paragraphs of an evidence document d i , and a list of answer annotations A i , which are aggregated into an answer type t i { short , long , unanswerable } .", "Majority baseline.", "We output the most frequent label for each dataset (i.e., short for NQ, unanswerable for TyDi QA and SQuAD 2.0).", "Question only model (Q only).", "This model takes a question and classify it into one of three classes (i.e., short , long , unanswerable ) solely based on the question input.", "In particular, we use a BERT-based classifier: encode each input question with BERT, and use the [CLS] token as the summary representation to classify.", "Experimental details can be found in the appendix.", "QA models.", "We convert the state-of-the-art QA models' final predictions into answer type predictions.", "When a QA system outputs any short/long answers, we map them to short / long type; otherwise we map them to unanswerable .", "We use ETC for NQ, and mBERT baseline for TyDi QA as in Section 3.3.", "For SQuAD 2.0, we use Retro-reader (Zhang et al., 2021).", "6 The evaluation 5 We do not experiment with Gold P setting for TyDi QA, as it's included in the original paper (Clark et al., 2020).", "6 We contacted authors of Retro-reader for the prediction file.", "We appreciate their help.", "long , short , none for three-way classification and answerable,unanswerable for two-way classification.", "script of NQ and TyDi QA calibrates the answer type for each question by thresholding long and short answers respectively to optimize the F1 score.", "We use the final predictions after this calibration process.", "Human.", "We compare the models' performance with two types of human performance: binary and aggregate.", "Binary evaluation computes pair-wise agreements among all combinations of 5 annotators for NQ and 3 annotators for TyDi QA.", "Aggregate evaluation compares each annotator's label to the majority label selected by the annotators.", "This inflates human performance modestly as each annotator's own label contributes to the consensus label.", "The results in Table 4 indicate the different characteristics of the naturally occurring and artificially annotated unanswerable questions.", "Question only models yield over 70% accuracy in NQ and TyDi QA, showing there are clues in the question alone, as suggested in Liu et al. (2020).", "While models often outperform binary agreement score between two annotators, the answer type prediction component of ETC performs on par with the Q only model, suggesting that answerability calibration happens mainly at the F1 optimization processing.", "Which unanswerable questions can be easily identified?", "We randomly sample 50 NQ examples which both Q only and ETC successfully answered.", "32% of them are obviously too vague or are not valid questions (e.g., bye and bye going to see the king by blind willie johnson, history of 1st world war in Bangla language).", "13% of them include keywords that are likely to make the questions unanswerable (e.g., which of the following would result in an snp?).", "14% of the questions require complex reasoning, in particular, listing entities or finding a maximum / best one (e.g., top 10 air defense systems in the world), which are often annotated as unanswerable in NQ due to the difficulty of finding a single paragraph answering the questions.", "Models, including the Q only models, seem to easily recognize such questions.", "Comparison with SQuAD 2.0.", "In SQuAD 2.0, somewhat surprisingly, the question only baseline achieved only 63% accuracy.", "We hypothesize that crowdworkers successfully generated unanswerable questions that largely resemble answerable questions, which prevents the question only model from exploiting artifacts in question surface forms.", "However, when the context was provided, the QA model achieves almost 95% accuracy, indicating that detecting unanswerability becomes substantially easier when the correct context is given.", "Yatskar (2019) finds the unanswerable questions in SQuAD 2.0 focus on simulating questioner confusion (e.g., adding made-up entities, introducing contradicting facts, topic error), which the current state-of-the-art models can recognize when the short reference context is given.", "By design, these questions are clearly unanswerable, unlike information-seeking queries which can be partially answerable.", "Thus, identifying unanswerable information-seeking queries poses additional challenges beyond matching questions and contexts.", "In this section, we conduct an in-depth analysis to answer the following questions:", "(i) where the unanswerability in information-seeking QA arises,", "(ii) whether we can answer those unanswerable questions when we have access to more knowledge sources beyond a single provided Wikipedia article, and", "(iii) what kinds of questions remain unanswerable when these steps are taken.", "To this end, we annotate 800 unanswerable questions from NQ and TyDi QA across six languages.", "Then, we conduct per-category performance analysis to determine the types of questions for which our models fail to predict answerability.", "We first define the categories of the unanswerable questions.", "Retrieval miss includes questions that are valid and answerable, but paired with a document which does not contain a single paragraph which can answer the question.", "We subdivide this category into three categories based on the question types: factoid , non-factoid , and multi-evidence questions.", "Factoid questions are unanswerable due to the failure of retrieving articles with answers available on the web.", "These questions fall into two categories: where the Wikipedia documents including answers are not retrieved by Google Search, or where Wikipedia does not contain articles answering the questions so alternative knowledge sources (e.g., non-Wikipedia articles) are necessary.", "We also find a small number of examples whose answers cannot be found on the web even when we exhaustively searched dozens of web-pages.", "7 Non-factoid questions cover complex queries whose answers are often longer than a single sentence and no single paragraphs fully address the questions.", "Lastly, multi-evidence questions require reasoning over multiple facts such as multi-hop questions (Yang et al., 2018; Dua et al., 2019).", "A question is assigned this category only when the authors need to combine information scattered in two or more paragraphs or articles.", "Theoretically, the boundaries among the categories can overlap (i.e., there could be one paragraph that concisely answers the query, which we fail to retrieve), but in practice, we achieved a reasonable annotation agreement.", "Invalid QA includes invalid questions , false premise and invalid answers .", "Invalid questions are ill-defined queries, where we can only vaguely guess the questioner's intent.", "NQ authors found 14% of NQ questions are marked as bad questions; here, we focus on the unanswerable subset of the original data.", "We regard queries with too much ambiguity or subjectivity to determine single answers as invalid questions (e.g., where is turkey commodity largely produced in our country ).", "False premise (Kim et al., 2021) are questions based on incorrect presuppositions.", "For example, the question in Table 5 is valid, but no Harry Potter movie was released in 2008, as its sixth movie release was pushed back from 2008 to 2009 to booster its release schedule.", "Invalid answers are annotation errors, where the annotator missed an answer existing in the provided evidence document.", "We randomly sampled and intensively annotated a total of 450 unanswerable questions from the NQ", "development set, and 350 unanswerable questions across five languages from the TyDi QA development set.", "Here, we sample questions where annotators unanimously agreed that no answer exists.", "See Table 6 for the statistics.", "For NQ, the authors of this paper annotated 100 examples and adjudicated the annotations to clarify common confusions.", "The remaining 350 questions were annotated individually.", "Before the adjudication, the annotators agreed on roughly 70% of the questions.", "After this adjudication process, the agreements on new samples reached over 90%.", "For TyDi QA, we recruit five native speakers to annotate examples in Bengali, Japanese, Korean, Russian, and Telugu.", "We provide detailed instructions given the adjudication process, and closely communicate with each annotator when they experienced difficulty deciding among multiple categories.", "Similar to NQ annotation, annotators searched the answers using Google Search, in both the target language and English, referring to any web pages (not limited to Wikipedia) and reannotated the answer, while classifying questions into the categories described earlier.", "Causes of unanswerability.", "Table 6 summarizes our manual analysis.", "We found different patterns of unanswerability in the two datasets.", "Invalid answers were relatively rare in both, which shows they are high quality.", "We observe that invalid answers are more common for questions where annotators need to skim through large reference documents.", "In NQ, where the questions are naturally collected from user queries, ill-defined queries were prevalent (such queries account for 14% of the whole NQ data, but 38% of the unanswerable % Retrieval Miss % Invalid N Fact Non-F Multi q.", "subset).", "In TyDi QA, document retrieval was a major issue across all five languages (50-74%), and a significantly larger proportion of re-annotated answers were found in other Wikipedia pages (50% in TyDi QA v.s. 21.8% in NQ), indicating that the retrieval system used for document selection made more mistakes.", "Document retrieval is a crucial part of QA, not just for modeling but also for dataset construction.", "We observe more complex and challenging questions in some TyDi QA languages; 20% of the unanswerable questions in Korean and 32% of the unanswerable questions in Russian require multiple paragraphs to answer, as opposed to 6% in NQ.", "Alternative knowledge sources.", "Table 7 shows the breakdown of the newly annotated answer sources for the retrieval miss (factoid) questions.", "As mentioned above, in TyDi QA new answers are found in other Wikipedia pages (66.7% of retrieval miss in Japanese subset, 55.6% in Korean subset and 34.8% in Russian), while in NQ, the majority of the answers are from non-Wikipedia websites, which indicates that using Wikipedia as the single Number (%) dataset total Ib / tab diff.", "knowledge source hurts the coverage of answerability.", "Table 8 shows retrieval miss (factoid) questions in TyDi Japanese, Korean and Russian subsets.", "In the first example, the retrieved document is about a voice actor who has acted on a character named Vincent.", "Yet, Japanese Wikipedia has an article about Vince Lombardi, and we could find the correct answer 57 there.", "The second group shows two examples where we cannot have Wikipedia articles with sufficient information to answer but can find non-Wikipedia articles on the web.", "For example, we cannot find useful Korean Wikipedia articles for a question about Pokemon, but a non-Wikipedia Pokemon fandom page clearly answers this question.", "This is also prevalent in NQ.", "We provide a list of the alternative web articles sampled from the retrieval misses (factoid) cases of NQ in Table 11 in the appendix.", "For the TyDi QA dataset, answers were sometimes found in tables or infoboxes of provided Wikipedia documents.", "This is because TyDi QA removes non-paragraph elements (e.g., Table, List, Infobox) to focus on the modeling challenges of multilingual text (Clark et al., 2020).", "WikiData also provides an alternative source of information, covering roughly 15% of queries.", "These results show the potential of searching heterogeneous knowledge sources (Chen et al., 2020b; Oguz et al., 2020) to increase answer coverage.", "Alternatively, Asai et al. (2021) show that searching documents in another language significantly increases the answer coverage of the questions particularly in low-resource languages.", "Lastly, a non-negligible number of Telugu and Bengali questions cannot be answered even after an extensive search over multiple documents due to the lack of information on the web.", "A Bengali question asks Who is the father of famous space researcher Abdus Sattar Khan (a Bangladeshi scientist)?, and our annotator could not find any supporting documents for this question.", "Limitations of the current task designs.", "Table 9 shows non-factoid or multi-evidence questions from TyDi QA, which are marked as unanswerable partially due to the task formulation answers have to be extracted from a single paragraph based on the information provided in the evidence document.", "On the first three examples of non-factoid questions, we have found that to completely answer the questions, we need to combine evidence from multiple paragraphs and to write descriptive answers.", "The second group shows several examples for multi-evidence questions.", "Although they are not typical compositional questions in multihop QA datasets (Yang et al., 2018), it requires comparison across several entities.", "How challenging is it to detect unanswerablity from different causes?", "Table 10 shows the per-category performance of answerability prediction using the models from Section", "4. Both Q only and QA models show the lowest error rate on invalid questions on NQ, suggesting that those questions can be easily predicted as unanswerable, even from the question surface only.", "Unsurprisingly, all models struggle on the invalid answer category.", "We found that in some of those cases, our model finds the correct answers but is penalized.", "Detecting factoid questions' unanswerability is harder when reference documents are incorrect but look relevant due to some lexical overlap to the questions.", "For example, given a question who sang the song angel of my life and the paired document saying My Life is a song by Billy Joel that first appeared on his 1978, which is about a different song, our QA model extracts Billy Joel as the answer with a high confidence score.", "This shows that even the state-of-the-art models can be fooled by lexical overlap.", "We summarize directions for future work from the manual analysis.", "First, going beyond Wikipedia as the only source of information is effective to increase the answer coverage.", "Many of the unanswerable questions in NQ or TyDi QA can be answered Sub-Type Example Query Original Wiki Title New article Answer different Wikipedia", "if we use non-Wikipedia web pages (e.g., IMDb) or structured knowledge bases (e.g., WikiData).", "Alternative web pages where we have found answers have diverse formats and writing styles.", "Searching those documents to answer information-seeking QA may introduce additional modeling challenges such as domain adaptation or generalization.", "To our knowledge, there is no existing large-scale dataset addressing this topic.", "Although there are several new reading comprehension datasets focusing on reasoning across multiple modalities (Talmor et al., 2021; Hannan et al., 2020), limited prior work integrate heterogeneous knowledge sources for open-domain or information-seeking QA (Oguz et al., 2020; Chen et al., 2021).", "Invalid or ambiguous queries are common in information-seeking QA, where questions are often under-specified.", "We observed there are many ambiguous questions included in NQ data.", "Consistent with the findings of Min et al. (2020), we have found that many of the ambiguous questions or ill-posed questions can be fixed by small edits, and we suggest asking annotators to edit those questions or asking them a follow-up clarification instead of simply marking and leaving the questions as is in the future information-seeking QA dataset creation.", "Lastly, we argue that the common task formulation, extracting a span or a paragraph from a single document, limits answer coverage.", "To further improve, models should be allowed to generate the answer based on the evidence document (Lewis et al., 2020), instead of limiting to selecting a single span in the document.", "Evaluating the correctness of free-form answers is more challenging, and requires further research (Chen et al., 2020a).", "While all the individual pieces might be revealed in independent studies (Min et al., 2020; Oguz et al., 2020), our study quantifies how much each factor accounts for reducing answer coverage.", "Analyzing unanswerable questions.", "There is prior work that seeks to understand unanswerability in reading comprehension datasets.", "Yatskar (2019) analyzes unanswerable questions in SQuAD 2.0 and two conversational reading comprehension datasets, namely CoQA and QuAC, while we focus on information-seeking QA datasets to understand the potential dataset collection improvements and quantify the modeling challenges of the state-of-the-art QA models.", "Ravichander et al. (2019) compare unanswerable factors between NQ and a QA dataset on privacy policies.", "This work primarily focuses on a privacy QA, which leads to differences of the categorizations of the unanswerable questions.", "We search alternative knowledge sources as well as the answers to understand how we could improve answer coverage from dataset creation perspective and connect the annotation results with answerability prediction experiments for modeling improvements.", "Answer Calibrations.", "Answerability prediction can bring practical values, when errors are expensive but abstaining from it is less so (Kamath et al., 2020).", "While predicting answerability has been studied in SQuAD 2.0 (Zhang et al., 2021; Hu et al., 2019), the unanswerability in SQuAD 2.0 has different characteristics from unanswerability in information-seeking QA as we discussed above.", "To handle unanswerable questions in information-seeking QA, models either adopt threshold based answerable verification (Devlin et al., 2019), or introduce an extra layer to classify unanswerablity and training the model jointly (Zhang et al., 2020; Yang et al., 2019).", "Kamath et al. (2020) observes the difficulty of answer calibrations, especially under domain shift.", "Artifacts in datasets.", "Recent work (Gururangan et al., 2018; Kaushik and Lipton, 2018; Sugawara et al., 2018; Chen and Durrett, 2019) exhibited that models can capture annotation bias in crowd-sourced data effectively, achieving high performance when only provided with a partial input.", "Although NQ and TyDi QA attempt to avoid such typical artifacts of QA data by annotating questions independently from the existing documents (Clark et al., 2020), we found artifacts in question surface forms can let models easily predict answerability with a partial input (i.e., question only).", "We provide the first in-depth analysis on information-seeking QA datasets to inspect where unanswerability arises and quantify the remaining modeling challenges.", "Our controlled experiments identifies two remaining headrooms, answerability prediction and paragraph selection.", "Observing a large percentage of questions are unanswerable, we provide manual analysis studying why questions are unanswerable and make suggestions to improve answer coverage: (1) going beyond Wikipedia textual information as the only source of information, (2) addressing ambiguous queries instead of simply marking and leaving the questions as is, (3) enable accessing multiple documents and introducing abstractive answers for non-factoid questions.", "Together, our work shed light on future work for information-seeking QA, both for modeling and dataset design.", "All of the manual annotations conducted by the authors of the papers and our collaborators.", "The NQ and TyDi QA data is publicly available and further analysis built upon on them is indeed encouraged.", "This work would encourage future dataset creation and model development for information-seeking QA towards building a QA model that could work well on users' actual queries.", "We thank Jon Clark, Michael Collins, Kenton Lee, Tom Kwiatkowski, Jennimaria Palomaki, Sewon Min, Colin Lockard, David Wadden, Yizhong Wang for helpful feedback and discussion.", "We thank Vitaly Nikolaev for helping with the Russian data annotation, Trina Chatterjee for help with Bengali data annotation, and for Aditya Kusupati for Telegu data annotation.", "We also thank the authors of RikiNet, Retro-reader and ETC for their cooperation on analyzing their system outputs.", "We are grateful for the feedback and suggestions from the anonymous reviewers.", "This research was supported by gifts from Google and the Nakajima Foundation Fellowship." ]
[ "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "abstain", "objective", "result", "method", "abstain", "objective", "result", "abstain", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "other", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "result", "other", "other", "method", "other", "method", "abstain", "other", "other", "method", "other", "other", "other", "other", "abstain", "objective", "method", "result", "abstain", "method", "abstain", "method", "other", "other", "other", "other", "other" ]
[ "Unsupervised relation discovery aims to discover new relations from a given text corpus without annotated data.", "However, it does not consider existing human annotated knowledge bases even when they are relevant to the relations to be discovered.", "In this paper, we study the problem of how to use out-of-relation knowledge bases to supervise the discovery of unseen relations, where out-of-relation means that relations to discover from the text corpus and those in knowledge bases are not overlapped.", "We construct a set of constraints between entity pairs based on the knowledge base embedding and then incorporate constraints into the relation discovery by a variational auto-encoder based algorithm.", "Experiments show that our new approach can improve the state-of-the-art relation discovery performance by a large margin.", "Relation extraction has been widely used for many applications, such as knowledge graph construction (Dong et al., 2014), information retrieval (Liu et al., 2014), and question answering (Ravichan-dran and Hovy, 2002).", "Traditional supervised approaches require direct annotation on sentences with a relatively small number of relations (Roth and Yih, 2002; Kambhatla, 2004).", "1 With the development of large-scale knowledge bases (KBs) such as Freebase (Bollacker et al., 2008), relation extraction has been extended to larger scales comparable to KBs using the distant supervision (Mintz et al., 2009).", "However, when the training corpus does not support the annotated relations showing in the KB, such approach could fail to find sufficient training examples.", "Distant supervision assumption can be violated by up to 31% 1 We distinguish a relation (e.g., a predicate in a knowledge base) from the relation expression (e.g., the text surface between entities in a sentence) throughout the paper.", "for some relations when aligning to NYT corpus (Riedel et al., 2010).", "More importantly, either traditional supervised learning or distantly supervised learning cannot discover new relations unseen in the training phase.", "Unsupervised relation discovery tries to overcome the shortcomings of supervised or distantly supervised learning approaches.", "Existing approaches either extract surface or syntactic patterns from sentences and use relation expressions as predicates (which result in many noisy relations) (Etzioni et al., 2004; Banko et al., 2007), or cluster the relation expressions based on the extracted triplets to form relation clusters (Yao et al., 2011, 2012; Marcheggiani and Titov, 2016).", "However, these approaches do not use existing high-quality and large-scale KBs when they are relevant to the relations to be discovered.", "In this paper, we consider a new relation discovery problem where both the training corpus for relation clustering and a KB are available, but the relations in the training corpus and those in the KB are not overlapped.", "As shown in Figure 1, in the KB, we have entities Pink Floyd , Animals , etc., with some existing relations notable work and has member in the KB.", "However, when doing relation discovery, we can only get supporting sentences that suggest new relations based on and influenced by .", "This is a common and practical problem since predicates in KBs are limited to the annotator defined relations while the real relations in the world are always open and creative.", "It is challenging when there is no overlapped relation between target relation clusters and the KB because in this case the KB is not a direct supervision.", "But if target relation clusters and the KB share some entities, we can use the shared entities as a bridge to introduce indirect supervision for the relation discovery problem.", "Specifically, we build constraints between pairs of tuples based on the Pink Floyd RogerWaters Animals AmusedtoDeath AmusingOurselvestoDeath AnimalFarm GeorgeOrwell Neil Postman Amused to Death was inspried by Neil Postman's book Amusing Ourselves to Death .", "Postman distinguishes the Orwellian vision of the future, from that offered by Aldous Huxley in Brave New World.", "KB.", "For example, in Figure 1, when we cluster the based on relation, we can evaluate the similarity between the tuple ( Animals , Animal Farm ) and the tuple ( Amused to Death , Amusing Ourselves to Death ) based on the KB.", "If the KB tells us these two pairs of tuples are close to each other, then we put a constraint to force our relation clustering algorithm to group them together.", "We use the discrete-state variational autoencoder (DVAE) framework (Marcheggiani and Titov, 2016) as our base relation discovery model since this framework is flexible to incorporate different features and currently the state-of-the-art.", "We use KB embedding (Bordes et al., 2013) to obtain entity embeddings in the KB and use entity embeddings to evaluate the similarity between a pair of tuples.", "Then constraints are constructed and incorporated into the DVAE framework in a way inspired by the must-link and cannot-link based constrained clustering (Basu et al., 2004).", "We show that with no overlapped relations between the KB and the training corpus, we can improve the relation discovery by a large margin.", "Our contributions are summarized as follows.", "We study a new prevalent but challenging task of relation discovery where the training corpus and the KB have no overlapped relation.", "We propose a new kind of indirect supervision to relation discovery which is built based on pairwise constraints between two tuples.", "We show promising results using existing relation discovery datasets to demonstrate the effectiveness of our proposed learning algorithm for the new relation discovery task.", "The code we used to train and evaluate our models is available at https://github.com/ HKUST-KnowComp/RE-RegDVAE .", "We use X to denote the set of all training sentences.", "V is the set of named entities that are recognized by an NER system in X , and ( e 1 , e 2 ) is the pair of first and second entities in a given sentence x X .", "RX is the set of relation labels for X .", "In addition, there exists an external knowledge base G ( EG , TG ) , consisting of a set of entities EG and relations RG and triplets TG where a triplet consists of two entities with their relation.", "Our model is a relation extractor to predict the underlying semantic relation r RX given sentences X , with the help of G ( EG , TG ) .", "In particular, we focus on the challenging scenario where RX RG = .", "In this section, we first review the discrete-state variational autoencoder (DVAE) in 3.1.", "Then we introduce our new framework in 3.2.", "Assuming that we perform generative modeling, where each latent relation r follows a uniform prior distribution p u ( r ) , we follow (Marcheggiani and Titov, 2016) to optimize a pseudo-likelihood:", "L ( ) = log (cid:88) r R X p ( e i , e i | r, ) p u ( r ) (1) 2 (cid:88) i =1 log (cid:88) r R X p ( e i | e i , r, ) p u ( r ) , (2)", "where e i and e i are entities, i { 1 , 2 } and e i denotes the complement { e 1 , e 2 } \\ { e i } .", "p ( e i | e i , r, ) is the probability of one entity given another entity as well as the relation, where denotes the set of parameters.", "Note that this probability p is defined on the triplet ( e 1 , r, e 2 ) which is universal across different sentences containing the two entities.", "The pseudo-likelihood L ( ) can be lower-bounded based on Jensen's inequality through a variational posterior q ( r | x, ) : L ( , ) = 2 (cid:88) i =1 (cid:88) r R T q ( r | x, ) log p ( e i | e i , r, ) + H [ q ( r | x, )] , (3) where q ( r | x, ) predicts the relation based on the whole sentence x as an input and as the set of parameters.", "H is the entropy to regularize the probability distribution q , and is the hyper-parameter to balance the regularization strength.", "This model consists of two components, an encoder q ( r | x, ) which encodes sentence features into a relation distribution, and a decoder p ( e i | r, e i , ) which predicts an entity given the relation cluster and another entity.", "Both are modeled by softmax functions: q ( r | x, ) = exp ( w (cid:124) r g ( x )) (cid:80) r (cid:48) R X exp (cid:0) w (cid:124) r (cid:48) g ( x ) (cid:1) , (4) p ( e i | e i , r, ) = exp ( ( e i , e i , r, )) (cid:80) e (cid:48) i V exp ( ( e (cid:48) i , e i , r, )) , (5) where = { w r | r RX } and g ( x ) is a vector representation of sentence x , which can be high-dimensional one-hot feature encodings or low-dimensional sentence embeddings encoded by deep neural networks.", "( e 1 , e 2 , r, ) can be a general scoring function defined over triplets.", "We use the instantiation with the best performance shown by (Marcheggiani and Titov, 2016), which is a combination of bilinear model and selectional preference model: ( e 1 , e 2 , r, ) = e (cid:124) 1 C r e 2 + [ e 1 , e 2 ] (cid:124) r (6) where = { C r , r , e i | r RT , e i V} , C r is a matrix, r is a vector for the relation r , e 1 and e 2 are the vectors for head and tail entities respectively, and [ e 1 , e 2 ] is the concatenation of the vector representations of the two entities.", "The DVAE model directly optimizes the variational lower bound by doing gradient ascent for and jointly.", "Both encoder q ( r | x, ) and decoder p ( e i | r, e i , ) are implemented as neural networks.", "Standard training techniques and tricks can be applied.", "Our KB constraint framework can be summarized as a two-step procedure: KB constraints construction and regularization for the learning model.", "In the constraints construction step, a set of sentences is formed as a query to KB and retrieves a set of constraints back.", "Then in the regularization step, we apply the constraint to regularize posterior distributions of the relation extractor.", "Conceptually, given a set of sentences X , we want to bias the learning result: After the entities are linked to the KB, if KB inference indicates that some pairs should be in a relation based on a set of rules , then the extractor should be constrained to output it.", "This constraint can be encoded into a feature function Q ( X ) = entity pairs in the same relation based on and put into the posterior regularization framework (Gillenwater et al., 2011).", "However, the computational complexity of the feature function is exponential since we need to traverse the KB to find .", "We instead consider the must-link and cannot-link constraints (Basu et al., 2004), indicating respectively that a pair of sentences should be or should not be labeled as the same relation.", "For each pairwise constraint, the model assigns an associated cost of violating that constraint for the model regularization.", "From the perspective of KB, a must-link constraint on sentences ( x 1 , x 2 ) exists if two pairs of entities ( p 1 , p 2 ) = [( e 1 , 1 , e 1 , 2 ) , ( e 2 , 1 , e 2 , 2 )] are similar given the KB, where ( e i, 1 , e i, 2 ) is the entity pair", "belongs to sentence x i .", "This motivates us to define a similarity score for a pair of entity pairs.", "Instead of modeling the common relation paths or logic rules, which is computationally infeasible, we compare them in the latent embedding space.", "In particular, we model the KB using the TransE (Bordes et al., 2013) model, where a relation is interpreted as a translation from the head entity to the tail entity, with a score function, e 1 + r = e 2 for each gold triplet ( e 1 , r, e 2 ) in the KB.", "This operation is fast and the latent embeddings are expressive in many cases.", "Then we can reason the latent relation representation of a particular pair in vector space by r i = e i, 2 e i, 1 , without the need for extra parameters.", "Here r i is not necessarily a real relation between two entities in the KB but just reflects the geometric property.", "The penalty for violating a must-link constraint between a pair of sentences with a high KB score should be higher than those with low KB scores.", "This further inspires us to define a soft constraint penalty based on the similarity of latent KB relations.", "Here, we use the adjusted cosine similarity (Sar-war et al., 2001) between two latent relations as a must-link confidence score s + ( x 1 , x 2 ) = [cos( e 1 , 2 e 1 , 1 , e 2 , 2 e 2 , 1 )] + + (7) where [ x ] + + = x if x > + otherwise 0, + [0 , 1] is a threshold we defined to control the must-link scope, e i,j is named entity in x i and e i,j is its embedding.", "The similarity between e 1 , 2 e 1 , 1 and e 2 , 2 e 2 , 1 evaluates whether two sentences indicate similar relations according to the KB embedding.", "We also define the cannot-link in a similar way, where two sentences cannot be in the same cluster with a confidence s ( x 1 , x 2 ) = [cos( e 1 , 2 e 1 , 1 , e 2 , 2 e 2 , 1 )] (8) where [ x ] = x if x < otherwise 0, and [0 , 1] is a threshold we defined to control the cannot-link scope.", "We simply set + = = .", "For each pair of sentences ( x 1 , x 2 ) , the relation extractor will predict a clustering posterior q i ( r | x i , ) , i = 1 , 2 , which can be computed based on Eq.", "(4).", "We regularize the clustering result on the probability distance between sentence pairs, using either Euclidean L 2 distance, Kullback-Leibler (KL) divergence, or Jensen-Shannon (JS) divergence.", "The computation of the distance or divergences can be found in Table 1. Then the soft constraints introduced in 3.2.1 are applied on the corresponding distance to calculate the regularization terms: D + ( x 1 , x 2 ) = d ( q 1 ( r ) , q 2 ( r )) s + ( x 1 , x 2 ) , (9) D ( x 1 , x 2 ) = d ( q 1 ( r ) , q 2 ( r )) | s ( x 1 , x 2 ) | , (10) for must and cannot links respectively, where d can be d Euc , d KL , or d JS .", "Taking must-link constraint as an example, if the posterior distributions q 1 ( r | x 1 , ) and q 2 ( r | x 2 , ) are different from each other but KB suggests that these two sentences should be in the same cluster where s + ( x 1 , x 2 ) is large, then d ( q 1 ( r ) , q 2 ( r )) being large means there is a large cost when q 1 and q 2 being different.", "Then in the training phase, we want to reduce this cost given the constraint.", "The constraints above are defined in a |X ||X | space.", "It is almost impossible to enumerate all of the constraints.", "To make it trainable, we instead gather the constraints within a mini-batch.", "Since in different training epochs we randomly permute the training samples, it is possible to touch many pairs of sentences in practice.", "The model parameters only exist in original autoencoder components (i.e., and ), which can be jointly optimized by maximizing the following", "L ( , ) = (cid:88) x X 2 (cid:88) i =1 (cid:88) r RT q ( r | x, ) log p ( e i | e i , r, + (cid:88) x X H [ q ( r | x, )] + (cid:88) X i X (cid:88) ( x 1 ,x 2 ) X i D ( x 1 , x 2 ) + (cid:107) ( , ) (cid:107) 2 ,", "where , , , and are hyper-parameters to control the regularization strength.", "D can be D + or D depending on the cosine similarity between pairs.", "In practice, we apply annealing method over in an exponential way: t = 0 exp( t ) and = log( 0 / T ) T , where 0 is the initial value, and T is the final value, t and T are the current and total training steps respectively.", "This method enables the extractor to explore more possibilities first and finally converge to a stable distribution.", "It is difficult to directly compute the partition function in Eq.", "(5), as it requires to sum over |V| .", "We use the same negative sampling method as (Marcheggiani and Titov, 2016) to substitute log p ( e i | e i , r, ) in Eq.", "(11) with: log p ( e i | e i , r, ) log ( ( e i , e i , r, )) + (cid:88) e neg N log ( ( e neg , e i , r, )) , where N is the set of randomly sampled entities in V and is the sigmoid function.", "We evaluate our model in the context of unsupervised relation discovery and compare to the baseline model, DVAE (Marcheggiani and Titov, 2016) which is the current state-of-the-art of relation discovery.", "Distant supervision assumes that the relations should be aligned between the KB and the training text corpus, which is not available in our setting.", "We tested our model on three different subsets of New York Times corpus (NYT) (Sandhaus and Evan, 2008).", "The first one is widely used in unsupervised settings, which was developed by Yao et al. (2011) and has also been used by Marcheggiani and Titov (2016).", "This dataset contains articles 2000 to 2007, with named entities annotated and features processed (POS tagging, NER, and syntactic parsing).", "We use this dataset to compare with previous work directly (Marcheggiani and Titov, 2016).", "The second and third ones are usually applied by supervised models.", "So when they generated the data, they tended to focus on relations with more supporting sentences.", "The second one was developed by Zeng et al. (2017).", "The data is built by aligning Wikidata (Vrandecic, 2012) relations with NYT corpus, as a result of 99 possible relations.", "It is built to contain more updated facts and richer structures of relations, e.g., a larger number of relation/relation paths.", "We use this dataset to amplify the effects coming from relation paths in KB, as the data was used to train a path-based relation extraction model.", "The third one was developed by Riedel et al. (2010) and has also been used by Lin et al. (2016).", "This dataset was generated by aligning Freebase (Bollacker et al., 2008) relations with NYT in 2005-2007, and with 52 possible relations.", "We use this data to test the clustering result with a narrow relation domain.", "We align these datasets against FB15K, which is a randomly sampled subset of Freebase developed by Bordes et al. (2013).", "For each of the datasets above, we hold out the triplets in FB15K that contains relations in corresponding text data, so that we ensure that KB cannot give any direct supervision on any relation labels.", "We then discard named Model Metrics Prediction based on encoder Prediction based on decoder F1 NMI F1 NMI Mean Std Mean Std Mean Std Mean Std DVAE 0.417 0.011 0.339 0.009 0.419 0.011 0.337 0.014 RegDVAE (Euclidean at encoder) 0.469 0.014 0.430 0.020 0.448 0.020 0.384 0.020 RegDVAE (KL at encoder) 0.375 0.009 0.359 0.014 0.380 0.011 0.355 0.014 RegDVAE (JS at encoder) 0.435 0.038 0.370 0.042 0.409 0.012 0.336 0.005 RegDVAE (Euclidean at decoder) 0.416 0.019 0.329 0.017 0.350 0.012 0.201 0.054 Table 3: Comparison results on NYT122 with different prediction and regularization strategies (using encoder or decoder).", "entities in text corpus if they are not shown in KB, so that we can directly test the influence of our KB constraint model.", "Finally, we only keep a single label for each sentence, and e 1 , e 2 follow the occurrence order in the sentence.", "The resulting datasets contain 122, 71, and 27 relation labels respectively, so we name them as NYT122, NYT71, and NYT27.", "The statistics of the three datasets are shown in Table 2. For NYT71 and NYT27, we perform the same feature extraction as NYT122 shown in (Marcheggiani and Titov, 2016).", "All the model parameters are initialized randomly.", "The number of negative samples is set to 5, mini-batch size is set to 100 with 80 epochs.", "We optimize all the models using AdaGrad (Duchi et al., 2011) with initial learning rate at 0.5.", "For NYT122, we induce 40 relations clusters, with 0 = 4 , T = 10 5 , = 0 .", "6 , and = 0 .", "9 .", "For NYT71, we induce 30 relations clusters, with 0 = 2 , T = 10 4 , = 0 .", "8 , and = 0 .", "95 .", "For NYT27, we induce 20 relations clusters, with 0 = 2 , T = 10 4 , = 0 .", "8 , and = 0 .", "3 .", "We train TransE as our KB embedding model with 50 dimensions and 1,000 epochs.", "We report the average and standard deviation based on five different runs.", "We randomly split the data into validation:test=4:6.", "All the model selections were based on validation sets, and final evaluation results will be only based on test sets.", "As the scoring function, we use the B 3 F 1 (Bagga and Baldwin, 1998) which has also been used by our baseline (Marcheggiani and Titov, 2016), and Normalized Mutual Information (NMI) (Strehl and Ghosh, 2002) metrics.", "Both are standard measures for evaluating clustering tasks.", "Regularization and Prediction Strategies.", "We first report our results on NYT122 using different regularization and prediction settings, as this dataset was used by our baseline model DVAE.", "Note that both encoder and decoder components can make relation predictions.", "In fact, the way of using encoder q ( r | x, ) for each sentence is straightforward.", "Then based on the encoder, we predict relation on the basis of single occurrence of entity pair.", "When using the decoder, we need to re-normalize p ( e i | r, e i , ) as p ( r | e 1 , e 2 , ) to make predictions.", "Based on the decoder, we make predictions for each unique entity pair.", "As a consequence, our constraints can be imposed on both encoder and decoder.", "The way of computing decoder probability distribution is the same as making predictions.", "So in this experiment, we report both results.", "The results are shown in Table 3. From the table, we can see that regularization with Euclidean distance performs the best compared to KL and JS.", "Moreover, the regularization over encoder is better than the regularization over decoder.", "This may be because the way that we put constraints only over sampled sentences in a batch may hurt the regularization of decoder, since sampled unique pairs may be less than sample sentences.", "If we look at results comparing original DVAE prediction based on the encoder and the decoder, both result in similar F1 and NMI numbers.", "Thus, we can only conclude that currently in the way we do sampling, constraining over encoder is a better choice.", "Comparison on Different Datasets.", "We also compare our algorithm on the three datasets with different baseline settings.", "In order to evaluate our model rigorously, besides the original DVAE model, we compare two additional augmented baseline models with the same hyper-parameter setting: DVAE with TransE embeddings appended to encoder input features (DVAE+E) and DVAE Model NYT122 NYT71 NYT27 F1 NMI F1 NMI F1 NMI Mean Std Mean Std Mean Std Mean Std Mean Std Mean Std Majority 0.355 0 -0.121 0 -0.549 0 DVAE 0.417 0.011 0.339 0.009 0.325 0.011 0.375 0.023 0.433 0.018 0.384 0.021 DVAE+E 0.385 0.021 0.341 0.043 0.339 0.021 0.418 0.022 0.396 0.034 0.381 0.039 DVAE+D 0.452 0.033 0.438 0.022 0.352 0.038 0.339 0.009 0.499 0.040 0.469 0.027 RegDVAE 0.469 0.014 0.430 0.020 0.377 0.020 0.466 0.036 0.587 0.005 0.451 0.005 RegDVAE+D 0.499 0.022 0.497 0.013 0.432 0.028 0.589 0.071 0.665 0.022 0.562 0.038 Table 4: Comparison of prediction results based on encoder using NYT122, NYT71, and NYT27 datasets with different KB regularization strategies.", "with decoder entity vectors replaced by pre-trained KB embeddings (DVAE+D).", "For our method, we report RegDVAE with the best setting where we use Euclidean distance based constraints to regularize the encoder.", "Moreover, we report a setting with fixed embeddings in the decoder as the ones obtained from TransE (RegDVAE+D).", "This also makes sense since even though the TransE embeddings are not trained with the observation of the same relations as the text corpus, the embeddings already contain much semantic information about entities.", "Then by fixing the embeddings of entities in the decoder, we can significantly reduce the number of parameters that need to be trained.", "The results are shown in Table 4. As we can see that, RegDVAE+D can outperform the original DVAE by 8 23 points on F1.", "DVAE+D is also good but may fail when there are a lot of out-of-sample entities in the training corpus.", "Hyper-parameter Sensitivity.", "We have three hyper-parameters in our algorithm: 0 for the regularization of encoder entropy, for the regularization with our constraints, and for the threshold of KB based cosine similarities.", "Here, we test and , since the sensitivity result of 0 is the same as the original DVAE work (Marcheggiani and Titov, 2016).", "The sensitivity of is shown in Figure", "2(a).", "The results are good in a wide range from = 0 .", "5 to = 2 .", "The sensitivity of is shown in Figure", "2(b).", "It reveals some interesting patterns.", "At the beginning when is small, it hurts the performance.", "After getting greater than 0.7, it improves the performance, which means that only very similar relations indicated by KB embeddings are useful relations as constraints.", "In addition, = 1 (meaning only finding identical relations) is worse than = 0 .", "9 , which means we indeed find some relations in our KB so that different triplets will be constrained.", "KB Relation Overlap.", "Although we assume that there is no overlapped relation between the KB and the training text corpus, in practice, we may find a lot of applications that the relations are partially observed in KB.", "Thus, we also test a setting when the KB has different proportions of overlapped relations with training text corpus.", "In this case, we train different KB embeddings for different percentages of overlapped relations, and then apply the embeddings into the constraints.", "The results are shown in Figure", "2(c).", "As we can see, in general, more overlapped relations will result in better performance.", "The best number can be better than the number without overlapped relation by about two points.", "This again verifies that the KB embedding is very robust and represent the semantic meanings of entities even with part of the Contextual Sentence Cluster Similarity ...", "relations observed (Bordes et al., 2013).", "Case Study.", "We also show some examples of entity pair similarities in Table 5. From the Table we can see that our target relation cluster is /location/contained by .", "In the first example, the similarity between entity pairs ( Spain, Europe ) and ( Portugal, Europe ) are high, which indicates the same cluster of pairs of sentences.", "The same constraint is applied in the second example, although there's no direct connection between ( Brazil, Latin America ), ( Argentina, Latin America ).", "Supervised and Distantly Supervised Relation Extraction.", "Traditional supervised relation extraction focuses on a limited number of relations (Roth and Yih, 2002; Kambhatla, 2004; Chan and Roth, 2010).", "Distant supervision uses KBs to obtain a lot of automatically annotated data (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012; Xu et al., 2013a; Zhang et al.; Zeng et al., 2015; Lin et al., 2016; Zeng et al., 2017).", "There are two important assumptions behind these models, namely multi-instance learning (Riedel et al., 2010) and multi-instance multi-label learning (Hoffmann et al., 2011; Surdeanu et al., 2012).", "Our setting is similar to multi-instance learning but we assume there is no overlapped relation between KB and training text corpus.", "Universal schema (Riedel et al., 2013; Verga et al., 2016; Toutanova et al., 2015; McCallum et al., 2017) can also exploit KB to help extract relations.", "It needs a lot of entity pairs in text to co-occur with KB triplets, which is under the same setting with distant supervision.", "Those surface patterns are pre-extracted and shown in the training phase, which makes it also a weakly supervised learning method.", "that every relation expression can represent a unique relation (Etzioni et al., 2004; Banko et al., 2007; Fader et al., 2011; Mausam et al., 2012; Xu et al., 2013b; Angeli et al., 2015).", "On the other hand, relation clustering approaches group all the related relation expressions to represent a relation (Lin and Pantel, 2001; Mohamed et al., 2011; Takamatsu et al., 2011; Yao et al., 2011, 2012; Nakashole et al., 2012a,b; Marcheggiani and Titov, 2016).", "Our setting is based on (Marcheggiani and Titov, 2016) but we also introduce KB as a different kind of weak and indirect supervision.", "Knowledge Base Representation.", "Embedding based knowledge base representation learning methods (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Trouillon et al., 2016) represent entities and relations as vectors, denoted as e and C r respectively such that for a distances function f , the value f ( e 1 , C r , e 2 ) is maximized for all ( e 1 , r, e 2 ) facts.", "Among all these methods, TransE model has a favorable property that the translation operation can be easily recovered by entity vectors ( r 1 , 2 = e 1 e 2 ) .", "With its simplicity and high performance, TransE is enough for demonstration.", "Though our method is not restricted to the representation form of KB, we leave it for future evaluation.", "Constraints can be made more explainable by paths finding.", "For instance, the Path Ranking Algorithm (PRA) (Lao and Cohen, 2010; Lao et al., 2011) uses random walk to perform multi-hop reasoning based on logic rules.", "Later on, reinforcement Learning (Toutanova et al., 2015; Xiong et al., 2017; Das et al., 2017; Chen et al., 2018) is used to search for paths more effectively.", "Though heuristics are used to further reduce the number of mined relations, it is still very costly to find the paths for KB with hundreds of relations, if not impossible.", "Constraint Modeling.", "Originated from semi-supervised learning (Chapelle et al., 2006), must-link and cannot-link modeling has been well studied in machine learning community (Wagstaff et al., 2001; Basu et al., 2004, 2008).", "Such constraints were usually generated based on the ground truth labels of data.", "For document clustering, word constraints constructed based on Word-Net similarities have been applied (Song et al., 2013) and entity constraints based on entity types in an external KB have been used (Wang et al., a, 2016), both being considered as a kind of indirect supervision based on side information.", "For triplet relation clustering, relation surface similarity and entity type constraints have been explored (Wang et al.,", "b).", "However the above constraints are applied to a particular form of models, co-clustering models.", "Compared to existing approaches, our constraints are constructed based on more recently developed KB embeddings, which is more flexible and easy to incorporate into different models.", "In natural language processing community, constraints based on background knowledge are also well studied.", "For example, constrained conditional models (CCM) (Chang et al., 2012) provides a very flexible framework to decouple learning and inference, where in the inference step, background knowledge can be incorporated as an ILP (integer linear programming) problem.", "Posterior regularization (PR) (Ganchev et al., 2010) generalizes this idea so that it uses a joint learning and inference framework to incorporate the background knowledge.", "Both CCM and PR have many applications including the application to relation extraction (Chan and Roth, 2010; Chen et al., 2011).", "Compared to these existing approaches, our constraints are derived from the general-purpose KB, which is quite different from their way of manually crafting some background knowledge as declarative rules.", "It is very interesting that we are similar to the PR framework.", "Since we use a DVAE framework as the base algorithm, there is no traditional E-step and M-step in the variational inference.", "Instead, only q and p probabilities parameterized by neural networks are updated.", "In our framework, we can add constraints to either q or p probabilities (ap-plying to p needs modification of normalization).", "It is the same that we draw a biased learning process when estimating the posteriors as PR does.", "In this paper, we propose a new relation discovery setting where there is no overlapped relations between the training text corpus and the KB.", "We propose a new learning framework of KB regularization which uses must-link and cannot-link constraints derived based on similarities in the KB embedding space.", "Our method improves the results over all baseline models without harming the scalability.", "We believe this framework is as flexible as other constraint models to be applied to many applications when we think the semantics of entities and relations provided by the KB is useful.", "This paper was supported by the Early Career Scheme (ECS, No. 26206717) from Research Grants Council in Hong Kong.", "We thank Intel Corporation for supporting our deep learning related research.", "We also thank the anonymous reviewers for their valuable comments and suggestions that help improve the quality of this manuscript." ]
[ "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "result", "abstain", "result", "objective", "objective", "objective", "objective", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "result", "method", "abstain", "method", "other", "abstain", "method", "result", "abstain", "method", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "abstain", "method", "other", "method", "abstain", "objective", "objective", "result", "method", "other", "other", "other" ]
[ "Language Technology Lab, University of Cambridge Department of Computing Science, University of Glasgow Department of Data Science and AI, Monash University {zm324, fl399, ys484, cac74, nhc30}@cam.ac.uk [email protected]", "Abstract Knowledge probing is crucial for understanding the knowledge transfer mechanism behind the pre-trained language models (PLMs).", "Despite the growing progress of probing knowledge for PLMs in the general domain, specialised areas such as biomedical domain are vastly under-explored.", "To facilitate this, we release a well-curated biomedical knowledge probing benchmark, MedLAMA , constructed based on the Unified Medical Language System (UMLS) Metathesaurus.", "We test a wide spectrum of state-of-the-art PLMs and probing approaches on our benchmark, reaching at most 3% of acc@10.", "While highlighting various sources of domain-specific challenges that amount to this underwhelming performance, we illustrate that the underlying PLMs have a higher potential for probing tasks.", "To achieve this, we propose C ontrastive -P robe , a novel self-supervised contrastive probing approach, that adjusts the underlying PLMs without using any probing data.", "While C ontrastive P robe pushes the acc@10 to 24%, the performance gap remains notable.", "Our human expert evaluation suggests that the probing performance of our C ontrastive -P robe is underestimated as UMLS does not comprehensively cover all existing factual knowledge.", "We hope MedLAMA and C ontrastive -P robe facilitate further developments of more suited probing techniques for this domain.", "1 1 Introduction Pre-trained language models (PLMs; Devlin et al. 2019; Liu et al. 2020) have orchestrated incredible progress on myriads of fewor zero-shot language understanding tasks, by pre-training model parameters in a task-agnostic way and transferring knowledge to specific downstream tasks via finetuning (Brown et al., 2020; Petroni et al., 2021).", "To better understand the underlying knowledge transfer mechanism behind these achievements, many knowledge probing approaches and benchmark datasets have been proposed (Petroni et al., 2019; Jiang et al., 2020a; Kassner et al., 2021; Zhong et al., 2021).", "This is typically done by formulating knowledge triples as cloze-style queries with the objects being masked (see Table 1) and using the PLM to fill the single (Petroni et al., 2019) or multiple (Ghazvininejad et al., 2019) [M ask ] token(s) without further fine-tuning.", "In parallel, it has been shown that specialised PLMs (e.g., BioBERT; Lee et al. 2020, BlueBERT; Peng et al. 2019 and PubMedBERT; Gu et al. 2020) substantially improve the performance in several biomedical tasks (Gu et al., 2020).", "The biomedical domain is an interesting testbed for investigating knowledge probing for its unique challenges (including vocabulary size, multi-token en-tities), and the practical benefit of potentially disposing the expensive knowledge base construction process.", "However, research on knowledge probing in this domain is largely under-explored.", "To facilitate research in this direction, we present a well-curated biomedical knowledge probing benchmark, MedLAMA , that consists of 19 thoroughly selected relations.", "Each relation contains 1 k queries (19 k queries in total with at most 10 answers each), which are extracted from 4798 ID Relation Manual Prompt 1 disease may have associated disease The disease [X] might have the associated disease [Y] .", "the large UMLS (Bodenreider, 2004) biomedical knowledge graph and verified by domain experts.", "We use automatic metrics to identify the hard examples based on the hardness of exposing answers from their query tokens.", "See Table 1 for a sample of easy and hard examples from MedLAMA .", "A considerable challenge in probing in biomedical domain is handling multi-token encoding of the answers (e.g. in MedLAMA only 2.6% of the answers are single-token, while in the English set of mLAMA; Kassner et al. 2021, 98% are single-token), where all existing approaches (i.e., mask predict; Petroni et al. 2019, retrieval-based; Dufter et al. 2021, and generation-based; Gao et al. 2020) struggle to be e ective.", "2 For example, the mask predict approach (Jiang et al., 2020a) which performs well in probing multilingual knowledge achieves less than 1% accuracy on MedLAMA .", "To address the aforementioned challenge, we propose a new method, C ontrastive -P robe , that first adjusts the representation space of the underlying PLMs by using a retrieval-based contrastive learning objective (like rewiring ' the switchboard to the target appliances Liu et al. 2021c) then retrieves answers based on their representation similarities to the queries.", "Notably, our C ontrastive P robe does not require using the MLM heads during probing, which avoids the vocabulary bias across di erent models.", "Additionally, retrieval-based probe is e ective for addressing the multi-token challenge, as it avoids the need to generate multiple tokens from the MLM vocabulary.", "We show that C ontrastive -P robe facilitates abso-2 Prompt-based probing approaches such as Auto-Prompt (Shin et al., 2020a), SoftPrompt (Qin and Eisner, 2021), and OptiPrompt (Zhong et al., 2021) need additional labelled data for fine-tuning prompts, but we restrict the scope of our investigation to methods that do not require task data.", "lute improvements of up-to 5% and 21% on the acc@1 and acc@10 probing performance compared with the existing approaches.", "We further highlight that the elicited knowledge by C ontrastive -P robe is not gained from the additional random sentences, but from the original pre-trained parameters, which echos the previous finding of Liu et al. (2021b); Glava and Vulic (2021); Su et al. (2021, 2022).", "Additionally, we demonstrate that di erent state-of-the-art PLMs and transformer layers are suited for di erent types of relational knowledge, and di erent relations requires di erent depth of tuning, suggesting that both the layers and tuning depth should be considered when infusing knowledge over different relations.", "Furthermore, expert evaluation of PLM responses on a subset of MedLAMA highlights that expert-crafted resources such as UMLS still do not include the full spectrum of factual knowledge, indicating that the factual information encoded in PLMs is richer than what is reflected by the automatic evaluation.", "The findings of our work, along with the proposed MedLAMA and C ontrastive -P robe , highlight both the unique challenges of the biomedical domain and the unexploited potential of PLMs.", "We hope our research to shed light on what domain-specialised PLMs capture and how it could be better resurfaced, with minimum cost, for probing.", "To facilitate research of knowledge probing in the biomedical domain, we create the MedLAMA benchmark based on the largest biomedical knowledge graph UMLS (Bodenreider, 2004).", "UMLS 3 3 Release version 2021AA: https://download.nlm.", "is a comprehensive metathesaurus containing 3.6 million entities and more than 35.2 million knowledge triples over 818 relation types which are integrated from various ontologies, including SNOMED CT, MeSH and the NCBI taxonomy.", "Creating a LAMA-style (Petroni et al., 2019) probing benchmark from such a knowledge graph poses its own challenges: (1) UMLS is a collection of knowledge graphs with more than 150 ontologies constructed by di erent organisations with very di erent schemata and emphasis; (2) a significant amount of entity names (from certain vocabularies) are unnatural language (e.g., t(8;21)(q22;q22) denoting an observed karyotypic abnormality) which can hardly be understood by the existing PLMs, with tokenisation tailored for natural language; (3) some queries (constructed from knowledge triples) can have up to hundreds of answers (i.e., 1-to-N relations), complicating the interpretation of probing performance; and (4) some queries may expose answers in themselves (e.g., answer within queries), making it challenging to interpret relative accuracy scores.", "Selection of Relationship Types.", "In order to obtain high-quality knowledge queries, we conducted multiple rounds of manual filtering on the relation level to exclude uninformative relations or relations that are only important in the ontological context but do not contain interesting semantics as a natural language (e.g, taxonomy and measurement relations).", "We also excluded relations with insu cient triples / entities.", "Then, we manually checked the knowledge triples for each relation to filter out those that contain unnatural language entities and ensure that their queries are semantically meaningful.", "Additionally, in the cases of 1-to-N relations where there are multiple gold answers for the same query, we constrained all the queries to contain at most 10 gold answers.", "These steps resulted in 19 relations with each containing 1k randomly sampled knowledge queries.", "Table 2 shows the detailed relation names and their corresponding prompts.", "Easy vs. Hard Queries.", "Recent works (Poerner et al., 2020; Shwartz et al., 2020) have discovered Approach Type Answer space MLM Fill-mask (Petroni et al., 2019) MP PLM Vocab (cid:51) X-FACTR (Jiang et al., 2020a) MP PLM Vocab (cid:51) Generative PLMs (Lewis et al., 2020) GB PLM Vocab (cid:55) Mask average (Kassner et al., 2021) RB KG Entities (cid:51) C ontrastive -P robe (Ours) RB KG Entities (cid:55) Table 3: Comparison of di erent approaches.", "that PLMs are overly reliant on the surface form of entities to guess the correct answer of a knowledge query.", "The PLMs cheat by detecting lexical overlaps between the query and answer surface forms instead of exercising their abilities of predicting factual knowledge.", "For instance, PLMs can easily deal with the triple < Dengue virus live antigen CYD serotype 1 , may-prevent , Dengue > since the answer is part of the query.", "To mitigate such bias, we also create a hard query set for each relation by selecting a subset of their corresponding 1k queries using token and matching metrics (i.e., exact matching and ROUGE-L (Lin and Och, 2004)).", "For more details see the Appendix .", "We refer to the final filtered and original queries as the hard sets and full sets , respectively.", "Figure 1 (left) shows the count of hard vs. full sets.", "The Multi-token Issue.", "One of the key challenges for probing MedLAMA is the multi-token decoding of its entity names.", "In MedLAMA there are only 2.6% of the entity names that are single-token 4 while in the English set of mLAMA (Kass-ner et al., 2021) and LAMA (Petroni et al., 2019) the percentage of single-token answers are 98% and 100%, respectively.", "Figure 1 (right) shows the percentage of answers by di erent token numbers.", "While the pioneer works in PLM knowledge probing mainly focused on the single-token entities, many recent works have started exploring the solutions for the multi-token scenario (Kassner et al., 2021; Jiang et al., 2020a; De Cao et al., 2021).", "These knowledge probing approaches can be categorised, based on answer search space and re-liance on MLM head, into three categories: mask predict , generation-based , and retrieval-based .", "Table 3 summarises their key di erences.", "used approaches to probe knowledge for masked PLMs (e.g. BERT).", "The mask predict approach uses the MLM head to fill a single mask token for a cloze-style query, and the output token is subjected to the PLM vocabulary (Petroni et al., 2019).", "Since many real-world entity names are encoded with multiple tokens, the mask predict approach has also been extended to predict multi-token answers using the conditional masked language model (Jiang et al., 2020a; Ghazvininejad et al., 2019).", "Figure", "2(a) shows the prediction process.", "Specifically, given a query, the probing task is formulated as: 1) filling masks in parallel independently ( Independent ); 2) filling masks from left to right autoregressively ( Order ); 3) filling tokens sorted by the maximum confidence greedily ( Confidence ).", "After all mask tokens are replaced with the initial predictions, the predictions can be further refined by iteratively modifying one token at a time until convergence or until the maximum number of iterations is reached (Jiang et al., 2020a).", "For example, Order + Order represents that the answers are initially predicted by Order and then refined by Order .", "In this paper we examined two of these approaches, i.e. Independent and Order + Order , based on our initial exploration.", "Generation-based.", "Recently, many generation based PLMs have been presented for text generation tasks, such as BART (Lewis et al., 2020) and T5 (Ra el et al., 2020).", "These generative PLMs are trained with a de-noising objective to restore its original form autoregressively (Lewis et al., 2020; Ra el et al., 2020).", "Such an autoregressive generation process is analogous to the Order probing approach, thus the generative PLMs can be directly used to generate answers for each query.", "Specifically, we utilize the cloze-style query with a single [M ask ] token as the model input.", "The model then predicts the answer entities that correspond to the [M ask ] token in an autoregressive manner.", "An illustration is provided in Figure", "2(b).", "Retrieval-based.", "Mask predict and Generation-based approaches need to use the PLM vocabulary as their search spaces for answer tokens, which may generate answers that are not in the answer set.", "In particular, when probing the masked PLMs using their MLM heads, the predicted result might not be a good indicator for measuring the amount of knowledge captured by these PLMs.", "This is mainly because the MLM head will be eventually dropped during the downstream task fine-tuning while the MLM head normally accounts for more than 20% of the total PLM parameters.", "Alternatively, the retrieval-based probing (Dufter et al., 2021; Kassner et al., 2021) are applied to address this issue.", "Instead of generating answers based on the PLM vocabulary, the retrieval-based approach finds answers by ranking the knowledge graph candidate entities based on the query and entity representations, or the entity generating scores.", "To probe PLMs on MedLAMA , we use mask average (Kassner et al., 2021), an approach that takes the average log probabilities of entity's individual tokens to rank the candidates.", "The retrieval-based approaches address the multi-token issue by restricting the output space to the valid answer set and can be used to probe knowledge in di erent types of PLMs (e.g. BERT vs. fastText; Dufter et al. 2021).", "However, previous works (Kassner et al., 2021; Dufter et al., 2021) only report results based on the type-restricted candidate set (e.g. relation) which we observed to decay drastically under the full entity set.", "To better transform the PLM encoders for the cloze-style probing task, we propose C ontrastive 4801", "P robe which pre-trains on a small number of sentences sampled from the PLM's original pre-training corpora with a contrastive self-supervising objective, inspired by the Mirror-BERT (Liu et al., 2021b).", "Our contrastive pretraining does not require the MLM head or any additional external knowledge, and can be completed in less than one minute on 2 2080Ti GPUs.", "Self-supervised Contrastive Rewiring.", "We randomly sample a small set of sentences (e.g. 10k, see 5.2 for stability analysis of C ontrastive P robe on several randomly sampled sets), and replace their tail tokens (e.g. the last 50% excluding the full stop) with a [M ask ] token.", "Then these transformed sentences are taken as the queries of the cloze-style self-retrieving game .", "In the following we show an example of transforming a sentence into a cloze-style query: Sentence: Social-distancing largely reduces coronavirus infections.", "Given a batch, the cloze-style self-retrieving game is to ask the PLMs to retrieve the positive answer from all the queries and answers in the same batch.", "Our C ontrastive -P robe tackles this by optimising an InfoNCE objective (Oord et al., 2018), L = N (cid:88) i = 1 log exp(cos( f ( x i ) , f ( x p )) / ) (cid:88) x j N i exp(cos( f ( x i ) , f ( x j )) / ) , (1) where f ( ) is the PLM encoder (with the MLM head chopped-o and [CLS] as the contextual rep-resentation), N is batch size, x i and x p are from a query-answer pair (i.e., x i and x p are from the same sentence), N i contains queries and answers in the batch, and is the temperature.", "This objective function encourages f to create similar representations for any query-answer pairs from the same sentence and dissimilar representations for queries / answers belonging to di erent sentences.", "Retrieval-based Probing.", "For probing step, the query is created based on the prompt-based template for each knowledge triple , as shown in the following: Triple: < Elvitegravir , may-prevent , Epistaxis > Query: Elvitegravir may prevent [M ask ].", "and we search for nearest neighbours from all the entity representations encoded by the same model.", "In this section we conduct extensive experiments to verify whether C ontrastive -P robe is e ective for probing biomedical PLMs.", "First, we experiment with C ontrastive -P robe and existing probing approaches on MedLAMA benchmark (5.1).", "Then, we conduct in-depth analysis of the stability and applicability of C ontrastive -P robe in probing biomedical PLMs (5.2).", "Finally, we report an evaluation of a biomedical expert on the probing predictions and highlight our findings (5.3).", "C ontrastive -P robe Rewiring.", "We train our C ontrastive -P robe based on 10k sentences which are randomly sampled from the PubMed texts 5 using a mask ratio of 0.5.", "The best hyperparameters and their tuning options are provided in Appendix .", "Probing Baselines.", "For the mask predict approach, we use the original implementation of X-FACTR (Jiang et al., 2020a), and set the beam size and the number of masks to 5.", "Both mask predict and retrieval-based approaches are tested under both the general domain and biomedical domain BERT models, i.e. Bert-based-uncased (De-vlin et al., 2019), BlueBERT (Peng et al., 2019), BioBERT (Lee et al., 2020), PubMedBERT (Gu et al., 2020).", "6 For generation-based baselines, we test five PLMs, namely BART-base (Lewis et al., 5 We sampled the sentences from a PubMed corpus used in the pre-training of BlueBERT (Peng et al., 2019).", "2020), T5-small and T5-base (Ra el et al., 2020) that are general domain generation PLMs, and SciFive-base & SciFive-large (Phan et al., 2021) that are pre-trained on large biomedical corpora.", "Comparing Various Probing Approaches.", "Table 4 shows the overall results of various probing baselines on MedLAMA .", "It can be seen that the performances of all the existing probing approaches (i.e. generative PLMs , X-FACTR and mask predict ) are very low ( < 1% for acc@1 and < 4% for acc@10) regardless of the underlying PLM, which are not e ective indicators for measuring knowledge captured.", "In contrast, our C ontrastive P robe obtains absolute improvements by up-to 5% and 21% on acc@1 and acc10 respectively comparing with the three existing approaches, which validates its e ectiveness on measuring the knowledge probing performance.", "In particular, PubMedBERT model obtains the best probing performance (5.71% in accuracy) for these biomedical queries, validating its e ectiveness of capturing biomedical knowledge comparing with other PLMs (i.e. BERT, BlueBERT and BioBERT).", "Benchmarking with C ontrastive -P robe .", "To further examine the e ectiveness of PLMs in capturing biomedical knowledge, we benchmarked several state-of-the-art biomedical PLMs (including pure pre-trained and knowledge-enhanced models) on MedLAMA through our C ontrastive -P robe .", "Table 5 shows the probing results over the full and hard sets.", "In general, we can observe that these biomedical PLMs always perform better than general-domain PLMs (i.e., BERT).", "Also, we observe the decay of performance of all these models on the more challenging hard set queries.", "While PubMedBERT performs the best among all the pure pre-trained models, SapBERT (Liu et al., 2021a) and CoderBERT (Yuan et al., 2020) (which are the knowledge infused PubMedBERT) further push performance to 8% and 30.41% on acc@1 and acc@10 metrics respectively, highlighting the benefits of knowledge infusion pre-training.", "Comparison per Answer Length.", "Since di erent PLMs use di erent tokenizers, we use char length of the query answers to split MedLAMA into different bins and test the probing performance over various answer lengths.", "Figure 3 shows the result.", "We can see that the performance of retrieval-based probing in C ontrastive -P robe increases as Model acc@1 / acc@10 FullSet HardSet BERT(Devlinetal.,2019) 1.95 0 .", "the answer length increase while the performance of mask predict dropped significantly.", "This result validates that our C ontrastive -P robe (retrieval-based) are more reliable at predicting longer answers than the mask predict approach since the latter heavily relies on the MLM head.", "7 5.2 In-depth Analysis of C ontrastive -P robe Since our C ontrastive -P robe involves many hyperparameters and stochastic factors during self-retrieving pre-training, it is critical to verify if it behaves consistently under (1) di erent randomly sampled sentence sets; (2) di erent types of relations; and (3) di erent pre-training steps.", "Stability of C ontrastive -P robe .", "To conduct this verification, we sampled 10 di erent sets of 10k sentences from the PubMed corpus 8 and probed the PubMedBERT model using our C ontrastive P robe on the full set.", "Figure 4 shows the acc@1 performance over top 9 relations and the micro average performance of all the 19 relations.", "We can see that the standard deviations are small and the performance over di erent sets of samples shows the similar trend.", "This further highlights 7 For the single-token answer probing scenario, C ontrastive -P robe does not outperform the mask predict approach, particularly in the general domain.", "This is expected since most of the masked PLMs are pre-trained by a single-token-filling objective.", "8 The tuning corpus itself is unimportant, since we can obtain the similar results even using Wikipedia.", "that the probing success of C ontrastive -P robe is not due the selected pre-training sentences.", "Intuitively, the contrastive self-retrieving game (4) is equivalent to the formulation of the cloze-style filling task, hence tuning the underlying PLMs makes them better suited for knowledge elicitation needed during probing (like rewiring' the switchboards).", "Additionally, from Figure 4 we can also observe that di erent relations exhibit very di erent trends during pre-training steps of C ontrastive -P robe and peak under di erent steps, suggesting that we need to treat di erent types of relational knowledge with di erent tuning depths when infusing knowledge.", "We leave further exploration of this to future work.", "Probing by Relations.", "To further analyse the probing variance over di erent relations, we also plot the probing performance of various PLMs over di erent relations of MedLAMA in Figure 5.", "We can observe that di erent PLMs exhibit different performance rankings over di erent types of relational knowledge (e.g. BlueBERT peaks at relation 12 while PubMedBERT peaks at relation 3 ).", "This result demonstrates that di erent PLMs are suited for di erent types of relational knowledge.", "We speculate this to be reflective of their training corpora.", "Probing by Layer.", "To investigate how much knowledge is stored in each Transformer layer, we chopped the last layers of PLMs and applied C ontrastive -P robe to evaluate the probing performance based on the first L { 3 , 5 , 7 , 9 , 11 , 12 } layers on MedLAMA .", "In general, we can see in Figure 6 that the model performance drops signifi-cantly after chopping the last 3 layers, while its accuracy is still high when dropping only last one layer.", "In Figure 7, we further plot the layer-wise probing performance of PubMedBERT over different relations.", "Surprisingly, we find that di erent relations do not show the same probing performance trends over layers.", "For example, with only the first 3 layers, PubMedBERT achieves the best accuracy ( > 15%) on relation 11 queries.", "This result demonstrates that both relation types and PLM layers are confounding variables in capturing factual knowledge, which helps to explain the di erence of training steps over relations in Figure 4.", "This result also suggests that layer-wise and relation-wise training could be the key to effectively infuse factual knowledge for PLMs.", "To assess whether the actual probing performance could be possibly higher than what is reflected by the commonly used automatic evaluation, we conducted a human evaluation on the prediction result.", "Specifically, we sample 15 queries and predict their top-10 answers using C ontrastive P robe based on PubMedBERT and ask the assessor 9 to rate the predictions on a scale of [1,5].", "Figure 8 shows the confusion matrices.", "10 We observe the followings: (1) There are 3 UMLS answers that are annotated with score level 1-4 (precisely, level 3), which indicates UMLS answers might not always be the perfect answers.", "(2) There are 20 annotated perfect answers (score 5) in the top 10 predictions that are not marked as the gold answers in the UMLS, which suggests the UMLS does not include all the expected gold knowledge.", "(3) In general, PubMedBERT achieves an 8.67% (13 / 150) acc@10 under gold answers, but under the expert annotation the acc@10 is 22% (33 / 150), which means the probing performance is higher than what evaluated using the automatically extracted answers.", "During the writing of this work, we noticed a concurrent work to ours that also released a biomedical knowledge probing benchmark, called BioLAMA Sung et al. (2021).", "In Table 6, we com-9 A senior Ph.D. graduate in Cell Biology.", "In the Appendix , we provide examples with their UMLS gold answers, human annotated answers and probing predictions of di erent probing approaches.", "pare MedLAMA with LAMA (Petroni et al., 2019) and BioLAMA in terms of data statistics.", "We found that there is only 1 overlapped relation (i.e., may treat) between BioLAMA and MedLAMA , and no overlap exists on the queries.", "We can see that, without additional training data from the biomedical knowledge facts, C ontrastive -P robe reaches a promising performance compared with OptiPrompt approach, which needs further training data.", "Additionally, since Mask Predict and OptiPrompt require using the MLM head, it is impossible to compare a model without MLM head being released (e.g. PubMedBERT).", "In contrast, our C ontrastive -P robe not only provides a good indicator of comparing these models in terms of their captured knowledge, but also makes layer-wise knowledge probing possible.", "How to early stop?", "For fair comparison of di erent PLMs, we currently use checkpoints after contrastive tuning for a fixed number of steps (200, specifically).", "However, we have noticed that different models and di erent probing datasets have di erent optimal training steps.", "To truly rewire' the most knowledge out of each PLMs, we need a unified validation set for checkpoint selection.", "What the validation set should be and how to guarantee its fairness require further investigation.", "Performance not very stable.", "We have noticed that using di erent contrastive tuning corpus as well as di erent random seeds can lead to a certain variance of their probing performances (see Table 5).", "To mitigate such issue, we use average perfor-Probe Model CTD wikidata UMLS acc@1 acc@5 acc@1 acc@5 acc@1 acc@5 MaskPredict BERT 0.06 1.20 1.16 6.04 0.82 1.99 BioBERT 0.42 3.25 3.67 11.20 1.16 3.82 Bio-LM 1.17 7.30 11.97 25.92 3.44 8.88 OptiPrompt BERT 3.56 6.97 3.29 8.13 1.44 3.65 BioBERT 4.82 9.74 4.21 12.91 5.08 13.28 Bio-LM 2.99 10.19 10.60 25.15 8.25 20.19 C ontrastive -P robe BlueBERT 1.62 5.84 6.64 25.97 2.63 11.46 BioBERT 0.20 0.99 1.04 4.51 0.89 3.89 Bio-LM 1.70 4.26 4.32 18.74 1.27 5.01 PubMedBERT 2.60 8.87 10.20 35.14 4.93 18.33 Table 7: Performance on BioLAMA benchmark.", "mance of 10 runs on 10 randomly sampled corpus.", "Improving the stability of C ontrastive -P robe and investigating its nature is a future challenge.", "Knowledge Probing Benchmarks for PLMs.", "LAMA (Petroni et al., 2019), which starts this line of work, is a collection of single-token knowledge triples extracted from sources including Wikidata and ConceptNet (Speer et al., 2017).", "To mitigate the problem of information leakage from the head entity, Poerner et al. (2019) propose LAMA-UHN, which is a hard subset of LAMA that has less token overlaps in head and tail entities.", "X-FACTR (Jiang et al., 2020a) and mLAMA (Kass-ner et al., 2021) extend knowledge probing to the multilingual scenario and introduce multi-token answers.", "They each propose decoding methods that generate multi-token answers, which we have shown to work poorly on MedLAMA .", "BioLAMA (Sung et al., 2021) is a concurrent work that also releases a benchmark for biomedical knowledge probing.", "Probing via Prompt Engineering.", "Knowledge probing is sensitive to what prompt is used (Jiang et al., 2020b).", "To bootstrap the probing performance, Jiang et al. (2020b) mine more prompts and ensemble them during inference.", "Later works parameterised the prompts and made them train-4805 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 Relation ID 0 10 20 30 a cc @ 1 layer 3 5 7 9 11 12 Figure 7: Performance of PubMedBERT over layers.", "50 100 Figure 8: Confusion matrices of expert annotated scores versus the extracted UMLS answers.", "Five annotation score levels: 5Perfectly answer the query ; 4-Similar to the gold answer, could somehow be the answer ; 3Related to the query but not correct ; 2Same domain or slight relation ; 1Completely unrelated .", "able (Shin et al., 2020b; Fichtel et al., 2021; Qin and Eisner, 2021).", "We have opted out prompt-engineering methods that require training data in this work, as tuning the prompts are essentially tuning an additional (parameterised) model on top of PLMs.", "As pointed out by Fichtel et al. (2021), prompt tuning requires large amounts of training data from the task.", "Since task training data is used, the additional model parameters are exposed to the target data distribution and can solve the set set by overfitting to such biases (Cao et al., 2021).", "In our work, by adaptively finetuning the model with a small set of raw sentences, we elicit the knowledge out from PLMs but do not expose the data biases from the benchmark ( MedLAMA ).", "Biomedical Knowledge Probing.", "Nadkarni et al. (2021) train PLMs as KB completion models and test on the same task to understand how much knowledge is in biomedical PLMs.", "BioLAMA focuses on the continuous prompt learning method OptiPrompt (Zhong et al., 2021), which also requires ground-truth training data from the task.", "Overall, compared to BioLAMA, we have provided a more comprehensive set of probing experiments and analysis, including proposing a novel probing technique and providing human evaluations of model predictions.", "In this work, we created a carefully curated biomedical probing benchmark, MedLAMA , from the UMLS knowledge graph.", "We illustrated that state-of-the-art probing techniques and biomedical pre-trained languages models (PLMs) struggle to cope with the challenging nature (e.g. multi-token answers) of this specialised domain, reaching only an underwhelming 3% of acc@10.", "To reduce the gap, we further proposed a novel contrastive recipe which rewires the underlying PLMs without using any probing-specific data and illustrated that with a lightweight pre-training their accuracies could be pushed to 24%.", "Our experiments also revealed that di erent layers of transformers encode di erent types of information, reflected by their individual success at handling certain types of prompts.", "Additionally, using a human expert, we showed that the existing evaluation criteria could overpenalise the models as many valid responses that PLMs produce are not in the ground truth UMLS knowledge graph.", "This further highlights the importance of having a human in the loop to better understand the potentials and limitations of PLMs in encoding domain specific factual knowledge.", "Our findings indicate that the real lower bound on the amount of factual knowledge encoded by PLMs is higher than we estimated, since such bound can be continuously improved by optimising both the encoding space (e.g. using our self-supervised contrastive learning technique) and the input space (e.g. using the prompt optimising techniques (Shin et al., 2020a; Qin and Eisner, 2021)).", "We leave further exploration of integrating the two possibilities to future work.", "Nigel Collier and Zaiqiao Meng kindly acknowledges grant-in-aid support from the UK ESRC for project EPI-AI (ES / T012277 / 1)." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "other", "other", "objective", "method", "method", "objective", "result", "result", "abstain", "result", "abstain", "other" ]
[ "Word embeddings derived from human-generated corpora inherit strong gender bias which can be further amplified by downstream models.", "Some commonly adopted debiasing approaches, including the seminal Hard Debias algorithm (Bolukbasi et al., 2016), apply post-processing procedures that project pre-trained word embeddings into a subspace orthogonal to an inferred gender subspace.", "We discover that semantic-agnostic corpus regularities such as word frequency captured by the word embeddings negatively impact the performance of these algorithms.", "We propose a simple but effective technique, Double-Hard Debias, which purifies the word embeddings against such corpus regularities prior to inferring and removing the gender subspace.", "Experiments on three bias mitigation benchmarks show that our approach preserves the distributional semantics of the pre-trained word embeddings while reducing gender bias to a significantly larger degree than prior approaches.", "Despite widespread use in natural language processing (NLP) tasks, word embeddings have been criticized for inheriting unintended gender bias from training corpora.", "Bolukbasi et al. (2016) highlights that in word2vec embeddings trained on the Google News dataset (Mikolov et al., 2013a), programmer is more closely associated with man and homemaker is more closely associated with woman.", "Such gender bias also propagates to downstream tasks.", "Studies have shown that coreference resolution systems exhibit gender bias in predictions due to the use of biased word embeddings (Zhao et al., 2018a; Rudinger et al., 2018).", "Given the fact that pre-trained word embeddings This research was conducted during the author's internship at Salesforce Research.", "have been integrated into a vast number of NLP models, it is important to debias word embeddings to prevent discrimination in NLP systems.", "To mitigate gender bias, prior work have proposed to remove the gender component from pre-trained word embeddings through postprocessing (Bolukbasi et al., 2016), or to compress the gender information into a few dimensions of the embedding space using a modified training scheme (Zhao et al., 2018b; Kaneko and Bollegala, 2019).", "We focus on post-hoc gender bias mitigation for two reasons: 1) debiasing via a new training approach is more computationally expensive; and 2) pre-trained biased word embeddings have already been extensively adopted in downstream NLP products and post-hoc bias mitigation presumably leads to less changes in the model pipeline since it keeps the core components of the original embeddings.", "Existing post-processing algorithms, including the seminal Hard Debias (Bolukbasi et al., 2016), debias embeddings by removing the component that corresponds to a gender direction as defined by a list of gendered words.", "While Bolukbasi et al. (2016) demonstrates that such methods alleviate gender bias in word analogy tasks, Gonen and Goldberg (2019) argue that the effectiveness of these efforts is limited, as the gender bias can still be recovered from the geomrtry of the debiased embeddings.", "We hypothesize that it is difficult to isolate the gender component of word embeddings in the manner employed by existing post-processing methods.", "For example, Gong et al. (2018); Mu and Viswanath (2018) show that word frequency significantly impact the geometry of word embeddings.", "Consequently, popular words and rare words cluster in different subregions of the embedding space, despite the fact that words in these clusters are not semantically similar.", "This can degrade the ability of component-based methods for debiasing gender.", "Specifically, recall that Hard Debias seeks to remove the component of the embeddings corresponding to the gender direction.", "The important assumption made by Hard Debias is that we can effectively identify and isolate this gender direction.", "However, we posit that word frequency in the training corpora can twist the gender direction and limit the effectiveness of Hard Debias.", "To this end, we propose a novel debiasing algorithm called Double-Hard Debias that builds upon the existing Hard Debias technique.", "It consists of two steps.", "First, we project word embeddings into an intermediate subspace by subtracting component(s) related to word frequency.", "This mitigates the impact of frequency on the gender direction.", "Then we apply Hard Debias to these pu-rified embeddings to mitigate gender bias.", "Mu and Viswanath (2018) showed that typically more than one dominant directions in the embedding space encode frequency features.", "We test the effect of each dominant direction on the debiasing performance and only remove the one(s) that demonstrated the most impact.", "We evaluate our proposed debiasing method using a wide range of evaluation techniques.", "According to both representation level evaluation (WEAT test (Caliskan et al., 2017), the neighborhood metric (Gonen and Goldberg, 2019)) and downstream task evaluation (coreference resolution (Zhao et al., 2018a)), Double-Hard Debias outperforms all previous debiasing methods.", "We also evaluate the functionality of debiased embeddings on several benchmark datasets to demonstrate that Double-Hard Debias effectively mitigates gender bias without sacrificing the quality of word embeddings 1 .", "Current post-hoc debiasing methods attempt to reduce gender bias in word embeddings by subtracting the component associated with gender from them.", "Identifying the gender direction in the word embedding space requires a set of gender word pairs, P , which consists of she & he, daughter & son, etc.", "For every pair, for example boy & girl, the difference vector of the two embeddings is expected to approximately capture the gender direction: v boy,girl = w boy w girl (1) Bolukbasi et al. (2016) computes the first principal component of ten such difference vectors and use that to define the gender direction.", "2 Recent works (Mu and Viswanath, 2018; Gong et al., 2018) show that word frequency in a training 1 Code and data are available at https://github.", "2 The complete definition of P is: woman & man, girl & boy, she & he, mother & father, daughter & son, gal & guy, female & male, her & his, herself & himself, and Mary & John (Bolukbasi et al., 2016).", "corpus can degrade the quality of word embeddings.", "By carefully removing such frequency features, existing word embeddings can achieve higher performance on several benchmarks after fine-tuning.", "We hypothesize that such word frequency statistics also interferes with the components of the word embeddings associated with gender.", "In other words, frequency-based features learned by word embedding algorithms act as harmful noise in the previously proposed debiasing techniques.", "To verify this, we first retrain GloVe (Pennington et al., 2014) embeddings on the one billion English word benchmark (Chelba et al., 2013) following previous work (Zhao et al., 2018b; Kaneko and Bollegala, 2019).", "We obtain ten difference vectors for the gendered pairs in P and compute pairwise cosine similarity.", "This gives a similarity matrix S in which S p i ,p j denotes the cosine similarity between difference vectors v pair i and v pair j .", "We then select a specific word pair, e.g. boy & girl, and augment the corpus by sampling sentences containing the word boy twice.", "In this way, we produce a new training corpus with altered word frequency statistics for boy.", "The context around the token remains the same so that changes to the other components are negligible.", "We retrain GloVe with this augmented corpus and get a set of new offset vectors for the gendered pairs P .", "We also compute a second similarity matrix S (cid:48) where S (cid:48) p i ,p j denotes the cosine similarity between difference vectors v (cid:48) pair i and v (cid:48) pair j .", "By comparing these two similarity matrices, we analyze the effect of changing word frequency statistics on gender direction.", "Note that the offset vectors are designed for approximating the gender direction, thus we focus on the changes in offset vectors.", "Because statistics were altered for boy, we focus on the difference vector v boy,girl and make two observations.", "First, the norm of v boy,girl has a 5 .", "8% relative change while the norms of other difference vectors show much smaller changes.", "For example, the norm of v man,woman only changes by 1 .", "8% .", "Second, the cosine similarities between v boy,girl and other difference vectors also show more significant change, as highlighted by the red bounding box in Figure 1a.", "As we can see, the frequency change of boy leads to deviation of the gender direction captured by v boy,girl .", "We observe similar phenomenon when we change the frequency of the word daughter and present these results in Figure 1b.", "Based on these observations, we conclude that word frequency plays an important role in gender debiasing despite being overlooked by previous works.", "In this section, we first summarize the terminology that will be used throughout the rest of the paper, briefly review the Hard Debias method, and provide background on the neighborhood evaluation metric.", "Then we introduce our proposed method: Double-Hard Debias.", "Let W be the vocabulary of the word embeddings we aim to debias.", "The set of word embeddings contains a vector w R n for each word w W .", "A subspace B is defined by k orthogonal unit vectors B = { b 1 , . . . , b k } R d .", "We denote the projection of vector v on B by v B = k (cid:88) j =1 ( v b j ) b j .", "Following (Bolukbasi et al., 2016), we assume there is a set of gender neutral words N W , such as doctor and teacher, which by definition are not specific to any gender.", "We also assume a pre-defined set of n male-female word pairs D 1 , D 2 , . . . , D n W , where the main difference between each pair of words captures gender .", "Hard Debias.", "The Hard Debias algorithm first identifies a subspace that captures gender bias.", "Let i := (cid:88) w D i w / | D i | .", "The bias subspace B is the first k ( 1 ) rows of SVD( C ), where", "Following the original implementation of Bolukbasi et al. (2016), we set k = 1 .", "As a result the subspace B is simply a gender direction.", "3 Hard Debias then neutralizes the word embeddings by transforming each w such that every word 3 Bolukbasi et al. (2016) normalize all embeddings.", "However, we found it is unnecessary in our experiments.", "This is also mentioned in Ethayarajh et al. (2019) Figure 2: Clustering accuracy after projecting out D-th dominating direction and applying Hard Debias.", "w N has zero projection in the gender subspace.", "For each word w N , we re-embed w : w := w w B (5) Neighborhood Metric.", "The Neighborhood Metric proposed by (Gonen and Goldberg, 2019) is a bias measurement that does not rely on any specific gender direction.", "To do so it looks into similarities between words.", "The bias of a word is the proportion of words with the same gender bias polarity among its nearest neighboring words.", "We selected k of the most biased male and females words according to the cosine similarity of their embedding and the gender direction computed using the word embeddings prior to bias mitigation.", "We use W m and W f to denote the male and female biased words, respectively.", "For w i W m , we assign a ground truth gender label g i = 0 .", "For w i W f , g i = 1 .", "Then we run KMeans ( k = 2 ) to cluster the embeddings of selected words g i = KMeans ( w i ) , and compute the alignment score a with respect to the assigned ground truth gender labels: a = 1 2 k 2 k (cid:88) i =1 1 [ g i == g i ] (6) We set a = max( a, 1 a ) .", "Thus, a value of 0 .", "5 in this metric indicates perfectly unbiased word embeddings (i.e. the words are randomly clustered), and a value closer to 1 indicates stronger gender bias.", "According to Mu and Viswanath (2018), the most statistically dominant directions of word embeddings encode word frequency to a significant extent.", "Mu and Viswanath (2018) removes these frequency features by centralizing and subtracting components along the top D dominant directions Algorithm 1: Double-Hard Debias.", "from the original word embeddings.", "These post-processed embedddings achieve better performance on several benchmark tasks, including word similarity, concept categorization, and word analogy.", "It is also suggested that setting D near d/ 100 provides maximum benefit, where d is the dimension of a word embedding.", "We speculate that most the dominant directions also affect the geometry of the gender space.", "To address this, we use the aforementioned clustering experiment to identify whether a direction contains frequency features that alter the gender direction.", "More specifically, we first pick the top biased words ( 500 male and 500 female) identified using the original GloVe embeddings.", "We then apply PCA to all their word embeddings and take the top principal components as candidate directions to drop.", "For every candidate direction u , we project the embeddings into a space that is orthogonal to u .", "In this intermediate subspace, we apply Hard Debias and get debiased embeddings.", "Next, we cluster the debiased embeddings of these words and compute the gender alignment accuracy (Eq. 6).", "This indicates whether projecting away direction u improves the debiasing performance.", "Algorithm 1 shows the details of our method in full.", "We found that for GloVe embeddings pre-trained on Wikipedia dataset, elimination of the projection along the second principal component significantly decreases the clustering accuracy.", "This translates to better debiasing results, as shown in Figure 2.", "We further demonstrate the effectiveness of our method for debaising using other evaluation metrics in Section 4.", "In this section, we compare our proposed method with other debiasing algorithms and test the functionality of these debiased embeddings on word analogy and concept categorization task.", "Experimental results demonstrate that our method effectively reduces bias to a larger extent without degrading the quality of word embeddings.", "We use 300-dimensional GloVe (Pennington et al., 2014) 4 embeddings pre-trained on the 2017 January dump of English Wikipedia 5 , containing 322 , 636 unique words.", "To identify the gender direction, we use 10 pairs of definitional gender words compiled by (Bolukbasi et al., 2016) 6 .", "GloVe: the pre-trained GloVe embeddings on Wikipedia dataset described in 4.1.", "GloVe is widely used in various NLP applications.", "This is a non-debiased baseline for comparision.", "GN-GloVe: We use debiased Gender-Neutral GN-GloVe embeddings released by the original authors (Zhao et al., 2018b).", "GN-GloVe restricts gender information in certain dimensions while neutralizing the rest dimensions.", "GN-GloVe( w a ): We exclude the gender dimensions from GN-GloVe.", "This baseline tries to completely remove gender.", "GP-GloVe: We use debiased embeddings released by the original authors (Kaneko and Bollegala, 4 Experiments on Word2Vec are included in the appendix. 5 https://github.com/uclanlp/gn_glove 6 https://github.com/tolga-b/debiaswe 2019).", "Gender-preserving Debiasing attempts to preserve non-discriminative gender information, while removing stereotypical gender bias.", "GP-GN-GloVe: : This baseline applies Gender-preserving Debiasing on already debaised GN-GloVe embeddings.", "We also use debiased embeddings provided by authors.", "Hard-GloVe: We apply Hard Debias introduced in (Bolukbasi et al., 2016) on GloVe embeddings.", "Following the implementation provided by original authors, we debias netural words and preserve the gender specific words.", "Strong Hard-GloVe: A variant of Hard Debias where we debias all words instead of avoiding gender specific words.", "This seeks to entirely remove gender from GloVe embeddings.", "Double-Hard GloVe: We debias the pre-trained GloVe embeddings by our proposed Double-Hard Debias method.", "We demonstrate the effectiveness of our debiasing method for downstream applications and according to general embedding level evaluations.", "Coreference Resolution.", "Coreference resolution aims at identifying noun phrases referring to the same entity.", "Zhao et al. (2018a) identified gender bias in modern coreference systems, e.g. doctor is prone to be linked to he.", "They also introduce a new benchmark dataset WinoBias, to study gender bias in coreference systems.", "WinoBias provides sentences following two prototypical templates.", "Each type of sentences can be divided into a pro-stereotype (PRO) subset and a antistereotype (ANTI) subset.", "In the PRO subset, gender pronouns refer to professions dominated by the same gender.", "For example, in sentence The physician hired the secretary because he was overwhelmed with clients., he refers to physician, which is consistent with societal stereotype.", "On the other hand, the ANTI subset consists of same sentences, but the opposite gender pronouns.", "As such, he is replaced by she in the aforementioned example.", "The hypothesis is that gender cues may distract a coreference model.", "We consider a system to be gender biased if it performs better in pro-stereotypical scenarios than in anti-stereotypical scenarios.", "We train an end-to-end coreference resolution model (Lee et al., 2017) with different word embeddings on OntoNotes 5.0 training set and report the performance on WinoBias dataset.", "Results are presented in Table1.", "Note that absolute performance difference (Diff) between the PRO set and ANTI set connects with gender bias.", "A smaller Diff value indicates a less biased coreference system.", "We can see that on both types of sentences in WinoBias, Double-Hard GloVe achieves the smallest Diff compared to other baselines.", "This demonstrates the efficacy of our method.", "Meanwhile, Double-Hard GloVe maintains comparable performance as GloVe on OntoNotes test set, showing that our method preserves the utility of word embeddings.", "It is also worth noting that by reducing gender bias, Double-Hard GloVe can significantly improve the average performance on type-2 sentences, from 75 .", "1% (GloVe) to 85 .", "0% .", "The Word Embeddings Association Test (WEAT).", "WEAT is a permutation test used to measure the bias in word embeddins.", "We consider male names and females names as attribute sets and compute the differential association of two sets of target words 7 and the gender attribute sets.", "We report effect sizes ( d ) and p-values ( p ) in Table2.", "The effect size is a normalized measure of how separated the two distributions are.", "A higher value of effect size indicates larger bias between target words with regard to gender.", "p-values denote if the bias is significant.", "A high p-value (larger than 0 . 05 ) indicates the bias is insignificant.", "We refer readers to Caliskan et al. (2017) for more details.", "As shown in Table 2, across different target words sets, Double-Hard GloVe consistently outperforms other debiased embeddings.", "For Career & Family and Science & Arts, Double-Hard GloVe reaches the lowest effect size, for the latter one, Double-Hard GloVe successfully makes the bias insignificant (p-value > 0 . 05 ).", "Note that in WEAT test, some debiasing methods run the risk of amplifying gender bias, e.g. for Math & Arts words, the bias is significant in GN-GloVe while it is insignificant in original GloVe embeddings.", "Such concern does not occur in Double-Hard GloVe.", "Neighborhood Metric.", "(Gonen and Goldberg, 2019) introduces a neighborhood metric based on clustering.", "As described in Sec 3.1, We take the top k most biased words according to their cosine similarity with gender direction in the original GloVe embedding space 8 .", "We then run k-Means to cluster them into two clusters and compute the alignment accuracy with respect to gender, results are presented in Table 3.", "We recall that in this metric, a accuracy value closer to 0 .", "5 indicates less biased word embeddings.", "Using the original GloVe embeddings, k-Means can accurately cluster selected words into a male group and a female group, suggesting the presence of a strong bias.", "Hard Debias is able to reduce bias in some degree while other baselines appear to be less effective.", "Double-Hard GloVe achieves the lowest accuracy across experiments clustering top 100/500/1000 biased words, demonstrating that the proposed technique effectively reduce gender bias.", "We also conduct tSNE (van der Maaten and Hinton, 2008) projection for all baseline embed-8 To be fair, we exclude all gender specific words used in debiasing, so Hard-GloVe and Strong Hard-GloVe have same acurracy performance in Table 3 Embeddings Career & Family Math & Arts Science & Arts d p d p d p GloVe 1 .", "dings.", "As shown in Figure 3, original non-debiased GloVe embeddings are clearly projected to different regions.", "Double-Hard GloVe mixes up male and female embeddings to the maximum extent compared to other baselines, showing less gender information can be captured after debiasing.", "Word Analogy.", "Given three words A , B and C , the analogy task is to find word D such that A is to B as C is to D .", "In our experiments, D is the word that maximize the cosine similarity between D and C A + B .", "We evaluate all non-debiased and debiased embeddings on the MSR (Mikolov et al., 2013c) word analogy task, which contains 8000 syntactic questions, and on a second Google word analogy (Mikolov et al., 2013a) dataset that contains 19 , 544 ( Total ) questions, including 8 , 869 semantic ( Sem ) and 10 , 675 syntactic ( Syn ) questions.", "The evaluation metric is the percentage of questions for which the correct answer is assigned the maximum score by the algorithm.", "Results are shown in Table4.", "Double-Hard GloVe achieves comparable good results as GloVe and slightly outperforms some other debiased embeddings.", "This proves that Double-Hard Debias is capable of preserving proximity among words.", "Concept Categorization.", "The goal of concept categorization is to cluster a set of words into different categorical subsets.", "For example, sandwich and hotdog are both food and dog and cat are animals.", "The clustering performance is evaluated in terms of purity (Manning et al., 2008) the fraction of the total number of the words that are correctly classified.", "Experiments are conducted on four benchmark datasets: the Almuhareb-Poesio (AP) dataset (Almuhareb, 2006); the ESSLLI 2008 (Baroni et al., 2008); the Battig 1969 set (Battig and Montague, 1969) and the BLESS dataset (Ba-roni and Lenci, 2011).", "We run classical Kmeans algorithm with fixed k .", "Across four datasets, the performance of Double-Hard GloVe is on a par with GloVe embeddings, showing that the proposed debiasing method preserves useful semantic information in word embeddings.", "Full results can be found in Table4.", "Gender Bias in Word Embeddings.", "Word embeddings have been criticized for carrying gender bias.", "Bolukbasi et al. (2016) show that word2vec (Mikolov et al., 2013b) embeddings trained on the Google News dataset exhibit occupational stereotypes, e.g. programmer is closer to man and homemaker is closer to woman.", "More recent works (Zhao et al., 2019; Kurita et al., 2019; Basta", "et al., 2019) demonstrate that contextualized word embeddings also inherit gender bias.", "Gender bias in word embeddings also propagate to downstream tasks, which substantially affects predictions.", "Zhao et al. (2018a) show that coreference systems tend to link occupations to their stereotypical gender, e.g. linking doctor to he and nurse to she.", "Stanovsky et al. (2019) observe that popular industrial and academic machine translation systems are prone to gender biased translation errors.", "Recently, Vig et al. (2020) proposed causal mediation analysis as a way to interpret and analyze gender bias in neural models.", "Debiasing Word Embeddings.", "For contextualized embeddings, existing works propose task-specific debiasing methods, while in this paper we focus on more generic ones.", "To mitigate gender bias, Zhao et al. (2018a) propose a new training approach which explicitly restricts gender information in certain dimensions during training.", "While this method separates gender information from embeddings, retraining word embeddings on massive corpus requires an undesirably large amount of resources.", "Kaneko and Bollegala (2019) tackles this problem by adopting an encoder-decoder model to re-embed word embeddings.", "This can be applied to existing pre-trained embeddings, but it still requires train different encoder-decoders for different embeddings.", "Bolukbasi et al. (2016) introduce a more simple and direct post-processing method which zeros out the component along the gender direction.", "This method reduces gender bias to some degree, however, Gonen and Goldberg (2019) present a series of experiments to show that they are far from delivering gender-neutral embeddings.", "Our work builds on top of Bolukbasi et al. (2016).", "We discover the important factor word frequency that limits the effectiveness of existing methods.", "By carefully eliminating the effect of word frequency, our method is able to significantly improve debiasing performance.", "We have discovered that simple changes in word frequency statistics can have an undesirable impact on the debiasing methods used to remove gender bias from word embeddings.", "Though word frequency statistics have until now been neglected in previous gender bias reduction work, we propose Double-Hard Debias, which mitigates the negative effects that word frequency features can have on debiasing algorithms.", "We experiment on several benchmarks and demonstrate that our Double-Hard Debias is more effective on gender bias reduction than other methods while also preserving the quality of word embeddings suitable for the downstream applications and embedding-based word analogy tasks.", "While we have shown that this method significantly reduces gender bias while preserving quality, we hope that this work encourages further research into debiasing along other dimensions of word embeddings in the future." ]
[ "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "objective", "abstain", "method", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "objective", "objective", "result" ]
[ "In the field of sentiment analysis, several studies have highlighted that a single sentence may express multiple, sometimes contrasting, sentiments and emotions, each with its own experiencer, target and/or cause.", "To this end, over the past few years researchers have started to collect and annotate data manually, in order to investigate the capabilities of automatic systems not only to distinguish between emotions, but also to capture their semantic constituents.", "However, currently available gold datasets are heterogeneous in size, domain, format, splits, emotion categories and role labels, making comparisons across different works difficult and hampering progress in the area.", "In this paper, we tackle this issue and present a unified evaluation framework focused on Semantic Role Labeling for Emotions (SRL4E), in which we unify several datasets tagged with emotions and semantic roles by using a common labeling scheme.", "We use SRL4E as a benchmark to evaluate how modern pretrained language models perform and analyze where we currently stand in this task, hoping to provide the tools to facilitate studies in this complex area.", "Emotion detection a long-standing open problem in Natural Language Processing (NLP) is the task of automatically associating one or more emotions with a text.", "Even though emotional states are highly subjective and often depend on several factors, such as one's past experiences, culture and ed-ucation, the automatic identification, categorization and analysis of emotions in texts has been found to be beneficial in a wide array of downstream tasks, such as hate speech detection (Markov et al., 2021), sarcasm detection (Chauhan et al., 2020), and modeling political discourse (Huguet Cabot et al., 2021), inter alia .", "In the past decade, Deep Learning techniques have become ubiquitous in the development of automatic systems for an increasing number of NLP tasks, including emotion detection (Chatterjee et al., 2019).", "However, most of the effective neural-based approaches still require significant amounts of training data in order to learn to perform at their best.", "For this reason, with a view to bootstrapping the development of neural systems for emotion detection, there have been several efforts to annotate corpora with emotions manually (Bostan and Klinger, 2018).", "Nevertheless, over the past few years, numerous studies have indicated that a short text, even a single sentence, may contain multiple at times concurring, at other times contrasting sentiments and emotions.", "And not only this, two emotions in the same sentence may be experienced, targeted, and/or caused by different semantic constituents which, similarly to predicate-argument structures in Semantic Role Labeling (SRL), can be linked to form abstract semantic structures.", "The potential applications in social media analysis, abuse detection, and other actively studied areas in NLP (Rajamanickam et al., 2020) of such automatically-extracted emotion-focused semantic structures have prompted researchers to create datasets aimed at investigating the capabilities of modern systems to parse emotional events (Ober-lnder et al., 2020).", "Unfortunately, despite the increasing interest in this area, currently available gold datasets feature heterogeneous structures and characteristics, ranging from varying sizes to different domains, file format, splits and, most importantly, non-overlapping emotion categories.", "We argue that this heterogeneity obstructs, or at least hinders, further progress in this relatively new area of sentiment analysis.", "In this paper, we take a step towards addressing the above-mentioned issues and introduce a unified framework for Semantic Role Labeling for Emotions (SRL4E).", "In SRL4E, we unify several gold but heterogeneous datasets that contain anno-4586 tations both for emotions and for their semantic constituents, so as to obtain a new homogeneous dataset that covers diverse domains and that can be used to train, validate and evaluate current and future work in this task.", "Our contributions can be summarized as follows: We propose a unified gold benchmark for training and evaluating a system on Semantic Role Labeling for Emotions (SRL4E); We take advantage of SRL4E to show the inadequacy of training a model on domain-specific data and the benefits of our unified framework; We show the advantages of bilingual transfer from English to Chinese, and vice versa, in SRL4E.", "We release SRL4E at https://github.com/ SapienzaNLP/srl4e in the hope that our unified framework will become a stepping stone for the development and evaluation of current and future approaches to Semantic Role Labeling for Emotions.", "Emotion classification datasets.", "Currently, there are a wide variety of datasets annotated with emotion classes, ranging across different domains and using different annotation schemes.", "Among others, we can find datasets on emotional experiences (Scherer and Wallbott, 1994), children's fairy tales (Alm et al., 2005), news headlines (Strapparava and Mihalcea, 2007), blog posts (Aman and Szpakowicz, 2007, 2008), news (Lei et al., 2014), social media posts and reviews (Buechel and Hahn, 2017), dialogs (Li et al., 2017; Chatterjee et al., 2019), Facebook posts (Preotiuc-Pietro et al., 2016), with many focusing on tweets (Mohammad, 2012; Mohammad and Bravo-Marquez, 2017; CrowdFlower, 2016; Liu et al., 2017; Schuff et al., 2017) due to their tendency to have dense emotional content.", "To meet such a diversity of contents and formats, Bostan and Klinger (2018) created a unified resource for emotion classification comprising many of the aforementioned datasets, while Tafreshi and Diab (2018), instead, added an additional clause-level annotation layer to some existing resources.", "More recent efforts, such as GoEmotion (Demszky et al., 2020), XED (hman et al., 2020) and CancerEmo (Sosea and Caragea, 2020), provide, respectively, emotion annotations for Reddit comments, multilingual subtitles and blog posts about health problems.", "Although the above-mentioned corpora have enabled systems to perform emotion detection across different domains, their annotations are sentence-level and, therefore, introduce an oversimplifica-tion: they indicate only the overall sentiment and/or emotion that appears in a given text, neglecting the cases in which a short text, even a single sentence, may express multiple emotions.", "Furthermore, the aforementioned datasets do not indicate which part of the text elicits an emotion and who experiences, is the target of, or causes that emotion.", "As a consequence, a system trained on these datasets may produce predictions that are hard to interpret and more difficult to use in real-world applications.", "To overcome these problems, we rely on resources that not only indicate emotions, but also identify their semantic constituents, namely, emotional CUE s, EXPERIENCER s, TARGET s and STIMULI .", "Emotion Taxonomy.", "Among the studies that aim to identify the fundamental emotions, Ekman (1992) proposed a set of six categories: anger , disgust , fear , joy , sadness and surprise ; Plutchik (1980) shared the same set with two additions: anticipation and trust .", "Instead of relying on discrete categories, Russell (1980) proposed the circumplex model where every emotion can be described by three continuous values: arousal , dominance and valence .", "More recent studies in psychology use more fine-grained sets of emotions, ranging from 12 (Cowen et al., 2019b) to 28 categories (Cowen and Keltner, 2020), devised depending on the context of the study, e.g., speech prosody and facial expressions.", "However, the analysis of Demszky et al. (2020) over a fine-grained set of 28 emotions suggests that a large number of categories results in more frequent disagreements on similar classes (such as anger and annoyance , or excitement and joy ) which, in turn, can lead to low inter-annotator agreement and unbalanced distributions among some of these categories.", "Therefore, we adopt Plutchik's Wheel of Emotions (Plutchik, 2001), which provides clearly distinct and well-defined coarse-grained categories, whose composition can be used to virtually describe all other fine-grained sets.", "Moreover, some datasets in SRL4E (Moham-mad et al., 2014; Kim and Klinger, 2018; Bostan et al., 2020) already use Plutchik's or Plutchik-4587 based categories.", "Emotions and SRL.", "Over the past few years, automatic systems for SRL have achieved impressive performance in identifying and labeling predicate-argument relations (Shi and Lin, 2019; Conia and Navigli, 2020; Blloshmi et al., 2021; Conia et al., 2021), and have long become useful tools in several downstream tasks, from Question Answering (He et al., 2015) to Machine Translation (Marcheggiani et al., 2018).", "Defined by Mrquez et al. (2008) as the task of answering the question Who did What to Whom, Where, When and How? , SRL is almost a natural choice for the extraction of the semantic constituents of those events that elicit emotional states.", "Indeed, emotional CUE s can be seen as particular types of predicates, and their semantic constituents as their arguments.", "Among the currently available datasets for emotion detection, there are some that also provide this kind of more granular semantic information.", "In particular Aman and Szpakowicz (2007) and Liew et al. (2016) released corpora that indicate multiple emotions and their corresponding emotion CUE s in each sentence; Ghazi et al. (2015) and Gao et al. (2017) indicate the cause of an emotion, with the latter providing such annotations both in English and in Chinese.", "Finally, Mohammad et al. (2014), Mohammad et al. (2015), Kim and Klinger (2018) and Bostan et al. (2020) provide annotations for emotion CUE s, EXPERIENCER s, TARGET s and STIMULI , employing, however, different sets of emotions in different domains.", "This means that the results of a system trained on one of these datasets cannot be compared against the results of another system trained on a different dataset, emphasizing the need for a unified framework to train and evaluate future approaches to this task.", "This is also evidenced by the success of existing works, e.g. Bostan and Klinger (2018) for sentence-level Emotion Classification and Raganato et al. (2017) for Word Sense Disambiguation.", "In SRL4E, not only do we aggregate the resources under the same task formulation, but we also manually correct their inconsistencies and unify the different emotion schemes.", "In this Section, we introduce SRL4E.", "We first describe the categories of emotions and the format of the semantic roles we adopt to unify the annotation scheme of the original datasets.", "Next, we provide a short overview of the datasets included in SRL4E.", "Finally, we give a formal definition of the task.", "The task of SRL (Gildea and Jurafsky, 2000) is aimed at identifying, given an input sentence, who or what the participants are in an action or event denoted by a predicate.", "As mentioned in Section 2, this is comparable to answering the question Who did What to Whom , Where , When and How ? (Mrquez et al., 2008).", "When it comes to emotions, however, the task does not necessarily revolve around an action, but more precisely around an emotional cue , a word or an expression that acts as a trigger for an emotion.", "Therefore, it would be more appropriate to reformulate the question as: Who feels What , towards Whom and Why ?.", "To answer this question, we first need to define a set of semantic roles, i.e., semantic relations that can exist between an emotion CUE and its semantic constituents.", "Following previous work (Mohammad et al., 2014; Bostan et al., 2020), we take a subset of semantic roles, namely, EXPERIENCER , TARGET and STIMULUS , from those defined in the Emotion semantic frame of FrameNet (Baker et al., 1998).", "While the use of thematic roles allows for human-readable labels (Kipper Schuler, 2005; Di Fabio et al., 2019), we also provide their respective definitions in Table", "1. 3.2 Choosing a common set of emotions In psychology, the debate on which categories are best suited for describing emotions is still open (Barrett et al., 2018; Cowen and Keltner, 2018; Cowen et al., 2019a).", "There are numerous studies that try to tackle this problem, and some of the most authoritative were briefly described in Section 2, above.", "In this work, we adopt Plutchik's Wheel of Emotions (Plutchik, 1980, 2001) to standardize the heterogeneous emotion categories used in the various datasets.", "Plutchik's Wheel of Emotions is composed of a coarse-grained set of 8 basic emotions: anger , fear , sadness , disgust , surprise , anticipation , trust , and joy .", "These emotions can be compounded into dyads which express the much wider range of human feelings, with the advantage of maintaining a solid and unambiguous base set.", "For example, combining anticipation together with joy describes the emotion of optimism , whereas anticipation with sadness describes pessimism .", "Further compositions are described in Appendix A. SRL4E includes 6 datasets: 4588 Role Definition CUE Trigger word or expression that describes (even implicitly) an emotion.", "Kim and Klinger (2018) and Bostan et al. (2020) use Plutchik's or Plutchik-based emotions; Aman and Szpakowicz (2007) and Gao et al. (2017) use Ekman's or Ekman-based emotions, which are a subset of Plutchik's set and can be directly mapped to it; Mohammad et al. (2014) use 19 emotions, but provide a mapping to Plutchik's emotions; Liew et al. (2016) use 28 emotions for which we provide a mapping to Plutchik's emotions.", "We provide a more detailed description of each dataset in Section 3.3.", "As a further contribution, we produce an alignment of each set of emotions to a sentiment polarity positive, negative, neutral, or other (used when polarity cannot be inferred based on the emotion category) to allow SRL4E also to be used to train and evaluate a system on Semantic Role Labeling for Sentiments.", "In the following, we describe the datasets that we included in SRL4E.", "For each dataset, we provide general information, including source, domain, format and tagging scheme.", "We also indicate where we intervened manually to identify and correct errors such as typos, format errors and inconsistencies.", "Table 2 reports the sizes of the original and converted datasets in SRL4E.", "Table 3 summarizes which annotations form part of the original corpora and, therefore, which ones are also part of SRL4E.", "We report the license, availability and link of each resource in Appendix B. Blogs.", "This dataset, proposed by Aman and Szpakowicz (2007), consists of 5,202 sentences, extracted from 173 online blog posts.", "Each sentence Resource Original SRL4E % Blogs 5,202 4,855 93.3 Elections 1,385 1,024 73.9 EmoTweet 15,553 15,553 100.0 GNE 5,000 5,000 100.0 NTCIR (ZH) 2,022 1,956 96.7 NTCIR (EN) 1,826 1,796 98.4 REMAN 1,720 1,705 99.1 All 32,708 31,889 97.5 Table 2: Original/new sizes after conversion to SRL4E.", "is annotated using Ekman's six emotion categories and no emotion , along with intensities.", "The words or spans that indicate emotions are marked, allowing us to remap them to the CUE in our unified format.", "The dataset was annotated by two experts.", "For each sample, we decided to consider only the CUE s that were annotated with the same emotion by both annotators.", "Where possible, we manually identified and corrected some annotations containing typos.", "Elections.", "This dataset, introduced by Mohammad et al. (2014, 2015), includes 1,385 unique tweets related to the 2012 US presidential election and collected using the Twitter API.", "The tweets were annotated via crowdsourcing using an informative tagging scheme which comprised not only 4589 I stand by Obama 100% he deserves another 4yrs in office.", "a set of 19 emotions, but also other features such as emotion intensity, valence, purpose, style, CUE , EXPERIENCER , TARGET and STIMULUS .", "Each sample was annotated by multiple people, i.e., each sample appears more than once with different annotations, one for each annotator.", "We adjudicated role spans by majority voting, discarding all the tweets with conflicting annotations.", "EmoTweet.", "EmoTweet, presented by Liew et al. (2016), is the largest dataset that we include in our unified resource.", "It comprises 15,553 tweets, collected through the Twitter API using various sampling strategies (e.g., by user, by topic, random, etc.) and annotated via crowdsourcing.", "The original tagging scheme of this dataset features 28 emotion categories along with valence and arousal.", "For each emotion, the CUE s are indicated and are easily mappable to our unified format.", "However, a mapping to Plutchik's emotions is not provided by the authors, so we formulated a conversion scheme based on the similarity of the emotion categories with those from other works that are instead mapped to Plutchik's emotions, such as Demszky et al. (2020).", "In addition, we also intervened to identify and manually correct some typos in the annotations.", "GNE.", "GoodNewsEveryone, proposed by Bostan et al. (2020), is a dataset composed of 5,000 news headlines from 82 sources, annotated via crowdsourcing.", "It is labeled with writer and reader emotions using a set of emotions derived from Pluthick's classes and is, therefore, easily mappable to the standard Plutchik set.", "To keep the annotations consistent with those of the other datasets in our unified framework, we considered only the writer's emotions.", "GNE provides annotations for every semantic role we include in our framework, namely, CUE , EXPERIENCER , TARGET and STIMULUS , making this resource highly valuable for our purposes.", "Whenever possible, we identified and manually corrected the annotations that contained typos.", "NTCIR 13 ECA.", "This dataset was proposed as a part of the NTCIR 13 Emotion Cause Analysis task.", "It consists of 1,826 unique sentences from English novels and 2,022 unique sentences from Chinese news, annotated using Ekman's classes.", "Moreover, emotion keywords and causes are annotated, making them suitable to be considered, respectively, as CUE and STIMULUS in our unified format.", "REMAN.", "Relational EMotion ANnotation, introduced by Kim and Klinger (2018), is a corpus consisting of 1,720 fictional text excerpts from Project Gutenberg.", "These documents were annotated using an informative tagging scheme, which included emotion categories based on Plutchik's set, CUE , EXPERIENCER , TARGET , STIMULUS , named entities, events and coreferences, making it another desirable dataset for our unified framework.", "For some sentences, we automatically identified and manually corrected some typos in order to increase the overall quality of this dataset.", "Here we provide a more formal definition of the SRL4E task.", "Unlike the majority of previous work on emotion detection, instead of assigning an emotion to a sentence, we associate each emotion with a CUE .", "In this way, in each sentence, more than one CUE can be identified and associated with its corresponding emotion category and semantic roles, allowing the coexistence of multiple emotions, EXPERIENCER s, TARGET s and STIMULI in the same sentence.", "A visual representation of the relationship between CUE , emotion category and roles is shown in Figure", "1. To the best of our knowledge, other than SRL4E, Liew et al. (2016) and Kim and Klinger (2018) are the only approaches that leverage CUES to model the presence of multiple emotions in a sentence.", "for Emotions can be divided into three key steps: CUE identification, emotion classification and role identification.", "While there are no hard constraints on the order of these steps, we believe that CUE identification should be done first since its output will serve as the input of the other two steps, however, we also believe that our framework could be a step towards the development of joint approaches that solve the three steps at the same time.", "Cue identification.", "As we described earlier, the CUE acts similarly to a predicate in SRL.", "Indeed, the main objective of CUE identification is to recognize where and how many emotions are present in a sentence, and what their trigger words or expressions are.", "The output of this step consists of a set of CUE s, each corresponding to an emotion in the text, as illustrated in Figure", "1. Emotion classification.", "Traditional approaches in emotion classification take as input a sentence and output the emotion class corresponding to that sentence.", "In SRL4E, instead, given a pair (sen-tence, CUE ) we want to classify the emotion expressed in the sentence by the indicated input CUE .", "Note that the result of this approach is not necessarily the same as a sentence-level approach.", "Role identification.", "As previously stated, SRL aims at identifying the semantic constituents of an action expressed by a predicate.", "In SRL4E, instead, we are interested in identifying the actants of an emotional event which is hinted at by the CUE .", "Therefore a CUE can be considered in the same way as a predicate in SRL and role identification consists in identifying all those spans of text that have a semantic relationship EXPERIENCER , TARGET , STIMULUS with the CUE .", "Emotion classes distribution.", "Depending on the dataset, the distribution of emotion classes changes drastically, as illustrated in Figure", "2. For example, in Elections, which contains random tweets related to an American election campaign, almost 45% of samples are tagged with disgust , as one might expect: this is because many of the tweets in question tend to discredit the opposing party; similarly, the second most used class is trust in the tweets in favor of candidates.", "Another interesting example is GNE, where the most frequent category is surprise , highlighting the sensationalistic tone typically found in newspaper headlines.", "It is worth noting that, in contrast to each individual dataset, our unified dataset includes a fairly balanced distribution between categories, where the only category that appears more often is joy (20%), while all the others are between 6% and 10%, approximately.", "Other statistics.", "The statistics reported in Table 4 show the heterogeneity of the resources included in our framework, with very different text and role lengths.", "In fact, datasets containing sentences from similar domains share similar values.", "For example, REMAN and the English version of NTCIR both come from novels and they have comparable text lengths, from 58 to 59 words on average.", "Instead, Blogs (from online blog posts), Elections and EmoTweet (from tweets) have much shorter sentences, from 14 to 16 words, approximately.", "Table 4 also shows that almost all CUE s are very short, usually around 1-2 words; only those datasets involving tweets have a much higher value.", "In fact, in EmoTweet and Elections, CUE s contain 4 and 8 words on average, respectively, due to their dense emotional content and, therefore, their larger number of trigger expressions.", "It is interesting to note that all datasets feature a similar average length for STIMULI , regardless of the domain.", "Borderline examples.", "SRL4E's formulation is based on the presence of CUE s within sentences, which are seen as the trigger of the emotion in that context.", "This formulation particularly suits those domains where emotions are expressed explicitly, such as GNE, NTCIR and REMAN.", "However, handling CUE s becomes non-trivial in some situations, for example in social networks (Elections and EmoTweet) and blog posts (Blogs).", "In these contexts, language features numerous implicit references and ironic content, where the mere presence of an emoji or a particular punctuation mark completely changes the context.", "In our task formulation, the presence of a CUE is a fundamental requirement even if it may be difficult to identify, as we want to be able to model multiple, sometimes opposite, emotions in the same sentence.", "Here is an example: @user Quieter. My sis, brother in law and habibti are going back to Ireland this afternoon [ ;/ CUE ] Tennis doubles [ sounds fun CUE ] ! [ Enjoy CUE ] ! #Juice!", "In this case, the sadness emotion is associated only with the first CUE , which is ;/, while the joy emotion is associated with the other two.", "Even if a CUE is composed only of punctuation marks (or emojis), it may still be the only useful signal for disambiguating the emotion, or for separating the presence of multiple emotions in the same sentence.", "In this Section, we analyze the benefits that our unified framework can bring to a neural model, based on recent contextualized representations from a pretrained language model.", "The main roadblock to the development of neural models for Semantic Role Labeling for Emotions the heterogeneity of the emotion labels employed", "by each currently available dataset.", "Therefore, we first evaluate the benefits that a unified framework brings in emotion classification.", "Note that, differently from traditional sentence-level Emotion Detection, here we are interested in assigning an emotion to a given (sentence, CUE ) pair, so as to allow a sentence to be assigned different emotions depending on the CUE considered.", "Model description.", "We design a simple neural baseline composed mainly of a BERT-based word representation module and a stack of BiLSTM layers.", "Given an input sentence w and a pre-identified CUE c , the two are concatenated as an input sequence s = [CLS] w [SEP] c [SEP] and fed into the BERT-based word representation module, obtaining a sequence of word encodings e = BERT ( s ) .", "These word encodings are further processed by a stack of 2 BiLSTM layers with hidden size 512 to obtain a new sequence of output encodings o = BiLSTM ( e ) .", "Finally, the output encoding o [CLS] corresponding to the [CLS] token is fed into a linear classifier which outputs the emotions corresponding to the (sentence, CUE ) pair.", "Each model configuration is trained to minimize a binary cross-entropy loss for emotion classification (more than one emotion can be assigned to a given input), for a total of 20 epochs with Adam and a learning rate of 10 3 , leaving the weights of the underlying language model frozen.", "Results.", "Table 5 shows the results of our system on emotion classification.", "First, our unified framework reveals that a system trained on a single dataset can achieve good results on the test set of the same dataset, i.e., on an in-domain evaluation, but is not able to perform as well on other datasets, i.e., on out-of-domain evaluations.", "Instead, the same system trained jointly on the datasets of SRL4E is able not only to perform consistently across all the test sets, but also to improve over the same system trained on in-domain data only, demonstrating empirically the effectiveness of employing a unified scheme for emotion classification.", "This is not a given, since each dataset differs sometimes significantly from the others in domain and linguistic register.", "On average, when using multilingual-BERT as the underlying language model, our unified framework provides an improvement of 11.2% in F1 score over EmoTweet, the second best dataset (64.3% against 53.1%).", "Moreover, Table 5 shows that our unified framework 4592 Trained on Evaluated on (F1 score) Model BL EL ET GN N/E N/Z RE BL EL ET GN N/E N/Z RE ALL m u ltili ngua l BERT (cid:52) 51.0 13.3 38.6 15.3 24.6 11.1 21.8 29.9 (cid:52) 9.2 40.5 21.7 15.6 8.7 7.9 13.7 17.2 (cid:52) 49.9 32.2 76.7 20.1 48.8 22.8 38.2 53.1 (cid:52) 34.3 25.5 30.3 29.0 29.0 18.3 23.3 28.6 (cid:52) 42.1 10.8 34.2 4.0 30.2 11.9 20.1 26.0 (cid:52) 8.9 5.7 17.5 2.6 21.4 22.8 9.2 13.9 (cid:52) 35.4 7.8 22.1 4.8 16.1 2.8 23.5 17.8 (cid:52) (cid:52) (cid:52) (cid:52) (cid:52) (cid:52) (cid:52) 65.9 40.7 74.6 33.7 78.5 77.8 54.1 64.3 Table 5: F1 scores on emotion classification .", "allows our system to improve in bilingual emotion classification (77.8% against 22.8% in F1 score on ALL).", "We now turn to CUE identification, where we aim to find every CUE in an input sentence.", "We frame this subtask as a BIO-tagging problem and devise a neural model to highlight the benefits of our unified framework in this task.", "Model description.", "For CUE identification, we use a similar system architecture to the one we used for emotion classification.", "However, this time the input of the BERT-based word representation module is just the input sentence, whereas the output is a sequence of BIO tags.", "Specifically, the output encodings o = o 1 , . . . , o n produced by the last BiLSTM layer are given to a classifier which learns to predict B-cue, I-cue or O. Results.", "As one can see in Table 6, similarly to what we observed in emotion classification, our unified framework highlights how a model trained on a single dataset is not robust to out-of-domain evaluations.", "Instead, the same model trained on all the datasets in SRL4E shows consistent results across all the test sets, providing a significant improvement in F1 score over the second best dataset, EmoTweet (56.5% against 47.3% in F1 score on ALL, with an absolute improvement of 9.2%).", "Model description.", "For role identification, we use an approach similar to that for CUE identification.", "Indeed, similarly to CUE identification, we model role identification as a BIO-tagging problem, with the only difference being that we provide the pre-identified CUE in input, i.e., the input sequence is s = [CLS] w [SEP] c [SEP], where w is the input sentence and c is the CUE span.", "Results.", "We find that our results on the identification of each role are in line with the results from CUE identification, leading us to draw similar conclusions (see Tables 7, 8 and 9).", "In general, we see a familiar pattern in which training our baseline model on a single dataset results in good perfor-4593 Trained on Evaluated on (F1 score) Model EL GN N/E N/Z RE EL GN N/E N/Z RE ALL m u ltili ngua l BERT (cid:52) 52.8 62.7 24.5 25.9 14.1 32.2 (cid:52) 42.4 75.8 21.9 22.4 13.8 37.1 (cid:52) 31.6 40.8 50.4 12.9 20.1 30.3 (cid:52) 16.5 16.5 20.1 56.2 15.9 38.5 (cid:52) 9.8 9.0 24.4 3.8 26.4 10.3 (cid:52) (cid:52) (cid:52) (cid:52) (cid:52) 54.5 76.3 52.7 57.8 16.6 62.5 Table 7: F1 scores of our baseline model on STIMULUS identification .", "mances on that specific dataset, but with significantly lower results on out-of-domain data.", "Results generally benefit from a unified resource.", "For instance, emotion classification and STIMULUS identification almost always struggle in out-of-domain evaluations, while they perform better when the model is trained on all the datasets at the same time.", "The only exception is CUE identification: when our model is trained on all the data in SRL4E, the performance drops when measured on each dataset separately.", "This is to be expected: while STIMULI follow a similar syntactic pattern across domains, CUE s appear in very different forms (e.g., Twitter usually contains highly informal language with explicit emotions, while news headlines tend to try to describe events objectively, making emotions more implicit).", "Instead, when the datasets share a similar domain, the model is able to generalize well, even in cross-lingual settings (such as the English and Chinese versions of NTCIR), highlighting once again the advantages of our unified framework.", "Recently, the study of emotions in NLP has been gaining interest, due to their potential not only for application to downstream tasks, but also for enhancing the interpretability of automatic outputs, especially when emotions are accompanied by information about their semantic constituents, i.e., their experiencers, targets and stimuli.", "However, recent efforts to provide manually annotated data for emotions and their semantic constituents have been heterogeneous in their annotation scheme, making it difficult to train, evaluate, and compare novel approaches.", "In this paper, we aimed at addressing these issues and presented a unified framework for the Semantic Role Labeling of Emotions (SRL4E).", "Our framework collects, cleans, and unifies the annotation schemes of six datasets that provide information about emotions and their semantic roles, making it easy to train and evaluate existing and future systems.", "We conducted several experiments to demonstrate empirically that our unified scheme is beneficial in each subtask, namely, emotion classification and role (experiencer, target, stimulus) identification, especially in bilingual settings (English-Chinese).", "With SRL4E, we hope to stimulate future research in this complex area at the intersection of Emotion Detection and Semantic Role Labeling.", "We release the software to reproduce the benchmark and our experiments at https://github.com/SapienzaNLP/srl4e .", "The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 and the European Language Grid project No. 825627 (Universal Semantic Annotator, USeA) under the European Union's Horizon 2020 research and innovation programme.", "This work was supported in part by the MIUR under grant Dipartimenti di Eccellenza 2018-2022 of the Department of Computer Science of Sapienza University of Rome." ]
[ "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "method", "objective", "abstain", "other", "other", "other" ]
[ "Previous work of class-incremental learning for Named Entity Recognition (NER) relies on the assumption that there exists abundance of labeled data for the training of new classes.", "In this work, we study a more challenging but practical problem, i.e. , few-shot class-incremental learning for NER, where an NER model is trained with only few labeled samples of the new classes, without forgetting knowledge of the old ones.", "To alleviate the problem of catastrophic forgetting in few-shot class-incremental learning, we generate synthetic data of the old classes using the trained NER model, augmenting the training of new classes.", "We further develop a framework that distills from the NER model from previous steps with both synthetic data, and real data from the current training set.", "Experimental results show that our approach achieves significant improvements over existing baselines.", "Existing models of Named Entity Recognition (NER) are usually trained on a large scale dataset with predefined entity classes, then deployed for entity extraction on the test data without further adaptation or refinement.", "In practice, data of new entity classes that the NER model has not seen during training arrives constantly, thus it is desirable that the NER model can be incrementally updated over time with knowledge of data for these new classes.", "In this case, one challenge is that the training data of old entity classes may not be available due to privacy concerns or memory limitations (Ma et al., 2020).", "Then, the model can easily degrade in terms of the performance on old classes when being fine-tuned with only annotations of new entity classes, i.e. , catastrophic forgetting .", "In addressing this problem, previous work in class-incremental learning for NER (Monaikul * Corresponding Author et al., 2021) regularizes the current model by distilling from the previous model trained on old (ex-isting) classes, using text from the training dataset of new classes.", "However, this requires abundance of data in the new training dataset being used for distillation.", "Such an assumption is usually unrealistic since the token-level annotations required by NER training are labor-consuming and scarce, especially for the new unseen classes.", "In this paper, we study a more realistic setting, i.e. , few-shot class-incremental learning for NER, where the model ( i ) incrementally learns on new classes with few annotations, and ( ii ) without requiring access to training data for old classes.", "There is very limited work in few-shot class-incremental learning for NER.", "Such a setting is more challenging compared with class-incremental learning for NER.", "First, the few-shot datasets in few-shot class-incremental learning may not contain enough information for the trained model to generalize during testing.", "Second, it is more challenging to solve the catastrophic forgetting problem in few-shot class-incremental learning when data for old classes is not available and new data is scarce.", "In class-incremental learning for NER (Monaikul et al., 2021), the same training sequence may contain entities of different classes.", "Therefore, when the training dataset for new classes is sufficiently large, its context, i.e. , words labeled as not from entities of new classes, may also contain abundant entities of the old classes.", "That is, the new training data can be regarded as an unlabeled replay dataset of the existing entity classes.", "In such case, we can simply address the problem of catastrophic forgetting by distilling from the previous model (trained on old classes) to the current one, using text from such a replay dataset (Mon-aikul et al., 2021).", "However, in few-shot class-incremental learning, we cannot expect to avoid catastrophic forgetting by distilling with only the few samples from the new training dataset, since 571 there may not exist sufficient (if any) entities of the old classes.", "In this paper, we propose a framework to enable few-shot class-incremental learning for NER.", "As mentioned above, since the few-shot dataset may not contain enough entities of old classes as replay data for distilling from the previous model, which leads to catastrophic forgetting, we consider generating synthetic data of the old entity classes for distillation.", "Such data is termed as synthetic replay .", "Specifically, we generate synthetic data samples of old classes by inverting the NER model.", "Given the previous model trained on the old classes, we optimize the token embeddings of the synthetic data, so that predictions from the previous model can contain old entity classes, given the synthetic data as input.", "In this way, the synthetic data is likely to contain entities of old classes, and distilling from the previous model with such data will thus encourage knowledge preservation of old classes.", "Additionally, to ensure the synthetic (reconstructed) data to be realistic, we propose to leverage the readily available real text data for new classes, via adversarially matching the hidden features of tokens from the synthetic data and those from the real data.", "Note that the synthetic data generated from such adversarial match with real data will contain semantics that are close to the real text data for new classes.", "Consequently, compared with training with only the few samples of new classes, the synthetic data will provide more diverse context that are close to the samples of the few-shot dataset, augmenting the few-shot training for the new classes.", "Further, with the generated synthetic data, we propose a framework that trains the NER model with annotations of the new classes, while distilling from the previous model with both the synthetic data and real text from the new training data.", "Our contributions of this work are summarized as follows: We present the first work of studying few-shot incremental learning for Named Entity Recognition (NER), a more practical but challenging problem compared with class-incremental learning for NER.", "We approach the problem by proposing a framework that distills from the existing model with both, real data of new entity classes and synthetic data reconstructed from the model as replay data of old entity classes.", "Experiments show that our method significantly improves over existing baselines for the task of few-shot class-incremental learning in NER.", "Assume there is a stream of NER datasets D 1 , . . . , D t , . . . , annotated with disjoint entity classes, where t is the time step and D t = { ( X ti , Y ti ) } |D t | i =1 contains c t entity classes.", "Here X ti = [ x ti, 1 , , x ti,N i ] and y ti = [ y ti, 1 , , y ti,N i ] are the NER token and label sequences, respectively, with length N i , and |D t | is the size of the dataset.", "Dataset D 1 is the base dataset, assumed of reasonably large size for classes of step t = 1 .", "The datasets {D t } t> 1 are the few-shot datasets with about K samples for each class.", "In few-shot class-incremental learning, the NER model will be incrementally trained with D 1 , D 2 , . . . , over time, with data from D t only available at the t th time step.", "After being trained with D t , the model will be evaluated jointly on all entity classes encountered in D 1 , , D t , i.e. , we do not learn separate prediction modules for each time step.", "Figure 1 shows an example of annotations for different incremental learning steps on classes of PER , LOC , and TIME .", "In Figure 1, we should note that tokens that are labeled as O in the current step are likely to contain abundant entities from the previous classes.", "For instance, tokens annotated as O in step 3 include entities of previous classes, i.e. , PER and LOC .", "Therefore, when a large amount of training data is available for the new classes, the new dataset can be regarded as unlabeled replay data of previous classes.", "As an example, in Monaikul et al. (2021), 572", "(a) London was attacked in 1943 .", "(b) Figure 2:", "(a) An example of L syn of Eq (4) at step 3 of Figure 1.", "(b) An example of distilling with D t at step 3 of Figure 1. M 2 and M 3 are models from step 2 and 3, respectively.", "We replace the predictions on the position of 1943 from M 2 with the correct annotation, TIME, from D 3 before training on M 3 .", "their performance of class-incremental learning on CoNLL2003 has been comparable or even better than training with full annotations of all the classes encountered, by just distilling with the training data of the new classes.", "However, in few-shot class-incremental learning, the few training samples of the current step may not contain enough entities of the previous classes.", "In Section 4, we also discuss the difference between few-shot class-incremental and few-shot learning for NER.", "Following Beltagy et al. (2019); Souza et al. (2019), we use the BERT-CRF as our NER model, which consists of a BERT base (Devlin et al., 2018) encoder with a linear projection and a conditional random field (CRF) (Lafferty et al., 2001) layer for prediction.", "We denote M t as the NER model for step t .", "M t is initialized from M t 1 to preserve knowledge of old classes.", "For time step t > 1 , M t is expected to learn about the new classes from D t , while not forgetting the knowledge from {D k } t 1 k =1 .", "Assume we have already obtained a synthetic dataset D tr = { E t,ri , Y t,ri } |D tr | i =1 of previous entity classes from {D k } t 1 k =1 , where E t,ri = [ e t,ri, 1 , , e t,ri,N i ] and Y t,ri = [ y t,ri, 1 , , y t,ri,N i ] are the reconstructed token embeddings and reference label sequence.", "Y t,ri is a randomly sampled label sequence containing classes from the previous steps and E t,ri is optimized so the output from M t 1 with E t,r i matches Y t,ri .", "We will discuss the construction of the synthetic D tr in Section 3.2.", "Given the current training data D t and M t 1 that has been trained on D t 1 , we propose to train M t by distilling from M t 1 with both the real data from D t and synthetic data from D tr .", "The challenge of such distillation is that the predictions from M t and M t 1 are likely to contain different set of labels, i.e. , M t should also predict with the new entity classes from D t .", "This is different from the standard setting of distillation, where the teacher and student models share the same label space (Hinton et al., 2015).", "In tackling such a problem of label space discrepancy, we propose separate approaches of distillation for D t and D tr , respectively.", "The distillation from M t 1 to M t involves matching the output distributions between M t to M t 1 .", "However, given an input sequence X from D t , the CRF layer outputs correspond to a sequence-level distribution P ( Y | X ) , i.e. , probabilities for all possible label sequences of X , the cardinality of which, grows exponentially large with the length of X .", "Therefore, it is infeasible to match with the exact output distributions of CRF.", "Following the current state-of-the-art approach of NER distillation (Wang et al., 2020b), we approximate the sequence-level output distribution of CRF with only its top S predictions.", "Specifically, for model M t 1 , we have, PM t 1 ( Y | X ) = [ PM t 1 ( Y 1 | X ) , . . . , (1) PM t 1 ( YS | X ) , 1 S (cid:88) s =1 PM t 1 ( Y s | X )] , where { Y s } Ss =1 are the top S most probable predictions of label sequence from M t 1 .", "We set S = 10 .", "In this way, the output from the CRF of M t 1 becomes tractable.", "However, M t still cannot be trained with such an output from M t 1 .", "This is because M t 1 was not trained with the 573 new classes in D t .", "Therefore, when X is from D t , M t 1 will have wrong predictions on the tokens labeled as being from entities of new classes.", "In order to distill with M t 1 , we propose a correction for { Y s } Ss =1 .", "Figure", "2(b) shows an example of such a process.", "Specifically, on the positions of the sequence where D t has labeled as new classes, we replace the predictions in { Y s } Ss =1 with the annotations from D t .", "We denote the corrected set of predictions as { Y cs } Ss =1 .", "For training of M t , we first calculate the predicted distribution of M t with respect to { Y cs } Ss =1 , as PM t ( Y | X ) =[ PM t ( Y c 1 | X ) , , PM t ( Y cS | X ) , 1 S (cid:88) s =1 PM t ( Y cs | X )] , (2) where we compute the predicted probabilities from M t with regard to { Y cs } Ss =1 from M t 1 .", "Then, M t can be trained by minimizing the cross entropy between PM t 1 ( Y | X ) and PM t ( Y | X ) via L real ( D t ) = (3) 1 | D t | (cid:88) X D t CE ( PM t 1 ( Y | X ) , PM t ( Y | X )) , where CE ( , ) is the cross entropy function.", "Note that the definition of O is different in M t 1 and M t .", "Take Figure", "2(b) as an example, the prediction of O in step 2 corresponds to both O and TIME for step 3, since TIME is not in the target entity classes of step 2. However, from the annotation of step 3, we know that tokens annotated as O are not TIME .", "Therefore, we can safely assume that the prediction of O in { Y cs } Ss =1 from M 2 matches the definition of O in M 3 , i.e. , the semantics of O in { Y cs } Ss =1 is the same for M t and M t 1 .", "Different from data in D r , in which we know tokens annotated as O are not from the new classes, data from D tr is reconstructed from M t 1 and only contains labels for the previous classes.", "Any token predicted with \"O\" from M t 1 can be potentially labeled as O or the new classes by M t .", "Therefore, with D tr , it is unclear how to correct the output of CRF from M t 1 , i.e. { Y s } Ss =1 , for training of M t .", "By considering the above, we resort to another approach that decomposes the output from CRF, i.e. , sequence level label distribution, into marginal label prediction for each token, using the forward and backward method in Lafferty et al. (2001).", "Figure", "2(a) shows a graphic example of our distillation loss L syn with D tr .", "Specifically, let C t be the cumulative number of possible labels for any given token in NER at step t , i.e. , C t = (cid:80) tk =1 c t , with c t be the number of class in D t .", "For each token with embedding e , we define p te = [ p te,O ; p te,C t 1 ; p te,c t ] and p t 1 e = [ p t 1 e,O ; p t 1 e,C t 1 ] as the predicted marginal distribution of a token from M t and M t 1 , respectively.", "p te,O , p t 1 e,O R are the probabilities for class O , whereas p te,C t 1 , p t 1 e,C t 1 RC t 1 are the probabilities for entity classes encountered up to step t 1 .", "Further, p te,c t R c t are probabilities for the new classes in step t .", "Since O from step t 1 corresponds to the O and the c t new classes in step t, we first collapse p te by computing p te = [sum( p te,O , p te,c t ); p te,C t 1 1 ] , where we merge the predictions of O and c t new classes.", "In this way, p te will have the same dimension as p t 1 e .", "Let E tr be the set of embeddings for all tokens contained in D tr .", "The distillation loss for D tr is L syn ( D tr ) = E e E tr KL( p te || p t 1 e ) , (4) where KL( || ) is the KL divergence.", "where L real ( ) and L syn ( ) corresponds to distillation with the real data in D t and synthetic data in D tr , respectively, and is a parameter balancing between the losses for D t and D tr .", "We set = 1 in the experiment.", "Now we describe how to reconstruct D t r from M t 1 .", "Given a randomly sampled label sequence Y containing the old entity classes from {D k } k<t , we seek to reconstruct the embedding sequence E corresponding to its training data.", "In doing so, we randomly initialize embeddings E , then optimize the parameters of E with gradient descent so that its output with M t 1 matches the expected label sequence Y .", "Formally, we optimize E by minimizing the training loss of the CRF as L crf = log PM t 1 ( Y | E ) .", "result in a domain gap of training on the synthetic data of old entities but testing on the real data.", "To alleviate this problem, we propose to encourage synthetic data to be more realistic by leveraging the real data from D t .", "Let h t 1 ,syn l ( E tr ) be the hidden state from the l th layer of the BERT encoder in M t 1 , regarding the set of synthetic token embeddings, E tr .", "Similarly, let h t 1 ,real l ( emb ( X t )) be the output hidden states from the l th layer of M t 1 , regarding the set of real tokens, X t , from D tr .", "Moreover, emb ( ) is the embedding layer.", "We propose to adversarially match h t 1 ,syn l ( E tr ) and h t 1 ,real l ( emb ( X t )) so that hidden states from the real and synthetic are not far away from each other.", "In this way, the reconstructed embeddings from D tr are likely to be more realistic.", "Specifically, let M l be a binary discriminator module, i.e. , one layer linear projection with sigmoid output, whose inputs are the real and synthetic hidden states, M l = argmin M l E h h t 1 ,syn l ( E tr ) log M l ( h ) E h h t 1 ,real l ( emb ( X t )) log(1 M l ( h )) , L advl = E h h t 1 ,syn l ( E tr ) log(1 M l ( h )) .", "(7) Finally, the loss for reconstructing D tr is L r = L crf + (cid:88) l l s L ladv , (8) where l s = 2 , 4 , , 12 , i.e. , we match every two layers of the BERT encoder in M t 1 .", "is a balancing parameter and is default to 10 in the experiment.", "Since we train M t with the reconstructed token embeddings from M t 1 , we freeze the BERT token embedding layer during training, so that M t 1 and M t can share the same token embeddings.", "This is also reasonable for the setting of few shot learning, since tuning all the model parameters with few samples will result in overfitting.", "Another problem we should consider is that the real data D t and synthetic data D tr may contain different sets of entity classes , i.e. , the few-shot dataset D t may not contain entities of old classes in D tr .", "In this case, for the token embeddings of old classes in D tr , s.t. , { e i,j | y t,ri,j = O } , matching the hidden states of these embeddings with those from D t will distract these embedding from being optimized into the entities of old classes, which we will show in the experiments.", "Therefore, we overload the definition of E tr in (4) by excluding embeddings of the old entity classes in D tr from matching, i.e. , E tr = { e i,j | y t,ri,j = O } , while X t contains all the real tokens from D t .", "Algorithm 1 shows the complete procedure for constructing D tr .", "Since D tr contains entities of old classes from previous steps, distilling with L syn ( D tr ) will help preserving knowledge of old entity classes, i.e. , avoiding catastrophic forgetting, without accessing the real data from previous steps.", "Additionally, with D tr , M t is no longer trained with only few samples from D t , thus is less likely to overfit.", "This is because D tr can construct a relative larger scale, e.g. , several thousand sentences, within the computation limit.", "Additionally, the semantics of D tr can be close to D t , since their token embeddings are closely matched.", "Thus, compared with training only with D t , D tr provides more diverse text information for M t during training.", "Moreover, the entity of old classes from D tr can be regarded as negative samples for training of the new classes in D t , thus reducing the confusion between old and new classes for M t during training.", "Class-Incremental Learning : Different from continual learning, e.g. , (Hu et al., 2018), which sequentially learn on different tasks (usually with different classes) and requires task labels for prediction, class-incremental learning aims at jointly predicting with all the encountered classes without knowing task labels.", "Sun et al. (2019); Ke et al. (2021) have study continual learning for different tasks of NLP.", "Recently, Monaikul et al. (2021) studies class-incremental learning for NER, building a unified NER classifier for all the classes encountered over time.", "There are two problems regarding this method.", "Firstly, Monaikul et al. (2021) only works with a non-CRF-based model.", "However, many current state-of-the-art NER models are built with a CRF module (Liu et al., 2019; Chen et al., 2020; Wang et al., 2021).", "Secondly, it assumes that a large amount of data for the new classes is available, which is unrealistic since annotations for unseen classes are usually scarce.", "In this work, we assume only few-shot datasets are available for the new classes, i.e. , few-shot class-incremental learning, which was proposed in Tao et al. (2020); Mazumder et al. (2021), yet not studied in NER.", "Also, note that class-incremental learning is different meta-learning with episode training (Ding et al., 2021; Finn et al., 2017), since tasks/classes of meta-leaning may appear multiple times in episode 575 training, while we assume the dataset of each class only appear once in class-incremental learning.", "Few-Shot Learning : Models of few-shot learning are generally trained with a base dataset, then learned to predict unseen target classes with few samples.", "One branch of the works is based on metric learning.", "These generally involves predicting by learning to compare token features with class prototypes (Hou et al., 2020) or stored query samples (training data) of target classes (Yang and Katiyar, 2020).", "The latter violates our setting of class-incremental learning, for which it is prohibitive to store the training data for e.g. , privacy issue.", "Alternatively, Huang et al. (2020) avoids overfitting of few-shot learning by augmenting with noisy or unlabeled data from the web.", "Our approach is similar to Huang et al. (2020), in that we also augment few-shot training of the current step with additional data, except we use generated synthetic instead of real data.", "Recently, (Cui et al., 2021) proposes Template NER , a few-shot friendly model for NER that convert NER into a sequence-to-sequence problem.", "Our few-shot class-incremental learning is different from few-shot learning in that", "i) Few-shot learning requires data of different classes arrives at the same time and with complete annotations for all the target classe, while data of few-shot class-incremental learning arrives sequentially, containing annotation of only classes of the current step.", "ii) Existing works of few-shot NER build separate prediction modules for the target and base classes and ignore the performance of base classes during evaluation, thus incompatible with class-incremental learning.", "Data-Free Distillation : Data-free distillation refers to the case in which we distill from a teacher model to a student model with the training data of the teacher not available.", "A typical solution is to reconstruct synthetic training data from the trained teacher model for distillation.", "Such a setting was previously explored for model compression of image classification (Yin et al., 2020) and text classification (Ma et al., 2020).", "However, it has not been studied for NER scenarios.", "We use data-free distillation for transferring knowledge between models from the current and previous steps for few-shot class-incremental learning.", "Following the previous work of class-incremental learning for NER (Monaikul et al., 2021), we", "experiment with two datasets: CoNLL2003 and Ontonote 5.0.", "For CoNLL2003, our results are average over eight ordering of entity classes for each step as in Monaikul et al. (2021).", "For Ontonote 5.0, we rank the entities in alphabetic order and experiment with two combinations of different entity classes for different steps.", "Table 3 and 4 in the Appendix list the entity classes used for each step.", "Since CoNLL2003 is a relative smaller dataset, we conduct both 5-shot and 10-shot experiments for CoNLL2003 and 5-shot experiments for OntoNote 5.0.", "Following Yang and Katiyar (2020), our base datasets, i.e. , dataset of step 1, is the training data of CoNLL2003 and OntoNote 5.0, labeled with only entity classes included in step 1. The few-shot datasets are sampled from the evaluation dataset with greedy sampling (Yang and Katiyar, 2020).", "The resulting NER model of each step is tested on the entire test set.", "Please refer to the Appendix for addition details.", "We compare with the state-of-the art work of class-incremental learning for NER ( CI NER ).", "Additionally, we implement EWC++ (Chaudhry et al., 2018) with = 0 , i.e. , using weights regularization to avoid forgetting instead of generating synthetic data.", "We also implement FSLL (Mazumder et al., 576 1 2 3 4 5 6 7 8 9 Steps 10 20 30 40 50 60 70 80 F 1 S c o r e CI NER EWC++FSLLL-TapNet+CDTTemplate NER AS-DFDOurs", "2021), a state-of-the-art method of few-shot class-incremental learning for image classification with metric learning.", "As mentioned in the related work section, our method can be considered as data-free distillation.", "Therefore, we also include AS-DFD (Ma et al., 2020), the state-of-the-art method of data-free distillation in text classification .", "Specifically, we construct D tr with the adversarial regularization described in AS-DFD instead of (8).", "We also adapt L-TAPNet+CDT (Hou et al., 2020) for comparison.", "L-TAPNet+CDT is a state-of-the-art work of few-shot learning for sequence labeling with CRF module.", "Please refer to Appendix for how we adapt it for class-incremental learning.", "As an ablation study, we compare our method with:", "i) Ours ( = 0 ) , only train with only D t with = 0 .", "ii) Ours ( = 0 , marg) , which is also training with = 0 .", "The difference is that instead of using the sequence-level distillation with L real ( D t ) , we decompose the output of CRF into marginal predictions for each token, as described before (4).", "In this way, we can directly apply the token-level distillation in CI NER (Monaikul et al., 2021) for the CRF-based NER model.", "Compared with Ours ( = 0 ) , this is included to show the performance of directly applying token-level distillation for CRF-based model.", "iii) In Ours ( = 0 ) , we examine the usefulness of L adv by setting = 0 .", "iv) Ours (all tokens) , which matches all the synthetic tokens in D tr with real tokens in D t , instead of matching with only those labeled as O in D tr , as described after eq (8).", "Table 1 and 2 show the F1 scores from different steps of few-shot class-Incremental learning on CoNLL2003.", "The values are averaged over eight permutations as in (Monaikul et al., 2021).", "Our methods outperform all the considered baselines for both 5-shot and 10-shot learning.", "Especially, CI NER (Monaikul et al., 2021) has the worst result among all the methods.", "This is because the performance of CI NER relies on a large amount of data from D t for replay of previous entities.", "Therefore, it does not work well in the few-shot scenario, where D t with only few samples may not contain entities of old classes for replay.", "Additionally, we find that the performance of AS-DFD (Ma et al., 2020) is slightly lower than Ours ( = 0 ) , i.e. , 577", "distilling using data reconstructed with only L crf .", "AS-DFD is designed for text classification, where they use the feature of the special token [CLS] from BERT for classification, while features of the non-special tokens (within text) are trained with an augmented task of language modeling.", "However, in NER, features of the non-special tokens are directly used for prediction.", "Thus, simultaneously training such features with language modeling may distract the model from learning the task specific information needed for NER.", "In the ablation study, we find that our adversarial matching indeed improves the quality of the synthetic data ( Ours vs .", "Ours ( = 0 ) ), especially when excluding tokens of the reconstructed old entities from matching ( Ours vs .", "Ours (all tokens) ).", "Further, Ours ( = 0 , marg) has lower performance than Ours ( = 0 ) , showing that it might not be optimal to directly apply CI NER (Monaikul et al., 2021) with CRF based models.", "Figure 3 shows the results of class-incremental with OntoNote 5.0.", "Since there are more steps relative to the experiments for CoNLL2003, following previous works in few-shot class-incremental learning (Tao et al., 2020; Mazumder et al., 2021), we plot the F1 scores as curves, to highlight the relative difference of different methods over time.", "Our method consistently outperforms the baselines.", "Note that with larger number of steps of incremental learning, the curves may not be necessarily monotonically decreasing.", "This may indeed happen because training with some classes can benefit the performance of other downstream classes, thus 2 4 6 8 10 12 14 16 Value of 0 1 2 3 4 5 6 G a i n o n F 1 s c o r e s Step 2 Step 3 Step 4 Average Figure 5: F1 score gains of 10-shot learning with different values of on CoNLL2003, relative to = 0 .", "(locally) increasing the overall performance.", "In Appendix, we also present the ablation study with OntoNote 5.0.", "Figure 4 shows the t-sne plots of hidden states of tokens from 10-shot LOC PER (explained in the caption).", "In", "(a), we can find that there are synthetic tokens that are very close to the real LOC tokens (green dots in the black ellipse).", "These synthetic tokens (within the black ellipse) are the reconstructed LOC .", "On the contrary, the synthetic context, i.e. , the rest of the synthetic tokens outside the ellipse, are far away from the real distribution.", "This may because the context contains more diverse information, which makes it more difficult to be reconstructed.", "Such a difference between real and synthetic tokens may cause a domain shift between training and testing, since we are training on synthetic token and testing on real tokens.", "Note that there is no tokens from D 2 (red dots) in the black ellipse of LOC , indicating that there may not be LOC entities in the few-shot dataset D 2 , unlike in non-few-shot learning where D 2 can contain a lot of entities of the old classes ( LOC ).", "(b) shows the result of matching all the synthetic tokens from D 2 r with all the real ones from D 2 .", "In this way, most of the synthetic tokens are matched with the real ones, except that only few synthetic tokens are aligned with the real LOC tokens.", "This is because the few-shot dataset D 2 may not contain entities from the old classes LOC .", "In this case, the adversarial matching will distract synthetic tokens from being reconstructed as LOC .", "Then, the reconstructed embedding sequences will contain less information from the old classes ( LOC ).", "In", "(c), we exclude synthetic tokens that are intended to be reconstructed as the old class LOC , i.e. , labeled as LOC in the target label sequence Y in Algorithm 1. As a result, the synthetic tokens contain both LOC and context that is aligned with the real distribution.", "We investigate the relationship between model performance and the value of , i.e. , the parameter controlling the degree of adversarial matching.", "Figure 5 shows the the F1 scores from different steps on CoNLL2003, with different values of .", "We experiment with 10-shot and report the gain of F1 score compared with = 0 .", "We noticed that there is positive gain of the average F1 score on the whole experiment for a range of values, i.e. , [1 , 16] .", "These results demonstrate that the proposed adversarial matching between the real and synthetic data ( D t and D tr ) is generally beneficial and is not sensitive to the selection of .", "We present the first work of few-shot class-Incremental learning for NER To address the problem of catastrophic forgetting, we proposed to reconstruct synthetic training of the old entity classes from the model trained at the previous time step.", "Additionally, the synthetic data allows the model to be trained with a more diverse context, thus less likely to overfit to the few training samples of current step.", "Experimental results showed that our method outperforms the baselines, enabling the NER model to incrementally learning from new classes with few samples.", "This work was carried out during an internship at Adobe Research.", "Further, it was supported by NIH (NINDS 1R61NS120246), DARPA (FA8650-18-2-7832-P00009-12) and ONR (N00014-18-1-2871-P00002-3).", "We thank all the researchers involved from Adobe Research and the support from Duke University." ]
[ "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "abstain", "method", "method", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "result", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "abstain", "other", "abstain", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other" ]
[ "Univeristy of Chicago [email protected]", "Greg Shakhnarovich TTI-Chicago [email protected]", "Abstract", "Natural language processing for sign language videoincluding tasks like recognition, translation, and searchis crucial for making ar-tificial intelligence technologies accessible to deaf individuals, and is gaining research interest in recent years.", "In this paper, we address the problem of searching for fingerspelled keywords or key phrases in raw sign language videos.", "This is an important task since signifi-cant content in sign language is often conveyed via fingerspelling, and to our knowledge the task has not been studied before.", "We propose an end-to-end model for this task, FSS-Net, that jointly detects fingerspelling and matches it to a text sequence.", "Our experiments, done on a large public dataset of ASL fingerspelling in the wild, show the importance of fingerspelling detection as a component of a search and retrieval model.", "Our model significantly outperforms baseline methods adapted from prior work on related tasks.", "Sign languages are a type of natural language which convey meaning through sequences of handshapes and gestures as well as non-manual elements, and are a chief means of communication for about 70 million deaf people worldwide.", "1 Automatic sign language technologies would help to bridge the communication barrier between deaf and hearing individuals, and would make deaf video media more searchable and indexable.", "Automatic sign language processing has recently received growing interest in the computer vision (CV) and natural language processing (NLP) communities.", "Yin et al. (2021) make several recommendations for the study of sign languages in NLP research, including greater emphasis on real-world data.", "Most studies on sign language are based on data collected in a controlled environment, either 1 From https://wfdeaf.org/our-work/ in a studio setting (Martnez et al., 2002; Kim et al., 2017) or in a specific domain (Forster et al., 2016).", "The challenges involved in real-world signing videos, including various visual conditions and different levels of fluency in signing, are not fully reflected in such datasets.", "Automatic processing of sign language videos \"in the wild\" has not been addressed until recently, and is still restricted to tasks like isolated sign recognition (Albanie et al., 2020; Joze and Koller, 2019; Li et al., 2020) and fingerspelling recognition (Shi et al., 2018, 2019).", "In this work we take a step further and study search and retrieval of arbitrary fingerspelled content in real-world American Sign Language (ASL) video (see Figure 1).", "in which words are signed by a series of handshapes or movements corresponding to single letters (see the Appendix for the ASL fingerspelling alphabet).", "Fingerspelling is used mainly for lexical items that do not have their own signs, such as proper nouns or technical terms, and has an important place in sign language.", "For example, fingerspelling accounts for 12-35% of ASL (Padden and Gunsauls, 2003).", "Since important content like named entities is often fingerspelled, the fingerspelled portions of a sign language video often carry a disproportionate amount of the content.", "Most prior work on fingerspelling has focused on recognition (Shi et al., 2018, 2019), that is, transcription of a fingerspelling video clip into text.", "However, automatic recognition assumes that the boundaries of fingerspelled segments are known at test time, and may not be the end goal in real-world use cases.", "In addition, complete transcription may not be necessary to extract the needed information.", "Fingerspelling search, such as retrieving sign language videos based on a query word, is a more practical task, and is an important component of general video search involving sign language.", "In addition to introducing the task, we address the research question of whether the explicit temporal localization of fingerspelling can help its search and retrieval, and how best to localize it.", "As fingerspelling occurs sparsely in the signing stream, explicit detection of fingerspelling could potentially improve search performance by removing unrelated signs.", "To this end, we propose an end-to-end model, FSS-Net, which jointly detects fingerspelling from unconstrained signing video and matches it to text queries.", "Our approach consistently outperforms a series of baselines without explicit detection and a baseline with an off-the-shelf fingerspelling detector by a large margin.", "In existing work on sign language video processing, search and retrieval tasks have been studied much less than sign language recognition (mapping from sign language video to gloss labels) (Koller et al., 2017; Forster et al., 2016) and translation (map-ping from sign language video to text in another language) (Yin and Read, 2020; Camgz et al., 2018).", "Work thus far on sign language search has been framed mainly as the retrieval of lexical signs rather than fingerspelling.", "Pfister et al. (2013); Albanie et al. (2020) employ mouthing to detect keywords in sign-interpreted TV programs with coarsely aligned subtitles.", "Tamer and Saralar (2020a,b) utilize whole-body pose estimation to search for sign language keywords (gloss or translated word) in a German Sign Language translation dataset PHOENIX-2014T (Camgz et al., 2018).", "All prior work on keyword search for sign language has been done in a closed-vocabulary setting, which assumes that only words from a pre-determined set will be queried.", "Searching in an open-vocabulary setting, including proper nouns, typically requires searching for fingerspelling.", "Some related tasks in the speech processing literature are spoken term detection (STD) and query-by-example search, which are the tasks of automatically retrieving speech segments from a database that match a given text or audio query (Knill et al., 2013; Mamou et al., 2007; Chen et al., 2015).", "In terms of methodology, our model also shares some aspects with prior work on moment retrieval (Gao et al., 2017; Xu et al., 2019; Zhang et al., 2020), which also combines candidate generation and matching components.", "However, we incorporate additional task-specific elements that consistently improve performance.", "We consider two tasks: Fingerspelled Word Search (FWS) and Fingerspelling-based Video Search (FVS).", "FWS and FVS respectively consist of detecting fingerspelled words within a given raw ASL video stream and detecting video clips of interest containing a given fingerspelled word.", "2 Given a query video clip v and a list of n words w 1: n , FWS is the task of finding which (if any) of w 1: n are present in v .", "Conversely, in FVS the input is a query word w and n video clips v 1: n , and the task consists of finding all videos containing the fingerspelled word w .", "We consider an open-vocabulary setting where the word w is not constrained to a pre-determined set.", "The two tasks correspond to two directions of search (video text and text video), as is standard practice in other retrieval work such as video-text search (Zhang et al., 2018; Ranjay et al., 2017; Ging et al., 2020).", "We propose a single model, FSS-Net (for \"Finger-Spelling Search Network\"), summarized in Fig-2", "Fig-2 We use \"word\" to refer to a fingerspelling sequence, which could be a single word or a phrase.", "ure 2, to address the two aforementioned search tasks.", "FSS-Net receives a pair of inputsa raw ASL video clip, and a written text sequenceand produces a score indicating the degree of match between the video clip and the text.", "The text is encoded into an embedding vector via a learned encoder.", "The visual branch of FSS-Net generates a number of fingerspelling segment proposals and each proposed visual segment is encoded into a feature space shared with the text embeddings.", "Paired embeddings from both modalities are drawn towards each other in the shared embedding space during training.", "Image encoding The input image frames are encoded into a sequence of feature vectors via an image encoder, which consists of the VGG-19 (Si-monyan and Zisserman, 2015) convolutional layers followed by a Bi-LSTM.", "3 We use raw RGB images as input, instead of signer pose as used in some prior work (Tamer and Saralar, 2020b,a) on sign language search, as estimating pose for hands is particularly hard for signing videos in the wild (see Section 6 for details).", "Temporal proposal generation Suppose the visual feature sequence is f 1: T , where T is the number of frames in the video clip.", "The purpose of temporal proposal generation is to produce a number of candidate fingerspelling segments H ( I 1: T ) = { ( s i , t i ) } 1 i |H ( I 1: T ) | from f 1: T , where s i , t i are the start and end frame indices of 3 Transformers (Vaswani et al., 2017) can also be used, but in our initial experiments, they were outperformed by BiLSTMs on our tasks and data.", "the i th proposed segment.", "Below we use H as a shorthand for H ( I 1: T ) .", "Here we adopt the strategy in (Xu et al., 2017), which is commonly used to generate proposals for action detection.", "Briefly, the model assigns a probability p det of each proposal being fingerspelling.", "See (Xu et al., 2017) for more details.", "We denote the detection loss as L det .", "Note that the training requires known ground-truth fingerspelling boundaries.", "In the fingerspelling datasets we use here (Shi et al., 2018, 2019), the fingerspelling boundaries are already annotated, so no further annotation is needed.", "Filtering A visual embedding is produced for each segment.", "We denote a labeled fingerspelling segment (shortened as fingerspelling segment below) as a tuple ( s, t, w ) , where s , t and w represent the start frame index, the end frame index, and the written text it represents.", "A naive approach would be to use only the ground-truth fingerspelling segments P g = { ( s i , t i , w i ) } 1 i |P g | for training.", "However, this approach does not take into account the potential shifts (errors) that may exist at test time between the ground-truth and generated segment proposals.", "The embeddings produced by the fingerspelling encoder should be robust to such shifts.", "To this end, we incorporate proposals in forming positive pairs at training time.", "Formally, let the set of time intervals from the temporal proposal generator be H = { ( s i , t i ) } 1 i |H| .", "We sample K intervals from P t to form the set of generated 1701 fingerspelling segments: P k = { ( s k , t k , w g ) | IoU (( s k , t k ) , ( s g , t g )) > IoU , IS (( s t , t k ) , ( s g , t g )) > IS , ( s k , t k ) H , ( s g , t g , w g ) P g } (1) where IS ( x, y ) = Intersection ( x,y ) Length ( y ) and IoU ( x, y ) = Intersection ( x,y ) Union ( x,y ) .", "We use IoU and IS to control the degree to which the proposals can deviate from the ground-truth.", "In addition to the intersection over union (IoU), we use the normalized intersection IS to eliminate proposals with many missing frames.", "We take the union of the two sets, P + = P g P k , as the filtered proposal set to be encoded.", "Fingerspelling visual encoding (FS-encoding) The visual encoding of each segment ( s, t, w ) P + is e ( s,t ) v = Bi-LSTM ( f s : t ) .", "4 Text encoding A written word (or phrase) w is mapped to an embedding vector e wx via a text encoder.", "To handle words not seen at training time (and better handle rarely seen words), we first decompose w into a sequence of characters c 1: | w | (e.g. ASL'=A'-S'-L') and feed the character sequence c 1: | w | into a text encoder (here, a Bi-LSTM 5 ).", "Visual-text matching With the above pairs of visual and textual embeddings, we use a training objective function consisting of two triplet loss terms: L tri ( I 1: T , P + ) = (cid:88) ( s,t,w ) P + max { m + d ( e ( s,t ) v , e wx ) 1 |N w | (cid:88) w (cid:48) N w d ( e ( s,t ) v , e w (cid:48) x ) , 0 } + max { m + d ( e ( s,t ) v , e wx ) 1 |N v | (cid:88) ( s (cid:48) ,t (cid:48) ) N v d ( e ( s (cid:48) ,t (cid:48) ) v , e wx ) , 0 } (2) where d denotes cosine distance d ( a , b ) = 1 a b (cid:107) a (cid:107)(cid:107) b (cid:107) , m is a margin, and N v and N w are sets of negative samples of proposals and words.", "To form negative pairs we use semi-hard negative sampling (Schroff et al., 2015): N v = { ( s (cid:48) , t (cid:48) ) | d ( e ( s (cid:48) ,t (cid:48) ) v , e wx ) > d ( e ( s,t ) v , e wx ) } N w = { w (cid:48) | d ( e ( s,t ) v , e w (cid:48) x ) > d ( e ( s,t ) v , e wx ) } (3) 4 We compared the Bi-LSTM encoder with average/max pooling of f s : t , and found the former to perform better.", "the corresponding mini-batch.", "Overall loss The model is trained with a combination of the detection loss and triplet loss: L tot ( I 1: T , P g ) = det L det ( I 1: T , P g ) + L tri ( I 1: T , P + ) (4) with the tuned weight det controlling the relative importance of detection versus visual-textual matching.", "Inference At test time, the model assigns a score sc ( I 1: T , w ) to a given video clip I 1: T and word w .", "The word is encoded into the word embedding e wx .", "Suppose the set of fingerspelling proposals generated by the temporal proposal generator is H ( I 1: T ) .", "We define a scoring function for the proposal h H ( I 1: T ) and word w sc word ( h m , w ) = p det (1 d ( e h m v , e wx )) (5) where p det is the probability given by the temporal proposal generator and controls the relative weight between detection and matching.", "In other words, in order for a segment and word to receive a high score, the segment should be likely to be fingerspelling (according to p det ) and its embedding should match the text.", "Finally, the score for the video clip I 1: T and the word w is defined as the highest score among the set of proposals H ( I 1: T ) : sc ( I 1: T , w ) = max h H ( I 1: T ) sc word ( h, w ) (6) 5 Experimental Setup 5.1 Data We conduct experiments on ChicagoFSWild (Shi et al., 2018) and ChicagoFSWild+ (Shi et al., 2019), two large-scale publicly available fingerspelling datasets containing 7,304 and 55,272 fingerspelling sequences respectively.", "The ASL videos in the two datasets are collected from online resources and include a variety of viewpoints and styles, such as webcam videos and lectures.", "We follow the setup of (Shi et al., 2021) and split the raw ASL videos into 300-frame clips with a 75-frame overlap between neighboring chunks and remove clips without fingerspelling.", "The numbers of clips in the various splits can be found in the Appendix.", "On average, each clip contains 1.9/1.8 fingerspelling segments in the ChicagoFSWild and ChicagoFSWild+ datasets respectively.", "We compare the proposed model, FSS-Net, to the following baselines adapted from common approaches for search and retrieval in related fields.", "To facilitate comparison, the network architecture for the visual and text encoding in all baselines is the same as in FSS-Net.", "Recognizer In this approach, we train a recognizer that transcribes the video clip into text.", "Specifically, we train a recognizer to output a sequence of symbols consisting of either fingerspelled letters or a special non-fingerspelling symbol <x>.", "We train the recognizer with a connectionist temporal classification (CTC) loss (Graves et al., 2006), which is commonly used for speech recognition.", "At test time, we use beam search to generate a list of hypotheses w 1: M for the target video clip I 1: T .", "Each hypothesis w m is split into a list of words { w nm } 1 n N separated by <x>.", "The matching score between video I 1: T and w is defined as: sc ( I 1: T , w ) = 1 min 1 m M min 1 n NLER ( w nm , w ) (7) where the letter error rate LER is the Leven-shtein edit distance.", "This approach is adapted from (Saralar and Sproat, 2004) for spoken utterance retrieval.", "Fingerspelling boundary information is not used in training this baseline model.", "Whole-clip The whole-clip baseline encodes the whole video clip I 1: T into a visual embedding e Iv , which is matched to the textual embedding e wx of the query w .", "The model is trained with contrastive loss as in equation 2.", "At test time, the score for video clip I 1: T and word w is: sc ( I 1: T , w ) = 1 d ( e Iv , e wx ) (8) where d is the cosine distance as in FSS-Net.", "Fingerspelling boundary information is again not used in this baseline.", "External detector (Ext-Det) This baseline uses the off-the-shelf fingerspelling detectors of (Shi et al., 2021) to generate fingerspelling proposals, instead of our proposal generator, and is otherwise identical to FSS-Net.", "For each dataset (ChicagoF-SWild, ChicagoFSWild+), we use the detector trained on the training subset of that dataset.", "This baseline uses ground-truth fingerspelling boundaries for the detector training.", "Saralar, 2020b)'s approach for keyword search in sign language.", "The model employs an attention mechanism to match a text query with a video clip, where each frame is weighted based on the query embedding.", "The attention mechanism enables the model to implicitly localize frames relevant to the text.", "The model of (Tamer and Saralar, 2020b) is designed for lexical signs rather than fingerspelling.", "To adapt the model to our open-vocabulary fingerspelling setting, we use the same text encoder as in FSS-Net to map words into embeddings instead of using a word embedding matrix as in (Tamer and Saralar, 2020b).", "Fingerspelling boundary information is again not used in training this model, which arguably puts it at a disadvantage compared to FSS-Net.", "More details on the formulation of the model can be found in the Appendix.", "For FWS, we use all words in the test set as the test vocabulary w 1: n .", "For FVS, all video clips in the test are used as candidates and the text queries are again the entire test vocabulary.", "We report the results in terms of standard metrics from the video-text retrieval literature (Momeni et al., 2020; Tamer and Saralar, 2020a): mean Average Precision (mAP) and mean F1 score (mF1), where the averages are over words for FVS and over videos for FWS.", "Hyperparameters are chosen to maximize the mAP on the dev set, independently for the two tasks (though ultimately, the best hyperparameter values in our search are identical for both tasks).", "Additional details on data, preprocessing, model implementation, and hyperparameters can be found in the Appendix.", "Table 1 shows the performance of the above approaches on the two datasets.", "First, we notice that embedding-based approaches consistently outperform the recognizer baseline in the larger data setting (ChicagoFSWild+) but not the smaller data setting (ChicagoFSWild), which suggests that embedding-based models generally require more training data.", "The inferior performance of recognizer also shows that explicit fingerspelling recognition is not necessary for the search tasks.", "In addition, explicit fingerspelling detection (Ext-Det, FSS-Net) improves performance over implicit fin-1703 Table 1: FWS/FVS performance on the ChicagoFSWild and ChicagoFSWild+ test sets.", "gerspelling detection (Attn-KWS) and detection-free search (Whole-clip).", "Explicit fingerspelling detection requires boundary information during training.", "Of the models that don't use such supervision, Attn-KWS is the best performer given enough data, but is still far behind FSS-Net.", "Our model outperforms all of the alternatives.", "The relative performance of different models remains consistent across the various metrics and the two search tasks.", "For completeness, we also measure the performance of different models in terms of ranking-based metrics (e.g., Precision@N, Recall@N), as in prior work on video-text retrieval (Ging et al., 2020; Ranjay et al., 2017) (see full results in the Appendix).", "The relative performance of different models remains consistent on these metrics.", "The analysis below is done on ChicagoFSWild for simplicity.", "The conclusions also hold for ChicagoF-SWild+.", "Does better localization lead to better search?", "In the previous section we have seen that models that explicitly detect and localize fingerspelling outperform ones that do not.", "Next we look more closely at how well several modelsExt-Det, Attn-KWS and FSS-Netperform on the task of localizing fingerspelling, which is a byproduct of these models' output.", "We measure performance via AP@IoU, a commonly used evaluation metric for action detection (Idrees et al., 2016; Heilbron et al., 2015) that has also been used for fingerspelling detection (Shi et al., 2021).", "AP@IoU measures the average precision of a detector under the constraint that the overlap of its predicted segments with the ground truth is above some threshold Intersection-over-Union (IoU) value.", "For Attn-KWS, the model outputs an attention vector, which we convert to segments as in (Shi et al., 2021).", "In general, the models with more accurate localization also have higher search and retrieval performance, as seen by comparing Table 2 with Table 1.", "However, differences in AP@IoU do not directly translate to differences in search performance.", "For example, the AP@IoU of Ext-Det (0.344) is an order of magnitude higher than that of Attn-KWS (0.035) while their FVS mAP results are much closer (0.593 vs. 0.573).", "Raw images vs. estimated pose as input Prior work on sign language search (Tamer and Saralar, 2020a,b) has used estimated pose keypoints as input, rather than raw images as we do here.", "For comparison, we extract body and hand keypoints with OpenPose (Cao et al., 2019) and train a model with the pose skeleton as input.", "As is shown in Table 3, the pose-based model has much poorer search performance than the RGB image-based models.", "We believe this is largely because, while pose estimation works well for large motions and clean visual conditions, in our dataset much of the handshape information is lost in the estimated pose (see the Appendix for some qualitative examples).", "Within our model, the proposal generator produces a subset of all possible fingerspelling proposals,", "intended to represent the most likely fingerspelling segments.", "To measure whether this component is important to the performance of the model, we compare our full model with the proposal generator to one where the proposal generator is removed (see Table 4).", "When the proposal generator is not used, the model is trained only with ground-truth fingerspelling segments ( P g ) and considers all possible proposals within a set of sliding windows.", "Such a \"sliding-window\" approach is commonly used in previous sign language keyword search (Al-banie et al., 2020; Pfister et al., 2013) and spoken keyword spotting (Chen et al., 2015).", "As can be seen from Table 4 (Full model vs. row (1)), the proposal generator greatly improves search performance.", "This is not surprising, since the proposal generator greatly reduces the number of non-fingerspelling segments, thus lowering the chance of a mismatch between the text and video, and also refines the segment boundaries through regression, which should improve the quality of the visual segment encoding.", "The fingerspelling detection component of our model has two aspects that may affect performance: imposing an additional loss during training, and rescoring during inference.", "We disentangle these two factors and show their respective benefits for our model in Table 4 (row (2) and (3)).", "The auxiliary detection task, which includes classification between fingerspelling and non-fingerspelling proposals, helps encode more comprehensive visual information into the visual embedding.", "In addition, the proposal probability output by the detector 1705 contains extra information and merging it into the matching score further improves the search performance.", "Table 4 (row (4)) shows the effect of sampling additional proposals ( P k ) in fingerspelling detection.", "Additional positive samples make the visual embedding more robust to temporal shifts in the generated proposals, thus improving search performance.", "The performance of our model is worse for short fingerspelled sequences than for long sequences (see Figure 4).", "This may be because shorter words are harder to spot, as is shown from the trend in fingerspelling detection in the same figure.", "The datasets we use are collected from multiple sources, and the video quality varies between them.", "To quantify the effect of visual quality on search/retrieval performance, we categorize the ASL videos into three categories according to their source: YouTube, DeafVIDEO, and other miscellaneous sources (misc).", "YouTube videos are mostly ASL lectures with high resolution.", "DeafVIDEO videos are vlogs from deaf users of the social media site deafvideo.tv , where the style, camera an-gle, and image quality vary greatly.", "The visual quality of videos in the miscellaneous category tends to fall between the other two categories.", "Typical image examples from the three categories can be found in the Appendix (figure 7).", "The FWS performance of our model on videos in YouTube, DeafVIDEO, and misc are 0.684, 0.584, 0.629 (mAP) respectively.", "The results are overall consistent with the perceived relative visual qualities of these categories.", "words/phrases with the highest/lowest FVS performance.", "The best-performing queries tend to be long and drawn from the highest-quality video source.", "Another common source of error is confusion between letters with similar handshapes (e.g., \"i\" vs. \"j\").", "A final failure type is fingerspelling detection failure.", "As our model includes a fingerspelling detector, detection errors can harm search performance.", "Our work takes one step toward better addressing the need for language technologies for sign languages, by defining fingerspelling search tasks and developing a model tailored for these tasks.", "These tasks are complementary to existing work on keyword search for lexical signs, in that it addresses the need to search for a variety of important content that tends to be fingerspelled, like named entities.", "Fingerspelling search is also more challenging in that it requires the ability to handle an open vocabulary and arbitrary-length queries.", "Our results demonstrate that a model tailored for the task in fact improves over baseline models based on related work on signed keyword search, fingerspelling detection, and speech recognition.", "However, there is room for improvement between our results and the maximum possible performance.", "One important aspect of our approach is the use of explicit fingerspelling detection within the model.", "An interesting avenue for future work is to address the case where the training data does not include segment boundaries for detector training.", "Finally, a complete sign language search system should consider both fingerspelling and lexical sign search." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain" ]
[ "Virtual agents are becoming a prominent channel of interaction in customer service.", "Not all customer interactions are smooth, however, and some can become almost comically bad.", "In such instances, a human agent might need to step in and salvage the conversation.", "Detecting bad conversations is important since disappointing customer service may threaten customer loyalty and impact revenue.", "In this paper, we outline an approach to detecting such egregious conversations, using behavioral cues from the user, patterns in agent responses, and user-agent interaction.", "Using logs of two commercial systems, we show that using these features improves the detection F1-score by around 20% over using textual features alone.", "In addition, we show that those features are common across two quite different domains and, arguably, universal.", "Automated conversational agents (chatbots) are becoming widely used for various tasks such as personal assistants or as customer service agents.", "Recent studies project that 80% of businesses plan to use chatbots by 2020 1 , and that chatbots will power 85% of customer service interactions by the year 2020 2 .", "This increasing usage is mainly due to advances in artificial intelligence and natural language processing (Hirschberg and Manning, 2015) 1 http://read.bi/2gU0szG 2 http://gtnr.it/2z428RS along with increasingly capable chat development environments, leading to improvements in conversational richness and robustness.", "Still, chatbots may behave extremely badly, leading to conversations so off-the-mark that only a human agent could step in and salvage them.", "Consequences of these failures may include loss of customer goodwill and associated revenue, and even exposure to litigation if the failures can be shown to include fraudulent claims.", "Due to the increasing prevalence of chatbots, even a small fraction of such egregious 3 conversations could be problematic for the companies deploying chatbots and the providers of chatbot services.", "In this paper we study detecting these egregious conversations that can arise in numerous ways.", "For example, incomplete or internally inconsistent training data can lead to false classification of user intent.", "Bugs in dialog descriptions can lead to dead ends.", "Failure to maintain adequate context can cause chatbots to miss anaphoric references.", "In the extreme case, malicious actors may provide heavily biased (e.g., the Tay chatbot 4 ) or even hacked misbehaviors.", "In this article, we focus on customer care systems.", "In such setting, a conversation usually becomes egregious due to a combination of the aforementioned problems.", "The resulting customer frustration may not surface in easily detectable ways such as the appearance of all caps, shouting to a speech recognizer, or the use of profanity or extreme punctuation.", "Consequently, the chatbot will continue as if the conversation is proceeding well, usually 3 Defined by the dictionary as outstandingly bad.", "Customer: I got 2 quotes for the flight ticket, but i'm wondering what the details of each ticket are?", "Agent: Please select Buy next to the ticket you'd like to purchase.", "Customer: No, I don't want to buy yet till I know the details of the flights.", "Agent: If you're in the process of renting a car, please continue with by clicking Next Customer: Are you a real person?", "Agent: I am a digital assistant.", "I've been trained to answer questions about travels.", "Ask me any questions you have.", "Customer: I asked a specific question and you gave me a random answer about car rental which I'm not interested in.", "Agent: I'm not trained on that yet, but I'm still learning.", "You may want to rephrase your question and try again.", "Customer: This service is pointless , can i talk to a real live person?", "Agent: We don't currently have live agents to chat with online.", "leading to conversational breakdown.", "Consider, for example, the anonymized but representative conversation depicted in Figure 1.", "Here the customer aims to understand the details of a flight ticket.", "In the first two turns, the chatbot misses the customer's intentions, which leads to the customer asking Are you a real person?.", "The customer then tries to explain what went wrong, but the chatbot has insufficient exposure to this sort of utterance to provide anything but the default response (I'm not trained on that).", "The response seems to upset the customer and leads to a request for a human agent, which is rejected by the system (We don't currently have live agents).", "Such rejection along with the previous responses could lead to customer frustration (Amsel, 1992).", "Being able to automatically detect such conversations, either in real time or through log analysis, could help to improve chatbot quality.", "If detected in real time, a human agent can be pulled in to salvage the conversation.", "As an aid to chatbot improvement, analysis of egregious conversations can often point to problems in training data or system logic that can be repaired.", "While it is possible to scan system logs by eye, the sheer volume of conversations may overwhelm the analyst or lead to random sampling that misses important failures.", "If, though, we can automatically detect the worst conversations (in our experience, typically under 10% of the total), the focus can be on fixing the worst problems.", "Our goal in this paper is to study conversational features that lead to egregious conversations.", "Specifically, we consider customer inputs throughout a whole conversation, and detect cues such as rephrasing, the presence of heightened emotions, and queries about whether the chatbot is a human or requests to speak to an actual human.", "In addition, we analyze the chatbot responses, looking for repetitions (e.g. from loops that might be due to flow problems), and the presence of not trained responses.", "Finally, we analyze the larger conversational context exploring, for example, where the presence of a not trained response might be especially problematic (e.g., in the presence of strong customer emotion).", "The main contributions of this paper are twofold: (1) This is the first research focusing on detecting egregious conversations in conversational agent (chatbot) setting and (2) this is the first research using unique agent, customer, and customer-agent interaction features to detect egregiousness.", "The rest of this paper is organized as follows.", "We review related work, then we formally define the methodology for detecting egregious conversations.", "We describe our data, experimental setting, and results.", "We then conclude and suggest future directions.", "Detecting egregious conversations is a new task, however, there is related work that aim at measuring the general quality of the interactions in conversational systems.", "These works studied the complementary problem of detecting and measuring user satisfaction and engagement.", "Early work by (Walker et al., 1997, 2001) discussed a framework that maximizes the user satisfaction by considering measures such as number of inappropriate utterances, recognition rates, number of times user requests repetitions, number of turns per interaction, etc.", "Shortcomings of this approach are discussed by (Hajdin-jak and Mihelic, 2006).", "Other works focus on predicting the user engagement in such systems.", "Examples include (Kiseleva et al., 2016b,a; Jiang et al., 2015).", "Specifically, these 1803 works evaluated chat functionality by asking users to make conversations with an intelligent agent and measured the user satisfaction along with other features such as the automatic speech recognition (ASR) quality and intent classification quality.", "In (Sandbank et al., 2017) the authors presented a conversational system enhanced with emotion analysis, and suggested using emotions as triggers for human escalation.", "In our work, we likewise use emotion analysis as predictive features for egregious conversation.", "The works of (Sarikaya, 2017; Sano et al., 2017) studied reasons why users reformulated utterances in such systems.", "Specifically, in (Sarikaya, 2017) they reported on how the different reasons affect the users' satisfaction.", "In (Sano et al., 2017) they focused on how to automatically predict the reason for user's dissatisfaction using different features.", "Our work also explores user reformulation (or rephrasing) as one of the features to predict egregious conversations.", "We build on the previous work by leveraging some of the approaches in our classifier for egregious conversations.", "In (Walker et al., 2000; Hastie et al., 2002) the authors also looked for problems in a specific setting of spoken conversations.", "The main difference with our work is that we focus on chat logs for domains for which the expected user utterances are a bit more diverse, using interaction features as well as features that are not sensitive to any architectural aspects of the conversational system (e.g., ASR com-ponent).", "Several other approaches for evaluating chatbot conversations indirectly capture the notion of conversational quality.", "For example, several prior works borrowed from the field of pragmatics in various metrics around the principles of cooperative conversation (Chakrabarti and Luger, 2013; Saygin A. P., 2002).", "In (Steidl et al., 2004) they measured dialogue success at the turn level as a way of predicting the success of a conversation as a whole.", "(Webb et al., 2010) created a measure of dialogue appropriateness to determine its role in maintaining a conversation.", "Recently, (Liu et al., 2016) evaluated a number of popular measures for dialogue response generation systems and highlighted specific weaknesses in the measures.", "Similarly, in (Sebastian et al., 2009) they developed a taxonomy of available measures for an end-user's quality of experience for multimodel dialogue systems, some of which touch on conversational quality.", "All these measures may serve as reasons for a conversation turning egregious, but none try to capture or predict it directly.", "In the domain of customer service, researchers mainly studied reasons for failure of such systems along with suggestions for improved design (Mimoun et al., 2012; Gnewuch et al., 2017).", "In (Mimoun et al., 2012) the authors analyzed reasons sales chatbots fail by interviewing chatbots experts.", "They found that a combination of exaggerated customer expectations along with a reduction in agent performance (e.g., failure to listen to the consumer, being too intrusive) caused customers to stop using such systems.", "Based on this qualitative study, they proposed an improved model for sales chatbots.", "In (Gnewuch et al., 2017) they studied service quality dimensions (i.e., reliability, empathy, responsiveness, and tangibility) and how to apply them during agent design.", "The main difference between those works and ours is that they focus on qualitative high-level analysis while we focus on automatic detection based on the conversations logs.", "The objective of this work is to reliably detect egregious conversations between a human and a virtual agent.", "We treat this as a binary classification task, where the target classes are egregious and non-egregious.", "While we are currently applying this to complete conversations (i.e., the classification is done on the whole conversation), some of the features examined here could likely be used to detect egregious conversations as they were unfolding in real time.", "To perform egregious conversation detection, features from both customer inputs and agent responses are extracted, together with features related to the combination of specific inputs and responses.", "In addition, some of these features are contextual , meaning that they are dependent on where in the conversation they appear.", "Using this set of features for detecting egre-1804 gious conversations is novel, and as our experimental results show, improves performance compared to a model based solely on features extracted from the conversation's text.", "We now describe the agent, customer, and combined customer-agent features.", "A virtual agent is generally expected to closely simulate interactions with a human operator (Reeves and Nass, 1996; Nass and Moon,Y, 2000; Kramer, 2008).", "When the agent starts losing the context of a conversation, fails in understanding the customer intention, or keeps repeating the same responses, the illusion of conversing with a human is lost and the conversation may become extremely annoying.", "With this in mind, we now describe the analysis of the agent's responses and associated features (summarized in the top part of Table 1).", "As typically implemented, the virtual agent's task is to reliably detect the intent of each customer's utterance and respond meaningfully.", "Accurate intent detection is thus a fundamental characteristic of well-trained virtual agents, and incorrect intent analysis is reported as the leading cause of user dissatisfaction (Sarikaya, 2017).", "Moreover, since a classifier (e.g., SVM, neural network, etc.) is often used to detect intents, its probabilistic behavior can cause the agent to repeat the same (or semantically similar) response over and over again, despite the user's attempt to rephrase the same intent.", "Such agent repetitions lead to an unnatural interaction (Kluwer, 2011).", "To identify the agent's repeating responses, we measured similarity between agent's subsequent (not necessarily sequential) turns.", "We represented each sentence by averaging the pre-trained embeddings 5 of each word in the sentence, calculating the cosine similarity between the representations.", "Turns with a high similarity value 6 are considered as repeating responses.", "Given that the knowledge of a virtual agent is necessarily limited, we can expect that training would not cover all customer intents.", "If the classifier technology provides an estimate of classification confidence, the agent can respond with some variant of I'm not trained on that when confidence is low.", "In some cases, customers will accept that not all requests are supported.", "In other cases, unsupported intents can lead to customer dissatisfaction (Sarikaya, 2017), and cascade to an egregious conversation (as discussed below in Section 3.3).", "We extracted the possible variants of the unsupported intent messages directly from the system, and later matched them with the agent responses from the logs.", "From the customer's point of view, an ineffective interaction with a virtual agent is clearly undesirable.", "An ineffective interaction requires the expenditure of relatively large effort from the customer with little return on the investment (Zeithaml et al., 1990; Mi-moun et al., 2012).", "These efforts can appear as behavioral cues in the customer's inputs, and include emotions, repetitions, and more.", "We used the following customer analysis in our model.", "Customer features are summarized in the middle part of Table 1.", "When a customer repeats or rephrases an utterance, it usually indicates a problem with the agent's understanding of the customer's intent.", "This can be caused by different reasons as described in (Sano et al., 2017).", "To measure the similarity between subsequent customer turns to detect repetition or rephrasing, we used the same approach as described in Section 3.1.1.", "Turns with a high similarity value 6 are considered as rephrases.", "The customer's emotional state during the conversation is known to correlate with the conversation's quality (Oliver, 2014).", "In order to analyze the emotions that customers exhibit in each turn, we utilized the IBM Tone Analyzer service, available publicly online 7 .", "This service was trained using customer care interactions, and infers emotions such as frustration , sadness , happiness .", "We focused on negative emotions (denoted as NEG EMO) to identify turns with a negative emotional peak (i.e., single utterances that carried high negative emotional state), as well as to estimate the aggregated negative emotion throughout the conversation (i.e., the averaged negative emotion intensity).", "In order to get a more robust representation of the customer's negative emotional state, we summed the score of the negative emotions (such as frustration , sadness , anger , etc.) into a single negative sentiment score (denoted as NEG SENT).", "Note that we used the positive emotions as a fil-ter for other customer features, such as the rephrasing analysis.", "Usually, high positive emotions capture different styles of thank-ing the agent, or indicate that the customer is somewhat satisfied (Rychalski and Hudson, 2017), thus, the conversation is less likely to become egregious.", "In examining the conversation logs, we noticed that it is not unusual to find a customer asking to be transferred to a human agent.", "Such a request might indicate that the virtual agent is not providing a satisfactory service.", "Moreover, even if there are human agents, they might not be available at all times, and thus, a rejection of such a request is sometimes reasonable, but might still lead to customer frustration (Amsel, 1992).", "In addition to the above analyses, we also detected customer turns that contain exactly one word.", "The assumption is that single word (unigram) sentences are probably short customer responses (e.g., no, yes, thanks, okay), which in most cases do not contribute to the egregiousness of the conversation.", "Hence, calculating the percentage of those turns out of the whole conversation gives us another measurable feature.", "We also looked at features across conversation utterance-response pairs in order to capture a more complete picture of the interac-Group", "tion between the customer and the virtual agent.", "Here, we considered a pair to be customer utterance followed by an agent response.", "For example, a pair may contain a turn in which the customer expressed negative emotions and received a response of not trained by the agent.", "In this case, we would leverage the two analyses: emotional and unsupported intent.", "Figure 1 gives an example of this in the customer's penultimate turn.", "Such interactions may divert the conversation towards becoming egregious.", "These features are summarized in the last part of Table 1.", "We also calculated the similarity between the customer's turn and the virtual agent's response in cases of customer rephrasing.", "This analysis aims to capture the reason for the customer rephrasing.", "When a similarity score between the customer's turn and the agent's response is low, this may indicate a misclassi-fied intent, as the agent's responses are likely to share some textual similarity to the customer's utterance.", "Thus, a low score may indicate a poor interaction, which might lead the conversation to become egregious.", "Another similarity feature is between two customer's subsequent turns when the agent's response was not trained.", "We trained a binary SVM classifier with a linear kernel.", "A feature vector for a sample in the training data is generated using the scores calculated for the described features, where each feature value is a number between [0,1].", "After the model was trained, test conversations are classified by the model, after being transformed to a feature vector in the same way a training sample is transformed.", "The SVM classification model (denoted EGR ) outputs a label egregious or non-egregious as a prediction for the conversation.", "4.1 Dataset We extracted data from two commercial systems that provide customer support via conversational bots (hereafter denoted as company A and company B ).", "Both agents are using similar underlying conversation engines, each embedded in a larger system with its own unique business logic.", "Company A 's system deals with sales support during an online purchase, while company B 's system deals with technical support for purchased software products.", "Each system logs conversations, and each conversation is a sequence of tuples, where each tuple consists of { conversation id, turn id, customer input, agent response } .", "From each system, we randomly extracted 10000 conversations.", "We further removed conversations that contained fewer than 2 turns, as these are too short to be meaningful since the customer never replied or provided more details about the issue at hand.", "Figure 2 depicts the frequencies of conversation lengths which follow a power-law relationship.", "The conversations from company A 's system tend to be longer, with an average of 8.4 turns vs. an average of 4.4 turns for company B .", "The first step in building a classification model is to obtain ground truth data.", "For this purpose, we randomly sampled conversations from our datasets.", "This sample included 1100 and 200 conversations for company A and company B respectively.", "The 1 10 100 1000 10000 001 01 1 F r e qu e n c y Conversa on length Company A Company B Figure 2: Frequency versus conversation length for company A and company B on a log-log scale.", "sampled conversations were tagged using an in-house tagging system designed to increase the consistency of human judgements.", "Each conversation was tagged by four different expert judges 8 .", "Given the full conversation, each judge tagged whether the conversation was egregious or not following this guideline: Conversations which are extraordinarily bad in some way, those conversations where you'd like to see a human jump in and save the conversation.", "We generated true binary labels by considering a conversation to be egregious if at least three of the four judges agreed.", "The inter-rater reliability between all judges, measured by Cohen's Kappa, was 0.72 which indicates high level agreement.", "This process generated the egregious class sizes of 95 (8.6%) and 16 (8%) for company A and company B , respectively.", "This verifies the unbalanced data expectation as previously discussed.", "We also implemented two baseline models, rule-based and text-based, as follows: Rule-based.", "In this approach, we look for cases in which the virtual agent responded with a not trained reply, or occurrences of the customer requesting to talk to a human agent.", "As discussed earlier, these may be indicative of the customer's dissatisfaction with the nature of the virtual agent's responses.", "Text-based.", "A model that was trained to predict egregiousness given the conversation's text (all customer and agent's text dur-8 judges that are HCI experts and have experience in designing conversational agents systems. 1807 Egregious Non-Egregious Model P R F P R F Rule-based 0.28 0.54 0.37 0.95 0.87 0.91 Text-based 0.46 0.56 0.50 0.96 0.94 0.95 EGR 0.47 0.79 0.59 0.98 0.92 0.95 Table 2: Cross-validation results for the baselines and EGR models. ing the conversation).", "This model was implemented using state-of-the-art textual features as in (Herzig et al., 2017).", "In (Herzig et al., 2017) emotions are detected from text, which can be thought of as similar to our task of predicting egregious conversations.", "We evaluated these baseline methods against our classifier using 10-fold cross-validation over company A 's dataset (we did not use company B 's data for training due to the low number of tagged conversations).", "Since class distribution is unbalanced, we evaluated classification performance by using precision (P), recall (R) and F1-score (F) for each class.", "The EGR classifier was implemented using an SVM with a linear kernel 9 .", "Table 2 depicts the classification results for both classes and the three models we explored.", "The EGR model significantly outperformed both baselines 10 .", "Specifically, for the egregious class, the precision obtained by the text-based and EGR models were similar.", "This indicates that the text analyzed by both models encodes some information about egregiousness.", "On the other hand, for the recall and hence the F1-score, the EGR model relatively improved the text-based model by 41% and 18%, respectively.", "We will further analyze the models below.", "To better understand the contributions of different sets of features to our EGR model, we examined various features in an incremental fashion.", "Based on the groups of feature sets that we defined in Section 3, we tested the performance of different group combinations, added in the following order: agent , customer and customer-agent interactions .", "Figure 3 depicts the results for the classification task.", "The x -axis represents specific combinations of groups, and the y -axis represents the performance obtained.", "Figure 3 shows that adding each group improved performance, which indicates the informative value of each group.", "The figure also suggests that the most informative group in terms of prediction ability is the customer group.", "We also studied how robust our features were: If our features generalize well, performance should not drop much when testing company B with the classifier trained exclusively on the data from company A .", "Although company A and company B share similar conversation engine platforms, they are completely different in terms of objectives, domain, terminology, etc.", "For this task, we utilized the 200 annotated conversations of company B as test data, and experimented with the different models, trained on company A 's data.", "The rule-based baseline does not require training, of course, and could be applied directly.", "Table 3 summarizes the results showing that the performance of the EGR model is relatively stable (w.r.t the model's performance when it was trained and tested on the same domain), with a degradation of only 9% in F1-score 11 .", "In addition, the results also show that the text-based model performs poorly when applied to a different domain (F1-score of 0.11).", "This may occur since textual features are closely tied to the training domain.", "11 EGR model results are statistically significant compared to the baselines models with p < 0.001, using McNemar's test.", "Inspired by (Sarikaya, 2017; Sano et al., 2017) we analyzed the customer rephrasing motivations for both the egregious and the non-egregious classes.", "First, we detected customer rephrasing as described in Section 3.2.1, and then assigned to each its motivation.", "Specifically, in our setting, the relevant motivations are 12 : (1) Natural language understanding (NLU) error the agent's intent detection is wrong, and thus the agent's response is semantically far from the customer's turn; (2) Language generation (LG) limitation the intent is detected correctly, but the customer is not satisfied by the response (for example, the response was too generic); (3) Unsupported intent error the customer's intent is not supported by the agent.", "In order to detect NLU errors , we measured the similarity between the first customer turn (before the rephrasing) and the agent response.", "We followed the methodology presented in (Jovita et al., 2015) claiming that the best answer given by the system has the highest similarity value between the customer turn and the agent answer.", "Thus, if the similarity was < 0.8 we considered this as an erroneous detection.", "If the similarity was 0.8 we considered the detection as correct, and thus the rephrasing occurred due to LG limitation .", "To detect unsupported intent error we used the approach described in Section 3.1.2.", "As reported in table 4, rephrasing due to an unsupported intent is more common in egregious conversations (18% vs. 14%), whereas, rephrasing due to generation limitations ( LG limitation ) is more common in 12 We did not consider other motivations like automatic speech recognition (ASR) errors, fallback to search, and backend failure as they are not relevant to our setting.", "non-egregious conversations (37% vs. 33%).", "This indicates that customers are more tolerant of cases where the system understood their intent, but the response is not exactly what they expected, rather than cases where the system's response was not trained.", "Finally, the percentage of rephrasing due to wrong intent detection ( NLU errors ) is similar for both classes, which is somewhat expected as similar underlying systems provided NLU support.", "We further investigated why the EGR model was better at identifying egregious conversations (i.e., its recall was higher compared to the baseline models).", "We manually examined 26 egregious conversations that were identi-fied justly so by the EGR model, but mis-classified by the other models.", "Those conversations were particularly prevalent with the agent's difficulty to identify correctly the user's intent due to NLU errors or LG limitation .", "We did not encounter any unsupported intent errors leading to customer rephrasing, which affected the ability of the rule-based model to classify those conversations as egregious.", "In addition, the customer intents that appeared in those conversations were very diverse.", "While customer rephrasing was captured by the EGR model, for the text-based model some of the intents were new (did not appear in the training data) and thus were difficult for the model to capture.", "In this paper, we have shown how it is possible to detect egregious conversations using a combination of customer utterances, agent responses, and customer-agent interactional features.", "As explained, the goal of this work is to give developers of automated agents tools to detect and then solve problems cre-1809 ated by exceptionally bad conversations.", "In this context, future work includes collecting more data and using neural approaches (e.g., RNN, CNN) for analysis, validating our models on a range of domains beyond the two explored here.", "We also plan to extend the work to detect egregious conversations in real time (e.g., for escalating to a human operators), and create log analysis tools to analyze the root causes of egregious conversations and suggest possible remedies." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "objective", "method", "method", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "objective", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective" ]
[ "Emotion-cause pair extraction aims to extract all emotion clauses coupled with their cause clauses from a given document.", "Previous work employs two-step approaches, in which the first step extracts emotion clauses and cause clauses separately, and the second step trains a classifier to filter out negative pairs.", "However, such pipeline-style system for emotion-cause pair extraction is suboptimal because it suffers from error propagation and the two steps may not adapt to each other well.", "In this paper, we tackle emotion-cause pair extraction from a ranking perspective, i.e., ranking clause pair candidates in a document, and propose a one-step neural approach which emphasizes inter-clause modeling to perform end-to-end extraction.", "It models the interrelations between the clauses in a document to learn clause representations with graph attention, and enhances clause pair representations with kernel-based relative position embedding for effective ranking.", "Experimental results show that our approach significantly outperforms the current two-step systems, especially in the condition of extracting multiple pairs in one document.", "Emotion cause analysis has attracted increasing research attention in sentiment analysis and text mining community in recent years (Lee et al., 2010a; Russo et al., 2011; Neviarouskaya and Aono, 2013; Ghazi et al., 2015; Gui et al., 2016).", "Its goal is to detect causes or stimuli for a certain emotion expressed in text.", "Understanding why an emotion occurs has broad applications such as consumer review mining and public opinion monitoring.", "Previous studies mostly focus on emotion cause extraction task which aims to identify cause(s) for a given emotion.", "Xia and Ding (2019) pointed out that this setting ignores the mutual indication of emotions and causes, and the need of emotion annotation in advance restricts the range of applications.", "To overcome such limitations, they put forward a new research task named emotion-cause pair extraction , aiming to extract all emotion expression clauses coupled with their causes from a given document.", "As shown in the following example, an emotion clause c 3 and its corresponding cause clause c 2 construct an emotion-cause pair ( c 3 , c 2 ) : Example.", "He told us that since his illness ( c 1 ), his classmates and advisors have given him much help about the schoolwork ( c 2 ).", "He has been touched ( c 3 ), and said that he will repay them ( c 4 ).", "Compared with emotion cause extraction, emotion-cause pair extraction is a more challenging task, because we need a comprehensive understanding of document content and structure to perform emotion-cause co-extraction and discriminate emotion-cause clause pairs from negative ones.", "Xia and Ding (2019) proposed to tackle emotion-cause pair extraction using a two-step solution.", "At the first step, a multi-task LSTM network extracts emotion clauses and cause clauses separately.", "Then at the second step, a binary classifier is used to filter out negative pairs from all possible pairs.", "Although the two-step solution has shown its effectiveness, such pipeline-style system is suboptimal for emotion-cause pair extraction, because it is confronted with error propagation, and the two steps may not adapt to each other well.", "Coherent document has an underlying structure (Mann and Thompson, 1988; Marcu, 2000) and there is a causal relationship between the two clauses of an emotion-cause pair, which distinguishes it from other non-emotion-cause pairs in the document.", "Thus, knowledge about the interrelations between the clauses in a document is bene-ficial for extracting potential emotion-cause pairs.", "Further, according to the cohesion and coherence of discourse (De Beaugrande and Dressler, 1981), the probability of two distant clauses containing causal relationship is relatively small.", "Thus, relative position information between two clauses of a clause pair can be considered as an effective feature for emotion-cause pair extraction.", "Based on the above two considerations, in this paper, we tackle emotion-cause pair extraction from a ranking perspective, i.e., ranking clause pair candidates in a given document, and propose a one-step approach which emphasizes inter-clause modeling to perform end-to-end extraction.", "Our approach first models the inter-clause relationships via exploiting graph attention to learn clause representations, facilitating pair extraction through capturing the latent relationship between two clauses.", "It then learns clause pair representations and rank these pairs to extract emotion-cause pairs.", "A kernel-based relative position embedding scheme is proposed to model the mutual impact among relative positions and enhance clause pair representations for effective ranking.", "We integrate the two components into a unified neural network, which is optimized end-to-end.", "Unlike the previous two-step solution, our approach can directly extract emotion-cause pairs from documents.", "To our knowledge, we propose the first end-to-end approach for emotion-cause pair extraction, which is a unified model to tackle this task from a ranking perspective.", "Our approach emphasizes inter-clause modeling by integrating inter-clause relationship modeling and kernel-based relative position enhanced clause pair ranking.", "Experimental results demonstrate that our one-step approach significantly outperforms the current best-performing systems, especially in the condition of extracting multiple pairs in one document.", "Given a document D = ( c 1 , c 2 , . . . , c | D | ) where | D | is the number of clauses and the i -th clause c i = ( w i 1 , w i 2 , . . . , w i | c i | ) is a word sequence, our goal is to extract all emotion-cause pairs in D :", "P = { ( c emo 1 , c cau 1 ) , ( c emo 2 , c cau 2 ) , . . . } , (1) where ( c emo j , c cau j ) is the j -th pair, c emo j D is an emotion clause, and c cau j D is the corresponding cause clause.", "Note that an emotion may have more than one cause, and the same cause may also become the stimulus of multiple emotions.", "We propose a one-step approach named RANKCP, which ranks clause pair candidates in a document to extract emotion-cause pairs.", "The overall architecture is shown in Fig. 1, which consists of three components.", "The first component learns vector representations of clauses in a given document.", "The second component models the relationships between clauses to obtain better clause representations.", "The third component learns clause pair representations enhanced with relative position modeling, and ranks clause pair candidates to extract emotion-cause pairs.", "Given a document D = ( c 1 , c 2 , . . . , c | D | ) composed of | D | clauses, we use a hierarchical recurrent neural network (Hierarchical RNN) to encode textual content and learn clause representations.", "1 For each clause c i = ( w i 1 , w i 2 , . . . , w i | c i | ) , we use a word-level bidirectional RNN to encode its content information and obtain the clause's hidden state sequence ( h i 1 , h i 2 , . . . , h i | c i | ) .", "An attention layer is adopted to combine them and return a state vector h i = (cid:80) | c i | j =1 j h ij for the clause c i , where j = Softmax (cid:16) w (cid:62) a tanh( W a h ij + b a ) (cid:17) is the attention weight of the j -th word in clause c i , with a multilayer perceptron (MLP) parameterized by W a , b a and w a .", "Then the document D 's clause state sequence ( h 1 , h 2 , . . . , h | D | ) is fed into a clause-level bidirectional RNN to produce clause representations, denoted as ( c 1 , c 2 , . . . , c | D | ) .", "Knowledge about inter-clause relationships is useful for extracting emotion-cause pairs.", "After learning clause representations of a document, to enhance the interactions between clauses in the document, we regard the document structure as a fully-connected clause graph, and adopt graph attention network (Velickovic et al., 2018) to model the inter-clause relationships.", "Specifically, each node in the fully-connected graph is a clause in the document, and every two nodes have an edge.", "We also add a self-loop edge 1 Pretrained BERT encoder (Devlin et al., 2019) based clause representation component is shown in Appendix A.1.", "to every node, because the cause clause of an emotion clause may be itself.", "Graph attention network propagates information among clauses by stacking multiple graph attention layers, in which each layer is to learn an updated clause representation via aggregating neighboring clauses' information using self-attention (Vaswani et al., 2017).", "At the t -th graph attention layer, let { h ( t 1) 1 , h ( t 1) 2 , . . . , h ( t 1) | D | } denote the input clause representations of this layer, where the clause representation of clause c i is denoted as h ( t 1) i R d t 1 .", "The graph attention mechanism operates on each clause c i in the document via the following aggregation scheme: h ( t ) i = ReLU (cid:88) j N ( i ) ( t ) ij W ( t ) h ( t 1) j + b ( t ) , (2) where h ( t ) i is the output representation, W ( t ) and b ( t ) are learnable parameters, and N ( i ) denotes the directly neighboring clauses of c i (in our case it contains all clauses in the document).", "The attention weight ( t ) ij reflects the strength of aggregation level between the clause c i and the clause c j , which is learned by an MLP parameterized by w ( t ) : e ( t ) ij = w ( t ) (cid:62) tanh (cid:16)(cid:104) W ( t ) h ( t 1) i ; W ( t ) h ( t 1) j (cid:105)(cid:17) , ( t ) ij = exp (cid:16) LeakyReLU ( e ( t ) ij ) (cid:17) (cid:80) k N ( i ) exp (cid:16) LeakyReLU ( e ( t ) ik ) (cid:17) , (3) where [ ; ] is concatenation.", "The following matrix form can describe the t -th graph attention layer: H ( t ) = ReLU (cid:16) A ( t ) H ( t 1) W ( t ) (cid:62) + b ( t ) (cid:17) , (4) where [ A ( t ) ] ij = ( t ) ij .", "The first layer's input H (0) = (cid:0) c 1 , c 2 , . . . , c | D | (cid:1) (cid:62) is the document en-coder's output (see Section 3.1).", "By stacking T layers to model inter-clause relationships, the last layer's output is the updated clause representations H ( T ) = (cid:0) h 1 , h 2 , . . . , h | D | (cid:1) (cid:62) .", "We further adopt multi-head attention, where each head can capture a global pattern based on the order-preserving property of graph attention (Qiu et al., 2018).", "In practice, we add a highway connection (Srivastava et al., 2015) between every two adjacent layers to control the information flow.", "2 Based on modeling the interactions between clauses with graph attention network composed of multiple graph attention layers, each clause representation h i is produced by fusing other clauses' information adaptively, and the inter-clause relationships in the document can be learned sufficiently.", "After obtaining updated clause representations { h i } | D | i =1 , we feed them into two pre-output layers to predict whether a clause is an emotion/cause clause or not.", "Specifically, an MLP (parameterized by w emo and b emo ) with logistic function ( ) is used to predict the probability of a clause c i being an emotion clause (denoted as y emo i ): y emo i = (cid:16) w (cid:62) emo h i + b emo (cid:17) .", "(5) Similarly, the probability of a clause c i being a cause clause ( y cau i ) is obtained by the other layer.", "resentations and rank these pairs to obtain emotion-cause pairs.", "Relative position between two clauses is the key to indicate emotion-cause pairs.", "Thus, we inject relative position information into the clause pair representation learning process via relative position embedding learning.", "We hypothesize that if the relative position of two clauses is too large, the probability of their forming an emotion-cause pair is very small.", "Thus, given the document D = ( c 1 , . . . , c | D | ) , we consider each clause pair ( c i , c j ) in which the two clauses' relative position (absolute value) | j i | is less than or equal to a certain value M as a candidate of emotion-cause pair.", "We construct a set of clause pair candidates from the document D : P (cid:48) = { ( c i , c j ) | M j i + M } .", "For each clause pair candidate p ij = ( c i , c j ) P (cid:48) , its initialized representation is obtained by concatenating three vectors: the clause c i 's representation h i , the clause c j 's representation h j , and their relative position j i 's embedding r j i .", "We employ a one-layer MLP to learn its representation: p ij = ReLU ( W p [ h i ; h j ; r j i ] + b p ) , (7) with learnable W p and b p .", "Vanilla relative position embedding For each relative position m { M, . . . , 1 , 0 , +1 , . . . , + M } , we randomly initialize the embedding r m via sampling from a uniform distribution.", "Then each relative position embedding is learned together with the model training process.", "Kernel-based relative position embedding Beyond the above vanilla scheme where each relative position embedding is partly independent of each other, we aim to model the mutual impact among different relative positions for further improving relative position embeddings.", "To this end, for each relative position m { M, . . . , + M } , we use an RBF kernel function K m ( ) to model the impact between m and other relative positions: K m ( j ) = exp (cid:18) ( j m ) 2 K 2 (cid:19) , (8) where j { M, . . . , + M } is one of possible relative position values, and K restricts the shape of the kernel function.", "Then, we enhance the vanilla r 0\u0000 1 <latexit sha1_base64=\"eqcAzF/DINHc43ykL9ajxwiMFVU=\">AAACAXicbVA9SwNBEJ2LXzF+RS1tFoNoY7gTQcugjWUEEwOXI+xt9pIlu7fH7p4Qjqv8DbZa24mtv8TSf+ImucIkPhh4vDfDzLww4Uwb1/12Siura+sb5c3K1vbO7l51/6CtZaoIbRHJpeqEWFPOYtoyzHDaSRTFIuT0MRzdTvzHJ6o0k/GDGSc0EHgQs4gRbKzkd0OBVC879/LTXrXm1t0p0DLxClKDAs1e9afblyQVNDaEY619z01MkGFlGOE0r3RTTRNMRnhAfUtjLKgOsunJOTqxSh9FUtmKDZqqfycyLLQei9B2CmyGetGbiP95fmqi6yBjcZIaGpPZoijlyEg0+R/1maLE8LElmChmb0VkiBUmxqY0tyUUuc3EW0xgmbQv6p5b9+4va42bIp0yHMExnIEHV9CAO2hCCwhIeIFXeHOenXfnw/mctZacYuYQ5uB8/QL9mZcX</latexit> <latexit sha1_base64=\"eqcAzF/DINHc43ykL9ajxwiMFVU=\">AAACAXicbVA9SwNBEJ2LXzF+RS1tFoNoY7gTQcugjWUEEwOXI+xt9pIlu7fH7p4Qjqv8DbZa24mtv8TSf+ImucIkPhh4vDfDzLww4Uwb1/12Siura+sb5c3K1vbO7l51/6CtZaoIbRHJpeqEWFPOYtoyzHDaSRTFIuT0MRzdTvzHJ6o0k/GDGSc0EHgQs4gRbKzkd0OBVC879/LTXrXm1t0p0DLxClKDAs1e9afblyQVNDaEY619z01MkGFlGOE0r3RTTRNMRnhAfUtjLKgOsunJOTqxSh9FUtmKDZqqfycyLLQei9B2CmyGetGbiP95fmqi6yBjcZIaGpPZoijlyEg0+R/1maLE8LElmChmb0VkiBUmxqY0tyUUuc3EW0xgmbQv6p5b9+4va42bIp0yHMExnIEHV9CAO2hCCwhIeIFXeHOenXfnw/mctZacYuYQ5uB8/QL9mZcX</latexit> <latexit sha1_base64=\"eqcAzF/DINHc43ykL9ajxwiMFVU=\">AAACAXicbVA9SwNBEJ2LXzF+RS1tFoNoY7gTQcugjWUEEwOXI+xt9pIlu7fH7p4Qjqv8DbZa24mtv8TSf+ImucIkPhh4vDfDzLww4Uwb1/12Siura+sb5c3K1vbO7l51/6CtZaoIbRHJpeqEWFPOYtoyzHDaSRTFIuT0MRzdTvzHJ6o0k/GDGSc0EHgQs4gRbKzkd0OBVC879/LTXrXm1t0p0DLxClKDAs1e9afblyQVNDaEY619z01MkGFlGOE0r3RTTRNMRnhAfUtjLKgOsunJOTqxSh9FUtmKDZqqfycyLLQei9B2CmyGetGbiP95fmqi6yBjcZIaGpPZoijlyEg0+R/1maLE8LElmChmb0VkiBUmxqY0tyUUuc3EW0xgmbQv6p5b9+4va42bIp0yHMExnIEHV9CAO2hCCwhIeIFXeHOenXfnw/mctZacYuYQ5uB8/QL9mZcX</latexit> <latexit sha1_base64=\"eqcAzF/DINHc43ykL9ajxwiMFVU=\">AAACAXicbVA9SwNBEJ2LXzF+RS1tFoNoY7gTQcugjWUEEwOXI+xt9pIlu7fH7p4Qjqv8DbZa24mtv8TSf+ImucIkPhh4vDfDzLww4Uwb1/12Siura+sb5c3K1vbO7l51/6CtZaoIbRHJpeqEWFPOYtoyzHDaSRTFIuT0MRzdTvzHJ6o0k/GDGSc0EHgQs4gRbKzkd0OBVC879/LTXrXm1t0p0DLxClKDAs1e9afblyQVNDaEY619z01MkGFlGOE0r3RTTRNMRnhAfUtjLKgOsunJOTqxSh9FUtmKDZqqfycyLLQei9B2CmyGetGbiP95fmqi6yBjcZIaGpPZoijlyEg0+R/1maLE8LElmChmb0VkiBUmxqY0tyUUuc3EW0xgmbQv6p5b9+4va42bIp0yHMExnIEHV9CAO2hCCwhIeIFXeHOenXfnw/mctZacYuYQ5uB8/QL9mZcX</latexit> X <latexit sha1_base64=\"HPvrImzRBbCzydhOIVtm7PZs4Rs=\">AAAB+nicbVBNSwMxEJ3Ur1q/qh69BIvgqeyKoMeiF48VbCu0S8mm2TY0yS5JVihr/4JXPXsTr/4Zj/4Ts+0ebOuDgcd7M8zMCxPBjfW8b1RaW9/Y3CpvV3Z29/YPqodHbROnmrIWjUWsH0NimOCKtSy3gj0mmhEZCtYJx7e533li2vBYPdhJwgJJhopHnBKbSz2Tyn615tW9GfAq8QtSgwLNfvWnN4hpKpmyVBBjur6X2CAj2nIq2LTSSw1LCB2TIes6qohkJshmt07xmVMGOIq1K2XxTP07kRFpzESGrlMSOzLLXi7+53VTG10HGVdJapmi80VRKrCNcf44HnDNqBUTRwjV3N2K6YhoQq2LZ2FLKKcuE385gVXSvqj7Xt2/v6w1bop0ynACp3AOPlxBA+6gCS2gMIIXeIU39Ize0Qf6nLeWUDFzDAtAX7/pcJTp</latexit> <latexit sha1_base64=\"HPvrImzRBbCzydhOIVtm7PZs4Rs=\">AAAB+nicbVBNSwMxEJ3Ur1q/qh69BIvgqeyKoMeiF48VbCu0S8mm2TY0yS5JVihr/4JXPXsTr/4Zj/4Ts+0ebOuDgcd7M8zMCxPBjfW8b1RaW9/Y3CpvV3Z29/YPqodHbROnmrIWjUWsH0NimOCKtSy3gj0mmhEZCtYJx7e533li2vBYPdhJwgJJhopHnBKbSz2Tyn615tW9GfAq8QtSgwLNfvWnN4hpKpmyVBBjur6X2CAj2nIq2LTSSw1LCB2TIes6qohkJshmt07xmVMGOIq1K2XxTP07kRFpzESGrlMSOzLLXi7+53VTG10HGVdJapmi80VRKrCNcf44HnDNqBUTRwjV3N2K6YhoQq2LZ2FLKKcuE385gVXSvqj7Xt2/v6w1bop0ynACp3AOPlxBA+6gCS2gMIIXeIU39Ize0Qf6nLeWUDFzDAtAX7/pcJTp</latexit> <latexit sha1_base64=\"HPvrImzRBbCzydhOIVtm7PZs4Rs=\">AAAB+nicbVBNSwMxEJ3Ur1q/qh69BIvgqeyKoMeiF48VbCu0S8mm2TY0yS5JVihr/4JXPXsTr/4Zj/4Ts+0ebOuDgcd7M8zMCxPBjfW8b1RaW9/Y3CpvV3Z29/YPqodHbROnmrIWjUWsH0NimOCKtSy3gj0mmhEZCtYJx7e533li2vBYPdhJwgJJhopHnBKbSz2Tyn615tW9GfAq8QtSgwLNfvWnN4hpKpmyVBBjur6X2CAj2nIq2LTSSw1LCB2TIes6qohkJshmt07xmVMGOIq1K2XxTP07kRFpzESGrlMSOzLLXi7+53VTG10HGVdJapmi80VRKrCNcf44HnDNqBUTRwjV3N2K6YhoQq2LZ2FLKKcuE385gVXSvqj7Xt2/v6w1bop0ynACp3AOPlxBA+6gCS2gMIIXeIU39Ize0Qf6nLeWUDFzDAtAX7/pcJTp</latexit> <latexit sha1_base64=\"HPvrImzRBbCzydhOIVtm7PZs4Rs=\">AAAB+nicbVBNSwMxEJ3Ur1q/qh69BIvgqeyKoMeiF48VbCu0S8mm2TY0yS5JVihr/4JXPXsTr/4Zj/4Ts+0ebOuDgcd7M8zMCxPBjfW8b1RaW9/Y3CpvV3Z29/YPqodHbROnmrIWjUWsH0NimOCKtSy3gj0mmhEZCtYJx7e533li2vBYPdhJwgJJhopHnBKbSz2Tyn615tW9GfAq8QtSgwLNfvWnN4hpKpmyVBBjur6X2CAj2nIq2LTSSw1LCB2TIes6qohkJshmt07xmVMGOIq1K2XxTP07kRFpzESGrlMSOzLLXi7+53VTG10HGVdJapmi80VRKrCNcf44HnDNqBUTRwjV3N2K6YhoQq2LZ2FLKKcuE385gVXSvqj7Xt2/v6w1bop0ynACp3AOPlxBA+6gCS2gMIIXeIU39Ize0Qf6nLeWUDFzDAtAX7/pcJTp</latexit> = ) <latexit sha1_base64=\"rJ0dVvQdlB1RZE4oedNMK+3aVRQ=\">AAACB3icbVC7SgNBFL0bXzE+smppMxgEq7ArgpZBGwuLCOYByRJmJ7ObIbMzy8ysEkI+wG+w1dpObP0MS//ESbKFSTxw4XDOvZzLCVPOtPG8b6ewtr6xuVXcLu3s7u2X3YPDppaZIrRBJJeqHWJNORO0YZjhtJ0qipOQ01Y4vJn6rUeqNJPiwYxSGiQ4FixiBBsr9dxy906KWLF4YLBS8qnnVryqNwNaJX5OKpCj3nN/un1JsoQKQzjWuuN7qQnGWBlGOJ2UupmmKSZDHNOOpQInVAfj2eMTdGqVPoqksiMMmql/L8Y40XqUhHYzwWagl72p+J/XyUx0FYyZSDNDBZkHRRlHRqJpC6jPFCWGjyzBRDH7KyIDrDAxtquFlDCZ2E785QZWSfO86ntV//6iUrvO2ynCMZzAGfhwCTW4hTo0gEAGL/AKb86z8+58OJ/z1YKT3xzBApyvXz3emgw=</latexit> <latexit sha1_base64=\"rJ0dVvQdlB1RZE4oedNMK+3aVRQ=\">AAACB3icbVC7SgNBFL0bXzE+smppMxgEq7ArgpZBGwuLCOYByRJmJ7ObIbMzy8ysEkI+wG+w1dpObP0MS//ESbKFSTxw4XDOvZzLCVPOtPG8b6ewtr6xuVXcLu3s7u2X3YPDppaZIrRBJJeqHWJNORO0YZjhtJ0qipOQ01Y4vJn6rUeqNJPiwYxSGiQ4FixiBBsr9dxy906KWLF4YLBS8qnnVryqNwNaJX5OKpCj3nN/un1JsoQKQzjWuuN7qQnGWBlGOJ2UupmmKSZDHNOOpQInVAfj2eMTdGqVPoqksiMMmql/L8Y40XqUhHYzwWagl72p+J/XyUx0FYyZSDNDBZkHRRlHRqJpC6jPFCWGjyzBRDH7KyIDrDAxtquFlDCZ2E785QZWSfO86ntV//6iUrvO2ynCMZzAGfhwCTW4hTo0gEAGL/AKb86z8+58OJ/z1YKT3xzBApyvXz3emgw=</latexit> <latexit sha1_base64=\"rJ0dVvQdlB1RZE4oedNMK+3aVRQ=\">AAACB3icbVC7SgNBFL0bXzE+smppMxgEq7ArgpZBGwuLCOYByRJmJ7ObIbMzy8ysEkI+wG+w1dpObP0MS//ESbKFSTxw4XDOvZzLCVPOtPG8b6ewtr6xuVXcLu3s7u2X3YPDppaZIrRBJJeqHWJNORO0YZjhtJ0qipOQ01Y4vJn6rUeqNJPiwYxSGiQ4FixiBBsr9dxy906KWLF4YLBS8qnnVryqNwNaJX5OKpCj3nN/un1JsoQKQzjWuuN7qQnGWBlGOJ2UupmmKSZDHNOOpQInVAfj2eMTdGqVPoqksiMMmql/L8Y40XqUhHYzwWagl72p+J/XyUx0FYyZSDNDBZkHRRlHRqJpC6jPFCWGjyzBRDH7KyIDrDAxtquFlDCZ2E785QZWSfO86ntV//6iUrvO2ynCMZzAGfhwCTW4hTo0gEAGL/AKb86z8+58OJ/z1YKT3xzBApyvXz3emgw=</latexit> <latexit sha1_base64=\"rJ0dVvQdlB1RZE4oedNMK+3aVRQ=\">AAACB3icbVC7SgNBFL0bXzE+smppMxgEq7ArgpZBGwuLCOYByRJmJ7ObIbMzy8ysEkI+wG+w1dpObP0MS//ESbKFSTxw4XDOvZzLCVPOtPG8b6ewtr6xuVXcLu3s7u2X3YPDppaZIrRBJJeqHWJNORO0YZjhtJ0qipOQ01Y4vJn6rUeqNJPiwYxSGiQ4FixiBBsr9dxy906KWLF4YLBS8qnnVryqNwNaJX5OKpCj3nN/un1JsoQKQzjWuuN7qQnGWBlGOJ2UupmmKSZDHNOOpQInVAfj2eMTdGqVPoqksiMMmql/L8Y40XqUhHYzwWagl72p+J/XyUx0FYyZSDNDBZkHRRlHRqJpC6jPFCWGjyzBRDH7KyIDrDAxtquFlDCZ2E785QZWSfO86ntV//6iUrvO2ynCMZzAGfhwCTW4hTo0gEAGL/AKb86z8+58OJ/z1YKT3xzBApyvXz3emgw=</latexit> ... <latexit sha1_base64=\"4fajA5Sub0emPp2TPmVj1aHk3ls=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gPaUDabTbt2kw27E6GE/gcvHhTx6v/x5r9x2+agrQ8GHu/NMDMvSKUw6LrfTmltfWNzq7xd2dnd2z+oHh61jco04y2mpNLdgBouRcJbKFDybqo5jQPJO8H4duZ3nrg2QiUPOEm5H9NhIiLBKFqp3ZehQjOo1ty6OwdZJV5BalCgOah+9UPFspgnyCQ1pue5Kfo51SiY5NNKPzM8pWxMh7xnaUJjbvx8fu2UnFklJJHSthIkc/X3RE5jYyZxYDtjiiOz7M3E/7xehtG1n4skzZAnbLEoyiRBRWavk1BozlBOLKFMC3srYSOqKUMbUMWG4C2/vEraF3XPrXv3l7XGTRFHGU7gFM7BgytowB00oQUMHuEZXuHNUc6L8+58LFpLTjFzDH/gfP4AvB+POA==</latexit> <latexit sha1_base64=\"4fajA5Sub0emPp2TPmVj1aHk3ls=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gPaUDabTbt2kw27E6GE/gcvHhTx6v/x5r9x2+agrQ8GHu/NMDMvSKUw6LrfTmltfWNzq7xd2dnd2z+oHh61jco04y2mpNLdgBouRcJbKFDybqo5jQPJO8H4duZ3nrg2QiUPOEm5H9NhIiLBKFqp3ZehQjOo1ty6OwdZJV5BalCgOah+9UPFspgnyCQ1pue5Kfo51SiY5NNKPzM8pWxMh7xnaUJjbvx8fu2UnFklJJHSthIkc/X3RE5jYyZxYDtjiiOz7M3E/7xehtG1n4skzZAnbLEoyiRBRWavk1BozlBOLKFMC3srYSOqKUMbUMWG4C2/vEraF3XPrXv3l7XGTRFHGU7gFM7BgytowB00oQUMHuEZXuHNUc6L8+58LFpLTjFzDH/gfP4AvB+POA==</latexit> <latexit sha1_base64=\"4fajA5Sub0emPp2TPmVj1aHk3ls=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gPaUDabTbt2kw27E6GE/gcvHhTx6v/x5r9x2+agrQ8GHu/NMDMvSKUw6LrfTmltfWNzq7xd2dnd2z+oHh61jco04y2mpNLdgBouRcJbKFDybqo5jQPJO8H4duZ3nrg2QiUPOEm5H9NhIiLBKFqp3ZehQjOo1ty6OwdZJV5BalCgOah+9UPFspgnyCQ1pue5Kfo51SiY5NNKPzM8pWxMh7xnaUJjbvx8fu2UnFklJJHSthIkc/X3RE5jYyZxYDtjiiOz7M3E/7xehtG1n4skzZAnbLEoyiRBRWavk1BozlBOLKFMC3srYSOqKUMbUMWG4C2/vEraF3XPrXv3l7XGTRFHGU7gFM7BgytowB00oQUMHuEZXuHNUc6L8+58LFpLTjFzDH/gfP4AvB+POA==</latexit> <latexit sha1_base64=\"4fajA5Sub0emPp2TPmVj1aHk3ls=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gPaUDabTbt2kw27E6GE/gcvHhTx6v/x5r9x2+agrQ8GHu/NMDMvSKUw6LrfTmltfWNzq7xd2dnd2z+oHh61jco04y2mpNLdgBouRcJbKFDybqo5jQPJO8H4duZ3nrg2QiUPOEm5H9NhIiLBKFqp3ZehQjOo1ty6OwdZJV5BalCgOah+9UPFspgnyCQ1pue5Kfo51SiY5NNKPzM8pWxMh7xnaUJjbvx8fu2UnFklJJHSthIkc/X3RE5jYyZxYDtjiiOz7M3E/7xehtG1n4skzZAnbLEoyiRBRWavk1BozlBOLKFMC3srYSOqKUMbUMWG4C2/vEraF3XPrXv3l7XGTRFHGU7gFM7BgytowB00oQUMHuEZXuHNUc6L8+58LFpLTjFzDH/gfP4AvB+POA==</latexit> r \u0000 2 <latexit sha1_base64=\"/IShXm3MEtRkTwOnuprq6W4fqgQ=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoNgY7gLgpZBG8sI5gOTI+xt9pIlu3vH7p4QjjT+Blut7cTWf2LpP3EvucIkPhh4vDfDzLwg5kwb1/12CmvrG5tbxe3Szu7e/kH58Kilo0QR2iQRj1QnwJpyJmnTMMNpJ1YUi4DTdjC+zfz2E1WaRfLBTGLqCzyULGQEGys99gKBVD+9qE375YpbdWdAq8TLSQVyNPrln94gIomg0hCOte56bmz8FCvDCKfTUi/RNMZkjIe0a6nEgmo/nV08RWdWGaAwUrakQTP170SKhdYTEdhOgc1IL3uZ+J/XTUx47adMxomhkswXhQlHJkLZ+2jAFCWGTyzBRDF7KyIjrDAxNqSFLYHIMvGWE1glrVrVc6ve/WWlfpOnU4QTOIVz8OAK6nAHDWgCAQkv8ApvzrPz7nw4n/PWgpPPHMMCnK9fmVGW5w==</latexit> <latexit sha1_base64=\"/IShXm3MEtRkTwOnuprq6W4fqgQ=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoNgY7gLgpZBG8sI5gOTI+xt9pIlu3vH7p4QjjT+Blut7cTWf2LpP3EvucIkPhh4vDfDzLwg5kwb1/12CmvrG5tbxe3Szu7e/kH58Kilo0QR2iQRj1QnwJpyJmnTMMNpJ1YUi4DTdjC+zfz2E1WaRfLBTGLqCzyULGQEGys99gKBVD+9qE375YpbdWdAq8TLSQVyNPrln94gIomg0hCOte56bmz8FCvDCKfTUi/RNMZkjIe0a6nEgmo/nV08RWdWGaAwUrakQTP170SKhdYTEdhOgc1IL3uZ+J/XTUx47adMxomhkswXhQlHJkLZ+2jAFCWGTyzBRDF7KyIjrDAxNqSFLYHIMvGWE1glrVrVc6ve/WWlfpOnU4QTOIVz8OAK6nAHDWgCAQkv8ApvzrPz7nw4n/PWgpPPHMMCnK9fmVGW5w==</latexit> <latexit sha1_base64=\"/IShXm3MEtRkTwOnuprq6W4fqgQ=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoNgY7gLgpZBG8sI5gOTI+xt9pIlu3vH7p4QjjT+Blut7cTWf2LpP3EvucIkPhh4vDfDzLwg5kwb1/12CmvrG5tbxe3Szu7e/kH58Kilo0QR2iQRj1QnwJpyJmnTMMNpJ1YUi4DTdjC+zfz2E1WaRfLBTGLqCzyULGQEGys99gKBVD+9qE375YpbdWdAq8TLSQVyNPrln94gIomg0hCOte56bmz8FCvDCKfTUi/RNMZkjIe0a6nEgmo/nV08RWdWGaAwUrakQTP170SKhdYTEdhOgc1IL3uZ+J/XTUx47adMxomhkswXhQlHJkLZ+2jAFCWGTyzBRDF7KyIjrDAxNqSFLYHIMvGWE1glrVrVc6ve/WWlfpOnU4QTOIVz8OAK6nAHDWgCAQkv8ApvzrPz7nw4n/PWgpPPHMMCnK9fmVGW5w==</latexit> <latexit sha1_base64=\"/IShXm3MEtRkTwOnuprq6W4fqgQ=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoNgY7gLgpZBG8sI5gOTI+xt9pIlu3vH7p4QjjT+Blut7cTWf2LpP3EvucIkPhh4vDfDzLwg5kwb1/12CmvrG5tbxe3Szu7e/kH58Kilo0QR2iQRj1QnwJpyJmnTMMNpJ1YUi4DTdjC+zfz2E1WaRfLBTGLqCzyULGQEGys99gKBVD+9qE375YpbdWdAq8TLSQVyNPrln94gIomg0hCOte56bmz8FCvDCKfTUi/RNMZkjIe0a6nEgmo/nV08RWdWGaAwUrakQTP170SKhdYTEdhOgc1IL3uZ+J/XTUx47adMxomhkswXhQlHJkLZ+2jAFCWGTyzBRDF7KyIjrDAxNqSFLYHIMvGWE1glrVrVc6ve/WWlfpOnU4QTOIVz8OAK6nAHDWgCAQkv8ApvzrPz7nw4n/PWgpPPHMMCnK9fmVGW5w==</latexit> r \u0000 1 <latexit sha1_base64=\"AcAh/Nk3Gv1ENIIS+UbTjL0rMPU=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoNgY7iTgJZBG8sI5gOTI+xt9pIlu3vH7p4QjjT+Blut7cTWf2LpP3EvucIkPhh4vDfDzLwg5kwb1/12CmvrG5tbxe3Szu7e/kH58Kilo0QR2iQRj1QnwJpyJmnTMMNpJ1YUi4DTdjC+zfz2E1WaRfLBTGLqCzyULGQEGys99gKBVD+98Kb9csWtujOgVeLlpAI5Gv3yT28QkURQaQjHWnc9NzZ+ipVhhNNpqZdoGmMyxkPatVRiQbWfzi6eojOrDFAYKVvSoJn6dyLFQuuJCGynwGakl71M/M/rJia89lMm48RQSeaLwoQjE6HsfTRgihLDJ5Zgopi9FZERVpgYG9LClkBkmXjLCayS1mXVc6vefa1Sv8nTKcIJnMI5eHAFdbiDBjSBgIQXeIU359l5dz6cz3lrwclnjmEBztcvl72W5g==</latexit> <latexit sha1_base64=\"AcAh/Nk3Gv1ENIIS+UbTjL0rMPU=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoNgY7iTgJZBG8sI5gOTI+xt9pIlu3vH7p4QjjT+Blut7cTWf2LpP3EvucIkPhh4vDfDzLwg5kwb1/12CmvrG5tbxe3Szu7e/kH58Kilo0QR2iQRj1QnwJpyJmnTMMNpJ1YUi4DTdjC+zfz2E1WaRfLBTGLqCzyULGQEGys99gKBVD+98Kb9csWtujOgVeLlpAI5Gv3yT28QkURQaQjHWnc9NzZ+ipVhhNNpqZdoGmMyxkPatVRiQbWfzi6eojOrDFAYKVvSoJn6dyLFQuuJCGynwGakl71M/M/rJia89lMm48RQSeaLwoQjE6HsfTRgihLDJ5Zgopi9FZERVpgYG9LClkBkmXjLCayS1mXVc6vefa1Sv8nTKcIJnMI5eHAFdbiDBjSBgIQXeIU359l5dz6cz3lrwclnjmEBztcvl72W5g==</latexit> <latexit sha1_base64=\"AcAh/Nk3Gv1ENIIS+UbTjL0rMPU=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoNgY7iTgJZBG8sI5gOTI+xt9pIlu3vH7p4QjjT+Blut7cTWf2LpP3EvucIkPhh4vDfDzLwg5kwb1/12CmvrG5tbxe3Szu7e/kH58Kilo0QR2iQRj1QnwJpyJmnTMMNpJ1YUi4DTdjC+zfz2E1WaRfLBTGLqCzyULGQEGys99gKBVD+98Kb9csWtujOgVeLlpAI5Gv3yT28QkURQaQjHWnc9NzZ+ipVhhNNpqZdoGmMyxkPatVRiQbWfzi6eojOrDFAYKVvSoJn6dyLFQuuJCGynwGakl71M/M/rJia89lMm48RQSeaLwoQjE6HsfTRgihLDJ5Zgopi9FZERVpgYG9LClkBkmXjLCayS1mXVc6vefa1Sv8nTKcIJnMI5eHAFdbiDBjSBgIQXeIU359l5dz6cz3lrwclnjmEBztcvl72W5g==</latexit> <latexit sha1_base64=\"AcAh/Nk3Gv1ENIIS+UbTjL0rMPU=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoNgY7iTgJZBG8sI5gOTI+xt9pIlu3vH7p4QjjT+Blut7cTWf2LpP3EvucIkPhh4vDfDzLwg5kwb1/12CmvrG5tbxe3Szu7e/kH58Kilo0QR2iQRj1QnwJpyJmnTMMNpJ1YUi4DTdjC+zfz2E1WaRfLBTGLqCzyULGQEGys99gKBVD+98Kb9csWtujOgVeLlpAI5Gv3yT28QkURQaQjHWnc9NzZ+ipVhhNNpqZdoGmMyxkPatVRiQbWfzi6eojOrDFAYKVvSoJn6dyLFQuuJCGynwGakl71M/M/rJia89lMm48RQSeaLwoQjE6HsfTRgihLDJ5Zgopi9FZERVpgYG9LClkBkmXjLCayS1mXVc6vefa1Sv8nTKcIJnMI5eHAFdbiDBjSBgIQXeIU359l5dz6cz3lrwclnjmEBztcvl72W5g==</latexit> r 0 <latexit sha1_base64=\"WFmJ0f1DyXNoJgOC6cr2D5bQNIw=\">AAAB/3icbVA9SwNBEJ2LXzF+nVraLAbBKtyJoGXQxjKC+ZDkCHubvWTJ7t6xuyeEI4W/wVZrO7H1p1j6T9xLrjCJDwYe780wMy9MONPG876d0tr6xuZWebuys7u3f+AeHrV0nCpCmyTmseqEWFPOJG0aZjjtJIpiEXLaDse3ud9+okqzWD6YSUIDgYeSRYxgY6XHXiiQ6mfetO9WvZo3A1olfkGqUKDRd396g5ikgkpDONa663uJCTKsDCOcTiu9VNMEkzEe0q6lEguqg2x28BSdWWWAoljZkgbN1L8TGRZaT0RoOwU2I73s5eJ/Xjc10XWQMZmkhkoyXxSlHJkY5d+jAVOUGD6xBBPF7K2IjLDCxNiMFraEIs/EX05glbQuar5X8+8vq/WbIp0ynMApnIMPV1CHO2hAEwgIeIFXeHOenXfnw/mct5acYuYYFuB8/QInXpau</latexit> <latexit sha1_base64=\"WFmJ0f1DyXNoJgOC6cr2D5bQNIw=\">AAAB/3icbVA9SwNBEJ2LXzF+nVraLAbBKtyJoGXQxjKC+ZDkCHubvWTJ7t6xuyeEI4W/wVZrO7H1p1j6T9xLrjCJDwYe780wMy9MONPG876d0tr6xuZWebuys7u3f+AeHrV0nCpCmyTmseqEWFPOJG0aZjjtJIpiEXLaDse3ud9+okqzWD6YSUIDgYeSRYxgY6XHXiiQ6mfetO9WvZo3A1olfkGqUKDRd396g5ikgkpDONa663uJCTKsDCOcTiu9VNMEkzEe0q6lEguqg2x28BSdWWWAoljZkgbN1L8TGRZaT0RoOwU2I73s5eJ/Xjc10XWQMZmkhkoyXxSlHJkY5d+jAVOUGD6xBBPF7K2IjLDCxNiMFraEIs/EX05glbQuar5X8+8vq/WbIp0ynMApnIMPV1CHO2hAEwgIeIFXeHOenXfnw/mct5acYuYYFuB8/QInXpau</latexit> <latexit sha1_base64=\"WFmJ0f1DyXNoJgOC6cr2D5bQNIw=\">AAAB/3icbVA9SwNBEJ2LXzF+nVraLAbBKtyJoGXQxjKC+ZDkCHubvWTJ7t6xuyeEI4W/wVZrO7H1p1j6T9xLrjCJDwYe780wMy9MONPG876d0tr6xuZWebuys7u3f+AeHrV0nCpCmyTmseqEWFPOJG0aZjjtJIpiEXLaDse3ud9+okqzWD6YSUIDgYeSRYxgY6XHXiiQ6mfetO9WvZo3A1olfkGqUKDRd396g5ikgkpDONa663uJCTKsDCOcTiu9VNMEkzEe0q6lEguqg2x28BSdWWWAoljZkgbN1L8TGRZaT0RoOwU2I73s5eJ/Xjc10XWQMZmkhkoyXxSlHJkY5d+jAVOUGD6xBBPF7K2IjLDCxNiMFraEIs/EX05glbQuar5X8+8vq/WbIp0ynMApnIMPV1CHO2hAEwgIeIFXeHOenXfnw/mct5acYuYYFuB8/QInXpau</latexit> <latexit sha1_base64=\"WFmJ0f1DyXNoJgOC6cr2D5bQNIw=\">AAAB/3icbVA9SwNBEJ2LXzF+nVraLAbBKtyJoGXQxjKC+ZDkCHubvWTJ7t6xuyeEI4W/wVZrO7H1p1j6T9xLrjCJDwYe780wMy9MONPG876d0tr6xuZWebuys7u3f+AeHrV0nCpCmyTmseqEWFPOJG0aZjjtJIpiEXLaDse3ud9+okqzWD6YSUIDgYeSRYxgY6XHXiiQ6mfetO9WvZo3A1olfkGqUKDRd396g5ikgkpDONa663uJCTKsDCOcTiu9VNMEkzEe0q6lEguqg2x28BSdWWWAoljZkgbN1L8TGRZaT0RoOwU2I73s5eJ/Xjc10XWQMZmkhkoyXxSlHJkY5d+jAVOUGD6xBBPF7K2IjLDCxNiMFraEIs/EX05glbQuar5X8+8vq/WbIp0ynMApnIMPV1CHO2hAEwgIeIFXeHOenXfnw/mct5acYuYYFuB8/QInXpau</latexit> K \u0000 1 ( j ) <latexit sha1_base64=\"cFlBEabvnNWxy5Xd3U8AijQ2xwI=\">AAACMnicbVDLSsNAFJ34rPVVdekmWBRdWBIRdCm6EdxUsA9oS5lMb+roZBJmbsQy5A/8GnGnP6I7cevWvdO0C6seGDic+zpzgkRwjZ736kxNz8zOzRcWiotLyyurpbX1uo5TxaDGYhGrZkA1CC6hhhwFNBMFNAoENILbs2G9cQdK81he4SCBTkT7koecUbRSt7TTRrjHfI+JFZV9yEw7onitQ3ORdc2+n+3e7GXdUtmreDncv8QfkzIZo9otfbV7MUsjkMgE1brlewl2DFXImYCs2E41JJTd0j60LJU0At0xuY/M3bZKzw1jZZ9EN1d/ThgaaT2IAtuZW/1dG4r/1Vophscdw2WSIkg2OhSmwsXYHYbj9rgChmJgCWWKW68uu6aKMrQRTlwJook/mPuRdZuT/zuVv6R+UPG9in95WD45HSdWIJtki+wSnxyRE3JOqqRGGHkgj+SZvDhPzpvz7nyMWqec8cwGmYDz+Q2ZzayB</latexit> <latexit sha1_base64=\"cFlBEabvnNWxy5Xd3U8AijQ2xwI=\">AAACMnicbVDLSsNAFJ34rPVVdekmWBRdWBIRdCm6EdxUsA9oS5lMb+roZBJmbsQy5A/8GnGnP6I7cevWvdO0C6seGDic+zpzgkRwjZ736kxNz8zOzRcWiotLyyurpbX1uo5TxaDGYhGrZkA1CC6hhhwFNBMFNAoENILbs2G9cQdK81he4SCBTkT7koecUbRSt7TTRrjHfI+JFZV9yEw7onitQ3ORdc2+n+3e7GXdUtmreDncv8QfkzIZo9otfbV7MUsjkMgE1brlewl2DFXImYCs2E41JJTd0j60LJU0At0xuY/M3bZKzw1jZZ9EN1d/ThgaaT2IAtuZW/1dG4r/1Vophscdw2WSIkg2OhSmwsXYHYbj9rgChmJgCWWKW68uu6aKMrQRTlwJook/mPuRdZuT/zuVv6R+UPG9in95WD45HSdWIJtki+wSnxyRE3JOqqRGGHkgj+SZvDhPzpvz7nyMWqec8cwGmYDz+Q2ZzayB</latexit> <latexit sha1_base64=\"cFlBEabvnNWxy5Xd3U8AijQ2xwI=\">AAACMnicbVDLSsNAFJ34rPVVdekmWBRdWBIRdCm6EdxUsA9oS5lMb+roZBJmbsQy5A/8GnGnP6I7cevWvdO0C6seGDic+zpzgkRwjZ736kxNz8zOzRcWiotLyyurpbX1uo5TxaDGYhGrZkA1CC6hhhwFNBMFNAoENILbs2G9cQdK81he4SCBTkT7koecUbRSt7TTRrjHfI+JFZV9yEw7onitQ3ORdc2+n+3e7GXdUtmreDncv8QfkzIZo9otfbV7MUsjkMgE1brlewl2DFXImYCs2E41JJTd0j60LJU0At0xuY/M3bZKzw1jZZ9EN1d/ThgaaT2IAtuZW/1dG4r/1Vophscdw2WSIkg2OhSmwsXYHYbj9rgChmJgCWWKW68uu6aKMrQRTlwJook/mPuRdZuT/zuVv6R+UPG9in95WD45HSdWIJtki+wSnxyRE3JOqqRGGHkgj+SZvDhPzpvz7nyMWqec8cwGmYDz+Q2ZzayB</latexit> <latexit sha1_base64=\"cFlBEabvnNWxy5Xd3U8AijQ2xwI=\">AAACMnicbVDLSsNAFJ34rPVVdekmWBRdWBIRdCm6EdxUsA9oS5lMb+roZBJmbsQy5A/8GnGnP6I7cevWvdO0C6seGDic+zpzgkRwjZ736kxNz8zOzRcWiotLyyurpbX1uo5TxaDGYhGrZkA1CC6hhhwFNBMFNAoENILbs2G9cQdK81he4SCBTkT7koecUbRSt7TTRrjHfI+JFZV9yEw7onitQ3ORdc2+n+3e7GXdUtmreDncv8QfkzIZo9otfbV7MUsjkMgE1brlewl2DFXImYCs2E41JJTd0j60LJU0At0xuY/M3bZKzw1jZZ9EN1d/ThgaaT2IAtuZW/1dG4r/1Vophscdw2WSIkg2OhSmwsXYHYbj9rgChmJgCWWKW68uu6aKMrQRTlwJook/mPuRdZuT/zuVv6R+UPG9in95WD45HSdWIJtki+wSnxyRE3JOqqRGGHkgj+SZvDhPzpvz7nyMWqec8cwGmYDz+Q2ZzayB</latexit> ... <latexit sha1_base64=\"4fajA5Sub0emPp2TPmVj1aHk3ls=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gPaUDabTbt2kw27E6GE/gcvHhTx6v/x5r9x2+agrQ8GHu/NMDMvSKUw6LrfTmltfWNzq7xd2dnd2z+oHh61jco04y2mpNLdgBouRcJbKFDybqo5jQPJO8H4duZ3nrg2QiUPOEm5H9NhIiLBKFqp3ZehQjOo1ty6OwdZJV5BalCgOah+9UPFspgnyCQ1pue5Kfo51SiY5NNKPzM8pWxMh7xnaUJjbvx8fu2UnFklJJHSthIkc/X3RE5jYyZxYDtjiiOz7M3E/7xehtG1n4skzZAnbLEoyiRBRWavk1BozlBOLKFMC3srYSOqKUMbUMWG4C2/vEraF3XPrXv3l7XGTRFHGU7gFM7BgytowB00oQUMHuEZXuHNUc6L8+58LFpLTjFzDH/gfP4AvB+POA==</latexit> <latexit sha1_base64=\"4fajA5Sub0emPp2TPmVj1aHk3ls=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gPaUDabTbt2kw27E6GE/gcvHhTx6v/x5r9x2+agrQ8GHu/NMDMvSKUw6LrfTmltfWNzq7xd2dnd2z+oHh61jco04y2mpNLdgBouRcJbKFDybqo5jQPJO8H4duZ3nrg2QiUPOEm5H9NhIiLBKFqp3ZehQjOo1ty6OwdZJV5BalCgOah+9UPFspgnyCQ1pue5Kfo51SiY5NNKPzM8pWxMh7xnaUJjbvx8fu2UnFklJJHSthIkc/X3RE5jYyZxYDtjiiOz7M3E/7xehtG1n4skzZAnbLEoyiRBRWavk1BozlBOLKFMC3srYSOqKUMbUMWG4C2/vEraF3XPrXv3l7XGTRFHGU7gFM7BgytowB00oQUMHuEZXuHNUc6L8+58LFpLTjFzDH/gfP4AvB+POA==</latexit> <latexit sha1_base64=\"4fajA5Sub0emPp2TPmVj1aHk3ls=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gPaUDabTbt2kw27E6GE/gcvHhTx6v/x5r9x2+agrQ8GHu/NMDMvSKUw6LrfTmltfWNzq7xd2dnd2z+oHh61jco04y2mpNLdgBouRcJbKFDybqo5jQPJO8H4duZ3nrg2QiUPOEm5H9NhIiLBKFqp3ZehQjOo1ty6OwdZJV5BalCgOah+9UPFspgnyCQ1pue5Kfo51SiY5NNKPzM8pWxMh7xnaUJjbvx8fu2UnFklJJHSthIkc/X3RE5jYyZxYDtjiiOz7M3E/7xehtG1n4skzZAnbLEoyiRBRWavk1BozlBOLKFMC3srYSOqKUMbUMWG4C2/vEraF3XPrXv3l7XGTRFHGU7gFM7BgytowB00oQUMHuEZXuHNUc6L8+58LFpLTjFzDH/gfP4AvB+POA==</latexit> <latexit sha1_base64=\"4fajA5Sub0emPp2TPmVj1aHk3ls=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69LBbBU0lE0GPRi8cK9gPaUDabTbt2kw27E6GE/gcvHhTx6v/x5r9x2+agrQ8GHu/NMDMvSKUw6LrfTmltfWNzq7xd2dnd2z+oHh61jco04y2mpNLdgBouRcJbKFDybqo5jQPJO8H4duZ3nrg2QiUPOEm5H9NhIiLBKFqp3ZehQjOo1ty6OwdZJV5BalCgOah+9UPFspgnyCQ1pue5Kfo51SiY5NNKPzM8pWxMh7xnaUJjbvx8fu2UnFklJJHSthIkc/X3RE5jYyZxYDtjiiOz7M3E/7xehtG1n4skzZAnbLEoyiRBRWavk1BozlBOLKFMC3srYSOqKUMbUMWG4C2/vEraF3XPrXv3l7XGTRFHGU7gFM7BgytowB00oQUMHuEZXuHNUc6L8+58LFpLTjFzDH/gfP4AvB+POA==</latexit> r +1 <latexit sha1_base64=\"VrY7MmzV0w7rs3Ls5xy2a3XABPo=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoMgCOFOAloGbSwjmA9MjrC32UuW7O4du3tCONL4G2y1thNb/4ml/8S95AqT+GDg8d4MM/OCmDNtXPfbKaytb2xuFbdLO7t7+wflw6OWjhJFaJNEPFKdAGvKmaRNwwynnVhRLAJO28H4NvPbT1RpFskHM4mpL/BQspARbKz02AsEUv30wpv2yxW36s6AVomXkwrkaPTLP71BRBJBpSEca9313Nj4KVaGEU6npV6iaYzJGA9p11KJBdV+Ort4is6sMkBhpGxJg2bq34kUC60nIrCdApuRXvYy8T+vm5jw2k+ZjBNDJZkvChOOTISy99GAKUoMn1iCiWL2VkRGWGFibEgLWwKRZeItJ7BKWpdVz61697VK/SZPpwgncArn4MEV1OEOGtAEAhJe4BXenGfn3flwPuetBSefOYYFOF+/lJOW5A==</latexit> <latexit sha1_base64=\"VrY7MmzV0w7rs3Ls5xy2a3XABPo=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoMgCOFOAloGbSwjmA9MjrC32UuW7O4du3tCONL4G2y1thNb/4ml/8S95AqT+GDg8d4MM/OCmDNtXPfbKaytb2xuFbdLO7t7+wflw6OWjhJFaJNEPFKdAGvKmaRNwwynnVhRLAJO28H4NvPbT1RpFskHM4mpL/BQspARbKz02AsEUv30wpv2yxW36s6AVomXkwrkaPTLP71BRBJBpSEca9313Nj4KVaGEU6npV6iaYzJGA9p11KJBdV+Ort4is6sMkBhpGxJg2bq34kUC60nIrCdApuRXvYy8T+vm5jw2k+ZjBNDJZkvChOOTISy99GAKUoMn1iCiWL2VkRGWGFibEgLWwKRZeItJ7BKWpdVz61697VK/SZPpwgncArn4MEV1OEOGtAEAhJe4BXenGfn3flwPuetBSefOYYFOF+/lJOW5A==</latexit> <latexit sha1_base64=\"VrY7MmzV0w7rs3Ls5xy2a3XABPo=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoMgCOFOAloGbSwjmA9MjrC32UuW7O4du3tCONL4G2y1thNb/4ml/8S95AqT+GDg8d4MM/OCmDNtXPfbKaytb2xuFbdLO7t7+wflw6OWjhJFaJNEPFKdAGvKmaRNwwynnVhRLAJO28H4NvPbT1RpFskHM4mpL/BQspARbKz02AsEUv30wpv2yxW36s6AVomXkwrkaPTLP71BRBJBpSEca9313Nj4KVaGEU6npV6iaYzJGA9p11KJBdV+Ort4is6sMkBhpGxJg2bq34kUC60nIrCdApuRXvYy8T+vm5jw2k+ZjBNDJZkvChOOTISy99GAKUoMn1iCiWL2VkRGWGFibEgLWwKRZeItJ7BKWpdVz61697VK/SZPpwgncArn4MEV1OEOGtAEAhJe4BXenGfn3flwPuetBSefOYYFOF+/lJOW5A==</latexit> <latexit sha1_base64=\"VrY7MmzV0w7rs3Ls5xy2a3XABPo=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoMgCOFOAloGbSwjmA9MjrC32UuW7O4du3tCONL4G2y1thNb/4ml/8S95AqT+GDg8d4MM/OCmDNtXPfbKaytb2xuFbdLO7t7+wflw6OWjhJFaJNEPFKdAGvKmaRNwwynnVhRLAJO28H4NvPbT1RpFskHM4mpL/BQspARbKz02AsEUv30wpv2yxW36s6AVomXkwrkaPTLP71BRBJBpSEca9313Nj4KVaGEU6npV6iaYzJGA9p11KJBdV+Ort4is6sMkBhpGxJg2bq34kUC60nIrCdApuRXvYy8T+vm5jw2k+ZjBNDJZkvChOOTISy99GAKUoMn1iCiWL2VkRGWGFibEgLWwKRZeItJ7BKWpdVz61697VK/SZPpwgncArn4MEV1OEOGtAEAhJe4BXenGfn3flwPuetBSefOYYFOF+/lJOW5A==</latexit> r +2 <latexit sha1_base64=\"CCKE+tCbu5IbahI8Qhq9qBowFwY=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoMgCOEuCFoGbSwjmA9MjrC32UuW7O4du3tCONL4G2y1thNb/4ml/8S95AqT+GDg8d4MM/OCmDNtXPfbKaytb2xuFbdLO7t7+wflw6OWjhJFaJNEPFKdAGvKmaRNwwynnVhRLAJO28H4NvPbT1RpFskHM4mpL/BQspARbKz02AsEUv30ojbtlytu1Z0BrRIvJxXI0eiXf3qDiCSCSkM41rrrubHxU6wMI5xOS71E0xiTMR7SrqUSC6r9dHbxFJ1ZZYDCSNmSBs3UvxMpFlpPRGA7BTYjvexl4n9eNzHhtZ8yGSeGSjJfFCYcmQhl76MBU5QYPrEEE8XsrYiMsMLE2JAWtgQiy8RbTmCVtGpVz61695eV+k2eThFO4BTOwYMrqMMdNKAJBCS8wCu8Oc/Ou/PhfM5bC04+cwwLcL5+AZYnluU=</latexit> <latexit sha1_base64=\"CCKE+tCbu5IbahI8Qhq9qBowFwY=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoMgCOEuCFoGbSwjmA9MjrC32UuW7O4du3tCONL4G2y1thNb/4ml/8S95AqT+GDg8d4MM/OCmDNtXPfbKaytb2xuFbdLO7t7+wflw6OWjhJFaJNEPFKdAGvKmaRNwwynnVhRLAJO28H4NvPbT1RpFskHM4mpL/BQspARbKz02AsEUv30ojbtlytu1Z0BrRIvJxXI0eiXf3qDiCSCSkM41rrrubHxU6wMI5xOS71E0xiTMR7SrqUSC6r9dHbxFJ1ZZYDCSNmSBs3UvxMpFlpPRGA7BTYjvexl4n9eNzHhtZ8yGSeGSjJfFCYcmQhl76MBU5QYPrEEE8XsrYiMsMLE2JAWtgQiy8RbTmCVtGpVz61695eV+k2eThFO4BTOwYMrqMMdNKAJBCS8wCu8Oc/Ou/PhfM5bC04+cwwLcL5+AZYnluU=</latexit> <latexit sha1_base64=\"CCKE+tCbu5IbahI8Qhq9qBowFwY=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoMgCOEuCFoGbSwjmA9MjrC32UuW7O4du3tCONL4G2y1thNb/4ml/8S95AqT+GDg8d4MM/OCmDNtXPfbKaytb2xuFbdLO7t7+wflw6OWjhJFaJNEPFKdAGvKmaRNwwynnVhRLAJO28H4NvPbT1RpFskHM4mpL/BQspARbKz02AsEUv30ojbtlytu1Z0BrRIvJxXI0eiXf3qDiCSCSkM41rrrubHxU6wMI5xOS71E0xiTMR7SrqUSC6r9dHbxFJ1ZZYDCSNmSBs3UvxMpFlpPRGA7BTYjvexl4n9eNzHhtZ8yGSeGSjJfFCYcmQhl76MBU5QYPrEEE8XsrYiMsMLE2JAWtgQiy8RbTmCVtGpVz61695eV+k2eThFO4BTOwYMrqMMdNKAJBCS8wCu8Oc/Ou/PhfM5bC04+cwwLcL5+AZYnluU=</latexit> <latexit sha1_base64=\"CCKE+tCbu5IbahI8Qhq9qBowFwY=\">AAACAHicbVA9SwNBEJ2LXzF+RS1tFoMgCOEuCFoGbSwjmA9MjrC32UuW7O4du3tCONL4G2y1thNb/4ml/8S95AqT+GDg8d4MM/OCmDNtXPfbKaytb2xuFbdLO7t7+wflw6OWjhJFaJNEPFKdAGvKmaRNwwynnVhRLAJO28H4NvPbT1RpFskHM4mpL/BQspARbKz02AsEUv30ojbtlytu1Z0BrRIvJxXI0eiXf3qDiCSCSkM41rrrubHxU6wMI5xOS71E0xiTMR7SrqUSC6r9dHbxFJ1ZZYDCSNmSBs3UvxMpFlpPRGA7BTYjvexl4n9eNzHhtZ8yGSeGSjJfFCYcmQhl76MBU5QYPrEEE8XsrYiMsMLE2JAWtgQiy8RbTmCVtGpVz61695eV+k2eThFO4BTOwYMrqMMdNKAJBCS8wCu8Oc/Ou/PhfM5bC04+cwwLcL5+AZYnluU=</latexit> Figure 2: An example: calculating r (cid:48) 1 using kernel.", "embedding r m by integrating other relative posi-tions' influences: r (cid:48) m = + M (cid:88) j = MK m ( j ) r j .", "The intuition behind it is that if j is close to m , r j will exert more influence on r (cid:48) m than other distant relative positions.", "Fig. 2 shows an illustration for m = 1 .", "As K 0 , kernel-based embeddings devolve to vanilla ones.", "Thus, our kernel-based embedding scheme can be regarded as a regularized version of vanilla embedding.", "A ranking layer (parameterized by w r and b r ) with activation function f act ( ) is adopted to produce the ranking score y ij for each clause pair candidate p ij P (cid:48) :", "Our network RANKCP is optimized end-to-end.", "The loss function for the input document D consists of the following two parts.", "The first part measures the ranking scores of clause pairs.", "Pointwise ranking loss is defined as: L pair = (cid:88) p ij P (cid:48) ( y ij log y ij +(1 y ij ) log(1 y ij )) , (11) where y ij { 0 , 1 } is the ground-truth of the clause pair p ij ( y ij = 1 means that p ij is an emotion-cause pair), and f act ( ) is set to logistic function.", "It can also be computed by pairwise ranking loss, with a margin hyperparameter : L pair = (cid:88) { p + ,p }P (cid:48) p + (cid:31) p max { 0 , ( y + y )+ } , (12) where the ground-truth of clause pair p + is 1 while the ground-truth of clause pair p is 0 (thus p + 's score y + should rank higher than p 's score y ), and f act ( ) is set to tanh function.", "The second part of the loss function measures the pre-output y emo i and y cau i of graph attention # Doc.", "network (see Eq. 5).", "According to the ground-truth of clause pairs, we know whether a clause is an emotion/cause clause or not, thus we use two cross-entropy loss functions L emo and L cau to supervise the two pre-output predictions.", "This forms two-level supervision for both clause representation learning and clause pair ranking.", "At test time, a key problem is how to extract potential emotion-cause pairs according to the ranking scores of all pair candidates.", "Note that it is not easy to determine an overall threshold score that can be adopted to all documents for dividing candidates into emotion-cause pairs and negative ones.", "We adopt a lexicon-based extraction scheme to obtain emotion-cause pairs from the topN ranking list { p 1 , p 2 , . . . , p N } of a test document.", "We first extract the top pair p 1 (with the highest score) as an emotion-cause pair.", "Then, for each remaining clause pair p i = ( c i, 1 , c i, 2 ) { p 2 , . . . , p N } , we use a sentiment lexicon to determine whether the clause c i, 1 contains sentiment word(s).", "If so, we extract the pair p i as an emotion-cause pair.", "Therefore, our model is able to extract multiple emotion-cause pairs from a given document.", "We conduct extensive experiments to verify the effectiveness of our proposed model RANKCP.", "We use the benchmark dataset released by (Xia and Ding, 2019) to conduct our experiments.", "This dataset is constructed based on an emotion cause extraction corpus (Gui et al., 2016) that consists of 1,945 Chinese documents from SINA NEWS website.", "Table 1 shows the summary statistics.", "In our experiments, following the previous work, we use the same data split (10-fold cross-validation), and choose precision P , recall R and F-score F 1 as evaluation metrics: P = #correctly predicted pairs #predicted pairs , R = #correctly predicted pairs #ground truth pairs , F 1 = 2 P R P + R .", "Moreover, we also evaluate the performance on emotion clause extraction and cause clause extraction respectively.", "That is, we break down the emotion-cause pairs to a set of emotion clauses and a set of cause clauses, and then compute metrics for the two sets.", "Precision, recall and F-score are defined similar to those in Eq.", "14: replacing pairs with emotion clauses or cause clauses .", "Xia and Ding (2019) proposed three two-step systems.", "The first step extracts emotion clauses and cause clauses separately, and the second step is a binary classifier that filters out negative pairs.", "Specifically, the difference of their three systems exists at the first step.", "INDEP encodes clauses with bidirectional LSTM, then uses two independently bidirectional LSTMs to extract emotion and cause clauses respectively.", "INTER-CE is different from INDEP in that it first extracts cause clauses, and then the predicted distribution is utilized as extra feature to extract emotion clauses.", "INTER-EC is similar to INTER-CE except that it first extracts emotion clauses.", "For fair comparison, we adopt the same word embeddings as used in INTER-EC.", "We use LSTM as the RNN cell, and the dimension of clause representations is 200.", "We stack two graph attention layers to build the graph attention network, and we add dropout with rate 0.1 for each layer.", "The maximum relative position M is set to 12, and the dimension of relative position embedding is set to 50, with K = 1 in the RBF kernel function.", "3 Our implementation based on PyTorch is available at: https://github.com/Determined22/ Rank-Emotion-Cause .", "We train RANKCP using Adam optimizer with 0.001 learning rate and 4 mini-batch size, and (cid:96) 2 regularization coefficient is set to 1e-5.", "We choose pointwise ranking loss because training with it is faster than that with pairwise loss.", "We use ANTUSD (Wang and Ku, 2016) as the sentiment lexicon, 4 and the hyperparameter N is set to", "3. 4.2 Experimental Results Results on Emotion-Cause Pair Extraction Table 2 reports the comparative results on emotion-cause pair extraction and two sub-tasks, i.e., emotion clause extraction and cause clause extraction.", "5 Our one-step approach RANKCP shows clear advantage over other baseline systems on all three tasks, which obtains 4.82%, 3.18% and 3.17% F 1 improvements over the best-performing baseline system INTER-EC on three tasks respectively.", "More specifically, we can observe that the above advantage mainly originates from the significant improvement of recall R .", "Comparing to INTEREC, RANKCP achieves 8.43% and 6.60% improvements on emotion-cause pair extraction and cause clause extraction respectively, which indicates that our one-step solution can effectively extract more correct emotion-cause pairs without hurting the precision P .", "Comparison between the last two lines' results in Table 2 demonstrates the effectiveness of lexicon-based extraction.", "We can see that adding the lexicon-based extraction scheme can improve the recall R , indicating that it indeed obtains more correct emotion-cause pairs.", "Although the precision P slightly decreases, the F-score F 1 still performs better than only extracting the top-1 pair in a document.", "Thus, lexicon-based extraction is an effective 4 https://academiasinicanlplab.github.", "We further compare the results on extracting multiple pairs in one document.", "We divide each fold's test set into two subsets: one subset contains documents having only one emotion-cause pair, and the other subset contains documents having two or more emotion-cause pairs.", "Table 3 reports the comparative results on two subsets respectively.", "It can be seen that our model consistently outperforms INTER-EC on both subsets.", "Our one-step approach is relatively more effective for documents with more than one emotion-cause pair (over 13% F 1 improvement).", "We also provide the comparative results with recently-proposed methods for emotion cause extraction task: a rule-based method RB (Lee et al., 2010a), a traditional machine learning based method MULTI-KERNEL (Gui et al., 2016), and three neural methods CONVMS-MEMNET (Gui et al., 2017), CANN (Li et al., 2018), and RTHN (Xia et al., 2019).", "Note that all of them utilize known emotion clauses as model input.", "The top half of Table 4 reports their performance.", "The bottom half of Table 4 shows the comparative results of methods without using known emotion clauses as model input.", "It clearly demonstrates Emotion Cause Extraction F 1 P RRB 0.5243 0.6747 0.4287 MULTI-KERNEL 0.6752 0.6588 0.6927 CONVMS-MEMNET 0.6955 0.7076 0.6838 CANN 0.7266 0.7721 0.6891 RTHN 0.7677 0.7697 0.7662 Cause Clause Extraction F 1 P RCANN E 0.3797 0.4826 0.3160 RTHN-APE 0.5694 0.5800 0.5618 INTER-EC 0.6507 0.7041 0.6083 RANKCP 0.6824 0.6927 0.6743 Table 4: Results on emotion cause extraction task.", "that our proposed RANKCP performs much better than other methods.", "Besides, although RANKCP does not utilize known emotions of test documents as model input, it still outperforms RB and MULTI-KERNEL , and is comparable to CONVMSMEMNET .", "Thus, our approach benefits from inter-clause modeling and shows its effectiveness on cause clause extraction.", "We conduct ablation studies to analyze the effects of different components in our approach.", "Our model is trained with a mixture of two supervised signals: a low-level signal L emo + L cau on clause representation learning at the output of graph attention network (see Eq. 5), and a high-level signal L pair on clause pair representation learning and ranking (see Eq. 10).", "To verify the effect of low-level supervision, we train our model with L pair only, and the results compared with those of our full model are given in Table", "5. It shows that training with two-level supervision boosts the extraction performance.", "This indicates that incorporating a low-level supervision helps learn better clause representations, and eventually facilitates the clause pair representation learning and ranking process.", "Graph attention network for modeling inter-clause latent relationships is the key component of our approach.", "We vary the number of graph attention layers (ranging from 0 to 3) to test its effect, and the results on emotion-cause pair extraction and cause clause extraction are shown in Fig.", "3. Obviously, the model without graph attention layer can not obtain good performance.", "Our approach achieves the best performance with two-layer graph attention network, indicating that inter-clause relationships can be modeled sufficiently without stacking a lot of layers in this task.", "We further investigate if we can obtain ideal performance by directly using clause representations to predict emotion clauses and cause clauses.", "In other words, we remove the clause pair representation learning and ranking component, and utilize the graph attention network's predictions (i.e., Eq. 5) to produce emotion-cause pairs.", "After predicting emotion clauses and cause clauses in a document, we consider all combinations of the predicted emotions and causes as the extracted emotion-cause pairs, and the comparative results of this variant model and our full model are shown in Fig.", "4. RANKCP performs much better than the variant one (especially on Recall), demonstrating that Relative Position Scheme F 1 P R No (top-1 ext.) 0.6267 0.6600 0.5973 No (lexicon-based ext.) 0.6260 0.6378 0.6160 Vanilla (top-1 ext.) 0.6468 0.6810 0.6164 Vanilla (lexicon-based ext.) 0.6582 0.6669 0.6510 Kernel (top-1 ext.) 0.6562 0.6910 0.6254 Kernel (lexicon-based ext.) 0.6610 0.6698 0.6546 Table 6: Comparison on relative position embedding schemes.", "only offering clause-level predictions is not suitable for emotion-cause pair extraction task.", "Thus, combining clause-level and clause pair representation learning in a unified one-step model is indeed effective for extracting emotion-cause pairs.", "We remove the relative position embedding part in RANKCP to verify its effect.", "We also compare vanilla and kernel-based relative position embedding schemes.", "The results are given in Table", "6. Removing relative position embedding results in performance degradation, indicating that relative position between a clause pair is indeed useful for prediction.", "Another observation from the first two lines is that lexicon-based extraction can not outperform top-1 extraction, which further verifies that the model without relative position embedding can not offer ideal ranking list.", "Kernel-based embedding achieves better performance than vanilla one on both top-1 and lexicon-based extractions, thus considering the mutual impact among relative positions helps obtain more powerful clause pair representations and further improves the performance of emotion-cause pair extraction.", "( c 5 , c 4 ) while INTER-EC fails: 4 11 ( c 1 ) ( c 2 ) ( c 3 ) ( c 4 ) ( c 5 ) ( c 6 )", "Translation: On April 11th ( c 1 ), a netizen posted her complains on the Internet ( c 2 ), she has a wacko boyfriend ( c 3 ), he never goes to a restaurant without discounts ( c 4 ), this makes her feel bad ( c 5 ), and very embarrassed ( c 6 ).", "We visualize the attention weights for two clauses c 4 and c 5 in Fig.", "5. The emotion clause c 5 attends the corresponding cause c 4 with the highest c 3 <latexit sha1_base64=\"z+eTTvCMDCz8piPM6gBxcLrfdTs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0m0oMeiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPqX/XLFrbpzkFXi5aQCORr98ldvELM0QmmYoFp3PTcxfkaV4UzgtNRLNSaUjekQu5ZKGqH2s/mpU3JmlQEJY2VLGjJXf09kNNJ6EgW2M6JmpJe9mfif101NeO1nXCapQckWi8JUEBOT2d9kwBUyIyaWUKa4vZWwEVWUGZtOyYbgLb+8SloXVc+teve1Sv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP+6rjY0=</latexit> <latexit sha1_base64=\"z+eTTvCMDCz8piPM6gBxcLrfdTs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0m0oMeiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPqX/XLFrbpzkFXi5aQCORr98ldvELM0QmmYoFp3PTcxfkaV4UzgtNRLNSaUjekQu5ZKGqH2s/mpU3JmlQEJY2VLGjJXf09kNNJ6EgW2M6JmpJe9mfif101NeO1nXCapQckWi8JUEBOT2d9kwBUyIyaWUKa4vZWwEVWUGZtOyYbgLb+8SloXVc+teve1Sv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP+6rjY0=</latexit> <latexit sha1_base64=\"z+eTTvCMDCz8piPM6gBxcLrfdTs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0m0oMeiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPqX/XLFrbpzkFXi5aQCORr98ldvELM0QmmYoFp3PTcxfkaV4UzgtNRLNSaUjekQu5ZKGqH2s/mpU3JmlQEJY2VLGjJXf09kNNJ6EgW2M6JmpJe9mfif101NeO1nXCapQckWi8JUEBOT2d9kwBUyIyaWUKa4vZWwEVWUGZtOyYbgLb+8SloXVc+teve1Sv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP+6rjY0=</latexit> <latexit sha1_base64=\"z+eTTvCMDCz8piPM6gBxcLrfdTs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0m0oMeiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPqX/XLFrbpzkFXi5aQCORr98ldvELM0QmmYoFp3PTcxfkaV4UzgtNRLNSaUjekQu5ZKGqH2s/mpU3JmlQEJY2VLGjJXf09kNNJ6EgW2M6JmpJe9mfif101NeO1nXCapQckWi8JUEBOT2d9kwBUyIyaWUKa4vZWwEVWUGZtOyYbgLb+8SloXVc+teve1Sv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP+6rjY0=</latexit> c 4 <latexit sha1_base64=\"ypzjBHHz1k2rmAf3Jp3DJGBI4uE=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64=\"ypzjBHHz1k2rmAf3Jp3DJGBI4uE=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64=\"ypzjBHHz1k2rmAf3Jp3DJGBI4uE=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64=\"ypzjBHHz1k2rmAf3Jp3DJGBI4uE=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH/AvjY4=</latexit> c 5 <latexit sha1_base64=\"LxO/yXw7L0hYzFEtg5wEptC0xMA=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64=\"LxO/yXw7L0hYzFEtg5wEptC0xMA=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64=\"LxO/yXw7L0hYzFEtg5wEptC0xMA=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64=\"LxO/yXw7L0hYzFEtg5wEptC0xMA=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/GzjY8=</latexit> c 6 <latexit sha1_base64=\"mvSOnvlAfRmbt3bzfjq6KuTRErs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/M3jZA=</latexit> <latexit sha1_base64=\"mvSOnvlAfRmbt3bzfjq6KuTRErs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/M3jZA=</latexit> <latexit sha1_base64=\"mvSOnvlAfRmbt3bzfjq6KuTRErs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/M3jZA=</latexit> <latexit sha1_base64=\"mvSOnvlAfRmbt3bzfjq6KuTRErs=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0lEqseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/M3jZA=</latexit> c 1 <latexit sha1_base64=\"yRjoBq99koyFx8YvuYkRx/CJ8Y4=\">AAAB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0lE0GPRi8eK9gPaUDbbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZJX4BalCgUbf/eoNEpbFXCGT1Jiu76UY5FSjYJJPK73M8JSyMR3yrqWKxtwE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E//zuhlG10EuVJohV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K//PIqaV3UfK/m319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0lpxi5hj+wPn8AeujjYs=</latexit> <latexit sha1_base64=\"yRjoBq99koyFx8YvuYkRx/CJ8Y4=\">AAAB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0lE0GPRi8eK9gPaUDbbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZJX4BalCgUbf/eoNEpbFXCGT1Jiu76UY5FSjYJJPK73M8JSyMR3yrqWKxtwE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E//zuhlG10EuVJohV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K//PIqaV3UfK/m319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0lpxi5hj+wPn8AeujjYs=</latexit> <latexit sha1_base64=\"yRjoBq99koyFx8YvuYkRx/CJ8Y4=\">AAAB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0lE0GPRi8eK9gPaUDbbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZJX4BalCgUbf/eoNEpbFXCGT1Jiu76UY5FSjYJJPK73M8JSyMR3yrqWKxtwE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E//zuhlG10EuVJohV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K//PIqaV3UfK/m319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0lpxi5hj+wPn8AeujjYs=</latexit> <latexit sha1_base64=\"yRjoBq99koyFx8YvuYkRx/CJ8Y4=\">AAAB6nicbVBNS8NAEJ3Ur1q/oh69LBbBU0lE0GPRi8eK9gPaUDbbTbt0swm7E6GE/gQvHhTx6i/y5r9x2+agrQ8GHu/NMDMvTKUw6HnfTmltfWNzq7xd2dnd2z9wD49aJsk0402WyER3Qmq4FIo3UaDknVRzGoeSt8Px7cxvP3FtRKIecZLyIKZDJSLBKFrpgfX9vlv1at4cZJX4BalCgUbf/eoNEpbFXCGT1Jiu76UY5FSjYJJPK73M8JSyMR3yrqWKxtwE+fzUKTmzyoBEibalkMzV3xM5jY2ZxKHtjCmOzLI3E//zuhlG10EuVJohV2yxKMokwYTM/iYDoTlDObGEMi3srYSNqKYMbToVG4K//PIqaV3UfK/m319W6zdFHGU4gVM4Bx+uoA530IAmMBjCM7zCmyOdF+fd+Vi0lpxi5hj+wPn8AeujjYs=</latexit> c 2 <latexit sha1_base64=\"tH/lnfdmPbXeWx2i9xfZDS+3iMU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mKUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudWvfurSuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH+0njYw=</latexit> <latexit sha1_base64=\"tH/lnfdmPbXeWx2i9xfZDS+3iMU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mKUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudWvfurSuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH+0njYw=</latexit> <latexit sha1_base64=\"tH/lnfdmPbXeWx2i9xfZDS+3iMU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mKUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudWvfurSuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH+0njYw=</latexit> <latexit sha1_base64=\"tH/lnfdmPbXeWx2i9xfZDS+3iMU=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mKUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68Ttq1qudWvfurSuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH+0njYw=</latexit> c 4 <latexit sha1_base64=\"ypzjBHHz1k2rmAf3Jp3DJGBI4uE=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64=\"ypzjBHHz1k2rmAf3Jp3DJGBI4uE=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64=\"ypzjBHHz1k2rmAf3Jp3DJGBI4uE=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH/AvjY4=</latexit> <latexit sha1_base64=\"ypzjBHHz1k2rmAf3Jp3DJGBI4uE=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI9FLx4r2g9oQ9lsJ+3SzSbsboQS+hO8eFDEq7/Im//GbZuDtj4YeLw3w8y8IBFcG9f9dgobm1vbO8Xd0t7+weFR+fikreNUMWyxWMSqG1CNgktsGW4EdhOFNAoEdoLJ7dzvPKHSPJaPZpqgH9GR5CFn1FjpgQ1qg3LFrboLkHXi5aQCOZqD8ld/GLM0QmmYoFr3PDcxfkaV4UzgrNRPNSaUTegIe5ZKGqH2s8WpM3JhlSEJY2VLGrJQf09kNNJ6GgW2M6JmrFe9ufif10tNeO1nXCapQcmWi8JUEBOT+d9kyBUyI6aWUKa4vZWwMVWUGZtOyYbgrb68TtpXVc+teve1SuMmj6MIZ3AOl+BBHRpwB01oAYMRPMMrvDnCeXHenY9la8HJZ07hD5zPH/AvjY4=</latexit> c 5 <latexit sha1_base64=\"LxO/yXw7L0hYzFEtg5wEptC0xMA=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64=\"LxO/yXw7L0hYzFEtg5wEptC0xMA=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64=\"LxO/yXw7L0hYzFEtg5wEptC0xMA=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/GzjY8=</latexit> <latexit sha1_base64=\"LxO/yXw7L0hYzFEtg5wEptC0xMA=\">AAAB6nicbVBNS8NAEJ3Ur1q/qh69LBbBU0nEoseiF48V7Qe0oWy2k3bpZhN2N0IJ/QlePCji1V/kzX/jts1BWx8MPN6bYWZekAiujet+O4W19Y3NreJ2aWd3b/+gfHjU0nGqGDZZLGLVCahGwSU2DTcCO4lCGgUC28H4dua3n1BpHstHM0nQj+hQ8pAzaqz0wPq1frniVt05yCrxclKBHI1++as3iFkaoTRMUK27npsYP6PKcCZwWuqlGhPKxnSIXUsljVD72fzUKTmzyoCEsbIlDZmrvycyGmk9iQLbGVEz0sveTPzP66YmvPYzLpPUoGSLRWEqiInJ7G8y4AqZERNLKFPc3krYiCrKjE2nZEPwll9eJa2LqudWvfvLSv0mj6MIJ3AK5+DBFdThDhrQBAZDeIZXeHOE8+K8Ox+L1oKTzxzDHzifP/GzjY8=</latexit> Figure 5: Attention weights for two clauses c 4 and c 5 .", "weight, indicating that graph attention effectively captures the relationship between the two clauses.", "Emotion Cause Extraction Lee et al. (2010a,b) first studied emotion cause extraction and designed a linguistic rule-based system to detect cause events.", "Early work attempted rule-based (Chen et al., 2010; Neviarouskaya and Aono, 2013; Gao et al., 2015), commonsense-based (Russo et al., 2011), and traditional machine learning based (Ghazi et al., 2015) approaches to extract causes for certain emotion expressions.", "Gui et al. (2016) proposed an event-driven multi-kernel SVM method and released a benchmark corpus.", "Both feature based (Xu et al., 2019) and neural approaches (Gui et al., 2017; Li et al., 2018; Ding et al., 2019; Yu et al., 2019) have been proposed recently.", "Xia et al. (2019) adopted Transformer encoder augmented with position information and integrated global prediction embedding to improve performance.", "Fan et al. (2019) incorporated sentiment and position regularizers to restrain parameter learning.", "Hu et al. (2019) exploited external sentiment classification corpus to pretrain the model.", "In other research lines, some work (Cheng et al., 2017) extracted emotion causes in the context of microblog with multi-user structure.", "Besides, Kim and Klinger (2018) and Bostan et al. (2020) addressed emotions as structured phenomena, and studied the semantic roles of emotions including trigger phrases, experiencers, targets and causes, as well as the reader's perception.", "Emotion-Cause Pair Extraction All previous studies on emotion cause analysis need to take known emotion clauses as model input.", "The pio-neer work (Xia and Ding, 2019) first put forward emotion-cause pair extraction task.", "They proposed a two-step approach to extract emotion and cause clauses separately, and then train a classifier to filter out negative pairs.", "Unlike their work, our work is a one-step solution for end-to-end emotion-cause pair extraction via effective inter-clause modeling, achieving significantly better performance.", "In this paper, we propose the first one-step neural approach RANKCP to tackle the problem of emotion-cause pair extraction, which emphasizes inter-clause modeling from a ranking perspective.", "Our approach effectively models inter-clause relationships to learn clause representations, and integrates relative position enhanced clause pair ranking into a unified neural network to extract emotion-cause pairs in an end-to-end fashion.", "Experimental results on the benchmark dataset demonstrate that RANKCP significantly outperforms previous systems, and further analysis verifies the effectiveness of each component in our model.", "In future work, we shall explore the following directions.", "First, current studies on emotion cause analysis mainly focus on clause-level extraction which is relatively coarse-grained, and it is desirable to further design fine-grained methods that can extract span-level or phrase-level emotion expressions and causes.", "Second, designing effective methods to inject appropriate linguistic knowledge into neural models is valuable to emotion analysis tasks (Ke et al., 2019; Zhong et al., 2019).", "Finally, it would be interesting to study the semantic roles of emotion (Bostan et al., 2020), which considers the full structure of an emotion expression and is more challenging.", "This work was supported in part by the Ministry of Science and Technology of China under Grants #2016QY02D0305 and #2018ZX10201001, and NSFC under Grants #71621002, #61671450 and #11832001.", "We thank the anonymous reviewers for their valuable comments." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "method", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "other", "other" ]
[ "Current methods for few-shot fine-tuning of pretrained masked language models (PLMs) require carefully engineered prompts and verbalizers for each new task to convert examples into a cloze-format that the PLM can score.", "In this work, we propose PERFECT , a simple and efficient method for few-shot fine-tuning of PLMs without relying on any such handcrafting , which is highly effective given as few as 32 data points.", "PERFECT makes two key design choices: First, we show that manually engineered task prompts can be replaced with task-specific adapters that enable sample-efficient fine-tuning and reduce memory and storage costs by roughly factors of 5 and 100, respectively.", "Second, instead of using handcrafted verbalizers, we learn new multi-token label embeddings during fine-tuning, which are not tied to the model vocabulary and which allow us to avoid complex auto-regressive decoding.", "These embeddings are not only learnable from limited data but also enable nearly 100x faster training and inference.", "Experiments on a wide range of few shot NLP tasks demonstrate that PERFECT , while being simple and efficient, also outperforms existing state-of-the-art few-shot learning methods.", "Our code is publicly available at https://github.com/ facebookresearch/perfect.git .", "Recent methods for few-shot language model tuning obtain impressive performance but require careful engineering of prompts and verbalizers to convert inputs to a cloze-format (Taylor, 1953) that can be scored with pre-trained language models (PLMs) (Radford et al., 2018; Radford et al.; Brown et al., 2020; Schick and Schtze, 2021a,b).", "For example, as Figure 1 shows, a sentiment classifier can be designed by inserting the input text x in a prompt template x It was [MASK] where verbalizers (e.g., great' and terrible') are substituted for the [MASK] to score target task labels (positive' or negative').", "In this paper, we show that such engineering is [CLS] The restaurant had excellent foods.", "It was [MASK] [SEP] Pretrained Language Model Input Pattern MLM Head terrible great Verbalizers positive negative Labels Figure 1: Existing few-shot fine-tuning methods require manual engineering to reduce new tasks to masked language modeling.", "not needed for few-shot learning and instead can be replaced with simple methods for data-efficient fine-tuning with as few as 32 end-task examples.", "More specifically, we propose PERFECT , a Prompt-free and Efficient paRadigm for FEw-shot Cloze-based fine-Tuning.", "To remove handcrafted patterns, PERFECT uses task-specific adapter layers (Houlsby et al., 2019; Pfeiffer et al., 2020) (3.1).", "Freezing the underlying PLM with millions or billions of parameters (Liu et al., 2019; Raffel et al., 2020), and only tuning adapters with very few new parameters saves on memory and storage costs (4.2), while allowing very sample-efficient tuning (4).", "It also stabilizes the training by increasing the worst-case performance and decreasing variance across the choice of examples in the few shot training sets (4.3).", "To remove handcrafted verbalizers (with variable token lengths), we introduce a new multi-token fixed-length classifier scheme that learns task label embeddings which are independent from the language model vocabulary during fine-tuning (3.2).", "We show (4) that this approach is sample efficient and outperforms carefully engineered verbalizers from random initialization (4).", "It also allows us to avoid previously used expensive auto-regressive decoding schemes (Schick and Schtze, 2021b), by leveraging prototypical networks (Snell et al., 2017) over multiple tokens.", "Overall, these changes enable up to 100x faster learning and inference (4.2).", "PERFECT has several advantages: It avoids engineering patterns and verbalizers for each new task, which can be cumbersome.", "Recent work has shown that even some intentionally irrelevant or misleading prompts can perform as well as more interpretable ones (Webson and Pavlick, 2021).", "Unlike the zero-shot or extreme few-shot case, where prompting might be essential, we argue in this paper that all you need is tens of training examples to avoid these challenges by adopting PERFECT or a similar data-efficient learning method.", "Experiments on a wide variety of NLP tasks demonstrate that PERFECT outperforms state-of-the-art prompt-based methods while being significantly more efficient in inference and training time, storage, and memory usage (4.2).", "To the best of our knowledge, we are the first to propose a few-shot learning method using the MLM objective in PLMs that provide state-of-the-art results while removing all per-task manual engineering.", "Problem formulation: We consider a general problem of fine-tuning language models in a few-shot setting, on a small training set with K unique classes and N examples per class, such that the total number of examples is |D| = N K .", "Let D = Kk =1 D k be the given training set, where D k = { ( x ik ,y ik ) } Ni =1 shows the set of examples labeled with class k and y ik Y is the corresponding label, where |Y| = K .", "We additionally assume access to a development set with the same size as the training data.", "Note that larger validation sets can grant a substantial advantage (Perez et al., 2021), and thus it is important to use a limited validation size to be in line with the goal of few-shot learning.", "Unless specified otherwise, in this work, we use 16 training examples ( N = 16 ) and a validation set with 16 examples, for a total of 32-shot learning.", "Recent work has shown that fine-tuning all parameters of PLMs with a large number of parameters in low-resource datasets can lead to a sub-optimal solution (Peters et al., 2019; Dodge et al., 2020).", "As shown in Figure 2, Rebuffi et al. (2018) and Houlsby et al. (2019) suggest an efficient alternative, by inserting small task-specific modules called adapters within layers of a PLMs.", "They then only train the newly added adapters and layer normalization, while fixing the remaining parameters of a PLM.", "b) a feed-forward block, where both modules are followed by a skip connection.", "As depicted in Figure 2, adapters are normally inserted after each of these blocks before the skip connection.", "Adapters are bottleneck architectures.", "By keeping input and output dimensions the same, they introduce no additional architectural changes.", "Each adapter, A ( . ) RH , consists of a down-projection, D ( . ) RH B , a non-linearity, such as GeLU (Hendrycks and Gimpel, 2016), and an up-projection U ( . ) RB H , where H is the dimension of input hidden states x , and B is the bottleneck size.", "Formally defined as: A ( x )= U ( GeLU ( D ( x )))+ x , (1) 2.2 Prompt-based Fine-tuning Standard Fine-tuning: In standard fine-tuning with PLMs (Devlin et al., 2019), first a special [CLS] token is appended to the input x , and then the PLM maps it to a sequence of hidden representations h = ( h 1 ,..., h S ) with h i RH , where H is the hidden dimension, and S is the maximum sequence length.", "Then, a classifier, softmax ( WT h [CLS] ) , using the embedding of the classification token ( h [CLS] ), is trained end-to-end for each downstream task.", "The main drawback of this approach is the discrepancy between the pre-training and fine-tuning phases since PLMs have been trained to predict mask tokens in a masked language modeling task (Devlin et al., 2019).", "2021a,b; Gao et al., 2021) formulates tasks in a cloze-format (Taylor, 1953).", "This way, the model can predict targets with a masked language modeling (MLM) objective .", "For example, as shown in Figure 1, for a sentiment classification task, inputs are converted to: x prompt = [CLS] x .", "Then, the PLM determines which verbalizer (e.g., great' and terrible') is the most likely substitute for the mask in the x prompt .", "This subsequently determines the score of targets (positive' or negative').", "In detail: Training strategy: Let M : Y V be a mapping from target labels to individual words in a PLM's vocabulary.", "We refer to this mapping as verbalizers .", "Then the input is converted to x prompt = T ( x ) by appending a pattern and a mask token to x so that it has the format of a masked language modeling input.", "Then, the classification task is converted to a MLM objective (Tam et al., 2021; Schick and Schtze, 2021a), and the PLM computes the probability of the label y as: p ( y | x )= p ( [MASK] = M ( y ) | x prompt ) = exp( WT M ( y ) h [MASK] ) (cid:80) v V exp( W Tv h [MASK] ) , (2) where h [MASK] is the last hidden representation of the mask, and W v shows the output embedding of the PLM for each verbalizer v V .", "For many tasks, verbalizers have multiple tokens.", "Schick and Schtze (2021b) extended (2) to multiple mask tokens by adding the maximum number of mask tokens M needed to express the outputs (verbalizers) for a task.", "In that case, Schick and Schtze (2021b) computes the probability of each class as the summation of the log probabilities of each token in the corresponding verbalizer, and then they add a hinge loss to ensure a margin between the correct verbalizer and the incorrect ones.", "Inference strategy: During inference, the model needs to select which verbalizer to use in the given context.", "Schick and Schtze (2021b) predicts the verbalizer tokens in an autoregressive fashion.", "They first trim the number of mask tokens from M to each candidate verbalizer's token length and compute the probability of each mask token.", "They then choose the predicted token with the highest probability and replace the corresponding mask token.", "Conditioning MASK 1 CLS Multi-head Attention Adapter + PLML aye r Layer norm Feed forward + Layer norm Embedding Layer SEP TOK 1 Adapter MASK Embedding 1 Hinge Loss D es i r e d L a b e l s E s t i m a t e d L a b e l s TOKNMASKMMASK Embedding MWM Label Embedding W 1 Input Masks Figure 3: We remove handcrafted patterns and verbalizers.", "on this new token, the probabilities of the remaining mask positions are recomputed.", "They repeat this autoregressive decoding until they fill all mask positions.", "This inference strategy is very slow, as the number of forward passes increases with the number of classes and the number of verbalizer's tokens.", "This formulation obtained impressive few-shot performance with PLMs.", "However, the success of this approach heavily relies on engineering handcrafted patterns and verbalizers .", "Coming up with suitable verbalizers and patterns can be difficult (Mishra et al., 2022b,a).", "Additionally, the performance is sensitive to the wording of patterns (Zhao et al., 2021; Perez et al., 2021; Schick and Schtze, 2021a; Jiang et al., 2020) or to the chosen verbalizers (Webson and Pavlick, 2021).", "In addition, handcrafted verbalizers cause problems for efficient training:", "a) they require updating the PLM embedding layer, causing large memory overhead;", "b) fine-tuning PLMs also requires a very small learning rate (usually 10 5 ), which slows down tuning the parameters of the verbalizers;", "c) modeling verbalizers as one of the tokens of 3640 the PLM vocabulary (perhaps unintentionally) impacts the input representation during tuning;", "d) verbalizers have variable token lengths, complicating the implementation in a vectorized format, thereby making it challenging to efficiently fine-tune PLMs.", "We propose PERFECT , a verbalizer and pattern free few-shot learning method.", "We design PERFECT to be close to the pre-training phase, similar to the PET family of models (Schick and Schtze, 2021b; Gao et al., 2021), while replacing handcrafted patterns and verbalizers with new components that are designed to describe the task and learn the labels.", "As shown in Figure 3, we first convert each input x input to its masked language modeling (MLM) input containing M mask tokens [MASK] 1 with no added patterns, denoted as x masked = T ( x input ) .", "2 PERFECT then trains a classifier per-token and optimizes the average multi-class hinge loss over each mask position.", "Three main components play a role in the success of PERFECT :", "a) a pattern-free task description, where we use task-specific adapters to efficiently tell the model about the given task, replacing previously manually engineered patterns (3.1),", "b) multi-token label-embedding as an efficient mechanism to learn the label representations, removing manually designed verbalizers (3.2).", "c) an efficient inference strategy building on top of the idea of prototypical networks (Snell et al., 2017) (3.4), which replaces prior iterative autoregressive decoding methods (Schick and Schtze, 2021b).", "As shown in Figure 3, we fix the underlying PLM model and only optimize the new parameters that we add (green boxes).", "This includes the task-specific adapters to adapt the representations for a given task and the multi-token label representations.", "We detail each of these components below.", "We use task-specific adapter layers to provide the model with learned, implicit task descriptions.", "Adapters additionally bring multiple other benefits:", "a) fine-tuning all weights of PLMs with millions or billions of parameters is sample-inefficient, and can be unstable in low-resource settings (Dodge et al., 1 We discuss the general case with inserting multiple masks; for some datasets this improves performance (4.3.1).", "2 We insert mask tokens after the input string in single-sentence benchmarks, and after the first sentence in the case of sentence-pair datasets and encode both sentences as a single input, which we found to perform the best (Appendix C).", "2020); adapters allow sample-efficient fine-tuning, by keeping the underlying PLM fixed,", "b) adapters reduce the storage and memory footprints (4.2),", "c) they also increase stability and performance (4), making them an excellent choice for few-shot fine-tuning.", "To our knowledge, this is the first approach for using task-specific adapters to effectively and efficiently remove patterns in few-shot learning.", "Experimental results in 4 show its effectiveness compared to handcrafted patterns and soft prompts (Li and Liang, 2021; Lester et al., 2021).", "We freeze the weights of the PLM's embedding layer and introduce a separate label embedding L RK M H , which is a multi-token label representation where M is the number of tokens representing each label, K indicates the number of classes, H is the input hidden dimension.", "Using a fixed number of tokens M for each label, versus variable-token length verbalizers used in prior work (Schick and Schtze, 2021a,b) substantially simplifies the implementation and accelerates the training (4.2).", "As shown in Figure 3, we optimize label embeddings so that the PLM predicts the correct label, and optimize adapters to adapt the PLM for the given task.", "For label embeddings, PERFECT trains a classifier per token and optimizes the average multi-class hinge loss over all mask positions.", "Given x masked , let h [MASK] i be the embedding of its i -th mask token from the last layer of the PLM encoder.", "Additionally, let f ( . ) : RH RK be a per-token classifier that computes the predictions by multiplying the mask token embedding with its corresponding label embedding.", "Formally defined as: t i = f ( h [MASK] i )= LT i h [MASK] i , where L i RK H shows the label embedding for the i -th mask position.", "Then, for each mask position, we optimize a multi-class hinge loss between their scores t i and labels.", "Formally defined as: L ( x ,y,i )= (cid:80) Kk =1 ,k = y max(0 ,m t iy + t ik ) K , where t ik shows the k -th element of t i , representing the score corresponding to class k , and m is the margin, which we fix to the default value of m = 1 .", "Then, the final loss is computed by averaging the loss 3641 over all mask tokens and training samples: L = 1 M |D| (cid:88) ( x ,y ) D M (cid:88) i =1 L ( x ,y,i ) (3) 3.4 Inference with PERFECT During evaluation, instead of relying on the prior iterative autoregressive decoding schemes (Schick and Schtze, 2021b), we classify a query point by finding the nearest class prototype to the mask token embeddings: y =argmax y Y max i { 1 ,...,M } (cid:16) exp d ( h qi , c iy ) (cid:17) , (4) where d is squared euclidean distance, 3 h qi indicates the embedding of the i -th mask position for the query sample q , and c iy RD is the prototype representation of the i -th mask token with class label y , i.e., the mean embedding of i -th mask position in all training samples with label y : c iy = 1 |D y | (cid:88) b D y h bi , (5) where h bi shows the embedding of i -th mask position for training sample b , and D y is the training instances with class y .", "This strategy closely follows prototypical networks (Snell et al., 2017), but applied across multiple tokens.", "We choose this form of inference because prototypical networks are known to be sample efficient and robust (Snell et al., 2017), and because it substantially speeds up evaluation compared to prior methods (4.2).", "We conduct extensive experiments on a variety of NLP datasets to evaluate the performance of PERFECT and compare it with state-of-the-art few-shot learning.", "Datasets: We consider 7 tasks and 12 datasets: 1) the sentiment analysis datasets SST-2 (Socher et al., 2013), SST-5 (Socher et al., 2013), MR (Pang and Lee, 2005), and CR (Hu and Liu, 2004), 2) the subjectivity classification dataset SUBJ (Pang and Lee, 2004), 3) the question classification dataset TREC (Voorhees and Tice, 2000), 4) the natural language inference datasets CB (De Marneffe et al., 2019) and RTE (Wang et al., 2019a), 5) the question answering dataset QNLI (Rajpurkar et al., 2016), 6) the word sense disambiguation dataset WiC (Pilehvar 3 We also tried with cosine similarity but found a slight improvement with squared Euclidean distance (Snell et al., 2017).", "and Camacho-Collados, 2019), 7) the paraphrase detection datasets MRPC (Dolan and Brockett, 2005) and QQP.", "4 See datasets statistics in Appendix A. For MR, CR, SST-5, SUBJ, and TREC, we test on the original test sets, while for other datasets, since test sets are not publicly available, we test on the original validation set.", "We sample 16 instances per label from the training set to form training and validation sets.", "Baselines We compare with the state-of-the-art few-shot learning of PET and fine-tuning: PET (Schick and Schtze, 2021a,b) is the state-of-the-art few-shot learning method that employs carefully crafted verbalizers and patterns.", "We report the best (PET-best) and average (PET-average) results among all patterns and verbalizers.", "5 FINETUNE The standard fine-tuning (Devlin et al., 2019), with adding a classifier on top of the [CLS] token and fine-tuning all parameters.", "Our method We study the performance of PERFECT and perform an extensive ablation study to show the effectiveness of our design choices: PERFECT -rand We randomly initialize the label embedding L from a normal distribution N (0 , ) with = 10 4 (chosen based on validation performance, see Appendix D) without relying on any handcrafted patterns and verbalizers .", "As an ablation, we study the following two variants: PERFECT -init We initialize the label embedding with the token embeddings of manually designed verbalizers in the PLM's vocabulary to study the impact of engineered verbalizers.", "prompt+mte To compare the impact of adapters versus soft prompt-tuning for few-shot learning, we append trainable continuous prompt embeddings to the input (Lester et al., 2021).", "Then we only tune the soft prompt and multi-token label embeddings (mte).", "bitfit+mte Following Cai et al. (2020) and Rav-fogel et al. (2021), we tune biases as an alternative to adapters.", "We additionally tune multi-token label embeddings.", "Logan IV et al. (2021) Following Logan IV et al. (2021), we remove patterns and tune the biases in the PET.", "Experimental details: We use the RoBERTa large model (Liu et al., 2019) (355M parameters) as the underlying PLM for all methods.", "We use the HuggingFace PyTorch implementation (Wolf et al., 2020).", "For 4 https://quoradata.quora.com/ 5 For a controlled study, we use the MLM variant shown in (2), which has been shown to perform the best (Tam et al., 2021).", "the baselines, we used the carefully manually designed patterns and verbalizers in Gao et al. (2021), Min et al. (2021), and Schick and Schtze (2021b) (usually 5 different options per datasets; see Appendix B).", "We evaluate all methods using 5 different random samples to create the training/validation sets and 4 different random seeds for training.", "Therefore, for PET-average, we report the results on 20 x 5 (number of patterns and verbalizers) = 100 runs, while for PET-best and our method, we report the results over 20 runs.", "The variance in few-shot learning methods is usually high (Perez et al., 2021; Zhao et al., 2021; Lu et al., 2021).", "Therefore, we report average, worst-case performance, and standard deviation across all runs, where the last two values can be important for risk-sensitive applications (Asri et al., 2016).", "Table 1 shows the performance of all methods.", "PERFECT obtains state-of-the-art results, improving the performance compared to PET-average by +1.1 and +4.6 points for single-sentence and sentence-pair datasets respectively.", "It even outperforms PET-best, where we report the best performance of PET across multiple manually engineered patterns and verbalizers.", "Moreover, PERFECT generally improves the minimum performance and reduces standard deviation substantially.", "Finally, PERFECT is also significantly more efficient: reducing the training and inference time, memory usage, and storage costs (see 4.2).", "PET-best improves the results over PET-average showing that PET is unstable to the choice of patterns and verbalizers; this difference is more severe for sentence-pair benchmarks.", "This might be because the position of the mask highly impacts the results, and the patterns used for sentence-pair datasets in Schick and Schtze (2021b) exploits this variation by putting the mask in multiple locations (see Appendix B).", "Removing patterns and tuning biases in Logan IV et al. (2021) is not expressive enough and performs substantially worse than PERFECT on average.", "embedding with handcrafted verbalizers in PERFECT -init, it consistently obtains lower performance, demonstrating that PERFECT is able to obtain state-of-the-art performance with learning from pure random initialization .", "We argue that initializing randomly close to zero (with low variance =10 4 ), as done in our case, slightly improves performance, which perhaps is not satisfied when initializing from the manually engineered verbalizers (see Appendix D).", "As a second ablation, when learning patterns with optimizing soft prompts in prompt+mte, we observe high sensitivity to learning rate, as also confirmed in Li and Liang (2021) and Mahabadi et al. (2021a).", "We experimented with multiple learning rates but performance consistently lags behind PERFECT -rand.", "This can be explained by the low flexibility of such methods as all the information regarding specifying patterns needs to be contained in the prefixes.", "As a result, the method only allows limited interaction with the rest of the model parameters, and obtaining good performance requires very large models (Lester et al., 2021).", "In addition, increasing the sequence length leads to memory overhead (Mahabadi et al., 2021a), and the number of prompt tokens is capped by the number of tokens that can fit in the maximum input length, which can be a limitation for tasks requiring large contexts.", "As a third ablation, tuning biases with optimizing soft prompts in bitfit+mte obtains lower performance compared to PERFECT , showing that adapters are a better alternative compared to tuning biases to learn task descriptions for few-shot learning.", "In this section, we compare the efficiency of PERFECT with the state-of-the-art few-shot learning method, PET.", "To this end, we train all methods for ten epochs on the 500-sampled QNLI dataset.", "We select the largest batch size for each method that fits a fixed budget of the GPU memory (40 GB).", "Due to the auto-regressive inference strategy of PET (Schick and Schtze, 2021b), all prior work implemented it with a batch size of 1 (Perez et al., 2021; Schick and Schtze, 2021b; Tam et al., 2021).", "Additionally, since PET deals with verbalizers of variable lengths, it is hard to implement their training phase in batch mode.", "We specifically choose QNLI to have verbalizers of the same length and enable batching for comparison purposes (referred to as PET in batch ).", "However, verbalizers are still not of fixed-length for most other tasks, and this speed-up does not apply generally to PET.", "In Table 2, for each method we report the percentage of trained parameters, memory usage, training time, and inference time.", "PERFECT reduces the number of trained parameters, and therefore the storage requirement, by 99.08%.", "It additionally reduces the memory requirement by 21.93% compared to PET.", "PERFECT speeds up training substantially, by 97.22% relative to the original PET's implementation, and 30.85% to our implementation of PET.", "This is because adapter-based tuning saves on memory and allows training with larger batch sizes.", "In addition, PERFECT is significantly faster during inference time (96.76% less inference time relative to PET).", "Note that although prompt+mte and bitfit+mte can also reduce the storage costs, by having 0.02M and 0.32 M trainable parameters respectively, they are not expressive enough to learn task descriptions, and their performance substantially lags behind PERFECT (see Table 1).", "Overall, given the size of PLMs with millions and billions of parameters (Liu et al., 2019; Raffel et al., 2020), efficient few-shot learning methods are of paramount importance for practical applications.", "PERFECT not only outperforms the state-of-the-art in terms of accuracy and generally improves the stability (Table 1), but also is significantly more efficient in runtime, storage, and memory.", "Can task-specific adapters replace manually engineered patterns?", "PERFECT is a pattern-free approach and employs adapters to provide the PLMs with task descriptions implicitly.", "In this section, we study the contribution of replacing manual patterns with adapters in isolation without considering our other contributions in representing labels, training, and inference.", "In PET (Schick and Schtze, 2021a,b), 3644 Dataset PET-Average Pattern-Free SST-2 89.7/81.0/2.4 90.5/87.8/1.2 CR 88.4/68.8/3.0 89.8/87.0/1.4 MR 85.9/79.0/2.1 86.4/83.0/1.8 SST-5 45.9/40.3/2.4 44.8/40.0/2.4 SUBJ 88.1/79.6/2.4 85.3/74.7/3.8 TREC 85.0/70.6/4.5 87.9/84.6/1.8 CB 86.9/73.2/5.1 93.0/89.3/1.9 RTE 60.1/49.5/4.7 63.7/56.3/4.1 QNLI 66.5/55.7/6.2 71.3/65.8/2.5 MRPC 62.1/38.2/6.8 66.0/54.4/5.6 QQP 63.4/44.7/7.9 71.8/64.3/3.7 WiC 51.0/46.1/2.6 53.7/50.3/2.0 Avg 72.8/60.6/4.2 75.4/69.8/2.7 Table 3: Average performance of PET with five different patterns vs. Pattern-Free that replaces handcrafted patterns with task-specific adapters.", "we replace the handcrafted patterns with task-specific adapters ( Pattern-Free ) while keeping the verbalizers and the training and inference intact 6 and train it with a similar setup as in 4.", "Table 3 shows the results.", "While PET is very sensitive to the choice of prompts, adapters provide an efficient alternative to learn patterns robustly by improving the performance (average and worst-case) and reducing the standard deviation.", "This finding demonstrates that task-specific adapters can effectively replace manually engineered prompts.", "Additionally, they also save on the training budget by at least 1 / number of patterns (normally 1/5) by not requiring running the method for different choices of patterns, and by freezing most parameters, this saves on memory and offers additional speed-up.", "Impact of Removing Adapters To study the impact of adapters in learning patterns, we remove adapters, while keeping the label embedding.", "Handcrafted patterns are not included and we tune all parameters of the model.", "Table 4 shows the results.", "Adding adapters for learning patterns contributes to the performance by improving the average performance, and making the model robust by improving the minimum performance and reducing the standard deviation.", "This is because training PLMs with millions of parameters is sample-inefficient and unstable on resource-limited datasets (Dodge 6 Since we don't have patterns, in the case of multiple sets of verbalizers, we use the first set of verbalizers as a random choice. Dataset PERFECT -Adapters SST-2 90.7/88.2/1.2 88.2/81.9/2.3 CR 90.0/85.5/1.4 89.2/83.1/1.7 MR 86.3/81.4/1.6 82.5/78.2/2.5 SST-5 42.7/35.1/2.9 40.6/33.6/3.3 SUBJ 89.1/82.8/2.1 89.7/85.0/1.9 TREC 90.6/81.6/3.2 89.8/74.2/4.3 CB 90.3/83.9 /3.5 89.6 /83.9/2.8 RTE 60.4/53.1/ 4.7 61.7/53.8 /5.1 QNLI 74.1/60.3/4.6 73.2/56.3/5.8 MRPC 67.8 /54.7/5.7 68.0 /54.2/6.1 QQP 71.2/64.2/3.5 71.0/62.0/3.7 WiC 53.8/47.0/3.0 52.5/46.9/3.0 Avg 75.6/68.1/3.1 74.7/66.1/3.5 Table 4: Performance of PERFECT w/o adapters, -Adapters . We report the average performance/worst-case perfor-mance/and the standard deviation. et al., 2020; Zhang et al., 2020; Mosbach et al., 2021).", "However, by using adapters, we substantially reduce the number of trainable parameters, allowing the model to be better tuned in a few-shot setting.", "Impact of the number of masks In Table 1, to compare our design with PET in isolation, we fixed the number of mask tokens as the maximum number inserted by PET.", "In table 5, we study the impact of varying the number of inserted mask tokens for a random selection of six tasks.", "For most tasks, having two mask tokens performs the best, while for MR and RTE, having one, and for MRPC, inserting ten masks improves the results substantially.", "The number of required masks might be correlated with the difficulty of the task.", "PERFECT is designed to be general, enabling having multiple mask tokens.", "Adapter Layers: Mahabadi et al. (2021b) and stn et al. (2020) proposed to generate adapters' weights using hypernetworks (Ha et al., 2017), where Mahabadi et al. (2021b) proposed to share a small hypernetwork to generate conditional adapter weights efficiently for each transformer layer and task.", "Mahabadi et al. (2021a) proposed compacter layers by building on top of ideas of parameterized hypercomplex layers (Zhang et al., 2021) and low-rank methods (Li et al., 2018; Aghajanyan et al., 2021), as an efficient fine-tuning method for PLMs.", "We are the first to employ adapters to replace handcrafted patterns for few-shot learning.", "Few-shot Learning with PLMs: Le Scao and Rush (2021) showed that prompting provides substantial improvements compared to fine-tuning, especially in low-resource settings.", "Subsequently, researchers continuously tried to address the challenges of manually engineered patterns and verbalizers:", "a) Learning the patterns in a continuous space (Li and Liang, 2021; Qin and Eisner, 2021; Lester et al., 2021), while freezing PLM for efficiency, has the problem that, in most cases, such an approach only works with very large scale PLMs (Lester et al., 2021), and lags behind full fine-tuning in a general setting, while being inefficient and not as effective compared to adapters (Ma-habadi et al., 2021a).", "b) Optimizing patterns in a discrete space (Shin et al., 2020; Jiang et al., 2020; Gao et al., 2021) has the problem that such methods are computationally costly.", "c) Automatically finding verbalizers in a discrete way (Schick et al., 2020; Schick and Schtze, 2021a) is computationally expensive and does not perform as well as manually designed ones.", "d) Removing manually designed patterns (Logan IV et al., 2021) substantially lags behind the expert-designed ones.", "Our proposed method, PERFECT , does not rely on any handcrafted patterns and verbalizers.", "We proposed PERFECT , a simple and efficient method for few-shot learning with pre-trained language models without relying on handcrafted patterns and verbalizers.", "PERFECT employs task-specific adapters to learn task descriptions implicitly, replacing previous handcrafted patterns, and a continuous multi-token label embedding to represent the output classes.", "Through extensive experiments over 12 NLP benchmarks, we demonstrate that PERFECT , despite being far simpler and more efficient than recent few-shot learning methods, produces state-of-the-art results.", "Overall, the simplicity and effectiveness of PERFECT make it a promising approach for few-shot learning with PLMs.", "The authors would like to thank Sebastian Ruder and Marius Mosbach for their comments on drafts of this paper.", "This research was partly supported by the Swiss National Science Foundation under grant number 200021_178862." ]
[ "abstain", "objective", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "other", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "objective", "abstain", "other", "other" ]
[ "Incorporating syntax into neural approaches in NLP has a multitude of practical and scientific benefits.", "For instance, a language model that is syntax-aware is likely to be able to produce better samples; even a discriminative model like BERT with a syntax module could be used for core NLP tasks like unsupervised syntactic parsing.", "Rapid progress in recent years was arguably spurred on by the empirical success of the Parsing-Reading-Predict architecture of (Shen et al., 2018a), later simplified by the Order Neuron LSTM of (Shen et al., 2019).", "Most notably, this is the first time neural approaches were able to successfully perform unsupervised syntactic parsing (evaluated by various metrics like F-1 score).", "However, even heuristic (much less fully mathematical) understanding of why and when these architectures work is lagging severely behind.", "In this work, we answer representational questions raised by the architectures in (Shen et al., 2018a, 2019), as well as some transition-based syntax-aware language models (Dyer et al., 2016): what kind of syntactic structure can current neural approaches to syntax represent ?", "Concretely, we ground this question in the sandbox of probabilistic context-free-grammars (PCFGs), and identify a key aspect of the representational power of these approaches: the amount and directionality of context that the predictor has access to when forced to make parsing decision.", "We show that with limited context (either bounded, or unidirectional), there are PCFGs, for which these approaches cannot represent the max-likelihood parse; conversely, if the context is unlimited , they can represent the max-likelihood parse of any PCFG.", "Neural approaches have been steadily making their way to NLP in recent years.", "By and large however, the neural techniques that have been scaled-up the most and receive widespread usage do not explicitly try to encode discrete structure that is natural to language, e.g. syntax.", "The reason for this is perhaps not surprising: neural models have largely achieved substantial improvements in unsupervised settings, BERT (Devlin et al., 2019) being the defacto method for unsupervised pre-training in most NLP settings.", "On the other hand unsupervised syntactic tasks, e.g. unsupervised syntactic parsing, have long been known to be very difficult tasks (Htut et al., 2018).", "However, since incorporating syntax has been shown to improve language modeling (Kim et al., 2019b) as well as natural language inference (Chen et al., 2017; Pang et al., 2019; He et al., 2020), syntactic parsing remains important even in the current era when large pre-trained models, like BERT (Devlin et al., 2019), are available.", "Arguably, the breakthrough works in unsupervised constituency parsing in a neural manner were (Shen et al., 2018a, 2019), achieving F1 scores 42.8 and 49.4 on the WSJ Penn Treebank dataset (Htut et al., 2018; Shen et al., 2019).", "Both of these architectures, however (especially Shen et al., 2018a) are quite intricate, and it's difficult to evaluate what their representational power is (i.e. what kinds of structure can they recover).", "Moreover, as subsequent more thorough evaluations show (Kim et al., 2019b,a), these methods still have a rather large performance gap with the oracle binary tree (which is the best binary parse tree according to F1-score) raising the question of what is missing in these methods.", "We theoretically answer both questions raised in the prior paragraph.", "We quantify the representational power of two major frameworks in neural approaches to syntax: learning a syntactic distance (Shen et al., 2018a,b, 2019) and learning to parse through sequential transitions (Dyer et al., 2016; Chelba, 1997).", "To formalize our results, we consider the well-established sandbox of probabilistic context-free grammars (PCFGs).", "Namely, we ask: When is a neural model based on a syntactic distance or transitions able to represent the max-likelihood parse of a sentence generated from a PCFG?", "We focus on a crucial hyperparameter common to practical implementations of both families of methods that turns out to govern the representational power: the amount and type of context the model is allowed to use when making its predictions.", "Briefly, for every position t in the sentence, syntactic distance models learn a distance d t to the previous token the tree is then inferred from this distance; transition-based models iteratively construct the parse tree by deciding, at each position t , what operations to perform on a partial parse up to token t .", "A salient feature of both is the context , that is, which tokens is d t a function of (correspond-ingly, which tokens can the choice of operations at token t depend on)?", "We show that when the context is either bounded (that is, d t only depends on a bounded window around the t -th token) or unidirectional (that is, d t only considers the tokens to the left of the t th token), there are PCFGs for which no distance metric (correspondingly, no algorithm to choose the sequence of transitions) works.", "On the other hand, if the context is unbounded in both directions then both methods work: that is, for any parse, we can design a distance metric (correspondingly, a sequence of transitions) that recovers it.", "This is of considerable importance: in practical implementations the context is either bounded (e.g. in Shen et al., 2018a, the distance metric is parametrized by a convolutional kernel with a con-stant width) or unidirectional (e.g. in Shen et al., 2019, the distance metric is computed by a LSTM, which performs a left-to-right computation).", "This formally confirms a conjecture of Htut et al. (2018), who suggested that because these models commit to parsing decision in a left-to-right fashion and are trained as a part of a language model, it may be difficult for them to capture sufficiently complex syntactic dependencies.", "Our techniques are fairly generic and seem amenable to analyzing other approaches to syntax.", "Finally, while the existence of a particular PCFG that is problematic for these methods doesn't necessarily imply that the difficulties will carry over to real-life data, the PCFGs that are used in our proofs closely track linguistic intuitions about difficult syntactic structures to infer: the parse depends on words that come much later in the sentence.", "We consider several neural architectures that have shown success in various syntactic tasks, most notably unsupervised constituency parsing and syntax-aware language modeling.", "The general framework these architectures fall under is as follows: to parse a sentence W = w 1 w 2 ...w n with a trained neural model, the sentence W is input into the model, which outputs o t at each step t , and finally all the outputs { o t } nt =1 are utilized to produce the parse.", "Given unbounded time and space resources, by a seminal result of Siegelmann and Sontag (1992), an RNN implementation of this framework is Turing complete.", "In practice it is common to restrict the form of the output o t in some way.", "In this paper, we consider the two most common approaches, in which o t is a real number representing a syntactic distance (Section 2.1) (Shen et al., 2018a,b, 2019) or a sequence of parsing operations (Section 2.2) (Chelba, 1997; Chelba and Jelinek, 2000; Dyer et al., 2016).", "We proceed to describe our results for each architecture in turn.", "Syntactic distance -based neural parsers train a neural network to learn a distance for each pair of adjacent words, depending on the context surrounding the pair of words under consideration.", "The distances are then used to induce a tree structure (Shen et al., 2018a,b).", "For a sentence W = w 1 w 2 ...w n , the syntactic distance between w t 1 and w t ( 2 t n ) is defined as d t = d ( w t 1 , w t | c t ) , where c t is the context that d t takes into consideration 1 .", "We will show that restricting the surrounding context either in directionality, or in size, results in a poor representational power, while full context confers essentially perfect representational power with respect to PCFGs.", "Theorem (Informal, full context) .", "For sentence W generated by any PCFG, if the computation of d t has as context the full sentence and the position index under consideration, i.e. c t = ( W, t ) and 1 Note that this is not a conditional distributionwe use this notation for convenience.", "d t = d ( w t 1 , w t | c t ) , then d t can induce the maximum likelihood parse of W .", "On the flipside, if the context is unidirectional (i.e. unbounded left-context from the start of the sentence, and even possibly with a bounded look-ahead), the representational power becomes severely impoverished: Theorem (Informal, limitation of left-to-right parsing via syntactic distance) .", "There exists a PCFG G such that for any distance measure d t whose computation incorporates only bounded context in at least one direction (left or right), e.g. c t = ( w 0 , w 1 , ..., w t + L (cid:48) ) d t = d ( w t 1 , w t | c t ) the probability that d t induces the max likelihood parse is arbitrarily low.", "In practice, for computational efficiency, parametrizations of syntactic distances fall into the above assumptions of restricted context (Shen et al., 2018a).", "This puts the ability of these models to learn a complex PCFG syntax into considerable doubt.", "For formal definitions, see Section 4.2.", "For formal theorem statements and proofs, see Section", "5. Subsequently we consider ON-LSTM, an architecture proposed by Shen et al. (2019) improving their previous work (Shen et al., 2018a), which also is based on learning a syntactic distance, but in (Shen et al., 2019) the distances are reduced from the values of a carefully structured master forget gate (see Section 6).", "While we show ON-LSTM can in principle losslessly represent any parse tree (Theorem 3), calculating the gate values in a left to right fashion (as is done in practice) is subject to the same limitations as the syntactic distance approach: Theorem (Informal, limitation of syntactic distance estimation based on ON-LSTM) .", "There exists a PCFG G for which the probability that the syntactic distance converted from an ON-LSTM induces the max likelihood parse is arbitrarily low.", "For a formal statement, see Section 6 and in particular Theorem", "4. 2.2 Transition-based parsing In principle, the output o t at each position t of a left-to-right neural models for syntactic parsing need not be restricted to a real-numbered distance or a carefully structured vector.", "It can also be a combinatorial structure e.g. a sequence of transitions (Chelba, 1997; Chelba and Jelinek, 2000; Dyer et al., 2016).", "We adopt a simplification of the neural parameterization in (Dyer et al., 2016) (see Definition 4.7).", "With full context, Dyer et al. (2016) describes an algorithm to find a sequence of transitions to represent any parse tree, via a depth-first, left-to-right traversal of the tree.", "On the other hand, without full context, we prove that transition-based parsing suffers from the same limitations: Theorem (Informal, limitation of transition-based parsing without full context) .", "There exists a PCFG G , such that for any learned transition-based parser with bounded context in at least one direction (left or right), the probability that it returns the max likelihood parse is arbitrarily low.", "Remark.", "There is no immediate connection between the syntactic distance-based approaches (in-cluding ON-LSTM) and the transition-based parsing framework, so the limitations of transition-based parsing does not directly imply the stated negative results for syntactic distance or ON-LSTM, and vice versa.", "Most of our theorems proving limitations on bounded and unidirectional context are based on a PCFG family (Definition 2.1) which draws inspirations from natural language already suggested in (Htut et al., 2018): later words in a sentence can force different syntactic structures earlier in the sentence.", "For example, consider the two sentences: I drink coffee with milk. and I drink coffee with friends. Their only difference occurs at their very last words, but their parses differ at some earlier words in each sentence, too, as shown in Figure", "1. To formalize this intuition, we define the following PCFG.", "Definition 2.1 (Right-influenced PCFG) .", "Let m 2 , L (cid:48) 1 be positive integers.", "The grammar G m,L (cid:48) has starting symbol S , other non-terminals A k , B k , A lk , A rk , B (cid:48) k for all k { 1 , 2 , ..., m } , and terminals a i for all i { 1 , 2 , ..., m + 1 + L (cid:48) } , c j for all j { 1 , 2 , ..., m } .", "S A k B k , k { 1 , 2 , . . . , m } w.", "prob.", "1 /m A k A lk A rk w.", "prob.", "1 A lk a 1 a 2 ...a k w.", "prob.", "1 A rk a k +1 a k +2 ...a m +1 w.", "prob.", "1 B k B (cid:48) k c k w.", "prob.", "1 B (cid:48) k a m +2 a m +3 ...a m +1+ L (cid:48) w.", "prob.", "1 in which means that the left expands into the right through a sequence of rules that conform to the requirements of the Chomsky normal form (CNF, Definition 4.4).", "Hence the grammar G m,L (cid:48) is in CNF.", "The language of this grammar is L ( G m,L (cid:48) )= { l k = a 1 a 2 ...a m +1+ L (cid:48) c k : 1 k m } .", "The parse of an arbitrary l k is shown in Figure", "2. Each l k corresponds to a unique parse determined by the choice of k .", "The structure of this PCFG is such that for the parsing algorithms we consider that proceed in a left-to-right fashion on l k , before processing the last token c k , it cannot infer the syntactic structure of a 1 a 2 ...a m +1 any better than randomly guessing one of the m possibilities.", "This is the main intuition behind Theorems 2 and", "5. Remark.", "While our theorems focus on the limitation of left-to-right parsing, a symmetric argument implies the same limitation of right-to-left parsing.", "Thus, our claim is that unidirectional context (in either direction) limits the expressive power of parsing models.", "Neural models for parsing were first successfully implemented for supervised settings, e.g. (Vinyals et al., 2015; Dyer et al., 2016; Shen et al., 2018b).", "Unsupervised tasks remained seemingly out of reach, until the proposal of the Parsing-Reading-Predict Network (PRPN) by Shen et al. (2018a), whose performance was thoroughly verified by extensive experiments in (Htut et al., 2018).", "The follow-up paper (Shen et al., 2019) introducing the ON-LSTM architecture simplified radically the architecture in (Shen et al., 2018a), while still ultimately attempting to fit a distance metric with the help of carefully designed master forget gates.", "Subsequent work by Kim et al. (2019a) departed from the usual way neural techniques are integrated in NLP, with great success: they proposed a neural parameterization for the EM algorithm for learning a PCFG, but in a manner that leverages semantic information as well achieving a large improvement on unsupervised parsing tasks.", "2 In addition to constituency parsing, dependency parsing is another common task for syntactic parsing, but for our analyses on the ability of various approaches to represent the max-likelihood parse of sentences generated from PCFGs, we focus on the task of constituency parsing.", "Moreover, it's important to note that there is another line of work aiming to probe the ability of models trained without explicit syntactic consideration (e.g. BERT) to nevertheless discover some (rudimentary) syntactic elements (Bisk and Hockenmaier, 2015; Linzen et al., 2016; Choe and Charniak, 2016; Kuncoro et al., 2018; Williams et al., 2018; Goldberg, 2019; Htut et al., 2019; Hewitt and Manning, 2019; Reif et al., 2019).", "However, to-date, we haven't been able to extract parse trees achieving scores that are close to the oracle binarized trees on standard benchmarks (Kim et al., 2019b,a).", "Methodologically, our work is closely related to a long line of works aiming to characterize the representational power of neural models (e.g. RNNs, LSTMs) through the lens of formal languages and formal models of computation.", "Some of the works of this flavor are empirical in nature (e.g. LSTMs have been shown to possess stronger abilities to recognize some context-free language and even some context-sensitive language, compared with simple RNNs (Gers and Schmidhuber, 2001; Suz-gun et al., 2019) or GRUs (Weiss et al., 2018; Suz-gun et al., 2019)); some results are theoretical in nature (e.g. Siegelmann and Sontag (1992)'s proof that with unbounded precision and unbounded time complexity, RNNs are Turing-complete; related results investigate RNNs with bounded precision and computation time (Weiss et al., 2018), as well as 2 By virtue of not relying on bounded or unidirectional context, the Compound PCFG (Kim et al., 2019a) eschews the techniques in our paper.", "Specifically, by employing a bidirectional LSTM inference network in the process of constructing a tree given a sentence, the parsing is no longer left-to-right.", "memory (Merrill, 2019; Hewitt et al., 2020).", "Our work contributes to this line of works, but focuses on the task of syntactic parsing instead.", "First recall several definitions around formal language, especially probabilistic context free grammar:", "Definition 4.1 (Probabilistic context-free grammar (PCFG)) .", "Formally, a PCFG (Chomsky, 1956) is a 5-tuple G = ( , N, S, R, ) in which is the set of terminals, N is the set of non-terminals, S N is the start symbol, R is the set of production rules of the form r = ( r L r R ) , where r L N , r R is of the form B 1 B 2 ...B m , m Z + , and i { 1 , 2 , ..., m } , B i ( N ) .", "Finally, : R (cid:55) [0 , 1] is the rule probability function, in which for any r = ( A B 1 B 2 ...B m ) R, ( r ) is the conditional probability P ( r R = B 1 B 2 ...B m | r L = A ) .", "Definition 4.2 (Parse tree) .", "Let TG denote the set of parse trees that G can derive.", "Each t TG is associated with yield(t) , the sequence of terminals composed of the leaves of t and PT ( t ) [0 , 1] , the probability of the parse tree, defined by the product of the probabilities of the rules in the derivation of t .", "Each s L ( G ) is called a sentence in L ( G ) , and is associated with the set of parses TG ( s ) = { t TG | yield(t) = s } , the set of max likelihood parses, arg max t TG ( s ) PT ( t ) , and its probability PS ( s ) = (cid:80) t TG ( s ) PT ( t ) .", "Definition 4.4 (Chomsky normal form (CNF)) .", "A PCFGG = ( , N, S, R, ) is in CNF (Chomsky, 1959) if we require, in addition to Definition 4.1, that each rule r R is in the form A B 1 B 2 where B 1 , B 2 N \\ { S } ; A a where a , a (cid:54) = (cid:15) ; or S (cid:15) which is only allowed if the empty string (cid:15) L ( G ) .", "Every PCFGG can be converted into a PCFGG (cid:48) in CNF such that L ( G ) = L ( G (cid:48) ) (Hopcroft et al., 2006).", "The Parsing-Reading-Predict Networks (PRPN) (Shen et al., 2018a) is one of the leading approaches to unsupervised constituency parsing.", "The parsing network (which computes the parse tree, hence the only part we focus on in our paper) is a convolutional network that computes the syntactic distances d t = d ( w t 1 , w t ) (defined in Section 2.1) based on the past L words.", "A deterministic greedy tree induction algorithm is then used to produce a parse tree as follows.", "First, we split the sentence w 1 ...w n into two constituents, w 1 ...w t 1 and w t ...w n , where t argmax { d t } nt =2 and form the left and right subtrees of t .", "We recursively repeat this procedure for the newly created constituents.", "An algorithmic form of this procedure is included as Algorithm 1 in Appendix A. Note that, due to the deterministic nature of the tree-induction process, the ability of PRPN to learn a PCFG is completely contingent upon learning a good syntactic distance.", "Building upon the idea of representing the syntactic information with a real-valued distance measure at each position, a simple extension is to associate each position with a learned vector, and then use the vector for syntactic parsing.", "The ordered-neuron LSTM (ON-LSTM, Shen et al., 2019) proposes that the nodes that are closer to the root in the parse tree generate a longer span of terminals, and therefore should be less frequently forgotten than nodes that are farther away from the root.", "The difference in the frequency of forgetting is captured by a carefully designed master forget gate vector f , as shown in Figure 3 (in Appendix B).", "Formally: Definition 4.5 (Master forget gates, Shen et al., 2019) .", "Given the input sentence W = w 1 w 2 ...w n and a trained ON-LSTM, running the ON-LSTM on W gives the master forget gates, which are a sequence of D -dimensional vectors { f t } nt =1 , in which at each position t , f t = f t ( w 1 , ..., w t ) [0 , 1] D .", "Moreover, let f t,j represent the j -th dimension of f t .", "The ON-LSTM architectures requires that f t, 1 = 0 , f t,D = 1 , and i < j, f t,i f t,j .", "When parsing a sentence, the real-valued master forget gate vector f t at each position t is reduced to a single real number representing the syntactic distance d t at position t (see (1)) (Shen et al., 2018a).", "Then, use the syntactic distances to obtain a parse.", "In addition to outputting a single real numbered distance or a vector at each position t , a left-to-right model can also parse a sentence by outputting a sequence of transitions at each position t , an idea proposed in some traditional parsing approaches (Sagae and Lavie, 2005; Chelba, 1997; Chelba and Jelinek, 2000), and also some more recent neural parameterization (Dyer et al., 2016).", "W = w 1 w 2 ...w n .", "N t : the number of transitions performed between reading in the token w t and reading in the next token w t +1 .", "Z t : the sequence of transitions after reading in the prefix w 1 w 2 ...w t of the sentence.", "We base our analysis on the approach introduced in the parsing version of (Dyer et al., 2016), though that work additionally proposes a generator version.", "3 Definition 4.6 (Transition-based parser) .", "A transition-based parser uses a stack (initialized to empty) and an input buffer (initialized with the sentence w 1 ...w t ).", "At each position t , based on a context c t , the parser outputs a sequence of parsing transitions { z ti } N t i =1 , where each z ti can be one of the following transitions (Definition 4.7).", "The parsing stops when the stack contains one single constituent, and the buffer is empty.", "Definition 4.7 (Parser transitions, Dyer et al., 2016) .", "A parsing transition can be one of the following three types: NT(X) pushes a non-terminal X onto the stack.", "SHIFT: removes the first terminal from the input buffer and pushes onto the stack.", "3 Dyer et al. (2016) additionally proposes some generator transitions.", "For simplicity, we analyze the simplest form: we only allow the model to return one parse, composed of the parser transitions, for a given input sentence.", "Note that this simplified variant still confers full representational power in the full context setting (see Section 7).", "REDUCE: pops from the stack until an open non-terminal is encountered, then pops this non-terminal and assembles everything popped to form a new constituent, labels this new constituent using this non-terminal, and finally pushes this new constituent onto the stack.", "In Appendix Section C, we provide an example of parsing the sentence I drink coffee with milk using the set of transitions given by Definition 4.7.", "The different context specifications and the corresponding representational powers of the transition-based parser are discussed in Section 7.", "In this section we formalize the results on syntactic distance-based methods.", "Since the tree induction algorithm always generates a binary tree, we consider only PCFGs in Chomsky normal form (CNF) (Definition 4.4) so that the max likelihood parse of a sentence is also a binary tree structure.", "To formalize the notion of representing a PCFG, we introduce the following definition: Definition 5.1 (Representing PCFG with syntactic distance) .", "Let G be any PCFG in Chomsky Normal Form.", "A syntactic distance function d is said to be able to p -represent G if for a set of sentences in L ( G ) whose total probability is at least p , d can correctly induce the tree structure of the max likelihood parse of these sentences without ambiguity.", "Remark.", "Ambiguities could occur when, for example, there exists t such that d t = d t +1 .", "In this case, the tree induction algorithm would have to break ties when determining the local structure for w t 1 w t w t +1 .", "We preclude this possibility in Definition 5.1.", "In the least restrictive setting, the whole sentence W , as well as the position index t can be taken into consideration when determining each d t .", "We prove that under this setting, there is a syntactic distance measure that can represent any PCFG.", "Theorem 1 (Full context) .", "Let c t = ( W, t ) .", "For each PCFGG in Chomsky normal form, there exists a syntactic distance measure d t = d ( w t 1 , w t | c t ) that can 1-represent G .", "Proof.", "For any sentence s = s 1 s 2 ...s n L ( G ) , let T be its max likelihood parse tree.", "Since G is in Chomsky normal form, T is a binary tree.", "We will describe an assignment of { d t : 2 t n } such that their order matches the level at which the branches split in T .", "Specifically, t [2 , n ] , let a t denote the lowest common ancestor of w t 1 and w t in T .", "Let d (cid:48) t denote the shortest distance between a t and the root of T .", "Finally, let d t = n d (cid:48) t .", "As a result, { d t : 2 t n } induces T .", "Remark.", "Since any PCFG can be converted to Chomsky normal form (Hopcroft et al., 2006), Theorem 1 implies that given the whole sentence and the position index as the context, the syntactic distance has sufficient representational power to capture any PCFG.", "It does not state, however, that the whole sentence and the position are the minimal contextual information needed for representability nor does it address training (i.e. optimization) issues.", "On the flipside, we show that restricting the context even mildly can considerably decrease the representational power.", "Namely, we show that if context is bounded even in a single direction (to the left or to the right), there are PCFGs on which any syntactic distance will perform poorly 4 .", "(Note in the implementation (Shen et al., 2018a) the context only considers a bounded window to the", "left.) Theorem 2 (Limitation of left-to-right parsing via syntactic distance) .", "Let w 0 = (cid:104) S (cid:105) be the sentence start symbol.", "Let the context c t = ( w 0 , w 1 , ..., w t + L (cid:48) ) .", "(cid:15) > 0 , there exists a PCFG G in Chomsky normal form, such that any syntactic distance measure d t = d ( w t 1 , w t | c t ) cannot (cid:15) -represent G .", "Proof.", "Let m > 1 /(cid:15) be a positive integer.", "Consider the PCFGG m,L (cid:48) in Definition 2.1.", "For any k [ m ] , consider the string l k L ( G m,L (cid:48) ) .", "Note that in the parse tree of l k , the rule S A k B k is applied.", "Hence, a k and a k +1 are the unique pair of adjacent non-terminals in a 1 a 2 ...a m +1 whose lowest common ancestor is the closest to the root in the parse tree of l k .", "Then, in order for the syntactic distance metric d to induce the correct parse tree for l k , d k must be the unique maximum in { d t : 2 t m + 1 } .", "However, d is restricted to be in the form d t = d ( w t 1 , w t | w 0 , w 1 , ..., w t + L (cid:48) ) .", "4 In Theorem 2 we prove the more typical case, i.e. unbounded left context and bounded right context.", "The other case, i.e. bounded left context and unbounded right context, can be proved symmetrically.", "Note that 1 k 1 < k 2 m , the first m + 1 + L (cid:48) tokens of l k 1 and l k 2 are the same, which implies that the inferred syntactic distances { d t : 2 t m + 1 } are the same for l k 1 and l k 2 at each position t .", "Thus, it is impossible for d to induce the correct parse tree for both l k 1 and l k 2 .", "Hence, d is correct on at most one l k L ( G m,L (cid:48) ) , which corresponds to probability at most 1 /m < (cid:15) .", "Therefore, d cannot (cid:15) -represent G m,L (cid:48) .", "Remark.", "In the counterexample, there are only m possible parse structures for the prefix a 1 a 2 ...a m +1 .", "Hence, the proved fact that the probability of being correct is at most 1 /m means that under the restrictions of unbounded look-back and bounded look-ahead, the distance cannot do better than random guessing for this grammar.", "Remark.", "The above Theorem 2 formalizes the intuition discussed in (Htut et al., 2018) outlining an intrinsic limitation of only considering bounded context in one direction.", "Indeed, for the PCFG constructed in the proof, the failure is a function of the context, not because of the fact that we are using a distance-based parser.", "Note that as a corollary of the above theorem, if there is no context ( c t = null ) or the context is both bounded and unidirectional, i.e. c t = w t L w t L +1 ...w t 1 w t , then there is a PCFG that cannot be (cid:15) -represented by any such d .", "In this section, we formalize the results characterizing the representational power of the ON-LSTM architecture.", "The master forget gates of the ON-LSTM, { f t } nt =2 in which each f t [0 , 1] D , encode the hierarchical structure of a parse tree, and Shen et al. (2019) proposes to carry out unsupervised constituency parsing via a reduction from the gate vectors to syntactic distances by setting: d ft = D D (cid:88) j =1 f t,j for t = 2", "..n (1) First we show that the gates in ON-LSTM in principle form a lossless representation of any parse tree.", "Theorem 3 (Lossless representation of a parse tree) .", "For any sentence W = w 1 w 2 ...w n with parse tree T in any PCFG in Chomsky normal form, there exists a dimensionality D Z + , a sequence of vectors { f t } nt =2 in which each f t [0 , 1] D , such that the estimated syntactic distances via (1) induce the structure of T .", "Proof.", "By Theorem 1, there is a syntactic distance measure { d t } nt =2 that induces the structure of T (such that t, d t (cid:54) = d t +1 ).", "For each t = 2", "..n , set d t = k if d t is the k -th smallest entry in { d t } nt =2 , breaking ties arbitrarily.", "Then, each d t [1 , n 1] , and { d t } nt =2 also induces the structure of T .", "Therefore, the calculated { d ft } nt =2 induces the structure of T .", "Although Theorem 3 shows the ability of the master forget gates to perfectly represent any parse tree, a left-to-right parsing can be proved to be unable to return the correct parse with high probability.", "In the actual implementation in (Shen et al., 2019), the (real-valued) master forget gate vectors { f t } nt =1 are produced by feeding the input sentence W = w 1 w 2 ...w n to a model trained with a language modeling objective.", "In other words, f t,j is calculated as a function of w 1 , ..., w t , rather than the entire sentence.", "As such, this left-to-right parser is subject to similar limitations as in Theorem 2: Theorem 4 (Limitation of syntactic distance estimation based on ON-LSTM) .", "For any (cid:15) > 0 , there exists a PCFGG in Chomsky normal form, such that the syntactic distance measure calculated with (1) , d ft , cannot (cid:15) -represent G .", "Proof.", "Since by Definition 4.5, f t,j is a function of w 1 , ..., w t , the estimated syntactic distance d ft is also a function of w 1 , ..., w t .", "By Theorem 2, even with unbounded look-back context w 1 , ..., w t , there exists a PCFG for which the probability that d ft induces the correct parse is arbitrarily low.", "In this section, we analyze a transition-based parsing framework inspired by (Dyer et al., 2016; Chelba and Jelinek, 2000; Chelba, 1997).", "Again, we proceed to say first that full context confers full representational power.", "Namely, using the terminology of Definition 4.6, we let the context c t at each position t be the whole sentence W and the position index t .", "Note that any parse tree can be generated by a sequence of transitions defined in Definition 4.7.", "Indeed, Dyer et al. (2016) describes an algorithm to find such a sequence of transitions via a depth-first, left-to-right traversal of the tree.", "Proceeding to limited context, in the setting of typical left-to-right parsing, the context c t consists of all current and past tokens { w j } tj =1 and all previous parses { ( z j 1 , ..., z jN j ) } tj =1 .", "We'll again prove even stronger negative results, where we allow an optional look-ahead to L (cid:48) input tokens to the right.", "Theorem 5 (Limitation of transition-based parsing without full context) .", "For any (cid:15) > 0 , there exists a PCFG G in Chomsky normal form, such that for any learned transition-based parser (Definition 4.6) based on context c t = ( { w j } t + L (cid:48) j =1 , { ( z j 1 , ..., z jN j ) } tj =1 ) , the sum of the probabilities of the sentences in L ( G ) for which the parser returns the maximum likelihood parse is less than (cid:15) .", "Proof.", "Let m > 1 /(cid:15) be a positive integer.", "Consider the PCFG G m,L (cid:48) in Definition 2.1.", "Note that k, S A k B k is applied to yield string l k .", "Then in the parse tree of l k , a k and a k +1 are the unique pair of adjacent terminals in a 1 a 2 ...a m +1 whose lowest common ancestor is the closest to the root.", "Thus, different l k requires a different sequence of transitions within the first m + 1 input tokens, i.e. { z ti } i 1 , 1 t m +1 .", "For each w L ( G m,L (cid:48) ) , before the last token w m +2+ L (cid:48) is processed, based on the common prefix w 1 w 2 ...w m +1+ L (cid:48) = a 1 a 2 ...a m +1+ L (cid:48) , it is equally likely that w = l k , k , w.", "prob.", "1 /m each.", "Moreover, when processing w m +1 , the bounded look-ahead window of size L (cid:48) does not allow access to the final input token a m +2+ L (cid:48) = c k .", "Thus, 1 k 1 < k 2 m , it is impossible for the parser to return the correct parse tree for both l k 1 and l k 2 without ambiguity.", "Hence, the parse is correct on at most one l k L ( G ) , which corresponds to probability at most 1 /m < (cid:15) .", "In this work, we considered the representational power of two frameworks for constituency parsing prominent in the literature, based on learning a syntactic distance and learning a sequence of iterative transitions to build the parse tree in the sandbox of PCFGs.", "In particular, we show that if the context for calculating distance/deciding on transitions is limited at least to one side (which is typically the case in practice for existing architec-tures), there are PCFGs for which no good distance metric/sequence of transitions can be chosen to construct the maximum likelihood parse.", "This limitation was already suspected in (Htut et al., 2018) as a potential failure mode of leading neural approaches like (Shen et al., 2018a, 2019) and we show formally that this is the case.", "The PCFGs with this property track the intuition that bounded context methods will have issues when the parse at a certain position depends heavily on latter parts of the sentence.", "The conclusions thus suggest re-focusing our attention on methods like (Kim et al., 2019a) which have enjoyed greater success on tasks like unsupervised constituency parsing, and do not fall in the paradigm analyzed in our paper.", "A question of definite further interest is how to augment models that have been successfully scaled up (e.g. BERT) in a principled manner with syntactic information, such that they can capture syntactic structure (like PCFGs).", "The other question of immediate importance is to understand the interaction between the syntactic and semantic modules in neural architectures information is shared between such modules in various successful architectures, e.g. (Dyer et al., 2016; Shen et al., 2018a, 2019; Kim et al., 2019a), and the relative pros and cons of doing this are not well understood.", "Finally, our paper purely focuses on representational power, and does not consider algorithmic and statistical aspects of training.", "As any model architecture is associated with its distinct optimization and generalization considerations, and natural language data necessitates the modeling of the interaction between syntax and semantics, those aspects of considerations are well beyond the scope of our analysis in this paper using the controlled sandbox of PCFGs, and are interesting directions for future work." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "other", "other", "other", "method", "other", "abstain", "objective", "other", "other", "other", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "method", "abstain", "abstain", "method", "method" ]
[ "Recently, considerable literature has grown up around the theme of few-shot named entity recognition (NER), but little published benchmark data specifically focused on the practical and challenging task.", "Current approaches collect existing supervised NER datasets and reorganize them into the few-shot setting for empirical study.", "These strategies conventionally aim to recognize coarse-grained entity types with few examples, while in practice, most unseen entity types are fine-grained.", "In this paper, we present FEW-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types.", "FEW-NERD consists of 188,238 sentences from Wikipedia, 4,601,160 words are included and each is annotated as context or a part of a two-level entity type.", "To the best of our knowledge, this is the first few-shot NER dataset and the largest human-crafted NER dataset.", "We construct benchmark tasks with different emphases to comprehensively assess the generalization capability of models.", "Extensive empirical results and analysis show that FEW-NERD is challenging and the problem requires further research.", "We make FEW-NERD public at https:// ningding97.github.io/fewnerd/ .", "1 1 Introduction Named entity recognition (NER), as a fundamental task in information extraction, aims to locate and classify named entities from unstructured natural language.", "A considerable number of approaches equipped with deep neural networks have shown promising performance (Chiu and Nichols, 2016) on fully supervised NER.", "Notably, pre-trained language models (e.g., BERT (Devlin et al., 2019a)) equal contributions corresponding authors 1 The baselines are available at https://github.", "with an additional classifier achieve significant success on this task and gradually become the base paradigm.", "Such studies demonstrate that deep models could yield remarkable results accompanied by a large amount of annotated corpora.", "With the emerging of knowledge from various domains, named entities, especially ones that need professional knowledge to understand, are difficult to be manually annotated on a large scale.", "Under this circumstance, studying NER systems that could learn unseen entity types with few examples, i.e., few-shot NER, plays a critical role in this area.", "There is a growing body of literature that recognizes the importance of few-shot NER and contributes to the task (Hofer et al., 2018; Fritzler et al., 2019; Yang and Katiyar, 2020; Li et al., 2020a; Huang et al., 2020).", "Unfortunately, there is still no dataset specifically designed for few-shot NER .", "Hence, these methods collect previously proposed supervised NER datasets and reorganize them into a few-shot setting.", "Common options of datasets include OntoNotes (Weischedel et al., 2013), CoNLL'03 (Tjong Kim Sang, 2002), WNUT'17 (Derczynski et al., 2017), etc.", "These research efforts of few-shot learning for named entities mainly face two challenges: First, most datasets used for few-shot learning have only 4-18 coarse-grained entity types, making it hard to construct an adequate variety of N-way meta-tasks and learn correlation features.", "And in reality, we observe that most unseen entities are fine-grained.", "Second, because of the lack of benchmark datasets, the settings of different works are inconsistent (Huang et al., 2020; Yang and Katiyar, 2020), leading to unclear comparisons.", "To sum up, these methods make promising contributions to few-shot NER, nevertheless, a specific dataset is urgently needed to provide a unified benchmark dataset for rigorous comparisons.", "To alleviate the above challenges, we present a large-scale human-annotated few-shot NER dataset, FEW-NERD, which consists of 188.2k sentences extracted from the Wikipedia articles and 491.7k entities are manually annotated by well-trained annotators (Section 4.3).", "To the best of our knowledge, FEW-NERD is the first dataset specially constructed for few-shot NER and also one of the largest human-annotated NER dataset (statistics in Section 5.1).", "We carefully design an annotation schema of 8 coarse-grained entity types and 66 fine-grained entity types by conducting several pre-annotation rounds.", "(Section 4.1).", "In contrast, as the most widely-used NER datasets, CoNLL has 4 entity types, WNUT'17 has 6 entity types and OntoNotes has 18 entity types (7 of them are value types).", "The variety of entity types makes FEW-NERD contain rich contextual features with a finer granularity for better evaluation of few-shot NER.", "The distribution of the entity types in FEW-NERD is shown in Figure 1, more details are reported in Section 5.1.", "We conduct an analysis of the mutual similarities among all the entity types of FEW-NERD to study knowledge transfer (Sec-tion 5.2).", "The results show that our dataset can provide sufficient correlation information between different entity types for few-shot learning.", "For benchmark settings, we design three tasks on the basis of FEW-NERD, including a standard supervised task (FEW-NERD ( SUP )) and two few-shot tasks (FEW-NERD-INTRA ) and FEWNRTD ( INTER )), for more details see Section 6.", "FEW-NERD ( SUP ), FEW-NERD ( INTRA ), and FEW-NERD ( INTER ) assess instance-level generalization, type-level generalization and knowledge transfer of NER methods, respectively.", "We implement models based on the recent state-of-the-art approaches and evaluate them on FEW-NERD (Section 7).", "And empirical results show that FEW-NERD is challenging on all these three settings.", "We also conduct sets of subsidiary experiments to analyze promising directions of few-shot NER.", "Hopefully, the research of few-shot NER could be further facilitated by FEW-NERD.", "As a pivotal task of information extraction, NER is essential for a wide range of technologies (Cui et al., 2017; Li et al., 2019b; Ding et al., 2019; Shen et al., 2020).", "And a considerable number of NER datasets have been proposed over the years.", "For example, CoNLL'03 (Tjong Kim Sang, 2002) is regarded as one of the most popular datasets, which is curated from Reuters News and includes 4 coarse-grained entity types.", "Subsequently, a series of NER datasets from various domains are proposed (Bala-suriya et al., 2009; Ritter et al., 2011; Weischedel et al., 2013; Stubbs and Uzuner, 2015; Derczynski et al., 2017).", "These datasets formulate a sequence labeling task and most of them contain 4-18 entity types.", "Among them, due to the high quality and size, OntoNotes 5.0 (Weischedel et al., 2013) is considered as one of the most widely used NER datasets recently.", "As approaches equipped with deep neural networks have shown satisfactory performance on NER with sufficient supervision (Lample et al., 2016; Ma and Hovy, 2016), few-shot NER has received increasing attention (Hofer et al., 2018; Fritzler et al., 2019; Yang and Katiyar, 2020; Li et al., 2020a).", "Few-shot NER is a considerably challenging and practical problem that could facilitate the understanding of textual knowledge for neural model (Huang et al., 2020).", "Due to the lack of specific benchmarks of few-shot NER, current methods collect existing NER datasets and use different few-shot settings.", "To provide a benchmark that could comprehensively assess the generalization of models under few examples, we annotate FEW-NERD.", "To make the dataset practical and close to reality, we adopt a fine-grained schema of entity annotation, which is inspired and modified from previous fine-grained entity recognition studies (Ling and Weld, 2012; Gillick et al., 2014; Choi et al., 2018; Ringland et al., 2019).", "NER is normally formulated as a sequence labeling problem.", "Specifically, for an input sequence of tokens x = { x 1 , x 2 , ..., x t } , NER aims to assign each token x i a label y i Y to indicate either the token is a part of a named entity (such as Person , Organization , Location ) or not belong to any entities (denoted as O class), Y being a set of pre-defined entity-types.", "N -way K -shot learning is conducted by iteratively constructing episodes.", "For each episode in training, N classes ( N -way) and K examples ( K -shot) for each class are sampled to build a support set S train = { x ( i ) , y ( i ) } N K i =1 , and K (cid:48) examples for each of N classes are sampled to construct a query set Q train = { x ( j ) , y ( j ) } N K (cid:48) j =1 , and S (cid:84) Q = .", "Few-shot learning systems are trained by predicting labels of query set Q train with the information of support set S train .", "The supervision of S train and Q train are available in training.", "In the testing procedure, all the classes are unseen in the training phase, and by using few labeled examples of support set S test , few-shot learning systems need to make predictions of the unlabeled query set Q test ( S (cid:84) Q = ).", "However, in the sequence labeling problem like NER, a sentence may contain multiple entities from different classes.", "And it is imperative to sample examples in sentence-level since contextual information is crucial for sequence labeling problems, especially for NER.", "Thus the sampling is more difficult than conventional classification tasks like relation extraction (Han et al., 2018).", "Some previous works (Yang and Katiyar, 2020; Li et al., 2020a) use greedy-based sampling strategies to iteratively judge if a sentence could be added into the support set, but the limitation becomes gradually strict during the sampling.", "For example, when it comes to a 5-way 5-shot setting, if the support set already had 4 classes with 5 examples and 1 class with 4 examples, the next sampled sentence must only contain the specific one entity to strictly meet the requirement of 5 way 5 shot.", "It is not suitable for FEW-NERD since it is annotated with dense entities.", "Thus, as shown in Algorithm 1 we adopt a N -way K 2 K -shot setting in our paper, the primary principle of which is to ensure that each class in S contain K 2 K examples, effectively alleviating the limitations of sampling.", "The primary goal of FEW-NERD is to construct a fine-grained dataset that could specifically be used in the few-shot NER scenario.", "Hence, schemas of traditional NER datasets such as CoNLL'03, OntoNotes that only contain 4-18 coarse-grained types could not meet the requirements.", "The schema of FEW-NERD is inspired by FIGER (Ling and Weld, 2012), which contains 112 entity tags with good coverage.", "On this basis, we make some mod-ifications according to the practical situation.", "It is worth noting that FEW-NERD focuses on named entities, omitting value/numerical/time/date entity types (Weischedel et al., 2013; Ringland et al., 2019) like Cardinal, Day, Percent , etc.", "First, we modify the FIGER schema into a two-level hierarchy to incorporate simple domain information (Gillick et al., 2014).", "The coarse-grained types are { Person , Location , Organization , Art , Building , Product , Event , Miscellaneous } .", "Then we statistically count the frequency of entity types in the automatically annotated FIGER .", "By removing entity types with low frequency, there are 80 fine-grained types remaining.", "Finally, to ensure the practicality of the annotation process, we conduct rounds of pre-annotation and make further mod-ifications to the schema.", "For example, we combine the types of Country , Province/State , City , Restrict into a class GPE , since it is difficult to distinguish these types only based on context (especially GPEs at different times).", "For another example, we create a Person-Scholar type, because in the pre-annotation step, we found that there are numerous person entities that express the semantics of research, such as mathematician, physicist, chemist, biologist, paleontologist, but the Figer schema does not define this kind of entity type.", "We also conduct rounds of manual denoising to select types with truly high frequency.", "Consequently, the finalized schema of FEWNERD includes 8 coarse-grained types and 66 fine-grained types, which is detailedly shown accompanied by selected examples in Appendix.", "The raw corpus we use is the entire Wikipedia dump in English, which has been widely used in constructions of NLP datasets (Han et al., 2018; Yang et al., 2018; Wang et al., 2020).", "Wikipedia contains a large variety of entities and rich contextual information for each entity.", "FEW-NERD is annotated in paragraph-level, and it is crucial to effectively select paragraphs with sufficient entity information.", "Moreover, the category distribution of the data is expected to be balanced since the data is applied in a few-shot scenario.", "It is also a key difference between FEW-NERD and previous NER datasets, whose entity distributions are usually considerably uneven.", "In order to do so, we construct a dictionary for each fine-grained type by automatically collecting entity mentions annotated in FIGER , then the dictionaries are manually denoised.", "We develop a search engine to retrieve paragraphs including entity mentions of the distant dictionary.", "For each entity, we choose 10 paragraphs and construct a candidate set.", "Then, for each fine-grained class, we randomly select 1000 paragraphs for manual annotation.", "Eventually, 66,000 paragraphs are selected, consisting of 66 fine-grained entity types, and each paragraph contains an average of 61.3 tokens.", "Paragraph London [Art-Music] is the fifth album by the British [Loc-GPE] rock band Jesus Jones [Org-ShowOrg] in 2001 through Koch Records [Org-Company] .", "Following the commercial failure of 1997's Already [Art-Music] which led to the band and EMI [Org-Company] parting ways, the band took a hiatus before regathering for the recording of London [Art-Music] for Koch/Mi5 Recordings, with a more alternative rock approach as opposed to the techno sounds on their previous albums.", "The album had low-key promotion, initially only being released in the United States [Loc-GPE] .", "Two EP's were released from the album, Nowhere Slow [Art-Music] and In the Face Of All This [Art-Music] .", "For example, shown in Table 1, London is the fifth album by the British rock band Jesus Jones.. , where London should be annotated as an entity of Art-Music rather than Location-GPE .", "Such a situation requires that the annotator has basic linguistic training and can make reasonable judgments based on the context.", "Annotators of FEW-NERD include 70 annotators and 10 experienced experts.", "All the annotators have linguistic knowledge and are instructed with detailed and formal annotation principles.", "Each paragraph is independently annotated by two well-trained annotators.", "Then, an experienced expert goes over the paragraph for possible wrong or omissive annotations, and make the final decision.", "With 70 annotators participated, each annotator spends an average of 32 hours during the annotation process.", "We ensure that all the annotators are fairly compensated by market price according to their workload (the number of examples per hour).", "The data is annotated and submitted in batches, and each batch contains 1000 3000 sentences.", "To ensure the quality of FEW-NERD, for each batch of data, we randomly select 10% sentences and conduct double-checking.", "If the accuracy of the annotation is lower than 95 % (measured in sentence-level), the batch will be re-annotated.", "Furthermore, we calculate the Cohen's Kappa (Cohen, 1960) to measure the aggreements between two annotators, the result is 76.44%, which indicates a high degree of consistency.", "FEW-NERD is not only the first few-shot dataset for NER, but it also is one of the biggest human-annotated NER datasets.", "We report the the statistics of the number of sentences, tokens, entity types and entities of FEW-NERD and several widely-used NER datasets in Table 2, including CoNLL'03, WikiGold, OntoNotes 5.0, WNUT'17 and I2B2.", "We observe that although OntoNotes and I2B2 are considered as large-scale datasets, FEW-NERD is significantly larger than all these datasets.", "Moreover, FEW-NERD contains more entity types and annotated entities.", "As introduced in Section 4.2, FEW-NERD is designed for few-shot learning and the distribution could not be severely uneven.", "Hence, we balance the dataset by selecting paragraphs through a distant dictionary.", "The data distribution is illustrated in Figure 1, where Location (especially GPE ) and Person are entity types with the most examples.", "Although utilizing a distant dictionary to balance the entity types could not produce a fully balanced data distribution, it still ensures that each fine-grained type has a sufficient number of examples for few-shot learning.", "Knowledge transfer is crucial for few-shot learning (Li et al., 2019a).", "To explore the knowledge correlations among all the entity types of FEW-NERD, we conduct an empirical study about entity type similarities in this section.", "We train a BERT-Tagger (details in Section 7.1) of 70% arbitrarily selected data on FEW-NERD and use 10% data to select the model with best performance (it is actually the setting of FEW-NERD ( SUP ) in Section 6.1).", "After obtaining a contextualized encoder, we produce entity mention representations of the remaining 20% data of FEW-NERD.", "Then, for each fine-grained types, we randomly select 100 instances of entity embeddings.", "We mutually compute the dot product among entity embeddings for each type two by two and average them to obtain the similarities among types, which is illustrated in Figure 2.", "We observe that entity types shared identical coarse-grained types typically have larger similarities, resulting in an easier knowledge transfer.", "In contrast, although some of the fine-grained types have large similari-0 100 200 300 400 z 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 x 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 y Art Building Event Loc Org MISC Person Product Figure 2: A heat map to illustrate knowledge correlations among type in FEW-NERD, each small colored square represents the similarity of two entity types.", "ties, most of them across coarse-grained types share little correlations due to distinct contextual features.", "This result is consistent with intuition.", "Moreover, it inspires our benchmark-setting from the perspective of knowledge transfer (see Section 6.2).", "We collect and manually annotate 188,238 sentences with 66 fine-grained entity types in total, which makes FEW-NERD one of the largest human-annotated NER datasets.", "To comprehensively exploit such rich information of entities and contexts, as well as evaluate the generalization of models from different perspectives, we construct three tasks based on FEW-NERD (Statistics are reported in Table 3).", "FEW-NERD ( SUP ) We first adopt a standard supervised setting for NER by randomly splitting 70% data as the training data, 10% as the validation data and 20% as the testing data.", "In this setting, the training set, dev set, and test set contain the whole 66 entity types.", "Although the supervised setting is not the ultimate goal of the construction of FEW-NERD, it is still meaningful to assess the instance-level generalization for NER models.", "As shown in Section 6.2, due to the large number of entity types, FEW-NERD is very challenging even in a standard supervised setting.", "The core intuition of few-shot learning is to learn new classes from few examples.", "Hence, we first split the overall entity set (denoted as E ) into three mutually disjoint subsets, respectively denoted as E train , E dev , E test , and E train (cid:83) E dev (cid:83) E test = E , E train (cid:84) E dev (cid:84) E test = .", "Note that all the entity types are fine-grained types.", "Under this circumstance, instances in train, dev and test datasets only consist of instances with entities in E train , E dev , E test respectively.", "However, NER is a sequence labeling problem, and it is possible that a sentence contains several different entities.", "To avoid the observation of new entity types in the training phase, we replace the labels of entities that belong to E test with O in the training set.", "Similarly, in the test set, entities that belongs to E train and E dev are also replaced by O .", "Based on this setting, we develop two few-shot NER tasks adopting different splitting strategies.", "FEW-NERD ( INTRA ) Firstly, we construct E train , E dev and E test according to the coarse-grained types.", "In other words, all the entities in different sets belong to different coarse-grained types.", "In the basis of the principle that we should replace as few as possible entities with O , we assign all the fine-grained entity types belonging to People, MISC, Art, Product to E train , all the fine-grained entity types belonging to Event, Building to E dev , and all the fine-grained entity types belonging to ORG, LOC to E test , respectively.", "Based on Figure 2, in this setting, the training set, dev set and test set share little knowledge, making it a difficult benchmark.", "FEW-NERD ( INTER ) In this task, although all the fine-grained entity types are mutually disjoint in E train , E dev , the coarse-grained types are shared.", "Specifically, we roughly assign 60% fine-grained types of all the 8 coarse-grained types to E train , 20% to E dev and 20% E test , respectively.", "The intuition of Split #Train #Dev #Test FEW-NERD ( SUP ) 131,767 18,824 37,648 FEW-NERD ( INTRA ) 99,519 19,358 44,059 FEW-NERD ( INTER ) 130,112 18,817 14,007 Table 3: Statistics of train, dev and test sets for three tasks of FEW-NERD .", "Recent studies show that pre-trained language models with deep transformers (e.g., BERT (Devlin et al., 2019a)) have become a strong encoder for NER (Li et al., 2020b).", "We thus follow the empirical settings and use BERT as the backbone encoder in our experiments.", "We denote the parameters as and the encoder as f .", "Given a sequence x = { x 1 , ..., x n } , for each token x i , the encoder produces contextualized representations as: h = [ h 1 , ..., h n ] = f ([ x 1 , ..., x n ]) .", "Specifically, we implement four BERT-based models for supervised and few-shot NER, which are BERT-Tagger (Devlin et al., 2019b), ProtoBERT (Snell et al., 2017), NNShot (Yang and Katiyar, 2020) and StructShot (Yang and Katiyar, 2020).", "BERT-Tagger As stated in Section 6.1, we construct a standard supervised task based on FEW-NERD, thus we implement a simple but strong baseline BERT-Tagger for supervised NER.", "BERT-Tagger is built by adding a linear classifier on top of BERT and trained with a cross-entropy objective under a full supervision setting.", "ProtoBERT Inspired by achievements of meta-learning approaches (Finn et al., 2017; Snell et al., 2017; Ding et al., 2021) on few-shot learning.", "The first baseline model we implement is ProtoBERT, which is a method based on prototypical network (Snell et al., 2017) with a backbone of BERT (Devlin et al., 2019a) encoder.", "This approach derives a prototype z for each entity type by computing the average of the embeddings of the tokens that share the same entity type.", "The computation is conducted in support set S .", "For the i -th type, the prototype is denoted as z i and the support set is S i , z i = 1 |S i | (cid:88) x S i f ( x ) .", "While in the query set Q , for each token x Q , we firstly compute the distance between x and all the prototypes.", "We use the l -2 distance as the metric function d ( f ( x ) , z ) = || f ( x ) z || 22 .", "Then, through the distances between x and all other prototypes, we compute the prediction probability of x over all types.", "In the training step, parameters are updated in each meta-task.", "In the testing step, the prediction is the label of the nearest prototype to x .", "That is, for a support set SY with types of Y and a query x , the prediction process is given as y = arg min y Y d y ( x ) , d y ( x ) = d ( f ( x ) , z y ) .", "NNShot & StructShot NNShot and StructShot (Yang and Katiyar, 2020) are the state-of-the-art methods based on token-level nearest neighbor classification.", "In our experiments, we use BERT as the backbone encoder to produce contextualized representations for fair comparison.", "Different from the prototype-based method, NNShot determines the tag of one query based on the token-level distance, which is computed as d ( f ( x ) , f ( x (cid:48) )) = || f ( x ) f ( x (cid:48) ) || 22 .", "Hence, for a support set SY with type of Y and a query x , y = arg min y Y d y ( x ) , d y ( x ) = min x (cid:48) S y d ( f ( x ) , f ( x (cid:48) )) .", "With the identical basic structure as NNShot, StructShot adopts an additional Viterbi decoder during the inference phase (Hou et al., 2020) (not in training phase), where we estimate a transition distribution p ( y (cid:48) | y ) and an emission distribution", "To sum up, BERT-Tagger is a well-acknowledged baseline that could produce pronounced results on supervised NER.", "ProtoBERT, and NNShot & StructShot respectively use prototype-level and token-level similarity scores to tackle the few-shot NER problem.", "These baselines are strong and representative models of the NER task.", "For implementation details, please refer to Appendix.", "We evaluate models by considering query sets Q test of test episodes.", "We calculate the precision (P), recall (R) and micro F1-score over all test episodes.", "Instead of the popular BIO schema, we utilize the IO schema in our experiments, using I-type to denote all the tokens of a named entity and O to denote other tokens.", "We evaluate all baseline models on the three benchmark settings introduced in Section 6, including FEW-NERD ( SUP ), FEW-NERD ( INTRA ) and FEW-NERD ( INTER ).", "Supervised NER As mentioned in Section 6.1, we first split the FEW-NERD as a standard supervised NER dataset.", "As shown in Table 4, BERT-Tagger yields promising results on the two widely used supervised datasets.", "The F1-score is 91.34%, 89.11%, respectively.", "However, the model suffers a grave drop in the performance on FEW-NERD ( SUP ) because the number of types of FEW-NERD ( SUP ) is larger than others.", "The results indicate that FEW-NERD is challenging in the supervised setting and worth studying.", "We further analyze the performance of different entity types (see Figure 3).", "We find that the model achieves the best performance on the Person type and yields the worst performance on the Product type.", "And almost for all the coarse-grained types, the Coarse-Other type has the lowest F1-score.", "This is because the semantics of such fine-grained types are relatively sparse and difficult to be recognized.", "A natural intuition is that the performance of each entity type is related to the portion of the type.", "But surprisingly, we find that they are not linearly correlated.", "For examples, the model performs very well on the Art type, although this type represents only a small fraction of FEW-NERD.", "Few-shot NER For the few-shot benchmarks, we adopt 4 sampling settings, which are 5 way 1 2 shot, 5 way 5 10 shot, 10 way 1 2 shot, and 10 way 5 10 shot.", "Intuitively, 10 way 1 2 shot is the hardest setting because it has the largest number of entity types and the fewest number of examples, and similarly, 5 way 5 10 shot is the easiest setting.", "All results of FEW-NERD ( INTRA ) and FEW-NERD ( INTER ) are reported in Table 5 and Table 6 respectively.", "Overall, we observe that the previous state-of-the-art methods equipped by BERT encoder could not yield promising results on FEW-NERD.", "From a perspective of high level, models generally perform better on FEW-NERD ( INTER ) than FEW-NERD ( INTRA ), and the latter is regarded as a more difficult task as we analyze in Section 5.2 and Section 6, it splits the data according to the coarse-grained entity types, which means entity types between the training set and test set share less knowledge.", "In a horizontal comparison, consistent with intuition, almost all the methods produce the worst results on 10 way 1 2 shot and achieve the best performance on 5 way 5 10 .", "In the comparison across models, ProtoBERT generally achieves better performance than NNShot and StructShot, especially in 5 10 shot setting where calculation by prototype may differ more from calculation by entity.", "StructShot has seen a large improvement in precision in FEW-NERD ( INTRA ).", "It shows that Viterbi decoder at the inference stage can help remove false positive predictions when knowledge transfer is hard.", "It is also observed that NNShot and StructShot may suffer from the instability of the nearest neighbor mechanism in the training phase, and prototypical models are more stable because Models Span Error Type Error FP FN Within Outer ProtoNet 6.01% 3.25% 5.13% 11.69% NNShot 4.73% 5.77% 5.77% 14.98% StructShot 3.11% 8.42% 5.59% 13.62% Table 7: Error analysis of 5 way 5 10 shot on FEW-NERD ( INTER ), Within indicates within the coarse types and Outer is outer the coarse types.", "the calculation of prototypes essentially serves as regularization.", "7.3 Error Analysis We conduct error analysis to explore the challenges of FEW-NERD, the results are reported in Table 7.", "We choose the setting of FEW-NERD ( INTER ) because the test set contains all the coarse-grained types.", "We analyze the errors of models from two perspectives.", "Span Error denotes the misclassifying in token-level classification.", "If an O token is misclassified as a part of entity, i.e., I-type , it is an FP case, and if a token with the type I-type is misclassified to O , it is FN.", "Type Error indicates the misclassification of entity types when the spans are correctly classified.", "A Within error represents the entity is misclassified to another type within the same coarse-grained type, while Outer denotes the entity is misclassified to another type in a different coarse-grained type.", "As the statistics of type errors may be impacted by the sampled episodes in testing, we conduct 5 rounds of experiments and report the average results.", "The results demonstrate that the token-level accuracy is not that low since most O tokens could be detected.", "But an entity mention is considered to be wrong if one token is wrong, which becomes the main reason for the challenge of FEW-NERD.", "If an entity span could be accurately detected, the models could yield relatively good performance on entity typing, indicating the effectiveness of metric learning.", "We propose FEW-NERD, a large-scale few-shot NER dataset with fine-grained entity types.", "This is the first few-shot NER dataset and also one of the largest human-annotated NER dataset.", "FEW-NERD provides three unified benchmarks to assess approaches of few-shot NER and could facilitate future research in this area.", "By implementing state-of-the-art methods, we carry out a series of experiments on FEW-NERD, demonstrating that few-shot NER remains a challenging problem and worth exploring.", "In the future, we will extend FEW-NERD by adding cross-domain annotations, distant annotations, and finer-grained entity types.", "FEW-NERD also has the potential to advance the construction of continual knowledge graphs.", "This research is supported by National Natural Science Foundation of China (Grant No. 61773229 and 6201101015), National Key Research and Development Program of China (No. 2020AAA0106501), Alibaba Innovation Research (AIR) programme, the General Research Project (Grand No. JCYJ20190813165003837, No.JCYJ20190808182805919), and Overseas Cooperation Research Fund of Graduate School at Tsinghua University (Grant No. HW2018002).", "Finally, we thank the valuable help of Ronny, Xiaozhi, Ziyu and comments of anonymous reviewers.", "In this paper, we present a human-annotated dataset, FEW-NERD, for few-shot learning in NER.", "We describe the details of the collection process and conditions, the compensation of annotators, the measurements to ensure the quality in the main text.", "The corpus of the dataset is publicly obtained from Wikipedia and we have not modified or interfered with the content.", "FEW-NERD is likely to directly facilitate the research of few-shot NER, and further increase the progress of the construction of large-scale knowledge graphs (KGs).", "Models and systems built on FEW-NERD may contribute to construct KGs in various domains, including biomedical, financial, and legal fields, and further promote the development of NLP applications on specific domains.", "FEW-NERD is annotated in English, thus the dataset may mainly facilitate NLP research in English.", "For the sake of energy saving, we will not only open source the dataset and the code, but also release the checkpoints of our models from the experiments to reduce unnecessary carbon emission." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "method", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method" ]
[ "Chinese pre-trained language models usually process text as a sequence of characters, while ignoring more coarse granularity, e.g., words.", "In this work, we propose a novel pre-training paradigm for Chinese Lattice-BERT, which explicitly incorporates word representations along with characters, thus can model a sentence in a multi-granularity manner.", "Specifically, we construct a lattice graph from the characters and words in a sentence and feed all these text units into transformers.", "We design a lattice position attention mechanism to exploit the lattice structures in self-attention layers.", "We further propose a masked segment prediction task to push the model to learn from rich but redundant information inherent in lattices, while avoiding learning unexpected tricks.", "Experiments on 11 Chinese natural language understanding tasks show that our model can bring an average increase of 1.5% under the 12-layer setting, which achieves new state-of-the-art among base -size models on the CLUE benchmarks.", "Further analysis shows that Lattice-BERT can harness the lattice structures, and the improvement comes from the exploration of redundant information and multi-granularity representations.", "1 1 Introduction Pre-trained Language Models (PLMs) have achieved promising results in many Chinese Natural Language Understanding (NLU) tasks (Cui et al., 2019; Liu et al., 2020; Sun et al., 2020).", "These models take a sequence of fine-grained units Chinese characters as the input, following the English PLMs' practice (Devlin et al., 2019, BERT).", "Work done during an internship at Alibaba DAMO Academy.", "Corresponding author.", "1 Our code will be available at https://github.", "com/alibaba/pretrained-language-models/LatticeBERT .", "However, the meanings of many Chinese words cannot be fully understood through direct compositions of their characters' meanings.", "For example, / boss does not mean / elder / board .", "2 The importance of word-level inputs in Chinese has been addressed in different tasks, including relation classification (Li et al., 2019), short text matching (Lai et al., 2019; Chen et al., 2020; Lyu et al., 2021), trigger detection (Lin et al., 2018), and named entity recognition (Zhang and Yang, 2018; Gui et al., 2019; Li et al., 2020a).", "The coarse-grained inputs benefit these tasks by introducing word-level semantics with multi-granularity representations, which is potentially complementary in character-level Chinese PLMs.", "In this work, we discuss how to pre-train a Chinese PLM over a word lattice structure to exploit multi-granularity inputs.", "We argue that by incorporating the coarse-grained units into PLM, models could learn to utilize the multi-granularity information for downstream tasks.", "Specifically, we organize characters and words in sentences as word lattices (see Figure 1), which enable the models to explore the words from all possible word segmentation results.", "However, it is not straightforward to learn a BERT-like PLM over the word lattices.", "The major challenges are two-folded.", "Firstly, BERT's original input is a sequence of characters ordered by their positions, making it difficult to consume the word lattices and preserve the positional relation-2 For clarity, we use / English translation to represent an example in Chinese with its translation followed by.", "ship between multi-granularity units.", "Secondly, the conventional masked language modeling (MLM) task may make the word-lattice based PLMs learn unexpected tricks.", "The reason is that such a word lattice naturally introduces redundancy, that is, one character can be contained in multiple text units.", "In MLM, models may refer to the other text units overlapping with the randomly masked one instead of the real context, which brings information leakages.", "To address these challenges, we propose a Lattice-based Bidirectional Encoder Representation from Transformers (Lattice-BERT).", "Specifi-cally, we design a lattice position attention (LPA) to help the transformers directly exploit positional relationship and distances between text units in lattices.", "Moreover, we propose a masked segment prediction (MSP) task to avoid the potential leakage between overlapping text units in language modeling.", "With LPA and MSP, the Lattice-BERT could harness the multi-granularity structures in lattices, thus, directly utilize the lattice structures to aggregate the coarse-grained word information to benefit various downstream tasks.", "We evaluate our model on 11 Chinese NLU tasks in various paradigms, including the CLUE benchmarks (Xu et al., 2020) as well as two sequence labeling tasks.", "Compared with the baseline that only takes characters as inputs, Lattice-BERT bring an average increase of 1.5% and 2.0% under the settings of 12 and 6 layers, respectively.", "The 12-layer Lattice-BERT model beats all other base -size models on CLUE benchmarks.", "3 Morever, we show that Lattice-BERT can harness the multi-granularity in-3 https://www.cluebenchmarks.com/rank.", "puts and utilize word-level semantics to outperform vanilla fine-grained PLMs.", "Our contributions can be summarized as 1) We propose Lattice-BERT to leverage multi-granularity representations from word lattices in Chinese PLMs.", "2) We design lattice position attention and masked segment prediction to facilitate Chinese PLMs to exploit the lattice structures.", "3) Lattice-BERT brings remarkable improvements on 11 Chinese tasks and achieves new state of the arts among base -size models at the CLUE benchmarks.", "This section, we detail the implementation of Lattice-BERT, and its overall framework is presented in Figure 2.", "BERTBERT (Devlin et al., 2019, Bidirectional Encoder Representations from Transformers) is a pre-trained language model comprising a stack of multi-head self-attention layers and fully connected layers.", "For each head in the l th multi-head self-attention layer, the output matrix H out ,l = (cid:110) h out ,l 1 , h out ,l 2 , . . . , h out ,l n (cid:111) R n d k satisfies: h out ,l i = n (cid:88) j =1 (cid:32) exp lij (cid:80) j (cid:48) exp lij (cid:48) h in ,l j W v,l (cid:33) lij = 1 2 d k (cid:16) h in ,l i W q,l (cid:17) (cid:16) h in ,l j W k,l (cid:17) T (1) where H in ,l = (cid:110) h in ,l 1 , h in ,l 2 , . . . , h in ,l n (cid:111) R n d h is the input matrix, and W q,l , W k,l , W v,l R d h d k are learnable parameters.", "n and d h are sequence length and hidden size, and the attention size d k = d h /n h , where n h is the number of attention heads.", "To capture the sequential features in languages, previous PLMs adopt position embedding in either input representations (Devlin et al., 2019; Lan et al., 2020) or attention weights (Yang et al., 2019; Wei et al., 2019; Ke et al., 2020).", "For the input-level position embedding, the inputs of the first layer are (cid:101) h in , 0 i = h in , 0 i + P i , where P i is the embedding of the i th position.", "The other works incorporate position information in attention weights, i.e., (cid:101) lij = lij + f ( i, j ) , where f is a function of the position pair ( i, j ) .", "The BERT model is pre-trained on an unlabeled corpus with reconstruction losses, i.e., Masked Language Modeling (MLM) and Next Sentence Prediction (NSP), and then fine-tuned on downstream tasks to solve specific NLU tasks.", "Readers could refer to Devlin et al. (2019) for details.", "We adopt a word lattice to consume all possible segmentation results of a sentence in one PLM.", "Each segmentation can be a mixture of characters and words.", "As shown in Figure 1, a word lattice is a directed acyclic graph, where the nodes are positions in the original sentences, and each directed edge represents a character or a plausible word.", "Word lattices incorporate all words and characters so that models could explicitly exploit the inputs of both granularities, despite some of the words are redundant.", "In the rest of this work, we use lattice tokens to refer to text units, including the characters and words, contained in lattice graphs.", "As shown in Figure 2, we list the lattice tokens in a line and consume these tokens to transformers straightforwardly.", "However, the challenges of learning PLMs as BERT over the lattice-like inputs include: 1) encoding the lattice tokens while preserving lattice structures; 2) avoiding potential leakage brought by redundant information.", "Since the original BERT is designed for sequence modeling, it is not straightforward for BERT to consume a lattice graph.", "The word lattices encode not only the character sequences but also nested and overlapping words from different segmentations.", "To accurately incorporate positional information from lattice graphs into the interactions between Source T.2 (left-det) T.1 (self) T.7 (right-det.) T.3 (left-ovl.) T.6 (right-ovl.) T.5 (cted. by) T.4 (containing) Figure 3: An illustration of the positional relations.", "The lattice position attention aggregates the attention score of token representations, ij in Eq.", "1, with three position related attention terms, encoding the absolute positions, the distance, and the positional relationship, which can be formulated as: (cid:101) ij = ij + att ij + b ij + r ij (2) The att ij in Eq.", "att ij = 1 2 d k (cid:0)(cid:2) P Ss i ; P Ee i (cid:3) W q (cid:1) (cid:16)(cid:104) P Ss j ; P Ee j (cid:105) W k (cid:17)", "[ ; ] means the concatenation of vectors.", "W q , W k R 2 d e d k are learnable parameters, d e and d k are embedding size and attention size.", "s i , e i are positions of start and end characters of the i th token.", "Taking the word / research in Figure 1 as an example, it starts at the first character and ends at the second one, thus, its s i and e i are 1 and 2, respectively.", "PS and PE are learnable position embedding matrices.", "P St , P Et R d e is the t th embedding vector of PS or PE .", "The att ij exploit the prior of attention weight between the start and end positions of the token pairs.", "The b ij in Eq.", "2 is the attention term for the distance between the i th and j th tokens, which consists of four scaling terms considering the combinations of the start and end positions: b ij = b sss j s i + b ses j e i + b ese j s i + b eee j e i b sst reflects the attention weight brought by the relative distance t between the start positions of two tokens.", "The other terms, i.e., b set , b est , and b eet , have similar meanings.", "In practice, the distance t is clipped into [ 128 , 128] .", "consider seven relations, including (1) self, (2) left and detached, (3) left and overlapped, (4) containing, (5) contained by, (6) right and overlapped, (7) right and detached.", "Figure 3 shows an illustration of these 7 relations.", "Formally, for the i th and j th tokens, they are overlapped means s i s j < e i e j or s j s i < e j e i , and if e i < s j or e j < s i , they are detached.", "If s i s j e j e i and i (cid:54) = j , the i th token contains the j th token and the j th token is contained by the i th token.", "Intuitively, only two detached tokens can be concurrent in one Chinese word segmentation result.", "Moreover, the containing relation reflects a sort of lexical hierarchy in the lattices.", "We think r ij can explicitly model the positional relations between tokens in lattice graphs.", "We argue that the attention scores for distances and token relations capture different aspects of the multi-granularity structures in lattice graphs, thus, meeting the needs of various downstream tasks, such as distance for coreference resolution and positional relation for named entity recognition.", "With the information of absolute positions, distances, and positional relations, PLMs could accurately exploit the lattice structures in attention layers.", "The lattice position attention weights are shared over all layers.", "b ij , r ij , W q , and W k are diverse in different attention heads to capture diverse attention patterns.", "We follow Ke et al. (2020) to reset the positional attention scores related to [CLS] tokens, which is the special token as prefix of the input sequences to capture the overall semantics.", "Vanilla BERT is trained to predict the randomly masked tokens in the sentences, i.e., the masked language modeling (MLM).", "For the case of consuming multi-granularity inputs, the input tokens are redundant which means a character can occur in its character forms and multiple words it belongs to.", "Directly adopting the randomly masking strategy may simplify the prediction problem in our case because the masked token can be easily guessed via peeking the unmasked tokens overlapping with the masked one.", "Taking the word / research in Figure 2 as an example, supposing the masked input is [M] / / , the model will consult rather than the context to predict the masked token, .", "tokens within a minimal segment of the lattice provide strong clue for the prediction of other tokens.", "A segment is a connected subgraph of a lattice where no token exists outside the subgraph that overlaps with any token inside the subgraph.", "To identify these minimal segments, we enumerate the character-level tokens in sentence order, checking if all the word-level tokens which contain this character end at this character.", "If so, all the tokens containing previous and current characters are considered as a segment, and the next segment starts from the next character, see the example in Figure 2.", "After the segment detection, we propose a masked segment prediction (MSP) task as a replacement of the MLM in the original BERT.", "In MSP, we mask out all the tokens in a segment and predict all these tokens (see Figure 2) to avoid the potential leakage.", "In addition to MSP, we also pre-train our models with the sentence order prediction (SOP) task in Lan et al. (2020), where the model predicts whether two consecutive sentences are swapped in inputs.", "We explore four kinds of downstream tasks, i.e., sentence/sentence-pair classification, multiple choices, sequence labeling, and span selection machine reading comprehension (MRC).", "For the sentence/sentence-pair classification, both vanilla and Lattice-BERT classify input instances base on logistic regressions over the representation of [CLS] tokens in the last layer.", "The circumstances are similar in multiple choice tasks, where softmax regressions are conducted over the representations of [CLS] tokens to choose the best options.", "However, for the span selection MRC, and the sequence labeling tasks like named entity recognition (NER), models need to perform token-wise classification.", "Vanilla BERT predicts labels for the input characters, but lattice-BERT has additional words.", "In Lattice-BERT, we extract the character chains (word pieces for numbers and English words) from lattices for training and prediction for a fair comparison with vanilla BERT.", "Pilot studies show that this strategy performs comparably with the more complex strategies, which supervise the labels over words and obtain a character's label via ensembles of all tokens containing that character.", "Lattice Construction.", "We construct the word lattices based on a vocabulary consisting of 102K high-frequency open domain words.", "All the substrings of the input sequence that appear in the vocabulary are considered lattices tokens of the input.", "With Aho-Corasick automaton (Aho and Corasick, 1975), this construction procedure can complete in linear time to the size of the corpus and the vocabulary.", "4 To deal with English words and numbers where the substrings are meaningless, we use the character sequences for those out-of-vocabulary non-Chinese inputs and remain the in-vocabulary words and word pieces.", "We construct word lattices using all possible words according to a vocabulary instead of more sophisticated lattice construction strategies.", "Previous research efforts (Lai et al., 2019; Chen et al., 2020; Li et al., 2020b) on lattice construction suggests that using all possible words usually yields better performance.", "We think an overly-designed lattice construction method may bias our model on certain types of text, and would probably harm the generalization.", "So, in our case, we let the model learn by itself to filter the noise introduced by using all possible words during pre-training on a large-scale corpus.", "Pre-training Details.", "To compare with previous pre-training works, we implement the base -size models, which contains 12 layers, 768-dimensional of hidden size, and 12 attention heads.", "To demonstrate how lattice gains in shallower architectures and provide lightweight baselines, we also conduct the lite -size models with 6 layers, 8 attention heads, and the hidden size of 512.", "To avoid the large vocabulary introducing too many parameters in embedding matrix, we adopt the embedding decomposition trick following Lan et al. (2020, ALBERT).", "Consequently, the parameters of Lattice-BERT is 100M in base-size, only 11% more than its character-level counterpart (90M), and smaller than the RoBERTa-base (Liu et al., 2019) (102M) and AMBERT (Zhang and Li, 2020) (176M).", "The modeling of positional relation and distances in lattice position attention introduces only 12K parameters.", "A collection of Chinese text, including Chinese Wikipedia, Zhihu, and web news, is used in our BERT models' pre-training stage.", "The total number of characters in our unlabeled data is 18.3G.", "We follow Liu et al. (2019) and train the PLMs with a large batch size of 8K instances for 100K 4 Formally, the time complexity is O (( N + M ) L ) .", "We present the details of the Lattice-BERT fine-tuning results on 11 Chinese NLU tasks.", "Answering the following questions: (1) Whether the Lattice-BERT performs better than mono-granularity PLMs and other multi-granularity PLMs?", "(2) How the proposed lattice position attention and masked segment prediction contribute to the downstream tasks?", "(3) How Lattice-BERT outperforms the original character-level PLMs?", "We test our models on 11 Chinese NLU tasks, including the text classification and Machine Reading Comprehension (MRC) tasks in the Chinese Language Understanding Evaluation benchmark (Xu et al., 2020, CLUE), and two additional tasks to probe the effectiveness in sequence labeling.", "CLUE text classification : natural language inference CMNLI , long text classification IFLY-TEK ( IFLY. ), short text classification TNEWS , semantic similarity AFQMC , coreference resolution (CoRE) CLUEWSC 2020 ( WSC. ), and key word recognition (KwRE) CSL .", "CLUE MRC : Span selection based MRC CMRC 2018 ( CMRC ), multiple choice questions C 3 , and idiom cloze ChID .", "Sequence Labeling : Chinese word segmentation (CWS) MSR dataset from SIGHAN2005 (Emerson, 2005), and named entity recognition (NER) MSRA-NER (Levow, 2006).", "We probe our proposed Lattice-BERT model thoroughly with these various downstream tasks.", "The statistics and hyper-parameters of each task are elaborated in Appendix B. We tune learning rates on validation sets and report test results with the best developing performances for CLUE tasks.", "5 For MSR and MSRA-NER, we run the settings with the best learning rates five times and report the average scores to ensure the reliability of results.", "RoBERTa (Cui et al., 2020) is the Chinese version RoBERTa model (Liu et al., 2019), which adopts", "the whole word masking trick and external pretraining corpus, known as RoBERTa-wwm-ext.", "6 NEZHA (Wei et al., 2019) is one of the best Chinese PLMs with a bag of tricks, which also explores attention-level position embedding.", "AMBERT (Zhang and Li, 2020) is the state-of-the-art multi-granularity Chinese PLM, with two separated encoders for words and characters.", "BERT-word is a Chinese PLM baseline, taking words as single-granularity inputs.", "We obtain the results from Zhang and Li (2020) directly.", "BERT-our is our implemented BERT model, with the same pre-training corpus, model structures, hyper-parameters, and training procedure with Lattice-BERT, but taking characters as inputs.", "We also adopt the whole word masking trick.", "LBERT is our proposed Lattice-BERT model, with word lattices as inputs, equipping with lattice position attentions and masked segment prediction .", "In Table 1, we can see in text classification, MRC, and sequence labeling tasks, with both base and lite sizes, LBERT works better than our character-level baselines consistently.", "LBERTbase outperforms all previous base -size PLMs in average scores and obtain the best performances in 7 of the 11 tasks.", "Comparing with the mono-granularity PLMs in base -size, LBERT takes benefits from word-level information and outperforms its character-level counterpart, BERT-our, by 1.5% averagely.", "Meanwhile, LBERT performs better than the word-level model, BERT-word, remarkably on CLUE tasks.", "We think the lattice inputs incorporate 6 https://huggingface.co/hfl/ chinese-roberta-wwm-ext coarse-grained semantics while avoiding segmentation errors by combining multiple segmentation results.", "Therefore, with the multi-granularity treatments in word lattices, PLMs obtain better performances in downstream tasks than the mono-granularity settings.", "Furthermore, LBERT outperforms the previous state-of-the-art ( sota ) multi-granularity PLM, AMBERT (Zhang and Li, 2020), by 0.9% in text classification and 1.3% in MRC, averagely.", "Different from modeling the characters and words separately, the graph representations of word lattices could enhance the interaction between multi-granularity tokens and utilize all possible segmentation results simultaneously.", "As a result, LBERT achieves a new sota among the base -size models on the CLUE leaderboard as well as the sub-leaderboards for text classification and MRC tasks.", "7 With lite -size settings, LBERT brings 2.0% improvement over BERT-our on average, which is larger than the case in base -size.", "In CWS, TNEWS, and CSL, the lite -size LBERT even outperforms the base -size BERT-our.", "With more coarse-grained inputs, the shallower architectures do not require complicated interactions to identify character combinations but utilizing word representations explicitly, thus, narrowing the gap with the deeper ones.", "Ablation Study.", "We conduct ablation experiments to investigate the effectiveness of our proposed lattice position attention (LPA) and masked segment prediction (MSP) in downstream tasks.", "To reduce the computational costs, we base our 7 https://www.cluebenchmarks.com/rank.", "pre-training settings on lite -size with the sequence length of 128 characters.", "We select one task from each of the task clusters.", "We use the entity-level F1 score for NER to highlight the influence on boundary prediction.", "We report the average scores over 5 runs and use the development sets for CLUE tasks.", "We can see in Table 2 that the ablation of either module (Dis.Rel. & MSP) leads to a substantial drop in the average scores.", "In particular, replacing MSP with vanilla MLM, the average score of MSP drops by 1.6%.", "For the WSC.", "task, where long-range dependency is required to resolve the coreference, the gap is high up to 3.1%.", "We trace this drop into the pre-training procedure and observe the MLM accuracy for the MSP setting on the development set is 88.3%.", "However, if we mask the tokens within the segment and avoid potential leakages, the accuracy drastically drops to 48.8%, much lower than the performance of LBERT training with MSP (56.6%).", "This gap provides evidence that the MSP task prevents the PLMs from tricking the target by peeking the overlapping text units in one segment, thus encourages the PLMs to characterize the long-range dependency.", "For the LPA method, without the positional relation (Rel.), the entity-level F1 score on NER decreases by 0.4%, and the performance on CMRC decreases by 0.7%.", "The performance drops are similar to the case without distance information (Dis.).", "Without either of them (Dis. Rel.), the gaps widen to 0.5% and 2.8%, respectively.", "The boundary predictions in NER and CMRC are more sensitive to the local linguistic structures like nested words or overlapping ambiguity.", "With the positional relation and distance charaterized in attention, LBERT could accurately model the interaction between the nested and overlapping tokens in different segmentation results.", "Meanwhile, the accuracy of WSC.", "remarkably drops without distance information.", "The performance drops by 7.5% and 5.8% when the number of characters between the pronouns and candidate phrases is larger than 30, or between 20 to 30, respectively.", "For the rest cases, the drop is only 0.4%.", "With explicitly modeling of distance, LBERT predicts the long-distance coreference relations more accurately.", "Averagely, without the positional relation and distance modeling in LPA, the performance drops by 2.0% on the three tasks, showing the importance of LPA in assisting the PLMs to exploit the multi-granularity structures in word lattices.", "How LBERT Improves Fine-grained PLMs?", "We compare the prediction results of LBERT and the character-level BERT-our in base -size on development sets to investigate how the LBERT outperforms the vanilla fine-grained PLMs.", "Intuitively, the word-level tokens in lattices provide coarse-grained semantics, which argument the character-level inputs.", "We observe in TNEWS, the short text classification task, LBERT brings more improvement in the shorter instances, where the statements may be too short to provide enough context for predictions.", "By dividing the development set into five bins with equal size according to the sentence length, LBERT outperforms BERT-our by 2.3% and 1.3% in the shortest and second shortest bins, respectively, larger than the average gain on the rest instances (0.6%).", "We think the redundant tokens in word lattices provide rich context for the semantics of these short statements.", "For example, for the short title / the cinema in our village , with the redundant words, / movie , /cinema, and / cinema , introduced in the lattice, LBERT classifies the instance as entertainment news instead of news stories.", "Another case is the CSL task, where the target is to predict whether the candidate words are keywords for a given paragraph.", "For those instances, where LBERT identifies more than two word-level tokens from each candidate word averagely, which accounts for 47% of the dataset, the performance gain is 3.0%, significantly larger than the average improvement of the rest, 1.0%.", "We think LBERT understands the key words from various aspects by exploiting the redundant expressions in lattices.", "For example, from the keyword candidate / solar battery , the /solar, /solar energy, /battery, and / solar battery Figure 4: Visualization of the attention scores of / Research life is very fulfilling .", "are lattice tokens.", "With these word-level tokens, LBERT could match this candidate with the expressions in the paragraph like / positive electrode , / light , / electron , / ion , etc.", "On the other side, for MSRA-NER, LBERT reduces the errors in identifying entities with nested structures.", "Averagely, the number of error cases where the predicted entities are nested with the golden ones are reduced by 25% in LBERT.", "For example, the organization entity / Palestine National Liberation Movement is nested with the location entity / Palestine and ends with an indicator to organizations, / movement .", "The character-level baseline model mistakenly recognizes the / Palestine and / move as a location and an organization, separately.", "While LBERT identifies this entity correctly after integrating the words, / liberate , / Palestine , and / movement .", "With the pre-trained multi-granularity representations, LBERT fuse the contextual information from words and characters simultaneously, and detects the correct entity in success.", "How does LBERT harness multi-Granularity representations?", "LBERT consumes all the words and characters from input sequences simultaneously, but how does the model utilizes such multi-granularity representations during pre-training and downstream tasks?", "To investigate this, we use the average attention scores that each lattice token receives among all layers and all heads to represent its importance.", "As the example shown in Figure 4, before fine-tuning, LBERT focuses on tokens including / live , / fulfilling , / research , / graduate student , / investigate , etc.", "Before fine-tuning on specific tasks, the model captures various aspects of the sentence.", "After fine-tuning with MSRA-NER, the most focused words become / fulfilling , Question : What is the game with the theme song which is sung by Chen Yiting and is composed by Zeng Zhihao?", "/ very , / life , and / research , i.e., the tokens from the golden segmentation result, | | | , which is intuitively beneficial for the NER tasks.", "The attention score of the wrong segmented word, / graduate student , drops remarkably.", "On the other hand, after fine-tuning with the news title classification task, TNEWS, LBERT tends to focus on / fulfilling , / graduate student , / life , etc.", "Although these tokens can not co-exist in one Chinese word segmentation result, LBERT can still utilize the redundant information from various plausible segmentations to identify the topics of inputs.", "These results indicate that Lattice-BERT can well manage the lattice inputs by shifting the attention to different aspects among the multi-granularity representations according to specific downstream tasks.", "Case Study.", "Table 3 shows an example in CMRC, a span selection MRC task, where models choose a text span from the given document to answer the question.", "In this case, the question asks for a game , restricted by its theme song .", "BERT-our incorrectly outputs a theme song, The Song of China , since there is no expression in the document explicitly related to game .", "However, LBERT find the correct answer, The Legend of Sword and Fairy V .", "One possible reason is that The Legend of Sword and Fairy is an entry in the vocabulary for lattice construction.", "LBERT may have learned this word as an entity for a famous video game from the context in pre-training by explicitly exploiting its representation as a whole.", "With the coarse-grain text units in pre-training, LBERT directly encodes knowledge about these units to benefit the downstream tasks.", "(i.e., BERT-our) have the same training epochs when the training steps are equal following previous works (Diao et al., 2020; Zhang and Li, 2020).", "Thus, comparing with BERT-our, 35% more text units are introduced in the pre-training instances of LBERT, which introduces 48% more computational resources comparing with BERT-our to process the additional word-level tokens (See Appendix C).", "To illustrate the gains attribute to the incorporation of lattices instead of additional computations, we investigate the lite -size BERT-our with longer input sequences in pre-training, which has the same computational costs as LBERT.", "We find LBERT still outperforms BERT-our by 2.2% averagely on CLUE classification tasks.", "More details are elaborated in Appendix D. 4 Related Works Recently, several works utilize the lattice structures to explore multi-granularity information in Chinese NLU tasks.", "Buckman and Neubig (2018) incorporate lattice into recurrent neural networks based language modeling to capture marginal possibilities across all possible paths.", "In NER, Lattice-LSTM (Zhang and Yang, 2018), graph neural networks (Gui et al., 2019), and flat-lattice transformers (Li et al., 2020a) are adopted to incorporate words from lattice inputs.", "Lai et al. (2019) adapt convolutional neural networks to lattice for matching based question answering.", "Chen et al. (2020) adopt graph matching networks to perform multi-granularity interaction between lattices for text similarity.", "These works are designed to explore word lattices in specific tasks.", "We explore the multi-granularity representations with word lattices in PLMs, investigating the previously attempted downstream tasks as well as other tasks, e.g., MRC.", "We design LPA to meet various interaction needs of the downstream tasks and propose MSP to avoid the leakages.", "In the field of Chinese PLMs, some efforts incorporate coarse-grained information with the character-level inputs.", "ERNIE 1.0 (Sun et al., 2019) and BERT-wwm (Cui et al., 2019) propose to mask the words, entities, and phrases as a whole in the MLM task to encourage the modeling of coarse-grained features.", "ZEN (Diao et al., 2020) adopt auxiliary networks to integrate n-gram representations.", "BERT-MWA (Li et al., 2020b) propose word-aligned attention to use multiple segmentation boundaries.", "Different from their methods, we propose Lattice-BERT to consume multi-granularity tokens simultaneously into one PLM via lattice graphs.", "Thus, Lattice-BERT explicitly exploits the representations of the coarse-grained units, as well as the interactions among wordand character-level tokens.", "The proposed MSP task can be treated as an extension of the whole word masking (Cui et al., 2019), while considering the span information like Joshi et al. (2020, SpanBERT) according the lattice structures.", "The concurrent work, Zhang and Li (2020, AMBERT) investigate multi-granularity inputs similarly, but they use two transformer encoders to separately deal the word and character sequences.", "We treat the words and characters as lattice graphs, which enables thor-ough interactions among multi-granularity tokens and utilizes all potential segmentation results.", "In this paper, we propose Lattice-BERT to leverage multi-granularity representations of input sentences for Chinese PLMs.", "Specifically, Lattice-BERT takes a word-lattice as input, modeling the representations of words and characters simultaneously.", "We design the lattice position attention to embed the multi-granularity structure into transformers and propose the masked segment prediction task to avoid potential leakage in original MLM caused by the redundancy information in lattices.", "We conduct extensive experiments on 11 Chinese NLU tasks and observe consistent gains over character-level baselines, achieving new sota on CLUE benchmarks.", "We show that Lattice-BERT can well manage the lattice inputs and utilize multi-granularity representations to augment the character-level inputs.", "We believe the lattice structure can be adapted to integrate the phrase and word representations into the word-piece based PLMs in other languages, which we leave for future exploration.", "This work is supported in part by the National Hi-Tech RD Program of China (No. 2020AAA0106600), the NSFC under grant agreements (61672057, 61672058).", "For any correspondence, please contact Yansong Feng." ]
[ "abstain", "objective", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "method", "abstain", "abstain", "other", "abstain", "objective", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "method", "other", "other" ]
[ "Class-based language models (LMs) have been long devised to address context sparsity in n -gram LMs.", "In this study, we revisit this approach in the context of neural LMs.", "We hypothesize that class-based prediction leads to an implicit context aggregation for similar words and thus can improve generalization for rare words.", "We map words that have a common WordNet hypernym to the same class and train large neural LMs by gradually annealing from predicting the class to token prediction during training.", "Empirically, this curriculum learning strategy consistently improves perplexity over various large, highly-performant state-of-the-art Transformer-based models on two datasets, WikiText-103 and ARXIV .", "Our analysis shows that the performance improvement is achieved without sacrificing performance on rare words.", "Finally, we document other attempts that failed to yield empirical gains, and discuss future directions for the adoption of class-based LMs on a larger scale.", "Over the course of the past decades, language modeling (LM) has transitioned from n -gram to neural models (Bengio et al., 2003; Mnih and Hinton, 2007; Devlin et al., 2019; Brown et al., 2020).", "Performance improvement of today's neural LMs is often achieved at the cost of increased computational resources.", "For example, to capture long-term dependencies, various extensions of Transformer-based LMs have been proposed (Dai et al., 2019; Rae et al., 2020).", "These modifications bring about significant improvements on held-out perplexity, but training cost also increases significantly due to large GPU memory consumption and more computations at each training step.", "A final torch used to enter Empire Stadium that was made of stainless steel and powered by a magnesium candle Original Text:", "Replaced with hypernym class: A final instrumentality.n.03 used to enter Empire structure.n.01 that was made of alloy.n.01 alloy.n.01 and powered by a metallic_element.n.01 instrumentality.n.03", "and Rush, 2019; Deng et al., 2020).", "In this paper, we explore the effectiveness of class-based language models (CLMs, Brown et al. 1992) in the context of neural LMs.", "CLMs group individual words into coarser-grained classes and has proven effective in alleviating context sparsity in n -gram LMs (Dagan et al., 1999).", "It has been also used to improve computational efficiency in neural LMs (Morin and Bengio, 2005; Grave et al., 2017a).", "More recently, Levine et al. (2020) pretrain masked LMs (Devlin et al., 2019) by predicting WordNet supersense labels.", "However, the work focuses on word-sense disambiguation tasks and doesn't provide clear evidence of gains in terms of perplexity.", "In this paper, we revisit CLM and assign words to classes by leveraging hypernym relations from the WordNet (Miller, 1995).", "Our proposal, dubbed Hypernym Class Prediction (HCP) is simple and effective: for each batch, we substitute a subset of the tokens with their WordNet hypernyms (see Figure 1).", "Then, we train an autoregressive LM on the resulting sentences using a mixed vocabulary composed of hypernyms and tokens.", "Crucially, we anneal the substitution rate during training, i.e., we gently switch from hypernym prediction to token prediction, following a curriculum learning approach.", "Note that this approach does not require WordNet information at inference time nor 1352 increases training time.", "Our approach is motivated by two hypotheses.", "Firstly, mapping words to their hypernyms gives rise to a natural gradation of difficulty in the prediction task.", "Prior work has shown that LM bene-fits from training on instances of increasing difficulty (Bengio et al., 2009; Press et al., 2021).", "We thus postulate that, when coupled with the right curriculum, HCP can improve LM training and perplexity.", "Secondly, we hypothesize that HCP can improve rare word generalization through implicit context sharing.", "Neural models still struggle to learn reliable representations for rare words (Schick and Schtze, 2020).", "With CLM-based models, data sparsity for rare words can be abated, e.g., when the representation of their contexts are potentially drawn closer to those of their more frequent siblings by way of label (hypernym) sharing.", "Empirically, the proposed method consistently yields about 0.61.9% relative reduction in perplexity over baselines on the WikiText-103 dataset (Merity et al., 2016), and 1.33.1% on the ARXIV dataset (Lazaridou et al., 2021).", "These improvements are observed with respect to memory-augmented (Dai et al., 2019) and segment-aware (Bai et al., 2021) LMs.", "Importantly, the proposed method improves performance for both rare and frequent words.", "We also observe that this is in contrast with performance improvements in regular LMs, which seem to be achieved at the cost of worsened performance on rare words.", "To the best of our knowledge, this is the first work that shows how perplexity of Transformer LMs can be improved by leveraging hypernymy relationships.", "We provide an extensive ablation study highlighting crucial elements of HCP.", "Amongst those, we found particularly important to adopt a curriculum learning approach, rather than multi-objective learning or adaptive-softmax, and excluding frequent words from the hypernym prediction task.", "We highlight the simplicity and effectiveness of the proposed method as our main contribution, and hope this study would facilitate further exploration in this line of research.", "Transformer-based models are now popular language models.", "Dai et al. (2019) propose Transformer-XL by extending the vanilla Transformer (Vaswani et al., 2017) with a memory segment, which can encode more context tokens to predict the next token.", "Rae et al. (2020) extend Transformer-XL with a compressed memory segment to further encode long-time context memory.", "Other works explore different sparse Transformers to encode much longer sequences for LM (Beltagy et al., 2020; Roy et al., 2021).", "Bai et al. (2021) propose a segment-aware Transformer (Segatron) to encode more positional information for language modeling.", "Despite their effectiveness, neural models still struggle to learn reliable representations for rare words.", "Some approaches have been proposed to tackle this challenge by way of morphology (Luong et al., 2013), lexical similarity (Khas-sanov et al., 2019), context similarity (Schick and Schtze, 2020; Khandelwal et al., 2020) and tokenization (Kudo and Richardson, 2018).", "In addition to the model modifications, other work investigated curriculum learning to train LMs.", "Bengio et al. (2009) first find that curriculum learning could benefit LM training by training with high-frequency tokens first and low-frequency tokens later.", "Wu et al. (2021) find that curricula works well when the training data is noisy or the training data is too large to iterate multiple epochs.", "Press et al. (2021) find that training Transformer-based LMs with short sequences first could improve convergence speed and perplexity.", "Related work aimed at integrating WordNet information into pretrained language models.", "Levine et al. (2020) propose SenseBERT by adding the word sense (WordNet supersense) prediction as an additional task during BERT (Devlin et al., 2019) pre-training.", "SenseBERT outperforms BERT on both word supersense disambiguation (Raganato et al., 2017) task and word in context (Pilehvar and Camacho-Collados, 2019) task.", "Recently, Porada et al. (2021) use WordNet hypernymy chains as input to a Roberta (Liu et al., 2019) model to predict the plausibility of input events.", "In this work, our focus is to improve performance of auto-regressive LMs.", "We show that a multi-task strategy harms performance in this setting, and give a successful recipe to consistently boost LM performance with class-based predictions.", "Coupling class-based LM (CLM) and curriculum learning, HCP is to gradually anneal class prediction to token prediction during LM training.", "In this section, we first describe how we instantiate word classes by leveraging hypernym relation from the 1353 Entity.n.01 physical_entity.n.01 matter.n.03 substance.n.01 chemical_element.n.01 abstraction.n.06 relation.n.01 part.n.01", "Code 1: Pseudocode for token to class mapping.", "WordNet.", "We then present how to incorporate the proposed Hypernym Class Prediction task into LM training via curriculum learning.", "WordNet (Miller, 1995) is a lexical database that groups words into sets of cognitive synonyms known as synsets, which are in turn organized into a directed graph by various lexical relations including the hypernymy ( is-a ) relation.", "As shown in Figure 2, each vertex is a synset, labeled by the text within the box, and each edge points from the hypernym (supertype) to the hyponym (subtype).", "Note that a word form (spelling) may be associated with multiple synsets each corresponding to a different sense of the word, which are sorted by the frequency of the sense estimated from a sense-annotated corpus.", "For example, iron has 6 synsets, among which iron.n.01 is the most common one.", "Hence, if two words share the same hypernym at a certain level in their hypernym-paths (to the root in WordNet), we could say they are similar at that level.", "Here we use \"Depth\" to quantify the hypernym-path level.", "In Figure 2, for example, at Depth 6, iron and magnesium are mapped to the same group named metallic_element.n.01, while desk is mapped to instrumentality.n.03.", "At Depth 2, all these three words share the same (indirect) hypernym physical_entity.n.01.", "In this work, we map each token in our training set into its hypernym class if this token (1) has a noun synset in the WordNet, (2) with a hypernym-path longer than a given depth d , and (3) has frequency below a given threshold f in the training corpus.", "We only consider nouns because it is not only the most common class in the WordNet but also a difficult class for LMs to learn (Lazaridou et al., 2021).", "For tokens with multiple synsets, we iterate over the synsets in the order of sense frequency and break the loop once found.", "We select the most frequent synset no less than the required depth.", "The mapping pseudocode is illustrated in Code 1, which is a data pre-processing algorithm conducted only once before the training and takes no more than 5 minutes in our implementation.", "We first partition the vocabulary into V x and V x based on whether or not a token has a hypernym in the WordNet, and V h denotes the set of all hypernyms.", "The original task in a Transformer-based LM is then to predict the token w j 's probability with the output x from the last layer: P ( y = w j | x ) = exp ( x T v wj ) (cid:80) wk Vx V x exp ( x T v wk ) (1) where w k is the k th word in the original vocabulary and v w k is its embedding.", "Here we assume the output layer weights are tied with the input em-1354 0 20k 40k ... ...", "To do the Hypernym Class Prediction step, we replace all tokens in V x in a batch of training data with their corresponding hypernym classes in V h .", "After the replacement, only hypernym classes in V h and tokens in V x can be found in that batch.", "Then, the LM probability prediction becomes: P ( y = w j | x ) = exp ( x T v wj ) (cid:80) wk Vh V x exp ( x T v wk ) (2) where w j could be either a token or a hypernym class.", "We called this batch step is a Hypernym Class Prediction (HCP) step.", "Note that Eq.", "2 is different from the multi-objective learning target, where the hypernym class would be predicted separately: P ( y = w j | x ) = exp ( x T v wj ) (cid:80) wk Vh exp ( x T v wk ) (3) where w j is a hypernym class.", "We train a LM by switching from HCP to token prediction.", "For the example in Figure 2, our target is to teach a model to distinguish whether the next token belongs to the metallic element class or instrumentality class during the earlier stage in training, and to predict the exact word from magnesium, iron, and desk later.", "Inspired by Bengio et al. (2009), we choose curriculum learning to achieve this.", "Curriculum learning usually defines a score function and a pacing function, where the score function maps from a training example to a difficulty score, while the pacing function determines the amount of the easi-est/hardest examples that will be added into each epoch.", "We use a simple scoring function which treats HCP as an easier task than token prediction.", "Therefore, there is no need to sort all training examples.", "The pacing function determines whether the current training step is a HCP step, i.e. whether tokens will be substituted with their hypernyms.", "Our pacing function can be defined as: P ( y = c | t ) = (cid:26) b t < a N 0 t a N (4) or P ( y = c | t ) = (cid:26) b b t a N t < a N 0 t a N (5) where P ( y = c | t ) is the probability that the current step t is a hypernym class prediction step.", "N is the total training steps.", "a and b are hyper-parameters.", "So, Eq.", "4 is a constant pacing function in the first a N steps, while Eq.", "5 is a linear decay function.", "We plot these two functions in Figure 3.", "According to our experimental results Tab.", "5, these two functions are both effective in improving the language model.", "We conduct experiments on two datasets.", "WikiText-103 (Merity et al., 2016) is a large word-level dataset with long-distance dependencies for language modeling.", "There are 103M tokens and 28K articles (3.6K tokens per article on average).", "The original vocabulary size is 271121, among which we find 3383 hypernym classes for 71567 tokens with d = 6 and f = 6000 (Section 3.1).", "ARXIV (Lazaridou et al., 2021) is collected from publicly available arXiv abstracts 1 with an average of 172 words per abstract and partitioned into training (1986Sept 2017), evaluation (AugDec 2017), and test (20182019).", "Following Lazaridou et al. (2021), we use the BPE (Sennrich et al., 2015) tokenization for this dataset.", "The final vocabulary size is 48935, and we find 1148 hypernym classes for 5969 tokens among the vocabulary with d = 6 and f = 1000 .", "Several variants of the Transformer model have been used for our experiments: small model: 12 layers, 10 heads, hidden size 300, batch size 256, training steps 100k; base model: 16 layers, 10 heads, hidden size 410, batch size 64, training steps 200k; 1 https://arxiv.org/help/oa/index 1355 Model #Param.", "The input lengths are 150 for the base model and 384 for the large model.", "The memory length is equal to the input length for both training and testing.", "The hyper-parameters used for the ARXIV dataset are as same as the WikiText-103 , except the ARXIV base model's input length is 384.", "The number of training steps varies greatly for the large model in previous work, so we experiment on both the lower (80k) higher (350k) ends.", "Our main results are shown in Table 1.", "We can see that all architectures could benefit from HCP: Transformer-small improved 0.6 ppl, Transformer-base improved 0.5, Segatron-XL base improved 0.4, Transformer-large improved 0.5, and Segatron-XL large improved 0.1.", "We also plot the validation perplexities of small and large models trained with and without HCP in Figure 4.", "In the beginning, the perplexity of the HCP models is higher due to the mixed training steps from the two tasks, but we can see that HCP perplexity goes down faster than the baseline method.", "And after fully switching to token prediction, HCP outperforms the baseline method quickly and the gap between these two methods remains stable.", "These results suggest that HCP is indeed effective in improving LM training.", "compare the Segatron-XL base model trained with and without HCP.", "The results are shown in Table 2.", "The improvements over the validation set and test set are 0.6 and 0.75 respectively.", "For the large model, we use the same model architecture and 1356 Model #Param.", "hyper-parameters as the WikiText-103 large model but change the vocabulary to BPE sub-tokens.", "The final perplexity outperforms its counterparts about 0.4 and outperforms a larger model trained with 1024 input sequence length over 0.47, while our model length is 384.", "In addition to the overall perplexity comparison, we also conduct comparisons with frequency-stratified validation subsets, to show the perplexity of tokens that has been replaced with the hypernym classes during training.", "Results are shown in Figure", "5. We can see that, after the first 12k hypernym class prediction steps, there is a large gap between our HCP model and the baseline model as the HCP model only learn to predict the hypernym class instead of the token itself.", "After that, in the next 12k steps, HCP's PPL decreases faster, achieves similar PPL at 24k steps, and finally outperforms the baseline method in all frequency groups.", "The results show that our proposed training method can benefit the learning of the replaced tokens in various frequencies.", "Strikingly, we observe that, for the baseline, more training steps lead to a degradation of performance for rare tokens, a behavior that deserves investigation in future work.", "We further conduct pairwise model comparisons with tokens that have been replaced during HCP training on the WikiText-103 test set.", "Given two models, we compare the prediction probabilities for each occurrence of a target token, and register a win for the model with a higher probability.", "We then calculate the percentage of winnings (as well as ties) for each model by tallying over all occurrences of the token.", "The results are then strat-ified by token frequency and plotted in Figure", "6. The better model is placed on the right in both 1357 all_tokens freq<1000 freq<500 freq<400 freq<300 freq<100 freq<50 freq<30 freq<20 freq<10 freq<5 0% 25% 50% 75% 100% Baseline_better Indistinguishable HCP_LM_better", "(b) Baseline model and sub-optimal model Figure 6: Pairwise comparison results.", "In Figure", "6(a), we see that HCP outperforms the baseline model on all frequency strata.", "Interestingly, the performance gap widens as frequency decreases , indicating that HCP is beneficial in modeling rare tokens.", "In Figure", "6(b), we compare the baseline model against an under-optimized model of identical architecture but slightly different hyper-parameters.", "2 Here, the (optimal) baseline outperforms the sub-optimal model on all but the least frequent stratum, suggesting the possibility that perplexity reduction (resulting from hyperparameter tuning in this case) might be achieved by improving frequent word prediction at the expense of rare words .", "This is inline with observations made recently in vision tasks (Sagawa et al., 2020).", "We conduct ablation studies with WikiText-103 dataset and Transformer small model to investigate how to map words to hypernym classes, how to select curriculum learning pacing functions and to show why we use curriculum training.", "The hypernym classes are chosen from the hypernym-paths in WordNet.", "Considering that a hypernym-path consists of multiple hypernyms, it 2 The sub-optimal model has batch size 128 instead of the optimal 64, and the perplexity gap between these two models is observed to be slightly larger than that between HCP and the baseline (0.9 vs 0.5).", "is not straightforward to tell which layer is the best.", "But the best depth d should be some layer in the middle.", "Because a small depth might map multiple distant words into the same class, while a large depth will result in too many classes which are hard for a model to learn.", "The extreme examples could be d = 1 and d = , corresponding to mapping all candidate words into the class Entity.n.01 and mapping each word into itself respectively.", "In Table 3, we show evaluation results among different depth selections.", "We find that depth 6th is the best choice, with the lowest valid perplexity.", "The results also confirm our assumption that the best one would be some middle layer.", "In addition to the hypernym-path depth, we also investigate how to select frequency threshold f .", "As we mentioned above, our target is to map similar 1358 FilterFreq.", "words into the same class, where predicting a hypernym class might be easier than predicting multiple different words.", "After the mapping process, low-frequency words can be clustered into hypernym classes with higher frequency.", "Table 4 shows the results of different f .", "We can see that f = 6000 achieves the best results while f = (without filter) is the worst.", "We hypothesize this might be due to two reasons.", "First, for some high-frequency common words, the model can learn them well already, while mapping them into hypernym classes may be superfluous or even harmful.", "Second, including frequent words skews the marginal distribution over hypernym classes, causing hypernym prediction to be more class-imbalanced, which in turn might lead to collapsed representation in the resulting LM (Fang et al., 2021).", "This hypothesis deserves further investigation.", "It should be noted that although the difference of #Rep.Tokens looks minor, the difference in the token's appearance is significant.", "For example, f = maps only 776 additional tokens compared with f = 8000 , but each token's appearance is more than 8000, which explains the different perplexities in Table 4.", "Table 5 shows the results of models trained with various curriculum pacing functions.", "We also report the validation perplexities of the tokens that have ever been replaced with hypernym class (Rep.PPL) during training and tokens without hypernym class (NonRep.PPL).", "For the constant pacing function, we fix b = 1 and change the value of a , In this case, the models are always training with HCP in the first a 100 k steps and then switch to the token prediction training, which is a pre-training pacing function.", "We can see that all models outperform the baseline model over the validation perplexity.", "Rep.PPL improves from 348 to 339.", "The perplexity of NonRep.PPL between baseline model and HCP models are similar, except the model trained with a = 4 , which indicates the pre-training should not take up too many steps.", "For the linear pacing function, we choose some specific a and b to achieve the same HCP steps as the constant functions above.", "For simplicity, we also set a = b .", "In Table 5, we can see that the overall perplexity of the linear functions is similar to the corresponding constant functions, where the Non-Rep.PPL is slightly decreased while the Rep.PPL is slightly increased.", "We conduct a grid search over different pacing functions with Transformer small model and WikiText-103 , and finally, use the constant function with a = 0 .", "12 and b = 0 .", "8 for all base models and large models.", "Curriculum hyper-parameters could be transferred to the ARXIV dataset successfully.", "However, we tune the frequency threshold f on each dataset, because different tokenization methods change the frequency distribution.", "All HCP models in Table 2 are using d = 6 , f = 1000 , and the constant pacing function with a = 0 .", "12 and b = 0 .", "8 .", "We also experimented with two other methods to incorporate hypernym information into LM training.", "Although neither method has yielded any empirical gain, we nonetheless report these methods and offer possible explanations for their failure.", "Multi-objective Training Multi-objective (or multi-task) training consists in a weighted sum of token and hypernym prediction losses.", "We set the weight of the hypernym prediction loss to 0.2.", "The prediction of a token is calculated with Eq.", "1.", "The prediction of a hypernym class is calculated with Eq.", "3, where x can be the output vector from any layer in the Transformer LM.", "Table 6 lists the results using the last layer and the 8th layer.", "Using the last layer significantly undermines the original token prediction results.", "Using the 8th layer is better but the final perplexity is still no better than the baseline model.", "Simply forcing the language model to predict the hypernym class for each token is harmful to LM performance.", "We also tried to replace Eq.", "3 with Eq.", "2, by mixing V h and V w together when predicting the hypernym classes (mix vocab).", "This significantly improves multi-objective 1359 Constant Func.", "training.", "Learning to predict the hypernym class from a mixed vocabulary V h V w is better than only hypernym classes V h .", "Adaptive Softmax Another method is the adaptive-softmax (Grave et al., 2017a), where the model first predict the hypernym probability among V h V w and then predict the token probability among the tokens with the same hypernym class.", "In Table 6, we can see that the adaptive-softmax is no better than the multi-objective trained model.", "By looking into the poor perplexity of Rep.PPL, we find this method cannot improve the prediction of tokens in V w .", "We believe this is due to the noise of hypernym class mapping, where we choose the first synset path as the token's hypernym synset without considering the context.", "Such noise will affect the adaptive-softmax prediction but is not an issue for curriculum training as the final training stage is fully trained with the original text.", "In this work, we propose a new LM training strategy with WordNet's super-subordinate relation and curriculum learning.", "Although WordNet is an external resources, it's not clear how to get lower perplexity using WordNet before this work.", "Consistent perplexity reduction can be observed over various models.", "Both rare and frequent tokens can be modeling better with our proposed method while other optimization method may sacrifice the performance on rare tokens.", "We'd like to address the limitations of this work: other methods to map words to classes; LM experiments with other languages; pre-training LM with our proposed method and testing on downstream tasks.", "We hope to investigate these directions in the future." ]
[ "abstain", "abstain", "objective", "method", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "abstain", "objective", "abstain", "abstain", "result", "objective", "abstain", "abstain", "other", "abstain", "abstain", "objective", "objective", "result", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "other", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "objective" ]
[ "Cross-lingual Hypernymy Detection involves determining if a word in one language (fruit) is a hypernym of a word in another language (pomme i.e. apple in French).", "The ability to detect hypernymy cross-lingually can aid in solving cross-lingual versions of tasks such as textual entailment and event coreference.", "We propose BISPARSE-DEP , a family of unsupervised approaches for cross-lingual hypernymy detection, which learns sparse, bilingual word embeddings based on dependency contexts.", "We show that BISPARSE-DEP can sig-nificantly improve performance on this task, compared to approaches based only on lexical context.", "Our approach is also robust, showing promise for low-resource settings: our dependency-based embeddings can be learned using a parser trained on related languages, with negligible loss in performance.", "We also crowd-source a challenging dataset for this task on four languages Russian, French, Arabic, and Chinese.", "Our embeddings and datasets are publicly available.", "1 1 Introduction Translation helps identify correspondences in bilingual texts, but other asymmetric semantic relationships can improve language understanding when translations are not exactly equivalent.", "One such relationship is cross-lingual hypernymy identifying that ecureuil (squirrel in French) is a kind of rodent , or (crow in Russian) is a kind of bird .", "The ability to detect hypernyms across languages serves as a building block in a range of cross-lingual tasks, including Recognizing Textual Entailment (RTE) (Negri et al., 2012, These authors contributed equally. 1 https://github.com/yogarshi/ bisparse-dep/ 2013), constructing multilingual taxonomies (Fu et al., 2014), event coreference across multilingual news sources (Vossen et al., 2015), and evaluating Machine Translation output (Pado et al., 2009).", "Building models that can robustly identify hypernymy across the spectrum of human languages is a challenging problem, that is further compounded in low resource settings.", "At first glance, translating words to English and then identifying hypernyms in a monolingual setting may appear to be a sufficient solution.", "However, this approach cannot capture many phenomena.", "For instance, the English words cook , leader and supervisor can all be hypernyms of the French word chef , as the French word does not have a exact translation in English covering its possible usages.", "However, translating chef to cook and then determining hypernymy monolingually precludes identifying leader or supervisor as a hypernyms of chef .", "Similarly, language-specific usage patterns can also influence hypernymy decisions.", "For instance, the French word chroniqueur translates to chronicler in English, but is more frequently used in French to refer to journalists (making journalist its hypernym).", "2 This motivates approaches that directly detect hypernymy in the cross-lingual setting by extending distributional methods for detecting monolingual hypernymy, as in our prior work (Vyas and Carpuat, 2016).", "State-of-the-art distributional approaches (Roller and Erk, 2016; Shwartz et al., 2017) for detecting monolingual hypernymy require syntactic analysis (eg. dependency parsing), which may not available for many languages.", "Additionally, limited training resources make unsupervised methods more desirable than supervised hypernymy detection approaches (Roller and Erk, 2 All examples are from our dataset. 607 2016).", "Furthermore, monolingual distributional approaches cannot be applied directly to the cross-lingual task, because the vector spaces of two languages need to be aligned using a cross-lingual resource (a bilingual dictionary, for instance).", "We tackle these challenges by proposing BISPARSE-DEPa family of robust, unsupervised approaches for identifying cross-lingual hypernymy.", "BISPARSE-DEP uses a cross-lingual word embedding model learned from a small bilingual dictionary and a variety of monolingual syntactic context extracted from a dependency parsed corpus.", "BISPARSE-DEP exhibits robust behavior along multiple dimensions.", "In the absence of a dependency treebank for a language, it can learn embeddings using a parser trained on related languages.", "When exposed to less monolingual data, or a lower quality bilingual dictionary, BISPARSEDEP degrades only marginally.", "In all these cases, it compares favorably with models that have been supplied with all necessary resources, showing promise for low-resource settings.", "We extensively evaluate BISPARSE-DEP on a new crowd-sourced cross-lingual dataset, with over 2900 hypernym pairs, spanning four languages from distinct families French, Russian, Arabic and Chinese and release the datasets for future evaluations.", "Cross-lingual Distributional Semantics Cross-lingual word embeddings have been shown to encode semantics across languages in tasks such as word similarity (Faruqui and Dyer, 2014) and lexicon induction (Vulic and Moens, 2015).", "Our works stands apart in two aspects (1) In contrast to tasks involving similarity and synonymy (symmetric relations), the focus of our work is on detecting asymmetric relations across languages, using cross-lingual embeddings.", "(2) Unlike most previous work, we use dependency context instead of lexical context to induce cross-lingual embeddings, which allows us to abstract away from language specific word order, and (as we show) improves hypernymy detection.", "More closely related is our prior work (Vyas and Carpuat, 2016) where we used lexical context based embeddings to detect cross-lingual lexical entailment.", "In contrast, the focus of this work is on hypernymy, a more well-defined relation than entailment.", "Also, we improve upon our previous approach by using dependency based embeddings ( 6.1), and show that the improvements hold even when exposed to data scarce settings ( 6.3).", "We also do a more comprehensive evaluation on four languages paired with English, instead of just French.", "Dependency Based Embeddings In monolingual settings, dependency based embeddings have been shown to outperform window based embeddings on many tasks (Bansal et al., 2014; Hill et al., 2014; Melamud et al., 2016).", "Roller and Erk (2016) showed that dependency embeddings can help in recovering Hearst patterns (Hearst, 1992) like animals such as cats, which are known to be indicative of hypernymy.", "Shwartz et al. (2017) demonstrated that dependency based embeddings are almost always superior to window based embeddings for identifying hypernyms in English.", "Our work uses dependency based embeddings in a cross-lingual setting, a less explored research direction.", "A key novelty of our work also lies in its use of syntactic transfer to derive dependency contexts.", "This scenario is more relevant in a cross-lingual setting, where treebanks might not be available for many languages.", "We propose BISPARSE-DEP , a family of approaches that uses sparse, bilingual, dependency based word embeddings to identify cross-lingual hypernymy.", "Figure 1 shows an overview of the end-to-end pipeline of BISPARSE-DEP .", "The two key components of this pipeline are: (1) Dependency based contexts ( 3.1), which help us generalize across languages with minimal customization by abstracting away language-specific word order.", "We also discuss how to extract such contexts in the absence of a treebank in the language ( 3.2) using a (weak) dependency parser trained on related languages.", "(2) Bilingual sparse coding ( 3.3), which allows us to align dependency based word embeddings in a shared semantic space using a small bilingual dictionary.", "The resulting sparse bilingual embeddings can then be used with a unsupervised entailment scorer ( 3.4) to predict hypernymy for cross-lingual word pairs.", "The context of a word can be described in multiple ways using its syntactic neighborhood in a dependency graph.", "For instance, in Figure 2, we describe the context for a target word ( traveler ) in the following two ways: FULL context (Pad o and Lapata, 2007; Baroni and Lenci, 2010; Levy and Goldberg, 2014): Children and parent words, concatenated with the label and direction of the relation (eg. roamed#nsubj 1 and tired#amod are contexts for traveler ).", "JOINT context (Chersoni et al., 2016): Parent concatenated with each of its siblings (eg. roamed#desert and roamed#seeking are contexts for traveler ).", "These two contexts exploit different amounts of syntactic information JOINT does not require labeled parses, unlike FULL .", "The JOINT context combines parent and sibling information, while FULL keeps them as distinct contexts.", "Both encode directionality into the context, either through label direction or through sibling-parent relations.", "We use word-context co-occurrences generated using these contexts in a distributional semantic model (DSM) in lieu of window based contexts to generate dependency based embeddings.", "Using dependency contexts in multilingual settings may not always be possible, as dependency treebanks are not available for many languages.", "To circumvent this issue, we use related languages to train a weak dependency parser.", "features are turned off, so that the parser is trained on purely non-lexical features (e.g. POS tags).", "The rationale behind this is that related languages show common syntactic structure that can be transferred to the original language, with delexicalized parsing (Zeman and Resnik, 2008; McDonald et al., 2011, inter alia) being one popular approach.", "3 3.3 Bilingual Sparse Coding Given a dependency based co-occurrence matrix described in the previous section(s), we generate BISPARSE-DEP embeddings using the framework from our prior work (Vyas and Carpuat, 2016), which we henceforth call BISPARSE .", "BISPARSE generates sparse, bilingual word embeddings using a dictionary learning objective with a sparsity inducing l 1 penalty.", "We give a brief overview of this approach, the full details of which can be found in our prior work.", "For two languages with vocabularies v e and v f , and monolingual dependency embeddings X e and X f , BISPARSE solves the following objective: argmin A e , D e , A f , D f v e X i =1 1 2 || A e i D e T X e i || 22 + e || A e i || 1 + v f X j =1 1 2 || A f j D f T X f j || 22 + f || A f j || 1 + X i,j 1 2 x S ij || A e i A f j || 22 (1) s.t. A k > 0 k D k i k 22 1 k { e , f } where S is a translation matrix, and A e and A f 3 More sophisticated techniques for transferring syntactic knowledge have been proposed (Ammar et al., 2016; Rasooli and Collins, 2017), but we prioritize simplicity and show that a simple delexicalized parser is effective.", "are sparse matrices which are bilingual representations in a shared semantic space.", "The translation matrix S (of size v e v f ) captures correspondences between the vocabularies (of size v e and v f ) of two languages.", "For instance, each row of S can be a one-hot vector that identifies the word in f that is most frequently aligned with the e word for that row in a large parallel corpus, thus building a one-to-many mapping between the two languages.", "A variety of scorers can be used to quantify the directional relationship between two words, given feature representations of these words (Lin, 1998; Weeds and Weir, 2003; Lenci and Benotto, 2012).", "Once the BISPARSE-DEP embeddings are constructed, we use BalAPinc (Kotlerman et al., 2009) to score word pairs for hypernymy.", "BalAPinc is based on the distributional inclusion hypothesis (Geffet and Dagan, 2005) and computes the geometric mean of 1) LIN (Lin, 1998), a symmetric score that captures similarity, and 2) APinc , an asymmetric score based on average precision.", "There is no publicly available dataset to evaluate models of hypernymy detection across multiple languages.", "While ontologies like Open Multilingual WordNet (OMW) (Bond and Foster, 2013) and BabelNet (Navigli and Ponzetto, 2012) contain cross-lingual links, these resources are semiautomatically generated and hence contain noisy edges.", "Thus, to get reliable and high-quality test beds, we collect evaluation datasets using CrowdFlower 4 .", "Our datasets span four languages from distinct families French (Fr), Russian (Ru), Arabic (Ar) and Chinese (Zh) paired with English.", "To begin the annotation process, we first pool candidate pairs using hypernymy edges across languages from OMW and BabelNet, along with translations from monolingual hypernymy datasets (Baroni and Lenci, 2011; Baroni et al., 2012; Kotlerman et al., 2010).", "The annotation task requires annotators to be fluent in both English and the non-English language.", "To ensure only fluent speakers perform the task, for each language, we provide task instructions in the non-English language itself.", "Also, we restrict the task to annotators verified by CrowdFlower to have those language skills.", "Finally, annotators also 4 http://crowdflower.com pair #crowdsourced #pos (= #neg) French-English 2115 763 Russian-English 2264 706 Arabic-English 2144 691 Chinese-English 2165 806 Table 1: Crowd-sourced dataset statistics.", "need to pass a quiz based on a small amount of gold standard data to gain access to the task.", "Annotators choose between three options for each word pair ( p f , q e ), where p f is a non-English word and q e is a English word : p f is a kind of q e , q e is a part of p f and none of the above.", "Word pairs labeled with the first option are considered as positive examples while those labeled as none of the above are considered as negative.", "5 The second option was included to filter out meronymy examples that were part of the noisy pool.", "We leave it to the annotator to infer whether the relation holds between any senses of p f or q e , if either of them are polysemous.", "For every candidate hypernym pair ( p f , q e ) , we also ask annotators to judge its reversed and translated hyponym pair ( q f , p e ) .", "For instance, if ( citron, food ) is a hypernym candidate, we also show annotators ( aliments, lemon ) which is a potential hyponym candidate ( potential , because as mentioned in 1, translation need not preserve semantic relationships).", "The purpose of presenting the hyponym pair, ( q f , p e ) , is two-fold.", "First, it emphasizes the directional nature of the task.", "Second, it identifies hyponym pairs, which we use as negative examples.", "The hyponym pairs are challenging since differentiating them from hypernyms truly requires detecting asymmetry.", "Each pair was judged by at least 5 annotators, and judgments with 80% agreement (at least 4 annotators agree) are considered for the final dataset.", "This is a stricter condition than certain monolingual hypernymy datasets for instance, EVALution (Santus et al., 2015) where agreement by 3 annotators is deemed sufficient.", "Inter-annotator agreement measured using Fleiss' Kappa (Fleiss, 1971) was 58.1 (French), 53.7 (Russian), 53.2 (Arabic) and 55.8 (Chinese).", "This indicates moderate agreement, on par with agreement obtained on related fine-grained semantic tasks (Pavlick et al., 2015).", "We cannot compare with monolin-5 We collected more negative pairs than positive, but sampled so as to keep a balanced dataset for ease of evaluation.", "gual hypernymy annotator agreement as, to the best of our knowledge, such numbers are not available for existing test sets.", "Dataset statistics are shown in Table 1.", "We observed that annotators were able to agree on pairs containing polysemous words where hypernymy holds for some sense.", "For instance, for the French-English pair ( avocat , professional ), the French word avocat can either mean lawyer or avocado , but the pair was annotated as a positive example.", "Hence, we leave it to the annotators to handle polysemy by choosing the most appropriate sense.", "To verify if the crowdsourced hyponyms are challenging negative examples we create two evaluation sets.", "Both share the (crowdsourced) positive examples, but differ in their negatives: HYPER-HYPO negative examples are the crowdsourced hyponyms.", "HYPER-COHYPO negative examples are cohyponyms drawn from OMW.", "Cohyponyms are words sharing a common hypernym.", "For instance, bi`ere (beer in French) and vodka are cohyponyms since they share a common hypernym in alcool / alcohol .", "We choose cohyponyms for the second test set because:", "(a) They require differentiating between similarity (a symmetric relation) and hypernymy (an asymmetric relation).", "For instance, bi`ere and vodka are highly similar yet, they do not have a hypernymy relationship.", "(b) Cohyponyms are a popular choice of negative examples in many entailment datasets (Ba-roni and Lenci, 2011).", "Training BISPARSE-DEP requires a dependency parsed monolingual corpus, and a translation matrix for jointly aligning the monolingual vectors.", "We compute the translation matrix using word alignments derived from parallel corpora (see corpus statistics in Table ?? ).", "While we use parallel corpora to generate the translation matrix to be comparable to baselines ( 5.2), we can obtain the matrix from any bilingual dictionary.", "The monolingual corpora are parsed using Yara Parser (Rasooli and Tetreault, 2015), trained on the corresponding treebank from the Universal Dependency Treebank (McDonald et al., 2013) (UDT-v1.4).", "Yara Parser was chosen as it is fast, and competitive with state-of-the-art parsers (Choi et al., 2015).", "The monolingual corpora was POS-tagged using TurboTag-ger (Martins et al., 2013).", "We induce dependency contexts for words by first thresholding the language vocabulary to the top 50,000 nouns, verbs and adjectives.", "A co-occurrence matrix is computed over this vocabulary using the context types in 3.1.", "Inducing Dependency Contexts The entries of the word-context co-occurrence matrix are re-weighted using Positive Pointwise Mutual Information (Bullinaria and Levy, 2007).", "The resulting matrix is reduced to 1000 dimensions using SVD (Golub and Kahan, 1965).", "6 These vectors are used as X e , X f in the setup from 3.3 to generate 100 dimensional sparse bilingual vectors.", "Evaluation We use accuracy as our evaluation metric, as it is easy to interpret when the classes are balanced (Turney and Mohammad, 2015).", "Both evaluation datasets HYPER-HYPO and HYPER-COHYPO are split into 1:2 dev/test splits.", "BalAPinc has two tunable parameters 1) a threshold that indicates the BalAPinc score above which all examples are labeled as positive, 2) the maximum number of features to consider for each word.", "We use the tuning set to tune the two parameters as well as the various hyper-parameters associated with the models.", "MONO-DEP (Translation baseline) For word pair ( p f , q e ) in test data, we translate p f to English using the most common translation in the translation matrix.", "Hypernymy is then determined using sparse, dependency based embeddings in English.", "BISPARSE-LEX (Window context) Predecessor of the BISPARSE-DEP model from our previous work (Vyas and Carpuat, 2016).", "This model induces sparse, cross-lingual embeddings using window based context.", "BIVEC + (Window context) Our extension of the BIVEC model of Luong et al. (2015).", "BIVEC generates dense, cross-lingual embeddings using window based context, by substituting aligned word pairs within a window in parallel sentences.", "By default, BIVEC only trains using parallel data, 6 Chosen based on preliminary experiments with { 500,1000,2000,3000 } dimensional vectors for En-Fr.", "Table 2: Training data statistics for different languages.", "Note that while we use parallel corpora for computing translation dictionaries, our approach does not require it, and can work with any bilingual dictionary.", "and so we initialize it with monolingually trained window based embeddings to ensure fair comparison.", "CL-DEP (Dependency context) The model from Vulic (2017), which induces dense, dependency based cross-lingual embeddings by translating syntactic word-context pairs using the most common translation, and jointly training a word2vecf 7 model for both languages.", "Vulic (2017) showed improvements for word similarity and bilingual lexicon induction.", "We report the first results using CL-DEP on this task.", "We investigate how robust BISPARSE-DEP is when exposed to data scarce settings.", "Evaluating on a truly low resource language is complicated by the fact that obtaining an evaluation dataset for such a language is difficult.", "Therefore, we simulate such settings for the languages in our dataset in multiple ways.", "No Treebank If a treebank is not available for a language, dependency contexts have to be induced using treebanks from other languages ( 3.2), which can affect the quality of the dependency-based embeddings.", "To simulate this, we train a delexicalized parser for the languages in our dataset.", "We use treebanks from Slovenian, Ukrainian, Serbian, Polish, Bulgarian, Slovak and Czech (40k sentences) for training the Russian parser, and treebanks from English, Spanish, German, Portuguese, Swedish and Italian (66k sentences) for training the French parser.", "UDT does not (yet) have languages in the same family as Arabic or Chinese, so for the sake of completeness, we train Arabic and Chinese parsers on delexicalized treebanks of the language itself.", "Af-7 bitbucket.org/yoavgo/word2vecf/ ter delexicalized training, the Labeled Attachment Score (LAS) on the UDT test set dropped by several points for all languages from 76.6% to 60.0% for Russian, 83.7% to 71.1% for French, from 76.3% to 62.4% for Arabic and from 80.3% to 53.3% for Chinese.", "The monolingual corpora are then parsed with these weaker parsers, and co-ocurrences and dependency contexts are computed as before.", "Subsampling Monolingual Data To simulate low-resource behavior along another axis, we subsample the monolingual corpora used by BISPARSE-DEP to induce monolingual vectors, X e , X f .", "Specifically, we learn X e and X f using progressively smaller corpora.", "Quality of Bilingual Dictionary We study the impact of the quality of the bilingual dictionary used to create the translation matrix S .", "This experiment involves using increasingly smaller parallel corpora to induce the translation dictionary.", "We aim to answer the following questions", "(a) Are dependency based embeddings superior to window based embeddings for identifying cross-lingual hypernymy?", "( 6.1)", "(b) Does directionality in the dependency context help cross-lingual hypernymy identification?", "( 6.2)", "(c) Are our models robust in data scarce settings ( 6.3)?", "(d) Is the answer to", "(a) predicated on the choice of entailment scorer?", "( 6.4)?", "We compare the performance of models described in 5.2 with the BISPARSE-DEP (FULL and JOINT ) models.", "We evaluate the models on the two test splits described in 4.2 HYPERHYPO and HYPER-COHYPO .", "Hyper-Hypo Results Table 3a shows the results on HYPER-HYPO .", "First, the benefit of cross-lingual modeling (as opposed to translation) is evident in that almost all models (except CL-DEP on French) outperform the translation baseline.", "Among dependency based models, BISPARSEDEP (FULL ) and CL-DEP consistently outperform both window models, while BISPARSE-DEP (JOINT ) outperforms them on all except Russian.", "BISPARSE-DEP (JOINT ) is the best model overall for two languages (French and Chinese), CL-DEP for one (Arabic), with no statistically significant differences between BISPARSE-DEP (JOINT ) and CL-DEP for Russian.", "This confirms that dependency context is more useful than window context for cross-lingual hypernymy detection.", "Hyper-Cohypo Results The trends observed on HYPER-HYPO also hold on HYPER-COHYPO i.e. dependency based models continue to outperform window based models (Table 3b).", "Overall, BISPARSE-DEP (FULL ) performs best in this setting, followed closely by BISPARSEDEP (JOINT ).", "This suggests that the sibling information encoded in JOINT is useful to distinguish hypernyms from hyponyms (HYPER-HYPO re-sults), while the dependency labels encoded in FULL help to distinguish hypernyms from cohyponyms.", "Also note that all models improve sig-nificantly on the HYPER-COHYPO set, suggesting that discriminating hypernyms from cohyponyms is easier than discriminating them from hyponyms.", "While the BISPARSE-DEP models were generally performing better than window models on both test sets, CL-DEP was not as consistent (e.g., it was worse than the best window model on HYPER-COHYPO ).", "As shown by Turney and Mohammad (2015), BalAPinc is designed for sparse embeddings and is likely to perform poorly with dense embeddings.", "This explains the relatively inconsistent performance of CL-DEP .", "Besides establishing the challenging nature of our crowd-sourced set, the experiments on HYPER-COHYPO and HYPER-HYPO also demonstrate the ability of the BISPARSE-DEP models to discriminate between different lexical semantic relations (viz. hypernymy and cohyponymy) in a cross-lingual setting.", "We will investigate this ability more carefully in future work.", "The context described by the FULL and JOINTBISPARSE models encodes directional information ( 3.1) either in the form of label direction (FULL ), or using sibling information (JOINT ).", "Does such directionality in the context help to capture the asymmetric relationship inherent to hypernymy?", "To answer this, we evaluate a third BISPARSE-DEP model which uses UNLABELED dependency contexts.", "This is similar to the FULL context, except we do not concatenate the label of the relation to the context word (parent or chil-dren).", "For instance, for traveler in Fig. 2, contexts will be roamed and tired .", "Experiments on both HYPER-HYPO and HYPER-COHYPO (bottom row, Tables 3a and 3b) highlight that directional information is indeed essential UNLABELED almost always performs worse than FULL and JOINT , and in many cases worse than even window based models.", "No Treebank We run experiments (Table 4) for all languages with a version of BISPARSE-DEP that use the FULL context type for both English and the non-English (target) language, but the target language contexts are derived from a corpus parsed using a delexicalized parser ( 5.3).", "This model compares favorably on all language pairs against the best window based and the best dependency based model.", "In fact, it almost consistently outperforms the best window based model by several points, and is only slightly worse than the best dependency-based model.", "Further analysis revealed that the good performance of the delexicalized model is due to the relative robustness of the delexicalized parser on frequent contexts in the co-occurrence matrix.", "Specifically, we found that in French and Russian, the most frequent contexts were derived from amod , nmod , nsubj and dobj edges.", "8 For instance, the nmod edge appears in 44% of Russian contexts and 33% of the French contexts.", "The delexicalized parser predicts both the label and direction of the nmod edge correctly with an F1 of 68.6 for Russian and 69.6 for French.", "In contrast, a fully-trained parser achieves a F1 of 76.7 for Russian and 76.8 for French for the same edge.", "Small Monolingual Corpus In Figure 4, we use increasingly smaller monolingual corpora (10%, 20%, 40%, 60% and 80%) sampled at random to induce the monolingual vectors for BISPARSEDEP (FULL ) model.", "Trends (Figure 4) indicate that BISSPARSE-DEP models that use only 40% of the original data remain competitive with the BISSPARSE-LEX model that has access to the full 8 Together they make up at least 70% of the contexts.", "data.", "Robust performance with smaller monolingual corpora is helpful since large-enough monolingual corpora are not always easily available.", "Quality of Bilingual Dictionary Bilingual dictionaries derived from smaller amounts of parallel data are likely to be of lower quality than those derived from larger corpora.", "Hence, to analyze the impact of dictionary quality on BISPARSE-DEP (FULL ), we use increasingly smaller parallel corpora to induce bilingual dictionaries used as the score matrix S ( 3.3).", "We use the top 10%, 20%, 40%, 60% and 80% sentences from the parallel corpora.", "The trends in Figure 4 show that even with a lower quality dictionary, BISPARSE-DEP performs better than BISPARSE-LEX .", "We change the entailment scorer from BalAPinc SLQS (Santus et al., 2014) and redo experiments from 6.1 to see if the conclusions drawn depend", "on the choice of the entailment scorer.", "SLQS is based on the distributional informativeness hypothesis, which states that hypernyms are less in-formative than hyponyms, because they occur in more general contexts.", "The informativeness E u of a word u is defined to be the median entropy of its top N dimensions, E u = median Nk =1 H ( c k ) , where H ( c i ) denotes the entropy of dimension c i .", "The SLQS score for a pair ( u, v ) is the relative difference in entropies, SLQS ( u v ) = 1 E u E v Recent work (Shwartz et al., 2017) has found SLQS to be more successful than other metrics in monolingual hypernymy detection.", "The trends observed in these experiments are consistent with those in 6.1 both BISPARSEDEP models still outperform window-based models.", "Also, the delexicalized version of BISPARSEDEP outperforms the window-based models, showing that the robust behavior demonstrated in 6.3 is also invariant across metrics.", "We also found that using BalAPinc led to better results than SLQS .", "For both BISPARSE-DEP models, BalAPinc wins across the board for two languages (Russian and Chinese), and wins half the time for the other two languages compared to SLQS .", "We leave detailed comparison of these and other scores to future work.", "We introduced BISPARSE-DEP , a new distributional approach for identifying cross-lingual hypernymy, based on cross-lingual embeddings derived from dependency contexts.", "We showed that using BISPARSE-DEP is superior for the cross-lingual hypernymy detection task, when compared to standard window based models and a translation baseline.", "Further analysis also showed that BISPARSE-DEP is robust to various low-resource settings.", "In principle, BISPARSE-DEP can be used for any language that has a bilingual dictionary with English and a related language with a treebank.", "We also introduced crowd-sourced cross-lingual hypernymy datasets for four languages for future evaluations.", "Our approach has the potential to complement existing work on creating cross-lingual ontologies such as BabelNet and the Open Multilingual Wordnet, which are noisy because they are compiled semi-automatically, and have limited language coverage.", "In general, distributional approaches can help refine ontology construction for any language where sufficient resources are available.", "It remains to be seen how our approach performs for other language pairs beyond simluated low-resource settings.", "We anticipate that replacing our delexicalized parser with more sophisticated transfer strategies (Rasooli and Collins, 2017; Aufrant et al., 2016) might be beneficial in such settings.While our delexicalized parsing based approach exhibits robustness, it can benefit from more sophisticated approaches for transfer parsing (Rasooli and Collins, 2017; Aufrant et al., 2016) to improve parser performance.", "We aim to explore these and other directions in the future.", "The authors would like to thank the members of the CLIP lab at the University of Maryland, members of the Cognitive Computation Group at the University of Pennsylvania, and the anonymous reviewers from EMNLP/CoNLL 2017 and NAACL 2018 for their constructive feedback.", "YV and MC were funded in part by research awards from Amazon, Google, and the Clare Boothe Luce Foundation.", "SU and DR were supported by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA)." ]
[ "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "objective", "abstain", "method", "objective", "abstain", "abstain", "other", "other", "other", "method", "abstain", "other", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "objective", "other", "other", "other" ]
[ "When constructing models that learn from noisy labels produced by multiple annotators, it is important to accurately estimate the reliability of annotators.", "Annotators may provide labels of inconsistent quality due to their varying expertise and reliability in a domain.", "Previous studies have mostly focused on estimating each annotator's overall reliability on the entire annotation task.", "However, in practice, the reliability of an annotator may depend on each specific instance.", "Only a limited number of studies have investigated modelling per-instance reliability and these only considered binary labels.", "In this paper, we propose an unsupervised model which can handle both binary and multi-class labels.", "It can automatically estimate the per-instance reliability of each annotator and the correct label for each instance.", "We specify our model as a probabilistic model which incorporates neural networks to model the dependency between latent variables and instances.", "For evaluation, the proposed method is applied to both synthetic and real data, including two labelling tasks: text classification and textual entailment.", "Experimental results demonstrate our novel method can not only accurately estimate the reliability of annotators across different instances, but also achieve superior performance in predicting the correct labels and detecting the least reliable annotators compared to state-of-the-art baselines.", "1 1 Introduction In many natural language processing (NLP) applications, the performance of supervised machine learning models depends on the quality of the corpus used to train the model.", "Traditionally, labels are collected from multiple annotators/experts 1 Code is available at https://github.com/ createmomo/instance-level-reliability who are assumed to provide reliable labels.", "However, in reality, these experts may have varying levels of expertise depending on the domains, and thus may disagree on labelling in certain cases (Aroyo and Welty, 2013).", "A rapid and cost-effective alternative is to obtain labels through crowdsourcing (Snow et al., 2008; Poesio et al., 2013, 2017).", "In crowdsourcing, each instance is presented to multiple expert or non-expert annotators for labelling.", "However, labels collected in this manner could be noisy, since some annotators could produce a significant number of incorrect labels.", "This may be due to differing levels of expertise, lack of financial incentive and interest (Poesio et al., 2017), as well as the tedious and repetitive nature of the annotation task (Raykar et al., 2010; Bonald and Combes, 2017).", "Thus, in order to ensure the accuracy of the labelling and the quality of the corpus, it is crucial to estimate the reliability of the annotators automatically without human intervention.", "Previous studies have mostly focused on evaluating the annotators' overall reliability (Gurevych and Kim, 2013; Sheshadri and Lease, 2013; Poesio et al., 2017).", "Measuring the reliability on a per-instance basis is however useful as we may expect certain annotators to have more expertise in one domain than another, and as a consequence certain annotation decisions will be more difficult than others.", "This resolves a potential issue of models that only assign an overall reliability to each annotator, where such a model would determine an annotator with expertise in a single domain to be unreliable for the model, even though the annotations are reliable within the annotator's domain of expertise.", "Estimating per-instance reliability is also helpful for unreliable annotator detection and task allocation in crowdsourcing, where the cost of labelling data is reduced using proactive learning strategies for pairing instances with the most cost-effective annotators (Donmez and Carbonell, 2008; Li et al., 2017).", "Although reliability estimation has been studied for a long time, only a limited number of studies have examined how to model the reliability of each annotator on a per-instance basis.", "Additionally, these in turn have only considered binary labels (Yan et al., 2010, 2014; Wang and Bi, 2017), and cannot be extended to multi-class classification in a straightforward manner.", "In order to handle both binary and multi-class labels, our approach extends one of the most popular probabilistic models for label aggregation, proposed by Hovy et al. (2013).", "One challenge of extending the model is the definition of the label and reliability probability distributions on a per-instance basis.", "Our approach introduces a classifier which predicts the correct label of an instance, and a reliability estimator, providing the probability that an annotator will label a given instance correctly.", "The approach allows us to simultaneously estimate the per-instance reliability of the annotators and the correct labels, allowing the two processes to inform each other.", "Another challenge is to select appropriate training methods to learn a model with high and stable performance.", "We investigate training our model using the EM algorithm and cross entropy.", "For evaluation, we apply our method to six datasets including both synthetic and real-world datasets (see Section 4.1).", "In addition, we also investigate the effect on the performance when using different text representation methods and text classification models (see Section 4.2).", "Our contributions are as follows: firstly, we propose a novel probabilistic model for the simultaneous estimation of per-instance annotator reliability and the correct labels for natural language labelling tasks.", "Secondly, our work is the first to propose a model for modelling per-instance reliability for both binary and multi-class classification tasks.", "Thirdly, we show experimentally how our method can be applied to different domains and tasks by evaluating it on both synthetic and real-world datasets.", "We demonstrate that our method is able to capture the reliability of each annotator on a per-instance basis, and that this in turn helps improve the performance when predicting the underlying label for each instance and detecting the least reliable annotators.", "Probabilistic graphical models have been widely used for inferring the overall reliability of annotators in the absence of ground truth labels.", "Approaches include modelling a single overall reliability score for each annotator (Whitehill et al., 2009; Welinder et al., 2010; Karger et al., 2011; Liu et al., 2012; Demartini et al., 2012; Hovy et al., 2013; Rodrigues et al., 2014; Li et al., 2014a,b), estimating the reliability of each annotator on a per-category basis (Dawid and Skene, 1979; Zhou et al., 2012; Kim and Ghahramani, 2012; Zhang et al., 2014), and estimating the sensitivity and specificity for each annotator in binary classification tasks (Raykar et al., 2010).", "Fewer attempts have been made to model the per-instance reliability of annotators, focusing mainly on medical image classification.", "One approach is that by Yan et al. (2010; 2014) who use logistic regression to predict the per-instance reliability of annotators.", "Wang and Bi (2017) used a modified support vector machine (SVM; Cortes and Vapnik 1995) loss, modelling the per-instance reliability as the distance from the given instance to a separation boundary.", "True label prediction in crowdsourcing is the aggregation of labels produced by different annotators to infer the correct label of each instance.", "Majority voting assigns to each instance the most commonly occurring label among the annotators, which can result in a high agreement between the predicted label and the ground truth for some NLP tasks (Snow et al., 2008).", "Dawid and Skene (1979), Whitehill et al. (2009), Raykar et al. (2010), Welinder et al. (2010), Liu et al. (2012), Zhou et al. (2012), Kim and Ghahramani (2012), Hovy et al. (2013), Yan et al. (2010; 2014), Li et al. (2014b) and Zhang et al. (2014) investigated binary or multi-class label prediction using probabilistic graphical models.", "Karger et al. (2011), Wang and Bi (2017), and Bonald and Combes (2017) formalised the label prediction as an optimisation problem.", "Rodrigues et al. (2014) and Nguyen et al. (2017) investigated how to aggregate sequence labels using probabilistic graphical models.", "In the description of our model we let N be the number of training instances, M the number of annotators, x i the i th training instance, t i its true underlying label, T the set of values t i can take on, r ij whether annotator j is reliable for the i th instance, and a ij the label that annotator j gave the i th instance.", "Below we describe the components of the model in more detail.", "Probabilistic Model: Our model is inspired by the method proposed by Hovy et al. (2013), and it shares the same graphical representation (see Figure 1).", "The distributions of the model, however, are defined differently, as can be seen in Figure 2, due to the inclusion of a classifier and a reliability estimator.", "We assume that the underlying label t i depends only on the corresponding instance, while the reliability r ij depends on the instance and the identity of the annotator.", "If r ij = 0 , then the annotator j is unreliable for instance x i , and a label is chosen randomly from among the available categories.", "Otherwise, the annotation a ij is set to be the correct label.", "Classifier: The classifier f t ( x i ) provides the predicted probabilities of an instance belonging to each category, p ( t i | x i ) .", "t i is the underlying label for instance x i , the i th instance, and takes a value in the set of categories T .", "Note that there is no restriction on what classifier is used, other than that it can be trained using expectation maximisation.", "The inclusion of a classifier directly in the model means that it can be trained while taking into account the uncertainty of the data and predictions, as opposed to first making a hard assignment of a label for each instance and training the classifier post-hoc.", "Reliability Estimator: The reliability estimator f r ( x i , j ) predicts the probability of annotator j producing the correct label for instance x i , p ( r ij | x i ) .", "r ij is a binary variable, with 1 and 0 representing annotator j being reliable and unreliable for instance x i , respectively.", "The reliability estimator is modelled as a feed-forward neural network, where j is encoded as a one-hot vector.", "The exact representation of x i depends on the model used for the classifier.", "If the classifier is a neural network, the output of the last hidden layer is used; otherwise, the original feature vector is used.", "As the number of parameters in our model is much larger than that of previous studies (Yan et al., 2010, 2014; Wang and Bi, 2017) due to the introduction of both a classifier and a reliability estimator, the model is much harder to train from scratch.", "Therefore, before we start training the model, we first pre-train the classifier using labels predicted by a simpler method as targets, using e.g. majority voting or the method proposed by Dawid and Skene (1979).", "Although these labels may be noisy, we have observed empirically that a better initialisation strategy does result in better performance (see Section 5).", "For the reliability estimator, for each instance x i we compare each annotation a ij to the labels predicted in the previous step.", "If a ij is the same as the predicted label, we take the corresponding r ij to be 1 , and 0 otherwise.", "We then pretrain the reliability estimator f r to predict these values for r .", "We first consider training our model using expectation maximisation (EM; Dempster et al. 1977).", "This involves maximising the expectation of the complete log likelihood of the model with respect to the posterior of the latent variables in the model.", "For the posterior of the model, we fix the parameters of the model and denote them ( k ) at iteration k of the algorithm.", "We only maximise the expectation with respect to the parameters of the complete log likelihood.", "The expectation is calculated as: Q ( | ( k ) ) = E [log p ( a , t , r | x , )] = N (cid:88) i =1 E [log p ( t i | x i , )] + N (cid:88) i =1 M (cid:88) j =1 E [log p ( r ij | x i , )] + N (cid:88) i =1 M (cid:88) j =1 E [log p ( a ij | t i , r ij , x i , )] , (1) where each expectation is calculated with respect to the posterior p ( t , r | a , x , ( k ) ) .", "E Step: For the E step we compute the posterior with fixed parameters ( k ) , ij ( t, r ) = p ( t i = t, r ij = r | a i , x i ) , as: ij ( t, r ) = p ( t i = t, r ij = r | a i , x i ) p ( t i = t | x i ) p ( r ij = r | x i ) p ( a ij | t i = t, r ij = r, x i ) (cid:89) j (cid:48) (cid:54) = j ( k ) ij (cid:48) ( t ) (2) kij ( t ) = (cid:88) r (cid:48){ 0 , 1 } (cid:0) p ( r ij = r (cid:48) | x i ) p ( a ij | t i = t, r ij = r (cid:48) , x i ) (cid:1) , (3) where we drop the dependency on ( k ) for brevity.", "We can then compute the marginalised posteriors, needed for Equation (1), as follows: p ( t i = t | a i , x i ) = (cid:88) r { 0 , 1 } i 1 ( t, r ) (4) p ( r ij = r | a i , x i ) = (cid:88) t T ij ( t, r ) , (5) where the posterior p ( t i , r i 1 | a i , x i ) of the model is chosen arbitrarily to marginalise over to get the posterior for t i .", "M Step: Using the posterior calculated in the E step we can compute the expectation of the complete log likelihood, Q ( | ( k ) ) , and calculate its gradient with respect to the parameters .", "We then use gradient ascent to update the classifier and reliability estimator jointly.", "As an alternative training procedure, we also consider training the model using cross entropy.", "As with expectation maximisation, we first calculate the posterior ij ( t, r ) using the fixed parameters ( k ) .", "The networks f t and f r are then trained to minimise the cross entropy between the priors p ( t i | x i ) and p ( r ij | x i ) , and the corresponding posteriors p ( t i | a i , x i ) and p ( r ij | a i , x i ) .", "The networks can be trained in an alternating fashion, with f r being trained while f t is kept fixed, and the other way around.", "Denoting the parameters of f t as t and f r as r , the loss functions for the respective networks then become L ( t | ( k ) ) = 1 N (cid:88) i,t,r i 1 ( t, r ) log p ( t i | x i ) L ( r | ( k ) ) = 1 NM (cid:88) i,j,t,r ij ( t, r ) log p ( r ij | x i ) (6) Alternatively, they can be trained jointly by minimising the total cross entropy.", "The training algorithm is summarised in Algorithm 1.", "The algorithm is run until either a maximum number of iterations is reached, or the objective function stops improving.", "2-Dimensional Datasets: In order to see whether our method can work well on simple cases, we create three 2-dimensional synthetic datasets, which we refer to as moon, circle and 3-class as shown in Figure 3.", "Text Classification: For text classification we use the datasets Question Classification (Li and Roth, 2002), which contains short questions along with the type of answer expected, and Sentence Classification (Chambers, 2013), which consists of sentences selected from medical publications.", "Examples of instance/class pairs for the text classification datasets include Where is the", "Orinoco? (class: location) for the Question Classification dataset, and New types of potent force clamps are", "discovered. (class: author's own work) for the Sentence Classification dataset.", "For these datasets that do not include crowd annotations, we synthesise annotations by simulating different annotators as follows:", "1) Narrow Expert : has expertise in a single domain (i.e. class).", "For the instances of this class, the annotator will always provide the correct label.", "For other classes, a correct label will be provided with a probability of 0.65; otherwise, a random label will be selected with uniform probability;", "2) Broad Expert : has expertise in every domain and only makes mistakes with a probability of 0.05;", "3) Random Annotator : selects labels at random;", "4) Adversarial Annotator : deliberately provides incorrect labels with a probability of 0.8.", "For each of the datasets, we generated annotations using one narrow expert per class, one broad expert, one random annotator and one adversarial annotator, for a total of | T | + 3 annotators, where | T | is the number of classes in the dataset.", "In order to evaluate the generality of our model, we also apply it to another task in which we have 5 annotators with different overall reliabilities for the text classification tasks.", "They produce incor-Dataset Class # Instances moon 0/1 500/500 circle 0/1 500/500 3-class 0/1/2 334/333/333 DESCRIPTION (DESC) 1162 ENTITY (ENTY) 1250 Question ABBREV.", "Recognising Textual Entailment: Finally, we evaluate our model on a real-world dataset for the recognising textual entailment (RTE) task (Snow et al., 2008).", "Given a text pair, the annotator decides whether the hypothesis sentence can be inferred from the text fragment.", "The dataset includes both ground truth and crowdsourced labels from 164 annotators.", "Table 1 shows the number of instances of each class 2 in the above-mentioned datasets.", "Our model was implemented using the Chainer deep learning framework 3 (Tokui et al., 2015).", "Classifier: As shown in Table 2, in each experiment the output of the classifier is generated by a feed-forward neural network (FNN).", "Each FNN consists of an input layer, two hidden layers and a softmax output layer.", "The number of hidden units in each layer is listed in the third column of the table.", "The ReLU activation function (Nair and 2 The classes in the Sentence Classification dataset are defined as follows: AIMXthe goal of the paper; OWNXthe author's own work; CONTthe comparison including contrast and critique of past work; BASEthe past research that provides the basis for the work; MISCany other sentences. 3 https://chainer.org/ Hinton, 2010) was applied after each hidden layer.", "The output size of all the Long Short-Term Memory (LSTM; Hochreiter and Schmidhuber, 1997) layers in our experiments is 100.", "For the 2-dimensional classification task, each instance is simply represented using its position in 2-dimensional space.", "For the text classification tasks, we investigated 3 methods of representing the sentences: bag-of-words (BoW) weighted by Term FrequencyInverse Document Frequency (TFIDF), an average word embedding (Avg.) and the output at the last step of an LSTM layer (Embed. LSTM).", "For the embedding we use word2vec embeddings pre-trained on Google News (Mikolov et al., 2013) for the question classification and RTE tasks, and a pre-trained embedding (Pyysalo et al., 2013) trained on a combination of English Wikipedia, PubMed and PMC texts for the sentence classification task.", "For the RTE task, we implemented two classifiers.", "For the first one, each instance (i.e. a sentence pair) was represented as a concatenation of the average word embedding for each sentence (Cat. Avg.).", "We also implemented Bowman et al. (2015), which runs each sentence through an LSTM, concatenates the outputs, and then feeds the concatenated output to an FNN with tanh activations.", "Reliability Estimator: We model the reliability estimator as an FNN.", "Its structure is the same as the classifier, albeit with different sizes of the two hidden layers.", "For the experiments listed in Table 2, the number of units of each hidden layer in the FNN are 5, 100, 25, 25, 50, and 100 respectively.", "The input to the estimator is the concatenation of the instance x i (i.e. its original feature vector or the output of the last hidden layer of the classifier) and a one-hot vector representing the annotator identity.", "Learning Settings: For every experiment we use the Adam (Kingma and Ba, 2015) optimiser with a weight decay rate 0 .", "001 , a gradient clipping of 5 .", "0 , = 0 .", "001 , 1 = 0 .", "9 and 2 = 0 .", "999 .", "We pre-train the classifier and reliability estimator for 200 epochs, using both majority voting and the model proposed by Dawid and Skene (1979).", "The maximum number of outer iterations is set to 500 and 20 for EM training and cross entropy training respectively.", "The number of inner iterations is 50 in both cases.", "Estimation: After training, for each instance x i we take its underlying label to be the most probable label according to the posterior of t i (see Equation (4)).", "We compared our predicted labels to the following state-of-the-art baselines: Majority Voting (MV), DS (Dawid and Skene, 1979), GLAD (Whitehill et al., 2009), LFC (Raykar et al., 2010), CUBAM (Welinder et al., 2010), Yan et al. (2010), KOS (Karger et al., 2011), VI (Liu et al., 2012), BCC (Kim and Ghahramani, 2012), MINIMAX (Zhou et al., 2012), MACE (Hovy et al., 2013), CATD (Li et al., 2014a), PM (Li et al., 2014b) and EM-MV and Opt (Zhang et al., 2014).", "Note that CUBAM, Yan et al. (2010), KOS and VI are only suitable for aggregating binary labels, and Yan et al. (2010) is the state-of-the-art method that models per-instance reliability.", "We take the reliability of annotator j on instance x i to be the posterior probability that r ij is 1 (see Equation (5)).", "We measure the inter-annotator agreement (IAA) of each dataset.", "Fleiss's kappa (Fleiss et al., 2013), denoted by , is measured for the 2-dimensional and text classification datasets, and Krippendorff's alpha (Krippendorff, 1970) is calculated for the RTE dataset 4 .", "We find that the IAA values indicate slight agreement among annotators for all datasets.", "Our experiments using different settings are shown as follows: our model is denoted by O , with M and D denoting the model pre-trained using MV and DS respectively.", "E denotes training using expectation maximisation, while C denotes cross entropy training.", "AL and JT denote cross entropy training done alternatingly and jointly, respectively.", "In the rest of this section, Tables 3 to 7 and Tables 8 to 10 present the results on the synthetic datasets and RTE dataset respectively.", "For the synthetic datasets, in Tables 3 to 6, we first consider a scenario where we have multiple narrow experts (N), one broad expert (B), one random annotator (R) and one adversarial annotator (A).", "In Table 7, we further consider a scenario with 5 annotators, 4 Although there are 164 annotators in total in this dataset, each instance was labelled by only 10 of these annotators.", "Therefore we use Krippendorff's alpha which is applicable to incomplete data to measure the inter-annotator agreement.", "Table 3 shows that our method performs well on the 2-dimensional datasets, obtaining higher label prediction F1 scores than the baselines.", "We omit the analysis of the true label prediction and reliability estimation results on these datasets as all models performed similarly, choosing instead to focus the discussion on the results for the NLP tasks.", "In order to explore the separate performance contribution of classifier and reliability estimator, we compare the performance of our model to a classifier pre-trained using DS labels, as well as a variant of our model without the reliability estimator, i.e. setting all the annotators have the same reliability on all the instances.", "As shown in Tables 4, 7 and 8, the pre-trained classifier performed worse than some aggregation methods.", "This indicates that the noise in the labels predicted by DS has an adverse effect on the training of the classifier.", "The much lower performance of the model with the reliability estimator removes hints at the importance of modelling per-annotator reliability to ensure accurate predictions.", "For the representation of the instance x i as it is fed to the reliability estimator, we compared the performance of using the original feature vector of x i to using the last hidden layer output of the classifier (which we refer to as the full model).", "We found that using the hidden layer representation can not only improve the label prediction performance (see Tables 4, 7 and 8), but also sped up the training compared to using the feature vector directly.", "The hidden layer representation allows us to reduce the number of parameters in the model, by sharing parameters with the classifier.", "Based on the results of the full model in Table 4, we can conclude that per-instance reliability modelling is beneficial to the label prediction task, and using the average pre-trained embedding can result in slightly better performance.", "It is worth noting that the method used to pre-train the model had a noticeable effect on its performance, with better F1 scores being obtained when using DS pretraining.", "In the following experiments we only consider models pre-trained using the DS algorithm.", "In order to investigate whether our method can successfully capture per-instance annotator reliability, for each annotator, we counted the number of correctly labelled instances and calculated the QuestionClassification DESC ENTY ABBR HUM NUM LOC Accuracy 1(N) 99 1 0 0 0 0 100 2(N) 0 100 0 0 0 0 100 3(N) 31 13 40 5 2 9 100 4(N) 0 0 0 100 0 0 100 5(N) 0 0 0 0 100 0 100 6(N) 0 0 0 0 0 100 100 7(B) 20 0 0 77 0 3 100 8(R) 30 32 8 8 15 7 100 9(A) 45 19 9 14 5 8 100 Table 5: Number of correctly labelled examples for each annotator (N: narrow expert, B: broad expert, R: random annotator and A: adversarial annotator) among the 100 instances with highest per-instance reliability on the question classification dataset.", "average reliability for each class among the top 100 instances with the highest per-instance reliability as shown in Table 5 and 6 5 .", "The cells with grey background colour indicate which domain, or class, the annotator has expertise in.", "It can be seen that all annotators obtain high accuracy on these instances.", "In general our method also captured the varying expertise of each narrow annotator, estimating their reliability on instances belonging to the corresponding classes as particularly high.", "For these experiments in Table 7, we also investigated the performance when using two different classification models.", "As seen in this table, both of them outperformed all baselines significantly.", "Table 8 presents the label prediction performance on the RTE dataset.", "As not every annotator has provided labels for every instance in this dataset, for both the EM and cross entropy training we simply omitted missing instance/annotator pairs when calculating the loss functions.", "As seen in the table, most of the baselines obtained high performance as the textual entailment recognition task is easy for non-expert annotators.", "However, our full model still achieved better prediction performance 5 We omit the results for the sentence classification task for lack of space, as we consider the results on the question classification dataset to be representative.", "We also investigated the effectiveness of our model for removing noisy labels.", "We compare our model to the five best-performing baselines (DS, LFC, CUBAM, VI and EM-MV in Table 8).", "Each of these models are trained on the RTE dataset, after which the least reliable annotation for each instance is removed.", "We use the per-instance reliability for our model, the global reliability score of each annotator for LFC, CUBAM and VI, and the per-category annotator reliability for DS and EM-MV as the measure of the reliability of each annotation.", "For each of these models, we then retrain the models in Table 8 using the denoised dataset; the difference in performance can be seen in Table 9.", "We can see that using per-instance reliability results in the largest improvement, while only considering the annotators' overall reliability may Method LFC CUBAM VI DS EM-MV Ours MV -0.2 -0.9 +0.6 -0.2 -0.2 +0.8 DS (Dawid and Skene, 1979) 0 -0.2 +0.1 0 0 +0.6 GLAD (Whitehill et al., 2009) +0.2 0 0 +0.2 -0.3 +0.2 LFC (Raykar et al., 2010) +0.1 -0.1 +0.3 +0.1 +0.1 +0.6 CUBAM (Welinder et al., 2010) +0.3 0 +0.4 +0.3 -0.2 +0.9 Yan et al. (Yan et al., 2010) +1.5 +0.4 +2.3 +1.5 +0.8 +2.5 KOS (Karger et al., 2011) +0.7 +3.9 +11.3 +0.7 +13.4 +17.3 VI (Liu et al., 2012) +0.1 +0.1 +0.1 +0.1 +0.2 +0.6 BCC (Kim and Ghahramani, 2012) +0.3 +0.2 +0.3 +0.3 +0.4 +0.8 MINIMAX (Zhou et al., 2012) +0.3 -0.4 +0.7 +0.3 +0.2 +0.7 MACE (Hovy et al., 2013) +0.2 -0.2 0 +0.2 0 +0.4 CATD (Li et al., 2014a) +0.3 -0.8 -0.2 +0.3 +0.2 +0.3 PM (Li et al., 2014b) +0.9 0 +0.3 +0.9 +0.1 +0.9 EM-MV (Zhang et al., 2014) +0.7 +0.6 +0.7 +0.7 +0.7 +1.1 EM-Opt (Zhang et al., 2014) +0.2 +0.5 +0.2 +0.2 +0.6 +0.7 O-DC-JT (FNN) (full model) +0.1 0 +0.3 +0.1 +0.1 +0.5 Table 9: F1 score improvements after removing the label produced by the least reliable annotator by using the estimated overall reliability (LFC, CUBAM, VI, DS, EM-MV) and per-instance reliability (Ours).", "In order to analyse the per-instance reliability of the human annotators, for each annotator we rank the instances according to the annotator's per-instance reliability.", "We look at the top 15 and bottom 15 instances, then count how many of them were correctly labelled (Cor. Labels) as well as the average reliability on these instances (Avg. Relia-bility).", "Table 10 shows the results of five annotators 6 .", "It can be seen that each annotator has considerably different reliabilities across instances.", "Pre-training: As discussed in Section 3.2, the predicted labels produced by a simpler method are used for pre-training.", "Although these labels are not perfect, we assume that our method can still learn some useful information from them for a better starting point than random parameter initialisation.", "cases, using cross entropy achieved much better and more stable performance than the models learned using EM training.", "We also noticed that the objective function would improve when using cross entropy training, and tended to converge faster in our experimentsgenerally within just a few epochs.", "Therefore, we recommend to use this training method in practice.", "Early Stopping: When using both EM and cross entropy training, we found that even if the objective function improved between iterations, the label prediction performance would eventually start to decrease.", "It is worth to investigate the reason for this phenomenon.", "To counteract this issue we used early stopping, where training is halted when the objective function does not improve more than 0.001 between iterations.", "Another option is to reduce the maximum number of outer iterations, e.g. to 20.", "We propose a novel probabilistic model which learns from noisy labels produced by multiple annotators for NLP crowdsourcing tasks by incorporating a classifier and a reliability estimator.", "Our work constitutes the first effort to model the per-instance reliability of annotators for both binary and multi-class NLP labelling tasks.", "We investigate two methods of training our model using the EM algorithm and cross entropy.", "Experimental results on 6 datasets including synthetic and real datasets demonstrate that our method can not only capture the per-instance reliability of each annotator, but also obtain better label prediction and the least reliable annotator detection performance compared to state-of-the-art baselines.", "For future work, we plan to apply our model to other NLP tasks such as relation extraction and named entity recognition.", "We also plan to investigate the use of variational inference (Jordan et al., 1999) as a means of training our model.", "Using variational inference might improve the stability and performance of our model.", "We would like to thank the anonymous reviewers and Paul Thompson for their valuable comments.", "Discussions with Austin J. Brockmeier have been insightful.", "The work is funded by School of Computer Science Kilburn Overseas Fees Bursary from University of Manchester." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "objective", "objective", "objective", "result", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "abstain", "result", "other", "other", "other" ]
[ "Chinese NLP applications that rely on large text often contain huge amounts of vocabulary which are sparse in corpus.", "We show that characters' written form, Glyphs , in ideographic languages could carry rich semantics.", "We present a multi-modal model, Glyph2Vec , to tackle Chinese out-of-vocabulary word embedding problem.", "Glyph2Vec extracts visual features from word glyphs to expand current word embedding space for out-of-vocabulary word embedding, without the need of accessing any corpus, which is useful for improving Chinese NLP systems, especially for low-resource scenarios.", "Experiments across different applications show the significant effectiveness of our model.", "Word embedding encoded semantic and syntactic information (Mikolov et al., 2013a,b) in low-dimensional space have served as useful features for various NLP applications but often require large-scale corpus with billions of tokens to train.", "A natural constraint of word embedding is that it is not practical to collect the entire vocabulary of any language with large enough frequency to train the embedding for every word, since some new words may appear in downstream tasks.", "A typical solution is to simply assign a specific UNK embedding to all out-of-vocabulary (OOV) words that do not appear in the training data.", "Current solutions such as using subwords (e.g., characters) are mainly considering alphabetic languages (e.g., English and French) that are composed of small amount of characters.", "Such techniques may not be sufficient for ideographic lanEqually contribution.", "guages (e.g., Chinese and Japanese) in which a word is often composed with characters of a large amounts.", "An example is that traditional Chinese includes about 17k distinct tokens.", "Therefore, it could be expected to suffer from underfitting not only word embedding but also character embedding.", "Even worse, words in ideographic languages are often composed of 2-3 characters only, unlike words in alphabetic languages are longer but with smaller types of characters.", "Figure 1 provides the statistics in Chinese Sinica Corpus.", "acter is made up of several graphical components.", "Figure 2 shows some examples that components in characters represent similar semantic or pronunciation.", "In addition to glyphs, we propose to use the high-quality features provided by Cangjie input method to represent each character.", "Cangjie is a popular Chinese input method.", "Similar to radicals, characters are composed of 24 basic graphical units.", "Each unit is mapped to a corresponded letter key on a standard QWERTY keyboard.", "Building beyond character glyphs, one can intuitively guess the semantic of a word.", "Recent work (Chen et al., 2015; Xu et al., 2016; Yin et al., 2016; Liu et al., 2017; Su and Lee, 2017) have shown ben-efits of the compositionality at character level or visual feature of Chinese glyphs for some tasks.", "In this work, we suggest that in the OOV scenario glyphs can be particularly useful.", "A key observation for solving OOV problem matches the intuition of human generalization in Chinese.", "When a Chinese user reads an unseen word or a character, by decomposing the structure, graphical components such as radicals for a character often help Chinese users understand the meaning and sometimes pronunciation of the character.", "We study a novel application that recovers Chinese OOV word embeddings from glyphs.", "Our work is to answer a question : given the pretrained word embeddings, can we directly learn a mapping from word glyphs to their word embedding and generalize the mapping for the purpose of generating the embedding of OOV words?", "We formulate it as a visual-to-text transfer learning problem and show that the visual structure of Chinese characters is helpful in learning Chinese OOV embeddings.", "Exploiting Structure of Chinese Characters Recent work have explored the use of Chinese character structure in different settings (E and Xi-ang, 2017; Liu et al., 2017; Dai and Cai, 2017).", "Several work aim to use character-level feature to enhance standard word embedding learning models (e.g., Word2Vec or GloVe).", "CWE (Chen et al., 2015) propose to use character-level formulation for words in training word embeddings; SCWE (Xu et al., 2016) and Li et al. (2015) extends to consider the relations of characters compositionally.", "MGE (Yin et al., 2016) and Shi et al. (2015) further includes radical information associated to characters.", "Yu et al. (2017) jointly embed Chinese words, characters, and radicals.", "GWE (Su and Lee, 2017) proposes to extract feature from character bitmaps as the inputs of Word2Vec and GloVe.", "Our work is different from all of them, since we emphasize on generating the OOV word embeddings, which is not handled by them.", "Learning Embedding for OOVs To handle OOV words, an approach is operating character level embeddings, then averages them into word embeddings (Kim et al., 2016; Wieting et al., 2016).", "Morphology-based approaches take advantage of meaningful linguistic substructures (Botha and Blunsom, 2014; Luong et al., 2013; Bhatia et al., 2016).", "Morphology-based approaches often struggle with those vocabularies lacking linguistic substructures such as names and transliterations of foreign language, which often appears as OOV words.", "In all the models above, just like Word2Vec (Mikolov et al., 2013c)), the embeddings meed to learned by training over a large corpus.", "The most similar work is Mimick model (Pin-ter et al., 2017).", "By learning a character language generating model, guided by minimizing the distance between the output embedding of LSTMs and pre-trained word embeddings, Mimick shows feasibility of generating OOV word embedding from character compositions.", "However, Mimick is mainly from the view of alphabetic languages that does not consider glyphs.", "Chinese words often consist of short sequences composed of many kinds of tokens that are difficult for language model approaches to handle (see Figure 1) and could suffer from under-fitting.", "We formulate the task of learning OOV embeddings as a transfer learning problem.", "Formally, given a Chinese vocabulary set V of size |V| , and a pre-trained embeddings matrix E R |V| d where each word w i is associated with a vector e i of dimension d as training set { w i , e i } |V| i =1 .", "We aim to learn a mapping F : w R d , where F projects the input word to the d dimension embedding space such that F ( w i ) e i .", "In testing, a word w t may be out of V , while the model is still obliged to predict the embedding e t with F ( w t ) .", "Given the glyphs for a word x = [ c j ] | x | 1 as a sequence of character 2D bitmaps c provided according to V , we can considering a function g : x R k that transforms glyphs into vi-Figure 3: Complete network architecture of our Glyph2Vec.", "White boxes annotate the feature dimension of each character .", "Different features are combined by concatenating.", "GRU takes sequence of character feature as inputs.", "sual features of k dimension.", "Another function f : g ( x ) R d later maps the visual space to the word embedding space.", "The final embedding can be obtained with e i = F ( x i ) = f ( g ( x i )) , where input is glyph x i .", "The overall framework is illustrated in Figure", "3. 3.1 Visual Feature Extractor We consider two implementations of visual feature extractor g .", "ConvAE We adopt the convolutional autoencoder ConvAE (Masci et al., 2011) to capture the structure of characters bitmaps c .", "The architecture of the ConvAE follows Figure 6 in (Su and Lee, 2017).", "Eventually, the well-trained encoder is fixed as extractor that extracts 512-dimensional feature for every character c .", "The input bitmaps are 60 60 8-bit images in grayscale.", "Cangjie Composition We propose to use Cangjie input codes as high-level annotations of characters, which can be easily collected from the input method dictionary.", "We construct a Bag-of-Root (BoR) vector for each character according to the Cangjie dictionary.", "Each BoR binary vector of 24 dimensions representing the roots that a character possesses.", "After the visual features of every character in a word are extracted, we still need to compose them to word level.", "A compositional model f takes a sequence of characters' visual feature and projects them onto the word embedding space.", "The right portion of Figure 3 shows the architecture of f .", "We construct a bi-directional RNN network with GRU cells (Cho et al., 2014) to compute the expected word embedding over the character feature sequence.", "Finally, the 300D word embeddings are predicted.", "To calculate the loss for backpropa-gation, we adopt squared Euclidean distance between the prediction F = f ( g ( x )) and the gold word embedding w : (cid:107) F ( x ) w (cid:107) 2 .", "Unlike alphabetical languages, each Chinese character carries its own meaning.", "State-of-the-art Chinese word embedding models (Chen et al., 2015; Xu et al., 2016; Yin et al., 2016) often consider learning character embedding jointly.", "We demonstrate how to incorporate pre-trained character embedding to further improve the performance.", "The character embeddings are concatenated with the glyph features and the BoR Cangjie vectors as inputs.", "Character embedding is a huge embedding matrix.", "In Table 1, we summarized the required #parameters.", "We note that Glyph2Vec can infer OOV embedding directly from glyphs without character embedding.", "We adopt the Word2Vec traditional Chinese 300d word embedding pre-trained on public-available Sinica Corpus 4.0 which includes about 10M tokens.", "For optimization, we train 100 epochs with RMSProp optimizer with learning rate 4e-4 with batch-size 128.", "We note the models compared in the following experiments here.", "M is for Mimick baseline (Pinter et al., 2017) based on the authors' code.", "For the proposed feature, we test several combinations.", "C is for using Cangjie BoR vector; V is for using glyph visual feature; Char is for appending pre-trained character embedding.", "We uti-http://asbc.iis.sinica.edu.tw/ Figure 4: Principal component analysis visualization of the produced word embedding.", "lize the embeddings from Polyglot (Al-Rfou et al., 2013).", "As a sanity check, in Fig. 4 we visualize the embedding of seen and OOV words.", "One could observe meaningful clusters that have similar visual structure.", "For example, (roast chicken) could be mapped with (roast duck) because (chicken) and (duck) have different glyphs both about bird.", "Some cooking verbs that have the radical (fire) like (roast) and (roast) are also mapped closely.", "Some unseen characters (or words with only one character) can also be predicted reasonably.", "We qualitatively analyze Glyph2Vec with nearest neighbor (NN) sanity check.", "Table 2 shows the results of retrieved nearest neighbors with OOV word queries for Mimick and our Glyph2Vec embeddings (using V ), respectively.", "We observe Glyph2Vec is able to model visual semantic by associating those characters that share related visual features since Glyph2Vec learns from the images of characters.", "For example, (eel) in (snake-eel) shares the radicals of (fish) with (Haemulidae, fish name).", "(Rh) and (Cl) in (RhCl3) associate some visual features relate to chemicals like in (Ce), in (F), in (acid), and more.", "On the other hand, we observe some properties including composition (e.g., numbers) and character semantic that both Glyph2Vec and Mimick can provide.", "(1) Composition: composing characters that have very different mean-https://sites.google.com/site/rmyeid/projects/polyglot ing after splitting them.", "For instance, is a transliteration of Seleznev (Russian name), for which every character is meaningless alone but a meaningful transliteration when combined.", "With character-level compositional model in Glyph2Vec, it could be retrieved given (Claudio, western name).", "Moreover, Glyph2Vec preserves correct meaning of a character when attaching with the other characters.", "For example, (abrupt) (decrease) can retrieve (cut back) and (reduce) properly when (subtract) is associated to different characters.", "(2) character semantic: associating different characters with similar meaning.", "For example, (street) is related to (lane) or (alley) and they are retrieved by our model given (Xuefu 2nd Street) as the OOV word even though the characters look completely different.", "We follow the experiment protocols of parts-of-speech tagging and morphosyntactic attributes tagging stated in Mimick (Pinter et al., 2017) for this experiment.", "There are two parts-of-speech tagging tasks based on the Chinese thread of Universal Dependencies (UD) scheme (De Marneffe et al., 2014).", "To avoid tuning towards those OOV words, we consider the similar evaluation protocols of generalized zero-shot learning (Chao et al., 2016; Xian et al., 2017) that the embedding of not only unseen but also seen words need to be generated.", "Both word-level LSTM and character LSTM are reported (Table 3).", "With visual feature available, Glyph2Vec consistently outperforms Mimick.", "On the other hand, we observe using pretrained character embedding only helps on accuracy of seen words but not OOV words, which suggests that it is necessary for a module like Mimick or Glyph2Vec to learn to compose characters for OOV words.", "As we introduced in Sec. 1, in real-world scenario Chinese systems could suffer from severe OOV problem.", "An example is Wikipedia encyclopedia.", "It contains lots of rarewords that easily become OOV words such as terminologies, scien-tific names, geography locations, ... etc.", "We utilize Wikipedia Title dataset (Liu et al., 2017) to https://github.com/frederick0329/Wikipedia-TitleDataset Query Word Top 5 Nearest Neighbors (numbers) (city) (numbers) (county) (numbers) (drama) (dream) (lividly) (name) (Juliette Binoche) (name) (slump) (water resource) (shrimp) (dosage) (proportion) (pond) (provided job) (shrimp) (office) (name) (name) (office) (snakebird) (euryhaline) (fish) (fish) (gull) (worm) (Claudio) (Cha) (Selig) (Same) (Okana) (Ladur) (RhCl3) (stew) (viscous) (medicine) (name) (bacteria) (idiom) (goggles) (catch crab) (riverside) (time) (riverside) (street) (name) (shrimp) (person) (name) (name) (numbers) (numbers) (numbers) (numbers) (numbers) (numbers) (drama) (unit for drama) (naked play) (dance drama) (musical drama) (drama) (slump) (slump) (slump) (year by year) (dramatically) (slump) (provided job) (job) (job) (military service) (hire) (hire) (snakebird) (fish) (euryhaline) (fish) (fish) (fish) (Claudio) (Puchkov) (Chimet) (Tsev) (Seleznev) (Itkine) (RhCl3) (inorganic acid) (Ce) (FCl2) (chemical Eq.) (anode plate) (idiom) (idiom) (idiom) (idiom) (idiom) (idiom) (street) (street) (street) (street) (street) (street) Table 2: Nearest neighbors examples retrieved by Mimick (upper) and Glyph2Vec (lower).", "study the problem.", "The dataset is a collection of 593K Chinese articles from Wikipedia and categorizing them into 12 classes based on their titles.", "We preprocessed the data by removing punctuation, special characters, and other non-Chinese instances, and turning Arabic numbers into Chinese text.", "We use opensource Jieba toolkit to segment each title into words.", "52.5% are OOV based on Sinica Corpus, and we generate their embeddings by Glyph2Vec.", "We construct a neural network classifier with the generated word embedding as input to evaluate our method.", "The classifier is consist of 3 fully-connected (FC) layers on top of the averaged word embedding of titles.", "Results are shown in Table", "4. With glyph feature and Cangie BoR feature provided, the performance could be improved significantly compared to neglecting OOV (as UNK) in such challenging setting.", "In this work, we propose a multi-modal framework that expand pre-trained embedding space to include OOV words using character visual features such as Cangjie feature and Chinese character glyphs.", "We have demonstrated the effectiveness of Glyph2Vec on traditional Chinese, and we believe Glyph2Vec can also be applied to other ideographic languages to handle OOV words as well.", "We note that the accuracy cannot be compared with the report in (Liu et al., 2017) since they did not consider OOV and char/word embeddings.", "Here we only use the dataset to examine the performance of OOV embedding.", "For simplified Chinese, we suggest users to first translate into traditional Chinese since traditional characters have richer structures and probably more semantics can be extracted through Glyph2Vec." ]
[ "abstain", "result", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "objective" ]
[ "Neural network models for many NLP tasks have grown increasingly complex in recent years, making training and deployment more difficult.", "A number of recent papers have questioned the necessity of such architectures and found that well-executed, simpler models are quite effective.", "We show that this is also the case for document classification: in a large-scale reproducibility study of several recent neural models, we find that a simple BiLSTM architecture with appropriate regularization yields accuracy and F 1 that are either competitive or exceed the state of the art on four standard benchmark datasets.", "Surprisingly, our simple model is able to achieve these results without attention mechanisms.", "While these regularization techniques, borrowed from language modeling, are not novel, to our knowledge we are the first to apply them in this context.", "Our work provides an open-source platform and the foundation for future work in document classification.", "Recent developments in neural architectures for a wide range of NLP tasks can be characterized as a drive towards increasingly complex network components and modeling techniques.", "Worryingly, these new models are accompanied by smaller and smaller improvements in effectiveness on standard benchmark datasets, which leads us to wonder if observed improvements are real.", "There is, however, ample evidence to the contrary.", "To provide a few examples: Melis et al. (2018) report that standard LSTM architectures outperform more recent models when properly tuned.", "Vaswani et al. (2017) show that sequence transduction using encoderdecoder networks with attention mechanisms work just as well with the attention module only, making most of the complex Equal contribution.", "neural machinery unnecessary.", "Mohammed et al. (2018) show that simple RNNand CNN-based models yield accuracies rivaling far more complex architectures in simple question answering over knowledge graphs.", "Perhaps most damning are the indictments of Sculley et al. (2018), who lament the lack of empirical rigor in our field and cite even more examples where improvements can be attributed to far more mundane reasons (e.g., hyperparameter tuning) or are simply noise.", "Lipton and Stein-hardt (2018) concur with these sentiments, adding that authors often use fancy mathematics to obfuscate or to impress (reviewers) rather than to clarify.", "Complex architectures are more difficult to train, more sensitive to hyperparameters, and brittle with respect to domains with different data characteristicsthus both exacerbating the crisis of reproducibility and making it difficult for practitioners to deploy networks that tackle real-world problems in production environments.", "Like the papers cited above, we question the need for overly complex neural architectures, focusing on the problem of document classification.", "Starting with a large-scale reproducibility study of several recent neural models, we find that a simple bi-directional LSTM (BiLSTM) architecture with appropriate regularization yields accuracy and F 1 that are either competitive or exceed the state of the art on four standard benchmark datasets.", "As the closest comparison point, we find no benefit to the hierarchical modeling proposed by Yang et al. (2016) and we are able to achieve good classification results without attention mechanisms.", "While these regularization techniques, borrowed from language modeling, are not novel, we are to our knowledge the first to apply them in this context.", "Our work provides an open-source platform and the foundation for future work in document classification.", "Over the last few years, deep neural networks have achieved the state of the art in document classification.", "One popular model, hierarchical attention network (HAN), uses wordand sentence-level attention in classifying documents (Yang et al., 2016).", "Although this model nicely captures the intuition that modeling word sequences in sentences should be handled separately from sentence-level discourse modeling, one wonders if such complex architectures are really necessary, especially given the size of training data available today.", "An important variant of document classification is the multi-label, multi-class case.", "Liu et al. (2017) develop XML-CNNs for multi-label text classification, basing the architecture on KimCNN (Kim, 2014) with increased filter sizes and an additional fully-connected layer.", "They also incorporate dynamic adaptive max-pooling (Chen et al., 2015) instead of the vanilla max-pooling over time in KimCNN.", "The paper compares with CNN-based approaches for the multi-label task, but only reports precision and disregards recall.", "Yang et al. (2018) instead adopts encoderdecoder sequence generation models (SGMs) for generating multiple labels for each document.", "Similar to our critique of HAN, we opine against the high complexity of these multi-label approaches.", "There have been attempts to extend dropout (Sri-vastava et al., 2014) from feedforward neural networks to recurrent ones.", "Unfortunately, direct application of dropout on the hidden units of an RNN empirically harms its ability to retain long-term information (Zaremba et al., 2014).", "Recently, however, Merity et al. (2018) successfully apply dropout-like techniques to regularize RNNs for language modeling, achieving competitive word-level perplexity on multiple datasets.", "Inspired by this development, we adopt two of their regularization techniques, embedding dropout and weight-dropped LSTMs, to our task of document classification.", "Weight-dropped LSTM.", "LSTMs comprise eight total inputhidden and hiddenhidden weight matrices; in weight dropping, Merity et al. (2018) regularize the four hiddenhidden matrices with DropConnect (Wan et al., 2013).", "The operation is applied only once per sequence, using the same m a x p oo l b a c d f e g 1 2 n Figure 1: Illustration of the model architecture, where the labels are the following:", "dropout mask across multiple timesteps.", "Conveniently, this allows practitioners to use fast, out-of-the-box LSTM implementations without affecting the RNN formulation or training performance.", "Embedding Dropout.", "Introduced in Gal and Ghahramani (2016) and successfully employed for neural language modeling (Merity et al., 2018), embedding dropout performs dropout on entire word embeddings, effectively removing some of the words at each training iteration.", "As a result, the technique conditions the model to be robust against missing input; for document classification, this discourages the model from relying on a small set of input words for prediction.", "We design our model to be minimalistic: First, we feed the word embeddings w 1: n of a document to a single-layer BiLSTM, extracting concatenated forward and backward word-level context vectors h 1: n = h f 1: n h b 1: n .", "Subsequently, we max-pool h 1: n across time to yield document vector d see Figure 1, labels af.", "Finally, we feed d to a sigmoid or a softmax layer over the labels, depending on if the task type is multi-label or single-label classification (label", "g).", "Contrary to prior art, our approach refrains from attention, hierarchical structure, and sequence generation, each of which increases model complexity.", "For one, hierarchical structure requires sentence-level tokenization and multiple RNNs.", "For another, sequence generation uses an encoder decoder architecture, reducing computational parallelism.", "All three methods add depth to the model; our approach instead uses a single-layer BiLSTM with trivial max-pooling and concatenation operations, which makes for both simple implementation and resource-efficient inference.", "We conduct a large-scale reproducibility study involving HAN, XML-CNN, KimCNN, and SGM.", "These are compared to our proposed model, referred to as LSTM reg , as well as an ablated variant without regularization, denoted LSTM base .", "The implementation of our model as well as from-scratch reimplementations of all the comparison models (except for SGM) are provided in our toolkit called Hedwig, which we make publicly available to serve as the foundation for future work.", "1 In addition, we compare the neural approaches to logistic regression (LR) and support vector machines (SVMs).", "The LR model is trained using a one-vs-rest multi-label objective, while the SVM is trained with a linear kernel.", "Both of these methods use word-level tfidf vectors of the documents as features.", "All of our experiments are performed on Nvidia GTX 1080 and RTX 2080 Ti GPUs, with PyTorch 0.4.1 as the backend framework.", "We use Scikit-learn 0.19.2 for computing the tfidf vectors and implementing LR and SVMs.", "We evaluate our models on the following four datasets: Reuters-21578, arXiv Abstract Paper dataset (AAPD), IMDB, and Yelp 2014.", "Reuters and AAPD are multi-label datasets, whereas IMDB and Yelp are single-label ones.", "For IMDB and Yelp, we use random sampling to split the dataset such that 80% is used for training, 10% for validation, and 10% for test.", "We use the standard ModApte splits (Apte et al., 1994) for the Reuters dataset, and author-defined splits for AAPD (Yang et al., 2018).", "We summarize the statistics of these datasets in Table", "1. Unfortunately, there is little consensus within the natural language processing community for choosing the splits of IMDB and Yelp 2014.", "Furthermore, they are often unreported in modeling papers, hence preventing direct comparison with past results.", "We are not able to find the exact splits Yang et al. (2016) use; for consistency, we use the same proportion the authors report, but of course this yields different samples in each split.", "For the multi-label datasets, we report the wellknown micro-averaged F 1 score, which is the class-weighted harmonic mean between recall and precision.", "For the single-label datasets, we compare the models using accuracy.", "To ensure a fair comparison, we tune the hyperparameters for all baseline models.", "For HAN, we use a batch size of 32 across all the datasets, with a learning rate of 0.01 for Reuters and 0.001 for the rest.", "To train XML-CNN, we select a dynamic pooling window length of eight, a learning rate of 0.001, and 128 output channels, with batch sizes of 32 and 64 for single-label and multi-label datasets, respectively.", "For KimCNN, we use a batch size of 64 with a learning rate of 0.01.", "For training SGM on Reuters, we use the source code provided by the authors 2 and follow the same hyperparameters in their paper (Yang et al., 2018).", "For the LR and SVM models, we use the default set of hyperparameters in Scikit-learn.", "For LSTM reg and LSTM base , we use the Adam optimizer with a learning rate of 0.01 on Reuters and 0.001 on the rest of the datasets, using batch sizes of 32 and 64 for multi-label and single-label tasks, respectively.", "For LSTM reg , we also apply temporal averaging (TA): as shown in Kingma and Ba (2014), TA reduces both generalization error and stochastic noise in recent parameter estimates from stochastic approximation.", "We set the default TA exponential smoothing coefficient of EMA to 0.99.", "We choose 512 hidden units for the BiLSTM models, whose max-pooled output is regularized using a dropout rate of 0.5.", "We also regularize the inputhidden and hiddenhidden BiLSTM connections using embedding dropout and weight dropping, respectively, with dropout rates of 0.1 and 0.2.", "2 https://github.com/lancopku/SGM # Model Reuters AAPD IMDB Yelp '14 Val.", "For our optimization objective, we use cross-entropy and binary cross-entropy loss for single-label and multi-label tasks, respectively.", "On all datasets and models, we use 300-dimensional word vectors (Mikolov et al., 2013) pre-trained on Google News.", "We train all neural models for 30 epochs with five random seeds, reporting the mean validation set scores and their corresponding test set results.", "Toward Robust Baselines.", "Recently, reproducibility is becoming a growing concern for the NLP community (Crane, 2018).", "Indeed, very few of the papers that we consider in this study report validation set results, let alone run on multiple seeds.", "In order to address these issues, we report scores on both validation and test sets for our reimplementations; doing so is good practice, since it reinforces the validity of the experimental results and claims.", "We also provide the standard deviation of the scores across different seeds to demonstrate the stability of our results.", "This is in line with previous papers (Zhang and Wallace, 2017; Reimers and Gurevych, 2017; Crane, 2018) that emphasize reporting variance for robustness against potentially spurious conclusions.", "We report the mean and standard deviation (SD) of the F 1 scores and accuracy for all five runs in Table", "2. For HAN and KimCNN, we include results from the original papers to validate our reimple-mentation.", "We fail to replicate the reported results of SGM on AAPD using the authors' codebase and data splits.", "3 As a result, we simply copy the value reported in Yang et al. (2018) in Table 2, row 8, which represents their maximum F 1 score.", "To verify the correctness of our HAN and KimCNN reimplementations, we compare the differences in F 1 and accuracy on the appropriate datasets.", "We attribute the small differences to using different dataset splits (see Section 4.1) and reporting mean values.", "Baseline Comparison.", "We see that our simple LSTM reg model achieves state of the art on Reuters and IMDB (see Table 2, rows 9 and 10), establishing mean scores of 87.0 and 52.8 for F 1 score and accuracy on the test sets of Reuters and IMDB, respectively.", "This highlights the ef-ficacy of proper regularization and optimization techniques for the task.", "We observe that LSTM reg consistently improves upon the performance of LSTM base across all of the taskssee rows 9 and 10, where, on average, regularization yields increases of 1.5 and 0.5 points for F 1 score and accuracy, respectively.", "A few of our LSTM reg runs attain state-of-the-art test F 1 scores on AAPD.", "However, in the interest of robustness, we report the mean value, as mentioned in Section 4.2.", "We also find the accuracy of LSTM reg and our reimplemented version of HAN on Yelp 2014 to be almost two points lower than the copied result of HAN (rows 6, 7, and 10) from Yang et al. (2016).", "On the other hand, both of the models surpass the original result by nearly two points for the IMDB dataset.", "We cannot rule out that these disparities are caused 3 The authors did not answer our e-mails seeking assistance.", "by the absence of any widely-accepted splits for evaluation on Yelp 2014 and IMDB (as opposed to model or implementation differences).", "Interestingly, the non-neural LR and SVM baselines perform remarkably well.", "On Reuters, for example, the SVM beats many neural baselines, including our non-regularized LSTM base (rows 2 9).", "On AAPD, the SVM either ties or beats the other models, losing only to SGM (rows 28).", "Compared to the SVM, the LR baseline appears better suited for the single-label datasets IMDB and Yelp 2014, where it achieves better accuracy than the SVM does.", "In this paper, we question the complexity of existing neural network architectures for document classification.", "To demonstrate the effectiveness of proper regularization and optimization, we apply embedding dropout, weight dropping, and temporal averaging when training a simple BiLSTM model, establishing either competitive or state-of-the-art results on multiple datasets.", "One potential extension of this work is to conduct a comprehensive ablation study to determine the relative contribution of each of the regularization and optimization techniques.", "Furthermore, it would be interesting to compare these techniques to the recent line of research in deep language representation models, such as Embeddings from Language Models (ELMo; Peters et al., 2018) and pre-trained transformers (Devlin et al., 2018; Radford, 2018).", "Finally, the examined regularization and optimization methods deserve exploration in other NLP tasks as well.", "This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.", "We thank Nabiha Asghar for providing us with the Yelp 2014 dataset.", "We also thank the anonymous reviewers for their valuable comments." ]
[ "abstain", "abstain", "result", "result", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "Entity linking (EL), the task of disambiguating mentions in text by linking them to entities in a knowledge graph, is crucial for text understanding, question answering or conversational systems.", "Entity linking on short text (e.g., single sentence or question) poses particular challenges due to limited context.", "While prior approaches use either heuristics or black-box neural methods, here we propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules based on first-order logic with the performance of neural learning.", "Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches, with the added benefits of extensibility and transferability.", "In particular, we show that we can easily blend existing rule templates given by a human expert, with multiple types of features (priors, BERT encodings, box embeddings, etc), and even scores resulting from previous EL methods, thus improving on such methods.", "For instance, on the LC-QuAD-1.0 dataset, we show more than 4 % increase in F1 score over previous SotA.", "Finally, we show that the inductive bias offered by using logic results in learned rules that transfer well across datasets, even without fine tuning, while maintaining high accuracy.", "Entity Linking (EL) is the task of disambiguating textual mentions by linking them to canonical entities provided by a knowledge graph (KG) such as DBpedia, YAGO (Suchanek et al., 2007) or Wikidata (Vrandecic and Krtzsch, 2014).", "A large body of existing work deals with EL in the context of longer text (i.e., comprising of multiple sentences) (Bunescu and Pasca, 2006).", "The general Equal contribution; Author Hang Jiang did this work while interning at IBM.", "approach is: 1) extract features measuring some degree of similarity between the textual mention and any one of several candidate entities (Mihalcea and Csomai, 2007; Cucerzan, 2007; Ratinov et al., 2011), followed by 2) the disambiguation step, either heuristics-based (non-learning) (Hoffart et al., 2011; Sakor et al., 2019; Ferragina and Scaiella, 2012) or learning-based (Mihalcea and Csomai, 2007; Cucerzan, 2007; Ratinov et al., 2011; Hoffart et al., 2012; Ganea and Hofmann, 2017), to link the mention to an actual entity.", "A particular type of entity linking, focused on short text (i.e., a single sentence or question), has attracted recent attention due to its relevance for downstream applications such as question answering (e.g., (Kapanipathi et al., 2021)) and conversational systems.", "Short-text EL is particularly challenging because the limited context surrounding mentions results in greater ambiguity (Sakor et al., 2019).", "To address this challenge, one needs to exploit as many features from as many sources of evidence as possible.", "Consider the question in Figure", "1(a), containing mention 1 ( Cameron ) and mention 2 ( Titanic ).", "1 DBpedia contains several person entities whose last name matches Cameron .", "Two such entities are shown in Figure", "3(b), James_Cameron and Roderick_Cameron , along with their string similarity scores (in this case, character-level Jaccard similarity) to mention 1 .", "In this case, the string similarities are quite close.", "In the absence of reliable discerning information, one can employ a prior such as using the more popular candidate entity, as measured by the in-degree of the entity in the KG (see Figure", "3(b)).", "Given the higher in-degree, we can (correctly) link mention 1 to James_Cameron .", "However, for mention 2 , the correct entry is Titanic_(1997_film) as opposed to 1 Note that we assume that mention extraction has already been applied and we are given the textual mentions.", "Titanic the ship, but it actually has a lower string similarity.", "To link to the correct entity, one needs to exploit the fact that James_Cameron has an edge connecting it to Titanic_(1997_film) in the KG (see ego network on the left in Figure", "1(c)).", "Linking co-occurring mentions from text to connected entities in the KG is an instance of collective entity linking .", "This example provides some intuition as to how priors, local features (string similarity) and collective entity linking can be exploited to overcome the limited context in short-text EL.", "While the use of priors, local features and nonlocal features (for collective linking) has been proposed before (Ratinov et al., 2011), our goal in this paper is to provide an extensible framework that can combine any number of such features and more, including contextual embeddings such as BERT encodings (Devlin et al., 2019) and Query2box embeddings (Ren et al., 2020), and even the results of previously developed neural EL models (e.g., BLINK (Wu et al., 2020)).", "Additionally, such a framework must not only allow for easy inclusion of new sources of evidence but also for interpretability of the resulting model (Guidotti et al., 2018).", "An approach that combines disparate features should, at the very least, be able to state, post-training, which features are detrimental and which features aid EL performance and under what conditions, in order to enable actionable insights in the next iteration of model improvement.", "Our Approach.", "We propose to use rules in first-order logic (FOL), an interpretable fragment of logic, as a glue to combine EL features into a coherent model.", "Each rule in itself is a disambiguation model capturing specific characteristics of the overall linking.", "While inductive logic programming (Muggleton, 1996) and statistical relational learning (Getoor and Taskar, 2007) have for long focused on learning FOL rules from labeled data, more recent approaches based on neuro-symbolic AI have led to impressive advances.", "In this work, we start with an input set of rule templates (given by an expert or available as a library), and learn the parameters of these rules (namely, the thresholds of the various similarity predicates as well as the weights of the predicates that appear in the rules), based on a labeled dataset.", "We use logical neural networks (LNN) (Riegel et al., 2020), a powerful neuro-symbolic AI approach based on real-valued logic that employs neural networks to learn the parameters of the rules.", "Learning of the rule templates themselves will be the focus of future work.", "We propose, to the best of our knowledge, the first neuro-symbolic method for entity linking (coined LNN-EL \") that provides a principled approach to learning EL rules. Our approach is extensible and can combine disparate types of local and global features as well as results of prior black-box neural methods, thus building on top of such approaches. Our approach produces interpretable rules that humans can inspect toward actionable insights. We evaluate our approach on three benchmark datasets and show competitive (or better) performance with SotA black-box neural approaches (e.g., BLINK (Wu et al., 2020)) even though we are constrained on using rules. By leveraging rules, the learned model shows a desirable transferability property: it performs well not only on the dataset on which it was trained, but also on other datasets from the same domain without further training.", "Entity Linking Models. Entity Linking is a well-studied problem in NLP, especially for long text. Approaches such as (Bunescu and Pasca, 2006; Ratinov et al., 2011; Sil et al., 2012; Hoffart et al., 2011; Shen et al., 2015) use a myriad of classical ML and deep learning models to combine priors, local and global features. These techniques, in general, can be applied to short text, but the lack of suf-ficient context may render them ineffective. The recently proposed BLINK (Logeswaran et al., 2019;", "Wu et al., 2020) uses powerful transformer-based encoder architectures trained on massive amounts of data (such as Wikipedia, Wikia) to achieve SotA performance on entity disambiguation tasks, and is shown to be especially effective in zero-shot settings. BLINK is quite effective on short text (as observed in our findings); in our approach, we use BLINK both as a baseline and as a component that", "is combined in larger rules. For short-text EL, some prior works (Sakor et al., 2019; Ferragina and Scaiella, 2012; Mendes et al., 2011) address the joint problem of mention detection and linking, with primary focus on identifying mention spans, while linking is done via heuristic methods without learning. (Sakor et al., 2019) also jointly extracts relation spans which aide in overall linking performance. The recent ELQ (Li et al., 2020) extends BLINK to jointly learn mention detection and linking. In contrast, we focus solely on linking and take a different strategy based on combining logic rules with learning. This facilitates a principled way combining multiple types of EL features with interpretability and learning using promising gradient-based techniques.", "Rule-based Learning. FOL rules and learning have been successfully applied in some NLP tasks and also other domains. Of these, the task that is closest to ours is entity resolution (ER), which is the task of linking two entities across two structured datasets. In this context, works like (Chaud-huri et al., 2007; Arasu et al., 2010; Wang et al., 2012; Hernndez et al., 2013) use FOL rules for ER. Approaches such as (Singla and Domingos, 2006; Pujara and Getoor, 2016) induce probabilistic rules using MLNs (Richardson and Domingos, 2006) and PSL (Bach et al., 2017), respectively. None of these approaches use any recent advances in neural-based learning; moreover, they are focused on entity resolution, which is a related task but distinct from short-text EL.", "Given text T , a set M = { m 1 , m 2 , ... } of mentions, where each m i is contained in T , and a knowledge graph (KG) comprising of a set E of entities, entity linking is a many-to-one function that links each mention m i M to an entity e ij C i , where C i E is a subset of relevant candidates for mention m i . More generally, we formulate the problem as a ranking of the candidates in C i so that the cor-rect\"", "cor-rect\" entity for m i is ranked highest.", "Following existing approaches(e.g. (Sakor et al., 2019; Wu et al., 2020), we use off-the-shelf lookup tools such as DBpedia lookup 2 to retrieve top-100 candidates for each mention.", "While this service is specific to DBpedia, we assume that similar services exist or can be implemented on top of other KGs.", "Fueled by the rise in complexity of deep learning, recently there has been a push towards learning interpretable models (Guidotti et al., 2018; Danilevsky et al., 2020).", "While linear classifiers, decision lists/trees may also be considered interpretable, rules expressed in first-order logic (FOL) form a much more powerful, closed language that offer semantics clear enough for human interpretation and a larger range of operators facilitating the expression of richer models.", "To learn these rules, neuro-symbolic AI typically substitutes conjunctions (disjunctions) with differentiable t -norms ( t -conorms) (Esteva and Godo, 2001).", "However, since these norms do not have any learnable parameters (more details in Appendix A.1), their behavior cannot be adjusted, thus limiting their ability to model well the data.", "In contrast, logical neural networks (LNN) (Riegel et al., 2020) offer operators that include parameters, thus allowing to better learn from the data.", "To maintain the crisp semantics of FOL, LNNs enforce constraints when learning operators such as conjunction.", "Concretely, LNN is expressed as: max(0 , min(1 , w 1 (1 x ) w 2 (1 y ))) subject to: (1 )( w 1 + w 2 ) (1) w 1 1 (2) w 2 1 (3) w 1 , w 2 0 where , w 1 , w 2 are learnable parameters, x, y [0 , 1] are inputs and [ 12 , 1] is a hyperparameter.", "Note that max(0 , min(1 , )) clamps the output of LNN between 0 and 1 regardless of , w 1 , w 2 , x, and y .", "The more interesting aspects are in the constraints.", "While Boolean conjunction only returns 1 or true when both inputs are 1 , LNNs relax this condition by using as a proxy for 1 (and conversely, 1 as a proxy for 0).", "In particular, Constraint (1) forces the output of LNN to be greater than when both inputs are greater than .", "Similarly, Constraints (2) and (3) constrain the 2 https://lookup.dbpedia.org/ 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 x y 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 x y 0 0.2 0.4 0.6 0.8 1 Figure 2: (left) Product t -norm.", "behavior of LNN when one input is low and the other is high.", "For instance, Constraint (2) forces the output of LNN to be less than 1 for y = 1 and x 1 .", "This formulation allows for unconstrained learning when x, y [1 , ] .", "By changing a user can control how much learning to enable (increase to make region of unconstrained learning wider or decrease for the opposite).", "Figure 2 depicts product t -norm and LNN ( = 0 . 7 ).", "While the former increases slowly with increasing x, y , LNN produces a high output when both inputs are and stays high thereafter, thus closely modeling Boolean conjunction semantics.", "max(0 , min(1 , w 1 (1 x ) w 2 (1 y ))) subject to: (1 )( w 1 + w 2 ) + w 1 1 + 1 w 2 1 + 2 w 1 , w 2 , 1 , 2 , 0 LNN ( x, y ) =", "where 1 , 2 , and denote slack variables.", "If any of Constraints (1), (2) and (3) in LNN are unsatisfied then slacks help correct the direction of the inequality without putting pressure on parameters w 1 , w 2 , and during training.", "For the rest of the paper, by LNN we refer to the above formulation.", "LNN negation is a pass-through operator: LNN ( x ) = 1 x , and LNN disjunction is defined in terms of LNN : LNN ( x, y ) = 1 LNN (1 x, 1 y ) While vanilla backpropagation cannot handle linear inequality constraints such as Constraint (1), specialized learning algorithms are available within the LNN framework.", "For more details, please check Riegel et al. (2020) 4 LNN-EL An overview of our neuro-symbolic approach for entity linking is depicted in Figure 3.", "We next discuss the details about feature generation component that generates features using a catalogue Features Description Name sim ( m i , e ij ) , where sim is a general purpose string similarity function such as Jaccard ( jacc ), JaroWinkler ( jw ), Levenshtein ( lev ), Partial Ratio ( pr ), etc.", "of feature functions (Section 4.1) followed by proposed model that does neuro-symbolic learning over user provided EL algorithm in Section 4.2.", "Given the input text T , together with labeled data in the form ( m i , C i , L i ) , where m i M is a mention in T , C i is a list of candidate entities e ij (drawn from lookup services 3 ) for the mention m i , and where each l ij L i denotes a link/not-link label for the pair ( m i , e ij ) .", "The first step is to generate a set F ij = { f k ( m i , e ij ) } of features for each pair ( m i , e ij ) , where f k is a feature function drawn from a catalog F of user provided functions.", "Our collection of feature functions include both non-embedding and embedding based functions.", "Non-embedding based.", "We include here a multitude of functions (see Table 1) that measure the similarity between the mention m i and the candidate entity e ij based on multiple types of scores.", "Name: a set of general purpose string similarity functions 4 such as Jaccard , Jaro Winkler , Levenshtein , Partial Ratio , etc. are used to compute the similarity between m i and e ij 's name.", "Context: aggregated similarity of m i 's context to the description of e ij .", "Here, we consider the list of all other mentions m k M ( k (cid:54) = i ) as m i 's context, together with e ij 's textual description obtained using KG resources 5 .", "The exact formula we use is shown in Table 1, where Partial Ratio ( pr ) measures the similarity between each context mention and the description.", "( Partial Ratio computes the 3 https://lookup.dbpedia.org 4 pypi.org/project/py-stringmatching 5 dbpedia.org/sparql Text T KGResources Featurefunctions F m i , e i 1 ,l i 1 e i 2 ,l i 2 ... Labeleddata m i , [ C i ,L i ] FeatureGeneration m i , e i 1 , [ f 1 ,f 2 ,... ] i 1 ,l i 1 e i 2 , [ f 1 ,f 2 ,... ] i 2 ,l i 2 ... Labeleddatawithfeatures m i , [ C i ,F i ,L i ] LNN 2 f 2 1 f 1 3 f 3 fw 1 fw 2 fw 3 LNN 4 f 1 5 f 4 fw 4 fw 5 LNN rw 1 rw 2 LNNReformulationofELAlgorithm R 1 ( m i ,e ij ) f 1 ( m i ,e ij ) > 1 f 2 ( m i ,e ij ) > 2 f 3 ( m i ,e ij ) > 3 R 2 ( m i ,e ij ) f 1 ( m i ,e ij ) > 4 f 4 ( m i ,e ij ) > 5 UserprovidedELAlgorithm m i , e i 1 ,s ( m i ,e ij ) e i 2 ,s ( m i ,e i 2 ) ... Finalscores Learnableparameters: i featurethresholds , fw i featureweights , rw i ruleweights Figure 3: Overview of our approach maximum similarity between a short input string and substrings of a second, longer", "string.) For normalizing the final score, we apply a min-max rescaling over all entities e ij C i .", "Type: the overlap similarity of mention m i 's type to e ij 's domain (class) set, similar to the domain-entity coherence score proposed in (Nguyen et al., 2014).", "Unlike in (Nguyen et al., 2014), instead of using a single type for all mentions in M , we obtain type information for each mention m i using a trained BERT-based entity type detection model.", "We use KG resources 5 to obtain e ij 's domain set, similar to Context similarity.", "Entity Prominence: measure the prominence of entity e ij as the number of entities that link to e ij in target KG, i.e., indegree ( e ij ) .", "Similar to Context score normalization, we apply min-max rescaling over all entities e ij C i .", "Embedding based.", "We also employ a suite of pretrained or custom trained neural language models to compute the similarity of m i and e ij .", "Pre-trained Embedding Models.", "These include SpaCy's semantic similarity 6 function that uses Glove (Pennington et al., 2014) trained on Common Crawl.", "In addition to SpaCy, we also use scores from an entity linking system such as BLINK (Wu et al., 2020) (a state-of-the-art entity linking model) as a feature function in our system.", "BERT Embeddings.", "To further explore the semantics of the context in T and the inherent structure of the target KG, we incorporate an embedding-based similarity by training a mini entity linking model without any aforementioned prior information.", "We first tag the input text T with a special token [MENT] to indicate the position of mention m i , and then encode T with BERT, i.e., m i = BERT ( m i , T ) .", "Each candidate e ij is encoded with 6 spacy.io/usage/vectors-similarity Box Cameron Box Cameron + Box Neighbors Box Titanic C Cameron N ( C Cameron ) Neighborhood Projection C Titanic Figure 4: Candidates for linking the Titanic' mention appear in the intersection of the two boxes.", "a pre-trained graph embedding Wiki2Vec (Yamada et al., 2020), i.e., e ij = Wiki2Vec ( e ij ) .", "The candidates are ranked in order of the cosine similarity to m i , i.e., Sim cos ( m i , e ij ) .", "The mini EL model is optimized with margin ranking loss so that the correct candidate is ranked higher.", "BERT with Box Embeddings.", "While features such as Context (see Table 1) can exploit other mentions appearing within the same piece of text, they only do so via textual similarity.", "A more powerful method is to jointly disambiguate the mentions in text to the actual entities in the KG, thus exploiting the structural context in the KG.", "Intuitively, the simultaneous linking of co-occurring mentions in text to related entities in the KG is a way to reinforce the links for each individual mention.", "To this end, we adapt the recent Query2Box (Ren et al., 2020), whose goal is to answer FOL queries over a KG.", "The main idea there is to represent sets of entities (i.e., queries) as contiguous regions in embedded space (e.g., axis-parallel hyper-rectangles or boxes), thus reducing logical operations to geometric operations (e.g., intersection).", "Since Query2Box assumes a well-formed query as input, one complication in directly applying it to our setting is that we lack the information necessary to form such an FOL query.", "For instance, in the example from Section 1, while we may assume that the correct entities for our Cameron and Titanic mentions are connected in the KG, we do not know how these are connected, i.e., via which relation.", "To circumvent this challenge, we introduce a special neighborhood relation N , such that v N ( u ) whenever there is some KG relation from entity u to entity v .", "We next define two box operations: Box ( C i ) = { v | min( { e ij | e ij C i } ) (cid:22) v (cid:22) max( { e ij | e ij C i } ) } Box ( N ( C i )) = Box ( C i ) + Box N The first operation represents mention m i as a box, by taking the smallest box that contains the set C i of candidate entities for m i .", "This can be achieved by computing the dimension-wise minimum (maximum) of all entity embeddings in C i to obtain the lower-left (upper-right) corner of the resulting box.", "The second operation takes m i 's box and produces the box containing its neighbors in the KG.", "Query2Box achieves this by representing Box N via a center vector and offset vector , both of which are learned parameters.", "The box of neighbors is then obtained by translating the center of m i 's box by and adding the offset to its side.", "Figure 4 shows how these operations are used to disambiguate Titanic while exploiting the co-occurring mention Cameron and the KG structure.", "We take the box for Cameron , compute its neighborhood box, then intersect with the Titanic box.", "This intersection contains valid entities that can disambiguate Titanic and are connected to the entity for Cameron .", "For the actual score of each such entity, we take its distance to the center of the intersection box and convert it to a similarity score Sim box ( m i , e ij ) .", "We then linearly combine this with the BERT-based similarity measure: box Sim box ( m i , e ij ) + Sim cos ( m i , e ij ) , where box is a hyper-parameter that adjusts the importance of the two scores.", "The approach described can be easily extended to more than two mentions.", "In this section, we describe how an EL algorithm composed of a disjunctive set of rules is reformulated into LNN representation for learning.", "Entity Linking Rules are a restricted form of FOL rules comprising of a set of Boolean predicates connected via logical operators: conjunction ( ) and disjunction ( ) .", "A Boolean predicate has the form f k > , where f k F is one of the feature functions, and can be either a user provided or a learned threshold in [0 , 1] .", "Figure", "5(a) shows two example rules R 1 and R 2 , where, for instance, R 1 ( m i , e ij ) evaluates to True if both the predicate jacc ( m i , e ij ) > 1 and Ctx ( m i , e ij ) > 2 are True .", "Rules can be disjuncted together to form a larger EL algorithm, as the one shown in Figure", "5(b), which states that Links ( m i , e ij )", "evalu-(a)EL Rules R 1 ( m i ,e ij ) jacc ( m i ,e ij ) > 1 Ctx ( m i ,e ij ) > 2 R 2 ( m i ,e ij ) lev ( m i ,e ij ) > 3 Prom ( m i ,e ij ) > 4", "(b)EL Algorithm Links ( m i ,e ij ) R 1 ( m i ,e ij ) R 2 ( m i ,e ij )", "(c)Scoring s ( m i ,e ij ) = + (cid:18) rw 1 (( fw 1 jacc ( m i ,e ij ) ( fw 2 Ctx ( m i ,e ij )) rw 2 (( fw 3 jacc ( m i ,e ij ) ( fw 4 Ctx ( m i ,e ij )) (cid:19) Figure 5: Example of entity linking rules and scoring.", "ates to True if any one of its rules evaluates to True .", "The Links predicate is meant to store high-quality links between mention and candidate entities that pass the conditions of at least one rule.", "The EL algorithm also acts as a scoring mechanism.", "In general, there are many ways in which scores can computed.", "In a baseline implementation (no learn-ing), we use the scoring function in Figure", "5(c), where rw i denote manually assigned rule weights, while fw i are manually assigned feature weights.", "An EL algorithm is an explicit and extensible description of the entity linking logic, which can be easily understood and manipulated by users.", "However, obtaining competitive performance to that of deep learning approaches such as BLINK (Wu et al., 2020) requires a significant amount of manual effort to fine tune the thresholds i , the feature weights (fw i ) and the rule weights (rw i ).", "LNN Reformulation.", "To facilitate learning of the thresholds and weights in an EL algorithm, we map the Boolean-valued logic rules into the LNN formalism, where the LNN constructs LNN (for logical OR) and LNN (for logical AND) allow for continuous real-valued numbers in [0 , 1] .", "As described in Section 3.2, LNN and LNN are a weighted real-valued version of the classical logical operators, where a hyperparameter is used as a proxy for 1 .", "Each LNN operator produces a value in [0 , 1] based on the values of the inputs, their weights and bias .", "Both the weights and are learnable parameters.", "The score of each link is based on the score that the LNN operators give, with an added complication related to how we score the feature functions.", "To illustrate, for the EL rules in Figure 5, the score of a link is computed as: s ( m i , e ij ) = LNN LNN (cid:18) TL ( jacc ( m i , e ij ) , 1 ) , TL ( Ctx ( m i , e ij ) , 2 ) (cid:19) , LNN (cid:18) TL ( lev ( m i , e ij ) , 3 ) , TL ( Prom ( m i , e ij ) , 4 ) (cid:19) Dataset Train Test |Q| |E| |Q| |E| LC-QuAD 1.0 (Trivedi et al., 2017) 4,000 6,823 1000 1,721 QALD-9 (Usbeck et al., 2018) 408 568 150 174 WebQSP EL (Li et al., 2020) 2974 3,237 1603 1,798 Table 2: Characteristics of the datasets.", "Here the top-level LNN represents the disjunction R 1 R 2 , while the two inner LNN capture the rules R 1 and R 2 respectively.", "For the feature functions with thresholds, a natural scoring mechanism would be to use score ( f > ) = f if f > else 0 , which filters out the candidates that do not satisfy the condition f > , and gives a non-zero score for the candidates that pass the condition.", "However, since this is a step function which breaks the gradient flow through a neural network, we approximate it via a smooth function T L ( f, ) = f ( f ) , where is Sigmoid function and is the learnable threshold that is generated using , i.e., = ( ) , to ensure that it lies in [0 , 1] .", "Training.", "We train the LNN formulated EL rules over the labeled data and use a margin-ranking loss over all the candidates in C i to perform gradient descent.", "The loss function L ( m i , C i ) for mention m i and candidates set C i is defined as (cid:88) e in C i \\{ e ip } max(0 , ( s ( m i , e ip ) s ( m i , e in )) + ) Here, e ip C i is a positive candidate, C i \\{ e ip } is the set of negative candidates, and is a margin hyper parameter.", "The positive and negative labels are obtained from the given labels L i (see Figure 3).", "Inference.", "Given mention m i and candidate set C i , similar to training, we generate features for each mention-candidate pair ( m i , e ij ) in the feature generation step.", "We then pass them through the learned LNN network to obtain final scores for each candidate entity in C i as shown in Figure 3.", "We first evaluate our approach w.r.t performance & extensibility, interpretability and transferability.", "We also discuss the training and inference time.", "Datasets.", "As shown in Table 2, we consider three short-text QA datasets.", "LC-QuAD and QALD-9 are datasets comprising of questions (Q) over DBpedia together with their corresponding SPARQL queries.", "We extract entities (E) from SPARQL queries and manually annotate mention spans.", "WebQSP EL dataset (Li et al., 2020) comprises of both mention spans and links to the correct entity.", "Since the target KG for WebQSP is Wikidata, we translate each Wikidata entity to its DBpedia counterpart using DBpedia Mappings 7 .", "In addition, we discard mentions that link to DBpedia concepts (e.g., heaviest player linked to dbo:Person ) and mentions m i with empty result (i.e., C i = ) or all not-link labels (i.e, l ij L i , l ij = 0 ) 8 .", "Baselines.", "We compare our approach to (1) BLINK (Wu et al., 2020), the current state-of-the-art on both short-text and long-text EL, (2) three BERT-based models -", "(a) BERT : both mention and candidate entity embeddings are obtained via BERT base pre-trained encoder, similar to (Gillick et al., 2019),", "(b) BERTWiki : mention embeddings are obtained from BERT base , while candidate entity is from pretrained Wiki2Vec (Yamada et al., 2020),", "(c) Box : BERTWiki embeddings finetuned with Query2Box embeddings (see Section 4.1).", "In addition to the aforementioned black-box neural models, we also compare our approach to (3) two logistic regression models that use the same feature set as LNN-EL: LogisticRegression without BLINK and LogisticRegression BLINK with BLINK.", "Furthermore, we use the following variants of our approach: (4) RuleEL : a baseline rule-based EL approach with manually defined weights and thresholds, (5) LogicEL : a baseline approach built on RuleEL where only the thresholds are learnable, based on product t -norm (see Section 3.2), (6) LNN-EL : our core LNN-based method using non-embedding features plus SpaCy, and (7) LNN-EL ens : an ensemble combining core LNN-EL with additional features from existing EL approaches, namely BLINK and Box (we consider Box, as it outperforms BERT and BERTWiki on all datasets).", "Detailed rule templates are provided in Appendix A.3.", "Setup.", "All the baselines are trained for 30 epochs, except for BLINK which we use as a zero-shot approach.", "For BERT approaches, we use BERT base as pretrained model.", "We used two Nvidia V100 GPUs with 16GB memory each.", "We perform hyperparameter search for margin and learning rates in the range [0 . 6 , 0 . 95] , [10 5 , 10 1 ] respectively.", "Overall Performance.", "As seen in Table 3, among logic-based approaches, LNN-EL outperforms LogicEL and RuleEL, showing that parameterized real-valued LNN learning is more effective than the non-parameterized version with t -norm (Log-icEL) and the manually tuned RuleEL.", "Logistic regression models which also learn weights over features achieve competitive performance to LNN-EL models; however they lack the representation power that LNN-EL offer in the form of logical rules comprising of conjunctions and disjunctions.", "In other words, LNN-EL allows learning over a richer space of models that help in achieving better performance as observed in Table 3.", "On the other hand, simple BERT-based approaches (BERT, BERTWiki, Box) that are trained on the QA datasets underperform the logic-based approaches, which incorporate finer-grained features.", "BLINK (also a BERT-based approach, but trained on the entire Wikipedia) is used as zero-shot approach and achieves SotA performance (when not counting the LNN-EL variants).", "The core LNN-EL version is competitive with BLINK on LC-QuAD and QALD-9, despite being a rule-based approach.", "Furthermore, LNN-EL ens , which combines the core LNN-EL with both BLINK and Box features, easily beats BLINK on LC-QuAD and QALD-9 and slightly on WebQSP EL .", "Table 4 shows the Recall@ k performance of LNN-EL against the BLINK model.", "Both LNN-EL and LNN-EL ens have better Recall@ k performance against BLINK on LC-QuAD and QALD-9 datasets, however BLINK's Recall@ k achieves a slightly better performance for WebQSP EL dataset.", "Extensibility.", "Here, we inspect empirically how a multitude of EL features coming from various black-box approaches can be combined in a principled way with LNN-EL, often leading to an overall better performance than the individual approaches.", "A detailed ablation study of the core LNN-EL ver-Dataset Model R@5 R@10 R@64 LC-QuAD BLINK 94.69 96.01 96.92 LNN-EL 93.66 94.39 97.56 LNN-EL ens 97.07 97.20 97.68 QALD-9 BLINK 93.39 93.39 94.29 LNN-EL 92.72 95.94 98.04 LNN-EL ens 94.63 94.63 95.48 WebQSP ELBLINK 97.40 97.64 98.61 LNN-EL 93.54 95.12 96.59 LNN-EL ens 96.34 96.59 96.95 Table 4: Recall@ k performance of LNN-EL models Dataset LNN-EL LNN-EL LNN-EL LNN-EL LNN-EL ens +BLINK +BERTWiki +Box LC-QuAD 87.64 90.24 88.23 89.05 91.00 QALD-9 88.52 90.96 86.41 88.52 91.38 WebQSP EL 85.08 92.32 91.70 91.44 92.12 Table 5: F1 scores of LNN-EL with additional features coming from various black-box EL approaches.", "sion can be found in Appendix A.2.", "As seen in Table 5, approaches like BERTWiki and Box which in isolation underperform compared to LNN-EL, help boost the latter's performance if they are included as predicates.", "Similarly, LNN-EL which has comparable performance to BLINK, can accommodate the latter's score to produce better performance (see LNN-EL + BLINK ).", "We also note that adding features is not a guarantee to improve performance, as LNN-EL ens (which includes both BLINK and Box) slightly underperforms LNN-EL + BLINK on WebQSP EL .", "For such cases, the interpretability of LNN-EL (discussed next) can help users select the right features based on their relative importance.", "Interpretability.", "Unlike black-box models, rule-based approaches provide the capability to inspect the model, specifically on how the features impact performance.", "This inspection can help in dropping or adjusting features that are detrimental.", "For instance, consider our case of LNN-EL + BLINK and LNN-EL ens trained on WebQSP EL dataset, where we observed that LNN-EL ens 's performance is inferior to LNN-EL + BLINK even though the former model has more features.", "A human expert can find BLINK Sim Ctx Type Prom ...", "insights into this behavior by looking at the feature weights in each model.", "In Figure 6 (left), the disjunction tree with the Box feature is given a low weight of 0 .", "26 , thus discounting some of the other useful features in the same tree.", "Removal of the Box feature leads to a re-weighting of the features in the model; the modified disjunction tree (Figure 6 (left)) has now a weight of 0 .", "42 .", "Such visualization can help the rule designer to judiciously select features to combine towards building a performant model.", "Transferability.", "To study the transferability aspect, we train LNN-EL on one dataset and evaluate the model on the other two, without any finetun-ing.", "We use the core LNN-EL variant for this, but similar properties hold for the other variants.", "Table 6 shows F1 scores on different train-test con-figurations, with diagonal (underlined numbers) denoting the F1 score when trained and tested on the same dataset.", "We observe that LNN-EL transfers reasonably well, even in cases where training is done on a very small dataset.", "For example, when we transfer from QALD-9 (with only a few hundred questions to train) to WebQSP EL , we obtain an F1-score of 83 .", "06 which is within 2 percentage points of the F1-score when trained directly on WebQSP EL .", "We remark that the zero-shot BLINK by design has very good transferability and achieves F1 scores of 87.04, 89.14, 92.10 on LC-QuAD, QALD-9, WebQSP EL respectively.", "However, BLINK is trained on the entire Wikipedia, while LNN-EL needs much less data to achieve reasonable transfer performance.", "Runtime Analysis.", "We study the efficiency of LNN-EL ens across three aspects: 1) candidate & feature generation, 2) training, and 3) inference.", "Candidate & feature generation involve using the DBpedia lookup API to obtain candidates for each mention, pruning non-entity candidates (i.e., categories, disambiguation links, etc.), obtaining any missing descriptions for candidates using SPARQL endpoint, and finally generating feature vectors for each mention-candidate pair using the feature functions described in Section 4.1.", "The generated features for the train and test data are then used, respectively, to train and test the LNN-EL models.", "The number of parameters in an LNN-EL model is linearly proportional to the combined number of disjunctions and conjunctions, which typically is in the order of few 10s.", "For example, LNN-EL ens comprises of 72 parameters, which is several orders of magnitude smaller than in neural black box models such as BLINK.", "Table 7 provides the time (in seconds) taken per question for candidate & feature generation, as well as 5-run average training and inference time per epoch.", "We introduced LNN-EL, a neuro-symbolic approach for entity linking on short text.", "Our approach complements human-given rule templates through neural learning and achieves competitive performance against SotA black-box neural models, while exhibiting interpretability and transferability without requiring a large amount of labeled data.", "While LNN-EL provides an extensible framework where one can easily add and test new features in existing rule templates, currently this is done manually.", "A future direction is to automatically learn the rules with the optimal combinations of features.", "We thank Ibrahim Abdelaziz, Pavan Kapanipathi, Srinivas Ravishankar, Berthold Reinwald, Salim Roukos and anonymous reviewers for their valuable inputs and feedback." ]
[ "abstain", "abstain", "objective", "abstain", "result", "result", "result", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other" ]
[ "Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases.", "Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York .", "This paper introduces PAWS ( P araphrase A dversaries from W ord S crambling), a new dataset with 108,463 well-formed paraphrase and non-paraphrase pairs with high lexical overlap.", "Challenging pairs are generated by controlled word swapping and back translation, followed by fluency and paraphrase judgments by human raters.", "State-of-the-art models trained on existing datasets have dismal performance on PAWS ( < 40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing tasks.", "In contrast, models that do not capture non-local contextual information fail even with PAWS training examples.", "As such, PAWS provides an effective instrument for driving further progress on models that better exploit structure, context, and pairwise comparisons.", "Word order and syntactic structure have a large impact on sentence meaning.", "Even small perturbation in word order can completely change interpretation.", "Consider the following related sentences.", "Flights from New York to Florida.", "(2) Flights to Florida from NYC.", "(3) Flights from Florida to New York.", "All three have high bag-of-words (BOW) overlap.", "However, (2) is a paraphrase of (1), while (3) has a very different meaning from (1).", "Existing datasets lack non-paraphrase pairs like (1) and (3).", "The Quora Question Pairs (QQP) corpus contains 400k real world pairs, but its negative examples are drawn primarily from related questions.", "Few have high word overlap, and of the 1,000 pairs with the same BOW, only 20% are not paraphrases.", "This provides insufficient representative examples to evaluate models' performance on this problem, and there are too few examples for models to learn the importance of word order.", "Table 1 shows that models trained on QQP are inclined to mark any sentence pairs with high word overlap as paraphrases despite clear clashes in meaning.", "Models trained or evaluated with only this data may not perform well on real world tasks where such sensitivity is important.", "To address this, we introduce a workflow (out-lined in Figure 1) for generating pairs of sentences that have high word overlap, but which are balanced with respect to whether they are paraphrases or not.", "Using this process, we create PAWS ( P araphrase A dversaries from W ord S crambling), a dataset constructed from sentences in Quora and Sentence 1 Sentence 2 Gold BOW BERT BERT+PAWS (1) Can a bad person become good?", "Wikipedia.", "Examples are generated from controlled language models and back translation, and given five human ratings each in both phases.", "A final rule recombines annotated examples and balances the labels.", "Our final PAWS dataset will be released publicly with 108,463 pairs at https: //g.co/dataset/paws .", "We show that existing state-of-the-art models fail miserably on PAWS when trained on existing resources, but some perform well when given PAWS training examples.", "BERT (Devlin et al., 2018) fine-tuned on QQP achieves over 90% accuracy on QQP, but only 33% accuracy on PAWS data in the same domain.", "However, the accuracy on PAWS boosts to 85% by including 12k PAWS training pairs (without reducing QQP per-formance).", "Table 1 also shows that the new model is able to correctly classify challenging pairs.", "Annotation scale is also important: our learning curves show strong models like BERT improve with tens of thousands of training examples.", "Our experimental results also demonstrate that PAWS effectively measures sensitivity of models to word order and structure.", "Unlike BERT, a simple BOW model fails to learn from PAWS training examples, demonstrating its weakness at capturing non-local contextual information.", "Our experiments show that the gains from PAWS examples correlate with the complexity of models.", "Existing data creation techniques have focused on collecting paraphrases, e.g. from co-captions for", "images (Lin et al., 2014), tweets with shared URLs (Lan et al., 2017), subtitles (Creutz, 2018), and back translation (Iyyer et al., 2018).", "Unlike all previous work, we emphasize the collection of challenging negative examples.", "Our work closely relates to the idea of crafting adversarial examples to break NLP systems.", "Existing approaches mostly focused on adding label-preserving perturbations to inputs, but with the effect of distracting systems from correct answers.", "Example perturbation rules include adding noise to inputs (Jia and Liang, 2017; Chen et al., 2018), word replacements (Alzantot et al., 2018; Ribeiro et al., 2018), and syntactic transformation (Iyyer et al., 2018).", "A notable exception is Glockner et al. (2018): they generated both entailment and contradiction examples by replacing words with their synonyms or antonyms.", "Our work presents two main departures.", "We propose a novel method that generates challenging examples with balanced class labels and more word reordering variations than previous work.", "In addition, we release to public a large set of 108k example pairs with high-quality human labels.", "We believe the new dataset will benefit future research on both adversarial example generation and improvement of model robustness.", "In our work, we demonstrate the importance of capturing non-local contextual information in the problem of paraphrase identification.", "This relates to prior work on probing sentence representations for their linguistic properties, such as how much syntactic information is encoded in representations (Conneau et al., 2018; Tenney et al., { Flights } { from, to } { New York, Florida } LSTM LSTM LSTM LSTM LSTMNNS IN LOCATION", "(a) Tag words and phrases with part-of-speech (POS) and named entities.", "(b) Build candidate sets by grouping words and phrases with the same tag.", "(c) Under the constraints of tag sequence template and candidate sets, find sentences with high language model scores using beam search.", "2019; Ettinger et al., 2018).", "There also exists prior work that directly uses structural information in modeling (Filice et al., 2015; Liu et al., 2018).", "All these prior approaches were evaluated on existing datasets.", "In contrast, we perform studies on PAWS, a new dataset that emphasizes the importance of capturing structural information in representation learning.", "While developing new models is beyond the scope of this paper, this new dataset can facilitate research in this direction.", "We define a PAWS pair to be a pair of sentences with high bag-of-words (BOW) overlap but different word order.", "In the Quora Question Pairs corpus, 80% of such pairs are paraphrases.", "Here, we describe a method to automatically generate nontrivial and well-formed PAWS pairs from real-world text in any domain (this section), and then have them annotated by human raters (Section 4).", "Our automatic generation method is based on two ideas.", "The first swaps words to generate a sentence pair with the same BOW, controlled by a language model.", "The second uses back translation to generate paraphrases with high BOW overlap but different word order.", "These two strategies generate high-quality, diverse PAWS pairs, balanced evenly between paraphrases and non-paraphrases.", "Our first phase generates well-formed sentences by swapping words in real world text.", "Most text generation models rely on large amount of training data (Iyyer et al., 2018; Guu et al., 2018; Gupta et al., 2018; Li et al., 2018), which is unfortunately not available in our case.", "We thus propose a novel generation method based on language modeling and constrained beam search.", "The goal is to find a sentence that achieves high language model score as well as satisfying all constraints.", "High scores indicate that generated sentences are natural and well-formed, and constraints ensure generated pairs have the same BOW.", "Figure 2 illustrates the generation procedure.", "First, given an input sentence, a CRF-based part-of-speech tagger tags each word.", "We further detect person names, locations, and organizations using a named entity recognizer, and replace POS with entity tags if probability scores are above 95%.", "1 The sequence of tags of words and phrases form a template for the input.", "Our beam search method then fills in each slot of the template from left to right, scoring each state by a language model trained on one billion words (Chelba et al., 2014).", "The candidate words and phrases for each slot are drawn from the input based on its tag.", "In Figure 2, for example, the second slot must be filled with a LOCATION from two candidate New York and Florida .", "Candidates are drawn without replacement so the generated sentence and the input have exactly the same bag-of-words.", "Note that this template-based constraint is more restrictive than the BOW requirement, but we choose it because it significantly reduces the search space.", "With this constraint, the method achieves high generation quality without a large beam.", "In practice, beam size is set to 100, which produces near-optimal results in most cases.", "Let s (cid:48) be the best sentence in the beam other than the input sentence s , and LM ( ) be their log-likelihood by the language model.", "We take ( s, s (cid:48) ) as a good word-swapping pair if LM ( s (cid:48) ) LM ( s ) t .", "2 We manually pick the threshold t =3 .", "0 for a good balance between generation quality and coverage.", "Examples (1) and (2) in Table 2 are representative examples from this generation method.", "1 We pick this threshold to achieve about 95% precision.", "2 In a preliminary stage, we noticed that many pairs were simply a permutation of a list, like A and B changed to B and A.", "For the diversity of the dataset, 99% of these are pruned via hand-crafted, heuristic rules.", "Because word order impacts meaning, especially in English, the swapping method tends to produce non-paraphrases.", "Our preliminary results showed that the distribution of paraphrase to non-paraphrases from this method is highly imbal-anced (about 1:4 ratio).", "However, we seek to create a balanced dataset, so we use an additional strategy based on back translationwhich has the opposite label distribution and also produces greater diversity of paraphrases while still maintaining a high BOW overlap.", "The back translation method takes a sentence pair and label ( s 1 , s 2 , l ) as input.", "For each sentence, the topk translations are obtained from an English-German neural machine translation model (NMT); then each of these is translated back to English using another German-English NMT model, providing a resulting topk results.", "We chose German as the pivot language because it produced more word reordering variations than other languages and the translation quality was good.", "Both models have the same architecture (Wu et al., 2016) and are trained on WMT14.", "This results in k 2 back translations before deduplication.", "We chose k =5 .", "To obtain more pairs with the PAWS property, we further filter back translations by their BOW similarities to the input and their word-order inversion rates, as described below.", "We define BOW similarity as the cosine similarity between the word count vectors of a sentence pair.", "Pairs generated from the swapping strategy have score = 1 .", "0 , but here we relax the threshold to 0.9 because it brings more data diversity and higher coverage, while still generating paraphrases of the input with high quality.", "To define the word-order inversion rate, we first compute word alignments between a sentence pair in a heuristic way by assuming they are one-to-one mapping and are always monotonic.", "For example, if the first sentence has three instances of dog and On April 2 Jenkins married Ivy Vujic Jenkins married Ivy on April 2 Figure 3: An example of how to compute inversion rate.", "the second has two, we align the first two instances of dog in the same order and skip the third one.", "The inversion rate is then computed as the ratio of cross alignments.", "Figure 3 is an example pair with six alignments.", "There are 15 alignment pairs in total and 9 of them are crossed, e.g. alignments of on and married .", "The inversion rate of this example is therefore 9 / 15 = 0 .", "6 .", "We sample back translation results such that at least half of the pairs have inversion rate over 0.02; this way, the final selected pairs cover interesting transformations of both word-order changes and word replacement.", "Examples (3) and (4) in Table 2 are representative examples from back translation.", "Label Balancing Figure 1 illustrates the process of constructing the final label-balanced set based on human annotations.", "The set first includes all pairs from back translation, which are mostly paraphrases.", "For each labeled pair ( s 1 , s 2 ) from swapping and a labeled pair ( s 1 , s (cid:48) 1 ) from back translation, the set further includes the pair ( s 2 , s (cid:48) 1 ) based on the rules: (1) ( s 2 , s (cid:48) 1 ) is paraphrase if both ( s 1 , s 2 ) and ( s 1 , s (cid:48) 1 ) are paraphrases; (2) ( s 2 , s (cid:48) 1 ) is non-paraphrase if exactly one of ( s 1 , s 2 ) and ( s 1 , s (cid:48) 1 ) is non-paraphrase; (3) otherwise ( s 2 , s (cid:48) 1 ) is not included because its label is unknown.", "We also consider pairs ( s (cid:48) 2 , s 1 ) and ( s (cid:48) 2 , s (cid:48) 1 ) in the similar way if s (cid:48) 2 is a back translation of s 2 with human labels.", "Using the example generation strategies described in Section 3 combined with human paraphrase an-Quora", "notations, we create a large new dataset, PAWS that contains both paraphrase and non-paraphrase pairs that have both high bag-of-words overlap and word reordering.", "Source sentences are drawn from both the Quora Question Pairs (QQP) corpus (Iyer et al., 2017) and Wikipedia.", "3 From these, we produce two datasets, PAWSQQP and PAWS Wiki .", "We start by producing swapped examples from both QQP and Wikipedia.", "Both sources contain naturally occurring sentences covering many top-ics.", "On both corpora only about 3% of candidates are selected for further processingthe rest are filtered because there is no valid generation candidate that satisfies all swapping constraints or because the language model score of the best candidate is below the threshold.", "The remaining pairs (16,280 for QQP and 50k for Wikipedia) are passed to human review.", "Sentence correction The examples generated using both of our strategies are generally of high quality, but they still need to be checked with respect to grammar and coherence.", "Annotators evaluate each generated sentence without seeing its source sentence.", "The sentence is accepted as is, fixed, or rejected.", "Table 3 shows the number of pairs of each action on each domain.", "Most of fixes are minor grammar corrections like a apple an apple .", "Accepted and fixed sentences are then passed to the next stage for paraphrase annotation.", "Total # back translation pairs 26,897 paraphrase 25,521 non-paraphrase 1,376 Human agreement 94.8%", "Paraphrase identification Sentence pairs are presented to five annotators, each of which gives a binary judgment as to whether they are paraphrases or not.", "We choose binary judgments to make our dataset have the same label schema as the QQP corpus.", "Table 3 shows aggregated annotation statistics on both domains, including the number of paraphrase (positive) and non-paraphrase (negative) pairs and human agreement, which is the percentage ratio of agreement between each individual label and the majority vote of five labels on each example pair.", "Overall, human agreement is high on both Quora (92.0%) and Wikipedia (94.7%) and each label only takes about 24 seconds.", "As such, answers are usually straightforward to human raters.", "To ensure the data is comprised of clearly paraphrase or non-paraphrase pairs, only examples with four or five raters agreeing are kept.", "4 An example of low agreement is Why is the 20th-century music so different from the 21st music?", "v.s. Why is the 21st century music so different from the 20th century music?", ", where three out of five raters gave negative labels on this pair.", "The bottom block of Table 3 shows the final number of pairs after this filtering, and human agreement further goes up to over 95%.", "Finally, source and generated sentences are randomly flipped to mask their provenance.", "The swapping strategy generally produces non-paraphrase examples67% for QQP and 88% for Wikipedia.", "Because", "(a) the label imbalance is less pronounced for QQP and", "(b) NMT models perform poorly on Quora questions due to domain mismatch, we only apply the back translation strategy to Wikipedia pairs.", "Doing so creates 26,897 candidate example pairs after filtering.", "As before, each pair is rated by five annotators on the paraphrase identification task.", "5 Table 4 shows that 4 We exclude low agreement pairs from our experiments, but we include them in our data release for further study.", "most of the examples (94.9%) are paraphrases (as expected), with high human agreement (94.8%).", "Finally, we expand the pairs using the the rules described in Section 3.2.", "Table 5 provides counts for each split in the final PAWS datasets.", "The training portion of PAWSQQP is a subset of the QQP training set; however, PAWSQQP 's development set is a subset of both QQP's development and test sets because there are only 677 pairs.", "PAWS Wiki randomly draws 8,000 pairs for each of its development and test sets and takes the rest as its training set, with no overlap of source sentences across sets.", "Finally, any trivial pairs with identical sentences from development and test sets are removed.", "6 The final PAWSQQP has a total of 12,665 pairs (443k tokens), where 31.3% of them have positive labels (paraphrases).", "PAWS Wiki has a total of 65,401 pairs (2.8m to-kens), where 44.2% of them are paraphrases.", "Note that we have human annotations on 43k pairs generated by the word swapping method on Wikipedia, but 30k of them have no back translation counterparts and therefore they are not included in our final PAWS Wiki dataset.", "Nevertheless, they are high-quality pairs with manual labels, so we include them as an auxiliary training set (PAWS Wiki-Swap in Table 5), and empirically show its impact in Section 6.", "Unlabeled PAWS Wiki In addition to the fully labeled PAWS Wiki dataset, we also construct an unlabeled PAWS Wiki set at large scale.", "The idea is to simply treat all pairs from word swapping as non-paraphrases and all pairs from back translation as paraphrase, and construct the dataset in the same way as labeled PAWS Wiki .", "The result is a total of 656k pairs with silver labels.", "We show empirically NMT generates fluent output.", "6 Such trivial examples exist because annotators sometimes fix a swapped sentence back to its source.", "We keep such examples in the training set (about 8% of the corpus) because otherwise a trained model would actually predict low similarity scores to identical pairs.", "PAWS is designed to probe models' ability to go beyond recognizing overall sentence similarity or relatedness.", "As noted in the introduction, modelseven the best avaliabletrained on existing resources tend to classify any example with high BOW overlap as a paraphrase.", "Can any of these models learn finer structural sensitivity when provided with PAWS examples as part of their training?", "We consider six different models that cover a wide range of complexity and expressiveness: two baseline encoders and four recent advanced models that achieved state-of-the-art or strong performance on paraphrase identification.", "Table 6 summarizes the models with respect to whether they represent non-local contexts or support cross-sentential word interaction.", "The baseline models use cosine similarity with simple sentence encoders: a bag-of-words ( BOW ) encoder based on token unigram and bigram encodings and a bi-directional LSTM ( BiLSTM ) that produces a contextualized sentence encoding.", "A cosine value above .5 is taken as a paraphrase.", "ESIM .", "The Enhanced Sequential Inference Model (Chen et al., 2017) achieved competitive performance on eight sentence pair modeling tasks (Lan and Xu, 2018).", "It encodes each sentence using a BiLSTM, concatenates the encodings for each sentence in the pair, and passes them through a multi-layer perceptron (MLP) for classification.", "The additional layers allow ESIM to capture more complex sentence interaction than cosine similarity in the baseline models.", "DecAtt .", "The Decomposable Attention Model (Parikh et al., 2016) is one of the earliest models to introduce attention for paraphrase identification.", "It computes word pair interaction between two sentences and aggregates aligned vectors for final classification.", "This model achieved state-of-the-art results without explicitly modeling word order.", "In our experiments, we show the limitations of this modeling choice on PAWS pairs.", "DIIN .", "The Densely Interactive Inference Network (Gong et al., 2018) adopts DenseNet (Huang et al., 2017), a 2-dimensional convolution architecture, to extract high-order word-by-word interaction between n-gram pairs.", "This model achieved state-of-the-art performance without relying on pre-trained deep contextualized representations like ELMo (Peters et al., 2018).", "It outperformed ESIM and DecAtt models by a large margin on both paraphrase identification and natural language inference tasks.", "BERT .", "The Bidirectional Encoder Representations from Transformers (Devlin et al., 2018) recently obtained new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement).", "BERT involves pretraining a Transformer encoder (Vaswani et al., 2017) on a large corpus with over three billion words.", "This large network is then fine-tuned with just one additional output layer.", "We seek to understand how well models trained on standard datasets perform on PAWS pairs and to see which models are most able to learn from PAWS pairs.", "A strong model should improve significantly on PAWS when trained on PAWS pairs without diminishing performance on existing datasets like QQP.", "Overall, both DIIN and BERT prove remarkably able to adapt to PAWS pairs and perform well on both PAWSQQP and PAWS Wiki while the other models prove far less capable.", "We use two metrics: classification accuracy and area-under-curve (AUC) scores of precision-recall curves.", "For all classification models, 0.5 is the threshold used to compute accuracy.", "We report results on testing sets for QQP and PAWS Wiki , and on the development set for PAWSQQP (which has no test set).", "For BERT, we use the implementation provided by the authors 7 and apply their default fine-tuning configuration.", "We use the provided BERTBASE pre-trained model instead of BERTLARGE due to GPU memory limitations.", "For all other models, we use our own (re-)implementations that matched 7 https://github.com/google-research/bert reported performance on QQP.", "We use 300 dimensional GloVe embeddings (Pennington et al., 2014) to represent words and fix them during training.", "Main Results on PAWSQQP Table 7 summarizes results on the Quora domain.", "We first train models on the Quora Question Pairs (QQP) training set, and column QQP QQP shows that all models achieve over 83% accuracy on QQP.", "However, when evaluating on PAWSQQP , all models, including BERT, obtain abysmal accuracy under 40% (column QQP PAWSQQP ).", "We hypothesize the performance on PAWSQQP relies on two factors: the number of representative training examples, and the capability of models to represent complex interactions between words in each sentence and across the sentences in the pair.", "To verify that, we further train models on a combination of QQP and PAWSQQP training sets and the last two columns of Table 7 show the results on PAWSQQP .", "As expected, all models benefit from new training examples, but to different extents.", "Gains are much larger on state-of-the-art models like BERT, while the BOW model learns almost nothing from new examples.", "As a consequence, performance changes are more drastic on PAWSQQP than on QQP.", "For example, the absolute difference between BiLSTM and BERT is 4.2% on QQP, but it goes up to 27% on PAWSQQP , which is a 60% relative reduction in error.", "It is also noteworthy that adding PAWSQQP training examples has no negative impact to QQP performance at all.", "For example, a BERT model fine-tuned on QQP+PAWS QQP achieves the same 90.5% classification accuracy as training on QQP alone.", "We therefore obtain a single model that performs well on both datasets.", "Main Results on PAWS Wiki In our second experiment we train and evaluate models on our PAWS Wiki dataset.", "Table 8 presents the results.", "DIIN and BERT outperform others by a substantial margin ( > 17% accuracy gains).", "This observation gives more evidence that PAWS data effectively measures models' sensitivity to word order and syntactic structure.", "One interesting observation is that DecAtt performs as poorly as BOW on this dataset.", "This is likely due to the fact that DecAtt and BOW both consider only local context information.", "We there-M ODELSQQP QQP QQP PAWSQQP QQP+PAWS QQP PAWSQQP (Acc) (AUC) (Acc) (AUC) (Acc) (AUC) BOW 83.2 89.5 29.0 27.1 30.0 (+1.0) 27.3 (+0.2) BiLSTM 86.3 91.6 34.8 37.9 57.6 (+22.9) 52.3 (+14.5) ESIM (Chen et al., 2017) 85.3 92.8 38.9 26.9 66.5 (+27.7) 48.1 (+17.2) DecAtt (Parikh et al., 2016) 87.8 93.9 33.3 26.3 67.4 (+34.1) 51.1 (+24.9) DIIN (Gong et al., 2018) 89.2 95.2 32.8 32.4 83.8 (+51.1) 77.8 (+45.5) BERT (Devlin et al., 2018) 90.5 96.3 33.5 35.1 85.0 (+51.5) 83.1 (+48.0) Table 7: Acc uracy (%) of classification and AUC scores (%) of precision-recall curves on Quora Question Pairs ( QQP ) testing set and our PAWSQQP development set.", "fore tested an enhancement of DecAtt by replacing its word representations with encodings from a BiLSTM encoder to capture non-local context information.", "The enhanced model significantly outperforms the base, yielding an 11.5% (57.1% vs. 68.6%) absolute gain on accuracy.", "We further evaluate the impact of using silver PAWS Wiki data in pre-training, as discussed in Section 4.", "The last two columns of Table 8 show the results.", "Comparing to supervised performance, pre-training with silver data gives consistent improvements across all models except BOW and vanilla DecAtt.", "Perhaps surprisingly, adding silver data gives more than 10% absolute improvements on AUC scores for BiLSTM and ESIM, much higher than the gains on DIIN and BERT.", "train multiple models on QQP plus different number of PAWSQQP examples.", "Figure 4 plots AUC score curves of DIIN and BERT as a function of the number of PAWSQQP training examples.", "x = 0 corresponds to models trained on QQP only, and the rightmost points correspond to models trained on QQP and full PAWSQQP .", "Both models improve from 30% to 74% AUC scores with 6,000 PAWSQQP examples.", "Furthermore, neither curve reaches convergence, so they would likely still benefit from more PAWS training examples.", "Cross-domain Results The PAWS datasets cover two domains: Quora and Wikipedia.", "Here we demonstrate that a model trained on one domain also generalizes to another domain, although not as well as training on in-domain data.", "Table 9 shows that a DIIN model trained on Quora (QQP+PAWS QQP ) achieves 70.5% AUC on the Wikipedia domain.", "This is lower than training on in-domain data (92.9%), but higher than the model trained without any PAWS data (46.0%).", "We also observe similar patterns when TRAININGDATAQQP PAWSQQPPAWS Wiki (Test) (Dev) (Test) QQP (Train) 95.2 32.4 46.0 QQP+PAWS QQP 95.3 77.8 70.5 QQP+PAWS Wiki 95.3 58.5 92.9 +PAWS Wiki-Swap 95.3 70.6 93.5 QQP+PAWS QQP+Wiki 95.1 87.0 93.4 +PAWS Wiki-Swap 95.3 89.9 93.8 Table 9: AUC scores (%) when training DIIN models on different sets of training data.", "training on Wikipedia (QQP+PAWS Wiki ) and testing on PAWSQQP .", "Interestingly, using out-of-domain data also boosts in-domain performance.", "As Table 9 shows, training on both domains (QQP+PAWS QQP+Wiki ) leads to 9.2% absolute AUC gains on PAWSQQP over the model trained only on QQP+PAWS QQP .", "The auxiliary training set on Wikipedia (PAWS Wiki-Swap ) helps further.", "As Table 9 shows, adding this auxiliary training set is particularly helpful to the performance on PAWSQQP , yielding a 12.1% (70.6% vs 58.5%) gain on AUC when training on QQP+PAWS Wiki .", "On PAWS Wiki , this addition lifts the (no pre-training) DIIN model AUC from 91.1% (Table 8) to 93.8% (Table 9).", "BERT vs DIIN Both models achieve top scores on PAWS, but interestingly, the two models disagree on many pairs and are not correlated in their errors.", "For example, of 687 of BERT's mistakes on the PAWS Wiki test set, DIIN got 280 (41%) correct.", "As such, performance might improve with combinations of these two existing models.", "It is also worth noting that the DIIN model used in our experiments has only 590k model parameters, whereas BERT has over 100m.", "Furthermore, the computational cost of BERT is notably higher than DIIN.", "Given this, and the fact that DIIN is competitive with BERT (especially when pre-trained on noisy pairs, see Table 8), DIIN is likely the better choice in computationally constrained scenariosespecially those with strict latency requirements.", "Datasets are insufficient for differentiating models if they lack examples that exhibit the necessary diagnostic phenomena.", "This has led, for example, to new datasets for noun-verb ambiguity (Elkahky et al., 2018) and gender bias in coreference (Web-ster et al., 2018; Rudinger et al., 2018; Jieyu Zhao, 2018).", "Our new PAWS datasets join these efforts and provide a new resource for training and evaluating paraphrase identifiers.", "We show that including PAWS training data for state-of-the-art models dramatically improves their performance on challenging examples and makes them more robust to real world examples.", "We also demonstrate that PAWS effectively measures sensitivity of models on word order and syntactic structure.", "We would like to thank our anonymous reviewers and the Google AI Language team, especially Emily Pitler, for the insightful comments that contributed to this paper.", "Many thanks also to the Data Compute team, especially Ashwin Kakarla and Henry Jicha, for their help with the annotations References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "other", "other", "objective", "abstain", "other", "other", "other", "method", "objective", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "other", "other" ]
[ "Event extraction for the biomedical domain is more challenging than that in the general news domain since it requires broader acquisition of domain-specific knowledge and deeper understanding of complex contexts.", "To better encode contextual information and external background knowledge, we propose a novel knowledge base (KB)-driven tree-structured long short-term memory networks (Tree-LSTM) framework, incorporating two new types of features: (1) dependency structures to capture wide contexts; (2) entity properties (types and category descriptions) from external ontologies via entity linking.", "We evaluate our approach on the BioNLP shared task with Genia dataset and achieve a new state-of-the-art result.", "In addition, both quantitative and qualitative studies demonstrate the advancement of the Tree-LSTM and the external knowledge representation for biomedical event extraction.", "Biomedical information extraction is widely used to assist the biology community on knowledge acquisition and ontology construction.", "Biomedical events generally refer to a change of status, particularly on proteins or genes.", "The goal of event extraction is to identify triggers and their arguments from biomedical text, and then assign an event type to each trigger and a role to each argument.", "For example, in the sentence shown in Figure 1, it includes a gene expression and a positive regulation event mention, both triggered by the word transduced .", "Tax is the Theme argument of the gene expression event.", "An event could also serve as an argument of another event, leading to a nested structure.", "For instance, the gene expression event triggered by transduced is also a Theme argument of the positive regulation event as shown in Figure", "1. Earlier studies on biomedical event extraction rely on kernel classification methods like the support vector machines (SVMs) (Bjorne and Salakoski, 2011; Venugopal et al., 2014) using hand-crafted features, which require high engineering effort and domain-specific knowledge.", "Recent distributional representation based approaches (Rao et al., 2017; Bjorne and Salakoski, 2018) explore deep neural networks which only require distributed semantic features.", "However, different from event extraction in the general news domain, biomedical event extraction requires broad acquisition of domain-specific knowledge and deep understanding of complex contexts.", "For example, in Genia event extracton of BioNLP shared task 2011 (Kim et al., 2011), about 80% of entity mentions are abbreviations of genes, proteins and diseases while more than 36% of event triggers and arguments are separated with more than 10 words.", "In order to efficiently capture indicative information from broad contexts, we first adopt tree structure based long short-term memory (Tree-LSTM) networks.", "Compared to the linear chain structured LSTM, the Tree-LSTM takes tree-structured network topology into consideration.", "As shown in the top frame of Figure 1, Tree-LSTM takes the dependency tree structure of each sentence as input and gradually incorporates the information from the whole subtree into each node.", "Dependency tree structure can connect semantically related concepts, and thus shorten the distance between a trigger and its arguments significantly.", "For instance, in the following sentence ... , which binds to the enhancer A located in the promoter of the mouse MHC class I gene H-2Kb , ... , when determining the trigger type of binds , we need to carefully select its contextual words, such as H-2Kb , which indicates the object of binds .", "However, binds and H-2Kb are sepa-Characterization of peripheral blood T-lymphocytes transduced with HTLV-I Tax mutants with different trans activating phenotypes .", "rated with 16 words which is difficult for a chain-structured LSTM to capture their long distance dependency, while within dependency tree structure, their distance is significantly shortened to 7.", "Moreover, to better capture domain-specific knowledge, we further propose to leverage the external knowledge bases (KBs) to acquire properties of all the biomedical entities.", "The KB properties are extremely beneficial for our model to learn patterns more explicitly.", "Take the entity Tax in Figure 1 as an example, it's a protein often involved in the biological process of positive regulation of transcription referred to Gene Ontology (Ashburner et al., 2000).", "This function description provides crucial clues to determine the type of transduced as positive regulation .", "Therefore, to capture such knowledge from external KBs, for each entity, we first learn a KB concept embedding from its properties, and then automatically incorporate the KB representation into its Tree-LSTM hidden state with a gate function.", "Our contributions are twofold: First, to the best of our knowledge, it's the first time to adopt Tree-LSTM for biomedical event extraction to effectively capture the wide contexts.", "Second, we further incorporate external knowledge from domain-specific KBs into the Tree-LSTM, which yields state-of-the-art performance on Genia event extraction shared task.", "In this section, we present our KB-driven Tree-LSTM approach for biomedical event extraction.", "We first introduce the Tree-LSTM framework, and then describe the construction of KB concept embedding for each entity.", "Finally we incorporate the KB concept embedding into a Tree-LSTM and apply it for event trigger and argument extraction.", "The Tree-LSTM (Tai et al., 2015) is a variation of LSTM (Hochreiter and Schmidhuber, 1997) to a tree-structured network topology.", "It shows improvement in representing sentence semantic meaning compared to sequential LSTM such as Bidirectional LSTM (BiLSTM) (Graves et al., 2013).", "The main difference between sequential LSTM and Tree-LSTM is, at each time step, the former calculates its hidden state from the input at the current time step and the hidden state from previous step, while Tree-LSTM computes its hidden state from the input token and the hidden states of all its children nodes from the tree structure.", "A Tree-LSTM reduces to sequential LSTM when each node in the tree only has one child.", "Figure 2 (A) shows a Tree-LSTM unit.", "In order to obtain the hidden state h j of an input token x j , the unit calculates all of its children hidden states ( h j 1 , h j 2 ) through depth-first traversal.", "For the biomedical event extraction, we mainly explore the Gene Ontology as our external KB since it provides detailed descriptions for each gene and gene product attributes across all species.", "It consists of two types of information: (1) the gene ontology (GO) defines all the gene functions, relations between these gene functions, and aspects used to describe the gene functions, including molecular function, cellular component and biological process.", "(2) the gene product annotations (GO Anno) provide all entity related attributes, such as the full entity name, entity type, as well as the gene functions it is related to.", "For example, in Figure 1, given the entity tax , from the gene product annotations, we can get its full entity name as tax protein which is a type of proteins and it's related to a function about biological process .", "From the gene ontology, we can further determine the specific function that tax is related to positive regulation of transcription in terms of biological process aspect.", "In order to leverage the external KB information, we first apply QuickGO API (Binns et al., 2009) to link each entity mention to the Gene Ontology and retrieve all the KB annotations.", "For each entity, we carefully select two types of properties which are beneficial for event extraction task: the entity type (e.g., protein for tax ) and the gene ontology function it is related to (e.g., positive regulation of transcription for tax ).", "The entity type can facilitate the explicit pattern learning for argument role labeling, for example, the gene expression event pattern (Theme: Protein, Trigger: transduced) is more popular than (Theme: Tax, Trigger: transduced) in Figure", "1. The gene ontology function can provide implicit clues to determine the trigger type as aforementioned in Section", "1. As shown in Figure 1, we assign a word embedding which pretrained on PubMed and PMC texts (Moen and Ananiadou, 2013) to represent each entity type.", "For each gene ontology function which is usually a long phrase, we use a state-of-the-art sentence embedding approach (Conneau et al., 2017) to automatically learn a vector representation.", "We then concatenate these two types of KB property representations as the final KB concept embedding.", "After obtaining the KB concept embeddings, we further incorporate them into the Tree-LSTM to leverage the domain-specific knowledge.", "Given a sentence, for example the sentence shown in Figure 3, we first perform the dependency parsing with the Stanford dependency parser (Chen and Manning) and obtain a dependency tree structure.", "For each node j in the tree structure, C ( j ) is the set of children nodes of node j and k is the KB concept embedding of node k .", "We set k to 0 if node k is not a biomedical entity.", "(cid:101) j denotes the sum of the KB concept embeddings of j 's children nodes and (cid:101) h j is the sum of the hidden states of j 's children nodes: (cid:101) h j = (cid:88) k C ( j ) h k (cid:101) j = (cid:88) k C ( j ) k where h k is the hidden state of node k .", "Then we incorporate the KB concept embeddings into the input, forget, and output gates of the Tree-LSTM: i j = ( W i [ x j , (cid:101) h j , (cid:101) j ] + b i ) f jk = ( W f [ x j , h k , (cid:101) k ] + b f ) o j = ( W o [ x j , (cid:101) h j , (cid:101) j ] + b o ) where i j and o j are the input gate and the output gate for node j respectively.", "f jk is the forget gate for node j in terms of its child node k .", "W i , W f , and W o are learnable parameters, b i , b f and b o are bias terms.", "Thus, for each node j , the input gate gathers all KB information from its children nodes, and the output gate balances the meaningful information from its local contexts and the KB concept embeddings of its children nodes.", "Besides adding the KB concept embeddings into the three gates to select useful KB formation implicitly, similar to Ma et al. (2018), we also introduce a knowledge specific output gate g j to explicitly incorporate knowledge information into each node's hidden state.", "While different from Ma et al. (2018) which only considers the knowledge concept embedding of each node itself, we use the sum of the KB concept embeddings of the whole subtree instead: g j = ( W g [ x j , (cid:101) h j , (cid:101) j ] + b g ) where W g is a weight matrix to be learned, b g is the bias term.", "As demonstrated in Figure 2 (B), we eventually combine the implicit way of incorporating KB information into the input, output and forget gates and an explicit way of directly incorporating the KB information into a node's hidden state: (cid:101) c j = tanh( W c [ x j , (cid:101) h j ] + b c ) c j = (cid:88) k C ( j ) f jk (cid:12) c k + i j (cid:12) (cid:101) c j h j = o j (cid:12) tanh( c j ) + g j (cid:12) tanh( W (cid:101) j ) where c j is the memory cell, W c and W are weight matrices to be learned.", "After getting the hidden state h j of each node j , we use a softmax classifier to predict a label for each node, and optimize the parameters by minimizing a negative log-likelihood loss.", "After detecting all candidate triggers, we further extract arguments for each trigger.", "The Genia event extraction shared task provides the annotations of all entity mentions.", "Thus, for each trigger, we use all the entity mentions that occur in the same sentence as its candidate arguments, and then assign an argument role or None .", "Different from trigger extraction, we use the shortest dependency path (SDP) within the dependency tree structure instead of the surface contexts to better capture the dependency between the trigger and each argument.", "Taking the sentence in Figure 3 as an example, given a trigger transcription and a candidate argument OBF-1 , we first perform dependency parsing and extract the shortest dependency path between transcription and OBF-1 with the Dijkstra's algorithm (Johnson, 1973) and obtain the shortest dependency path transcription of genes OBF-1 .", "We use the same KB-driven Tree-LSTM architecture as introduced in Section 2.3 to encode each node into a new hidden state representation.", "We use the hidden state of the root node h 0 as the overall vector representation of the whole dependency path.", "Finally, we feed the concatenation of h 0 with the hidden state of the trigger and argument as input to another softmax to predict the argument role.", "We also optimize the model by minimizing a negative log-likelihood loss.", "The Genia Event Extraction task is the main task in the BioNLP Shared Task series (Kim et al., 2009, 2011; Nedellec et al., 2013).", "The Genia task defines 9 fine-grained event types as shown in Table", "1. Note that a Binding event may take more than one protein as its Theme arguments.", "A Regulation event may take one protein or event as its Theme argument and also optionally take one protein or event as its Cause argument.", "A Regulation event taking an event as its argument will lead to a nested structure.", "37.2% nested events are observed in Genia 2011 corpus (Bjorne and Salakoski, 2011).", "There are 6.0% inter-sentence events while our model only focuses on sentence-level event extraction.", "We apply our KB-driven Tree-LSTM model on Genia 2011 data set.", "The entities in Genia data set are manually annotated and given as part of the input.", "We evaluate our results on the test set using the official online tool provided by the Genia task organizers.", "1 Following previous studies (Bj orne and Salakoski, 2011; Venugopal et al., 2014; Rao et al., 2017; Bjorne and Salakoski, 2018), we report scores obtained by the approximate span (al-lowing trigger spans to differ from gold spans by single words).", "As we only focus on matching core arguments, we use recursive matching criterion for evaluation which not requires matching of additional arguments for events referred from other events (Kim et al., 2011).", "We use the word embedding pretrained on PubMed and PMC texts (Moen and Ananiadou, 1 http://bionlp-st.dbcls.jp/GE/2011/eval-test/ 2013) for word and type embeddings.", "The hyper-parameters are tuned on the development set and listed in Table", "2. Word representations are updated during training with an initial learning rate of 0.1.", "Table 3 shows the final event extraction results of applying our KB-driven Tree-LSTM model on Genia 2011 dataset with the comparison of only using Tree-LSTM and a standard BiLSTM model.", "Tree-LSTM outperforms the BiLSTM baseline which indicates the power of Tree-LSTM in dealing with long-distance dependency structure in biomedical literature.", "By incorporating external KB information, our approach achieves about 2.12% F-score gain comparing to Tree-LSTM, which demonstrates the effectiveness of the KB properties for biomedical event extraction.", "We will show detailed analysis in Section 3.4.", "Table 4 presents the previous event extraction results from the BioNLP shared task using the same corpus.", "Our approach outperforms all previous methods.", "Among them, the systems TEES (Bjorne and Salakoski, 2011), EventMine-CR (Miwa et al., 2012) and Stacked Generalization (Majumder et al., 2016) are based on SVMs with well designed features.", "FAUST (Riedel and McCallum, 2011) and BioMLN (Venugopal et al., 2014) use jointed inference models.", "Bjorne and Salakoski (2018) adopts a convolutional neural networks (CNNs) with abundant features derived from TEES system.", "In our work, instead of using high-dimensional features with manual effort as in these previous models, our approach only requires pretrained distributed word representations as input features.", "We notice that our approach achieves high scores on Simple event types but get relatively low scores on Binding event and Regulation event types.", "We analyze the results and find that Bind-System Event Type Rec Prec F1 KB-drivenTree-LSTM Gene expression 74.35 87.24 80.28 Transcription 69.54 82.31 75.39 Protein catabolism 46.67 87.50 60.87 Phosphorylation 81.62 87.28 84.36 Localization 59.69 80.28 68.47 Simple total 72.62 85.95 78.73 Binding 37.68 53.16 44.10 Regulation 36.62 53.61 43.52 Positive regulation 41.37 57.90 48.26 Negative regulation 46.06 52.39 49.02 Regulation total 41.73 55.73 47.72 Event total 52.14 67.01 58.65 Tree-LSTM Simple total 71.22 83.41 76.83 Binding 34.83 48.72 40.62 Regulation total 39.78 53.54 45.64 Event total 50.28 64.56 56.53 BiLSTM Simple total 68.09 78.75 73.03 Binding 38.49 43.05 40.65 Regulation total 37.64 53.81 44.30 Event total 48.44 62.18 54.46 Table 3: Precision (Prec), recall (Rec) and F-score (F1) results achieved by the KB-driven Tree-LSTM model on the test set of BioNLP Genia 2011, evaluated on approximate span and recursive criteria.", "ing event extraction is more challenging since it usually has multiple arguments.", "For example, Figure 4 shows two sentences which are chosen from the output of the development data set.", "There are two Binding event mentions in the first sentence: E1 (Trigger: interacting, Type: Binding, Theme: RUNX1, Theme2: p3000) and E2 (Trigger: binding, Type: Binding, Theme: CREB).", "Our model mistakenly extracts CREB as a Theme of E1 since CREB is highly related to protein p300 in the dependency tree structure.", "Regulation events are considered as the most challenging event type because they usually have an optional Cause argument and are involved in nested structures, which are not handled well by most of current event extraction approaches.", "In addition, intuitively, most trigger words are verbs or nouns.", "We rank all the trigger words in the train-... the EBNA-1 gene in infected thymocytes was transcribed from the Fp promoter, rather than from the Cp / Wp promoter ... [protein] E1:[transcription] E2:[positive_regulation] E3:[positive_regulation] E1:Theme E3:Theme RUNX1 alone, or together with its interacting partners p300 and CREB binding protein, ... [protein] E1:[binding] [protein] [protein] E2:[binding] E1:Theme1 E1:Theme2 E2:Theme E2:Theme Figure 4: Case study on binding event and regulation event types.", "ing data set according to their frequency, and find that most of spurious errors for Regulation event trigger extraction occur when the trigger words are prepositions or conjunctions.", "For instance, in Figure 4, the second sentence contains two positive Regulation events triggered by a preposition from and a conjunction rather than .", "Such function words are rarely annotated as triggers and our KB-aware Tree-LSTM cannot well collect meaningful contexts from their subtrees.", "As shown in Table 3, we achieve about 3.5% and 2.1% F1 score gain on Binding and Regulation event types by leveraging external KB information into the Tree-LSTM.", "In order to show the effect of KB concept embeddings, we visualize the probabilities of word transcription to be predicted for each event type.", "As Figure 5 shows, by adding KB concept embeddings, the function description positive regulation of transcription, DNA-templated provided by the biomedical entity OBF-1 significantly enhances the probability of transcription being predicted to a Transcription event type.", "Similarly, Figure 6 visualizes the probabilities of the E1 event mention (Trigger: trans-transduced [gene_expression]/[positive_regulation] with mutants Tax [protein] E1 : (Type:gene expression, Theme: Tax , Trigger: transduced) E2 : (Type:positive regulation, Theme: E1 , Trigger: transduced) E2 : (Theme: None ) E2 : (Theme: E1 ) Theme Theme Figure 6: Visualization of the effect of KB concept embedding on argument role labeling for a Positive Regulation event triggered by transduced and a Gene Expression event E1 (Theme: Tax, Trigger: transduced).", "duced, Type: gene expression, Theme: Tax) to be predicted as an argument of E2 event mention (Trigger: transduced, Type: positive regulation, Theme: E1).", "We can see that, without using KB information, the Tree-LSTM mistakenly predict the argument role of E1 as None .", "In contrast, by incorporating KB concept embeddings, especially the information from the function description positive regulation of transcription, DNA-templated for Tax , our approach successfully promotes the probability of E1 being predicted as the Theme of E2.", "As a crucial task in information extraction, event extraction has gained a lot of interest.", "In general news domain, previous work on event extraction can be divided into two main categories.", "The first is feature-based methods which mainly focus on feature design, leveraging local features (Grishman et al., 2005; Ahn, 2006) and global features (Ji and Grishman, 2008; Liao and Gr-ishman, 2011; Huang and Riloff, 2012) to improve the performance.", "Some studies proposed joint models to overcome the error propagation problem (Poon and Vanderwende, 2010; Riedel et al., 2009; Li et al., 2013; Venugopal et al., 2014; Li et al., 2014).", "The second category includes distributional representation based methods which have been applied into event extraction extensively.", "Most of these approaches are based on the standard Convolutional Neural Networks (CNNs) (Chen et al., 2015; Nguyen and Grishman, 2015, 2016), Recurrent Neural Networks (RNNs) (Nguyen et al., 2016), generative adversarial networks (Hong et al., 2018), zero-shot learning (Huang et al., 2017) and advanced attention mechanisms (Liu et al., 2018b; Chen et al., 2018).", "Our work is also related to the studies which leverage the external knowledge base for information extraction.", "Liu et al. (2017) takes advantage of external resources, such as FrameNet, to label events while Chen et al. (2017) adopts distance supervision to augment the training data.", "Liu et al. (2018a) develops an attention-based model for event extraction.", "What's more, shortest dependency path is broadly explored for information extraction, especially for relation classification (Xu et al., 2015; Miwa and Bansal, 2016) and shows promising benefits.", "Biomedical event extraction task part of the BioNLP Shared Task series (Kim et al., 2009, 2011; Nedellec et al., 2013).", "Previous studies mainly explore local and global features with SVM model (Miwa et al., 2010, 2012; Bjorne and Salakoski, 2013; Majumder et al., 2016).", "Riedel and McCallum (2011) develop a joint model with dual decomposition.", "Cohen et al. (2009), Kilicoglu and Bergler (2011) and Bui et al. (2013) develop rule-based methods and achieve high precision.", "Venugopal et al. (2014) leverage Markov logic networks for joint inference.", "Rao et al. (2017) uses the Abstract Meaning Representations (AMR) to extract events based on the assump-tion that an event structure can be derived from an AMR subgraph.", "Recently, some representation-based models (Jagannatha and Yu, 2016; Rao et al., 2017; Bjorne and Salakoski, 2018) have been proposed while most of them adopt the widely used CNNs and RNNs with features derived from the biomedical text.", "Lim et al. (2018) implements a binary Tree-LSTM architecture for biomedical relation extraction.", "Compared with these methods, our approach only requires pretrained distributed word representations as input features and incorporates meaningful KB information into a Tree-LSTM.", "In this paper, we show the effectiveness of using a KB-driven tree-structured LSTM for event extraction in biomedical domain.", "The Tree-LSTM can efficiently capture semantically related concepts for each node within the tree structure.", "By leveraging the external KB concept properties including the entity type and the function description, our approach is able to perform deep understanding of domain-specific expressions and connections.", "Without using manually designed high-dimensional features, our approach significantly outperforms all previous methods.", "In the future, we plan to explore a broader range of properties from KB to facilitate biomedical information extraction tasks.", "This work was supported by the U.S. NSF No. 1741634, Air Force No.", "FA8650-17-C-7715 and ARL NS-CTA No.", "W911NF-09-2-0053.", "The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on." ]
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "result", "abstain", "method", "result", "objective", "other", "other", "other", "other", "other" ]
[ "Dongyeop Kang [email protected]", "Eduard Hovy [email protected] Carnegie Mellon University", "Abstract Every natural text is written in some style.", "Style is formed by a complex combination of different stylistic factors, including formality markers, emotions, metaphors, etc.", "One cannot form a complete understanding of a text without considering these factors.", "The factors combine and co-vary in complex ways to form styles.", "Studying the nature of the co-varying combinations sheds light on stylistic language in general, sometimes called cross-style language understanding .", "This paper provides the benchmark corpus ( XSLUE ) that combines existing datasets and collects a new one for sentence-level cross-style language understanding and evaluation.", "The benchmark contains text in 15 different styles under the proposed four theoretical groupings: figurative, personal, affective, and interpersonal groups.", "For valid evaluation, we collect an additional diagnostic set by annotating all 15 styles on the same text.", "Using XSLUE , we propose three interesting cross-style applications in classification, correlation, and generation.", "First, our proposed cross-style classifier trained with multiple styles together helps improve overall classification performance against individually-trained style classifiers.", "Second, our study shows that some styles are highly dependent on each other in human-written text.", "Finally, we find that combinations of some contradictive styles likely generate stylistically less appropriate text.", "We believe our benchmark and case studies help explore interesting future directions for cross-style research.", "The preprocessed datasets and code are publicly available.", "1 1 Introduction People often use style as a strategic choice for their personal or social goals in communication (Hovy, This work was done while DK was at CMU. 1 https://github.com/dykang/xslue 1987; Silverstein, 2003; Jaffe et al., 2009; Kang, 2020).", "Some stylistic choices implicitly reflect the author's characteristics, like personality, demographic traits (Kang et al., 2019), and emotions (Buechel and Hahn, 2017), whereas others are explicitly controlled by the author's choices for their social goals like using polite language, for better relationship with the elder (Danescu et al., 2013).", "In this work, we broadly call each individual linguistic phenomena as one specific type of style .", "Style is not a single variable, but multiple variables have their own degrees of freedom and they co-vary together.", "Imagine an orchestra, as a metaphor of style.", "What we hear from the orchestra is the harmonized sound of complex combinations of individual instruments played.", "A conductor, on top of it, controls their combinatory choices among them, such as tempo or score.", "Some instruments under the same category, such as violin and cello for bowed string type, make a similar pattern of sound.", "Similarly, text reflects complex combination of multiple styles.", "Each has its own lexical and syntactic features and some are dependent on each other.", "Consistent combination of them by the author will produce stylistically appropriate text.", "To the best of our knowledge, only a few recent works have studied style inter-dependencies in a very limited range such across demographical traits (Nguyen et al., 2014; Preotiuc-Pietro and Ungar, 2018), across emotions (Warriner et al., 2013), across lexical styles (Brooke and Hirst, 2013), across genres (Passonneau et al., 2014), or between metaphor and emotion (Dankers et al., 2019; Mohammad et al., 2016).", "Unlike the prior works, this work proposes the first comprehensive understanding of cross-stylistic language variation, particularly focusing on how different styles co-vary together in written text, which styles are dependent on each other, and how they are systematically composed to generate text.", "Our work has following contributions: Aggregate 15 different styles and 23 sentence-level classification tasks (3).", "Based on their social goals, the styles are categorized into four groups (Table 1): figurative, affective, personal and interpersonal.", "Collect a cross-style set by annotating 15 styles on the same text for valid evaluation of cross-stylistic variation (3.3).", "Study cross-style variations in classification (4), correlation (5), and generation (6): our jointly trained classifier on multiple styles shows better performance than individually-trained classifiers.", "our correlation study finds statistically significant style inter-dependencies (e.g., impoliteness and offense) in written text.", "our conditional stylistic generator shows that better style classifier enables stylistically better generation.", "Also, some styles (e.g., impoliteness and positive sentiment) are condtradictive in generation.", "Definition of style.", "People may have different definitions in what they call style'.", "Several sociolinguistic theories on styles have been developed focusing on their inter-personal perspectives, such as Halliday's systemic functional linguistics (Halli-day, 2006) or Biber's theory on register, genre, and style (Biber and Conrad, 2019).", "In sociolinguistics, indexicality (Silverstein, 2003; Coupland, 2007; Johnstone, 2010) is the phenomenon where a sign points to some object, but only in the context in which it occurs.", "Nonrefer-ential indexicalities include the speaker's gender, affect (Besnier, 1990), power, solidarity (Brown et al., 1960), social class, and identity (Ochs, 1990).", "Building on Silverstein's notion of indexical order, Eckert (2008) built the notion that linguistic variables index a social group, which leads to the indexing of certain traits stereotypically associated with members of that group.", "Eckert (2000, 2019) argued that style change creates a new persona, impacting a social landscape and presented the expression of social meaning as a continuum of decreasing reference and increasing performativity.", "Despite the extensive theories, very little is known on extra-dependency across multiple styles.", "In this work, we empirically show evidence of extra-linguistic variations of styles, like a formal-Groups Styles INTERPERSONAL Formality, Politeness FIGURATIVE Humor, Sarcasm, Metaphor AFFECTIVE Emotion, Offense, Romance, Sentiment PERSONAL Age, Ethnicity, Gender, Education level, Country, Political view Table 1: Style grouping in XSLUE .", "ity, politeness, etc, but limited to styles only if we can obtain publicly available resources for computing .", "We call the individual phenomena a specific type of style in this work.", "We admit that there are many other kinds of styles not covered in this work, such as inter-linguistic variables in grammars and phonology, or high-level style variations like individual's writing style or genres.", "Cross-style analysis.", "Some recent works have provided empirical evidence of style interdependencies but in a very limited range: Warriner et al. (2013) analyzed emotional norms and their correlation in lexical features of text.", "Chhaya et al. (2018) studied a correlation of formality, frustration, and politeness but on small samples (i.e., 960 emails).", "Nguyen et al. (2014) focused on correlation across demographic information (e.g., gender, age) and with some other factors such as emotions (Preotiuc-Pietro and Ungar, 2018).", "Dankers et al. (2019); Mohammad et al. (2016) studied the interplay of metaphor and emotion in text.", "Liu et al. (2010) studied sarcasm detection using sentiment as a sub-problem.", "Brooke and Hirst (2013) conducted a topical analysis of six styles: literary, abstract, objective, colloquial, concrete, and subjective, on different genres of text.", "Passonneau et al. (2014) conducted a detailed analysis of Biber's genres and relationship between genres.", "In order to conduct a comprehensive style research, one needs to collect a collection of different style datasets.", "We survey recent papers related to style research published in ACL venues and choose 15 widely-used styles that have publicly available annotated resources and feasible size of training dataset (Table 1).", "We plan to gradually increase the coverage of style kinds and make the benchmark more comprehensive in the future.", "We follow the theoretical style grouping criteria based on their social goals in Kang (2020) that categorizes styles into four groups (Table 1): PERSONAL , INTERPERSONAL , FIGURATIVE , and AFFECTIVE group, where each group has its own social goals in communication.", "This grouping will be used in our case studies as a basic framework to detect their dependencies.", "For each style in the group, we pre-process existing style datasets or collect our own if there is no publicly available one (i.e., ShortRomance ).", "We do not include datasets with small samples (e.g., 1K) due to its infeasibility of training a large model.", "We also limit our dataset to classify a single sentence, although there exists other types of datasets (e.g., document-level style classifications, classifying a sentence with respect to context given) which are out of scope of this work.", "ratios for the train, valid, and test set, respectively.", "If a dataset has only positive samples ( ShortHumor , ShortJoke , ShortRomance ), we do negative sampling from literal text as in Khodak et al. (2017).", "We include the detailed pre-processing steps in Appendix A.", "The individual datasets, however, have variations in domains (e.g., web, dialogue, tweets), label distributions, and data sizes (See domain, label, and #S columns in Table 2).", "Evaluating a system with these individual datasets' test set is not an appropriate way to validate how multiple styles are used together in a mixed way, because samples from individual datasets are annotated only when a single style is considered.", "To help researchers evaluate their systems in the cross-style setting, we collect an additional diagnostic set, called cross-set by annotating labels of 15 styles together on the same text from crowd workers.", "We collect total 500 sample texts from Sentiment 0.81 Sarcasm 0.38 Politeness 0.75 Country 0.38 Formality 0.48 Humor 0.37 Gender 0.47 Education level 0.36 Emotion: Valence 0.43 Age 0.35 Emotion 0.42 Political view 0.32 Romance 0.42 Metaphor 0.29 Offense 0.41 Emotion: Arousal 0.26 Ethnicity 0.41 Emotion: Dominance 0.24 Table 3: Annotator's agreement (Krippendorff's al-pha).", "two different sources: the first half is randomly chosen from test sets among the 15 style datasets in balance, and the second half is chosen from random tweets that have high variations across style prediction scores using our pre-trained style classifiers.", "Each sample text is annotated by five annotators, and the final label for each style is decided via majority voting over the five annotations.", "In case they are tied or all different from each other for multiple labels, we don't include them.", "We also include Don't Know option for personal styles and Neutral option for two opposing binary styles (e.g., sentiment, formality).", "The detailed annotation schemes are in Appendix B.", "Table 3 shows annotator's agreement on the cross-set.", "We find that annotator's agreement varies a lot depending on style: sentiment and politeness with good agreement, and formality, emotion, and romance with moderate agreement.", "However, personal styles (e.g., age, education level, and political view), metaphor, and emotions (e.g., arousal and dominance), show fair agreements, indicating how difficult and subjective styles they are.", "Most datasets in XSLUE except for Romance are collected from others' work.", "Following the data statement (Bender and Friedman, 2018), we cite and introduce individual datasets with their data statistics in Table", "2. Our main contribution is to make every dataset to have the same pre-processed format, and distribute them with accompanying code for better reproducibility and accessibility.", "Besides this engineering effort, XSLUE 's main goal is to invite NLP researchers to the field of cross-style understanding and provide them a valid set of evaluation for further exploration.", "As the first step, using XSLUE , we study cross-style language variation in various applications such as classification (4), correlation (5), and generation (6).", "We study how modeling multiple styles together, instead of modeling them individually, can be effective in style classification task.", "Particularly, the annotated cross-set in XSLUE will be used as a part of evaluation for cross-style classification.", "Models.", "We compare two types of models: single and cross model.", "The single model is trained on individual style dataset separately, whereas the cross model is trained on shuffled set of every dataset together.", "For single model, we use various baseline models, such as majority classifier by choosing the majority label in training data, Bidirectional LSTM (biLSTM) (Hochreiter and Schmidhuber, 1997) with GloVe embeddings (Pennington et al., 2014), and variants of fine-tuned transformers; Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019), robustly optimized BERT (RoBERTa) (Liu et al., 2019), and text-to-text transformer (T5) (Raf-fel et al., 2019).", "2 For cross model, we propose an encoder-decoder based model that learns cross-style patterns with the shared internal representation across styles (Fig-ure 1).", "It encodes different styles of input as text (e.g., STYLE: formality TEXT: would you please..) and decodes output label as text (e.g., formal).", "We use the pretrained encoder-decoder model from T5 (Raffel et al., 2019), and finetune it using the combined, shuffled datasets in XSLUE .", "Due to the nature of encoder-decoder model, we can take any training instances for classification tasks into the same text-to-text format.", "We also trained the single model (e.g., RoBERTa) on the combined datasets via a multi-task setup (i.e., 15 different heads), but showing less significant result.", "2 For a fair comparison, we restrict size of the pre-trained transformer models to base model only, although additional improvement from the larger models is possible. Evaluation set Individual-set evaluation Cross-set evaluation (3.3) Models single cross single cross Style Dataset Majority biLSTM BERT RoBERTa T5 Ours BERT T5 Ours INTER . Formality GYAFC 30.2 76.4 89.4 89.3 89.4 89.9 37.3 33.8 35.0 Politeness SPolite 36.2 61.8 68.9 70.4 71.6 71.2 60.0 62.1 64.4 FIGURATIVE Humor ShortHumor 33.3 88.6 97.3 97.5 97.4 98.9 -ShortJoke 33.3 89.1 98.4 98.2 98.5 98.6 50.5 47.2 47.9 Sarcasm SARC 33.3 63.0 71.5 73.1 72.4 72.8 41.4 37.7 37.4 SARC _pol 33.3 61.3 73.1 74.5 73.7 74.4 -Metaphor VUA 41.1 68.9 78.6 81.4 78.9 78.0 49.8 49.0 49.1 TroFi 36.4 73.9 77.1 74.8 76.7 76.2 -AFFECTIVE Emotion EmoBank Valence 32.4 78.5 81.2 82.8 80.8 82.5 -EmoBank Arousal 34.2 49.4 58.7 62.3 58.2 61.5 -EmoBank Domin . 31.3 39.5 43.6 48.3 42.9 46.4 -DailyDialog 12.8 27.6 48.7 46.9 49.2 49.0 22.4 26.9 33.3 Offense HateOffens 28.5 68.2 91.9 92.4 91.7 93.4 34.4 36.9 45.9 Romance ShortRomance 33.3 90.6 99.0 100.0 98.0 99.0 53.9 55.2 48.2 Sentiment SentiBank 33.3 82.8 96.9 97.4 97.0 96.6 80.4 79.7 84.6 PERSONAL Gender PASTEL 25.7 45.5 47.7 47.9 47.3 50.5 29.2 32.4 42.3 Age PASTEL 7.3 15.2 23.0 21.7 21.3 23.3 36.1 27.0 28.1 Country PASTEL 49.2 49.3 54.5 49.3 51.8 58.4 49.4 46.7 48.7 Political view PASTEL 20.0 33.5 46.1 44.6 44.3 46.7 27.7 20.6 21.3 Education PASTEL 4.7 15.0 24.6 22.4 21.4 27.3 10.3 11.4 15.7 Ethnicity PASTEL 8.5 17.6 24.4 22.5 22.4 23.8 10.8 8.8 9.1 Avearge 26.8 56.9 64.8 64.9 64.2 65.9 39.6 38.4 40.7 Table 4: Individual style (left) and cross style (right) classification in XSLUE . Every score is averaged over ten runs of experiments with different random seeds. For cross-style classification, we choose a single dataset per style, which has larger training data than the others. Otherwise, we leave it as a blank (-). The detailed hyper-parameters used in our model training are in Appendix C. Tasks. Our evaluation has two tasks: individual-set evaluation for evaluating a classifier on individual dataset's test set (left columns in Table 4) and cross-set evaluation for evaluating a classifier on the annotated cross-set collected in 3.3 (right columns in Table 4). Due to the label imbalance of datasets, we measure f-score (F1) for classification tasks and Pearson-Spearman correlation for regression tasks (i.e., EmoBank ). For multi-labels, all scores are macro-averaged on each label. Results. In the individual-set evaluation, compared to the biLSTM classifier, the fine-tuned transformers show significant improvements (+8% points F1) on average, although the different transformer models have similar F1 scores. Our proposed cross model, significantly outperforms the single model, by +1.7 percentage points overall F1 score, showing the benefit of learning multiple styles together. Particularly, the cross model significantly improves F1 scores on personal styles such as gender, age, and education level, possibly because the personal styles may be beneficial from detecting other styles. Among the styles, all personal styles, figurative styles (e.g., sarcasm and metaphor), and emotions are the most difficult styles to predict, which is similarly observed in the annotator's agreement in Table", "3. In cross-set evaluation, the overall performance significantly drops against the individual set evaluation, like from 65.9% to 40.7%, showing why it is important to have these annotated diagnostic set for valid evaluation of cross-style variation.", "Again, the cross-style model achieves +1.2% gain than the single models.", "Figure 2 shows F1 improvement by the cross model against the single model BERT.", "Most styles obtain performance gain from the cross-style modeling, whereas not in the two metaphor style datasets (VUA, TroFi) and ethnicity style.", "This is possibly because metaphor tasks prepend the target metaphor verb to the input text, which is different from other task setups.", "Thus, learning them PASTEL(Country) EmoBank(Dominance) EmoBank(Arousal) PASTEL(Gender) PASTEL(Education) StanfPolite ShortHumor HateOffens SARC_pol SARC EmoBank(Valence) Overall PASTEL(Politics) GYAFC PASTEL(Age) DailyDialog ShortJoke ShortRomance SentiBank VUA PASTEL(Ethnicity) TroFi -1 0 1 2 3 4 Figure 2: F1 improvement by our cross model over BERT in individual style classification task.", "In addition to the theoretical style grouping in 3.1, we empirically find how two styles are correlated in human-written text using silver predictions from the classifiers.", "Setup.", "We sample 1,000,000 tweets crawled using Twitter's Gardenhose API.", "We choose tweets as the target domain, because of their stylistic diversity compared to other domains, such as news articles.", "Using the fine-tuned cross-style classifier in 4, we predict probability of 53 style attributes 3 over the 1M tweets.", "We split a tweet into sentences and then average their prediction scores.", "We then produce a correlation matrix across the style attributes using Pearson correlation coefficients with Euclidean distance and finally output a 53 53 correlation matrix.", "We only show correlations that are statistical significant with p-value < 0.05 and cross out the rest.", "Reliability.", "One may doubt about the classifier's low performance on some styles, leading to unreliable interpretation of our analysis.", "Although we only show correlation on the predicted style values, 3 Attribute means labels of each style: positive and negative labels for sentiment style.", "we also performed the same analysis on the human-annotated cross-set, showing similar correlation tendencies to the predicted ones.", "However, due to the small number of annotations, its statistical significance is not high enough.", "Instead, we decide to show the prediction-based correlation, possibly including noisy correlations but with statistical significance.", "Results.", "Figure 3 shows the full correlation matrix we found.", "From the matrix, we summarize some of the highly correlated style pairs in Table 5 For each pair of correlation, two annotators evaluate its validity of stylistic dependency using a Likert scale.", "Our prediction-based correlation shows 4.18 agreement on average, showing reasonable accuracy of correlations.", "We also provide an empirical grouping of styles using Ward hierarchical clustering (Ward Jr, 1963) on the correlation matrix.", "Figure 4 shows some interpretable style clusters detected from text, like Asian ethnicities (SouthAsian, EastAsian), middle ages (35-44, 45-54, 55-74), positiveness (happi-ness, dominance, positive, polite), and bad emotions (anger, disgust, sadness, fear).", "We study the effect of combination of some styles in the context of generation.", "We first describe our Figure 3: Cross-style correlation.", "style-conditioned generators that combine the style classifiers in 4 with pre-trained generators (6.1), and then validate two hypothetical questions using the generators: does better identification of styles help better stylistic generation (6.2)?", "and which combination of styles are more natural or contradictive in generation (6.3)?", "Let x an input text and s a target style.", "Since we already have the fine-tuned style classifiers P ( s | x ) from 4, we can combine them with a generator P ( x ) , like a pre-trained language model, and then generate text conditioned on the target style P ( x | s ) .", "We extend the plug-and-play language model (PPLM) (Dathathri et al., 2019) to combine our style classifiers trained on XSLUE with the pre-trained generator; GPT2 (Radford et al., 2019) without extra fine-tuning: P ( x | s ) P ( x ) P ( s | x ) .", "Table 6 shows example outputs from our style-conditioned generators given a prompt Every natural text is'.", "We evaluate quality of output text: given 20 frequent prompts randomly extracted from our Middle ages Asians Positive Feeling bad Negative Figure 4: Empirical grouping of styles.", "Output without style condition: Every natural text is' a series of images.", "The images, as they are known within the text, are the primary means by which a text is read, and therefore are", "..", "Output conditioned on Formality (F1 = 89.9%) : Formal (left) and Informal (right) Every natural text is' different.", "You may find that the word you wrote does not appear on the website of the author.", "If you have any queries, you can contact", "us..", "Every natural text is' a bit of a hack.", "I don't think of it as a hack, because this hack is the", "hack..", "and if you don't believe me then please don't read this, I don't", "care..", "Every natural text is' a natural language, and every natural language is a language that we can speak.", "It is the language of our thoughts and of our", "lives..", "Every natural text is' worth reading...I'm really going to miss the music of David Byrne, and that was so much fun to watch live.", "The guy is a *ucking *ick.", "..", "training data, 4 we generate 10 continuation text for each prompt for each binary label of four styles (sentiment, politeness, offense, and formality) 5 using the conditional style generator; total 20 10 2 4=1600 continuations.", "We evaluate using both automatic and human measures: In automatic evaluation, we calculate F1 score of generated text using the fine-tuned classifiers, to check whether the output text reflects stylistic factor of the target style given.", "In human 4 Some example prompts: Meaning of life is, I am, I am looking for, Humans are, The virus is, etc 5 We choose them by the two highest F1 scored styles each from inter-personal and affective groups, although we conduct experiments on other styles such as romance and emotions.", "evaluation, scores (1-5 Likert scale) annotated by three crowd-workers are averaged on three metrics: stylistic appropriateness 6 , consistency with prompt , and overall coherence .", "In Table 7, compared to F1 scores on individual test set in XSLUE , automatic scores on output from the generator are less by 20.5% on average, showing sub-optimality of the conditional style generator between classification and generation.", "Interestingly, in human evaluation, negative labels (2 nd label for each style) for each style, like negative sentiment, impoliteness, informality, and offensiveness, show less stylistic appropriateness than positive or literal labels.", "To further investigate the relationship between classifier's performance and generation quality, we conduct a study by decreasing the training completion ratio (i.e., a fraction of epochs until completion; C %) of the classifiers; PC % ( s | x ) over the four styles and again evaluate the output continuation; PC % ( x | s ) P ( x ) PC % ( s | x ) using the same", "6 Stylistically appropriateness means the output text includes appropriate amount of target style given.", "89.8 Formality 81.5 61.3 Figure 5: As the training completion ratio (x-axis, %) of classifiers increases, stylistic appropriateness (blue, y-axis) and consistency (red, y-axis) increase.", "human metrics.", "Figure 5 shows that the better style understanding (higher F1 scores in classification) yields the better stylistic generation (higher stylistic appropriateness and consistency scores).", "We have generated text conditioned on single styles.", "We now generate text conditioned on combination of multiple styles; P ( x | s 1 .. s k ) & P ( x ) P ( s 1 | x ) P ( s k | x ) where k is the number of styles.", "In our experiment, we set k =2 for sentiment and politeness styles, and generate text conditioned on all possible combinations between the labels of the two styles (e.g., positive and polite label, negative and impolite label).", "We again conduct human evaluation on the output text for measuring whether the generator produces stylistically appropriate text given the combination.", "Table 8 shows averaged human-measured stylistic appropriate scores over the four label combinations (left) and the correlation scores observed in the style correlation matrix on written text in Figure 3 (right).", "Some combinations, like positive and impolite or like negative and polite, show less stylistic appropriateness scores, because they are naturally contradictive in their stylistic variation.", "Moreover, the stylistic appropriateness scores look similar to the correlation score observed from written text, showing that there exists some natural or unnatural combination of styles in both classification on human-written text and output generated by the model.", "We introduce a benchmark XSLUE of mostly existing datasets for studying cross-style language understanding and evaluation.", "Using XSLUE , we found interesting cross-style observations in classification, correlation, and generation case studies.", "We believe XSLUE helps other researchers develop more solid methods on various cross-style applications.", "We summarize other concerns we found from our case studies: Style drift.", "The biggest challenge in collecting style datasets is to diversify the style of text but preserve the meaning, to avoid semantic drift .", "In the cross-style setting, we also faced a new challenge; style drift , where different styles are coupled so changing one style might affect the others.", "Ethical consideration.", "Some styles, particularly on styles related to personal traits, are ethically sensitive, so require more careful interpretation of the results not to make any misleading points.", "Any follow-up research needs to consider such ethical issues as well as provides potential weaknesses of their proposed methods.", "From correlation to causality.", "Our analysis is based on correlation, not causality.", "In order to find causal relation between styles, more sophisticated causal analyses, such as propensity score (Austin, 2011), need to be considered for controlling the confounding variables.", "By doing so, we may resolve the biases driven from the specific domain of training data.", "For example, generated text with the politeness classifier (Danescu et al., 2013) contains many technical terms (e.g., 3D, OpenCV, bugs) because its training data is collected from StackExchange.", "This work would not have been possible without the efforts of the authors who kindly share the style language datasets publicly.", "We thank Edvisees members at CMU, Hearst lab members at UC Berkeley, and anonymous reviewers for their helpful comments." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "objective", "result", "result", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "result", "result", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "result", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Maintaining consistent personas is essential for dialogue agents.", "Although tremendous advancements have been brought, the limited-scale of annotated persona-dense data are still barriers towards training robust and consistent persona-based dialogue models.", "In this work, we show how the challenges can be addressed by disentangling persona-based dialogue generation into two sub-tasks with a novel BERT-over-BERT (BoB) model.", "Specifically, the model consists of a BERT-based encoder and two BERT-based decoders, where one decoder is for response generation, and another is for consistency understanding.", "In particular, to learn the ability of consistency understanding from large-scale non-dialogue inference data, we train the second decoder in an unlikelihood manner.", "Under different limited data settings, both automatic and human evaluations demonstrate that the proposed model outperforms strong baselines in response quality and persona consistency.", "Various approaches have been explored to introduce explicit personas in dialogue models (Qian et al., 2018; Song et al., 2019; Zheng et al., 2020; Liu et al., 2020).", "The PERSONA can be defined as a composite of elements of identity, such as profiles and background personal facts.", "In persona-based dialogues, the generated responses are conditioned not only on the dialogue context but also on some predefined personas, so the presenting personality could be more consistent.", "Existing persona-based dialogue models heavily utilize a set of persona-related dialogue data (Wolf et al., 2019; Golovanov et al., 2019), such as the PersonaChat (Zhang et al., 2018).", "This kind of crowd-sourced dataset covers rich persona features, Wei-Nan Zhang is the corresponding author.", "namely persona-dense.", "Nevertheless, the scale of such crowd-sourced datasets is limited by the expensive costs: two annotators are asked to act the part of a given provided persona and chat naturally to get to know each other during the conversation.", "On the other hand, conversations in daily life are not always persona-related.", "According to Twitter content analysis, less than 10% messages on Twitter reveal personal anecdote or activities at home or work and even less for personally identifiable information (Naaman et al., 2010; Humphreys et al., 2014).", "As a result, the large-scale data collected from social media would only contain a limited amount of persona-related dialogues, which is persona-sparse.", "The limited-scale of crowd-sourced data and the persona-sparsity in large-scale data present one common challenge: a model trained on limited personalized data cannot sufficiently understand persona consistency.", "As shown in Figure 1, a 12-layer GPT2 (Radford et al., 2019) finetuned on the PersonaChat dataset still shows a lack of consistency.", "After rethinking the essence of persona-based dialogue generation, we can find that it requires the dialogue agent to own the capabilities to 1) understand the persona-response consistency and 2) generate a persona-related response given the dialogue context.", "Obviously, an ideal dataset that sat-isfies both features are difficult to annotate.", "However, once we disentangle persona-based dialogue generation into two sub-tasks: consistency understanding and dialogue generation, it is easy to find abundant data resources for them.", "For consistency understanding, we may leverage large-scale non-dialogue inference data, such as SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) as the training data.", "As for dialogue generation, we already have various large-scale persona-sparse datasets.", "Inspired by the aforementioned motivation, in this work, we explore to learn a consistent persona-based dialogue model from limited personalized dialogues, with the assistance of large-scale non-dialogue inference data.", "Specifically, the proposed model consists of an encoder E , an auto-regressive decoder D 1 for response generation, and a bidirectional decoder D 2 for consistency understanding.", "Given personas P and dialogue query Q , the E and D 1 jointly work in an encoder-decoder manner to capture a typical query to response mapping FG ( S | Q, P ) , and generate a coarse response representation R 1 .", "Then R 1 and personas P are fed into the bidirectional decoder D 2 to map R 1 to final response representations R 2 : FU ( R 2 | S, P ) .", "Since the consistency understanding part FU ( R | S, P ) is independent of the dialogue query Q , it can be learned on non-dialogue inference datasets.", "Here an unlikelihood training objective (Welleck et al., 2019a) is applied to make contradicted cases in the inference data less likely so that D 2 could acquire the ability of consistency understanding.", "We initialize all modules from BERT (Devlin et al., 2019) and name the proposed model BERT-over-BERT (BoB).", "To verify the effectiveness of our model, we experiment on two limited data scenarios: 1) a persona-dense scenario (Zhang et al., 2018) with low-resource settings (Zhao et al., 2019), and 2) a persona-sparse scenario (Zheng et al., 2019).", "Both automatic and human evaluations indicate that our model generalizes well under different settings and outperforms strong baselines on most metrics, especially on persona consistency.", "Contributions in this work are three-fold: We disentangled the task of persona-based dialogue generation into two sub-tasks: consistency understanding and dialogue generation.", "A BERT-based generative framework, BoB, was proposed for training persona-based dialogue models from limited data.", "An unlikelihood training method with non-dialogue inference data was introduced to enhance persona consistency understanding.", "Persona-based Dialogues Recent studies on persona-based dialogue generation focus on a data-driven manner.", "They learn persona-related features directly from personalized dialogue datasets, either with implicit persona embeddings (Li et al., 2016b) or with explicit profiles (Qian et al., 2018) and personal facts (Mazare et al., 2018).", "Following this research line, more sophisticated neural models are emerging, such as modeling mutual-persona (Liu et al., 2020) and multi-stage persona-based dialogue generation (Song et al., 2020a).", "Meanwhile, various pre-training methods have also been applied in this field.", "Wolf et al. (2019) and Golovanov et al. (2019) show that fine-tuning pre-trained GPT on the persona-dense dataset can improve the quality of generated responses.", "Zheng et al. (2020) propose an attention-routing mechanism in a GPT-based model to control the flow of persona information.", "Lin et al. (2020) explore how to leverage BERT model for dialogue generation.", "Different large-scale pretrained chatbots (Roller et al., 2020; Madotto et al., 2020) also show their effectiveness on persona-based dialogues.", "Disentangled Representation The concept of disentangling can be defined as transformations that only change some properties of the underlying model while leaving all other properties invariant (Higgins et al., 2018).", "The variational au-toencoder (Kingma and Welling, 2013) could be regarded as a disentangled representation learning framework, and various methods are built within it (Kim and Mnih, 2018; Locatello et al., 2019).", "Unlikelihood Training Likelihood tries to maximize the probability of target sequence, while unlikelihood corrects known biases by minimizing the probability of negative candidates (Welleck et al., 2019a).", "Closely related to our work, Li et al. (2020) first explored unlikelihood training in addressing dialogue logical contradictions.", "They get contradicted dialogues from PersonaChat according to DNLI (Welleck et al., 2019b), a PersonaChat-oriented dialogue inference dataset.", "Then unlikelihood training is applied to reduce the probability of contradicted responses.", "Different from Li et al. (2020), with carefully designed decoders, our model could learn from large-scale non-dialogue inference datasets, making it generalizable to different scenarios, such as persona-dense and persona-sparse datasets, as will be seen in our experiments.", "In this work, our goal is to learn a persona-based dialogue model from limited personalized data.", "To address the challenges of consistency understanding brought by limited data, we leverage large-scale non-dialogue inference data in our model.", "Formally, let Q = q 1 , q 2 , ..., q n denote the dialogue query, R = r 1 , r 2 , ..., r m denote the target response, and P denote the personas.", "In addition, let N denote the non-dialogue inference data, which consists of premise, hypothesis, and their label.", "The premise and hypothesis are both natural sentences.", "Note that in the following sections, we use fonts to distinguish between sentences ( P , Q , R ) and their vector representations ( P , Q , R 1 , R 2 ).", "The task of the proposed model M is to generate a persona consistent response R = r 1 , r 2 , ..., r m , based on both persona P and query Q , i.e., R = M ( Q , P ) .", "As shown in Figure 2, the proposed model M consists of three BERT-based submodules: an encoder E , a response decoder D 1 , and a consistency understanding decoder D 2 .", "More concretely, E encodes the embeddings of persona and query, i.e., P and Q , into hidden states H .", "D 1 performs cross-attention on H in a typical encoder-decoder manner, and generate a coarse representation R 1 .", "D 2 learns consistency understanding from non-dialogue inference data N and further converts P and R 1 into final representations R 2 .", "At last, a consistent response R could be generated from R 2 .", "For response generation, a typical persona-based dialogue model needs persona P and dialogue query Q to generate a response.", "For consistency understanding, a model needs persona P , response R , and the consistency labels between P and R .", "However, if we entangle generation and understanding, it is not easy to obtain sufficient annotated data that satisfy the format of {P , Q , R , Label } .", "Instead, in our model, we design the decoder D 2 to disentangle generation and understanding, where D 2 maps R 1 , rather than Q , to R 2 .", "The key to disentangling is we can get R 1 without the participation of Q , as R 1 is the representation of R .", "As a result, the mapping from R 1 to R 2 could be independent of Q .", "In this way, it becomes possible to 1) learn persona-based dialogue generation from {P , Q , R} , i.e., the personalized data, and 2) learn consistency understanding from {P , R , Label } .", "Moreover, considering the limited amount of such annotated data, we could approximate {P , R , Label } by the abundant non-dialogue inference data N = { Premise, Hypothesis, Label } , where P and R corresponds to the Premise and Hypothesis.", "Given data P and R , suppose D 2 understands persona consistency, it should maximize the likelihood of generating R if R is not contradicted to P .", "Otherwise, it should minimize the likelihood of generating R .", "Motivated by this observation, we choose to apply unlikelihood training on D 2 to make it understand consistency.", "The detailed training objectives will be provided in Sec 3.4.", "The encoder E works like a standard BERT model, which bidirectionally encodes the input embeddings to a sequence of hidden vectors, from which the downstream tasks will be performed on.", "In our model, the input consists of persona P and dialogue query Q .", "For persona, whether P is personal facts (e.g., I have two dogs) or profiles (e.g., location: Seattle), we could always convert it into a sequence of words.", "A special token is placed between persona sequence and dialogue query, and the input is formated as: input = p (0)1 , p (0)2 , ..., p ( t ) u t , [ s ] , q 1 , q 2 , ..., q n (1) Then the embedding layer will convert input into representations.", "Following usual practice, the input representations are the sum of the corresponding token, type, and position embeddings, where the type embedding is 0 and 1 for persona and query, respectively.", "P and Q can also get their independent representations.", "The resulted representations are P and Q , which could be jointly denoted as emb = e p 1 , e p 2 , ..., e ql , where l is the maximum length of the input .", "Once we get the input representations, encoder E will perform multi-head attetnion (Vaswani et al., 2017) on the emb to transform the embeddings into a sequence of hidden vectors H .", "The multi-head attetnion could be denoted as MultiHead(query, key, value), where scaled dot-product attention is performed on query, key, and value.", "There are N identical layers in E , for each layer: h i +1 = FNN ( MultiHead ( h i , h i , h i )) , (2) where h 0 = emb , and FNN is a fully connected feed-forward network containing two linear transformations with a ReLU activation in between.", "h N is the final output of encoder E , i.e., H .", "The response generation decoder D 1 is initialized from BERT to inherit its robust language model but works in an auto-regressive decoder manner.", "First, a cross-attention is inserted between E and D 1 to pass the context information.", "Second, a left-to-right mask is applied to D 1 to preserve the autoregressive generation property.", "This attention is similar to the typical encoder-decoder attention mechanism in sequence to sequence models (Bahdanau et al., 2015), which attends to all positions in the context representations H according to the variations of r 1 .", "In training, r 01 is initialized from the embeddings of the target response.", "At each generation step, future tokens in the target response should not be considered.", "Therefore, as shown in Figure 2, a left-to-right mask is applied to D 1 to ensure that the predictions can only depend on the known outputs.", "D 1 also has N identical layers.", "And the output of the last layer r N 1 , i.e., R 1 , is further fed to D 2 .", "Like E and D 1 , the consistency understanding decoder D 2 is also initialized from BERT, from where D 2 initializes a good semantic representation for understanding tasks.", "In each layer of D 2 , the multi-head attention is performed twice: p i +1 = FNN ( MultiHead ( r i 2 , P, P )) , (4) r i +12 = FNN ( MultiHead ( p i +1 , R 1 , R 1 )) .", "The resulted r i +12 in each layer thus fuses information from both P and R 1 .", "The output of the last layer of D 2 is the final representations R 2 .", "With an output layer, e.g. linear layers, upon the R 2 , we can get the generated response R .", "We employ negative log-likelihood (NLL) loss and unlikelihood loss for dialogue generation and consistency understanding.", "A brief illustration is shown in the last column of Figure 2 and detailed descriptions will be provided in this section.", "Response Generation In our model, the widely adopted negative log-likelihood loss is applied in the training.", "For E and D 1 , they read the persona P and dialogue query Q to predict the target response R , which yields the raw representations R 1 : LD 1 NLL = log ( p ( R|P , Q )) = |R| (cid:88) i =1 log ( p ( r i |P , Q , R <i )) .", "(6) The generation part in D 2 is also trained by NLL.", "D 2 reads persona embeddings P and raw representations R 1 to predict the target response R : LD 2 NLL = log ( p ( R| P, R 1 )) = |R| (cid:88) i =1 log ( p ( r i | P, R 1 , R <i )) .", "(7) Unlikelihood Training Given large-scale non-dialogue inference dataset, we collect positive data D + from the entailed category and collect negative data D from the contradicted category: D + = { ( P ( i ) , R ( i )+ ) } , D = { ( P ( j ) , R ( j ) ) } , (8) where P and R are premise and hypothesis from the non-dialogue inference data, and their representations in our model are denoted as P and R .", "For data from D + , we still apply the NLL loss: LD +2 UL = | R| (cid:88) i =1 log ( p ( r i | P , R, R <i )) , (9) For data from D , we apply the unlikelihood objective to minimize the likelihood of contradictions: LD 2 UL = | R| (cid:88) i =1 log (1 p ( r i | P , R, R <i )) , (10) which penalizes every token in the contradicted target.", "Therefore, the loss LD 2 UL makes generating contradicted responses less likely.", "1) Response Generation.", "Given P , Q , and R from personalized dialogue data, we calculate the response generation loss L 1 = LD 1 NLL + LD 2 NLL ; 2) Consistency Understanding.", "Given D + and D from non-dialogue inference data, we calculate the unlikelihood loss L 2 = LD +2 UL + (1 ) LD 2 UL ; 3) Optimization.", "Sum up L 1 and L 2 .", "Update parameters with back-propagation.", "We initialize our model from the publicly available BERT base model, with 12 layers and hidden size 768.", "We employ an Adam optimizer with a learning rate of varying from 5e-6 to 5e-5.", "Empirically, we set to 5e-3 and to 0.1.", "The training of the proposed model was done on an Nvidia Telsa V100 32G GPU.", "Other details please refer to the released projects.", "To evaluate the performance of the proposed model, we carried out persona-based dialogue generation experiments in a persona-dense scenario and a persona-sparse scenario with two publicly available datasets:", "PersonaChat (Zhang et al., 2018) is a crowd-sourced dataset covering rich persona features.", "The dialogues in this dataset are grounded on specific personal facts.", "Here we use the ConvAI2 PersonaChat (Dinan et al., 2019), so the results are comparable to existing methods.", "PersonalDialog (Zheng et al., 2019) is a large-scale persona-sparse dataset, which is collected from Chinese social media Weibo.", "This dataset provides persona profiles and dialogues, but the majority of the dialogues are not persona-related.", "Two testsets are provided: a random testset, which is identically distributed as the training data, and a biased testset, which is manually selected to cover persona-related features.", "As aforementioned, we leverage non-dialogue inference data to address the consistency understanding issue brought by limited personalized data.", "Here we use the non-dialogue inference dataset MNLI (Williams et al., 2018) and its Chinese version CMNLI (Xu et al., 2020) as our auxiliary data.", "Moreover, to better compare models' performance on persona consistency, we leverage two dialogue inference datasets, DNLI (Welleck et al., 2019b) and KvPI (Song et al., 2020b), for evaluations.", "The statistics 1 of these inference datasets are summarized in Table2.", "The following models, including both non-pretrained and pretrained ones, have been compared in the experiments.", "Baselines.", "Vanilla Transformer (Vaswani et al., 2017) is employed as baselines for the experiments on both PersonaChat and PersonalDialog.", "Personas are concatenated to the dialogue queries.", "Non-Pretrained Models.", "Meta-learning has recently been explored in addressing the limited personalized data issue.", "CMAML (Song et al., 2020c) is a meta-learning based method that learns from few shot personas by customizing the model structures.", "Besides the meta-learning methods, GDR (Song et al., 2020a) introduces inference ability on the PersonaChat with a generate-refine framework.", "However, the two models are elaborately designed for the persona-dense dataset and not appliable for the persona-sparse scenario.", "Thus we only employ them for experiments on PersonaChat.", "Pre-training Models.", "In the ConvAI2 challenge (Dinan et al., 2019), which utilizes PersonaChat as the competition dataset, LIC (Golo-vanov et al., 2019) is the best performing model.", "Thus we compare this model in the experiments on both PersonaChat and PersonalDialog.", "Atten-tionRouting (Zheng et al., 2020) is a pre-training method specially designed for the persona-sparse dataset, and it is also the latest model on PersonalDialog.", "We also finetune a GPT2 (Radford et al., 2019) for a thorough comparison on PersonaChat.", "We focus on two main aspects of the persona-based dialogues: response quality and persona consistency .", "To compare different models, we employ both automatic metrics and human evaluations.", "Automatic Metrics For dialogue quality, we employ perplexity ( PPL. ) and distinct 1/2 ( Dist.1/2 ) following common practice (Zhang et al., 2018; Zheng et al., 2020).", "Lower perplexity means better language modeling.", "Distinct 1/2 (Li et al., 2016a) are the ratio of distinct uni-grams / bi-grams, and higher distinct means better reponse diversity.", "For persona consistency, we employ two metrics.", "The first is Consistency Score ( C.Score ) (Madotto et al., 2019), which leverages a referee model to predict consistency and can be defined as: NLI ( r, p i ) = 1 , if r contradicts p i , 0 , if r is irrelevant to p i , 1 , if r entails p i .", "C.Score ( r ) = (cid:88) t i =1 NLI ( r, p i ) .", "(11)", "Here the NLI is a pre-trained RoBERTa model (Liu et al., 2019) finetuned with the dialogue inference datasets, i.e., DNLI and KvPI, as descriped in Table", "2. The RoBERT model achieves testset accuracy of 89.3% and 88.9% on DNLI and KvPI, which is aligned to the reported 88.20% (Welleck et al., 2019b) and 88.0% (Song et al., 2020b).", "The second metric is Delta Perplexity ( P ), which evaluates consistency from model's internal distributions.", "Li et al. (2020) first calculates the perplexity of entailed ( p.Ent ) and contradicted ( p.Ctd ) dialogues in the inference dataset.", "A dialogue model with good understanding ability should assign lower perplexity to the entailed dialogues while higher perplexity to the contradictions.", "From this intuition, the P can be defined as: P = PPL ( Contradicted ) PPL ( Entailed ) , (12) where a larger P means the model has a better ability to distinguish entailment from contradiction.", "In our experiments, we get entailed and contradicted { persona, query, response } tuples from the dialogue inference datasets DNLI and KvPI.", "Human Evaluations We recruit two teams (one for English and another for Chinese), each consists of five professional annotators, from a third-party company.", "These annotators are proficient in language tasks but know nothing about the models.", "We sample 100 { persona, query, response } tuples for each model's evaluation under every setting.", "Human annotators are asked to evaluate dialogue quality from three conventional criteria: fluency ( Flue. ), informativeness ( Info. ), and relevance ( Relv. ).", "Each criterion is rated on a five-scale, where 1, 3, and 5 indicate unacceptable, moderate, and perfect performance, respectively.", "The annotators are also instructed to label the consistency ( Per.C. ) between persona and response, where 1 means persona-related and consistent, 0 means irrelevant, and -1 means contradicted.", "Full PersonaChat We first report the full PersonaChat experimental results in Table", "3. Our method achieves better performance consistently across all automatic and human evaluation metrics, which shows the effectiveness of our model.", "Among all the metrics, our model obtains significant improvements on PPL and P. The lowest testset PPL means our model has learned a good language model fitting this dataset.", "Moreover, the highest P shows that our model could more effectively distinguish entailment from contradiction than other baselines, which indicates our model has a better understanding of persona consistency.", "Less Personalized Data Now that our model achieves better performance with a large margin on the full PersonaChat dataset, we want to test our model by simulating a low-resource scenario (Zhao et al., 2019), where we gradually reduce the number of examples by halving the training set.", "We report the low-resource settings' results in Table", "4. As we can see, our model can outperform most of the baselines' best results even by using only 1/8 of the training data.", "The performance gains largely benefit from the powerful language model of the backbone BERT model.", "Furthermore, due to the disentangling of generation and understanding, our model presents a stable performance on P regardless of the size of the training set.", "This is in line with our expectations because the proposed model learns consistency understanding from the non-dialogue inference data rather than the persona-dense dialogue data.", "We observe that the method also improves fluency and informativeness.", "It is mainly due to the introduction of the non-dialogue inference data in the training procedure, which potentially enriches the dialogue language model.", "We further validate our model on a persona-sparse scenario.", "To have a more intuitive understanding of sparsity, we recruit the same annotation team to annotate whether the dataset response is persona-related in the sampled random and biased test data.", "Results show that only 1% responses are persona-related in the random test data and 28% in the biased test data.", "We calculate the Fleiss' Kappa among the five annotators and obtain a kappa of 0.774, which means substantial agreement (Landis and Koch, 1977).", "We report the evaluation results on both random and biased testsets in Table", "5. On the random test set, experimental results demonstrate that our model has some advantages over other methods, but no method can consistently outperform the others.", "One possible reason is that the task has degenerated into the ordinary dialogue generation in the random test set, so our model's advantages can not be effectively leveraged.", "In contrast, on the biased test set, our model achieves the best performance on most metrics.", "The good performance on the metrics C.Score and Per.C.", "indicates that our model can be effectively trained from a dataset with limited personalized dialogues.", "In addition to the good performance of the BoB model, we are also curious about Q1 : what is the key to the BoB model's understanding ability?", "Q2 : can the pre-trained models understand persona consistency just through finetuning on the personalized dialogues?", "And Q3 : does the extremely low PPL come from the initialization of the BERT model or the architecture of the proposed BoB model?", "To better answer the above questions, we ablate the BoB model in the following three ways: 1) w/o UL .", "It removes the unlikelihood objective.", "2) E + D 1 .", "It removes the unlikelihood objective and the second decoder D 2 .", "3) E .", "It removes the unlikelihood objective and both decoders and thus degenerates into a vanilla BERT model.", "We report the ablation results on PersonalDialog in Table 5 and full PersonaChat in Table", "6. From these results: Answer to Q1: The key to our model's understanding is the unlikelihood training.", "In training, our model assigns large perplexity to the contradictions.", "In generating, the non-contradicted responses are more likely to be generated as they are with much smaller losses.", "Table 7 shows an example.", "And as presented in the results, after removing the unlikelihood objective, all ablated models suffer from significant performance degradations in consistency-related metrics, such as Per.C.", "and P. Persona I've a son who is in junior high school Query You have any children?", "Answer to Q2: Pretrained models barely understand consistency from personalized dialogues.", "According to the poor performances on P, the three BERT-based ablated models can hardly distinguish contradiction from entailment.", "Although their Per.C.", "metric still looks good, it may come from just mimicking and copying words rather than understanding.", "A similar phenomenon also occurs to the pre-trained GPT2, as shown in Table", "3. It is also this phenomenon that motivates us to introduce the unlikelihood training into the BoB model.", "Answer to Q3: D 2 in the BoB architecture contributes most to the PPL.", "As shown in both datasets' ablation results, the PPL decreases the most after removing D 2 .", "We can also see an apparent gap between the models with D 2 and the vanilla BERT on PPL.", "Nevertheless, the BERT model still offers a good initialization for the BoB model to achieve the best performance on different metrics.", "The implementation for the BoB model is released at https://github.com/songhaoyu/BoB.", "In this work, we propose a novel BERT-based dialogue model to learn from limited personalized data by disentangling response generation and consistency understanding.", "Unlikelihood training with non-dialogue inference data is introduced to enhance the model's understanding ability.", "Experiments on two publicly available datasets demonstrate that our model can be trained with limited personalized dialogue data while still obtain significant improvements over strong methods.", "This paper is supported by the National Natural Science Foundation of China under Grant No.62076081, No.61772153, and No.61936010, and supported by the Science and Technology Innovation 2030 Major Project of China under Grant No.2020AAA0108605.", "We thank all the anonymous reviewers for their helpful comments and suggestions.", "Persona-based dialogue research intends to address the persona inconsistency issue in open-domain dialogue to facilitate human-computer interactions.", "Giving dialogue system a specific persona is a mainstream to alleviate the inconsistency issue of dialogues under the current stage.", "The purpose is to endow the dialogue system with self logical consistency rather than imitate specific human beings.", "Simultaneously, in this work, the data resources we use are all from published works and do not involve privacy issues related to data collection.", "We also confirm that this work neither automatically infers or attributes identity characteristics to the participants nor categorizes them in the training datasets." ]
[ "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "objective", "abstain", "objective", "other", "other", "abstain", "abstain", "abstain", "method", "abstain" ]
[ "Yuanhe Tian , Yan Song , Xiang Ao , Fei Xia , Xiaojun Quan (cid:52) , Tong Zhang , Yonggang Wang University of Washington, Sinovation Ventures Chinese Academy of Sciences, (cid:52) Sun Yat-sen University The Hong Kong University of Science and Technology { yhtian, fxia } @uw.edu [email protected] [email protected] (cid:52) [email protected] [email protected] [email protected]", "Abstract Chinese word segmentation (CWS) and part-of-speech (POS) tagging are important fundamental tasks for Chinese language processing, where joint learning of them is an effective one-step solution for both tasks.", "Previous studies for joint CWS and POS tagging mainly follow the character-based tagging paradigm with introducing contextual information such as n-gram features or sentential representations from recurrent neural models.", "However, for many cases, the joint tagging needs not only modeling from context features but also knowledge attached to them (e.g., syntactic relations among words); limited efforts have been made by existing research to meet such needs.", "In this paper, we propose a neural model named TWASP for joint CWS and POS tagging following the character-based sequence labeling paradigm, where a two-way attention mechanism is used to incorporate both context feature and their corresponding syntactic knowledge for each input character.", "Particularly, we use existing language processing toolkits to obtain the auto-analyzed syntactic knowledge for the context, and the proposed attention module can learn and benefit from them although their quality may not be perfect.", "Our experiments illustrate the effectiveness of the two-way attentions for joint CWS and POS tagging, where state-of-the-art performance is achieved on five benchmark datasets.", "1 1 Introduction Chinese word segmentation (CWS) and part-of-speech (POS) tagging are two fundamental and crucial tasks in natural language processing (NLP) for Chinese.", "The former one aims to find word Partially done as an intern at Sinovation Ventures.", "Corresponding author.", "1 TWASP (code and the best performing models) is released at https://github.com/SVAIGBA/TwASP .", "boundaries in a sentence and the latter, on the top of segmentation results, assigns a POS tag to each word to indicate its syntactical property in the sentence.", "To effectively perform CWS and POS tagging, combining them into a joint task is proved to have better performance than separately conducting the two tasks in a sequence (Ng and Low, 2004).", "Therefore, many studies were proposed in the past decade for joint CWS and POS tagging (Jiang et al., 2008, 2009; Sun, 2011; Zeng et al., 2013; Zheng et al., 2013; Kurita et al., 2017; Shao et al., 2017; Zhang et al., 2018).", "These studies, regardless of whether they used conventional approaches (Jiang et al., 2008, 2009; Sun, 2011; Zeng et al., 2013) or deep learning based approaches (Zheng et al., 2013; Kurita et al., 2017; Shao et al., 2017; Zhang et al., 2018), focused on incorporating contextual information into their joint tagger.", "In addition, it is well known that syntactic structure is also able to capture and provide the information of long-distance dependencies among words.", "For example, Figure 1 shows an example of local ambiguity, where the green highlighted part has two possible interpretations VV / NN ( report a book ) and NN ( the report ).", "The ambiguity can be resolved with syntactic analysis; for instance, the dependency structure, if available, would prefer the first interpretation.", "While the subject and the object of the sentence (highlighted in yellow) are far away from the ambiguous part in Figure 2: The architecture of TWASP for the joint CWS and POS tagging with the two-way attention mechanism, which is presented with example context features and their dependency knowledge (highlighted in yellow) from auto-analyzed results for a character (i.e., ( split ) highlighted in green) in the given sentence.", "the surface word order, they are much closer in the dependency structure (the subject depends on VV and NN depends on the the ob-ject).", "This example shows that syntactic structure provides useful cues for CWS and POS tagging.", "Syntactic knowledge can be obtained from manually constructed resources such as treebanks and grammars, but such resources require considerate efforts to create and might not be available for a particular language or a particular domain.", "A more practical alternative is to use syntactic structures automatically generated by off-the-shelf toolkits.", "Some previous studies (Huang et al., 2007; Jiang et al., 2009; Wang et al., 2011; Zhang et al., 2018) verified the idea for this task by learning from auto-processed corpora.", "However, their studies treat auto-processed corpora as gold reference and thus are unable to distinguishingly use it according to its quality (the resulted knowledge is not accurate in most cases).", "Therefore, the way to effectively leverage such auto-generated knowledge for the joint CWS and POS tagging task is not fully explored.", "In this paper, we propose a neural model named TWASP with a two-way attention mechanism to improve joint CWS and POS tagging by learning from auto-analyzed syntactic knowledge, which are generated by existing NLP toolkits and provide necessary (although not perfect) information for the task.", "In detail, for each input character, the proposed attention module extracts the context features associated with the character and their corresponding knowledge instances according to the auto-analyzed results, then computes the attentions separately for features and knowledge in each attention way, and finally concatenates the attentions from two ways to guide the tagging process.", "In doing so, our model can distinguish the important auto-analyzed knowledge based on their contributions to the task and thus avoid being influenced by some inferior knowledge instances.", "Compared to another prevailing model, i.e., key-value memory networks (Miller et al., 2016), which can learn from pair-wisely organized information, the two-way attentions not only are able to do so, but also fully leverage features and their knowledge rather than using one to weight the other.", "2 We experiment with three types of knowledge, namely, POS labels, syntactic constituents, and dependency relations, in our experiments.", "The experimental results on five benchmark datasets illustrate the effectiveness of our model, where state-of-the-art performance for the joint task is achieved on all datasets.", "We also perform several analyses, which confirm the validity of using two-way attentions and demonstrate that our model can be further improved by synchronously using multiple types of knowledge.", "The architecture of TWASP is illustrated in Figure 2.", "The left part shows the backbone of the model for the joint CWS and POS tagging following 2 We explain it in later part of the paper that, the output of key-value memory networks mainly rely on the value embeddings, where keys are used to weight such embeddings.", "the character-based sequence labeling paradigm, where the input is a character sequence X = x 1 x 2 x i x l and the output is a sequence of joint labels Y = y 1 y 2 y i y l .", "To enhance the backbone paradigm, the proposed two-way attention module (as shown in the right part of Figure 2) takes the syntactic knowledge produced from the input sentence, analyzes it and then feeds it to the tagging process.", "In this section, we firstly introduce the auto-analyzed knowledge, then explain how the two-way attentions consume such knowledge, and finally describe how the joint CWS and POS tagging works with the resulted attentions.", "Auto-analyzed knowledge is demonstrated to be an effective type of resources to help NLP systems understand the texts (Song et al., 2017; Seyler et al., 2018; Huang and Carley, 2019).", "One challenge for leveraging external knowledge for the joint task is that gold-standard annotations are extremely rare for text in most domains, especially the syntactic annotations.", "An alternative solution is to use off-the-shelf NLP systems to produce such knowledge, which is proved to be useful in previous studies (Huang et al., 2007; Jiang et al., 2009; Wang et al., 2011; Zhang et al., 2018).", "Rather than processing an entire corpus and then extracting features or training embeddings from the resulted corpus as in previous studies, our model does not treat knowledge as gold references: it generates auto-analyzed knowledge for each sentence and learns the weights of the corresponding features.", "Formally, for a character sequence X , let S and K denote the lists of context features and knowledge for X , respectively.", "For each character x i in X , let S i = [ s i, 1 , s i, 2 , s i,j , s i,m i ] and K i = [ k i, 1 , k i, 2 , k i,j , k i,m i ] be the sublists of S and K for x i .", "Here, s i,j and k i,j denote a context feature and a knowledge instance, respectively.", "In this paper, we use three types of syntactic knowledge for the joint task, namely POS labels, syntactic constituents, and dependency relations, where POS labels indicate the syntactic information of individual words, syntactic constituents provide the structural grouping information for a text span, and dependencies offer dependency relations between words.", "Figure 3 shows an example sentence and the corresponding S and K .", "For character (highlighted in green), its S i and K i are highlighted in yellow.", "In order to distinguish same knowledge appearing with different context features, we use a feature-knowledge combination tag to represent each knowledge instance (e.g., NN , NP , and dobj in Figure 3).", "We explain each type of knowledge below.", "POS Labels Figure 3", "(a) shows that, for each x i (e.g., x 6 = ), we use a 2-word window for both sides to extract context features from S to form S i (i.e., S 6 = [ , , , ]), and then get their corresponding knowledge instances of POS labels from K to form K i (i.e., K 6 = [ NN , VV , VV , LC ]).", "Syntactic Constituents As shown in Figure 3", "(b), the rule for extracting syntactic constituency knowledge is as follows.", "We start with the word containing the given character x i , go up the constituency tree to the first ancestor whose label is in a pre-defined syntactic label list, 3 then use all the words under this node to select context features from S , and finally combine the words with the syntactic label of the node to select knowledge instances from K .", "For example, for x 6 = , the lowest syntactic node governing is NP (high-lighted in yellow); thus S 6 = [ ] and K 6 = [ NP ].", "Another example is x 5 = , the lowest acceptable node on its syntactic path is VP ; therefore, S 5 = [ , , ] and K 5 = [ VP , VP , VP ].", "Dependency Relations Given a character x i , let w i be the word that contains x i .", "The context features S i include w i , w i 's governor, and w i 's dependents in the dependency structure; those words combined with their inbound dependency relation labels form K i .", "For example, for x 6 = , w 6 = , which depends on with a dependency label dobj .", "Therefore, S 6 = [ , ], and K 6 = [ obj , root ].", "Attention has been shown to be an effective method for incorporating knowledge into NLP systems (Kumar et al., 2018; Margatina et al., 2019) but it cannot be used directly for feature and knowledge in pair-wise forms.", "Previous studies on the joint task normally directly concatenate the embeddings from context features and knowledge instances into the embeddings of characters (Zhang et al., 2018), which could be problematic for incorporating auto-analyzed, error-prone syntactic knowledge obtained from off-the-shelf toolkits.", "For both features and their knowledge instances for X , we use a two-way attention design to have separate attention for S and K .", "Particularly, the two ways, namely, the feature way and the knowledge way, are identical in architecture, where each way has a feed-forward attention module (Raffel and Ellis, 2015).", "For each x i , its S i and K i are firstly fed into the feature attention way and the knowledge attention way, respectively, then computed within each way, and their final attention vectors are combined to feedback to the backbone model.", "Take the feature way as an example, the attention 3 Following Chen et al. (2006), the list has 12 syntactic labels, namely, ADJP, ADVP, CLP, DNP, DP, DVP, LCP, LST, NP, PP, QP, and VP.", "weight for each context feature s i,j is computed by a si,j = exp ( h (cid:62) i e si,j ) (cid:80) m i j =1 exp ( h (cid:62) i e si,j ) (1) where h i is the vector from a text encoder for x i and e si,j the embedding of s i,j .", "Then we have the weighted embedding a si for all s i,j in S i via a si = m i (cid:88) j =1 a si,j e si,j (2) where (cid:80) denotes a element-wise sum operation.", "For the knowledge way, the same process is applied to get a ki by distinguishing and weighting each knowledge instance k i,j .", "Finally, the output of the two attention ways are obtained through an concatenation of the two vectors: a i = a si a ki .", "To functionalize the joint tagging, the two-way attentions interact with the backbone model through the encoded vector h i and its output a i for each x i", "For h i , one can apply many prevailing encoders, e.g., Bi-LSTM or BERT (Devlin et al., 2019), to get the vector list [ h 1 h 2 h i h l ] for X .", "Once a i is obtained, we concatenate it with h i and send it through a fully connected layer to align the dimension of the output for final prediction: o i = W ( h i a i ) + b (3) where W and b are trainable parameters.", "Afterwards, conditional random fields (CRF) is used to estimate the probability for y i over all possible joint CWS and POS tags under x i and y i 1 by p ( y i | x i ) = exp ( W c o i + b c ) (cid:80) y i 1 y i exp ( W c o i + b c ) (4) Here, W c and b c are the weight matrix and the bias vector, respectively, and they are estimated using the ( y i 1 , y i ) tag pairs in the gold standard.", "We employ five benchmark datasets in our experiments, where four of them, namely, CTB5, CTB6, CTB7, and CTB9, are from the Penn Chinese TreeBank 4 (Xue et al., 2005) and the fifth one is", "4 We obtain the Penn Chinese TreeBank data from the official release of Linguistic Data Consortium.", "The catalog numbers for CTB5, CTB6, CTB7, and CTB9 are LDC2005T01, LDC2007T36, LDC2010T07, and LDC2016T13, respectively.", "the Chinese part of Universal Dependencies (UD) 5 (Nivre et al., 2016).", "The CTB datasets are in simplified Chinese characters while the UD dataset is in traditional Chinese.", "Following Shao et al. (2017), we convert the UD dataset into simplified Chinese 6 before conducting experiments on it.", "CTB uses 33 POS tags, and we split CTB5-CTB9 following previous studies (Wang et al., 2011; Jiang et al., 2008; Shao et al., 2017).", "In addition, because the data in CTB9 come from eight genres broadcast conversation (BC), broadcast news (BN), conversational speech(CS), discussion forums (DF), magazine articles (MZ), newswire (NW), SMS/chat messages (SC), and weblog (WB) we also use CTB9 in a cross-domain study (see Section 3.4).", "UD uses two POS tagsets, namely the universal tagset (15 tags) and language-specific tagset (42 tags for Chinese).", "We refer to the corpus with the two tagsets as UD1 and UD2, respectively, and use the official splits of train/dev/test in our experiments.", "The statistics for the aforementioned datasets are in Table 1.", "To obtain the aforementioned three types of knowledge, we use two off-the-shelf toolkits, Stanford CoreNLP Toolkit (SCT) 7 (Manning et al., 2014) and Berkeley Neural Parser (BNP) 8 (Kitaev and Klein, 2018): the former tokenizes and parses a Chinese sentence, producing POS tags, phrase structure and dependency structure of the sentence; the latter does POS tagging and syntactic parsing on a pre-tokenized sentence.", "Both toolkits were trained on CTB data and thus produced CTB POS tags.", "To extract knowledge, we firstly use SCT to automatically segment sentences and then run both SCT and BNP for POS tagging and parsing.", "Table 2 shows the size of S and K for all the datasets.", "We test the model with three encoders: two of them, namely Bi-LSTM and BERT 9 (Devlin et al., 2019), are widely used; the third encoder is ZEN 10 (Diao et al., 2019), which is a recently released Chinese encoder pre-trained with n-gram information and outperforms BERT in many downstream tasks.", "For the Bi-LSTM encoder, we set its hidden state size to 200 and use the character embeddings released by Shao et al. (2017) to initialize its input representations.", "For BERT and ZEN, we follow their default settings, e.g., 12 layers of self-attentions with the dimension of 768.", "For the two-way attention module, we randomly initialize the embeddings for all context features and their corresponding knowledge instances, where one can also use pre-trained embeddings (Song et al., 2018; Grave et al., 2018; Zhang et al., 2019; Yamada et al., 2020) for them.", "For all the 7 We use its version 3.9.2 downloaded from https:// stanfordnlp.github.io/CoreNLP/ .", "8 We download the model from https://github.", "com/nikitakit/self-attentive-parser .", "9 We use the Chinese base model from https://s3.", "amazonaws.com/models.huggingface.co/ .", "10 https://github.com/sinovation/ZEN CTB5 CTB6 CTB7 CTB9 UD1 UD2 Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint SCT 98.02 95.49 96.62 90.85 96.53 92.73 93.63 88.23 80.50* 0.00* 80.50* 36.11* BNP -95.50 -94.43 -92.95 -88.09 -0.00* -37.16* Bi-LSTM 97.69 93.73 95.46 90.63 95.46 89.98 96.45 91.80 94.96 88.72 95.01 88.75 + POS (SCT) 98.07 94.68 96.23 91.04 96.32 91.60 96.75 92.36 94.86 88.90 95.08 88.99 + Syn.", "models, we set the maximum character length of the input sequence to 300 and use negative log-likelihood loss function.", "Other hyper-parameters of the models are tuned on the dev set and the tuned models are evaluated on the test set for each dataset (each genre for CTB9).", "F-scores for word segmentation and the joint CWS-POS tags are used as main evaluation metrics 11 in all experiments.", "In our main experiment, we run our TWASP on the five benchmark datasets using the three encoders, i.e., Bi-LSTM, BERT, and ZEN.", "The results on the F-scores of word segmentation and joint CWS and POS tagging are in Table 3, which also includes the performance of the baselines without attention and the two toolkits (i.e., SCT and BNP).", "The results of SCT and BNP on the UD dataset are bad because they were trained on CTB, which used different segmentation and POS tagging criteria.", "There are several observations.", "First, for all encoders, the two-way attentions provide consistent enhancement to the baselines with different types of knowledge.", "Particularly, although the baseline model is well-performed when BERT (or ZEN) serves as the encoder, the attention mod-11 We use the evaluation script from https://github.", "ule is still able to further improve its performance with the knowledge produced by the toolkits even though the toolkits have worse-than-baseline results for the joint task.", "Second, among different types of knowledge, POS labels are the most effective ones that help the joint task.", "For instance, among BERT-based models, the one enhanced by POS knowledge from SCT achieves the best performance on most datasets, which is not surprising because such knowledge matches the outcome of the task.", "In addition, for BERT-based models enhanced by knowledge from BNP (i.e., BERT + POS (BNP) and BERT + Syn.", "(BNP)), syntactic constituents provide more improvement than POS labels on all CTB datasets.", "This observation could be explained by that BNP is originally designed for constituency parsing with CTB criteria; the syntactic constituents are complicated while effective when they are accurate.", "Third, while SCT and BNP were trained on CTB, whose tagset is very different from the two tagsets for UD, TWASP still outperforms the baselines on UD with the knowledge provided by SCT and BNP, indicating that syntactic knowledge is useful even when it uses different word segmentation and POS tagging criteria.", "Table 4 shows the results of our best models (i.e. BERT and ZEN with POS (SCT)) and previous studies on the same datasets.", "Our approach CTB5 CTB6 CTB7 CTB9 UD1 UD2 Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint Jiang et al. (2008) 97.85 93.41 -----Kruengkrai et al. (2009) 97.87 93.67 -----Sun (2011) 98.17 94.02 -----Wang et al. (2011) 98.11 94.18 95.79 91.12 95.65 90.46 ---Qian and Liu (2012) 97.85 93.53 -----Shen et al. (2014) 98.03 93.80 -----Kurita et al. (2017) 98.41 94.84 -96.23 91.25 ---Shao et al. (2017) 98.02 94.38 --96.67 92.34 95.16 89.75 95.09 89.42 Zhang et al. (2018) 98.50 94.95 96.36 92.51 96.25 91.87 ---BERT + POS (SCT) 98.77 96.77 97.43 94.82 97.31 94.12 97.75 94.87 98.32 95.60 98.33 95.46 ZEN + POS (SCT) 98.81 96.92 97.45 94.87 97.27 94.20 97.77 94.88 98.33 95.69 98.18 95.49 Table 4: Comparison (in F-scores of word segmentation and joint tagging) of TWASP (with BERT or ZEN encoder) with previous studies.", "outperforms previous studies on the joint task and achieves new state-of-the-art performance on all datasets.", "While some of the previous studies use auto-analyzed knowledge (Wang et al., 2011; Zhang et al., 2018), they regard such knowledge as gold reference and consequently could suffer from errors in the auto-analyzed results.", "In contrast, our proposed model is able to selectively model the input information and to discriminate useful knowledge instances through the two-way attentions.", "Domain variance is an important factor affecting the performance of NLP systems (Guo et al., 2009; McClosky et al., 2010; Song and Xia, 2013).", "To further demonstrate the effectiveness of TWASP, we conduct cross-domain experiments on the eight genres of CTB9 using BERT and ZEN as the baseline and their enhanced version with POS knowledge from SCT.", "In doing so, we test on each genre with the models trained on the data from all other genres.", "The results on both segmentation and the joint task are reported in Table 5, where the SCT results are also included as a reference.", "The comparison between the baselines and TWASP with POS knowledge clearly shows the consistency of performance improvement with two-way attentions, where for both BERT and ZEN, TWASP outperforms the baselines for all genres on the joint labels.", "In addition, similar to the observations from the previous experiment, both accurate and inaccurate POS knowledge are able to help the joint task.", "For example, although the SCT results on several genres (e.g., CS, DF, SC) are much worse than of the BERT baseline, the POS labels produced by SCT can still enhance TWASP on word segmentation and joint tagging with the proposed two-way attention module.", "In the first analysis, we compare our two-way attention with normal attention.", "For normal attention, we experiment three ways of incorporating context features and knowledge: (1) using context features and knowledge together in the attention, where all features or knowledge instances are equally treated in it; (2) using context features only; and (3) using knowledge only.", "We run these experiments with BERT encoder and POS knowledge from SCT on CTB5 and report the results in Table", "6. Overall, the two-way attentions outperform all three settings for normal attention, which clearly indicates the validity of using two attention ways for features and knowledge, i.e., compared to (1), as well as the advantage of learning from both of them, i.e., compared to (2) and (3).", "Interestingly, in the three settings, (3) outperforms (1), which could be explained by that, with normal attention, mixed feature and knowledge instances in it may make it difficult to weight them for the joint task.", "There are other methods for using both context features and knowledge in a neural framework, such as key-value memory networks (kvMN) (Miller et al., 2016), which is demonstrated to improve CWS by Tian et al. (2020).", "Thus we compare our approach with kvMN, in which context features are mapped to keys and knowledge to values.", "We follow the standard protocol of the kvMN, e.g., addressing keys by S i and reading values from K i through the corresponding knowledge for each key, computing weights from all key embeddings, and outputting the weighted embeddings from all values.", "The result from the kvMN is reported at the last row of Table 6, where its performance is not as good as the two-way attentions, and even Genre SCT BERT BERT+POS ZEN ZEN+POS Seg Joint Seg Joint Seg Joint Seg Joint Seg Joint BC 96.27 93.55 96.29 92.08 96.38 92.34 96.48 92.25 96.63 92.41 BN 96.98 93.98 96.93 93.73 97.20 94.02 97.05 93.91 97.21 94.14 CS 89.83 81.93 95.17 89.18 95.14 89.46 95.10 89.24 95.87 89.67 DF 91.34 84.28 96.79 92.02 96.44 92.44 96.33 92.11 96.55 92.51 MZ 95.69 91.99 95.62 91.97 95.83 92.17 95.69 92.00 95.78 92.18 NW 97.41 94.75 97.55 94.44 97.49 94.64 97.49 94.51 97.57 94.70 SC 84.87 76.55 95.97 91.13 96.27 91.77 96.09 91.47 96.38 91.85 WB 95.99 92.86 95.09 89.59 95.11 89.85 95.10 89.74 95.35 90.10 Table 5: Experimental results (the F-scores for word segmentation and joint tagging) from baselines and TWASP with different encoders on eight genres of CTB9.", "worse than using normal attention with setting (3).", "The reason could be straightforward: the output of kvMN is built upon value (knowledge) embeddings and therefore information from key (context feature) embeddings does not directly contribute to it other than providing weights for the value.", "As a result, kvMN acts in a similar yet inferior 12 way of setting (3) where only knowledge is used.", "Since every type of knowledge works well in our model, it is expected to investigate how the model performs when multiple types of knowledge are used together.", "To this end, we run experiments on CTB5 to test on our BERT-based TWASP with knowledge ensemble, where two ensemble strategies, i.e., averaging and concatenation, are applied with respect to how a i for each knowledge type is combined with others.", "The results are reported in Table", "7. In this table, the first seven rows (ID: 1-7) indicate that different types of knowledge are 12 The inferior is explained by that, in kvMN, the value weights are inaccurate because they are computed with respect to the contribution of keys rather than knowledge instances.", "combined according to whether they come from the same toolkit (ID: 1-5) or belong to the same category (ID: 6 and 7); and the last row (ID: 8) is for the case that all types of knowledge are combined.", "There are several observations.", "First, compared to only using one type of knowledge (refer to Table 3), knowledge ensemble improves model performance where more knowledge types contribute to better results.", "The best model is thus obtained when all knowledge (from each toolkit and from both toolkits) are used.", "Second, knowledge in the same type from different toolkits may complement to each other and thus enhance model performance accordingly, which is confirmed by the results from the models assembling POS (or Syn+Dep) information from both SCT and BNP.", "Third, for different ensemble strategies, concatenation tends to perform better than averaging, which is not surprising since concatenation actually turns the model into a multi-way structure for knowledge integration.", "When the toolkit provides accurate knowledge, it is not surprising that our two-way attention model would benefit from the auto-analyzed knowledge.", "Interestingly, even when the toolkit provides inaccurate output, our model might still be able to benefit from such output.", "Figure 4 shows such an example, where our system uses BERT+Dep using SCT and the baseline system is BERT without two-way attention.", "The sentence contains an ambiguity character bigram , which has two possible interpretations, AD ( immediately ) and NN/ LC ( on the horse ).", "The second one is correct, yet the baseline tagger chooses the former because ( immediately ) is a very common adverb.", "Although SCT also chooses the wrong segmentation and thus has an incorrect dependency structure, our system is still able to produce correct segmentation and POS tags.", "One plausible explanation for this is that the inaccurate dependency structure includes an advmod link between ( immediately ) and ( very good ).", "Because such a dependency pair seldom appears in the corpus, the attention from such knowledge is weak and hence encourages our system to choose the correct word segmentation and POS tags.", "There are basically two approaches to CWS and POS tagging: to perform POS tagging right after word segmentation in a pipeline, or to conduct the two tasks simultaneously, known as joint CWS and POS tagging.", "In the past two decades, many studies have shown that joint tagging outperforms the pipeline approach (Ng and Low, 2004; Jiang et al., 2008, 2009; Wang et al., 2011; Sun, 2011; Zeng et al., 2013).", "In recent years, neural methods started to play a dominant role for this task (Zheng et al., 2013; Kurita et al., 2017; Shao et al., 2017; Zhang et al., 2018), where some of them tried to incorporate extra knowledge in their studies.", "For example, Kurita et al. (2017) exploited to model n-grams to improve the task; Shao et al. (2017) extended the idea by incorporating pre-trained n-gram embeddings, as well as radical embeddings, into character representations.", "Zhang et al. (2018) tried to leverage the knowledge from character embeddings, trained on an automatically tagged corpus by a baseline tagger.", "Compared to these previous studies, TWASP provides a simple, yet effective, neural model for joint tagging, without requiring a complicated mechanism of incorporating different features or pre-processing a corpus.", "In this paper, we propose neural approach with a two-way attention mechanism to incorporate auto-analyzed knowledge for joint CWS and POS tagging, following a character-based sequence labeling paradigm.", "Our proposed attention module learns and weights context features and their corresponding knowledge instances in two separate ways, and use the combined attentions from the two ways to enhance the joint tagging.", "Experimental results on five benchmark datasets illustrate the validity and effectiveness of our model, where the two-way attentions can be integrated with different encoders and provide consistent improvements over baseline taggers.", "Our model achieves state-of-the-art performance on all the datasets.", "Overall, this work presents an elegant way to use auto-analyzed knowledge and enhance neural models with existing NLP tools.", "For future work, we plan to apply the same methodology to other NLP tasks.", "Xiang Ao was partially supported by the National Natural Science Foundation of China under Grant No. 61976204, U1811461, the Natural Science Foundation of Chongqing under Grant No.", "cstc2019jcyj-msxmX0149 and the Project of Youth Innovation Promotion Association CAS." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "other", "other", "other", "other", "other", "objective", "objective", "result", "result", "method", "method", "other", "other" ]
[ "Recent progress of abstractive text summarization largely relies on large pre-trained sequence-to-sequence Transformer models, which are computationally expensive.", "This paper aims to distill these large models into smaller ones for faster inference and with minimal performance loss.", "Pseudo-labeling based methods are popular in sequence-to-sequence model distillation.", "In this paper, we find simply manipulating attention temperatures in Transformers can make pseudo labels easier to learn for student models.", "Our experiments on three summarization datasets show our proposed method consistently improves vanilla pseudo-labeling based methods.", "Further empirical analysis shows that both pseudo labels and summaries produced by our students are shorter and more abstractive.", "Our code is available at https://github.", "com/Shengqiang-Zhang/plate .", "Automatic document summarization is the task of rewriting a long document into its shorter form while still retaining its most important content.", "In the literature, there are mainly two kinds of methods for summarization: extractive summarization and abstractive summarization (Nenkova and McK-eown, 2011).", "In this work, we focus on abstractive summarization, which is viewed as a sequence-to-sequence (Seq2Seq) learning problem, since recent abstractive models outperform their extractive counterparts and can produce more concise summaries (Raffel et al., 2020; Lewis et al., 2020; Zhang et al., 2020; Liu and Lapata, 2019).", "Recent progress of abstractive summarization largely relies on large pre-trained Transformer models (Raffel et al., 2020; Lewis et al., 2020; Zhang et al., 2020; Liu and Lapata, 2019; Bao et al., 2020).", "With these Equal contribution.", "extremely large models, we can obtain state-of-the-art summarization results, but they are slow for online inference, which makes them difficult to be used in the production environment even with cutting-edge hardware.", "This paper aims to distill these large Transformer summarization models into smaller ones with minimal loss in performance.", "Knowledge distillation is a class of methods that leverage the output of a (large) teacher model to guide the training of a (small) student model.", "In classification tasks, it is typically done by minimizing the distance between the teacher and student predictions (Hinton et al., 2015).", "As to Seq2Seq models, an effective distillation method is called pseudo-labeling (Kim and Rush, 2016), where the teacher model generates pseudo summaries for all documents in the training set and the resulting document pseudo -summary pairs are used to train the student model.", "In this paper, we argue that attention distributions of a Seq2Seq teacher model might be too sharp.", "As a result, pseudo labels generated from it are sub-optimal for student models.", "In the summarization task, we observe that 1) pseudo summaries generated from our teacher model copy more continuous text spans from original documents than reference summaries (56% 4-grams in pseudo summaries and 15% 4-grams in reference summaries are copied from their original documents on CNN/DailyMail dataset); 2) pseudo summaries tend to summarize the leading part of a document (measured on CNN/DailyMail, 74% of sentences in pseudo summaries and 64% of sentences in reference summaries are from the leading 40% sentences in original documents).", "We obtain the two numbers above by matching each sentence in a summary with the sentence in its original document that can produce maximum ROUGE (Lin, 2004) score between them.", "We call the two biases above the copy bias and the leading bias .", "In order to have an intuitive feeling, we select a rep-127 resentative example 1 and visualize its cross attention weights 2 (see the left graph in Figure 1).", "We observe that attention weights form three lines, which indicates very time the decoder predicts the next word, its attention points to the next word in the input document.", "That may be the reason why multiple continuous spans of text are copied.", "Another phenomenon we observe is that all high-value attention weights (in deeper color) concentrate on the first 200 words in the input document, which reflects the leading bias.", "In either case, the attention distribution is too sharp (i.e., attention weights of the next word position or the leading part is much larger than other positions), which means our teacher model is over-confident.", "Based on the observations above, we propose a simple method called PLATE (as shorthand for P seudo-labeling with L arger A ttention TE mperature) to smooth attention distributions of teacher models.", "Specifically, we re-scale attention weights in all attention modules with a higher temperature, which leads to softer attention distributions.", "Figure 1 intuitively shows the effect of using higher attention temperatures.", "Compared with the left graph, the right graph with higher attention temperature has shorter lines (less copy bias) with high attention weights, and positions of high attention weights extend to the first 450 words (less leading bias).", "Less copy bias in pseudo summaries encourages student models to be more abstractive, while less leading bias in pseudo summaries encourages student models to take advantage of longer context in documents.", "Experiments on CNN/DailyMail, XSum, and New York Times datasets with student models of different sizes show PLATE consistently outperforms vanilla pseudo-labeling methods.", "Further empirical analysis shows that, with PLATE , both pseudo summaries generated by teacher models and summaries generated by student models are shorter and more abstractive, which matches the goal of abstractive summarization.", "Large pre-trained Seq2Seq Transformer models largely improve results of generation tasks including text summarization (Song et al., 2019; Lewis et al., 2020; Bao et al., 2020; Raffel et al., 2020;", "1 See the detailed example in Appendix E. 2 We use cross attention because we can see how words", "Zhang et al., 2020).", "These models are pre-trained using unsupervised text-to-text objectives.", "For example, T5 (Raffel et al., 2020) is pre-trained by predicting corrupted text spans.", "BART (Lewis et al., 2020) employs denoising auto-encoding objectives such as text infilling and sentence permutation during its pre-training.", "The pre-training objective of PEGASUS (Zhang et al., 2020) is tailored for the summarization task, which predicts the most summary worthy sentences in a document.", "Our method aims to make these large models faster.", "In knowledge distillation, besides learning from gold labels in the training set, student models can learn from soft targets (Ba and Caruana, 2014; Hinton et al., 2015), intermediate hidden states (Romero et al., 2014), attentions (Zagoruyko and Komodakis, 2017; Wang et al., 2020), and target output derivatives (Czarnecki et al., 2017) of teacher models.", "Recent work for distillation of pre-trained Transformers (e.g., DistilBERT (Sanh et al., 2019), TinyBERT (Jiao et al., 2020), Mobile-BERT (Sun et al., 2020), BERT-of-Theseus (Xu et al., 2020a), MINILM (Wang et al., 2020)) focuses on natural language understanding tasks such as GLUE (Wang et al., 2018) or SQuAD (Rajpurkar et al., 2016) benchmarks.", "Most methods above are designed for classification models.", "In Seq2Seq learning tasks such as summarization, we can apply distillation methods above to each step of sequence model predictions.", "However, the sequence-level knowledge of teacher mod-128 els is not well utilized.", "Therefore, Kim and Rush (2016) introduce a sequence-level knowledge distillation method (i.e., pseudo-labeling ), where a student model is trained with pseudo labels generated by the teacher model using beam search decoding.", "Kim and Rush (2016) and later work (Kasai et al., 2020; Gu et al., 2017; Denkowski and Neubig, 2017) show pseudo-labeling achieves competitive performance for Seq2Seq tasks such as machine translation.", "Shleifer and Rush (2020) propose the shrink and fine-tune (SFT) approach for pre-trained summarization distillation, which re-finetunes a teacher model with some layers removed, and they show SFT outperforms pseudo-labeling and a mod-ification of direct knowledge distillation (Jiao et al., 2020) on one of their datasets, but not others.", "Our method, which builds on top of pseudo-labeling , is conceptually simple and improves pseudo-labeling across different summarization datasets.", "There is an interesting line of work called self-distillation or self-training (Furlanello et al., 2018; Xie et al., 2020; Deng et al., 2009; Liu et al., 2020; He et al., 2019), where the size of the student model is identical to the size of the teacher model.", "Our method can also be applied in self-distillation and can potentially be combined with the self-distillation methods above.", "3 Summarization Distillation 3.1 Transformer based abstractive summarization Abstractive summarization aims to rewrite a document into its shorter form (i.e., summary), which is a typical Seq2Seq learning problem.", "We adopt the Seq2Seq Transformer (Vaswani et al., 2017) model.", "Given a document X = ( x 1 , x 2 , . . . , x | X | ) and its gold summary Y = ( y 1 , y 2 , . . . , y | Y | ) , we estimate the following conditional probability: p ( Y | X ; ) = | Y | (cid:89) t =1 p ( y t | y <t , X ; ) (1) where is the model parameter and y <t stands for all tokens before position t (i.e., ( y 1 , y 2 , . . . , y t 1 ) ).", "The Seq2Seq Transformer model can be trained by minimizing the negative log-likelihood of gold document-summary pairs: LG ( ) = 1 | Y | log p ( Y | X ; ) (2) where | Y | is the number of tokens in summary Y .", "Knowledge distillation refers to the task of transferring knowledge of a large teacher model (or a group of large teacher models) into a small student model.", "As to Seq2Seq learning tasks such as machine translation and summarization, pseudo-labeling based methods are usually used to imitate teacher predictions at the sequence level.", "Specifically, suppose we have a document X , and Y = ( y 1 , y 2 , . . . , y | Y | ) is a pseudo summary generated by a teacher model using beam search.", "The student can be trained by minimizing the negative log-likelihood of document-topseudo -summary pairs.", "Strictly, all possible pseudo summaries from X should be taken into account.", "Unfortunately, the computational cost is prohibitive.", "We therefore use a single sample Y (which takes a large portion of probability mass from the teacher) instead as in Kim and Rush (2016).", "where Q , K , V are linear projections of hidden states of a layer and is the temperature of the attention module which is usually d ( d is the hidden dimension size of that attention head).", "Our distillation method PLATE works as follows.", "Assume we have a teacher model trained with = d .", "When the teacher generates pseudo labels with beam search, we use a higher attention temperature and set = d where > 1 ( is the attention temperature coefficient).", "Note that we only change the teacher's attention temperature during inference time.", "When we train our student model with pseudo labels, we still use a normal temperature (i.e., = d ).", "We find that adjusting the student's attention temperature does not work.", "Probably because the student can easily adapt to the scaled attention temperature during training.", "generate pseudo labels with more diversity, we further propose to use a random for each input document ( U [ a, b ] ).", "Note that U [ a, b ] is a uniform distribution and we typically set a = 1 .", "0 and b = 2 .", "0 .", "We conduct our experiments on three popular document summarization datasets: CNN/DailyMail (Hermann et al., 2015), XSum (Narayan et al., 2018), and New York Times (Sandhaus, 2008).", "All datasets are tokenized with the GPT-2 tokenizer (Radford et al., 2019), which is based on UTF-8 BPE (Sennrich et al., 2016).", "CNNDM The CNN/DailyMail dataset (CNNDM; Hermann et al., 2015) contains online news articles from the CNN and DailyMail websites paired with their associated highlights as reference summaries.", "We follow the standard pre-processing steps described in See et al. (2017); Liu and Lapata (2019).", "3 The resulting numbers of document-summary pairs for training, validation, and test are 287,227, 13,368, and 11,490, respectively.", "XSum The XSum dataset is collected by harvesting online articles from the BBC with single sentence summaries, which is professionally written.", "The summaries are extremely abstractive.", "We use the official splits of Narayan et al. (2018).", "There are 204,045 articles for training; 11,332 articles for validation; and 11,334 articles for test.", "NYT The New York Times dataset (NYT; Sand-haus, 2008) is composed of articles published by New York Times, and the summaries are written by library scientists.", "After applying the pre-processing procedures described in Durrett et al. (2016); Liu and Lapata (2019), we first obtain 110,540 articles with abstractive summaries.", "The test set is constructed by including the 9,076 articles published after January 1, 2007.", "The remaining 100,834 articles are further split into training and validation sets.", "After removing articles with summaries less than 50 words, we obtain the final dataset with 38,264 articles for training; 4,002 articles for validation; and 3,421 articles for test.", "3 Scripts are available at https://github.com/ abisee/cnn-dailymail .", "Teacher/Student model settings We use BART Large (Lewis et al., 2020) as our teacher model, which has 12 layers in the encoder and decoder.", "The hidden size of each layer is 1024, and each layer contains 16 attention heads with a hidden size of 64.", "We have four kinds of student models.", "The first three student models are initialized from BART weights (therefore, their hidden sizes are the same as that of BART).", "All the three students have the 12 layers of BART encoder and differ in the number of decoder layers.", "They are denoted by BART 12-6 , BART 12-3 , and BART 12-12 with 6, 3, and 12 decoder layers, respectively.", "For BART 12-6 (or BART 12-3 ), the decoder is initialized from the first 6 (or 3) layers or the maximally spaced 6 (or 3) layers of BART decoder.", "The fourth student is the Transformer base model (Vaswani et al., 2017), which has 6 layers in each of the encoder and decoder.", "Each layer has a hidden size of 512 and 8 attention heads.", "This student is randomly initialized and denoted by Transformer .", "The latency statistics (Milliseconds) and numbers of parameters of above four models are in Table", "1. Training and inference Hyper-parameters for BART , BART 12-6 , BART 12-3 , and BART 12-12 are similar.", "Specifically, all models are optimized using Adam (Kingma and Ba, 2014) with 1 = 0 .", "9 , 2 = 0 .", "999 .", "Learning rates are tuned on validation sets (choose from 1e-5, 3e-5, 5e-5, 7e-5).", "We truncate all documents and summaries to 1024 sub-word tokens.", "We use a batch size of around 80 documents (we limit the max number of tokens on each GPU to 2048) and train our models for 20,000/15,000/6,000 steps with 500 warmup steps for CNNDM, XSum, and NYT, respectively.", "We also employ a weight decay of 0.01.", "For Transformer , the hyper-parameters of the Adam optimizer is a bit different, and we use 1 = 0 .", "9 , 2 = 0 .", "98 .", "Learning rates are picked from 1e-4, 3e-4, 5e-4, 7e-4 accord-130 ing to validation sets.", "The weight decay is set to 0.0001.", "The warmup step we use is 4000.", "We train Transformer for 100 epochs and select the best model w.r.t. their ROUGE scores on validation sets.", "For all models above we apply a label smoothing of 0.1 to prevent overfitting (Pereyra et al., 2017).", "During inference, as common wisdom, we apply beam search.", "The beam size, length penalty, and minimal length are 4, 2.0, and 55 on CNNDM; 6, 0.1, and 1 on XSum; and 4, 0.7, and 80 on NYT, respectively.", "All our models are trained on 8 NVIDIA V100 GPUs.", "The training is fairly fast.", "Training on CNNDM with the teacher model (i.e., BART ) is most time-consuming.", "It takes about 45 minutes for one epoch, and we need 6 epochs in total.", "We evaluate the quality of different summarization systems using ROUGE.", "On CNNDM and XSum datasets, we report full-length F1 based ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) scores.", "Following Durrett et al. (2016); Liu and Lapata (2019), we report limited-length recall based ROUGE-1, ROUGE-2, and ROUGE-L, where generated summaries are truncated to the lengths of gold summaries.", "All ROUGE scores are computed using the ROUGE-1.5.5.pl script 4 .", "Summaries generated by abstractive models may be ungrammatical or unfaithful to the original document.", "Additionally, we also measure the quality of generated summaries by eliciting human judgements.", "We randomly sample 50 documents from the test set of CNNDM.", "12 annotators are invited (they are either native English speakers or graduate students with IELTS test score over 6.5).", "In the evaluation, participants are presented with a document and a list of outputs by different models.", "First, they are asked to evaluate the summaries on three dimensions: fluency (is the summary grammatically correct?), faithfulness (is the summary faithful to the original document?), and coverage (does the summary coverage important information of the document?).", "Then, they are asked to rank the summaries from best to worst as a way of determining the overall quality of summaries.", "Each document is ensured to be annotated by 3 different subjects.", "Our main results are shown in Table", "2. The first block includes several recent abstractive summarization models based on large pre-trained Transformers.", "BERTSUM (Liu and Lapata, 2019) employs BERT (Devlin et al., 2019) as its encoder and uses randomly initialized decoder.", "T5 (Raffel et al., 2020), PEGASUS (Zhang et al., 2020) and BART (Lewis et al., 2020) are three popular large Seq2Seq Transformer models with different pretraining objectives.", "Our own fine-tuning version of BART (BART (ours)) is comparable or slightly better than the original reported BART results, and we use it as the teacher model on the three datasets.", "The second block presents results of student models.", "Shleifer and Rush (2020) compare pseudo-labeling (BART-PL), knowledge distillation using both output and intermediate layers (BART-KD) as well as shrink and fine-tuning (BART-SFT) methods.", "They also use BART as teacher models.", "Note their settings of student models are BART 12-6 on CNNDM and BART 12-3 on XSum.", "Results of our BART 12-3 and BART 12-6 student models are in the third and fourth block.", "We present results of students trained with gold labels (Gold) and regular pseudo labels (Regular) as well as pseudo labels with higher and random attention temperatures (PLATE B12-3 =1 . 5 , PLATE B12-3 =2 . 0 and PLATE B12-3rnd ).", "PLATE B12-3 =1 .", "5 means that the student uses attention temperature coefficient = 1 .", "5 with architecture setting BART 12-3 .", "PLATE B12-3 rnd means that we use random attention temperature of U [1 . 0 , 2 . 0] .", "We observe that using pseudo-labeling methods with higher attention temperatures consistently improves over its counterpart with normal attention temperatures (Regular) across all three datasets, and the differences between them are almost always significant measured with the ROUGE script 5 (see details in Table 2).", "Interestingly, our student models PLATE B12-3 =2 .", "0 and PLATE B12-6 =2 .", "0 outperform all models in comparison (including student models and even the teacher model) on CNNDM.", "Our best performing student model PLATE B12-3 =1 .", "5 outperforms BART-PL, BART-SFT, and BART-KD on XSum.", "Meanwhile, our method is conceptually simpler and can further be combined with their methods with additional train-4 with -c 95 -r 1000 -n 2 -a -m arguments.", "In Section 3.3, we also propose a variant of our method, which employs random attention temperatures (PLATE rnd in Table 2).", "We can see that though random temperature based method is not as good as our best fixed-temperature method, it in general produces decent results.", "Therefore, we recommend using this method when the computing budget is limited.", "Note that we also tried more extreme values as shown in Appendix B, and we find the value of 1.5 or 2.0 works better than others.", "In the fifth block, we additionally conduct self-distillation experiments, which is not the focus of this work.", "Our method improves the teacher model on CNNDM; ROUGE-2/L scores are improved on XSum; while on NYT, there are improvements on ROUGE-1/L.", "Results with the Transformer student (the sixth block) follow a similar trend, although the improvements are smaller.", "It may because the model-Ref Regular PLATE B12-6 =1 .", "ing power of Transformer without pre-training is not large enough to effectively model the differences in pseudo labels.", "It is also interesting to see that students distilled with pseudo-labeling do improve gold label based students using randomly initialized Transformer , but not with pre-trained models (i.e., BART 12-6 and BART 12-3 ), which may also be due to the strong modeling power of large pre-trained Transformers.", "We randomly sample 50 documents from the test set of CNNDM.", "We compare our best student model PLATE B12-6 =2 .", "0 against the 132 Attention Setting R1 R2 RL enc = cross = dec = 2 .", "regular pseudo-labeling model (Regular), another model PLATE B12-6 =1 .", "5 and human reference (Ref).", "We ask human judges to rank the outputs of these models from best to worst.", "We convert the ranks to rank ratings (rank i to 5 i ) and further conduct student t -test on these ratings.", "As shown in Table 3, PLATE B12-6 =2 .", "0 obtains the best ranking score and the difference between PLATE B12-6 =2 .", "0 and the regular pseudo-labeling based method Regular is significant ( p < 0 . 05 ), which indicates our proposed method PLATE indeed produces better summaries.", "Ablation study In a Transformer, there are three types of attention modules (i.e., encoder self-attention, decoder self-attention and decoder cross-attention), and we can scale attention temperatures for all of them or some of them.", "Let enc , cross , and dec denote the attention temperature coefficient of the encoder self-attention module, the decoder cross-attention module, and the decoder self-attention module, respectively.", "As shown in Table 4, using large attention temperature coeffi-cients (2.0) for all three types of attention modules leads to the best result.", "When setting the coefficient of the cross attention module to cross = 1 .", "0 , the ROUGE scores drop most.", "Perhaps this is not surprising, since cross attentions are directly related to the selection of document contents for summarization.", "Besides, the attention temperature of the decoder self-attention is also crucial but not as important as the cross-attention (see the fourth row).", "Comparison with sampling and tuning output layer temperature Sampling based methods can produce more diverse and richer outputs than its beam search based counterpart and has been proven useful in back translation (Edunov et al., 2018).", "We implement the sampling method in Edunov et al. (2018) and Nucleus Sampling (Holtzman et al., 2019), a more advanced sampling method, to generate pseudo labels for distillation.", "We use the BART 12-6 as the student model, and the distillation results on CNNDM are in Table 5.", "As can be seen, both of the sampling based methods above perform worse than the regular beam search based pseudo-labeling method (Regular), let alone ours.", "Besides the attention temperatures, we can also tune the temperature T in the decoder output softmax layer.", "With a proper T (i.e., T = 0 . 5 ) during pseudo label generation, the resulting student model slightly outperforms the baseline student model with regular pseudo labeling method on ROUGE-2/L (see Table 5), but worse than PLATE =2 .", "0 .", "More results with different T s are in Appendix C. 4.5 Analysis Why does our distillation method work?", "To answer this question, we first try to analyze the reasons from both the external characteristics of the summaries generated by the teacher model and the internal characteristics of the teacher's attention mechanism.", "Then, we will give an in-depth explanation.", "Length and novel n -grams We first analyze the pseudo summaries generated by the teacher models.", "We calculate novel n -grams and lengths of generated summaries.", "Note that if an n -gram appears in the summary, but not in the original document, we call it a novel n -gram.", "Proportions of novel n -grams are used to measure the abstractiveness of summaries (See et al., 2017; Liu and Lapata, 2019).", "As shown in Table 6, when using a larger , pseudo summaries are shorter 6 and contain a larger portion of novel n -grams.", "It indicates that the teachers can produce more concise and abstractive summaries, which matches the goal of abstractive summarization.", "Are these pseudo summaries of good quality?", "The performance of the teacher with different attention temperatures on CNNDM test 6 We also try changing the length penalty during teach-ers' inference to make pseudo summaries shorter, but we find this method does not help summarization distillation (see Appendix D for more details).", "set is shown in Table 7.", "Their results are all decent and close to each other (at least for ROUGE-1 and ROUGE-L).", "Interestingly, compared with = 1 .", "0 , the performance of the teacher with = 2 .", "0 is worse, but the resulting student is much better (see Table 2).", "Perhaps not surprisingly, the styles of summaries from students are similar with these from their teachers.", "Concise and abstractive teachers lead to concise and abstractive students (see Table 6).", "Conciseness and abstractiveness are good properties for summarization, which however may not be the case for other generation tasks such as machine translation.", "We apply PLATE to the WMT16 (Bojar et al., 2016) English-German translation task and use Transformer-big as the teacher and Transformer-base as the student.", "With = 1 .", "5 , we obtain a BLEU of 27.90, while the result of the regular pseudo-labeling is 27.79 (more details are in Appendix A).", "Attention We have shown earlier in Figure 1 that with higher attention temperature, cross-attention modules of a teacher can attend to later parts in documents.", "We observe that students behave similarly, and we put more cross attention visualization of students in Appendix F. To obtain corpus-level \u0000>\u0000\u0013\u0000\u0011\u0000\u0013\u0000\u000f\u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0015\u0000@ \u0000>\u0000\u0013\u0000\u0011\u0000\u0015\u0000\u000f\u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0017\u0000@ \u0000>\u0000\u0013\u0000\u0011\u0000\u0017\u0000\u000f\u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0019\u0000@ \u0000>\u0000\u0013\u0000\u0011\u0000\u0019\u0000\u000f\u0000\u0003\u0000\u0013\u0000\u0011\u0000\u001b\u0000@ \u0000>\u0000\u0013\u0000\u0011\u0000\u001b\u0000\u000f\u0000\u0003\u0000\u0014\u0000\u0011\u0000\u0013\u0000@ \u00001\u0000R\u0000U\u0000P\u0000D\u0000O\u0000L\u0000]\u0000H\u0000G\u0000\u0003\u0000W\u0000R\u0000N\u0000H\u0000Q\u0000\u0003\u0000S\u0000R\u0000V\u0000L\u0000W\u0000L\u0000R\u0000Q\u0000\u0003\u0000\u000b\u0000\u0013\u0000\u0011\u0000\u0013\u0000\u0003\u0000W\u0000R\u0000\u0003\u0000\u0014\u0000\u0011\u0000\u0013\u0000\f\u0000\u0003\u0000L\u0000Q\u0000W\u0000H\u0000U\u0000Y\u0000D\u0000O\u0000V\u0000\u0003\u0000L\u0000Q\u0000\u0003\u0000G\u0000R\u0000F\u0000X\u0000P\u0000H\u0000Q\u0000W\u0000V \u0000\u0013\u0000\u0011\u0000\u0013\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u0013\u0000\u0018 \u0000\u0013\u0000\u0011\u0000\u0014\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u0014\u0000\u0018 \u0000\u0013\u0000\u0011\u0000\u0015\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u0015\u0000\u0018 \u0000\u0013\u0000\u0011\u0000\u0016\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u0016\u0000\u0018 \u00003 \u0000U \u0000R\u0000S\u0000R \u0000U \u0000W\u0000L \u0000R\u0000Q \u0000\u0003 \u0000R \u0000I \u0000\u0003 \u0000H \u0000Y \u0000L \u0000G \u0000H \u0000Q \u0000W \u0000\u0003 \u0000D \u0000W\u0000W \u0000H \u0000Q \u0000W\u0000L \u0000R\u0000Q \u0000\u0003 \u0000Z \u0000H \u0000L \u0000J\u0000K \u0000W \u0000V \u0000\u0003 \u0000", "statistics, we further calculate the evident cross-attention weight distributions of the teacher when generating pseudo labels on the training set of CNNDM.", "Note that an attention weight is evident if it is greater than 0.15, and these evident attention weights account for around 15% of all attention weights.", "Specifically, we normalize the token positions of each document to (0 .", "0 , 1 .", "0] and divide the normalized positions into five bins.", "The mean proportions of evident attentions for all bins are shown in Figure", "2. Compared to the teacher with normal attention temperature (pink bar), teachers with higher attention temperatures (blue and green bars) attend less on the heading parts of documents while more on the tail parts of documents.", "pseudo summaries, which makes the teacher provide more summary-like pseudo labels to students.", "High-temperature teachers can alleviate the leading bias problems by providing pseudo labels with better coverage of source documents to students.", "More explanation According to the study of Xu et al. (2020b), the prediction entropy correlates strongly with whether the model is copying or generating, as well as where in the sentence the token is (content selection).", "The decoder tends to copy when the model has a low prediction entropy and generate novel bigrams when the model has a high prediction entropy.", "They also find that high entropy of attention distribution strongly correlates with the model's high prediction entropy.", "Our method with a higher attention temperature makes attention distributions of the teacher model smoother and leads to a higher entropy of attention distributions, which results in a higher prediction entropy.", "Therefore, the model with higher attention temperature tends to copy less and generate more novel tokens.", "The conclusion from Xu et al. (2020b) is in accordance with our observation in Table 6.", "In this work, we propose a simple but effective extension of pseudo-labeling method PLATE for summarization distillation.", "Experiments on three datasets demonstrate that our method can consistently outperform the vanilla pseudo-labeling method.", "Further empirical analysis shows that by using our method, teacher models can generate more concise and abstractive summaries.", "As a result, summaries produced by student models also become more concise and abstractive.", "In the future, we would like to explore our method to other generation tasks as well as self-training with unlabeled data.", "We are also interested in combining our method with other distillation methods and extending our method for better teacher model training." ]
[ "abstain", "objective", "abstain", "result", "objective", "result", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "result", "result", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "other", "method", "other", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "objective", "objective" ]
[ "As a fine-grained task, the annotation cost of aspect term extraction is extremely high.", "Recent attempts alleviate this issue using domain adaptation that transfers common knowledge across domains.", "Since most aspect terms are domain-specific, they cannot be transferred directly.", "Existing methods solve this problem by associating aspect terms with pivot words (we call this passive domain adaptation because the transfer of aspect terms relies on the links to pivots).", "However, all these methods need either manually labeled pivot words or expensive computing resources to build associations.", "In this paper, we propose a novel active domain adaptation method.", "Our goal is to transfer aspect terms by actively supplementing transferable knowledge.", "To this end, we construct syntactic bridges by recognizing syntactic roles as pivots instead of as links to pivots .", "We also build semantic bridges by retrieving transferable semantic prototypes .", "Extensive experiments show that our method significantly outperforms previous approaches.", "Aspect term extraction (ATE) is a fundamental task in aspect-based sentiment analysis.", "Given a review sentence The pizza here is also absolutely delicious. , ATE aims to extract the term pizza .", "Recent studies define ATE as a sequence tagging task and propose supervised taggers (Wang et al., 2017; Xu et al., 2018).", "However, due to the high cost of token-level annotation, the lack of labeled data becomes the main obstacle (Chen and Qian, 2019).", "To alleviate the data deficiency issue, unsupervised domain adaptation is proposed to transfer knowledge from the labeled source domain to the unlabeled target domain.", "Since ATE is a token-level task, it is natural to conduct token-level domain adaptation.", "Then a problem arises: many *Corresponding author.", "aspect terms are domain-specific and cannot be transferred directly.", "We present the proportion of source aspect terms that also appear in target test data in Figure 1.", "As can be seen, in distant transfer pairs like R L , only less than 10% of source aspect terms have appeared in target data.", "Even in a close pair L D , the proportion is no more than 40%.", "In other words, there is a wide discrepancy between the data from different domains, and many aspect terms have to be transferred under the guidance of proper references.", "To solve this problem, previous studies try to associate aspect terms with specific pivot words 1 .", "We name these methods passive domain adaptation because the transfer of aspect terms is dependent on their links to the pivots.", "There are two types of methods along this line.", "(1) Opinion terms as pivots .", "Since aspect and opinion terms usually appear in pairs, it is straightforward to extract aspect terms with the indication from opinion terms.", "Early studies (Li et al., 2012; Ding et al., 2017) use common opinion seeds (e.g., good , fancy ) and pre-defined rules (e.g., good amod NN ) to extract aspect terms across domains.", "However, it is hard to col-lect a complete set of seeds or define high-quality rules, and thus these methods often produce inferior performance.", "Several studies (Wang and Pan, 2018, 2019b) manually annotate all opinion terms in reviews and design neural models to capture aspect-opinion relations via multi-task learning.", "While 1 Pivot words are words which behave in the same way for discriminative learning in both domains (Blitzer et al., 2006).", "getting improvements, these methods induce additional annotation costs.", "(2) Context terms as pivots .", "Since pre-trained language models (PLMs) like BERT represent words w.r.t their contexts, re-cent studies (Xu et al., 2019; Gong et al., 2020) leverage PLMs to transfer aspect terms with common context terms 2 .", "However, not all context terms qualify as pivots (e.g., eat ).", "In addition, PLMs like BERT build word associations mainly based on semantic similarity in co-occurring contexts.", "For an aspect term like pizza , BERT tends to link it to hamburger via a flow like pizza eat hamburger .", "Consequently, it is hard for these methods to identify keyboard in the target domain based on the labeled term pizza in the source domain.", "In this paper, we propose a novel active domain adaptation method.", "Concretely, we construct two types of bridges for all words, which can help transfer aspect terms across domains.", "An example in Figure 2 shows how to identify the unseen target term keyboard based on the source term pizza .", "(1) The syntactic bridge aims to recognize transferable syntactic roles for the words across domains.", "Though pizza and keyboard have almost no semantic relatedness, they often play a similar role in parse trees.", "In view of this, we treat the involved syntactic roles (including POS tag and dependency relations) of a certain word as its syntactic bridge.", "Previous studies also utilize dependency information.", "However, we differ our method from existing ones in that we do not use dependency relations to associate pivot words with aspect terms.", "Instead, we treat syntactic roles themselves as pivot features and do not need any manually annotated pivot words.", "(2) The semantic bridge moves one step further by retrieving transferable prototypes.", "Intuitively, if we correlate pizza with some prototype target terms like { disk , OS , mouse } , the domain discrepancy between the training and testing reviews can be largely reduced.", "Hence we regard the proto-2 Context terms denote all words that are not aspect terms.", "types of a certain word as its semantic bridge and design a syntax-enhanced similarity metric to retrieve them.", "Compared with previous opinion and context term-based methods, building a semantic bridge directly links aspect terms across domains and only requires unlabeled source and target data.", "Based on the syntactic/semantic bridges, we then develop an end-to-end tagger to fuse reviews with these transferable bridges.", "We conduct extensive experiments on three datasets.", "The results show that our method achieves a new state-of-the-art performance with a low computational cost.", "Aspect Term Extraction Early researches for ATE mainly involve pre-defined rules (Hu and Liu, 2004; Popescu and Etzioni, 2005; Wu et al., 2009; Qiu et al., 2011) and hand-crafted features (Li et al., 2010; Liu et al., 2012, 2013; Chen et al., 2014).", "With the development of deep learning, supervised sequence taggers have become the mainstream due to their promising performance (Liu et al., 2015; Wang et al., 2016, 2017; Xu et al., 2018; Ma et al., 2019; Chen and Qian, 2020a).", "More recently, there emerge many studies that interact ATE with other tasks like aspect-level sentiment classification (Wang et al., 2018; He et al., 2019; Chen and Qian, 2020b).", "Since these methods highly depend on abundant domain-specific training data, they can hardly scale across the domains where labeled data is absent.", "Hence it would be more practical to develop unsupervised domain adaptation methods for ATE.", "Domain Adaptation Many domain adaptation methods have been proposed to solve coarse-grained tasks like text classification (Blitzer et al., 2006; Ganin and Lempitsky, 2015; Guo et al., 2020).", "The basic idea in coarse-grained tasks is to transfer pivot words, which does not fit ATE well since most aspect terms are domain-specific non-pivot words.", "There have been a few attempts to this problem, which fall into two lines.", "(1) One is to model aspect-opinion relations.", "Early researches use common opinion seeds and pre-defined dependency link rules to build manual features (Jakob and Gurevych, 2010), conduct bootstrapping (Li et al., 2012), and create pseudo target labels (Ding et al., 2017).", "Due to the incompleteness of seeds and the inflexibility of rules, they often produce inferior performance.", "Subsequent studies (Wang and Pan, 2018, 2019a,b; Li et al., 2019) manually annotate all opinion terms in reviews and design trainable neural models to capture the relations via multi-task learning.", "However, they induce extra annotation costs.", "(2) The other aims to find aspect-context relations.", "Xu et al. (2019) post-trains BERT on the cross-domain corpus to enhance its domain adaptation ability.", "Gong et al. (2020) and Pereg et al. (2020) further incorporate external syntactic information into BERT with auxiliary tasks or modified attention mechanisms, but they still rely on the prior knowledge in BERT.", "These methods often have more than 100M parameters and involve lots of computing power.", "Unlike all the aforementioned methods, we do not associate aspect terms with pivot words but actively transfer them via bridges.", "In this section, we first introduce the cross-domain ATE task.", "We then illustrate how to construct syntactic and semantic bridges.", "Lastly, we present the bridge-based sequence tagging.", "Given a review x = { x 1 , ..., x n } , we formulate ATE as a sequence tagging task that aims to predict a tag sequence y = { y 1 , ..., y n } , where each y i { B, I, O } denotes the beginning of , inside of , and outside of an aspect term.", "In this paper, we focus on the unsupervised domain adaptation for ATE, i.e., labeled training data is not available in the target domain.", "Specifically, given a set of labeled data DS = { ( x Sj , y Sj ) } NS j =1 from the source domain and a set of unlabeled data DU = { ( x Uj ) } NU j =1 from the target domain, our goal is to predict labels y T for the unseen target test data DT = { ( x Tj ) } NT j =1 .", "Given a review sentence x from either domain, we map it with a lookup table E R d e | V | , and generate word embeddings E = { e 1 , ..., e n } R d e n , where | V | is the vocabulary size, and d e is the embedding dimension.", "For cross-domain ATE, we construct bridges for reviews to help directly transfer aspect terms across two domains.", "Syntactic Bridge In natural language, linguistic expressions are rich and flexible.", "In contrast, the syntactic structures are limited and are general across domains.", "Based on this observation, we propose to build connections between source and target words based on their syntactic roles (POS tags and dependency relations) rather than the lexical items.", "For example, from the parsing results in the upper part of Figure 3, the word pizza with a POS tag NN and dependency relations { det , nsubj } might be an aspect term, while those with the RB tag and advmod relation might not.", "Note the sentence The keyboard is in reasonable size. in the target domain has similar parsing results.", "Hence the syntactic roles can serve as supplementary evidence for recognizing aspect terms across domains.", "Several prior studies (Wang and Pan, 2018, 2019b; Pereg et al., 2020) also make use of parsing results.", "However, they only use dependency relations to link words or to propagate word representations.", "For example, given a dependency great nsubj pizza in DS , where great is a known pivot and pizza is an aspect term, the goal is to extract keyboard as an aspect from the target review The keyboard is great in DT .", "The typical syntax based method Hier-Joint (Ding et al., 2017) first locates the pivot great , then utilizes the nsubj dependency to identify the term keyboard .", "Other methods like RNSCN (Wang and Pan, 2018) combine the embedding of the child node ( pizza ) with that of the parent node ( great ) according to the relation type, or reversely (depending on the specific design).", "It can be seen that the dependency relation nsubj here is only used as a link to the pivot.", "We start in the opposite direction, i.e., we aim to fully exploit syntactic roles by recognizing themselves as pivots instead of treating them as links to pivots.", "To achieve this, we present a novel data structure to encode the POS and dependency information by grounding them into involved words.", "As shown in the lower part of Figure 3, for a word x i , we use a one-hot vector b pos RN pos and a multi-hot vector b dep RN dep to represent its POS tag and dependency relation(s), where N pos and N dep are the number of tag/relation types.", "For b dep , we merge all relations involved with x i regardless of the direction (i.e., being the governor or dependent) 3 .", "To enlarge the learning capability, we project b pos and b dep to the same dimensionality with learnable weight matrices 4 and concatenate them to form the syntactic bridge b syn : b syn = ( W pos b pos ) ( W dep b dep ) , (1) where b syn R d e has the same dimensionality with the word embedding e .", "In training, W pos and W dep get trained by labeled samples.", "In testing, we fix them and obtain b syn for DT .", "By doing this, our proposed method well preserves two types of syntactic information throughout the entire learning process.", "As a result, we can take full advantage of their transferable information.", "Semantic Bridge The semantic bridge takes the syntactic roles above as a basis but moves one step further to retrieve transferable prototypes.", "Unlike previous passive methods that construct information flows like pizza good keyboard via opinion terms or pizza offer keyboard via context terms, we aim to construct a direct flow like pizza keyboard .", "For example, to transfer knowledge from pizza in DS to keyboard in DT , we aim to introduce some supplementary target terms like { disk , OS , mouse } in DU for pizza and directly improve its semantic relatedness with keyboard .", "We call these supplementary terms prototypes and will retrieve them to build the semantic bridges 5 .", "PLMs like BERT can find a set of semantically similar terms like { hamburger , salad } for pizza , which can also serve as prototypes.", "However, such prototypes are not suitable for the domain adaptation task, because aspect terms in one domain are often far away from those in another domain in the semantic space.", "To address this problem, we design a syntax-enhanced similarity metric to retrieve transferable semantic prototypes.", "Before starting, we filter the words in DU by frequency and only preserve those appearing more than times.", "We regard these words in unlabeled target data as candidate prototypes and build a prototype bank (cid:101) V from DU accordingly.", "We then conduct retrieval following the procedure in Figure 4.", "For a query word v VS (vocabulary of DS ), 3 This simplification almost has no side effects.", "If a word has a NN tag and det relation, it must be the governer.", "4 In all equations, W denotes a trainable weight matrix.", "5 We retrieve prototypes for all words in the review due to the existence of domain-specific context terms like eat .", "we want to find a prototype term (cid:101) v (cid:101) V that play a similar syntactic role in the target domain.", "Specifi-cally, we first summarize the global usages of v by merging its POS and dependency embeddings in all reviews where v appear in DS : b gpos = { b pos,j =1 | b pos,j =2 | ... | b pos,j = NS } , b gdep = { b dep,j =1 | b dep,j =2 | ... | b dep,j = NS } , (2) where | is the dimension-wise OR operation and NS is the number of reviews in DS .", "Similarly, we can obtain (cid:101) b gpos and (cid:101) b gdep for (cid:101) v .", "We then define the syntax-enhanced similarity between v and (cid:101) v : s.sim ( v, (cid:101) v ) = c ( b g pos , (cid:101) b g pos ) c ( b g dep , (cid:101) b gdep ) c ( e , (cid:101) e ) , (3) where e and (cid:101) e are word embeddings and c ( , ) is the cosine similarity.", "Here the POS and dependency similarities are used to find similar syntactic roles, while the word similarity is used to reduce the noise of prototypes 6 .", "Consequently, we can obtain a s.sim score matrix MS R | VS || (cid:101) V | .", "After ranking, for v , we select the top-K words { (cid:101) v k } Kk =1 with their s.sim scores { (cid:101) s k } Kk =1 from the prototype bank.", "Lastly, we aggregate these prototypes into the semantic bridge b sem of v : b sem = K (cid:88) k =1 (cid:101) s k (cid:101) e k .", "(4) Following the way for DS , we also retrieve transferable prototypes for DU and DT using (cid:101) V .", "In this way, source and target words with the same prototypes can be directly correlated to each other.", "For DU , we can generate a score matrix MU R | VU || (cid:101) V | by calculating the s.sim for all words in DU and all candidate prototypes in (cid:101) V .", "Then we can obtain the semantic bridge b sem for each word in DU in training.", "In testing, DT is unseen and the global b gpos / b gdep are not available.", "Therefore, for a word w in DT , we obtain b sem using MU if w has appeared in DU .", "Otherwise, we temporarily use the local b pos / b dep of w in current tesing sample to replace the global b gpos / b gdep and calculate the s.sim .", "6 A domain-invariant word that appears frequently in both domains should preserve its own information.", "It will have a maximum similarity score with itself since c ( e , (cid:101) e ) = 1 .", "Based on the syntactic and semantic bridges, we now propose a lightweight end-to-end sequence tagger for aspect term extraction.", "As shown in Figure 5, the tagger receives a mixture of DS and DU for training and then makes predictions for DT in testing.", "We then illustrate the details.", "Bridge Fuser Our constructed bridges have two properties.", "(1) Bridges are domain-invariant and should be preserved.", "(2) Bridges can help extract domain-invariant information from e i .", "Therefore, we propose to enhance the embedding e i of a word x i with its transferable bridges b syn,i and b sem,i .", "Specifically, we use a gating operation to fuse bridges.", "Take the syntactic bridge as an example, we first calculate a dimension-wise gate g syn,i : g syn,i = ( W syn ( e i b syn,i )) , (5) where W syn R 2 d e 2 d e , is the Sigmoid function, is concatenation.", "We then scale the concatenated vector e i b syn,i with g syn,i and obtain the syntactic bridge enhanced embedding e syn,i : e syn,i = g syn,i (cid:12) ( e i b syn,i ) , (6) where (cid:12) is an element-wise multiplication.", "The semantic bridge enhanced embedding e sem,i can be calculated similarly.", "We term the model with e i , e syn,i , and e sem,i input as BaseTagger , SynBridge , and SemBridge , respectively.", "Three types of embeddings are collectively called e input,i .", "Feature Extractor Previous studies (Xu et al., 2018) show that low-level token features are insuf-ficient for tagging terms.", "Therefore, we use a CNN encoder containing L stacked convolutional layers with ReLU activation to extract the high-level features f i R d f : f l +1 i = ReLU ( f li c : i + c K l + b l ) , f 0 i = e input,i , (7) where K R d f ( d input ks ) is the kernel group, ks = 2 c + 1 is the kernel size.", "where y i is the prediction of the word x i .", "Domain Classifier Besides BIO tagging, we further enhance the domain-invariance of bridge-based features via domain adversarial training.", "Specifically, we first aggregate f Li to a global representation f g : f g = MaxPool ( f L 1: n ) .", "Then we add a Gradient Reversal Layer (GRL) (Ganin and Lempitsky, 2015) to f g with the scale coefficient and train a domain classifier to distinguish the domain that f g belongs to:", "y d = Softmax ( WO MLP ( GRL ( f g ))) , (10) where y is the domain prediction, and MLP contains", "d contains LD layers with ReLU activation.", "The goal is to minimize the tagging loss for recognizing aspect terms: LBIO = (cid:88) DS n (cid:88) i =1 (cid:96) ( y i , y i ) , (11) where (cid:96) is the cross-entropy loss function.", "On the other hand, the samples from DS and DU are used to train the domain classifier and minimize the following domain classification loss: LDOM = (cid:88) DS D U (cid:96) ( y d , y d ) , (12) where y d = 0 for DS and y d = 1 for DU .", "Training Procedure In training, only samples from DS have corresponding BIO labels y S for token classification.", "The final loss for training the end-to-end tagger is defined as L = LBIO + LDOM .", "Notice that DT is only used in testing.", "There is no data leakage in training, and the task setting is strictly inductive.", "Datasets We use three conventional English datasets from different domains and construct six directed transfer pairs, where R and L are from SemEval 2014 and 2015 (Pontiki et al., 2014, 2015), and D is collected by Hu and Liu (2004).", "Following previous studies (Wang and Pan, 2018, 2019b; Pereg et al., 2020), we use three different splits and each split has a fixed train-test ratio 3:1.", "The detailed statistics of datasets are presented in Table 1 7 .", "7 Our code and data are available at https://github.com/ NLPWM-WHU/BRIDGE.", "Settings We pre-process each dataset by lowercas-ing all words.", "We use the same word2vec vectors as previous studies (Wang and Pan, 2018, 2019a,b) to generate word embeddings, and set the dimensionality d e =100.", "In the syntactic bridge, we use Stanford CoreNLP (Manning et al., 2014) for dependency parsing.", "There are 45 classes of POS tags and 40 classes of dependency relations in three datasets.", "In the semantic bridge, we set the frequency threshold =5, the number of prototypes K =10.", "In the end-to-end tagger, we set the number of convolution layers L =4, and the kernel size ks of each layer is 3, 5, 5, 5, respectively, the number of MLP layers LD =3, and dropout (Srivastava et al., 2014) is applied to layers' outputs with the probability 0.5.", "The dimensionality of features d f =256, the scale coefficient of GRL =0.1.", "We train the tagger for 100 epochs using Adam optimizer (Kingma and Ba, 2015) with the learning rate 1e-4 and batch size 8 in a 1080Ti GPU.", "Evaluation For each transfer pair, we use the labeled training data from the source domain and unlabeled training data from the target domain to train the tagger.", "Then we evaluate the tagger on unseen test data from the target domain.", "We use the mean F1-scores of aspect terms over three splits with three random seeds (i.e., nine runs for each transfer pair) for evaluation 8 .", "We classify all models into three categories.", "Type-I denotes the opinion term-based methods.", "TCRF (Jakob and Gurevych, 2010), RAP (Li et al., 2012), and Hier-Joint (Ding et al., 2017) use manually defined dependency rules.", "RNSCN and 8 The hyperparameter ranges are presented in Appendix A. TRNN (Wang and Pan, 2018, 2019a) model dependency trees with trainable recursive networks.", "SAL (Li et al., 2019) and TIMN (Wang and Pan, 2019b) replace the dependency tree with trainable memory interaction.", "Type-II denotes context term-based methods.", "BERT-Base uses vanilla base BERT (Devlin et al., 2019) for ATE.", "BERT-Cross (Xu et al., 2019) post-trains BERT on a combination of Yelp and Amazon corpus.", "UDA (Gong et al., 2020) and SA-EXAL (Pereg et al., 2020) incorporate syntactic information into BERT with auxiliary tasks and modified attention mechanisms 9 .", "Type-III denotes the proposed active domain adaptation strategy.", "BaseTagger is the tagger without bridges, while SynBridge and SemBridge use syntactic and semantic bridges, respectively.", "The comparison results for all methods are shown in Table 2.", "It is clear that our proposed model achieves a new state-of-the-art performance in terms of the average F1-scores.", "For example, SemBridge outperforms the best TIMN in Type-I by 7.02% and BERT-Cross in Type-II by 5.21%, respectively.", "We also notice that our BaseTagger already outperforms all baselines.", "We attribute this to the design of CNN feature extractor and domain adversarial training (DAT).", "CNN focuses on the N-gram feature rather than a single word and reduces the side effects of non-pivot aspect terms.", "DAT is applied to the sentence-level features, such that they are not misled by the common N-grams that are labeled both 0 and 1.", "9 Since SAL and UDA use extra aspect sentiment labels, we show how to make them fair competitors in Appendix B. SynBridge and SemBridge further improve BaseTagger with a 1.80% and 2.68% absolute gain, respectively.", "This proves the effectiveness of our proposed active domain adaptation strategy.", "Meanwhile, SemBridge is a bit superior to SynBridge.", "The reasons are two-fold.", "(1) The semantic bridges come from prototype words that possess prior embedding knowledge and also contain syntactic information, while the syntactic bridges are merely trained from scratch.", "(2) The retrieved top-K terms make the supplementary information in SemBridge more diverse and abundant than that in SynBridge.", "Among the baselines, early methods using common opinion seeds and pre-defined rules are inferior.", "Relying on annotated opinion terms, the methods like TIMN get some improvements but induce extra annotation costs.", "By incorporating pre-trained BERT with external dependency and cross-domain corpus, UDA, SA-EXAL, and BERT-Cross outperform previous methods, but they need high computational resources.", "In contrast, by using the static Word2vec embeddings, our model can outperform those with dynamic BERT representations.", "This is instructive for other researches in that there is still room for improvement by exploring the syntactic and semantic features beyond the popular BERT-based models 10 .", "With the proposed active domain adaptation strategy, we do not need any manually labeled opinion terms for ATE.", "However, this does not mean that our method cannot handle opinion term extraction (i.e., OTE).", "In contrast, if the labeled opinion terms are provided in DS , we can also conduct the OTE task for DT by simply modifying the tagger.", "In specific, we add an opinion term prediction layer in Eq.8 and then extract aspect and opinion terms simultaneously.", "The results are shown in Table 3.", "Obviously, our method again outperforms all baselines 11 .", "We find a small performance decrease in AVG-AS compared with that in Table 2.", "Similar results are also observed in BERT-Base.", "The reason is that the objective of ATE and OTE may interfere with each other without proper balancing and a sophisticated multi-task learning framework.", "10 We also make some explorations about combining SynBridge and SemBridge, please refer to Appendix C. 11 Please refer to Appendix D for detailed results for all transfer pairs.", "We conduct a series of ablation study to validate the effectiveness of our method.", "The results are shown in Table 4.", "Results 1 2 conform to our previous discussion about BaseTagger that both CNN and domain adversarial training contribute to overall good performance.", "Results 3 6 show the effectiveness of POS and dependency embeddings in SynBridge.", "Specifically, in 5 6, we replace our proposed structure for dependency with frequently-used Tree-LSTM and GCN to model the dependency tree and find a significant drop in performance.", "Results 7 9 show the importance of all three types of similarity for retrieving prototypes in SemBridge.", "There are three key hyperparameters in our method: the scale coefficient of GRL , the frequency threshold , and the number of prototypes K .", "We vary in the range 10 4 1 .", "0 and /K in 1 10 to investigate their impacts and present the results in Figure 6. In Figure", "6(a), when increasing from 10 4 to 10 1 , we enlarge the scale of domain adversarial training in GRL and get small improvements.", "However, the performance does not keep rising when Table 5: Case study.", "= 1 .", "0 .", "This result shows that simply forcing non-pivots to transfer knowledge is not suitable for domain adaptation.", "In Figure", "6(b), is used to balance diversity and accuracy.", "A low means that prototypes are diverse, but some of them are long-tail words and contribute little to the reduction of domain discrepancy.", "On the contrary, a high only preserves frequent prototypes, and some meaningful prototypes are filtered out.", "Therefore, a middle =5 is an appropriate choice.", "For K , the curve is generally upward when more prototypes are introduced.", "This trend is reasonable since more prototypes equal to more target information.", "In Figure 7, we further analyze the impacts of the percentage of unlabeled data PU and the percentage of parsing noise PN .", "For PU , the performance is generally better when more unlabeled target data is introduced.", "Moreover, around 20% 40% unlabeled data is enough to achieve satisfactory performance.", "Notice that SemBridge without unlabeled data will degenerate into BaseTagger since no prototypes can be retrieved.", "For PN , we manually disturb the parsing results to observe the robustness of our method.", "Clearly, after introducing noises on parsing, the performance begins to degrade, but not by a large margin.", "Our method has the ability to resist parsing errors for two reasons.", "First, beyond syntactic roles, we also incorporate embedding similarity when retrieving prototypes (for SemBridge only).", "Second, the gating mechanism can further filter useless syntactic information and maintain the quality of word representations.", "To have a close look, we select a few samples from testing target data for a case study.", "S1 and S2 show the positive impacts of bridges.", "Due to the space limit, we illustrate S1 in detail.", "Since most words in S1 are domain-specific terms in L , RNSCN fails to recognize any aspect terms by simply propagating word representations with dependency.", "BERT-Cross only extracts a part of aspect terms based on its prior knowledge.", "For our bridge-based method, SynBridge supplements syntactic roles { nummod , compound , obj , conj , NNS } for port .", "These syntactic roles also join the representation of usb and help to extract usb ports correctly.", "For SemBridge, the analysis is much straightforward.", "usb is the prototype of typical aspect terms in R like { garlic , thai , banana } , thus the tagger with semantic bridges can easily recognize usb as an aspect term.", "S3 further illustrates how SemBridge helps recover from the wrong parsing results.", "Such results make two syntax based methods RNSCN and SynBridge stop working.", "In contrast, tuna is the prototype of noun words like { nvidia , amd , blade } in L and melt has the verb prototype like { imagine , hang , relax } in R , thus SemBridge correctly extracts tuna and filters out melt in the same time.", "In Table 6, We further present several sample prototypes of the training data from the transfer pairs R L (upper three) and L R (lower three) in SemBridge, where three terms on the left are aspect term, opinion term, and context term, respectively.", "For a source non-pivot term like processor in L , SemBridge enhances it with typical target words like soup and burger .", "As a result, the domain discrepancy between the source and target data is largely reduced with the help of prototypes.", "In practice, for any transfer pairs, the one-time construction of syntactic and semantic bridges can fin-ish within 30 seconds.", "Therefore, we focus on the end-to-end training costs of SynBridge/SemBridge.", "We run five top-performing methods on the transfer pair R L and present the trainable parameter number and running time per epoch of each method in Table 7. We can conclude that our proposed method maintains a quite low computational cost.", "In this paper, we propose a novel active domain adaptation method for aspect term extraction.", "Unlike previous studies that conduct passive domain adaptation by associating aspect terms with pivots, we actively enhance the terms' transferability by constructing syntactic and semantic bridges for them.", "We then design a lightweight end-to-end tagger for bridge-based sequence tagging.", "Experiments on six transfer pairs demonstrate that our method achieves a new state-of-the-art performance with a quite low computational cost.", "We thank the anonymous reviewers for their valuable comments.", "The work described in this paper is supported by the NSFC projects (61572376, 91646206), and the 111 project (B07037)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "method", "abstain", "result", "objective", "result", "abstain", "objective", "objective", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "result", "other", "result", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "objective", "other", "other" ]
[ "Even though several methods have proposed to defend textual neural network (NN) models against black-box adversarial attacks, they often defend against a specific text perturbation strategy and/or require re-training the models from scratch.", "This leads to a lack of generalization in practice and redundant computation.", "In particular, the state-of-the-art transformer models (e.g., BERT, RoBERTa) require great time and computation resources.", "By borrowing an idea from software engineering , in order to address these limitations, we propose a novel algorithm, SHIELD , which modifies and re-trains only the last layer of a textual NN, and thus it patches and transforms the NN into a stochastic weighted ensemble of multi-expert prediction heads.", "Considering that most of current black-box attacks rely on iterative search mechanisms to optimize their adversarial perturbations, SHIELD confuses the attackers by automatically utilizing different weighted ensembles of predictors depending on the input.", "In other words, SHIELD breaks a fundamental assumption of the attack, which is a victim NN model remains constant during an attack.", "By conducting comprehensive experiments, we demonstrate that all of CNN, RNN, BERT, and RoBERTa-based textual NNs, once patched by SHIELD , exhibit a relative enhancement of 15%70% in accuracy on average against 14 different black-box attacks, outperforming 6 defensive baselines across 3 public datasets.", "Source code will be published at github.com/lethaiq/ shield-defend-adversarial-texts .", "Adversarial Text Attack and Defense.", "After being trained to maximize prediction performance, textual NN models frequently become vulnerable to adversarial attacks (Papernot et al., 2016; Wang et al., 2019a).", "In the NLP domain, in general, adversaries utilize different strategies to perturb an input sentence such that its semantic meaning is preserved while successfully letting a target NN model output a desired prediction.", "Text perturbations are typically generated by replacing or inserting critical words (e.g., HotFlip (Ebrahimi et al., 2018), TextFooler (Jin et al., 2019)), characters (e.g., DeepWordBug (Gao et al.), TextBugger (Li et al., 2018)) in a sentence or by manipulating a whole sentence (e.g., SCPNA (Iyyer et al., 2018), GAN-based(Zhao et al., 2018)).", "Since many recent NLP models are known to be vulnerable to adversarial black-box attacks (e.g., fake news detection (Le et al., 2020; Zhou et al., 2019b), dialog systems (Cheng et al., 2019), and so on), robust defenses for textual NN models are required.", "Even though several papers have proposed to defend NNs against such attacks, they were designed for either a specific type of attack (e.g., word or synonym substitution (Wang et al., 2021; Dong et al., 2021; Mozes et al., 2020; Zhou et al., 2021), misspellings (Pruthi et al., 2019), character-level (Pruthi et al., 2019), or word-based (Le et al., 2021)).", "Even though there exist some general defensive methods, most of them enrich NN models by re-training them with adversarial data augmented via known attack strategies (Miyato et al., 2016; Liu et al., 2020; Pang et al., 2020) or with external information such as knowledge graphs (Li and Sethy, 2019).", "However, these augmentations often induce substantial overhead in training or are still limited to only a small set of predefined attacks (e.g., (Zhou et al., 2019a)).", "Hence, we are in search of defense algorithms that directly enhance NN models' structures (e.g., (Li and Sethy, 2019)) while achieving higher generalization capability without the need of acquiring additional data.", "Motivation (Fig. 1) .", "Different from white-box attacks, black-box attacks do not have access to a target model's parameters, which are crucial for achieving effective attacks.", "Hence, attackers often 6661 Figure 1: Motivation of SHIELD : An attacker optimizes a step objective function (score) to search for the best perturbation by iteratively replacing each of the original 5 tokens with a perturbed one.", "query the target model repeatedly to acquire the necessary information for optimizing their strategy.", "From our analyses of 14 black-box attacks published during 20182020 (Table 1), all of them, except for SCPNA (Iyyer et al., 2018), rely on a searching algorithm (e.g., greedy, genetic) to iteratively replace each character/word in a sentence with a perturbation candidate to optimize the choice of characters/words and how they should be crafted to attack the target model (Fig. 1A).", "Even though this process is effective in terms of attack performance, they assume that the model's parameters remain unchanged and the model outputs coher-ent signals during the iterative search (Fig. 1A and 1B).", "Our key intuition is, however, to obfuscate the attackers by breaking this assumption .", "Specifically, we want to develop an algorithm that automatically utilizes a diverse set of models during inference.", "This can be done by training multiple sub-models instead of a single prediction model and randomly select one of them during inference to obfuscate the iterative search mechanism.", "However, this then introduces impractical computational overhead during both training and inference, especially when one wants to maximize prediction accuracy by utilizing complex SOTA sub-models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b).", "Moreover, it also does not guarantee that trained models are sufficiently diverse to fool attackers.", "Furthermore, applying this strategy to existing NN models would also require re-training everything from the scratch, rendering the approach impractical.", "Proposal.", "To address these challenges, we borrow ideas from software engineering where bugs can be readily removed by an external installation patch.", "Specifically, we develop a novel neural patching Attack Method Search Atk Sem.", "algorithm, named as SHIELD , which patches only the last layer of an already deployed textual NN model (e.g., CNN, RNN, transformers(Vaswani et al., 2017; Bahdanau et al.)) and transforms it into an ensemble of multi-experts or prediction heads (Fig. 1C).", "During inference, then SHIELD automatically utilizes a stochastic weighted ensemble of experts for prediction depending on inputs.", "This will obfuscate adversaries' perturbation search, making black-box attacks much more difficult regardless of attack types, e.g., character or word level attacks (Fig. 1C,D).", "By patching only the last layer of a model, SHIELD also introduces lightweight computational overhead and requires no additional training data.", "In summary, our contributions are as follows: 6662 We propose SHIELD , a novel neural patching algorithm that transforms a already-trained NN model to a stochastic ensemble of multi-experts with little computational overhead.", "We demonstrate the effectiveness of SHIELD .", "CNN, RNN, BERT, and RoBERTa-based textual models patched by SHIELD achieve an increase of 15%70% on their robustness across 14 different black-box attacks, outperforming 6 defensive baselines on 3 public NLP datasets.", "To the best of our knowledge, this work by far includes the most comprehensive evaluation for the defense against black-box attacks.", "We introduce Stochastic Multi-Expert Neural Patcher ( SHIELD ) which patches only the last layer of an already trained NN model f ( x , ) and transforms it into an ensemble of multiple expert predictors with stochastic weights.", "These predictors are designed to be strategically selected with different weights during inference depending on the input.", "This is realized by two complementary modules, namely", "(i) a Stochastic Ensemble (SE) module that transforms f ( ) into a randomized ensemble of different heads and", "(ii) a Multi-Expert (ME) module that uses Neural Architecture Search (NAS) to dynamically learn the optimal architecture of each head to promote their diversity.", "This module extends the last layer of f ( ) , which is typically a fully-connected layer (followed by a softmax for classification), to an ensemble of K prediction heads , denoted H = { h ( ) } Kj .", "Each head h j ( ) , parameterized by h j , is an expert predictor that is fed with a feature representation learned by up to the second-last layer of f ( ) and outputs a prediction logit score: h j : f ( x , L 1 ) RQ (cid:55) y j RM , (1) where L 1 are fixed parameters of f up to the last prediction head layer, Q is the size of the feature representation of x generated by the base model f ( x , L 1 ) , and M is the number of labels.", "To aggregate all logit scores returned from all heads, then, a classical ensemble method would average them as the final prediction: y = 1 K (cid:80) Kj y j .", "However, this simple aggregation assumes each h j ( ) H learns from very similar training signals.", "Hence, when L 1 already learns some of the task-dependent information, H will eventually converge not to a set of experts but very similar predictors.", "To resolve this issue, we introduce stochasticity into the process by assigning prediction heads with stochastic weights during both training and inference.", "Specifically, we introduce a new aggregation mechanism: y = 1 KK (cid:88) j j w j y j , (2) where w j weights y j according to head j 's expertise on the current input x , and j [0 , 1] is a probabilistic scalar, representing how much of the weight w j should be accounted for.", "Let us denote w , RK as vectors containing all scalars w j and j , respectively, and y R ( K M ) as the concatenation of all vectors y j returned from each of the heads.", "We calculate w and as follows: w = WT ( y f ( x , L 1 )) + b , (3) = softmax(( w + g ) / ) , (4) where W R ( K M + Q ) K , b RK are trainable parameters, g RK is a noise vector sampled from the Standard Gumbel Distribution and therefore, probability vector is sampled by a technique known as Gumbel-Softmax (Jang et al., 2016) controlled by the noise vector g and the temperature .", "Unlike the standard Softmax, the Gumbel-Softmax is able to learn a categorical distribution (over K heads) optimized for a downstream task (Jang et al., 2016).", "Annealing 0 encourages a pseudo one-hot vector (e.g., [0.94, 0.03, 0.01, 0.02] when K =4 ), which makes Eq.", "(2) a mixture of experts (Avnimelech and Intrator, 1999).", "Importantly, is sampled in an inherently stochastic way depending on the gumbel noise g .", "While W , b is learned to deterministically assigns more weights w to heads that are experts for each input x (Eq.", "(3)), introduces stochasticity into the final logits.", "The multiplication of j w j in Eq.", "(2) then enables us to use different sets of weighted ensemble models while still maintaining the ranking of the most important head .", "Thus, this further diversifies the learning of each expert and confuse attackers when they iteratively try different inputs to find good adversarial perturbations.", "Negative Log Likelihood (NLL) loss following the objective:", "While the SE module facilitates stochastic weighted ensemble among heads, the ME module searches for the optimal architecture for each head that maximizes the diversity in how they make predictions.", "To do this, we utilize the DARTS algorithm (Liu et al., 2019a) as follows.", "Let us denote O j = { o j ( ) } Tt where T is the number of possible architectures to be selected for h j H .", "We want to learn a one-hot encoded selection vector j RT that assigns h j ( ) o j, argmax( j ) ( ) during prediction.", "Since argmax( ) operation is not differentiable, during training, we relax the categorical assignment of the architecture for h j ( ) H to a softmax over all possible networks in O j : h j ( ) 1 TT (cid:88) t exp( tj ) (cid:80) Tt exp( Tj ) o j,t ( ) .", "However, the original DARTS algorithm only optimizes prediction performance.", "In our case, we also want to promote the diversity among heads.", "To do this, we force each h j ( ) to specialize in different features of an input, i.e., in how it makes predictions.", "This can be achieved by maximizing the difference among the gradients of the word-embedding e i of input x i w.r.t to the outputs of each h j ( ) H .", "Hence, given a fixed set of parameters O of all possible networks for every heads, we train all selection vectors { } Kj by optimizing the objective: minimize { } K j L experts = N (cid:88) i K (cid:88) n<m (cid:16) d( e i J n ; e i J m ) || e i J n e i J m || 22 (cid:17) , (7) #Class #Vocab #Example MR (Pang and Lee, 2005) 2 19K 11K CB (Anand et al., 2017) 2 25K 32K HS (Davidson et al.) 3 35K 25K Table 2: Statistics of experimental datasets.", "To report the robustness, we report prediction accuracy under adversarial attacks (Morris et al., 2020), i.e., # of failed attacks over total # of examples.", "A failed attack is only counted when the attacker fails 6664 to perturb (i.e., fail to flip the label of a correctly predicted clean example).", "where d( ) is the cosine-similarity function, and J j is the NLL loss as if we only use a single prediction head h j .", "In this module, however, not only do we want to maximize the differences among gradients vectors, but also we want to ensure the selected architectures eventually converge to good prediction performance.", "Therefore, we train the whole ME module with the following objective: minimize { } Kj LME = LSE + L experts .", "(8) 2.3 Overall Framework To combine the SE and ME modules, we replace Eq.", "(6) into Eq.", "(1) and optimize the overall objective: minimize { } Kj L valME + L valexperts s .", "We employ an iterative training strategy (Liu et al., 2019a) with the Adam optimization algorithm (Kingma and Ba, 2013) as in Alg.", "1. By alternately freezing and training W , b , O and { } Kj using a training set D train and a validation set D val , we want to", "(i) achieve high quality prediction performance through Eq.", "(5) and", "(ii) select the optimal architecture for each expert to maximize their specialization through Eq.", "(7).", "Datasets & Metric.", "Table 2 shows the statistics of all experimental datasets: Clickbait detection (CB) (Anand et al., 2017), Hate Speech detection (HS) (Davidson et al.) and Movie Reviews classification (MR) (Pang and Lee, 2005).", "We split each dataset into train , validation and test set with the ratio of 8:1:1 whenever standard public splits are not available.", "To report prediction performance on clean examples, we use the weighted F1 score to take the distribution of prediction labels into consideration.", "Defense Baselines.", "We want to defend four textual NN models (base models) of different architectures, namely RNN with GRU cells (Chung et al.), transformer -based BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b).", "We compare SHIELD with the following six defensive baselines: Ensemble (Ens.) is the classical ensemble of 5 different base models .", "We use the average of all NLL losses from the base models as the final training loss.", "Diversity Training (DT) (Kariyappa and Qureshi, 2019) is a variant of the Ensemble baseline where a regularization term is added to maximize the coherency of gradient vectors of the input text w.r.t each sub-model.", "DT diversifies the feature-level expertise among heads.", "Adaptive Diversity Promoting (ADP) (Pang et al., 2019) is a variant of Ensemble baseline where a regularization term is added to maximize the diversity among non-maximal predictions of individual sub-models.", "ADP diversifies the class-level expertise among heads.", "Mixup Training (Mixup) (Zhang et al., 2018; Si et al.) trains a base model with data constructed by linear interpolation of two random training samples.", "In this work, we use Mixup to regularize a NN to adapt linear transformation in-between the continuous embeddings of training samples.", "Adversarial Training (AdvT) (Miyato et al., 2016) is a semi-supervised algorithm that optimizes the NLL loss on the original training samples plus adversarial inputs.", "Robust Word Recognizer (ScRNN) (Pruthi et al., 2019) detects and corrects potential adversarial perturbations or misspellings in a text before feeding it to the base model for prediction.", "Note that due to the insufficient memory of GPU Titian Xp to simultaneously train several BERT and RoBERTa sub-models, we exclude Ensemble , DT , and ADP baseline for them.", "Attacks.", "We comprehensively evaluate SHIELD under 14 different black-box attacks (Table 1).", "These attacks differ in their attack levels (e.g., character, word, sentence-based), optimization algorithms for searching adversarial perturbations Model/Dataset MR HS CB AVG RNN 0.73 0.88 0.97 0.86 +Ensemble 0.80 0.90 0.97 0.89 +DT 0.80 0.86 0.97 0.88 +ADP 0.80 0.88 0.97 0.88 +Mixup 0.77 0.87 0.97 0.87 +AdvT 0.76 0.89 0.98 0.88 +ScRNN 0.79 0.85 0.96 0.87 + SHIELD 0.78 0.86 0.97 0.87 ( 1 . 3% ) CNN 0.719 0.900 0.966 0.862 +Ens.", "(e.g., through fixed templates, greedy, genetic-based search).", "Apart from lexical constraints such as limiting # or % of words to manipulate in a sentence, ignoring stop-words, etc., many of them also preserve the semantic meanings of a generated adversarial text via constraining the l2 distance between its representation vector and that of the original text produced by either Universal Sentence Encoder (USE) (Cer et al., 2018) or GloVe embeddings (Pennington et al., 2014).", "Moreover, to ensure that the perturbed texts still look natural, a few of the attack methods employ an external pre-trained language model (e.g., BERT(Devlin et al., 2019), L2W (Holtzman et al., 2018)) to optimize the log-likelihood of the adversarial texts.", "Due to computational limit, we only compare SHIELD with other baselines in 3 representative attacks, namely TextFooler (Jin et al., 2019), DeepWordBug (Gao et al.) and PWWS (Ren et al., 2019).", "They are among the most effective attacks.", "To ensure fairness and reproducibility, we use the external TextAttack (Morris et al., 2020) and OpenAttack (Zeng et al., 2021).", "framework for adversarial text generation and evaluation.", "( K =5 ) with =0 .", "5 .", "For each expert, we set O j to 3 ( T =3 ) possible networks: FCN with 1, 2 and 3 hidden layer(s).", "For each dataset, we use grid-search to search for the best value from { 1 .", "0 , 0 .", "1 , 0 .", "01 , 0 .", "001 } based on the averaged defense performance on the validation set under TextFooler (Jin et al., 2019) and DeepWordBug (Gao et al.).", "We use 10% of the training set as a separate development set during training with early-stop to prevent overfitting.", "We report the performance of the best single model across all attacks on the test set.", "The Appendix includes all details on all models' parameters and implementation.", "Fidelity We first evaluate SHIELD 's prediction performance without adversarial attacks.", "Table 3 shows that all base models patched by SHIELD still maintain similar F1 scores on average across all datasets.", "Although SHIELD with RNN has a slightly decrease in fidelity on Hate Speech dataset, this is negligible compared to the adversarial robustness benefits that SHIELD will provide (More below).", "Computational Complexity Regarding the space complexity, SHIELD can extend a NN into an ensemble model with a marginal increase of # of parameters.", "Specifically, with B denoting # of parameters of the base model, SHIELD has a space complexity of O ( B + KU ) while both Ensemble , DT and ADP have a complexity of O ( KB ) and U B .", "In case of BERT with K =5 , SHIELD only requires an additional 8.3%.", "While traditional ensemble methods require as many as 4 times additional parameters.", "During training, SHIELD only trains O ( KU ) parameters, while other defense methods, including ones using data augmentation, update all of them.", "Specifically, with K =5 , SHIELD only trains 8% of the parameters of the base model and 1.6% of the parameters of other BERT-based ensemble baselines.", "During inference, SHIELD is also 3 times faster than ensemble-based DT and ADP on average.", "Robustness Table 4 shows the performance of SHIELD compared to the base models.", "Overall, SHIELD consistently improves the robustness of base models in 154/168 ( 92%) cases across 14 adversarial attacks regardless of their attack strategies.", "Particularly, all CNN, RNN, BERT and RoBERTa-based textual models that are patched by SHIELD witness relative improvements in the average prediction accuracy from 15% to as much as 70%.", "Especially in the case of detecting clickbait, SHIELD can recover up to 5% margin within the performance on clean examples in many cases.", "This demonstrates that SHIELD provides a versatile neural patching mechanism that can quickly and effectively defends against black-box adversaries without making any assumptions on the attack strategies.", "We then compare SHIELD with all defense baselines under TextFooler (TF), DeepWordBug (DW), and PWWS (PS) attacks.", "These attacks are selected as", "(i) they are among the strongest attacks and", "(ii) they provide foundation mechanisms upon which other attacks are built.", "Table 5 shows that SHIELD achieves the best robustness across all attacks and datasets.", "On average, SHIELD observes an absolute improvement from +9% to +18% in accuracy over the second-best defense algorithms (DT in case of RNN, and AdvT in case of BERT, RoBERTa).", "Moreover, SHIELD outperforms other ensemble-based baselines (DT, ADP), and can be applied on top of a pre-trained BERT or RoBERTa model with only around 8% additional parameters.", "However, that # would increase to 500% ( K 5) in the case of DT and ADP, requiring over half a billion # of parameters.", "not only improves the overall robustness of the patched NN model under a variety of black-box", "attacks, but also induces computational cost that can greatly discourage malicious actors to exercise adversarial attacks in practice.", "We define computational cost as # of queries on a target NN model that is required for a successful attack.", "Since adversaries usually have an attack budget on # of model queries (e.g. a monetary budget, limited API access to the black-box model), the higher # of queries required, the less vulnerable a target model is to adversarial threats.", "A larger budget is crucial for genetic-based attacks because they usually require larger # of queries than greedy-based strategies.", "We have demonstrated in Sec. 3.2 that SHIELD is robust even when the attack budget is unlimited .", "Fig. 2 shows that the performance of RoBERTa after patched by SHIELD also reduces at a slower rate compared to the base RoBERTa model when the attack budget increases, especially under greedy-based attacks.", "Effects of Stochasticity on SHIELD 's Performance .", "Stochasticity in SHIELD comes from two parts, namely", "(i) the assignment of the main prediction head during each inference call and", "(ii) the randomness in the Gumbel-Softmax outputs.", "Regarding", "(i), it happens because during a typical iterative black-box process, an attacker tries different manipulations of a given text.", "When the attacker does so, the input text to the model changes at every iterative step.", "This then leads to the changes of prediction head assignment because each prediction head is an expert at different featurese.g., words or phrases in an input sentence.", "Thus, given an input, the assignment of the expert predictors for a specific set of manipulations stays the same.", "Therefore, even if an attacker repeatedly calls the model with a specific changes on the original sentence, the attacker will not gain any additional information.", "Regarding", "(ii), even though Gumbel-Softmax outputs are not deterministic, it always maintains 6667 the relative ranking of the expert predictor during each inference call with a sufficiently small value of .", "In other words, it will not affect the fidelity of the model across different runs.", "Parameter Sensitivity Analyses.", "Training SHIELD requires hyper-parameter K, T, and .", "We observe that arbitrary value =0 .", "5 , K =5 , T =3 works well across all experiments.", "Although we did not observe any patterns on the effects of K on the robustness, a K 3 performs well across all attacks.", "On the contrary, different pairs of the temperature during training and inference witness varied performance w.r.t to different datasets.", "gives us the flexibility to control the sharpness of the probability vector .", "When 0 , to get closer to one-hot encoded vector, i.e., use only one head at a time.", "Ablation Tests.", "This section tests SHIELD with only either the SE or ME module.", "Table 6 shows that SE and ME performs differently across different datasets and models.", "Specifically, we observe that ME performs better than the SE module in case of Clickbait dataset, SE is better than the ME module in case of Movie Reviews dataset and we have mixed results in Hate Speech dataset.", "Nevertheless, the final SHIELD model which comprises both the SE and ME modules consistently performs the best across all cases.", "This shows that both the ME and SE modules are complementary to each other and are crucial for SHIELD 's robustness.", "In this paper, we limit the architecture of each expert to be an FCN with a maximum of 3 hidden layers (except the base model).", "If we include more options for this architecture (e.g., attention (Luong et al., 2015)), sub-models' diversity will significantly increase.", "The design of SHIELD is model-agnostic and is also applicable to other complex and large-scale NNs such as transformers-based models.", "Especially with the recent adoption of transformer architecture in both NLP and computer vision (Carion et al., 2020; Chen et al., 2020), potential future work includes extending SHIELD to patch other complex NN models (e.g., T5 (Raf-fel et al., 2020)) or other tasks and domains such as Q&A and language generation.", "Although our work focus is not in robust transferability, it can accommodate so simply by unfreezing the base layers f ( x , L 1 ) in Eq.", "(1 during training with some Dataset Movie Reviews Hate Speech Clickbait Attack TF DW PS TF DW PS TF DW PS RNN 0.02 0.2 0.09 0.09 0.26 0.32 0.31 0.67 0.46 + SE Only 0.02 0.17 0.08 0.09 0.2 0.32 0.52 0.72 0.61 + ME Only 0.02 0.14 0.07 0.13 0.03 0.01 0.57 0.79 0.61 + SHIELD 0.18 0.44 0.3 0.26 0.61 0.54 0.78 0.9 0.85 CNN 0.01 0.13 0.06 0.03 0.1 0.14 0.45 0.7 0.57 + SE Only 0.02 0.15 0.07 0.24 0.42 0.42 0.46 0.64 0.61 + ME Only 0.18 0.19 0.07 0.1 0.25 0.29 0.60 0.80 0.69 + SHIELD 0.19 0.38 0.28 0.19 0.32 0.34 0.74 0.86 0.81 BERT 0.09 0.2 0.19 0.26 0.16 0.38 0.49 0.5 0.49 + SE Only 0.07 0.18 0.16 0.26 0.28 0.32 0.45 0.49 0.62 + ME Only 0.06 0.2 0.15 0.21 0.28 0.27 0.74 0.81 0.82 + SHIELD 0.26 0.42 0.35 0.37 0.55 0.44 0.92 0.95 0.94 RoBERTa 0.06 0.18 0.16 0.1 0.12 0.12 0.37 0.34 0.45 + SE Only 0.13 0.22 0.19 0.13 0.26 0.29 0.57 0.70 0.71 + ME Only 0.07 0.17 0.15 0.22 0.4 0.31 0.8 0.87 0.85 + SHIELD 0.39 0.55 0.45 0.37 0.55 0.44 0.93 0.96 0.94 Table 6: Complementary role of SE and ME .", "Defending against Black-Box Attacks.", "Most of previous works (e.g., (Le et al., 2021; Zhou et al., 2021; Keller et al., 2021; Pruthi et al., 2019; Dong et al., 2021; Mozes et al., 2020; Wang et al., 2021; Jia et al., 2019) in adversarial defense are designed either for a specific type (e.g., word, synonym-substitution as in certified training (Jia et al., 2019), misspellings (Pruthi et al., 2019)) or level (e.g., character or word-based) of attack.", "Thus, they are usually evaluated against a small subset of ( 4 ) attack methods.", "Despite there are works that propose general defense methods, they are often built upon adversarial training (Goodfellow et al., 2015) which requires training everything from scratch (e.g., (Si et al.; Miyato et al., 2016; Zhang et al., 2018) or limited to a set of predefined attacks (e.g., (Zhou et al., 2019a)).", "Although adversarial training-based defense works well against several attacks on BERT and RoBERTa, its performance is far out-weighted by SHIELD (Table 5).", "Contrast to previous approaches, SHIELD addresses not the characteristics of the resulted perturbations from the attackers but their fundamental attack mechanism, which is most of the time an iterative perturbation optimization process (Fig. 1).", "This allows SHIELD to effectively defend against 14 different black-box attacks (Table 1), showing its effectiveness in practice.", "To the best of our knowledge, by far, this works also evaluate with 6668 the most comprehensive set of attack methods in the adversarial text defense literature.", "Ensemble-based Defenses.", "SHIELD is distinguishable from previous ensemble-based defenses on two aspects.", "First, previous approaches such as DT (Kariyappa and Qureshi, 2019), ADP (Pang et al., 2019) are mainly designed for computer vision.", "Applying these models to the NLP domain faces a practical challenge where training multiple memory-intensive SOTA sub-models such as BERT or RoBERTa can be very costly in terms of space and time complexities.", "In contrast, SHIELD enables to hot-fix a complex NN by replacing and training only the last layer, removing the necessity of re-training the entire model from scratch.", "Second, previous methods (e.g., DT and ADP) mainly aim to reduce the dimensionality of adversarial subspace , i.e., the subspace that contains all adversarial examples, by forcing the adversaries to attack a single fixed ensemble of diverse sub-models at the same time.", "This then helps improve the transferability of robustness on different tasks.", "However, our approach mainly aims to dilute not transfer but direct attacks by forcing the adversaries to attack stochastic, i.e., different, ensemble variations of sub-models at every inference passes.", "This helps SHIELD achieve a much better defense performance compared to DT and ADP across several attacks (Table 5).", "This paper presents a novel algorithm, SHIELD , which consistently improves the robustness of textual NN models under black-box adversarial attacks by modifying and re-training only their last layers.", "By extending a textual NN model of varying architectures (e.g., CNN, RNN, BERT, RoBERTa) into a stochastic ensemble of multiple experts, SHIELD utilizes differently-weighted sets of prediction heads depending on the input.", "This helps SHIELD defend against black-box adversarial attacks by breaking their most fundamental assumptioni.e., target NN models remain unchanged during an attack.", "SHIELD achieves average relative improvements of 15%70% in prediction accuracy under 14 attacks on 3 public NLP datasets, while still maintaining similar performance on clean examples.", "Thanks to its model-and domain-agnostic design, we expect SHIELD to work properly in other NLP domains.", "We address two practical adversarial attack scenarios and how SHIELD can help defend against them.", "First, adversaries can attempt to abuse social media platforms such as Facebook by posting ads or recruitment for human-trafficking, protests, or by spreading misinformatione.g., vaccine-related.", "To do so, the adversaries can directly use one of the black-box attacks in the literature to iteratively craft a posting that will not be easily detected and removed by the platforms.", "In some cases, a good attack method only requires a few trials to successfully fool such platforms.", "Our method can help confuse the attackers with inconsistent signals, hence reduce the chance they succeed.", "Second, many popular services and platforms such as the NYTimes, the Southeast Missourian, OpenWeb, Disqus, Red-dit, etc. rely on a 3rd party APIs such as Perspective API 1 for detecting toxic comments onlinee.g., racist, offensive, personal attacks.", "However, these public APIs have been shown to be vulnerable against black-box attacks in literature (Li et al., 2018).", "The attacker can use a black-box attack method to attack these public APIs in an iterative manner, then retrieve the adversarial toxic comments and use those on these platforms without the risk of being detected and removed by the system.", "Since these malicious behaviors can endanger public safety and undermine the quality of online information, our work has practical values and can have broad societal impacts.", "This research was supported in part by NSF awards #1820609, #1915801, and #2114824.", "The work of Noseong Park was partially supported by the Yonsei University Research Fund of 2021, and the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No.", "2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University))." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "This paper introduces Dynamic Programming Encoding (DPE), a new segmentation algorithm for tokenizing sentences into subword units.", "We view the subword segmentation of output sentences as a latent variable that should be marginalized out for learning and inference.", "A mixed character-subword transformer is proposed, which enables exact log marginal likelihood estimation and exact MAP inference to find target segmentations with maximum posterior probability.", "DPE uses a lightweight mixed character-subword transformer as a means of pre-processing parallel data to segment output sentences using dynamic programming.", "Empirical results on machine translation suggest that DPE is effective for segmenting output sentences and can be combined with BPE dropout for stochastic segmentation of source sentences.", "DPE achieves an average improvement of 0.9 BLEU over BPE (Sennrich et al., 2016) and an average improvement of 0.55 BLEU over BPE dropout (Provilkov et al., 2019) on several WMT datasets including English (German, Romanian, Estonian, Finnish, Hungarian).", "The segmentation of rare words into subword units (Sennrich et al., 2016; Wu et al., 2016) has become a critical component of neural machine translation (Vaswani et al., 2017) and natural language understanding (Devlin et al., 2019).", "Subword units enable open vocabulary text processing with a negligible pre-processing cost and help maintain a desirable balance between the vocabulary size and decoding speed.", "Since subword vocabularies are built in an unsupervised manner (Sennrich et al., 2016; Wu et al., 2016), they are easily applicable to any language.", "Given a fixed vocabulary of subword units, rare words can be segmented into a sequence of subword units in different ways.", "For instance, un+conscious and uncon+scious are both suitable segmentations for the word unconscious.", "This paper studies the impact of subword segmentation on neural machine translation, given a fixed subword vocabulary, and presents a new algorithm called Dynamic Programming Encoding (DPE) .", "1. Greedy algorithms: Wu et al. (2016) segment words by recursively selecting the longest subword prefix.", "Sennrich et al. (2016) recursively combine adjacent word fragments that co-occur most frequently, starting from characters.", "2. Stochastic algorithms (Kudo, 2018; Provilkov et al., 2019) draw multiple segmentations for source and target sequences resorting to randomization to improve robustness and generalization of translation models.", "3. Dynamic programming algorithms, studied here, enable exact marginalization of subword segmentations for certain sequence models.", "We view the subword segmentation of output sentences in machine translation as a latent variable that should be marginalized out to obtain the probability of the output sentence given the input.", "On the other hand, the segmentation of source sentences can be thought of as input features and can be randomized as a form of data augmentation to improve translation robustness and generalization.", "Unlike previous work, we recommend using two distinct segmentation algorithms for tokenizing source and target sentences: stochastic segmentation for source and dynamic programming for target sentences.", "We present a new family of mixed character-subword transformers, for which simple dynamic programming algorithms exist for exact marginalization and MAP inference of subword segmentations.", "The time complexity of the dynamic programming algorithms is O ( T V ) , where T is the length of the target sentence in characters, and V is the size of the subword vocabulary.", "By comparison, even computing the conditional probabilities of subword units in an autoregressive model requires O ( T V ) to estimate the normalizing con-stant of the categorical distributions.", "Thus, our dynamic programming algorithm does not incur additional asymptotic costs.", "We use a lightweight mixed character-subword transformer as a means to pre-process translation datasets to segment output sentences using DPE for MAP inference.", "The performance of a standard subword transformer (Vaswani et al., 2017) trained on WMT datasets tokenized using DPE is compared against Byte Pair Encoding (BPE) (Sennrich et al., 2016) and BPE dropout (Provilkov et al., 2019).", "Empirical results on English (German, Romanian, Estonian, Finnish, Hungarian) suggest that stochastic subword segmentation is effective for tokenizing source sentences, whereas deterministic DPE is superior for segmenting target sentences.", "DPE achieves an average improvement of 0.9 BLEU over greedy BPE (Sennrich et al., 2016) and an average improvement of 0.55 BLEU over stochastic BPE dropout (Provilkov et al., 2019) 1 .", "Neural networks have revolutionized machine translation (Sutskever et al., 2014; Bahdanau et al., 2015; Cho et al., 2014).", "Early neural machine translation (NMT) systems used words as the atomic element of sentences.", "They used vocabularies with tens of thousands words, resulting in prohibitive training and inference complexity.", "While learning can be sped up using sampling techniques (Jean et al., 2015), word based NMT models have a dif-ficult time handling rare words, especially in morphologically rich languages such as Romanian, Estonian, and Finnish.", "The size of the word vocabulary should increase dramatically to capture the compositionality of morphemes in such languages.", "More recently, many NMT models have been developed based on characters and a combination of characters and words (Ling et al., 2015; Luong and Manning, 2016; Vylomova et al., 2017; Lee et al., 2017; Cherry et al., 2018).", "Fully character based models (Lee et al., 2017; Cherry et al., 2018) demonstrate a significant improvement over word 1 code and corpora: https://github.com/xlhex/dpe based models on morphologically rich languages.", "Nevertheless, owing to the lack of morphological information, deeper models are often required to obtain a good translation quality.", "Moreover, elongated sequences brought by a character representation drastically increases the inference latency.", "In order to maintain a good balance between the vocabulary size and decoding speed, subword units are introduced in NMT (Sennrich et al., 2016; Wu et al., 2016).", "These segmentation approaches are data-driven and unsupervised.", "Therefore, with a negligible pre-processing overhead, subword models can be applied to any NLP task (Vaswani et al., 2017; Devlin et al., 2019).", "Meanwhile, since subword vocabularies are generated based on word frequencies, only the rare words are split into subword units and common words remain intact.", "Previous work (Chan et al., 2016; Kudo, 2018) has explored the idea of using stochastic subword segmentation with multiple subword candidates to approximate the log marginal likelihood.", "Kudo (2018) observed marginal gains in translation quality at the cost of introducing additional hyper-parameters and complex sampling procedures.", "We utilize BPE dropout (Provilkov et al., 2019), a simple stochastic segmentation algorithm for tokenizing source sentences.", "Dynamic programming has been used to marginalize out latent segmentations for speech recognition (Wang et al., 2017), showing a consistent improvement over greedy segmentation methods.", "In addition, dynamic programming has been successfully applied to learning sequence models by optimizing edit distance (Sabour et al., 2018) and aligning source and target sequences (Chan et al., 2020; Saharia et al., 2020).", "We show the effectiveness of dynamic programming for segmenting output sentences in NMT using a mixed character-transformer in a pre-processing step.", "Let x denote a source sentence and y = ( y 1 , . . . , y T ) denote a target sentence comprising T characters.", "The goal of machine translation is to learn a conditional distribution p ( y | x ) from a large corpus of source-target sentences.", "State-of-the-art neural machine translation systems make use of a dictionary of subword units to tokenize the target sentences in a more succinct way as a sequence of M T subword units.", "Given a subword vocabulary, there are multiple ways to segment a rare word into a sequence of subwords (see Figure 1).", "The common practice in neural machine translation considers subword segmentation as a pre-process and uses greedy algorithms to segment each word across a translation corpus in a consistent way.", "This paper aims to find optimal subword segmentations for the task of machine translation.", "Let z = ( z 1 , .., z M +1 ) denote a sequence of character indices 0 = z 1 < z 2 < . . . < z M < z M +1 = T in an ascending order, defining the boundary of M subword segments { y z i ,z i +1 } Mi =1 .", "Let y a,b [ y a +1 , . . . , y b ] denote a subword that spans the segment between ( a + 1) th and b th characters, including the boundary characters.", "For example, given a subword dictionary { c', a', t', at', ca' } , the word cat' may be segmented using z = (0 , 1 , 3) as (c', at'), or using z = (0 , 2 , 3) as (ca', t'), or using z = (0 , 1 , 2 , 3) as (c', a', t').", "The segmentation z = (0 , 3) is not valid since cat' does not appear in the subword vocabulary.", "Autoregressive language models create a categorical distribution over the subword vocabulary at every subword position and represent the log-probability of a subword sequence using chain rule, log p ( y , z ) = (cid:88) | z | i =1 log p ( y z i ,z i +1 | y z 1 ,z 2 , . . . , y z i 1 ,z i ) .", "Note that we suppress the dependence of p on x to reduce notational clutter.", "Most neural machine translation approaches assume that z is a deterministic function of y and implicitly assume that log p ( y , z ) log p ( y ) .", "We consider a subword segmentation z as a latent variable and let each value of z Z y , in the set of segmentations compatible with y , contribute its share to p ( y ) according to p ( y ) = (cid:80) z p ( y , z ) , log p ( y ) = log (cid:88) z Z y exp | z | (cid:88) i =1 log p ( y z i ,z i +1 | . . . , y z i 1 ,z i ) .", "(2) Note that each particular subword segmentation z Z y provides a lower bound on the log marginal likelihood log p ( y ) log p ( y , z ) .", "Hence, optimizing (1) for a greedily selected segmentation can be justified as a lower bound on (2).", "That said, optimizing (2) directly is more desirable.", "Unfortunately, exact marginalization over all segmentations is computationally prohibitive in a combinatorially large space Z y , especially because the Figure 1: An illustration of different ways of segmenting unconscious' into subword units.", "probability of each subword depends on the segmentation of its conditioning context.", "In the next section, we discuss a sequence model in which the segmentation of the conditioning context does not influence the probability of the next subword.", "We describe an efficient Dynamic Programming algorithm to exactly marginalize out all possible subword segmentations in this model.", "We propose a mixed character-subword transformer architecture, which enables one to marginalize out latent subword segmentations exactly using dynamic programming (see Figure 2).", "Our key insight is to let the transformer architecture process the inputs and the conditioning context based on characters to remain oblivious to the specific choice of subword segmentation in the conditioning context and enable exact marginalization.", "That said, the output of the transformer is based on subword units and at every position it creates a categorical distribution over the subword vocabulary.", "More precisely, when generating a subword y z i ,z i +1 , the model processes the conditioning context ( y z 1 , . . . , y z i ) based solely on characters using, log p ( y , z ) = (cid:88) | z | i =1 log p ( y z i ,z i +1 | y z 1 , ..., y z i ) , (3) where the dependence of p on x is suppressed to reduce notational clutter.", "Given a fixed subword vocabulary denoted V , at every character position t within y , the mixed character-subword model induces a distribution over the next subword w V based on, p ( w | y 1 , .., y t )= exp( f ( y 1 , .., y t ) (cid:62) e ( w )) (cid:80) w (cid:48) V exp( f ( y 1 , .., y t ) (cid:62) e ( w (cid:48) )) where f ( ) processes the conditioning context using a Transformer, and e ( ) represents the weights of the softmax layer.", "As depicted in in Figure 2, the mixed character-subword Transformer consumes characters as input generates subwords as output.", "This figure only shows the decoder architecture, since as the encoder that processes x is a standard subword Transformer.", "Once a subword w is emitted at time step t , the characters of the subword w are fed into the decoder for time steps t + 1 to t + | w | , and the next subword is generated at time step t + | w | , conditioned on all of the previously generated characters.", "The training objective for our latent segmentation translation model is (cid:80) ( x , y ) D log P ( y | x ) where D is the training corpus consisting of parallel bilingual sentence pairs.", "Maximizing the training objective requires marginalization and the computation of the gradient of the log marginal likelihood.", "model, the probability of a subword only depends on the character-based encoding of the conditioning context and not its segmentation, as in (3).", "This means that we can compute the log marginal likelihood for a single example y , exactly, using the Dynamic Programming algorithm shown in Algorithm", "1. The core of the algorithm is line 3, where the probability of the prefix string y 0 ,k is computed by summing terms corresponding to different segmentations.", "Each term consists of the product of the probability of a subword y j,k times the probability of its conditioning context ( y 1 , . . . , y j ) .", "The running time of the algorithm is O ( mT ) , where T is the length of the string, and m is the size of the longest subword unit in the vocabulary.", "Gradient Computation.", "We use automatic differentiation in PyTorch to backpropagate through the dynamic program in Algorithm 1 and compute its gradient.", "Compared to a standard Transformer decoder, our mixed character-subword Transformer is 8x slower with a larger memory footprint, due to computation involved in the DP algorithm and large sequence length in characters.", "To address these issues, we reduce the number of transformer layers from 6 to 4, and accumulate 16 consecutive gradients before one update.", "Once the mixed character-subword transformer is trained, it is used to segment the target side of a bilingual corpus.", "We randomize the subword segmentation of source sentences using BPE dropout (Provilkov et al., 2019).", "Conditional on the source sentence, we use Algorithm 2, called Dynamic Programming Encoding (DPE) to find a segmentation of the target sentence with highest posterior probability.", "This algorithm is similar to the marginalization algorithm, where we use a max operation instead of log-sum-exp.", "The mixed character-subword transformer is used only for to-kenization, and a standard subword transformer is trained on the segmented sentences.", "For inference using beam search, the mixed character-subword transformer is not needed.", "z backtrace ( b 1 , .., b T", "(cid:46) backtrace the best segmentation using b", "Dataset We use WMT09 for En-Hu, WMT14 for En-De, WMT15 for En-Fi, WMT16 for En-Ro and WMT18 for En-Et.", "We utilize Moses toolkit 2 to pre-process all corpora, and preserve the true case of the text.", "Unlike Lee et al. (2018), we retain diacritics for En-Ro to retain the morphological richness.", "We use all of the sentence pairs where the length of either side is less than 80 tokens for.", "training.", "Byte pair encoding (BPE) (Sennrich et al., 2016) is applied to all language pairs to construct a subword vocabulary and provide a baseline segmentation algorithm.", "The statistics of all corpora is summarized in Table", "1. Training with BPE Dropout.", "We apply BPE dropout (Provilkov et al., 2019) to each mini-batch.", "For each complete word, during the BPE merge operation, we randomly drop a particular merge with a probability of 0.05.", "This value worked the best in our experiments.", "A word can be split into different segmentations at the training stage, which helps improve the BPE baseline.", "DPE Segmentation.", "DPE can be used for target sentences, but its use for source sentences is not justified as source segmentations should not be marginalized out.", "Accordingly, we use BPE dropout for segmenting source sentences.", "That is, 2 https://github.com/moses-smt/mosesdecoder Figure 3: The workflow of the proposed DPE approach.", "we train a mixed character-subword transformer to marginalize out the latent segmentations of a target sentence, given a randomized segmentation of the source sentence by BPE dropout.", "After the mixed character-subword transformer is trained, it is used to segment the target sentences as describe in section 4.2 for tokenization.", "As summarized in Figure 3, we first train a mixed character-subword transformer with dynamic programming.", "Then, this model is frozen and used for DPE segmentation of target sentences.", "Finally, a standard subword transformer is trained on source sentences segmented by BPE dropout and target sentences segmented by DPE.", "The mixed character-subword transformer is not needed for translation inference.", "Transformer Architectures.", "We use transformer models to train three translation models on BPE, BPE dropout, and DPE corpora.", "We make use of transformer base for all of the experiments.", "Table 2 shows the main results.", "First, we see that BPE dropout consistently outperforms BPE across language pairs.", "In Table 2, the column labeled to 1 shows the improvement of BPE dropout over Method BPE BPE dropout 1 This paper 2 Source segmentation BPE BPE dropout BPE dropout Target segmentation BPE BPE dropout DPE En De 27.11 27.27 +0.16 27.61 +0.34 En Ro 27.90 28.07 +0.17 28.66 +0.59 En Et 17.64 18.20 +0.56 18.80 +0.60 En Fi 15.88 16.18 +0.30 16.89 +0.71 En Hu 12.80 12.94 +0.14 13.36 +0.42 De En 30.82 30.85 +0.03 31.21 +0.36 Ro En 31.67 32.56 +0.89 32.99 +0.43 Et En 23.13 23.65 +0.52 24.62 +0.97 Fi En 19.10 19.34 +0.24 19.87 +0.53 Hu En 16.14 16.61 +0.47 17.05 +0.44 Average 22.22 22.57 +0.35 23.12 +0.55 Table 2: Average test BLEU scores (averaged over 3 independent runs) for 3 segmentation algorithms (BPE (Sen-nrich et al., 2016), BPE dropout (Provilkov et al., 2019) and our DPE algorithm) on 10 different WMT datasets.", "BPE.", "This gain can be attributed to the robustness of the NMT model to the segmentation error on the source side, as our analysis in Section 5.3 will confirm.", "Second, we observe further gains resulted from DPE compared to BPE dropout.", "The column labeled 2 shows the improvement of DPE over BPE dropout.", "DPE provides an average improvement of 0.55 BLEU over BPE dropout and BPE dropout provides an average improvement of 0.35 BLEU over BPE.", "As our proposal uses BPE dropout for segmenting the source, we attribute our BLEU score improvements to a better segmentation of the target language with DPE.", "Finally, compared to BPE for segmenting the source and target, our proposed segmentation method results in large improvements in the translation quality, up to 1.49 BLEU score improvements in Et En.", "Table 3 shows examples of target sentences segmented using DPE and BPE and the corresponding source sentences.", "In addition, Table 4 presents the top 50 most common English words that result in a disagreement between BPE and DPE segmentations based on the Et En corpus.", "For DPE, for each word, we consider all segmentations produced and show the segmentation that attains the highest frequency of usage in Table 4.", "As can be observed, DPE produces more linguistically plausible morpheme-based subwords compared to BPE.", "For instance, BPE segments carts into car+ts , as both car and ts are common subwords and listed in the BPE merge table.", "By contrast DPE segments carts into cart+s .", "We attribute the linguistic characteristics of the DPE segments to the fact that DPE conditions the segmentation of a target word on the source sentence and the previous tokens of the target sentence, as opposed to BPE, which mainly makes use of frequency of subwords, without any context.", "DPE generally identifies and leverages some linguistic properties, e.g., plural, antonym, normalization, verb tenses, etc.", "However, BPE tends to deliver less linguistically plausible segmentations, possibly due to its greedy nature and the lack of context.", "We believe this phenomenon needs further investigation, i.e., the contribution of source vs. target context in DPE segmentations, and a quantitative evaluation of linguistic nature of word fragments produced by DPE.", "We will leave this to future work.", "Conditional Subword Segmentation.", "One of our hypothesis for the effectiveness of subword segmentation with DPE is that it conditions the segmentation of the target on the source language.", "To verify this hypothesis, we train mixed character-subword Transformer solely on the target language sentences in the bilingual training corpus using the language model training objective.", "This is in contrast to the mixed character-subword model used in the DPE segmentation of the main results in Table 2, where the model is conditioned on the source language and trained on the sentence pairs using a conditional language model training objective.", "Once the mixed character-subword Transformer language model is trained, it is then used to segment the target sentence of the bilingual corpus in the pre-processing step before a translation model is trained.", "Table 5 shows the results.", "It compares the unconditional language model (LM) DPE vs the conditional DPE for segmenting the target language, where we use BPE dropout for segmenting the source language.", "We observe that without the information from the source, LM DPE is on-par to BPE, and is significantly outperformed by conditional DPE.", "This observation confirms our hypothesis that segmentation in NMT should be source-dependent.", "We are further interested in analyzing the differences of the target language segmentation depending on the source language.", "For this analysis, BPE DPE (ours) recognises recognise + s advocates advocate + s eurozone euro + zone underlines underline + s strengthens strengthen + s entrepreneurship entrepreneur + ship acknowledges acknowledge + s 11.30 11 + .30 wines wine + s pres + ently present + ly f + illed fill + ed endors + ement endorse + ment blo + c bl + oc cru + cially crucial + ly eval + uations evaluation + s tre + es tr + ees tick + ets tick + et + s predic + table predict + able multilater + alism multilateral + ism rat + ings rating + s predic + ted predict + ed mo + tives motiv + es reinfor + ces reinforce + s pro + tocols protocol + s pro + gressively progressive + ly sk + ill ski + ll preva + ils prevail + s decent + ralisation decent + ral + isation sto + red stor + ed influ + enz + a influen + za margin + alised marginal + ised 12.00 12 + .00 sta + ying stay + ing intens + ity intensi + ty rec + ast re + cast guid + eline guide + line emb + arked embark + ed out + lines outline + s scen + ari + os scenario + s n + ative na + tive ma + ture ma + ture preven + tative prevent + ative hom + eland home + land bat + hing bath + ing endang + ered endanger + ed cont + inen + tal continent + al t + enth ten + th vul + n + era + bility vul + ner + ability realis + ing real + ising t + ighter tight + er Table 4: Word fragments obtained by BPE vs. DPE.", "we filtered out a multilingual parallel corpus from WMT, which contains parallel sentences in three languages English, Estonian and Romanian.", "That is, for each English sentence we have the corresponding sentences in Et and Ro.", "We then trained two DPE segmentation models for the translation tasks of Et En and Ro En, where English is the target language.", "Figure 4 shows when conditioning Source BPE drop BPE drop BPE drop Target BPE drop LM DPE DPE En Ro 28.07 28.07 28.66 En Hu 12.94 12.87 13.36 Ro En 32.56 32.57 32.99 Hu En 16.61 16.41 17.05 Table 5: DPE-LM learns a segmentation of the target based on language modelling, which is not conditioned on the source language.", "The differences are more significant for low frequency words.", "Another aspect of DPE segmentation method is its dependency on the segmentation of the source.", "As mentioned, we segment the target sentence on the fly using our mixed character-subword model given a randomized segmentation of the source produced by BPE dropout.", "That means during the training of the NMT model where we use BPE dropout for the source sentence, the corresponding target sentence may get a different DPE segmentation given the randomized segmentation of the source sentence.", "We are interested in the effectiveness of the target segmentation if we commit to a fixed DPE segmentation conditioned on the BPE segmentation of the input.", "Table 6 shows the results.", "We observe that there is a marginal drop when using the fixed DPE, which indicates that the encoder can benefit from a stochastic segmentation, while the decoder prefers a deterministic segmentation corresponding to the segmentation of the source.", "DPE vs BPE.", "We are interested to compare the effectiveness of DPE versus BPE for the target, given BPE dropout as the same segmentation Source BPE drop BPE drop Target DPE Fixed DPE On The Fly En Ro 28.58 28.66 En Hu 13.14 13.36 En Et 18.51 18.80 Ro En 32.73 32.99 Hu En 16.82 17.05 Et En 24.37 24.62 Table 6: DPE Fixed obtains a fixed segmentation of the target sentence given the BPE-segmented source sentence, whereas DPE On The Fly obtain the best segmentation of the target sentence given a randomized segmentation of the source produced by BPE dropout.", "method for the source.", "Table 7 shows the results.", "As observed, target segmentation with DPE consistently outperforms BPE, leading to up to .9 BLEU score improvements.", "We further note that using BPE dropout on the target has a similar performance to BPE, and it is consistently outperformed by DPE.", "We further analyze the segmentations produced by DPE vs BPE.", "Figure 5 shows the percentage of the target words which have different segmentation with BPE and DPE, for different word frequency bands in En Et translation task.", "We observe that for Estonian words whose occurrence is up to 5 in the training set, the disagreement rate between DPE and BPE is 64%.", "The disagreement rate decreases as we go to words in higher frequency bands.", "This may imply that the main difference between the relatively large BLEU score difference between BPE and DPE is due to their different segmentation mainly for low-frequency words.", "We further plot the distribution of BLEU scores by the length of target sentences.", "As shown in Figure 6, DPE demonstrates much better gains on the longer sentences, compared with the BPE version.", "This paper introduces Dynamic Programming Encoding in order to incorporate the information of the source language into subword segmentation of the target language.", "Our approach utilizes dynamic programming for marginalizing the latent segmentations when training, and inferring the highest probability segmentation when tokenizing.", "Our comprehensive experiments show impressive improvements compared to state-of-the-art segmentation methods in NMT, i.e., BPE and its stochastic variant BPE dropout.", "We would like to thank the anonymous reviewers, Taku Kudo, Colin Cherry and George Foster for their comments and suggestions on this work.", "The computational resources of this work are supported by the Google Cloud Platform (GCP), and by the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) ( www.massive.org.au ).", "This material is partly based on research sponsored by Air Force Research Laboratory and DARPA under agreement number FA8750-19-2-0501.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon." ]
[ "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "result", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "other", "other", "other", "other" ]
[ "We present an approach for generating clarification questions with the goal of eliciting new information that would make the given textual context more complete.", "We propose that modeling hypothetical answers (to clarification questions) as latent variables can guide our approach into generating more useful clarification questions.", "We develop a Generative Adversarial Network (GAN) where the generator is a sequence-to-sequence model and the discriminator is a utility function that models the value of updating the context with the answer to the clarification question.", "We evaluate on two datasets, using both automatic metrics and human judgments of usefulness, specificity and relevance, showing that our approach outperforms both a retrieval-based model and ablations that exclude the utility model and the adversarial training.", "A goal of natural language processing is to develop techniques that enable machines to process naturally occurring language.", "However, not all language is clear and, as humans, we may not always understand each other (Grice, 1975); in cases of gaps or mismatches in knowledge, we tend to ask questions (Graesser et al., 2008).", "In this work, we focus on the task of automatically generating clarification questions: questions that ask for information that is missing from a given linguistic context.", "Our clarification question generation model builds on the sequence-to-sequence approach that has proven effective for several language generation tasks (Sutskever et al., 2014; Serban et al., 2016; Yin et al., 2016; Du et al., 2017).", "Unfortunately, training a sequence-to-sequence model directly on (context, question) This research performed when the author was still at University of Maryland, College Park.", "pairs yields questions that are highly generic 1 , corroborating a common finding in dialog systems (Li et al., 2016b).", "Our goal is to be able to generate clarification questions that are useful and specific.", "To achieve this, we begin with a recent observation of Rao and Daume III (2018), who consider the task of question reranking: a good clarification question is the one whose answer has a high utility , which they define as the likelihood that this question would lead to an answer that will make the context more complete ( 2.3).", "Inspired by this, we construct a model that first generates a question given a context, and then generates a hypothetical answer to that question.", "Given this (context, question, answer) triple, we train a utility calculator to estimate the usefulness of this question.", "We then show that this utility calculator can be generalized using ideas for generative adversarial networks (Goodfellow et al., 2014) for text (Yu et al., 2017), wherein the utility calculator plays the role of the discriminator and the question generator is the generator ( 2.2), which we train using the MIXER algorithm (Ranzato et al., 2015).", "We evaluate our approach on two datasets: Amazon product descriptions (Figure 1) and Stack Exchange posts (Figure 2).", "Our two main contributions are: 1. An adversarial training approach for generating clarification questions that models the utility of updating a context with an answer to the clarification question.", "2 2. An empirical evaluation using both automatic metrics and human judgments to show that our adversarially trained model generates questions that are more useful and specific to the context than all the baseline models.", "Is it made in China? or What are the dimensions? 2 Code and data: https://github.com/ raosudha89/clarification_question_generation_pytorch", "Our goal is to build a model that, given a context, can generate an appropriate clarification question.", "Our dataset consists of ( context , question , answer ) triples where the context is an initial textual context, question is the clarification question that asks about some missing information in the context and answer is the answer to the clarification question (details in 3.1).", "Representationally, our question generator is a standard sequence-to-sequence model with attention ( 2.1).", "The learning problem is: how to train the sequence-to-sequence model to generate good clarification questions.", "An overview of our training setup is shown in Figure 3. Given a context, our question generator, which is a sequence-to-sequence model, outputs a question.", "In order to evaluate the usefulness of this question, we then have a second sequence-to-sequence model called the answer generator that generates a hypothetical answer based on the context and the question ( 2.5).", "This (context, generated question and generated answer) triple is fed into a UTILITY calculator, whose initial goal is to estimate the probability that this (ques-tion, answer) pair is useful in this context ( 2.3).", "This UTILITY is treated as a reward, which is used to update the question generator using the MIXER (Ranzato et al., 2015) algorithm ( 2.2).", "Finally, we reinterpret the answer-generator-plus-utility-calculator component as a discriminator for differentiating between (context, true question, generated answer) triples and (context, generated question, generated answer) triples , and optimize the generator for this adversarial objective using MIXER ( 2.4).", "We use a standard attention based sequence-to-sequence model (Luong et al., 2015) for our question generator.", "Given an input sequence (context) c = ( c 1 , c 2 , ..., c N ) , this model generates an output sequence (question) q = ( q 1 , q 2 , ..., q T ) .", "The architecture of this model is an encoder-decoder with attention.", "The encoder is a recurrent neural network (RNN) operating over the input word embeddings to compute a source context representation c .", "The decoder uses this source representation to generate the target sequence one word at a time: p ( q | c ) = T (cid:89) t =1 p ( q t | q 1 , q 2 , ..., q t 1 , c t ) = T (cid:89) t =1 softmax ( W s h t ) ; where h t = tanh ( W c [ c t ; h t ]) (1) In Eq 1, h t is the attentional hidden state of the RNN at time t and W s and W c are parameters of the model.", "3 The predicted token q t is the token in the vocabulary that is assigned the highest probability using the softmax function.", "The standard training objective for sequence-to-sequence model is to maximize the log-likelihood of all ( c, q ) pairs in the training data D which is equivalent to minimizing the following loss, L mle ( D ) = (cid:88) ( c,q ) D T (cid:88) t =1 log p ( q t | q 1 , ..., q t 1 , c t ) (2) 3 Details are in Appendix A. Figure 3: Overview of our GAN-based clarification question generation model (refer preamble of 2) 2.2 Training the Generator to Optimize UTILITY Training sequence-to-sequence models for the task of clarification question generation (with context as input and question as output) using maximum likelihood objective unfortunately leads to the generation of highly generic questions, such as What are the dimensions? when asking questions about home appliances.", "Recently, Rao and Daume III (2018) observed that the usefulness of a question can be better measured as the utility that would be obtained if the context were updated with the answer to the proposed question.", "Following this observation, we first use a pretrained answer generator ( 2.5) to generate an answer given a context and a question.", "We then use a pretrained UTILITY calculator ( 2.3 ) to predict the likelihood that the generated answer would increase the utility of the context by adding useful information to it.", "Finally, we train our question generator to optimize this UTILITY based reward.", "Similar to optimizing metrics like BLEU and ROUGE , this UTILITY calculator also operates on discrete text outputs, which makes optimization difficult due to non-differentiability.", "A successful recent approach dealing with the non-differentiability while also retaining some advantages of maximum likelihood training is the Mixed Incremental Cross-Entropy Reinforce (Ranzato et al., 2015) algorithm (MIXER ).", "In MIXER , the overall loss L is differentiated as in REINFORCE (Williams, 1992): L ( ) = E q s p r ( q s ) ; L ( ) = E q s p r ( q s ) log p ( q s ) (3) where q s is a random output sample according to the model p and are the parameters of the network.", "The expected gradient is then approximated using a single sample q s = ( q s 1 , q s 2 , ..., q s T ) from the model distribution ( p ).", "In REINFORCE , the policy is initialized randomly, which can cause long convergence times.", "To solve this, MIXER starts by optimizing maximum likelihood for the initial time steps, and slowly shifts to optimizing the expected reward from Eq 3 for the remaining ( T ) time steps.", "In our model, for the initial time steps, we minimize L mle and for the remaining steps, we minimize the following UTILITY -based loss: L max-utility = ( r ( q p ) r ( q b )) T (cid:88) t =1 log p ( q t | q 1 , ..., q t 1 , c t ) (4) where r ( q p ) is the UTILITY based reward on the predicted question and r ( q b ) is a baseline reward introduced to reduce the high variance otherwise observed when using REINFORCE .", "To estimate this baseline reward, we take the idea from the self-critical training approach Rennie et al. (2017) where the baseline is estimated using the reward obtained by the current model under greedy decoding during test time.", "We find that this approach for baseline estimation stabilizes our model better than the approach used in MIXER .", "Given a (context, question, answer) triple, Rao and Daume III (2018) introduce a utility calculator UTILITY ( c, q, a ) to calculate the value of updating a context c with the answer a to a clarification question q .", "They use the utility calculator to estimate the probability that an answer would be a meaningful addition to a context.", "They treat this as a binary classification problem where the positive instances are the true (context, question, answer) triples in the dataset whereas the negative instances are contexts paired with a random (ques-tion, answer) from the dataset.", "Following Rao and Daume III (2018), we model our UTILITY calculator by first embedding the words in c and then using an LSTM (long-short term memory) (Hochre-iter and Schmidhuber, 1997) to generate a neural representation c of the context by averaging the output of each of the hidden states.", "Similarly, we obtain neural representations q and a of q and a respectively using a question and an answer LSTM models.", "Finally, we use a feed forward neural network FUTILITY ( c, q, a ) to predict the usefulness of the question.", "The UTILITY calculator trained on true vs random samples from real data (as described in the previous section) can be a weak reward signal for questions generated by a model due to the large discrepancy between the true data and the model's outputs.", "In order to strengthen the reward signal, we reinterpret the UTILITY calculator (coupled with the answer generator) as a discriminator in an adversarial learning setting.", "That is, instead of taking the UTILITY calculator to be a fixed model that outputs the expected quality of a (question, answer) pair, we additionally optimize it to distinguish between true (question, answer) pairs and model-generated ones.", "This reinterpretation turns our model into a form of a generative adversarial network (GAN) (Goodfellow et al., 2014).", "GAN is a training procedure for generative models that can be interpreted as a game between a generator and a discriminator.", "The generator is a model g G that produces outputs (in our case, questions).", "The discriminator is another model d D that attempts to classify between true outputs and model-generated outputs.", "The goal of the generator is to generate data such that it can fool the discriminator; the goal of the discriminator is to be able to successfully distinguish between real and generated data.", "In the process of trying to fool the discriminator, the generator produces data that is as close as possible to the real data distribution.", "where x is sampled from the true data distribution p , and z is sampled from a prior defined on input noise variables p .", "Although GANs have been successfully used for image tasks, training GANs for text generation is challenging due to the discrete nature of outputs in text.", "The discrete outputs from the generator make it difficult to pass the gradient update from the discriminator to the generator.", "Recently, Yu et al. (2017) proposed a sequence GAN model for text generation to overcome this issue.", "They treat their generator as an agent and use the discriminator as a reward function to update the generative model using reinforcement learning techniques.", "Our GAN-based approach is inspired by this sequence GAN model with two main mod-ifications:", "a) We use MIXER algorithm as our generator ( 2.2) instead of a purely policy gradient approach; and", "b) We use UTILITY calculator ( 2.3) as our discriminator instead of a convolutional neural network (CNN).", "Theoretically, the discriminator should be trained using (context, true question, true answer) triples as positive instances and (context, generated question, generated answer) triples as the negative instances.", "However, we find that training a discriminator using such positive instances makes it very strong since the generator would have to not only generate real looking questions but also generate real looking answers to fool the discriminator.", "Since our main goal is question generation and since we use answers only as latent variables, we instead use (context, true question, generated answer ) as our positive instances where we use the pretrained answer generator to get the generated answer for the true question.", "Formally, our objective function is: LGAN-U ( U , M ) = max u U min m M E q p log u ( c, q, A ( c, q ))+ E c p log(1 u ( c, m ( c ) , A ( c, m ( c )))) (6) where U is the UTILITY discriminator, M is the MIXER generator, p is our data of (context, question, answer) triples and A is the answer generator.", "( 2.1) to maximize the log-likelihood of all (con-text, question) pairs in the training data.", "Parameters of this model are updated during adversarial training.", "Answer Generator.", "We pretrain our answer generator using the sequence-to-sequence model ( 2.1) to maximize the log-likelihood of all ([con-text+question], answer) pairs in the training data.", "Parameters of this model are kept fixed during the adversarial training.", "4 Discriminator.", "In our UTILITYGAN model ( 2.4), the discriminator is trained to differentiate between true and generated questions.", "However, since we want to guide our UTILITY based discriminator to also differentiate between true (good) and random (bad) questions, we pretrain our discriminator in the same way we trained our UTILITY calculator.", "For positive instances, we use a context and its true question, answer from the training data and for negative instances, we use the same context but randomly sample a question from the training data (and use the answer paired with that random question).", "We base our experimental design on the following research questions:", "1. Do generation models outperform simpler retrieval baselines?", "2. Does optimizing the UTILITY reward improve over maximum likelihood training?", "3. Does using adversarial training improve over optimizing the pretrained UTILITY ?", "4. How do the models perform when evaluated for nuances such as specificity & usefulness?", "We evaluate our model on two datasets.", "Amazon.", "In this dataset, context is a product description on amazon.com combined with the product title, question is a clarification question asked to the product and answer is the seller's (or other users') reply to the question.", "To obtain these data triples, we combine the Amazon question-answering dataset (McAuley and Yang, 2016) with the Amazon reviews dataset (McAuley et al., 2015).", "We show results on the Home & Kitchen category of this dataset since it contains a large number of questions and is relatively 4 We leave the experimentation of updating parameters of answer generator during adversarial training to future work.", "easier for human-based evaluation.", "It consists of 19 , 119 training, 2 , 435 tune and 2 , 305 test examples (product descriptions), with 3 to 10 questions (average: 7) per description.", "Stack Exchange.", "In this dataset, context is a post on stackexchange.com combined with the title, question is a clarification question asked in the comments section of the post and answer is either the update made to the post in response to the question or the author's reply to the question in the comments section.", "Rao and Daume III (2018) cu-rated a dataset of 61 , 681 training, 7 , 710 tune and 7 , 709 test such triples from three related subdomains on stackexchage.com (askubuntu, unix and superuser).", "Additionally, for 500 instances each from the tune and the test set, their dataset includes 1 to 6 other questions identified as valid questions by expert human annotators from a pool of candidate questions.", "We compare three variants (ablations) of our proposed approach, together with an information retrieval baseline:", "GAN-Utility is our full model which is a UTILITY calculator based GAN training ( 2.4) including the UTILITY discriminator and the MIXER question generator.", "5 Max-Utility is our reinforcement learning baseline where the pretrained question generator model is further trained to optimize the UTILITY reward ( 2.2) without the adversarial training.", "MLE is the question generator model pretrained on context, question pairs using maximum likelihood objective ( 2.1).", "Lucene 6 is our information retrieval baseline similar to the Lucene baseline described in Rao and Daume III (2018).", "Given a context in the test set, we use Lucene, which is a TF-IDF based document ranker, to retrieve top 10 contexts that are most similar to the given context in the train set.", "We randomly choose a question from the human written questions paired with these 10 contexts in the train set to construct our Lucene baseline 7 .", "7 For the Amazon dataset, we ignore questions asked to products of the same brand as the given product since Amazon replicates questions across same brand allowing the true question to be included in that set.", "the proportion of unique trigrams in the output to measure the diversity as commonly used to evaluate dialogue generation (Li et al., 2016b).", "BLEU (Papineni et al., 2002) 8 , which evaluates n-gram precision between the output and the references.", "METEOR (Banerjee and Lavie, 2005), which is similar to BLEU but includes stemmed and synonym matches to measure similarity between the output and the references.", "We use Figure-Eight 9 , a crowdsourcing platform, to collect human judgements.", "Each judgement 10 consists of showing the crowdworker a context and a generated question and asking them to evaluate the question along following axes: Relevance : We ask Is the question on topic? and let workers choose from: Yes (1) and No (0) Grammaticality : We ask Is the question gram-matical? and let workers choose from: Yes (1) and No (0) Seeking new information : We ask Does the question ask for new information currently not included in the description? and let workers choose from: Yes (1) and No (0) Specificity : We ask How specific is the ques-tion? and let workers choose from: 4: Specific pretty much only to this product (or same product from different manufacturer) 3: Specific to this and other very similar products 2: Generic enough to be applicable to many other products of this type 1: Generic enough to be applicable to any product under Home and Kitchen 0: N/A (Not applicable) i.e. Question is not on topic OR is incomprehensible Usefulness : We ask How useful is the question to a potential buyer (or a current user) of the prod-uct? and let workers choose from: 8 https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ generic/multi-bleu.perl 9 https://www.figure-eight.com 10 We paid crowdworkers 5 cents per judgment and collected five judgments per question.", "4: Useful enough to be included in the product description 3: Useful to a large number of potential buyers (or current users) 2: Useful to a small number of potential buyers (or current users) 1: Useful only to the person asking the question 0: N/A (Not applicable) i.e. Question is not on topic OR is incomprehensible OR is not seeking new information 3.3.3 Inter-annotator Agreement Table 1 shows the inter-annotator agreement (re-ported by Figure-Eight as confidence 11 ) on each of the above five criteria.", "Agreement on Relevance , Grammaticality and Seeking new information is high.", "This is not surprising given that these criteria are not very subjective.", "On the other hand, the agreement on usefulness and specificity is quite moderate since these judgments can be very subjective.", "Since the inter-annotator agreement on the usefulness criteria was particularly low, in order to reduce the subjectivity involved in the fine grained annotation, we convert the range [0-4] to a more coarse binary range [0-1] by mapping the scores 4 and 3 to 1 and the scores 2, 1 and 0 to 0 .", "Table 2 shows the results on the two datasets when evaluated according to automatic metrics.", "In the Amazon dataset, GAN-Utility outperforms all ablations on DIVERSITY , suggesting that it produces more diverse outputs.", "Lucene, on the other hand, has the highest DIVERSITY since it consists of human written questions, which tend to be more diverse because they are much longer compared to model generated questions.", "This comes at the cost of lower match with the reference as visible in the BLEU and METEOR scores.", "In terms of BLEU and METEOR , there is inconsistency.", "Although GAN-Utility outperforms all baselines according to METEOR , the fully ablated MLE model has a higher BLEU score.", "This is because BLEU score looks for exact n-gram matches and since MLE produces more generic outputs, it is much more likely that it will match one of 10 references compared to the specific/diverse outputs of GAN-Utility, since one of those ten is highly likely to itself be generic.", "In the StackExchange dataset GAN-Utility outperforms all ablations on both BLEU and METEOR .", "Unlike in the Amazon dataset, MLE does not outperform GAN-Utility in BLEU .", "This is because the MLE outputs in this dataset are not as generic as in the amazon dataset due to the highly technical nature of contexts in StackExchange.", "As in the Amazon dataset, GAN-Utility outperforms MLE on DIVERSITY .", "Interestingly, the Max-Utility ablation achieves a higher DIVERSITY score than GAN-Utility.", "On manual analysis we find that Max-Utility produces longer outputs compared to GAN-Utility but at the cost of being less grammatical.", "Table 3 shows the numeric results of human-based evaluation performed on the reference and the system outputs on 300 random samples from the test set of the Amazon dataset.", "12 All approaches produce relevant and grammatical questions.", "All models are all equally good at seeking new information, but are weaker than Lucene, which performs better at seeking new information but at the 12 We could not ask crowdworkers evaluate the StackExchange data due to its highly technical nature.", "cost of much lower specificity and lower usefulness.", "Our full model, GAN-Utility, performs signifi-cantly better at the usefulness criteria showing that the adversarial training approach generates more useful questions.", "Interestingly, all our models produce questions that are more useful than Lucene and Reference, largely because Lucene and Reference tend to ask questions that are more often useful only to the person asking the question, making them less useful for potential other buyers (see Figure 4).", "GAN-Utility also performs signifi-cantly better at generating questions that are more specific to the product (see details in Figure 5), which aligns with the higher DIVERSITY score obtained by GAN-Utility under automatic metric evaluation.", "Table 5 contains example outputs from different models along with their usefulness and specificity scores.", "MLE generates questions such as is it waterproof? and what is the wattage? , which are applicable to many other products.", "Whereas our GAN-Utility model generates more specific question such as is this shower curtain mildew resistant? .", "Appendix C includes further analysis of system outputs on both Amazon and Stack Exchange datasets.", "Question Generation.", "Most previous work on question generation has been on generating reading comprehension style questions i.e. questions that ask about information present in a given text (Heilman, 2011; Rus et al., 2010, 2011; Duan et al., 2017).", "Our goal, on the other hand, is to generate questions whose answer cannot be found Model Relevant [0-1] Grammatical [0-1] New Info [0-1] Useful [0-1] Specific [0-4] Reference 0.96 0.99 0.93 0.72 3.38 Lucene 0.90 0.99 0.95 0.68 2.87 MLE 0.92 0.96 0.85 0.91 3.05 Max-Utility 0.93 0.96 0.88 0.91 3.29 GAN-Utility 0.94 0.96 0.87 0.96 3.52 Table 3: Results of human judgments on model generated questions on 300 sample Home & Kitchen product descriptions.", "in the given text.", "Outside reading comprehension questions, Liu et al. (2010) use templated questions to help authors write better related work sections whereas we generate questions to fill information gaps.", "Labutov et al. (2015) use crowdsourcing to generate question templates whereas we learn from naturally occurring questions.", "Mostafazadeh et al. (2016, 2017) generate natural and engaging questions, given an image (and some initial text).", "Whereas, we generate questions specifically for identifying missing information.", "Stoyanchev et al. (2014) generate clarification questions to resolve ambiguity caused by speech recognition failures during dialog, whereas we generate clarification questions to resolve ambiguity caused by missing information.", "The recent work most relevant to our work is by Rao and Daume III (2018).", "They build a model which given a context and a set of candidate clarification questions, ranks them in a way that more useful clarification questions would be higher up in the ranking.", "In our work, we build on their ideas to propose a model that generates (instead of ranking) clarification questions given a context.", "Neural Models and Adversarial Training for Text Generation.", "Neural network based models have had significant success at a variety of text generation tasks, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), summarization (Nallapati et al., 2016), dialog (Bor-des et al., 2016; Li et al., 2016a; Serban et al., 2017), textual style transfer (Jhamtani et al., 2017; Rao and Tetreault, 2018) and question answering (Yin et al., 2016; Serban et al., 2016).", "Our task is most similar to dialog, in which a wide variety of possible outputs are acceptable, and where lack of specificity in generated outputs is common.", "We addresses this challenge using an adversarial network approach (Goodfellow et al., 2014), a training procedure that can generate natural-looking outputs, which have been effective for natural image generation (Denton et al., 2015).", "Due to the challenges in optimizing over discrete output spaces like text, Yu et al. (2017) introduced a Seq(uence)GAN approach where they overcome this issue by using REINFORCE to optimize.", "Our GAN-Utility model is inspired by the SeqGAN model where we replace their policy gra-Title Raining Cats and Dogs Vinyl Bathroom Shower Curtain Product This adorable shower curtain measures 70 by 72 Description inches and is sure to make a great gift!", "dient based generator with a MIXER model and their CNN based discriminator with our UTILITY calculator.", "Li et al. (2017) train an adversarial model similar to SeqGAN for generating next utterance in a dialog given a context.", "However, unlike our work, their discriminator is a binary clas-sifier trained only to distinguish between human and machine generated utterances.", "In this work, we describe a novel approach to the problem of clarification question generation.", "We use the observation of Rao and Daume III (2018) that the usefulness of a clarification question can be measured by the value of updating a context with an answer to the question.", "We use a sequence-to-sequence model to generate a question given a context and a second sequence-to-sequence model to generate an answer given the context and the question.", "Given the (context, generated question, generated answer) triple, we calculate the utility of this triple and use it as a reward to retrain the question generator using reinforcement learning based MIXER model.", "Further, to improve upon the utility calculator, we reinterpret it as a discriminator in an adversarial setting and train both the utility calculator and the MIXER model in a minimax fashion.", "We find that our adversarial training approach produces more useful and specific questions compared to both a model trained using maximum likelihood objective and a model trained using utility reward based reinforcement learning.", "There are several avenues of future work.", "Following Mostafazadeh et al. (2016), we could combine text input with image input in the Amazon dataset (McAuley and Yang, 2016) to generate more relevant and useful questions.", "One significant research challenge in the space of free text generation problems when the set of possible outputs is large, is that of automatic evaluation (Lowe et al., 2016): in our results we saw some correlation between human judgments and automatic metrics, but not enough to trust the automatic metrics completely.", "Lastly, we hope to integrate such a question generation model into a real world platform like StackExchange or Amazon to understand the real utility of such models and to unearth additional research questions.", "We thank the three anonymous reviewers for their helpful comments and suggestions.", "We also thank the members of the Computational Linguistics and Information Processing (CLIP) lab at University of Maryland for helpful discussions.", "This work was supported by NSF grant IIS-1618193.", "Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors." ]
[ "objective", "objective", "objective", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "result", "objective", "method", "result", "method", "objective", "result", "other", "objective", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "objective", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "method", "other", "method", "method", "abstain", "other", "objective", "other", "other", "method", "method", "other", "method", "abstain", "other", "method", "objective", "method", "method", "method", "result", "objective", "abstain", "method", "result", "abstain", "other", "other", "other", "other" ]
[ "Unsupervised commonsense question answering is appealing since it does not rely on any labeled task data.", "Among existing work, a popular solution is to use pre-trained language models to score candidate choices directly conditioned on the question or context.", "However, such scores from language models can be easily affected by irrelevant factors, such as word frequencies, sentence structures, etc.", "These distracting factors may not only mislead the model to choose a wrong answer but also make it oversensitive to lexical perturbations in candidate answers.", "In this paper, we present a novel SEmantic-based Question Answering method (SEQA) for unsupervised commonsense question answering.", "Instead of directly scoring each answer choice, our method first generates a set of plausible answers with generative models (e.g., GPT-2), and then uses these plausible answers to select the correct choice by considering the semantic similarity between each plausible answer and each choice.", "We devise a simple, yet sound formalism for this idea and verify its effectiveness and robustness with extensive experiments.", "We evaluate the proposed method on four benchmark datasets, and our method achieves the best results in unsupervised settings.", "Moreover, when attacked by TextFooler (Jin et al., 2020) with synonym replacement, SEQA demonstrates much less performance drops than baselines, thereby indicating stronger robustness.", "Pre-trained language models have been widely used for commonsense question answering.", "Finetuning pre-trained models on task-specific data produces many state-of-the-art results (Wang et al., 2020; * Equal contribution Corresponding author: Minlie Huang.", "Khashabi et al., 2020; Lin et al., 2019).", "However, this requires amounts of labeled task data.", "Therefore, it is vital to study unsupervised commonsense question answering without relying on any labeled downstream task data.", "In this paper, we investigate multiple-choice commonsense question answering tasks in an unsupervised setting: given a question and a set of answer choices, a model is required to predict the most reasonable answer choice for the question, but without access to any labeled task data.", "Many existing unsupervised methods tackle these tasks by scoring each answer choice using a language model, e.g., estimating the generative probability of the answer choice conditioned on the question (Trinh and Le, 2018; Shwartz et al., 2020; Bosselut and Choi, 2019; Tamborrino et al., 2020).", "Table 1 lists several typical score functions.", "However, these scores can be easily influenced by word frequencies, sentence structures, and other factors, which can mislead the models and make existing methods oversensitive to lexical perturbations (Abdou et al., 2020; Tamborrino et al., 2020).", "Figure 1 shows two examples.", "The correct choices are paraphrased via synonym replacement or structure transformation.", "In these examples, the baseline (Pro-A) produces much lower scores for the paraphrased choices and chooses the wrong choices.", "Since existing methods can be easily distracted by irrelevant factors such as lexical perturbations, we argue that a commonsense question answering method should focus on the answers' semantics and assign similar scores to synonymous choices .", "To this end, we introduce a novel SEmantic-based Question Answering model, SEQA, which aims to robustly select correct answers in multi-choice commonsense question answering in an unsupervised setting.", "Instead of directly scoring an answer choice, we calculate the probability of observing the choice's semantics.", "A choice's semantic score can be obtained by summing the generative probabilities of sentences that have the same semantic meanings with the choice, where the sentences are called the choice's supporters .", "However, it is hard to obtain the supporters which have exactly the same semantic meanings with the choice, so we reformulate the semantic score into a soft version as explained in Section 3.2.", "Each supporter is weighed by the semantic similarity to the answer choice, which can be computed with some off-the-shelf models, such as SentenceBERT (Reimers and Gurevych, 2019).", "Since the supporters and their weights depend on the semantics rather than the surface form of the answer choice, by this means, the effects of the distracting factors can be largely suppressed.", "Moreover, synonymous choices are likely to share the same set of supporters , so their scores are expected to be stably close.", "Our contributions in this paper are summarized as follows: We propose a semantic-based question answering model (SEQA) for robust commonsense question answering in an unsupervised setting.", "Instead of directly scoring the answer choices, our method first generates some plausible answers and then uses them to select the correct choice by considering the semantic similarity between each plausible answer and each choice.", "We conduct experiments on four commonsense question answering datasets, where SEQA achieves the best performance com-Method Score Function Pro-A [ PLM ( A | Q )] 1 | A | Pro-Q [ PLM ( Q | A )] 1 | Q | MI-QA (cid:104) PLM ( A | Q ) PLM ( A ) (cid:105) 1 | A | SEQA (Ours) (cid:80) S A ( S | A ) PLM ( S | Q ) Table 1: Three existing score functions and our method for unsupervised commonsense question answering.", "pared with strong baselines.", "When attacked by TextFooler (Jin et al., 2020) with synonym replacement, our method performs remarkably more robustly.", "Previous work has explored pre-trained language models (LMs) for unsupervised commonsense question answering.", "In general, these approaches treat LMs as question answering modules.", "Table 1 shows three representative methods, which do not use external knowledge and rely fully on the implicit knowledge encoded in LMs for reasoning.", "Probability-A (Pro-A) considers the generative probability of the choice conditioned on the question.", "However, it suffers from the statistical bias of choices, such as word frequency and sentence length (Abdou et al., 2020).", "To alleviate this, MutualInfo-QA (MI-QA) calculates the mutual information between the question and the choice.", "Another way to reduce the impact of statistical bias is to score each choice using the conditional probability of the question rather than the choice (Trinh and Le, 2018; Tamborrino et al., 2020) , which is denoted as Probability-Q (Pro-Q) in Table 1. Some recent work claims that external knowledge can benefit commonsense reasoning.", "Besides static knowledge bases (KBs), such as ConceptNet (Speer et al., 2017) and Atomic (Sap et al., 2019a), there are also numerous studies treating LMs as dynamic KBs.", "Petroni et al. (2019) shows that LMs can be used for KB completion.", "And Davison et al. (2019) shows that BERT can distinguish true and fake ConceptNet triplets.", "Further, the extracted knowledge can work as complementary information for answering a question.", "Rajani et al. (2019) proposes a model for Com-1 PBERT ( Q | A ) (cid:44) (cid:81) | Q | i PBERT ( Q i | Q /i , A ) .", "monSenseQA (Talmor et al., 2019) that generates explanations for questions, which are then used as additional inputs.", "The shortcoming of this approach is that it requires collecting human explanations for each new dataset to fine-tune LMs.", "Some following researches explore unsupervised explanation/knowledge generator.", "CGA (Bosse-lut and Choi, 2019) employs COMET (Bosselut et al., 2019) to generate intermediate inferences which are then used to score the choice.", "However, COMET is limited by a small set of question types so that CGA is difficult to generalize to different domains.", "Self-Talk (Shwartz et al., 2020) breaks the limit by extracting knowledge from GPT-2 (Rad-ford et al., 2019), which has no restriction on the query types.", "Thus, Self-Talk can be applied to a wide range of domains.", "Despite the introduction of auxiliary information, these methods are essentially dependent on language model scores, so they are still sensitive to lexical perturbations.", "Besides directly using pre-trained LMs, some recent efforts have been dedicated to automatically constructing task-specific data to train commonsense reasoners in zero-shot settings.", "Wang et al. (2019) and Kocijan et al. (2019) provide some rules to construct labeled training data from large corpus for pronoun disambiguation.", "Banerjee and Baral (2020), Moghimifar et al. (2020) and Ma et al. (2020) collect training data based on knowledge bases, such as Atomic (Sap et al., 2019a).", "Though effective, they are limited by the specific task settings or highly dependent on the task-related knowledge bases, which makes them difficult to transfer to other commonsense reasoning tasks.", "In this paper, we focus on unsupervised multiple-choice commonsense question answering, which is formalized as follows: given a question and a set of choices, models should select the correct choice:", "where s refers to a score function.", "Note that we have no access to any labeled task data.", "In existing unsupervised methods, the score functions are usually defined based on the language model scores.", "Taking Pro-A (Table 1) as an example, it first converts the question into a statement: Q: I saw my breath when I exhaled.", "And it then takes the statement as a prompt to calculate the generative probability of each choice.", "Note that the templates for rewriting is not the focus of this paper, and hence we directly use the templates of previous work (Shwartz et al., 2020; Tamborrino et al., 2020) for our method and all the baselines in this paper (see Appendix for details).", "Though successful, language model scores can be affected by many distracting factors, such as word frequency and sentence structure, etc.", "These factors can disturb the score functions to a large extent, as shown in Figure 1. Our goal is to alleviate the influence of these distracting factors.", "Hence we propose a new method for unsupervised commonsense question answering, which achieves better results and performs more robustly.", "SEQA is designed to predict the semantic score of an answer choice A .", "Instead of directly estimating the probability P ( A | Q ) of the single choice A , the semantic score focuses on the probability P ( MA | Q ) where MA represents A 's semantics.", "Ideally, we decompose P ( MA | Q ) into the summation of the conditional probabilities of A 's supporters , where the supporters indicates all possible answers that have exactly the same semantics MA .", "Formally, the semantic score is defined as s ( A | Q ) (cid:44) P ( MA | Q ) = (cid:88) S SAPLM ( S | Q ) (1) = (cid:88) S AI ( S SA ) PLM ( S | Q ) .", "(2) SA is the set of supporters of choice A , and A is the set of all possible answers.", "I ( S SA ) is an indicator function indicating whether S is a supporter of A .", "To obtain the supporter set SA , we adopt a model to extract the sentence-level semantic features.", "Ideally, the indicator function is defined as I ( S SA ) = (cid:40) 1 if cos( h S , h A ) = 1 , 0 if cos( h S , h A ) < 1 , (3) where h A is the semantic features of sentence A , and we assume that S and A are exactly the same in semantics if h S and h A point in the same direction.", "However,", "Eq.(3) uses a hard constraint that cos( h S , h A ) exactly equals to 1, which can be too strict to find acceptable supporters .", "Therefore, we reformulate", "Eq.(2) into a soft version: s ( A | Q ) (cid:44) (cid:88) S A ( S | A ) PLM ( S | Q ) , (4) where the indicator function in", "Eq.(2) is replaced by a soft function ( S | A ) .", "To emulate I ( S SA ) , ( S | A ) is expected to meet three requirements: (1) ( S | A ) [0 , 1] for any S and A ; (2) ( S | A ) = 1 if cos( h S , h A ) = 1 ; (3) ( S | A ) increases monotonically with cos( h S , h A ) .", "There are several different definitions of ( S | A ) meeting these requirements, which are explored in Section 4.7.3.", "In this paper, ( S | A ) is defined as: ( S | A ) = 1 Z ( T ) exp (cid:20) cos( h S , h A ) T (cid:21) .", "T is the temperature, and Z ( T ) = exp( 1 T ) is a normalization term that makes ( A | A ) = 1 .", "If T 0 , ( S | A ) degenerates to the indicator function.", "If T > 0 , ( S | A ) relates to the von Mises-Fishers distribution over the unit sphere in the feature space, where the acceptable feature vectors are distributed around the mean direction h A || h A || .", "Since it is intractable to enumerate all possible answers in A , we convert", "Eq.(4) to an expectation over PLM ( S | Q ) : s ( A | Q ) = ES PLM ( S | Q ) [ ( S | A )] 1 KK (cid:88) i =1 ( S i | A ) (6) = 1 K Z ( T ) K (cid:88) i =1 exp (cid:20) cos( h S i , h A ) T (cid:21) , (7) where S 1 , , SK are sentences sampled from PLM ( | Q ) , and K is the sample size.", "h A and h S i can be extracted from a pre-trained model, e.g., SentenceBERT (Reimers and Gurevych, 2019).", "From", "Eq.(7), we can see the semantic score s ( A | Q ) is only dependent on the semantic feature h A and regardless of A 's surface form.", "Therefore, our method will produce similar semantic scores for synonymous choices, assuming that the synonymous choices have similar semantic features.", "At the beginning of Section 3.2, we define the semantic score as the summation of the conditional probabilities over the supporters .", "However, in", "Eq.(7), the sampled sentences S 1 , , SK are not A 's supporters because they may not be semantically similar to A .", "To address the differences, we Figure 2: Process of SEQA in the view of voting.", "name the sampled sentences S 1 , , SK as voters , which are plausible answers to the question Q .", "In this section, we will show another view of our method, which works like a procedure that the voters vote out the correct choice.", "Suppose there are two candidate choices A 1 and A 2 , our method is to find the correct choice according to the semantic scores, s ( A 1 | Q ) and s ( A 2 | Q ) .", "Following", "Eq.(6), our method can be decomposed into two steps: First, sample some voters S 1 , , SK from PLM ( | Q ) .", "This step only considers the question Q but no candidate choices.", "Second, each voter votes for the choices with the semantic similarity weights.", "For example, S i votes for A j with the weight of ( S i | A j ) .", "The candidate choice that receives more votes will have a higher semantic score and be selected as the final answer.", "Figure 2 shows the process of SEQA in the view of voting.", "Although the voting view is intuitive, the formalism in Section 3.2 provides more insights: (1) Our method approximates the probability of semantics, which works as the theoretical basis of SEQA.", "(2) Our method can be seen as an extension of Pro-A (see Table 1), since Pro-A only calculates the language model score for a single sentence, whereas our method calculates the semantic score for a set of supporters .", "(3)", "Eq.(4) provides guidance, the three requirements mention before, for the design of the voting weight function ( S | A ) .", "Specifically, the guidance explains the rationality of the formulation of", "Eq.(5).", "We conducted experiments on four multiple-choice commonsense question answering tasks, COPA (Roemmele et al., 2011), StoryClozeTest (SCT) (Mostafazadeh et al., 2016), SocialIQA (Sap et al., 2019b) and CosmosQA (Huang et al., 2019).", "For each instance, only one choice is correct.", "See Appendix for more description about datasets.", "For COPA, we reported the results on its test set.", "As the test sets of another three datasets are hidden, for convenience of analysis, we reported the experiment results on their development sets.", "We employed five strong baselines.", "Table 1 shows three of them, Pro-A , Pro-Q and MI-QA .", "There is no explicit auxiliary information used in these three methods, while another two baselines rely on explicit information supplementation.", "CGA (Bosse-lut and Choi, 2019) and Self-Talk (Shwartz et al., 2020) query pre-trained language models (e.g., GPT-2, COMET (Bosselut et al., 2019)) for relevant knowledge, which forms part of contexts.", "And then, similar to Pro-A, they take the generative probabilities of choices as scores.", "For each method, we tried different pre-trained language models (see Appendix for details), and then selected the pre-trained LMs that maximized the accuracy on each dataset.", "The details of the selection of pre-trained LMs can be found in Table 2. For SEQA, we used GPT-2 to generate voters via Nucleus Sampling (Holtzman et al., 2020) with p = 0 .", "9 .", "The sample size K of voters is set to 500 .", "In Section 4.7.2, we show that a small sample size can also lead to superior performance.", "Self-Talk and CGA also rely on the generated answers from GPT-2 or COMET.", "Different from SEQA, for these two baselines, more generated answers will not always lead to better performance (see Section 4.7.2).", "Thus, we selected the optimal sample size for them rather than the same sample size with SEQA.", "When evaluating SEQA on COPA, we tuned the temperature T on its development set, and then reported the results on the test set with the tuned temperature T = 0 .", "1 .", "Due to the absence of test sets of other datasets, we evaluated SEQA on their development sets without tuning the temperature and directly set T = 0 .", "1 .", "Table 2 shows the evaluation results about accuracy and robustness.", "Among all the methods, SEQA achieved the best performance on all the datasets.", "Especially on SCT and CosmosQA, SEQA outperformed the best baselines by more than 10 points.", "It can be inferred that the semantic scores are beneficial for commonsense question answering due to the reduction of distracting factors.", "Pro-Q performed better than other baselines on COPA, perhaps because it suffered less from the statistic bias of choices (Tamborrino et al., 2020).", "However, Pro-Q lost its superiority on another three datasets, because it is unsuitable for processing long or complex contexts.", "To test the robustness under the synonym replacement attack, we used TextFooler (Jin et al., 2020) to attack the methods by perturbing the correct choices of the correctly predicted examples.", "The percentage of perturbed words refers to what percentage of words in choices are replaced in successful attacks.", "The semantic similarity is measured between the paraphrased choice and the original choice.", "Considering the attack success rate and the after-attack accuracy, SEQA is much more robust than all baselines.", "To be specific, the attack success rates on SEQA are at least 39 points lower than those of Pro-A, CGA, and Self-Talk on all datasets.", "MI-QA and Pro-Q are designed to reduce the impact of statistic bias in choices, so that they can resist lexical perturbation to some extent.", "Even so, SEQA is remarkably lower than MI-QA and Pro-Q in terms of attack success rates on all datasets.", "An observation is that the attack success rate on SEQA on CosmosQA is higher than those on the other datasets.", "The reason is that, the contexts in CosmosQA are so complex that GPT-2 is more difficult to generate high-quality answers.", "If there is a more powerful generator, the robustness of SEQA is expected to have a further improvement.", "We have claimed that a commonsense question answering method should assign close scores to synonymous choices.", "To verify that SEQA better meets this requirement, we conducted consistency testing for all the methods on four datasets.", "For each example, the consistency testing of a method is conducted in three steps: (1) Originally, the example has one correct and several wrong answer choices.", "We randomly sample some choices from other examples as additional wrong choices.", "After Method / Dataset COPA SCT SocialIQA CosmosQA Pro-A 9.1 11.0 11.7 9.4 Pro-Q 6.9 8.5 11.6 12.3 MI-QA 7.5 5.8 11.1 7.9 Self-Talk 13.3 9.5 10.7 10.1 CGA 9.7 11.0 10.9 9.5 SEQA 4.1 3.2 5.8 4.7 Table 3: Consistency testing where the methods rank 80 choices to find 4 correct ones for each example.", "that, the example will have one correct choice and 19 wrong choices.", "(2) Leverage a commonly used automatic translation service, Baidu Translation, to translate each choice from English into an intermediate language, and then back-translate it into English.", "During this process, we employ three intermediate languages, Chinese, Spanish, and Russian, because the translation quality of these languages is better than others.", "As a result, each choice is accompanied with three synonymous choices.", "(3) Use the commonsense question answering method to calculate the scores for each choice as well as its synonymous choices, and then sort all the choices according to their scores.", "Because the scoring scales of these methods are different, we calculate the standard deviation of the ranks of the correct choice and its synonymous choices.", "Table 3 shows the average standard deviation of the ranks.", "As expected, the average standard deviation of SEQA is much lower than any other method on all the datasets, confirming that SEQA assigns more similar ranks and closer scores to synonymous choices.", "We also observed that MI-QA provided relatively stable predictions compared with other baseline methods.", "A possible explanation is that, the normalization term PLM ( A ) helps alleviate the influence of lexical perturbations.", "Answer length is also a type of distracting factor which may mislead baseline methods.", "To explore to which extent answer lengths affect the performance of methods, we divided the development set of CosmosQA into four subsets according to the length of correct choice.", "Table 4 shows the results of SEQA and a robust baseline, MI-QA.", "Compared with MI-QA, SEQA has much more stable performance as answer lengths vary.", "The reason is that, SEQA focuses on semantic information so that it has stronger resistance to such distracting factors.", "In the previous experiments, the temperature T of SEQA was set to 0 .", "1 by default.", "To investigate the influence of T , we varied T in a wide range from 0 .", "05 to 10 and report the results in Table 5. Considering that the temperature varied greatly, the performance of SEQA is relatively stable, indicating that SEQA is not so sensitive to the selection of T .", "Another observation is that, although the four datasets are different in domains and text length, the trends of performance with temperature on them are relatively similar, illustrating that the temperature selected on one task can be generalized to other tasks.", "Figure 3 shows the effect of the sample size K on SEQA.", "For comparison, Figure 3 also includes the results of baselines in the settings of beforeand after-attack, respectively.", "Due to the limitation of space, the results on the other datasets are shown in Appendix.", "As expected, the before-attack and afterattack accuracy on SCT increased with the sample size.", "In detail, the rapid increase in performance occurred when K < 100 , and then the improvement slowed down when K > 100 .", "Finally, SEQA achieved a stable and relatively high performance.", "CGA and Self-Talk also leverage LMs to generate some plausible answers.", "Different from our method, they use the generated answers to form part of the question, and then calculate the generative probability of the choice based on the augmented question.", "We also tried different sample sizes for the two methods, and Figure 3", "(a) shows Figure 3: The before-attack", "that their accuracy will not stably increase with a larger sample size.", "( S | A ) in SEQA can be defined in different forms, as long as the three requirements mentioned in Section 3.2 are met.", "Besides the default definition, we explored another three forms of ( S | A ) , and the experiment results on COPA are shown in Table 6. Although the performance varies with ( S | A ) , the before-attack accuracy of SEQA still outperformed most of the baselines with any definition of ( S | A ) .", "Moreover, SEQA maintains its obvious advantage in after-attack accuracy, which reflects the inherent robustness of SEQA.", "SEQA has no limit on the selection of the pre-trained language model and the feature extractor.", "Table 7 shows how the accuracy of SEQA on COPA varied with the language model and the feature extractor.", "As expected, more powerful extractor usually led to higher accuracy under the same settings of language models.", "Similar conclusion can be obtained for the language model.", "It can be inferred that, if there are more powerful language models or feature extractors in the future, the performance of SEQA may be further improved.", "While the performance of SEQA served as an extrinsic evaluation for the quality of the voters (plau-sible answers sampled from PLM ( | Q ) , described in Section 3.3), we were also interested in evaluating it intrinsically.", "We sampled 125 voters from COPA.", "For each voter , we provided crowd-sourcing workers with the original question, and asked them: 1) whether the voter is grammatical, not entirely grammatical but understandable, or completely not understandable, 2) whether the voter is a reasonable answer to the question, not reasonable but relevant, or completely irrelevant.", "These evaluation tasks comprehensively examined the voters in grammar and logicality.", "The annotation tasks were carried out in Amazon Mechanical Turk, and we aggregated annotations from 3 workers using majority vote.", "Table 8 shows the results of the human evaluation of the voters .", "Score 3/2/1 correspond to the high, middle and low quality, respectively.", "According to the grammar scores, 97 .", "6% of the voters are grammatical or at least understandable, for which most of the voters belong to the natural language space.", "In terms of logicality, 40 .", "8% of the voters are reasonable answers to the questions, which may not be very satisfying.", "However, in Section 4.9, we will show that SEQA makes prediction based on a small part of voters , and hence SEQA is robust Figure 4: The cumulative proportion of voters favoring the correct answer AC or the wrong answer AW on COPA.", "We visualize the cumulative proportion of voters favoring the correct or the wrong choices (see Figure 4).", "The curve is averaged over all instances in the test set of COPA, where we sampled 500 voters for each instance and set T = 0 .", "1 .", "From the curves, we can find several properties of voters : (1) The voters favor the correct choices over the wrong choices, where the curve for correct choices is consistently above the curve for wrong ones.", "The area between two curves shows the difference of semantic scores s ( AC | Q ) s ( AW | Q ) , which is a large gap compared with the area under the bottom curve.", "(2) 93 .", "5% of voters do not strongly favor any choices ( | ( S | AC ) ( S | AW ) | < 0 .", "05 ), indicating that they are semantically irrelevant to both candidate choices.", "However, Table 8 shows that 40 .", "8% of voters are logically reasonable, so many voters are reasonable but irrelevant to both answers.", "It suggests that there can be several reasonable answers for a single question, and the sampled voters are diverse in the semantics.", "(3) Although there are only 5 .", "3% of voters strongly favoring the correct choices, there are much less voters ( 1 . 2% ) favoring the wrong ones.", "It explains why our method is able to predict the correct answer.", "To help understand the relationship between voters and choices, Table 9 provides an instance with voters and their voting weights to the choices.", "We show four types of voters : favoring the correct choice, favoring the wrong choice, logically reasonable but not favoring either choices, and unreasonable and irrelevant to both choices.", "We can see Q: The car ran out of gas.", "that the last two types of voters can hardly affect the method's prediction, because their voting weights are much smaller than the first two types of voters .", "References Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, and Anders Sgaard.", "The sensitivity of language models and humans to winograd schema perturbations.", "In ACL , pages 75907604.", "Self-supervised knowledge triplet learning for zero-shot question answering.", "Antoine Bosselut and Yejin Choi.", "2019.", "Dynamic knowledge graph construction for zero-shot commonsense question answering.", "CoRR .", "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai-tanya Malaviya, Asli Celikyilmaz, and Yejin Choi.", "2019.", "COMET: commonsense transformers for automatic knowledge graph construction.", "In ACL , pages 47624779.", "Joe Davison, Joshua Feldman, and Alexander M. Rush.", "2019.", "Commonsense knowledge mining from pre-trained models.", "In EMNLP-IJCNLP , pages 1173 1178.", "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", "2019.", "BERT: pre-training of deep bidirectional transformers for language understanding.", "In NAACL-HLT , pages 41714186.", "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi.", "2020.", "The curious case of neural text degeneration.", "In ICLR .", "Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi.", "2019.", "Cosmos QA: machine reading comprehension with contextual commonsense reasoning.", "In EMNLP , pages 23912401.", "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits.", "2020.", "Is BERT really robust?", "A strong baseline for natural language attack on text classifi-cation and entailment.", "In AAAI , pages 80188025.", "Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Han-naneh Hajishirzi.", "2020.", "Unifiedqa: Crossing format boundaries with a single QA system.", "In Findings, EMNLP , pages 18961907.", "Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz.", "2019.", "A surprisingly robust trick for the winograd schema challenge.", "In ACL , pages 48374842.", "Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren.", "2019.", "Kagnet: Knowledge-aware graph networks for commonsense reasoning.", "In EMNLP-IJCNLP , pages 28292839.", "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov.", "2019.", "Roberta: A robustly optimized BERT pretraining approach.", "CoRR .", "Kaixin Ma, Filip Ilievski, Jonathan Francis, Yonatan Bisk, Eric Nyberg, and Alessandro Oltramari.", "2020.", "Knowledge-driven data construction for zero-shot evaluation in commonsense question answering.", "CoRR .", "Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Mahsa Baktashmotlagh, and Gholamreza Haffari.", "2020.", "Cosmo: Conditional seq2seq-based mixture model for zero-shot commonsense question answering.", "In COLING , pages 53475359.", "We present a semantic-based question answering method, SEQA, which can answer commonsense questions more accurately and robustly in an unsupervised setting.", "Instead of directly scoring each answer choice, our method focuses on the probability of observing a choice's semantics.", "In the view of voting, SEQA first generates some plausible answers ( voters ) and then utilizes them to vote for the correct choice by considering the semantic similarity between each choice and each voter .", "Experiment results show that SEQA achieves the best performance on four datasets, and it is remarkably more robust than all the baselines when being attacked by TextFooler.", "This work was partly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096).", "This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005.", "This work was also supported by Huawei Noah's Ark Lab." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "result", "abstain", "abstain", "other", "other", "other" ]
[ "{ eaclark7,yangfeng,nasmith } @cs.washington.edu", "Abstract We introduce an approach to neural text generation that explicitly represents entities mentioned in the text.", "Entity representations are vectors that are updated as the text proceeds; they are designed specifically for narrative text like fiction or news stories.", "Our experiments demonstrate that modeling entities offers a benefit in two automatic evaluations: mention generation (in which a model chooses which entity to mention next and which words to use in the mention) and selection between a correct next sentence and a distractor from later in the same story.", "We also conduct a human evaluation on automatically generated text in story contexts; this study supports our emphasis on entities and suggests directions for further research.", "We consider the problem of automatically generating narrative text, a challenging problem at the junction of computational creativity and language technologies (Gervas, 2009).", "We are motivated in particular by potential applications in personalized education and assistive tools for human authors, though we believe narrative might also play a role in social conversational agents (Sor-doni et al., 2015).", "In this work, the term narra-tive text refers primarily to fiction but might also include news and other kinds of stories.", "A notable difference between longstanding work in natural language generation and recent neural models is in the treatment of entities and the words used to refer to them.", "Particularly in the generation of narrative text, character-centered generation has been shown important in character dialogue generation (Walker et al., 2011; Cavazza and Charles, 2005) and story planning (Cavazza et al., 2002).", "Neural models, on the other hand, treat mentions as just more words, relying on representation learning to relate the people in a story through the words alone.", "Centering Theory places entities at the cen-ter of explaining what makes text coherent (Grosz et al., 1995).", "In this work, we incorporate entities into neural text generation models; each entity in a story is given its own vector representation, which is updated as the story unfolds.", "These representations are learned specifically to predict words both mentions of the entity itself and also the following context.", "At a given moment in the story, the current representations of the entities help to predict what happens next.", "Consider the example in Figure", "1. Given the context, the reader expects the subsequent words and sentences of the passage to track the results of Emily approaching the dragon.", "Future text should include references to Emily's character and the dragon and the result of their interaction.", "The choice of entity generated next in the sentence will change what language should follow that mention and will shape and drive the direction of the story.", "For this reason, we propose using entity representations as context for generation.", "Of course, entities are not the only context needed for coherent language generation; previously generated content remains an important source of information.", "We use a simple, parameter-free method for combining preceding context with entity context within an end-to-end 2250 trainable neural language generator.", "We evaluate our model's performance through two automatic evaluation tasks.", "The first is a new mention generation task inspired by earlier work in referring expression generation (Dale and Reiter, 1995).", "The second is a sentence selection task inspired by coherence tests from Barzilay and La-pata (2008).", "Our model outperforms strong baselines on both tasks.", "We further conduct a human evaluation in which our model's generated sentences are compared to a strong baseline model.", "This evaluation elucidates strengths and weaknesses of our model and offers guidance for future work on narrative text generation.", "We propose an entity-based generation model (ENGEN ) 1 that combines three different sources of contextual information for text generation:", "1. The content that has already been generated within the current sentence", "2. The content that was generated in the previous sentence", "3. The current state of the entities mentioned in the document so far Each of these types of information is encoded in vector form, following extensive past work on recurrent neural network (RNN) language models.", "The first source of context is the familiar hidden state vector of the RNN; more precisely, our starting point is a sequence-to-sequence model (Sutskever et al., 2014).", "Representations of the second and third forms of context are discussed in 2.1 and 2.2, respectively.", "The combination of all three context representations is described in 2.3.", "As noted, our starting point is a sequence-to-sequence model (Sutskever et al., 2014); the last hidden state from the previous sentence offers a representation of the preceding context.", "We add an attention mechanism (Bahdanau et al., 2015).", "Let h t,i and h t 1 ,j be the LSTM hidden states of sentence t at timestep i and the previous sentence t 1 at timestep j , where j ranges over the number of words in the previous sentence.", "To summarize 1 Code available at github.com/eaclark07/ engen .", "the contextual information from the previous sentence for predicting the next word at timestep i +1 in sentence t , we have p t 1 ,i = X j i,j h t 1 ,j , where (1) i,j = exp( h t 1 ,j W a h t,i ) P j 0 exp( h t 1 ,j 0 W a h t,i ) (2) is the attention weight for h t 1 ,j .", "Unlike the defi-nition of attention in Bahdanau et al. (2015), here we use the bilinear product in Equation 2 to encourage correlation between h t,i and h t 1 ,j for coherence in text generation.", "In 2.3, we will combine this with h t,i for predicting the next word; we refer to that model as S2SA, and it serves as an entity-unaware baseline in our experiments.", "In S2SA, the context of a sentence is (at best) represented by compressing information about the words that have appeared in the previous sentence.", "Past research has suggested several approaches to capturing other contextual information.", "For example, Lau et al. (2017) and Ghosh et al. (2016) have sought to capture longer contexts by modeling topics.", "Recently, Ji et al. (2017) introduced a language model, ENTITYNLM, that adds explicit tracking of entities, which have their own representations that are updated as the document progresses.", "2 That model was introduced for analysis tasks, such as language modeling and coreference resolution, where the texts (and their coreference information) are given, and the model is used to score the texts to help resolve coreference relationships.", "3 ENTITY NLM's strong performance on language modeling suggests the potential of distributed entity representations as another source of contextual information for text generation.", "Inspired by that work, we maintain the dynamic representation of entities and use them as contextual information when generating text.", "In general, every entity in a document (e.g., EMILY in Figure 1) is assigned a vector representation; this vector is updated every time the entity is mentioned.", "This is entirely appropriate for generating narrative stories in which characters develop and change over long contexts.", "When we 2 Because space does not permit a full exposition of all the details of ENTITYNLM, we refer the interested reader to Ji et al. (2017).", "generate text, the model will have access to the current representation of every participant (i.e., every entity) in the story at that time (denoted by e i,t", "for entity i at timestep t ).", "When choosing which entity is referred to at timestep t , there are m + 1 options, where m is the number of entities tracked in the document so farthe ( m +1) th is for a new, previously unmentioned entity.", "Given that a word is part of an entity mention and given the previous hidden state, the probability that the word is referring to a given entity i { 1 , . . . , m + 1 } is proportional to: exp( h > t 1 W entity e i,t 1 + w > dist f ( i )) , (3) where W entity is a weight matrix for predicting the entities and w > dist f ( i ) is a term that takes into account distance features between the current and past entity mentions.", "Once an entity is selected, its vector is assigned to e current , which is used to generate the word w t .", "If the model decided the current word should not refer to an entity, then e current is still used and will be the representation of the most recently mentioned entity.", "If the choice is a new, previously unmentioned entity, then e current is initialized with a new embedding randomly generated from a normal distribution: u N ( r , 2 I ) , (4) where = 0 .", "01 and r is a parameter vector that is used to determine whether the next word should refer to an entity.", "Our new model merges S2SA and ENTITYNLM.", "Both provide a representation of context: respectively, the previous sentence's representation ( p t ) and the most salient entity's representation ( e current ).", "The hidden state h t 1 is, of course, also available, and is intended to capture local contextual effects.", "The challenge is how to combine these representations effectively for text generation.", "In this work, for simplicity, we choose a combination function without any extra parameters, and leave the detailed investigation of paramaterized composition functions as future work.", "We use a max-pooling function to form a context vector c t with the same dimensionality as h t 1 (and, of course, p t and e current ).", "Specifically, at time step t , each element of the combined context vector c t is calculated as follows.", "For k { 1 , ..., | c t |} , c t [ k ] = max( h t 1 [ k ] , p t [ k ] , e current [ k ]) .", "The max pooling technique originates from the design of convolutional neural networks and has been found useful elsewhere in NLP (Kalchbren-ner et al., 2014).", "Other alternatives, including average pooling, min pooling, and element-wise multiplication on all three vectors, were considered in informal preliminary experiments on development data and found less effective than max pooling.", "This combined context vector c t is used to generate word w t by calculating the probability of each word type in the vocabulary.", "We use a class-factored softmax function (Goodman, 2001; Baltescu and Blunsom, 2015).", "This choice greatly reduces the runtime of word prediction.", "In practice, we often find it gives better performance than standard softmax.", "denotes all of the model's parameters. X t represents all decisions at timestep t about the word (whether it is part of a entity mention, and if so, the entity the mention refers to, the length of the mention, and the word itself).", "These decisions are made by calculating probabilities for each available option using the current state of the neural network (a vector) and the current vector representations of the entities. Given the probabilities, the next word is assumed to have been randomly generated by sampling.", "While we might consider training the model to maximize the probability of the generated words directly, treating the entity-related variables as latent , this would create a mismatch between how we train and use the model. For generation, the model explicitly predicts not just the word, but also the entity information associated with that word. Training with latent variables is also expensive. For these reasons, we use the same training method used for ENTITYNLM, which requires", "training data annotated with mention and coreference information (entity clusters).", "In our experiments, we consider the combined model (ENGEN ) and two ablations: S2SA and a model similar to ENTITYNLM. Note that, unlike past work with previous-sentence context, S2SA uses max pooling for h t 1 and p t and class-factored softmax; our version of ENTITYNLM also uses max pooling and class-factored softmax. All of these models are trained in a similar way.", "The models are implemented using DyNet (Neu-big et al., 2017) with GPU support. We optimize with SGD, with a learning rate of = 0 . 1 . The dimensions of input layer, hidden layer, and entity representation are fixed at 512 (hyperparam-eter optimization might lead to better solutions). The input word embeddings are randomly initialized with the default method in DyNet and updated during training jointly with other parameters. For class-factored softmax, we use 160 Brown clusters (Brown et al., 1992; Liang, 2005) estimated from the training data.", "We trained all models on 312 adventure books from the Toronto Book Corpus (Zhu et al., 2015), with development and test sets of an additional 39 books each. We divided the books into smaller segments, where each segment includes up to 50 sentences. There are 33,279 segments in the training set, 4,577 in the dev. set, and 4,037 in the test set. This helps with memory efficiency, allowing us to train the model without building a recurrent neural network on the entire book.", "All the tokens in the data were downcased, and numbers were replaced with a special NUM token. The vocabulary was selected by replacing the lowest frequency (less than 10) word types with a special UNK token. There are 43 million tokens, and the vocabulary size is 35,443.", "To obtain entity annotations, we used the Stanford CoreNLP system (Clark and Manning, 2016a,b), version 3.8.0. From the coreference resolution results, we noticed that some entity mentions include more than 70 tokens, which is likely in error. To simplify the problem, we only kept the mentions consisting of three words or fewer,", "which covers more than 95% of the mentions in the training data. For mentions of more than three words, we replaced them with their head word, as determined by the Stanford CoreNLP system. While truncating these mentions sacrifices some information, we believe this preprocessing step is justifed as it retains most character names and pronouns, an especially important entity type for stories.", "Of course, the use of automatic annotations from a coreference system will introduce noise and risks confusing the entity-aware models. The benefit is that we were able to train on a much larger corpus than any existing coreference dataset (e.g., the CoNLL 2012 English shared task training set has only 1.3 million tokens; Pradhan et al., 2012). Further, a corpus of books offers language that is much closer to our intended narrative text generation applications. Our experiments aim to measure some aspects of our models' intrinsic correctness, though we emphasize that even if entity information is incorrect at training time, it may still be helpful.", "For all experiments, the same preprocessed dataset and trained models were used.", "The best models were selected based on development set log likelihood (Equation 6).", "The goal of our first experiment is to investigate each model's capacity to mention an entity in context.", "For example, in Figure 1, Emily and her are both possible mentions of EMILY 's character, but the two cannot be used interchangeably.", "Inspired by early work on referring expression generation (Dale and Reiter, 1995) and recent work on entity prediction (Modi et al., 2017), we propose a new task we call mention generation .", "Given a text and a slot to be filled with an entity mention, a model must choose among all preceding entity mentions and the correct mention.", "So if the model was choosing the next entity mention to be generated in Figure 1, it would select between all the previous entity mentions ( Emily , the dragon , Seth , and her ) and the correct mention ( she ).", "In our model, each candidate mention is augmented with the index of its entity.", "Therefore, performing well on this task requires choosing both the entity and the words used to refer to it; this notion of quality is our most stringent evaluation measure.", "It requires the greatest precision, as it is 2253 model cluster and mention cluster only mention only", "possible to select the correct mention but not the correct cluster and vice versa.", "Since S2SA does not model entities, we also compare systems on quality of mentions alone (without entity clusters).", "For completeness, we include cluster quality for the entity-aware models.", "Candidate lists for each task to generate the next mention in the example in Figure 1 are shown in Figure", "2. The experiment setup does not require manual creation of candidate lists.", "However, it makes the mention generation task even more challenging, because the size of a candidate list can exceed 100 mention candidates.", "We note that the difficulty of this task increases as we consider mention slots later and later in the document.", "The first mention generation choice is a trivial one, with a single candidate that is by definition correct.", "As more entity mentions are observed, the number of options will increase.", "4 To enable aggregation across contexts of all lengths, we report the mean average precision (MAP) of the correct candidates, where the language model scores are used to rank candidates.", "Baselines Along with the two ablated models (S2SA and ENTITYNLM), we include a reverse order baseline, which ranks mentions by recency 4 Note that the list of candidates may include duplicate entries with the same mention words and cluster.", "These are collapsed since they will have the same score under a language model.", "(the first element in the ranking is the most recent mention, then the second-most-recent, and so on).", "Results The ranking results of ENGEN and other systems are reported in Table", "1. A higher MAP score implies a better system.", "We measure the overall performance of all the systems, along with their performance on selecting the mention only and entity cluster only .", "Across all the evaluation measures, ENGEN gives the highest MAP numbers.", "Recall that S2SA does not have a component for entity prediction, therefore we only compare it with ENGEN in the mention only case.", "The difference between line 4 and line 2 on the mention only column shows the benefit of adding entity representations for text generation.", "The difference between lines 3 and 4 shows that local context also gives a small boost.", "Although the distance between the current slot and previous entity mention has been shown as a useful feature in coreference resolution (Clark and Manning, 2016b), line 1 shows distance alone is not an effective heuristic for mention generation.", "The sentence selection task is inspired by tests of coherence used to assess text generation components automatically, without human evaluation (Barzilay and Lapata, 2008).", "It serves as a sanity check, as it was conducted prior to full generation and human evaluations ( 7).", "Since the models under consideration are generative, they can be used to assign scores to candidate sentences, given a context.", "In our version of this task, we provide a model with n 1 = 49 sentences of preceding context, and offer two choices for the n th (50th) sentence: the actual 50th sentence or a distractor sentence randomly chosen from the next 50 sentences.", "A random baseline would achieve 50% accuracy.", "Because the distractor comes from the same 2254 Context All of a sudden, [ Emily ] 1 walked towards [ the dragon ] 2 .", "story (with similar language, characters, and top-ics) and relatively nearby (in 2% cases, the very next sentence), this is not a trivial task.", "Consider the example in Figure", "3. All of the sentences share lexical and entity information with the last line of the context.", "However, the first sentence immediately follows the context, while the second and third sentences are 10 lines and 48 lines away from the context, respectively.", "These entity and lexical similarities make distinguishing the actual sentence from the random sentence a challenging problem for the model.", "To select the sentence, the model scores each of the two candidate sentences based on its probability on words and all entity-related information as defined in Equation 6.", "(Both candidate sentences come from the preprocessed data and have the entity annotations described in", "4.) The sentence that receives the higher probability is chosen.", "For each of the 4,037 segments of context in the test set, we calculated the accuracy of each model at distinguishing the gold sentence from a distractor sentence.", "We ran this pairwise decision 5 times, each time with a different set of randomly selected distractor sentences and averaged their performance across all 5 rounds.", "Results The accuracy of each of the models is reported in Table", "2. The best performance is obtained by ENGEN , which is significantly better than the other two models ( p < 0 . 05 , binomial test).", "Unlike the mention generation task, S2SA beats ENTITYNLM at this task; this difference in performance shows the importance of local context.", "Although we performed five different rounds of random sampling to choose a sentence from the following segment as the distractor sentence, the standard deviations in Table 2 show the results are generally consistent across rounds, regardless of model mean accuracy s.d.", "The task motivating the work in this paper is narrative text generation.", "As such, evaluation by human judges of the quality of generated text is the best measure of our methods' quality.", "This study sim-plifies that evaluation by distilling the judgment down to a forced choice between contextually generated sentences generated by two different models.", "We use this task to investigate the strengths and weaknesses of our model in a downstream application.", "By asking humans to decide which sentences they prefer (in a given context) and to explain why, we can analyze where our model is helping and where text generation for stories still needs to improve, both with respect to entities and to other aspects of language.", "Here we control for training data and assess the benefit of including entity information for generating sentences to continue a story.", "We presented Amazon Mechanical Turkers 5 with a short excerpt from a story and two generated sentences, one generated by ENGEN and one generated by the entity-unaware S2SA.", "We asked them to choose a sentence to continue the story and to briefly explain why they made the choice they did, an approach similar to that in other story-based work such as Lukin et al. (2015).", "Note that we did not prime Turkers to focus on entities.", "Rather, the purpose of this experiment was to examine the performance of the model in a story generation setting and to get feedback on what people generally notice in generated text, not only with regard to entities.", "By keeping the task 5 We selected workers who had completed over 1,000 tasks, had over a 95% task acceptance rate, and were from the United States.", "open-ended, we can better analyze what people value in generated text for stories, and where our model supports that and where it doesn't.", "We used a subset of 50 randomly selected text segments from the test set described in", "4. However, for the human evaluation, we only used the final 60 words 6 of the story segments to keep the amount of reading and context manageable for Turkers.", "The models had access to the same subset of the context that the evaluator saw, not all 50 sentences from the original segment as in earlier experiments.", "For each context, we randomly sampled a sentence to continue the document, using each of two models: ENGEN and S2SA.", "These two models allowed us to see if adding the entity information noticeably improved the quality of the generation to evaluators.", "Initial experiments showed that fluency remains a problem for neural text generation.", "To reduce the effect of fluency on Turkers' judgments, we generated 100 samples for each context/model pair and then reranked them with a 5-gram language model (Heafield, 2011) that was trained on the same training data.", "The two top ranked sentences (one for ENGEN and one for S2SA) were presented in random order and without reference to the models that generated them.", "For each of the 50 contexts, we had 11 Turkers pick a candidate sentence to continue the story passage.", "Turkers were paid $0.10 for each evaluation they completed.", "In total, 93 Turkers completed the task.", "The number of passages Turkers completed ranged from 1 to all 50 story segments (with an average of 6.1).", "While the quantitative portion of this task would be easy to scale, the qualitative portion is not; we kept the human evaluation small, running it until reaching saturation.", "Results Each pair of sentences was evaluated by 11 Turkers, so each of the passages could receive up to 11 votes for ENGEN .", "For 27 of the passages, the majority of Turkers (6 or more) chose the sentence from ENGEN , versus 23 passages that went to the baseline model, S2SA.", "The scores were close in many cases, and for several passages, Turkers noted in their explanations that while they were required to choose one sentence, both would have worked.", "Examples of the context and sentence pairs that were strongly in favor of ENGEN , strongly in favor of S2SA, and that received 6 We included the whole sentence that contained the 60th word, so most documents were slightly over 60 words.", "When asked to explain why they selected the sentence they did, a few Turkers attributed their choices to connections between pronouns in ENGEN 's suggestions to characters mentioned in the story excerpt.", "However, a more frequent occurrence was Turkers citing a mismatch in entities as their reason for rejecting an option.", "For example, one Turker said they chose ENGEN 's sentence because the S2SA sentence began with she, and there were no female characters in the context.", "Interestingly, while pronouns not mentioned in the context were cited as a reason for rejecting candidate sentences, new proper noun entity mentions were seen as an asset by some.", "One Turker chose a S2SA sentence that referenced Richard, a character not present in the context, saying, I believe including Richard as a name gives some context of the characters of the story.", "This demonstrates the importance of the ability to generate new entities, in addition to referring back to exisiting entities.", "However, due to the open-ended nature of the task, the reasons Turkers cited for selecting sentences extended far beyond characters and entity mentions.", "In fact, most of the responses credited other aspects of stories and language for their choice.", "Some chose sentences based on their potential to move the plot forward or because they fit better with the theme or the tone of the context.", "Others made decisions based on whether they thought a sentence of dialogue or a descriptive sentence was more appropriate, or a statement versus a question.", "Many made their decisions using deeper knowledge about the story's context.", "For example, in the second story listed in Table 3, one Turker used social knowledge to choose the S2SA sentence because the introduction makes the man sound like he is a stranger, so I'm proud of you' seems out of place.", "In this case, even though the sentence from ENGEN correctly generated pronouns that refer to entities in the context, the mismatch in the social aspects of the context and ENGEN 's sentence contributed to 7 out of 11 Turkers choosing the vaguer S2SA sentence.", "While neither S2SA nor ENGEN explicitly encodes these types of information, these qualities are important to human evaluators of generated text and should influence future work on narrative text generation.", "Beyond past work already discussed, we note few additional important areas of research relevant to our work.", "Neural models for text generation Natural language generation is a classic problem in artificial intelligence.", "Recent use of RNNs (Sutskever et al., 2011) has reignited interest in this area.", "Our work provides an additional way to address the wellknown drawback of RNNs: they use only limited context.", "This has been noted as a serious problem in conversational modeling (Sordoni et al., 2015) and text generation with multiple sentences (Lau et al., 2017).", "Recent work on context-aware text generation (or the related task, language modeling) has studied the possibilities of using different granularity of context.", "For example, in the scenario of response generation, Sordoni et al. (2015) showed a consistent gain by including one more utterance from context.", "Similar effects are also observed by adding topical information for language modeling and generation (Lau et al., 2017).", "Entity-related generation Choosing an appropriate entity and its mention has a big influence on the coherence of a text, as studied in Centering Theory (Grosz et al., 1995).", "Recently, the ENTITYNLM proposed by Ji et al. (2017) shows that adding entity related information can improve the performance of language modeling, which potentially provides a method for entity related text generation.", "We build on ENTITYNLM, combining entity context with previous-sentence context, and demonstrate the importance of the latter in a coherence test ( 6).", "The max pooling combination we propose is simple but effective.", "Another line of related work on recipe generation included special treatment of entities as candidates in generating sentences, but not as context (Kiddon et al., 2016).", "Bosselut et al. (2018) also generated recipes, using neural process networks to track and update entity representations with the goal of modeling actions and their causal effects on entities.", "However, the entity representations are frozen during generation, rather than dynamically updated.", "Mention generation Our novel mention generation task is inspired by both referring expression generation (Dale and Reiter, 1995) and entity prediction (Modi et al., 2017).", "The major difference is that, unlike referring expression generation, our 2257 task includes all the mentions used for entities, including pronouns; we believe it is a more realistic test of a model's handling of entities.", "Krahmer and Van Deemter (2012) give a comprehensive survey on early work of referring expression generation.", "The mention only version of the mention generation task is related to cloze tests like the Children's Book Test (Hill et al., 2016), the Who-did-What Test (Onishi et al., 2016), and the CNN and Daily Mail test described by Hermann et al. (2015).", "However, unlike these tests, we predict all entity mentions in the text and from a dynamically expanding candidate list, typically much larger than those in other cloze tests.", "Story generation Work in story generation has incorporated structure and context through event representations (Martin et al., 2017) or semantic representations, like story graphs (Rishes et al., 2013; Elson and McKeown, 2009).", "In this work, we provide evidence for the value of entity representations as an additional form of structure, following work by Walker et al. (2011), Cavazza and Charles (2005), and Cavazza et al. (2002).", "Inspired by Centering Theory and the importance of characters in stories, we propose a neural model for text generation that incorporates context via entities.", "We found that combining entity representations with representations of the previous sentence and the hidden state (from a neural language model) improves performance on three tasks: mention generation, sentence selection, and sentence generation.", "By collecting human evaluations of sentences generated with entity information, we find that while coherently referring back to entities in the context was cited by several Turkers as a factor in their decision, the introduction of new entities and moving the narrative forward were also valued.", "Therefore, while entities are a useful structure to incorporate in story generation, other structures may also prove useful, including other aspects of discourse (e.g., discourse relations or planning) or story-related structures (e.g., narrative structure).", "This research was supported in part by a NSF graduate research fellowship, the DARPA CwC program through ARO (W911NF-15-1-0543), and", "the NVIDIA Corporation with the donation of the Tesla GPU used for this research.", "The authors also thank Maarten Sap for his feedback and helpful suggestions, the anonymous reviewers for their useful comments, and the participants who took part in our study." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "objective", "objective", "other", "other", "other", "abstain", "method", "other", "other", "method", "other", "method", "objective", "result", "objective", "abstain", "other", "other", "other" ]
[ "Recently, many studies are emerging towards building a retrieval-based dialogue system that is able to effectively leverage background knowledge (e.g., documents) when conversing with humans.", "However, it is non-trivial to collect large-scale dialogues that are naturally grounded on the background documents, which hinders the effective and adequate training of knowledge selection and response matching.", "To overcome the challenge, we consider decomposing the training of the knowledge-grounded response selection into three tasks including: 1) query-passage matching task; 2) query-dialogue history matching task; 3) multi-turn response matching task, and joint learning all these tasks in a unified pre-trained language model.", "The former two tasks could help the model in knowledge selection and comprehension, while the last task is designed for matching the proper response with the given query and background knowledge (dialogue history).", "By this means, the model can be learned to select relevant knowledge and distinguish proper response, with the help of ad-hoc retrieval corpora and a large number of ungrounded multi-turn dialogues.", "Experimental results on two benchmarks of knowledge-grounded response selection indicate that our model can achieve comparable performance with several existing methods that rely on crowd-sourced data for training.", "Along with the very recent prosperity of artificial intelligence empowered conversation systems in the spotlight, many studies have been focused on building human-computer dialogue systems (Wen et al., 2017; Zhang et al., 2020) with either retrieval-based methods (Wang et al., 2013; Wu et al., 2017;", "Whang et al., 2020) or generation-based methods (Li et al., 2016; Serban et al., 2016; Zhang et al., 2020), which both predict the response with only the given context.", "In fact, unlike a person who may associate the conversation with the background knowledge in his or her mind, the machine can only capture limited information from the query message itself.", "As a result, it is difficult for a machine to properly comprehend the query, and to predict a proper response to make it more engaging.", "To bridge the gap of the knowledge between the human and the machine, researchers have begun to simulating this motivation by grounding dialogue agents with background knowledge (Zhang et al., 2018; Dinan et al., 2019; Li et al., 2020), and lots of impressive results have been obtained.", "In this paper, we consider the response selection problem in knowledge-grounded conversion and specify the background knowledge as unstructured documents that are common sources in practice.", "The task is that given a conversation context and a set of knowledge entries, one is required 1): to select proper knowledge and grasp a good comprehension of the selected document materials (knowledge selection); 2): to distinguish the true response from a candidate pool that is relevant and consistent with both the conversation context and the background documents (knowledge matching).", "While there exists a number of knowledge documents on the Web, it is non-trivial to collect large-scale dialogues that are naturally grounded on the documents for training a neural response selection model, which hinders the effective and adequate training of knowledge selection and response matching.", "Although some benchmarks built upon crowd-sourcing have been released by recent works (Zhang et al., 2018; Dinan et al., 2019), the relatively small training size makes it hard for the dialogue models to generalize on other domains or topics (Zhao et al., 2020).", "Thus, in this work, we focus on a more challenging and practical scenario, learning a knowledge-grounded conversation agent without any knowledge-grounded dialogue data, which is known as zero-resource settings.", "Since knowledge-grounded dialogues are unavailable in training, it raises greater challenges for learning the grounded response selection model.", "Fortunately, there exists a large number of unstructured knowledge (e.g., web pages or wiki articles), passage search datasets (e.g., query-passage pairs coming from ad-hoc retrieval tasks) (Khattab and Zaharia, 2020) and multi-turn dialogues (e.g., context-response pairs collected from Reddit) (Hen-derson et al., 2019), which might be beneficial to the learning of knowledge comprehension, knowledge selection and response prediction respectively.", "Besides, in multi-turn dialogues, the background knowledge and conversation history (excluding the latest query) are symmetric in terms of the information they convey, and we assume that the dialogue history can be regarded as another format of background knowledge for response prediction.", "Based on the above intuition, in this paper, we consider decomposing the training of the grounded response selection task into several sub-tasks, and joint learning all those tasks in a unified model.", "To take advantage of the recent breakthrough on pretraining for natural language tasks, we build the grounded response matching models on the basis of a pre-trained language model (PLMs) (Devlin et al., 2019; Yang et al., 2019), which are trained with large-scale unstructured documents from the web.", "On this basis, we further train the PLMs with query-passage matching task, query-dialogue history matching task, and multi-turn response matching task jointly.", "The former two tasks could help the model not only in knowledge selection but also in knowledge (and dialogue history) comprehension, while the last task is designed for matching the proper response with the given query and background knowledge (dialogue history).", "By this means, the model can be learned to select relevant knowledge and distinguish proper responses, with the help of a large number of ungrounded dialogues and ad-hoc retrieval corpora.", "During the testing stage, we first utilize the trained model to select proper knowledge, and then feed the query, dialogue history, selected knowledge, and the response candidate into our model to calculate the final matching degree.", "Particularly, we design two strategies to compute the final matching score.", "In the first strategy, we directly concatenate the selected knowledge and dialogue history as a long sequence of background knowledge and feed into the model.", "In the second strategy, we first compute the matching degree between each query-knowledge and the response candidates, and then integrate all matching scores.", "We conduct experiments with benchmarks of knowledge-grounded dialogue that are constructed by crowd-sourcing, such as the Wizard-of-Wikipedia Corpus (Dinan et al., 2019) and the CMU DoG Corpus (Zhou et al., 2018a).", "Evaluation results indicate that our model achieves comparable performance on knowledge selection and response selection with several existing models trained on crowd-sourced benchmarks.", "Our contributions are summarized as follows: To the best of our knowledge, this is the first exploration of knowledge-grounded response selection under the zero-resource setting.", "We propose decomposing the training of the grounded response selection models into several sub-tasks, so as to empower the model through these tasks in knowledge selection and response matching.", "We achieve a comparable performance of response selection with several existing models learned from crowd-sourced training sets.", "Early studies of retrieval-based dialogue focus on single-turn response selection where the input of a matching model is a message-response pair (Wang et al., 2013; Ji et al., 2014; Wang et al., 2015).", "Recently, researchers pay more attention to multiturn context-response matching and usually adopt the representation-matching-aggregation paradigm to build the model.", "Representative methods include the dual-LSTM model (Lowe et al., 2015), the sequential matching network (SMN) (Wu et al., 2017), the deep attention matching network (DAM) (Zhou et al., 2018b), interaction-over-interaction network (IoI) (Tao et al., 2019) and multi-hop selector network (MSN) (Yuan et al., 2019).", "More recently, pre-trained language models (Devlin et al., 2019; Yang et al., 2019) have shown significant benefits for various NLP tasks, and some researchers have tried to apply them on multi-turn response selection.", "Vig and Ramea (2019) exploit BERT to represent each utterance-response pair and fuse these representations to calculate the matching score; Whang et al. (2020) and Xu et al. (2020) treat the context as a long sequence and conduct context-response matching with BERT.", "Besides, Gu et al. (2020a) integrate speaker embeddings into BERT to improve the utterance representation in multi-turn dialogue.", "To bridge the gap of the knowledge between the human and the machine, researchers have investigated into grounding dialogue agents with unstructured background knowledge (Ghazvininejad et al., 2018; Zhang et al., 2018; Dinan et al., 2019).", "For example, Zhang et al. (2018) build a persona-based conversation data set that employs the interlocu-tor's profile as the background knowledge; Zhou et al. (2018a) publish a data where conversations are grounded in articles about popular movies; Dinan et al. (2019) release another document-grounded data with Wiki articles covering a wide range of topics.", "Meanwhile, several retrieval-based knowledge-grounded dialogue models are proposed, such as document-grounded matching network (DGMN) (Zhao et al., 2019) and dually interactive matching network (DIM) (Gu et al., 2019) which let the dialogue context and all knowledge entries interact with the response candidate respectively via the cross-attention mechanism.", "Gu et al. (2020b) further propose to pre-filter the context and the knowledge and then use the filtered context and knowledge to perform the matching with the response.", "Besides, with the help of gold knowledge index annotated by human wizards, Dinan et al. (2019) consider joint learning the knowledge selection and response matching in a multi-task manner or training a two-stage model.", "In this section, we first formalize the knowledge-grounded response matching problem and then introduce our method from preliminary to response matching with PLMs to details of three pre-training tasks.", "We first describe a standard knowledge-grounded response selection task such as Wizard-of-Wikipedia.", "Suppose that we have a knowledge-grounded dialogue data set D = { k i , c i , r i , y i } Ni =1 where k i = { p 1 , p 2 , . . . , p l k } represents a collection of knowledge with p j the j -th knowledge entry (a.k.a., passage) and l k is the number of entries; c i = { u 1 , u 2 , . . . , u l c } denotes multi-turn dialogue context with u j the j -th turn and l c is the number of dialogue turns.", "It should be noted that in this paper we denote the latest turn u l c as dialogue query q i , and dialogue context except for query is denoted as h i = c i / { q i } .", "r i stands for a candidate response.", "y i = 1 indicates that r i is a proper response for c i and k i , otherwise y i = 0 .", "N is the number of samples in data set.", "The goal knowledge-grounded dialogue is to learn a matching model g ( k, c, r ) from D , and thus for any new ( k, c, r ) , g ( k, c, r ) returns the matching degree between r and ( k, c ) .", "Finally, one can collect the matching scores of a series of candidate responses and conduct response ranking.", "Zero-resource grounded response selection then is formally defined as follows.", "There is a standard multi-turn dialogue dataset D c = { q i , h i , r i } Ni =1 and an ad-hoc retrieval dataset D p = { q i , p i , z i } Mi =1 where q i is a query and p i stands a candidate passage, z i = 1 indicates that p i is a relevant passage for q i , otherwise z i = 0 .", "Our goal is to learn a model g ( k, h, q, r ) from D c and D p , and thus for any new input ( k, h, q, r ) , our model can select proper knowledge k from k and calculate the matching degree between r and ( k, q, h ) .", "Pre-trained language models have been widely used in many NLP tasks due to the strong ability of language representation and understanding.", "In this work, we consider building a knowledge-grounded response matching model with BERT.", "Specifically, given a query q , a dialogue history h = { u 1 , u 2 , ..., u n h } where u i is the i -th turn in the history, a response candidate r = { r 1 , r 2 , ..., r l r } with l r words, we concatenate all sequences as a single consecutive tokens sequence with special tokens, which can be represented as x = { [ CLS ] , u 1 , [ SEP ] , . . . , [ SEP ] , u l h , [ SEP ] , q, [ SEP ] , r, [ SEP ] } .", "[ CLS ] and [ SEP ] are classification symbol and segment separation symbol respectively.", "For each token in x , BERT uses a summation of three kinds of embeddings, including WordPiece embedding (Wu et al., 2016), segment embedding, and position embedding.", "Then, the embedding sequence of x is fed into BERT, giving us the contextualized embedding sequence { E [CLS] , E 2 , . . . , E l x } .", "E [CLS] is an aggregated representation vector that contains the Input Dialogue History or Knowledge Response Pre-trained Language Model (BERT) ,, OutputLayer MLP Token Embeddings Position Embeddings Segment Embeddings Response Matching Task Query Query-Dialogue History Matching Task Query-Passage Matching Task [Background Knowledge] [Response] [Query] !\"# $ ! #%& $ \" #%& $ # #%& ' #%& ( !", "semantic interaction information between the query, history, and response candidate.", "Finaly, E [CLS] is fed into a non-linear layer to calculate the final matching score, which is formulated as: g ( h, q, r ) = ( W 2 tanh( W 1 E [ CLS ] + b 1 ) + b 2 ) (1) where W { 1 , 2 } and b { 1 , 2 } is training parameters for response selection task, is a sigmoid function.", "In knowledge-grounded dialogue, each dialogue is associated with a large collection of knowledge entries k = { p 1 , p 2 , . . . , p l k } 1 .", "The model is required to select m ( m 1) knowledge entries based on semantic relevance between the query and each knowledge, and then performs the response matching with the query, dialogue history and the highly-relevant knowledge.", "Specifically, we denote k = ( p 1 , . . . , p m ) as the selected knowledge entries, and feed the input sequence x = { [ CLS ] , p 1 , [ SEP ] , . . . , [ SEP ] , p m , [ SEP ] , u 1 , [ SEP ] , . . . , [ SEP ] , u l h , [ SEP ] , q, [ SEP ] , r, [ SEP ] } to BERT.", "The final matching score g ( k, h, q, r ) can be computed based on [ CLS ] representation.", "On the basis of BERT, we further jointly train it with three tasks including 1) query-passage matching task ; 2) query-dialogue history matching task ; 3) multi-turn response matching task .", "The former two tasks could help the model in knowledge selection and knowledge (and dialogue history) comprehension, while the last task is designed for matching the proper response with the given query and background knowledge (dialogue 1 The scale of the knowledge referenced by each dialogue usually exceeds the limitation of input length in PLMs. history).", "By this means, the model can be learned to select relevant knowledge and distinguish the proper response, with the help of a large number of ungrounded dialogues and ad-hoc retrieval corpora.", "Although there exist a huge amount of conversation data on social media, it is hard to collect sufficient dialogues that are naturally grounded on knowledge documents.", "Existing studies (Dinan et al., 2019) usually extract the relevant knowledge before the response matching or jointly train the knowledge retrieval and response selection in a multi-task manner.", "However, both methods need in-domain knowledge-grounded dialogue data (with gold knowledge label) to train, making the model hard to generalize to a new domain.", "Fortunately, the ad-hoc retrieval task (Harman, 2005; Khattab and Zaharia, 2020) in the information retrieval area provides a potential solution to simulate the process of knowledge seeking.", "To take advantage of the parallel data in the ad-hoc retrieval task, we consider incorporating the query-passage matching task, so as to help the knowledge selection and knowledge comprehension for our task.", "Given a query-passage pair ( q, p ) , we first concatenate the query q and the passage p as a single consecutive token sequence with special tokens separating them, which is formulated as: S qp = { [CLS] , w p 1 , . . . , w pn p , [SEP] , w q 1 , . . . , w qn q } (2) where w pi , w qj denotes the i -th and j -th token of knowledge entry p and query q respectively.", "For each token in S qpi , token , segment and position embeddings are summated and fed into BERT.", "It is worth noting that here we set the segment embedding of the knowledge to be the same as the dialogue history.", "Finally, we feed the output representation of [ CLS ] E qp[CLS] into a MLP to obtain the final query-passage matching score g ( q, p ) .", "The loss function of each training sample for query-passage matching task is defined by L p ( q, p + , p 1 , . . . , p n p ) = log( e g ( q,p + ) e g ( q,p + ) + (cid:80) p j =1 e g ( q,p j ) ) (3) where p + stands for the positive passage for q , p j is the j -th negative passage and p is the number of negative passage.", "In multi-turn dialogues, the conversation history (excluding the latest query) is a piece of supplementary information for the current query and can be regarded as another format of background knowledge during the response matching.", "Besides, due to the natural sequential relationship between dialogue turns, the dialogue query usually shows a strong semantic relevance with the previous turns in the dialogue history.", "Inspired by such characteristics, we design a query-dialogue history matching task with the multi-turn dialogue context, so as to enhance the capability of the model to comprehend the dialogue history with the given dialogue query and to rank relevant passages with these pseudo query-passage pairs.", "Specifically, we first concatenate the dialogue history into a long sequence.", "The task requires the model to predict whether a query q = { w q 1 , . . . , w qn q } and a dialogue history sequence h = { w h 1 , . . . , w hn h } are consecutive and relevant.", "We concatenate two sequences into a single consecutive sequence with [ SEP ] tokens, S qh = { [CLS] , w h 1 , . . . , w hn h , [SEP] , w q 1 , . . . , w qn q } (4) For each word in S qh , token , segment and position embeddings are summated and fed into BERT.", "Finally, we feed E qh[CLS] into a MLP to obtain the final query-history matching score g ( q, h ) .", "The loss function of each training sample for query-history matching task is defined by L h ( q, h + , h 1 , . . . , h n h ) = log( e g ( q,h + ) e g ( q,h + ) + (cid:80) h j =1 e g ( q,h j ) ) (5) where h + stands for the true dialogue history for q , h j is the j -th negative dialogue history randomly sampled from the training set and h is the number of sampled dialogue history.", "The above two tasks are designed for empowering the model to knowledge or history comprehension and knowledge selection.", "In this task, we aim at training the model to match reasonable responses based on dialogue history and query.", "Since we treat the dialogue history as a special form of background knowledge and they share the same segment embeddings in the PLMs, our model can acquire the ability to identify the proper response with either dialogue history or the background knowledge through the multi-turn response matching task.", "Specifically, we format the multi-turn dialogues as query-history-response triples and requires the model to predict whether a response candidate r = { w r 1 , . . . , w rn r } is appropriate for a given query q = { w q 1 , . . . , w qn q } and a concatenated dialogue history sequence h = { w h 1 , . . . , w hn h } .", "Concretely, we concatenate three input sequences into a single consecutive tokens sequence with [ SEP ] tokens, S hqr = { [CLS] , w h 1 , . . . , w hn h , [SEP] , w q 1 , . . . , w qn q , [SEP] , w r 1 , . . . , w rn r } (6) Similarly, we feed an embedding sequence of which each entry is a summation of token , segment and position embeddings into BERT.", "Finally, we feed E hqr[CLS] into a MLP to obtain the final response matching score g ( h, q, r ) .", "L r ( h, q, r + , r 1 , . . . , r r ) = log( e g ( h,q,r + ) e g ( h,q,r + ) + (cid:80) n r i = j e g ( h,q,r j ) ) (7)", "where r + is the true response for a given q and h , r j is the j -th negative response candidate randomly sampled from the training set and r is the number of negative response candidate.", "L final = L p + L h + L r", "After learning model from D c and D p , we first rank { p i } n k i =1 according to g ( q, k i ) and then select top m knowledge entries { p 1 , . . . , p m } for the subsequent response matching process.", "Here we design two strategies to compute the final matching score g ( k, h, q, r ) .", "In the first strategy, we directly concatenate the selected knowledge and dialogue history as a long sequence of background knowledge and feed into the model to obtain the final matching score, which is formulated as, g ( k, h, q, r ) = g ( p 1 . . . p m c, q, r ) (9) where denotes the concatenation operation.", "In the second strategy, we treat each selected knowledge entry and the dialogue history equally as the background knowledge, and compute the matching degree between each query, background knowledge, and the response candidates with the trained model.", "Consequently, the matching score is defined as an integration of a set of knowledge-grounded response matching scores, formulated as, g ( k, h, q, r ) = g ( h, q, r )+ max i (0 ,m ) g ( p i , q, r ) (10) where m is the number of selected knowledge entries.", "We name our model with the two strategies as PTKGC cat and PTKGC sep respectively.", "We compare the two learning strategies through empirical studies, as will be reported in the next section.", "Training Set.", "We adopt MS MARCO passage ranking dataset (Nguyen et al., 2016) built on Bing's search for query-passage matching task.", "The dataset contains 8 .", "8 M passages from Web pages gathered from Bing's results to real-world queries and each passage contains an average of 55 words.", "Each query is associated with sparse relevance judgments of one (or very few) passage marked as relevant.", "The training set contains about 500 k pairs of query and relevant passage, and another 400M pairs of query and passages that have not been marked as relevant, from which the negatives are sampled in our task.", "For the query-dialogue history matching task and multi-turn response matching task, we use the multi-turn dialogue corpus constructed from the Reddit (Dziri et al., 2018).", "The dataset contains more than 15 million dialogues and each dialogue has at least 3 utterances.", "After the pre-processing, we randomly sample 2 .", "28 M/ 20 K dialogues as the training/validation set.", "For each dialogue session, we regard the last turn as the response, the last but one as the query, and the rest as the positive dialogue history.", "The negative dialogue histories are randomly sampled from the whole dialogue set.", "On average, each dialogue contains 4 .", "3 utterances, and the average length of the utterances is 42 .", "5 .", "Test Set.", "We tested our proposed method on the Wizard-of-Wikipedia (WoW) (Dinan et al., 2019) and CMU DoG (Zhou et al., 2018a).", "Both datasets contain multi-turn dialogues grounded on a set of background knowledge and are built with crowd-sourcing on Amazon Mechanical Turk.", "In WoW, the given knowledge collection is obtained from Wikipedia and covers a wide range of topics or domains, while in CMU DoG, the underlying knowledge focuses on the movie domain.", "Unlike CMU DoG where the golden knowledge index for each turn is unknown, the golden knowledge index for each turn is provided in WoW.", "Two configurations (e.g., test-seen and test-unseen) are provided in WoW.", "Following existing works (Dinan et al., 2019; Zhao et al., 2019), positive responses are true responses from humans and negative ones are randomly sampled.", "The ratio between positive and negative responses is 1 : 99 for WoW and 1 : 19 for CMU DoG.", "More details of the two benchmarks are shown in Appendix A.1.", "Evaluation Metrics.", "Following previous works on knowledge-grounded response selection (Gu et al., 2020b; Zhao et al., 2019), we also employ recall n at k R n @k (where n = 100 for WoW and n = 20 for CMU DoG and k = { 1 , 2 , 5 } ) as the evaluation metrics.", "Our model is implemented by PyTorch (Paszke et al., 2019).", "Without loss of generality, we select English uncased BERT base (110M) as the matching model.", "During the training, the maximum lengths of the knowledge (a.k.a., passage), the dialogue history, the query, and the response candidate were set to 128 , 120 60 , and 40 .", "Intuitively, the last tokens in the dialogue history and the previous Models Test Seen Test Unseen R@1 R@2 R@5 R@1 R@2 R@5 IR Baseline 17.8 -14.2 -BoW MemNet 71.3 -33.1 -Two-stage Transformer 84.2 -63.1 -Transformer MemNet 87.4 -69.8 -DIM (Gu et al., 2019) 83.1 91.1 95.7 60.3 77.8 92.3 FIRE (Gu et al., 2020b) 88.3 95.3 97.7 68.3 84.5 95.1 PTKGC cat 85.7 94.6 98.2 65.5 82.0 94.7 PTKGC sep 89.5 96.7 98.9 69.6 85.8 96.3 Table 1: Evaluation results on the test set of WoW.", "tokens in the query and response candidate are more important, so we cut off the previous tokens for the context but do the cut-off in the reverse direction for the query and response candidate if the sequences are longer than the maximum length.", "We set a batch size of 32 for multi-turn response matching and query-dialogue history matching, and 8 for query-document matching in order to train these tasks jointly under the circumstance of training examples inequality.", "We set p = 6 , h = 1 and r = 12 for the query-passage matching, the query-dialogue history matching and the multiturn response matching respectively.", "Particularly, the negative dialogue histories are sampled from other training instances in a batch.", "The model is optimized using Adam optimizer with a learning rate set as 5 e 6 .", "The learning rate is scheduled by warmup and linear decay.", "A dropout rate of 0 .", "1 is applied for all linear transformation layers.", "The gradient clipping threshold is set as 10 .", "0 .", "Early stopping on the corresponding validation data is adopted as a regularization strategy.", "During the testing, we vary the number of selected knowledge-entries m { 1 , . . . , 15 } and set m = 2 for PTKGC cat and set m = 14 for PTKGC sep because they achieve the best performance.", "Since the characteristics of the two data sets are different (only WoW provides the golden knowledge label), we compare the proposed model with the baselines on both data sets individually.", "Baselines on WoW.", "1) IR Baseline (Dinan et al., 2019) uses simple word overlap for response selection; 2) BoW MemNet (Dinan et al., 2019) is a memory network where knowledge entries are embedded via bag-of-words representation, and the model learns the knowledge selection and response matching jointly; 3) Transformer MemNet (Dinan et al., 2019) is an extension of BoW MemNet, Models R@1 R@2 R@5 Starspace (Wu et al., 2018) 50.7 64.5 80.3 BoW MemNet (Zhang et al., 2018) 51.6 65.8 81.4 KV Profile Memory (Zhang et al., 2018) 56.1 69.9 82.4 Transformer MemNet (Mazare et al., 2018) 60.3 74.4 87.4 DGMN (Zhao et al., 2019) 65.6 78.3 91.2 DIM (Gu et al., 2019) 78.7 89.0 97.1 FIRE (Gu et al., 2020b) 81.8 90.8 97.4 PTKGC cat 61.6 73.5 86.1 PTKGC sep 66.1 77.8 88.7 Table 2: Evaluation results on the test set of CMU DoG.", "and the dialogue history, response candidate and knowledge entries are encoded with Transformer encoder (Vaswani et al., 2017) pre-trained on a large data set.", "4) Two-stage Transformer (Dinan et al., 2019) trains two separately models for knowledge selection and response retrieval respectively.", "A best-performing model on the knowledge selection task is used for the dialogue retrieval task.", "Baselines on CMU DoG 1) Starspace (Wu et al., 2018) selects the response by the cosine similarity between a concatenated sequence of dialogue context, knowledge, and the response candidate represented by StarSpace (Wu et al., 2018); 2) BoW MemNet (Zhang et al., 2018) is a memory network with the bag-of-words representation of knowledge entries as the memory items; 3) KV Profile Memory (Zhang et al., 2018) is a key-value memory network grounded on knowledge profiles; 4) Transformer MemNet (Mazare et al., 2018) is similar to BoW MemNet and all utterances are encoded with a pre-trained Transformer; 5) DGMN (Zhao et al., 2019) lets the dialogue context and all knowledge entries interact with the response candidate respectively via the cross-attention; 6) DIM (Gu et al., 2019) is similar to DGMN and all utterance are encoded with BiLSTMs; 7) FIRE (Gu et al., 2020b) first filters the context and knowledge and then use the filtered context and knowledge to perform the iterative response matching process.", "Performance of Response Selection.", "Table 1 and Table 2 report the evaluation results of response selection on WoW and CMU DoG where PTKGC cat and PTKGC sep represent the final matching score computed with the first strategy (Equation 9) and the second strategy (Equation 10) respectively.", "We can see that PTKGC sep is Models Wizard of Wikipedia CMU DoG Test Seen Test Unseen R@1 R@2 R@5 R@1 R@2 R@5 R@1 R@2 R@5 PTKGC sep 89.5 96.7 98.9 69.6 85.8 96.3 66.1 77.8 88.7 PTKGC sep (q) 70.6 79.7 86.8 55.9 70.8 83.4 47.3 58.8 75.0 PTKGC sep (q+h) 84.9 93.9 97.8 64.9 81.7 94.3 59.5 72.3 86.1 PTKGC sep (q+k) 89.5 96.4 98.6 67.0 84.0 96.0 62.7 73.8 84.8 PTKGC sep , m = 1 85.6 94.4 97.9 66.7 82.8 94.3 60.4 72.5 86.0 PTKGC sep , m = 1 L p 84.7 93.5 97.5 63.4 80.5 94.0 58.7 70.8 85.6 PTKGC sep , m = 1 L h 84.9 93.7 97.6 65.5 81.7 94.1 59.4 71.4 85.3 Table 3: Ablation study.", "consistently better than PTKGC cat over all metrics on two data sets, demonstrating that individually representing each knowledge-query-response triple with BERT can lead to a more optimal matching signal than representing a single long sequence.", "Our explanation to the phenomenon is that there is information loss when a long sequence composed of the knowledge and dialogue history passes through the deep architecture of BERT.", "Thus, the earlier different knowledge entries and dialogue history are fused together, the more information of dialogue history or background knowledge will be lost in matching.", "Particularly, on the WoW, in terms of R@1, our PTKGC sep achieves a comparable performance with the existing state-of-the-art models that are learned from the crowd-sourced training set, indicating that the model can effectively learn how to leverage external knowledge feed for response selection through the proposed pre-training approach.", "Notably, we can observe that our PTKGC sep performs worse than DIM and FIRE on the CMU DoG.", "Our explanation to the phenomenon is that the dialogue and knowledge in CMU DoG focus on the movie domain while our train data including ad-hoc retrieval corpora and multi-turn dialogues come from the open domain.", "Thus, our model may not select proper knowledge entries and can not well recognize the semantics clues for response matching due to the domain shift.", "Despite this, PTKGC sep can still show better performance than several existing models, such as Transformer MemNet and DGMN, though PTKGC sep does not access any training examples in the benchmarks.", "Performance of Knowledge Selection.", "We also assess the ability of models to predict the knowledge selected by human wizards in WoW data.", "The results are shown in Table", "4. We can find that the performance of our method is comparable with various supervised methods trained on the gold knowledge index.", "In particular, on the test-seen, our model is slightly worse than Transformer (w/ pretrain), while on the test-unseen, our model achieves slightly better results.", "The results demonstrate the advantages of our pretraining tasks and the good generalization ability of our model.", "Ablation Study.", "We conduct a comprehensive ablation study to investigate the impact of different inputs and different tasks.", "First, we remove the dialogue history, knowledge, and both of them from the model, which is denoted as PTKGC sep (q+k), PTKGC sep (q+h) and PTKGC sep (q) respectively.", "According to the results of the first four rows in Table 3, we can find that both the dialogue history and knowledge are crucial for response selection as removing anyone will generally cause a performance drop on the two data.", "Besides, the background knowledge is more critical for response selection as removing the background knowledge causes more significant performance degradation than removing the dialogue history.", "as PTKGC sep -X, where X {L p , L h } meaning query-passage matching task and query-dialogue history matching task respectively.", "Table 4 shows the ablation results of knowledge selection.", "We can find that both tasks are useful in the learning of knowledge selection, and query-passage matching plays a dominant role since the performance of knowledge selection drops dramatically when the task is removed from the pre-training process.", "The last two rows in Table 3 show the ablation results of response selection.", "We report the ablation results when only 1 knowledge is provided since the knowledge recalls for different ablated models and the full model are very close when m is large ( m = 14 ).", "We can see that both tasks are helpful and the performance of response selection drops more when removing the query-passage matching task.", "Particularly, L p plays a more important role and the performance on test-unseen of WoW drops more obvious when removing each training task.", "To further investigate the impact of our pretraining tasks on the performance of the multiturn response selection (without considering the grounded knowledge), we conduct an ablation study and the results are shown in Table", "5. We can observe that the performance of the response matching model (no grounded knowledge) drops obviously when removing one of the pretraining tasks or both tasks.", "Particularly, the query-passage matching task contributes more to the response selection.", "The impact of the number of selected knowledge.", "We further study how the number of selected knowledge ( m ) influences the performance of PTKGC sep .", "Figure 2 shows how the performance of our model changes with respect to different numbers of selected knowledge.", "We observe that the performance increases monotonically until the knowledge number reaches a certain value, and then stable when the number keeps increasing.", "The results are rational because more knowledge entries can provide more useful 0.85 0.86 0.87 0.88 0.89 0.90 0.856 0.864 0.869 0.875 0.877 0.882 0.885 0.887 0.889 0.891 0.892 0.893 0.894 0.895 0.895 SeenUnseen 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 The number of selected knowledge ( m ) 0.65 0.66 0.67 0.68 0.69 0.70 0.667 0.672 0.675 0.682 0.682 0.681 0.682 0.682 0.685 0.687 0.688 0.690 0.692 0.696 0.696 R 100 @ 1 Figure 2: The performance of response selection across different number of selected knowledge.", "information for response matching, but when the knowledge becomes enough, the noise will be brought to matching.", "In this paper, we study response matching in knowledge-grounded conversations under a zero-resource setting.", "In particular, we propose decomposing the training of the knowledge-grounded response selection into three tasks and joint train all tasks in a unified pre-trained language model.", "Our model can be learned to select relevant knowledge and distinguish proper response, with the help of ad-hoc retrieval corpora and amount of multiturn dialogues.", "Experimental results on two benchmarks indicate that our model achieves a comparable performance with several existing methods trained on crowd-sourced data.", "In the future, we would like to explore the ability of our proposed method in retrieval-augmented dialogues.", "We would like to thank the anonymous reviewers for their constructive comments.", "This work was supported by the National Key Research and Development Program of China (No. 2020YFB1406702), the National Science Foundation of China (NSFC No. 61876196) and Beijing Outstanding Young Scientist Program (No. BJJWZYJH012019100020098).", "Rui Yan is the corresponding author, and is supported as a young fellow at Beijing Academy of Artificial Intelligence (BAAI)." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "method", "method", "method", "abstain", "abstain", "objective", "method", "objective", "objective", "method", "result", "objective", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "other", "abstain", "method", "objective", "abstain", "result", "objective", "other", "other", "other" ]
[ "Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance.", "Such techniques, however, are not directly applicable to more open and practical scenarios where the type set is not restricted by KB schema and includes a vast number of free-form types.", "To model the underlying label correlations without access to manually annotated label structures, we introduce a novel label-relational inductive bias, represented by a graph propagation layer that effectively encodes both global label co-occurrence statistics and word-level similarities.", "On a large dataset with over 10,000 free-form types, the graph-enhanced model equipped with an attention-based matching module is able to achieve a much higher recall score while maintaining a high-level precision.", "Specifically, it achieves a 15 .", "3% relative F1 improvement and also less inconsistency in the outputs.", "We further show that a simple modification of our proposed graph layer can also improve the performance on a conventional and widely-tested dataset that only includes KB-schema types.", "1 1 Introduction Fine-grained entity typing is the task of identifying specific semantic types of entity mentions in given contexts.", "In contrast to general entity types ( e.g. , organization, event), fine-grained types ( e.g. , political party, natural disaster) are often more informative and can provide valuable prior knowledge for a wide range of NLP tasks, such as coreference resolution (Durrett and Klein, 2014), relation extraction (Yaghoobzadeh et al., 2016) and question answering (Lee et al., 2006; Yavuz et al., 2016).", "In practical scenarios, a key challenge of entity typing is to correctly predict multiple ground-truth type labels from a large candidate set that covers a wide range of types in different granularities.", "In this sense, it is essential for models to effectively capture the inter-label correlations.", "For instance, if an entity is identified as a criminal, then the entity must also be a person, but it is less likely for this entity to be a police officer at the same time.", "When ignoring such correlations and considering each type separately, models are often inferior in performance and prone to inconsistent predictions.", "As shown in Table 1, an existing model that independently predicts different types fails to reject predictions that include apparent contradictions.", "Existing entity typing research often address this aspect by explicitly utilizing a given type hierarchy to design hierarchy-aware loss functions (Ren et al., 2016b; Xu and Barbosa, 2018) or enhanced type label encodings (Shimaoka et al., 2017) that enable parameter sharing between related types.", "These methods rely on the assumption that the underlying type structures are predefined in entity typing datasets.", "For benchmarks annotated with the knowledge base (KB) guided distant supervision, this assumption is often valid since all types are from KB ontologies and naturally follow tree-like structures.", "However, since knowledge bases are inherently incomplete (Min et al., 2013), existing KBs only include a limited set of entity types.", "Thus, models trained on these datasets fail to generalize to lots of unseen types.", "In this work, we investigate entity typing in a more open scenario where the type set is not restricted by KB schema and includes over 10,000 free-form types (Choi et al., 2018).", "As most of the types do not follow any predefined structures, methods that explicitly incorporate type hierarchies cannot be straightforwardly applied here.", "To effectively capture the underlying label correlations without access to known type structures, we propose a novel label-relational inductive bias, represented by a graph propagation layer that operates in the latent label space.", "Specifi-cally, this layer learns to incorporate a label affinity matrix derived from global type co-occurrence statistics and word-level type similarities.", "It can be seamlessly coupled with existing models and jointly updated with other model parameters.", "Empirically, on the Ultra-Fine dataset (Choi et al., 2018), the graph layer alone can provide a significant 11 .", "9% relative F1 improvement over previous models.", "Additionally, we show that the results can be further improved ( 11 . 9% 15 . 3% ) with an attention-based mention-context matching module that better handles pronouns entity mentions.", "With a simple modification, we demonstrate that the proposed graph layer is also beneficial to the widely used OntoNotes dataset, despite the fact that samples in OntoNotes have lower label multiplicity ( i.e. , average number of ground-truth types for each sample) and thus require less label-dependency modeling than the Ultra-Fine dataset.", "To summarize, our major contribution includes: We impose an effective label-relational bias on entity typing models with an easy-to-implement graph propagation layer, which allows the model to implicitly capture type dependencies; We augment our graph-enhanced model with an attention-based matching module, which constructs stronger interactions between the mention and context representations; Empirically, our model is able to offer significant improvements over previous models on the Ultra-Fine dataset and also reduces the cases of inconsistent type predictions.", "Fine-Grained Entity Typing The task of fine-grained entity typing was first thoroughly investigated in (Ling and Weld, 2012), which utilized Freebase-guided distant supervision (DS) (Mintz et al., 2009) for entity typing and created one of the early large-scale datasets.", "Although DS provides an efficient way to annotate training data, later work (Gillick et al., 2014) pointed out that entity type labels induced by DS ignore entities' local context and may have limited usage in context-aware applications.", "Most of the following research has since focused on testing in context-dependent scenarios.", "While early methods (Gillick et al., 2014; Yogatama et al., 2015) on this task rely on well-designed loss functions and a suite of handcraft features that represent both context and entities, Shimaoka et al. (2016) proposed the first attentive neural model which outperformed feature-based methods with a simple cross-entropy loss.", "Modeling Entity Type Correlations To better capture the underlying label correlations, Shimaoka et al. (2017) employed a hierarchical label encoding method and AFET (Ren et al., 2016a) used the predefined label hierarchy to identify noisy annotations and proposed a partial-label loss to reduce such noise.", "A recent work (Xu and Barbosa, 2018) proposed hierarchical loss normalization which alleviated the noise of too specific types.", "Our work differs from these works in that we do not rely on known label structures and aim to learn the underlying correlations from data.", "Rabinovich and Klein (2017) recently proposed a structure-prediction approach which used type correlation features.", "The inference on their learned factor graph is approximated by a greedy decoding algorithm, which outperformed unstructured methods on their own dataset.", "Instead of using an explicit graphical model, we enforce a relational bias on model parameters, which does not introduce extra burden on label decoding.", "Specifically, the task we consider takes a raw sentence C as well as an entity mention span M inside", "C as inputs, and aims to predict the correct type labels T m of M from a candidate type set T , which includes more than 10,000 free-form types.", "The entity span M here can be named entities, nominals and also pronouns.", "The ground-truth type set T m here usually includes more than one types (approximately five types on average), making this task a multi-label classification problem.", "In this section, we first briefly introduce the neural architecture to encode raw text inputs.", "Then we describe the matching module we use to enhance the interaction between the mention span and the context sentence.", "Finally, we move to the label decoder, on which we impose the label-relational bias with a graph propagation layer that encodes type co-occurrence statistics and word-level similarities.", "Figure 1 provides a graphical overview of our model, with 1a) illustrating both the text encoders and the matching module, and 1b) showing an example of graph propagation.", "Our base model to encode the context and the mention span follows existing neural approaches (Shimaoka et al., 2016; Xu and Barbosa, 2018; Choi et al., 2018).", "To encode the context, we first apply a standard Bi-LSTM, which takes GloVe (Pennington et al., 2014) embeddings and position embeddings (three vectors representing positions before, inside or after the mention span) as inputs and outputs the hidden states at each time step t [1 , l c ] .", "With the derived hidden states C h R l c h c , we then apply a self-attentive encoder (McCann et al., 2017) on the top to get the final context representation C .", "For the entity mention span, we concatenate the features derived by a character-level CNN and a similar self-attentive encoder.", "We denote the final mention representation as M .", "2 4.2 Mention-Context Interaction Since most previous datasets only consider named entities , a simple concatenation of the two features [ C ; M ] followed by a linear output layer (Shi-maoka et al., 2016, 2017) usually works reasonably well when making predictions.", "This suggests that M itself provides important information for recognizing entity types.", "However, as in our target dataset, a large portion of entity mentions are actually pronouns , such as he or it, this kind of mentions alone provide only limited clues about general entity types ( e.g. , he is a person) but little information about fine-grained types.", "In this case, directly appending representation of pronouns does not provide extra useful information for making fine-grained predictions.", "Thus, instead of using the concatenation operator, we propose to construct a stronger interaction between the mention and context with an attention-based matching module, which has shown its effectiveness in recent natural language inference models (Mou et al., 2016; Chen et al., 2017).", "Formally consider the mention representation M R h m and context's hidden feature C h 2 Please refer to (Shimaoka et al., 2017) and (Choi et al., 2018) for more detailed descriptions.", "R l c h c , where l c indicates the number of tokens in the context sentence and h m , h c denote feature dimensions.", "We first project the mention feature M into the same dimension space as C h with a linear layer ( W 1 R h m h c ) and a tanh function 3 : m proj = tanh ( WT 1 M ) , (1) then we perform bilinear attention matching between m proj and C h , resulting in an affinity matrix A with dimension A R 1 l c : A = m proj W a C h , (2) where W a R h c h c is a learnable matrix.", "If we consider the mention feature as query and the context as memory, we can use the affinity matrix to retrieve the relevant parts in the context: A = softmax ( A ) (3) r c = A C h .", "With the projected mention representation m proj and the retrieved context feature r c , we define the following interaction operators:", "r = ( W r [ r c ; m proj ; r c m proj ]) g = ( W g [ r c ; m proj ; r c m proj ]) o = g r + (1 g ) m proj ,", "where ( ) is a gaussian error linear unit (Hendrycks and Gimpel, 2016) and r is the fused context-mention feature; ( ) indicates a sigmoid function and g is the resulting gating function, which controls how much information in mention span itself should be passed down.", "We expect the model to focus less on the mention representation when it is not informative.", "The concatenation [ r c ; m proj ; r c m proj ] here is supposed to capture different aspects of the interactions.", "To emphasize the context's impact, we finally concatenate the extracted context feature ( C ) with the output ( o ) of the matching module ( f = [ o ; C ] ) for prediction.", "For approaches that ignore the underlying label correlations, the type predictions are considered as N independent binary classification problems, with N being the number of types.", "If we denote the feature extracted by any arbitrary neural model 3 tanh here is used to make m proj in the same scale as C h , which was the output of a tanh function inside LSTM.", "We can see that every row vector of W o is responsible for predicting the probability of one particular type.", "We will refer the row vectors as type vectors for the rest of this paper.", "As these type vectors are independent, the label correlations are only implicitly captured by sharing the model parameters that are used to extract f .", "We argue that the paradigm of parameter sharing is not enough to impose strong label dependencies and the values of type vectors should be better constrained.", "A straightforward way to impose the desired constraints is to add extra regularization terms on W o .", "We first tested several auxiliary loss functions based on the heuristics from GloVe (Pen-nington et al., 2014), which operates on the type co-occurrence matrix.", "However, the auxiliary losses only offer trivial improvements in our experiments.", "Instead, we find that directly imposing a model-level inductive bias on the type vectors turns out to be a more principled solution.", "This is done by adding a graph propagation layer over randomly initialized W o and generating the updated type vectors W (cid:48) o , which is used for final prediction.", "Both W o and the graph convolution layer are learned together with other model parameters.", "We view this layer as the key component of our model and use the rest of this section to describe how we create the label graph and compute the propagation over the graph edges.", "Label Graph Construction In KB-supervised datasets, the entity types are usually arranged in tree-like structures.", "Without any prior about type structures, we consider a more general graph-like structure.", "While the nodes in the graph straightforwardly represent entity types, the meaning of the edges is relatively vague, and the connections are also unknown.", "In order to create meaningful edges using training data as the only resource, we utilize the type co-occurrence matrix: if two type t 1 and t 2 both appear to be the true types of a particular entity mention, we will add an edge between them.", "In other words, we are using the co-occurrence statistics to approximate the pair-wise dependencies and the co-occurrence matrix now serves as the adjacent matrix.", "Intuitively, if t 2 co-appears with t 1 more often than another type t 3 , the probabilities of t 1 and t 2 should have stronger depen-Person Engineer Politician Musician Figure 2: A snippet of the underlying type co-occurrence graph.", "dencies and the corresponding type vectors should be more similar in the vector space.", "In this sense, we expect each type vector to effectively capture the local neighbor structure on the graph.", "Correlation Encoding via Graph Convolution To encode the neighbor information into each node's representation, we follow the propagation rule defined in Graph Convolution Network (GCN) (Kipf and Welling, 2016).", "In particular, with the adjacent or co-occurrence matrix A , we define the following propagation rule on W o : W (cid:48) o = D 12 A D 12 W o T (9) A = A + IN .", "Here T R d f d f is the transformation matrix and IN is an identity matrix used to add self-connected edges.", "D is a diagonal degree matrix with D ii = (cid:80) j A ij , which is used to normalize the feature vectors such that the number of neighbors does not affect the scale of transformed feature vectors.", "In our experiments, we find that an alternative propagation rule W (cid:48) o = D 1 AW o T (11) works similarly well and is more efficient as it involves less matrix multiplications.", "If we look closely and take each node out, the propagation can be written as W (cid:48) o [ i, :] = 1 (cid:80) j A ij ( (cid:88) j A ij W o [ j, :] T ) .", "From this formula, we can see that the propagation is essentially gathering features from the first-order neighbors.", "In this way, the prediction on type t i is dependent on its neighbor types.", "Compared to original GCNs that often use multi-hop propagations ( i.e. , multiple graph layers connected by nonlinear functions) to capture higher-order neighbor structures.", "We only apply one-hop propagation and argue that high-order label dependency is not necessarily beneficial in our scenario and might introduce false bias.", "A simple illustration is shown in Figure 2.", "We can see that propagating 2-hop information introduces undesired inductive bias, since types that are more than 1-hop away ( e.g. , Engineer and Politi-cian) usually do not have any dependencies.", "In fact, some of the 2-hop type pairs can be contradictory types ( e.g. , police and prisoner).", "This hypothesis is consistent with our experiment results: adding more than one graph layer leads to worse results.", "Additionally, we also omit GCN's nonlinear activation which introduces unnecessary constraints on the scale of W (cid:48) o , with which we calculate the unscaled scores before calculating the probability via a sigmoid function.", "As the type labels are all written as text phrases, an interesting question is whether we can exploit the semantics provided by pre-trained word embeddings to improve entity typing.", "We explore this possibility by using the cosine similarity of word embeddings.", "We first calculate type embeddings by simply summing the embeddings of all tokens in the type name.", "Then we build a label affinity matrix A word by calculating pair-wise cosine similarities.", "With the assumption that word-level similarity measures some degree of label dependency, we propose to integrate A word into the graph convolution layer following A (cid:48) word = ( A word + 1) / 2 (13) W (cid:48) o = D 1 ( A + A (cid:48) word ) W o T. (14) Here Equation 13 scales the similarity value into (0 , 1] to avoid negative edge weights, which might introduce numerical issues when calculating D 1 .", "is a trainable parameter used to weight the im-pact of word-level similarities.", "As will be shown in Section 5, this simple augmentation provides further improvement over our original model.", "Datasets Our experiments mainly focus on the Ultra-Fine entity typing dataset which has 10,331 labels and most of them are defined as freeform text phrases.", "The training set is annotated with heterogeneous supervisions based on KB, Wikipedia and head words in dependency trees, 0 5 10 15 20 Number of Labels 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 R a t i o OntoNotes Ultra-Fine Figure 3: Label multiplicity distribution of the datasets.", "resulting in about 25.2M 4 training samples.", "This dataset also includes around 6,000 crowdsourced samples.", "Each of these samples has five groundtruth labels on average.", "For a fair comparison, we use the original test split of the crowdsourced data for evaluation.", "To better understand the capability of our model, we also test our model on the commonly-used OntoNotes (Gillick et al., 2014) benchmark.", "It is worth noting that this dataset is much smaller and has lower label multiplicity than the Ultra-Fine dataset, i.e. , each sample only has around 1.5 labels on average.", "Figure 3 shows a comparison of these two datasets.", "Baselines For the Ultra-Fine dataset, we compare our model with AttentiveNER (Shimaoka et al., 2016) and the multi-task model proposed with the Ultra-Fine dataset.", "Note that other models that require pre-defined type hierarchy are not applicable to this dataset .", "For experiments on OntoNotes, in addition to the two neural baselines for Ultra-Fine, we compare with several existing methods that explicitly utilize the pre-defined type structures in loss functions.", "Namely, these methods are AFET (Ren et al., 2016a), LNR (Ren et al., 2016b) and NFETC (Xu and Barbosa, 2018).", "Evaluation Metrics On Ultra-Fine, we first evaluate the mean reciprocal rank (MRR), macro precision(P), recall (R) and F1 following existing research.", "As P, R and F1 all depend on a cho-sen threshold on probabilities, we also consider a more transparent comparison using precision-recall curves.", "On OntoNotes, we use the standard metrics used by baseline models: accuracy, macro, and micro F1 scores.", "4 Choi et al. (2018) use the licensed Gigaword to build part of the dataset, while in our experiments we only use the open-sourced training set which has approximately 6M training samples.", "Implementation Details Most of the model hyperparameters, such as embedding dimensions, learning rate, batch size, dropout ratios on context and mention representations are consistent with existing models.", "Since the mention-context matching module brings more parameters, we apply a dropout layer over the extracted feature f to avoid overfitting.", "We list all the hyperparameters in the appendix.", "Models for OntoNotes are trained with standard binary cross-entropy (BCE) losses defined on all candidate labels.", "When training on Ultra-Fine, we adopt the multi-task loss proposed in Choi et al. (2018) which divides the cross-entropy loss into three separate losses over different type granularities.", "The multi-task objective avoids penalizing false negative types and can achieve higher recalls.", "We report the results on Ultra-Fine in Table 2.", "It is worth mentioning that our model, denoted as LABELGCN, is trained using the unlicensed training set which is smaller than the one used by compared baselines.", "Even though our model signifi-cantly outperforms the baselines, for a fair comparison, we first test our model using the same decision threshold (0.5) used by previous models.", "In terms of F1, our best model (LABELGCN) outperforms existing methods by a large margin.", "Compared to Choi et al. (2018), our model improves on both precision and recall significantly.", "Compared to the AttentiveNER trained with standard BCE loss, our model achieves much higher recall but performs worse in precision.", "This is due to the fact that when trained with BCE loss, the model usually retrieves only one label per sample and these types are mostly general types 5 which are easier to predict.", "With higher recalls or more retrieved types, achieving high precision requires being accurate on fine-grained types, which are often harder to predict.", "As the precision and recall scores both rely on the decision threshold, different models or different metrics can have different optimal thresholds.", "As shown by the L ABELGCN + thresh tuning entry in Table 2, with threshold tuning, our model beats baselines in all metrics.", "We also see that recall is usually lagging behind precision on this dataset, indicating that F1 score is mainly affected 5 According to the results of our own implementation of BCE-trained model which achieves similar performance as AttentiveNER.", "by the recall and tuning towards recall can usually lead to higher F1 scores.", "For more transparent comparisons, we show the precision-recall curves in Figure 4.", "These data points are based on the validation performance given by 50 equal-interval thresholds between 0 and 1.", "We can see there is a clear margin between our model and the multi-task baseline method (LabelGCN vs Choi et al.).", "To quantify the effect of different model components, we report the performance of model variants in Table 2 and Figure 4.", "We can clearly see that the graph convolution layer is the most essential component.", "The information provided by word embedding is useful and can further improve both precision and recall.", "Although Table 2 seems to indicate the interaction module decreases the precision, we can see from Figure 4 that with a proper threshold, the enhanced interaction actually improves both precision and recall.", "In term of this, we recommend future research to use PR curves for more accurate model analysis.", "As discussed in Section 4.2, the mention representation of pronouns provide limited information about fine-grained types.", "We investigate the effect of the enhanced mention-context interaction by analyzing the decomposed performance on pronouns and other kinds of entities.", "From the results in Table 3, we can see that the enhanced interaction offers consistent improvements over pronouns entities and also maintains the performance on other kinds of entities.", "To gain insights on the improvements provided by our model, we manually analyze 100 error cases 6 of the baseline model (Choi et al. (2018) with threshold 0.5) and see if our model can generate high-quality predictions.", "We first observe that many errors actually results from incomplete annotations.", "This suggests models' precision scores are often underestimated in this dataset.", "We discuss several typical error cases shown in Table 4 and list more samples in the appendix (Table 7).", "A key observation is that while the baseline model tends to make inconsistent predictions (see examples 1, 2, 3), our model can avoid predicting such inconsistent type pairs.", "This indeed validates our model's ability to encode label correlations.", "We also notice that our model is more sensitive to gender information indicated by pronouns, while 6 The baseline model achieves the lowest precision on these 100 samples.", "the baseline model sometimes holds the gender-indicating predictions and predict other types, our model predicts the gender-indicating types more often (examples 3, 4, 5).", "We conjecture that our model learns this easy way to maintain precision.", "For cases that both models fail, some of them actually require background knowledge (example 4) to make accurate predictions.", "Another typical case is that both models predict some other entities in the context (example 5).", "We think this potentially results from the data bias introduced by the head-word supervision.", "To better understand the requirements for applying our model, we further evaluate on the OntoNotes dataset.", "Here we do not apply the proposed mention-context matching module as this dataset does not include any pronoun entities.", "To obtain more reliable co-occurrence statistics, we use the augmented training data released by Choi et al. (2018).", "However, since the training set is still much smaller than that of the Ultra-Fine dataset, the derived co-occurrence statistics are relatively noisy and might introduce undesired bias.", "We thus add an additional residual connection to our graph convolution layer, which allows the model to selectively use co-occurrence statistics.", "This indeed gives us improvements over previous state-of-the-arts, as shown in Table 5.", "However, compared to Ultra-Fine, the margin of the improvement is smaller.", "In view of the key differences of these two datasets, we highlight two key requirements for our proposed model to offer substantial improvements.", "First, there should be a large-scale training set so that the derived co-occurrence statistics can reasonably reflect the true label correlations.", "Second, the samples themselves should also have higher label multiplicity.", "In fact, most of the samples in OntoNotes only have 1 or 2 labels.", "This property actually alleviates the need for models to capture label dependencies.", "In this paper, we present an effective method to impose label-relational inductive bias on fine-grained entity typing models.", "Specifically, we utilize a graph convolution layer to incorporate type co-occurrence statistics and word-level type similarities.", "This layer implicitly captures the label correlations in the latent vector space.", "Along with an attention-based mention-context matching module, we achieve significant improvements over previous methods on a large-scale dataset.", "As our method does not require external knowledge about the label structures, we believe our method is general enough and has the potential to be applied to other multi-label tasks with plain-text labels.", "This research was supported in part by DARPA Grant D18AP00044 funded under the DARPA YFA program.", "The authors are solely responsible for the contents of the paper, and the opinions expressed in this publication do not reflect those of the funding agencies." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "method", "other", "other" ]
[ "Hate speech classifiers trained on imbalanced datasets struggle to determine if group identifiers like gay or black are used in offensive or prejudiced ways.", "Such biases manifest in false positives when these identifiers are present, due to models' inability to learn the contexts which constitute a hateful usage of identifiers.", "We extract post-hoc explanations from fine-tuned BERT classifiers to detect bias towards identity terms.", "Then, we propose a novel regularization technique based on these explanations that encourages models to learn from the context of group identifiers in addition to the identifiers themselves.", "Our approach improved over baselines in limiting false positives on out-of-domain data while maintaining or improving in-domain performance.", "1 Introduction Hate speech detection is part of the ongoing effort to limit the harm done by oppressive and abusive language (Waldron, 2012; Gelber and McNamara, 2016; Gagliardone et al., 2015; Mohan et al., 2017).", "Performance has improved with access to more data and more sophisticated algorithms (e.g., Mon-dal et al., 2017; Silva et al., 2016; Del Vigna12 et al., 2017; Basile et al., 2019), but the relative sparsity of hate speech requires sampling using keywords (e.g., Olteanu et al., 2018) or sampling from environments with unusually high rates of hate speech (e.g., de Gibert et al., 2018; Hoover et al., 2019).", "Modern text classifiers thus struggle to learn a model of hate speech that generalizes to real-world applications (Wiegand et al., 2019).", "A specific problem found in neural hate speech classifiers is their over-sensitivity to group identifiers like Muslim, gay, and black, which are only hate speech when combined with the right Authors contributed equally Code is available here [F]or many Africans, the most threatening kind of ethnic hatred is black against black. New York Times There is a great discrepancy between whites and blacks in SA. It is [because] blacks will always be the most backward race in the world.", "Anonymous user, Gab.com Figure 1: Two documents which are classified as hate speech by a fine-tuned BERT classifier.", "context (Dixon et al., 2018).", "In Figure 1 we see two documents containing the word black that a fine-tuned BERT model predicted to be hate speech, while only the second occurs in a hateful context.", "Neural text classifiers achieve state-of-the-art performance in hate speech detection, but are uninterpretable and can break when presented with unexpected inputs (Niven and Kao, 2019).", "It is thus difficult to contextualize a model's treatment of identifier words.", "Our approach to this problem is to use the Sampling and Occlusion (SOC) explanation algorithm, which estimates model-agnostic, posthoc feature importance (Jin et al., 2020).", "We apply this approach to the Gab Hate Corpus (Kennedy et al., 2020), a new corpus labeled for hate-based rhetoric, and an annotated corpus from the Stormfront white supremacist online forum (de Gibert et al., 2018).", "Based on the explanations generated via SOC, which showed models were biased towards group identifiers, we then propose a novel regularization-based approach in order to increase model sensitivity to the context surrounding group identifiers.", "We apply regularization during training to the explanation-based importance of group identifiers, coercing models to consider the context surrounding them.", "We find that regularization reduces the attention given to group identifiers and heightens the importance of the more generalizable features of hate speech, such as dehumanizing and insulting language.", "In experiments on an out-of-domain test set of news articles containing group identifiers, which are heuristically assumed to be non-hate speech, we find that regularization greatly reduces the false positive rate, while in-domain, out-of-sample classification performance is either maintained or improved.", "Our work is conceptually influenced by Warner and Hirschberg (2012), who formulated hate speech detection as disambiguating the use of offensive words from abusive versus non-abusive contexts.", "More recent approaches applied to a wide typology of hate speech (Waseem et al., 2017), build supervised models trained on annotated (e.g., Waseem and Hovy, 2016; de Gibert et al., 2018) or heuristically-labeled (Wulczyn et al., 2017; Olteanu et al., 2018) data.", "These models suffer from the highly skewed distributions of language in these datasets (Wiegand et al., 2019).", "Research on bias in classification models also influences this work.", "Dixon et al. (2018) measured and mitigated bias in toxicity classifiers towards social groups, avoiding undesirable predictions of toxicity towards innocuous sentences containing tokens like gay .", "Similarly, annotators' biases towards certain social groups were found to be magni-fied during classifier training Mostafazadeh Davani et al. (2020).", "Specifically within the domain of hate speech and abusive language, Park et al. (2018) and Sap et al. (2019) have defined and studied gender-and racial-bias, emphasizing issues of undetected dialect variation and imbalanced training data, respectively.", "Techniques for bias reduction in these settings include data augmentation by training on less biased data, term swapping during training (i.e., swapping gender words), and using debiased word embeddings (Bolukbasi et al., 2016).", "Complementing these works, we directly manipulate models' modeling of the context surrounding identifier terms by regularizing explanations of these terms.", "Specifically, we use post-hoc explanation algorithms to interpret and modulate fine-tuned language models like BERT (Devlin et al., 2018), which achieve state of the art performance on many hate speech detection tasks (MacAvaney et al., 2019; Mandl et al., 2019).", "We focus on post-hoc explanation approaches, which interpret model predictions without elucidating the mechanisms by which the model works (Guidotti et al., 2019).", "These explanations reveal either word-level (Ribeiro et al., 2016; Sundararajan et al., 2017) or phrase-level importance (Murdoch et al., 2018; Singh et al., 2019) of inputs to predictions.", "We selected two public corpora for our experiments which highlight the rhetorical aspects of hate speech, versus merely the usage of slurs and explicitly offensive language (see Davidson et al., 2017).", "The Gab Hate Corpus (GHC; Kennedy et al., 2020) is a large, random sample ( N = 27,655) from the Pushshift.io data dump of the Gab network , which we have annotated according to a typology of hate-based rhetoric, a construct motivated by hate speech criminal codes outside the U.S. and social science research on prejudice and dehumanization.", "Gab is a social network with a high rate of hate speech (Zannettou et al., 2018; Lima et al., 2018) and populated by the Alt-right (An-thony, 2016; Benson, 2016).", "Similarly with respect to domain and definitions, de Gibert et al. (2018) sampled and annotated posts from the Stormfront web domain (Meddaugh and Kay, 2009) and annotated at the sentence level according to a similar annotation guide as used in the GHC.", "Train and test splits were randomly generated for Stormfront sentences (80/20) with hate taken as a positive binary label, and a test set was compiled from the GHC by drawing a random strati-fied sample with respect to the target population tag (possible values including race/ethnicity target, gender, religious, etc.).", "A single hate label was created by taking the union of two main labels, human degradation and calls for violence.", "Training data for the GHC (GHC train ) included 24,353 posts with 2,027 labeled as hate, and test data for the GHC (GHC test ) included 1,586 posts with 372 labeled as hate.", "Stormfront splits resulted in 7,896 (1,059 hate) training sentences, 979 (122) validation, and 1,998 (246) test.", "To establish and define our problem more quantitatively, we analyze hate speech models' bias towards group identifiers and how this leads to false positive errors during prediction.", "We analyze the top features of a linear model and use post-hoc explanations applied to a fine-tuned BERT model in order to measure models' bias towards these terms.", "We then establish the effect of these tendencies on https://files.pushshift.io/gab/ 0 10 20 # Removed Identity Terms 0.30 0.35 0.40 0.45 0.50 0.55 0.60 F 1 Hate Detection Gab Stormfront 0 10 20 # Removed Identity Terms 0.76 0.78 0.80 0.82 0.84 0.86 0.88 0.90 A cc u r a c y NYT Adversarial Figure 2: BoW F1 scores (trained on GHC train and evaluated on GHC test ) as a function of how many group identifiers are removed (left).", "model predictions using an adversarial-like dataset of New York Times articles.", "We apply our analyses on two text classifiers, logistic regression with bag of words features and a fine-tuned BERT model (Devlin et al., 2018).", "The BERT model appends a special CLS token at the beginning of the input sentence and feeds the sentence into stacked layers of Transformer (Vaswani et al., 2017) encoders.", "The representation of the CLS token at the final layer is fed into a linear layer to perform 2-way classification (hate or non-hate).", "Model configuration and training details can be found in the Section A.3.", "We first determine a model's sensitivity towards group identifiers by examining the models themselves.", "Linear classifiers can be examined in terms of their most highly-weighted features.", "We apply a post-hoc explanation algorithm for this task of extracting similar information from the fine-tuned methods discussed above.", "Group identifiers in linear models From the top features in a bag-of-words logistic regression of hate speech on GHC train , we collected a set of twenty-five identity words (not restricted to social group terms, but terms identifying a group in general), including homosexual, muslim, and black, which are used in our later analyses.", "The full list is in Supplementals (A.1).", "Explanation-based measures State-of-the-art fine-tuned BERT models are able to model complicated word and phrase compositions: for example, some words are only offensive when they are composed with specific ethnic groups.", "To capture this, we apply a state-of-the-art Sampling and Occlusion (SOC) algorithm which is capable of generating hierarchical explanations for a prediction.", "To generate hierarchical explanations, SOC starts by assigning importance score for phrases in a way that eliminates compositional effect between the phrase and its context x around it within a window.", "Given a phrase p appearing in a sentence x , SOC assigns an importance score ( p ) to show how the phrase p contribute so that the sentence is classified as a hate speech.", "The algorithm computes the difference of the unnormalized prediction score s ( x ) between hate and non-hate in the 2-way classifier.", "Then the algorithm evaluates average change of s ( x ) when the phrase is masked with padding tokens (noted as x \\ p ) for different inputs, in which the N -word contexts around the phrase p are sampled from a pretrained language model, while other words remain the same as the given x .", "Formally, the importance score ( p ) is measured as, ( p ) = E x [ s ( x ) s ( x \\ p )] (1) In the meantime, SOC algorithm perform agglomerative clustering over explanations to generate a hierarchical layout.", "Averaged Word-level SOC Explanation Using SOC explanations output on GHC test , we compute average word importance and present the top 20 in Table 2.", "Hate speech models can be over-attentive to group identifiers, as we have seen by inspecting them through feature analysis and a post-hoc explanation approach.", "The effect of this during prediction is that models over-associate these terms with hate speech and choose to neglect the context around the identifier, resulting in false positives.", "To provide an external measure of models' over-sensitivity to group identifiers, we construct an adversarial test set of New York Times (NYT) articles that are filtered to contain a balanced, random sample of the twenty-five group identifiers (Section A.1).", "This gives us 12 , 500 documents which are devoid of hate speech as defined by our typologies, excepting quotation.", "It is key for models to not ignore identifiers, but to match them with the right context.", "Figure 2 shows the effect of ignoring identifiers: random There has been a rise and fall of hate against the jews hate against the jews of hate of the jews", "subsets of words ranging in size from 0 to 25 are removed, with each subset sample size repeated 5 times.", "Decreased rates of false positives on the NYT set are accompanied by poor performance in hate speech detection.", "We have shown hate speech models to be oversensitive to group identifiers and unable to learn from the context surrounding these words during training.", "To address this problem in state-of-the-art models, we propose that models can be regularized to give no explained importance to identifier terms.", "We explain our approach as well as a naive baseline based on removing these terms.", "Word Removal Baseline.", "The simplest approach is to remove group identifiers altogether.", "We remove words from the term list found in Section A.1 from both training and testing sentences.", "Explanation Regularization.", "Given that SOC explanations are fully differentiable, during training, we regularize SOC explanations on the group identifiers to be close to 0 in addition to the classification objective L (cid:48) .", "The combined learning objective is written as follows.", "where S notes for the set of group names and x notes for the input word sequence.", "is a hyperpa-rameter for the strength of the regularization.", "In addition to SOC, we also experiment with regularizing input occlusion (OC) explanations, defined as the prediction change when a word or phrase is masked out, which bypass the sampling step in SOC.", "Balancing performance on hate speech detection and the NYT test set is our quantitative measure of how well a model has learned the contexts in which group identifiers are used for hate speech.", "We apply our regularization approach to this task, and compare with a word removal strategy for the fine-tuned BERT model.", "We repeat the process for both the GHC and Stormfront, evaluating test set hate speech classification in-domain and accuracy on the NYT test set.", "For the GHC, we used the full list of 25 terms; for Stormfront, we used the 10 terms which were also found in the top predictive features in linear classifiers for the Stormfront data.", "Congruently, for Stormfront we filtered the NYT corpus to only contain these 10 terms ( N = 5,000).", "Performance is reported in Table 1.", "For the GHC, we see an improvement for in-domain hate speech classification, as well as an improvement in false positive reduction on the NYT corpus.", "For Stormfront, we see the same improvements for in-domain F 1 ) and NYT.", "For the GHC, the most marked difference between BERT+WR and BERT+SOC is increased recall, suggesting that baseline removal largely mitigates bias towards identifiers at the cost of more false negatives.", "As discussed in section 4.2, SOC eliminates the compositional effects of a given word or phrase.", "As a result, regularizing SOC explanations does not prohibit the model from utilizing contextual information related to group identifiers.", "This can possibly explain the improved performance in hate speech detection relative to word removal.", "Word Importance in Regularized Models We determined that regularization improves a models focus on non-identifier context in prediction.", "In table 2 we show the changes in word importance as measured by SOC.", "Identity terms' importance decreases, and we also see a significant increase in importance of terms related to hate speech (poi-soned, blamed, etc.) suggesting that models have learned from the identifier terms' context.", "Visualizing Effects of Regularization We can further see the effect of regularization by considering Figure 3, where hierarchically clustered expla-Training set GHC Stormfront Method / Metrics Precision Recall F1 NYT Acc.", "nations from SOC are visualized before and after regularization, correcting a false positive.", "Regularizing SOC explanations of group identifiers tunes hate speech classifiers to be more context-sensitive and less reliant on high-frequency words in imbalanced training sets.", "Complementing prior work in bias detection and removal in the context of hate speech and in other settings, our method is directly integrated into Transformer-based models and does not rely on data augmentation.", "As such, it is an encouraging technique towards directing models' internal representation of target phenomena via lexical anchors.", "Future work includes direct extension and validation of this technique with other language models such as GPT-2 (Radford et al., 2019); experimenting with other hate speech or offensive language datasets; and experimenting with these and other sets of identity terms.", "Also motivated by the present work is the more general pursuit of integrating structure into neural models like BERT.", "Regularized hate speech classifiers increases sensitivity to the compositionality of hate speech, but the phenomena remain highly complex rhetorically and difficult to learn through supervision.", "For example, this post from the GHC requires background information and reasoning across sentences in order to classify as offensive or prejudiced: Don-ald Trump received much criticism for referring to Haiti, El Salvador and Africa as shitholes'. He was simply speaking the truth.", "The examples we presented (see Appendix 4 and 5) show that regularization leads to models that are context-sensitive to a degree, but not to the extent of reasoning over sentences like those above.", "We hope that the present work can motivate more attempts to inject more structure into hate speech classification.", "Explanation algorithms offer a window into complex predictive models, and regularization as performed in this work can improve models' internal representations of target phenomena.", "In this work, we effectively applied this technique to hate speech classifiers biased towards group identifiers; future work can determine the effectiveness and further potential for this technique in other tasks and contexts.", "This research was sponsored in part by NSF CAREER BCS-1846531 (Morteza Dehghani).", "Xiang Ren's research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, United States Office Of Naval Research under Contract No.", "N660011924033, and NSF SMA 18-29268." ]
[ "abstain", "abstain", "method", "objective", "result", "abstain", "other", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "objective", "method", "result", "result", "method", "other", "other", "abstain", "other", "other", "other", "other", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "method", "other", "other", "other" ]
[ "Opinion prediction on Twitter is challenging due to the transient nature of tweet content and neighbourhood context.", "In this paper, we model users' tweet posting behaviour as a temporal point process to jointly predict the posting time and the stance label of the next tweet given a user's historical tweet sequence and tweets posted by their neighbours.", "We design a topic-driven attention mechanism to capture the dynamic topic shifts in the neighbourhood context.", "Experimental results show that the proposed model predicts both the posting time and the stance labels of future tweets more accurately compared to a number of competitive baselines.", "Social media platforms allow users to express their opinions online towards various subject matters.", "Despite much progress in sentiment analysis in social media, the prediction of opinions, however, remains challenging.", "Opinion formation is a complex process.", "An individual's opinion could be influenced by their own prior belief, their social circles and external factors.", "Existing studies often assume that socially connected users hold similar opinions.", "Social network information is integrated with user representations via weighted links and encoded using neural networks with attentions or more recently Graphical Convolutional Networks (GCNs) (Chen et al., 2016; Li and Goldwasser, 2019).", "This strand of work, including (Chen et al., 2018; Zhu et al., 2020; Del Tredici et al., 2019), leverages both the chronological tweet sequence and social networks to predict users' opinions.", "time duration.", "Models trained on the current interval are used to predict users' opinions in the next interval.", "However, we argue that such a manual segmentation may not be appropriate since users post tweets at different frequency.", "Also, the time interval between two consecutively published tweets by a user is important to study the underlying opinion dynamics system and hence should be treated as a random variable.", "Inspired by the multivariate Hawkes process (Aalen et al., 2008; Du et al., 2016), we propose to model a user's posting behaviour by a temporal point process that when user u posts a tweet d at time t , they need to decide on whether they want to post a new topic/opinion, or post a topic/opinion influenced by past tweets either posted by other users or by themselves.", "We thus propose a neural temporal opinion model to jointly predict the time when the new post will be published and its associated stance.", "Instead of using the fixed formulation of the multivariate Hawkes process, the intensity function of the point process is automatically learned by a gated recurrent neural network.", "In addition, one's neighbourhood context and the topics of their previously published tweets are also taken into account for the prediction of both the posting time and stance of the next tweet.", "To the best of our knowledge, this is the first work to exploit the temporal point process for opinion prediction on Twitter.", "Experimental results on the two Twitter datasets relating to Brexit and US general election show that our proposed model outperforms existing approaches on both stance and posting time prediction.", "We present in Figure 1 the overall architecture of our proposed Neural Temporal Opinion Model (NTOM).", "The input to the model at time step i GRU cell GRU cell GRU cell Intensity function Softmax i+1 y i+1 Bi-LSTM VAEA tt e n t i o n LSTM Bi-LSTM Bi-LSTM Bi-LSTM x i x b i d i,1 d i,2 d i,L i th tweet i-1 th tweet i+1 th tweet Neighborhood context z i h i c i h c i,1 h c i,2 h c i,L i u g i Figure 1: Overview of the Neural Temporal Opinion Model.", "consists of user's own tweet x i , bag-of-word representation x bi , time interval i between the i 1 th tweet and the i th tweet, user embedding u , and neighbours' tweet queue { d i, 1 , d i, 2 , . . . , d i,L } .", "At first, a Bi-LSTM layer is applied to extract features from input tweets.", "Then the neighborhood tweets are processed by a stacked Bi-LSTM/LSTM layer for the extraction of neighborhood context, which is fed into an attention module queried by the user's own tweet h i and topic z i .", "The output of attention module is concatenated with tweet representation, time interval i , user representation u , and topic representation z i , which is encoded from x bi via a Variational Autoencoder (VAE).", "Finally, the combined representation is sent to a GRU cell, whose hidden state participates in computing the intensity function and the softmax function, for the prediction of the posting time interval and the stance label of the next tweet.", "In the following, we elaborate the model in more details: Tweet representation: Words in tweets are mapped to pre-trained word embeddings (Baziotis et al., 2017) 1 , which is specially trained for tweets.", "Then Bi-LSTM is used to generate the tweet representation.", "Topic extraction: The topic representation z i in Figure 1 captures the topic focus of the i th tweet.", "It is learned by VAE (Kingma and Welling, 2014), which approximates the intractable true posterior 1 https://github.com/cbaziotis/ datastories-semeval2017-task4 by optimising the reconstruction error between the generated tweet and the original tweet.", "Specifically, we convert each tweet to the bag-of-word format weighted by term frequency, x bi , and feed it to two inference neural networks defined as f and f .", "These generate mean and variance of a Gaussian distribution from which the latent topic vector z i is sampled.", "Then the approximated posterior would be q ( z i | x bi ) = N ( z i | f ( x bi ) , f ( x bi )) .", "To generate the observation x bi conditional on the latent topic vector z i , we define the generative network as p ( x bi | z i ) = N ( x bi | f ( z i )) , f ( z i )) .", "The reconstruction loss for the tweet x bi is then: L x = E q ( zi | xbi ) [log p ( x bi | z i )] KL( q ( z i | x bi ) || p ( z i )) (1) Neighbourhood Context Attention: To capture the influence from the neighbourhood context, we first input the neighbours' recent L tweets to an LSTM in a temporal ascending order.", "The output of the LSTM is weighed by the attention signals queried by the user's i th tweet and topic: c i = L (cid:88) l =1 l h ci,l (2) l exp([ h T i , z T i ]tanh( W h h ci,l + W z z ci,l )) (3) where { h ci, 1 , h ci, 2 , . . . , h ci,L } denotes the hidden state output of each tweet d i,l in the neighbourhood context, z ci,l denotes the associated topic, h i is the representation of the user's own tweet at time step i , and both W h and W z are weight matrices.", "We use this attention mechanism to align the user's tweet to the most relevant part in the neighbourhood context.", "Our rationale is that a user would attend to their neighbours' tweets that discuss similar topics.", "The attention output c i is then concatenated with a user's own tweet h i and the extracted topic z i .", "We further enrich the representation with the elapsed time i between the posting time of the current tweet and the last posted tweet, and add a randomly initialised user vector u to distinguish the user from others.", "The final representation is passed to a GRU cell for the joint prediction of the posting time and stance label of the next tweet.", "Temporal Point Process: The goal of NTOM is to forecast the time gap till the next post, together with the stance label.", "Instead of modelling the time interval value based on regression analysis, we use the GRU (Cho et al., 2014) to simulate the temporal point process.", "At each time step, the combined representation [ c i , h i , z i , i , u ] is input to the GRU cell to iteratively update the hidden state taking into account the influence of previous tweets: g i = f GRU ( g i 1 , c i , h i , z i , i , u ) (4) where g i is the hidden state of GRU cell.", "Given g i , the intensity function is formulated as: ( t ) = ( t |H i ) = exp( b + v T g i + w t ) (5) Here, H i summarises all the tweet histories up to tweet i , b denotes the base density level, the term v T g i captures the influence from all previous tweets and w t denotes the influence from the instant interval.", "The likelihood that the next tweet will be posted at the next interval given the history is: f ( ) = ( ) exp (cid:0) (cid:90) 0 ( t ) dt (cid:1) (6) The expectation for the occurrence of the next tweet can be estimated using: i +1 = (cid:90) 0 f ( ) d (7) Loss: We expect the predicted interval to be close to the actual interval as much as possible by minimising the Gaussian penalty function: L time = 1 2 exp (cid:0) ( i +1 i +1 ) 2 2 2 (cid:1) (8) Brexit 5 10 15 20 25 Number of tweets 0 2000 4000 6000 8000 N u m be r o f u s e r s Election 5 10 15 20 25 Number of tweets 0 2000 4000 6000 8000 10000 Figure 2: Number of users versus number of tweets.", "For the stance prediction we employ the cross-entropy loss denoted as L stan .", "The final objective function is computed as: L = L x + L time + L stan (9) where , and are coefficients determining the contribution of various loss functions.", "We perform experiments on two publicly available Twitter datasets 2 (Zhu et al., 2020) on Brexit and US election.", "The Brexit dataset consists of 363k tweets with 31.6%/29.3%/39.1% support-ing/opposing/neutral tweets towards Brexit.", "The Election dataset consists of 452k tweets with 74.2%/20.4%/5.4% supporting/opposing/neutral tweets towards Trump.", "We filter out users who posted less than 3 tweets and are left with 20 , 914 users in Brexit and 26 , 965 users in Election.", "We plot in Figure 2 the number of users versus the number of tweets and found that over 81 .", "6% users have published fewer than 7 tweets, we therefore set the maximum length of the tweet sequence of each user to 7.", "For users who have published more than 7 tweets, we split their tweet sequence into multiple training sequences of length 7 with an overlapping window size of 1.", "For each user, we use 90% of their tweets for training and 10% (round up) for testing.", "Our settings are = 0 .", "2 , = 0 .", "4 and = 0 .", "4 .", "We set the topic number to 50 and the vocabulary size to 3k for the tweet bag-of-words input to VAE.", "The mini-batch size is 16 .", "We use Adam optimizer with learning rate 0 .", "0005 and learning rate decay 0 .", "9 .", "The evaluation metrics are accuracy for stance prediction and Mean Squared Error (MSE) for posting time prediction.", "The results are compared against the following baselines: 2 https://github.com/somethingx01/ TopicalAttentionBrexit Model Brexit Election Acc.", "CSIM W (Chen et al., 2018) gauges the social influence by an attention mechanism for the prediction of the user sentiment of the next tweet.", "NOD (Zhu et al., 2020) takes into account the neighborhood context and pre-extracted topics for tweet stance prediction.", "LING+GAT (Del Tredici et al., 2019) places a GCN variant over linguistic features to extract node representations.", "Tweets are aggregated by users for user-level prediction.", "We also perform ablation study on our model by removing the topic extraction component (NTOM-VAE ) or removing the neighbourhood context component (NTOM -context ).", "In addition, to validate that NTOM does benefit from point process modelling and can better forecast the time and stance of the next tweet, we remove the intensity function (i.e. no Eq.", "(5)-(7)) and directly use vanilla RNN and its variants including LSTM and GRU to predict the true time interval.", "Furthermore, to investigate if is is more beneficial to use GCN to encode the neighbourhood context, we learn tweet representation using GCN 3 (Hamilton et al., 2017), which preserves high-order influence in social networks through convolution.", "As in (Li and Goldwasser, 2019), we use a 2-hop GCN and denote the variant as NTOM-GCN .", "For the Brexit dataset, MSE is measured in hours, while for the Election dataset it is measured in minutes due to the intensive tweets published within two days.", "We report in Table 1 the stance prediction accuracy and MSE scores of predicted posting time.", "Compared to baselines, NTOM consistently achieves better performance on both datasets, showing the benefit of modelling the tweet posting sequence as a temporal point process.", "In the second set of experiments, we study the effect of temporal process modelling.", "The results verify the benefit of using the intensity function, with at least a 2% increase in accuracy and 0.2 decrease in MSE compared with vanilla RNN and its variants.", "In the ablation study, the removal of neighbourhood context component caused the largest performance decline compared to other components, verifying the importance of social influence in opinion prediction.", "Removing either VAE (for topic extraction) or intensity function (using only GRU) results in slight drops in stance prediction and more noticeable performance gaps in time prediction.", "It can be also observed that using GCN to model higher-order influence in social networks does not bring any benefits, possibly due to extra noise introduced to the model.", "To investigate the effectiveness of the context attention that is queried by topics, we first select some example topics from the topic-word matrix in VAE.", "The label of each topic is manually assigned based on its associated top 10 words.", "Then we display a tweet's topic distribution together with its neighborhood tweets' topic distribution.", "We also visualize the attention weights assigned to the 3 neighborhood tweets.", "Figure 3 illustrates the example topics, topic distribution and attention signals towards context tweets.", "Here, x 2 and x 4 denote a user's 2 nd and 4 th tweets respectively.", "The most recent 3 neighborhood tweets are denoted as d 1 , d 2 , d 3 .", "Blue in the leftmost separate column denotes the attention weights, and each row on top of T 1 , T 2 and T 3 denotes the topic distribution.", "It can be observed that the user's concerned topic shifts from immigration to Boris Johnson in 2 time steps.", "The drift also appears in the neighbour's tweets.", "Higher attention weights are assigned to the neighbour's tweets which share similar topical distribution as the user.", "We can thus infer that the topic vector does help select the most relevant neighborhood tweet.", "The prediction of real-time stances on social media is challenging, partly caused by the diversity and fickleness of users (Andrews and Bishop, 2019).", "A line of work mitigated the problem by taking into account the homophily that users are similar to their friends (McPherson et al., 2001; Halberstam and Knight, 2016).", "For example, Chen et al. (2016) gauged a user's opinion as an aggregated stance of their neighborhood users.", "Linmei et al. (2019) took a step further by exploiting the extracted topics, which discern a user's focus on neighborhood tweets.", "Recent advances in this strand also include the application of GCNs, with which the social relationships are leveraged to enrich the user representations (Li and Goldwasser, 2019; Del Tredici et al., 2019).", "On the other hand, several work has utilized the chronological order of tweets.", "Chen et al. (2018) presented an opinion tracker that predicts a stance every time a user publishes a tweet, whereas (Zhu et al., 2020) extended the previous work by introducing a topic-dependent attention.", "Shrestha et al. (2019) considered diverse social behaviors and jointly forecast them through a hierarchical neural network.", "However, the aforementioned work requires a manual segmentation of a tweet sequence.", "Furthermore, they are unable to predict when a user will next publish a tweet and what its associated stance is.", "These problems can be addressed using the Hawkes process (Hawkes, 1971), which has been successfully applied to event tracking (Sri-jith et al., 2017), rumor detection (Lukasik et al., 2016; Zubiaga et al., 2016; Alvari and Shakarian, 2019) and retweet prediction (Kobayashi and Lam-biotte, 2016).", "A combination of the Hawkes process with recurrent neural networks, called Recurrent Marked Temporal Pointed Process (RMTPP), was proposed to automatically capture the influence of the past events on future events, which shows promising results on geolocation prediction (Du et al., 2016).", "Benefiting from the flexibility and scalability of neural networks, several work has been done in this vein including event sequence prediction (Mei and Eisner, 2017) and failure prediction (Xiao et al., 2017).", "Our work is partly inspired by RMTPP, but departs from the previous work by jointly considering users' social relations and topical attentions for stance prediction on social media.", "In this paper, we propose a novel Neural Temporal Opinion Model (NTOM) to address users' changing interest and dynamic social context.", "We model users' tweet posting behaviour based on a temporal point process for the joint prediction of the posting time and stance label of the next tweet.", "Experimental results verify the effectiveness of the model.", "Furthermore, visualisation of the topics and attention signals shows that NTOM captures the dynamics in the focused topics and contextual attention.", "This work was funded in part by EPSRC (grant no. EP/T017112/1).", "LZ is funded by the Chan-cellor's International Scholarship of the University of Warwick.", "DZ was partially funded by the National Key Research and Development Program of China (2017YFB1002801) and the National Natural Science Foundation of China (61772132).", "The authors would also like to thank Akanni Adewoyin for insightful discussions." ]
[ "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "other", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "The robustness of Text-to-SQL parsers against adversarial perturbations plays a crucial role in delivering highly reliable applications.", "Previous studies along this line primarily focused on perturbations in the natural language question side, neglecting the variability of tables.", "Motivated by this, we propose the A dversarial T able P erturbation ( ATP ) as a new attacking paradigm to measure the robustness of Text-to-SQL models.", "Following this proposition, we curate ADVETA, the first robustness evaluation benchmark featuring natural and realistic ATPs.", "All tested state-of-the-art models experience dramatic performance drops on ADVETA, revealing models' vulnerability in real-world practices.", "To defend against ATP, we build a systematic adversarial training example generation framework tailored for better contextualization of tabular data.", "Experiments show that our approach not only brings the best robustness improvement against table-side perturbations but also substantially empowers models against NL-side perturbations.", "We release our benchmark and code at: https://github.com/microsoft/ContextualSP.", "The goal of Text-to-SQL is to generate an executable SQL query given a natural language (NL) question and corresponding tables as inputs.", "By helping non-experts interact with ever-growing databases, this task has many potential applications in real life, thereby receiving considerable interest from both industry and academia (Li and Jagadish, 2016; Zhong et al., 2017; Affolter et al., 2019).", "Recently, existing Text-to-SQL parsers have been found vulnerable to perturbations in NL questions (Gan et al., 2021; Zeng et al., 2020; Deng et al., 2021).", "For example, Deng et al. (2021) removed the explicit mentions of database items in a Equal contributions during the internship at Microsoft Research Asia.", "Student Name Citizenship Score Semester A Country X 92 Fall B Country Y 90 Spring A Country X 89 B Country Y 85 Fall C Country Z 97 Spring Original Table List names and citizenships of students who achieved top 3 scores .", "SELECT Student Name, Citizenship FROM Student ORDER BY Score desc LIMIT 3 SELECT Student Name FROM Student ORDER BY Score desc LIMIT 3 (Missing Nationality) SELECT Student Name, Instructor Name, Citizenship FROM Student ORDER BY Grade desc LIMIT 3 Student Name Nationality Score Semester A Country X 92 Fall B Country Y 90 Spring A Country X 89 Spring B Country Y 85 Fall C Country Z 97 Spring RPLP e r t u r b e d T ab l e Student Name Citizenship Score Semester Instructor Name Grade A Country X 92 Fall D 6 B Country Y 90 Spring E 6 A Country X 89 Spring E 6 B Country Y 85 Fall D 5 C Country Z 97 Spring F 5 ADDP e r t u r b e d T ab l e Figure 1: Adversarial examples based on table perturbations for a Text-to-SQL parser.", "question while keeping its meaning unchanged, and observed a significant performance drop of a Text-to-SQL parser.", "Gan et al. (2021) also observed a dramatic performance drop when the schema-related tokens in questions are replaced with synonyms.", "They investigated both multi-annotations for schema items and adversarial training to improve parsers' robustness against permutations in NL questions.", "However, previous works only studied the robustness of parsers from the perspective of NL questions, neglecting variability from the other side of parser input tables.", "We argue that a reliable parser should also be robust against table-side perturbations since they are inevitably modified in the human-machine interaction process.", "In business scenarios, table main-2007 tainers may", "(i) rename columns due to business demands and user preferences.", "(ii) add new columns into existing tables when business demands change.", "Consequently, the extra lexicon diversity introduced by such modifications could harm performances of unrobust Text-to-SQL parsers.", "To formalize these scenarios, we propose a new attacking paradigm, A dversarial T able P erturbation ( ATP ), to measure parsers' robustness against natural and realistic ATPs.", "In accordance with the two scenarios above, we consider both R E PL ACE ( RPL ) and ADD perturbations in this work.", "Figure 1 conveys an intuitive understanding of ATP.", "Ideally, ATP should be conducted based on two criteria:", "(i) Human experts consistently write correct SQL queries before and after table perturbations, yet parsers fail;", "(ii) Perturbed tables look natural and grammatical, and are free from breakage of human language conventions.", "Accordingly, we carefully design principles for RPL/ADD and manually curate the ADVE rsarial T able perturb A tion ( ADVETA ) benchmark based on three existing datasets.", "All evaluated state-of-the-art Text-to-SQL models experience drastic performance drops on ADVETA: On ADVETA-RPL, the average relative percentage drop is as high as 53 .", "1% , whereas on ADVETA-ADD is 25 .", "6% , revealing models' lack of robustness against ATPs.", "Empirically, model robustness can be improved by adversarial training, i.e. re-train models with training set augmented with adversarial examples (Jin et al., 2020).", "However, due to the different natures of structured tables and unstructured text, well-established text adversarial example generation approaches are not readily applicable.", "Motivated by this, we propose an effective C ontextualized T able A ugmentation (CTA) approach that better leverages tabular context information and carry out ablation analysis.", "To summarize, the contributions of our work are three-fold: To the best of our knowledge, we are the first to propose definitions and principles of A dversarial T able P erturbation ( ATP ) as a new attacking paradigm for Text-to-SQL.", "We contribute ADVETA , the first benchmark to evaluate the robustness of Text-to-SQL models.", "Significant performance drops of state-of-the-art models reveals that there is much more to explore beyond high leader-board scores.", "We design CTA , a systematic adversarial training example generation framework tailored for better contextualization of tabular data.", "Experiments show that our approach brings model best robustness gain and lowest original performance loss , compared with various baselines.", "Moreover, we show that adversarial robustness brought by CTA generalizes well to NL-side perturbations.", "We propose the A dversarial T able P erturbation ( ATP ) paradigm to measure robustness of Text-to-SQL models.", "For an input table and its associated NL questions, the goal of ATP is to fool Text-to-SQL parsers by perturbing tables naturally and realistically.", "More specifically, human SQL experts can consistently maintain their correct translations from NL questions to SQL with their understanding of language and table context.", "Formally, ATP consists of two approaches: R E PL ACE ( RPL ) and ADD .", "In the rest of this section, we first discuss our consideration of table context, then introduce conduction principles of RPL and ADD.", "Tables consist of explicit and implicit elements both are necessary for understanding table context.", "Explicit elements refer to table captions, columns, and cell values.", "Implicit elements, in our consideration, contains T able P rimary E ntity ( TPE ) and domain.", "(Relational)", "Tables are structured data recording domain-specific attributes (columns) around some central entities (TPE) (Sumathi and Esakkirajan, 2007).", "Without the explicit annotation, humans could still make correct guesses on them.", "For example, it's intuitive that tables in Figure 1 can be classified as education domain, and all of the columns center around the TPE student.", "Combining both explicit and implicit elements, people achieve an understanding of table context, which becomes the source of lexicon diversity in column descriptions.", "Given a target column, the goal of RPL is to seek an alternative column name that makes sense to humans but misleads unrobust models.", "Gold SQL, as illustrated in Figure 1, should be correspondingly adapted by mapping the original column to its new name.", "In light of this, RPL should fulfill the following two principles: 2008 ADVETA Statistics Spider WTQ WikiSQL Orig.", "Semantic Equivalency: Under the table context of target column, substituted column names are expected to convey equivalent semantic meaning as the original name.", "Phraseology Correctness: ATP aims to be natural and realistic and does not target worst-case attacks.", "Therefore, replaced column names are expected to follow linguistic phraseology conventions:", "(i) Grammar Correctness: Substituted column names should be free from grammar errors.", "(ii) Proper Collocation with TPE: New column names should collocate properly with TPE.", "For example, height and tallness both collocate well with student (TPE), but conventionally not altitude .", "(iii) Idiomaticity: New column names should sound natural to a native speaker to address target columns.", "For example, runner-up means second place , and racer-up is a bad replacement despite runner is synonymous to racer .", "ADD perturbs tables with introductions of new columns.", "Instead of adding random columns that fit well into the table domain, we pertinently add adversarial columns with respect to a target column for the sake of adversarial efficiency.", "Gold SQL should remain unchanged after ADD perturbations 1 .", "Below states ADD principles: Semantic-association & Domain-relevancy: Given a target column and its table context, newly added columns are expected to", "(i) fit nicely into the table context;", "(ii) have high semantic associations with the target column yet low semantic equivalency (e.g. sales vs. profits , editor vs. author ).", "Phraseology Correctness: Same as RPL, columns should obey human language conventions.", "Irreplaceability: Unlike RPL, any added 1 We omit cell value alignment in ADD for simplicity.", "columns should be irreplaceable with any original table columns.", "In other words, ADD requires semantic equivalency to be filtered out from highly semantic associations.", "Otherwise, the original gold SQL will not be the single correct output, which makes the perturbation unreasonable.", "Following RPL and ADD principles, we manually curate the ADVE rsarial T able perturb A tion ( ADVETA ) benchmark based on three mainstream Text-to-SQL datasets, Spider (Yu et al., 2018), WikiSQL (Zhong et al., 2017) and WTQ (Paper-not et al., 2017).", "For each table from the original development set , we conduct RPL/ADD annotation separately, perturbing only table columns.", "For its associated NL-SQL pairs, we leave the NL questions unchanged and adapt gold SQLs accordingly.", "As a result, ADVETA consists of 3 (Spi-der/WTQ/WikiSQL) 2 (RPL/ADD) = 6 subsets.", "We next introduce annotation details and characteristics of ADVETA.", "Five vendors join the annotation process.", "Each base dev set is split into small chunks and is manually annotated by one vendor and reviewed by another, with an inter-annotator agreement to resolve annotation inconsistency.", "Before annotation, vendors are first trained to understand table context as described in 2, then are further instructed of the following details.", "RPL : RPL principles are the mandatory requirements.", "During annotation, vendors are given full Google access to ease the conception of synonymous names for a target column.", "ADD : ADD principles will be the primary guideline.", "Unlike freestyle RPL annotations, vendors are provided with 2009 StudentName Citizenship Score Age A Country X 92 19 B Country Y 89 21 StudentName Citizenship Score School Term A Country X 92 Fall B Country Y 89 Spring Student Name Citizenship Score Academic Year A Country X 92 Fall B Country Y 89 Spring Candidate Tables WDC Dense Retrieval TAPAS Reranker Number-batch StudentName Citizenship Score Semester A Country X 92 Fall B Country Y 89 Spring Caption: School Scores Statistics Top K Similar Tables ... ... ... ...", "a list of 20 candidate columns from where they select 3-5 based on semantic-association 2 Notice that we only consider columns mentioned at least once across NL questions to avoid vain efforts.", "In Appendix A, We display some representative benchmark annotation cases.", "We present comprehensive benchmark statistics and analysis results in Table 1. Notice that we limit the scope of statistics only to perturbed columns", "(as marked by #Avg. perturbed col per table ).", "Basic Statistics reflects elementary information of ADVETA.", "Analytical Statistics illustrate highlighted features of ADVETA compared with original dev-sets:", "(i) Diverse column names for a single semantic meaning: each table from the RPL subset contains approximately five times more lexicons which are used to express a single semantic meaning 3 .", "(ii) Table concept richness: each table from ADD subset contains roughly five times more columns with unique semantic meanings.", "In this section, we introduce our C ontextualized T able A ugmentation (CTA) framework as an adversarial training example generation approach tailored for tabular data.", "The philosophy of adversarial example generation is straightforward: Pushing 2 We generate the candidate list with a retriever-reranker combo from 4.", "3 For example, column names { Last name, Family name, Surname } express a single semantic meaning.", "In practice, we random sample at most 100 tables from each split, and obtain the number of unique semantic meanings by manual count.", "augmented RPL/ADD lexicon distributions closer to human-agreeable RPL/ADD distributions.", "This requires maximization of lexicon diversity under the constraints of domain relevancy and clear differentiation between semantic association & semantic equivalency, as stated in ADD principle from 2. Well-established text adversarial example generation approaches, such as TextFooler (Jin et al., 2020) and BertAttack (Li et al., 2020), might fail to meet this objective because:", "(i) They rely on syntactic information (e.g. POS-tag, dependency, semantic role) to perform text transformations.", "However, such information is not available in structured tabular data, leading to poor-quality adversarial examples generated by these approaches.", "(ii) They perform sequential word-by-word transformations, which could narrow lexicon diversity (e.g. written by will not be replaced by author ).", "(iii) They cannot leverage tabular context to ensure domain relevancy.", "(iv) They generally fail to distinguish semantic equivalency from high semantic association according to our observations (e.g., fail to distinguish sales vs. profits ).", "To tackle these challenges, we construct the CTA framework.", "Given a target column from a table with NL questions,", "(i) a dense table retriever properly contextualizes the input table, thereby pinpointing top-k most domain-related tables (and columns) from a large-scale database while boosting lexicon diversity .", "(ii) A reranker further narrows down semantic-association and produces coarse-grained ADD/RPL candidates.", "(iii) NLI decision maker distinguishes semantic equivalency from semantic association and allocates candidate 2010 columns to RPL/ADD buckets.", "A detailed illustration of our CTA framework is shown in Figure 2. We next introduce each component of CTA.", "The entire framework starts with a dense retrieval module to gather most domain-related tables of user queries.", "We utilize the Tapas-based (Herzig et al., 2020) dense retriever in this module (Herzig et al., 2021), due to its better tabular contextualization expressiveness over classical retrieval methods such as Word2Vec (Mikolov et al., 2013) and BM25 (Robertson, 2009).", "Following the original usage proposed by Herzig et al. (2020), we retrieve the top 100 most domain-related tables from the backend Web Data Commons (WDC) (Lehm-berg et al., 2016) database consisting of 600k nonrepetitive tables with at most five columns.", "From these retrieved domain-related tables, we further narrow down the range of most semantically associated candidate columns.", "This is done by a ConceptNet Numberbatch word embedding (Speer et al., 2017) reranker, who computes the cosine similarity score for a given column pair.", "We choose ConceptNet Numberbatch due to its advantage of far richer (520k) in-vocabulary multi-grams compared with Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and Counter-fitting (Mrkic et al., 2016), which is especially desirable for multi-gram columns.", "We keep the top 20 similar among them as RPL/ADD candidates for each column of the original table.", "Aside from candidates obtained from retriever-reranker for whole-column level RPL, we consider word-level RPL for a target column as a complement.", "Specifically, we replace each word in a given target column with its synonyms recorded in the Oxford Dictionary (noise is more controllable compared with synonyms gathered by embedding).", "With a probability 25% for each original word to remain unchanged, we sample until the max pre-defined number (20) of candidates is reached or 5 consecutively repeated candidates are produced.", "So far we have pinpointed candidate columns whose domain relevancy and semantic association", "are already guaranteed.", "The final stage is to determine which one of RPL/ADD candidates is more suitable for based on its semantic equivalent against target column.", "Therefore, we leverage RoBERTa-MNLI (Liu et al., 2019; Williams et al., 2017), the expert in differentiating semantic equivalency from semantic association 4 .", "Practically, we construct premise-hypothesis by contextualized columns and judge semantic equivalency based on output bidirectional entailment scores e 1 and e 2 .", "NLI Premise-Hypothesis Construction The Quality of premise-hypothesis plays a key factor for NLI's functioning.", "We identify three potentially useful elements for contextualizing columns with surrounding table context: TPE, column type, and column cell value.", "Through manual experiments, we observe that:", "(i) Adding cell value significantly hurt decision accuracy of NLI models.", "(ii) TPE is the most important context information and cannot be ablated.", "(iii) Column type information can be a desirable source for word-sense disambiguation.", "Thus the final template for premise-hypothesis construction as python formatted string is expressed as: f { TPE } { CN } ( { CT } ) . , where CN is column name, and CT is column type.", "RPL/ADD Decision Criterion In practice, we observe a discrepancy in output entailment scores between premise-hypothesis score e 1 and hypothesis-premise score e 2 .", "Thus we take scores from both direction into consideration.", "For RPL, we empirically choose min ( e 1 , e 2 ) > = 0 .", "65 (Figure 2) as the final RPL acceptance criterion to reduce occurrences of false positive entailment decision.", "For ADD, the criterion is instead max ( e 1 , e 2 ) < = 0 .", "45 to reduce false negative entailment decisions 5 .", "Datasets and Models The five original Text-to-SQL datasets involves in our experiments are: Spider (Yu et al., 2018), WikiSQL (Zhong et al., 2017), WTQ (Shi et al., 2020) 6 , CoSQL (Yu et al., 2019a) and SParC (Yu et al., 2019b).", "Their corresponding perturbed tables are from our ADVETA 4 We highly recommend reading our pilot study in B.1.", "5 To avoid semantic conflict between a new column c and original columns c 1 , , c n , we apply to each pair of ( c, c i ) .", "6 Note that we use the version with SQL annotations provided by Shi et al. (2020) here, since the original WTQ (Pasupat and Liang, 2015) only contains answer annotations.", "benchmark.", "WikiSQL and WTQ are single-table, while Spider, CoSQL, and SParC have multi-tables.", "CoSQL and SParC are known as multi-turn Text-to-SQL datasets, sharing the same tables with Spider.", "Dataset statistics are shown in Appendix Table 11. We evaluate open-source Text-to-SQL models that reach competitive performance on the aforementioned datasets.", "DuoRAT (Scholak et al., 2021) and ETA (Liu et al., 2021) are baselines for Spider; SQUALL (Shi et al., 2020) is the baseline for WTQ; SQLova (Hwang et al., 2019) and CESQL (Guo and Gao, 2019) are baselines for WikiSQL.", "For the two multi-turn datasets (CoSQL & SParC), baselines are EditSQL (Zhang et al., 2019) and IGSQL (Cai and Wan, 2020).", "Exact Match ( EM ) is employed for evaluation metric across all settings.", "Training details are shown in C.2.", "Attack Details All baseline models are trained from scratch on corresponding original training sets, and then independently evaluated on original dev sets, ADVETA-RPL and ADVETA-ADD.", "Since columns have around three manual candidates in ADVETA-RPL/ADD, the number of possible perturbed tables scales exponentially with column numbers for a given table from the original dev set.", "Therefore, models are evaluated on ADVETA-RPL/ADD by sampling perturbed tables.", "For each NL-SQL pair and associated table(s), we sample one RPL-perturbed table and one ADD-perturbed table in each attack.", "Each column mentioned from gold SQL is perturbed by a randomly sampled manual candidate from ADVETA.", "For performance stability and statistical significance, we run five attacks with random seeds for each NL-SQL pair.", "Attack Results Table 2 presents the performance of models on original dev sets, ADVETA-RPL and ADVETA-ADD.", "Across various task formats, domains, and model designs, state-of-the-art Text-to-SQL parsers experience dramatic performance drop on our benchmark: by RPL perturbations, the relative percentage drop is as high as 53.1%, whereas on ADD the drop is 25.6% on average 7 .", "Another interesting observation is that RPL consistently leads to higher performance drops than ADD.", "This is perhaps due to models' heavy reliance on lexical matching, instead of true understanding of language and table context.", "Conclusively, Text-to-SQL models are still far less robust than desired against variability from the table input side.", "Attack Analysis To understand the reasons for parsers' vulnerability, we specifically analyze their schema linking modules which are responsible for recognizing table elements mentioned in NL questions.", "This module is considered a vital component for Text-to-SQL (Wang et al., 2020; Scholak et al., 2021; Liu et al., 2021).", "We leverage the oracle schema linking annotations on Spider (Lei et al., 2020) and test ETA model on ADVETA using the oracle linkings.", "Note that we update the oracle linkings accordingly when testing on RPL.", "Table 4 compares the performance of ETA with or without the oracle linkings, from which we make two observations:", "(i) When guided with the oracle linkings, ETA performs much better on both RPL ( 27 . 6% 55 . 7% ) and ADD ( 39 . 9% 71 . 3% ).", "Therefore, the failure in schema linking is one of the essential causes for the vulnerability of Text-to-SQL parsers.", "(ii) Even with the oracle linkings, the performance of ETA on RPL and ADD still lags behind its performance on the original dev set, especially on RPL.", "Through a careful analysis on failure cases, we find that ETA still generates table elements that have a high degree of lexical matching with NL questions, even though the correct table elements are specified in the oracle linkings.", "Defense Details We carry defense experiments with SQLova, SQUALL and ETA on WikiSQL, WTQ and Spider, respectively.", "We compare CTA 7 Average relative performance presented in Appendix C.3.", "with three baseline adversarial training approaches: Word2Vec (W2V), TextFooler (TF) (Jin et al., 2020), and BERT-Attack (BA) (Li et al., 2020) (details found in D.).", "Models are trained from scratch on corresponding augmented training sets.", "Specifically, for each NL-SQL pair, we keep the original table while generating one RPL and one ADD adversarial example.", "As a result, augmented training data is three times as large in the sense that each NL-SQL pair is now trained against three tables: original, RPL-perturbed, and ADD-perturbed.", "In addition to the adversarial training defense paradigm, we also include the manual version of Multi-Annotation Selection (MAS) by Gan et al. (2021) on Spider, using their released data.", "The rest evaluation process is same as attack.", "Defense Results Table 3 presents model performance through various defense approaches.", "We get two observations:", "(i) CTA consistently brings better robustness.", "Compared with other approaches, CTA-augmented models have the best performance across all ADVETA-RPL/ADD settings, as well as on all original dev sets.", "These results demonstrate CTA can effectively improve the robustness of models against RPL and ADD perturbations while introducing fewer noises into original training sets.", "Interestingly, we observe that textual adversarial example generation approaches (BA, TF) are outperformed by the simple W2V approach.", "This verifies our analysis stated in 4. We include a case study in Appendix B.3 on characteristics of various baselines.", "(ii) CTA fails to bring models back to their original dev performance.", "Even if trained with high-quality data augmented by CTA, models could still be far worse than their original performance.", "This gap is highly subjected to the similarity of lexicon distribution between train and dev set.", "Concretely, on WikiSQL and WTQ where train and dev set have a similar domain, both RPL performance and ADD performance are brought back closer to original dev performance when augmented with CTA.", "On the contrary, on Spider where train-dev domains overlap less, there is still a notable gap between performance after adversarial training and the original dev performance.", "In conclusion, more effective defense paradigms are yet to be investigated.", "Defense Analysis Following attack analysis, we conduct schema linking analysis with ETA model augmented with top 2 approaches (i.e. W2V & CTA) on Spider.", "We follow metric calculation of (Liu et al., 2021) and details are shown in C.4.", "As shown in Table 5, both approaches improve the schema linking F 1 .", "Specifically, CTA improves column F 1 by 3% 8% , and table F 1 by 13% 20% , compared with vanilla ETA.", "This reveals that improvement of robustness can be primarily attributed to better schema linking.", "curred by the annotation design that vendors are given CTA-retrieved candidate list for ADD annotations.", "However, we emphasize that:", "(i) RPL have NO vulnerability to data leakage since it is entirely independent of CTA.", "(ii) The leakage risk in ADD is negligible.", "On the one hand, our vast-size (600k tables) backend DB supplies tremendous data diversity, maximally reducing multiple retrievals of a single table; On the other hand, CTA's superior performance on Spider, the representative feature of which is cross-domain & cross-database across train-test splits (thus makes performance gain from data leakage hardly possible), further testifies its authentic effectiveness.", "We carry out an ablation study to understand the roles of two core components of CTA: dense retriever and RoBERTa-MNLI.", "Results are shown in Table 3. CTA w/o Retriever RPL candidates are generated merely from the dictionary; ADD generation is the same as W2V baseline.", "Compared with complete CTA, models augmented with this setting experience 1 .", "1% 1 .", "2% and 1 .", "8% 7 .", "6% performance drop on RPL and ADD, respectively.", "We attribute RPL drops to loss of real-world lexicon diversity and ADD drops to loss of domain relevancy.", "CTA w/o MNLI RPL and ADD candidates are generated in the same way as CTA, but without denoising of MNLI.", "RPL/ADD decisions solely rely on ranked semantic similarity.", "Compared with complete CTA, models augmented by this setting experience significant performance drops ( 4 . 9% 7 . 9% ) on all RPL subsets, and moderate drops ( 1 . 5% 2 . 8% ) on all ADD subsets.", "We attribute these drops to the inaccurate differentiation between semantic equivalency and semantic association due to lack of MNLI, which results in noisy RPL/ADD adversarial examples.", "Beyond CTA's effectiveness against table-side perturbations, a natural question follows: could retraining with adversarial table examples improve model robustness against perturbations from the other side of Text-to-SQL input (i.e., NL ques-tions)?", "To explore this, we directly evaluate ETA (trained with CTA-augmented Spider train-set) on Spider-Syn dataset (Gan et al., 2021), which replaces schema-related tokens in NL question with its synonym.", "We observe an encouraging 9 .", "8% EM improvement compared with vanilla ETA (trained with Spider train-set).", "This verifies CTA's generalizability to NL-side perturbations , with comparable effectiveness as the previous SOTA defense approach MAS, which fails to generalize to table-side perturbations on ADVETA in Table 3. 6 Related Work Robustness of Text-to-SQL As discussed in 1, previous works (Gan et al., 2021; Zeng et al., 2020; Deng et al., 2021) exclusively study robustness of Text-to-SQL parsers against perturbations in NL question inputs.", "Our work instead focuses on variability from the table input side and reveals parsers' vulnerability to table perturbations.", "Adversarial Example Generation Existing works on adversarial text example generations can be classified into three categories: (1) Sentence-Level.", "This line of work generates adversarial examples by introducing distracting sentences or paraphrasing sentences (Jia and Liang, 2017; Iyyer et al., 2018).", "(2) Word-Level.", "This dimension of work generates adversarial examples by flipping words in a sentence, replacing words with their synonyms, and deleting random words (Li et al., 2020; Ren et al., 2019; Jin et al., 2020).", "(3) Char-Level.", "This line of work flips, deletes, and inserts random chars in a word to generate adversarial examples (Belinkov and Bisk, 2018; Gao et al., 2018).", "All the three categories of approaches have been widely used to reveal vulnerabilities of high-performance neural models on various tasks, including text classification (Ren et al., 2019; Morris et al., 2020), natural language inference (Li et al., 2020) and question answering (Ribeiro et al., 2018).", "Previous work on robustness of Text-to-SQL and semantic parsing models primarily adopt word-level perturbations to generate adversarial examples (Huang et al., 2014 2021).", "For example, the Spider-Sync adversarial benchmark (Gan et al., 2021) is curated by replacing schema-related words in questions with their synonyms.", "Despite these methods' effectiveness in generating adversarial text examples, they are not readily applicable for structural tabular data, as we discussed in 4. Apart from this, table-side perturbations enjoy much higher attacking efficiency : the attack coverage of a single table modification includes all affiliated SQLs, whereas one NL-side perturbation only affects a single SQL.", "Combined with the lighter cognitive efforts of tabular context understanding than NL-understanding, ATP is arguably lower in annotation costs .", "Previous work on table perturbations (Cartella et al., 2021; Ballet et al., 2019) focuses on table cell values; another work, (Ma and Wang, 2020) study impacts of naively (i.e., without consideration of table context information and without human guarantee) renaming irrelevant columns and adding irrelevant columns.", "In this work, we focus on table columns and propose an effective CTA framework that better leverages tabular context information for adversarial example generation, as well as manually annotate ADVETA benchmark.", "We introduce A dversarial T able P erturbation ( ATP ), a new paradigm for evaluating model robustness on Text-to-SQL and define its conduction principles.", "We curate the ADVETA benchmark, on which all state-of-the-art models experience dramatic performance drop.", "For defense purposes, we design the CTA framework tailored for tabular adversarial training example generation.", "While CTA outperforms all baseline methods in robustness enhancement, there is still an unfilled gap from the original performance.", "This calls for future exploration of the robustness of Text-to-SQL parsers against ATP.", "Our ADVETA benchmark presented in this work is a free and open resource for the community to study the robustness of Text-to-SQL models.", "We collected tables from three mainstream Text-to-SQL datasets, Spider (Yu et al., 2018), WikiSQL (Zhong et al., 2017) and WTQ (Papernot et al., 2017), which are also free and open datasets for research use.", "For the table perturbation step, we hire professional annotators to find suitable RPL/ADD candidates for target columns.", "We pay the annotators at a price of 10 dollars per hour.", "The total time cost for annotating our benchmark is 253 hours.", "All the experiments in this paper can be run on a single Tesla V100 GPU.", "Our benchmark will be released along with the paper." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "method", "result", "other", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Modelling prosody variation is critical for synthesizing natural and expressive speech in end-to-end text-to-speech (TTS) systems.", "In this paper, a cross-utterance conditional VAE (CUC-VAE) is proposed to estimate a posterior probability distribution of the latent prosody features for each phoneme by conditioning on acoustic features, speaker information, and text features obtained from both past and future sentences.", "At inference time, instead of the standard Gaussian distribution used by VAE, CUC-VAE allows sampling from an utterance-specific prior distribution conditioned on cross-utterance information, which allows the prosody features generated by the TTS system to be related to the context and is more similar to how humans naturally produce prosody.", "The performance of CUC-VAE is evaluated via a qualitative listening test for naturalness, intelligibility and quantitative measurements, including word error rates and the standard deviation of prosody attributes.", "Experimental results on LJ-Speech and LibriTTS data show that the proposed CUC-VAE TTS system improves naturalness and prosody diversity with clear margins.", "Recently, abundant research have been performed on modelling variations other than the input text in synthesized speech such as background noise, speaker information, and prosody, as those directly influence the naturalness and expressiveness of the generated audio.", "Prosody, as the focus of this paper, collectively refers to the stress, intonation, and rhythm in speech, and has been an increasingly popular research aspect in end-to-end TTS systems (van den Oord et al., 2016; Wang et al., 2017; Stanton et al., 2018; Elias et al., 2021; Chen et al., 2021).", "Some previous work captured prosody features ex-* These authors contributed equally to this work.", "plicitly using either style tokens or variational au-toencoders (VAEs) (Kingma and Welling, 2014; Hsu et al., 2019a) which encapsulate prosody information into latent representations.", "Recent work achieved fine-grained prosody modelling and control by extracting prosody features at phoneme or word-level (Lee and Kim, 2019; Sun et al., 2020a,b).", "However, the VAE-based TTS system lacks control over the latent space where the sampling is performed from a standard Gaussian prior during inference.", "Therefore, recent research (Dah-mani et al., 2019; Karanasou et al., 2021) employed a conditional VAE (CVAE) (Sohn et al., 2015) to synthesize speech from a conditional prior.", "Meanwhile, pre-trained language model (LM) such as bidirectional encoder representation for Transformers (BERT) (Devlin et al., 2019) has also been applied to TTS systems (Hayashi et al., 2019; Kenter et al., 2020; Jia et al., 2021; Futamata et al., 2021; Cong et al., 2021) to estimate prosody attributes implicitly from pre-trained text representations within the utterance or the segment.", "Efforts have been devoted to include cross-utterance information in the input features to improve the prosody modelling of auto-regressive TTS (Xu et al., 2021).", "To generate more expressive prosody, while maintaining high fidelity in synthesized speech, a cross-utterance conditional VAE (CUC-VAE) component is proposed, which is integrated into and jointly optimised with FastSpeech 2 (Ren et al., 2021), a commonly used non-autoregressive end-to-end TTS system.", "Specifically, the CUC-VAE TTS system consists of cross-utterance embedding (CU-embedding) and cross-utterance enhanced CVAE (CU-enhanced CVAE).", "The CU-embedding takes BERT sentence embeddings from surrounding utterances as inputs and generates phoneme-level CU-embedding using a multi-head attention (Vaswani et al., 2017) layer where attention weights are derived from the encoder output of each phoneme as well as the speaker information.", "The CU-enhanced 391 CVAE is proposed to improve prosody variation and to address the inconsistency between the standard Gaussian prior, which the VAE-based TTS system is sampled from, and the true prior of speech.", "Specifically, the CU-enhanced CVAE is a fine-grained VAE that estimates the posterior of latent prosody features for each phoneme based on acoustic features, cross-utterance embedding, and speaker information.", "It improves the encoder of standard VAE with an utterance-specific prior.", "To match the inference with training, the utterance-specific prior, jointly optimised with the system, is conditioned on the output of CU-embedding.", "Latent prosody features are sampled from the derived utterance-specific prior instead of a standard Gaussian prior during inference.", "The proposed CUC-VAE TTS system was evaluated on the LJ-Speech read English data and the LibriTTS English audiobook data.", "In addition to the sample naturalness measured via subjective listening tests, the intelligibility is measured using word error rate (WER) from an automatic speech recognition (ASR) system, and diversity in prosody was measured by calculating standard deviations of prosody attributes among all generated audio samples of an utterance.", "Experimental results showed that the system with CUC-VAE achieved a much better prosody diversity while improving both the naturalness and intelligibility compared to the standard FastSpeech 2 baseline and two variants.", "The rest of this paper is organised as follows.", "Section 2 introduces the background and related work.", "Section 3 illustrates the proposed CUC-VAE TTS system.", "Experimental setup and results are shown in Section 4 and Section 5, with conclusions in Section 6.", "Non-Autoregressive TTS.", "Promising progress has taken place in non-autoregressive TTS systems to synthesize audio with high efficiency and high fidelity thanks to the advancement in deep learning.", "A non-autoregressive TTS system maps the input text sequence into an acoustic feature or waveform sequence without using the autoregressive decomposition of output probabilities.", "FastSpeech (Ren et al., 2019) and ParaNet (Peng et al., 2019) requires distillation from an autoregressive model, while more recent non-autoregressive TTS systems, including FastPitch (La'ncucki, 2021), AlignTTS (Zeng et al., 2020) and FastSpeech 2 (Ren et al., 2021), do not rely on any form of knowledge distillation from a pre-trained TTS system.", "In this paper, the proposed CUC-VAE TTS system is based on FastSpeech 2. FastSpeech 2 replaces the knowledge distillation for the length regulator in FastSpeech with mean-squared error training based on duration labels, which are obtained from frame-to-phoneme alignment to simplify the training process.", "Additionally, FastSpeech 2 predicts pitch and energy from the encoder output, which is also supervised with pitch contours and L2-norm of signal amplitudes as labels respectively.", "The pitch and energy prediction injects additional prosody information, which improves the naturalness and expressiveness in the synthesized speech.", "Pre-trained Representation in TTS.", "It is believed that prosody can also be inferred from language information in both current and surrounding utterances (Shen et al., 2018; Fang et al., 2019; Xu et al., 2021; Zhou et al., 2021).", "Such information is often entailed in vector representations from a pre-trained LM, such as BERT (Devlin et al., 2019).", "Some existing work incorporated BERT embeddings at word or subword-level into autoregressive TTS models (Shen et al., 2018; Fang et al., 2019).", "More recent work (Xu et al., 2021) used the chunked and paired sentence patterns from BERT.", "Besides, a relational gated graph network with pre-trained BERT embeddings as node inputs (Zhou et al., 2021) was used to extract word-level semantic representations, thus enhancing expressiveness.", "VAEs in TTS.", "VAEs have been widely adopted in TTS systems to explicit model prosody variation.", "The training objective of VAE is to maximise p ( x ) , the data likelihood parameterised by , which can be regarded as the marginalisation w.r.t. the latent vector z as shown in Eq.", "(1).", "where q ( z | x ) is the posterior distribution of the latent vector parameterized by , is a hy-perparameter, and DKL ( ) is the Kullback-Leibler divergence.", "The first term measures the expected reconstruction performance of the data from the 392 CU-Embedding \u0000 p <latexit sha1_base64=\"nbJkEHDwzYaEHhwDxKTozxlg3O0=\">AAAB73icbVBNSwMxEJ2tX7V+VT16CRbBU9mtgh6LXjxWsB/QLiWbZtvQJBuTrFCW/gkvHhTx6t/x5r8xbfegrQ8GHu/NMDMvUpwZ6/vfXmFtfWNzq7hd2tnd2z8oHx61TJJqQpsk4YnuRNhQziRtWmY57ShNsYg4bUfj25nffqLasEQ+2ImiocBDyWJGsHVSp2fYUOC+6pcrftWfA62SICcVyNHol796g4SkgkpLODamG/jKhhnWlhFOp6VeaqjCZIyHtOuoxIKaMJvfO0VnThmgONGupEVz9fdEhoUxExG5ToHtyCx7M/E/r5va+DrMmFSppZIsFsUpRzZBs+fRgGlKLJ84golm7lZERlhjYl1EJRdCsPzyKmnVqsFFtXZ/Wanf5HEU4QRO4RwCuII63EEDmkCAwzO8wpv36L14797HorXg5TPH8Afe5w8mnpAK</latexit> p <latexit sha1_base64=\"SgxVSzpwKz3Kk8tvzOmuxt1nP4k=\">AAAB7HicbVBNSwMxEJ31s9avqkcvwSJ4KrtV0GPRi8cKbltol5JNs21okg1JVihLf4MXD4p49Qd589+YtnvQ1gcDj/dmmJkXK86M9f1vb219Y3Nru7RT3t3bPzisHB23TJppQkOS8lR3YmwoZ5KGlllOO0pTLGJO2/H4bua3n6g2LJWPdqJoJPBQsoQRbJ0U9kTWV/1K1a/5c6BVEhSkCgWa/cpXb5CSTFBpCcfGdANf2SjH2jLC6bTcywxVmIzxkHYdlVhQE+XzY6fo3CkDlKTalbRorv6eyLEwZiJi1ymwHZllbyb+53Uzm9xEOZMqs1SSxaIk48imaPY5GjBNieUTRzDRzN2KyAhrTKzLp+xCCJZfXiWtei24rNUfrqqN2yKOEpzCGVxAANfQgHtoQggEGDzDK7x50nvx3r2PReuaV8ycwB94nz/lJI69</latexit> G2P p 1 ,p 2 , ,p T <latexit sha1_base64=\"iWS+G5prvdqINX1W0gEeo+V/A7E=\">AAAB+3icbZDLSgMxFIYz9VbrbaxLN8EiuChlpgq6LLpxWaE3aIchk8m0oZkkJBmxlL6KGxeKuPVF3Pk2pu0stPWHwMd/zuGc/JFkVBvP+3YKG5tb2zvF3dLe/sHhkXtc7miRKUzaWDChehHShFFO2oYaRnpSEZRGjHSj8d283n0kSlPBW2YiSZCiIacJxchYK3TLMvSrMqxXBzgWRltshW7Fq3kLwXXwc6iAXM3Q/RrEAmcp4QYzpHXf96QJpkgZihmZlQaZJhLhMRqSvkWOUqKD6eL2GTy3TgwToezjBi7c3xNTlGo9SSPbmSIz0qu1uflfrZ+Z5CaYUi4zQzheLkoyBo2A8yBgTBXBhk0sIKyovRXiEVIIGxtXyYbgr355HTr1mn9Zqz9cVRq3eRxFcArOwAXwwTVogHvQBG2AwRN4Bq/gzZk5L86787FsLTj5zAn4I+fzB5rLk4Q=</latexit> f ( p 1 ) ,f ( p 2 ) , ,f ( p T ) <latexit sha1_base64=\"hiCwzhA/O56GEAP7pDn/PhiOjSo=\">AAACBXicbVBNS8MwGE7n15xfVY96CA5hgzHaKehx6MXjhH3BVkqapltY2pQkFUbZxYt/xYsHRbz6H7z5b0y3HnTzgZAnz/O+vHkfL2ZUKsv6Ngpr6xubW8Xt0s7u3v6BeXjUlTwRmHQwZ1z0PSQJoxHpKKoY6ceCoNBjpOdNbjO/90CEpDxqq2lMnBCNIhpQjJSWXPM0qMSuXa1lV6NaG2KfK1mD2bNddc2yVbfmgKvEzkkZ5Gi55tfQ5zgJSaQwQ1IObCtWToqEopiRWWmYSBIjPEEjMtA0QiGRTjrfYgbPteLDgAt9IgXn6u+OFIVSTkNPV4ZIjeWyl4n/eYNEBddOSqM4USTCi0FBwqDiMIsE+lQQrNhUE4QF1X+FeIwEwkoHV9Ih2Msrr5Juo25f1Bv3l+XmTR5HEZyAM1ABNrgCTXAHWqADMHgEz+AVvBlPxovxbnwsSgtG3nMM/sD4/AG9I5Yt</latexit> Speaker Embeding b \u0000 L <latexit sha1_base64=\"oa7erxtNWMKxc22gguHRuGdY4DU=\">AAAB7XicbVC7TgMxENzjGcIrQEljiJBCQXQXCigjaCgogkQeUnKKfI4vMfHZJ9uHFJ3yDdBQgBAt/8En0PEh9DiPAhJGWmk0s6vdnSDmTBvX/XIWFpeWV1Yza9n1jc2t7dzObk3LRBFaJZJL1QiwppwJWjXMcNqIFcVRwGk96F+O/Po9VZpJcWsGMfUj3BUsZAQbK9WCdnpyPWzn8m7RHQPNE29K8uWDwvfHQ+u40s59tjqSJBEVhnCsddNzY+OnWBlGOB1mW4mmMSZ93KVNSwWOqPbT8bVDdGSVDgqlsiUMGqu/J1IcaT2IAtsZYdPTs95I/M9rJiY891Mm4sRQQSaLwoQjI9HoddRhihLDB5Zgopi9FZEeVpgYG1DWhuDNvjxPaqWid1os3dg0LmCCDOzDIRTAgzMowxVUoAoE7uARnuHFkc6T8+q8TVoXnOnMHvyB8/4DvHKSPA==</latexit> b \u0000 L +1 <latexit sha1_base64=\"uuOHQYzOmdc1LCF8YMmziyMLOPg=\">AAAB73icbVC7SgNBFL0bXzG+opY2o0GIiGE3FloGbSwsIpgHJEuYncwmQ2Zn15lZISz5BsHGQhFbf8NPsPND7J1sUmjigQuHc+7l3nu8iDOlbfvLyiwsLi2vZFdza+sbm1v57Z26CmNJaI2EPJRNDyvKmaA1zTSnzUhSHHicNrzB5dhv3FOpWChu9TCiboB7gvmMYG2kptdJTq6PnVEnX7BLdgo0T5wpKVT2i98fD+2jaif/2e6GJA6o0IRjpVqOHWk3wVIzwuko144VjTAZ4B5tGSpwQJWbpPeO0KFRusgPpSmhUar+nkhwoNQw8ExngHVfzXpj8T+vFWv/3E2YiGJNBZks8mOOdIjGz6Muk5RoPjQEE8nMrYj0scREm4hyJgRn9uV5Ui+XnNNS+cakcQETZGEPDqAIDpxBBa6gCjUgwOERnuHFurOerFfrbdKasaYzu/AH1vsPlcCSrA==</latexit> cat cat BERT Linear BERT Conv 1D Conv 1D <latexit sha1_base64=\"9pGPaDabNijkWLtd2pluvCo4p5o=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69BIvgqSRV0GPRi8cK9gPaUDabTbt2sxt2J0Ip/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZemApu0PO+ncLa+sbmVnG7tLO7t39QPjxqGZVpyppUCaU7ITFMcMmayFGwTqoZSULB2uHodua3n5g2XMkHHKcsSMhA8phTglZq9Wik0PTLFa/qzeGuEj8nFcjR6Je/epGiWcIkUkGM6fpeisGEaORUsGmplxmWEjoiA9a1VJKEmWAyv3bqnlklcmOlbUl05+rviQlJjBknoe1MCA7NsjcT//O6GcbXwYTLNEMm6WJRnAkXlTt73Y24ZhTF2BJCNbe3unRINKFoAyrZEPzll1dJq1b1L6q1+8tK/SaPowgncArn4MMV1OEOGtAECo/wDK/w5ijnxXl3PhatBSefOYY/cD5/AK+ljzM=</latexit> <latexit sha1_base64=\"9pGPaDabNijkWLtd2pluvCo4p5o=\">AAAB7XicbVBNS8NAEJ3Ur1q/qh69BIvgqSRV0GPRi8cK9gPaUDabTbt2sxt2J0Ip/Q9ePCji1f/jzX/jts1BWx8MPN6bYWZemApu0PO+ncLa+sbmVnG7tLO7t39QPjxqGZVpyppUCaU7ITFMcMmayFGwTqoZSULB2uHodua3n5g2XMkHHKcsSMhA8phTglZq9Wik0PTLFa/qzeGuEj8nFcjR6Je/epGiWcIkUkGM6fpeisGEaORUsGmplxmWEjoiA9a1VJKEmWAyv3bqnlklcmOlbUl05+rviQlJjBknoe1MCA7NsjcT//O6GcbXwYTLNEMm6WJRnAkXlTt73Y24ZhTF2BJCNbe3unRINKFoAyrZEPzll1dJq1b1L6q1+8tK/SaPowgncArn4MMV1OEOGtAECo/wDK/w5ijnxXl3PhatBSefOYY/cD5/AK+ljzM=</latexit> u i \u0000 1 <latexit sha1_base64=\"CzZeOQlLq1d5d0kxwmKW/r6tSiI=\">AAAB8HicbVBNSwMxEJ2tX7V+VT16CRbBi2W3FfRY9OKxgv2QdinZNNuGJtklyQpl2V/hxYMiXv053vw3pu0etPXBwOO9GWbmBTFn2rjut1NYW9/Y3Cpul3Z29/YPyodHbR0litAWiXikugHWlDNJW4YZTruxolgEnHaCye3M7zxRpVkkH8w0pr7AI8lCRrCx0mOaDFJ24WXZoFxxq+4caJV4OalAjuag/NUfRiQRVBrCsdY9z42Nn2JlGOE0K/UTTWNMJnhEe5ZKLKj20/nBGTqzyhCFkbIlDZqrvydSLLSeisB2CmzGetmbif95vcSE137KZJwYKsliUZhwZCI0+x4NmaLE8KklmChmb0VkjBUmxmZUsiF4yy+vknat6tWrtfvLSuMmj6MIJ3AK5+DBFTTgDprQAgICnuEV3hzlvDjvzseiteDkM8fwB87nD8a7kGM=</latexit> u i +1 <latexit sha1_base64=\"1zegKdGKwcRh5C1t7ojyxXg2vco=\">AAAB8HicbVBNSwMxEJ2tX7V+VT16CRZBEMpuK+ix6MVjBfsh7VKyabYNTbJLkhXKsr/CiwdFvPpzvPlvTNs9aOuDgcd7M8zMC2LOtHHdb6ewtr6xuVXcLu3s7u0flA+P2jpKFKEtEvFIdQOsKWeStgwznHZjRbEIOO0Ek9uZ33miSrNIPphpTH2BR5KFjGBjpcc0GaTswsuyQbniVt050CrxclKBHM1B+as/jEgiqDSEY617nhsbP8XKMMJpVuonmsaYTPCI9iyVWFDtp/ODM3RmlSEKI2VLGjRXf0+kWGg9FYHtFNiM9bI3E//zeokJr/2UyTgxVJLFojDhyERo9j0aMkWJ4VNLMFHM3orIGCtMjM2oZEPwll9eJe1a1atXa/eXlcZNHkcRTuAUzsGDK2jAHTShBQQEPMMrvDnKeXHenY9Fa8HJZ47hD5zPH8OtkGE=</latexit> u i <latexit sha1_base64=\"TpwCXOuGPBKYW2nKTLduVUYFqEE=\">AAAB7nicbVBNS8NAEJ34WetX1aOXxSJ4KkkV9Fj04rGC/YA2lM120y7dbMLuRCghP8KLB0W8+nu8+W/ctjlo64OBx3szzMwLEikMuu63s7a+sbm1Xdop7+7tHxxWjo7bJk414y0Wy1h3A2q4FIq3UKDk3URzGgWSd4LJ3czvPHFtRKwecZpwP6IjJULBKFqpk6WDTOT5oFJ1a+4cZJV4BalCgeag8tUfxiyNuEImqTE9z03Qz6hGwSTPy/3U8ISyCR3xnqWKRtz42fzcnJxbZUjCWNtSSObq74mMRsZMo8B2RhTHZtmbif95vRTDGz8TKkmRK7ZYFKaSYExmv5Oh0JyhnFpCmRb2VsLGVFOGNqGyDcFbfnmVtOs177JWf7iqNm6LOEpwCmdwAR5cQwPuoQktYDCBZ3iFNydxXpx352PRuuYUMyfwB87nD+kDj/E=</latexit> Cross-Utterrance Text S i <latexit sha1_base64=\"qt61y0dC4e0OMHc1XGHXJv0kubY=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48VTVtoQ9lst+3SzSbsToQS+hu8eFDEqz/Im//GbZuDtj4YeLw3w8y8MJHCoOt+O4W19Y3NreJ2aWd3b/+gfHjUNHGqGfdZLGPdDqnhUijuo0DJ24nmNAolb4Xj25nfeuLaiFg94iThQUSHSgwEo2glP3voiWmvXHGr7hxklXg5qUCORq/81e3HLI24QiapMR3PTTDIqEbBJJ+WuqnhCWVjOuQdSxWNuAmy+bFTcmaVPhnE2pZCMld/T2Q0MmYShbYzojgyy95M/M/rpDi4DjKhkhS5YotFg1QSjMnsc9IXmjOUE0so08LeStiIasrQ5lOyIXjLL6+SZq3qXVRr95eV+k0eRxFO4BTOwYMrqMMdNMAHBgKe4RXeHOW8OO/Ox6K14OQzx/AHzucP7kiOww==</latexit> [1001] <latexit sha1_base64=\"rqh6OEvXehVAvgISpiGcNY72+4w=\">AAAB8XicbVBNSwMxEJ2tX7V+VT16CRbBU9mtgh6LXjxWsB/YXUo2nW1Ds9klyQql9F948aCIV/+NN/+NabsHbX0QeLw3M5l5YSq4Nq777RTW1jc2t4rbpZ3dvf2D8uFRSyeZYthkiUhUJ6QaBZfYNNwI7KQKaRwKbIej25nffkKleSIfzDjFIKYDySPOqLHSo0+6nut6gU965Ypbdecgq8TLSQVyNHrlL7+fsCxGaZigWttBqQkmVBnOBE5LfqYxpWxEB9i1VNIYdTCZbzwlZ1bpkyhR9klD5urvjgmNtR7Hoa2MqRnqZW8m/ud1MxNdBxMu08ygZIuPokwQk5DZ+aTPFTIjxpZQprjdlbAhVZQZG1LJhuAtn7xKWrWqd1Gt3V9W6jd5HEU4gVM4Bw+uoA530IAmMJDwDK/w5mjnxXl3PhalBSfvOYY/cD5/AD4+j1Q=</latexit> Speaker ID Encoder Decoder x i <latexit sha1_base64=\"eL0W8wJUw3JjqpATFzu1OH/qCpI=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48VTFtoQ9lsp+3SzSbsbsQS+hu8eFDEqz/Im//GbZuDtj4YeLw3w8y8MBFcG9f9dgpr6xubW8Xt0s7u3v5B+fCoqeNUMfRZLGLVDqlGwSX6hhuB7UQhjUKBrXB8O/Nbj6g0j+WDmSQYRHQo+YAzaqzkZ089Pu2VK27VnYOsEi8nFcjR6JW/uv2YpRFKwwTVuuO5iQkyqgxnAqelbqoxoWxMh9ixVNIIdZDNj52SM6v0ySBWtqQhc/X3REYjrSdRaDsjakZ62ZuJ/3md1Ayug4zLJDUo2WLRIBXExGT2OelzhcyIiSWUKW5vJWxEFWXG5lOyIXjLL6+SZq3qXVRr95eV+k0eRxFO4BTOwYMrqMMdNMAHBhye4RXeHOm8OO/Ox6K14OQzx/AHzucPJtqO6A==</latexit> Reference Mel Spectrogram Mel Spectrogram Duration Predictor H i <latexit sha1_base64=\"solDIowmRI2OU1862xVApQkad60=\">AAAB83icbVDLSgMxFL2pr1pfVZdugkVwVWaqoMuimy4r2Ad0hpJJM21oJjMkGaEM/Q03LhRx68+482/MtLPQ1gOBwzn3ck9OkAiujeN8o9LG5tb2Tnm3srd/cHhUPT7p6jhVlHVoLGLVD4hmgkvWMdwI1k8UI1EgWC+Y3ud+74kpzWP5aGYJ8yMyljzklBgreV5EzCQIs9Z8yIfVmlN3FsDrxC1IDQq0h9UvbxTTNGLSUEG0HrhOYvyMKMOpYPOKl2qWEDolYzawVJKIaT9bZJ7jC6uMcBgr+6TBC/X3RkYirWdRYCfzjHrVy8X/vEFqwls/4zJJDZN0eShMBTYxzgvAI64YNWJmCaGK26yYTogi1NiaKrYEd/XL66TbqLtX9cbDda15V9RRhjM4h0tw4Qaa0II2dIBCAs/wCm8oRS/oHX0sR0uo2DmFP0CfPzc+kcw=</latexit> D i <latexit sha1_base64=\"GFTWYNDr3N6mnQxwUAg64uTOypw=\">AAAB83icbVDLSgMxFL2pr1pfVZdugkVwVWaqoMuiLlxWsA/oDCWTZtrQTGZIMkIZ+htuXCji1p9x59+YaWehrQcCh3Pu5Z6cIBFcG8f5RqW19Y3NrfJ2ZWd3b/+genjU0XGqKGvTWMSqFxDNBJesbbgRrJcoRqJAsG4wuc397hNTmsfy0UwT5kdkJHnIKTFW8ryImHEQZnezAR9Ua07dmQOvErcgNSjQGlS/vGFM04hJQwXRuu86ifEzogyngs0qXqpZQuiEjFjfUkkipv1snnmGz6wyxGGs7JMGz9XfGxmJtJ5GgZ3MM+plLxf/8/qpCa/9jMskNUzSxaEwFdjEOC8AD7li1IipJYQqbrNiOiaKUGNrqtgS3OUvr5JOo+5e1BsPl7XmTVFHGU7gFM7BhStowj20oA0UEniGV3hDKXpB7+hjMVpCxc4x/AH6/AExIpHI</latexit> S i <latexit sha1_base64=\"qt61y0dC4e0OMHc1XGHXJv0kubY=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48VTVtoQ9lst+3SzSbsToQS+hu8eFDEqz/Im//GbZuDtj4YeLw3w8y8MJHCoOt+O4W19Y3NreJ2aWd3b/+gfHjUNHGqGfdZLGPdDqnhUijuo0DJ24nmNAolb4Xj25nfeuLaiFg94iThQUSHSgwEo2glP3voiWmvXHGr7hxklXg5qUCORq/81e3HLI24QiapMR3PTTDIqEbBJJ+WuqnhCWVjOuQdSxWNuAmy+bFTcmaVPhnE2pZCMld/T2Q0MmYShbYzojgyy95M/M/rpDi4DjKhkhS5YotFg1QSjMnsc9IXmjOUE0so08LeStiIasrQ5lOyIXjLL6+SZq3qXVRr95eV+k0eRxFO4BTOwYMrqMMdNMAHBgKe4RXeHOW8OO/Ox6K14OQzx/AHzucP7kiOww==</latexit> u i \u0000 L <latexit sha1_base64=\"Xms6C309BS8tJXGE7MZawvdKaB8=\">AAAB8HicbVA9SwNBEJ2LXzF+RS1tFoNgY7iLgpZBGwuLCOZDkiPsbfaSJbt7x+6eEI77FTYWitj6c+z8N26SKzTxwcDjvRlm5gUxZ9q47rdTWFldW98obpa2tnd298r7By0dJYrQJol4pDoB1pQzSZuGGU47saJYBJy2g/HN1G8/UaVZJB/MJKa+wEPJQkawsdJjmvRTdnaXZf1yxa26M6Bl4uWkAjka/fJXbxCRRFBpCMdadz03Nn6KlWGE06zUSzSNMRnjIe1aKrGg2k9nB2foxCoDFEbKljRopv6eSLHQeiIC2ymwGelFbyr+53UTE175KZNxYqgk80VhwpGJ0PR7NGCKEsMnlmCimL0VkRFWmBibUcmG4C2+vExatap3Xq3dX1Tq13kcRTiCYzgFDy6hDrfQgCYQEPAMr/DmKOfFeXc+5q0FJ585hD9wPn8A792Qfg==</latexit> u i \u0000 L \u0000 1 <latexit sha1_base64=\"LKrIjkDSIEU50tXgigacuOS8QZM=\">AAAB8nicbVA9SwNBEN2LXzF+RS1tFoNgk3AXBS2DNhYWEcwHXI6wt9kkS/Z2j905IRz3M2wsFLH119j5b9wkV2jig4HHezPMzAtjwQ247rdTWFvf2Nwqbpd2dvf2D8qHR22jEk1ZiyqhdDckhgkuWQs4CNaNNSNRKFgnnNzO/M4T04Yr+QjTmAURGUk+5JSAlfw06ae8el/1sqxfrrg1dw68SrycVFCOZr/81RsomkRMAhXEGN9zYwhSooFTwbJSLzEsJnRCRsy3VJKImSCdn5zhM6sM8FBpWxLwXP09kZLImGkU2s6IwNgsezPxP89PYHgdpFzGCTBJF4uGicCg8Ox/POCaURBTSwjV3N6K6ZhoQsGmVLIheMsvr5J2veZd1OoPl5XGTR5HEZ2gU3SOPHSFGugONVELUaTQM3pFbw44L86787FoLTj5zDH6A+fzB86PkPA=</latexit> u i <latexit sha1_base64=\"TpwCXOuGPBKYW2nKTLduVUYFqEE=\">AAAB7nicbVBNS8NAEJ34WetX1aOXxSJ4KkkV9Fj04rGC/YA2lM120y7dbMLuRCghP8KLB0W8+nu8+W/ctjlo64OBx3szzMwLEikMuu63s7a+sbm1Xdop7+7tHxxWjo7bJk414y0Wy1h3A2q4FIq3UKDk3URzGgWSd4LJ3czvPHFtRKwecZpwP6IjJULBKFqpk6WDTOT5oFJ1a+4cZJV4BalCgeag8tUfxiyNuEImqTE9z03Qz6hGwSTPy/3U8ISyCR3xnqWKRtz42fzcnJxbZUjCWNtSSObq74mMRsZMo8B2RhTHZtmbif95vRTDGz8TKkmRK7ZYFKaSYExmv5Oh0JyhnFpCmRb2VsLGVFOGNqGyDcFbfnmVtOs177JWf7iqNm6LOEpwCmdwAR5cQwPuoQktYDCBZ3iFNydxXpx352PRuuYUMyfwB87nD+kDj/E=</latexit> x i <latexit sha1_base64=\"eL0W8wJUw3JjqpATFzu1OH/qCpI=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mqoMeiF48VTFtoQ9lsp+3SzSbsbsQS+hu8eFDEqz/Im//GbZuDtj4YeLw3w8y8MBFcG9f9dgpr6xubW8Xt0s7u3v5B+fCoqeNUMfRZLGLVDqlGwSX6hhuB7UQhjUKBrXB8O/Nbj6g0j+WDmSQYRHQo+YAzaqzkZ089Pu2VK27VnYOsEi8nFcjR6JW/uv2YpRFKwwTVuuO5iQkyqgxnAqelbqoxoWxMh9ixVNIIdZDNj52SM6v0ySBWtqQhc/X3REYjrSdRaDsjakZ62ZuJ/3md1Ayug4zLJDUo2WLRIBXExGT2OelzhcyIiSWUKW5vJWxEFWXG5lOyIXjLL6+SZq3qXVRr95eV+k0eRxFO4BTOwYMrqMMdNMAHBhye4RXeHOm8OO/Ox6K14OQzx/AHzucPJtqO6A==</latexit> H i <latexit sha1_base64=\"solDIowmRI2OU1862xVApQkad60=\">AAAB83icbVDLSgMxFL2pr1pfVZdugkVwVWaqoMuimy4r2Ad0hpJJM21oJjMkGaEM/Q03LhRx68+482/MtLPQ1gOBwzn3ck9OkAiujeN8o9LG5tb2Tnm3srd/cHhUPT7p6jhVlHVoLGLVD4hmgkvWMdwI1k8UI1EgWC+Y3ud+74kpzWP5aGYJ8yMyljzklBgreV5EzCQIs9Z8yIfVmlN3FsDrxC1IDQq0h9UvbxTTNGLSUEG0HrhOYvyMKMOpYPOKl2qWEDolYzawVJKIaT9bZJ7jC6uMcBgr+6TBC/X3RkYirWdRYCfzjHrVy8X/vEFqwls/4zJJDZN0eShMBTYxzgvAI64YNWJmCaGK26yYTogi1NiaKrYEd/XL66TbqLtX9cbDda15V9RRhjM4h0tw4Qaa0II2dIBCAs/wCm8oRS/oHX0sR0uo2DmFP0CfPzc+kcw=</latexit> D i <latexit sha1_base64=\"GFTWYNDr3N6mnQxwUAg64uTOypw=\">AAAB83icbVDLSgMxFL2pr1pfVZdugkVwVWaqoMuiLlxWsA/oDCWTZtrQTGZIMkIZ+htuXCji1p9x59+YaWehrQcCh3Pu5Z6cIBFcG8f5RqW19Y3NrfJ2ZWd3b/+genjU0XGqKGvTWMSqFxDNBJesbbgRrJcoRqJAsG4wuc397hNTmsfy0UwT5kdkJHnIKTFW8ryImHEQZnezAR9Ua07dmQOvErcgNSjQGlS/vGFM04hJQwXRuu86ifEzogyngs0qXqpZQuiEjFjfUkkipv1snnmGz6wyxGGs7JMGz9XfGxmJtJ5GgZ3MM+plLxf/8/qpCa/9jMskNUzSxaEwFdjEOC8AD7li1IipJYQqbrNiOiaKUGNrqtgS3OUvr5JOo+5e1BsPl7XmTVFHGU7gFM7BhStowj20oA0UEniGV3hDKXpB7+hjMVpCxc4x/AH6/AExIpHI</latexit> Z i <latexit sha1_base64=\"ZZMrcJi38o/UdNVPM68yAWuReEE=\">AAAB83icbVDLSgMxFL1TX7W+qi7dBIvgqsxUQZdFNy4r2Ad2hpJJM21oJhOSjFCG/oYbF4q49Wfc+Tdm2llo64HA4Zx7uScnlJxp47rfTmltfWNzq7xd2dnd2z+oHh51dJIqQtsk4YnqhVhTzgRtG2Y47UlFcRxy2g0nt7nffaJKs0Q8mKmkQYxHgkWMYGMl34+xGYdR9jgbsEG15tbdOdAq8QpSgwKtQfXLHyYkjakwhGOt+54rTZBhZRjhdFbxU00lJhM8on1LBY6pDrJ55hk6s8oQRYmyTxg0V39vZDjWehqHdjLPqJe9XPzP66cmug4yJmRqqCCLQ1HKkUlQXgAaMkWJ4VNLMFHMZkVkjBUmxtZUsSV4y19eJZ1G3buoN+4va82boo4ynMApnIMHV9CEO2hBGwhIeIZXeHNS58V5dz4WoyWn2DmGP3A+fwBSvJHe</latexit> CU-Enhanced CVAE CU-Embedding Z i <latexit sha1_base64=\"ZZMrcJi38o/UdNVPM68yAWuReEE=\">AAAB83icbVDLSgMxFL1TX7W+qi7dBIvgqsxUQZdFNy4r2Ad2hpJJM21oJhOSjFCG/oYbF4q49Wfc+Tdm2llo64HA4Zx7uScnlJxp47rfTmltfWNzq7xd2dnd2z+oHh51dJIqQtsk4YnqhVhTzgRtG2Y47UlFcRxy2g0nt7nffaJKs0Q8mKmkQYxHgkWMYGMl34+xGYdR9jgbsEG15tbdOdAq8QpSgwKtQfXLHyYkjakwhGOt+54rTZBhZRjhdFbxU00lJhM8on1LBY6pDrJ55hk6s8oQRYmyTxg0V39vZDjWehqHdjLPqJe9XPzP66cmug4yJmRqqCCLQ1HKkUlQXgAaMkWJ4VNLMFHMZkVkjBUmxtZUsSV4y19eJZ1G3buoN+4va82boo4ynMApnIMHV9CEO2hBGwhIeIZXeHNS58V5dz4WoyWn2DmGP3A+fwBSvJHe</latexit> Conv 1D Conv 1D Multi-Head Attention N ( p , \u0000 p ) <latexit sha1_base64=\"CV0xJZ/+v1sBH5gPKOhM4Epataw=\">AAACCnicbVDLSsNAFJ3UV62vqEs3o0WoICWpgi6LblxJBfuAJoTJdNoOnUmGmYlQQtdu/BU3LhRx6xe482+ctFlo64ELh3Pu5d57QsGo0o7zbRWWlldW14rrpY3Nre0de3evpeJEYtLEMYtlJ0SKMBqRpqaakY6QBPGQkXY4us789gORisbRvR4L4nM0iGifYqSNFNiHnqIcehzpIUYsvZ1UPJ4E4hQafcBRIE4Cu+xUnSngInFzUgY5GoH95fVinHASacyQUl3XEdpPkdQUMzIpeYkiAuERGpCuoRHiRPnp9JUJPDZKD/ZjaSrScKr+nkgRV2rMQ9OZ3azmvUz8z+smun/ppzQSiSYRni3qJwzqGGa5wB6VBGs2NgRhSc2tEA+RRFib9EomBHf+5UXSqlXds2rt7rxcv8rjKIIDcAQqwAUXoA5uQAM0AQaP4Bm8gjfryXqx3q2PWWvBymf2wR9Ynz9+tJog</latexit> z p <latexit sha1_base64=\"lFcM9zkp5xhRz57pXUiCPGFWjpQ=\">AAAB9HicbVDLSgNBEOyNrxhfUY9eBoPgKexGQY9BLx4jmAckS5idzCZDZh/O9Abiku/w4kERr36MN//G2WQPmlgwUFR10zXlxVJotO1vq7C2vrG5Vdwu7ezu7R+UD49aOkoU400WyUh1PKq5FCFvokDJO7HiNPAkb3vj28xvT7jSIgofcBpzN6DDUPiCUTSS2wsojgSmT7N+TPrlil215yCrxMlJBXI0+uWv3iBiScBDZJJq3XXsGN2UKhRM8lmpl2geUzamQ941NKQB1246Dz0jZ0YZED9S5oVI5urvjZQGWk8Dz0xmIfWyl4n/ed0E/Ws3FWGcIA/Z4pCfSIIRyRogA6E4Qzk1hDIlTFbCRlRRhqankinBWf7yKmnVqs5FtXZ/Wanf5HUU4QRO4RwcuII63EEDmsDgEZ7hFd6sifVivVsfi9GCle8cwx9Ynz8JKZJE</latexit> Utterance-Specific Prior Leon came back.", "latent vector and is approximated by Monte Carlo sampling of z according to the posterior distribution.", "The reparameterization trick is applied to make the sampling differentiable.", "The second term encourages the posterior distribution to approach the prior distribution which is sampled from during inference, and weighs this term's contribution.", "A large body of previous work on VAE-based TTS used VAEs to capture and disentangle data variations in different aspects in the latent space.", "Works by Akuzawa et al. (2018) leveraged VAE to model the speaking style of an utterance.", "Meanwhile, Hsu et al. (2019a,b) explored the disentanglement between prosody variation and speaker information using VAE together with adversarial training.", "Recently, fine-grained VAE (Sun et al., 2020a,b) was adopted to model prosody in the latent space for each phoneme or word.", "Moreover, vector-quantised VAE was also applied to discrete duration modelling by Yasuda et al. (2021).", "CVAE is a variant of VAE when the data generation is conditioned on some other information y .", "In CVAE, both prior and posterior distributions are conditioned on additional variables, and the data likelihood calculation is modified as shown below: p ( x | y ) = (cid:90) p ( x | z , y ) p ( z | y ) d z .", "LELBO ( x | y ) = E q ( z | x , y ) [log p ( x | z , y )] D KL ( q ( z | x , y ) p ( z | y ))", "To model the conditional prior, a density network is usually used to predict the mean and variance based on the conditional input y .", "The proposed CUC-VAE TTS system, which is adapted from FastSpeech 2 as shown in Fig. 1, aims to synthesize speech with more expressive prosody.", "Fig. 1 describes the model architecture, which has two components: CU-embedding and CU-enhanced CVAE.", "The CUC-VAE TTS system takes as input [ u i L , , u i , , u i + L ] , s i and x i , where [ u i L , , u i , , u i + L ] is the cross-utterance set that includes the current utterance u i and the L utterances before and after u i .", "Each u represents the text content of an utterance.", "Note that s i is the speaker ID, and x i is the reference mel-spectrogram of the current utterance u i .", "In this section, the two main components of the CUC-VAE TTS system will be introduced in detail.", "The CU-embedding encodes not only the phoneme sequence and speaker information but also cross-utterance information into a sequence of mixture", "encodings in place of a standard embedding.", "As shown in Fig. 1, the first L utterances and the last L utterances surrounding the current one, u i , are used as text input in addition to the current utterance and speaker information.", "Same as the standard embedding, an extra G2P conversion is first performed to convert the current utterance into phonemes P i = [ p 1 , p 2 , , p T ] , where T is the number of phonemes.", "Then, a Transformer encoder is used to encode the phoneme sequence into a sequence of phoneme encodings.", "Besides, speaker information is encoded into a speaker embedding s i which is directly added to each phoneme encoding to form the mixture encodings F i of the phoneme sequence.", "where f represents resultant vector from the addition of each phoneme encoding and speaker embedding.", "To supplement the text information from the current utterance to generate natural and expressive audio, cross-utterance BERT embeddings together with a multi-head attention layer are used to capture contextual information.", "To be-gin with, 2 L cross-utterance pairs, denoted as C i , are derived from 2 L + 1 neighboring utterances [ u i L , , u i , , u i + L ] as: C i = [ c ( u i L , u i L +1 ) , ,c ( u i 1 , u i ) , ,c ( u i + L 1 , u i + L )] , (5) where c ( u k , u k +1 ) = { [ CLS ] , u k , [ SEP ] , u k +1 } , which adds a special token [CLS] at the beginning of each pair and inserts another special token [SEP] at the boundary of each sentence to keep track of BERT.", "Then, the 2 L cross-utterance pairs are fed to the BERT to capture cross-utterance information, which yields 2 LBERT embedding vectors by taking the output vector at the position of the [CLS] token and projecting each to a 768-dim vector for each cross-utterance pair, as shown below: B i = [ b L , b L +1 , , b L 1 ] , where each vector b k in B i represents the BERT embedding of the cross-utterance pair c ( u k , u k +1 ) .", "Next, to extract CU-embedding vectors for each phoneme specifically, a multi-head attention layer is added to combine the 2 L BERT embeddings into one vector as shown in Eq.", "(6).", "where MHA ( ) denotes the multi-head attention layer, WQ , WK and WV are linear projection matrices, and F i denotes the sequence of mixture encodings for the current utterance which acts as the query in the attention mechanism.", "For simplicity, we denote Eq.", "(6) as G i = [ g 1 , g 2 , , g T ] from the multi-head attention being of length T and each of them is then concatenated with its corresponding mixture encoding.", "The concatenated vectors are projected by another linear layer to form the final output H i of the CU-embedding, H i = [ h 1 , h 2 , , h T ] of the current utterance, as shown in Eq.", "(7).", "where W is a linear projection matrix.", "Moreover, an additional duration predictor takes H i as inputs and predicts the duration D i of each phoneme.", "In addition to the CU-embedding, a CU-enhanced CVAE is proposed to conquer the lack of prosody variation of FastSpeech 2 and the inconsistency between the standard Gaussian prior distribution sampled by the VAE based TTS system and the true prior distribution of speech.", "Specifically, the CU-enhanced CVAE consists of an encoder module and a decoder module, as shown in Fig. 1. The utterance-specific prior in the encoder aims to learn the prior distribution z p from the CU-embedding output H and predicts duration D .", "For convenience, the subscript i is omitted in this subsection.", "Furthermore, the posterior module in the encoder takes as input reference mel-spectrogram x , then model the approximate posterior z conditioned on utterance-specific conditional prior z p .", "Sampling is done from the estimated prior by the utterance-specific prior module and is reparameterized as: z = z p , (8) where and are estimated from conditional posterior module to approximate posterior distribution N ( , ) , z p is sampled from the learned utterance-specific prior, and , are elementwise addition and multiplication operation.", "Furthermore, the utterance-specific conditional prior module is conducted to learn utterance-specific prior with CU-embedding output H and D .", "The reparameterization is as follows: z p = p p , (9) 394 where p , p are learned from the utterance-specific prior module, and is sampled from the standard Gaussian N (0 , 1) .", "By substituting Eq.", "(9) into Eq.", "(8), the following equation can be derived for the total sampling process: z = p p .", "During inference, sampling is done from the learned utterance-specific conditional prior distribution N ( p , p ) from CU-embedding instead of a standard Gaussian distribution N (0 , 1) .", "For simplicity, we can formulate the data likelihood calculation as follows, where the intermediate variable utterance-specific prior z p from D , H to obtain z is omitted: p ( x | H , D ) = (cid:82) p ( x | z , H , D ) p ( z | H , D ) d z , (11) In Eq.", "(11), , are the encoder and decoder module parameters of the CUC-VAE TTS system.", "Moreover, the decoder in CU-enhanced CVAE is adapted from FastSpeech 2. An additional projection layer is firstly added to project z to a high dimensional space so that z could be added to H .", "Next, a length regulator expands the length of input according to the predicted duration D of each phoneme.", "The rest of Decoder is same as the Decoder module in FastSpeech 2 to convert the hidden sequence into an mel-spectrogram sequence via parallelized calculation.", "Therefore, the ELBO objective of the CUC-VAE can be expressed as, L ( x | H , D ) = E q ( z | D , H ) [log p ( x | z , D , H )] 1 T (cid:88) n =1 DKL (cid:0) q 1 (cid:0) z n | z np , x (cid:1) q 2 (cid:0) z np | D , H (cid:1)(cid:1) 2 T (cid:88) n =1 DKL (cid:0) q 2 (cid:0) z np | D , H (cid:1) p ( z np ) (cid:1) , (12) where 1 , 2 are two parts of CUC-VAE encoder to obtain z from z p , x and z p from D , H respectively, 1 , 2 are two balance constants, p ( z np ) is chosen to be standard Gaussian N (0 , 1) .", "Meanwhile, z n and z np correspond to the latent representation for the n -th phoneme, and T is the length of the phoneme sequence.", "To evaluate the proposed CUC-VAE TTS system, a series of experiments were conducted on a single", "speaker dataset and a multi-speaker dataset.", "For the single speaker setting, the LJ-Speech read English data (Ito and Johnson, 2017) was used which consists of 13,100 audio clips with a total duration of approximately 24 hours.", "A female native English speaker read all the audio clips, and the scripts were selected from 7 non-fiction books.", "For the multi-speaker setting, the train-clean-100 and train-clean-360 subsets of the LibriTTS English audiobook data (Zen et al., 2019) were used.", "These subsets used here consist of 1151 speakers (553 female speakers and 598 male speakers) and about 245 hours of audio.", "All audio clips were re-sampled at 22.05 kHz in experiments for consistency.", "The proposed CU-embedding in our system learns the cross-utterance representation from surrounding utterances.", "However, unlike LJ-Speech, transcripts of LibriTTS utterances are not arranged as continuous chunks of text in their corresponding book.", "Therefore, transcripts of the LibriTTS dataset were pre-processed to find the location of each utterance in the book, so that the first L and last L utterances of the current one can be efficiently obtained during training and inference.", "The pre-processed scripts and our code are available 1 .", "The proposed CUC-VAE TTS system was based on the framework of FastSpeech 2. The CU-embedding utilised a Transformer to learn the current utterance representation, where the dimension of phoneme embeddings and the size of the self-attention were set to 256.", "To explicitly extract speaker information, 256-dim speaker embeddings were also added to the Transformer output.", "Meanwhile, the pre-trained BERT model to extract cross-utterance information had 12 Transformer blocks and 12-head attention layers with 110 million parameters.", "The size of the derived embeddings of each cross-utterance pair was 768-dim.", "Note that the BERT model and corresponding embeddings were fixed when training the TTS system.", "Network in CU-enhanced CVAE consisted of four 1D-convolutional (1D-Conv) layers with kernel sizes of 1 to predict the mean and variance of 2-dim latent features.", "Then a linear layer was added to transform the sampled latent feature to a 256-dim vector.", "The duration predictor which consisted of two convolutional blocks and an extra linear layer 1 https://github.com/NeuroWave-ai/CUCV AE-TTS 395 to predict the duration of each phoneme for the length regulator in FastSpeech 2 was adapted to take in CU-embedding outputs.", "Each convolutional block was comprised of a 1D-Conv network with ReLU activation followed by a layer normalization and dropout layer.", "The Decoder adopted four feed-forward Transformer blocks to convert hidden sequences into 80-dim mel-spectrogram sequence, similar to FastSpeech 2. Finally, HifiGAN (Kong et al., 2020) was used to synthesize waveform from the predicted mel-spectrogram.", "In order to evaluate the performance of our proposed component, both subjective and objective tests were performed.", "First of all, a subjective listening test was performed over 11 synthesized audios with 23 volunteers asked to rate the naturalness of speech samples on a 5-scale mean opinion score (MOS) evaluation.", "The MOS results were reported with 95% confidence intervals.", "In addition, an AB test was conducted to compare the CU-enhanced CVAE with utterance-specific prior and normal CVAE with standard Gaussian prior.", "23 volunteers were asked to choose the preference audio generated by different models in the AB test.", "For the objective evaluation, F 0 frame error (FFE) (Chu and Alwan, 2009) and mel-cepstral distortion (MCD) (Kubichek, 1993) were used to measure the reconstruction performance of different VAEs.", "FFE combined the Gross Pitch Error (GPE) and the Voicing Decision Error (VDE) and was used to evaluate the reconstruction of the F 0 track.", "MCD evaluated the timbral distortion, which was computed from the first 13 MFCCs in our experiments.", "Moreover, word error rates (WER) from an ASR model trained on the real speech from the LibriTTS training set were reported.", "Complementary to naturalness, the WER metric showed both the intelligibility and the degree of inconsistency between synthetic speech and real speech.", "The ASR system used in this paper was an attention-based encoder-decoder model trained on Librispeech 960-hour data, with a WER of 4.4% on the test-clean set.", "Finally, the diversity of samples was evaluated by measuring the standard deviation of two prosody attributes of each phoneme: relative energy ( E ) and fundamental frequency ( F 0 ), similar to Sun et al. (2020b).", "Relative energy was calculated as the ratio of the average signal amplitude within a phoneme to the average amplitude of the entire sentence, and fundamental frequency was measured using a pitch tracker.", "In this paper, the average standard deviation of E and F 0 of three phonemes in randomly selected 11 utterances was reported to evaluate the diversity of generated speech.", "This section presents the series of experiments for the proposed CUC-VAE TTS system.", "First, ablation studies were performed to progressively show the influence of different parts in the CUC-VAE TTS system based on MOS and WER.", "Next, the reconstruction performance of CUC-VAE was evaluated by FFE and MCD.", "Then, the naturalness and prosody diversity using CUC-VAE were compared to FastSpeech 2 and other VAE techniques.", "At last, a case study illustrated the prosody variations with different cross-utterance information as an example.", "The audio examples are available on the demo page 2 .", "Ablation studies in this section were conducted on the LJ-Speech data based on the subjective test and WER.", "First, to investigate the effect of the different number of neighbouring utterances, CUC-VAE TTS systems built with L = 1 , 3 , 5 were evaluated using MOS scores, as shown in Table 1. Table 1: The MOS results of CUC-VAE TTS systems on LJ-Speech dataset.", "The effect of the different number of neighbouring utterances on the naturalness of the synthesized speech can be observed by comparing MOS scores which is the higher the better.", "The CUC-VAE with L = 5 achieved highest score 3.95 compared to system with L = 1 and L = 3 .", "Since only marginal MOS improvements were obtained using more than 5 neighbouring utterances, the rest of experiments were performed using L = 5 .", "our implementation of Fastspeech 2. For the system denoted as Baseline + fine-grained VAE which served as a stronger baseline, the pitch predictor and energy predictor of FastSpeech 2 were replaced with a fine-grained VAE with 2-dim latent space.", "Based on the fine-grained VAE baseline, the CVAE was added without the CU-embedding to the system, referred to as Baseline+CVAE to verify the function of CVAE on the system, which conditions on the current utterance.", "Again, MOS was compared among these systems as shown in Table 2. Table 2: The MOS results of TTS systems with different modules on LJ-Speech dataset.", "As shown in Table 2, MOS progressively increased when fine-grained VAE, CVAE, and CU-embedding were added in consecutively.", "The proposed CUC-VAE TTS system achieved the highest MOS 3.95 compared to baselines.", "The results indicated that CUC-VAE module played a crucial role in generating more natural audio.", "To verify the importance of the utterance-specific prior to the synthesized audio, the same CUC-VAE system was used, and the only difference is whether to sample latent prosody features from the utterance-specific prior or from a standard Gaussian distribution.", "A subjective AB test was performed which required 23 volunteers to provide their preference between audios synthesized from the 2 approaches.", "Moreover, WER was also compared here to show the intelligibility of the synthesized audio.", "As shown in Table 3, the preference rate of using the utterance-specific prior is 0.52 higher than its counterpart, and a 4.9% abso-lute WER reduction was found, which confirmed the importance of the utterance-specific prior in our CUC-VAE TTS system.", "FFE and MCD were used to measure the reconstruction performance of VAE systems.", "An utterance-level prosody modelling baseline which Table 3: The subjective listening preference rate between CUC-VAE with or without utterance-specific prior from the AB test.", "extract one latent prosody feature vector for an utterance was added for more comprehensive comparison, and is referred to as the Global VAE.", "Table.", "4 shows the reconstruction performance on the LJ-Speech dataset and LibriTTS dataset, respectively.", "Baseline had the highest value of FFE and MCD on the LJ-Speech dataset and LibriTTS dataset.", "The value of FFE and MCD decreased when the global VAE was added and was further reduced when the fine-grained VAE was added to the baseline.", "Our proposed CUC-VAE TTS system achieved the lowest FFE and MCD across the table on both the LJ-Speech and LibriTTS datasets.", "This indicated that richer prosody-related information entailed in both cross-utterance and conditional inputs was captured by CUC-VAE.", "Next, sample naturalness and intelligibility were measured using MOS and WER respectively on both LJ-Speech and LibriTTS datasets.", "Complementary to the naturalness, the diversity of generated speech from the conditional prior was evaluated by comparing the standard deviation of E and F 0 similar to (Sun et al., 2020b).", "LJ-Speech experiments were shown in left part of Table.", "5.", "Compared to the global VAE and fine-grained VAE, the proposed CUC-VAE received the highest MOS and achieved the lowest WER.", "Although both F 0 and E of the CUC-VAE TTS system were lower than the baseline + fine-grained VAE, the proposed system achieved a clearly higher prosody diversity than the baseline and baseline + global VAE systems.", "The fine-grained VAE achieved the highest prosody variation as its latent prosody features were sampled from a standard Gaussian distribution, which lacks the constraint of language information from both the current and the neighbouring utterances.", "This caused extreme prosody variations to occur which impaired both the naturalness and the intelligibility of synthesized audios.", "As a result, the CUC-VAE TTS system was able to achieve high prosody diversity without hurting the naturalness of the generated speech.", "In fact, the adequate increase in prosody diversity improved the expressiveness of the synthesized audio, and hence increased the naturalness.", "The right part of Table.", "5 showed the results on LibriTTS dataset.", "Similar to the LJ-Speech experiments, the CUC-VAE TTS system achieved the best naturalness measured by MOS, the best intelligibility measured by WER, and the second-highest prosody diversity across the table.", "Overall, consistent improvements in both naturalness and prosody diversity were observed on both single-speaker and multi-speaker datasets.", "To better illustrate how the utterance-specific prior influenced the naturalness of the synthesized speech under a given context, a case study was performed by synthesizing an example utterance, Mary asked the time, with two different neighbouring utterances: Who asked the time? Mary asked the time. and Mary asked the time, and was told it was only five.", "Based on the linguistic knowledge, to answer the question in the first setting, an emphasis should be put on the word Mary, while in the second setting, the focus of the sentence is", "The model trained on LJ-Speech dataset was used to synthesize the utterance and the results were shown in Fig. 2. Fig. 2 showed the energy and pitch of the two utterance.", "Energy of the first word Mary in Fig.", "2(a) changed significantly (energy of Mawas much higher than -ry), which reflected an emphasis on the word Mary, whereas in Fig.", "2(b), energy of Mary had no obvious change, i.e., the word was not emphasized.", "On the other hand, the fundamental frequency of words asked and time stayed at a high level for a longer time in the second audio than the first one, reflecting another type of emphasis on those words which was also coherent with the given context.", "Therefore, the difference of energy and pitch between the two utterances demonstrated that the speech synthesized by our model is sufficiently contextualized.", "In this paper, a non-autoregressive CUC-VAE TTS system was proposed to synthesize speech with better naturalness and more prosody diversity.", "CUC-VAE TTS system estimated the posterior distribution of latent prosody features for each phone based on cross-utterance information in addition to the acoustic features and speaker information.", "The generated audio was sampled from an utterance-specific prior distribution, approximated based on cross-utterance information.", "Experiments were conducted to evaluate the proposed CUC-VAE TTS system with metrics including MOS, preference rate, WER, and the standard deviation of prosody attributes.", "Experiment results showed that the proposed CUC-VAE TTS system improved both the naturalness and prosody diversity in the generated audio samples, which outperformed the baseline in all metrics with clear margins." ]
[ "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain" ]
[ "Large pre-trained language models (LMs) are known to encode substantial amounts of linguistic information.", "However, high-level reasoning skills, such as numerical reasoning, are difficult to learn from a language-modeling objective only.", "Consequently, existing models for numerical reasoning have used specialized architectures with limited flexibility.", "In this work, we show that numerical reasoning is amenable to automatic data generation, and thus one can inject this skill into pre-trained LMs, by generating large amounts of data, and training in a multi-task setup.", "We show that pre-training our model, GENBERT, on this data, dramatically improves performance on DROP ( 49 . 3 72 . 3 F 1 ), reaching performance that matches state-of-the-art models of comparable size, while using a simple and general-purpose encoder-decoder architecture.", "Moreover, GENBERT generalizes well to math word problem datasets, while maintaining high performance on standard RC tasks.", "Our approach provides a general recipe for injecting skills into large pre-trained LMs, whenever the skill is amenable to automatic data augmentation.", "Recently, models trained on large amounts of data with a language modeling (LM) objective, have shown great promise in natural language processing, exhibiting surprising amounts of knowledge and information (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019; Lan et al., 2019; Petroni et al., 2019; Hewitt and Manning, 2019).", "However, high-level skills, such as the ability to perform numerical reasoning over text, can be challenging to capture with a LM objective only.", "Consider the example in Table 1. To solve the first question (Q1), a model must capture the value of numbers in the These authors contributed equally.", "Figure 1 : An overview of our approach for injecting numerical skills into a pre-trained LM.", "(a) We add two pre-training steps over large amounts of synthetic numerical data (ND) and textual data (TD);", "(b) we further fine-tune the model over either numerical reasoning datasets (DROP, MAWPS) or reading comprehension datasets (SQUAD).", "text, compute their difference, and generate the tokens corresponding to the result, which generally do not appear in the input passage.", "To make the task more manageable, state-of-the-art models have employed specialized architectures, restricting the space of possible numerical computations to a limited set.", "Modules were designed for counting (but only until 9') and for addition and subtraction (but of 2-3 numbers only).", "Such models perform well on existing datasets, such as DROP (Dua et al., 2019), but do not generalize to unsupported computations, which will require modifying the model architecture.", "Moreover, current models marginalize at training time over all numerical expressions that evaluate to the correct answer.", "Since the number of such expressions grows exponentially, scaling these approaches to arbitrary computations entails using non-differentiable operations (sampling or computing topK numerical expressions), which can lead to training difficulties.", "with different answer types.", "In this work, we propose that reasoning skills, such as numerical reasoning, are amenable to automatic data generation .", "Hence, one can inject that skill directly into the model by adding additional pre-training steps, allowing the model to learn the skill in an end-to-end fashion.", "This results in a fully-differentiable training procedure over a standard and general-purpose architecture, where the output space can be easily controlled through the data generation procedure.", "Specifically (Figure 1), we add to a large pre-trained LM two pre-training steps over automatically-generated synthetic data.", "First, we generate numerical data of the form 3 + 4 + 11 = 18 .", "Training on this data teaches the model to compute the value of numbers from their tokens and to perform numerical operations.", "Second, we automatically generate question-passage pairs that require numerical reasoning using a compact grammar ( textual data ).", "Training on this data endows the model with the ability to understand computations expressed in pseudo-natural language.", "In both pre-training steps, the model, GENBERT, generates output numbers token-by-token.", "Thus, the model has a standard architecture, where an answer can either be extracted from the input question and passage or generated from a decoder.", "Pre-training is done in a multi-task setup with a standard LM objective, in order to avoid catas-trophic forgetting (Kirkpatrick et al., 2017) of the linguistic information in the original LM.", "After pre-training, the model has sufficient language and numerical skills to be directly fine-tuned on a target numerical reasoning dataset, without resorting to specialized architectures.", "Augmenting more numerical skills does not require changing the model, only generating additional data.", "(b) Pre-training on these tasks provides GENBERT with 1) skills to reach performance that matches state-of-the-art models of comparable size on DROP (Dua et al., 2019), a standard numerical reasoning dataset, as well as 2) the ability to generalize to math word problem (MWP) datasets (Koncel-Kedziorski et al., 2016).", "(c) GENBERT learns these numerical skills while maintaining high performance on SQuAD (Ra-jpurkar et al., 2016), a standard reading comprehension dataset.", "(d) Initializing models for numerical reasoning with GEN BERT's weights improves their original performance.", "To conclude, in this work we address the problem of injecting LMs with numerical reasoning skills.", "Our contributions are: A method for injecting skills into pre-trained LMs, given that automatic data generation is possible.", "GENBERT, an architecture for pre-trained LM with generative and extractive abilities.", "A framework for generating numerical and textual synthetic data for numerical reasoning.", "Our code and data can be downloaded from https://github.com/ag1988/ injecting_numeracy .", "Numerical reasoning over text (NRoT) is commonly set up as a reading comprehension (RC) task.", "Given a training set of question-context-answer triples { ( q i , c i , a i ) } Ni =1 , the goal is to learn a function that returns the answer a to a question q given a context c .", "However, in NRoT the answer generally requires to internally perform some numerical computation using the entities and numbers in the context.", "Specifically, the answer is either:", "(a) a span (or list of spans) from the context c or question q , or", "(b) a number that is the result of some computation (see examples in Table 1).", "Two natural, yet opposing, approaches lend themselves to tackling NRoT:", "(a) A symbolic approach : a model can read the question and context, output a numerical expression and evaluate the answer with an external symbolic calculator.", "This approach is a particular case of semantic parsing (Kamath and Das, 2019), and was common in early NRoT datasets (Koncel-Kedziorski et al., 2015; Roy and Roth, 2015; Hosseini et al., 2014).", "However, it suffers from several drawbacks.", "First, because numerical expressions are discrete and their space grows combinatorially, the model must learn to search in this space using non-differentiable operations, which are usually difficult to optimize.", "Second, numerical expressions are limited to numerical answers, while in DROP often a numerical computation is required but the final answer is a text span.", "(b) A distributed approach : have a model directly generate the answer given ( q , c ) .", "When the answer is a text span, the model can extract it from the input, and when the answer is a number that is not in q or c , the model must generate it.", "While this makes training straightforward, the model must learn to perform numerical computations from the relatively small target dataset.", "We empirically show in 3 that this leads to low performance in general.", "As a compromise, most NRoT models (Dua et al., 2019; Kinley and Lin, 2019; Hu et al., 2019; Efrat et al., 2019) have taken a hybrid approach: they augment standard extractive QA models with specialized modules for handling a limited set of numerical computations.", "We briefly describe this architecture, as it is the basis for our model in 3.", "Given a question with n 1 tokens q = ( q 1 , . . . , q n 1 ) and a context with n 2 tokens c = ( c 1 , . . . , c n 2 ) , the hybrid model first computes contextualized representations for the n 1 + n 2 + 3 tokens (cid:104) [CLS] q [SEP] c [SEP] (cid:105) using a pre-trained LM, such as BERT (Devlin et al., 2019): L = LM ( q , c ) .", "The representations L are then passed to multiple heads , which are small neural networks that estimate p ( a | q , c , h ) , that is, the probability of the answer given the input and conditioned on a head h , corresponding to a particular answer type: Context span head : computes a distribution over all spans in the context using a feed-forward network (FFN) FF c ( L ) .", "Question span head : computes a distribution over spans in the question using a FFN FF q ( L ) .", "Count head : computes a distribution over the numbers { 0 , . . . , 9 } using a FFN FF cnt ( L ) .", "Arithmetic head : computes a distribution over all signed combinations of numbers in the context using a FFN FF cmb ( L ) (the numbers in the context are identified in a pre-processing step).", "While the first two heads are standard in extractive QA, the latter two heads are specialized and meant to handle answers that do not appear in the input.", "Finally, for deciding which answer head to use for a given input, a type head FF typ ( L ) outputs a probability distribution p head ( h | q , c ) (using a FFN).", "Thus the model probability for an answer is p ( a | q , c ) = (cid:88) h heads p head ( h | c , q ) p ( a | c , q , h ) .", "Training is done by enumerating all of the ways in which the answer can be obtained using all of the heads, and maximizing this marginal probability.", "While existing models perform well on DROP, the aforementioned architecture is not flexible.", "First, the output space is severely constrained the model can only count up to 9', and numerical computations are restricted to signed combinations of a few numbers.", "Second, expanding the space of supported numerical computations is non-trivial, because training involves marginalizing over all expressions that lead to the correct answer.", "Since the space of numerical expressions grows exponentially, expanding this space quickly leads to a difficult search problem.", "Third, delegating numerical computations to an external symbolic calculator leads to modeling challenges, since there could be interactions between text and numerical computation: Consider the DROP question How many total yards did Phil Dawson throw for touch-downs? .", "Current models handle such questions by computing a sum from numbers in the text and returning the result.", "However, if the question was Who threw 45 total yards for touchdowns? , the model would have to compute the sum internally , and then find the relevant span in the text.", "This is impossible when the computation itself is delegated to an external calculator.", "Thus, training models to handle such numerical questions is desirable.", "Motivated by the above arguments, we wish to push the frontier of end-to-end differentiable models for numerical reasoning.", "Thus, we will automatically generate large amounts of data that endow a pre-trained LM with numerical skills.", "We now describe a simple BERT-based generative model that performs numerical computations internally, termed GENBERT.", "The model combines the Transformer encoder-decoder architecture (Vaswani et al., 2017) with a pre-trained LM, specifically, BERT.", "Our architecture is illustrated in Figure 2. Our encoder is a standard Transformer, initialized with BERT weights.", "To also enjoy BERT's representations at decoding time, we tie the weights of the decoder and the encoder.", "Because the Transformer decoder has source attention weights (weights for attending to the encoder representations at decoding time) that are not present in BERT, we tie these source-attention weights to the self-attention weights of the encoder (which are tied to the self-attention weights of the decoder).", "This fully initializes the Transformer model with BERT weights.", "Since the encoder and decoder weights are tied, we make them learn distinct representations by adding a FFN FF enc that transforms the encoder contextualized representations L enc as H enc = layer-norm ( gelu ( W L enc )) , where W is a parameter matrix (Hendrycks and Gimpel, 2016; Ba et al., 2016).", "Analogously, we add FF dec to the decoder.", "To further distinguish the encoder and decoder, we use distinct start and end tokens for input and output sequences.", "Given m answer tokens a = ( a 1 , . . . , a m ) , we form an output sequence with m + 2 tokens: (cid:104) [SOS] a [EOS] (cid:105) .", "The output tokens are passed through the decoder and FF dec to obtain H dec .", "Finally, the probability of an answer is defined in the usual manner: Let (cid:104) a (cid:105) = ( a 0 a m +1 ) be the output sequence.", "The decoder outputs the probability p dec ( a i +1 | a 0 , ..a i , c , q ) , and the probability of an answer is: p dec ( (cid:104) a (cid:105) | c , q ) = m (cid:89) i =0 p dec ( a i +1 | a 0 , ..a i , c , q ) .", "As we have a generative model, we can remove the specialized count and arithmetic heads from 2.", "Thus, the type head FF typ ( H enc ) outputs a distribution ( p q , p c , p dec ) over the context span, question span, and decoder heads.", "To improve pre-training on the numeric data (4), we make two additional modifications.", "Digit Tokenization (DT) Conventional wordpiece tokenization treats numbers no differently than any other token.", "However, computing the value of numbers should be simpler when using digits directly (Wallace et al., 2019).", "Hence, we tokenize numbers digit-by-digit.", "For example, a wordpiece ## d 1 d k where d i {0,...,9} is further split into ## d 1 , ..., ## d k .", "We show in 5.1 that this substantially improves sample complexity when training to perform numerical operations.", "Figure 2 : GEN BERT's network architecture:", "(a) a high-level overview of the network, including a generative head (red), two span-extraction heads (yellow), and an answer type head.", "(b) a closer overview of GEN BERT's generative head.", "Random Shift (RS) The original Transformer uses absolute positional embeddings for each token.", "However, in 4, we train on short inputs such as 1086.1 2.54 + 343.8 .", "Thus, the model can potentially over-fit and learn to perform numerical reasoning only when numbers are at the beginning of an input.", "To prevent this, when the input length n 1 + n 2 + 3 < 512 , we shift all position IDs by a random integer in (0 , 1 , . . . , 512 ( n 1 + n 2 + 3)) .", "Training For each span ( i, j ) , a span extraction head h outputs its probability p h (( i, j ) | c , q , h ) of being the answer.", "Let S be the set of spans in the input corresponding to the gold answer.", "The model loss L model marginalizes over all ways in which the answer can be predicted: log (cid:18) p dec p dec ( (cid:104) a (cid:105) ) + (cid:88) h q , c p h (cid:88) ( i,j ) S p h ( i, j ) (cid:19) , where conditionals have been dropped for brevity.", "To evaluate the ability of GENBERT to perform numerical reasoning, we initialize it with BERT and fine-tune it on DROP.", "GENBERT obtains 46.1 EM and 49.3 F 1 , roughly 20 points lower than prior models.", "Thus, we conclude that acquiring numerical reasoning skills from DROP data only is difficult.", "To remedy this, we will automatically generate training data that will endow GENBERT with numerical skills before training it on DROP.", "Jackson scored 3 running", "4 Pre-training Tasks for Numerical Skills We now describe two automatically-generated datasets and the multi-task training procedure.", "touchdowns .", "Template extraction To extract templates, we go over sentences from the corpus provided by Hosseini et al. (2014).", "For each sentence, we use a procedure described by Hosseini et al. (2014) to abstract its tokens to the following categories: numbers ( NUM ), entities ( ENT ), containers ( CONT ) and attributes ( ATTR ).", "In addition, verbs are abstracted to six categories, each corresponding to a different change in the number of entities owned by containers.", "Thus, each template fully specifies how to update a world state , i.e., the number of entities each container owns.", "The top part of Figure 3 illustrates the abstraction process.", "Finally, we count for each extracted template its frequency in the data, and use the top12 templates for passage generation.", "Details on the abstraction process, categories used, and extracted templates are in A.2.", "Figure 3 : Template extraction and instantiation.", "A template (in red) is extracted from a MWP sentence, using categories for containers, entities, verbs, attributes and numbers, according to Hosseini et al. (2014).", "For generation, the categories are instantiated with a domain-specific vocabulary.", "Our first dataset focuses on learning numerical values expressed by tokens and computing numerical operations, i.e., it does not involve textual content.", "As such, it is easy to craft templates that correspond to various numeric operations.", "We designed six such templates, described in Table 2. Each template consists of an expression to evaluate and its solution.", "Further details on their instantiation are provided in A.1.", "While the numerical operations were chosen based on DROP, it is trivial to extend them to other domains (Saxton et al., 2019) with different numerical operations.", "Numeric data is easy to generate, since it does not contain any textual context.", "However, to tackle NRoT, a model needs to comprehend how numerical operations are expressed in text that refers to events, entities and quantities.", "This primes us to generate textual data from a simple grammar.", "While text generation is hard in the general case, we are specifically interested in text that focuses on number manipulations.", "Therefore, we use the framework of Hosseini et al. (2014), who proposed to model math word problems with a simple structure.", "In their framework a world state consists of entities , which are objects that are being counted, and containers , which are objects that own entities.", "Sentences use verb categories to describe how the number of entities in a container changes, and thus a world state can be updated given a sentence.", "Consider the textual example in Figure 1. the entities are soldiers and citizens, and the containers are the king and the commander.", "The verbs ( had and received ) describe the entities the king holds, and how many were passed to the commander.", "In this work, we use this framework to automatically generate examples.", "We extract templates that describe changes in the number of entities owned by containers, and automatically generate question-context pairs from these templates.", "Passage generation Using the extracted templates, we can generate sentences and maintain a world state of all containers and the number of entities they own.", "We construct a small vocabulary (<100 words) that maps categories to domain-specific words, and use the following procedure to generate passages.", "We sample 3-6 templates with replacement, and instantiate them one-by-one (the bottom part of Figure 3 illustrates instantiation).", "Each template is instantiated by uniformly sampling values from the vocabulary with probability 1 p and from previously generated sentences with probability p .", "To avoid a collection of unrelated sentences, we set the probability of using previously used values to p = 0 .", "7 .", "An example passage is shown in Table 3.", "Question generation After generating a passage, the world state holds information about all containers in the passage and the number of entities they hold.", "In Table 3, the state will include the number of families and rebels of different nationalities in each container (the commander, the householder, and the countries).", "Based on this world state, numerical reasoning questions can be asked.", "To create questions, we craft 13 question templates that are instantiated with objects from the world state.", "The questions teach the model to track events and perform numeric and discrete operations.", "Table 2 : Templates for generating synthetic numerical examples and the numerical operations required to answer them.", "Domains (defined in App. A.1): s i { , + } , f i R + , o O : superlative words like longest , arg { arg min , arg max }, w i W : words from NTLK Words Corpus, d i D : dates until Sep 2019, dsup DSUP : superlative words like latest , prd { days , months , years }, p i (0 , 100) , pcent { percent , percent not }.", "P : The commander recruited 1949 Polish families in Spain.", "The householder recruited 1996 Japanese families in Spain.", "There were 10913 white rebels and 77 Chinese families in Spain.", "6641 British soldiers, 476 asian rebels, and 338 Germans families were recruited in Russia.", "Q : How many Japanese families were in Spain?", "A : 1996 Q : How many more Japanese families were in Spain than Polish families?", "A : 47 (1996-1949) Q : How many families of Spain were not Polish families?", "A : 2073 (4022-1949) Table 3 : An example synthetic passage (P) and questions.", "Questions (Q) were generated from templates and answers (A) were calculated based on the world state.", "Examples for generated questions are shown in Table 3, where answers are computed from the world state.", "Overall, we create 13 question templates for 7 different skills\", provided in A.2.", "For pre-training on ND, we generated 1M examples for training and 10K for validation.", "For TD, we generated 2.5M examples for training and 10K for validation.", "For both synthetic datasets, we used the GENBERT model loss, L model , from 3.", "To ensure that the model does not lose its language understanding abilities, we employ a multi-task setup, and include a standard masked LM objective from BERT.", "Specifically, given a masked token sequence (cid:104) m (cid:105) , we compute the contextualized representations, L enc and pass them through a feed-forward network FF mlm .", "For each masked index i , it outputs the probability p ( a i | i, (cid:104) m (cid:105) ) of the original token a i .", "The MLM loss is computed as L mlm ( (cid:104) m (cid:105) ) = mean i masked log( p ( a i | i, (cid:104) m (cid:105) )) .", "During training, we sample mini-batches from the respective datasets, and minimize the weighted sum of the losses.", "Concretely, while pre-training on ND and TD, we sample mini-batches XND , XTD and XMLM and optimize the objective L model ( XND ) + L model ( XTD ) + L mlm ( XMLM ) .", "Figure 4 : Progression of eval accuracy (EM) of GENBERT, for different pre-training settings listed in 5.1.", "We now evaluate our two pre-training steps and their applicability for numerical reasoning tasks.", "We consider the following variants, aiming to investigate the contributions of ND and TD, the importance of MLM loss, and techniques like DT and RS.", "In all cases, we initialize GENBERT with BERT-base-uncased, use DT and RS, and include the MLM loss, except where noted : GENBERT +ND : trained on numerical data.", "GENBERT +ND-LM : trained on ND without the additional MLM loss.", "GENBERT +ND-LM-DT : trained on ND using wordpiece tokenization, without the MLM loss.", "GENBERT +ND-LM-RS : trained on ND without MLM loss and random shift (RS).", "GENBERT +TD : trained on textual data (TD).", "GENBERT +ND+TD : GENBERT +ND trained on both ND and TD.", "We first ask whether the pre-training procedure allows GENBERT to absorb the intended numerical skills.", "We observe that across various settings (ND, TD, ND+TD), GENBERT consistently achieves more than 96% accuracy in predicting the correct solution for both ND and TD.", "Thus, we conclude that indeed a pre-trained LM can learn the designed skills from generated data.", "Figure 4 shows the learning curves of GENBERT for the different variants.", "Note that in ND-LM-DT the model does not learn to solve the numerical data task.", "This demonstrates the utility of using DT over conventional wordpieces.", "The lower sample complexity in the case of ND+TD compared to the only-TD can be attributed to the fact that ND and TD share some numeric skills and hence a model already trained on ND converges faster on TD compared to GENBERT.", "After successfully injecting GENBERT with numeric skills, we test GENBERT guided by the following questions:", "(a) Are the injected skills robust and generalize to NRoT datasets like DROP?", "(b) Are the new skills learned at the expense of the model's ability to understand language?", "(c) Can the pre-trained weights be used with architectures other than GENBERT?", "For", "(a), we fine-tune GENBERT on DROP and further evaluate on MWP in a zero-shot setup .", "For", "(b), we evaluate GENBERT on a RC task that does not involve numerical reasoning, namely, SQUAD (Rajpurkar et al., 2016).", "For", "(c), we use GENBERT encoder as a drop-in replacement for BERT on two other architectures.", "Results on DROP We report results of GENBERT initialized by BERT-base and leave pretraining a larger model for future work.", "We compare GENBERT to MTMSN (Hu et al., 2019) initialized with BERT-base, as MTMSN initialized with BERT-large is a state-of-the-art model on DROP.", "1 Table 4 presents fine-tuning results on DROP.", "Without pre-training, GENBERT performs poorly compared to current state of the art models like MTMSN, reporting an EM of only 46.1.", "Pretraining on each of the numerical data (ND) and textual data (TD) improves performance dramatically to 64.7 EM and 64.4 EM, respectively.", "Moreover, pre-training on both ND and TD leads to a performance of 68.8 EM, on par with MTMSN's 68.2 EM.", "This demonstrates that the skills that GENBERT learns from ND and TD are complementary.", "In addition, the lower performance of GENBERT +ND-LM and GENBERT +ND-LM-RS 1 Per ACL policy, we compare to models that were made public 3 months prior to submission.", "Table 4 : Performance of GENBERT and comparable models on the development and test sets of DROP.", "Table 5 : F 1 scores on DROP development per answer type.", "Breaking down performance by answer type (Ta-ble 5) highlights several points.", "First, pre-training on ND and TD improves performance mostly due to number answer types, as expected.", "Second, GENBERT +ND+TD outperforms MTMSNBASE on questions whose answer is a span .", "We argue a probable cause for this are span questions that require performing a numerical computation internally, as explained in 2.", "Third, MTMSNBASE substantially outperforms GENBERT on questions whose answer is a list of non-contiguous spans.", "This is expected, as MTMSN has a specialized head and procedure for handling such questions, while build on a simpler and more standard RC architecture.", "Generalization to MWP (zero-shot) The MAWPS repository is a collection of math word problem (MWP) datasets (Koncel-Kedziorski et al., 2016).", "To test the models on skills they were trained on, we picked datasets with addition and subtraction problems, and filtered out examples with other operations (e.g., multiplication and division).", "All models that were fine-tuned on DROP were evaluated in a zero-shot setup on 395 examples from ADDSUB (Hosseini et al., 2014), 321 from SOP (Roy et al., 2015), and 305 from SEQ (Koncel-Kedziorski et al., 2015).", "Results are shown in Table 6.", "Overall, GENBERT +ND+TD dramatically improves performance compared to GENBERT.", "GENBERT +ND performs much better than GENBERT +TD , demonstrating the utility of ND when the context is short.", "# terms Figure 5 : Breakdown of model accuracy (EM) by the number of terms in the arithmetic expression, for the MWP datasets ADDSUB , SOP and SEQ .", "Last, MTMSN outperforms GENBERT +ND+TD .", "However, MTMSN uses a specialized architecture for addition and subtraction, suitable when calculations are done outside of the model.", "GENBERT, on the other hand, is a general-purpose generative model, that can also return span answers when the computation is done internally.", "Next, we break down performance by the number of terms in the arithmetic expression (Figure 5).", "The plot shows that all models struggle to generalize to more complex problems, and completely fail when the calculation involves more than 3 terms.", "Interestingly, the drop in performance of GENBERT +ND+TD between 2 and 3 terms is sig-nificantly smaller than that of GENBERT +ND and GENBERT +TD .", "This suggests that both ND and TD are useful for improving robustness.", "Error analysis To understand the limitations of our method, we analyze the errors of GENBERT +ND+TD on the development set of DROP, excluding questions with a multi-span answer which are not supported by the model.", "We sample 100 random examples for which GENBERT +ND+TD fails to predict the correct answer and manually analyze the types of questions and mistakes done by the model.", "We find that in almost half of the cases (43%), the example requires reasoning skills that are either not covered by the pre-training tasks (e.g. sorting), or not numerical.", "Another common case (23%) is inaccurate predictions, such as spans that are too EM F 1 BERT 81.1 88.6 GENBERT +ND-LM 78.1 85.8 GENBERT +ND 80.7 88.1 GENBERT +TD 80.7 88.2 GENBERT +ND+TD 81.3 88.6 Table 7 : Performance on SQuAD v1 development set.", "long and numbers with partial digit match to the gold answer.", "We note that many of these errors can be addressed by extending the pre-training tasks to cover additional numerical skills and a larger number range.", "We leave such extensions for future work.", "Further details and example failure cases are provided in A.5.", "Having shown that our models successfully learned to perform NRoT, we investigate if this improvement comes at the expense of performance on RC datasets.", "We initialize the RC model from Devlin et al. (2019) with GENBERT weights (encoder only) and fine-tune it on SQUAD v1.", "As shown in Table 7, the performance of GENBERT +ND+TD is almost identical to the original BERT.", "Moreover, GENBERT +ND-LM reported a loss of 3 EM points highlighting the importance of using the MLM loss.", "To further establish the utility of GENBERT, we used the weights of GENBERT +ND+TD to initialize the encoder of NABERT+ and MS-TAG, a recent multi-span tagging model of Efrat et al. (2019).", "Fine-tuning on DROP shows an improvement of 2 EM points compared to the originally reported performance: 63 .", "0 65 .", "1 EM for NABERT+, and 67 .", "3 69 .", "3 EM for MS-TAG.", "This shows that GENBERT can be used as a drop-in replacement for BERT, when numerical reasoning is needed.", "To summarize, we have empirically shown that one can inject numerical reasoning skills into a pre-trained LM, resulting in good performance on DROP, generalization to MWP, while maintaining high performance on standard RC datasets.", "Moreover, the resulting weights can be used for initializing numerical reasoning models.", "Most NRoT models designed for DROP are extractive QA models augmented with specialized modules (2).", "Two recent work (Andor et al., 2019; Chen et al., 2020) take a more symbolic approach and output a symbolic program augmented with operations over text.", "In our work, numerical computations are latent and performed internally by the model.", "A related line of work has been analyzing the mathematical reasoning abilities of neural models over text (Wallace et al., 2019; Rozen et al., 2019; Ravichander et al., 2019), and on arithmetic problems (Saxton et al., 2019; Amini et al., 2019; Lample and Charton, 2020).", "Designing pre-training tasks to teach LMs additional skills has been applied by Huang et al. (2019), who designed cross-lingual pre-training tasks to teach better mappings between languages, and Lee et al. (2019), who introduced the Inverse Cloze Task to pre-train an information retriever.", "Large pre-trained LMs lack high-level skills such as numerical reasoning.", "Consequently, current models that perform numerical reasoning over a pre-trained LM resorted to customized modules with limited flexibility.", "In this work, we propose a general method for injecting additional skills into LMs, assuming automatic data generation is possible.", "We apply our approach to the task of numerical reasoning over text, using a general-purpose model called GENBERT, and a simple framework for generating large amounts of synthetic examples.", "Our experiments demonstrate the effectiveness of our method, showing that GENBERT successfully learns the numerical skills, and performs on par with state-of-the-art NRoT models of the same size.", "We thank Daniel Andor and Thang Luong for helpful discussions, and Shimi Salant for constructive suggestions.", "This research was partially supported by The Israel Science Foundation grant 942/16, The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800)." ]
[ "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "abstain", "abstain", "objective", "objective", "objective", "other", "other" ]
[ "Abstract", "Due to the scarcity of annotated data, Abstract Meaning Representation (AMR) research is relatively limited and challenging for languages other than English.", "Upon the availability of English AMR dataset and English-to-X parallel datasets, in this paper we propose a novel cross-lingual pre-training approach via multi-task learning (MTL) for both zero-shot AMR parsing and AMR-to-text generation.", "Specifically, we consider three types of relevant tasks, including AMR parsing, AMR-to-text generation, and machine translation.", "We hope that knowledge gained while learning for English AMR parsing and text generation can be transferred to the counterparts of other languages.", "With properly pretrained models, we explore four different fine-tuning methods, i.e., vanilla fine-tuning with a single task, one-for-all MTL fine-tuning, targeted MTL fine-tuning, and teacher-student-based MTL fine-tuning.", "Experimental results on AMR parsing and text generation of multiple non-English languages demonstrate that our approach significantly outperforms a strong baseline of pre-training approach, and greatly advances the state of the art.", "In detail, on LDC2020T07 we have achieved 70.45%, 71.76%, and 70.80% in Smatch F1 for AMR parsing of German, Spanish, and Italian, respectively, while for AMR-to-text generation of the languages, we have obtained 25.69, 31.36, and 28.42 in BLEU respectively.", "We make our code available on github https:// github.com/xdqkid/XLPT-AMR .", "Abstract Meaning Representation (AMR) (Ba-narescu et al., 2013) is a widely used formalism that represents the semantics of a sentence with a directed and acyclic graph.", "Figure 1", "(b) shows an example AMR graph where the nodes such as Corresponding Author: Junhui Li.", "doctor and give-01 represent concepts, and the edges such as :ARG0 and :ARG1 stand for semantic relations between two connected concepts.", "Recent studies on AMR mainly fall in two directions: AMR parsing which converts a sentence into an AMR graph (Flanigan et al., 2014; Wang et al., 2015a; Konstas et al., 2017, to name a few) and its inverse, i.e., AMR-to-text generation that produces a sentence from an AMR graph (Flanigan et al., 2016; Song et al., 2017, 2018, to name a few).", "Restricted by the availability of annotated corpora, most of previous studies on AMR focus on English while very few studies are for Chinese and Portuguese (Wang et al., 2018; Sobrevilla Cabezudo et al., 2019; Anchieta and Pardo, 2020).", "Cross-lingual AMR research, however, has received relatively less attention.", "In fact, cross-lingual AMR has mainly been studied in the scope of annotation works (Xue et al., 2014; Haji c et al., 2014).", "Till recently, Damonte and Cohen (2018) demonstrate that AMR annotated for English can be used as cross-lingual semantic representations, and propose to conduct cross-lingual AMR parsing via annotation projection and machine translation.", "Blloshmi et al. (2020) follow the same line and create large-scale silver data to boost the performance of cross-lingual AMR parsing.", "Fan and Gardent (2020) focus on multilingual AMR-to-text generation for twenty one different languages.", "The aforementioned studies consider AMR parsing and AMR-to-text generation separately.", "In this paper, we formalize both AMR parsing and AMR-to-text generation as sequence-to-sequence (seq2seq) learning and propose a novel and effective approach to cross-lingual AMR, which is illustrated in Figure 1.", "Upon the availability of the English AMR dataset and English-to-X parallel datasets ( X { German , Spanish , Italian } in this paper), our purpose is to boost the performance of zero-shot AMR parsing and text generation in", "X -language.", "To this end, we borrow the idea of joint pre-training from Xu et al. (2020) and explore three types of relevant tasks, including machine translation tasks, AMR parsing and AMR-to-text generation tasks.", "We conjecture that knowledge gained while learning for English AMR parsing and text generation could be helpful to the X -language counterparts, and machine translation tasks could act as a good regularizer (Xu et al., 2020).", "To the best of our knowledge, this is the first study that utilizes such a pre-training approach in cross-lingual AMR research.", "We also explore and compare four different fine-tuning methods to answer the question that whether combining AMR parsing and AMR-to-text generation tasks in fine-tuning stage will achieve better performance.", "Moreover, inspired by the teacher-student mechanism (Kim and Rush, 2016; Chen et al., 2017), we extend the fine-tuning method to improve a target fine-tuning task with the help of another relevant yet stronger task.", "Experimental results on the cross-lingual AMR dataset (LDC2020T07) show that the proposed approach greatly advances the state of the art of cross-lingual AMR.", "We propose an effective cross-lingual pretraining approach for zero-shot AMR parsing and AMR-to-text generation.", "Our pre-trained models could be used for both AMR parsing and AMR-to-text generation.", "We explore and compare different fine-tuning methods.", "We also propose a teacher-student-based fine-tuning method that achieves the best performance.", "We evaluate our approach in three zero-shot languages of AMR and our approach greatly advances the state of the art. 2 Related Work We describe related studies on AMR from three perspectives: English AMR parsing, English AMR-to-text generation, and cross-lingual AMR.", "English AMR Parsing.", "AMR parsing is a task that translates a sentence into a directed and acyclic graph (Banarescu et al., 2013).", "According to the approaches to modeling the structure in AMR graphs, previous studies on AMR Parsing for English can be broadly grouped into several categories, which are tree-based approaches (Wang et al., 2015b; Groschwitz et al., 2018), graph-based approaches (Flanigan et al., 2014; Werling et al., 2015; Cai and Lam, 2019), transition-based approaches (Zhou et al., 2016; Damonte et al., 2017; Ballesteros and Al-Onaizan, 2017; Guo and Lu, 2018; Zhou et al., 2021), sequence-to-sequence (seq2seq) approaches (Peng et al., 2017; van Noord and Bos, 2017; Konstas et al., 2017; Ge et al., 2019; Xu et al., 2020; Bevilacqua et al., 2021), and sequence-to-graph (seq2graph) approaches (Lyu and Titov, 2018; Zhang et al., 2019a,b; Cai and Lam, 2020a).", "English AMR-to-Text Generation.", "As an inverse task of AMR parsing, AMR-to-text generation aims to write a sentence from an AMR graph.", "Early studies on this task rely on grammar-based approaches (Flanigan et al., 2016; Song et al., 2017).", "More recent studies propose to regard AMR-to-text generation as a machine translation or seq2seq task (Pourdamghani et al., 2016; Ferreira et al., 2017; Konstas et al., 2017; Cao and Clark, 2019).", "However, seq2seq approaches tend to lose structural information in AMR graphs since they simply linearize AMR graphs into sequences before feeding them into the models.", "To prevent information loss caused by linearization, a variety of graph-to-sequence approaches have been proposed to better model structural information (Song et al., 2018; Beck et al., 2018; Damonte and Cohen, 2019; Guo et al., 2019; Ribeiro et al., 2019; Zhu et al., 2019; Cai and Lam, 2020b; Zhao et al., 2020; Song et al., 2020; Yao et al., 2020; Bai et al., 2020).", "By taking advantages of strong pre-trained language models, recent studies achieve new state of the art (Mager et al., 2020; Harkous et al., 2020; Ribeiro et al., 2020; Bevilacqua et al., 2021) .", "Cross-Lingual AMR.", "All above related studies focus on English AMR research.", "Relatively limited efforts have been put on other languages due to the lack of language-specific AMR corpora.", "Actually, whether AMR can act as an interlingua is an open question (Xue et al., 2014; Haji c et al., 2014).", "Till lately , Damonte and Cohen (2018) demonstrate that a simplified AMR can be used across languages and for the first time they study cross-lingual AMR parsing for languages rather than English.", "Blloshmi et al. (2020) employ large-scale silver parallel AMR data to bridge the gap between different languages and greatly advance the performance of cross-lingual AMR parsing.", "Sheth et al. (2021) explore annotation projection to leverage existing English AMR and overcome resource shortage in the target language.", "Furthermore, Fan and Gardent (2020) explore cross-lingual AMR-to-text based on pre-trained cross-lingual language model (XLM) (Lample and Conneau, 2019).", "In this paper we build strong cross-lingual pre-trained models for both AMR parsing and AMR-to-text generation.", "Moreover, a nice property of our approach is that for AMR parsing, unlike related studies (Damonte and Cohen, 2018; Blloshmi et al., 2020), we do not need to perform lemmatization, POS tagging, NER, or re-categorization of entities, thus require no language specific toolkits in pre-processing.", "In this section, we first present the background of our pre-training approach (Section 3.1), followed by the description of cross-lingual pre-training tasks (Section 3.2).", "Then we present our joint pre-training (Section 3.3).", "For simplicity, in the following we use German as a representative to describe our approach to German AMR parsing and AMR-to-text generation.", "Transformer-based Seq2Seq Learning.", "Our models are built on the Transformer framework (Vaswani et al., 2017).", "The encoder in Transformer consists of a stack of multiple identical layers, each of which has two sub-layers: one implements the multi-head self-attention mechanism and the other is a position-wise fully-connected feedforward network.", "The decoder is also composed of a stack of multiple identical layers.", "Each layer in the decoder consists of the same sub-layers as in the encoder plus an additional sub-layer that performs multi-head attention to the distributional representation produced by the encoder.", "See Vaswani et al. (2017) for more details.", "AMR Graph Linearization and Recovering.", "To make Transformer applicable to AMR parsing and AMR-to-text generation, on the one hand we follow van Noord and Bos (2017) to linearize AMR graphs into sequences by removing variables, wiki links and duplicating the co-referring nodes.", "On the other hand, for AMR parsing we need to recover the graph representation from linearized AMRs by assigning a unique variable to each concept, pruning duplicated and redundant materials, restoring co-referring nodes, fixing incomplete concepts and performing Wikification.", "1 In this paper, we adopt linearization and recovering scripts provided by van Noord and Bos (2017).", "2 3.2 Cross-Lingual Pre-Training Tasks Due to the unavailability of gold training data of German AMR parsing and AMR-to-text generation, we view English as a pivot and hope that knowledge gained while learning for English AMR parsing and text generation could be helpful for the German counterparts.", "Specifically, given an EN-DE parallel dataset (cid:0) TEN , TDE (cid:1) , we use an English AMR parser trained on annotated English AMRs (i.e., AMR2.0) to parse the English sentences into AMR graphs, thus obtain a trilingual parallel dataset T = (cid:0) TEN , TDE , TAMR (cid:1) .", "Then 1 We extract a term-wiki list from English AMR training dataset.", "on the trilingual parallel dataset, we propose cross-lingual pre-training via multi-task learning.", "We consider three types of tasks, i.e., AMR parsing, AMR-to-text generation, and machine translation.", "AMR Parsing Tasks, which include both English AMR parsing on the training data (cid:0) TEN , TAMR (cid:1) and German AMR parsing on (cid:0) TDE , TAMR (cid:1) .", "Note that both AMR parsing tasks are trained on silver AMR graphs.", "AMR-to-Text Generation Tasks, which include both English AMR-to-text generation and German AMR-to-text generation.", "Similar to AMR parsing, these two AMR-to-text generation tasks are also trained on silver AMR graphs (cid:0) TAMR , TEN (cid:1) and (cid:0) TAMR , TDE (cid:1) , respectively.", "Machine Translation Tasks, which include both English-to-German and German-to-English machine translation tasks on (cid:0) TEN , TDE (cid:1) .", "The advantage of including the bi-directional translation tasks is three-fold.", "First, English-to-German translation will enable the decoder to generate fluent German sentence, which is beneficial to German AMR-to-text generation.", "Second, German-to-English translation will enable the encoder to capture syntax and semantic information from German sentences, which is beneficial to German AMR parsing.", "Third, translation tasks can serve as regularization to the training of AMR parsing and AMR-to-text generation, both of which are apt to overfit to the training data.", "Overall speaking, in our pre-training there exist three types of (six) pre-training tasks in total.", "The pre-training is conducted on a trilingual parallel dataset (cid:0) TEN , TDE , TAMR (cid:1) , where TEN and TDE are parallel gold sentence pairs while TAMR is the set of corresponding silver AMR graphs.", "To train the above six pre-training tasks with a single model, we follow the strategy used in Xu et al. (2020) and add preceding language tags to both source and target sides of training data to distinguish the inputs and outputs of each training task.", "As illustrated in Table 1, we use < en > , < de > , and < amr > as the tags of begin-of-sentence for English sentences, German sentences, and linearized AMRs, respectively.", "Our joint pre-training on multiple tasks falls into the paradigm of multi-task learning (MTL).", "In the training stage, we take turns to load the training English < en > English Sentence German < de > German Sentence AMR < amr > Linearized AMR Table 1: Preceding tags as the symbol of begin-of-sentence to distinguish languages.", "data of these pre-training tasks.", "For example, we update model parameters on a batch of training instances from the first task, and then update parameters on a batch of training instances of the second task, and the process repeats.", "We also note that, according to our preliminary experimentation, the effect of different orders of carrying out these pre-training tasks is negligible.", "To fine-tune a pre-trained model, we create a fine-tuning dataset from English annotated AMRs (i.e.,AMR2.0).", "Given English-AMR parallel data (cid:0) FEN , FAMR (cid:1) , we use an English-to-German translator to translate the English sentences into German sentences, thus obtain trilingual parallel dataset F = (cid:0) FEN , FDE , FAMR (cid:1) .", "As our goal is to improve the performance of zero-shot AMR parsing and AMR-to-text generation, our primary fine-tuning tasks are German AMR parsing and AMR-to-text generation.", "Moreover, we could include the other four fine-tuning tasks as auxiliary tasks when necessary, i.e., English AMR parsing and AMR-to-text generation, as well as English-to-German and German-to-English translation.", "Once the fine-tuning dataset is ready, we can fine-tune a pre-trained model with different methods.", "The vanilla fine-tuning method that fine-tunes a pretrained model on the dataset of a primary task is a natural choice.", "We can also fine-tune a pre-trained model jointly over all fine-tuning tasks, or over the primary tasks plus specifically chosen fine-tuning tasks that are relevant.", "In the following we explore and compare four different fine-tuning methods.", "Given a pre-trained model, vanilla fine-tuning updates the parameters of the pre-trained model solely on the dataset of the downstream task.", "For example, for German AMR parsing, we fine-tune the pre-trained model on the fine-tuning dataset of the German AMR parsing task.", "In other words, vanilla fine-tuning involves only a single-task learning.", "We fine-tune a pre-trained model synchronously for all six fine-tuning tasks, which are the same as the pre-training tasks.", "Related studies (Li and Hoiem, 2018; Xu et al., 2020) have shown that it is important to optimize for high accuracy of a primary fine-tuning task while preserving the performance of other tasks.", "Preserving the performance of various pre-training tasks could be viewed as a regularizer for each fine-tuning task.", "Similarly to joint pre-training, we take turns to load the fine-tuning data of these fine-tuning tasks.", "Consequently, we obtain a single fine-tuned model for all tasks.", "Rather than including all fine-tuning tasks within a single model, we can selectively choose relevant fine-tuning tasks.", "For German AMR parsing, we use AMR parsing on German as the primary fine-tuning task and German-to-English translation as an auxiliary fine-tuning task.", "The auxiliary task will enhance the encoder to capture semantic information from German sentences.", "This is also consistent with the fine-tuning tasks designed for English AMR parsing in (Xu et al., 2020).", "For German AMR-to-text generation, we choose English-to-German as the auxiliary fine-tuning task, which is beneficial for the decoder to generate fluent German sentences.", "One notable property of the fine-tuning dataset is that the German sentences are produced automatically through machine translation.", "Noises in such silver fine-tuning dataset may degrade the performance of fine-tuned models.", "Inspired by the teacher-student framework (Kim and Rush, 2016; Chen et al., 2017), we propose to solve this problem by using a stronger fine-tuning task to help improve fine-tuning tasks on such noisy data.", "For example, we can use English AMR parsing (as the teacher) to help German AMR parsing (as the stu-dent), since English AMR parsing that is fine-tuned on gold data tends to have stronger performance.", "Fine-Tuning for German AMR Parsing.", "We use E , G , A to denote English-side, German-side, and AMR-side, respectively, and ( e , g , a ) as a triple instance.", "For German AMR parsing (i.e., G A ), we regard English AMR parsing (i.e., E A ) as its teacher and assume that the probability of generating a target AMR token a i from g should be close to that from its counterpart e , given the already obtained partial AMR a <i .", "On this assumption, the student model can acquire knowledge from the teacher by applying word-level knowledge distillation for multi-class cross-entropy with the following joint training objective: J ( G A ) = (cid:88) ( e , g , a ) J (cid:16) e , g , a , E A , G A (cid:17) + L G A ( a | g ) , (1) where ( e , g , a ) D E,G,A , i.e., (cid:0) FEN , FDE , FAMR (cid:1) , the fine-tuning data for English/German AMR parsing, E A denotes the already learned model parameters for English AMR parsing, 3 and L G A ( a | g ) denotes the log-likelihood function for translating g into a .", "The function J in Eq.", "1 is defined as: J (cid:16) e , g , a , E A , G A (cid:17) = | a | (cid:88) i =1 KL (cid:16) P ( a | e , a <i ; E A ) (cid:107) P ( a | g , a <i ; G A ) (cid:17) = | a | (cid:88) i =1 (cid:88) a V a P ( a | e , a <i ; E A ) log P ( a | e , a <i ; E A ) P ( a | g , a <i ; G A ) , (2) where KL ( (cid:107) ) denotes the KL divergence between two distributions, and V a is the vocabulary set.", "4 To sum up, in MTL fine-tuning we use Eq.", "1 as the objective for the fine-tuning task of German AMR parsing while we still use the log-likelihood function for the auxiliary fine-tuning task, i.e., German-to-English translation.", "Fine-Tuning for German AMR-to-Text Generation.", "Considering the fact that the performance of English-to-German translation is also better than that of German AMR-to-text generation, we view English-to-German translation as the teacher and assume that the probability of generating a target German token g i from a should be close to that from its counterpart e , given the already obtained partial German sentence g <i .", "The joint training objective for German AMR-to-text generation is similar to the aforementioned objective function for German AMR parsing.", "Due to limited space, we omit definition details of the objective function.", "In this section, we report the performance of our approach to AMR parsing and AMR-to-text generation for non-English languages, including German (DE), Spanish (ES), and Italian (IT).", "The models are pre-trained and fine-tuned on English data and one of either DE, ES, or IT, and are evaluated in the target language.", "Pre-Training Datasets.", "For German, we use the WMT14 English-German translation dataset 5 which consists of 3.9M sentence pairs after preprocessing.", "For Spanish and Italian, we use Eu-roparl parallel datasets, 6 which consist of 1.9M English-Spanish and 1.9M English-Italian sentence pairs, respectively.", "The English sentences of all the datasets are all parsed into AMR graphs via an English AMR parser trained on AMR 2.0 (LDC2017T10) (Appendix A provides more details on the English AMR parser).", "We merge English, German (Spanish/Italian) sentences and linearized AMRs together and segment all the tokens into subwords by byte pair encoding (BPE) (Sennrich et al., 2016) with 40K (or 30K for both Spanish and Italian) operations.", "In addition, we also train NMT models to translate English into German, Spanish, and Italian on above parallel datasets with Transformer-big settings (Vaswani et al., 2017).", "These NMT models will be used in preparing fine-tuning datasets (Ap-pendix B provides more implementation details on the NMT models).", "Fine-Tuning Datasets.", "We use English AMR2.0 which contains 36,521, 1,368, and 1,371 English-AMR pairs for training, development, and testing, respectively.", "We translate the English sentences into German, Spanish, and Italian, respectively.", "We segment all the tokens into subwords by using the BPE model trained on pre-training datasets.", "Pre-Training and Fine-Tuning Model Settings.", "We implement above pre-trained models based on OpenNMT-py (Klein et al., 2017).", "7 For simplicity, we use the same hyperparameter settings to train all the models in both pre-training and fine-tuning 5 https://www.statmt.org/wmt14/ translation-task.html 6 https://www.statmt.org/europarl/index.", "by just following the settings for the Transformer-base model in Vaswani et al. (2017).", "The number of layers in encoder and decoder is 6 while the number of heads is 8.", "Both the embedding size and the hidden state size are 512 while the size of feedforward network is 2048.", "Moreover, we use Adam optimizer (Kingma and Ba, 2015) with 1 of 0.9 and 2 of 0.98.", "Warm up step, learning rate, dropout rate, and label smoothing epsilon are set to 16000, 2.0, 0.1 and 0.1 respectively.", "We set the batch size to 4,096 (8,196) in pre-training (fine-tuning).", "We pre-train (fine-tune) the models for 250K (10K) steps and save them at every 10K (1K) steps.", "Finally, we obtain final pre-trained (fine-tuned) models by averaging the last 10 checkpoints.", "Evaluation.", "We evaluate on LDC2020T07 (Da-monte and Cohen, 2018), a corpus containing human translations of the test portion of 1371 sentences from the AMR 2.0, in German, Spanish, Italian, and Chinese.", "This data is designed for use in cross-lingual AMR research.", "Following Fan and Gardent (2020), we only evaluate on languages of German, Spanish and Italian where we have training data from EUROPARL.", "For AMR parsing evaluation, we utilize Smatch and other fine-grained metrics (Cai and Knight, 2013; Damonte et al., 2017).", "For AMR-to-text generation, we report performance in BLEU (Papineni et al., 2002).", "Baseline scratch .", "To build this baseline system, we directly train models from scratch on the fine-tuning datasets.", "Taking German AMR parsing as example, we train the model on its fine-tuning dataset (cid:0) FDE , FAMR (cid:1) to get Baseline scratch .", "Baseline pre-trained .", "Rather than training models from scratch, we pre-train the models on large-scale silver datasets.", "Taking German AMR parsing as example, we first pre-train the model on the pretraining dataset, i.e., (cid:0) TDE , TAMR (cid:1) , then we fine-tune the pre-trained model on the corresponding fine-tuning dataset, i.e., (cid:0) FDE , FAMR (cid:1) .", "Table 2 shows the performance of AMR parsing and AMR-to-text generation for German (DE), Spanish (ES), and Italian (IT).", "From the performance comparison of the two baseline approaches, it is not surprising to find out that pre-training on silver datasets is a very effective way to boost performance (Konstas et al., 2017; Xu et al., 2020).", "By using silver datasets, we obtain improvements of 6.80 7.87 Smatch F1, and 6.21 10.54 BLEU for parsing and text generation, respectively.", "With any of our fine-tuning methods, our cross-lingual pre-training approach further improves the performance over the strong baseline Baseline pre-trained in both parsing and generation tasks over all languages.", "It shows that like other fine-tuning methods, vanilla fine-tuning significantly boosts the performance of both parsing and generation.", "However, it still underperforms any of the MTL fine-tuning methods.", "This con-firms that it is important to optimize for high accuracy of a certain fine-tuning task while preserving the performance of other pre-training.", "The performance comparison between XLPT-AMR one4all and XLPT-AMR targeted suggests that selectively choosing relevant fine-tuning tasks, rather than including all fine-tuning tasks, could further boost parsing and generation performance with the exception of Spanish generation task.", "The XLPT-AMRT-S models perform the best, which reveals that using the teacher-student framework to guide the decoding process also helps the student task.", "This is owing to fact that the teacher models achieve better performance than the student models.", "See more in Section 5.4 for performance comparison of teacher and student models.", "Finally, we compare our approach to the previous studies.", "Among them, both Blloshmi et al. (2020) and Fan and Gardent (2020) adopt pretrained models which cover either the encoder part, or the decoder part.", "From the results we can see even our baseline Baseline pre-trained outperforms them by pre-training the encoder and the decoder simultaneously.", "The results also show that our XLPT-AMRT-S models greatly advance the state of art.", "For example, our XLPT-AMRT-S models outperform Sheth et al. (2021) by 3.4 7.8 Smatch F1 on AMR parsing of the three languages while surpass Fan and Gardent (2020) by around 10 BLEU on AMR-to-text generation.", "Table 3 compares the performance of fine-grained metrics for AMR parsing.", "It shows that our XLPT-AMRT-S models achieve the best performance on all the metrics with the only exception of Concepts for Italian AMR parsing.", "It shows that like English AMR parsing, all models predict Reentrancies poorly (Szubert et al., 2020).", "It also demonstrates that Negations is another metric which is hard to predict.", "In future work, we will pay particular attention to the two metrics.", "In this section, we try to answer the following three questions:", "First, what is the performance of teacher models when we use teacher models to guide student ones in teacher-student-based MTL fine-tuning?", "Second, what is the effect of the two machine translation tasks in pre-training?", "Third, in our approach we take English as pivot language by taking advantage of large scale English-to-German (or Spanish, Italian) dataset.", "What is the performance of English AMR parsing and AMT-to-text generation?", "Performance of teacher models in teacher-student-based MTL fine-tuning.", "Table 4 compares the performance of teacher and student models.", "It shows that the performance of teacher models for English AMR parsing and English-toX translation is much higher than the counterparts of student models (i.e., Stu.", "(before) in the table).", "The table also shows that the student models beneift from receiving guidance from the teachers.", "For example, while the English AMR parsing model (i.e., the teacher) achieves 78.62 Smatch F1 on the test set, it improves the performance of the German AMR parsing model (i.e., the student) from 68.31 Smatch F1 to 70.45.", "Similarly, while the English-to-German model (i.e., the teacher) achieves 39.40 BLEU on the test set, it boosts the performance of the German AMR-to-text generation model (i.e., the student) from 24.15 BLEU to 25.69.", "Note that when machine translation tasks are not involved in pre-training, the targeted MTL fine-tuning method is not applicable since we cannot use machine translation as the auxiliary task.", "Therefore, we use the vanilla fine-tuning method to fine-tune the pre-trained models.", "Table 5 compares the performance with/without machine translation tasks in pre-training.", "From it, we observe that including machine translation tasks in pre-training achieves improvements of 2.77 Smatch F1 and 2.46 BLEU on German AMR parsing and text generation, respectively.", "This suggests the necessity to have machine translation tasks in pre-training.", "Performance of English AMR parsing and AMR-to-Text generation.", "Based on the pretrained models, we take the targeted MTL fine-tuning method (Section 4.3) as a representative.", "Specifically, for English AMR parsing, we choose English-toX ( X { German , Spanish , Italian } ) as the auxiliary fine-tuning task while for English test generation, we choose X -to-English as the auxiliary task.", "Table 6 shows that the performance of English parsing and generation is much higher than that of other languages.", "Moreover, we find that the results of English AMR parsing are quite close when combining English with any of other languages whereas the results of English AMR-to-text generation are considerably different.", "One possible reason for the phenomenon is that English AMR-to-text generation is relevant to the sizes of machine translation datasets used in pre-training (i.e., 3.9M for EN-DE translation whereas 1.9M for both EN-ES and EN-IT, respectively) while English parsing seems to be less affected by the sizes of (silver) datasets.", "It indicates that with more English sentences in pretraining, it helps the generation models to generate Model AMR Parsing AMR-to-Text DE ES IT DE ES IT Teacher 78.62 78.16 78.58 39.40 40.41 36.67 Stu.", "more fluent and correct English sentences.", "In this paper we proposed a cross-lingual pretraining approach via multi-task learning for zero-shot AMR parsing and AMR-to-text generation.", "Upon English AMR dataset and English-toX parallel datasets, we pre-trained models on three types of relevant tasks, including AMR parsing, AMR-to-text generation, and machine translation.", "We also explored and compared four different fine-tuning methods.", "Experimentation on the multilingual AMR dataset shows that our approach greatly advances the state of the art.", "This work was supported by the National Key R&D Program of China under Grant No. 2020AAA0108600 and by the National Natural Science Foundation of China under Grant No. 61876120." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "method", "objective", "objective", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "method", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "method", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "other", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "other" ]
[ "This paper presents an investigation on the distribution of word vectors belonging to a certain word class in a pre-trained word vector space.", "To this end, we made several assumptions about the distribution, modeled the distribution accordingly, and validated each assump-tion by comparing the goodness of each model.", "Specifically, we considered two types of word classes the semantic class of direct objects of a verb and the semantic class in a thesaurus and tried to build models that properly estimate how likely it is that a word in the vector space is a member of a given word class.", "Our results on selectional preference and WordNet datasets show that the centroid-based model will fail to achieve good enough performance, the geometry of the distribution and the existence of subgroups will have limited impact, and also the negative instances need to be considered for adequate modeling of the distribution.", "We further investigated the relationship between the scores calculated by each model and the degree of membership and found that discriminative learning-based models are best in finding the boundaries of a class, while models based on the offset between positive and negative instances perform best in determining the degree of membership.", "Several studies have been successful in representing the meaning of a word with a vector in a continuous vector space (e.g., Mikolov et al. 2013a; Pennington et al. 2014).", "These representations are useful for a range of natural language processing (NLP) tasks.", "The interpretation and geometry of the word embeddings have also attracted attention (e.g., Kim and de Marneffe 2013; Mimno and Thompson 2017).", "However, little attention has been paid to the distribution of words belonging to a certain word class in a word vector space, though Centroidof negative instances Centroidof positive instances Figure 1: 2D t-SNE projection of GloVe vectors.", "empirical analysis of such a distribution provides a better understanding of word vector spaces and insight into algorithmic choices for several NLP tasks, including selectional preference acquisition and entity set expansion.", "Figure 1 shows a 2D projection of word embeddings.", "We extracted 200 words that can be a direct object of the verb play (positive instances) and 1000 other words (negative instances) and projected their GloVe vectors (Pennington et al., 2014) into two dimensions using t-distributed Stochastic Neighbor Embedding (t-SNE) (van der Maaten and Hinton, 2008).", "The plus symbols ( + ) represent the positive instances, and the squares ( (cid:4) ) represent the negative instances.", "This figure shows that the positive instances tend to be densely distributed around their centroid but they are not evenly distributed near the centroid in the 2D spaces.", "In this study, we aimed to understand how these positive instances are distributed in the pre-trained word vector spaces built by three representative general-purpose models: CBOW, skip-gram (Mikolov et al., 2013a), and GloVe.", "More specifically, we attempted to determine the following: whether or not a simple centroid-based approach can provide a reasonably good model, whether or not considering the geometry of the distribution and the existence of subgroups is useful for modeling the distribution, and whether or not considering the negative instances is essential to achieve adequate modeling.", "To this end, we first tackled properly modeling the vector distribution to distinguish a possible member of a word class from others when a subset of the class members is given.", "Note that although various approaches have been proposed to improve word vectors by taking knowledge related to word classes into account (Faruqui et al., 2015; Rothe and Schutze, 2015; Mrksic et al., 2017), we explored ways to model the distribution of word vectors rather than attempting to improve the word vectors themselves.", "We started with a centroid-based model, which is a simple but widely used way of representing a set of word vectors (e.g., Baroni et al. 2014; Woodsend and Lapata 2015) and assumes that how likely a word in the vector space is a member of a word class is proportional to the proximity to the centroid vectors of the class members.", "We then explored models that take the geometry of the distribution and the existence of subgroups into account.", "Here, we made two assumptions: vectors of words belonging to a certain word class are distributed with different variances depending on the direction, and most word sets will consist of several subgroups.", "We then explored the models that also consider negative instances.", "We assumed that the vectors of the words that do not belong to the target word class can be essential clues to distinguish a possible member of a word class from others.", "Specifically, we explored a model based on the offset between positive and negative instances and discriminative learning-based models to investigate the impact of negative instances.", "Furthermore, we investigated the relationship between the scores calculated by each model and the degree of membership using the Rosch (1975) dataset.", "The dataset contains typicality ratings for some instances of a category.", "Through experiments, we found that discriminative learning-based models perform better at distinguishing a possible member of a word class from others, while the offset-based model achieves higher correlations with the degree of membership.", "The interpretation and geometry of word embeddings have attracted attention.", "Mimno and Thompson (2017) reported that vector positions trained with skip-gram negative sampling (SGNS) do not span the possible space uniformly but occupy a narrow cone instead.", "Mikolov et al. (2013b) showed that constant vector offsets of word pairs can represent linguistic regularities.", "Kim and de Marneffe (2013) demonstrated that vector offsets can be used to derive a scalar relationship amongst adjectives.", "Yaghoobzadeh and Schutze (2016) performed an analysis of subspaces in word embedding.", "These analyses suggest that a certain direction or subspace in the word vector space represents an aspect of the words and the possibility that a word class is distributed with different variances depending on the direction in the vector space.", "While we investigated ways to model the distribution of a set of words in pre-trained word vector spaces to validate several assumptions about the distribution, various approaches have been proposed to improve word embeddings by considering knowledge related to word classes into account.", "For example, Faruqui et al. (2015) proposed a method of refining vector representations using relational information from semantic lexicons by encouraging linked words to have similar vector representations.", "Mrksic et al. (2017) proposed an algorithm for improving the semantic quality of word vectors by injecting constraints extracted from lexical resources.", "Glavas and Vulic (2018) use the linguistic constraints as training examples to learn an explicit specialization function with deep neural network architecture.", "There are also several studies that expand the method for acquiring a word vector to consider the uncertainty of a word meaning via Gaussian models (Vilnis and McCallum, 2015; Athiwaratkun and Wilson, 2017) and word polysemy by introducing several vectors for each word (Chen et al., 2014; Neelakantan et al., 2014; Tian et al., 2014; Athiwaratkun et al., 2018).", "In this study, we only considered a vector for representing each word, but inspired by these studies, we explored models that can consider the geometry of the distribution and the existence of subgroups.", "The problem we tackled is similar to a selectional preference acquisition task.", "There have been a number of studies on selectional preference acquisition.", "Resnik (1996) presented an", "information-(a) CENT", "theoretic approach that inferred selectional preferences based on the WordNet hypernym hierarchy.", "Erk et al. (2010) described a method that uses corpus-driven distributional similarity metrics for selectional preference induction.", "Van de Cruys (2014) investigated the use of neural networks for selectional preference acquisition.", "An entity set expansion task (Pantel et al., 2009) is also similar to our problem and has been well studied.", "For example, Sadamitsu et al. (2011) disambiguated entity word senses and alleviated semantic drift by extracting topic information from LDA for entity set expansion.", "Zhang et al. (2016) proposed a joint model for entity set expansion and attribute extraction.", "In this study, we seek to understand how these vectors are distributed in the pre-trained word vector space without using contextual or lexical information.", "A comparison with the state-of-the-art models for selectional preference induction and entity set expansion is beyond the scope of this work.", "First, let us introduce the notation.", "W c is a subset of words that belong to the target word class c .", "W o is a subset of words that do not belong to the word class.", "w t is a target word that can be a member of the word class c but is not included in W c .", "v w V w is a pre-trained vector for word w .", "We normalize all the word vectors to unit length.", "1 Note that we select the words in W o to share the same grammatical category as the words in W c .", "Our objective is to distinguish the word w t from the words in W o , given W c and V w .", "More specifically, we aim to find a scoring function f ( w, W c ) that assigns a higher score to w t and lower scores to the words in W o .", "For example, suppose c is a class of words that can be a direct object of the verb play ; W c , W o and w t will be as follows: W c = { role, part, game, golf, tennis } , W o = { school, apple, milk, arch, idea } , and w t = basketball .", "Our objective is to find a scoring function that assigns a higher score to basketball than to school , apple , milk , arch , and idea .", "We will start with a centroid-based model ( CENT ) that measures the score between a word w and a word set W c by calculating the cosine similarity between the word vector and the centroid vector of the word vectors in the word set (Figure", "2-(a)).", "The scoring function can be written as: f CENT ( w, W c ) = cos ( v w , 1 | W c | (cid:88) w c W c v w c ) .", "CENT provides a reasonable baseline, but it does not take the geometry of the distribution of the word vectors into account.", "Therefore, we introduce a simple Gaussian model ( GM ) to represent the distribution of word vectors belonging to a word class c (Figure", "2-(b)).", "The scoring function is as follows: f GM ( w, W c ) = N ( v w | , ) , (2) where mean and covariance matrix are estimated from { v w c | w c W c } .", "We select the constraint on covariance matrices for Gaussian distribution from { spherical, diagonal, full } by performing cross-validation on W c .", "GM is identical with CENT when the covariance matrix is an identity matrix.", "Next, we introduce a Gaussian mixture model ( GMM ) to take the existence of subgroups in a word class c into account (Figure", "2-(c)).", "The scoring function can be written as: f GMM ( w, W c ) = K (cid:88) k =1 k N ( v w | k , k ) , (3) where weights k , means k , and covariance matrices k are estimated from { v w c | w c W c } .", "We select the number of components of a Gaussian mixture K from { 1 , 2 , . . . , 10 } and the constraint on covariance matrices from { spherical, diagonal, full } by performing cross-validation on W c .", "GMM can be considered an extension of CENT because it is identical to the CENT when K is 1 and the covariance matrix is an identity matrix.", "Furthermore, we will consider another extension of CENT that only considers the existence of subgroups.", "Since all word vectors are normalized to unit length, f CENT ( w, W c ) can also be written as: f CENT ( w, W c ) = Wc | W c | (cid:88) w c W c cos ( v w , v w c ) , (4) where Wc is a normalization term depending only on W c and thus does not affect the ranking.", "That is, we can consider that CENT takes the average of the cosine similarities between a word vector v w and all word vectors in the given word set W c .", "If the words in the word set consist of several subgroups, it would be more plausible to consider only the topk most similar words for scoring.", "Accordingly, we introduce the k -nearest neighbor model ( k NN ), which takes the average of only the top k similar vectors.", "The scoring function can be written as: f k NN ( w, W c ) = 1 k (cid:88) w c k NN w ( W c ) cos ( v w , v w c ) , (5) where k NN w ( W c ) is a function returning a set of words in W c that take the topk highest cosine similarities against the word w .", "The number of k is selected from { 1 , 2 , 2 2 , . . . , | W c |} by performing cross-validation on W c .", "k NN is identical to CENT when | W c | is selected as k .", "As the last model without negative instances, we adopt a one-class support vector machine (SVM) (Scholkopf et al., 2001)-based model ( 1-SVM ) to clarify the importance of the negative instances.", "We select the kernel from { linear, cubic polynomial, RBF } and tune the parameter nu { 0 .", "05 , 0 .", "10 , . . . , 0 .", "50 } by performing cross-validation.", "Note that models without negative instances learn a decision function for outlier detection: classifying new data as similar or different to the given positive instances.", "Next, we explore models that also leverage negative instances.", "Here, we introduce a word set W n as negative instances, where W n consists of words that are not included in either W c or W o .", "We select the words in W n to share the same grammatical category as the words in W c as well as W o .", "Both W o and W n consist of words that are not included in W c , but their roles are different.", "While words in W o are used as negative instances in the estimation, words in W n are used as negative instances for modeling the word-class distribution.", "As the first model with negative instances, we introduce a model based on the offset between positive and negative instances ( OffSet ).", "This model is inspired by the Kim and de Marneffe (2013)'s work, which demonstrates that vector offsets can be used to derive adjectival scales.", "We assume that the vector offset between the centroid of the positive instances and that of the negative instances represents the degree of membership in the vector space (Figure", "2-(d)).", "The scoring function of OffSet is as follows: f OffSet ( w, W c , W n ) = cos ( v w , v c | v c | v n | v n | ) , (6) where v c = (cid:88) w c W c v w c , v n = (cid:88) w n W n v w n .", "Now let us move on to discriminative learning-based models.", "In this study, we chose a support vector machine with a linear kernel ( SVML ) or a radial basis function (RBF) kernel ( SVMR ).", "We only used word vectors as the input of these models and regard the decision function as the scoring function.", "We tuned the parameter C { 0 .", "1 , 0 .", "2 , 0 .", "5 , 1 , 2 , 5 , 10 } and class weight for positive instances P { 1 , 2 , 4 , 8 } for SVML and the parameter C { 0 .", "2 , 0 .", "5 , 1 , 2 , 5 } , { 0 .", "2 , 0 .", "5 , 1 , 2 } , and class weight for positive instances P { 1 , 2 , 4 , 8 } for SVMR by performing cross-validation on W c and W n .", "Note that we wanted to determine the usefulness of negative instances in modeling the distribution of word vectors; thus we make no assertions that these are optimal models.", "We used three publicly available pre-trained word vectors for English: the 300-dimensional embeddings trained on the Google News corpus with the CBOW model (CBOW), 2 the 300-dimensional embeddings trained on Wikipedia with the skip-gram model (SGNS), 3 and the 300-dimensional embeddings trained on Wikipedia and Gigaword with the GloVe model (GloVe).", "4 For Japanese, we trained 300-dimensional embeddings on an approximately 1.5 billion word corpus collected from the Web, with the CBOW model (CBOW), the skip-gram model (SGNS), 5 and the GloVe model (GloVe).", "6 We also trained 50-, 100-, and 200-dimensional embeddings on the same corpus for each model in order to investigate the effect of the vector size.", "For the evaluation, we used two types of datasets for English and Japanese, respectively.", "As the first type, we used word sets that consist of words which can be a direct object of a certain verb.", "For example, suppose a word set consists of { role, part, game, golf, tennis, etc.", "} , where each word can be a direct object of the verb play .", "We did not use the verb itself for evaluation but we can regard this as a selectional preference (SP) task.", "For the English SP dataset, we extracted pairs of verbs and their direct objects from the Google Books Syntactic N-grams dataset (Goldberg and Orwant, 2013).", "We first extracted verbs with the POS tag of VBD, VBP or VBZ that have direct objects at a rate of more than 40%.", "We decided on a threshold of 40% empirically to extract transitive verbs only.", "Then, we listed the extracted verbs in descending order of the number of the different direct objects and chose the top 1,000 of them.", "2 https://code.google.com/archive/p/word2vec/ 3 https://github.com/jhlau/doc2vec 4 http://nlp.stanford.edu/data/glove.6B.zip 5 https://code.google.com/archive/p/word2vec/.", "We used the default parameters except for the vector size.", "6 https://nlp.stanford.edu/projects/glove/.", "We used the same parameters as demo.sh except for setting the window size to 5 and the vector size to 300.", "For the Japanese SP dataset, we extracted pairs of verbs and their accusative arguments from the predicate-argument data used by Sasano and Oku-mura (2016).", "First, we extracted verbs that have accusative arguments at a rate of more than 70%.", "Again, we decided on a threshold of 70% empirically to extract transitive verbs only.", "Then, we listed the extracted verbs in descending order of the number of the different accusative arguments and chose the top 1,000 of them.", "Both datasets consisted of 1,000 verbs with at least 250 unique direct objects.", "We selected 200 direct objects as W c from the most frequent 250 direct objects and the other 50 direct objects as w t for each verb.", "Thus, the number of tasks N was 50,000, i.e., 50 tasks for each of the 1,000 verbs.", "We used 2,000 negative instances against 200 positive instances to build models with negative instances.", "We used word sets extracted from English and Japanese WordNet (Fellbaum, 1998; Isahara et al., 2008) as the second type.", "For example, a word set consists of { dog, llama, hedgehog, wolf, etc.", "} , which are all hyponyms of the same synonym set (synset n01886756, placental ).", "We extracted the pair of a synset ID and a set of words in the synset and its hyponyms in a distance of at most five from the target synset in the WordNet hyponym tree, as shown in Figure 3.", "We did not use multiword expressions or words whose word vectors are not included in any of the three pre-trained word embeddings.", "We extracted synsets that have at least 250 words.", "There are 109 word sets for English datasets and 120 word sets for Japanese datasets.", "We selected 200 words as W c and the other 50 words as w t for each synset.", "The number of tasks N was 5,450, i.e., 50 tasks for each of the 109 synsets for English, and 6,000, i.e., 50 tasks for each of the 120 synsets for Japanese.", "We used 2,000 negative instances against 200 positive instances to build models with negative instances as well as the SP datasets.", "We compared eight models: CENT, GM, GMM, k NN, 1-SVM, OffSet, SVML , and SVMR .", "For each dataset, we made W o by extracting 999 words from the other word sets; that is, the number of words for scoring was 1,000, including the target word w t .", "For OffSet, SVML , and SVMR , we make W n by extracting words from the other word sets subject to the constraint W o W n = {} .", "We regarded the problem as a ranking task and adopted the mean reciprocal rank (MRR) as the metric for evaluation.", "The MRR is calculated by the following equation: MRR = 1 NN (cid:88) i =1 1 rank ( w t i ) , (7) where rank( w t i ) is the rank of the target word w t i for each task.", "We tune the parameters to maximize the MRR in parameter tuning.", "We measured the statistical significance with an approximate randomization test (Chinchor, 1992) with 99,999 iterations and significance level = 0 .", "05 after Bonferroni correction.", "To satisfy the independence assumption, we treated each verb (for the SP datasets) or synset (for the WordNet datasets) as the unit of a randomization test.", "Tables 1 and 2 show the experimental results on the SP dataset for English and Japanese, respectively.", "In these tables, the best scores for each word embedding model and the scores with no significant difference from the best score are indicated in bold.", "In addition, the CENT score and the scores with no significant difference from the CENT score are italicized.", "The results in these tables indicate that the models considering the geometry of the distribution or the existence of subgroups in the word class outperform the centroid-based model (CENT) for both the English and Japanese SP datasets.", "In particular, Model CENT GM GMM k NN 1-SVM OffSet SVMLSVMRCBOW .1642 .2539 .2360 .2097 .1726 .2782 .3397 .3905 SGNS .1887 .2461 .2308 .1918 .2252 .2189 .3365 .3608 GloVe .1925 .2596 .2462 .2245 .2295 .1150 .3554 .3800 Ave.", "a simple Gaussian model (GM) performed the best among the models that only depend on positive instances.", "This indicates that these word sets are distributed with different variances depending on the direction in the vector space and it is useful to consider the geometry of the distribution.", "The two discriminative learning-based models with negative instances, SVML and SVMR , achieved much higher performance, whereas 1-SVM yielded a limited improvement over CENT.", "This demonstrates that modeling the distribution with only positive instances has an obvious limitation, and it is essential to leverage the negative instances as well.", "OffSet with CBOW or SGNS achieved a relatively good performance, but OffSet with GloVe did not, which suggests that the usefulness of the offset depends on the word embedding model.", "Tables 3 and 4 show the experimental results on the WordNet dataset for English and Japanese, respectively.", "The meaning of bold and italic fonts is identical to that on the SP dataset.", "The two discriminative learning-based models with negative instances and OffSet with CBOW or SGNS achieved a relatively high performance.", "This demonstrates that the negative instances must be taken into account to model the distribution properly.", "On the other hand, in contrast with the SP datasets, there were no significant improvements when the geometry of the distribution and the existence of subgroups were considered.", "The scores were generally lower than those of the SP datasets.", "We conjecture that this is because WordNet is developed manually and reflects human Model CENT GM GMM k NN 1-SVM OffSet SVMLSVMRCBOW .1435 .1320 .1460 .1473 .1541 .2263 .2564 .2678 SGNS .1767 .1679 .1573 .1625 .1704 .1998 .2292 .2357 GloVe .1792 .1694 .1562 .1744 .1684 .1310 .2075 .2264 Ave.", ".1665 .1564 .1532 .1614 .1643 .1857 .2310 .2433 Table 3: Results on the English WordNet dataset.", "intuition, whereas the SP datasets are automatically built from the corpus and are highly compatible with the pre-trained word vectors.", "In addition, we examined which types of words tend to rank low and found that words extracted from a synset corresponding to their infrequent sense such as stock in the sense of livestock tend to rank low.", "We leave further exploration for future work.", "It is interesting that although SVML is effectively just a linear classifier, SVML achieves a relatively high performance.", "This is likely due to the relatively large vector size compared to the number of positive instances and indicates that the positive instances occupy a certain span in the vector space though such a span cannot be determined by only using positive instances.", "We confirmed two desirable properties of the discriminative learning-based models with negative instances for practical applications.", "One is that since we used simple models, they do not require much training time.", "The other is that their performance is relatively stable among the different word embeddings and datasets compared to the other models.", "We also investigated the effect of the vector size and the number of positive instances on the Japanese SP dataset.", "Table 5 shows the averaged CBOW, SGNS, and GloVe scores for different vector dimensions, 50, 100, 200, and 300.", "We found that while CENT and 1-SVM were not affected much by the vector size, the other models, particularly OffSet, SVML , and SVMR , were significantly affected by the vector size.", "Table 6 shows the averaged CBOW, SGNS, and GloVe scores for the different number of positive instances, 25, 50, 100, Size CENT GM GMM k NN 1-SVM OffSet SVMLSVMR 50 .1686 .2360 .2055 .1909 .1825 .1769 .2842 .3568 100 .1738 .2557 .2177 .2075 .1954 .2189 .3366 .4044 200 .1724 .2697 .2233 .2178 .2005 .2363 .3813 .4340 300 .1677 .2624 .2454 .2185 .1996 .2399 .3936 .4355 Table 5: The average scores of different vector size with the Japanese SP dataset.", "and 200.", "We can conclude that all the models perform at a higher level based on the larger number of positive instances, especially for GM, GMM, SVML , and SVMR .", "This is not surprising, since these models have a large number of parameters and can extract a rich variety of information from the large number of positive instances.", "Similar tendencies were also observed with the other dataset.", "These results demonstrate that we can obtain relatively high performance by using discriminative learning-based models with a large enough vector and training data size.", "Rosch (1975) developed the prototype concept and proved that not all members of a category are equally representative of the category.", "Here, we are interested in the relationship between the scores calculated by each model and the degree of membership.", "We thus investigated how consistent the score calculated by each model is with human intuition on the degree of membership.", "For this experiment, we used the typicality data by Rosch (1975).", "Rosch asked 209 college students to use a 7-point scale to rate the extent to which each instance represents their idea or image of the meaning of the category term, and reported the rank orders with the mean ratings for ten categories.", "7 For example, for the Furniture category, 60 examples are ranked with the mean ratings, chair and sofa are top-ranked with the score of 1.04, and 7 To test the reliability of ratings, Rosch (1975) obtained Spearman rank-order correlations and Pearson productmoment correlations between sub-groups of students and reported that consistency was extremely high.", "stove is ranked as 50th with the score of 5.4.", "In this study, we used eight categories that have a corresponding synset in WordNet.", "Table 7 shows the statistics of the dataset.", "In the table, | WR | denotes the number of examples in Rosch's dataset, | W c | denotes the number of words in the synset and its hyponyms in the WordNet, and | WR W c | is the number of words included in both WR and W c , which we try to rank here.", "In this experiment, the objective was not to distinguish a possible member from others but to rank the positive member w c in W c according to the degree of membership.", "That is, we first formed the scoring function by using W c and W n and then applied the function to each member of WR W c to predict the typicality ranking.", "We evaluated the ranking by calculating Spearman's rank correlation coefficient ( ) and Kendall's rank correlation coefficient ( ) against the ranking of the goodness-of-example in Rosch's dataset.", "We computed the average rank correlation coefficient over the eight categories for and .", "Table 8 shows the experimental results.", "In contrast with the previous experiments, the highest scores were achieved by OffSet.", "These results suggest that the vector offsets can be used to derive the degree of membership.", "We can say that, while discriminative learning-based models, especially SVMR , can find the boundary of a category in a vector space with high accuracy, the vector offset between the centroid of positive instances and that of negative instances can properly represent the degree of membership in a category.", "When we focused on each combination of the embedding and distribution models, we found that the highest and second highest scores were achieved by OffSet with GloVe and GMM with SGNS, respectively.", "In contrast, both achieved relatively low performance in distinguishing a possible member of a word class from others, as shown in Table 3.", "These results demonstrate that the proper models for finding the boundaries of a class and those for determining the degree of membership are different and that choosing a proper model depending on the task is essential.", "We investigated the distribution of words that belong to a certain word class in a pre-trained general-purpose word vector space.", "The experimental results show that a centroid-based approach cannot provide a reasonably good model and considering the geometry of the distribution and the existence of subgroups is useful for modeling the distribution in some cases.", "However, the impact is limited, and the negative instances must be taken into account for adequate modeling.", "The results indicate that just observing the distribution of positive instances is not enough to understand the geometry of word embedding spaces.", "Furthermore, we investigated the relationship between the score calculated by each model and the degree of membership and demonstrated that, while discriminative learning-based models can distinguish a possible member of a word class from others, the offset-based model achieves higher correlations with the degree of membership.", "The investigation in this study leveraged only general-purpose word vectors to represent the meaning of a word.", "However, several studies have expanded the method for acquiring a word vector to account for the uncertainty of word meanings and word polysemy (e.g., Athiwaratkun et al. 2018).", "In addition, contextualized word embeddings have been shown to be very effective on a range of NLP tasks (Peters et al., 2018; Devlin et al., 2019).", "Furthermore, Gong et al. (2018) reported that word embeddings learned in several tasks are biased towards word frequency: the embeddings of high-frequency and low-frequency words lie in different subregions of the embedding space.", "Thus, in the future, we will take the uncertainty, polysemy, and context sensitivity of the word meanings and the frequency of words into account and explore better ways of modeling the word-class distributions in semantic vector spaces.", "This work was supported by JSPS KAKENHI Grant Number 16K16110 and 18H03286." ]
[ "method", "method", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "objective", "abstain", "objective", "method", "objective", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "objective", "other" ]
[ "People rely on digital task management tools, such as email or to-do apps, to manage their tasks.", "Some of these tasks are large and complex, leading to action paralysis and feelings of being overwhelmed on the part of the user.", "The micro-productivity literature has shown that such tasks could benefit from being decomposed and organized, in order to reduce user cognitive load.", "Thus in this paper, we propose a novel end-to-end pipeline that consumes a complex task and induces a dependency graph from unstructured text to represent sub-tasks and their relationships.", "Our solution first finds nodes for sub-tasks from multiple how-to' articles on the web by injecting a neural text generator with three key desiderata relevance, abstraction, and consensus.", "Then we resolve and infer edges between these subtask nodes by learning task dependency relations.", "We collect a new dataset of complex tasks with their sub-task graph to develop and evaluate our solutions.", "Both components of our graph induction solution are evaluated in experiments, demonstrating that our models outperform a state-of-the-art text generator significantly.", "Our generalizable and scalable end-to-end solution has important implications for boosting user productivity and assisting with digital task management.", "People today increasingly rely on digital modalities and applications to organize, track and complete tasks from their work and life.", "They depend on email to structure their communications as a way of tracking pending tasks (Bellotti et al., 2003), issue commands to their digital assistants 1 for timely task reminders (Brewer et al., 2017), and use task management applications (Bellotti et al., 2004) 2 Most of this work was done while the first author was an intern at Microsoft Research.", "In this work, we focus on tasks that are complex (Hassan Awadallah et al., 2014), and which research in micro-productivity has shown (Kirsh, 2000; Teevan et al., 2016a) may benefit from thoughtful organization.", "For example, consider Figure 1, where we show how the complex task plan a birthday party 3 can be broken down into more manageable pieces and structured by mutual temporal dependencies, in order to create an actionable plan that is simpler and more effective.", "In this paper we propose to help automate generating such actionable plans in order to reduce cognitive load on users.", "While prior research (Cheng et al., 2015; Teevan et al., 2016b) has shown the benefits of tracking and acting on micro-tasks, little effort has been expended on finding automated solutions for actually breaking down complex tasks into tractable sub-tasks.", "Thus we design a novel end-to-end solution that is capable of decomposing complex tasks and structuring sub-task dependencies.", "We model our end-to-end solution as a graph induction problem, in which we first find nodes to represent sub-tasks, then infer the temporal dependency edges between them, yielding a flow diagram like the one in Figure 1. All of this is done from unstructured text ubiquitously found on the web, making our approach general and scalable.", "In the first component (that of finding nodes), we learn to synthesize information from multiple how-to' articles across the web and generate text fragments for sub-tasks.", "In particular, we extend a state-of-the-art neural text generator by injecting it with three desiderata for these fragments: relevance (to the complex task), abstraction (by summarizing content in articles), and consensus (for appearing across multiple sources).", "In the second component (that of finding edges), we infer temporal dependencies between sub-tasks.", "Existing corpora of how-to' articles (most notably WikiHow (Koupaee and Wang, 2018)) do not contain this latent dependency structure.", "Moreover, articles in these corpora are structured and formatted consistently and uniformly, making them ill-suited to our approach, which seeks to synthesize the content of multiple heterogeneous web pages.", "We therefore devise a simple annotation framework through which we gather a new dataset of complex tasks, and their associated subtasks and mutual dependencies, from multiple how-to' web articles using non-expert crowd workers.", "Finally, we use this data to fine-tune our augmented neural text generator, as well as predict dependency edges between the sub-tasks it generates.", "In experiments, we demonstrate that our optimal solution which encodes relevance, abstraction and consensus yields significant improvements over a state-of-the-art text generator on both subtask generation and dependency prediction.", "The focus of this paper is on Complex Tasks; however, our research has impact beyond intelligent task management.", "For example, learning to decompose complex natural language expressions could have impact on complex question answering (Chali et al., 2009; Luo et al., 2018), where question decomposition, multi-hop reasoning, information synthesis, and implicit knowledge all play an important role.", "More generally, the ability to model mappings between short text fragments and elements in multiple documents could benefit research in areas such as topic-focused multi-document summarization (Wan et al., 2007) and event timeline extraction of evolving news stories (Do et al., 2012).", "In summary, our key contributions are", "(i) building an end-to-end pipeline for complex task decomposition as a graph induction problem from unstructured text;", "(ii) constructing a new dataset for complex tasks that contain sub-tasks as well as the temporal dependencies between them; and", "(iii) extending a neural text generator by injecting signals for relevance, abstraction and consensus, thereby making it more capable at tackling task decomposition.", "We begin by defining some key concepts.", "We refer to a task as a text fragment that represents a goal people want to track, remind themselves of, or learn how to do; for example, buy a Christmas present, eat healthier or change a tire .", "In order to disambiguate the intent of tasks (consider the fragment Harry Potter , which could equally refer to read [the book] or watch [the movie] ), we scope our investigation to tasks that contain at least one verb.", "A task is considered as a complex task when two or more individual steps themselves also worth tracking, remembering or learning how to do need to be performed in its completion.", "Therefore, plan a birthday party , which involves creating a guest list and buying food and beverages (see Figure", "1) is a complex task.", "While throw out the trash is not such a complex task, even though it may involve opening the front door and walking to the trash bins .", "We refer to the individual steps of a complex task as sub-tasks .", "Sub-tasks may sometimes depend on other subtasks being completed before they can be tackled.", "Consider the example from Figure 1 again, which illustrates how one must set up a time and make a guest list before send(ing) out invitations.", "We refer to these relations as temporal dependencies , and are pair-wise notated as sub-task B depending on ( ) A. Given these key concepts, we define a complex task graph as follows.", "Let the sub-tasks of a complex task t be denoted by ST ( t ) = { s i } ni =1 .", "Then define G s ( t ) = ( V, E ) as the complex task graph of t .", "Here G s ( t ) is a directed graph, where V = ST ( t ) is the set of sub-task nodes, and E represents the set of temporal dependency edges such that ( s i , s j ) E, s i s j .", "Given these definitions, the problem of decomposing and organizing a complex task becomes inducing a graph G s ( t ) = ( V, E ) from a complex task", "input t 4 .", "To construct the graph, the key steps are (1) generating the sub-task nodes V , and (2) inferring the temporal dependency edges E between nodes.", "We propose to do both from unstructured text.", "In particular, the web has made a large number of instructional texts on a variety of topics and activities freely available for public consumption; some of them are in purpose-built websites, such as WikiHow (Koupaee and Wang, 2018), while others appear in personal blogs, social fora, educational portals and a number of other heterogeneous sources.", "We leverage these resources to find relevant information for complex tasks.", "Specifically, given a task t , we query a search engine with the term how to t (plan a birthday party becomes how to plan a birthday party), and store the k most relevant results in a collection D k ( t ) .", "Our graph induction problem then becomes finding the optimal graph G s ( t ) = ( V, E ) given the evidence in D k ( t ) .", "We elaborate on solutions for node generation and edge inference in what follows.", "Sub-task Generation Formally, given a complex task t and a collection of relevant articles D k ( t ) , we attempt to generate the sub-tasks ST ( t ) .", "We argue that the text fragments for sub-task nodes we generate must satisfy three desiderata: (1) Relevance , so that generated sub-tasks are directly related to the complex task t .", "(2) Abstraction , because how-to' articles often explain and expand on sub-tasks.", "(3) Consensus , since sub-tasks that are cited across multiple sources are more likely to be important.", "Our model for sub-task generation builds on BART (Lewis et al., 2019), a state-of-the-art sequence-to-sequence model for text generation, and injects it with our three desiderata.", "Concretely, we make BART capable of handling multi-source input, design a custom relevance-aware cross attention layer and implement a cluster encoding technique to guide the generation process.", "Details for the model are presented in Section 4.1.", "4 We assume complex tasks as input, since the focus of our work is on their understanding and decomposition.", "We leave the problem of distinguishing complex tasks from simple ones to future work.", "Dependency Inference Given the generated set of sub-tasks V = ST ( t ) , the next step in our end-to-end graph induction solution consists of inferring the temporal dependency edges E .", "We formulate this as a binary classification problem 5 , where we attempt to predict the existence of a dependency edge ( s i , s j ) V .", "Specifically, we use the concatenated intermediate representations for sub-tasks ( s i , s j ) from our enhanced BART model and add a final linear layer to learn a binary classifier.", "We train this classifier on a new dataset of complex tasks that contains pairwise temporal dependency information (see Section 3).", "More details are given in Section 4.2.", "To build and evaluate our solutions, we need data.", "The most relevant existing dataset is WikiHow (Koupaee and Wang, 2018), which is derived from the popular how-to website.", "However WikiHow, while very useful for parts of our modeling paradigm, is ill-suited to others.", "Namely, its articles are manually curated, with consistent structure and format, making them a mis-match to the heterogeneous, noisy and free-form articles we expect to encounter on the web.", "Moreover, they contain no dependency information between sub-tasks beyond a simple numbered ordering.", "Thus, to support our problem, we need a dataset which (1) contains complex task and their sub tasks; (2) encodes dependencies between sub-tasks; and (3) the sub-tasks come from a variety of webpages.", "Note that our goal is to create a dataset that enables model generalization, rather than constructing a comprehensive knowledge base.", "Therefore, rather than exhaustively annotating sub-tasks and their dependencies, we seek only to gather labels for the most important ones.", "In what follows we will describe the step-by-step construction of our dataset, and how these steps encode our three fundamental desiderata for task-sub-task relationships (see Section 2).", "5 While edge prediction is technically a structured prediction problem, we demonstrate in this paper that even a simple approach works well; we leave more complex modelling solutions to future work.", "6 https://github.com/microsoft/ MSComplexTasks Collecting Complex Tasks and How-to Articles We begin with logs from the popular now defunct task management application Wunderlist 7 .", "These logs are privately and respectfully handled by passing them through an enterprise grade, legal-and trust-approved pipeline, which anonymizes, aggregates and scrubs all personally identifiable information.", "This yields a collection of task strings, some of which have associated sub-task metadata (not sub-tasks themselves).", "We retain those tasks which have at least one sub-task, and contain at least one verb 8 (to avoid issues with disambiguating task intent) as a candidate seed pool of complex tasks.", "It may be noted that while the logs are not publicly available, they play a minimal role in our end-to-end solution.", "Their only purpose is to seed the initial set of complex how-to queries.", "In order to find relevant articles for each complex task we trawl through a month's worth of logs from a commercial search engine using the how-to' query expansion described in Section 2.1.", "Further, in order to protect user privacy we discard queries that were issued by fewer than 5 distinct users.", "To each remaining complex task query, we associate the top-10 clicked URLs across all users for the entire month.", "Text in these webpages satisfy our notion of relevance .", "Finding Candidate Sentences for Sub-tasks Next, we create a pool of candidate sentences that we hypothesize might contain sub-tasks.", "Specifically, we use an in-house webpage parser to extract section headings and list items from the set of URLs previously collected.", "These types of text fragments often represent very short summaries that are then elaborated on in how-to' articles; we thus attempt to restrict our candidate pool to subtasks that capture the notion of abstraction .", "Finally, we also care about consensus across articles, since this allows us to retain only those sub-task which are cited in different sources and are therefore more reliably important.", "Because the same sub-task can be expressed differently in text we perform clustering on the BERT (Devlin et al., 2018) embeddings of candidate sentences and discard those clusters that only contain a single source URL.", "The remaining set of sentences form our pool of candidate sub-tasks.", "Labeling Sub-tasks and Dependencies Given the set of candidate complex task queries and their associated sub-task sentences, we ask crowd-workers to label them.", "Specifically, we guide annotators through a series of questions: (1) Is the candidate query about a task?", "A complex task?", "(2) If it is a complex task, which candidate sentences represent sub-tasks?", "(3) Does the ordering of sub-tasks matter?", "If so, assign pairwise temporal dependency labels to sub-tasks.", "We ask three workers to label each HIT, aggregating annotations by majority vote.", "Table 1 shows some examples of aggregate judgments from our annotation study.", "Recall from Section 2 that we model complex task decomposition as a graph induction problem over unstructured text.", "In what follows we will first describe our approach for sub-task node construction, followed by our method for sub-task temporal dependency inference.", "As described in Section 2, we treat sub-task finding as a text generation problem.", "While we could ostensibly frame it as a span prediction problem, this is unsuitable for our modeling paradigm.", "First, our multi-source setting means that we might potentially (and in fact want to) extract more than one text span referring to the same sub-task.", "Thus resolving identical sub-tasks and picking the best among them would require additional logic, as well as a sub-task coreference module.", "Moreover, while we could use a webpage parser to build a candidate pool of sub-task text spans (see Section 3), such a parser might be brittle and error prone or even non-existent.", "While we have human annotators to refine this pool during dataset construction, no such remedy exists at automatic inference time.", "Model Architecture Our model is based on the pre-trained text generation model BART (Lewis et al., 2019).", "This is a sequence-to-sequence neural summarizer, which consists of a bidirectional encoder over corrupted text and a left-to-right autoregressive decoder.", "To extend it to a multi-source setting, we encode each source article independently with BART's encoder (a bi-directional Transformer (Vaswani et al., 2017)), and then concatenate the output embeddings of the encoders.", "These are then fed to the decoder which generates the output sentences autoregressively.", "We treat all of Task?", "the subtasks of a given complex task as a single target document, and subtasks are thus generated as a sequence of text fragments.", "A diagram of our architecture is shown in Figure 2. We initialize our model using the parameters of the pre-trained BART-large model released with the HuggingFace Transformer library (Wolf et al., 2019).", "These parameters are then fine-tuned to our task, using the proposed architecture.", "In our work, the inputs are the textual content from URLs returned by a how-to' complex task query, and the outputs are the set of generated sub-tasks.", "Because we are learning mappings between how-to' articles and short text fragments that summarize their contents, we are implicitly learning the notion of abstraction .", "Relevance-Aware Cross-Attention As discussed in Section 2, sub-tasks need to also be relevant to complex tasks.", "In order to encode relevance we design a cross-attention mechanism that explicitly captures textual relevance.", "Specifically, we score each sentence in a set of articles based on its relevance to the complex task, then propose a general mechanism to inject this information into the text generation model.", "Given a complex task t , we denote the query expansion how to do t as q , and the collection of related articles as D k ( q ) .", "To score the relevance of each sentence s D k ( t ) , we consider two factors.", "The first is how relevant the sentence s is to the query q , and the other is how relevant the article d (with s d ) is to q .", "We denote the relevance of s as P ( s | q ) , and the relevance of d as P ( d | q ) .", "To compute P ( d | q ) , we first represent q and d as n -dimensional embedding vectors respectively using BART, denoted respectively as (cid:126)q and (cid:126)d .", "Then article-query relevance is computed as the softmax: P ( d | q ) = exp f ( d, q ) (cid:80) d (cid:48) D k ( q ) exp f ( d (cid:48) , q ) (1) where f ( d, q ) = (cid:126)q T (cid:126)d .", "The sentence-query relevance P ( s | q ) is similarly computed using the softmax over embeddings (cid:126)q and (cid:126)s .", "Notably, these embeddings are generated from a BART model fine-tuned on an auxiliary sequence classification task, where positive samples are sub-tasks of complex task t i and negative samples are randomly samples sub-tasks from other complex tasks t j ( j (cid:54) = i ) .", "Given P ( s | q ) and P ( d | q ) , we need a way of injecting them into our generation model.", "The BART decoder performs cross-attention over the final hidden layer of the encoder to determine how much focus to place on other parts of the input sentence, as words in specific positions are encoded.", "Our model should not only pay attention to the inputs, but should additionally pay more attention to those are more relevant.", "We therefore define our relevance-aware cross-attention as follows: (cid:18) Q ( x ) K ( w ) T + p ( s | q ) p ( d | q ) d k (cid:19) V (2) where ( ) represents the softmax function, x is a token in the output sub-task, w s d is a token in the input article, Q, K and V are query, key and value representations, d k is the dimension of the key vector, and is a learnable parameter which controls the importance of our relevance injection.", "Cluster Encoding The final desiderata for our model is consensus , or the ability to be able to recognize and reward sub-tasks that are mentioned across multiple articles.", "To encode this signal we create a cluster embedding , which identically represents sentences that refer to the same sub-task across sources.", "To generate the new cluster embeddings, we first embed the query q as well as the set of sentences s D k ( t ) into n -dimension vectors using BART.", "Then we cluster the embeddings of all sentences s with KMeans, ranking the clusters by the proximity of their centroid to the embedding of q .", "Finally, using a formulation similar to the positional encodings from Vaswani et al. (2017) we define cluster encoding as: P E c ( s, 2 j ) = sin (cid:0) r i 10000 2 j/d model (cid:1) P E c ( s, 2 j +1) = cos (cid:0) r i 10000 2 j/d model (cid:1) (3) where r i is the rank of the cluster that s belongs to, and the symbols j s.t. 1 j n are dimensional indices; specifically 2 j , and 2 j + 1 represent indices into the clusters' embeddings.", "For instance, if the cluster embedding were de-fined as a vector of length 512 , then 2 j (resp. 2 j + 1 ) represent the 0 th, 2 nd, 4 th, ..., 510 th (resp. 1 st, 3 rd, 5 th, ..., 511 th) index positions of the vector.", "These notations are identical to the ones used in the original by Vaswani et al. (2017) in their definition of positional encoding.", "This formulation allows our model to identify tokens that belong to similar sentences, and are injected into the extended BART model as an additional input.", "In our work, we treat inferring dependencies between sub-tasks as a binary classification problem.", "Specifically, we learn a classifier capable of predicting the existence (or not) of a temporal dependence s i s j between all possible ordered pairs of generated sub-tasks ( s i , s j ) E .", "We use the same extended BART architecture previously proposed in Section 4.1, and concatenate the intermediate representations for s i and s j to yield an output representation for a pair of subtasks.", "Then we add a linear layer on top of this output to predict a binary label for the existence of a temporal dependency edge.", "Similar to sub-task generation, we hypothesize that finding consensus across articles may also be helpful in determining the dependencies between sub-tasks.", "Therefore, we also use cluster encodings in infering edge dependencies.", "In this section we aim to answer the following research questions:", "RQ1 Can we accurately and automatically generate sub-tasks given an input complex task?", "RQ2 Can we correctly identify the temporal dependencies between sub-tasks?", "In Section 3, we introduced a new Complex Task Dataset (CTD).", "In addition to this data, we also develop a new variant of WikiHow (Koupaee and Wang, 2018) dataset better suited to our modeling paradigm (which we call WKH-R), for larger scale development and evaluation.", "We describe both in what follows.", "Recall that WikiHow (Koupaee and Wang, 2018) can be interpreted as an abstractive summarization dataset, where the source are the textual content webpage bodies, and section headings are the summaries.", "In our problem, webpage titles for articles can be treated as complex tasks, and the section headings as subtasks.", "Even though WikiHow does not encode dependencies between subtasks (other than strict numerical ordering of sec-tions), we could ostensibly exploit the relationship between page titles and section headings to learn to generate sub-tasks from complex tasks.", "However, as we have previously argued (see Section 2.1), directly using WikiHow articles as the only source for training our model would make it brittle, and prone to fail on free-form content found in other heterogeneous sources on the web.", "Therefore, we propose an extension to WikiHow, which notably does not require human annotation and is created from information already present in WikiHow.", "In the extended WKH-R dataset, we do not treat the original WikiHow article as the solitary source.", "Instead, we conjecture that the WikiHow article is itself written by compiling information from multiple resources, and gather those sources directly instead.", "Concretely, many WikiHow articles contain a set of references that are cited by authors as sources, and we use these webpages as a collection of multiple sources instead of the content of WikiHow article itself.", "As references link to a diverse range of URLs on the web, our model learns to be more robust to structural and stylistic variety.", "The new extension of WikiHow in this form is fully capable of supporting learning and evaluation of sub-task generation from complex task queries, as modeled with the architecture outlined in Section 4.1.", "Specifically we use WikiHow page titles as complex tasks, section headings as sub-tasks, and reference webpages in WKH-R as multi-source articles.", "In the construction of WKH-R, we only retain articles that have more than one valid reference URL.", "Overall, we compile a dataset consisting of 7832 webpages, each corresponding to a distinct complex tasks, its own sub-tasks and reference articles.", "On average, each complex task has 12.9 individual sub-tasks, while citing 2.9 different references.", "In our experiments, we use 3916 instances for training, 1566 for validation, and set aside the remaining 2350 for testing.", "In Section 3, we already described how we created the MSComplexTasks dataset, containing among other signals, the temporal dependency relation between sub-tasks.", "Here, we briefly summarize some of the characteristics of this dataset.", "Overall, we collected sub-tasks and their dependencies for 430 complex tasks from an initial candidate pool of 2000 tasks.", "Many tasks were discarded, either because they were deemed not complex, or no subtasks candidates were confirmed by a majority of annotators.", "While this set may appear small, it may be noted that each instance in the data is a rich, structured object.", "On average, each complex task has 7.3 sub-tasks, references 2.4 webpages, and encodes 11 pairs of temporal dependencies between tasks.", "We use 215 instances for training, 86 for validation and 129 for testing in our experimental evaluation.", "We compare our full modeling approach (see Section 4) against several strong baselines.", "These include the base BART model, itself a state-of-the-art text generator which we use as a black-box single-document summarizer since all our variants are build on top of it.", "We also include a different state-of-the-art text-to-text model, T5 (Raffel et al., 2019) in our comparison, in order to demonstrate our that our modeling technique not only improves over the base BART variant, but other approaches to text generation in the literature.", "In terms of our modeling variants we compare against an extension of BART that additionally capture multiple sources ( MSBART ).", "We also include two variants that inject the MSBART model with our custom relevance and consensus encodings respectively, which yield the MSBART-R and MSBART-C baselines.", "Finally, we denote our full model, consisting of multiple sources, relevance and consensus as MSBART-F.", "In our evaluation of sub-task generation we report Rouge-1, Rouge-2, Rouge-L (Lin, 2004) and pairwise BERTScore (Zhang et al., 2019) metrics to compare performance across models.", "When reporting Rouge scores, we treat the sub-tasks as a generated summaries, and the reference sub-tasks as the target summaries.", "In addition to the document level Rouge summarization metric, we also leverage BERTScore to compute a sentence level evaluation number.", "Specifically, we first compute the best mapping between generated sub-tasks ( GS ) and the target subtasks ( T S ) via BERTScore ( BS ), then report their corresponding precision and recall.", "The BERTScore based precision and recall are computed as follows: P r ( T S, GS ) = (cid:88) s GS max s (cid:48) TS (cid:0) BS ( s, s (cid:48) ) (cid:1) | GS | Rc ( T S, GS ) = (cid:88) s TS max s (cid:48) GS (cid:0) BS ( s, s (cid:48) ) (cid:1) | T S | (4) Meanwhile, in our evaluation of dependency inference we compare our full modeling solution MSBART-F against the single-source BART baseline and the multi-source MSBART variant.", "In this experiment we report accuracy as the sole evaluation metric.", "To answer RQ1, we demonstrate the performance of our full modeling solution, MSBART-F, when compared against the set of variant baselines on the problem of generating sub-tasks from a given complex task.", "In particular we present results on Rouge and pairwise BERTScore metrics for both WKH-R and CTD datasets.", "The results for Rouge and pairwise BERTScores are summarized in Table 2 and Table 3 respectively.", "We can observe from the tables that extending the problem setting from single source to multi-source considerably improves performance.", "Meanwhile injecting signals for relevance and consensus can each further improve upon MSBART, and the full solution achieves the best performance on almost every combination of dataset and evaluation metric (the only exception being Rouge-1 on WKH-R).", "Thus in answering RQ1 , we conclude that the proposed MSBART-F model automatically generates the highest quality sub-tasks when compared against several state-of-the-art variant baselines.", "In order to answer RQ2, we report the results of inferring temporal dependencies among sub-tasks, using accuracy as a measure of performance.", "These results are shown in Figure 3. We observe that MSBART, which leverages information from multiple sources, improves upon the accuracy of the single-source BART model.", "Meanwhile, our full MSBART-F model achieves the best performance overall.", "Thus in response to RQ2 , we conclude that our proposed MSBART-F model infers dependencies between sub-tasks with an accuracy higher than comparative variants.", "Notably, the prediction accuracy of 0 .", "779 , we believe represents a reasonably strong first attempt at the edge inference component of our graph induction solution.", "In answering both RQs 1 and 2, we note particularly that our key insights for injecting our models with the capability for encoding relevance, abstraction and consensus lead to consistently improved results in complex task decomposition and organization.", "Recent advances in natural language understanding techniques (Devlin et al., 2018; Lewis et al., 2019) have sparked rapid progress in facilitating intelligent task organization, beginning with early work on contextual reminders (Kamar and Horvitz, 2011; Graus et al., 2016) to more advanced applications on estimating task duration (White and Hassan Awadallah, 2019), detecting already completed tasks (White et al., 2019), highlighting actionable micro-tasks (White et al., in press), and automatic task extractions from emails (Mukher-jee et al., 2020).", "Meanwhile, task planning remains one of the most challenging and cognitively-demanding activities in task management (Kirsh, 2000).", "Prior studies have shown that breaking down complex tasks positively influences productivity (Cheng et al., 2015; Teevan et al., 2016a,b).", "However, to the best of our knowledge, few methods have been proposed to tackle this problem automatically and at scale.", "The one exception is Hassan Awadallah et al. (2014) who explore complex search task understanding.", "Notably, however, their work reasons only over search logs rather than over the unstructured content of webpages.", "Furthermore, the purpose of their effort is subsequent query recommendation rather than the full complex task decomposition and structuring we propose in this paper.", "Thus our work is distinguished from prior research by being the first to attempt automatically decompose and organize complex task from unstructured text, in an end-to-end and scalable manner.", "One of the primary hurdles for research on complex tasks was the lack of suitable data, particularly with respect to temporal dependency between subtasks.", "We remedy this by collecting a novel dataset in this paper, which we hope will spur future research in the area.", "In this paper we have tackled the novel problem of decomposing and organizing a complex task from unstructured text.", "We devised an end-to-end solution that formulated this problem as graph induction in two stages.", "The first consisted of finding nodes to represent sub-tasks by parsing multiple how-to' articles on the web and extracting key text fragments from them.", "Notably, we framed three desiderata for finding these fragments relevance, abstraction and consensus and built a custom neural architecture to encode these properties by extending a state-of-the-art text generation system.", "In the second stage we designed a crowd-sourcing study to collect a new dataset of complex tasks, consisting of their sub-tasks and the temporal dependency relations between them then used this dataset to generate sub-task nodes as well as infer the edges between them.", "In evaluations of both stages we demonstrated the efficacy of our approach by significantly outperforming the state-of-the-art text generator that we extended.", "This work opens several avenues for future research.", "In this paper, we have assumed a complex task as given input; we plan to extend our pipeline with the ability to distinguish complex from simple tasks.", "This extension in turn will allow us to expand the scope of our current system by allowing for recursive task decomposition and organization.", "Meanwhile, although our novel Complex Task Dataset proved a useful resource for modeling sub-task dependency inference, it remains quite small; we hope to increase its size considerably in future work, in order to make it more useful to the broader research community.", "We also hope to conduct human evaluations of generated sub-tasks in order to gauge their coherence and utility to complex tasks.", "Finally, we hope to test our system in practical, downstream usage by studying the productivity impact of automated task decomposition and organization on real users in their daily lives.", "The authors would like to thank Asli Celikyil-maz and Chirag Shah for insightful discussions and suggestions on this project during the internship.", "This work was partly supported by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA), and by the office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the BETTER Program." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "method", "abstain", "abstain", "result", "result", "objective", "abstain", "objective", "objective", "objective", "objective", "result", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "objective", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "objective", "abstain", "objective", "abstain", "objective", "method", "method", "other", "other" ]
[ "The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer.", "While many models have recently been proposed for LFQA, we show in this paper that the task formulation raises fundamental challenges regarding evaluation and dataset cre-ation that currently preclude meaningful modeling progress.", "To demonstrate these challenges, we first design a new system that relies on sparse attention and contrastive retriever learning to achieve state-of-the-art performance on the ELI5 LFQA dataset.", "While our system tops the public leaderboard, a detailed analysis reveals several troubling trends: (1) our system's generated answers are not actually grounded in the documents that it retrieves; (2) ELI5 contains significant train / validation overlap, as at least 81% of ELI5 validation questions occur in paraphrased form in the training set; (3) ROUGE-L is not an informative metric of generated answer quality and can be easily gamed; and (4) human evaluations used for other text generation tasks are unreliable for LFQA.", "We offer suggestions to mitigate each of these issues, which we hope will lead to more rigorous LFQA research and meaningful progress in the future.", "1 1 Introduction Long-form question answering (LFQA) integrates the retrieval component of open-domain QA, which involves searching a large external knowledge source for documents relevant to a given question, with a text generation component to produce paragraph-length answers.", "Significant progress has been made on open-domain QA datasets such as Natural Questions (Kwiatkowski et al., 2019), * Work done during an internship at Google Research.", "whose questions are answerable with short phrases and entities, by leveraging dense retrieval techniques like ORQA (Lee et al., 2019), REALM (Guu et al., 2020), and DPR (Karpukhin et al., 2020; Lewis et al., 2020c; Izacard and Grave, 2020).", "Methods inspired by these results have recently been combined with pretrained language models (Lewis et al., 2020b; Petroni et al., 2020) and applied to the Reddit-derived Explain Like I'm Five (ELI5) dataset (Fan et al., 2019), which is the only publicly-available large-scale LFQA dataset.", "The recently proposed KILT benchmark (Petroni et al., 2020), which compares retrieval-augmented models across a variety of knowledge-intensive tasks including ELI5, automatically evaluates LFQA models by the quality of both generated answers (ROUGE-L against reference answers) and retrieved documents (R-precision against human-annotated relevant documents).", "In this paper, we build a state-of-the-art system 2 for ELI5 by using a sparse Transformer variant (Roy et al., 2020) to condition over Wikipedia paragraphs returned by a REALM-style retriever (Guu et al., 2020).", "However, despite its success on the KILT leaderboard, our system does not actually use the documents that it retrieves!", "To measure the effect of retrieval on generation quality, we design a control experiment in which retrieved documents are replaced with randomly-sampled documents at inference time.", "Results from both human A/B tests and automatic metrics like ROUGE-L demonstrate that conditioning on random documents has almost no effect on generated answer quality (Figure 1c).", "We recommend that future LFQA research report the results of such control experiments in addition to reporting generation and retrieval quality.", "form well on ELI5?", "Our analysis reveals that this result is partially due to significant train / validation overlap in the ELI5 dataset (Figure 1a), which eliminates the need for external retrieval.", "A human study shows that at least 81% of validation questions have a paraphrase in the training set, and almost all validation questions are topically similar to a training set question.", "While Fan et al. (2019) attempted to identify and remove question overlap using TF-IDF similarity, more complex semantic matching methods & human verification is needed to address this issue in future LFQA datasets.", "Digging deeper, we identify fundamental issues with using ROUGE-L to evaluate generated answer quality (Figure 1b).", "Simple baselines such as just repeatedly copying the question, or choosing a random training set answer, can outperform LFQA systems such as RAG (Lewis et al., 2020c) in terms of ROUGE-L.", "On the other hand, our system achieves higher ROUGE-L than reference human-written answers, which is misleading since human A/B testers strongly prefer reference answers to our system's.", "We conclude that ROUGE-L is not a reliable metric to evaluate LFQA due to its large and relatively unconstrained output space (e.g., compared to translation or summarization), and we offer suggestions for better automatic & human evaluations to enable meaningful progress on this task.", "The ELI5 task (Fan et al., 2019) asks models to generate paragraph-length answers to open-ended questions in English that often rely on world knowledge (e.g., how do jellyfish function without brains or nervous systems? ).", "LFQA systems thus benefit from conditioning answer generation on relevant documents from the web (such as the Wikipedia article about jellyfish ).", "While large-scale pretrained language models store surprising amounts of world knowledge within their parameters (Petroni et al., 2019; Roberts et al., 2020), external document retrieval not only augments this intrinsic knowledge but also grounds model outputs in a knowledge source, which provides interpretability.", "In this section, we describe our proposed LFQA system, which conditions answer generation on Wikipedia articles identified by a pretrained retriever.", "We use a dense retriever trained by scaling up a distantly supervised algorithm from Jernite (2020).", "Since retrieved articles can be quite long and often exceed the maximum sequence length of pretrained models like BERT (Devlin et al., 2019), we use a sparse-attention variant of the Transformer to allow modeling over longer sequences.", "While our system sets a new state-of-the-art on ELI5, we question the significance of this result in Section 3.", "We begin by specifying our dense retriever (con-trastive REALM or C-REALM), which returns documents related to an input question.", "Consider a corpus of long-form questions and answers, represented by ( q i , a i ) Ni =1 .", "Our retriever uses q i as a query to retrieve K documents ( r i,j ) Kj =1 from a knowledge corpus (Wikipedia), which is enabled by an encoder network that projects both questions and candidate documents to a 128d shared embedding space.", "Like REALM (Guu et al., 2020), our encoder is a BERT-base Transformer (Devlin et al., 2019) with a final projection layer.", "Since the ELI5 dataset does not include gold retrievals, we train our retriever by scaling up a method recently introduced by Jernite (2020) that uses gold answers for distant supervision.", "The key idea is to push the encoded vector for a question close to a vector representation of its ground-truth answer(s), but away from all other answer vectors in the mini-batch (negative examples).", "Intuitively, this method works because both ELI5 answers and external documents are of paragraph length (documents are paragraph-length chunks from Wikipedia).", "Concretely, we optimize the loss, loss = (cid:88) ( q i ,a i ) B log exp q i a i (cid:80) a j B exp q i a j where B is the mini-batch and q i , a i are the encoded vector representations for ( q i , a i ) .", "This objective is based on contrastive learning, a method that has been used effectively for semi-supervised learning (Chen et al., 2020) and dense retriever training (Karpukhin et al., 2020).", "Scaling up from Jernite (2020), who used a mini-batch size of 512 and initialized their retriever with BERT, we use much large mini-batches of size 12,288 (and hence, many more negative examples) and initialize our retriever with a strong pretrained retriever, the REALM model (Guu et al., 2020) trained on the Common Crawl News (CC-News) corpus.", "These design decisions greatly improve retriever quality, as we observe in an ablation study (see Appendix A.2).", "During inference, we perform a maximum inner-product search (MIPS) with the ScaNN library (Guo et al., 2020) to efficiently find the top K documents.", "In all our experiments we use K = 7 , following the setup in Guu et al. (2020).", "We next describe our generator model, which conditions its generated answers on retrieved documents returned by C-REALM.", "We use the Routing Transformer (RT) from Roy et al. (2020), which is the current state-of-the-art in long-form language modeling.", "The RT is a sparse attention model that employs local attention as well as mini-batch k -means clustering to better model long-range dependencies in sequences (attention maps in Appendix A.1).", "Long-form language models such as RT are well-suited to ELI5 as the task requires conditioning answer generation not only on a short question but also many lengthy retrieved documents.", "We pretrain our RT model on PG-19, a longform language modeling benchmark (Rae et al., 2020) created from approximately 28,000 Project Gutenberg books published before 1919.", "PG-19 has 1.9B tokens and an average context size of 69K words.", "While this data is out-of-domain for ELI5, we choose it to encourage long & coherent generation.", "Our RT is a 22-layer model with 1032 hidden units (486M parameters), maximum sequence length of 8192 tokens, and a vocabulary of 98K subwords.", "3 We fine-tune our model in a decoder-only fashion (Liu et al., 2018; Wolf et al., 2018) by concatenating the top K retrieved documents to the question [ r i,K , r i,K 1 ... r i, 1 , q i , a i ] and training the model to predict tokens of the answer a i .", "We do not backpropagate gradients through the retriever.", "4 Retrievals slightly improve perplexity (18.1 vs 17.8) as seen in Wang and McAllester (2020), but do not improve generations (3.1).", "Dataset & Evaluation details : We evaluate our model on the KILT validation & test subsets of ELI5 (Petroni et al., 2020), since the original ELI5 dataset does not have human annotations to measure retriever performance.", "We downloaded the ELI5 dataset (Fan et al., 2019) from the KILT Github repository.", "5 This version of the dataset has 272,634 training examples, 1,507 validation examples and 600 test examples.", "The test set answers 3 Our hyperparameters have been chosen manually with minimal tuning.", "See Appendix A.1 for details.", "4 We tried training the retriever jointly with RT using the attention bias scheme proposed in MARGE (Lewis et al., 2020a).", "This improved perplexity only in autoencoding settings where the gold answer itself is used as a retrieval query (like the setup in Lewis et al., 2020a), which is not valid in LFQA.", "5 github.com/facebookresearch/KILT Retrieval Generation Model RPr.", "Answer quality is measured by the maximum overlap of generations with a set of gold answers in terms of unigram F1 score and ROUGE-L (Lin, 2004).", "Petroni et al. (2020) collected human annotations of Wikipedia articles which support ELI5 gold answers, which enables measuring retrieval quality by computing R-precision (if the top-1 retrieval matches the annotation) and Recall@5 using the top-5 retrievals.", "Finally, the KILT benchmark combines R-prec.", "and ROUGE-L to measure the overall performance of the system by KILT ROUGE-L.", "This metric is similar to ROUGE-L, but assigns a score of 0 whenever the top-1 retrieval does not match the gold annotation.", "Baselines : We compare our model with the other entries on the ELI5 KILT leaderboard which are either generation-only, like T5-base (Raffel et al., 2020) and BART (Lewis et al., 2020b), or variants of BART using retrieval such as RAG (Lewis et al., 2020c) and BART + DPR (Petroni et al., 2020).", "These systems are based on massive pretrained language models, with similar number of parameters as our model (details in Appendix A.3).", "Results : Table 1 contains our results on the test set of the ELI5 (also on the public KILT leader-board).", "We present four variants of our system, using a different retriever during inference (REALM or C-REALM), and different nucleus sampling p values (Holtzman et al., 2020).", "All variants outper-Q: Why are almost all boats white?", "form prior work in generation quality, with lower-entropy models ( p = 0 . 6 ) performing best.", "6 CREALM performs competitively to RAG and DPR despite being only distantly supervised, and outperforms REALM.", "Our proposed RT+ C-REALM system achieves a new state-of-the-art on combined performance (KILT R-L).", "Generations from our model are provided in Figure 2 and Appendix A.4.", "In this section, we conduct a thorough analysis of our model's usage of retrievals (Section 3.1), the impact of overlap in ELI5's train / validation / test folds (Section 3.2), issues with ROUGE-L and performance bounds (Section 3.3), and the difficulty in human evaluation for this task (Section 3.4).", "At the end of each section, we provide short takeaways with suggestions for future work.", "While our retrieval-augmented system achieves state-of-the-art performance, we find little evidence that it is actually using the retrieved documents.", "To measure this, we run an ablation study where at inference time we replace retrieved paragraphs with 6 As in Holtzman et al. (2020), a human study reveals that higher entropy ( p = 0 . 9 ) answers are slightly more coherent and sensible, but lower entropy answers ( p = 0 . 6 ) are more relevant to the question (details in Appendix A.5).", "randomly sampled paragraphs from Wikipedia.", "We compare this Random baseline with our original system ( Predicted ) in terms of generation quality as well as the n -gram overlap between the generation and the retrieved paragraphs.", "retrievals : We present our results in Table 2.", "Despite not being conditioned on any meaningful retrievals, the Random retrieval model has similar ROUGE-L scores as our Predicted system.", "Moreover, generations from the Random and Predicted models have similar amounts of 1-gram and 2-gram overlap with the paragraphs retrieved by CREALM, despite the fact that the Random model does not actually see the retrieved paragraphs.", "7 The n -gram overlaps are possibly overestimates due to stopwords (e.g., prepositions, punctuation) and entities which are copied from the question.", "To tackle this issue, in Table 4 we measure the fractions of lemmatized nouns, proper nouns and numbers in the generated answer which are present in the predicted retrievals but not in the question.", "We notice similar trends as before, with only small differences between the two systems.", "Finally, there is almost no correlation (Spearman = 0 . 09 ) between the Predicted model's generation quality and the amount of unigram overlap between its outputs and the retrieved documents (scatter plots in Appendix A.7), strengthening our hypothesis that generations are not grounded in retrievals.", "8 Human evaluation validates our findings : As ROUGE-L and n -gram overlap have major limitations for LFQA (Section 3.3), we perform additional human A/B testing on the output of Random and Predicted .", "Specifically, we ask human volunteers 9 to choose between answers generated by the two systems (presented in random order).", "As seen in Table 3, humans struggle to choose which of the two answers is more relevant to the question.", "For both model variants ( p = 0 . 6 , 0 . 9 ), there is a less than 7% preference for a particular answer type, with humans preferring answers (by 6%) from the Random model for p = 0 .", "9 !", "Other systems also have this issue, possibly due to source-reference divergence and train-validation overlap : We note that this issue is not unique to our system other systems on the KILT leaderboard like BART + DPR and RAG actually perform worse than their no-retrieval counterpart (BART) in generation quality, as 8 All these trends persist even on questions for which our retriever predicts the ground-truth document (Appendix A.7) 9 Details of our experimental setup in Appendix A.5.", "shown in Table 1.", "Qualitatively, we found no evidence of retrieval usage in a publicly hosted ELI5 model demo by Jernite (2020).", "10 A possible explanation for this issue is high source-reference divergence, a common problem in table-to-text generation (Wiseman et al., 2017; Tian et al., 2019).", "In Table 2 and Table 4, we measure the n -gram overlap of top-ranked gold validation answers ( Gold Ans ) with predicted retrievals.", "This overlap is low and similar to that of our generations, which we suspect encourages our model to ignore retrievals.", "A second explanation is the large amount of train-validation overlap (Section 3.2), which eliminates the need for retrieval.", "systems despite not using retrievals?", "While our model has similar capacity as the BART/RAG baselines (comparison in Appendix A.3), we hypothesize that our improvements in ROUGE-L are due to a different pretraining objective.", "BART is pretrained on a masked infilling task on short sequences.", "Instead, we pretrain our model to perform next-word prediction on long sequences from Project Gutenberg, which encourages long & fluent generations.", "To illustrate this length effect, in Appendix A.6 we show that truncated outputs from our model get lower ROUGE-L scores on ELI5.", "11 Prior summarization literature (Sun et al., 2019) has also shown that ROUGE scores vary heavily by length.", "To compare the same systems on shorter length outputs, we also tried finetuning the pretrained model on Wizard of Wikipedia (Dinan et al., 2019), an unconstrained dialogue generation task with single sentence dialogues (much shorter than ELI5).", "As seen on the public KILT leaderboard, 12 our system has lower ROUGE-L scores than the BART / RAG baselines.", "Another possible explanation is issues with ROUGE-L itself, as discussed in Section 3.3.", "Takeaway (better evaluation of grounding) : For evaluating LFQA, it is important to run control experiments with random retrievals & measure grounding of generations in retrieval.", "While the KILT benchmark does attempt to measure the com-10 https://huggingface.co/qa 11 While we do not have access to generations from baselines on the KILT leaderboard, example generations from the demo of the BART model in Jernite (2020) are significantly shorter (59 words avg.) than our generations (187 words avg.).", "12 https://eval.ai/web/challenges/ challenge-page/689/leaderboard/1909 bined retrieval + generation performance via KILT RL, it does not check whether the generations actually used the retrievals.", "In other words, one can submit independent retrieval & generation systems, but still perform well on the combined score.", "This may not be an issue for short-form QA tasks like Natural Questions, since the gold answer is often exactly contained as a span in the gold retrieval.", "Also, as retrieval might be less important for large language models with parametric knowledge (Roberts et al., 2020), the KILT-RL strategy of simply aggregating top-1 retrieval score with ROUGE-L unfairly penalizes systems not relying on retrieval.", "13 3.2 Training / Validation Overlap Our experiments in Section 3.1 show that model performance is mostly unchanged by conditioning generation on randomly sampled retrievals instead of predictions from C-REALM.", "Despite not using retrievals, we observe qualitatively that our model displays a large amount of parametric knowledge (Faraday Cage in Figure 1c), which is surprising since it was pretrained on novels from Project Gutenberg (not Wikipedia).", "In this section, we discover that a major reason for ignoring retrievals is the large amount of train / validation overlap in ELI5.", "While Fan et al. (2019) attempted to fix this issue through TF-IDF overlap, this method is insufficient to identify all question paraphrases, as we find significant overlap between the training set and the KILT validation set of ELI5.", "14 ELI5 is not the only dataset with substantial train / test overlap: Lewis et al. (2020d) identify similar issues with short-form QA datasets like Natural Questions.", "Finding similar questions & measuring overlap : We use our retriever C-REALM to retrieve similar questions from the training set, since it has learned to map questions to a feature-rich embedding space.", "For each validation question, we retrieve the 7 most similar training set questions.", "We use both human and automatic evaluation to calculate the amount of overlap.", "For human evaluation, we show annotators on Amazon Mechanical Turk 15 a validation set question and a retrieved training set question, 13 Another issue of KILT-RL is ignoring non top-1 retrievals, penalizing models using multiple retrievals together in context.", "14 The ELI5 demo from Jernite (2020) also retrieves the top-1 similar training set question.", "Qualitatively, we found many validation examples had near-identical train paraphrases.", "15 We pay workers 4 cents per question pair ($8-12 / hr).", "We only hire workers from USA, UK and Australia with a 95% or higher approval rating and at least 1000 approved HITs.", "and ask them to annotate the pair as 0 : No paraphrase relationship; 1 : on similar topics, but different questions; 2 : approximately the same question (an adaptation of the paraphrase evaluation of Kok and Brockett, 2010).", "We take 300 validation set questions and ask three crowd-workers to rate them against retrieved training questions on this scale, and consider the label with majority rating.", "To improve quality, we manually verify their annotations.", "Table 5 shows that 81% of validation set questions have at least one paraphrase in the training set, while all annotated questions have at least one topically similar question in the training set, which indicates substantial training / validation overlap.", "The experiment had fair agreement with a Fleiss of 0.29 (Fleiss, 1971; Landis and Koch, 1977).", "As manually annotating question overlap can be expensive and time-consuming, we also experiment with automatic overlap detection methods.", "In particular, we use a RoBERTa-large binary classifier (Liu et al., 2019) fine-tuned on the Quora Question Paraphrase (QQP) dataset (Iyer et al., 2017) from the GLUE benchmark (Wang et al., 2019).", "For 43.6% of the ELI5 validation set, this classifier marked at least one retrieved question as a paraphrase (46% for the 300 questions we annotated).", "Qualitatively, we notice that this classifier often mis-classifies retrieved questions that are valid paraphrases but exhibit significant lexical or syntactic divergence.", "This observation, along with the smaller fraction of valid paraphrases in the QQP training set (37%), partially explains the gap between automatic & human evaluations.", "Using retrieved QA for generation : Since ELI5 contains significant amount of overlap between the training and validation sets, a system can simply copy the answers of retrieved training set questions instead of actually doing generation.", "Table 7 shows that by using the longest answer within the topK retrieved questions, we outperform two prior systems (RAG, BART + DPR) that use retrieval-augmented generation.", "As an upper Retrieval Generation Split RPrec R@5 F1 R-L QQP classifier (1.5k examples) overlap (43.6%) 17.0 25.8 26.0 24.6 not overlap (56.4%) 10.4 17.7 25.2 24.2 AMT evaluation (300 examples) overlap (81%) 14.0 20.0 25.0 24.3 not overlap (19%) 5.3 17.9 24.5 24.8 Table 6: ELI5 performance difference (for the p = 0 . 6 model) between subsets of validation QA having a question paraphrase (overlap) and not having a question paraphrase (not overlap) in the training set.", "bound, we also consider a system which uses the best possible answer to retrieved training set questions in terms of ROUGE-L ( best top-K train answer ).", "This system gets 28.5 ROUGE-L, outperforming all others.", "ELI5 performance on overlapping QA : Finally, we measure the performance difference between validation questions that overlap with the training set vs. those that do not.", "Since we only have human annotations for 300 questions (the no-overlap subset has only 53 samples), we present this analysis using the QQP classifier's outputs as well.", "In Table 6, we notice large differences of 6.6 RPrec, 8.1 R@5 in retrieval performance favoring the overlap subset, but only a small generation score gain of 0.8 F1, 0.4 R-L (which may be misleading as discussed in Section 3.3).", "Takeaway (careful held-out curation) : Based on our findings, we suggest that more careful dataset curation for LFQA tasks is needed to prevent duplicates.", "While we acknowledge the efforts of Fan et al. (2019) to fix this issue, we also suggest alternative methods to control overlap and focus on evaluating generalization in held-out sets: (1) automatically retrieving paraphrases and then running human validation to eliminate them; or (2) holding out entire genres or domains to reduce the possibility of overlap for example, keeping Q/A on Sports only in the held-out sets.", "Note that simply pruning the existing splits using these criteria will significantly reduce the size of the held-out datasets; so we suggest re-splitting the train/validation/test splits from the entire pool of collected questions.", "We have seen that simply copying the answer of a close question paraphrase from the training set achieves 28.5 ROUGE-L with an optimal selection among retrieved questions and outperforming all computational models.", "But how good is this absolute number?", "What are some suitable upper & lower bounds to ROUGE-L scores on ELI5?", "Is ROUGE-L an informative metric for LFQA?", "Lower bounds are trivial baselines used to test the vulnerability of datasets or metrics to simple heuristic strategies that do not actually perform the task.", "Recent examples include hypothesis-only baselines for natural language inference (Gururangan et al., 2018) and passage-only baselines for reading comprehension (Kaushik and Lipton, 2018).", "We evaluate two ROUGE-L lower bounds on ELI5: (1) copy the question 5 times and concatenate, as longer outputs boost ROUGE-L (Appendix A.6); (2) retrieve a random training set answer.", "Our first baseline contains entities often present in the gold answer, but without actually answering the question.", "Our second baseline follows the style of an answer but is completely off-topic.", "As an upper bound , we estimate the ROUGE-L of gold answers themselves.", "On an average, there are 12 gold answers per question, so we measure the ROUGE-L of the longest gold answer with respect to the other gold answers.", "We also measure the maximum pairwise ROUGE-L between two gold answers for the same question.", "16 We only calculate upper bounds for the validation set, since the gold answers of the KILT test set are hidden.", "Lower bounds beat prior work, upper bounds have low ROUGE-L : We compare our bounds with actual retrieval augmented generation systems in Table 7.", "Both our lower bounds ( random training answer , copy input ) are quite competitive, outperforming RAG (Lewis et al., 2020c) and performing close to BART + DPR (Petroni et al., 2020) without actually answering the question!", "This shows that ROUGE-L is fairly sensitive to simply copying entities from the question 16 Note that different gold answers were not written independently as Reddit users writing answers can read existing answers and may want to provide a non-overlapping perspective.", "Due to the high train/valid overlap, the best top-7 retrieved answer could be a better upper bound since it is from another Reddit post (and performs better than best gold answer ).", "as well as stylistic properties of ELI5.", "On the other hand, upper bounds ( longest gold answer ) perform worse than our system (21.2 vs 24.4).", "Suspecting that this result is misleading, we run another human A/B test by showing volunteers a question and asking them to choose between answers generated by our system and the longest gold answer , shuffled at random.", "17 As seen in Table 3, the majority of humans prefer the gold reference answers vs generations (68% vs 14% for p = 0 . 6 ).", "In interviews with human annotators after completing the task, they reported that both answers were often fluent and stylistically similar, but one eventually veered off-topic.", "Our experiments demonstrate that computing the ROUGE-L of generations against gold answers is not a meaningful way to evaluate LFQA systems, since it is not selective enough to differentiate between valid/invalid answers.", "There is a very small margin of improvement between trivial lower bounds and strong upper bounds, with the absolute scores of upper bounds being quite low.", "We suspect this is due to the long length of answers and fairly unconstrained and large output space.", "The ELI5 dataset has several open-ended questions with many plausible answers (like What causes traffic? ), often involving analogies.", "A possible fix is a sentence-level evaluation and then aggregating scores across generated sentences, but appropriate penalties are needed for lack of diversity (Zhu et al., 2018) and short lengths.", "Other possible fixes 17 Human A/B testing details in Appendix A.5.", "include learning task-specific metrics to measure semantic overlap (Sellam et al., 2020) or metrics to check factual correctness (Zhang et al., 2020) and faithfulness to input (Wang et al., 2020; Durmus et al., 2020; Zhou et al., 2020).", "Ultimately, all automatic metrics have their limitations, and human evaluation is necessary (Celikyilmaz et al., 2020).", "To better understand the inherent difficulty of evaluation in ELI5, we interviewed human annotators (of Table 3) and found two challenges:", "(1) Unfamiliarity with question topics : While most annotators found the Q/A interesting, they were often unfamiliar with the technical topics discussed in the questions.", "This made it hard for them to assess answer correctness.", "The ELI5 dataset has questions in a wide variety of topics (History, Politics, Biology etc.), while most annotators were Computer Science graduate students.", "While we did allow annotators to use Wikipedia, they mentioned domain-experts will be better judges of factual correctness of answers.", "(2) Length of Answers : Annotators mentioned the paragraph-long length of answers made the task quite challenging.", "Annotators reported taking an average of 2 minutes per answer pair, many of which required careful thought & concentration.", "This was especially difficult when only part of the answer was correct and the rest had contradictions or repetitions, a common theme in our generations.", "Takeaway : Human evaluation is challenging but necessary for evaluating LFQA.", "Crowd-workers are unlikely to spend time reading & analyzing long text (Akoury et al., 2020).", "Hence, it is imperative to design simpler evaluations.", "One effort in this direction is Dugan et al. (2020), who reveal one generated sentence at a time and estimate system quality based on the number of sentences which fooled humans.", "Another promising direction is extrinsic evaluation (Celikyilmaz et al., 2020) where humans actually interact with systems in real-world scenarios such as the Alexa Prize (Ram et al., 2018) or STORIUM (Akoury et al., 2020).", "We present a retrieval augmented generation system that achieves state-of-the-art performance on", "the ELI5 long-form question answering dataset.", "However, an in-depth analysis reveals several issues not only with our model, but also with the ELI5 dataset & evaluation metrics.", "We hope that the community works towards solving these issues so that we can climb the right hills and make meaningful progress on this important task.", "First and foremost, we thank the twenty people who volunteered to help out with with the human annotation experiments.", "We are very grateful to Vidhisha Balachandran, Niki Parmar, and Ashish Vaswani for weekly meetings discussing progress and the REALM team (Kenton Lee, Kelvin Guu, Ming-Wei Chang and Zora Tung) for help with their codebase and several useful discussions which helped us improve our experiments.", "We are grateful to Tu Vu for help with the QQP classifier.", "We thank Jules Gagnon-Marchand and Sewon Min for suggesting useful experiments on checking ROUGE-L bounds.", "Finally, we thank Shufan Wang, Andrew Drozdov, Nader Akoury, Andrew McCallum, Rajarshi Das, and the rest of the UMass NLP group for helpful discussions and suggestions at various stages in the project.", "This work was primarily done during KK's internship at Google Brain, mentored by AR.", "MI and KK are supported by award IIS-1955567 from the National Science Foundation (NSF).", "Our system faces a similar set of issues as most modern text generation technology, like fabrication of facts (Zellers et al., 2019), potential for misuse (Brown et al., 2020) and reflecting biases prevalent on Reddit (the ELI5 dataset has been built using the r/ELI5 subreddit).", "In our work, we attempted to make text generators more factually grounded by conditioning generations on retrieved Wikipedia articles, hoping to reduce fact fabrication.", "Unfortunately, a thorough analysis (Section 3.1) has revealed that our system is still not grounding its generations in retrievals, and we have recommended the design of better metrics to measure factual correctness to tackle this issue.", "TPUs are highly efficient chips which have been specifically designed for machine learning applica-tions.", "These accelerators run on Google Cloud, which has matched 100% of its electricity consumption with renewable energy purchases, and has committed to fully decarbonize its electricity supply by 2030 ( https://cloud.google. com/sustainability ).", "More details on training time are provided in Appendix A.1." ]
[ "abstain", "objective", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "objective", "method", "result", "objective", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "other", "abstain" ]
[ "Recent neural network models have achieved impressive performance on sentiment classification in English as well as other languages.", "Their success heavily depends on the availability of a large amount of labeled data or parallel corpus.", "In this paper, we investigate an extreme scenario of cross-lingual sentiment classification, in which the low-resource language does not have any labels or parallel corpus.", "We propose an unsupervised cross-lingual sentiment classification model named multi-view encoder-classifier (MVEC) that leverages an unsupervised machine translation (UMT) system and a language discriminator.", "Unlike previous language model (LM) based fine-tuning approaches that adjust parameters solely based on the classification error on training data, we employ the encoder-decoder framework of a UMT as a regularization component on the shared network parameters.", "In particular, the cross-lingual encoder of our model learns a shared representation, which is effective for both reconstructing input sentences of two languages and generating more representative views from the input for classification.", "Extensive experiments on five language pairs verify that our model significantly outperforms other models for 8/11 sentiment classification tasks.", "Recent neural network models have achieved remarkable performance on sentiment classification in English and other languages (Conneau et al., 2017; Chen et al., 2018; He et al., 2019; Chen and Qian, 2019).", "However, their success heavily depends on the availability of a large amount of labeled data or parallel corpus.", "In reality, some low-resource languages or applications have limited labeled data or even without any labels or parallel corpus, which may hinder us from training a robust and accurate sentiment classifier.", "To build sentiment classification models for low-resource languages, recent researchers developed cross-lingual text classification (CLTC) models (Xu and Yang, 2017; Eriguchi et al., 2018), which transfers knowledge from a resource-rich (source) language to a low-resource (target) language.", "The core of those models is to learn a shared language-invariant feature space that is indicative of classification for both languages.", "Therefore a model trained from the source language can be applied to the target language.", "Based on how the shared feature space is learned, there are three categories, namely word-level alignments (Andrade et al., 2015), sentence-level alignments (Eriguchi et al., 2018) and document level alignments (Zhou et al., 2016).", "Those models can well capture the semantic similarity between two languages.", "They, however, require parallel resources such as a bilingual dictionary, parallel sentences, and parallel Wikipedia articles.", "Such a limitation may prevent these models from being applicable in languages without any parallel resources.", "Recently, there have been several attempts at developing zero-resource models (Ziser and Reichart, 2018; Chen et al., 2018; Chen and Qian, 2019).", "Most notably, Ziser and Reichart (2018) proposed a cross-lingual & cross-domain (CLCD) model that builds on pivot based learning and bilingual word embedding.", "Although CLCD does not directly need labeled data or parallel corpus, it requires bilingual word embeddings (BWEs) (Smith et al., 2017) that requires thousands of translated words as a supervised signal.", "Chen et al. (2018) developed an adversarial deep averaging network to learn latent sentence representations for classification, but it had an implicit dependency on BWEs (Zou et al., 2013) that requires pretraining on a large bilingual parallel corpus.", "Chen and Qian (2019) extended the cross-lingual model in Chen et al. (2018) to multiple source languages by using the unsupervised BWEs (Lample et al., 2018b) and adding individual feature extractor for each source language, which eliminated the dependency on a parallel corpus.", "Nevertheless, their model is very sensitive to the quality of BWEs and performs poorly on distant language pairs such as English-Japanese, as illustrated in their experimental study.", "In parallel, cross-lingual language models (LMs) trained from raw Wikipedia texts, such as multilingual BERT 1 (Devlin et al., 2019) and XLM (Conneau and Lample, 2019), have been prevalent in solving zero-shot classification problems (Wu and Dredze, 2019).", "Those models use the BERT-style Transformer (Vaswani et al., 2017) architecture simultaneously trained from multiple languages to construct a sentence encoder, and fine-tune the encoder and a classifier on labeled training data from the source language.", "Then the fine-tuned model is applied to the target language.", "The whole process does not require any labeled data or parallel corpus.", "However, under the zero parallel resource setting, the encoder trained from self-supervised masked language modelling within each language may not well capture the semantic similarity among languages, which could harm the generalization performance of fine-tuned models.", "In this paper, we propose a sentiment classification model called multi-view encoder-classifier (MVEC) in an unsupervised setting, in which we only have monolingual corpora from two languages and labels in the source language.", "Different from previous language model (LM) based fine-tuning approaches (Devlin et al., 2019; Conneau and Lample, 2019) that adjust parameters solely based on the classification error of training data, we utilize the encoder-decoder network from unsupervised machine translation (UMT) (Lample et al., 2018a) to regularize and refine the shared latent space.", "In particular, the transformer-based encoder regularized by a language discriminator learns shared but more refined language-invariant representations, which are effective for both reconstructing sentences from two languages by the decoder and generating multi-view feature representations for classification from input documents.", "In our model, we construct two views from the en-1 https://github.com/google-research/ BERT/blob/master/multilingual.md coder:", "(i) the encoded sentences in the source language;", "(ii) the encoded translations of the source sentences in the target language.", "Our proposed MVEC is partially initialized by pretrained LMs (Conneau and Lample, 2019) but further fine-tuned to align sentences from two languages better, accurately predict labeled data in the source language and encourage consensus between the predictions from the two views.", "The full model is trained in an end-to-end manner to update parameters for the encoder-decoder, the language discriminator, and the classifier at each iteration.", "Our contributions in this paper are as follows: We present an unsupervised sentiment classification model without any labels or parallel resource requirements for the target language.", "By designing a multi-view classifier and integrating it with pretrained LMs and UMT (Lample et al., 2018a), we build our model (MVEC) on a more refined latent space that is robust to language shift with better model interpretation compared to previous zero-shot classification works (Chen et al., 2018; Conneau and Lample, 2019).", "We extensively evaluate our model in 5 language pairs involving 11 sentiment classification tasks.", "Our full model outperforms state-of-the-art unsupervised fine-tuning approaches and partially supervised approaches using cross-lingual resources in 8/11 tasks.", "Therefore, our results provide a strong lower bound performance on what future semi-supervised or supervised approaches are expected to produce.", "CLTC aims to learn a universal classifier that can be applied to languages with limited labeled data (Bel et al., 2003; Dong and de Melo, 2019; Keung et al., 2019), which is naturally applicable for sentiment analysis.", "Traditional supervised methods utilize cross-lingual tools such as machine translation systems and train a classifier on the source language (Prettenhofer and Stein, 2010).", "The latest models used parallel corpus either to learn a bilingual document representation (Zhou et al., 2016) or to conduct cross-lingual model distillation (Xu and Yang, 2017).", "In the unsupervised setting, Chen et al. (2018) learned language-invariant latent cross-lingual representations with adversarial training.", "Ziser and Reichart (2018) used pivot based learning and structure-aware DNN to transfer knowledge to low-resourced languages.", "In both papers, however, they have an implicit dependency on BWEs, which requires a bilingual dictionary to train.", "Chen and Qian (2019) was the first fully unsupervised approach using the unsupervised BWEs (Lample et al., 2018b) and multi-source languages with adversarial training.", "In contrast, our model is a multi-view classification model that is seamlessly integrated pretrained LMs (Conneau and Lample, 2019) and the encoder-decoder from UMT (Lample et al., 2018a) with adversarial training.", "Hence we learn a more fine-tuned latent space to better capture document-level semantics and generate multiple views to represent the input.", "UMT does not rely on any parallel corpus to perform translation, which lays a foundation for our approach.", "At the word-level, Lample et al. (2018b) built a bilingual dictionary between two languages by aligning monolingual word embeddings in an unsupervised way.", "At the sentence and document level, Lample et al. (2018a) proposed a UMT model by learning an autoencoder that can reconstruct two languages under both within-domain and cross-domain settings.", "Lample et al. (2018c) extended Lample et al. (2018a) with a phrase-based approach.", "Since we aim to learn more refined language-invariant representations for classification, it is natural to employ the encoder from a UMT system to generate multiple views of the input and enable knowledge transfer.", "The task of multi-view transfer learning is to simultaneously learn multiple representations and transfer the learned knowledge from source domains to target domains, which have fewer training samples.", "Generally, data from different views contains complementary information and multiview learning exploits the consistency from multiple views (Li et al., 2019).", "Our work is particularly inspired by Fu et al. (2015) and Zhang et al. (2019), both of which exploit the complementarity of multiple semantic representations with semantic space alignment.", "The difference is that we use an encoder-decoder framework to generate multiple views for input from the source language and enforce a consensus between their predictions.", "Furthermore, we introduce a language discriminator (Lample et al., 2018a) to encourage the encoder to generate language-invariant representations from the input.", "In this section, we will introduce our model's general workflow, including the details of each component and our training algorithm.", "Given monolingual text data { D src , D tgt } from both the source and target language with a subset of labeled samples { D Lsrc , y Lsrc } in the source language where y Lsrc is a vector of class labels and D Lsrc D src , the task aims to build a universal classification model f ( X ; ) y parameterized by that can be directly applicable to unlabeled data in the target language, where X is an input document from any language and y is its class label.", "Note that in this paper we assume two languages share the same class types.", "Our proposed approach multi-view encoder classifier (MVEC) is composed of three components: an encoder-decoder, a language discriminator, and a classifier.", "Motivated by the success of unsupervised machine translation (UMT) in Lample et al. (2018a) and reconstruction regularization by an autoencoder in Sabour et al. (2017), we adopt the encoder-decoder framework from UMT (Lam-ple et al., 2018a) and introduce self-reconstruction loss within one language and back-translation reconstruction loss across languages together with the normal loss from classification.", "For simplicity, we denote self-reconstruction loss as within-domain loss and back-translation reconstruction loss as cross-domain loss throughout the paper.", "Although the encoder from UMT can generate a latent representation for input sen-tences/documents, there is still a semantic gap between the source and target language.", "Following Lample et al. (2018a); Chen et al. (2018), we enrich the encoder-decoder framework with a language discriminator that can produce fine-tuned latent representations to align latent representations from two languages better.", "Such representations are necessary to train a language-invariant classifier that is robust to the shift in languages.", "In particular, as illustrated in Figure 1, the encoder is used to encode source and target docu-Text classifier LC Language discriminator Encoder adv Decoder Decoder Source language Target language Encoder cross within domain domain within domain adv training training Text label L adv L wd_src L wd_tgt L cd_src L cd_tgt LD Figure 1: Multi-view encoder classifier (MVEC) architecture.", "ments (a sequence of sentences) into a shared latent space, while the decoder is responsible for decoding the documents from the latent space to the source or the target language.", "Following Lample et al. (2018a), the encoder-decoder is shared for both languages (domains) and trained within-domain and cross-domain.", "The language discriminator aims to predict the language source for each document, and the classifier is trained to classify each document into predefined class labels.", "Under the unsupervised setting, MVEC only observes unlabeled monolingual corpora from two languages and some labeled documents in the source language.", "The unlabeled monolingual data is normally sampled from the application domain, i.e., unlabeled product reviews or social media posts, which is used in both adopting pretrained LMs in the target domain and training UMT.", "As shown in Figure 1, unlabeled source and target data only pass through encoder-decoder and language discriminator, while the labeled source data pass all components in the system, including the sentiment classifier.", "For evaluation purposes, we may have labeled documents in the target language.", "However, they are only used during the test period.", "In the following subsections, we introduce each component of MVEC in detail.", "language l , where l { src, tgt } .", "The encoder is a neural network e enc ( x ( l ) ) parameterized by enc that produces a sequence of n hidden states Z ( l ) = ( z ( l ) 1 , z ( l ) 2 , , z ( l ) n ) by using the corresponding word embedding for x ( l ) i , where z ( l ) i is the latent representation of x ( l ) i in the shared latent space and enc are parameters of the encoder shared between two languages.", "The encoder could be a BiLSTM or a transformer (Vaswani et al., 2017).", "In this paper, we adopt the transformer, which has achieved enormous success in (e.g.,) recent text representation learning tasks (Devlin et al., 2019; Conneau and Lample, 2019).", "Given Z ( l ) as the input, the decoder d dec ( Z ( l ) ) generates the output sequence y ( l ) = ( y ( l ) 1 , y ( l ) 2 , , y ( l ) k ) .", "We use the same transformer based decoder as in Conneau and Lample (2019), parameterized by dec .", "For simplicity, we will denote the encoder and decoder by e ( x ( l ) ) and d ( Z ( l ) ) respectively instead of e enc ( x ( l ) ) and d dec ( Z ( l ) ) .", "It is more likely for the encoder-decoder to merely memorize every input word one by one if there are no imposed constraints.", "To improve the robustness of encoder-decoder, we follow Lample et al. (2018a) to adopt the Denoising Autoencoders (DAE) (Vincent et al., 2008), which recovers input from its corrupted version.", "There are three ways to inject noise into the document including shuffle, dropout, and replacement by special words.", "In our model, we drop and replace every word with probabilities of p d and p b , respectively, and we slightly shuffle the input document by implementing random permutation on the input document, where p d and p b can be viewed as hyper-parameters for controlling noise levels.", "In our design, the permutation satisfies the condition | ( i ) i | k, i { 1 , , n } , where n is the length of input document and k is another hyper-parameter.", "Note that the noise model is only applied to unlabeled data used for training the encoder-decoder and the discriminator, while labeled data will keep its originality for all components training.", "We use G ( . ) to denote a stochastic noise model, which takes input document x ( l ) and generates G ( x ( l ) ) as a randomly sampled noisy version of x ( l ) .", "To incorporate the encoder-decoder as regularization components, we follow Lample et al. (2018a) to consider both within-domain and cross-domain objective functions.", "The first objective function aims to reconstruct a document from a noisy version of itself within a language, whereas the second (cross-domain) objective function targets to teach the model to translate an input document across languages.", "Specifically, given a language l { src, tgt } , the within-domain objective function can be written as: R wd ( ed , l ) = E x D l , x d ( e ( G ( x ))) [( x, x )] (1) where ed = [ enc , dec ] , x d ( e ( G ( x ))) is a reconstruction of the corrupted version of x sampled from the monolingual dataset D l , and is the sum of token-level cross-entropy loss to measure discrepancy between two sequences.", "Similarly, we consider teaching the encoder-decoder to reconstruct x in one language from a translation of x in the other language, leading to the following cross-domain objective function: R cd ( ed , l 1 , l 2 ) = E x D l 1 , x d ( e ( T ( x ))) [( x, x )] (2) where ( l 1 , l 2 ) { ( src, tgt ) , ( tgt, src ) } and T ( . ) is the current UMT model applied to input document x from language l 1 to language l 2 .", "Cross-lingual classifiers work well when their input produced by the encoder is language-invariant, as studied in Chen et al. (2018).", "Thus, we prefer our encoder to map input documents from both languages into a shared feature space indepen-dent of languages.", "To achieve this goal, we follow Chen et al. (2018); Lample et al. (2018a) and introduce a language discriminator into our model, which is a feed-forward neural network with two hidden layers and one softmax layer to identify the language source from the encoder's output.", "In particular, we minimize the following cross-entropy loss function: LD ( D | enc ) = E ( l,x ( l ) ) [log PD ( l | e ( x ( l ) )] (3) where D denotes parameters of the discriminator, ( l, x ( l ) ) corresponds to language and document pairs uniformly sampled from monolingual datasets, and PD ( . ) is the output from the softmax layer.", "Meanwhile, the encoder is trained to fool the discriminator: L adv ( enc | D ) = E x ( li ) D li [log PD ( l j | e ( x ( l i ) )] (4) with l j = l 1 if l i = l 2 , and vice versa.", "Thus far, we have described how we obtain a language-invariant latent space to encode two languages, which may not be sufficient to generalize well across languages if we simply train a classifier on the encoder's output for the source language (Chen et al., 2018).", "One key difference between Chen et al. (2018) and our work is that we use UMT (Lample et al., 2018a), which can generate multiple views for the input labeled documents from the source language.", "We can thereby benefit from multi-view learning's superior generalization capability over single-view learning (Zhao et al., 2017).", "Particularly, we consider two views of input:", "(i) the encoded labeled documents from the source language;", "(ii) the encoded back-translations of the source documents from the target language.", "Our learning objective is to train the classifier to match predicted document labels with ground truth from the source language and to encourage two predictive distributions on the two views to be as similar as possible.", "We consider the following objective function: LC ( C , ed ) = E ( x,y ) [( y, P c ( e ( x ))) + DKL ( P c ( e ( x )) || P c ( e ( T ( x )))] (cid:124) (cid:123)(cid:122) (cid:125) Two views' consensus (5) where ( x, y ) { D Lsrc , y Lsrc } , DKL ( . || . ) is KL Divergence to measure the difference between two distributions, y is the class label of input document x and c are parameters of classifier.", "Following previous studies in text classification (Devlin et al., 2019), we use the first token's representation in the last hidden layer from the transformer encoder as the document representation vector.", "The classifier is a feed-forward neural network with two hidden layers and a softmax layer.", "The final objective function at one iteration of our learning algorithm is to minimize the following loss function: L all = LC + wd ( R wd src + R wd tgt ) (6) + cd ( R cd src + R cd tgt ) + adv L adv where wd , cd , adv are hyper-parameters to trade-off among within-domain loss, the cross-domain loss and the adversarial loss, respectively.", "language to another for calculating the cross-domain loss in Eq.", "(2) and classifier loss in Eq.", "(5).", "To accelerate the training, we initialize T (0) by pretraining a transformer-based UMT (Con-neau and Lample, 2019) for certain steps with the same encoder-decoder architecture as our model on monolingual Wikipedia text.", "After pretraining, we use the pretrained encoder-decoder network to initialize our model and start training the classifier and the discriminator.", "Meanwhile, we refine the encoder and the decoder on monolingual data and labeled data from the source language.", "During each training step, the optimization iterates from updating D in Eq.", "(3) to updating ed and C in Eq.", "(6).", "Note that if a batch of documents drawn from monolingual data are all unlabeled, then we suspend updating classifier parameters and only update the parameters of the language discriminator and encoder-decoder.", "In Algorithm 1, we provide a detailed procedure.", "Algorithm 1 The proposed MVEC algorithm.", "1: procedure TRAINING ( D src , D tgt , y Lsrc ) D src and D tgt : monolingual datasets, y Lsrc : labels in the source language.", "2: T (0) pretrain a transformer based UMT using (Conneau and Lample, 2019); 3: for t = 0 , , max epoch do 4: Using T ( t ) to translate each document in a batch; 5: D argmin LD in Eq.", "(3) while fixing C , ed ; 6: C , ed argmin L all in Eq.", "(6) while fixing D ; 7: Update T ( t +1) { e ( t ) , d ( t ) } ; 8: return C , enc 9: End procedure 4 Experiment We conduct experiments on cross-lingual multi-class and binary sentiment classification using five language pairs involving 11 tasks.", "More specifically, English is always the source language, and the target languages are French, German, Japanese, Chinese, and Arabic, respectively.", "Amazon Review (French, German, Japanese).", "This is a multilingual sentiment classification dataset (Duh et al., 2011) in four languages, including English (en), French (fr), German (de), and Japanese (ja), covering three products (book, DVD, and music).", "For each product in each language, there are 2000 documents in each of the training and test sets.", "Each document contains a title, a category label, a review, and a 5-point scale star rating.", "Following Xu and Yang (2017); Chen and Qian (2019), we convert multi-class ratings to binary ratings by thresholding at 3-point.", "For each product, since the test set in English is not used, we combine the English training and test sets and randomly sample 20% (800) documents as the validation set to tune hyper-parameters, and use the rest 3200 samples for training.", "For each target language, we use the original 2000 test samples for comparison with previous methods.", "Unlike Chen et al. (2018); Chen and Qian (2019) that used labeled data in the target language for model selection, we only use the labels of reviews in the target language for testing.", "There are 105k, 58k, 317k, 300k unlabeled reviews for English, French, German and Japanese, respectively, which can be used as monolingual data to train the encoder-decoder of our model.", "Yelp and Hotel Review (Chinese).", "This dataset is from two sources:", "(i) 700k Yelp reviews in English with five classes from Zhang et al. (2015), and", "(ii) 170k hotel reviews in Chinese segmented and annotated with five classes from Lin et al. (2015).", "Following the same setup in Chen et al. (2018), we split all Yelp reviews into a training set with 650k reviews and validation set with 50k reviews.", "The 650k review contents are also served as the monolingual training data for English.", "For Chinese hotel review data, we sample 150k reviews as the monolingual training set.", "The rest 20k reviews are treated as the test set.", "Social Media Posts (Arabic).", "The BBN Arabic Sentiment dataset is from Mohammad et al. (2016).", "There are 1200 documents from social media posts annotated with three labels (negative, neutral, positive) in the data.", "The original dataset was split into half as training and the other half as testing.", "Since we do not need validation data in the target language to tune the model, we randomly sample 1000 documents as test data.", "For English resource, we still use Yelp reviews and follow the same split as the Chinese case, but convert 5 level reviews into 3 levels 2 .", "Also, we randomly sample 2 1,2 negative, 3 neutral, 4,5 positive 161k sentences from the United Nations Corpus Arab subset (Ziemski et al., 2016) as unlabeled monolingual data for our model training.", "For French, German and Japanese, we perform binary classification.", "For Chinese and Arabic, we perform multi-class classification.", "Data Preprocessing.", "Following Lample et al. (2018c), we extract and tokenize monolingual data of each language using Moses (Koehn et al., 2007).", "Then we use the neural machine translation for rare words with subword units, named fastBPE (Sennrich et al., 2016) in three steps.", "In detail, BPE code is collected from the pretrained XLM-100 models (Conneau and Lample, 2019), then applied to all tokenized data and used to extract the training vocabulary.", "To constrain our model size, we only keep the top 60k most frequent subword units in our training set.", "Finally, we binarize monolingual data and labeled data for model training, validation and testing.", "Pretraining Details.", "As mentioned earlier, our model depends on an initial translation machine to compute reconstruction loss and classifier loss.", "We leverage pretrained language models (Con-neau and Lample, 2019) to initialize a transformer-based UMT (Lample et al., 2018a) and train it on Wikipedia text 3 .", "In particular, we sample 10 million sentences from each language pairs and use the XLM library 4 to train a UMT (Lample et al., 2018a) for 200K steps.", "The resulting encoder-decoder are used to initialize our model.", "Regarding word embedding initialization, we use the embeddings obtained from the 1st layer of pretrained language models (Conneau and Lample, 2019), which has demonstrated better cross-lingual performance in a number of evaluation metrics over MUSE (Lample et al., 2018b).", "Training Details.", "In our experiment, both encoder and decoder are 6 layer transformers with 8-head self-attention.", "We set both subword embedding and hidden state dimension to 1024 and use greedy decoding to generate a sequence of tokens.", "The encoder-decoder and classifier are trained using Adam optimizer (Kingma and Ba, 2015) with a learning rate of 10 5 and a mini-batch size of 32.", "We set the hidden dimension to 128 for both clas-3 http://dumps.wikimedia.org/ 4 www.github.com/facebookresearch/XLM sifier and discriminator.", "For parameters of denoising auto-encoder, we set p d = 0 .", "1 , p b = 0 .", "2 and k = 3 following Lample et al. (2018a).", "Finally, we perform a grid search for hyper-parameters on { 0.5,1,2,4,8 } and set wd , cd to 1 and adv to", "4. To prevent gradient explosion, we clip the gradient L 2 norm by 5.0.", "Our approach is implemented in PaddlePaddle 5 and all experiments are conducted on an NVIDIA Tesla M40 (24GB) GPU.", "Competing Methods.", "We have compared our method with several recently published results.", "Due to the space limit, we briefly introduce several representative baselines: LR+MT translated the bag of words from target language to source language via machine translation and then built a logistic regression model.", "BWE baselines rely on Bilingual Word Embeddings (BWEs), wherein 1-to-1 indicates that we are only transferring from English, while 3-to-1 means the training data from all other three languages.", "CLDFA (Xu and Yang, 2017) was built on model distillation on parallel corpora with adversarial feature adaptation technique.", "PBLM (Ziser and Reichart, 2018) used bilingual word embeddings and pivot-based language modeling for cross-domain & cross-lingual classification.", "MBERT (Devlin et al., 2019) and XLM-FT (Conneau and Lample, 2019) directly fine-tuned a single layer classifier based on pretrained LM multilingual BERT and XLM.", "In Table 1 and Table 2, we compare our method with others based on their published results or our reproduced results from their code.", "Our results are averaged based on 5 rounds of experiment with the standard deviation around 1%-1.5%.", "Following previous baselines, we do not report them here.", "Our first observation from Table 1 is that our model and the fine-tuned multilingual LMMBERT (Devlin et al., 2019) and XLM-FT (Con-neau and Lample, 2019) outperform all previous methods including the methods with cross-lingual resources for 8/9 tasks by a large margin, which indicates the huge benefit from pretrained LMs in the zero-shot setting.", "Compared with MBERT and XLM-FT, our model obtains better performance when the target language is more similar to the source language, for example, German and French, and one task in Japanese.", "performance is in bold, while the highest performance within the method group is underlined.", "In Table 2, we show the comparison between our method and a few other published results, including ADAN (Chen et al., 2018) and mSDA (Chen et al., 2012) for Chinese and Arabic languages in multi-class setting.", "Similarly, our model obtains slightly better accuracy in Chinese.", "Overall, built on top of the pretrained LMs and UMT, our full model achieves the state-of-the-art performance on 8/11 sentiment classification tasks, especially when the target language is more similar to the source language.", "Moreover, we illustrate the effectiveness of encoder-decoder based regularization in reducing the language shift in the shared latent space.", "Intuitively, if the fine-tuned latent space is less sensitive to the language shift, the performance on validation sets and test sets should be highly correlated during training.", "In Figure 2, we report the average accuracy of both validation and test set w.r.t. training epochs over five runs on Amazon book review data in French.", "From Figure 2, we observe that even though our model's best validation accuracy is lower than XLM-FT (Conneau and Lample, 2019) in English, it has more correlated accuracy curves than XLM-FT across English and French.", "For example, the validation accuracy of XLM-FT starts decreasing after epoch 10, while the test accuracy is still increasing.", "Such an observation shows that the latent representation learned solely from self-supervised objectives (e.g., masked language modeling) may not well capture the semantic similarity among languages.", "Hence the resulting classifier may work well in the source language but may not generalize to the target language.", "In contrast, our model sacrifices some accuracy in the source language but can select better models for the target language in a cross-lingual setting.", "To understand the effect of different components in our model on the overall performance, we con-German", "duct an ablation study, as reported in Table", "3. Clearly, the encoder-decoder trained either by the within-domain objective or cross-domain objective is the most critical.", "For Amazon data in three languages (German, French, Japanese), the model without cross-domain loss obtains prediction accuracy of 83.22%, 82.40%, and 72.05%, which gets decreased by 5% 7% compared with the full model.", "The performance is also significantly degraded when the adversarial training component is removed because the distribution of latent document representations is not similar between two languages.", "The two-views consensus component also has a significant effect on the performance of our model, with a performance drop up to 5 points for en-jp.", "Such a result verifies our claim that cross-lingual model benefits from training on multiple views of the input.", "To further explore the effectiveness of our approach, we visualize the encoder's output and the last layer before softmax for 10 randomly sampled Amazon reviews in English and their translations in French using Google Translation, as shown in Appendix A.2.", "As seen in the lower-left panel of Figure 3, most red circles and black squares with the same indices are very close for our method but are distant for XLM-FT in the top-left.", "Such an observation implies that our encoder combined UMT and a language discriminator adequately maps the input into a shared language-invariant latent space while preserving semantic similarity.", "For the last layer before softmax, even though XLM-FT also generates reasonable representations to separate positive and negative reviews, the data points are scattered randomly.", "On the contrary, our model's output in the lower right panel of Figure 3 shows two more obvious clusters with corresponding labels that can be easily separated.", "One cluster in the left contains all of the positive documents, while the negative examples only appear on the right side.", "In this paper, we propose a cross-lingual multiview encoder-classifier (MVEC) that requires neither labeled data in the target language nor cross-lingual resources with the source language.", "Built upon pretrained language models, our method utilizes the encoder-decoder component with a language discriminator from an unsupervised machine translation system to learn a language-invariant feature space.", "Our approach departs from previous models that could only make use of the shared language-invariant features or depend on parallel resources.", "By constructing the fine-tuned latent feature space and two views of input from the encoder-decoder of UMT, our model significantly outperforms previous methods for 8/11 zero-shot sentiment classification tasks." ]
[ "abstain", "abstain", "objective", "objective", "method", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "result", "result", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "other", "other", "other", "objective", "other", "other", "abstain", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result" ]
[ "The goal-oriented dialogue system needs to be optimized for tracking the dialogue flow and carrying out an effective conversation under various situations to meet the user goal.", "The traditional approach to building such a dialogue system is to take a pipelined modular architecture, where its modules are optimized individually.", "However, such an optimization scheme does not necessarily yield an overall performance improvement of the whole system.", "On the other hand, end-to-end dialogue systems with monolithic neural architecture are often trained only with input-output utterances, without taking into account the entire annotations available in the corpus.", "This scheme makes it difficult for goal-oriented dialogues where the system needs to be integrated with external systems or to provide interpretable information about why the system generated a particular response.", "In this paper, we present an end-to-end neural architecture for dialogue systems that addresses both challenges above.", "Our dialogue system achieved the success rate of 68.32%, the language understanding score of 4.149, and the response appropriateness score of 4.287 in human evaluations, which ranked the system at the top position in the end-to-end multi-domain dialogue system task in the 8th dialogue systems technology challenge (DSTC8).", "The goal-oriented dialogue system helps users achieve their goals such as requesting information or executing commands via natural language conversations.", "It is thus crucial for the dialogue system to keep track of the dialogue flow and carry out an effective conversation, even when the user goal is complicated or the dialogue flow is suddenly changed.", "The traditional approach to building a goal-oriented dialogue system mostly adopts a pipelined modular architecture, with the natural language understanding (NLU) module (Kim et al., 2017; Lee et al., 2019b) that first recognizes and comprehends user's intent and extracts values for slots, then the dialogue state tracking (DST) module (Williams et al., 2013) that tracks the values of slots, then the dialogue policy (POL) module that decides the system action, and then finally the natural language generation (NLG) module (Wen et al., 2015) that generates the utterance that corresponds to the system action.", "In some cases, multiple modules are combined together, e.g. the Word-level DST (Ramadan et al., 2018; Wu et al., 2019; Lee et al., 2019a) which maps the dialogue history to the dialogue state (the composite function of NLU and DST), and the Word-level POL (Budzianowski et al., 2018; Pei et al., 2019; Chen et al., 2019; Mehri et al., 2019; Zhao et al., 2019) which maps the previous utterance and dialogue state to the system response (the composite function of POL and NLG).", "These modules are usually optimized separately, which does not necessarily lead to an overall optimized performance for successful task completion.", "On the other hand, end-to-end neural models for dialogue systems (Madotto et al., 2018; Lei et al., 2018) enjoy a straightforward training approach to generating system responses, but it is difficult for goal-oriented dialogues where the system needs to interact with external systems or to generate an explanation that supports why the system generated a particular response.", "In this paper, we present an end-to-end neural architecture for dialogue systems that addresses both challenges above.", "Our work is based on fine-tuning GPT-2 (Radford et al., 2019) to faithfully perform the following essential dialogue management steps in a sequential manner using a single model: (1) Domain : restaurant [usr] Are there any restaurants that serve proper British food in town?", "DST via predicting the dialogue state, (2) POL via predicting the system action, (3) retrieving appropriate records from the external database for the dialogue state and the system action, and (4) NLG via predicting the system response.", "As a result, our neural model not only generates the system response just like end-to-end neural dialogue systems, but also generates dialogue states and system actions as intermediate outputs, improving the interpretability of the behavior of the dialogue system.", "In order to achieve this, we leverage the annotations of dialogue states and system actions provided in the corpus (e.g. MultiWOZ dataset (Budzianowski et al., 2018)) for training our system in a very natural way.", "Our model is evaluated using ConvLab (Lee et al., 2019b), a multi-domain end-to-end dialog system platform to support various aspects in the development and evaluation of dialogue systems, in terms of the automatic evaluation using the user simulator and the human evaluation using crowd workers.", "Particularly, in the human evaluation carried out as a part of the 8th dialogue systems technology challenge (DSTC8) (Kim et al., 2019), our system attained the success rate of 68.32%, the language understanding score of 4.149, and the response appropriateness score of 4.287, ranking at the 1st place in DSTC8.", "We also show that our model is competitive to other state-of-the-art models specialized for two sub-tasks in the dialogue management, i.e. Dialogue State Tracking and Dialogue-Context-to-Text Generation tasks, although our model was not particularly tuned for those sub-tasks.", "The main characteristics of our model can be summarized as follows: (1) it is trained to follow the traditional dialogue management pipeline, making the monolithic neural model more interpretable and easily integratable with external systems, while (2) it is trained in an end-to-end fashion with simple gradient descent, and (3) leverages GPT-2, a powerful pre-trained language model.", "The code is available through the GitHub code repository.", "1 2 End-to-end Multi-Domain Task-Completion Task Before we describe our approach, we briefly overview the end-to-end multi-domain task-completion task used in DSTC8, for which we developed our dialogue system.", "The MultiWOZ dataset is a large-scale fully annotated corpus of natural human-human conversa-1", "tions, where the user as a tourist converses with the system as a clerk across multiple domains.", "Each dialogue is rich in annotations such as goal', metadata', and dialog act' as well as user and system utterances.", "These annotations facilitate using machine learning to develop individual modules of a dialogue system (NLU, DST, POL, NLG, Word-level DST, Word-level POL), as well as an end-to-end dialogue system.", "Figure 1 shows an example of a single-domain dialogue in the MultiWOZ dataset.", "Each dialogue consists of Goal', Database' and Dialogue turns'.", "The goal is defined by the domain and the slots.", "The slots are divided into informable , requestable and book slots.", "Informable slots represent user constraints and Requestable slots hold additional information that the user wants to obtain.", "Book slots are used to reserve a place recommended by the system.", "For evaluating dialogue systems, DSTC8 used ConvLab (Lee et al., 2019b), an open-source platform that supports researchers to train and evaluate their own dialogue systems.", "ConvLab contains implementations of the state-of-the-art models of NLU, DST, POL, NLG (Kim et al., 2017; Lee et al., 2019b; Ramadan et al., 2018; Wu et al., 2019; Wen et al., 2015, 2017; Budzianowski et al., 2018) and an end-to-end neural model for dialogue systems (Lei et al., 2018; Madotto et al., 2018), which are readily reusable for building dialogue systems using various approaches.", "ConvLab also provides an agenda-based user simulator to easily interact with the target dialogue system, consisting of a multi-intent language un-derstanding(MILU) (Lee et al., 2019b) for NLU, a rule-based policy, and a template-based NLG.", "For each dialogue, a goal is randomly generated that conforms with the goal schema of the MultiWOZ dataset.", "The user simulator then generates an agenda based on the goal.", "While interacting with Dialogue State System Action <usr> I am looking for a place to stay that has cheap price range it should be in a type of hotel <sys> Okay , do you have a specific area you want to stay in ?", "the target dialogue system, it recognizes the system dialogue act, decides the user dialogue act from the agenda stack, and generates the user response at each turn.", "When the system offers to book and the user accepts it, the system should notify an 8-digit reference number.", "The reference number is used to verify whether the booked place is fit on what the user informs.", "ConvLab also provides an automatic evaluator which assesses whether the target dialogue system (1) traces what the user informs (2) informs what the user requests, and (3) makes an appropriate booking using an external database based on the traced information.", "Although the user simulator and evaluator are highly sophisticated, it is not as perfect as human.", "Hence, the dialogue systems submitted to the DSTC8 were evaluated not only with the user simulator but also with human crowd-workers.", "We now describe our end-to-end neural pipeline for the goal-oriented dialogue system based on GPT-2.", "Our system consists of (1) the GPT-2 model fine-tuned on the delexicalized version of MultiWOZ dataset (Section 3.2) and (2) the database query module.", "We take the pre-trained GPT-2 model and fine-tune it to follow the steps of the dialogue management pipeline.", "Figure 2 illustrates an overall architecture with a concrete example.", "The overview of the process followed by our model is as follows:", "1. Predict the recent domain and the corresponding dialogue state conditioned on the dialogue history.", "2. Predict the system action with delexicalized tokens conditioned on the dialogue history and dialogue state.", "3. If the system action (e.g. inform', book') needs external information from the database, the query module 2 retrieves the candidates and returns one of them.", "4. Update the current system action when detecting Empty Query Results (Section 3.5).", "5. Generate the system response with delexicalized tokens conditioned on dialogue history, 2 ConvLab provides a DB query module returning candidates given domain and dialogue state.", "response with the query result.", "In Figure 2, the numbers wrapped with circle indicate the order of process.", "The red box shows how our system handles the case when the DB query does not return any record at all.", "In the MultiWOZ dataset, metadata' and 'dia-log act' correspond to the current dialogue state and the current system action, respectively (Fig-ure 3).", "In order to use GPT-2, we need to convert the dialogue state and the system action to word tokens.", "Figure 3 shows an illustrative example of a single-turn of a dialogue and its representation of the dialogue state and system action.", "We introduce delimiter tokens <usr> , <sys> , <ds> and <sa> to signal the beginning of sequence representations of user utterance, system response, dialogue state, and system action.", "The domain and the slot names are also represented by additional special tokens, and <nm> and <dc> are special tokens that indicate not mentioned ' and don't care '.", "The complete input representation for our model is illustrated in Figure 4, similar to Radford et al. (2019) and Wolf et al. (2019).", "The input embedding comprises of the token embedding, the speaker embedding, and the positional embedding.", "Each dialogue in MultiWOZ dataset is generated based on the DB query results, and as such, the requestable slot values such as reference numbers and addresses (e.g. those colored in orange in Figure 1) are valid only for that particular dialogue instance.", "On the other hand, our model should be able to inform appropriate information depending on the dialogue context.", "To address this, we delexicalized all the values for requestable slots (reference number, name, postcode, phone number, address) as [DOMAIN SLOTNAME] (e.g. [hotel postcode] for hotel's postcode) that appear in the corpus.", "Thus, our model learns to generate delexicalized system response, and delexicalized tokens are later string-replaced by the real information from the DB query using a small piece of post-processing code.", "In order to fine-tune GPT-2, we optimize the weighted sum of the objectives of language modeling (LM) and next-utterance classification (NC), following (Radford et al., 2018).", "For LM, we use the standard left-to-right LM objective (Bengio et al., 2003) as follows: LLM ( w 1 , . . . , w n ) = (cid:88) i log P ( w i | w 1 , . . . , w i 1 ) The LM objective calculates the likelihood of the next word-token from given the previous word-tokens.", "For NC, the model needs to distinguish the gold response (gold dialogue state+gold system ac-tion+gold system response) from a distractor (gold dialogue state+gold system action+fake system response) , given the dialogue history.", "The distractor system responses were randomly sampled from the MultiWOZ dataset.", "The linear classifier takes the last hidden state of the GPT-2's decoder block as input and computes the class probability by passing through the softmax layer.", "The cross-entropy loss between the class probability and the correct label was used for the NC objective, LNC .", "Thus, for the given word sequence W = ( w 1 , . . . , w n ) , the total objective becomes a linear combination of LLM and LNC with hyper-parameters LM and NC : L total ( W ) = LMLLM ( W ) + NCLNC ( W ) Model Success Rate Return Turns Precision Recall F1 Book Rate Baseline 62.00% 28.22 8.18 0.70 0.83 0.74 84.38% Ours + greedy 78.60% 48 .", "When we generate the system response from the dialogue history, the final output is the probability distribution of word-tokens at each position.", "Using the distribution, there are many decoding methods for generating word-tokens, which have a significant impact on the quality of the output (Holtz-man et al., 2020; Weston et al., 2018).", "The greedy decoding and the beam search are the most common approaches.", "However, since the greedy decoding only considers the token with the highest probability at each position, it does not necessary yield a system response with overall high probability.", "In addition, Holtzman et al. (2020) evidences that the beam search decoding is not appropriate for high-entropy natural language generation such as dialogues.", "Other sampling-based decoding methods, top-k sampling and top-p sampling have been shown to addressed the above problems quite effectively for dialogue tasks (Wolf et al., 2019; Budzianowski and Vulic, 2019).", "We evaluated the performance of our models with the decoding schemes mentioned above, and selected the best one via human evaluation.", "As we mentioned before, GPT-2 invokes the query module to interact with the database.", "However, GPT-2 doesn't know how many candidates satisfy the constraints a-priori.", "Therefore, there exist cases where no candidate happens to satisfy the constraints, which we refer to as Empty-Query-Result .", "In this case, the dialogue system should generate the system response corresponding to the intent Empty-Query-Result .", "Our system monitors the system action generated from GPT-2 and replace it by <EQR> if the database query returns an empty result, and feed this modified input to GPT-2 to generate the system response.", "This simple solution worked quite well in practice.", "TransferTransfo (Wolf et al., 2018) was the first attempt to incorporate a large-scale pre-trained language model into a chit-chat dialogue system.", "Using GPT as a backbone, their fine-tuning approach ranked first in the automatic evaluation and second in the human evaluation in the ConvAI2 competition (Dinan et al., 2018).", "Our model is mainly inspired by this work, extending to goal-oriented dialogues using GPT-2.", "Parallel and independent to our work towards DSTC8 submission, Budzianowski and Vulic (2019) also demonstrated a neural model for goal-oriented dialogue systems by fine-tuning GPT-2 on the MultiWOZ dataset.", "However, they only handle dialogue-context-to-text task, which outputs the system response given the dialogue history, the ground-truth dialogue state, and the database.", "In our case, no oracle information related to database <sa> <restaurant-request> <area> ?", "and dialogue state is provided, and only the dialogue history was provided.", "Taking the dialogue history as an input, our model operates as a complete dialogue system that generates system responses by sequentially following the core steps in the dialogue management pipeline.", "We developed our model using the open-source implementation of Wolf et al. (2018) 3 and the GPT2-small (124M parameters) that consists of 12 transformer decoder blocks and pre-trained weights (Wolf et al., 2019) 4 .", "We tokenized each sentence into sub-word using GPT2Tokenizer 4 (Sennrich et al., 2016).", "We fine-tuned the GPT-2 with batch size 2 for 4 epochs over the MultiWOZ training dataset.", "The maximum history size of each dialogue was set to 15.", "We used the Adam optimizer (Kingma and Ba, 2015) with 1 = 0 .", "9 , 2 = 0 .", "999 and the learning late of 6.25e-5.", "The coefficients of the LM and the NC losses were set to 2.0 and 1.0, respectively.", "There were two evaluation criteria in the End-to-End Multi-Domain Dialog System Task of the", "Multi-Domain Task-Completion Track in DSTC8:", "Automatic evaluation with user simulator: Success Rate, Book Rate, Return, Turns, Precision, Recall, F1 Human evaluation with crowd-workers: Success Rate, Language Understanding Score, Response Appropriateness Score, Turns", "In measuring the success rate, the dialogue is considered as a success only if the requestable slots are correctly filled and book success if needed.", "Book success is achieved only if the reserved information fits into all informable slots, and is measured by the book rate as a sub-evaluation.", "The max turn indicates the maximum limit of turns in a conversation (e.g. 40).", "Precision, Recall, and F1 measure the accuracy of requestable slot filling.", "For the human evaluation, Language Understanding Score and Response Appropriateness Score were the metrics of how natural the response of the model is, with the 5 point scale.", "The human evaluation results reported here were carried out by the DSTC8 organizers.", "Table 1 shows automatic evaluation results on various decoding strategies using the user simulator provided in ConvLab.", "Our proposed model with greedy decoding strategy achieved the success rate of 78.60%, the avg return of 48.92, the avg turns of 7.40, the book rate of 86.34%, the precision of 0.87, the recall of 0.89, and the F1 score of 0.87 in the automatic evaluation using 500 simulated dialogues.", "Our model outperformed the baseline system, but failed to perform best among submitted systems, mostly due to the incorrect intent recognition in the user simulator.", "We believe that this can be circumvented by further training our model using reinforcement learning, trained to avoid system responses that trigger intent recognition failure in the simulator.", "However, our main focus was to generate diverse system responses that looked natural to human evaluators.", "Table 2 shows the final ranking of the competition using human evaluation.", "5 Our proposed model with top-p sampling (p=0.8) strategy ranked in the first place with the success rate of 68.32%, the average turns of 19.507, the language understanding score of 4.149 and the response appropriateness score 4.287.", "Compared to the 2nd-ranked model, our model showed a 2.51% improvement in success rate.", "The performance gap was more significant in human language metrics, 0.365 points and 0.458 points higher than the 2nd-ranked model in the Language Understanding score and the Response Appropriateness score.", "Figure 5 visualizes the attention weights of the transformer blocks in our model, demonstrating that our model appropriately attends to the word token generated from the previous module in the dialogue management pipeline, just like a pipelined dialogue system would do when generating the intermediate outputs.", "For example, if the user asks I'm looking for modern European food ', our model generates dialogue state <area> <nm> , which means the area is not mentioned.", "Then we can see the attention weight on <area> <nm> in the dialogue state is relatively higher 5 https://convlab.github.io/ Model Joint Acc.", "than other tokens when it generates system action <restaurant-request> <area> .", "As another example, if we change the system action as <restaurant-nooffer> , the model generates the system response I'm sorry. There are no modern European restaurant ' and it attends on the token <restaurant-nooffer> .", "As an ablation study, we test the modular performance of our model on two MultiWOZ benchmark tasks (Budzianowski et al., 2018): Dialogue State Tracking and Dialogue-Context-to-Text Generation.", "Table 3 compares the dialogue state tracking accuracy of our model to those of other recent trackers in the literature.", "In this task, we measure the joint accuracy and slot accuracy of dialogue state tracking part of our model.", "Although our training objective involves other dialogue management tasks than dialogue state tracking, our model's tracking performance was very competitive to the state-of-the-art models.", "Dialogue-Context-to-Text generation looks at the combined performance of the dialogue policy and the system response generation modules, measuring the quality of system response when the previous user utterance, the ground-truth dialogue state, and the ground-truth database query results are given.", "Our trained model can be straightforwardly adapted to perform this task by replacing the intermediate inputs with ground-truth values.", "Table 4 shows the Context-to-Text Generation benchmark performance compared to other recent models proposed in the literature.", "Again, our model was competitive to the state-of-the-art models except for the BLEU score.", "This is due to the fact that the system uses the large vocabulary of GPT-2, making system responses often containing diverse words that are not in the dataset.", "In this paper, we presented an end-to-end monolithic neural model for goal-oriented dialogues that learns to follow the core steps in the dialogue management pipeline.", "Since our model outputs all the intermediate results in the dialogue management pipeline, it is easy to integrate with external systems and to interpret why the system generates a particular response.", "The experimental results from human evaluation show evidence that our approach can provide very natural human-level interaction for goal-oriented dialogues, advancing the state-of-the-art in conversational AI agents.", "This also demonstrates the power of large-scale pre-trained language models to be adopted for building end-to-end goal-oriented dialogue systems.", "This work was supported by the National Research Foundation (NRF) of Korea (NRF-2019R1A2C1087634) and the Ministry of Science and Information communication Technology (MSIT) of Korea (IITP No. 2020-0-00940, IITP 2019-0-00075-001 and IITP No. 2017-0-01779 XAI)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "method", "abstain", "result", "result", "method", "abstain", "result", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "objective", "abstain", "other", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "other" ]
[ "Continual learning is essential for real-world deployment when there is a need to quickly adapt the model to new tasks without forgetting knowledge of old tasks.", "Existing work on continual sequence generation either always reuses existing parameters to learn new tasks, which is vulnerable to catastrophic forgetting on dissimilar tasks, or blindly adds new parameters for every new task, which could prevent knowledge sharing between similar tasks.", "To get the best of both worlds, in this work, we propose continual sequence generation with adaptive compositional modules to adaptively add modules in transformer architectures and compose both old and new modules for new tasks.", "We also incorporate pseudo experience replay to facilitate knowledge transfer in those shared modules.", "Experiment results on various sequences of generation tasks show that our framework can adaptively add modules or reuse modules based on task similarity, outperforming state-of-the-art baselines in terms of both performance and parameter efficiency.", "We make our code public at https://github.com/GT-SALT/ Adaptive-Compositional-Modules .", "Current state-of-the-art language generation models can achieve great performance on a wide range of sequence generation tasks (Radford et al., 2019; Lewis et al., 2020) with a static data distribution.", "However, real-world scenarios are often changing which requires the model to learn with dynamic data distributions.", "In such cases of data distributions shift, current generation models often suffer from catastrophic forgetting (Sun et al., 2019): models completely and abruptly forget previously learned information upon learning new information.", "Continual learning (CL) (Ring, 1998; Thrun, 1998) has been introduced to improve model's ability to learn tasks in a stream by mitigating forgetting Figure 1: Comparison between previous methods (a and", "and facilitating knowledge transfer (Lopez-Paz and Ranzato, 2017), however, continual sequence generation is relatively under-investigated.", "Comparing to continual learning on text classification and question answering (Wang et al., 2020; Holla et al., 2020; Huang et al., 2021), continual sequence generation is more challenging, since the output is no longer discrete labels but sequential text data in different styles/domains.", "Based on how to retain old knowledge while learning new tasks, current continual sequence generation methods can be categorized into two types.", "The first one continually learns new tasks on old parameters (Fig 1 a), with approaches like experience replay (Sun et al., 2019; Chuang et al., 2020) and regularization (Mi et al., 2020) to maintain old knowledge.", "However, since all tasks share the same parameters, some degree of interference between tasks is unavoidable.", "Another line of work continually inserts new task-specific modules (adapters proposed by Houlsby et al., 2019) into every transformer layer for every new task while freezing pretrained mod-3653 els and modules used by old tasks (Fig 1 b, Madotto et al., 2021), which might prevent knowledge transfer between tasks and introduce possible parameter redundancy.", "In this work, we aim to get the best of both worlds: how to encourage the models to reuse modules from previous tasks as much as possible and to only add new modules if needed?", "To this end, we propose continual sequence generation with adaptive compositional modules , as shown in Fig 1 c.", "Specifically, we introduce a two-stage process for every new coming task: a decision stage and a training stage.", "During decision stage, we decide which modules to reuse and whether we need to add a new module.", "During training stage, the model architecture is determined and fixed.", "We augment new task's training process with pseudo experience replay (Sun et al., 2019) to further mitigate forgetting and facilitate knowledge transfer in those shared layers.", "Our model architecture is adaptive , as it can automatically add new modules for dissimilar tasks and reuse modules for similar tasks, thus making it robust to different scenarios of continual learning.", "Furthermore, it is compositional because for every new task, our new architecture is composed of reused modules from old tasks and newly added modules, which allows knowledge reuse and transfer.", "To evaluate the above adaptive compositional framework, we experiment with four representative sequence generation tasks following prior work (Sun et al., 2019; Chuang et al., 2020): natural language generation, SQL query generation, summarization and task-oriented dialogue arriving in a stream.", "Different from prior work that only tests their methods on very short task sequences or long task sequences with similar tasks only, we validate our approach on longer sequences containing diverse tasks with different levels of similarity.", "We believe this is a suitable scenario to validate both the model's ability to mitigate forgetting and its ability to facilitate knowledge transfer.", "In summary, this work makes two key contributions: (1) We propose continual sequence generation with adaptive compositional modules, to maximize knowledge transfer via module-reusing while adaptively adding new modules to mitigate task-interference and catastrophic forgetting.", "(2) Experiments with longer and more task sequences show that our approach outperformed baselines with higher parameter efficiency.", "Continual Learning Without allocating new parameters for new tasks, prior work mainly leverages experience replay (Wang et al., 2019; Sun et al., 2019) and regularization to mitigate catastrophic forgetting.", "In experience replay, models are retrained on old examples from previous tasks while learning new tasks.", "Those old examples are usually stored in a fixed size (Mi et al., 2020) or expanding (Huang et al., 2021) memory buffer.", "Besides replaying old examples, regularization on the hidden states (Wang et al., 2019; Han et al., 2020; Huang et al., 2021) or parameters (Mi et al., 2020) could be further added to prevent severe distortion.", "Another line of work is to create new parameters for new tasks while freezing parameters used by old tasks.", "In computer vision, progressive neural network (Rusu et al., 2016) continually adds new branches of parameters for new image classification tasks with lateral connections to facilitate forward knowledge transfer.", "Dynamically expandable network (Yoon et al., 2017) expands neural networks at neuron level by using regularization to restrict the number of added neurons.", "While allocating a big network in advance, PackNet (Mallya and Lazebnik, 2018) continually assigns a parameter subset to each task by network pruning.Li et al. (2019) employ neural architecture search (Liu et al., 2018) to optimize on new task's structure before learning new tasks.", "In language domain, prior work often utilizes adapter (Houlsby et al., 2019; Madotto et al., 2021; Ermis et al., 2022), which could be considered as task-specific MLPs inserted into frozen transformer layers.", "However, since all adapter modules are designed for only one specific task, no knowledge transfer is directly allowed in this case.", "Extra modules like attention module (Pfeiffer et al., 2021), capsule network (Ke et al., 2021), and hypernetworks (Jin et al., 2021) are demonstrated beneficial for knowledge transfer, but they need to introduce extra parameters and fail to consider any reusable or compositional modules.", "Avoiding privacy concerns, this work also follows a line of work that doesn't store real examples for experience replay, such as generating examples by GAN (Atkinson et al., 2018), synthesizing examples (Xu et al., 2022) by model-inversion (Smith et al., 2021b), and using unlabeled data in the learning environment (Smith et al., 2021a).", "In language domain, LAMOL (Sun et al., 2019) trains the language model to solve current tasks and generate 3654 current training examples simultaneously, then this model can generate pseudo old examples for replay before any new tasks.", "We adopt this pseudo experience replay along to alleviate the forgetting in the shared modules of our approach.", "Continual Learning for Sequence Generation Building on an auto-regressive language model, LAMOL (Sun et al., 2019) makes initial exploration on continual sequence generation.", "On the basis of LAMOL, knowledge distillation (Chuang et al., 2020; Sun et al., 2020) is shown to be effective via improving knowledge transfer while changing tasks.", "ARPER (Mi et al., 2020) combines regularization on parameters (Kirkpatrick et al., 2017) with prioritized exemplar replay.", "Keeping the pretrained model frozen, Madotto et al. (2021) added task-specific modules for each task together with a perplexity-based classifier, without taking into account the potential for knowledge transfer between different tasks.", "Instead of blindly adding new modules for new tasks, our approach can detect reusable modules and strategically add new adapter modules in those layers in which reusing old modules would lead to severe forgetting.", "Without introducing extra knowledge transfer modules, our approach enables knowledge transfer via module sharing.", "Task-specific Modules Traditional finetuning approaches (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019) usually modify all the parameters in large pretrained modules while learning downstream tasks.", "Recently, a line of work has been proposed to improve the parameter-efficiency of finetuning by inserting task-specific modules into freezing pretrained models.", "Adapter (Houlsby et al., 2019) inserts MLP layers into each transformer layer.", "PrefixTuning (Li and Liang, 2021) prepends key-value pairs to each transformer layer as activations.", "Prior work also shows that these task-specific modules might benefit from a more adaptive usage.", "For example, AdapterDrop (Rckl et al., 2021) shows that removing adapters from lower transformer layers can almost maintain the original performance while reducing computational overhead.", "Guo et al. (2021) leveraged latent variables to decide whether to skip adapter modules in certain transformer layers to speed up decoding.", "However, our approach goes beyond the notion of task-specific, recomposes reusable modules from different tasks, and learns compositional architectures for new coming tasks.", "Continual Generation Formulation Assuming multiple sequence generation tasks { T 1 ...T n } arrive in a stream, each task T i has a set of training examples { P i 1 , P i 2 ..., P ik } , where P ij denotes a ( input, output ) pair in Task i .", "While learning on task T i ( i > 2) , we have no access to examples from previous tasks.", "The final goal is to optimize the model's average performance on all tasks after training on the whole sequence.", "Finetuning In order to integrate different sequence generation tasks into a single framework, we use finetuning as a general strategy.", "On the basis of an autoregressive language model, the core idea is to feed the model input and train the model to generate the corresponding output subsequently.", "To distinguish between tasks, we add an extra question following every input to describe the purpose of each task.", "For example, the question for natural language generation tasks is What is the natural language form?", "Formally, for each ( input, question, output ) triple, the model is optimized to generate the corresponding output given input and question : L finetune ( x ) = n (cid:88) t = m +1 log P ( x t | x <t ) where x = { x 1 , ..., x n } denotes the concatenation of input , question and output , and { x 1 , ..., x m } refers to input and question .", "Adapter The module used in our framework refers to adapter (Houlsby et al., 2019), which is a task-specific module inserted into each frozen pretrained transformer layers (Vaswani et al., 2017).", "In addition to residual connection (He et al., 2016) and layer normalization (Ba et al., 2016), one transformer layer contains two primary sub-layers: an attention layer and a feed forward layer.", "One adapter module consists of two multi-layer perceptrons ( MLP ), one ( MLPMH ) following the multi-head attention layer and one ( MLPFF ) following the feed forward layer.", "Motivated by prior continual sequence generation work (Madotto et al., 2021) that uses Adapter (Houlsby et al., 2019) to insert new adapter module", "into every transformer layer for each new coming task, we propose to strategically decide whether we can reuse some adapter modules from old tasks before training on each new coming task, in a two-stage manner: decision stage and training stage, where the former determines the architecture for new tasks and the later trains the model.", "The decision stage aims to answer two questions: do we need to add a new module in this layer?", "If not, which old modules should we reuse?", "Inspired by interpolation-based data augmentation (Chen et al., 2020, 2021) and neural architecture search (Liu et al., 2018), we utilize Hidden State Mixing for module selection.", "Assume that there are several modules as potential candidates to be selected, after calculating their output separately, we calculate their weighted average as the overall output, which is then passed to the next part of the model (See the left part in Figure 2).", "After training the entire model end-to-end, we assume that the module with the largest learned weight is the most useful one, and thus will be selected for the reuse.", "Formally, assume that we already have inserted k modules into the l th transformer layer, each consisting of two MLPs: ( MLP 1 ,lMH , MLP 1 ,lFF ) ... ( MLP k,lMH , MLP k,lFF ) .", "At the beginning of decision stage, we add one more module ( MLP k +1 ,l MH , MLP k +1 ,l FF ) .", "Given these learnable weight coefficients [ 1 ,l , . . . , k +1 ,l ] , multi-head attention layer output o lmh , the feed forward layer output o lff , we mix the hidden states as follow: h lmh = k +1 (cid:88) t =1 t,l MLP t,lMH ( o lmh ) h l ff = k +1 (cid:88) t =1 t,l MLP t,l FF ( o l ff ) where both h lmh and h lff are then fed into their following Add & Norm layers.", "To ensure (cid:80) k +1 t =1 t,l = 1 , we use softmax function to produce 1 ,l , . . . , k +1 ,l from c 1 ,l , . . . , c k +1 ,l : i,l = e c i,l (cid:80) k +1 t =1 e c t,l , i = 1 . . . k + 1 Using this mixing approach in every transformer layer, we optimize our model using L train (see Sec 4.2) for the new task and find the most suitable modules for each layer.", "Note that", "(i) In this process, the pretrained model and all old modules are frozen, and only mixing coefficients and newly added modules will be learned.", "(ii) Calculating the weighted average is a convenient approximation of using one adapter at a time, which is the real setting during training stage and inference.", "(iii) Comparing to other baselines in Figure 1, introduced decision stage to decide the architecture does introduce extra computation, while computation of different MLPs at one position is parallelizable to speed up.", "To avoid the learned weight coefficient 1 ,l , . . . , k +1 ,l to be too close to a uniform distribution in certain layers, we further add an additional regularization term to L train , which is the sum of entropy of every discrete probability distribution [ 1 ,l , . . . , k +1 ,l ] : L entropy = (cid:88) l k +1 (cid:88) i =1 i,l log( i,l ) where is a coefficient tuned as a hyper-parameter.", "In this stage, a trivial solution could be allocating a new module in every layer regardless of whether old modules are reusable.", "To avoid this trivial solution and reuse shareable modules as much as possible, we design a prior using the initialization of the coefficient weights .", "For every l , c 1 ,l ...c k,l is initialized to c ( c > 0) , while c k +1 ,l is initialized to c .", "After softmax , the weight of each old module is e 2 c times the weight of the new module, increasing the tendency to reuse old modules.", "We further incorporate pseudo experience replay (Sun et al., 2019) to mitigate forgetting and facilitate knowledge transfer in those shared modules.", "The main idea is to teach a generative model to solve current task and to generate current task's examples simultaneously .", "Then before training on each new task, we can generate a set of pseudo old examples and replay them during training.", "Thus, in addition to the finetuning loss to solve each task, we introduce an extra loss L gen for the model to generate current task's examples.", "Formally, given the whole sequence of x = { input, question, output } , we first add a special token [GEN] at the beginning of x to form a new sequence x (cid:48) , and then optimize the model as follows: L gen ( x (cid:48) ) = n +1 (cid:88) t =1 log P ( x (cid:48) t | x (cid:48) <t ) 3656 Figure 2: Our proposed model architecture with adaptive compositional modules for transformer layers.", "Note that we use different special tokens for different tasks, thus we can generate examples for specified tasks afterwards.", "Combining with the finetune loss, the overall training loss is: L train = L finetune + L gen where is the weight for the L gen loss.", "Once our model has the ability to generate pseudo examples from old tasks, another question is When to generate pseudo examples? Since those pseudo examples are for shared modules between old tasks and the current task, we only generate them while some old modules are reused for the current task. In that case, we train our model using L train on the current dataset together with the generated examples. Otherwise, there is no need for pseudo experience replay and we just train our model using L train on the current dataset. 5 Experiments 5.1 Datasets Following Sun et al. (2019) and Chuang et al. (2020), we evaluate our approach on four representative sequence generation tasks: natural language generation, SQL query generation, summarization and task-oriented dialogue modeling. Specifically, we test our proposed approach under two common scenarios: (1) CL on similar tasks : in this case, the new coming tasks often share the same task pattern with learned tasks, but are from different domains. We use E2ENLG (Novikova et al., 2017) and four different domains (restaurant, hotel, tv, laptop) from RNNLG (Wen et al., 2015) to form five similar tasks. Then we use four different orders of these tasks as our testing task sequences. (2) CL on dissimilar tasks : in this case, the distribution shift between new tasks and old tasks could be relatively large, so the major challenge is to retain old knowledge as much as possible while learning new tasks. In this case, we further incorporate WikiSQL (SQL query generation, Zhong et al., 2017), CNN/DailyMail (news article summarization See et al., 2017), MultiWOZ (semantic state sequence generation (Budzianowski et al., 2018)) into our task sequences 1 . We randomly pick four different orders as our testing task sequences. In total, we use eight different task sequences (Table 1) to evaluate our models. The statistics/metrics for each dataset and the finetuing results are in Appendix A. 1 We use e2e for E2ENLG, rest for RNNLG (restau-rant), hotel for RNNLG (hotel), tv for RNNLG (tv), laptop for RNNLG (laptop), wiki for WikiSQL, cnn for CNN/DailyMail, woz for MultiWOZ.", "We compare our proposed model with the following baselines:", "(i) Finetune (Yogatama et al., 2019): We finetuned GPT-2 model on several tasks sequentially.", "(ii) EWC (Kirkpatrick et al., 2017) added regularization on parameters according to their importance to old tasks.", "(iii) LAMOL (Sun et al., 2019) finetuned the whole GPT-2 model continually with the help of pseudo experience replay.", "(iv) Adapter+CL (Madotto et al., 2021) inserted adapter (Houlsby et al., 2019) modules into every GPT-2's layer for each task.", "(v) Adapter+Drop (Rckl et al., 2021): We removed all those adapter modules from the first three layers in GPT-2 based on Adapter+CL.", "(vi) Adapter+LAMOL : We only inserted adapter modules into every transformer layer for the first task, then used those adapter modules to learn the whole the task sequence with pseudo experience replay.", "Note that ARPER (Mi et al., 2020) also tackles the problem of continual sequence generation, but it needs an extra memory buffer to store examples from old tasks, which is not comparable with ours.", "Implementation Details We use GPT-2 (Rad-ford et al., 2019) in HugginceFace Transformers (Wolf et al., 2020) as our backbone and adapter implementation by AdapterHub (Pfeiffer et al., 2020).", "More details can be found in Appendix A. 6 Results and Analysis To evaluate the overall performance on all tasks, we use the mean of all tasks' performance score following Sun et al. (2019); Mi et al. (2020); Madotto et al. (2021).", "For each scenario ( similar tasks and dissimilar tasks), we report the average of mean scores on all sequences as an overall metric.", "Beyond these, we also provide", "(i) evaluation results using geometric mean and", "(ii) final performance of each task in Appendix A. Table 2 summarizes the final performance on all eight task sequences.", "We observed that finetuning sequentially suffered from very severe forgetting, no matter on similar or dissimilar tasks, highlighting the importance of continual learning work.", "Though EWC can significantly increase the performance of finetuning, its performance is still far behind LAMOL, highlighting the importance of experience replay.", "For sequences containing similar tasks, the performance of Adapter+CL is inferior to Adapter+LAMOL even with more learnable parameters.", "This indicates that sharing parameters and experience replay can further facilitate knowledge transfer when tasks are similar.", "On the premise of pseudo experience replay, our method performs better than Adapter+LAMOL, demonstrating the effectiveness of our adaptive compositional architecture.", "Our approach also achieves a much higher parameter efficiency than Adapter+CL and Adapter+Drop.", "For sequences containing dissimilar tasks where the transferable knowledge is limited and parameter sharing might cause degradation, Adapter+CL and Adapter+Drop seem more robust compared to Adapter+LAMOL and LAMOL, since they avoid catastrophic forgetting by parameter isolation.", "Using a similar number of parameters to Adapter+Drop, our method outperforms Adapter+CL consistently on all task sequences, confirming that our method can prevent interference between dissimilar tasks while reducing parameter redundancy.", "We randomly selected task sequence #1 from similar tasks and sequence #8 from sequences of dissimilar tasks for our ablation studies.", "Importance of Each Component To examine the importance of each component in our method, we experiment with different settings: not using entropy loss (w/o Entropy Loss), initializing all weight coefficients with zero (w/o Weight Ini), and not replaying pseudo data (w/o Pseudo ER).", "As shown in Table 3, we found that", "(i) After removing entropy loss, the performance on sequence #1 is almost maintained by using more parameters.", "Meanwhile, the performance on sequence #8 drops significantly while using the same number of parameters.", "This observation suggests that the en-3658 Methods Finetune EWC LAMOL Adapter+CL Adapter+Drop Adapter+LAMOL Ours PseudoExperience Replay (cid:55) (cid:55) (cid:51) (cid:55) (cid:55) (cid:51) (cid:51) Similar Tasks # 1 43.0 56.9 66.3 64.2 63.9 65.9 66.1 # 2 37.0 47.9 67.0 64.2 63.9 66.2 66.5 # 3 51.7 61.4 66.6 64.2 63.9 65.6 65.8 # 4 45.0 58.3 66.6 64.2 63.9 65.2 65.7 Avg Performance 44.2 56.2 66.6 64.2 63.9 65.7 66.0 Avg Learnable Para.", "tropy loss is beneficial to achieve a better trade-off between adding parameters and maintaining good performance.", "(ii) When we initialize all weight coefficients with zero, there is no explicit tendency to reuse old examples.", "In this case, many redundant modules are created thus preventing knowledge transfer, which leads to performance drop on both sequences.", "The drop on sequence #1 is more severe due to there is more transferable knowledge between similar tasks.", "We therefore conclude that weight initialization is important to enable knowledge transfer between similar tasks.", "(iii) Removing pseudo experience replay leads to the most severe performance drop on both sequences.", "Though our approach strategically detect which modules can be reused, directly training them on new tasks without protecting old knowledge will lead to catastrophic Length Adapter+CL Adapter+LAMOL Ours 2 Tasks(#1) 56.8 (+0.0) 57.5 (+0.8) 57.7 (+0.9) 3 Tasks(#1) 59.5 (+0.0) 60.3 (+0.6) 60.1 (+0.5) 4 Tasks(#1) 62.3 (+0.0) 63.5 (+1.3) 63.7 (+1.6) 5 Tasks(#1) 64.2 (+0.0) 65.9 (+2.0) 66.1 (+2.1) 2 Tasks(#8) 45.4 (+0.0) 46.2 (+1.3) 46.0 (+1.2) 3 Tasks(#8) 51.3 (+0.0) 51.9 (+0.8) 52.3 (+0.9) 4 Tasks(#8) 50.9 (+0.0) 49.7 (-1.7) 51.8 (+0.6) 5 Tasks(#8) 57.3 (+0.0) 53.8 (-4.6) 58.2 (+0.5) Table 4: Impact of the task sequence length.", "Impact of Task Sequence Length Prior work in continual learning (Madotto et al., 2021; Huang et al., 2021) suggests that the differences in sequence length could influence the performance of continual learning.", "To this end, we further investigated the impact of sequence length in Table 4, where we reported the average performance at every step and calculated Backward Transfer following Lopez-Paz and Ranzato (2017): BW T k = 1 k 1 E i =1 ...k 1 ( R k,i R i,i ) where R i,j is the performance score on the j th task after training on the i th task.", "We found that, on sequence #1, Adapter+LAMOL and our method consistently outperform Adapter+CL in all stages, which could be explained by better knowledge transfer between multiple tasks.", "Beyond that, our method outperforms Adapter+LAMOL in most cases, demonstrating the benefits of adaptively adding modules.", "On sequence #8, Adapter+LAMOL struggles when the length of task sequence becomes longer.", "As more and more tasks arrive, the impact of task dissimilarity and distribution shift gets larger that pseudo experience replay cannot cope with.", "In that case, there is limited backward transfer but severe forgetting.", "In contrast, Adapter+CL and our method demonstrate their robustness after learning more tasks in a stream.", "Our method also outperforms Adapter throughout the learning process, demonstrating we can enable knowledge transfer even the similarity between tasks is limited.", "Case Study We selected e2e in sequence #1 and wiki in sequence #8 as two representative tasks to illustrate the final output generated by different approaches in Table 5.", "After training on the whole sequence, Adapter+LAMOL cannot correctly convey the information provided in the input, suffering from generating grammar mistakes and missing key points.", "This could be attributed to the interference from learning new coming tasks.", "While Adapter+CL successfully mitigate this problem by parameter isolation, our approach works similarly using less parameters and generates better sequences without redundant information.", "To illustrate the process of adding/reusing modules, we depict the model architecture at each stage in Fig 3 using sequence #4, which is the most challenging sequence containing similar tasks according to Table 2.", "Since the similarity between the second task (e2e) and the first task (hotel) is low (see Figure 4 in Appendix A), our framework automatically learns to add extra adapter modules in layer { 6 , 8 , 9 , 10 , 11 } before training on the second task.", "When the third task (rest) arrives, given its high similarity to the first task, our method correctly decides to reuse all modules used in the first task.", "Interestingly, the architecture for the fourth task is composed of shared modules with the first 3 tasks in layer { 1 , 2 , 3 , 4 , 5 , 7 , 12 } , shared module with the second task in layer 6 , shared the mod-3660 E2E NLG (#1): name[Strada], eatType[coffee shop], area[city centre] Reference There is a coffee shop in the city centre called the Strada.", "ule with the first and the third task in layer 8 , and also added new modules for the fourth task in layer { 9 , 10 , 11 } .", "For the fifth task, our method reuses all modules used by the fourth tasks due to their high similarity.", "This demonstrates that our method is adaptive to different incoming tasks and is able to compose modules from different old tasks for new tasks.", "We also provide a comparison in Appendix B to demonstrate the effect of reusing modules from different transformer layers.", "This work examined continual sequence generation with adaptive compositional modules, where we proposed hidden state mixing to adaptively compose old and new modules for new tasks and utilized pseudo experience replay to facilitate knowledge transfer.", "Experiments conducted on various sequence generation tasks demonstrated that our method achieves better performances with higher parameter efficiency over previous state-of-the-art baselines, both on similar task sequences and dissimilar task sequences.", "Our work is also subject to a few limitations such as the introduced extra training time.", "In the future, we plan to investigate how to further speed up the decision stage more efficiently and generalize the current framework to more diverse NLP tasks such as text classification and machine translation.", "We would like to thank the anonymous reviewers for their helpful comments, and the members of Georgia Tech SALT group for their feedback.", "This work is funded in part by Salesforce and Cisco." ]
[ "abstain", "abstain", "objective", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "other", "method", "other", "objective", "other", "other", "method", "other", "other", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "objective", "other", "other" ]
[ "Maintaining a consistent personality in conversations is quite natural for human beings, but is still a non-trivial task for machines.", "The persona-based dialogue generation task is thus introduced to tackle the personality-inconsistent problem by incorporating explicit persona text into dialogue generation models.", "Despite the success of existing persona-based models on generating human-like responses, their one-stage decoding framework can hardly avoid the generation of inconsistent persona words.", "In this work, we introduce a three-stage framework that employs a generate-delete-rewrite mechanism to delete inconsistent words from a generated response prototype and further rewrite it to a personality-consistent one.", "We carry out evaluations by both human and automatic metrics.", "Experiments on the Persona-Chat dataset show that our approach achieves good performance.", "In an open-domain conversation scenario, two speakers conduct open-ended chit-chat from the initial greetings and usually come to focus on their characteristics, such as hobbies, pets, and occupations, etc., in the course of the conversation.", "For humans, they can easily carry out conversations according to their personalities (Song et al., 2019a), but fulfilling this task is still a challenge for recent neural dialogue models (Welleck et al., 2019).", "One main issue is that these models are typically trained over millions of dialogues from different speakers, and the neural dialogue models have a propensity to mimic the response with the maximum likelihood in the training corpus (Li et al., 2016b), which results in the frequent inconsistency in responses (Zhang et al., 2018).", "Another issue This work was done when the first author was an intern at Tencent AI Lab.", "is the user-sparsity problem (Qian et al., 2017) in conventional dialogue corpora (Serban et al., 2015).", "Some users have very few dialogue data, which makes it difficult for neural models to learn meaningful user representations (Li et al., 2016b).", "To alleviate the above issues, Zhang et al. (2018) introduced the Persona-Chat dataset to build more consistent dialogue models.", "Different from conventional dialogue corpora, this dataset endows dialogue models with predefined personas, which is in the form of textually described profile (as shown in the first line of Figure 1).", "The persona-based dialogue models also adopt an encoder-decoder architecture and are enhanced with persona encoding components, such as memory network (Sukhbaatar et al., 2015) and latent variable (Kingma and Welling, 2013).", "These models turn out to produce more consistent responses than the persona-free ones (Zhang et al., 2018; Song et al., 2019a).", "Despite the successful application of the encoder-decoder framework in persona-based dialogue models, one concern is that they lack extra attention to the key persona information.", "The model will learn to minimize the overall loss of every decoded word, but this may lead to the neglect of the key personas: change of one persona-related word may not significantly affect the overall loss, but could turn a good response into a totally inconsistent one.", "As shown in Stage 1 of Figure 1, only one improper word Colorado leads the response to be inconsistent.", "A desirable solution should be able to capture personas and automatically learn to avoid and re-fine inconsistent words before the response.", "In this paper, we present a Generate-Delete-Rewrite framework, GDR, to mitigate the generation of inconsistent personas.", "We design three stages specifically for the goal of generating persona consistent dialogues: The first Generate stage adopts a transformer-based generator to produce a persona-based response prototype.", "The second Delete stage employs a consistency matching model to identify inconsistencies and delete (by masking) the inconsistent words from the prototype.", "Finally, in the Rewrite stage, a rewriter polishes the masked prototype to be more persona consistent.", "To examine the effectiveness of our GDR model, we carried out experiments on the public available Persona-Chat dataset (Zhang et al., 2018).", "A three-stage end-to-end generative framework, GDR, was proposed for the generation of persona consistent dialogues.", "A matching model was integrated into the generation framework to detect and delete inconsistent words in response prototype.", "Experimental results show the proposed approach outperforms competitive baselines on both human and automatic metrics.", "End-to-end dialogue generation approaches are a class of models for building open-domain dialogue systems, which have seen growing interests in recent years (Vinyals and Le, 2015; Shang et al., 2015; Serban et al., 2016; Li et al., 2016c; Zhao et al., 2017; Li et al., 2017).", "These dialogue models adopted recurrent units in a sequence to sequence ( seq2seq ) fashion (Sutskever et al., 2014).", "Since the transformer has been shown to be on par with or su-perior to the recurrent units (Vaswani et al., 2017), some dialogue models began to take advantage of this architecture for better dialogue modeling (Di-nan et al., 2018; Su et al., 2019).", "Besides the advancements in dialogue models, the emergence of new dialogue corpus has also contributed to the research field.", "Zhang et al. (2018) introduced the Persona-Chat dataset, with explicit persona texts to each dialogue.", "Based on seq2seq model and memory network, they further proposed a model named Generative Profile Memory Network for this dataset.", "Following this line, Yavuz et al. (2019) designed the DeepCopy model, which leverages copy mechanism to incorporate persona texts.", "Song et al. (2019a) integrated persona texts into the Per-CVAE model for generating diverse responses.", "However, the persona-based models still face the inconsistency issue (Welleck et al., 2019).", "To model the persona consistency, Welleck et al. (2019) annotated the Persona-Chat dataset and introduced the Dialogue Natural Language Inference (DNLI) dataset.", "This dataset converts the detection of dialogue consistency into a natural language inference task (Bowman et al., 2015).", "Personalized dialogue generation is an active research field (Li et al., 2016b; Qian et al., 2017; Zhang et al., 2018; Zheng et al., 2019a,b; Zhang et al., 2019).", "In parallel with this work, Song et al. (2019b) leveraged adversarial training to enhance the quality of personalized responses.", "Liu et al. (2020) incorporated mutual persona perception to build a more explainable (Liu et al., 2019) dialogue agent.", "Other relevant work lies in the area of multi-stage dialogue models (Lei et al., 2020).", "Some retrieval-guided dialogue models (Weston et al., 2018; Wu et al., 2019; Cai et al., 2019a,b; Su et al., 2020) also adopted a multi-stage framework, but the difference from our work is obvious: we generate the prototype rather than retrieve one.", "A high-quality retrieved response is not always available, especially under the persona-based setting.", "In this work, we consider learning a generative dialogue model to ground the response with explicit persona.", "We focus on the persona consistency of single-turn responses, and we leave the modeling of multi-turn persona consistency as future work.", "Formally, we use uppercase letters to represent sentences and lowercase letters to represent words.", "Let Q = q 1 , q 2 , ..., q n denotes the input query with n words, and let P = { P (1) , P (2) , ..., P ( k ) } Masked Prototype self-attention feed forward feed forward masked self-attention persona attn query attn feed forward query persona targets (shifted right) NG x NG x encodedquery encodedpersona Response Prototype response prototype persona (1) Generate self-attention feed forward feed forward self-attention (2) Delete targets (shifted right) Output Response (3) Rewrite ND x ND x NR x persona-query attention masked self-attention persona attention feed forward prototype attention encodedpersona encodedmaskedprototype Figure 2: The overall architecture of our three-stage GDR model, including a prototype generator (Generate stage), a consistency matching model (Delete stage), and a masked prototype rewriter (Rewrite stage).", "be the k different persona texts, where P ( i ) = p ( i ) 1 , p ( i ) 2 , ..., p ( i ) m i is the i -th persona text with m i words.", "Our goal is to learn a dialogue model M to generate a response Y = y 1 , y 2 , ..., y k , which is consistent with the persona, based on both query Q and persona P .", "In abbreviation, Y = M ( Q, P ) .", "More concretely, as shown in Figure 2, the proposed model M consists of three parts: 1) Prototype generator G. This component takes persona texts and query as input and generates a response prototype for further editing.", "It adopts an encoder-decoder architecture (Sutskever et al., 2014), with the transformer (Vaswani et al., 2017) applied in both the encoder and the decoder.", "2) Consistency matching model D. This model is designed to detect and delete those words in the prototype that could lead to inconsistency.", "We train this model in a natural language inference fashion on the DNLI (Welleck et al., 2019) dataset.", "3) Masked prototype rewriter R. The rewriter learns to rewrite the response prototype to a more consistent one.", "It is also a transformer decoder, which adopts a similar architecture as the decoder of G. The difference lies in that it takes the masked prototype, rather than the query, as input.", "We apply the encoder-decoder structure to build our prototype generator G. For the encoder, we use the self-attentive encoder in the transformer.", "For the decoder, built upon the transformer decoder, we propose a tuple-interaction mechanism to model the relations among persona, query, and response.", "As the persona P is composed of several sentences, we unfold all words in P into a sequence p , p , ..., p ( i ) m j , ..., p ( k ) m k .", "Then we use the self-attentive encoder (Vaswani et al., 2017) to compute the representations of the persona texts and query separately.", "The multi-head attention is defined as MultiHead ( Q, K, V ) , where Q , K , V are query, key, and value, respectively.", "The encoder is composed of a stack of NG identical layers.", "Take the first stack encoding of P for example: V (1) p = MultiHead ( I ( P ) , I ( P ) , I ( P )) , (1) O (1) p = FFN ( V (1) p ) , (2) FFN ( x ) = max (0 , xW 1 + b 1 ) W 2 + b 2 , (3) where V (1) is the first layer result of the multi-head self-attention and I ( ) is the embedding function of the input.", "The input embedding for word w i is the sum of its word embedding and position embedding.", "O (1) denotes the output of the first layer feed-forward network.", "For other layers: V ( n ) p = MultiHead ( O ( n 1) p ) , O ( n 1) p ) , O ( n 1) p ) , (4) O ( n ) p = FFN ( V ( n ) p ) , (5) where n = 2,..., NG .", "We applied layer normalization to each sublayer by LayerNorm ( x + Sublayer ( x )) .", "Q is encoded in the same way.", "After NG identical layers, we can get the final representations O ( NG ) p and O ( NG ) q , where O ( NG ) p and O ( NG ) q are the encoded persona and encoded query, respectively.", "In the decoding phase, there are three types of information, persona P , query Q , and response Y , which make up a tuple ( P , Q , Y ).", "Accordingly, three inter-sentence relations need to be considered: (1) The alignment between Q and Y is beneficial to yield better results (Bahdanau et al., 2014).", "(2) As the persona is composed of several sentences and describes different aspects, we need to find the most relevant persona information according to the relations between P and Y. (3) We also want to know whether the query needs to be answered with the given persona.", "Thus we should take the relations between P and Q into account.", "Considering the above factors, we design a two-layer tuple-interaction mechanism in the decoder, as shown in the first part of Figure", "2. There are three attentions in two layers: query attention (Q-Attn) and persona attention (P-Attn) in the first layer, and persona-query attention (PQ-Attn) in the second layer.", "NG such identical layers compose of the decoder.", "For the first layer: V (1) y = MultiHead ( I ( Y ) , I ( Y ) , I ( Y )) , (6) E (1) = MultiHead ( V (1) y , O ( NG ) p , O ( NG ) p ) , (7) F (1) = MultiHead ( V (1) y , O ( NG ) q , O ( NG ) q ) , (8) T (1) = MultiHead ( E (1) , F (1) , F (1) ) , (9) O (1) dec = FNN ( mean ( E (1) , F (1) , T (1) )) , (10) where E (1) and F (1) are the results of the first layer P-Attn and Q-Attn.", "T (1) is the result of the first layer PQ-Attn.", "O (1) dec denotes the first layer output.", "Note that the Y here is masked to ensure depending only on the known words (Vaswani et al., 2017).", "Repeatedly, for other layers: V ( n ) y = MultiHead ( O ( n 1) dec ) , O ( n 1) dec ) , O ( n 1) dec ) , (11) O ( n ) dec = FNN ( mean ( E ( n ) , F ( n ) , T ( n ) )) , (12) where n = 2,..., NG .", "After NG layers, the decoder output O ( NG ) dec is projected from hidden size to vocabulary size, then followed up by a softmax function to get the words' probabilities: Prob (1) = SoftMax ( O ( NG ) dec W 3 + b 3 ) , (13) where W 3 is a hidden size vocabulary size weight matrix and b 3 is the bias term with vocabulary size dimension.", "And Prob (1) denotes the output distribution of the first stage.", "Now we can get the response prototype Y (1) from the Prob (1) .", "The goal of the consistency matching model D is to reveal word-level consistency between the persona texts and the response prototype, thus the inappropriate words can be deleted from the prototype.", "This model is trained to estimate the sentence-level entailment category (Bowman et al., 2015) of a response for the given persona texts, which includes entailment , neutral and contradiction .", "The key is that if the category is not entailment , we can delete the most contributing words by replacing them with a special mask token, thus giving the model a chance to rephrase.", "The attention weights can measure each word's contribution.", "The architecture of our consistency matching model is shown in Figure", "3. From bottom to top are the self-attentive encoding layer, cross attention layer, and consistency matching layer.", "As described in section 3.2, the self-attentive encoder (SAE ( ) ) performs self-attention over the input to get sentence representations.", "Because the task of consistency matching is quite different from dialogue generation, we did not share the encoders between the generator G and matching model D : A = SAED ( P ) , (14) B = SAED ( Y (1) ) , (15) where A is a hidden size n matrix.", "A = [ a 1 , a 2 , ..., a n ] and B = [ b 1 , b 2 , ..., b m ] .", "The n and m are the number of words in persona P and response prototype Y (1) .", "Here we applied average pooling stragety (Liu et al., 2016; Chen et al., 2017) to get the summary representations: a 0 = n (cid:88) i =1 a i n , (16) and we can get the response attention weights and attentive response representations by: W b = a (cid:62) 0 B , (17) (cid:101) B = W b B (cid:62) , (18) where W b is attention weights and (cid:101) B is response representations.", "Similarly, we can get W a and (cid:101) A. Once (cid:101) A and (cid:101) B are generated, three matching methods (Chen et al., 2017) are applied to extract relations: concatenation, element-wise product, element-wise difference.", "The results of these matching methods are concatenated to feed into a multi-layer perceptron, which has three layers and tanh activation in between.", "The output is followed up by a SoftMax function to produce probabilities.", "In the inference process, as shown in Figure 3, the response attention weights W b is leveraged to illustrate the inconsistent words, which will be deleted 1 .", "In practice, we use a simple heuristic rule for deleting words: as long as the category is not entailment , we will delete 10% of the words (at least one word) 2 , with the highest attention weight, in the prototype Y (1) .", "In this way, we get the masked prototype Y (2) .", "The rewriter R takes the masked prototype and persona texts as input and outputs the final response.", "R is also a transformer decoder, which is similar to the decoder of G in section 3.2, but with a minor difference: the masked prototype is close to the target response, thus the direct attention between the prototype and target response is needless.", "The architecture of R can be seen in the third part of Figure 2, which can be formalized as: O ( NG ) mp = SAEG ( Y (2) ) , (19) V ( n ) = MultiHead ( O ( n 1) rw ) , O ( n 1) rw ) , O ( n 1) rw ) , (20) S ( n ) = MultiHead ( V ( n ) , O ( NG ) p , O ( NG ) p ) , (21) K ( n ) = MultiHead ( S ( n ) , O ( NG ) mp , O ( NG ) mp ) , (22) O ( n ) rw = FNN ( mean ( S ( n ) , K ( n ) )) , (23) 1 In this paper, delete a word means replacing this word with a special mask token.", "2 In our experiments, we found that deleting more words made it difficult for rewriter R to learn.", "where O ( NG ) mp is the encoded masked prototype and SAEG is the self-attentive encoder of G. O ( NG ) p is the encoded persona.", "After NR identical layers, the same generation process as in G is applied to the O ( NR ) rw , and we can get the final response Y (3) .", "The consistency matching model D is trained separately from the prototype generator G and rewriter R. As forementioned, the matching model D is trained in a natural language inference fashion on the DNLI dataset (Welleck et al., 2019), which has been well defined by the previous studies (Bowman et al., 2015; Chen et al., 2017; Gong et al., 2018).", "We minimize the CrossEntropy loss between the outputs of D and the ground truth labels.", "The G and R share the same training targets.", "We trained them by the standard maximum likelihood estimate.", "Notice that there are two different deleting strategies in training: (1) In the warm-up phase, because the G can hardly generate high-quality prototypes at this period, we randomly delete each word, with a 10% probability, from the prototype.", "(2) After that, the trained consistency matching model D is leveraged to delete words.", "We carried out the persona-based dialogue generation experiments on the public available Persona-Chat dataset (Zhang et al., 2018).", "Furthermore, we trained the consistency matching model on the recently released Dialogue Natural Language Inference (DNLI) dataset (Welleck et al., 2019).", "We show the statistics of the Persona-Chat dataset in Table", "1. The DNLI dataset (Welleck et al., 2019) is an enhancement to the Persona-Chat.", "It is composed of persona-utterance pairs from the Persona-Chat, and these pairs are further labeled as entailment , neutral , and contradiction .", "Some statistics of this dataset are given in Table", "2. 4.2 Compared Models To the best of our knowledge, this is an early work in modeling explicit persona consistency.", "To show the effectiveness of our models, we mainly compare it with the persona-based dialogue models: S2SA S2SA is an RNN-based attentive seq2seq model (Bahdanau et al., 2014).", "It only takes the query as input.", "Per-S2SA This is a seq2seq model that prepends all persona texts to the query as input (Zhang et al., 2018).", "GPMN Generative Profile Memory Network is an RNN-based model that encodes persona texts as individual memory representations in a memory network (Zhang et al., 2018).", "DeepCopy An RNN-based hierarchical pointer network, which leverages copy mechanism to integrate persona (Yavuz et al., 2019).", "Per-CVAE This is a memory augmented CVAE model to exploit persona texts for diverse response generation (Song et al., 2019a).", "Transformer Different from the RNN-based models, transformer is a self-attention based sequence transduction model (Vaswani et al., 2017).", "The persona texts are concatenated to the query to serve as its input.", "For all the RNN-based baseline models, they are implemented by two-layer LSTM networks with a hidden size 512.", "For the Transformer, the hidden size is also set to 512, and the layers of both encoder and decoder are", "3. The number of heads in multi-head attention is 8, and the inner-layer size of the feedforward network is 2048.", "The word embed-dings are randomly initialized, and the embedding dimension of all models is set to 512.", "Our model applies the same parameter settings as the transformer.", "The number of layers NG = ND = NR = 3 .", "G and R share the word embed-dings, but the matching model D uses independent embeddings.", "We use token-level batching with a size 4096.", "Adam is used for optimization, and the warm-up steps are set to 10,000.", "We implemented the model in OpenNMT-py (Klein et al., 2017).", "In the evaluation, there are two essential factors to consider: persona consistency and response quality .", "We apply both human evaluations and automatic metrics on these two aspects to compare different models.", "Human Evaluation We recruit five professional annotators from a third-party company.", "These annotators have high-level language skills but know nothing about the models.", "We sampled 200 persona-query-response tuples per model for evaluation.", "Duplicated queries (such as greetings which appear more than once) will not be sampled twice.", "First, we evaluate the persona consistency of a response.", "The annotators are asked to decide whether the response is consistent with the given persona.", "0 indicates irrelevant or contradictory and 1 indicates consistent (Const.).", "Second, we evaluate the quality of a response on three conventional criteria: fluency (Fluc.), relevance (Relv.), and informativeness (Info.).", "Each aspect is rated on five-scale, where 1, 3, and 5 indicate unacceptable, moderate, and excellent performance, respectively.", "2 and 4 are used for unsure.", "Automatic Metrics Dziri et al. (2019) has shown that natural language inference based entailment ratio can be used as an indicator of dialogue consistency.", "Here we trained two well-performed NLI models, DIIN (Gong et al., 2018) and BERT (De-vlin et al., 2019), to automatically predict the category of persona-response pairs, and we calculated the ratio of entailment as an additional reference to the persona consistency.", "In our experiments, DIIN and BERT achieved 88.78% and 89.19% accuracy on the DNLI test set, respectively, compared with previous best results 88.20%.", "The trained models are then leveraged for calculating entailment ratios.", "Two model-based entailment ratios are abbreviated as Ent diin and Ent bert .", "For dialogue quality, we follow Zhang et al. (2018) to use perplexity (PPL) to measure the fluency of responses.", "Lower perplexity means better fluency.", "Besides, we also use Dist-1 / Dist-2 (Li et al., 2016a) to examine the model's ability to generate diverse responses, which is the ratio of distinct uni-grams / bi-grams.", "We report the main evaluation results in Table", "3. Compared with baseline methods, our GDR model obtains the highest consistency score of 49.2% in human evaluation, which is significantly better than other methods.", "The target responses in the sampled data are also annotated, and 65.4% of them expressed persona information.", "Moreover, the two model-based entailment ratios, which are calculated on the whole test set, also prove the effectiveness of our GDR model.", "Although the two NLI models differ in results, our GDR model ranks first under the evaluation of both DIIN and BERT.", "For dialogue quality, our proposed model has a remarkably lower perplexity of 16.7 than all other baseline methods.", "An analysis can be seen in Section 4.6.", "Besides, our distinct-2 metric is even significantly better than the Per-CVAE model, which is designed to generate diverse responses.", "Additionally, we carried out pairwise response comparison to see the dialogue quality gains.", "We report the results in Table", "4. While the GDR model significantly improves persona consistency, it can still generate high-quality responses like the transformer and GPMN.", "As the proposed model achieves better performance than baseline methods, we turn to ablation tests to further quantify the contributions made by different components.", "We ablated our model through several different approaches: GR It removes the matching model D, i.e., generates a prototype and rewrites it directly.", "GRdR This approach replaces the matching model D with 10% random deleting (Rd), thus to see if the masked prototype, extracted by our matching model D, is beneficial.", "G Our model's generator, without further consistency matching and rewriting.", "T It is a transformer generator but removes the tuple-interaction in section 3.2 and directly concatenates persona texts to the query.", "This model is equivalent to the vanilla transformer.", "We report the results in Table", "5. First, we look into which components contribute to the consistency.", "As seen, from T, G, GR to GDR, every step has an observable improvement in Const.", ", indicating the effectiveness of our model's design.", "Both the tuple-interaction in G and the rewriting process in R contribute to the improvements of persona consistency.", "The GRdR approach, with nothing different from GDR but a random deleting strategy, serves as a foil to our GDR model, which indicates a well-learned consistency matching model is of Model Const.", "great benefit to our three-stage generation framework to generate persona consistent dialogues.", "Second, we investigated the improvement of our perplexity.", "As we can see, the one-stage transformer approaches G and T have a perplexity higher than 26.", "In contrast, after we add the rewriter R, the perplexity of all approaches has a significant decline, no matter whether there is a matching model D. Lower perplexity means lower cross-entropy, which indicates the responses from the models have more ground truth words.", "To some extent, perplexity verifies the human evaluation results of the consistency.", "One reason for this improvement could be that the rewriter works like a denoising autoencoder (Vincent et al., 2008), and it can focus more on the reconstruction of the missing information of sequence itself, rather than learning to map a sequence to an entirely different one.", "We observed that the relevance scores of GR, GRdR, and G are a little inferior to the T. Even the GDR model is not significantly better than T on the relevance score.", "One plausible explanation is that all these models are specially designed for integrating persona information, although they obtain much better consistency score, it may come at the cost of relevance score.", "Moreover, we compared the GDR's response quality with three ablated models and reported it in Table 6.", "As we can see, the deleting and rewriting, which are designed for improving consistency, also have a positive effect on the dialogue quality.", "At last, we presented some generated examples Persona i.", "in Table 7, together with the visualization of attention weights from match module D. In the first case, although the generated prototype is neutral regarding the persona, the word nurse is still masked according to our strategy.", "And after the rewriting stage, the final response expresses persona.", "In the second case, the prototype is potentially contradictory to the persona, and the keyword is successfully deleted by the matching model D. In the third case, the prototype is consistent with the persona, and no word is deleted.", "As a result, the final output response is the same as the output of no deletion model GR.", "In these cases, both consistency and quality are improved after the final rewriting.", "In this paper, we presented a three-stage framework, Generate-Delete-Rewrite, for persona consistent dialogue generation.", "Our method adopts transformer architecture and integrates a matching model to delete the inconsistent words.", "Experiments are carried out on public-available datasets.", "Both human evaluations and automatic metrics show that our method achieves remarkably good performance.", "In the future, we plan to extend our approach to improve the consistency of multi-turn dialogues.", "This paper is supported by the National Natural Science Foundation of China under Grant No.61772153 and No.61936010.", "Besides, we want to acknowledge the Heilongjiang Province Art Planning Project 2019C027 and the Heilongjiang Province Social Science Research Project 18TQB100.", "We also would like to thank all the anonymous reviewers for their helpful comments and suggestions." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "method", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "objective", "other", "other", "other" ]
[ "Text style transfer rephrases a text from a source style (e.g., informal) to a target style (e.g., formal) while keeping its original meaning.", "Despite the success existing works have achieved using a parallel corpus for the two styles, transferring text style has proven significantly more challenging when there is no parallel training corpus.", "In this paper, we address this challenge by using a reinforcement-learning-based generator-evaluator architecture.", "Our generator employs an attention-based encoder-decoder to transfer a sentence from the source style to the target style.", "Our evaluator is an adversarially trained style discriminator with semantic and syntactic constraints that score the generated sentence for style, meaning preservation, and fluency.", "Experimental results on two different style transfer tasks (sentiment transfer and formality transfer) show that our model outperforms state-of-the-art approaches.", "Furthermore, we perform a manual evaluation that demonstrates the effectiveness of the proposed method using subjective metrics of generated text quality.", "Text style transfer is the task of rewriting a piece of text to a particular style while retaining the meaning of the original text.", "It is a challenging task of natural language generation and is at the heart of many recent NLP applications, such as personalized responses in dialogue system (Zhou et al., 2017), formalized texts (Rao and Tetreault, 2018), cyberspace purification by rewriting offensive texts (Niu and Bansal, 2018; Santos et al., 2018), and poetry generation (Yang et al., 2018).", "Recent works on supervised style transfer with a parallel corpus have demonstrated considerable success (Jhamtani et al., 2017b; Rao and Tetreault, 2018).", "However, a parallel corpus may not always be available for a transfer task.", "This has prompted studies on style transfer without parallel corpora.", "These hinge on the common idea of separating the content from the style of the text (Shen et al., 2017; Fu et al., 2018; Santos et al., 2018).", "This line of research first encodes the context via a style-independent representation, and then transfers sentences by combining the encoded content with style information.", "In addition, an appropriate training loss is chosen to change the style while preserving the content.", "However, these approaches are limited by their use of loss functions that must be differentiable with respect to the model parameters, since they rely on gradient descent to update the parameters.", "Furthermore, since focusing only on semantic and style metrics in style transfer, they ignore other important aspects of quality in text generation, such as language fluency.", "In this paper, we propose a system trained using reinforcement-learning (RL) that performs text style transfer without accessing to a parallel corpus.", "Our model has a generator-evaluator structure with one generator and one evaluator with multiple modules.", "The generator takes a sentence in a source style as input and transfers it to the target style.", "It is an attention-based sequence-to-sequence model, which is widely used in generation tasks such as machine translation (Luong et al., 2015).", "More advanced model such as graph-to-sequence model can also exploited for this generation task (Xu et al., 2018b).", "The evaluator consists of a style module, a semantic module and a language model for evaluating the transferred sentences in terms of style, semantic content, and fluency, respectively.", "Feedback from each evaluator is sent to the generator so it can be updated to improve the transfer quality.", "Our style module is a style discriminator built using a recurrent neural network, predicting the likelihood that the given input is in the target style.", "We train the style module adversarially to be a target style classifier while regarding the transferred sentences as adversarial samples.", "An adversarial training renders style classification more robust and accurate.", "As for the semantic module, we used word movers' distance (WMD), a state-of-the-art unsupervised algorithm for comparing semantic similarity between two sentences (Kus-ner et al., 2015; Wu et al., 2018b), to evaluate the semantic similarity between input sentences in the source style and the transferred sentences in the target style.", "We also engaged a language model to evaluate the fluency of the transferred sentences.", "Unlike prior studies that separated content from style to guarantee content preservation and transfer strength, we impose explicit semantic, style and fluency constraints on our transfer model.", "Moreover, employing RL allows us to use other evaluation metrics accounting for the quality of the transferred sentences, including non-differentiable ones.", "We summarize our contributions below: (1) We propose an RL framework for text style transfer.", "It is versatile to include a diverse set of evaluation metrics as the training objective in our model.", "(2) Our model does not rely on the availability of a parallel training corpus, thus addressing the important challenge of lacking parallel data in many transfer tasks.", "(3) The proposed model achieves state-of-the-art performance in terms of content preservation and transfer strength in text style transfer.", "The rest of our paper is organized as follows: we discuss related works on style transfer in Section", "2. The proposed text style transfer model and the reinforcement learning framework is introduced in Section", "3. Our system is empirically evaluated on sentiment and formality transfer tasks in Section", "4. We report and discuss the results in Section 5 and Section", "6. The paper is concluded in Section", "7. 2 Related Works Text style transfer has been explored in the context of a variety of natural language applications, including sentiment modification (Zhang et al., 2018b), text simplification (Zhang and Lapata, 2017), and personalized dialogue (Zhou et al., 2017).", "Depending on whether the parallel corpus is used for training, two broad classes of style transfer methods have been proposed to transfer the text from the source style to the target style.", "We will introduce each line of research in the following subsections.", "Style transfer with parallel corpus .", "Style transfer with the help of a style parallel corpus can be cast as a monolingual machine translation task.", "For this, a sequence-to-sequence (seq2seq) neural network has been successfully applied in a supervised setting.", "Jhamtani et al. transfer modern English to Shakespearean English by enriching a seq2seq model with a copy mechanism to replicate the source segments in target sentences (Jhamtani et al., 2017a).", "Style transfer without parallel corpus .", "Scarce parallel data in many transfer tasks has prompted a recent interest in studying style transfer without a parallel corpus (e.g., (Zhang et al., 2018a)).", "Li et al. propose to delete words associated with the source style and replace them with similar phrases associated with the target style.", "Clearly, this approach is limited to transfers at the lexical level and may not handle structural transfer.", "Most existing unsupervised approaches share a core idea of disentangling content and style of texts.", "For a given source sentence, a style-independent content representation is firstly derived.", "Then, in combination with the target style, the content representation is used to generate the sentence following the target style.", "Approaches to extract the content include variational auto-encoders (VAE) and cycle consistency.", "VAEs are commonly used to learn the hidden representation of inputs for dimensionality reduction, and have been found to be useful for representing the content of the source (Hu et al., 2017; Mueller et al., 2017; Shen et al., 2017; Fu et al., 2018).", "Cycle consistency is an idea borrowed from image style transfer for content preservation (Zhu et al., 2017).", "It proposes to reconstruct the input sentence from the content representation, by forcing the model to keep the information of the source sentence (Santos et al., 2018).", "The transferred sentences are generated based on the content representation and the target style.", "One way to achieve this is with the use of a pre-trained style classifier.", "The classifier scores the transfer strength of the generated sentences and guides the model to learn the target text style (San-tos et al., 2018; Prabhumoye et al., 2018).", "Another way is to learn the style embedding, which can be concatenated with the content embedding as the representation of the target sentence (Fu et al., 2018).", "The decoder then constructs the sentences from their hidden representations.", "We note that previous works rely on gradient descent in their model training, and therefore their training losses (e.g., content and style loss) were limited to functions differentiable with respect to model parameters.", "Also, very few works consider other aspects of transfer quality beyond the content and the style of the generated sentences.", "This is in part due to their reliance on a differentiable training objective.", "We propose an RL-based style transfer system so that we can incorporate more general evaluation metrics in addition to preserving the semantic meaning of content and style transfer strength.", "Reinforcement learning .", "RL has recently been applied to challenging NLP tasks (Yu et al., 2017).", "RL has advantages over supervised learning in that it supports non-differentiable training objectives and does not need annotated training samples.", "Benefits of using RL have been demonstrated in image captioning (Guo et al., 2018), sentence simplification (Zhang and Lapata, 2017), machine translation (Wu et al., 2018a) and essay scoring (Wang et al., 2018).", "A recent work on the task of sentiment transfer applied reinforcement learning to handle its BLEU score-based training loss (a non-differentiable function) (Xu et al., 2018a).", "Similar to the style transfer works discussed above, it also disentangled the semantics and the sentiment of sentences using a neutralization module and an emotionalization module respectively.", "Our work is different from these related works in that the semantic preservation and transfer strength are taken care of by the use of discriminators without explicitly separating content and style.", "An additional aspect that we focus here is the notion of fluency of the transferred sentences, which has not been explored before.", "Our style transfer system consists of the following modules: a generator, a style discriminator, a semantic module and a language model as shown in Fig.", "1. We next describe the structure and function of each component.", "A closer view of our system is presented in Fig.", "2. Generator .", "transFigure 1: Model overview: the generator transfers the input source sentence to the generated target sentence.", "The generated sentences are collectively evaluated by the style discriminator, the semantic module and language module respectively.", "The style discriminator is adversarially trained with both humanand model-generated sentences.", "These three modules evaluate the generated sentences in terms of transfer strength, content preservation and fluency, and the rewards are sent to train the generator.", "fers it to the target style.", "For this, we use a recurrent encoder-decoder model combined with attention mechanism, which can handle variable-length input and output sequences (Sutskever et al., 2014; Cho et al., 2014).", "We could also leverage recently proposed more advanced encoder-decoder models (Xu et al., 2018b,c) to exploit rich syntactic information for this task, which we leave it as future work.", "Both the encoder and the decoder are recurrent neural layers with gated recurrent units (GRU).", "The encoder takes one word from the input at each time step, and outputs a hidden state vector h s at time s .", "Similarly, the decoder outputs a hidden representation h t at time t .", "Suppose that the input sequence consists of T words x = { x 1 , . . . , x T } , and the generated target sentence y is also a sequence of words { y 1 , . . . , y T (cid:48) } .", "We use vec ( ) to denote the embedding of a word.", "The gated recurrent unit dynamically updates its state h t based on its previous state h t 1 and current input i t .", "Its computation can be abstracted as h t = GRU ( h t 1 , i t ) .", "For the encoder, the input i t is the embedding of the t -th input source word, h t = GRU ( h t 1 , vec ( x t )) .", "An attention mechanism is commonly adopted in text generation, such as machine translation (Bahdanau et al., 2015; Luong et al., 2015).", "We Figure 2: A detailed view of each component in the text style transfer system.", "apply the attention mechanism to the decoding step so that the decoder learns to attend to source words and generates words.", "In this work, we use the attention mechanism similar to that used in (Luong et al., 2015).", "At the t -th decoding step, the attention t ( s ) is the weight of the s -th encoder state h ( s ) .", "The encoder hidden states are linearly weighted by the attention as the context vector at time t .", "c t = (cid:88) t ( s ) h s .", "(3) Combining the attention over the source sentence, the decoder produces a new hidden state h t , h t = tanh ( W c [ c t ; h t ]) .", "(4) The hidden vector h t is then used to predict the likelihood of the next word in the target sentence over the target vocabulary.", "P ( y t | y <t , x ) = softmax ( W s h t ) .", "(5) where W c and W s are decoder parameters.", "Style discriminator .", "The style discriminator evaluates how well the generated sentences are transferred to the target style.", "It is a classifier built on a bidirectional recurrent neural network with attention mechanism.", "The style discriminator is pre-trained to minimize the cross-entropy loss in the style classification task.", "This style classifier predicts the likelihood that an input sentence is in the target style, and the likelihood is taken as the style score of a sentence.", "The pre-training does not guarantee that the neural network model will learn robust style patterns.", "So we resort to adversarial training as done in generative adversarial networks (GAN) (Yu et al., 2017; Wang and Lee, 2018).", "Accordingly, the style discriminator is later adversarially trained to distinguish the original (human-written) sentences from the model-generated ones so that the classifier learns to classify the text style well.", "Semantic module .", "This evaluates how well the content from the input is preserved in the generated sentences.", "We use word mover's distance (WMD), which is the state-of-the-art approach (known for its robustness and efficiency) to measure the dissimilarity between the input and output sentences based on word embeddings (Kus-ner et al., 2015; Wu et al., 2018b).", "We take the negative of the WMD distance and divide it by the sequence length to yield the semantic score of a generated sentence.", "Previous works have also used cycle reconstruction loss to measure content preservation by reconstructing input sentences from generated sentences (Santos et al., 2018).", "Language model .", "The style and the semantic modules do not guarantee the fluency of the transferred sentences.", "This fluency is achieved using a language model.", "The language model we use is a two-layer recurrent neural network pre-trained on the corpus in the target style so as to maximize the likelihood of the target sentences (Mikolov et al., 2010; Jozefowicz et al., 2016).", "The language model estimates the probability of input sentences.", "We take the logarithm of the probability and divide it by the sequence length as the fluency score .", "The output sentences from the generator are sent to the semantic, style and language model modules for evaluation.", "These modules give feedback to the generator for the purpose of tuning it and to improve the quality of the generated sentences.", "We emphasize that despite the fact that our chosen evaluation metrics are not differentiable with respect to the generator parameters, they are still usable here.", "This is made possible by our use of the RL framework (the REINFORCE algorithm) to update the parameters of the generator (Williams, 1992).", "In the RL framework, we define the state and the action for our style transfer task as follows.", "The state s t at time t is the input source sequence and the first t 1 words that are already generated in the target sequence, i.e., s t = ( X, Y 1: t 1 ) .", "The action a t at time t is the t -th word to be generated in the output sequence, i.e., a t = y t .", "Suppose that the target vocabulary is V , and the maximum length of the decoder is T (cid:48) .", "The generator G is parameterized with a parameter set , and we define the expected reward of the current generator as J ( G ) .", "The total expected reward is J ( G ) = T (cid:48) (cid:88) t =1 EY 1: t 1 G [ (cid:88) y t VP ( y t | s t ) Q ( s t , y t )] , (6) where P ( y t | s t ) is the likelihood of word y t given the current state, and Q ( s t , y t ) is the cumulative rewards that evaluate the quality of the sentences extended from Y 1: t .", "Suppose that r ( s t , y t ) is the reward of word y t at state s t .", "The total reward, Q , is defined as the sum of the word rewards.", "where (0 < < 1) , is a discounting factor so that the future rewards have decreasing weights, since their estimates are less accurate.", "If we only consider one episode, i.e., Y 1: t 1 has been given for every y t , the reward J ( G ) can be written as J ( G ) = T (cid:48) (cid:88) t =1 (cid:88) y t VP ( y t | s t ) Q ( s t , y t ) .", "Sequence sampling .", "By design, the three evaluation modules in Fig. 1 only evaluate complete sentences instead of single words or partial sentences.", "This means that we cannot obtain r ( s t , y t ) directly from the evaluation modules at any time instance before the end of the sentence.", "One way around this problem is rolling out (Yu et al., 2017), where the generator rolls out' the given sub-sentence Y 1: t at time step t to generate complete sentences by sampling the remaining part of the sentence { Y nt +1: T (cid:48) } .", "Previous works have adopted different sampling strategies, including Monte Carlo search, multinomial sampling and beam search.", "Starting from the given segment Y 1: t , Monte Carlo search explores the sub-sequence which leads to the best complete sentence (Yu et al., 2017).", "This leads to a good estimate of the sentence rewards but comes at significant computational cost.", "In many applications, the other two sampling strategies have been adopted for their efficiency.", "In multinomial sampling, each word y ( t < T (cid:48) ) is sampled from the vocabulary according to the likelihood P ( y | s ) predicted by the generator (ODonoghue et al., 2016; Chatterjee and Cancedda, 2010).", "The beam search process, on the other hand, keeps track of the k (a user-specified parameter) most likely words at each decoding step rather than just one word (Wu et al., 2018a).", "While this yields an accurate estimate of the reward for each action, multinomial sampling allows us to explore the diversity of generated texts with a potentially higher reward later on.", "This is the trade-off between exploitation and exploration in RL.", "To balance the estimation accuracy and the generation diversity, we combine the ideas of beam search and multinomial sampling.", "Given a source sentence, we first generate a reference target sentence Y ref 1: T using beam search.", "To estimate the reward at each time step t , we draw samples of complete sentences { Y l 1: T (cid:48) } by rolling out the subsequence Y ref 1: t using multinomial sampling.", "The evaluation scores of the sampled sentences are used as reward r ( y t , s t ) .", "More details about the sampling process are in Appendix.", "Reward estimation .", "We estimate the reward as follows.", "We draw N samples of complete sentences starting from Y 1: t : { Y ( n ) 1: T (cid:48) } Nn =1 .", "The complete sentences are then fed into the three evaluation modules.", "Let f style be the style score given by the style module, f semantic be semantic score by the semantic module, and f lm be the fluency score given by the language model.", "We score the action y t at state s t by the average score of the complete sentences rolled out from Y 1: t .", "This action score is defined as the weighted sum of the scores given by the three modules.", "f ( s t , y t ) = 1 NN (cid:88) n =1 ( f style ( Y ( n ) 1: T (cid:48) )+ f semantic ( Y ( n ) 1: T (cid:48) , Y real 1: T (cid:48) ) + f lm ( Y ( n ) 1: T (cid:48) ) ) , (9) where the hyperparameters , and are positive.", "In our experiments, we set = 1 .", "0 , = 0 .", "5 and = 0 .", "5 heuristically.", "We then obtain the discounted cumulative reward Q ( s t , y t ) from the rewards { r ( s , y ) } >t at each time step using Eq.", "7. The total reward of J ( G ) can be derived from the cumulative rewards { Q ( s t , y t ) } using Eq.", "8. We define the generator loss L as the negative of reward J ( G ) , LG ( ) = J ( G ) .", "According to Eq.", "8, we can find the gradient L of the generator loss as, LG ( ) = T (cid:48) (cid:88) t =1 P ( y t | s t ) Q ( s t , y t ) .", "The style discriminator is pre-trained on corpora in the source and target styles, and is used to evaluate the strength of style transfer.", "We note that this pre-training may not be sufficient for the style classifier to learn robust patterns and to provide accurate style evaluation.", "Indeed, in our experiments we found that even though the generator was trained to generate target sentences by maximizing the style rewards, the one-shot pre-training was insufficient to render the sentences in the target style.", "Borrowing the idea of adversarial training proposed in GANs, we continuously trained the style discriminator using the generated target sentences.", "Toward this, we used a combination of a randomly sampled set of human-written target sentences { Y ( k ) human } and model-generated sentences { Y ( k ) model } .", "Here the model-generated instances act as adversarial training samples, using which, the style discriminator was trained to distinguish the model outputs from human-written sentences.", "Let the discriminator D be parameterized by a parameter set .", "We define the prediction of the style discriminator, D ( Y ) , as the likelihood that the sentence Y is in the target style.", "The objective of this adversarial training amounts to minimizing the discriminator loss LD : LD ( ) = 1 K ( K (cid:88) k =1 log(1 D ( Y ( k ) model )) K (cid:88) k =1 log D ( Y ( k ) human ) ) .", "(12) 4 Experiments In this work, we considered two textual style transfer tasks, that of sentiment transfer ( ST , involving negative and positive sentiments) and formality transfer ( FT , involving informal and formal styles) using two curated datasets.", "We experimented with both transfer directions: positive-to-negative, negative-and-positive, informal-to-formal and formal-to-informal.", "Dataset .", "For our experiments with style transfer we used a sentiment corpus and a formality corpus described below.", "(1) Sentiment corpus.", "The sentiment corpus consists of restaurant reviews collected from the Yelp website (Shen et al., 2017).", "The reviews are classified as either negative or positive.", "(2) Formality corpus.", "We use the Grammarly's Yahoo Answers Formality Corpus (GYAFC) (Rao and Tetreault, 2018), which is a collection of sentences posted in a question-answer forum (Yahoo Answers) and written in an informal style.", "In addition, these sentences have been manually rewritten in a formal style.", "We used the data from the section family and relationships .", "Note that even though the corpus is parallel, we did not use the parallel information.", "Table 1 shows the train, dev and test data sizes as well as the vocabulary sizes of the corpora used in this work.", "Model settings .", "The word embeddings used in this work were of dimension 50.", "They were first trained on the English WikiCorpus and then tuned on the training dataset.", "The width of the beam search (parameter k ) was 8 during the RL and the inference stage.", "Pre-training .", "We pre-trained the generator, the style discriminator and the language model before the reinforcement learning stage.", "We discuss each of these steps below.", "Generator pre-training .", "We pre-trained the generator to capture the target style from the respective target corpus.", "This pre-training occurred before setting up the reward from the evaluator to update its parameters in reinforcement learning.", "During pre-training, we used a set of target instances with a given instance serving as the input as well as the expected output.", "Using this set we trained the generator in a supervised manner with the cross-entropy loss as the training objective.", "Pre-training offered two immediate benefits for the generator: (1) the encoder and decoder learned to capture the semantics and the target style from the target corpus; (2) the generator had a good set of initial parameters that led to faster model training.", "This second aspect is a significant gain, considering that reinforcement learning is more time consuming than supervised learning.", "Style discriminator pre-training .", "The style discriminator in our work was built using a bidirectional recurrent neural network.", "It was pre-trained using training corpora consisting of sentences in both the source and the target styles.", "We trained it to classify the style of the input sentences with the cross-entropy classification loss.", "Language model pre-training .", "The language model was a two-layer recurrent neural network.", "Taking a target sentence y = { y 1 , . . . , y T (cid:48) } as the input, the language model predicted the probability of the t -th word y t given the previous subsequence y 1: t 1 .", "The language model was pre-trained on the training corpus in target style to maximize the probability of y t (1 t T (cid:48) ) .", "Baselines .", "We considered two state-of-the-art methods of unsupervised text style transfer that use non-parallel training corpus.", "(1) Cross alignment model (CA).", "The CA model assumes that the text in the source and target style share the same latent content space (Shen et al., 2017).", "The style-independent content representation generated by its encoder is combined with available style information to transfer the sentences to the target style.", "We used their publicly available model for ST, and trained the model for FT separately with its default parameters.", "(2) Multi-decoder seq2seq model (MDS).", "MDS consists of one encoder and multiple decoders (Fu et al., 2018).", "Similar to the cross alignment transfer, its encoder learns style-independent representations of the source, and the style specific decoder will rewrite sentences in the target style based on the content representation.", "We trained the model with its default parameters for both the tasks.", "We used both automatic and human evaluation to validate our system in terms of content preservation, transfer strength and fluency.", "Aligning with prior work, we used the automatic metrics of content preservation, transfer and fluency that have been found to be well correlated with human judgments (Fu et al., 2018).", "For comparison, in Appendix, we also report our style and semantic metrics as provided by the evaluator.", "Content preservation .", "A key requirement of the transfer process is that the original meaning be retained.", "Here we measure this by an embedding based sentence similarity metric s sem proposed by (Fu et al., 2018).", "The embedding we used was based on the word2vec (CBOW) model (Mikolov et al., 2013).", "It was first trained on the English WikiCorpus and then tuned on the training dataset.", "Previous works used pre-trained GloVe embedding (Pennington et al., 2014), but we note that it does not have embeddings for Internet slang commonly seen in sentiment and formality datasets.", "Transfer strength .", "The transfer strength s style captures the degree to which the style transfer was Sentiment Negative-to-Positive Positive-to-Negative Metric Content Style Overall Perplexity Content Style Overall Perplexity CA 0.894 0.836 0.432 103.11 0.905 0.836 0.435 185.35 MDS 0.783 0.988 0.437 98.89 0.756 0.860 0.402 156.98 RLS 0.868 0.98 0.460 119.24 0.856 0.992 0.459 174.02 Formality Informal-to-Formal Formal-to-Informal Metric Content Style Overall Perplexity Content Style Overall Perplexity CA 0.865 0.558 0.339 238.05 0.789 0.956 0.432 317.40 MDS 0.519 0.435 0.237 278.65 0.546 0.998 0.353 352.86 RLS 0.885 0.601 0.358 208.33 0.873 0.982 0.462 267.78 Table 3: Automatic evaluation of text style transfer systems on sentiment and formality transfer.", "carried out and was quantified using a classifier.", "An LSTM-based classifier was trained for style classification on a training corpus (Fu et al., 2018).", "The classifier predicts the style of the generated sentences with a threshold of 0.5.", "The prediction accuracy is defined as the percentage of generated sentences that were classified to be in the target style.", "The accuracy was used to evaluate transfer strength, and the higher the accuracy is, the better the generated sentences fit in target style.", "Overall score .", "We would like to point out that there is a trade-off between content preservation and transfer strength.", "This is because the outputs resulting from unchanged input sentences show the best content preservation while having poor transfer strength.", "Likewise, for given inputs, sentences sampled from the target corpora have the strongest transfer strength while barely preserving any content if at all.", "To combine the evaluation of semantics and style, we use the overall score s overall , which is defined as a function of s sem and s style : s overall = s sem s style s sem + s style (Fu et al., 2018).", "Fluency .", "This is usually evaluated with a language model in many NLP applications (Peris and Casacuberta, 2015; Tuske et al., 2018).", "We used a two-layer recurrent neural network with gated recurrent units as a language model, and trained it on the target style part of the corpus.", "The language model gives an estimation of perplexity (PPL) over each generated sentence.", "Given a word sequence of M words { w 1 , . . . , w M } and the sequence probability p ( w 1 , . . . , w M ) estimated by the language model, the perplexity is defined as: PPL = p ( w 1 , . . . , w M ) 1 M .", "The lower the perplexity on a sentence, the more fluent the sentence is. 4.1.2 Human annotation Noting the best overall score of our system in both directions of the tasks considered (to be discussed in the section that follows), we performed human annotations for content, style and fluency to validate the automatic scores.", "We chose a sample of 100 sentences generated by our system for each transfer task and collected three human judgments per sentence in each evaluation aspect.", "The annotation guidelines were: Content preservation .", "Following the annotation scheme adopted by (Rao and Tetreault, 2018), we asked annotators to rate the semantic similarity between the original and transferred sentence on a scale from 1 to", "6. Here 1 means completely dissimilar, 2 means dissimilar but on the same topic, 3 means dissimilar while sharing some content, 4 means roughly similar, 5 means almost similar, and 6 means completely similar.", "Transfer strength .", "Annotators were given pairs of original and transferred sentences and were asked to decide which one was more likely to be in the target style.", "We define transfer strength to be the percentage of transferred sentences that were classified to be in the target style.", "Fluency .", "Similar to the annotation of content, annotators scored sentences for fluency on a scale of 1( not fluent ) to 6 ( perfectly fluent ).", "Some example sentences transferred by our system are shown in Table", "2. More transferred sentences generated by our system and those by the baseline methods can be found in the Appendix.", "We first report the results of the automatic evaluation of our proposed system (denoted as RLS) and the two baselinesthe cross alignment model (CA) (Shen et al., 2017) and the multi-decoder seq2seq model (MDS) (Fu et al., 2018)in Table", "3. Sentiment transfer .", "We notice that CA was the best in preserving content, MDS generated the most fluent target sentences and our model achieved the best trade-off between meaning and Metric Negative-to-positive Positive-to-negative Informal-to-formal Formal-to-informal Content (1-6) 5.19 5.20 4.96 5.33 Style accuracy 0.90 0.91 0.83 0.86 Fluency (1-6) 5.51 5.61 5.33 5.21 Table 4: Human judgments of transferred sentences style with the highest overall score.", "Looking at the Overall score, it is notable that despite the differences in performance between the models studied here, each one performs similarly in both directions.", "This could be interpreted to mean that with respect to difficulty of transfer, style transfer is equivalent in both the directions for this task.", "Formality transfer .", "For this task, we notice that our model outperforms the baselines in terms of content preservation, transfer strength and fluency with the best Overall score and perplexity.", "This suggests that our model is better at capturing formality characteristics compared to the baselines.", "We also note that the style strength of all models for informal-to-formal transfer is significantly lower than that for formal-to-informal transfer.", "This suggests that the informal-to-formal transfer is harder than the reverse.", "A plausible explanation is that informal sentences are more diverse and thus easier to generate than formal sentences.", "For example, informality can be achieved by multiple ways, such as by using an abbreviation (e.g., u used as you) and adding speech markers (e.g., hey and ummm), while formality is achieved in a more restricted manner.", "Another challenge for informal-to-formal transfer is that informal data collected from online users usually contain non-negligible spelling errors such as defenetely, htink and realy.", "Words being the smallest semantic units in all the models considered here, these spelling errors could affect the transfer performance.", "For each direction of transfer, we average the scores by annotators for each evaluation item, and report the results in Table", "4. Our transferred sentences are shown to have good quality in content, style and fluency in subjective evaluations.", "To gain insights into the ways in which our approach performs the intended style transfer, we randomly sampled the generated sentences in the informal-to-formal transfer task.", "We found that the forms of rewriting can be broadly classified as: lexical substitution, word removal, word insertion and structural change.", "We show the following examples to these forms of re-writing, where the changed parts are highlighted.", "(1) Lexical substitution.", "The informal sentence I do n't know what u mean was transferred to I do not know what you mean; (2) Word removal.", "The informal sentence And I dont know what I should do was rewritten as I do not know what I should do; (3) Word insertion.", "In the example instance de-pends on the woman that was changed to It depends on the woman, we see that a subject was added to generate a complete formal sentence.", "(4) Structural change.", "A small number of instances were also rewritten by making structural changes.", "For example, the informal sentence Just tell them , what are they gonna do , slap you ?? was transferred to a formal version as You should tell them , they can not slap you.", "Other ways of style transfer by incorporating evaluation metrics of structural diversity are left for future work.", "We proposed a reinforcement-learning-based text style transfer system that can incorporate any evaluation metric to enforce semantic, stylistic and fluency constraints on transferred sentences.", "We demonstrated its efficacy via automatic and human evaluations using curated datasets on two different style transfer tasks.", "We will explore and incorporate other metrics to improve other aspects of generated texts such as the structural diversity in the future work.", "This work is supported by IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) a research collaboration as part of the IBM AI Horizons Network.", "We thank the NAACL anonymous reviewers for their constructive suggestions." ]
[ "abstain", "abstain", "method", "method", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "objective", "objective", "abstain", "objective", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other" ]
[ "Reasoning about implied relationships (e.g. paraphrastic, common sense, encyclopedic) between pairs of words is crucial for many cross-sentence inference problems.", "This paper proposes new methods for learning and using embeddings of word pairs that implicitly represent background knowledge about such relationships.", "Our pairwise embeddings are computed as a compositional function on word representations, which is learned by maximizing the pointwise mutual information (PMI) with the contexts in which the two words co-occur.", "We add these representations to the cross-sentence attention layer of existing inference models (e.g. BiDAF for QA, ESIM for NLI), instead of extending or replacing existing word embeddings.", "Experiments show a gain of 2.7% on the recently released SQuAD 2.0 and 1.3% on MultiNLI.", "Our representations also aid in better generalization with gains of around 6-7% on adversarial SQuAD datasets, and 8.8% on the adversarial entailment test set by Glockner et al. (2018).", "Reasoning about relationships between pairs of words is crucial for cross sentence inference problems such as question answering (QA) and natural language inference (NLI).", "In NLI, for example, given the premise golf is prohibitively expensive , inferring that the hypothesis golf is a cheap pastime is a contradiction requires one to know that expensive and cheap are antonyms.", "Recent work (Glockner et al., 2018) has shown that current models, which rely heavily on unsupervised single-word embeddings, struggle to learn such relationships.", "In this paper, we show that they can be learned with word pair vectors ( pair2vec 1 ), 1 https://github.com/mandarjoshi90/ pair2vec X Y Contexts with X and Y baths hot cold too X or too Y neither X nor Y in X , Y Portland Oregon the X metropolitan area in Y X International Airport in Y food X are maize, Y , etc crop wheat dry X , such as Y , more X circles appeared in Y fields XOS comes with Y play Android Google the X team at Y X is developed by Y Table 1: Example word pairs and their contexts.", "which are trained unsupervised, and which significantly improve performance when added to existing cross-sentence attention mechanisms.", "Unlike single-word representations, which typically model the co-occurrence of a target word x with its context c , our word-pair representations are learned by modeling the three-way co-occurrence between words ( x, y ) and the context c that ties them together, as seen in Table 1.", "While similar training signals have been used to learn models for ontology construction (Hearst, 1992; Snow et al., 2005; Turney, 2005; Shwartz et al., 2016) and knowledge base completion (Riedel et al., 2013), this paper shows, for the first time, that large scale learning of pairwise embeddings can be used to directly improve the performance of neural cross-sentence inference models.", "More specifically, we train a feedforward network R ( x, y ) that learns representations for the individual words x and y , as well as how to compose them into a single vector.", "Training is done by maximizing a generalized notion of the pointwise mutual information (PMI) among x , y , and their context c using a variant of negative sampling (Mikolov et al., 2013a).", "Making R ( x, y ) a compositional function on individual words alleviates the sparsity that necessarily comes with embedding pairs of words, even at a very large scale.", "We show that our embeddings can be added to existing cross-sentence inference models, such as BiDAF++ (Seo et al., 2017; Clark and Gardner, 2018) for QA and ESIM (Chen et al., 2017) for NLI.", "Instead of changing the word embeddings that are fed into the encoder, we add the pretrained pair representations to higher layers in the network where cross sentence attention mechanisms are used.", "This allows the model to use the background knowledge that the pair embeddings implicitly encode to reason about the likely relationships between the pairs of words it aligns.", "Experiments show that simply adding our word-pair embeddings to existing high-performing models, which already use ELMo (Peters et al., 2018), results in sizable gains.", "We show 2.72 F1 points over the BiDAF++ model (Clark and Gardner, 2018) on SQuAD 2.0 (Rajpurkar et al., 2018), as well as a 1.3 point gain over ESIM (Chen et al., 2017) on MultiNLI (Williams et al., 2018).", "Additionally, our approach generalizes well to adversarial examples, with a 6-7% F1 increase on adversarial SQuAD (Jia and Liang, 2017) and a 8.8% gain on the Glockner et al. (2018) NLI benchmark.", "An analysis of pair2vec on word analogies suggests that it complements the information in single-word representations, especially for encyclopedic and lexicographic relations.", "Extending the distributional hypothesis to word pairs, we posit that similar word pairs tend to occur in similar contexts, and that the contexts provide strong clues about the likely relationships that hold between the words (see Table 1).", "We assume a dataset of ( x, y, c ) triplets, where each instance depicts a word pair ( x, y ) and the context c in which they appeared.", "We learn two compositional representation functions, R ( x, y ) and C ( c ) , to encode the pair and the context, respectively, as d dimensional vectors (Section 2.1).", "The functions are trained using a variant of negative sampling, which tries to embed word pairs ( x, y ) close to the contexts c with which they appeared (Section 2.2).", "individual words.", "The word-pair representation function R ( x, y ) first embeds and normalizes the individual words with a shared lookup matrix E a : x = E a ( x ) (cid:107) E a ( x ) (cid:107) y = E a ( y ) (cid:107) E a ( y ) (cid:107) These vectors, along with their element-wise product, are fed into a four-layer perceptron: R ( x, y ) = MLP 4 ( x , y , x y ) The context c = c 1 ...c n is encoded as a d dimensional vector using the function C ( c ) .", "C ( c ) embeds each token c i with a lookup matrix E c , contextualizes it with a single-layer Bi-LSTM, and then aggregates the entire context with attentive pooling: c i = E c ( c i ) h 1 ... h n = BiLSTM ( c 1 ... c n ) w = softmax i ( kh i ) C ( c ) = (cid:88) i w i Wh i where W R d d and k R d .", "All parameters, including the lookup tables E a and E c , are trained.", "Our representation is similar to two recently-proposed frameworks by Washio and Kato (2018a,b), but differs in that: (1) they use dependency paths as context, while we use surface form; (2) they encode the context as either a lookup table or the last state of a unidirectional LSTM.", "We also use a different objective, which we discuss next.", "To optimize our representation functions, we consider two variants of negative sampling (Mikolov et al., 2013a): bivariate and multivariate.", "The original bivariate objective models the two-way distribution of context and (monolithic) word pair co-occurrences, while our multivariate extension models the three-way distribution of word-word-context co-occurrences.", "We further augment the multivariate objective with typed sampling to up-sample harder negative examples.", "We discuss the impact of the bivariate and multivariate objectives (and other components) in Section 4.3.", "aspires to make R ( x, y ) and C ( c ) similar (have high inner products) for ( x, y, c ) that were observed together in the data.", "At the same time, we wish Bivariate J 2 NS ( x, y, c ) = log ( R ( x, y ) C ( c )) + (cid:80) k c i =1 log (cid:0) R ( x, y ) C ( c Ni ) (cid:1) Multivariate J 3 NS ( x, y, c ) = J 2 NS ( x, y, c ) + (cid:80) k x i =1 log (cid:0) R ( x Ni , y ) C ( c ) (cid:1) + (cid:80) k y i =1 log (cid:0) R ( x, y Ni ) C ( c ) (cid:1) Table 2: The bivariate and multivariate negative sampling objectives.", "to keep our pair vectors dis -similar from random context vectors.", "In a straightforward application of the original (bivariate) negative sampling objective, we could generate a negative example from each observed ( x, y, c ) instance by replacing the original context c with a randomly-sampled context c N (Table 2, J 2 NS ).", "Assuming that the negative contexts are sampled from the empirical distribution P ( , , c ) (with P ( x, y, c ) being the portion of ( x, y, c ) instances in the dataset), we can follow Levy and Goldberg (2014) to show that this objective converges into the pointwise mutual information (PMI) between the word pair and the context.", "This objective mainly captures co-occurrences of monolithic pairs and contexts, and is limited by the fact that the training data, by construction, only contains pairs occurring within a sentence.", "For better generalization to cross-sentence tasks, where the pair distribution differs from that of the training data, we need a multivariate objective that captures the full three-way ( x, y, c ) interaction.", "Multivariate Negative Sampling We introduce negative sampling of target words, x and y , in addition to negative sampling of contexts c (Table 2, J 3 NS ).", "Our new objective also converges to a novel multivariate generalization of PMI, different from previous PMI extensions that were inspired by information theory (Van de Cruys, 2011) and heuristics (Jameel et al., 2018).", "2 Following Levy and Goldberg (2014), we can show that when replacing target words in addition to contexts, our objective will converge 3 to the optimal value in Equation 1: R ( x, y ) C ( c ) = log P ( x, y, c ) Z x,y,c (1) 2 See supplementary material for their exact formulations.", "where Z x,y,c , the denominator, is: Z x,y,c = k c P ( , , c ) P ( x, y, ) + k x P ( x, , ) P ( , y, c ) + k y P ( , y, ) P ( x, , c ) (2) This optimal value deviates from previous generalizations of PMI by having a linear mixture of marginal probability products in its denominator.", "By introducing terms such as P ( x, , c ) and P ( , y, c ) , the objective penalizes spurious correlations between words and contexts that disregard the other target word.", "For example, it would assign the pattern X is a Y a high score with ( banana , fruit ), but a lower score with ( cat , fruit ).", "Typed Sampling In multivariate negative sampling, we typically replace x and y by sampling from their unigram distributions.", "In addition to this, we also sample uniformly from the top 100 words according to cosine similarity using distributional word vectors.", "This is done to encourage the model to learn relations between specific instances as opposed to more general types.", "For example, using California as a negative sample for Oregon helps the model to learn that the pattern X is located in Y fits the pair ( Portland , Oregon ), but not the pair ( Portland , California ).", "Similar adversarial constraints were used in knowledge base completion (Toutanova et al., 2015) and word embeddings (Li et al., 2017).", "4 3 Integrating pair2vec into Models We first present a general outline for incorporating pair2vec into attention-based architectures, and then discuss changes made to BiDAF++ and ESIM.", "The key idea is to inject our pairwise representations into the attention layer by reusing the cross-sentence attention weights.", "In addition to attentive pooling over single word representations, we also pool over cross-sentence word pair embeddings (Figure 1).", "4 Applying typed sampling also changes the value to which our objective will converge, and will replace the unigram probabilities in Equation (2) to reflect the type-based distribution.", "Pair Representation We assume that we are given two sequences a = a 1 ...a n and b = b 1 ...b m .", "We represent the word-pair embeddings between a and b using the pretrained pair2vec model as: r i,j = (cid:20) R ( a i , b j ) (cid:107) R ( a i , b j ) (cid:107) ; R ( b j , a i ) (cid:107) R ( b j , a i ) (cid:107) (cid:21) (3) We include embeddings in both directions, R ( a i , b j ) and R ( b j , a i ) , because the many relations can be expressed in both directions; e.g., hypernymy can be expressed via X is a type of Y as well as Y such as X .", "We take the L 2 normalization of each direction's pair embedding because the heavy-tailed distribution of word pairs results in significant variance of their norms.", "Base Model Let a 1 ... a n and b 1 ... b m be the vector representations of sequences a and b , as produced by the input encoder (e.g. ELMo embeddings contextualized with model-specific BiL-STMs).", "Furthermore, we assume that the base model computes soft word alignments between a and b via co-attention (4, 5), which are then used to compute b -aware representations of a : s i,j = f att ( a i , b j ) (4) = softmax j ( s i,j ) (5) b i = m (cid:88) j =0 i,j b j (6) a inf i = (cid:2) a i ; b i (cid:3) (7) The symmetric term b infj is defined analogously.", "We refer to a inf and b inf as the inputs to the inference layer, since this layer computes some function over aligned word pairs, typically via a feedforward network and LSTMs.", "The inference layer is followed by aggregation and output layers.", "Injecting pair2vec We conjecture that the inference layer effectively learns word-pair relationships from training data, and it should, therefore, help to augment its input with pair2vec .", "We augment a infi (7) with the pair vectors r i,j (3) by concatenating a weighted average of the pair vectors r i,j involving a i , where the weights are the same i,j computed via attention in (5): r i = (cid:88) j i,j r i,j (8) a infi = (cid:2) a i ; b i ; r i (cid:3) (9) The symmetric term b infj is defined analogously.", "We augment the inference layer in the BiDAF++ model with pair2vec .", "BiDAF++ is an improved version of the BiDAFNoAnswer (Seo et al., 2017; Levy et al., 2017) which includes self-attention and ELMo embeddings from Peters et al. (2018).", "We found this variant to be stronger than the baselines presented in Rajpurkar et al. (2018) by over 2.5 F1.", "We use BiDAF++ as a baseline since its architecture is typical for QA systems, and, until recently, was state-of-the-art on SQuAD 2.0 and other benchmarks.", "BiDAF++ Let a and b be the outputs of the passage and question encoders respectively (in place of the standard p and q notations).", "The inference layer's inputs a infi are defined similarly to the generic model's in (7), but also contain an aggregation of the elements in a , with better-aligned elements receiving larger weights: = softmax i (max j s i,j ) (10) a i = (cid:88) i i a i (11) a infi = (cid:2) a i ; b i ; a i b i ; a (cid:3) (12) In the later layers, a inf is recontextualized using a BiGRU and self attention.", "Finally a prediction layer predicts the start and end tokens.", "For NLI, we augment the ESIM model (Chen et al., 2017), which was previously state-of-the-art on both SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) benchmarks.", "ESIM Let a and b be the outputs of the premise and hypothesis encoders respectively (in place of the standard p and h notations).", "The inference layer's inputs a infi (and b infj ) are defined similarly to the generic model's in (7): a infi = (cid:2) a i ; b i ; a i b i ; a i b i (cid:3) (14) In the later layers, a inf and b inf are projected, recontextualized, and converted to a fixed-length vector for each sentence using multiple pooling schemes.", "These vectors are then passed on to an output layer, which predicts the class.", "A similar augmentation of ESIM was recently proposed in KIM (Chen et al., 2018).", "However, their pair vectors are composed of WordNet features, while our pair embeddings are learned directly from text (see further discussion in Section 6).", "For experiments on QA (Section 4.1) and NLI (Section 4.2), we use our full model which includes multivariate and typed negative sampling.", "We discuss ablations in Section 4.3 Benchmark BiDAF + pair2vec SQuAD 2.0 EM 65.66 68.02 +2.36 F1 68.86 71.58 +2.72 AddSent EM 37.50 44.20 +6.70 F1 42.55 49.69 +7.14 AddOneSent EM 48.20 53.30 +5.10 F1 54.02 60.13 +6.11 Table 3: Performance on SQuAD 2.0 and adversarial SQuAD (AddSent and AddOneSent) benchmarks, with and without pair2vec .", "Data We use the January 2018 dump of English Wikipedia, containing 96M sentences to train pair2vec .", "We restrict the vocabulary to the 100K most frequent words.", "Preprocessing removes all out-of-vocabulary words in the corpus.", "We consider each word pair within a window of 5 in the preprocessed corpus, and subsample 5 instances based on pair probability with a threshold of 5 10 7 .", "We define the context as one word each to the left and right, and all the words in between each pair, replacing both target words with place-holders X and Y (see Table 1).", "More details can be found in the supplementary material.", "We experiment on the SQuAD 2.0 QA benchmark (Rajpurkar et al., 2018), as well as the adversarial datasets of SQuAD 1.1 (Rajpurkar et al., 2016; Jia and Liang, 2017).", "Table 3 shows the performance of BiDAF++, with ELMo , before and after adding pair2vec .", "Experiments on SQuAD 2.0 show that our pair representations improve performance by 2.72 F1.", "Moreover, adding pair2vec also results in better generalization on the adversarial SQuAD datasets with gains of 7.14 and 6.11 F1.", "5 Like in word2vec , subsampling reduces the size of the dataset and speeds up training.", "For this, we define the word pair probability as the product of unigram probabilities.", "We also record a gain of 8.8% absolute over ESIM on the Glockner et al. (2018) dataset, setting a new state of the art.", "Following standard practice (Glockner et al., 2018), we train all models on a combination of SNLI (Bowman et al., 2015) and MultiNLI.", "Glockner et al. (2018) show that with the exception of KIM (Chen et al., 2018), which uses WordNet features, several NLI models fail to generalize to this setting which involves lexical inference.", "For a fair comparison with KIM on the Glockner test set, we replace ELMo with GLoVE embeddings, and still outperform KIM by almost halving the error rate.", "Ablating parts of pair2vec shows that all components of the model (Section", "2) are useful.", "We ablate each component and report the EM and F1 on the development set of SQuAD 2.0 (Table 6).", "The full model, which uses a 4-layer MLP for R ( x, y ) and trains with multivariate negative sampling, achieves the highest F1 of 72.68.", "We experiment with two alternative composition functions, a 2-layer MLP ( Composition: 2 Layers ) and element-wise multiplication ( Compo-0.0 0.2 0.4 0.6 0.8 1.0 0 20 40 60 80 100 A cc u r a c y Derivational Lexicographic Inflectional Encyclopedic Figure 2: Accuracy as a function of the interpolation parameter (see Eq.", "sition: Multiply ), which yield significantly smaller gains over the baseline BiDAF++ model.", "This demonstrates the need for a deep composition function.", "Eliminating sampling of target words ( x, y ) from the objective ( Objective: Bivariate NS ) results in a drop of 0.7 F1, accounting for about a quarter of the overall gain.", "This suggests that while the bulk of the signal is mined from the pair-context interactions, there is also valuable information in other interactions as well.", "We also test whether specific pre-training of word pair representations is useful by replacing pair2vec embeddings with the vector offsets of pre-trained word embeddings ( Unsupervised: Pair Dist ).", "We follow the PairDistance method for word analogies (Mikolov et al., 2013b), and represent the pair ( x, y ) as the L2 normalized difference of single-word vectors: ( x y ) / (cid:107) x y (cid:107) .", "We use the same fastText (Bojanowski et al., 2017) word vectors with which we initialized pair2vec before training.", "We observe a gain of only 0.34 F1 over the baseline.", "In Section 4, we showed that pair2vec adds information complementary to single-word representations like ELMo.", "Here, we ask what this extra information is, and try to characterize which word relations are better captured by pair2vec .", "To that end, we evaluate performance on a word analogy dataset with over 40 different relation types (Section 5.1), and observe how pair2vec fills hand-crafted relation patterns (Section 5.2).", "Word Analogy Dataset Given a word pair ( a, b ) and word x , the word analogy task involves predicting a word y such that a : b :: x : y .", "We use the Bigger Analogy Test Set (BATS, Glad-kova et al., 2016) which contains four groups of relations: encyclopedic semantics (e.g., person-profession as in Einstein physicist ), lexicographic semantics (e.g., antonymy as in cheap expensive ), derivational morphology (e.g., noun forms as in oblige obligation ), and inflectional morphology (e.g., noun-plural as in bird birds ).", "Each group contains 10 sub-relations.", "where a , b , x , and y represent fastText embeddings 6 and r a,b , r x,y represent the pair2vec embedding for the word pairs ( a, b ) and ( x, y ) , respectively; is the linear interpolation parameter.", "Following prior work (Mikolov et al., 2013b), we return the highest-scoring y in the entire vocabulary, excluding the given words a , b , and x .", "Results Figure 2 shows how the accuracy on each category of relations varies with .", "For all four groups, adding pair2vec to 3CosAdd results in significant gains.", "In particular, the biggest relative improvements are observed for encyclopedic (356%) and lexicographic (51%) relations.", "Table 7 shows the specific relations in which pair2vec made the largest absolute impact.", "The gains are particularly significant for relations where fastText embeddings provide limited signal.", "For example, the accuracy for substance meronyms goes from 3.8% to 14.5%.", "In some cases, there is also a synergistic effect; for instance, in noun+less , pair2vec alone scored 0% accuracy, but mixing it with 3CosAdd , which got 4.8% on its own, yielded 16% accuracy.", "These results, alongside our experiments in Section 4, strongly suggest that pair2vec encodes information complementary to that in single-word embedding methods such as fastText and ELMo.", "To further explore how pair2vec encodes such complementary information, we consider a setting similar to that of knowledge base completion: given a Hearst-like context pattern c and a single word x , predict the other word y from the entire vocabulary.", "We rank candidate words y based on the scoring function in our training objective: R ( x, y ) C ( c ) .", "We use a fixed set of example relations and manually define their predictive context patterns and a small set of candidate words x .", "Table 8 shows the top three y words.", "The model embeds ( x, y ) pairs close to contexts that reflect their relationship.", "For example, substituting Portland in the city-state pattern ( in X, Y. ), the top two words are Oregon and Maine , both US states with cities named Portland.", "When used with the city-city pattern ( from X to Y. ), the top two words are Salem and Astoria , both cities in Oregon.", "The word-context interaction often captures multiple relations; for example, Monet is used to refer to the painter ( profession ) as well as his paintings.", "As intended, pair2vec captures the three-way word-word-context interaction, and not just the two-way word-context interaction (as in single-word embeddings).", "This profound difference allows pair2vec to complement single-word embeddings with additional information.", "Pretrained Word Embeddings Many state-of-the-art models initialize their word representations using pretrained embeddings such as word2vec (Mikolov et al., 2013a) or ELMo (Peters et al., 2018).", "These representations are typically trained using an interpretation of the Distributional Hy-Relation Context X Y (Top", "pothesis (Harris, 1954) in which the bivariate distribution of target words and contexts is modeled.", "Our work deviates from the word embedding literature in two major aspects.", "First, our goal is to represent word pairs , not individual words.", "Second, our new PMI formulation models the trivari-ate word-word-context distribution.", "Experiments show that our pair embeddings can complement single-word embeddings.", "Mining Textual Patterns There is extensive literature on mining textual patterns to predict relations between words (Hearst, 1992; Snow et al., 2005; Turney, 2005; Riedel et al., 2013; Van de Cruys, 2014; Toutanova et al., 2015; Shwartz and Dagan, 2016).", "These approaches focus mostly on relations between pairs of nouns (perhaps with the exception of VerbOcean (Chklovski and Pantel, 2004)).", "More recently, they have been expanded to predict relations between unrestricted pairs of words (Jameel et al., 2018; Espinosa Anke and Schockaert, 2018), assuming that each word-pair was observed together during pretraining.", "Washio and Kato (2018a,b) relax this assumption with a compositional model that can represent any pair, as long as each word appeared (individually) in the corpus.", "These methods are evaluated on either intrinsic relation prediction tasks, such as BLESS (Ba-roni and Lenci, 2011) and CogALex (Santus et al., 2016), or knowledge-base population benchmarks, e.g. FB15 (Bordes et al., 2013).", "To the best of our knowledge, our work is the first to integrate pattern-based methods into modern high-performing semantic models and evaluate their impact on complex end-tasks like QA and NLI.", "Integrating Knowledge in Complex Models Ahn et al. (2016) integrate Freebase facts into a language model using a copying mechanism over fact attributes.", "Yang and Mitchell (2017) modify the LSTM cell to incorporate WordNet and NELL knowledge for event and entity extraction.", "For cross-sentence inference tasks, Weissenborn et al. (2017), Bauer et al. (2018), and Mihaylov and Frank (2018) dynamically refine word representations by reading assertions from ConceptNet and Wikipedia abstracts.", "Our approach, on the other hand, relies on a relatively simple extension of existing cross-sentence inference models.", "Furthermore, we do not need to dynamically retrieve and process knowledge base facts or Wikipedia texts, and just pretrain our pair vectors in advance.", "KIM (Chen et al., 2017) integrates word-pair vectors into the ESIM model for NLI in a very similar way to ours.", "However, KIM's word-pair vectors contain only hand-engineered word-relation indicators from WordNet, whereas our word-pair vectors are automatically learned from unlabeled text.", "Our vectors can therefore reflect relation types that do not exist in WordNet (such as profession ) as well as word pairs that do not have a direct link in WordNet (e.g. bronze and statue ); see Table 8 for additional examples.", "We presented new methods for training and using word pair embeddings that implicitly represent background knowledge.", "Our pair embeddings are computed as a compositional function of the individual word representations, which is learned by maximizing a variant of the PMI with the contexts in which the the two words co-occur.", "Experiments on cross-sentence inference benchmarks demonstrated that adding these representations to existing models results in sizable improvements for both in-domain and adversarial settings.", "Published concurrently with this paper, BERT (Devlin et al., 2018), which uses a masked language model objective, has reported dramatic gains on multiple semantic benchmarks including question-answering, natural language inference, and named entity recognition.", "Potential avenues for future work include multitasking BERT with pair2vec in order to more directly incorporate reasoning about word pair relations into the BERT objective.", "We would like to thank Anna Rogers (Gladkova), Qian Chen, Koki Washio, Pranav Rajpurkar, and Robin Jia for their help with the evaluation.", "We are also grateful to members of the UW and FAIR NLP groups, and anonymous reviewers for their thoughtful comments and suggestions." ]
[ "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "other", "method", "abstain", "abstain", "result", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "method", "other", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "other", "other" ]
[ "Emotional support is a crucial ability for many conversation scenarios, including social interactions, mental health support, and customer service chats.", "Following reasonable procedures and using various support skills can help to effectively provide support.", "However, due to the lack of a well-designed task and corpora of effective emotional support conversations, research on building emotional support into dialog systems remains untouched.", "In this paper, we define the Emotional Support Conversation (ESC) task and propose an ESC Framework, which is grounded on the Helping Skills Theory (Hill, 2009).", "We construct an Emotion Support Conversation dataset (ESConv) with rich annotation (especially support strategy) in a help-seeker and supporter mode.", "To ensure a corpus of high-quality conversations that provide examples of effective emotional support, we take extensive effort to design training tutorials for supporters and several mechanisms for quality control during data collection.", "Finally, we evaluate state-of-the-art dialog models with respect to the ability to provide emotional support.", "Our results show the importance of support strategies in providing effective emotional support and the utility of ESConv in training more emotional support systems 1 .", "Emotional support (ES) aims at reducing indi-viduals' emotional distress and helping them understand and work through the challenges that they face (Burleson, 2003; Langford et al., 1997; Heaney and Israel, 2008).", "It is a critical capacity to train into dialog systems that interact with users Equal Contribution.", "I feel so frustrated.", "Ishouldfirstunderstandhis/hersituation...Letme explore his/herexperiences ( Question ) May I ask why you are feeling frustrated?", "My school was closed without any prior warning due to the pandemic.", "Ishould comfort him/herwhengradually learning abouthis/hersituation ( Providing Suggestions ) Have you thought about talking to your parents or a close friend about this?", "( Self-disclosure ) I understand you.", "I would also have been really frustrated if that happened to me.", "Yeah!", "I don't even know what is going to happen with our final.", "Mere comforting cannot solve the problem...Letmehelp him/hertakesome action andgetout of the difficulty ( Reflection of Feelings ) That is really upsetting and stressful.", "on daily basis (Van der Zwaan et al., 2012; Zhou et al., 2020), particularly for settings that include social interactions (accompanying and cheering up the user), mental health support (comforting a frustrated help-seeker and helping identify the prob-lem), customer service chats (appeasing an angry customer and providing solutions), etc.", "Recent research has also shown that people prefer dialog systems that can provide more supportive responses (Rains et al., 2020).", "Research has shown that providing emotional support is not intuitive (Burleson, 2003), so procedures and conversational skills have been suggested (Hill, 2009) to help provide better support through conversation.", "Such skills can be seen in the example conversation that we collected and is shown in Figure 1.", "To identify the causes of the help-seeker's distress, the supporter first explores the help-seeker's problems.", "Without exploration, the support is unlikely to understand the help-seeker's experiences and feelings, and thus it may be offensive or even harmful if the supporter would give irrelevant advice, like You could go for a walk to relax '.", "While learning about the help-seeker's situation, the supporter may express understanding and empathy to relieve the help-seeker's frustration by using various skills (e.g., Self-disclosure , Reflection of Feelings , etc.).", "After understanding the help-seeker's problem, the supporter may offer suggestions to help the help-seeker cope with the problem.", "If the supporter only comforts the help-seeker without any inspiration for action to change, the supporter may not effectively help the help-seeker's emotions improve.", "Finally, during the data collection of this example conversation, the help-seeker reported that their emotion intensity decreased from 5 to 2 (emotion intensity is labeled in our corpus, we give detailed annotations of this conversation example in Appendix A), which indicates the effectiveness of the ES provided by the supporter.", "Despite the importance and complexity of ES, research on data-driven ES dialog systems is limited due to a lack of both task design and relevant corpora of conversations that demonstrate diverse ES skills in use.", "First, existing research systems that relate to emotional chatting (Zhou et al., 2018) or empathetic responding (Rashkin et al., 2019) return messages that are examples of emotion or empathy and are thus limited in functionality, as they are not capable of many other skills that are often used to provide effective ES (Hill, 2009).", "Figure 2 illustrates the relationship between the three tasks and we provide further discussion in Section 2.1.", "Second, people are not naturally good at being supportive, so guidelines have been developed to train humans how to be more supportive.", "Without trained individuals, existing online conversation datasets(Sharma et al., 2020a; Rashkin et al., 2019; Zhong et al., 2020; Sun et al., 2021) do not naturally exhibit examples or elements of supportive conversations.", "As a result, data-driven models that leverage such corpora (Radford et al., 2019; Zhang et al., 2020; Roller et al., 2020) are limited in their ability to explicitly learn how to utilize support skills and thus provide effective ES.", "support through social interactions (like the interactions between peers, friends, or families) rather than professional counseling, and propose an ESC Framework , which is grounded on the Helping Skills Theory (Hill, 2009) and tailored to be appropriate for a dialog system setting (Figure 3).", "We carefully design the ESC Framework for a dialog system setting by adapting relevant components of Hill's Helping Skills model of conversational support.", "The ESC Framework proposes three stages ( Exploration , Comforting and Action ), where each stage contains several support strategies (or skills).", "To facilitate the research of emotional support conversation, we then construct an Emotional Support Conversation dataset, ESConv , and take multiple efforts to ensure rich annotation and that all conversations are quality examples for this particularly complex dialog task.", "ESConv is collected with crowdworkers chatting in help-seeker and supporter roles.", "We design tutorials based on the ESC framework and train all the supporters and devise multiple manual and automatic mechanisms to ensure effectiveness of emotional support in conversations.", "Finally, we evaluate the state-of-the-art models and observe significant improvement in the emotional support provided when various support strategies are utilized.", "Further analysis of the interactive evaluation results shows the Joint model can mimic human supporters' behaviors in strategy utilization.", "We believe our work will facilitate research on more data-driven approaches to build dialog systems capable of providing effective emotional support.", "Figure 2 intuitively shows the relationships among ESC, emotional conversation, and empathetic conversation.", "Emotion has been shown to be important for building more engaging dialog systems (Zhou et al., 2018; Li et al., 2017; Zhou and Wang, 2018; Huber et al., 2018; Huang et al., 2020).", "As a notable work of emotional conversation, Zhou et al. (2018) propose Emotional Chatting Machine (ECM) to generate emotional responses given a pre-specified emotion.", "This task is required to accurately express (designated or not) emotions in generated responses.", "While ES may include expressing emotions, such as happiness or sadness, it has a broader aim of reducing the user's emotional distress through the utilization of proper support skills, which is fundamentally different from emotional chatting.", "Emotional chatting is merely a basic quality of dialog systems, while ES is a more high-level and complex ability that dialog systems are expected to be equipped with.", "Another related task is empathetic responding (Rashkin et al., 2019; Lin et al., 2019; Majumder et al., 2020; Zandie and Mahoor, 2020; Sharma et al., 2020a; Zhong et al., 2020; Zheng et al., 2021), which aims at understanding users' feelings and then replying accordingly.", "For instance, Rashkin et al. (2019) argued that dialog models can generate more empathetic responses by recognizing the interlocutor's feelings.", "Effective ES naturally requires expressing empathy according to the help-seeker's experiences and feelings, as shown in our proposed Emotional Support Framework (Section 3.2, Figure 3).", "Hence, empathetic responding is only one of the necessary components of emotional support.", "In addition to empathetic responding, an emotional support conversation needs to explore the users' problems and help them cope with difficulty.", "Various works have considered conversations of emotional support in a social context, such as on social media or online forums (Medeiros and Bosse, 2018; Sharma et al., 2020b; Hosseini and Caragea, 2021).", "Medeiros and Bosse (2018) collected stress-related posts and response pairs from Twitter and classified replies into supportive categories.", "In (Sharma et al., 2020b), the post-response pairs from TalkLife and mental health subreddits are annotated with the communication mechanisms of text-based empathy expression (only the data of the Reddit part is publicly available).", "Hosseini and Caragea (2021) also collected such post-response pairs from online support groups, which have been annotated as needing or expressing support.", "The dialogues in these corpora are either single-turn interactions (post-response pair) or very short conversations, which limits the potential for effective ES, as ES often requires many turns of interaction (Hill, 2009).", "Some traditional dialog systems have applied human-crafted rules to provide emotional support responses (Van der Zwaan et al., 2012; van der Zwaan et al., 2012).", "A recent system has considered a rule-based algorithm that determines the supportive act used in the response and then selects proper replies from the pre-defined list of candidates (Medeiros and Bosse, 2018).", "Another conversational system designed to provide support for coping with COVID-19 was implemented by identifying topics that users mentioned and then responding with a reflection from a template or a message from a pre-defined lexicon (Welch et al., 2020).", "Few studies have focused on generating supportive responses, and those that have have been limited in scope.", "For example, Shen et al. (2020) explored how to generate supportive responses via reflecting on user input.", "When a user is in a bad emotional state, perhaps due to a particular problem, they may seek help to improve their emotional state.", "In this setting, the user can be tagged with a negative emotion label e , a emotion intensity level l (e.g., ranging from 1 to 5), and an underlying challenge that the user is going through.", "The supporter (or the system) needs to comfort the user in a conversation with support skills to lower their intensity level.", "Note that the user's state is unknown to the supporter prior to the conversation.", "During the conversation, the supporter needs to identify the problem that the user is facing, comfort the user, and then provide some suggestions or information to help the user take action to cope with their problem.", "An emotional support conversation is effective if the intensity level of the user is lowered at the end of the conversation, or more concretely, if the supporter can effectively identify the problem, comfort the user, and provide solutions or suggestions.", "The ESC task has several sub-problems: (1) Support strategy selection and strategy-constrained response generation.", "As shown in our later experiments (Section 6.4), the timing of applying strategies is relevant to the effectiveness of ES.", "It is thus important that a generated response conforms to a Strategies Stages Examples Lexical Features Question Can you talk more about your feelings at that time?", "specified strategy.", "(2) Emotion state modeling.", "It is important to model and track the user's emotion state dynamically, both for dynamic strategy selection and for measuring the effectiveness of ESC.", "(3) Evaluation of support effectiveness.", "In addition to the traditional dimension of evaluating a conversa-tion's relevance, coherence, and user engagement, ESC raises a new dimension of evaluating the effectiveness of ES.", "We present an ESC Framework, which characterizes the procedure of emotional support into three stages, each with several suggested support strategies.", "We ground the ESC Framework on Hill's Helping Skills Theory (Hill, 2009) and adapt it more appropriate for a dialog system setting, aiming to provide support through social interactions (like the interactions between peers, friends, or families) rather than merely professional counseling.", "An overview of the conversational stages and strategies in the ESC Framework is shown in Figure 3. Stages Hill (2009) proposes three stages of supporting people: exploration (exploring to help the help-seeker identify the problems), insight (help-ing the help-seeker move to new depths of self-understanding), and action (helping the help-seeker make decisions on actions to cope with the prob-lems).", "However, we note that insight usually requires re-interpreting users' behaviors and feelings, which is both difficult and risky for the supporters without sufficient support experience.", "We thus adapt insight to comforting (defined as providing support through empathy and understanding).", "While it is suggested that emotional support conversations target these three ordered stages, in practice conversations cannot follow a fixed or linear order and must adapt appropriately.", "As suggested in (Hill, 2009), the three stages can be flexibly adjusted to meet the help-seeker's needs.", "Strategies Hill (2009) also provides several recommended conversational skills for each stage.", "Some of the described skills are not appropriate 2 in a dialog system setting without professional supervision and experience.", "To adapt these skills appropriate to the dialog system setting, we extract seven methods from these skills (along with an Others one), which we called strategies in our task and hereafter.", "We provide a detailed definition of each strategy in Appendix B. 4 Data Collection To facilitate the research of emotional support skills in dialog systems, we introduce an Emotional Support Conversation Dataset, ESConv , which is collected in a help-seeker and supporter mode with crowdworkers.", "As high-quality conversation examples are needed for this complex task, we took tremendous effort to try to ensure the effectiveness of ES in conversations.", "Our efforts included the following major aspects: (1) Because providing conversational support is a skill that must be trained 2 For instance, one skill named challenging refers to pointing out the discrepancies or irrational beliefs that the helpseeker is unaware of or unwilling to change.", "Such skills usually require professional experience, which is too difficult for an average person.", "for supporters to be effective (Burleson, 2003), we design a tutorial with the ESC Framework and train crowdworkers to be supporters.", "Only those who pass the examination are admitted to the task.", "(2) We require help-seekers to complete a pre-chat survey on their problems and emotions and to provide feedback during and after the conversations.", "(3) We devise and use multiple manual or automatic mechanisms to filter out the low-quality conversations after collecting raw dialog data.", "Training and Examination To teach crowdworkers how to provide effective emotional support, we designed a tutorial with the ESC Framework.", "Inspired by 7cups ( 7cups.com ) (Baumel, 2015), we developed eleven sub-tasks (3 + 8) to help workers to learn the definitions of the three stages and the eight support strategies.", "Each sub-task includes an example conversation excerpt and a corresponding quiz question.", "As noted in Section 3.2, we also informed participants that following a fixed order may not be possible and that they may need to be flexible with adjusting the stage transitions.", "Strategy Annotation To encourage supporters to use the ESC support strategies during the conversation and to structure the resulting dataset, we ask the supporter to first select a proper strategy that they would like to use according to the dialog context.", "They are then able to write an utterance reflecting their selected strategy.", "We encourage supporters to send multiple messages if they would like to use multiple strategies to provide support.", "Post-chat Survey After each conversation, the supporter is asked to rate the extent that the seeker goes into detail about their problems on five-point Likert scales.", "Pre-chat Survey Before each conversation, the help-seeker was asked to complete the following survey: (1) Problem & emotion category: the help-seeker should select one problem from 5 options and one emotion from 7 options (the options were based on conversations collected in pilot data collection trials).", "(2) Emotion intensity: a score from 1 to 5 (the larger number indicates a more intense emotion).", "(3) Situation: open text describing the causes of the emotional problem.", "(4) Experience origin: whether the described situation was the current experience of the help-seeker or based on prior life circumstances.", "We found that 75.2% of conver-Roles Aspects Criteria Supporter ( 3) * Understanding the help-seeker's experiences and feelings ( rated by the help-seeker ) > = 3 Relevance of the utterances to the conversation topic ( rated by the help-seeker ) > = 4 Average length of utterances > = 8 Improvement in the help-seeker's emotion intensity ( rated by the help-seeker )** > = 1 Seeker Describing details about the own emotional problems ( rated by the supporter ) notrequired Average length of utterances > = 6 Table 1: Criteria of high-quality conversations.", "Feedback During the conversation, the help-seeker was asked to give feedback after every two new utterances they received from the supporter.", "Their feedback scored the helpfulness of the supporter messages on a 5-star scale.", "We divided each conversation into three phases and calculated the average feedback score for each phase.", "The scores in the three phases are 4.03, 4.30, and 4.44 respectively, indicating that the supporters were suffi-ciently trained to effectively help the help-seekers feel better.", "Post-chat Survey After each conversation, the help-seeker is asked to rate their emotion and the performance of the supporter on the following five-point Likert scales: (1) Their emotion intensity after the emotional support conversation (a decrease from the intensity before the conversation reflects emotion improvement), (2) the supporter's empathy and understanding of the help-seeker's experiences and feelings, and (3) the relevance of the supporter's responses to the conversation topic.", "We use multiple methods to ensure that the corpus contains high-quality examples of effective emotional support conversations.", "Preliminary Filtering Mechanisms When recruiting participants for the supporter role, we initially received 5,449 applicants, but only 425 (7.8%) passed the training tutorial.", "From the 2,472 conversations that we initially collected, we filtered out those that were not finished by the help-seekers or that had fewer than 16 utterances.", "This filtering left 1,342 conversations (54.3%) for consideration.", "Auto-approval Program for Qualified Conversations We carefully designed the auto-approval program, which is the most important part of data quality control.", "This program uses criteria based on the post-chat survey responses from both roles and the length of utterances, which are summarized in Table 1.", "These criteria are based on initial human reviewing results.", "We show how to choose these auto-approval criteria in Appendix D. The computed average emotion intensity before conversations is 4.04 and 2.14 after.", "Such improvement demonstrates the effectiveness of the emotional support provided by the supporters.", "In a small number of conversations, the help-seeker did not finish the post-chat surveys, so we added another criterion for these conversations requiring that the last two feedback scores from the help-seekers are both greater than 4. Thus, among all the conversations without post-chat surveys, only those who met both (2) and (3) were qualified.", "Using these quality criteria, 1,053 (78.5% of 1,342) of collected conversations were qualified.", "Annotation Correction To further ensure data quality, we reviewed and revised incorrect annotations of support strategy and seeker's emotion intensity.", "(1) For strategy annotation correction, we asked new qualified supporters to review and revise annotations on previously collected conversations as necessary, which led to 2,545 utterances (17.1%) being reviewed.", "We manually reviewed annotations where more than 75% of reviewers disagreed and revised 139 of them.", "(2) According to the auto-approval criteria (Table 7), a conversation can be qualified when the score of the seeker's emotion improvement is less than one, but the other three criteria are satisfied.", "Upon review, we found this to most often result from seekers mistaking negative emotion intensity as the positiveness of their emotion.", "We manually re-checked and revised the emotion intensity of these conversations by using other helpful information, such as the responses to the post-chat survey open question and the seekers' feedback scores during the chat.", "Of 130 such conversations, 92% were revised and included in the corpus.", "The overall statistics of the 1,053 ESConv examples are shown in table 2. Relatively long conversations (avg. 29.8 utterances) indicate that providing", "effective ES usually requires many turns of interaction and considerably more turns than typical for previous emotional chatting (Zhou et al., 2018) or empathetic dialog (Rashkin et al., 2019) datasets.", "We also present the statistics of other annotations in Table 3. Perhaps due to the current outbreak of COVID-19, ongoing depression and job crisis are the most commonly stated problems for the help-seekers and depression and anxiety are the most commonly noted emotions.", "From the help-seekers' feedback, we found that they are usually highly satisfied with the emotional support, which further indicates that the training tutorial based on the ESC Framework indeed helps supporters learn to provide effective ES.", "We release all these annotations to facilitate further research.", "Lexical Features We extracted lexical features of each strategy by calculating the log odds ratio, informative Dirichlet prior (Monroe et al., 2008) of all the unigrams and bigrams for each strategy contrasting to all other strategies.", "We list the top 5 phrases for each strategy in Figure 3. Those strategies are all significantly ( z -score > 3) associated with certain phrases (e.g., Question with are you, Self-disclosure with me).", "Strategy Distribution We computed the distribution of strategies at different phases of the conversation.", "For a conversation with L utterances in total, the k -th ( 1 k L ) utterance is from the supporter and adopts the strategy st , we say that it locates at the conversation progress k/L .", "Specifi-cally, we split the conversation progress into six intervals: [0 , 1] = (cid:83) 4 i =0 [ i/ 5 , ( i + 1) / 5) (cid:83) { 1 } . Then, for all the conversations in ESConv, we counted the proportions of different strategies in the six intervals. We split the conversation progress into six intervals: [0 , 1] = (cid:83) 4 i =0 [ i/ 5 , ( i + 1) / 5) (cid:83) { 1 } and drew the distributions on the six intervals at six points i/ 5( i = 0 , . . . , 5) respectively and connected them, finally obtaining Figure 4. The supporters generally follow the stage order suggested by the ESC Framework (Figure 3), but there is also flexible adjustment of stages and adoption of strategies. For instance, at the early phase of conversation, the supporters usually adopt exploratory strategies such as Question . After knowing help-seekers' situations, the supporters tend to provide their opinions (such as Providing Suggestions ). Throughout the entire conversation, the comforting strategies (such as Affirmation and Reassurance ) are used and label a relatively constant proportion of messages. Strategy Transition We present the top-5 most frequent strategy transitions with 3 / 4 hops in Appendix (Table 6). These transitions indicate that, as the tutorial of ESC framework trains, supporters usually ask questions and explore the help-seekers' situations before comforting the help-seekers. 6 Experiments Our experiments focus on two key questions: (1) How much can ESConv with strategy annotation improve state-of-the-art generative dialog models? (2) Can these models learn to provide effective emotional support from ESConv? 6.1 Backbone Models We used two state-of-the-art pre-trained models as the backbones of the compared variant models: BlenderBot BlenderBot (Roller et al., 2020) is an open-domain conversational agent trained with multiple communication skills, including empathetic responding. As such, BlenderBot should be capable of providing ES for users to some extent. We used the small version 3 of BlenderBot in experiments, because the larger versions have the limitation of maximum context length 128, which we found harms the model performance and response coherence. DialoGPT We additionally evaluated DialoGPT (Zhang et al., 2020), which is a GPT-2-based model pre-trained on large-scale dialog corpora. We used the small version 4 . 6.2 Variant Models Taking each of the above pre-trained models as the backbone, we built the following variant models: Vanilla Directly fine-tuning the backbone model on ESConv with no access to strategy annotations. Formally, suppose the flattened dialog history is x and the response to be generated is y , we maximize the conditional probability: P ( y | x ) = (cid:81) | y | i =1 P ( y i | x , y i ) . Variants with strategy To incorporate the strategy annotation into the backbone model, we used a special token to represent each strategy. For each utterance y from the supporters, we appended the corresponding strategy token before this utterance: y = [st] y , where [st] denotes the special token of the used strategy.", "Then, taking the flattened dialog history x as input, the model generates the response conditioned on the first predicted (or designated) strategy token: P ( y | x ) = P ([st] | x ) (cid:81) | y | i =1 P ( y i | x , [st] , y <i ) .", "We studied three variants that use strategy annotation in the later experiments.", "(1) Oracle : responses are generated conditioned on the gold reference strategy tokens.", "(2) Joint : responses are generated conditioned on predicted (sampled) strategy tokens.", "(3) Random : responses are generated conditioned on randomly selected strategies.", "Implementation details are in Appendix C. 6.3 Automatic Evaluation To investigate the impact of utilizing support strategies on the model performance with either BlenderBot or DialoGPT as the backbone, we compared the performance of the Vanilla, Joint, and Oracle variants described above.", "The automatic metrics we adopted include perplexity ( PPL ), BLEU-2 ( B-2 ) (Papineni et al., 2002), ROUGE-L ( R-L ) (Lin, 2004), and the BOW Embedding-based (Liu et al., 2016) Extrema matching score.", "The metrics except PPL were calculated with an NLG evaluation toolkit 5 (Sharma et al., 2017) with responses tok-enized by NLTK 6 (Loper and Bird, 2002).", "There are three major findings from the experiments (Table 4).", "(1) The Oracle models are significantly superior to the Vanilla models on all the metrics, indicating the great utility of support strategies.", "(2) The Joint models obtain sightly lower scores than the Vanilla models, as, if the predicted strategy is different from the ground truth, the generated response will be much different from the reference response.", "However, learning to predict strategies is important when there are no ground truth labels provided, and we will further investigate the performance of the Joint model in human interactive evaluation (Section 6.4).", "(3) The BlenderBot variants consistently perform better than the DialoGPT ones, indicating that BlenderBot is more suitable for the ESC task.", "Thus, in the subsequent human evaluation, we will focus evaluation on the Blender-5 https://github.com/Maluuba/nlg-eval 6 https://www.nltk.org/ Joint vs. w/o ft Vanilla Random Win Lose Win Lose Win Lose Fluency 71 24 52 35 53 35 Identification 65 25 50 34 54 37 Comforting 75 20 54 34 47 39 Suggestion 72 21 47 39 48 27 Overall 73 20 51 34 56 36 Table 5: Results of the human interactive evaluation.", "We recruited participants from Amazon Mechanical Turk to chat with the models.", "The online tests were conducted on the same platform as our data collection, but with the role of supporter taken by a model.", "Each participant chatted with two different models that were randomly ordered to avoid exposure bias.", "Participants were asked to compare the two models based on the following questions: (1) Fluency : which bot's responses were more flu-ent and understandable?", "(2) Identification : which bot explored your situation more in depth and was more helpful in identifying your problems?", "(3) Comforting : which bot was more skillful in comforting you?", "(4) Suggestion : which bot gave you more helpful suggestions for your problems?", "(5) Overall : generally, which bot's emotional support do you prefer?", "The metrics in (2), (3), and (4) correspond to the three stages in the ESC Framework.", "We compare three pairs of models:", "(a) Joint vs. BlenderBot (without fine-tuning on ESConv),", "(b) Joint vs. Vanilla, and", "(c) Joint vs. Random (using randomly selected strategies).", "To better simulate the real strategy occurrence, the Random model randomly selects a strategy following the strategy distribution in ESConv (Table 3).", "Each pair of models was compared by 100 conversations with human participants (Table 5).", "The results of comparison", "(a) show that BlenderBot's capability of providing ES is significantly improved on all the metrics after being fine-tuned on ESConv.", "From comparison", "(b), we found that utilizing strategies can better comfort the users.", "The results of comparison", "(c) also demonstrate that the proper timing of strategies is critical to help users identify their problems and to provide effective suggestions.", "In general, through being fine-tuned with the su-Figure 5: The Joint model's generation distribution.", "The meanings of all the graphics and abbreviations are consistent with Figure 4. pervision of strategy prediction on ESConv, the pre-trained models become preferred by the users, which proves the high-quality and utility of ESConv.", "In this section, we explore what the dialog models learned from ESConv.", "Firstly , we analyzed the strategy distribution based on the 300 dialogs between users and the Joint model in human interactive experiments.", "We can see in Figure 5 (the calculation was consistent with Figure 4), the strategies that the Joint model adopted have a very similar distribution compared with the truth distribution in ESConv (Figure 4).", "It provides important evidence that models mimic strategy selection and utilization as human supporters do to achieve more effective ES.", "Secondly , we present a case study in Figure 7.", "We see in cases that the Joint model provides more supportive responses and uses more skills in conversation, while BlenderBot without fine-tuning seems not to understand the user's distress very well and prefers to talk more about itself.", "This may imply that having more supportive responses and a diverse set of support strategies are crucial to effective emotional support.", "In this work, we define the task of Emotional Support Conversation and present an ESC Framework.", "The ESC Framework is adapted from the Helping Skills Theory into a dialog system setting, which characterizes three stages with corresponding support strategies useful at each stage.", "We then construct an Emotional Support Conversation dataset, ESConv.", "We carefully design the process of data collection and devise multiple mechanisms to ensure the effectiveness of ES in conversations.", "Finally, we evaluate the ES ability with state-of-the-art dialog models.", "Experimental results show the potential utility of ESConv in terms of improving dialog systems' ability to provide effective ES.", "Our work can facilitate future research of ES dialog systems, as well as improve models for other conversation scenarios where emotional support plays an important role.", "Strategy selection and realization, user state modeling, and task evaluation are important directions for further research.", "This work was supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096).", "This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005.", "There are many types and levels of support that humans can seek to provide, e.g., professional versus peer support, and some of these levels may be inappropriate, unrealistic, and too risky for systems to deliver.", "However, as dialog systems become more common in daily use, opportunities will arise when at least some basic level of supportive statements may be required.", "In developing the ESC Framework, we have carefully considered which elements of conversational support may be relevant for a dialog system and omitted elements that are clear oversteps.", "Considerable additional work is needed to determine what are appropriate levels of support for systems to provide or that can be expected from systems, but our work provides a cautious, yet concrete, step towards developing systems capable of reasonably modest levels of support.", "The corpus we construct can also provide examples to enable future work that probes the ethical extent to which systems can or should provide support.", "In addition to these broader ethical considerations, we have sought to ethically conduct this study, including by transparently communicating with crowdworkers about data use and study intent, compensating workers at a reasonable hourly wage, and obtaining study approval from the Institutional Review Board." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "result", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "result", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain" ]
[ "Detecting rumors on social media is a very critical task with significant implications to the economy, public health, etc.", "Previous works generally capture effective features from texts and the propagation structure.", "However, the uncertainty caused by unreliable relations in the propagation structure is common and inevitable due to wily rumor producers and the limited collection of spread data.", "Most approaches neglect it and may seriously limit the learning of features.", "Towards this issue, this paper makes the first attempt to explore propagation uncertainty for rumor detection.", "Specifically, we propose a novel E dge-enhanced B ayesian G raph C onvolutional N etwork ( EBGCN ) to capture robust structural features.", "The model adaptively rethinks the reliability of latent relations by adopting a Bayesian approach.", "Besides, we design a new edge-wise consistency training framework to optimize the model by enforcing consistency on relations.", "Experiments on three public benchmark datasets demonstrate that the proposed model achieves better performance than baseline methods on both rumor detection and early rumor detection tasks.", "With the ever-increasing popularity of social media sites, user-generated messages can quickly reach a wide audience.", "However, social media can also enable the spread of false rumor information (Vosoughi et al., 2018).", "Rumors are now viewed as one of the greatest threats to democracy, journalism, and freedom of expression.", "Therefore, detecting rumors on social media is highly desirable and socially beneficial (Ahsan et al., 2019).", "Almost all the previous studies on rumor detection leverage text content including the source tweet and all user retweets or replies.", "As time goes on, rumors form their specific propagation structures after being retweeted or replied to.", "Vosoughi (2015); Vosoughi et al. (2018) have confirmed rumors spread significantly farther, faster, deeper, and more broadly than the truth.", "They provide the possibility of detecting rumors through the propagation structure.", "Some works (Ma et al., 2016; Kochkina et al., 2018) typically learn temporal features alone from propagation sequences, ignoring the internal topology.", "Recent approaches (Ma et al., 2018; Khoo et al., 2020) model the propagation structure as trees to capture structural features.", "Bian et al. (2020); Wei et al. (2019) construct graphs and aggregate neighbors' features through edges based on reply or retweet relations.", "However, most of them only work well in a narrow scope since they treat these relations as reliable edges for message-passing.", "As shown in Figure 1, the existence of inaccurate relations brings uncertainty in the propagation structure.", "The neglect of unreliable relations would lead to severe error accumulation through multi-layer message-passing and limit the learning of effective features.", "We argue such inherent uncertainty in the propagation structure is inevitable for two aspects:", "i) In the real world, rumor producers are always wily.", "They tend to viciously manipulate others to create fake supporting tweets or remove opposing voices to evade detection (Yang et al., 2020).", "In these common scenarios, relations can be manipulated, which provides uncertainty in the propagation structure.", "ii) Some annotations of spread relations are subjective and fragmentary (Ma et al., 2017; Zu-biaga et al., 2016).", "The available graph would be a portion of the real propagation structure as well as contain noisy relations, resulting in uncertainty.", "Therefore, it is very challenging to handle inherent uncertainty in the propagation structure to obtain robust detection results.", "To alleviate this issue, we make the first attempt to explore the uncertainty in the propagation structure.", "Specifically, we propose a novel E dge-enhanced B ayesian G raph C onvolutional N etwork ( EBGCN ) for rumor detection to model the uncertainty issue in the propagation structure from a probability perspective.", "The core idea of EBGCN is to adaptively control the message-passing based on the prior belief of the observed graph to surrogate the fixed edge weights in the propagation graph.", "In each iteration, edge weights are inferred by the posterior distribution of latent relations according to the prior belief of node features in the observed graph.", "Then, we utilize graph convolutional layers to aggregate node features by aggregating various adjacent information on the refining edges.", "Through the above network, EBGCN can handle the uncertainty in the propagation structure and promote the robustness of rumor detection.", "Moreover, due to the unavailable of missing or inaccurate relations for training the proposed model, we design a new edge-wise consistency training framework.", "The framework combines unsupervised consistency training on these unlabeled relations into the original supervised training on labeled samples, to promote better learning.", "We further ensure the consistency between the latent distribution of edges and the distribution of node features in the observed graph by computing KL-divergence between two distributions.", "Ultimately, both the cross-entropy loss of each claim and the Bayes by Backprop loss of latent relations will be optimized to train the proposed model.", "We conduct experiments on three real-world benchmark datasets ( i.e., Twitter15 , Twitter16 , and PHEME ).", "Extensive experimental results demonstrate the effectiveness of our model.", "EBGCN offers a superior uncertainty representation strategy and boosts the performance for rumor detection.", "The main contributions of this work are summarized as follows: We propose novel Edge-enhanced Bayesian Graph Convolutional Networks (EBGCN) to handle the uncertainty in a probability manner.", "To the best of our knowledge, this is the first attempt to consider the inherent uncertainty in the propagation structure for rumor detection.", "We design a new edge-wise consistency training framework to optimize the model with unlabeled latent relations.", "Experiments on three real-world benchmark datasets demonstrate the effectiveness of our model on both rumor detection and early rumor detection tasks 1 .", "Traditional methods on rumor detection adopted machine learning classifiers based on handcrafted features, such as sentiments (Castillo et al., 2011), bag of words (Enayet and El-Beltagy, 2017) and time patterns (Ma et al., 2015).", "Based on salient features of rumors spreading, Wu et al. (2015); Ma et al. (2017) modeled propagation trees and then used SVM with different kernels to detect rumors.", "Recent works have been devoted to deep learning methods.", "Ma et al. (2016) employed Recurrent Neural Networks (RNN) to sequentially process each timestep in the rumor propagation sequence.", "To improve it, many researchers captured more long-range dependency via attention mechanisms (Chen et al., 2018), convolutional neural networks (Yu et al., 2017; Chen et al., 2019), and Transformer (Khoo et al., 2020).", "However, most of them focused on learning temporal features alone, ignoring the internal topology structure.", "To capture topological-structural features, Ma et al. (2018) presented two recursive neural network (RvNN) based on bottom-up and top-down propagation trees.", "Yuan et al. (2019); Lu and Li (2020); Nguyen et al. (2020) formulated the propagation structure as graphs.", "Inspired by Graph Convolutional Network (GCN) (Kipf and Welling, 2017), Bian et al. (2020) first applied two GCNs 1 The source code is available at https://github.", "com/weilingwei96/EBGCN .", "based on the propagation and dispersion graphs.", "Wei et al. (2019) jointly modeled the structural property by GCN and the temporal evolution by RNN.", "However, most of them treat the edge as the reliable topology connection for message-passing.", "Ignoring the uncertainty caused by unreliable relations could lead to lacking robustness and make it risky for rumor detection.", "Inspired by valuable research (Zhang et al., 2019a) that modeled uncertainty caused by finite available textual contents, this paper makes the first attempt to consider the uncertainty caused by unreliable relations in the propagation structure for rumor detection.", "Graph Neural Networks (GNNs) (Kipf and Welling, 2017; Schlichtkrull et al., 2018; Velickovic et al., 2018) have demonstrated remarkable performance in modeling structured data in a wide variety of fields, e.g. , text classifcation (Yao et al., 2019), recommendation system (Wu et al., 2019) and emotion recognition (Ghosal et al., 2019).", "Although promising, they have limited capability to handle uncertainty in the graph structure.", "While the graphs employed in real-world applications are themselves derived from noisy data or modeling assumptions.", "To alleviate this issue, some valuable works (Luo et al., 2020; Zhang et al., 2019b) provide an approach for incorporating uncertain graph information by exploiting a Bayesian framework (Maddox et al., 2019).", "Inspired by them, this paper explores the uncertainty in the propagation structure from a probability perspective, to obtain more robust rumor detection results.", "This paper develops EBGCN which processes text contents and propagation structure of each claim for rumor detection.", "In general, rumor detection commonly can be regarded as a multi-classification task, which aims to learn a classifier from training claims for predicting the label of a test claim.", "Formally, let C = { c 1 , c 2 , ..., c m } be the rumor detection dataset, where c i is the i -th claim and m is the number of claims.", "For each claim c i = { r i , x i 1 , x i 2 , ..., x in i 1 , G i } , G i indicates the propagation structure, r i is the source tweet, x ij refers to the j -th relevant retweet, and n i represents the number of tweets in the claim c i .", "Specifically, G i is defined as a propagation graph G i = (cid:104) V i , E i (cid:105) with the root node r i (Ma et al., 2018; Bian et al., 2020), where V i = { r i , x i 1 , x i 2 , ..., x in i 1 } refers to the node set and E i = { e ist | s, t = 0 , ..., n i 1 } represent a set of directed edges from a tweet to its corresponding retweets.", "Denote A i R n i n i as an adjacency matrix where the initial value is st = (cid:26) 1 , if e ist E i 0 , otherwise .", "Besides, each claim c i is annotated with a ground-truth label y i Y , where Y represents fine-grained classes.", "Our goal is to learn a classifier from the labeled claimed set, that is f : C Y .", "In this section, we propose a novel edge-enhanced bayesian graph convolutional network (EBGCN) for rumor detection in Section 4.2.", "For better training, we design an edge-wise consistency training framework to optimize EBGCN in Section 4.3.", "The overall architecture of EBGCN is shown in Figure 2.", "Given the input sample including text contents and its propagation structure, we first formulate the propagation structure as directed graphs with two opposite directions, i.e., a top-down propagation graph and a bottom-up dispersion graph.", "Text contents are embedded by the text embedding layer.", "After that, we iteratively capture rich structural characteristics via two main components, node update module, and edge inference module.", "Then, we aggregate node embeddings to generate graph embedding and output the label of the claim.", "For training, we incorporate unsupervised consistency training on the Bayes by Backprop loss of unlabeled latent relations.", "Accordingly, we optimize the model by minimizing the weighted sum of the unsupervised loss and supervised loss.", "The initial graph construction is similar to the pre-viou work (Bian et al., 2020), i.e., build two distinct directed graphs for the propagation structure of each claim c i .", "The top-down propagation graph and bottom-up dispersion graph are denoted as G TDi and G BUi , respectively.", "Their corresponding initial adjacency matrices are A TDi = A i and A BUi = A (cid:62) i .", "description for better presenting our method.", "The initial feature matrix of postings in the claim c can be extracted Top-5000 words in terms of TF-IDF values, denoted as X = [ x 0 , x 1 , ..., x n 1 ] R n d 0 , where x 0 R d 0 is the vector of the source tweet and d 0 is the dimensionality of textual features.", "The initial feature matrices of nodes in propagation graph and dispersion graph are the same, i.e., XTD = XBU = X .", "Graph convolutional networks (GCNs) (Kipf and Welling, 2017) are able to extract graph structure information and better characterize a node's neighborhood.", "They define multiple Graph Conventional Layers (GCLs) to iteratively aggregate features of neighbors for each node and can be formulated as a simple differentiable message-passing framework.", "Motivated by GCNs, we employ the GCL to update node features in each graph.", "Formally, node features at the l -th layer H ( l ) = [ h ( l ) 0 , h ( l ) 1 , ..., h ( l ) n 1 ] can be defined as, H ( l ) = ( A ( l 1) H ( l 1) W ( l ) + b ( l ) ) , (1) where A ( l 1) represents the normalization of adjacency matrix A ( l 1) (Kipf and Welling, 2017).", "We initialize node representations by textual features, i.e., H (0) = X .", "To alleviate the negative effects of unreliable relations, we rethink edge weights based on the currently", "Specifically, we adjust the weight between two nodes by computing a transformation f e ( ; t ) based on node representations at the previous layer.", "Then, the adjacency matrix will be updated, i.e., g ( l ) t = f e (cid:16) (cid:107) h ( l 1) i h ( l 1) j (cid:107) ; t (cid:17) , A ( l ) = T (cid:88) t =1 ( W ( l ) t g ( l ) t + b ( l ) t ) A ( l 1) .", "(2) In practice, f e ( ; t ) consists an convolutional layer and an activation function.", "T refers to the number of latent relation types.", "( ) refers to a sigmoid function.", "W ( l ) t and W ( l ) t are learnable parameters.", "We perform share parameters to the edge inference layer in two graphs GTD and GBU .", "After the stack of transformations in two layers, the model can effectively accumulate a normalized sum of features of the neighbors driven by latent relations, denoted as HTD and HBU .", "We regard the rumor detection task as a graph classification problem.", "To aggregate node representations in the graph, we employ aggregator to form the graph representations.", "Given the node representations in the propagation graph HTD and the node representations in the dispersion graph HBU , the graph representations can be computed as: CTD = meanpooling ( HTD ) , CBU = meanpooling ( HBU ) , (3) where meanpooling ( ) refers to the mean-pooling aggregating function.", "Based on the concatenation of two distinct graph representations, label probabilities of all classes can be defined by a full connection layer and a softmax function, i.e., y = softmax (cid:0) W c [ CTD ; CBU ] + b c (cid:1) , (4) where W c and b c are learnable parameter matrices.", "where y i is a vector representing distribution of ground truth label for the i -th claim sample.", "For the unsupervised learning loss L e , we amortize the posterior distribution of the classification weight p ( ) as q ( ) to enable quick prediction at the test stage and learn parameters by minimizing the average expected loss over latent relations, i.e., = arg min L e , where L e = E (cid:104) DKL (cid:16) p ( r ( l ) | H ( l 1) ,G ) (cid:107) q ( r ( l ) | H ( l 1) ,G ) (cid:17)(cid:105) , = arg max E [log (cid:90) p ( r ( l ) | H ( l 1) , ) q ( | H ( l 1) ,G ) d ] , (6) where r is the prediction distribution of latent relations.", "To ensure likelihood tractably, we model the prior distribution of each latent relation r t , t [1 , T ] independently.", "For each relation, we define a factorized Gaussian distribution for each latent relation q ( | H ( l 1) , G ; ) with means t and variances 2 t set by the transformation layer, q ( | H ( l 1) , G ; )) = T (cid:89) t =1 q ( t |{ g ( l ) t } Tt =1 ) = T (cid:89) t =1 N ( t , 2 t ) , t = f ( { g ( l ) t } Tt =1 ; ) , 2 t = f ( { g ( l ) t } T t =1 ; ) , (7) where f ( ; ) and f ( ; ) refer to compute the mean and variance of input vectors, parameterized by and , respectively.", "Such that amounts to set the weight of each latent relation.", "Besides, we also consider the likelihood of latent relations when parameterizing the posterior distribution of prototype vectors.", "The likelihood of latent relations from the l -th layer based on node embeddings can be adaptively computed by, p ( r ( l ) | H ( l 1) , ) = T (cid:89) t =1 p ( r ( l ) t | H ( l 1) , t ) , p ( r ( l ) t | H ( l 1) , t ) = exp (cid:16) W t g ( l ) t + b t (cid:17) (cid:80) Tt =1 exp (cid:16) W t g ( l ) t + b t (cid:17) .", "(8) In this way, the weight of edges can be adaptively adjusted based on the observed graph, which can thus be used to effectively pass messages and learn more discriminative features for rumor detection.", "To sum up, in training, we optimize our model EBGCN by minimizing the cross-entropy loss of labeled claims L c and Bayes by Backprop loss of unlabeled latent relations L e , i.e., = arg min L c + (1 ) L e , (9) where is the trade-off coefficient.", "We evaluate the model on three real-world benchmark datasets: Twitter15 (Ma et al., 2017), Twitter16 (Ma et al., 2017), and PHEME (Zubiaga et al., 2016).", "The statistics are shown in Table 1.", "Twitter15 and Twitter16 2 contain 1,490 and 818 claims, respectively.", "Each claim is labeled as Non-rumor (NR), False Rumor (F), True Rumor (T), or Unverified Rumor (U).", "Following (Ma et al., 2018; Bian et al., 2020), we randomly split the dataset into five parts and conduct 5-fold cross-validation to obtain robust results.", "PHEME dataset 3 provides 2,402 claims covering nine events and contains three labels, False Rumor (F), True Rumor (T), and Unverified Rumor (U).", "Following the previous work (Wei et al., 2019), we conduct leave-one-event-out cross-validation, i.e., in each fold, one event's samples are used for testing, and all the rest are used for training.", "For Twitter15 and Twitter16 , we compare our proposed model with the following methods.", "DTC 2 https://www.dropbox.com/s/", "3 https://figshare.com/articles/ dataset/PHEME_dataset_for_Rumour_Detection_and_Veracity_Classification/6392078", "(Castillo et al., 2011) adopted a decision tree classifier based on information credibility.", "SVM-TS (Ma et al., 2015) leveraged time series to model the chronological variation of social context features via a linear SVM classifier.", "SVM-TK (Ma et al., 2017) applied an SVM classifier with a propagation tree kernel to model the propagation structure of rumors.", "GRU-RNN (Ma et al., 2016) employed RNNs to model the sequential structural features.", "RvNN (Ma et al., 2018) adopted two recursive neural models based on a bottom-up and a top-down propagation tree.", "StA-PLAN (Khoo et al., 2020) employed transformer networks to incorporate long-distance interactions among tweets with propagation tree structure.", "BiGCN (Bian et al., 2020) utilized bi-directional GCNs to model bottom-up propagation and top-down dispersion.", "For PHEME , we compare with several representative state-of-the-art baselines.", "NileTMRG (Enayet and El-Beltagy, 2017) used linear support vector classification based on bag of words.", "BranchLSTM (Kochkina et al., 2018) decomposed the propagation tree into multiple branches and adopted a shared LSTM to capture structural features.", "RvNN (Ma et al., 2018) consisted of two recursive neural networks to model propagation trees.", "Hierarchical GCN-RNN (Wei et al., 2019) modeled structural property based on GCN and RNN.", "BiGCN (Bian et al., 2020) consisted of propagation and dispersion GCNs to learn structural features from propagation graph.", "For Twitter15 and Twitter16 , we follow (Ma et al., 2018; Bian et al., 2020; Khoo et al., 2020) and evaluate the accuracy (Acc.) over four categories and F1 score ( F 1 ) on each class.", "For PHEME , following (Enayet and El-Beltagy, 2017; Kochkina et al., 2018; Wei et al., 2019), we apply the accuracy (Acc.), macro-averaged F1 (m F 1 ) as evaluation metrics.", "Also, we report the weighted-averaged F1 (w F 1 ) because of the imbalanced class problem.", "Following comparison baselines, the dimension of hidden vectors in the GCL is set to 64.", "The number of latent relations T and the coefficient weight are set to [1 , 5] and [0 . 0 , 1 . 0] , respectively.", "we train the model via backpropagation and a wildly used stochastic gradient descent named Adam (Kingma and Ba, 2015).", "The learning rate is set to { 0 .", "0002 , 0 .", "0005 , 0 .", "02 } for Twitter15 , Twitter16 , and PHEME , respectively.", "The training process is iterated upon 200 epochs and early stopping (Yuan et al., 2007) is applied when the validation loss stops decreasing by 10 epochs.", "The optimal set of hyperparameters are determined by testing the performance on the fold0 set of Twitter15 and Twitter16 , and the class-balanced charlie hebdo event set of PHEME .", "Besides, on PHEME , following (Wei et al., 2019), we replace TF-IDF features with word embeddings by skip-gram with negative sampling (Mikolov et al., 2013) and set the dimension of textual features to 200 .", "We implement this variant of BiGCN and EBGCN, denoted as BiGCN(SKP) and EBGCN(SKP), respectively.", "For results of baselines, we implement BiGCN according to their public project 4 under the same environment.", "Other results of baselines are referenced from original papers (Khoo et al., 2020; Wei et al., 2019; Ma et al., 2018).", "Table 2 shows results of rumor detection on Twitter15 , Twitter16 , and PHEME datasets.", "Our proposed model EBGCN obtains the best performance among baselines.", "Specifically, for Twitter15 , EBGCN outperforms state-of-the-art models 2.4% accuracy and 3.6% F1 score of false rumor.", "For Twitter16 , our model obtains 3.4% and 6.0% improvements on accuracy and F1 score of non-rumor, respectively.", "For PHEME , EBGCN significantly outperforms previous work by 40.2% accuracy, 34.7% m F 1 , and 18.0% w F 1 .", "Deep learning-based ( RvNN, StA-PLAN, BiGCN and EBGCN ) outperform conventional methods using hand-crafted features ( DTC , SVM-TS ), which reveals the superiority of learning high-level representations for detecting rumors.", "4 https://github.com/TianBian95/BiGCN Twitter15 Method Acc.", "Moreover, compared with sequence-based models GRU-RNN , and StA-PLAN , EBGCN outperform them.", "It can attribute that they capture temporal features alone but ignore internal topology structures, which limit the learning of structural features.", "EBGCN can aggregate neighbor features in the graph to learn rich structural features.", "Furthermore, compared with state-of-the-art graph-based BiGCN , EBGCN also obtains better performance.", "We discuss the fact for two main reasons.", "First, BiGCN treats relations among tweet nodes as reliable edges, which may introduce inaccurate or irrelevant features.", "Thereby their performance lacks robustness.", "EBGCN considers the inherent uncertainty in the propagation structure.", "In the model, the unreliable relations can be refined", "in a probability manner, which boosts the bias of express uncertainty.", "Accordingly, the robustness of detection is enhanced.", "Second, the edge-wise consistency training framework ensures the consistency between uncertain edges and the current nodes, which is also beneficial to learn more effective structural features for rumor detection.", "Besides, EBGCN(SKP) and BiGCN(SKP) outperforms EBGCN and BiGCN that use TF-IDF features in terms of Acc.", "and w F 1 .", "It shows the superiority of word embedding to capture textual features.", "Our model consistently obtains better performance in different text embedding.", "It reveals the stability of EBGCN .", "The Effect of Edge Inference.", "The number of latent relation types T is a critical parameter in the edge inference module.", "Figure", "3(a) shows the accuracy score against T .", "The best performance is obtained when T is 2, 3, and 4 on Twitter15 , Twitter16 , and PHEME , respectively.", "Besides, these best settings are different.", "An idea explanation is that complex relations among tweets are various in different periods and gradually tend to be more sophisticated in the real world with the development Figure 4: Performance of early rumor detection.", "of social media.", "The edge inference module can adaptively refine the reliability of these complex relations by the posterior distribution of latent relations.", "It enhances the bias of uncertain relations and promotes the robustness of rumor detection.", "The Effect of Unsupervised Relation Learning Loss.", "The trade-off parameter controls the effect of the proposed edge-wise consistency training framework.", "= 0 .", "0 means this framework is omitted.", "The right in Figure 3 shows the accuracy score against .", "When this framework is removed, the model gains the worst performance.", "The optimal is 0.4, 0.3, and 0.3 on Twitter15 , Twitter16 , and PHEME , respectively.", "The results proves the effectiveness of this framework.", "Due to wily rumor producers and limited annotations of spread information, it is common and inevitable that datasets contains unreliable relations.", "This framework can ensure the consistency between edges and the corresponding node pairs to avoid the negative features.", "Rumor early detection is to detect a rumor at its early stage before it wide-spreads on social media so that one can take appropriate actions earlier.", "It is especially critical for a real-time rumor detection system.", "To evaluate the performance on rumor early detection, we follow (Ma et al., 2018) and control the detection deadline or tweet count since the source tweet was posted.", "The earlier the detection deadline or the less the tweet count, the less propagation information can be available.", "Figure 4 shows the performance of early rumor detection.", "First, all models climb as the detection deadline elapses or tweet count increases.", "Particularly, at each deadline or tweet count, our model EBGCN reaches a relatively high accuracy score than other comparable models.", "Second, compared with RvNN that captures temporal features alone and STM-TK based on handcrafted features, the superior performance of EBGCN and BiGCN that explored rich structural features reveals that structural features are more beneficial to the early detection of rumors.", "Third, EBGCN obtains better early detection results than BiGCN .", "It demonstrates that EBGCN can learn more conducive structural features to identify rumors by modeling uncertainty and enhance the robustness for early rumor detection.", "Overall, our model not only performs better long-term rumor detection but also boosts the performance of detecting rumors at an early stage.", "In this part, we perform the case study to show the existence of uncertainty in the propagation structure and explain why EBGCN performs well.", "We randomly sample a false rumor from PHEME , as depicted in Figure 5.", "The tweets are formulated as nodes and relations are modeled as edges in the graph, where node 1 refers to the source tweet and node 2 8 refer to the following retweets.", "As shown in the left of Figure 5, we observe that tweet 5 is irrelevant with tweet 1 although replying, which reveals the ubiquity of unreliable relations among tweets in the propagation structure and it is reasonable to consider the uncertainty caused by these unreliable relations.", "Right of Figure 5 indicates constructed graphs where the color shade indicates the value of edge weights.", "The darker the color, the greater the edge weight.", "The existing graph-based models always generate the representation of node 1 by aggregating the information of its all neighbors ( node 2, 5, and 6) according to seemingly reliable edges.", "However, edge between node 1 and 5 would bring noise features and limit the learning of useful features for rumor detection.", "Our model EBGCN successfully weakens the negative effect of this edge by both the edge inference layer under the ingenious edge-wise consistency training framework.", "Accordingly, the (cid:884) (cid:883) 5 (cid:888) (cid:889) (cid:890) (cid:885) (cid:886) Hi Henry would you be willing to give ITV News a phone interview for our Lunchtime bulletin in 2 hours?", "In this paper, we have studied the uncertainty in the propagation structure from a probability perspective for rumor detection.", "Specifically, we propose Edge-enhanced Bayesian Graph Convolutional Networks (EBGCN) to handle uncertainty with a Bayesian method by adaptively adjusting weights of unreliable relations.", "Besides, we design an edge-wise consistency training framework incorporating unsupervised relation learning to enforce the consistency on latent relations.", "Extensive experiments on three commonly benchmark datasets have proved the effectiveness of modeling uncertainty in the propagation structure.", "EBGCN significantly outperforms baselines on both rumor detection and early rumor detection tasks." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "objective", "abstain", "method", "abstain", "method", "objective", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain" ]
[ "Word alignment and machine translation are two closely related tasks.", "Neural translation models, such as RNN-based and Transformer models, employ a target-to-source attention mechanism which can provide rough word alignments, but with a rather low accuracy.", "High-quality word alignment can help neural machine translation in many different ways, such as missing word detection, annotation transfer and lexicon injection.", "Existing methods for learning word alignment include statistical word aligners (e.g. GIZA++) and recently neural word alignment models.", "This paper presents a bidirectional Transformer based alignment (BTBA) model for unsupervised learning of the word alignment task.", "Our BTBA model predicts the current target word by attending the source context and both left-side and right-side target context to produce accurate target-to-source attention (alignment).", "We further fine-tune the target-to-source attention in the BTBA model to obtain better alignments using a full context based optimization method and self-supervised training.", "We test our method on three word alignment tasks and show that our method outperforms both previous neural word alignment approaches and the popular statistical word aligner GIZA++.", "Neural machine translation (NMT) (Bahdanau et al., 2014; Vaswani et al., 2017) achieves state-of-the-art results for various translation tasks (Bar-rault et al., 2019, 2020).", "Neural translation models, such as RNN-based (Bahdanau et al., 2014) and Transformer (Vaswani et al., 2017) models, generally have an encoder-decoder structure with a target-to-source attention mechanism.", "The target-to-source attention in NMT can provide rough word alignments but with a rather low accuracy (Koehn and Knowles, 2017).", "High-quality word alignment can be used to help NMT in many different ways, such as detecting source words that are missing in the translation (Lei et al., 2019), integrating an external lexicon into NMT to improve translation for domain-specific terminology or low-frequency words (Chatterjee et al., 2017; Chen et al., 2020), transferring word-level annotations (e.g. underline and hyperlink) from source to target for docu-ment/webpage translation (Muller, 2017).", "A number of approaches have been proposed to learn the word alignment task, including both statistical models (Brown et al., 1993) and recently neural models (Zenkel et al., 2019; Garg et al., 2019; Zenkel et al., 2020; Chen et al., 2020; Stengel-Eskin et al., 2019; Nagata et al., 2020).", "The popular word alignment tool GIZA++ (Och and Ney, 2003) is based on statistical IBM models (Brown et al., 1993) which learn the word alignment task through unsupervised learning and do not require gold alignments from humans as training data.", "As deep neural networks have been successfully applied to many natural language processing (NLP) tasks, neural word alignment approaches have developed rapidly and outperformed statistical word aligners (Zenkel et al., 2020; Garg et al., 2019).", "Neural word alignment approaches include both supervised and unsupervised approaches: supervised approaches (Stengel-Eskin et al., 2019; Nagata et al., 2020) use gold alignments from human annotators as training data and train neural models to learn word alignment through supervised learning; unsupervised approaches do not use gold human alignments for model training and mainly focus on improving the target-to-source attention in NMT models to produce better word alignment, such as performing attention optimization during inference (Zenkel et al., 2019), encouraging contiguous alignment connections (Zenkel et al., 2020) or using alignments from GIZA++ to supervise/guide the attention in NMT models (Garg et al., 2019).", "We propose a bidirectional Transformer based alignment (BTBA) model for unsupervised learning of the word alignment task.", "Our BTBA model predicts the current target word by paying attention to the source context and both left-side and right-side target context to produce accurate target-to-source attention (alignment).", "Compared to the original Transformer translation model (Vaswani et al., 2017) which computes target-to-source attention based on only the left-side target context due to left-to-right autoregressive decoding, our BTBA model can exploit both left-side and right-side target context to compute more accurate target-to-source attention (alignment).", "We further fine-tune the BTBA model to produce better alignments using a full context based optimization method and self-supervised training.", "We test our method on three word alignment tasks and show that our method outperforms previous neural word alignment approaches and also beats the popular statistical word aligner GIZA++.", "The goal of the word alignment task (Och and Ney, 2003) is to find word-level alignments for parallel source and target sentences.", "Given a source sentence s I 1 0 = s 0 , ..., s i , ..., s I 1 and its parallel target sentence t J 1 0 = t 0 , ..., t j , ..., t J 1 , the word alignment G is defined as a set of links that link the corresponding source and target words as shown in Equation", "1. G { ( i, j ) : i = 0 , ..., I 1; j = 0 , ..., J 1 } (1) The word alignment G allows one-to-one, one-to-many, many-to-one, many-to-many alignments and also unaligned words (Och and Ney, 2003).", "Due to the lack of labelled training data (gold alignments annotated by humans) for the word alignment task, most word alignment methods learn the word alignment task through unsupervised learning (Brown et al., 1993; Zenkel et al., 2020; Chen et al., 2020).", "Neural translation models (Bahdanau et al., 2014; Vaswani et al., 2017) generally have an encoder-decoder structure with a target-to-source attention mechanism: the encoder encodes the source sentence; the decoder generates the target sentence by attending the source context and performing", "left-to-right autoregressive decoding.", "The target-to-source attention learned in NMT models can provide rough word alignments between source and target words.", "Among various translation models, the Transformer translation model (Vaswani et al., 2017) achieves state-of-the-art results on various translation tasks and is based solely on attention: source-to-source attention in the encoder; target-to-target and target-to-source attention in the decoder.", "The attention networks used in the Transformer model are called multi-head attention which performs attention using multiple heads as shown in Equation", "2. MultiHead ( Q, K, V ) = Concat ( head 0 , ..., head N 1 ) W o Head n = A n V n A n = softmax (cid:16) Q n K Tn d k (cid:17) Q n = QW Qn , K n = KW Kn , V n = V W Vn (2) where Q , K and V are query, keys, values for the attention function; W o , W Qn , W Kn and W Vn are model parameters; d k is the dimension of the keys.", "Based on parallelizable attention networks, the Transformer can be trained much faster than RNN-based translation models (Bahdanau et al., 2014).", "Word alignment is a key component in traditional statistical machine translation (SMT), such as phrase-based SMT (Koehn et al., 2003) which extracts phrase-based translation rules based on word alignments.", "The popular statistical word alignment tool GIZA++ (Och and Ney, 2003) implements the statistical IBM models (Brown et al., 1993).", "The statistical IBM models are mainly based on lexical translation probabilities.", "Words that co-occur frequently in parallel sentences generally have higher lexical translation probabilities and are more likely to be aligned.", "The statistical IBM models are trained using parallel sentence pairs with no word-level alignment annotations and therefore learn the word alignment task through unsupervised learning.", "Based on a reparameterization of IBM Model 2, Dyer et al. (2013) presented another popular statistical word alignment tool fast align which can be trained faster than GIZA++, but GIZA++ generally produces better word alignments than fast align.", "With neural networks being successfully applied to many NLP tasks, neural word alignment approaches have received much attention.", "The first neural word alignment models are based on feed-forward neural networks (Yang et al., 2013) and recurrent neural networks (Tamura et al., 2014) which can be trained in an unsupervised manner by noise-contrastive estimation (NCE) (Gutmann and Hyvarinen, 2010) or in a supervised manner by using alignments from human annotators or existing word aligners as labelled training data.", "As NMT (Bahdanau et al., 2014; Vaswani et al., 2017) achieves great success, the target-to-source attention in NMT models can be used to infer rough word alignments, but with a rather low accuracy.", "A number of recent works focus on improving the target-to-source attention in NMT to produce better word alignments (Garg et al., 2019; Zenkel et al., 2019; Chen et al., 2020; Zenkel et al., 2020).", "Garg et al. (2019) trained the Transformer translation model to jointly learn translation and word alignment through multi-task learning using word alignments from existing word aligners such as GIZA++ as labelled training data.", "Chen et al. (2020) proposed a method to infer more accurate word alignments from the Transformer translation model by choosing the appropriate decoding step and layer for word alignment inference.", "Zenkel et al. (2019) proposed an alignment layer for the Transformer translation model and they only used the output of the alignment layer for target word prediction which forces the alignment layer to produce better alignment (attention).", "Zenkel et al. (2019) also proposed an attention optimization method which directly optimizes the attention for the test set to produce better alignment.", "Zenkel et al. (2020) proposed to improve the attention in NMT by using a contiguity loss to encourage contiguous alignment connections and performing direct attention optimization to maximize the translation probability for both the source-to-target and target-to-source translation models.", "Compared to these methods that infer word alignments based on NMT target-to-source attention which is computed by considering only the left-side target context, our BTBA model can exploit both left-side and right-side target context to compute better target-to-source attention (alignment).", "There are also a number of supervised neural approaches that require gold alignments from humans for learning the word alignment task (Stengel-Eskin et al., 2019; Nagata et al., 2020).", "Because gold alignments from humans are scarce, Stengel-Eskin et al. (2019); Nagata et al. (2020)'s models only have a small size of task-specific training data and exploit representations from pre-trained NMT and BERT models.", "Compared to these supervised methods, our method does not require gold human alignments for model training.", "We present a bidirectional Transformer based alignment (BTBA) model for unsupervised learning of the word alignment task.", "Motivated by BERT which learns a masked language model (Devlin et al., 2019), we randomly mask 10% of the words in the target sentence and then train our BTBA model to predict the masked target words by paying attention to the source context and both left-side and right-side target context.", "Therefore, our BTBA model can exploit both left-side and right-side target context to compute more accurate target-to-source attention (alignment) compared to the original Transformer translation model (Vaswani et al., 2017) which computes the target-to-source attention based on only the left-side target context due to left-to-right autoregressive decoding.", "We further fine-tune the target-to-source attention in the BTBA model to produce better alignments using a full context based optimization method and self-supervised training.", "Figure 1 shows the architecture of the proposed BTBA model.", "The encoder is used to encode the source sentence 1 and has the same structure as the original Transformer encoder (Vaswani et al., 2017).", "The input of the decoder is the masked target sentence and 10% of the words in the target sentence are randomly masked 2 .", "As shown in Figure 1, the target sentence contains a masked word <x> .", "The decoder contains 6 layers.", "Each of the first 5 layers of the decoder has 3 sub-layers: 1 Following Och and Ney (2003)'s work, we add a <bos> token at the beginning of the source sentence for target words that are not aligned with any source words.", "2 During training, we randomly mask 10% of the words in the target sentences for each training epoch, i.e., one target sentence is masked differently for different training epochs.", "If a target sentence contains less than 10 words, then we just randomly mask one word in this sentence.", "a multi-head self-attention sub-layer, a target-to-source multi-head attention sub-layer and a feed forward sub-layer, like a standard Transformer decoder layer except that the self-attention sub-layer in the standard Transformer decoder can only attend left-side target context while the self-attention sub-layer in our BTBA decoder can attend all target words and make use of both left-side and right-side target context to compute better target-to-source attention (alignment).", "The last layer of the BTBA decoder contains a self-attention sub-layer and a target-to-source attention sub-layer like the first 5 layers of the BTBA decoder but without the feed-forward sub-layer.", "We use the output of the last target-to-source attention sub-layer for predicting the masked target words and we use the attention of the last target-to-source attention sub-layer for inferring word alignments between source and target words.", "Our design that only uses the last target-to-source attention sub-layer output for predicting the masked target words is motivated by the alignment layer of Zenkel et al. (2019) in order to force Original the cake is very delicious <x> cake is very delicious the <x> is very delicious Masked the cake <x> very delicious the cake is <x> delicious the cake is very <x> Table 1: Masking target sentences in the test set.", "the last target-to-source attention sub-layer to pay attention to the most important source words for predicting the target word and therefore produce better word alignments.", "In Figure 1, A ijn is the attention value of the j th target word paying to the i th source word using the n th head in the last target-to-source multi-head attention sub-layer.", "V 0 , V 1 , V 2 , V 3 , V 4 are the outputs of the decoder for the 5 target words and V 1 is used to predict the masked target word cake.", "Because V 1 is used to predict cake, the attention value A 21 n should be learned to be high in order to make V 1 contain the most useful source information (kuchen).", "Therefore, A ijn can be used to infer word alignment for the target word cake effectively.", "However, A ijn cannot provide good word alignments for unmasked target words such as delicious in Figure 1 because V 4 is not used to predict any target word and A 54 n is not necessarily learned to be high.", "Because A ijn can only be used to infer accurate word alignment for masked target words but we want to get alignments for all target words in the test set, we mask a target sentence t J 1 0 in the test set J times and each time we mask one target word as shown in Table", "1. Each masked target sentence is fed into the BTBA model together with the source sentence and then we collect the attention A ijn for the masked target words.", "Suppose the j (cid:48) th target word is masked, then we compute the source position that it should be aligned to as, i (cid:48) = arg max i N 1 (cid:88) n =0 A ij (cid:48) n (3) 4.2 Full Context Based Optimization In Equation 3, the attention A ij (cid:48) n for the j (cid:48) target word is computed by considering both left-side and right-side target context, but information about the current target word is not used since the j (cid:48) target word is masked.", "For example in Figure 1, the BTBA model does not know that the second target word is cake because it is masked, therefore the BTBA model computes the attention (alignment) for cake only using the left-side and right-side context of cake without knowing that the word that needs to be aligned is cake.", "We propose a novel full context based optimization method to use full target context, including the current target word information, to improve the target-to-source attention in the BTBA model to produce better alignments.", "That is for the last 50 training steps of the BTBA model, we do not mask the target sentence any more and we only optimize parameters W Qn and W Kn in the last target-to-source multi-head attention sub-layer.", "As shown in Equation 2, W Qn and W Kn are parameters that are used to compute the attention values in multi-head attention.", "Optimizing W Qn and W Kn based on full target context can help the BTBA model to produce better attention (alignment) while at the same time freezing other parameters can make the BTBA model keep the knowledge learned from masked target word prediction.", "After full target context based optimization, we do not need to mask target sentences in the test set as shown in Table 1 any more.", "We can directly feed the original source and target test sentences into the BTBA model and compute attention (alignment) for all target words in the sentence.", "The full context based optimization method can be seen as a fine-tuning of the original BTBA model, i.e. we fine-tune the two parameters W Qn and W Kn in the last target-to-source attention layer based on full target context to compute more accurate word alignments.", "The BTBA model learns word alignment through unsupervised learning and does not require labelled data for the word alignment task.", "We train two unsupervised BTBA models, one for the forward direction (source-to-target) and one for the backward direction (target-to-source), and then symmetrize the alignments using heuristics such as grow-diagonal-final-and (Och and Ney, 2003) as the symmetrized alignments have better quality than the alignments from a single forward or backward model.", "After unsupervised learning, we use the symmetrized word alignments G a inferred from our unsupervised BTBA models as labelled data to further fine-tune each BTBA model for the word alignment task through supervised training using the alignment loss in Equation 4 following Garg et al. (2019)'s work.", "3 During supervised training, the BTBA model is trained to learn the alignment task instead of masked target word prediction, therefore the target sentence does not need to be masked.", "Note that we apply byte pair encoding (BPE) (Sennrich et al., 2016) for both source and target sentences before we feed them into the BTBA model.", "Therefore the alignments inferred from the BTBA model is on BPE-level.", "We convert 4 BPE-level alignments to word-level alignments before we perform alignment symmetrization.", "After alignment symmetrization, we want to use the symmetrized alignments to further fine-tune each BTBA model through supervised learning and therefore we convert 5 the word-level alignments back to BPE-level for supervised training of the BTBA models.", "In order to compare with previous work, we used the same datesets 6 as Zenkel et al. (2020)'s work and conducted word alignment experiments for three language pairs: German English (DeEn), English French (EnFr) and Romanian English (RoEn).", "Each language pair contains a test set and a training set: the test set contains parallel sentences with gold word alignments annotated by humans; the training set contains only parallel sentences with no word alignments.", "Table 2 gives numbers of sentence pairs contained in the training and test sets.", "Parallel sentences from both the training set and the test set can be used to train 3 We optimize all model parameters during supervised fine-tuning.", "4 To convert BPE-level alignments to word-level alignments, we add an alignment between a source word and a target word if any parts of these two words are aligned.", "Alignments between the source <bos> token and any target word are deleted; alignments between the last source word . (full stop) and a target word which is not the last target word are also deleted.", "5 To convert word-level alignments to BPE-level alignments, we add an alignment between a source BPE token and a target BPE token if the source word and the target word that contain these two BPE tokens are aligned; we add an alignment between the source <bos> token and a target BPE token if the target word that contains this target BPE token is not aligned with any source words.", "6 https://github.com/lilt/alignment-scripts DeEn EnFr RoEn TRAIN 1.91M 1.13M 447k TEST 508 447 248 Table 2: Numbers of sentence pairs in the datasets.", "unsupervised word alignment models.", "We use BPE (Sennrich et al., 2016) to learn a joint source and target vocabulary of 40k.", "After BPE, we train BTBA models to learn the word alignment tasks.", "We use a word embedding size of 512.", "The feed forward layer contains 2048 hidden units.", "The multi-head attention layer contains 8 heads.", "We use the Adam (Kingma and Ba, 2014) algorithm for optimization and set the learning rate to 0.0002.", "We use a dropout of 0.3.", "Each training batch contains 40k masked target words.", "Since the word alignment tasks do not provide validation data, we trained all BTBA models for a fixed number of training epochs: 50 for DeEn, 100 for EnFr and 200 for RoEn.", "7 For the last 50 training steps of each BTBA model, we performed full context based optimization.", "For each language pair, we trained two BTBA models, one for the forward direction and one for the backward direction, and then symmetrized the alignments.", "We tested different heuristics for alignment symmetrization, including the standard Moses heuristics, grow-diagonal , grow-diagonal-final , grow-diagonal-final-and .", "We also tested another heuristic grow-diagonal-and which is slightly different from grow-diagonal : the grow-diagonal-and heuristic only adds a new alignment ( i, j ) when both s i and t j are unaligned while grow-diagonal adds a new alignment ( i, j ) when any of the two words ( s i and t j ) are unaligned.", "We find that the Moses heuristic grow-diagonal-final-and generally achieved the best results for symmetrizing the BTBA alignments, but grow-diagonal-and worked particularly good for the EnFr task.", "Finally, we used the symmetrized alignments inferred from our unsupervised BTBA models as labelled data to further fine-tune each BTBA model to learn the alignment task through supervised training.", "We fine-tuned each BTBA model for 50 training steps using the alignment loss in Equation 4.", "In addition, we also tested to use alignments from GIZA++ instead of alignments inferred from our 7 The training time (time of one training epoch number of training epochs) of one BTBA model for different tasks (DeEn, EnFr and RoEn) is roughly the same, 30 hours using 4 GPUs.", "unsupervised BTBA models as labelled data for supervised fine-tuning of the BTBA models.", "Table 3 gives alignment error rate (AER) (Och and Ney, 2000) results of our BTBA model and comparison with previous work.", "Table 3 also gives results of BTBA-left and BTBA-right: BTBA-left means that the BTBA decoder only attends left-side target context; BTBA-right means that the BTBA decoder only attends right-side target context.", "As shown in Table 3, the BTBA model, which uses both left-side and right-side target context, significantly outperformed BTBA-left and BTBA-right.", "Results also show that the performance of our BTBA model can be further improved by full context based optimization (FCBO) and supervised training including both self-supervised training and GIZA++ supervised training.", "For DeEn and RoEn tasks, the self-supervised BTBA (S-BTBA) model achieved the best results, outperforming previous neural and statistical methods.", "For the EnFr task, as the statistical aligner GIZA++ performed well and achieved better results than our unsupervised BTBA model, the GIZA++ supervised BTBA (G-BTBA) model achieved better results than the S-BTBA model and also outperformed the original GIZA++ and previous neural models.", "Tables 4, 5 and 6 give results of using different heuristics for symmetrizing alignments produced by BTBA, GIZA++ and G-BTBA, respectively.", "For our unsupervised and self-supervised BTBA models, grow-diagonal-final-and achieved the best results on DeEn and RoEn tasks while grow-diagonal-and achieved the best results on the EnFr task.", "For GIZA++ and G-BTBA, the best heuristics for different language pairs are quite different, though grow-diagonal-final-and generally DeEn EnFr RoEn BTBA +FCBO +SST BTBA +FCBO +SST BTBA +FCBO +SST forward 20.2% 18.3% 14.3% 13.6% 12.8% 7.3% 24.7% 22.4% 20.5% backward 23.8% 23.3% 17.2% 14.6% 13.3% 7.5% 27.3% 26.1% 22.0% union 20.6% 18.3% 14.5% 15.7% 14.3% 7.5% 24.1% 21.2% 18.9% intersection 23.7% 23.9% 17.1% 11.6% 11.2% 7.4% 28.3% 27.9% 24.0% grow-diagonal 19.9% 18.5% 14.3% 11.2% 10.7% 6.9% 23.6% 21.6% 18.6% grow-diagonal-and 21.0% 20.6% 17.3% 9.5% 8.9% 6.7% 26.1% 25.4% 23.6% grow-diagonal-final 19.5% 17.3% 14.4% 14.4% 13.4% 7.4% 23.4% 20.8% 18.6% grow-diagonal-final-and 17.8% 16.3% 14.3% 11.9% 11.2% 7.0% 22.9% 20.6% 18.5% Table 4: Comparison of different heuristics for symmetrizing the BTBA alignments.", "obtained good (best or close to best) results on DeEn and RoEn tasks while grow-diagonal-and generally obtained good (close to best) results on the EnFr task.", "FCBO with/without Parameter Freezing As we explained in Section 4.2, during full context based optimization (FCBO), we only optimize W Qn and W Kn in the last target-to-source attention sub-layer and freeze all other parameters so the BTBA model can keep the knowledge learned from masked target word prediction.", "We also tested to optimize all parameters of the BTBA model without parameter freezing during FCBO.", "Figure 2 shows how the AER results on the DeEn test set changed during FCBO with and without parameter freezing.", "Without freezing any parameters DeEn EnFr RoEn forward 14.5% 5.8% 21.4% backward 17.6% 4.2% 21.9% union 15.1% 5.3% 19.9% intersection 17.2% 4.7% 23.6% grow-diagonal 14.7% 4.6% 19.7% grow-diagonal-and 17.5% 4.4% 23.7% grow-diagonal-final 15.1% 5.3% 19.8% grow-diagonal-final-and 14.8% 4.7% 19.8% Table 6: Comparison of different heuristics for symmetrizing G-BTBA alignments.", "during FCBO, the AER result (the red curve) first increased a little, then decreased sharply, and soon increased again.", "In contrast, when we freeze most of the parameters, the AER result (the blue curve) decreased stably and eventually got better results (16.3%) than no parameter freezing (16.7%).", "Note that the results in Figure 2 are computed based on full target context, i.e., the target sentence is not masked.", "As we explained in Section 4.1, the BTBA model without FCBO should only be used to infer word alignments for masked target words.", "Without FCBO, using the BTBA model to infer word alignments for unmasked target words produces poor AER results (26.9% as shown in Figure 2) compared to using the BTBA model to infer word alignments for masked target words (17.8% as shown in Table 3).", "FCBO can quickly improve the results of using the BTBA model for inferring word alignments for unmasked target words, and eventually after FCBO, the BTBA model can effectively use full target context to compute better word alignment compared to the original BTBA model without FCBO (16.3% versus 17.8% as shown in Table 3).", "Training Data for Supervised Learning Because the symmetrized BTBA alignments have better quality compared to alignments from a single unidirectional (forward or backward) BTBA model as shown in Table 4, we used the symmetrized Gold Ours , .", "drfen werden abgefllt schaumweinflaschen in getrnke welche klar definiert berichtsvorschlag im es wird may beverages which defines clearly report the .", "bottles wine sparkling in bottled be may beverages which defines clearly report the .", "bottles wine sparkling in bottled be Figure 3: An example of gold alignments and alignments produced by our S-BTBA model.", "word alignments inferred from our unsupervised BTBA models as labelled data to further fine-tune each unidirectional BTBA model for the alignment task through supervised training.", "We also tested to use unidirectional BTBA alignments instead of symmetrized BTBA alignments as labelled data for supervised training.", "Figure 4 (the blue curve) shows how the performance of the forward BTBA model of the DeEn task changes during supervised training when using unidirectional alignments inferred from itself (the forward BTBA model) as labelled training data, which demonstrates that the forward BTBA model can be significantly improved through supervised training even when the training data is inferred from itself and not improved by alignment symmetrization.", "Figure 4 also shows that using symmetrized alignments for supervised training (the red curve) did achieve better results than using unidirectional alignments for supervised training.", "In addition, it is worth noting that supervised training can improve the BTBA model even if the quality of the labelled training data is somewhat worse than the BTBA model itself, e.g. for the RoEn task, using the GIZA++ alignments for fine-tuning the forward BTBA model through supervised training improved the result of the forward BTBA model (22.4% 21.4% as shown in DeEn EnFr RoEn S-BTBA FF 12.3 11.3 18.2 CC 6.1 3.3 7.8 FC 44.4 12.8 41.1 G-BTBA FF 13.2 5.1 18.6 CC 7.1 2.9 8.3 FC 43.3 9.3 46.1 Table 7: AER for different types of alignments. Table 4 and Table 6) even though GIZA++ produced worse alignments (24.2% in Table 3) than the forward BTBA model.", "Alignment Error Analysis We analyze the alignment errors produced by our system and find that most of the alignment errors are caused by function words.", "As shown in the alignment example in Figure 3, source and target corresponding content words (e.g. definiert and defines) are all correctly aligned by our model, but function words such as the, im and wird are not correctly aligned.", "To give a more detailed analysis, we compute AER results of our model for 3 different types of alignments: FF (alignments between two function words), CC (alignments between two content words) and FC (alignments between a function word and a content word).", "8 Table 7 shows that our models achieved significantly better results for CC alignments than for FF and FC alignments.", "Function words are more difficult to align than content words most likely because content words in a parallel sentence pair usually have very clear corresponding relations (such as defines clearly corresponds to definiert in Figure 3), but function words (such as the, es and im) are used more flexibly and frequently do not have clear corresponding words in parallel sentences, which increases the alignment difficulty significantly.", "8 For each language, we judge whether a word is a function word or a content word using a list of stopwords from nltk, https://www.nltk.org/ de en en de SHIFT-AET 34.8 28.0 Ours 35.1 28.7 Table 8: Translation results (BLEU) for dictionary-guided NMT.", "Alignment", "For downstream tasks, word alignment can be used to improve dictionary-guided NMT (Song et al., 2020; Chen et al., 2020).", "Specifically, at each decoding step in NMT, Chen et al. (2020) used a SHIFT-AET method to compute word alignment for the newly generated target word and then revised the newly generated target word by encouraging the pre-specified translation from the dictionary.", "The SHIFT-AET alignment method adds a separate alignment module to the original Transformer translation model (Vaswani et al., 2017) and trains the separate alignment module using alignments induced from the attention weights of the original Transformer.", "To test the effectiveness of our alignment method for improving dictionary-guided NMT, we used the alignments inferred from our BTBA models as labelled data for supervising the SHIFT-AET alignment module and performed dictionary-guided translation for the German English language pair following Chen et al. (2020)'s work.", "Table 8 gives the translation results of dictionary-guided NMT and shows that our alignment method led to higher translation quality compared to the original SHIFT-AET method.", "This paper presents a novel BTBA model for unsupervised learning of the word alignment task.", "Our BTBA model predicts the current target word by paying attention to the source context and both left-side and right-side target context to produce accurate target-to-source attention (alignment).", "We further fine-tune the target-to-source attention in the BTBA model to obtain better alignments using a full context based optimization method and self-supervised training.", "We test our method on three word alignment tasks and show that our method outperforms both previous neural alignment methods and the popular statistical word aligner GIZA++.", "This work is supported by the German Federal Ministry of Education and Research (BMBF) under", "funding code 01IW20010 (CORA4NLP)." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "objective", "method", "method", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "method", "result", "result", "other", "other" ]
[ "Although automated metrics are commonly used to evaluate NLG systems, they often correlate poorly with human judgements.", "Newer metrics such as BERTScore have addressed many weaknesses in prior metrics such as BLEU and ROUGE, which rely on -gram matching.", "These newer methods, however, are still limited in that they do not consider the generation context, so they cannot properly reward generated text that is correct but deviates from the given reference.", "In this paper, we propose Language M odel Augmented Relevance Score (MARS), a new context-aware metric for NLG evaluation.", "MARS leverages off-the-shelf language models, guided by reinforcement learning, to create augmented references that consider both the generation context and available human references, which are then used as additional references to score generated text.", "Compared with seven existing metrics in three common NLG tasks, MARS not only achieves higher correlation with human reference judgements, but also differentiates well-formed candidates from adversarial samples to a larger degree.", "Automated metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) are popular methods for evaluating natural language generation (NLG) systems.", "Compared with human evaluation, they are cheaper and faster, and accordingly, they often serve as essential metrics for benchmarking the performance of NLG models (Novikova et al., 2017).", "Despite their widespread use, however, these automated metrics often poorly correlate with ratings given by human judges, particularly for datasets in which only a single human reference exists (Gupta et al., 2019; Novikova et al., 2017).", "Moreover, these automated metrics only capture Context Exsiting Metrics: PPL / BLEU / ROUGE / BERT-Score / etc.", "similarities between generated sentences and reference candidates, crucially ignoring provided contexts that are relevant for evaluating the answer in contextual NLG tasks, such as story generation, news summarization, and question-answering (Tao et al., 2018; Nema and Khapra, 2018).", "Table 1 shows a story generation 1 example that exemplifies some weaknesses of several common metrics.", "Perplexity (PPL) (Brown et al., 1992) successfully detects ungrammatical sentences, but it fails to distinguish legitimate novel continuations and copy-and-pasted ones.", "Relying on surface-level -gram matching, BLEU-1 and ROUGE-L 2 cannot detect reordering effectively, and wrongly score the well-formed candidate lower than its retrieval-based adversarial example.", "BERTScore (Zhang et al., 2019) leverages contextual embeddings from BERT (Devlin et al., 2019), thus mitigating the above challenges, but still does not fairly evaluate candidates that correctly align with the context but happen to differ 1 The ROC story generation task asks systems to generate a legitimate ending for a four-sentence story.", "Wendy was driving down the road.", "She heard her car making a noise.", "She pulled over to examine the problem.", "There was nothing but oil all on the road from her car.", "Human Reference.", "She called for help and waited to get her car fixed.", "PPL BLEU-1 ROUGE-L BERTScore MARS Candidate.", "Her fears were confirmed when her engine was smoking.", "75.58 0.223 0.182 0.338 0.574 Reorder.", "her confirmed engine fears Her when was were smoking.", "405.60 0.223 0.182 0.265 0.352 Retrieve.", "She heard her car making a noise.", "63.93 0.337 0.400 0.406 0.448 Table 1: In this story generation example, MARS is the only metric that gives the well-formed candidate a higher score than two adversarial examples.", "from the provided reference example.", "In our example, the candidate ... her engine was smoking is reasonable but deviates from the human reference, and so BERTScore rates it relatively low (0.338 out of 1.0), thus correlating poorly with human rating, which was high (5.05 out of 6.00).", "To address the above issues, prior studies have proposed a number of promising remedies.", "One line of work has proposed to combine human ratings with automated metrics (Durmus et al., 2020; Chaganty et al., 2018, inter alia ).", "For instance, in HUSE score, Hashimoto et al. (2019) leverages the differences between perplexity and human judgements to consider both quality and diversity of generated text.", "Another line has proposed training separate neural models to aid automated metrics (Mehri and Eskenazi, 2020; Yuma et al., 2020, inter alia ).", "For instance, BLEURT (Sellam et al., 2020) fine-tunes BERT (Devlin et al., 2019) on synthetic reference-candidate pairs for machine translation.", "These methods, however, are often limited in practical use, because the high-cost human ratings are not always available for every dataset, and the dataor system-specific training is not easily extended to other domains (Zhang et al., 2019), and can even bias the evaluation (Freitag et al., 2020b).", "In this paper, we present MARS (Language Model Augmented Relevance Score), a new NLG evaluation metric that requires neither supervision from human ratings nor additional training on specific domains.", "As shown in Figure 1, instead of comparing candidates only with human written references, as many prior metrics do, MARS uses a mixture of both human and augmented references.", "Specifically, MARS masks tokens in the reference to create templates, and then uses the context and templates to generate augmented references by infilling the masked parts with an LM guided by reinforcement learning.", "The augmented references thus incorporate information from both the context and the human reference, and are enriched with lexical and syntactic diversity, facilitating fairer evaluation of candidates.", "Finally, we compute the score as a weighted average of the similarity between the candidate and the set of augmented references in the contextual embedding space.", "The advantages of MARS are three-fold.", "First , MARS correlates highly with human judgements.", "We apply MARS to three diverse NLG tasks, and demonstrate that, compared with seven popular NLG metrics, MARS better correlates with human judgements and is robust against adversarial attacks.", "Second , MARS is context-aware.", "Unlike existing metrics that only consider the given human reference, we use a constrained NLG approach to incorporate the generation context into augmented references, thus alleviating bias against diverse candidates.", "Third , MARS is easy to deploy and extend.", "Built on off-the-shelf LMs, MARS requires neither human supervision nor additional training for specific domains, and can therefore serve as a general-purpose metric for a broad range of NLG applications, as we will demonstrate for three common NLG tasks: story generation, news summarization, and question-answering.", "MARS comprises three steps.", "First, we mask out non-important tokens from the human reference to produce templates for augmentation (2.1).", "Second, we guide off-the-shelf LMs to generate reference augmentation on these templates via a reinforced self-planning algorithm (2.2).", "Finally, we compute a weighted average score that reflects the overall similarity between the candidate and the set of augmented references (2.3).", "The first step in MARS is to take in the given human reference and generate templates masked versions of the human referencewhich can then be used to generate augmented references.", "Our masking procedure can be viewed as a reversed process of prior insertionand template-based generation approaches (Zhang et al., 2020; Miao et al., 2019); whereas these generation approaches start with templates of important tokens and then fill in the details to generate complete sentences, our masking procedure starts with the complete sentence (i.e., the human reference) and then masks out unimportant tokens to generate templates.", "To better explain our masking procedure, we introduce two concepts, mask priority and mask cost: Mask Priority.", "We compute a mask priority for each token , which captures the priority of masking , where non-important words should receive higher priority.", "We compute as a function of two things: the inverse document frequency (IDF) of , and the part-of-speech (POS) of : = ( POS [ ]) IDF ( , ) , (1) where is a function that assigns a weight to each POS tag.", "3 Common tokens across the corpus (e.g., stop words, with low IDF) will receive high mask priority.", "Tokens responsible for description details will also be assigned high mask priority based on their part-of-speech (e.g., adjectives are mainly used for details and so they are given higher priority of being masked).", "Mask Cost.", "For each token , we also compute a mask cost .", "Tokens that appear in both context and human reference should have high masking cost as they are deemed context-carrying.", "We use the longest common sequence (LCS) matching between the context and the human reference to identify these context-carrying tokens.", "In our experiments, we set the of these tokens to 10 and the default of all other tokens to 1. We use to denote the ratio of tokens to be masked in a sentence of tokens, and define max = as the maximum cost allowed.", "3 varies for each task.", "Empirically, we find that it works well to assign adjectives, adverbs, and nouns higher weights than other parts-of-speech.", "For our setting, we assign weights of 4, 3, 2 to the above three types.", "DP-based Token Masking.", "Now that for each token we have a mask priority and a mask cost, we aim to choose a set of tokens to mask with the highest possible sum of priorities for which the sum of mask costs is not greater than max .", "Given a function ( ) = { 1 , 0 } where 1 means token is masked and 0 means it remains, the objective of token masking can be expressed as follows: max (cid:213) = 1 ( ) , s.t. (cid:213) = 1 ( ) max .", "(2) Such a goal is actually a NP-complete combinatorial optimization problem, called the Knapsack problem (Pisinger, 1995), which we solve using dynamic-programming (DP).", "In general, the masking strategy aggressively harvests tokens of high mask priority while keeping the cost of masked tokens from exceeding the mask cost limitation max .", "The detailed DP algorithm for solving this problem is shown in Appendix A. 2.2 Self-planning Cloze Augmentation After creating the templates described in 2.1, we produce augmented reference examples based on both the templates as well as the generation context.", "This procedure can be seen as a mixture of hard-and soft-constrained NLG, where the template tokens pre-exist with some blanks, and the system, conditioned on the context, aims to fill in the blanks.", "We henceforth refer this process of creating augmented references as cloze 4 augmentation.", "Background.", "Masked Language Models (MLM) such as RoBERTa (Liu et al., 2019) and BERT (De-vlin et al., 2019) are trained to predict masked tokens within sentences, and thus are able to do cloze augmentation off-the-shelf.", "However, without architecture-level modification, MLMs are only able to infill a pre-determined number of missing tokens (Zhu et al., 2019).", "This is especially problematic sinceif they are directly used to augment referencesall the augmented references will have the same number of tokens as that of the original human reference.", "We believe this unnecessarily constrains augmentation diversity, and thus consider it as a Naive method in our evaluations ( 4).", "4 A cloze test (Taylor, 1953) is a language test where a portion of language is removed and the participant is asked to replace the missing language item.", "I really like the show performed at the Theatre!", "I enjoy every minute of the show at the Theatre!", "I [blk] [blk] the show [blk] [blk] the Theatre!", "(a) Naive Cloze Augmentation: Masked LM", "(b) Self-planning Cloze Augmentation: Autoregressive LMI enjoy the show only performed at the Theatre!", "Context Context Bi-directionalAttention Uni-directionalAttention ReinforcedSelf-planning + + I [blk] [blk] the show [blk] [blk] the Theatre!", "Autoregressive Language Models (ALM) such as GPT-2 (Radford et al., 2019), on the other hand, are trained to predict current step token given past tokens.", "They can generate sequences of varying lengths, but they cannot infill missing tokens within sentences effectively since they do not consider future context.", "To enable ALMs to infill blanks of unspecified length, prior work has proposed either retraining a new LM from scratch (Shen et al., 2020) or fine-tuning on specially prepared data (Donahue et al., 2020), which are costly and not easy to extend to new NLG tasks.", "As shown in Figure 2, we take a reinforcement learning (RL) approach that uses future words after the blank to guide current step infilling generation.", "Since such RL guidance only relies on the tokens within its own to-be-infilled template, we call it reinforced self-planning .", "Our method combines the advantages of both MLMs and ALMs, requiring neither re-training nor collecting new data, and thus is easier to extend to other off-the-shelf LMs.", "Reinforced Self-planning.", "At each decoding step during generation, a vanilla ALM will pick the token that has the highest probability by applying an argmax over the softmax output of hidden states.", "We add a self-planning stage between the argmax and softmax function.", "Following the RL framework, we define the state at step as the generated sequences before (i.e., = < ), and the action at step as the -th output token (i.e., = ).", "We take the softmax output of the last hidden states (with parameter ) as the policy , since it is the probability of picking token (action ) given the state = < .", "Similarly, we denote the policy after reinforced self-planning as .", "where ( 0 , 1 ] is the discounting factor, and is the single-step reward.", "In text generation, however, such a reward definition requires sampling over the future generated sequence to estimate current step reward (Gong et al., 2019), which may cause the policy to end in zero reward region because of high variance of the gradient (Pang and He, 2021).", "Since we guide the generation in every step of decoding, we derive the -th step policy gradient (cid:79) ( ) as: E (cid:2) (cid:79) log ( | ) ( ) (cid:3) , (4) with importance sampling weight to stabilize the optimization (Munos et al., 2016), which is: = ( | ) ( | ) .", "If we denote a certain token in future context as { future } , single-step self-planning reward ( ) can be approximated by the cosine similarity between -th step hidden state and the embedded vector of by the LM embedding layers, which is ( ) = (cid:213) future log ( softmax ( < ) emb ( )) .", "(5) Given all above definitions, at -th step, we update towards the self-planned as: + (cid:213) = 1 (cid:79) ( / ) (cid:107) (cid:79) ( / )(cid:107) , (6) where is the learning rate and is the temperature parameter to control the stochastic sampling during token decoding (Keskar et al., 2019).", "After iterations of reinforced self-planning, the updated policy should produce tokens approaching the future context in embedding space, since future context contributes to the calculation of reward (Eq. 5).", "5 More details about how we handle edge cases during reinforced self-planning are presented in Appendix B. 5 In our setting, , and are 0.02, 1.3, and 3 respectively.", "After generating augmented reference sentences, the final MARS score is computed as a weighted average of the similarity between the candidate and each reference in the augmentation set (including the original human reference).", "One way to obtain similarity scores is using BERTScore (Zhang et al., 2019), but BERTScore requires training on external resources to make its outputs more readable.", "Therefore, in order to keep all the resources used by MARS off-the-shelf, we utilize Sentence-BERT (Reimers and Gurevych, 2019), which uses the mean of all token embeddings in a sentence as the overall sentence-level encoding.", "As the sentence encoder, we use RoBERTa-large (Liu et al., 2019), a common choice in the literature (Zhang et al., 2019; Reimers and Gurevych, 2020).", "As shown in Eq.", "7, we then compute MARS score as the average of the cosine similarities weighted using a geometric progression with a common ratio ( 0 , 1 ] and a scale factor (start value) 0 : MARS = # (cid:213) = 1 1 cand T ref 1 (cid:107) cand (cid:107) T (cid:107) ref 1 (cid:107) s.t. # (cid:213) = 1 1 = 1 , (7) where the candidate encoding is cand, the reference encodings are ref ( is the index of the augmented reference under a certain , and ref 0 marks the zero-mask human reference), and # is the number of masking ratios we use in 2.1.", "Different values, as defined by the geometric progression, determine how much weight each reference contributes.", "By default, Eq.", "7 assigns the largest weight to the human reference since it is the gold standard.", "We evaluated MARS and compared it with several popular NLG metrics on the following three tasks:", "Story Generation.", "We use the ROC stories dataset 6 for story generation, which requires candidate NLG systems to generate coherent endings to four-sentence stories (Mostafazadeh et al., 2016).", "The dataset consists of 96,198 examples of partially written stories; we take the human-rated subset ( =300) released by HUSE (Hashimoto et al., 2019), which contains continuances by (1) 6 https://cs.rochester.edu/nlp/rocstories/ Avg.", "News Summarization.", "For the news summarization task, we use the Newsroom summary dataset.", "8 This dataset contains 1.3 million articles from 38 major publications (Grusky et al., 2018) and we use the subset with human ratings ( =540) released by the authors.", "9 This dataset contains outputs from summarization models: (1) TextRank : a sentence-level summarization system inspired by Google PageRank (Page et al., 1999), (2) a Seq2Seq model with attention (Rush et al., 2015), and (3) Pointer-N : a pointer-based neural model (See et al., 2017) trained on Newsroom dataset.", "Question Answering.", "For question answering, we use the MOCHA dataset, 10 which includes human ratings on outputs of five models trained on six QA datasets (Chen et al., 2020).", "We consider a distributionally-balanced subset ( =450) of these outputs from three systems: (1) fine-tuned GPT-2 (Radford et al., 2019), (2) a Back-Translation model (Sennrich et al., 2016), and (3) a MHPG model (Bauer et al., 2018) trained on NarrativeQA (Kocisk`y et al., 2018) and MCScript (Os-termann et al., 2018) datasets.", "The detailed statistics of these three datasets we used for this work are shown in Table 2. For pre-processing, we removed hashtags and urls in the text, but leave punctuation and stop words, which can affect LCS matching when computing mask costs.", "For all tasks, we use GPT-2 (large, with 774M parameters) as the language model for 7 https://lucene.apache.org/solr 8 http://lil.nlp.cornell.edu/newsroom/ 9 The subset includes human ratings on four perspectives: coherence , fluency , informative and relevance .", "For the newsroom dataset, some news articles were longer than the max sequence length of 1024 BPE, and so we cut off the tail end of these examples.", "With a single RTX-2080 GPU, cloze augmentation with = { 0 (human ref. ), 20%, 40%, 60%, 80% } takes 0.8 seconds on average per reference, amounting to a total augmentation time of 17, 45, and 32 minutes for the ROC, Newsroom and MOCHA tasks respectively.", "We show how we pick the masking ratios for different tasks in 4.3.", "As automated metrics are only helpful if they correlate sufficiently with human judgements, in this section we examine how MARS correlates with human judgements compared with prior metrics.", "System-level Correlation.", "Table 3 shows the correlations between human judgements and automated metrics for MARS and seven other unsupervised metrics, across all NLG systems studied in our three tasks.", "Compared with the other metrics, MARS achieves the highest correlation with human judgements for five of the seven systems (and comparable with the top in the other two systems), making considerable improvements over the next-best metric for many of the NLG systems (e.g., 0.370 for Back-Translation, and 0.231 for Solr).", "We also notice that MARS has greater improvements on more open-ended tasks (e.g., story generation, which has low ), which corroborates MARS's original objective of judging diverse candidates more fairly.", "As for the baselines, -gram matching metrics such as BLEU correlate poorly with human ratings on such open-ended tasks; BERTScore performs better on short candidates and high tasks (e.g., QA); and perplexity, as expected, correlates weakly with human ratings.", "The Naive method, which uses multiple augmented references of the same length, improves over BERTScore, which only uses the original reference.", "Ablation Study.", "As shown in the lower rows of Table 3, we see that the performance of MARS drops substantially when the crucial components are removed.", "Specifically, removing self-planning hurts performance more for tasks with longer references (e.g., story generation) since self-planning is more helpful when there are more blanks to in-fill, and removing context hurts performance more in tasks that are less open-ended (high , such as QA) because there is no adequate input for a reasonable augmentation.", "We take these ablation study results as evidence that the techniques we propose in MARS are crucial for improving correlation with human judgements.", "and human judgements, we consider the MOCHA QA task as an example and plot the correlations of BERTScore (left) and MARS (right) with human judgements.", "As shown in Figure 3, compared with MARS, BERTScore has more candidates in the upper-left corner of the plot (i.e., low BERTScore but high human judgement).", "Many of these are generated by GPT-2 and MHPG, which, based on manual examination, tend to provide more details in the answer than the human reference.", "For instance, given a context about shopping, one question is Did they need to buy any meat? .", "The human reference answer is simply Yes, they did. , but GPT-2 returns Yes, they bought chicken and a roast. , which is more detailed, even containing item names derived from the context.", "Whereas BERTScore cannot evaluate such cases where the generated candidate is over-described with respect to the human reference, MARS uses augmented references enriched with information from the context to provide a fairer judgement.", "Good evaluation metrics ought to also be able to detect adversarial examples by assigning them lower scores than well-formed candidates.", "As shown in Table 4, uni-gram matching BLEU-1 cannot detect reordered sequences, while ROUGE-L scores reordered sequence higher occasionally if token-swapping leads to more LCS.", "Sentence Mover's Similarity combines word and sentence embeddings and thus is more capable of recognizing reordered samples than MoverScore.", "Perplexity can detect reordered examples effectively, but is unable to detect retrieved sentences, as they are usually well-formed.", "MARS, on the other hand, has the best robustness against adversarial samples, possibly because multiple context-infused augmented references help MARS detect adversarial samples more reliably.", "We also study the effects of contextual embeddings we use in 2.3when switching to GloVe embeddings (Pennington et al., 2014), which are not contextual, MARS is less able to detect adversarial samples, especially reordered ones.", "The Naive method, which by default uses RoBERTa embedding, achieves comparable robustness as MARS but its task-level correlations with humans ( ref. ) are generally lower than MARS, potentially because its fixed-length cloze generation limits the diversity of augmented references.", "The masking ratios for MARS are set using the hy-perparameter { } max , which corresponds to MARS using masking ratios from 0% to { } max in increments of 20%, e.g., { } max = 40% indicates { 0% , 20% , 40% } .", "In preliminary experiments, we observed that { } max varied for different datasets.", "Thus, for our three generation tasks, we evaluate MARS performance given different { } max , as shown in Table 5.", "We find that tasks that were more open-ended (low ; e.g., story generation) benefited from higher { } max , which created a more diverse set of augmented references, whereas tasks that were less open-ended (high ; e.g., QA) worked better with lower { } max , which kept the augmented references more similar to the original.", "We analyzed cases where MARS score substantially differed from human judgements.", "From test set outputs, we found that errors could often be categorized into one of three types (shown in Table 6): (1) Out of Vocabulary errors, often induced by unknown tokens in the candidates, (2) Confusion errors, where candidates are simply copied from context, and (3) Inference errors, where the candidates are further inferences of the context based on commonsense knowledge.", "In these cases, human annotators tended to assign higher scores, whereas, MARS over-penalized them.", "We conducted human evaluation on Amazon Mechanical Turk (MTurk) to further study the quality of MARS augmentation.", "In total 150 participants were randomly assigned to evaluate the three tasks.", "Participants (61.3% male and 38.7% female) were all from the United States and above 18 years old, with an average age of 34.7 years old.", "Each participant was paid 75 cents for completing 14 questions in each questionnaire (average completion time per questionnaire was about 5.11 minutes).", "Results We conducted paired sample -tests to examine how much the augmentation samples resemble the original human references regarding relevance to context and readability.", "As shown in Table 7, in terms of relevance to context, MARS had no statistically significant difference compared with original human references in Newsroom and MOCHA datasets, but was rated as even more relevant to the generation context than the human reference in the ROC dataset (MARS Mean = 5.07 > Human Ref. Mean = 4.95), possibly because reinforced self-planning guided the augmentation to be more related to the context.", "In terms of readabil-ROC Newsroom MOCHA Ori.", "No statistically significant differences were seen between the original and MARS augmentation in overall ratings across the three tasks.", "These results further confirm that augmented examples from MARS are of similar quality to the original human references.", "Unsupervised Metrics.", "In addition to the metrics we directly compared with previously, other unsupervised metrics have also been proposed.", "TER (Snover et al., 2006), CharacTer (Wang et al., 2016), and chrF (Popovic, 2017) focus on character-level overlaps instead of -gram matching.", "Similar to BERTScore, YiSi (Lo, 2019) and BERTr (Mathur et al., 2019) leverage pre-trained contextual embeddings to better capture similarity.", "BLEU (Galley et al., 2015) adds human annotated sentences as negative references.", "Bawden et al. (2020) find the gain from multiple references can be limited by inherent weaknesses in BLEU.", "We considered lessons from many of the above works while designing MARS.", "Learned Metrics.", "Compared with unsupervised metrics, learned metrics collect human supervisions (Freitag et al., 2020a; Chaganty et al., 2018) or train on specially prepared data of a certain do-main (Sellam et al., 2020; Rei et al., 2020).", "Other approaches train on related tasks and use these models as metrics for the original task (Goodrich et al., 2019; Eyal et al., 2019).", "Whereas learned metrics may have limited applicability on tasks where no such resources are available, MARS fully exploits the few-shot learning abilities of off-the-shelf LMs and therefore does not require additional training.", "Task-specific Metrics.", "Finally, many metrics have been proposed for task-specific evaluation, such as LEIC (Cui et al., 2018) and CIDEr (Vedan-tam et al., 2015) for image captioning, PAR-ENT (Dhingra et al., 2019) for table-to-text, and EASSE (Alva-Manchego et al., 2019) for sentence simplification.", "MARS, with some modifications, can potentially be extended to these tasks.", "MARS can be limited by the LM that it uses for instance, the total length of context + refer-ence/candidate is limited by the max sequence length of the LM used.", "Additionally, our work has focused on English, and MARS may require non-trivial modifications to handle cases where the context and reference/candidate are in different languages, such as machine translation.", "Future work, could potentially extend MARS to these scenarios using multi-lingual sequence-to-sequence models such as multilingual-T5 (Xue et al., 2020).", "We also analyzed errors and found that MARS sometimes under-scores candidates that contained unknown tokens or were copied directly from the context (see Appendix C for examples and further analysis).", "We have proposed MARS, a context-aware and easy-to-deploy NLG metric built upon an off-the-shelf language model (GPT-2).", "On three contextual NLG tasks, we show that MARS better correlates with human judgements compared with seven other unsupervised metrics.", "Requiring neither costly human supervision nor additional training, MARS can be applied to a broad range of NLG tasks.", "The goal of MARS is to aid the evaluation of NLG models, and hence we draw attention to several ethical considerations.", "First, the augmented references of MARS can be affected by certain biases from the LM it is based on (e.g., GPT-2) (Liu et al., 2021), though those biases may be partially mitigated by the relatively narrow scope of cloze completion and by generations being guided by given context and human references.", "Second, MARS facilitates evaluation and therefore development of NLG models, for which a major ethical consideration is that they can mimic target properties in training data that are undesirable.", "This is especially true of models trained on non-contemporary data that does not represent current norms and practices.", "These biases can lead to ethical concerns if users or deployers of models are not aware of these issues or do not account for them.", "More generally, NLG models can also be used in malicious ways such as to generate fake news or spam, which we strongly discourage.", "Finally, our experiments and analysis are done in English, and therefore we do not claim that our findings will generalize across all languages, although our framework has potential to be extended to other languages with necessary modifications." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain" ]
[ "We introduce Rosita, a method to produce multilingual contextual word representations by training a single language model on text from multiple languages.", "Our method combines the advantages of contextual word representations with those of multilingual representation learning.", "We produce language models from dissimilar language pairs (English/Arabic and English/Chinese) and use them in dependency parsing, semantic role labeling, and named entity recognition, with comparisons to monolingual and noncontextual variants.", "Our results provide further evidence for the benefits of polyglot learning, in which representations are shared across multiple languages.", "State-of-the-art methods for crosslingual transfer make use of multilingual word embeddings, and much research has explored methods that align vector spaces for words in different languages (Faruqui and Dyer, 2014; Upadhyay et al., 2016; Ruder et al., 2017).", "On the other hand, contextual word representations ( CWR ) extracted from language models (LMs) have advanced the state of the art beyond what was achieved with word type representations on many monolingual NLP tasks (Peters et al., 2018).", "Thus, the question arises: can contextual word representations benefit from multilinguality ?", "We introduce a method to produce multilingual CWR by training a single polyglot language model on text in multiple languages.", "As our work is a multilingual extension of ELMo (Pe-ters et al., 2018), we call it Rosita (after a bilingual character from Sesame Street ).", "Our hypothesis is that, although each language is unique, different languages manifest similar characteristics (e.g., morphological, lexical, syntactic) which can be exploited by training a single model with data from multiple languages (Ammar, 2016).", "Previous work has shown this to be true to some degree in the context of syntactic dependency parsing (Ammar et al., 2016), semantic role labeling (Mulcaire et al., 2018), named entity recognition (Xie et al., 2018), and language modeling for phonetic sequences (Tsvetkov et al., 2016) and for speech recognition (Ragni et al., 2016).", "Recently, de Lhoneux et al. (2018) showed that parameter sharing between languages can improve performance in dependency parsing, but the effect is variable, depending on the language pair and the parameter sharing strategy.", "Other recent work also reported that concatenating data from different languages can hurt performance in dependency parsing (Che et al., 2018).", "These mixed results suggest that while crosslingual transfer in neural network models is a promising direction, the best blend of polyglot and language-specific elements may depend on the task and architecture.", "However, we find overall contextual representations from polyglot language models succeed in a range of settings, even where multilingual word type embeddings do not, and are a useful technique for crosslingual transfer.", "We explore crosslingual transfer between highly dissimilar languages (English Chinese and English Arabic) for three core tasks: Universal Dependency (UD) parsing, semantic role labeling (SRL), and named entity recognition (NER).", "We provide some of the first work using polyglot LMs to produce contextual representations, 1 and the first analysis comparing them to monolingual LMs for this purpose.", "We also introduce an LM variant which takes multilingual word embedding input as well as character input, and explore its 1 Contemporaneous work uses polyglot LMs for natural language inference and machine translation (Lample and Conneau, 2019).", "applicability for producing contextual word representations.", "Our experiments focus on comparisons in three dimensions: monolingual vs. polyglot representations, contextual vs. word type embeddings, and, within the contextual representation paradigm, purely character-based language models vs. ones that include word-level input.", "Previous work has shown that contextual representations offer a significant advantage over traditional word embeddings (word type representa-tions).", "In this work, we show that, on these tasks, polyglot character-based language models can provide benefits on top of those offered by con-textualization.", "Specifically, even when crosslingual transfer with word type embeddings hurts target language performance relative to monolingual models, polyglot contextual representations can improve target language performance relative to monolingual versions, suggesting that polyglot language models tie dissimilar languages in an effective way.", "In this paper, we use the following terms: crosslingual transfer and polyglot learning .", "While crosslingual transfer is often used in situations where target data are absent or scarce, we use it broadly to mean any method which uses one or more source languages to help process another target language.", "We also draw a sharp distinction between multilingual and polyglot models.", "Multilingual learning can happen independently for different languages, but a polyglot solution provides a single model for multiple languages, e.g., by parameter sharing between languages in networks during training.", "We first describe the language models we use to construct multilingual (and monolingual) CWR .", "Because the Universal Dependencies treebanks we use for the parsing task predominantly use Traditional Chinese characters and the Ontonotes data for SRL and NER consist of Simplified Chinese, we train separate language models for the two variants.", "For English we use text from the Billion Word Benchmark (Chelba et al., 2013), for Traditional Chinese, wiki and web data provided for the CoNLL 2017 Shared Task (Ginter et al., 2017), for Simplified Chinese, newswire text from Xinhua, 2 2 catalog.ldc.upenn.edu/LDC95T13 and for Arabic, newswire text from AFP.", "3 We use approximately 60 million tokens of news and web text for each language.", "We tokenized the language model training data for English and Simplified Chinese using Stanford CoreNLP (Manning et al., 2014).", "The Traditional Chinese corpus was already pre-segmented by UDPipe (Ginter et al., 2017; Straka et al., 2016).", "We found that the Arabic vocabulary from AFP matched both the UD and Ontonotes data reasonably well without additional tokenization.", "We also processed all corpora to normalize punctuation and remove non-text.", "We base our language models on the ELMo method (Peters et al., 2018), which encodes each word with a character CNN, then processes the word in context with a word-level LSTM.", "4 Following Che et al. (2018), who used 20 million words per language to train monolingual language models for many languages, we use the same hyperparameters used to train the monolingual English language model from Peters et al. (2018), except that we reduce the internal LSTM dimension from 4096 to 2048.", "a monolingual language model with character CNN (MONOCHAR ) trained on that language's data; a polyglot LM (ROSITACHAR ) trained with the same code, on that language's data with an additional, equal amount of English data; a modified polyglot LM (ROSITAWORD ), described below.", "The ROSITAWORD model concatenates a 300 dimensional word type embedding, initialized with multilingual word embeddings, to the character CNN encoding of the word, before passing this combined vector to the bidirectional LSTM.", "catalog.ldc.upenn.edu/LDC2001T55 4 A possible alternative is BERT (Devlin et al., 2018), which uses a bidirectional objective and a transformer architecture in place of the LSTM.", "Notably, one of the provided BERT models was trained on several languages in combination, in a simple polyglot approach (see https://github.com/google-research/ bert/blob/master/multilingual.md ).", "Our initial exploration of multilingual BERT models raised sufficient questions about preprocessing that we defer exploration to future work.", "The idea of this word-level initialization is to bias the model toward crosslingual sharing; because words with similar meanings have similar representations, the features that the model learns are expected to be at least partially language-agnostic.", "The word type embeddings used for these models, as well as elsewhere in the paper, are trained on our language model training set using the fastText method (Bojanowski et al., 2017), and target language vectors are aligned with the English ones using supervised MUSE 5 (Conneau et al., 2018).", "See appendix for more LM training details.", "All of our task models (UD, SRL, and NER) are implemented in AllenNLP, version 0.7.2 (Gardner et al., 2018).", "6 We generally follow the default hyperparameters and training schemes provided in the AllenNLP library regardless of language.", "See appendix for the complete list of our hyperparameters.", "For each task, we experiment with five types of word representations: in addition to the three language model types (MONOCHAR , ROSITACHAR , and ROSITAWORD ) described above, we show results for the task models trained with monolingual and polyglot non-contextual word embeddings.", "After pretraining, the word representations are fine-tuned to the specific task during task training.", "In non-contextual cases, we fine-tune by updating word embeddings directly, while in contextual cases, we only update coefficients for a linear combination of the internal representation layers for efficiency (Peters et al., 2018).", "In order to properly evaluate our models' generalization ability, we ensure that sentences in the test data are excluded from the data used to train the language models.", "We use a state-of-the-art graph-based dependency parser with BiLSTM and biaffine attention (Dozat and Manning, 2017).", "Specifically, the parser takes as input word representations and 100-dimensional fine-grained POS embeddings following Dozat and Manning (2017).", "We use the same UD treebanks and", "train/dev./test splits as the 5 For our English/Chinese and English/Arabic data, their unsupervised method yielded substantially worse results in word translation.", "CoNLL 2018 shared task on multilingual dependency parsing (Zeman et al., 2018).", "In particular, we use the GUM treebank for English, 7 GSD for Chinese, and PADT for Arabic.", "For training and validation, we use the provided gold POS tags and word segmentation.", "For each configuration, we run experiments five times with random initializations and report the mean and standard deviation.", "For testing, we use the CoNLL 2018 evaluation script and consider two scenarios: (1) gold POS tags and word segmentations and (2) predicted POS tags and word segmentations from the system outputs of Che et al. (2018) and Qi et al. (2018).", "8 The former scenario enables us to purely assess parsing performance; see column 3 in Table 1 for these results on Chinese and Arabic.", "The latter allows for a direct comparison to the best previously reported parsers (Chinese, Che et al., 2018; Arabic, Qi et al., 2018).", "See Table 2 for these results.", "As seen in Table 1, the Universal Dependencies results generally show a significant improvement from the use of CWR .", "The best results for both languages come from the ROSITACHARLM and polyglot task models, showing that polyglot training helps, but that the word-embedding initialization of the ROSITAWORD model does not necessarily lead to a better final model.", "The results also suggest that combining ROSITACHARLM and polyglot task training is key to improve parsing performance.", "Table 2 shows that we outperform the state-of-the-art systems from the shared task competition.", "In particular, our LMs even outperform the Harbin system, which uses monolingual CWR and an ensemble of three biaffine parsers.", "We use a strong existing model based on BIO tagging on top of a deep interleaving BiLSTM with highway connections (He et al., 2017).", "The SRL model takes as input word representations and 100-dimensional predicate indicator embeddings following He et al. (2017).", "We use a standard PropBank-style, span-based SRL dataset for English, Chinese, and Arabic: Ontonotes (Pradhan et al., 2013).", "Note that Ontonotes provides annotations using a single shared annotation scheme for 7 While there are several UD English corpora, we choose the GUM corpus to minimize domain mismatch.", "8 System outputs for all systems are available at https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-2885 vectors (lang.) task lang.", "English, Chinese, and Arabic, which can facilitate crosslingual transfer.", "For Chinese and English we simply use the provided surface form of the words.", "The Arabic text in Ontonotes has diacritics to indicate vocalization which do not appear (or only infrequently) in the original source or in our language modeling data.", "We remove these for better consistency with the language model vocabulary.", "We use gold predicates and the CoNLL 2005 evaluation script for the experiments below to ensure our results are comparable to prior work.", "See column 4 in Table 1 for results on the CoNLL-2012 Chinese and Arabic test sets.", "The SRL results confirm the advantage of CWR .", "Unlike the other two tasks, multilingual word type embeddings are better than monolingual versions in SRL.", "Perhaps relatedly, models using ROSITAWORD are more successful here, providing the highest performance on Chinese.", "One unusual result is that the model using the MONOCHARLM is most successful for Arabic.", "This may be linked to the poor results on Arabic SRL overall, which are likely due to the much smaller size of the corpus compared to Chinese (less than 20% as many annotated predicates) and higher proportion of language-specific tags.", "Such language-specific tags in Arabic could limit the effectiveness of shared English-Arabic representations.", "Still, polyglot methods' performance is only slightly behind.", "We use the state-of-the-art BiLSTM-CRF NER model with the BIO tagging scheme (Peters et al., 2017).", "The network takes as input word representations and 128-dimensional character-level embeddings from a character LSTM.", "We again use the Ontonotes dataset with the standard data splits.", "See the last column in Table 1 for results on the CoNLL-2012 Chinese and Arabic test sets.", "As with most other experiments, the NER results show a strong advantage from the use of contextual representations and a smaller additional advantage from those produced by polyglot LMs.", "Overall, our results show that polyglot language models produce very useful representations.", "While Universal Dependency parsing, Arabic SRL, and Chinese NER show models using contextual representations outperform those using word type representations, the advantage from polyglot training in some cases is minor.", "However, Chinese SRL and Arabic NER show strong improvement both from contextual word representations and from polyglot training.", "Thus, while the benefit of crosslingual transfer appears to be somewhat variable and task dependent, polyglot training is helpful overall for contextual word representations.", "Notably, the ROSITACHARLM does not involve any direct supervision of tying two languages together, such as bilingual dictionaries or parallel corpora, yet is still most often able to learn the most effective representations.", "One explanation is that it automatically learns crosslingual connections from unlabeled data alone.", "Another possibility, though, is that the additional data provided in polyglot training produces a useful regularization effect, improving the target language representations without crosslingual sharing (ex-cept that induced by shared vocabulary, e.g., bor-rowings, numbers, or punctuation).", "Nevertheless, the success of polyglot language models is worth further study.", "We presented a method for using polyglot language models to produce multilingual, contextual word representations, and demonstrated their benefits, producing state-of-the-art results in multiple tasks.", "These results provide a foundation for further study of polyglot language models and their use as unsupervised components of multilingual models.", "The authors thank Mark Neumann for assistance with the AllenNLP library and the anonymous reviewers for their helpful feedback.", "This research was funded in part by NSF grant IIS-1562364, the Funai Overseas Scholarship to JK, and the NVIDIA Corporation through the donation of a GeForce GPU." ]
[ "method", "method", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "other" ]
[ "Interpretable rationales for model predictions play a critical role in practical applications.", "In this study, we develop models possessing interpretable inference process for structured prediction.", "Specifically, we present a method of instance-based learning that learns similarities between spans.", "At inference time, each span is assigned a class label based on its similar spans in the training set, where it is easy to understand how much each training instance contributes to the predictions.", "Through empirical analysis on named entity recognition, we demonstrate that our method enables to build models that have high interpretability without sacrificing performance.", "Neural networks have contributed to performance improvements in structured prediction.", "Instead, the rationales underlying the model predictions are dif-ficult for humans to understand (Lei et al., 2016).", "In practical applications, interpretable rationales play a critical role for driving human's decisions and promoting human-machine cooperation (Ribeiro et al., 2016).", "With this motivation, we aim to build models that have high interpretability without sacrificing performance.", "As an approach to this challenge, we focus on instance-based learning .", "Instance-based learning (Aha et al., 1991) is a machine learning method that learns similarities between instances.", "At inference time, the class labels of the most similar training instances are assigned to the new instances.", "This transparent inference process provides an answer to the following question: Which points in the training set most closely resemble a test point or influenced the prediction?", "This is categorized into example-based explanations (Plumb et al., 2018; Baehrens et al., 2010).", "Recently, despite its preferable property, it has received little attention and been underexplored.", "This study presents and investigates an instance-based learning method for span representations .", "A span is a unit that consists of one or more linguistically linked words.", "Why do we focus on spans instead of tokens?", "One reason is relevant to performance.", "Recent neural networks can induce good span feature representations and achieve high performance in structured prediction tasks, such as named entity recognition (NER) (Sohrab and Miwa, 2018; Xia et al., 2019), constituency parsing (Stern et al., 2017; Kitaev et al., 2019), semantic role labeling (SRL) (He et al., 2018; Ouchi et al., 2018) and coreference resolution (Lee et al., 2017).", "Another reason is relevant to interpretability.", "The tasks above require recognition of linguistic structure that consists of spans.", "Thus, directly classifying each span based on its representation is more interpretable than token-wise classification such as BIO tagging, which reconstructs each span label from the predicted token-wise BIO tags.", "Our method builds a feature space where spans with the same class label are close to each other.", "At inference time, each span is assigned a class label based on its neighbor spans in the feature space.", "We can easily understand why the model assigned the label to the span by looking at its neighbors.", "Through quantitative and qualitative analysis on NER, we demonstrate that our instance-based method enables to build models that have high interpretability and performance.", "To sum up, our main contributions are as follows.", "This is the first work to investigate instance-based learning of span representations.", "1 Through empirical analysis on NER, we demonstrate our instance-based method enables to build models that have high interpretability without sacrificing performance.", "1 Our code is publicly available at https://github.", "com/hiroki13/instance-based-ner.git .", "Neural models generally have a common technical challenge: the black-box property.", "The rationales underlying the model predictions are opaque for humans to understand.", "Many recent studies have tried to look into classifier-based neural models (Ribeiro et al., 2016; Lundberg and Lee, 2017; Koh and Liang, 2017).", "In this paper, instead of looking into the black-box, we build interpretable models based on instance-based learning.", "Before the current neural era, instance-based learning, sometimes called memory-based learning (Daelemans and Van den Bosch, 2005), was widely used for various NLP tasks, such as part-of-speech tagging (Daelemans et al., 1996), dependency parsing (Nivre et al., 2004) and machine translation (Na-gao, 1984).", "For NER, some instance-based models have been proposed (Tjong Kim Sang, 2002; De Meulder and Daelemans, 2003; Hendrickx and van den Bosch, 2003).", "Recently, despite its high interpretability, this direction has not been explored.", "One exception is Wiseman and Stratos (2019), which used instance-based learning of token representations.", "Due to BIO tagging, it faces one technical challenge: inconsistent label prediction.", "For example, an entity candidate World Health Orga-nization can be assigned inconsistent labels such as B-LOC I-ORG I-ORG , whereas the ground-truth labels are B-ORG I-ORG I-ORG .", "To remedy this issue, they presented a heuristic technique for encouraging contiguous token alignment.", "In contrast to such token-wise prediction, we adopt span-wise prediction, which can naturally avoid this issue because each span is assigned one label.", "NER is generally solved as", "(i) sequence labeling or", "(ii) span classification.", "2 In the first approach, token features are induced by using neural networks and fed into a classifier, such as conditional random fields (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016).", "One drawback of this approach is the difficulty dealing with nested entities.", "3 By contrast, the span classification approach, adopted in this study, can straightforwardly solve nested NER (Finkel and Manning, 2009; Sohrab and Miwa, 2018; Xia et al., 2019).", "4 2 Very recently, a hybrid model of these two approaches has been proposed by Liu et al. (2019).", "NER can be solved as multi-class classification, where each of possible spans in a sentence is assigned a class label.", "As we mentioned in Section 2, this approach can naturally avoid inconsistent label prediction and straightforwardly deal with nested entities.", "Because of these advantages over token-wise classification, span classification has been gaining a considerable attention (Sohrab and Miwa, 2018; Xia et al., 2019).", "Formally, given an input sentence of T words X = ( w 1 , w 2 , . . . , w T ) , we first enumerate possible spans S ( X ) , and then assign a class label y Y to each span s S ( X ) .", "We will write each span as s = ( a, b ) , where a and b are word indices in the sentence: 1 a b T .", "Consider the following sentence.", "Here, the possible spans in this sentence are S ( X ) = { (1 , 1) , (1 , 2) , (1 , 3) , . . . , (4 , 5) , (5 , 5) } .", "Franz Kafka, denoted as s = (1 , 2) , is assigned the person type entity label ( y = PER ).", "Note that the other non-entity spans are assigned the null label ( y = NULL ).", "For example, a novelist, denoted as s = (4 , 5) , is assigned NULL .", "In this way, the NULL label is assigned to non-entity spans, which is the same as the O tag in the BIO tag set.", "The probability that each span s is assigned a class label y is modeled by using softmax function: P ( y | s ) = exp ( score ( s, y )) (cid:88) y (cid:48) Y exp ( score ( s, y (cid:48) )) .", "Typically, as the scoring function, the inner product between each label weight vector w y and span feature vector h s is used: score ( s, y ) = w y h s .", "The score for the NULL label is set to a constant, score ( s, y = NULL ) = 0 , similar to logistic regression (He et al., 2018).", "For training, the loss function we minimize is the negative log-likelihood: L = (cid:88) ( X,Y ) D (cid:88) ( s,y ) S ( X,Y ) log P ( y | s ) , where S ( X, Y ) is a set of pairs of a span s and its ground-truth label y .", "We call this kind of models that use label weight vectors for classification classifier-based span model .", "In Figure 1, an entity candidate Franz Kafka and the spans in the training set are mapped onto the feature vector space, and the label distribution is computed from the similarities between them.", "In this inference process, it is easy to understand how much each training instance contributes to the predictions.", "This property allows us to explain the predictions by spe-cific training instances, which is categorized into example-based explanations (Plumb et al., 2018).", "Formally, within the neighbourhood component analysis framework (Goldberger et al., 2005), we define the neighbor span probability that each span s i S ( X ) will select another span s j as its neighbor from candidate spans in the training set: P ( s j | s i , D (cid:48) ) = exp ( score ( s i , s j )) (cid:88) s k S ( D (cid:48) ) exp ( score ( s i , s k )) .", "Here, we exclude the input sentence X and its ground-truth labels Y from the training set D : D (cid:48) = D \\ { ( X, Y ) } , and regard all other spans as candidates: S ( D (cid:48) ) = { s S ( X (cid:48) ) | ( X (cid:48) , Y (cid:48) ) D (cid:48) } .", "The scoring function returns a similarity between the spans s i and s j .", "Then we compute the probability that a span s i will be assigned a label y i : P ( y i | s i ) = (cid:88) s j S ( D (cid:48) ,y i ) P ( s j | s i , D (cid:48) ) .", "Here, S ( D (cid:48) , y i ) = { s j D (cid:48) | y i = y j } , so the equation indicates that we sum up the probabilities of the neighbor spans that have the same label as the span s i .", "The loss function we minimize is the negative log-likelihood: L = (cid:88) ( X,Y ) D (cid:88) ( s i ,y i ) S ( X,Y ) log P ( y i | s i ) , where S ( X, Y ) is a set of pairs of a span s i and its ground-truth label y i .", "At inference time, we predict y i to be the class label with maximal marginal probability: y i = arg max y Y P ( y | s i ) , where the probability P ( y | s i ) is computed for each of the label set y Y .", "Efficient neighbor probability computation The neighbor span probability P ( s j | s i , D (cid:48) ) in Equation 1 depends on the entire training set D (cid:48) , which leads to heavy computational cost.", "As a remedy, we use random sampling to retrieve K sentences D (cid:48)(cid:48) = { ( X (cid:48) k , Y (cid:48) k ) } Kk =0 from the training set D (cid:48) .", "At training time, we randomly sample K sentences for each mini-batch at each epoch.", "This simple technique realizes time and memory efficient training.", "In our experiments, it takes less than one day to train a model on a single GPU 5 .", "Data We evaluate the span models through two types of NER:", "(i) flat NER on the CoNLL-2003 dataset (Tjong Kim Sang and De Meulder, 2003) and", "(ii) nested NER on the GENIA dataset 6 (Kim et al., 2003).", "We follow the standard training-development-test splits.", "Baseline We use a classifier-based span model (Section 3.1) as a baseline.", "Only the difference between the instance-based and classifier-based span models is whether to use softmax classifier or not.", "Encoder and span representation We adopt the encoder architecture proposed by Ma and Hovy (2016), which encodes each token of the input sentence w t X with word embedding and character-level CNN.", "The encoded token representations w 1: T = ( w 1 , w 2 , . . . , w T ) are fed to bidirectional LSTM for computing contextual ones h 1: T and h 1: T .", "From them, we create h lstm s for each span s = ( a, b ) based on LSTM-minus (Wang and Chang, 2016).", "For flat NER, we use the representation h lstm s = [ h b h a 1 , h a h b +1 ] .", "For nested NER, we use h lstm s = [ h b h a 1 , h a h b +1 , h a + h b , h a + h b ] .", "7 We then multiply h lstm s with a weight matrix W and obtain the span representation: h s = W h lstm s .", "For the scoring function in Equation 1 in the instance-based span model, we use the inner product between a pair of span representations: score ( s i , s j ) = h s i h s j .", "Model configuration We train instance-based models by using K = 50 training sentences randomly retrieved for each mini-batch.", "At test time, we use K = 50 nearest training sentences for each sentence based on the cosine similarities between their sentence vectors 8 .", "For the word embeddings, we use the GloVe 100-dimensional embeddings (Pennington et al., 2014) and the BERT embeddings (Devlin et al., 2019).", "9 6 We use the same one pre-processed by Zheng et al. (2019) at https://github.com/thecharm/ boundary-aware-nested-ner 7 We use the different span representation from the one used for flat NER because concatenating the addition features, h a + h b and h a + h b , to the subtraction features improves performance in our preliminary experiments.", "8 For each sentence X = ( w 1 , w 2 , . . . , w T ) , its sentence vector is defined as the vector averaged over the word embeddings (GloVe) within the sentence: 1 T (cid:80) t w emb t .", "Overall F 1 scores We investigate whether or not our instance-based span model can achieve competitive performance with the classifier-based span model.", "Table 1 shows F 1 scores on each test set.", "10 Consistently, the instance-based span model yielded comparable results to the classifier-based span model.", "This indicates that our instance-based learning method enables to build NER models without sacrificing performance.", "Effects of training data size Figure 2 shows F 1 scores on the CoNLL-2003 development set by the models trained on full-size, 1 / 2 , 1 / 4 and 1 / 8 of the training set.", "We found that", "(i) performance of both models gradually degrades when the size of the training set is smaller and", "(ii) both models yield very competitive performance curves.", "10 The models using GloVe yielded slightly better results than those using BERT.", "One possible explanation is that subword segmentation is not so good for NER.", "In particular, tokens in upper case are segmented into too small elements, e.g., LEICESTERSHIRE L, ##EI, ##CE, ##ST, ##ER, ##S, ##H, ##IR, ##E.", "Examples of retrieved spans The span feature space learned by our method can be applied to various downstream tasks.", "In particular, it can be used as a span retrieval system.", "Table 2 shows five nearest neighbor spans of an entity candidate Tom Moody.", "In the classifier-based span model, person-related but non-entity spans were retrieved.", "By contrast, in the instance-based span model, person ( PER ) entities were consistently retrieved.", "11 This tendency was observed in many other cases, and we confirmed that our method can build preferable feature spaces for applications.", "Errors analysis The instance-based span model tends to wrongly label spans that includes location or organization names.", "For example, in Table 3, the wrong label LOC (Location) is assigned to Air France whose gold label is ORG (Organization).", "11 The query span Tom moody was a cricketer at that time, and some neighbors, Ian Botham and Darren Gough, were also cricketers.", "Note that by looking at the neighbors, we can understand that country or district entities confused the model.", "This implies that prediction errors are easier to analyze because the neighbors are the rationales of the predictions.", "Generalizability Are our findings in NER generalizable to other tasks?", "To investigate it, we perform an additional experiment on the CoNLL-2000 dataset (Tjong Kim Sang and Buchholz, 2000) for syntactic chunking.", "12 While this task is similar to NER in terms of short-span classification, the class labels are based on syntax, not (entity) semantics.", "In Table 4, the instance-based span model achieved competitive F 1 scores with the classifier-based one, which is consistent with the NER results.", "This suggests that our findings in NER are likely to generalizable to other short-span classification tasks.", "Future work One interesting line of future work is an extension of our method to span-to-span relation classification, such as SRL and coreference resolution.", "Another potential direction is to apply and evaluate learned span features to downstream tasks requiring entity knowledge, such as entity linking and question answering.", "We presented and investigated an instance-based learning method that learns similarity between spans.", "Through NER experiments, we demonstrated that the models build by our method have", "(i) competitive performance with a classifier-based span model and", "(ii) interpretable inference process where it is easy to understand how much each training instance contributes to the predictions.", "This work was partially supported by JSPS KAKENHI Grant Number JP19H04162 and JP19K20351.", "We would like to thank the members of Tohoku NLP Laboratory and the anonymous reviewers for their insightful comments.", "12 The models are trained in the same way as in nested NER." ]
[ "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "other", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "In recent years, math word problem solving has received considerable attention and achieved promising results, but previous methods rarely take numerical values into consideration.", "Most methods treat the numerical values in the problems as number symbols, and ignore the prominent role of the numerical values in solving the problem.", "In this paper, we propose a novel approach called NumS2T, which enhances math word problem solving performance by explicitly incorporating numerical values into a sequence-to-tree network.", "In addition, a numerical properties prediction mechanism is used to capture the category and comparison information of numerals and measure their importance in global expressions.", "Experimental results on the Math23K and APE datasets demonstrate that our model achieves better performance than existing state-of-the-art models.", "1 1 Introduction Taking a math word problem as input, the math word problem solving task aims to generate a corresponding solvable expression and answer.", "With the advancements in natural language processing, math word problem solving has received growing attention in recent years (Roy and Roth, 2015; Mitra and Baral, 2016; Ling et al., 2017; Huang et al., 2018).", "Many methods have been proposed that use sequence-to-sequence (seq2seq) models with an attention mechanism (Bahdanau et al., 2014) for math word problem solving (Wang et al., 2017b, 2018b, 2019).", "To better utilize expression structure information, some methods use sequence-to-tree (seq2tree) models to generate expressions Corresponding author.", "and have achieved promising results (Liu et al., 2019; Xie and Sun, 2019; Wu et al., 2020).", "These methods convert the target expression into a binary tree, and generate a pre-order traversal sequence of this expression tree based on the parent and sibling nodes of each node.", "Although promising results have been achieved, previous methods rarely take numerical values into consideration, despite the fact that in math word problem solving, numerical values provide vital information.", "As an infinite number of numerals can appear in math word problems, it is impossible to list them all in the vocabulary.", "Previous methods replace all the numbers in the problems with number symbols ( e.g. , v 1 , v 2 ) in order in the preprocessing stage.", "These replaced problems are used as input to directly generate expressions containing number symbols.", "The number symbols in the expressions are then replaced with the numerical values in the original problems to obtain executable expressions.", "As shown in Figure 1, taking the problem with numerical values { v 2 =15, v 3 =10, v 4 =100, v 5 =25 } as input, the target expression of the problem would be v 4 / ( v 2 v 3 ) + v 5 .", "However, if the number symbol v 5 = 20% , the target expression for the same problem would be v 4 / ( v 2 v 3 ) (1 + v 5 ) .", "Similarly, without numerical value information, the model can hardly determine whether the number gap between the table and the chair should be v 2 v 3 or v 3 v 2 .", "As such, it will incorrectly generates the same expression for problems with different numerical values.", "To address these problems, we propose a novel approach called NumS2T to better capture numerical value information and utilize numerical properties.", "Specifically, the proposed model uses a sequence-to-tree network with a digit-to-digit number encoder that explicitly incorporates numerical values into the model and captures number-aware problem representations.", "In addition, we designed a numerical properties prediction mechanism to further utilize the numerical properties.", "NumS2T predicts the comparative relationship between paired numerical values, determines the category of each numeral, and measures their importance for generating the final expression.", "With the category and comparison information, the model can better identify the interactive relationship between the numerals, and thus generate better results.", "With consideration of the importance of the numerals, the model can capture the global relationship between the numerals and target expressions rather than simply focusing on the local relationship between numeral pairs.", "The main contributions of this paper can be summarized as follows: We explicitly incorporate numerical value information into math word problem solving tasks.", "We propose a numerical properties prediction mechanism to utilize numerical properties.", "To incorporate the local relationship between numerals and the global relationship associated with the final expression, NumS2T compares the paired numerical values, determines the category of each numeral, and then measures whether they should appear in the final expression.", "We conducted experiments on two large-scale Math23K and Ape210K datasets to verify the effectiveness of our NumS2T model.", "The results show that our model achieved better performance than existing state-of-the-art methods.", "In this section, we present details regarding our proposed NumS2T model.", "As shown in Figure 2, we use an attention-based sequence-to-tree model with a problem encoder (Section 2.2) and a tree-structured decoder to generate math expressions (Section 2.4).", "In addition, we explicitly incorporate numerical values to obtain number-aware problem representations (Section 2.3).", "Finally, we propose a numerical properties prediction mechanism to further utilize the numerical properties (Section 2.5).", "A math word problem X = ( x 1 , x 2 , . . . , x m ) is a sequence of m words.", "Our goal is to generate a math expression Y = ( y 1 , y 2 , . . . , y n ) , where Y is the pre-order traversal sequence of a binary math expression tree, which can be executed to produce the answer to problem X. Here, we replace all of the numbers in the problem X with a list of number symbols based on their order of appearance.", "Let V c = ( v 1 , v 2 , . . . , v K ) be the K numbers that appear in problem X. The numerical value of the k -th number v k is a sequence of l characters ( v 1 k , v 2 k , . . . , v lk ) .", "The generated vocabulary V g is composed of several common numbers ( e.g. , 1,100, ) and several math operators ( e.g. , +,-,*,/).", "At each time step during decoding, the NumS2T model either copies a number from V c or generates a number from V g .", "We use a two-layer bidirectional LSTM (BiL-STM) (Hochreiter and Schmidhuber, 1997) network as the encoder, which encodes the math word problem X into a sequence of hidden states", "( h x1 , h x2 , . . . , h xm ) R m 2 d as follows: h xi = [ h xi , h xi ] , h xi = BiLSTM( E ( x i ) , h xi 1 ) , h xi = BiLSTM( E ( x i ) , h xi 1 ) .", "(1) Here, word embedding vectors E ( x i ) are obtained via a wording embedding layer E ( ) .", "d is the dimension of the hidden state and h xi is the concatenation of the forward and backward LSTM hidden states.", "Following Wu et al. (2020), we enrich the problem representations with common-sense knowledge information from external knowledge bases.", "The words in problem sequences X and their categories in external knowledge bases are constructed as an entity graph.", "In this entity graph, each word is related to its neighbor in the problem.", "If there are two nouns belonging to the same category in the knowledge base, these two nouns are related to their categories.", "See Wu et al. (2020) for more details.", "The knowledge-aware problem states h kgi are obtained from a two-layer graph attention network (Velickovic et al., 2018) on the entity graph: ij = softmax A ij =1 ( f (w Th [W x h xi : W x h xj ])) , h kgi = || t =1 ,...,T ( (cid:88) A ij =1 ij W k h xj ) , (2) where w Th , W x , W k are weight vector and matrices.", "|| and [:] are concatenation functions.", "f ( ) and are the LeakyRelu and sigmoid activation functions.", "T is the number of heads in GAT layer.", "If the i -th word is related to the j -th word, the score of the adjacent matrix A ij is set to 1, otherwise it is set to 0.", "To solve the issues mentioned in the introduction section, we need to incorporate explicit numerical value information into NumS2T.", "However, there are an infinite number of numerals that can appear in math word problems.", "For example, among the 18,529 problems in the training set of Math23K, there are 3,058 different numerical values.", "Therefore, rather than list all these numerals in the vocabulary, we encode each numeral value digit by digit.", "All the digits in the numerical value v k are treated as a sequence ( v 1 k , v 2 k , . . . , v lk ) and embedded via the embed layer E( ).", "Take a 5-digit value v k = (1 / 3) as an example, we have E(v k ) R 5 d emb .", "Similar to the architecture shown in Equation 1, we use a BiLSTM network to encode the numeral values and obtain the numeral hidden states h v k with an average pooling layer: h nv k , j = BiLSTM( E ( v jk ) , h nv k , j 1 ) , h nv k = 1 l (cid:88) l j =1 h nv k , j .", "(3) To capture the relations and dependency between numeral pairs, we use a self-attention mechanism (Wang et al., 2017a) on the hidden state of all the numerals H nv = { h nv k } Kk =1 to compute the contextual numeral hidden states h cnv k : v k = softmax( ( H nv ) TW h h nv k ) , h cnv k = v k H nv , (4) where v k is the attention distribution of v k on all the numerals in the problem X. Combining the numeral hidden states h nv k , h cnv k with the original problem hidden states h xi , h kgi , we have number-aware problem states h numi enhanced with explicit numeral value information: h numi = (cid:26) [ h nv k : h cnv k ] x i = v k [ h xi : h kgi ] x i is not a number (5) The final number-aware problem representations are obtained by concatenating the problem hidden states h xi , the knowledge-aware problem states h kg i and the number-aware problem states h numi : h i = [ h xi : h kgi : h numi ] .", "Previous works (Xie and Sun, 2019; Liu et al., 2019; Wu et al., 2020) have confirmed that a sequence-to-tree model can better represent the expression structures than a sequence-to-sequence model, because a tree structured decoder can capture the global expression information and focus on the features of adjacent nodes.", "The tree structured decoder takes the final number-aware problem representations h i as input and generates the target expression from top to bottom.", "The target expression can be regarded as a pre-order traversal of a binary tree, with operators as internal nodes and numbers as leaf nodes.", "The decoder is a one-layer LSTM, which updates its states as follows: s t + 1 = LSTM([ E ( y t ) : c t : r t ] , s t ) .", "At time step t +1, the decoder uses the last generated word embedding E ( y t ) , the problem context state c t and the expression context state r t to update its previous hidden state s t .", "The problem context state c t is computed via attention mechanism as follows: ti = softmax(tanh(W h h i +W s [ s t : r t ])) , c t = m (cid:88) i =1 ti h i , (8) where W h , W s are weight matrices.", "ti is the attention distribution on the number-aware problem representations h i .", "The expression context state r t is computed via a state aggregation mechanism (Wu et al., 2020).", "It describes the global representation of the partial expressions y <t = ( y 1 , y 2 , . . . , y t 1 ) being generated by the decoder.", "At time step t , the decoder aggregates each node's context state with its neighbor nodes in the generated partial expression tree.", "The aggregation functions are as follows: r 0t = s t , r + 1 t = (W r [ r t : r t , p : r t , l : r t , r ]) , (9) where is the sigmoid function and W r is a weight matrix.", "r 0t is initialized with decoder hidden state s t when = 0 ,.", "r t , p , r t , l , r t , r are the context state of the parent node, the left child node, and the right child node of y t in the expression tree.", "r + 1 t represents the expression context state updated with global information from all nodes in the generated partial expression.", "Lastly, the decoder can generate a word from a given vocabulary V g .", "It can also generate a number symbol in V c , and use it to copy a number from the problem X .", "The final distribution is the combination of the generated probability and copy probability: H v = { h v k } Kk =1 , p c = (W z [ s t : c t : r t ] + W v H v ) , P c ( y t ) = softmax( f ([ s t : c t : r t : H v ])) , P g ( y t ) = softmax( f ([ s t : c t : r t ])) , P( y t | y <t , X) = p c P c ( y t ) + (1 p c )P g ( y t ) .", "(10)", "Here, H v are the number-aware problem representations of all the numerals v k in X. W z , W v are the weight matrices.", "f ( ) is a perception layer.", "p c is the probability that the current word is a number copied from the problem.", "Our NumS2T model explicitly incorporates numerical values information.", "Furthermore, utilize the numerical properties to the degree possible through a numerical properties prediction mechanism.", "We consider three numerical properties to be useful for solving math word problems: Pairwise Numeral Comparison.", "If we consider the question What is the difference between v 1 and v 2 , the comparative relationship between these two numerals can help the model decide whether to generate v 1 v 2 or v 2 v 1 .", "In this paper, we compare each numeral v k in the question with the other numerals.", "Then, we calculate the pairwise comparison scores z kj based on their number-aware problem representations, and we optimize the pairwise comparison loss to assign numerals with larger numerical values higher pairwise comparison scores.", "The pairwise comparison loss LCR is calculated as follows: g v k = (W h h v k ) , z kj = (cid:40) max (0 , g v j g v k ) if v k v j max (0 , g v k g v j ) if v k < v j , LCR = 1 K 2 K (cid:88) k =1 K (cid:88) j =1 z kj , (11) Numeral categories.", "In the sentence the number of apples is 5 more than the number of pears, replacing the numeral 5 with the integer 100 may not affect the structure of the target expression, but replacing the numeral 5 with 20% may change the structure from +5 to *(1 + 20%).", "We roughly divide all numbers into four categories: { integer, decimal, fraction, percentage } , and assign a category label C = { 1,2,3,4 } , respectively.", "Given the number-aware problem representations h v k for each numeral v k , we calculate the category score distribution P( C v k | h v k ) and then minimize the negative log likelihood: P( C v k | h v k )=softmax(W c h v k ) , LCA = 1 KK (cid:88) k =1 log P( C v k | h v k ) .", "(12)", "Global relationship with target expressions.", "Current models tend to focus on the local relationship between numerals, while sometimes these numerals are not related to the target expression.", "Given 3 bags of rice weighing 60 kg, the numeral 3 is highly correlated with 60.", "However, if the problem relates to the total price of the rice rather than the weight of each bag of rice, the numeral 3 is not so important for generating the target expression.", "The NumS2T model predicts a scalar value g (cid:48) v k for each numeral that denotes whether this numeral will be used in a math expression.", "The importance label a v k =1 when v k is used in the ground truth math expression, otherwise a v k =0.", "The supervised loss is defined by: g (cid:48) v k = (W g h v k ) , LGR = 1 KK (cid:88) k =1 a i log g (cid:48) v k +(1 a i ) log (1 g (cid:48) v k ) .", "(13) 2.6 Training During training, for each questionexpression pair (X, Y), we first train the NumS2T by optimizing the maximum likelihood estimation (MLE) loss L l on the probability distribution P ( y t | y <t , X)) .", "Then, the final loss function L is a combination of the MLE loss and three numerical properties loss functions: L l = 1 n n (cid:88) i =1 log P ( y t | y <t , X)) , L = L l + 1 LCR + 2 LCA + 3 LGR .", "(14)", "We present the experimental results of math word problem solving using our proposed models on the Math23K (Wang et al., 2017b) and Ape210K (Zhao et al., 2020) 2 datasets.", "Following Xie and Sun (2019), we removed the problems that the corresponding expressions could not be executed to obtain the given answers and the problems that omit intermediate calculation expressions.", "For Math23K, following previous studies (Xie and Sun, 2019; Wu et al., 2020), we randomly split the dataset into a training set, a development set and a test set with 18,529, 2,316, 2,316 problems.", "For Ape210K, we use the official data partition.", "There are 166,270, 4,157, and 4,159 problems in our training set, development set and test set, respectively.", "We report answer accuracy as the main evaluation metrics of the math word problem solving task.", "In this paper, we truncate the problem to a max sequence length of 150, and the expression to a max sequence length of 50.", "We select 4,000 words that appear most frequently in the training set of each dataset as the vocabulary, and replace the remaining words with a special token UNK.", "We initialize the word embedding with the pre-trained 300-dimension word vectors 3 .", "The problem encoder used two external knowledge bases: Cilin (Mei, 1985) and Hownet (Dong et al., 2010).", "The number of heads T in GAT is 8.", "The hidden size is 512 and the batch size is 64.", "We use the Adam optimizer (Kingma and Ba, 2014) to optimize the models an the learning rate is 0.001.", "We compute the final loss function with 1 , 2 , 3 of 0.5.", "Dropout (Srivastava et al., 2014) is set to 0.5.", "Models are trained in 80 epoches for the Math23K dataset and 50 epoches for the Ape210K dataset.", "During testing, the beam size is set to 5.", "Once all internal nodes in the expression tree have two child nodes, the decoder stops generating the next word.", "The hyper-parameters are tuned on the valid set.", "We compare our proposed NumS2T model with the following baseline models: DNS (Wang et al., 2017b) is a seq2seq model with a two-layer GRU as an encoder and a two-layer LSTM as a decoder.", "DNS-Retrieval is a variant of DNS that combines a retrieval model.", "S2S (Wang et al., 2018a) is a standard bidirectional LSTM-based seq2seq model with an attention mechanism.", "RecursiveNN (Wang et al., 2019) uses a recursive neural network on the predicted tree structure templates Tree-Decoder (Liu et al., 2019) is a seq2tree model with a tree structured decoder.", "The decoder generates each node based on its parent node and its sibling node.", "GTS (Xie and Sun, 2019) generates each node based on its parent node and its left sibling subtree embedding.", "The subtree embedding is obtained by merging the embedding of the subtree from bottom to top.", "KA-S2T (Wu et al., 2020) is a seq2tree model with external knowledge and a state aggregation mechanism.", "The decoder use a two-layer GCN to recursively aggregate neighbors of each node in the partial expression tree.", "The main evaluation results are presented in Table 1.", "Compared with baseline methods, our model obtains the highest answer accuracy of 78.1% in the Math23K dataset and 70.5% in the APE210K dataset, which is significantly better than other state-of-the-art methods.", "The experimental results provide the following observations: 1) The methods with a tree-structured decoder (Tree-Decoder, GTS, KA-S2T) perform better than methods with a sequence-structured decoder (DNS, S2S).", "These methods treat the math expression as a binary tree and directly use adjacent nodes in the tree instead of the previous word in the sequence to generate the next word.", "In this way, the model can better capture the structure information of the math expressions.", "2) The KAS2T model with external knowledge performs better than GTS, which proves that external knowledge enables the model to obtain better interaction between words.", "3) NumS2T outperforms all the other baselines.", "This result shows the effectiveness of the explicitly incorporated numerical values and use of a numerical properties prediction mechanism.", "Effect of explicitly incorporating numerical values: We designed several NumS2T variants that reduce the numerical values incorporated in the model.", "Here, NumS2T w/o Numerals means that we remove the character-level numeric value encoder.", "An input example is Alan bought v 1 apples for $ v 2 .", "NumS2T w/o Symbols means that we not only remove the character-level numeric value encoder, but also replace the math symbols in math problems with character-level numeric values.", "An input example is Alan bought 2 5 apples for $ 1 5 0.", "Table 2 shows the results of these different variants, from which we can see: 1)The experimental results show that model performance of NumS2T w/o Symbols is significantly reduced in both datasets.", "We believe this is because directly replacing the number symbols will make it difficult for the model to obtain the overall representation of each number.", "2) The use of a self-attention mechanism significantly improves the accuracy by 0.8% in Math23K and 0.7% in APE210K.", "This is because the same numerical value may describe different information in different problems.", "Therefore, the self-attention mechanism combines numerical values with other numerical values in the problem, which helps to model numerical information and the relations between these numerals.", "3) Without numerical values, the answer accuracy of NumS2T w/o Numerals would be reduced to 76.6% and 69.2%.", "The results show the benefit of explicitly incorporating numerical values.", "Effect of the numerical properties prediction mechanism: Table 3 shows the results of several NumS2T variants designed to measure the effect Models Math23K APE210K KA-S2T 76.3% 68.7% NumS2T w/o Symbols 75.4% 64.4% NumS2T w/o Numerals 76.6% 69.2% NumS2T w/o SelfAtt 77.3% 69.8% NumS2T 78.1% 70.5% Table 2: Ablation study on reducing the numerical values incorporated into the model.", "of the numerical properties prediction mechanism.", "From the table we can observe that: 1) NumS2T-base is the variant of NumS2T without the numerical properties prediction mechanism.", "Without numerical properties, the answer accuracy in the Math23K and APE210K datasets are reduced to 77.0% and 69.6%, which show that the numerical properties prediction mechanism contributes considerably to improving performance.", "In addition, NumS2T-base still outperforms the state-of-the-art baseline KA-S2T, which once again proves the effectiveness of explicitly incorporating numerical values.", "2) The use of pairwise numeral comparison, numeral category and global relationship with a target expression can improve accuracy by approximately 0.6%, 0.4% and 0.3%, respectively.", "Their combination achieves further improvements in model performance.", "These results show the effectiveness of the numerical properties prediction mechanism because it enables the model to further utilize numerical properties.", "Model performance on problems with a different number of numerals: Table 4 shows the results for how accuracy changes as the number of numerals in the problem increases.", "The NumS2T model outperforms the best-performing baseline with respect to problems with a different number of Math23K Num.", "numerals.", "In addition, as the number of numerals in the problems increase, the performance gap between NumS2T and KAS2T also increases.", "This is because with more numerals in the problem, NumS2T, which explicitly incorporate numerical value information, is able to more readily achieve better performance.", "Meanwhile, NumS2T also achieved a considerable improvement on problems with only one numeral.", "This further demonstrates the effect of utilizing numerical category information and global relationship information.", "Table 5 shows three cases generated by KA-S2T (Wu et al., 2020) and our NumS2T model.", "In the first problem, without numerical values, KA-S2T incorrectly uses the smaller value to subtract the larger value when calculating the price difference between footballs and basketballs.", "This case requires the model to choose the larger value between two numerals.", "Our NumS2T model can better handle this problem.", "In the second problem, KA-S2T replaces all of the numerals in the problems with number symbols ( v 1 , v 2 ) and does not know that v 2 =20% is not an integer.", "Our proposed method can capture numerical values and numeral category information to generate Problem: Each football is worth $ 76, and each basketball is worth $ 45.", "correct results.", "In the third problem, 80 seats and 52 tickets are strongly semantically related, so KA-S2T generates the sub-expression 80-52.", "However, this problem is about the fares that have already been sold rather than how many tickets are left.", "With numerical properties, NumS2T is able to realize that 80 is not related to the target expression and should not appear in the generated result.", "Math Word Problem Solving: In recent years, Seq2Seq (Sutskever et al., 2014) has been widely used in math word problem solving tasks (Ling et al., 2017; Wang et al., 2017b, 2018a).", "To better utilize expression structure information, recent studies have used Seq2Tree models (Liu et al., 2019; Zhang et al., 2020a).", "Xie and Sun (2019) proposed a tree structured decoder that uses a goal-driven approach to generate expression trees.", "Wu et al. (2020) proposed a knowledge-aware Seq2Tree model with a state aggregation mechanism that incorporates common-sense knowledge from external knowledge bases.", "Recently, several methods have attempted to use the contextual information of the numbers in the problem.", "Li et al. (2019) propose a group attention mechanism to extract quantity-related features and quantity-pair features.", "Zhang et al. (2020b) connects each number in the problem with nearby nouns to enrich the problem representations.", "However, these methods rarely take numerical values into consideration.", "They replace all the numbers in the problems with number symbols and ignore the vital information provided by the numerical values in math word problem solving.", "As such, these methods will incorrectly generates the same expression for problems with different numerical values.", "Numerical Value Representations: Some recent studies have explored the numerical value representations in language models (Naik et al., 2019; Chen et al., 2019; Wallace et al., 2019).", "Spithourakis and Riedel (2018) investigated several of the strategies used for language models for their possible application to model numerals.", "Gong et al. (2020) proposed the use of contextual numerical value representations to enhance neural content planning by helping models to understand data values.", "To incorporate numerical value information into math word solving tasks, we use a digit-to-digit numerical value encoder to obtain the number-aware problem representations.", "To further utilize the numerical properties, we propose a numerical properties prediction mechanism.", "In this study, we proposed a novel approach called NumS2T, that better captures numerical value information and utilizes numerical properties.", "In this model, we use a digit-to-digit numerical value encoder to explicitly incorporate numerical values.", "In addition, we designed a numerical properties prediction mechanism that compares the paired numerical values, determines the category of each numeral, and measures whether they should appear in the final expression.", "Experimental results show that our proposed NumS2T model outperforms other state-of-the-art baseline methods.", "The authors wish to thank the anonymous reviewers for their helpful comments.", "This work was partially funded by China National Key R&D Program (No. 2018YFB1005100), National Natural Science Foundation of China (No. 62076069, 61976056), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103)." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "method", "method", "objective", "other", "other" ]
[ "Recent work has found evidence that natural languages are shaped by pressures for ecient communication e.g. the more contextually predictable a word is, the fewer speech sounds or syllables it has (Piantadosi et al. 2011).", "Research on the degree to which speech and language are shaped by pressures for eective communication robustness in the face of noise and uncertainty has been more equivocal.", "We develop a measure of contextual confusability during word recognition based on psychoacoustic data.", "Applying this measure to naturalistic speech corpora, we find evidence suggesting that speakers alter their productions to make contextually more confusable words easier to understand.", "A major open question in the study of natural languages is the extent to which pressures for ecient communication shape the online production choices of speakers or the system of forms and form-meaning mappings.", "Zipf (1936, 1949) famously noted that highly frequent words tend to be shorter and hypothesized that this could be explained in terms of pressures for ecient communication: the average cost of producing a word is lower than it would be otherwise.", "More recent work has formalized hypotheses about the eect of communicative pressures on language usage and design using tools from information theory (Shannon 1948, Cover and Thomas 2012) and rational analysis (Anderson 1990, 1991).", "This work has found evidence that meanings are allocated to word types in a way that minimizes speaker eort (Piantadosi et al. 2011, 2012), and that this appears to be at least partly explainable by online production choices (Mahowald et al. 2013).", "While this research oers evidence that lexicons and the production choices of speakers are shaped by pressures for ecient communication, other work examining how much words and lexicons are shaped by pressures for ensuring eective communication in the face of noise and uncertainty has been more equivocal.", "This work has found evidence that words with greater neighborhood size or density that is, words that have a greater number of similar-sounding neighbors have faster onset of production, and have lower overall durations.", "Words with greater neighborhood density also take longer for listeners to recognize and comprehend, and have less acoustically distinctive vowels (Vitevitch 2002, Gahl et al. 2012; see Vitevitch and Luce 2016 for review).", "This work provides a challenge for communicatively-oriented models of production: words with greater numbers of similar-sounding neighbors seem likely to be more confusable, and therefore speakers would be predicted to decrease the likelihood of noise by, e.g., increasing their duration.", "However, this work does not directly estimate word confusability, instead using neighborhood density or an acoustic similarity measure as a proxy.", "It remains possible that greater word confusability is associated with phonetic enhancement, and that a more direct measure of confusability would reveal this relationship.", "In this paper, we present a measure of relative word confusability based on both a language model and psychoacoustic data, and we examine how well it predicts word durations in natural speech corpora.", "This measure diers from neighborhood density in three ways: 1) it is sensitive to edit type; 2) it considers words with edit distance greater than 1; and 3) it takes into account top-down expectations.", "The structure of the paper is as follows.", "We first present a derivation of a Bayesian model of word recognition (broadly similar to Norris and McQueen 2008) that incorporates both linguistic context and a model of noise estimated from the 1992 gating data of Warner et al. (2014).", "We use this speech recognition model to define a measure of confusability, and apply this measure to content words in the NXT-annotated subset of the Switchboard corpus and in the Buckeye corpus (Calhoun et al. 2010, Pitt et al. 2005).", "We provide evidence that greater confusability is associated with longer duration.", "A number of other studies have examined how language is shaped by pressures for communication in the presence of noise.", "Dautriche et al. (2017) examines whether the words of natural lexicons are dispersed, as would be predicted if these lexicons are optimized to prevent confusions between different words.", "This work finds that in fact lexicons exhibit clear tendencies towards being clumpier rather than dispersed.", "The current study follows previous work in using the phenomena of reduction and enhancement to investigate whether communication is optimized for robustness to noise.", "Speech tokens that are produced with shorter than usual duration, or with parts omitted or made less distinctive, are said to be reduced, and those tokens produced with longer durations or produced more distinctively are enhanced.", "Previous work has provided evidence that reduction and enhancement are influenced by contextual predictability.", "Words, syllables, and segments that are more contextually predictable tend to be reduced and those that are less contextually predictable tend to be enhanced (see e.g. Van Son et al. 1998, Van Son and Pols 2003, Jurafsky et al. 2001, Aylett and Turk 2004, 2006, Cohen Priva 2008, 2012, 2015, Seyfarth 2014, Demberg et al. 2012, Pate and Goldwater 2015, Buz et al. 2016, Turnbull et al. 2018; see Bell et al. 2009, Jaeger and Buz 2018 for reviews).", "According to a communicatively-oriented account, this is explainable as balancing eciency against eectiveness: speakers economize on production cost the more that context facilitates accurate listener inference of the speaker's intent.", "Other work has investigated the eects of environmental noise on speech production.", "This includes work investigating whether speakers modulate their productions in response to overt signals of communication diculty, e.g. loud environments or talking to listeners who are children, elderly, or non-native speakers (Lombard 1911, Uther et al. 2007, Picheny et al. 1986).", "We propose a simplified model of word confusability, in which there are two factors that will make word in context more vs. less confusable.", "On the one hand, a listener who has observed context has some top-down' beliefs and expectations about what will be before the speaker produces any acoustics for .", "On the other hand, once the speaker has produced acoustics for , there will be (in general ambiguous) bottom-up' acoustic cues that will usually underdetermine what the speaker's choice of actually was.", "The goal of the listener is then to combine their top-down expectations with their bottom-up observations to reason about which words are more vs. less likely to have been what the speaker intended.", "1 We operationalize the perceptibility of word as the probability that the listener accurately recovers this word in situations where the speaker uses it; the confusability of a word is inversely related to its perceptibility.", "If a speaker has a model of the expected confusability of a given word, they can then decide to lengthen or shorten their particular production of the word token, balancing listener comprehension and their own eort.", "To model the in-context confusability of word tokens, we model the task of word recognition as one of Bayesian inference, with the following underlying generative process for the speaker:", "1. At some point in time, the speaker has already produced some existing sentential context , consisting of a sequence of orthographic words.", "We assume for simplicity and tractability that the listener knows exactly what this context is at each timestep.", "2. The speaker produces the current word e.g. cigarette .", "We model this as sampling according to a language model : .", "| / .", "3. The speaker determines the segment sequence 1 = .", "1 ; :::; / corresponding to their word choice.", "For example, the speaker will determine that the segments [ sIg@(cid:244)Et ] correspond to the word cigarette .", "1 Note that of the two basic factors integrated here, previous probabilistic work on reduction has been limited to using only top-down' expectations.", "In our corpora, there is a unique correct segment sequence for a given orthographic word.", "For ease of exposition, we therefore identify 1 with its corresponding orthographic form .", "Abusing notation, we will write .", "1 | / for the distribution over segmental forms induced by the language model.", "2 4. The listener receives a segment sequence 1 = .", "1 ; :::; / e.g. [ SIg@(cid:244)Et ] ( shigarette ') drawn from a channel distribution conditioned on the speaker's intended segment sequence: 1 .", "| 1 / .", "This represents the eects of noise on the signal received by the listener.", "The task of the listener is to then combine their observation (represented here by 1 ) with their prior expectations about which words are likely given the context.", "The listener tries to determine how likely each wordform in the lexicon is to have been the one intended by the speaker.", "Their posterior belief LISTENER about which segmental wordform 1 was intended is described by Bayes' rule: LISTENER .", "1 | 1 ; / (1) = .", "1 | 1 / .", "1 | / .", "1 | / (2) = .", "1 | 1 / .", "1 | / 1 .", "1 | 1 / .", "1 | / (3) Suppose for example that the listener perceives 1 = [ SIg@(cid:244)Et ].", "Their beliefs about the lexicon .", "1 | / will tell them that this is not a valid segmental wordform, but that [ sIg@(cid:244)Et ] is a valid wordform.", "Their beliefs about the noise distribution for the language .", "1 | 1 / tell them that = [ s ] is a plausible segment to be misperceived as = [ S ]; together this suggests that a good explanation of their percept is the intended wordform 1 = [ sIg@(cid:244)Et ].", "Equation 1 allows us to measure how accurately the listener will be able to reconstruct the speaker's intended message, given a perceived segmental wordform 1 .", "However, this is not sucient to determine the confusability of an intended wordform.", "In general, an intended wordform 1 may give rise to many dierent perceived wordforms 1 as a result of noise.", "In order to measure 2 This notation ignores homophony, though the model is in fact sensitive to this.", "its confusability, we therefore need to marginalize over the possible perceived segment sequences.", "We define the contextual perceptibility of a segmental wordform 1 in context to be the expected probability that the listener accurately recovers it: 1 .", "| 1 / LISTENER .", "1 | 1 ; / (4) = 1 LISTENER .", "1 | 1 ; / .", "1 | 1 / (5) The space of all possible channel strings 1 grows exponentially in sequence length .", "However, each segment is only substantially confusable with a small number of other segments and the probability of more than a small number of channel errors is small.", "We therefore approximated Eq.", "4 with a Monte Carlo estimator: 1 .", "| 1 / LISTENER .", "1 | 1 ; / (6) 1 =1 LISTENER .", "1 | 1 ; / (7) 1 .", "| 1 / (8) We choose = 1000 to balance the variance and computational feasibility of the estimator.", "Finally, following the reasoning given in Levy (2005, 2008b), we take the negative logarithm of this quantity and arrive at a surprisal, which represents the contextual confusability of segment sequence 1 in context : 3 .", "1 | 1 ; / (9) = * log 1 .", "| 1 / LISTENER .", "1 | 1 ; / (10) 3 Materials and methods We make use of two types of data: psychoacoustic gating data for estimating a noise model, and several corpora of natural speech for evaluating whether individuals increase the duration of more confusable words.", "Word durations were analyzed separately in two spoken corpora of American English: the Buckeye Corpus of Conversational Speech (Pitt et al. 3 Compare Equations 49 with Eq.", "VII of Levy (2008a), a study of sentence-level confusability.", "2005) and the NXT Switchboard Annotations (Cal-houn et al. 2010), a richly annotated subset of Switchboard-1 Release 2 (Godfrey and Holliman 1997).", "The Buckeye Corpus contains about 300,000 word tokens, taken from interviews with 40 speakers from central Ohio.", "Word durations for the present study were taken from the timestamps provided for word-level annotations.", "Each word token had a broad transcription uniform across all instances of the word type and a second, token-specific close transcription created by a human annotator.", "The Switchboard Corpus contains transcripts of telephone conversations between strangers.", "The NXT annotated subset includes about 830,000 word tokens from 642 conversations between 358 speakers recruited from all areas of the United States.", "Word durations for the present study were taken from the phonological word'-level timestamps; these were the result of annotator-checked and -corrected timestamps initially made by alignment software.", "Each phonological word was also associated with a segmental transcription that was uniform across all instances of the word type.", "Exclusion criteria almost exactly follow Seyfarth (2014) for the reasons cited there.", "These criteria are mainly designed to exclude non-content words and words whose pronunciation is likely affected by disfluencies or prosodic structure.", "Our criteria only diverge in the following manner: Word tokens were excluded if the utterance speech rate (total number of syllables / length of the utterance in seconds) was more than 3 standard deviations from the speaker mean (vs. 2.5 in Seyfarth 2014).", "After exclusion criteria were applied, about 44,000 (4,900) and 113,000 (8,900) word tokens (word types) remained in the Buckeye and NXT Switchboard corpora, respectively.", "The model of word confusability was based on the diphone gating experiment data of Warner et al. (2014).", "Participants listened to gated intervals of every phonotactically licit diphone of (western) American English and attempted to identify the full diphone they thought was being produced during the interval.", "Along with earlier work by some of the same researchers on Dutch (Smits et al. 2003, Warner et al. 2005), this represents by far the richest and most comprehensive acoustic confusion matrix data of its kind.", "Warner et al. (2014) identified all adjacent pairs of segments within and between words based on an electronic pronouncing dictionary of about 20,000 American English wordforms.", "A set of approximately 2,000 phonotactically licit diphones were extracted from this transcribed lexicon.", "At least one stimulus nonsense word was created per diphone by inserting the diphone into an environment consisting of at most one syllable on the left and at most one syllable on the right.", "A recording of each stimulus wordform was then marked up with (generally) six temporal gates.", "For each stimulus wordform, one recording was created for each gate, starting at the beginning of the original recording and going all the way up to a gate location, followed by a ramping procedure (rather than truncation or white noise) to avoid systematically biasing confusion data.", "In each trial, participants heard a gated stimulus recording.", "4 If the recording included a preceding context, this context was displayed on the screen.", "The participant then selected the stimulus diphone they thought was in the recording (i.e. not including context).", "From this response data, each gate of each stimulus diphone can be associated with a frequency distribution over response diphones.", "Only the response data for gates corresponding to the end of each segment of the diphone were used in the current study.", "For each of Buckeye and NXT Switchboard, the segment inventories of the gating data and of each speech corpus had to be projected down to a common set of segments.", "In each case, this involved collapsing the distinction in the corpora between syllabic and non-syllabic nasal stops.", "For reasons of data sparsity, the distinction between stressed and unstressed versions of any given vowel was also collapsed.", "Our measure of contextual confusability uses a language model to compute the prior probability of a word in context.", "We estimate a language model from the Fisher corpus (Cieri et al. 2004), a speech corpus matched for genre and register to Buckeye and Switchboard.", "This corpus contains about 12 million (orthographic) word tokens taken from nearly 6000 short conversations, each on one of 4 See Grosjean (1980) for reference on the gating paradigm.", "We estimated n-gram models of several orders from the Fisher corpus using KenLM (Heafield 2011).", "5 The n-gram order was treated as a hyperparameter, and selected on the Training Set, as described below.", "An add-1 smoothed unigram model was also created from word frequencies in the Fisher corpus using SRILM (Stolcke 2002, Stolcke et al. 2011).", "The channel model describes the conditional distribution .", "1 | 1 / over what sequence of segments 1 a listener will perceive (e.g. [ SIg@(cid:244)Et ], shigarette ) given the full intended sequence 1 (e.g. [ sIg@(cid:244)Et ], cigarette ).", "We estimate this distribution using the diphone gating data in Section 3.2.", "We make the simplifying assumption that the channel distribution for segment is conditionally independent of all other ( ) given intended segments *1 ; ; +1 .", "By conditioning on adjacent segments, we can capture some eects of coarticulation on confusability.", "For example, nasals before oral stops are systematically likely to be misheard as having the same place of articulation as the stop: 1 = [ AnpA ] (alveolar nasal before labial stop) is more likely to be misperceived as 1 = [ AmpA ] (a labial nasal) than the reverse, and a confusion of [ n ] for [ m ] is comparatively less likely when [ n ] is between vowels as in [ AnA ] (Ohala 1990).", "For each gate ^3 ; 6 and for each diphone 1 2 , the response data from Section 3.2 induce a conditional frequency distribution over channel diphones .", "1 ; 2 | 1 ; 2 / .", "These frequency distributions were smoothed by adding a pseudocount to every channel diphone in every distribution; the distributions were then normalized to define a smoothed pair of diphone-to-diphone channel distributions .", "1 ; 2 | 1 ; 2 / .", "From the marginals of these distributions we constructed an approximation (Eq. 11) of the triphone-to-uniphone channel distribution via their geometric mean: 6 .", "due to intractability resulting from the normalizing constant in Equations 3 and 4. 6 We stop short of utilizing a full triphone-to-triphone channel distribution for tractability.", "We are primarily interested in using the channel model to define a ranking on the confusability of words, i.e. to determine which words are more or less confusable than others.", "This makes the channel model defined by Equations 11 and 12 not fully adequate.", "The diphone gating data were collected in a laboratory setting with rates of noise lower than for naturalistic speech.", "As a result, when the noise model is estimated from this data, it implies the absolute rate of accurate perception (as defined by Equation 3) is close to 1 for most words.", "This makes it hard for the Monte Carlo estimator defined in Equation 7 to determine stable rankings of confusability.", "In order to estimate rankings in a more stable manner, we introduce a model hyperparameter 0 < 1 , and define a new triphone-to-uniphone channel distribution by: .", "Here 1 is used to normalize the distributions; it is fully determined by for a particular distribution . | *1 ; ; +1 / . The term is used to increase the noise rate in the channel distributions. Note that two important features of the original triphone-to-uniphone distributions are maintained in the new model. First, the ratios of outcome probabilities within a single triphone distribution remain the same:", "The new model maximally agrees with the experimentally estimated distribution, diering only in the absolute amount of noise implied.", "7 The gating data does not provide information for estimating the probability of deletion or insertion errors.", "The final string-to-string channel model is defined by:", "This new channel model has an increased noise rate, making it easier to estimate stable rankings of confusability across words.", "The most similar previous channel model (Nor-ris and McQueen 2008) was based on Dutch gating data (Smits et al. 2003) comparable to that used here. Norris and McQueen (2008) did not construct a triphone-to-uniphone channel model, but made use of all gates and also allowed investigation of word boundary identification.", "Prior to any analyses, the Switchboard and Buckeye corpora were each randomly divided into evenly-sized Training and Test sets. The Training sets were used for exploratory statistical analyses, and for determining the values of several model hyperparameters. Following this, all parameters and statistical analyses were frozen, and preregistered with the Open Science Foundation. 8", "We perform several linear regressions in order to determine the eect of confusability on word duration. Contextual confusability is defined throughout using Equation 9. Word durations are log-transformed. The following covariates are standard in the literature, and are included in our analyses: speaker identity; part of speech; unigram prior surprisal; speech rate (the average rate of speech, in syllables per second, of the utterance containing the target word); word length (measured by number of segments and syllables). Several covariates that are included are more non-trivial, and are discussed in more detail below: segmental inventory factors; forward and backward surprisal; neighborhood size and log weighted neighborhood density; and unigram confusability.", "The segmental inventory variables code each word as a bag-of-segments.' A separate variable is defined for each phoneme in the segmental lexicon of the corpus.", "Each variable counts the number of times the corresponding phoneme occurs in the word.", "This is a variant of the baseline model 8 The preregistered analyses are available at the following link:", "used in previous work (Bell et al. 2009, Gahl et al. 2012).", "Certain segments take longer to pronounce than others, and the baseline model is used in case the confusability scores contain information about segment identities within a word.", "Note, however, that this is a conservative baseline, as segment identity has an eect on confusability; certain segments are, individually, harder to perceive than others.", "The model will be used to predict word durations after these segmental eects have been factored out.", "The forward language-model surprisal of a word is the surprisal of the word given preceding words in the context, and its backward surprisal is the surprisal given the following words in the context.", "Previous work in English has found backward surprisal to be a stronger predictor of spoken word duration than forward surprisal (Bell et al. 2009, Seyfarth 2014).", "Word confusability is expected to be correlated with surprisal, as more surprising words will be more dicult for the listener to recover in the presence of noise.", "Neighborhood size and log weighted neighborhood density are measures of the number of words adjacent (within Levenshtein distance 1) to a target word.", "These measures have been extensively studied as explanatory variables for word duration (see Gahl et al. 2012, Vitevitch and Luce 2016 for review), and are expected to correlate with word confusability: words with more neighbors are expected to be more confusable.", "We evaluate whether there is any residual eect of confusability beyond its impact on these variables.", "Unigram confusability measures the confusability of a word (Equation 9) given a unigram (word frequency) language model.", "This is a measure of the out-of-context confusability of a word, as discussed below.", "All variables are treated as fixed eects, and OLS is used for regressions.", "Confidence intervals and p-values are calculated using the bias-corrected bootstrap.", "Bootstrapping is used to address possible heteroskedasticity in the data.", "Random eects are not used due to potential issues arising in observational studies like the current one.", "In particular, random eects may correlate with predictors in an observational study, leading to incorrect estimates of uncertainty and the potential for bias (Bafumi and Gelman 2006, Wooldridge 2010).", "9 9 While Bafumi and Gelman (2006) propose a solution to 1997", "All analyses were performed in two ways: using the raw values for each variable, and with rank-transformed values for the continuous variables.", "The rank-transformed analyses provide a test of the papers hypothesis that greater (i.e. higher-rank) confusability is associated with longer (higher-rank) duration.", "The analyses eliminate the potentially questionable parametric assumption of a linear relationship between confusability (in bits) and this problem by decorrelating the fixed eect from random effects, the method produces identical estimates for the fixed effect, and is primarily useful when the random eect estimates themselves are of interest.", "duration (in log seconds).", "The rank-transformed analyses are intended as sensitivity analyses for the non-transformed analyses; if the two analyses provide dierent results, this provides evidence of a problem with the statistical methods.", "10 4 Results Four model hyperparameters were selected using the Switchboard and Buckeye Training sets: the order and direction of the n-gram model, the diphone-to-diphone channel pseudocounts, and the noise factor .", "11 Backward bigram language models were found to perform best on the Training sets, possibly due to distributional dierences between these corpora and the Fisher corpus, which was used for language model estimation.", "This is consistent with prior work in the area (e.g. Bell et al. 2009, Seyfarth 2014).", "Pseudocounts were set to 0 : 01 , and the term was set to 2 *6 .", "Figure 2 shows the frequency of model-computed confusability scores on the Switchboard and Buckeye Test sets.", "Figure 1 shows the relationship between confusability and word duration on the Test sets.", "co-10 Model and analysis code is available at: https:// github.com/emeinhardt/wr 11 The language model order was the same across all covariates where it was used.", "variates from Section 3.5, except for unigram confusability.", "This allows us to determine whether there is an eect of word confusability on duration, independent of whether this eect is sensitive to context.", "Greater confusability is associated with longer word durations on both the Switchboard and Buckeye Training sets (p<0.001 for all analyses).", "Table 1 shows results of the same analyses performed on the Test sets.", "The eects replicate on the Test sets, and are qualitatively similar when continuous variables are rank-transformed.", "These analyses provide evidence that higher confusability is associated with longer word duration.", "In the second set of analyses, we investigate whether a context-sensitive measure of confusability is necessary for explaining this eect, or whether an out-of-context measure suces.", "In order to do this, we include unigram confusability as a covariate in the analyses, in addition to the previous covariates.", "Unigram confusability is identical to our target measure of word confusability, except that the language model is replaced with a unigram model.", "The measure calculates a word's confusability based on its acoustic properties and its phonological similarity to other words.", "It therefore does not take into account top-down expectations based on a word's context.", "After controlling for unigram confusability, contextual confusability remains associated with longer word durations on both the Switchboard and Buckeye Training sets (p<0.001 for all analyses).", "Table 2 shows the same analyses on the Test sets.", "The eects replicate on both Test sets, and similarly for the rank-transformed analyses.", "We report the results of several unplanned analyses.", "Confidence intervals and p-values reported in this section are non-bootstrapped.", "We evaluate the eect of neighborhood density on word duration in the Test sets.", "Weighted neighborhood density is associated with lower word duration in all analyses.", "(See Appendix B.)", "The results provide evidence that the neighborhood density eects identified in previous work remain qualitatively similar, after adjusting for contextual confusability.", "We draw two main conclusions from our results.", "First, we provide evidence that speakers lengthen words that are more confusable.", "This supports the hypothesis that variation and structure in natural languages are shaped not only by pressures for ecient signals, but also pressures for eective communication of the speaker's intended message in the face of noise and uncertainty (Lindblom 1990, Lindblom et al. 1995, Hall et al. 2018).", "Second, we provide large scale, naturalistic evidence for reduction and enhancement driven by contextual confusability.", "Conversational context may make a speaker's intended message easier or harder to recover from ambiguous acoustics.", "The results suggest that speakers modulate their utterances in a manner that is sensitive to this eect of context, increasing duration when context makes the intended utterance harder to recover.", "The results complement previous work which demonstrates reduction and enhancement driven by contextual predictability (see e.g. Seyfarth 2014).", "They also complement work which shows confusability-driven reduction and enhancement in targeted experimental manipulations (see e.g. Kirov and Wilson 2012, Schertz 2013, Seyfarth et al. 2016, Buz et al. 2016).", "The study may help to resolve questions raised by previous work examining the eects of neighborhood density.", "That work found negative or null 1999 associations between word duration and neighborhood density and related measures (e.g. Gahl et al. 2012, Gahl and Strand 2016).", "The proposed confusability measure diers from neighborhood density in three ways: it is sensitive to edit type, words greater than two edits away, and top-down eects.", "These dierences may account for the discrepancy in the eects of neighborhood density and confusability.", "Under one hypothesis, neighborhood density eects reflect spillover of activation between words with overlapping subsequences of speech sounds (e.g. Gahl and Strand (2016), Chen and Mirman (2012), Dell (1986), Vitevitch and Luce (2016)).", "This spillover is potentially sensitive only to Levenshtein distance.", "In contrast, confusability is sensitive to fine-grained perceptual structure.", "When lexical neighbors dier in perceptually distinct segments, they will typically be non-confusable.", "A second hypothesis is that the discrepancy arises from the role of top-down expectations in confusability.", "Neighborhood eects are type-level phenomena: a word has the same neighbors no matter what context it appears in.", "Confusability, on the other hand, is a token-level phenomenon: contextual expectations will change the confusability of a word.", "Stable properties of the lexicon may determine which segment sequences undergo frequent articulatory rehearsal, and are reduced as a consequence.", "The confusability measure picks up on context-dependent variation, which rehearsal processes in the articulatory system may not be sensitive to.", "The study suggests several directions for future work.", "First, while there are advantages of using naturalistic speech data (Gahl et al. 2012), it would be desirable to have experimental validation of the confusability measure and its relationship to speaker reduction.", "Second, a lower-perplexity neural language model would provide better estimates of a word's confusability, but would first need to be validated on speech data.", "Third, a more sophisticated channel model would allow for insertions and deletions, and better capture transitional coarticulatory cues (Wright 2004).", "Because speakers enhance or reduce their speech in ways other than changing duration (see e.g. Kirov and Wilson 2012, Schertz 2013, Seyfarth et al. 2016, Buz et al. 2016), such a model would permit investigation of targeted enhancement and reduction in naturalistic data.", "We thank Uriel Cohen Priva and Scott Seyfarth for help reproducing their analyses.", "We also thank Silas Horton, Todd Williams, and Thanh Nguyen for computing support.", "The Titan V used for this research was donated by the NVIDIA Corporation." ]
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "other", "result", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "The success of large-scale contextual language models has attracted great interest in probing what is encoded in their representations.", "In this work, we consider a new question: to what extent contextual representations of concrete nouns are aligned with corresponding visual representations?", "We design a probing model that evaluates how effective are text-only representations in distinguishing between matching and non-matching visual representations.", "Our findings show that language representations alone provide a strong signal for retrieving image patches from the correct object categories.", "Moreover, they are effective in retrieving specific instances of image patches; textual context plays an important role in this process.", "Visually grounded language models slightly outperform text-only language models in instance retrieval, but greatly under-perform humans.", "We hope our analyses inspire future research in understanding and improving the visual capabilities of language models.", "Contextual language models trained on text-only corpora are prevalent in recent natural language processing (NLP) literature (Devlin et al., 2019; Liu et al., 2019b; Lan et al., 2019; Raffel et al., 2019).", "Understanding what their representations encode has been the goal of a number of recent studies (Belinkov and Glass, 2019; Rogers et al., 2020).", "Yet, much is left to be understood about whetheror to what extentthese models can encode visual information.", "We study this problem in the context of language grounding (Searle et al., 1984; Harnad, 1990; McClelland et al., 2019; Bisk et al., 2020; Bender and Koller, 2020), empirically investigating whether text-only representations can naturally be connected to the visual domain, without explicit visual supervision in pre-training.", "We argue that context plays a significant role in this investigation.", "In language, the ability to form context-dependent representations has shown to be crucial in designing pre-trained language models (Peters et al., 2018; Devlin et al., 2019).", "This is even more important for studying grounding since many visual properties depend strongly on context (Sadeghi and Farhadi, 2011).", "For instance, a flying bat shares very few visual similarities with a baseball bat ; likewise, a dog sleeping looks different from a dog running .", "While alignments between language representations and visual attributes have attracted past interest (Leong and Mihalcea, 2011; Lazaridou et al., 2014, 2015; Lucy and Gauthier, 2017; Collell Talleda et al., 2017), the role of context has been previously overlooked, leaving many open questions about what visual information contextual language representations encode.", "In this work, we introduce a method for empirically probing contextual language representations and their relation to the visual domain.", "In general, probing examines properties for which the models are not designed to predict, but can be encoded in their representations (Shi et al., 2016; Rogers et al., Figure 2: Examples of retrieved image patches from text-only representations using our probe.", "2020).", "Here, our probe is a lightweight model trained to map language representations of concrete objects to corresponding visual representations.", "The probe (illustrated in Figure 1) measures whether language representations can be used to give higher scores to matching visual representations compared to mismatched ones.", "Textual and visual representations are collected from image captioning data, where we find pairs of concrete words (e.g. cat or kite ) and their corresponding image patches.", "The probe is trained using a contrastive loss (Oord et al., 2018) that gauges the mutual information between the language and visual representations.", "Given text-only representations of an unseen object category, the trained probe is evaluated by retrieving corresponding image patches for categories it has never seen during training.", "Qualitative examples can be found in Figure 2. We examine representations from a number of contextual language models including BERT, RoBERTa, ALBERT and T5 (Devlin et al., 2019; Liu et al., 2019b; Lan et al., 2019; Raffel et al., 2019).", "For all of them, we find that interesting mappings can be learned from language to visual representations, as illustrated in Figure 2. In particular, using its top-5 predictions, BERT representations retrieve the correctly paired visual instance 36% of the time, strongly outperforming non-contextual language models (e.g., GloVe (Pennington et al., 2014)).", "Moreover, for all examined models, image patches of the correct object category are retrieved with a recall of 84-90%.", "Our experiments are backed by a control task where visual representations are intentionally mismatched with their textual counterparts.", "Retrieval performance drops substantially in these settings, attesting the selectivity of our probe.", "Moreover, we measure the impact of context on retrieval at the instance level.", "Contextual models substantially outperform non-contextual embeddings, but this difference disappears as context is gradually hidden from contextual models.", "When the context includes adjectives directly associated with the noun being inspected, we find significantly better instance retrieval performance.", "Finally, we investigate a number of grounded language modelssuch as LXMERT and VILBERT (Tan and Bansal, 2019; Lu et al., 2019, 2020) that see visual data in training, finding them to slightly outperform text-only models.", "Contrasting the learned mappings with human judgment, the examined visually grounded language models significantly underperform human subjects, exposing much room for future improvement.", "What is encoded in language representations?", "Understanding what information NLP models encode has attracted great interest in recent years (Rogers et al., 2020).", "From factual (Petroni et al., 2019; Jawahar et al., 2019; Roberts et al., 2020) to linguistic (Conneau et al., 2018; Liu et al., 2019a; Talmor et al., 2019) and commonsense (Forbes et al., 2019) knowledge, a wide set of properties have been previously analysed.", "We refer to Belinkov and Glass (2019) and Rogers et al. (2020) for a more comprehensive literature review.", "A common approach, often used for inspecting contextual models, is probing (Shi et al., 2016; Adi et al., 2016; Conneau et al., 2018; Hewitt and Liang, 2019).", "In short, it consists of using supervised models to predict properties not directly inferred by the models.", "Probing is typically used in settings were discrete, linguistic annotations such as parts of speech are available.", "Our approach differs from previous work in both scope and methodology, using a probe to measure similarities with continuous, visual representations.", "Closer to our goal of better understanding grounding is the work of Cao et al. (2020), that design probes for examining multi-modal models.", "In contrast, our work examines text-only models and does not rely on their ability to process images.", "Language grounding.", "A widely investigated research direction aims to connect natural language to the physical world (Bisk et al., 2020; McClelland et al., 2019; Tan and Bansal, 2019; Lu et al., 2019, 2020; Chen et al., 2020; Li et al., 2020; Tan and Bansal, 2020).", "This is typically done through training and evaluating models in tasks and datasets where both images and text are used, such as visual question answering (Antol et al., 2015; Hudson and Manning, 2019).", "A number of previous work have investigated mappings between language and visual representations or mappings from both to a shared space.", "Leong and Mihalcea (2011) investigate semantic similarities between words and images through a joint latent space, finding a positive correlation with human rated similarities.", "Similarly, Silberer and Lapata (2014) builds multi-modal representations by using stacked autoencoders.", "Socher et al. (2013) and Lazaridou et al. (2014) show that a shared latent space allows for zero-shot learning, demonstrating some generalization to previously unseen objects.", "Lazaridou et al. (2015) construct grounded word representations by exposing them to aligned visual features at training time.", "Lucy and Gauthier (2017) investigate how well word representations can predict perceptual and conceptual features, showing that a number of such features are not adequately predicted.", "Collell Talleda et al. (2017) uses word embeddings to create a mapping from language to visual features, using its outputs to build multimodal representations.", "While our conclusions are generally aligned, our work differs from these in two important ways.", "Firstly, previous work studies context-independent word representations, while our method allows analysing language representations that depend on the context they are used in.", "We use this to examine a number of trained contextual language models.", "Secondly, while most previous work uses these mappings for building better grounded representationsoften training the language models in the processour work focuses on using them as a tool for inspecting already trained models, without modifying them.", "Zero-shot detection.", "Recent work attempts to build object detectors that generalize to unseen object categories, by conditioning the predictions on word embeddings of the class (Rahman et al., 2018; Demirel et al., 2018), visual attributes (Demirel et al., 2018; Zhu et al., 2019; Mao et al., 2020) or text descriptions (Li et al., 2019).", "In our work, we use language representations of words in context (captions) as inputs.", "More fundamentally, although our experiments on unseen object categories can be used for zero-shot detection, we differ from previous work in motivation, which translates to further experimental differences.", "Given our goal to analyse already trained models (as opposed to learning a generalizable object detector), we train nothing apart from a lightweight probe in our analyses.", "Our main goal is to characterize the relation between contextual language representations and the visual domain.", "We first describe how language and visual representations of concrete concepts can be collected from image captioning datasets (3.1).", "Next, we design a probe that examines the relation between these representations, learning a mapping from language to visual representations (3.2).", "An overview is illustrated in Figure 3. 3.1 Collecting data At the center of our analysis are contextual representations of visually observable nouns, which we refer to as object categories .", "Here, we describe how pairs of matching language and visual representations ( (cid:96), v ) are collected from image captioning datasets.", "Language representations ( (cid:96) ) are extracted from image captions.", "To accommodate recent language models and tokenizers, we allow such representations to be contextual and have variable length, 1 where each element in (cid:96) has a fixed dimension d L .", "The length of the representations (cid:96) for each object category is determined by the tokenizer.", "We treat a model that extracts representations from text as a function that maps a string o (here, object categories) in a larger textual context c (here, captions) to the representation (cid:96) = ( o | c ) .", "This formalism also encompasses non-contextual embeddings, with ( o | c ) = ( o ) .", "Visual representations ( v ) are extracted from objects in images using a trained object detection model .", "For simplicity, we use v = ( o | i ) to refer to the extracted features corresponding to the detected object from image i that is both 1) classified as a member of object category o and 2) assigned the highest confidence by the model among those.", "Visual representations ( o | i ) have fixed dimensions d V .", "Paired data ( (cid:96), v ) with aligned representations is collected from an image captioning dataset with paired captions c and images i .", "For each image i , and each object o detected by the object detector , if o appears in some associated caption c , we include the pair ( (cid:96) = ( o | c ) , v = ( o | i )) .", "To avoid having multiple pairs ( (cid:96), v ) associated with 1 Conforming with sub-word tokenizers or multi-word expressions such as fire extinguisher .", "the same visual instance, we ensure that at most one pair ( (cid:96), v ) per object category in each image is included.", "In this work, we use the 1600 object categories from Faster R-CNN (Ren et al., 2015) trained on Visual Genome (Krishna et al., 2017).", "At a high level, language representations are inspected via a shallow neural probing model (Figure 3).", "In training, the probe learns a mapping from language to visual representations (3.2.1).", "We then evaluate the quality of these mappings by measuring how well they can be used to retrieve matching image patches (3.2.2).", "The probe is optimized to maximally preserve the mutual information between the distributions of language and visual representations.", "This is done via InfoNCE (Oord et al., 2018) (Equation 1), a loss function commonly used for retrieval and contrastive learning (Le-Khac et al., 2020).", "We note the mutual information is a bottleneck on how well two random variables can be mapped to one another, given its relation to conditional entropy.", "In training, the probe with parameters takes inputs (cid:96) and estimates visual representations v = ( (cid:96) ) with the same dimensionality d V as the corresponding visual representations v .", "For each pair ( (cid:96), v ) , this loss relies on a set of dis-tractors VNEG (cid:96) , containing visual representations which are not aligned with the language representations (cid:96) .", "The representations in VNEG (cid:96) are used for contrastive learning and are drawn from the same visual model, using different objects or images.", "Minimizing this loss drives the dot product (cid:104) ( (cid:96) ) , u (cid:105) to be maximal for u = v and small for all u VNEG (cid:96) .", "In other words, training pushes the estimates v = ( (cid:96) ) to be maximally useful in discerning between positive and negative visual pairings.", "In practice, the expectation in Equation 1 is estimated over a batch of size B with samples of aligned language and visual representations (( (cid:96) 1 , v 1 ) , . . . , ( (cid:96) B , v B )) .", "For efficiency, we use other visual representations in the batch as distrac-tors for a given representation ( VNEG i = { v j , j (cid:54) = i } ).", "Thus, only the dot products (cid:104) v i = ( (cid:96) i ) , v j (cid:105) are needed to calculate the loss, as illustrated in Figure 3. Importantly, we note that the models used to extract representations are not trained or changed in any way during the probing procedure.", "For evaluation, we compute recall in retrieving image patches given objects in text, using new pairs of language and visual representations from unseen images and captions.", "Consider the set of all collected visual representations for evaluation, V .", "For each language representation (cid:96) , we use the trained probe to generate our estimate v = ( (cid:96) ) , and find the instances v (cid:48) V that maximize the dot product (cid:104) v , v (cid:48) (cid:105) .", "Given an integer k , we consider recall at k at both instance and category levels.", "Formally: Instance Recall (IR@k) measures how frequently the correct visual instance is retrieved.", "More precisely, it is the fraction of pairs ( (cid:96), v ) where the instance v is in the topk visual representations retrieved from v = ( (cid:96) ) .", "Category Recall (CR@k) measures how frequently instances of the correct object category are retrieved.", "More precisely, it is the fraction of pairs ( (cid:96), v = ( o | i )) where any of the topk retrieved visual representations v (cid:48) = ( o (cid:48) | i (cid:48) ) belongs to the same object category as v (i.e. o (cid:48) = o ).", "Higher IR and CR scores indicate better performance and, by definition, CR@k cannot be smaller than IR@k.", "These metrics form the basis of our evaluation, and we take multiple steps to promote experimental integrity.", "Learned mappings are evaluated in two scenarios, where pairs ( (cid:96), v ) are collected using object categories either seen or unseen by the probe during training.", "The later is the focus of the majority of our experiments.", "For both scenarios, images and captions have no intersection with those used in training.", "Further, we create multiple seen / unseen splits from our data, training and testing on each split.", "We then report average and standard deviation of the recall scores across 5 splits.", "The majority of examined models are contextual representation models based on the transformer architecture (Vaswani et al., 2017) trained on text-only data.", "We examine the base ( d L = 768 ) and large ( d L = 1024 ) versions of BERT uncased, RoBERTa, ALBERT and T5 (Devlin et al., 2019; Liu et al., 2019b; Lan et al., 2019; Raffel et al., 2019).", "For T5, we also examine the small version, with d L = 512 .", "For all these models, we use pre-trained weights from the HuggingFace Transformers library (Wolf et al., 2020) 2 , and use representations from the last layer.", "Additionally, we inspect non-contextual representations using GloVe embeddings (Pennington et al., 2014), using embeddings trained on 840 billion tokens of web data, with d L = 300 and a vocabulary size of 2.2 million.", "3 4.2 Vision models As is common practice in natural language grounding literature (Anderson et al., 2018; Tan and Bansal, 2019; Su et al., 2020; Lu et al., 2020), we use a Faster R-CNN model (Ren et al., 2015) trained on Visual Genome (Krishna et al., 2017) to extract visual features with d V = 2048 .", "We use the trained network provided by Anderson et al. (2018) 4 , and do not fine-tune during probe training.", "We collect representations from two image captioning datasets, Flickr30k (Young et al., 2014), with over 150 thousand captions and 30 thousand images, and MS-COCO (Lin et al., 2014), with 600 thousand captions and 120 thousand images", "in English.", "The larger MS-COCO is the focus of the majority of our experiments.", "We build disjoint training, validation and test sets from the aggregated training and validation image captions.", "To examine generalization to new objects, we test on representations from both seen or unseen object categories, built from images and captions not present in the training data.", "From the 1600 object categories of our object detector, we use 1400 chosen at random for training and seen evaluation.", "The remaining 200 are reserved for unseen evaluation.", "Furthermore, we train and test our probe 5 times, each with a different 1400/200 split of the object categories.", "For each object category split, we build validation and test sets with sizes proportional to the number of object categories present: seen test sets contain 7000 representation pairs and unseen test sets contain 1000 pairs.", "The validation sets used for development consists of seen object categories, with the same size as the seen test sets.", "All remaining data is used for training.", "Contrasting the probe performance with a control task is central to probing (Hewitt and Liang, 2019).", "We follow this practice by learning in a control task where representations are mapped to permuted visual representations.", "More precisely, we replace each visual representation v = ( o | i ) with an-other v (cid:48) = ( o (cid:48) | i (cid:48) ) chosen at random from an object category o (cid:48) = f ( o ) that depends on the original object category o .", "Here, f dictates a random permutation of the object categories.", "For instance, visual representations of the original category cat are replaced with representations from a second category dog ; representations from the category dog are replaced by those from tree , and so on.", "Our probe consists of a shallow neural model.", "To process the naturally sequential language representations (cid:96) , we use a single-layered model with LSTM cells (Hochreiter and Schmidhuber, 1997) with 256 hidden units and only unidirectional connections.", "The outputs are then projected by a linear layer to the visual space.", "The probe is trained using Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.0005, weight decay of 0.0005 and default remaining coefficients ( 1 =0 .", "9 2 =0 .", "999 and (cid:15) =10 8 ).", "We train with a batch size of 3072, for a total of 5 epochs on one GPU.", "At a high level, our experiments show that", "i) language representations are strong signals for choosing between different visual features both at the instance and category levels;", "ii) context is largely helpful for instance retrieval;", "iii) InfoNCE works better than other studied losses, and some consistency is found across datasets;", "iv) visually grounded models outperform text-only models;", "v) all models lag greatly behind human performance.", "We provide further details in 5.1-5.3.", "Table 1 summarizes instance and category retrieval performance for different language models and control experiments, using test data with unseen object categories.", "Our results indicate that language representations alone are strong signals for predicting visual features: for all examined language models, recall scores are significantly better than random and control.", "Qualitative results can be found in Figure 2. We note that category recall scores are significantly higher than instance recall.", "This is reasonable since there are many more positive alignments at the category level.", "Compared to other inspected models, BERT base shows the best results for instance retrieval, and will be the focus of further analyses.", "Contrasting the performance of non-contextual representations from GloVe with that of contextual models shows that context considerably affects # Experiment IR@1 IR@5 CR@1 0 Random 0.1 0.1 0.1 0.1 1.2 0.1 1 Control 1.6 0.1 7.8 0.6 41.3 5.6 2 BERT base 14.9 0.3 43.4 0.8 90.4 0.4 Table 2: Average instance recall (IR@k) and category recall (CR@k) for test sets with seen object categories.", "instance recall.", "For instance, GloVe and BERT base yield 5.1% to 12.0% IR@1, respectively.", "This gap is sensible, since a non-contextual representation should not be able to discern between distinct image patches depicting the same object category.", "While still lagging behind a number of contextual representations, we observe strong category recall for GloVe, which we hypothesize is due to the ease in predicting the correct output category since input representations are fixed, independently of context.", "We further explore the role of context in 5.3.", "Moreover, Table 2 shows performance on test sets with seen object categories.", "Comparing with Table 1, BERT representations show good generalization to unseen object categories.", "This generalization is consistent with previous observations on zero-shot experiments, using non-contextual word embeddings (Lazaridou et al., 2014).", "Finally, our results attest to the selectivity of the probe: for the control task with permuted representations (Tables 1 and 2, Row 1), substantially lower performance is found.", "This gap is particularly high for unseen object categories, where only sensibly paired representations perform better than chance.", "Loss ablations.", "In addition to InfoNCE, we ablate on 3 other loss functions: mean squared error (MSE), negative cosine similarity, and triplet loss 5 .", "The results for unseen object categories are summarized in Table 3: while all losses yield better than random results, InfoNCE performs the best.", "This 5 L trip = E (cid:96) [max( (cid:96),v (cid:48) (cid:96) (cid:96),v (cid:96) + , 0)] , where the margin is set to 1.0, v (cid:48) VNEG and (cid:96),v = cos( ( (cid:96) ) , v (cid:96) ) .", "Dataset # Images / # Captions IR@1 CR@1 MS-COCO 120k / 600k 12.0 1.0 88.1 2.4 Flickr30k 30k / 150k 9.8 0.9 85.6 3.4 Table 4: Comparison for different datasets in retrieval performance of unseen object categories with representations from BERT base.", "validates the theoretical intuition that InfoNCE would be advantageous, as it allows for directly optimizing the probe to maximally preserve the mutual information between the representations, a bottleneck on the remaining entropy after the mapping.", "Data ablations.", "In addition to MS-COCO, which is the used for the majority of our experiments, we show results with data collected from the smaller Flickr30k.", "We report the test retrieval performance for unseen object categories using representations from BERT base in Table 4.", "These results indicate consistency across the datasets, despite their considerable difference in size.", "Influence of context.", "We study whether the gap in instance retrieval performance from GloVe and BERT comes from the use of context or intrinsic differences of these models.", "This is explored by measuring how instance recall varies as we probabilistically mask out context tokens in the captions at different rates.", "As shown in Figure 4, performance drops substantially as more tokens are masked; in the limit where only the object tokens remain (i.e. the fraction of context masked is 1.0), BERT's representations perform marginally worse than the non-contextual GloVe embeddings.", "Figure 5 compares instance-level retrieval accuracy for representations when objects have none or at least one adjective associated with them, as processed by the dependency parser from AllenNLP library (Gardner et al., 2018).", "These adjectives commonly include colors (e.g. white , black ) and sizes (e.g. big , small ), indicating contextual information.", "The results show clear gains in instance recall when objects are accompanied by adjectives, confirming that context enables more accurate retrieval.", "We refer back to Figure 2 for qualitative results on the influence of context.", "models, namely LXMERT, VL-BERT (base and large) and VILBERT-MT (Tan and Bansal, 2019; Su et al., 2020; Lu et al., 2019, 2020)).", "While these models typically process visual and textual inputs jointly, we adapt them to include only the language branches, restricting attention to the text inputs.", "For all these models, we use the code and weights made public by the authors.", "6 The results, summarized in Table 5, show that grounded models slightly outperform the ungrounded BERT base.", "At the category level, we see small relative differences in performance between grounded and ungrounded models.", "At the instance level, the relative improvement is higher, especially for VILBERT-MT, while still much lower than human performance as shown in the next experiment.", "Human performance.", "Finally, we contrast the examined models with human performance in retrieving visual patches given words in sentences.", "Such a comparison helps disentangling the quality of the learned mappings with possible incidental matches, i.e., language representations with more 6 github.com/airsplay/lxmert; github.com/jackroos/VL-BERT; github.com/facebookresearch/vilbert-multi-task Model IR@1 IR@5 CR@1 BERT base 12.0 1.0 36.0 0.9 88.1 2.4 LXMERT 13.7 1.0 39.2 2.5 90.3 1.2 VL-BERT base 12.5 1.0 37.6 1.1 88.7 1.4 VL-BERT large 12.6 1.1 37.5 2.4 88.7 2.3 VILBERT-MT 15.4 1.2 42.4 2.7 90.8 1.9 Table 5: Retrieval performance for unseen object categories, using representations from BERT and a number of grounded language models.", "than one positive visual match.", "As they are also affected by these artifacts, human subjects offer a sensible point of comparison.", "In virtue of the limited human attention, we evaluate on a reduced test set with unseen object categories, randomly sampling 100 data points from it.", "For each object in a sentence, subjects are presented with 100 image patches and asked to choose the closest match.", "We collect over 1000 annotations from 17 in-house annotators, with at least 30 annotations each.", "Our results are shown in Table 6.", "On the same test set, we find a large gap from learned mappings for both grounded and ungrounded models to human performance, exposing much room for improvement.", "Understanding the similarities between language and visual representations has important implications on the models, training paradigms and benchmarks we design.", "We introduced a method for empirically measuring the relation between contextual language representations and corresponding visual features.", "We found contextual language models to be usefulwhile far from human subjectsin discerning between different visual representations.", "Moreover, we explored how these results are in-fluenced by context, loss functions, datasets and explicit grounding during training.", "Altogether, we hope that our new methodological and practical insights foster further research in both understanding the natural connections between language and visual representations and designing more effective models at the intersection the two modalities.", "This research was supported by the grants from ONR N00014-18-1-2826, DARPA N66001-19-2-4031, 67102239, NSF III-1703166, IIS-1652052, IIS-17303166, and an Allen Distinguished Investigator Award and a Sloan Fellowship.", "Authors would also like to thank Raymond J. Mooney and members of the UW-NLP, H2Lab and RAIVN Lab at the University of Washington for their valuable feedback and comments." ]
[ "abstain", "objective", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "other", "other", "other", "method", "objective", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "result", "objective", "objective", "other", "other" ]
[ "How do masked language models (MLMs) such as BERT learn contextual representations?", "In this work, we analyze the learning dynamics of MLMs.", "We find that MLMs adopt sampled embeddings as anchors to estimate and inject contextual semantics to representations, which limits the efficiency and effectiveness of MLMs.", "To address these issues, we propose TACO, a simple yet effective representation learning approach to directly model global semantics.", "TACO extracts and aligns contextual semantics hidden in contextualized representations to encourage models to attend global semantics when generating contextualized representations.", "Experiments on the GLUE benchmark show that TACO achieves up to 5x speedup and up to 1.2 points average improvement over existing MLMs.", "The code is available at https:// github.com/FUZHIYI/TACO .", "In the age of deep learning, the basis of representation learning is to learn distributional semantics.", "The target of distributional semantics can be summed up in the so-called distributional hypothesis (Harris, 1954): Linguistic items with similar distributions have similar meanings .", "To model similar meanings, traditional representation approaches (Mikolov et al., 2013; Pennington et al., 2014) (e.g., Word2Vec) model distributional semantics by defining tokens using context-independent (CI) dense vectors, i.e., word embeddings, and directly aligning the representations of tokens in the same context.", "Nowadays, pre-trained language models (PTMs) (Devlin et al., 2019; Radford et al., 2018; Qiu et al., 2020) expand static embeddings into contextualized representations where each token has two kinds of representations: context-independent embedding, and context-dependent Equal Contribution This work is done at ByteDance AI Lab.", "(CD) dense representation that stems from its embedding and contains context information.", "Although language modeling and representation learning have distinct targets, masked language modeling is still the prime choice to learn token representations with access to large scale of raw texts (Pe-ters et al., 2018; Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020).", "It naturally raises a question: How do masked language models learn contextual representations?", "Following the widely-accepted understanding (Wang and Isola, 2020), MLM optimizes two properties, the alignment of contextualized representations with the static embeddings of masked tokens, and the uniformity of static embeddings in the representation space.", "In the alignment property, sampled embeddings of masked tokens play as an anchor to align contextualized representations.", "We find that although such local anchor is essential to model local dependencies, the lack of global anchors brings several limitations.", "First, experiments show that the learning of contextual representations is sensitive to embedding quality, which harms the efficiency of MLM at the early stage of 2701 training.", "Second, MLM typically masks multiple target words in a sentence, resulting in multiple embedding anchors in the same context.", "This pushes contextualized representations into different clusters and thus harms modeling global dependencies.", "To address these challenges, we propose a novel T okenA lignment C ontrastive O bjective ( TACO ) to directly build global anchors.", "By combing local anchors and global anchors together, TACO achieves better performance and faster convergence than MLM.", "Motivated by the widely-accepted belief that contextualized representation of a token should be the mapping of its static embedding on the contextual space given global information, we propose to directly align global information hidden in contextualized representations at all positions of a natural sentence to encourage models to attend same global semantics when generating contextualized representations.", "Concerning possible relationships between context-dependent and context-independent representations, we adopt the simplest probing method to extract global information via the gap between context-dependent and context-independent representations of a token for simplification, as shown in Figure 1. To be specific, we define tokens in the same context (text span) as positive pairs and tokens in different contexts as negative pairs, to encourage the global information among tokens within the same context to be more similar compared to that from different contexts.", "We evaluate TACO on GLUE benchmark.", "Experiment results show that TACO outperforms MLM with average 1.2 point improvement and 5x speedup (in terms of sample efficiency) on BERT-small, and with average 0.9 point improvement and 2x speedup on BERT-base.", "The contributions of this paper are as follows.", "We analyze the limitation of MLM and propose a simple yet efficient method TACO to directly model global semantics.", "Experiments show that TACO outperforms MLM with up to 1.2 point improvement and up to 5x speedup on GLUE benchmark.", "The key idea of MLM is to randomly replace a few tokens in a sentence with the special token [MASK] and ask a neural network to recover the original tokens.", "Formally, we define a corrupted sentence as x 1 , x 2 , , x L , and feed it into a Transformers encoder (Vaswani et al., 2017), the hidden states from the final layer are denoted as h 1 , h 2 , , h L .", "We denote the embeddings of the corresponding original tokens as e 1 , e 2 , , e L .", "The MLM objective can be formulated as: LMLM ( x ) = 1 |M| (cid:88) i M log exp( m i e i ) (cid:80) |V| k =1 exp( m i e k ) (1) where M denotes the set of masked tokens and |V| is the size of vocabulary V .", "m i is hidden state of the last layer at the masked position, and can be regarded as a fusion of contextualized representations of surrounding tokens.", "Following the widely-accepted understanding (Wang and Isola, 2020), Eq.1 optimizes: (1) the alignment between contextualized representations of surrounding tokens and the context-independent embedding of the target token and (2) the uniformity of representations in the representation space.", "In the alignment part, MLM relies on sampled contextual-independent embeddings of masked tokens as anchors to align contextualized representations in contexts, as shown in Figure 2. Local anchor is the key feature of MLM.", "Therefore, the learning of contextualized representations heavily relies on embedding quality.", "In addition, multiple local anchors in a sentence tend to pushing contextualized representations of surrounding tokens closer to different clusters, encouraging models to attend local dependencies where global semantics are neglected.", "To verify our understanding, we conduct comprehensive experiments to investigate: How does embedding anchor affect the learning dynamics of MLM?", "We re-train a BERT-small (Devlin et al., 2019) model with the MLM objective solely and analyze the changes in its semantic space during 2702 pre-training.", "Contextualized representation evaluation.", "In general, if contextualized representations are well learned, the contextualized representations in a same context will have higher similarity than that of in different contexts.", "Naturally, we use the gap between intra-sentence similarity and inter-sentence similarity to evaluate contextual information in contextualized representations.", "We call this gap as contextual score .", "The similarity can be evaluated via probing methods like L2 distance, cosine similarity, etc.", "We observe similar findings on different probing methods and only report cosine similarity here for simplification.", "Figure", "3(b) shows how contextual score changes during training.", "Other statistical results are listed in Appendix A. Embedding similarity evaluation.", "To observe how sampled embeddings affect contextualized representation learning, we evaluate the embedding similarity between co-occurrent tokens.", "Motivated by the target that co-occurrent tokens should have similar representations, we use the similarity score calculated by cosine similarity between co-occurrent words labeled by humans (sampled from the WordSim353 dataset (Agirre et al., 2009)) as the evaluation metric.", "Figure", "3(a) shows how embedding similarity between co-occurrent tokens changes during training.", "The learning of contextualized representations heavily relies on embeddings similarity.", "As we can see from Figure", "3(a), the embedding similarity between co-occurrent tokens first decreases during the earliest stage of pre-training.", "It is because all embeddings are randomly initialized with the same distribution and the uniformity feature in MLM pushes tokens far away from each other, thus resulting in the decrease of embedding similarity.", "Meanwhile, the contextual score, i.e., the gap between intra-context similarity and inter-context similarity in Figure", "3(b), does not increase at the earliest stage of training.", "It shows that random embeddings provide little help to learn contextual semantics.", "During 5K-10K iterations, only when embeddings become closer, contextualized representations in the same context begin to have similar features.", "At this stage, the randomly sampled embeddings from the same sentence, i.e., the same context, usually have similar representations and thus MLM can push contextualized tokens closer to each other.", "We further verify the effects of embedding quality in Figure 4.", "To this end, we train two BERT models whose embedding matrices are frozen and initialized with the ones from different pre-training stage.", "We can see the model initialized with random embedding fails to teach contextualized representations to attend sentence meanings and representations from different contexts have almost the same similarity.", "However, the variant with well-trained but frozen embeddings learns to distinguish different contexts early at around 4k steps.", "These statistical observations verify that embedding anchors bring the efficiency and effectiveness problem.", "Surprisingly, embedding anchors reduce global contextual information in contextualized representation at the later stage of training.", "Figure", "3(a) shows that embedding similarity begins to drop after 8k steps.", "It shows that the model learns the specific meanings of co-occurrent tokens and begins to push them a little bit far away.", "Since MLM adopts local anchors, these local em-2703 beddings push contextualized representations into different clusters.", "The contextual score begins to decrease too.", "This phenomenon proves the embedding bias problem where the learning of contextualized representations is decided by the selected embeddings where the global contextual semantics are neglected.", "To address the challenges of MLM, we propose a new method TACO to combine global anchors and local anchors.", "We first introduce TC, a token-alignment contrastive loss which explicitly models global semantics in Section 3.1, and combine TC with MLM to get the overall objective for training our TACO model in Section 3.2.", "To model global semantics, the objective is expected to be capable of explicitly capturing information shared between contextualized representation of tokens within the same context.", "Therefore, a natural solution is to maximize the mutual information of contextual information hidden in contextualized representations in the same context.", "To extract shared contextual information, we first define a rule to generate contextual representations of tokens by combining embeddings and global information.", "Formally, h i = f ( e i , g ) .", "where f is a probing algorithm and e i is the embedding and g is the global bias of a concrete context.", "In this paper, we adopt a straightforward probing method to get global information hidden in contextualized representations, where g i = h i e i .", "Given contextualized representations of an token x and its nearby tokens c in the same context, we use g x and g c to represent global semantics hidden in these representations.", "The mutual information between the two global bias g x and g c is I ( g x , g c ) = (cid:88) g x , g c p ( g x , g c ) log p ( g x | g c ) p ( g x ) (4) According to van den Oord et al. 2019, the InfoNCE loss serves as an estimator of mutual information of x and c : I ( g x , g c ) log( K ) L ( g x , g c ) (5) where L ( g x , g c ) is defined as: L ( g x , g c ) = E log f ( g x , g c ) f ( g x , g c ) + (cid:80) Kk =1 f ( g x , g c k ) (6) where c k is the k -th negative sample of x and K is the size of negative samples.", "Hence minimizing the objective L ( g x , g c ) is equivalent to maximizing the lower bound on the mutual information I ( g x , g c ) .", "This objective contains two parts: positive pairs f ( g x , g c ) and negative pairs f ( g x , g c k ) .", "Previous study (Chen et al., 2020) has shown that cosine similarity with temperature performs well as the score function f in InfoNCE loss.", "Following them, we take f ( g x , g c ) = 1 g x g c (cid:107) g x (cid:107)(cid:107) g c (cid:107) (7) where is the temperature hyper-parameter and (cid:107) (cid:107) is (cid:96) 2 -norm function.", "Contextualized representation : To get global bias g x and g c following Eq.", "3, we adopt the widely-used Transformer (Vaswani et al., 2017) as the encoder and take the last hidden states as 2704 the contextualized representations h x and h c .", "Formally, suppose a batch of sequences { s i } where i { 1 , , N } .", "We feed it into the Transformer encoder to obtain contextualized representations, h i 1 , h i 2 , , h i | s i | where h ij R d .", "Positive pairs : Given each token x , we randomly sample a positive sample c from nearby tokens in the same context (sequence) within a window span where W is the window size.", "Negative pairs : Given each token x , we randomly sample K tokens from other sequences in this batch as negative samples c k .", "To sum up, the T oken-alignment C ontrastive ( TC ) loss is applied to every token in a batch as: LTC = 1 NN (cid:88) i =1 1 | s i | | s i | (cid:88) j =1 L ( g ij , g ij c ) (8) where N is the number of sequences of this batch; s i is the i -th sequence; j and jc are tokens in s i where jc (cid:54) = j ; g i is the global semantics hidden in contextualized representation of token s i .", "g ij and g ij c are generated via: g ij = h ij e ij (9) g ij c = h ij c e ij c (10) where h ij and e ij are the contextualized representation and static embedding of the anchor token, respectively.", "h ij c and e ij c are the contextualized representation and static embedding of the sampled positive token in the same context.", "As described before, the token-alignment contrastive loss LTC is designed to model global dependencies while MLM is able to capture local dependencies.", "Therefore, we can better model contextualized representations by combining the token-alignment contrastive loss LTC and the MLM loss to get our overall objective LTACO : LTACO = LTC + LMLM (11) We implement it in a multi-task learning manner where all objectives are calculated within one for-ward propagation, which only introduces negligible extra computations.", "WordPiece tokenization) (Zhu et al., 2015) and English Wikipedia (4B words) as pre-training corpus.", "We pre-train two variants of BERT models: BERT-small and BERT-base.", "All models are equipped with the vocabulary of size 30,522, trained with 15% masked positions for MLM.", "The maximum sequence length is 256 and batch size is 1,280.", "We adopt optimizer AdamW (Loshchilov and Hut-ter, 2019) with learning rate 1e-4.", "All models are trained until convergence.", "To be specific, the small model is trained up to 250k steps with a warm-up of 2.5k steps.", "The base model is trained up to 500k steps with a warm-up of 10k steps.", "For TACO, we set the positive sample window size W to 5, the negative sample number K to 50, and the temperature parameter to 0.07 after a slight grid-search via preliminary experiments.", "More pre-training details can be found in Appendix A. During fine-tuning models, we conduct a grid search over batch sizes of {16, 32, 64, 128}, learning rates of {1e-5, 2e-5, 3e-5, 5e-5}, and training epochs of {4, 6} with an Adam optimizer (Kingma and Ba, 2015).", "We use the open-source packages for implementation, including HuggingFace Datasets 1 and Transformers 2 .", "All the experiments are conducted on 16 GPU chips (32 GB V100).", "Evaluation We evaluate methods on the GLUE benchmark (Wang et al., 2019).", "Specifically, we test on Microsoft Research Paraphrase Matching (MRPC) (Dolan and Brockett, 2005), Quora Question Pairs (QQP) 3 and STS-B (Conneau and Kiela, 2018) for Paraphrase Similarity Matching; Stanford Sentiment Treebank (SST-2) (Socher et al., 2013) for Sentiment Classification; Multi-Genre Natural Language Inference Matched (MNLI-m), Multi-Genre Natural Language Inference Mismatched (MNLI-mm) (Williams et al., 2018), Question Natural Language Inference (QNLI) (Ra-jpurkar et al., 2016) and Recognizing Textual Entailment (RTE) (Wang et al., 2019) for the Natural Language Inference (NLI) task; The Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019) for Linguistic Acceptability.", "Following Devlin et al. (2019), we exclude WNLI (Levesque, 2011).", "We report F1 scores for QQP and MRPC, Spearman correlations for STS-B, and accuracy scores for the other tasks.", "For evaluation results on validation sets, we report the 1 https://github.com/huggingface/datasets 2 https://github.com/huggingface/transformers 3 https://www.quora.com/q/quoradata/ First-Quora-Dataset-Release-Question-Pairs 2705 Approach MNLI(m/mm) QQP QNLI SST-2 CoLA STS-B MRPC RTE Avg.", "average score of 4 fine-tunings with different random seeds.", "For results on test sets, we select the best model on the validation set to evaluate.", "Baselines We mainly compare TACO with MLM on BERT-small and BERT-base models.", "In addition, we also compare TACO with related contrastive methods: a sentence-level contrastive method BERT-NCE and a span-based contrastive learning method INFOWORD, both from Kong et al. (2020).", "We directly compare TACO with the results reported in their paper.", "Table 1 and Figure 5 show the results of TACO on BERT-small.", "As we can see, compared with MLM with 250k training steps ( convergence steps), TACO achieves comparable performance with only 1/5 computation budget.", "By modeling global dependencies, TACO can significantly improve the efficiency of contextualized representation learning.", "In addition, when pre-trained with the same steps, TACO outperforms MLM with 1.2 average score improvement on the validation set.", "In addition to convergence, we also compare TACO and MLM on fewer training data.", "The results are shown in Table 2. We sample 4 tasks with the largest amount of training data for evaluation.", "As we can see, TACO trained on 25% data can achieve competitive results with MLM trained on full data.", "These results also verify the data efficiency of our method, TACO.", "We also compare TACO with MLM on base-sized models, which are the most commonly used models according to the download data from Huggingface 4 (Wolf et al., 2020).", "First, from Table 3, we can see that TACO consistently outperforms 4 https://huggingface.co/models Approach MNLI QQP QNLI SST-2 Avg.", "MLM under all pre-training computation budgets.", "Notably, TACO-250 k achieves comparable performance with MLM-500 k , which saves 2x computations.", "Similar results are observed on TACO-100 k and BERT-250 k .", "These results demonstrate that TACO can achieve better acceleration over MLM.", "It is also a significant improvement compared to previous methods (Gong et al., 2019) focusing on accelerating BERT but only with slight speedups.", "In addition, as shown in Table 4, TACO achieves competitive results compared to BERT-NCE and INFOWORD, two similar contrastive methods.", "To better understand how TACO works, we conduct a quantitative comparison on the learning dynamic for BERT and TACO.", "Similar to Section 2.2, we plot the Cosine similarity among contextualized representations of tokens in the same context (intra-context) and different contexts (inter-context) in Figure 6.", "We find that the learning dynamic of TACO significantly differs from that of MLM.", "Specifically, for TACO, the intra-context representation similarity remains high and the gap between intra-context similarity and inter-context similarity remains large at the later stage of training.", "This confirms that TACO can better fulfill global semantics, which may contribute to the superior downstream performance.", "TACO is implemented as a token-level contrastive (TC) loss along with the MLM loss.", "Therefore, the improvement of TACO might come from two aspects, including 1) denser supervision signals from the all-token objective and 2) the benefits of the contrastive loss to strengthen global dependencies.", "It is helpful to figure out which factor is more important.", "To this end, we design two variants for ablation.", "One is a concentrated TACO, where the contrastive loss is built on the 15% masked positions only, keeping the same density of supervision signal with MLM.", "The other is an extended MLM, where not only 15% masked positions are asked to predict the original token, so do the rest 85% unmasked positions.", "The extended MLM has the same dense supervision with TACO but loses the benefits of modeling the global dependencies.", "The results on small models are shown in Figure 6.", "As we can see, the performance of TACO decreases if we sample a part of token positions to implement TC objectives.", "It shows that more supervision signals benefit the final performance of TACO.", "However, simply adding more supervision signals by predicting unmasked tokens does not help MLM too much.", "Even equipped with the extra 85% token prediction (TP) loss, MLM+TP does not show significant improvements and it is noticeable that the performance of MLM+TP starts to drop after 150k steps.", "This further confirms the effectiveness of TC loss by strengthening global dependencies.", "Classic language representation learning methods (Mikolov et al., 2013; Pennington et al., 2014) aims to learn context-independent representation of words, i.e., word embeddings.", "They generally follow the distributional hypothesis (Harris, 1954).", "Recently, the pre-training then fine-tuning paradigm has become a common practice in NLP because of the success of pre-trained language models like BERT (Devlin et al., 2019).", "Context-dependent (or contextualized) representations are the basic characteristic of these methods.", "Many 2707 0 5k 10k 15k 20k pretrain checkpoints 0.0 0.2 0.4 0.6 0.8 1.0 MLM intra-context similarity MLM inter-context similarity TACO intra-context similarity TACO inter-context similarity 0 50k 100k 150k 200k 250k pretrain checkpoints 0.68 0.70 0.72 0.74 0.76 a v e r a g e GLUE s c o r e 15% MLM + 100% TC (TACO) 15% MLM + 15% TC 15% MLM + 85% TP 15% MLM Figure 6: The left figure", "existing contextualized models are based on the masked language modeling objective, which randomly masks a portion of tokens in a text sequence and trains the model to recover the masked tokens.", "Many previous studies prove that pre-training with the MLM objective helps the models learn syntactical and semantic knowledge (Clark et al., 2019).", "There have been numerous extensions to MLM.", "For example, XLNet (Yang et al., 2019) introduced the permutated language modeling objective, which predicts the words one by one in a permutated order.", "BART (Lewis et al., 2020) and T5 (Raffel et al., 2020) investigated several denoising objectives and pre-trained an encoder-decoder architecture with the mask span infilling objective.", "In this work, we focus on the key MLM objective and aim to explore how MLM objective helps learn contextualized representation.", "Apart from denoising-based objectives, contrastive learning is another promising way to obtain self-supervision.", "In contrastive-based self-supervised learning, the models are asked to distinguish the positive samples from the negative ones for a given anchor.", "Contrastive-based SSL method was first introduced in NLP for efficient learning of word representations by negative sampling, i.e., SGNS (Word2Vec (Mikolov et al., 2013)).", "Later, similar ideas were brought into CV field for learning image representation and got prevalent, such as MoCo (He et al., 2020), SimCLR (Chen et al., 2020), BYOL (Caron et al., 2020), etc.", "In the recent two years, there have been many studies targeting at reviving contrastive learning for contextual representation learning in NLP.", "Tak-2708 ing MLM as an example, we investigate whether and how current language model pre-training objectives learn contextualized representation.", "For instance, CERT (Fang et al., 2020) utilized back-translation to generate positive pairs.", "CAPT (Luo et al., 2020) applied masks to the original sentence and considered the masked sentence and its original version as the positive pair.", "DeCLUTR (Giorgi et al., 2020) samples nearby even overlapping spans as positive pairs.", "INFOWORD (Kong et al., 2020) treated two complementary parts of a sentence as the positive pair.", "However, the aforementioned methods mainly focus on sentence-level or span-level contrast and may not provide dense self-supervision to improve efficiency.", "Unlike these approaches, TACO regards the global semantics hidden in contextualized token representations as the positive pair.", "The token-level contrastive loss can be built on all input tokens, which provides a dense self-supervised signal.", "Another related work is ELECTRA (Clark et al., 2020).", "ELECTRA samples machine-generated tokens from a separate generator and trains the main model to discriminate between machine-generated tokens and original tokens.", "ELECTRA implicitly treats the fake tokens as negative samples of the context, and the unchanged tokens as positive samples.", "Unlike this method, TACO does not require architectural modifications and can serve as a plug-and-play auxiliary objective, largely improving pretraining efficiency.", "In this paper, we propose a simple yet effective objective to learn contextualized representation.", "We find that the MLM objective mainly focuses on local anchors to align contextualized representations, which harms global dependencies modeling due to an embedding bias problem.", "Motivated by these problems, we propose TACO to directly model global semantics.", "It can be easily combined with existing LM objectives.", "By combining local and global anchors, TACO achieves up to 5 speedups and up to 1.2 improvements on GLUE score.", "This demonstrates the potential of TACO to serve as a plug-and-play approach to improve contextualized representation learning.", "We thank the anonymous reviewers for their helpful feedback.", "We also thank the colleagues from ByteDance AI Lab for their suggestions on our experiment designing and paper writing." ]
[ "abstain", "method", "result", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "method", "abstain", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "other" ]
[ "In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items.", "Trained on such textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations.", "Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details.", "This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate above shortcomings?", "To this end, we propose a visually-enhanced approach named METER with the help of visualization generation and textimage matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation.", "Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.", "Explainable recommender systems have recently attracted increasing attention both in industry and in the academic community.", "Such systems aim to provide high-quality recommendations and simultaneously generate explanations for the recommendations (Zhang et al., 2014; Zhang and Chen, 2020).", "The explanations not only can bridge the gap between how systems and users perceive the relevance of the recommended items, but also can serve to shed light on the recommendation decision process so as to avoid a black box.", "To provide appropriate explanations, feature-based (Zhang et al., 2014), graph-based (Xian et al., 2019, 2020; Geng et al., 2022; Fu et al., 2020), sentence-based (Chen et al., 2019a; Li et al., 2020, 2021a, 2022), causality-based (Tan et al., 2021, 2022; Xu et al., 2021a,b) Inputs : User A, Item 1, Feat.", "and neural-symbolic (Shi et al., 2020; Chen et al., 2021, 2022) approaches have been explored in recent years.", "Among them, PETER (Li et al., 2021a) is a representative sentence-based method that directly generates explanation sentences for given useritem pairs based on Personalized Transformer.", "While PETER outperforms previous methods in terms of both explainability and text quality metrics, it also suffers from several shortcomings: PETER tends to repeat certain universally applicable safe sentences as explanations (e.g., the hotel is very nice).", "For the 32,003 records in the test split of the TripAdvisor dataset by Li et al. (2020), PETER only generates around 8,100 unique sentences.", "The duplicate rate is close to 75%, while in reality, the duplicate rate of the TripAdvisor ground truth explanations is only 5.4%.", "In addition, such models are trained solely on a textual corpus, lacking real-world experiences to generate more authentic explanations, which may lead to empty sentences with insufficient details.", "Recently, Vokenization (Tan and Bansal, 2020) demonstrates that language understanding can be improved with token-level visual supervisions.", "This motivates us to consider enhancing text explanation generation with the aid of real-world images.", "of explanation generation model that is immersed in a multimodal environment.", "The goal is to encourage it to perceive real-world signals and generate visually-enhanced explanations to better assist a user's decision.", "Specifically, we propose the Multimodally-Enhanced Transformer for Explainable Recommendation ( METER ) approach for improved text explanations based on conditional image generation and textimage matching.", "Unlike traditional caption-to-image generation, our training sentences are explanations that are more comprehensive reviews based on user experiences rather than simple abstract descriptions of the image content.", "We adopt the generation order rating text image based on the consideration that the generation difficulty should gradually increase.", "With this approach, we seek to guide the model to understand real-world concepts regarding both item attributes and user interests (e.g., a spacious room or modern decoration).", "Furthermore, METER is encouraged to visualize what it is talking about for the given useritem pair and is penalized in case of a mismatch between the generated visualization and the textual explanation.", "This is in line with the spirit of the context token prediction module in Li et al. (2021a).", "While PETER only predicts text tokens as contextual information, our METER additionally generates visual tokens as a supplement.", "We claim that if a sentence contains more real-world concepts, it is easier to visualize it as an image with higher fidelity.", "To this end, we introduce a textimage matching discriminator based on contrastive learning which helps to improve both the diversity and faithfulness of the textual explanations.", "Beyond an auxiliary task for text generation, another advantage of METER is that the generated image visualizations may provide intuitive visual explanations in addition to rating scores and textual explanations.", "To empirically evaluate our framework, we conduct experiments and user studies on two real-world datasets in terms of diversity and faithfulness of text explanations, as well as consistency and quality of image visualizations.", "Our results reveal that using the proposed METER leads to improvements on text diversity and faithfulness, and that the generated image visualizations show high fidelity and good consistency.", "Overall, we make the following key contributions: To the best of our knowledge, this is the first exploration of a multimodal explainable recommender system that jointly generates rating scores, textual explanations, and images.", "The system will also be promising in creative advertising applications.", "By immersing the model into a multimodal environment, we help it explore the real-world concepts mentioned in the text explanations and in turn enable it to generate more diverse and faithful natural language rationales that are consistent with visual grounding.", "Experiments and a user study on real-world datasets demonstrate the superiority of our approach over several strong baselines.", "Visually-Guided Language Learning There have been numerous efforts on utilizing visual information to facilitate language tasks.", "The general strategy they typically pursue is to obtain cross-modally aligned semantics through visual grounding.", "Gella et al. (2017); Zhang et al. (2020); Sigurdsson et al. (2020) draw on the visual modality to bridge the gap between languages and conduct visual grounding to improve unsupervised cross-lingual word mapping or machine translation.", "Vokenization (Tan and Bansal, 2020) assigns each text token with a corresponding voken and improves text-based pretraining with contextualized, visual-grounded supervisions.", "VidLanKD (Tang et al., 2021) further solves the shortcomings of Tan and Bansal (2020) by first learning a multimodal teacher model on video-language dataset and then transferring knowledge to the student language model through distillation.", "Shen et al. (2021) discovers visual impressions from text-only corpus to improve open-domain dialog generation.", "Li et al. (2021b) learns visionlanguage representations with cross-modal contrastive learning on a combination of pure text corpus and imagetext pairs to advance both single modal and multi-modal downstream tasks.", "Recently, DALL-E (Ramesh et al., 2021) merges text and visual tokens as a single stream of data and employs a universal Transformer to autoregressively model the multimodal stream.", "The astonishing success of these methods inspires us to guide personalized explanation generation with visual signals.", "Generate Explanations for Recommendation Explainable recommendation has been an important task in both research and industry (Zhang and Chen, 2020).", "Early approaches mainly attempt to 245 ImageDecoder The dcor is very nice and the room is very comfortable Gen. visualization Gen. explanation MatchingDiscriminator Matchscore !\" ( !\" & % ( % ,", "make latent factor models interpretable by aligning each latent dimension with the explicit meaning (Zhang et al., 2014; Chen et al., 2016).", "In recent years, numerous neural models have been proposed to explain recommendations based on user reviews (Chen et al., 2019c,a).", "There have also been attempts to generate purely visual explanations (Chen et al., 2019b; Tangseng and Okatani, 2020).", "Compared with other explanation styles for recommendation, sentence-based methods are more straightforward and have been at the center of attention in recent times.", "Explanation sentences can either be generated by filling predefined templates (Zhang et al., 2014; Wang et al., 2018) or through flexible natural language approaches such as Attn2Seq (Dong et al., 2017), based on recurrent neural networks, and PETER (Li et al., 2021a), which is powered by a personalized Transformer.", "NETE (Li et al., 2020) combines the advantage of the two styles and produces template-controlled explanations by learning from sentence templates, which is an early form of prompt-based generation.", "However, none of the previous work has integrated textual and visual features and provided multimodal explanations.", "To the best of our knowledge, METER is also the first approach to draw on vision for improved textual explanation generation.", "The goal of our METER framework is to give an estimated rating score r u,i that reflects a user u 's", "preference towards item i and generate a multimodal explanation to justify the estimated rating.", "The generated multi-modal explanation consists of a text sentence E u,i and an image visualization V u,i .", "The latter may serve as a supplement to the textual explanation for better explainability when text alone provides insufficient information.", "Moreover, the METER recommendation explanation model is encouraged to visualize what it is talking about for the useritem pairs and will be punished if the generated visualization does not match its textual explanation.", "By doing so, we aim to improve the quality, diversity, as well as faithfulness of the generated text explanations through visual grounding.", "In the following, we shall first elaborate how to represent visual information into visual tokens and how to encode the positional embeddings for different types of tokens used in METER.", "Subsequently, we describe the Multimodal Enhanced Transformer for autoregressive multimodal explanation generation.", "Moreover, we will introduce the textimage matching discriminator, which guides the multimodal Transformer to generate better and more diversified text explanations.", "Finally, we summarize the training objectives of our framework for rating prediction and explanation generation.", "To introduce visual signals into the Transformer structure, we follow the idea of VQ-VAEs (van den Oord et al., 2017) to encode an image I RH W 3 into a sequence of discrete patch-level", "visual tokens z q R h w d , where H and W is the original size of the input image, h w is the number of visual patches, and d is the patch-level feature dimensionality.", "The visual tokens are constructed by vector-quantization through a learned discrete codebook Z = { z k } Kk =1 R d of visual representations.", "To balance efficiency and perceptual quality, we adopt VQ-GAN (Esser et al., 2021) as the visual encoder and decoder in our framework.", "We first pre-train the vector-quantized visual patch encoder E , decoder G , and the discrete codebook Z on our collected images.", "With these pretrained components, we can encode an input image I with the encoder E as z = E ( I ) R h w d .", "Next, we serialize z and conduct element-wise quantization for individual encoding z j of z onto its closest codebook entry z k : z q = (cid:32) arg min z k Z z j z k (cid:33) R h w d The resulting z q are served as the encoded visual tokens { v j } mj =1 of the input image.", "As for the sequence of visual tokens z q = { v j } mj =1 produced by METER autoregressively, we can utilize the decoder G to transform it back to a generated original size image I : I = G ( z q ) RH W 3 .", "Five distinct types of input tokens can be distinguished: user ID, item ID, feature word, text tokens for explanation, and visual tokens.", "With the aforementioned vector-quantized visual patch encoder, we obtain a visual token representation for a given image.", "For text explanations, we directly tokenize them into text token sequences.", "Intuitively, the generated explanation should reflect both the user's interest preferences and the item attributes.", "Hence, we have user IDs and item IDs as two special types of tokens to guide the model to talk about the correct topics.", "Finally, the feature words can serve as conditional inputs to specialize the topic of explanation.", "To represent tokens as embeddings, we prepare four embedding codebooks: U for user IDs, I for item IDs, V for text tokens and feature words, and Z for visual tokens.", "We set a fixed length m for visual tokens and a maximum length n for text tokens.", "Thus, the input sequence S 0 can be represented as S 0 = [ u, i, f, e 1 , , e n , v 1 , , v m ] .", "Before feeding the token sequence into METER, we provide positional embeddings for non-visual tokens and visual tokens separately.", "As the visual information has a spatial prior and is organized in a 2-D grid, we adopt an axial positional embedding (Ho et al., 2019) for visual tokens.", "In addition, we prepare an embedding codebook P for non-visual tokens.", "The final input sequence representation is the addition of token embeddings and the corresponding positional embeddings.", "Given a input sequence, we use a Multimodally-Enhanced Transformer to encode it and predict the next token, which can be either a text or visual token.", "When the input sequence starts with the special token [ BOS ] alone, the model also predicts the rating score for the candidate useritem pair and contextual words that could reflect the user's preference and the item's attributes.", "Suppose our multimodal Transformer has L layers, each with h -head multi-head self-attention, and d is the input embedding dimensionality.", "Then, for input sequence S l at layer l [0 , L 1] , the encoded sequence S l +1 can be computed as follows (specif-ically SL denotes the final-layer output): S l +1 = FFN l (Attention ( S l WQ , S l WK , S l WV )) Here, WQ , WK , WV R d d h are weight matrices for projecting query, key, and value respectively (Vaswani et al., 2017), d h = d/h is the dimensionality for each head.", "FFN l is a feed-forward module consisting of two fully-connected layers with ReLU in between for the l -th Transformer layer.", "The Attention function is defined as Attention( Q , K , V ) = Softmax (cid:18) QK d h (cid:19) V with a scaling factor d h that maintains the order of magnitude in features.", "We adopt a similar masking strategy as Li et al. (2021a): the user & item IDs both can attend to all tokens in the sequence, while other non-ID tokens (including feature words, text tokens, and visual tokens) all retain the traditional causal attention masking in order to avoid any leakage of future information.", "Figure 2", "(a) provides an illustration of our masking strategy.", "Assuming the final-layer output from the Transformer is SL = [ s u , s i , s f , { s e } , { s v } ] , this also serves as a representation of the input sequence for next generation iteration.", "We can use these vector representations to enable the following four tasks: 247 Rating prediction The first representation s u is used to conduct rating score prediction.", "We regard the score prediction as a regression problem and the goal is to predict the score r u,i for the given pair of user/item IDs.", "Due to the adopted masking strategy, u and i can both attend to each other and capture the correlation between them.", "Here we make use of a two-layer fully-connected network with sigmoid activation to map s u to a scalar score value: r u,i = ( s u W 1 + b 1 ) W 2 + b 2 , where the dimensionality of input, hidden layer, and output are d , d , and 1 respectively.", "Mean Squared Error loss (MSE) is used for rating score regression: L r = E ( u,i ) T ( r u,i r u,i ) 2 where r u,i is the ground-truth rating score and T represents the training corpus.", "Context token prediction The second representation s i is designed to predict the context words for a given useritem pair.", "Similar to s u , s i also absorbs the words that are related to a certain user's preference and an item's attributes.", "Thus, this auxiliary task is able to force the Transformer to exploit the information hidden in the user ID and item ID.", "Such design can mitigate the problem of identical explanations being generated.", "By passing s i into a single fully-connected layer with Softmax activation, we can obtain a probability distribution over the vocabulary V for the context word: P c = Softmax ( s i W c + b c ) , where the dimensionality of input and output are d and |V| , respectively.", "The predicted context tokens are the topn words with the highest probability.", "If we represent the probabilities of these context words C as { p tc } nt =1 , then the negative log likelihood (NLL) loss can be computed as: L c = E (cid:34) 1 n n (cid:88) t =1 log p tc (cid:35) Explanation/visualization generation The generation of explanation words and visual codes follows the autoregressive style, i.e., decoding one token at a time from left to right.", "Text generation is triggered by the special [ BOS ] token, upon which we repeatedly decode words until [ EOS ] is sampled.", "If the number of generated text tokens before [ EOS ] is less than n , we pad the sequence with [ PAD ].", "If the text sequence length is greater than n , we cut it off at length n .", "To obtain the visual code sequence V , we iterate METER for a fixed number of m steps conditioned on the text explanation E and the previously generated visual code sequence.", "Similar to context word prediction, we adopt a single fully-connected layer for text representations { s e } to produce probability distributions over the text vocabulary V .", "As for visual representations { s v } , we employ another fully-connected layer to produce probability distributions over the discrete visual codebook Z .", "We can then sample words and visual codes from the obtained probability distributions.", "For simplicity, we employ greedy decoding as the sampling method to select the word/code with the highest probability.", "If we denote the probabilities of the sampled words and visual codes as { p te } nt =1 and { p tv } mt =1 , respectively, then the token-level language modeling loss for text and visual code generation can be expressed as: L e = E (cid:34) 1 n n (cid:88) t =1 log p te (cid:35) + E (cid:34) 1 m m (cid:88) t =1 log p tv (cid:35) where is a hyperparameter used to balance the training of textual and visual token generation.", "Textimage matching METER is capable of generating textimage explanation pairs.", "However, we still need to know whether and to what degree the generated image visualization matches the text explanation from a global perspective.", "Hence we adopt a textimage matching discriminator D to measure the degree of congruency.", "From another aspect, if a generated sentence contains more real-world concepts, it is easier to ground the sentence to corresponding visual tokens and obtain an image visualization with higher fidelity.", "With contrastive training, we in turn push METER to generate text explanations with more grounded details.", "Our discriminator is equipped with two separate encoders for the visual token sequence and the text sequence.", "Assuming the outputs of the two encoders to be E and V , we can construct positive training text image pairs from the ground truth, as well as negative ones through alternate pairings.", "Thus, the discriminator loss can be written as: L d = E [log ( D ( E , V ))] + E (cid:104) log (cid:16) 1 D (cid:16) E , V (cid:17)(cid:105) + E (cid:104) log (cid:16) 1 D (cid:16) E , V (cid:17)(cid:105) In summary, the overall training objective function J consists of the aforementioned four losses: J = min ( e L e + d L d + r L r + c L c ) Here, denotes all trainable parameters, while e , d , r , c are regularization weights to help 248 Figure 3: t-SNE visualization for the top 88 clusters of sentence semantics when threshold is 0.95.", "balance the learning of different tasks.", "METER is then trained on J in an end-to-end manner.", "To conduct experiments, we adopt two publicly available explainable recommendation datasets proposed in Li et al. (2020).", "For each dataset, train-ing/validation/testing splits are created following the ratio of 8 : 1 : 1 .", "To enable the visually-enhanced model proposed in this paper, we compile a collection of images portraying real-world concepts.", "The real-world concepts are obtained by clustering sentence semantics with the help of Sentence-BERT (Reimers and Gurevych, 2019).", "At first, we use Sentence-BERT to compute the embeddings of all text explanation sentences.", "Since many ground-truth explanations have similar semantic meanings, we conduct fast clustering to aggregate these explanation sentences into different groups representing similar concepts and topics.", "Figure 3 gives a t-SNE visualization (Van der Maaten and Hinton, 2008) of the top 88 clusters if setting the similarity threshold to 0.95.", "From the figure, we can have a glimpse of what kinds of topics these explanation typically show.", "To ensure a proper amount of clusters, we set the threshold to 0.85.", "Thus we obtain 16,577 clusters consists of the most common 99,066 explanations for TripAdvisor and 64,937 clusters which cover 283,895 explanations for Yelp.", "The explanation sentences at cluster centers are then used as query input to search relevant images through Google Images API.", "For TripAdvisor and Yelp, we retrieve the top 20 and top 10 images for each centric explanation sentence.", "As a result, we have a visual concept pool of 331,540 and 649,370", "After collecting enough images about dataset-aware topics, we assign each text explanation the most suitable image visualization by calculating the similarity between the two modalities with CLIP model (Radford et al., 2021).", "In this way, we build the textual recommendation explanationimage visualization pairs for both datasets and then train METER on the constructed multimodal pairs.", "In Figure 4, we provide several text explanations with their corresponding assigned image visualizations.", "Table 1 shows the statistics of the established multimodal explainable recommendation datasets.", "Note that the TripAdvisor dataset mainly focuses on the hotel and travel domain, while the majority of the Yelp data is about restaurants.", "Records in the two datasets consist of: user ID, item ID, rating score (from 1 to 5 ), feature word, text explanation, and image visualization aligned with the text explanation.", "To ensure better representative ability of the visual encoder used in METER, the three components", "(i.e., encoder, decoder, and visual codebook) of VQ-GAN are first pre-trained on the collected images of the two datasets.", "For image visualization generation, we first sample 32 candidate images conditioned on the corresponding explanations, and then use the trained textimage discriminator to produce match scores.", "The image with the highest match score is finally selected as output.", "The embedding size d of METER is set to 256 , the dimensionality of the feed-forward network's hidden layer is 1 , 024 .", "The maximum text length n of the explanation sequence is set to 15 , while the length of the visual token sequence m is set to 256 , and the standard image size for VQ-GAN is set to 256 256 .", "We keep the most frequent 20 , 000 words as the text vocabulary, while the size of the discrete visual codebook is 1 , 024 .", "The Multimodally-Enhanced Transformer uses L = 8 layers, each endowed with a multi-head attention with h = 8 heads.", "We set the regularization weights e , d , r , and c to 1 .", "0 , 1 .", "0 , 0 .", "1 , and 1 .", "0 , respectively.", "And we choose 7 .", "0 as the value of the balancing hyperparameter .", "The METER model is trained with Adam optimization (Kingma and Ba, 2015) under a batch size of 32 , and the learning rate is set to 5 10 4 .", "We conduct all experiments on NVIDIA Quadro RTX 6000 GPUs.", "We conduct our evaluation from three perspectives explanation generation performance, textimage matching performance, and rating prediction performance.", "For each of the three aspects, we adopt both automatic and manual forms of evaluation (see Sec. 4.6).", "For explanation performance, we measure the text quality, diversity, and explainability of the generated explanations.", "For the text quality, we adopt BLEU-1 and BLEU-4, as well as ROUGE-1 and ROUGE-2.", "To overcome the drawbacks of the two traditional metrics, we also employ Unique Sentence Ratio (USR) proposed by Li et al. (2020) to quantify the diversity of the generated sentences.", "For the diversity in feature word level, we adopt Feature Diversity (DIV) proposed in Li et al. (2020), which measures the intersection of features between any two generated explanations.", "In explainable recommendation, an explanation will normally be valued more by users if it justifies a recommendation's advantage using certain feature words as specified in the datasets.", "Thus, we adopt two more metrics tailored for explainability evaluation proposed by Li et al. (2020) Feature Matching Ratio (FMR) and Feature Coverage Ratio (FCR).", "Specifically, FMR measures whether a generated explanation contains the feature in the ground-truth, while FCR is computed as the number of distinct features contained in the generated explanations, divided by the total number of features in the whole dataset.", "To assess the textimage matching, we adopt CLIPScore (CS) proposed by Hessel et al. (2021) as an objective metric to measure the degree of correspondence for cross-modality pairs.", "For the rating prediction performance, we rely on two standard metrics Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).", "By including the recommendation experiment, we merely seek to prove that the rating scores predicted by our method are sufficiently 250 Figure 5: Qualitative results generated by METER with a conditional feature word as input:", "strong to merit explanation generation, because if a rating prediction is inaccurate, the generated explanation will be less meaningful.", "For the performance comparison, we consider several baselines with regard to the task of explanation generation: Attn2Seq (Dong et al., 2017) learns to encode attributes into vectors, and then invokes an attention mechanism to generate reviews conditioned on the attribute vector.", "Transformer (Vaswani et al., 2017) treats user and item IDs as words and trains on the explanation generation task with a vanilla Transformer structure through language modeling.", "NETE (Li et al., 2020) designed a tailored GRU module to incorporate the given feature into the decoding stage.", "The system can generate template-like explanations while also making recommendations.", "PETER (Li et al., 2021a) is a simple and effective framework that attempts to use the IDs to predict the words in the target explanation.", "It is built upon a modified attention mask of the Transformer model.", "With regard to mere recommendation, we compare with two traditional methods in addition to NETE and PETER: PMF (Salakhutdinov and Mnih, 2007) conducts probabilistic matrix factorization in latent space.", "SVD++ (Koren, 2008) combines factor and neighborhood models to enhance the accuracy.", "In this section, we evaluate the performance of the proposed METER approach on two real-world datasets and compare with several representative explanation generation methods in Table 2 and recommendation models in Table 3.", "From Table 2, we can see that METER achieves the best FMR and DIV against all other methods, showing that METER can cover more diverse feature words during generation while maintaining good explainability.", "METER notably improves the USR over PETER but is slightly lower than NETE.", "Note that NETE is a template-based approach so it naturally achieves high USR scores.", "Among all methods, METER exhibits the best balance between text quality and text diversity, while being the only method that can produce both text and images, with reasonably high Image Consistency.", "Since automatic metrics cannot completely reflect the quality and faithfulness of generated text explanations, we also conduct a user study in the next subsection for further verification.", "Moreover, Table 3 indicates that METER can achieve comparable rating performance to other approaches.", "In Figure 5, we present several real examples illustrating how METER is able to jointly generate not only high-quality rating scores and text explanations but also image visualizations.", "Taking the first case in", "(b) as an example, we observe how METER creates coherent explanations 251 Methods Yelp TripAdvisor RMSE MAE RMSE MAE PMF 1.09 0.88 0.87 0.70 SVD++ 1.01 0.78 0.80 0.61 NETE 1.01 0.79 0.79 0.60 PETER 1.01 0.78 0.81 0.63 METER 1.01 0.79 0.80 0.61 Table 3: Recommendation performance comparison in terms of RMSE and MAE among several methods.", "rather than directly copying the feature word into the generated sentence, leading to greater diversity.", "4.6 User Study To genuinely assess the quality of text explanations generated by METER and whether the image visualization matches the text explanation, we conduct a user study on the faithfulness of the generated text explanations with associated visual grounding.", "We randomly sampled 500 generated explainable contextual sentences as well as corresponding image visualizations.", "For comparison, we also randomly pick 500 samples from the baselines and randomly mixed them with the samples from our method.", "We asked 30 human subjects to provide a rating range from 1 5 , where larger scores represent better faithfulness and diversity.", "For better evaluation, we also provide the original user/item information and ground-truth explanation sentence for their reference.", "We consider Faithfulness as a criterion to assess the degree of explainability of the text, which encompasses both its readability and its cogency to the human participants.", "A higher Diversity represents more lexically varied generated context.", "We further consider Consistency representing to what extent the generated images match the associated generated sentence, while higher Quality scores indicate the generated image contains clearer details and better fidelity.", "We then calculate the overall scores by averaging the ratings given by each human participant across 500 samples each from both the baseline and from our method.", "The results are reported in Table 4 and show that our method can generate diverse and faithful explanation sentences of a higher quality than PETER, while also attaining a high image quality and good cross-modal consistency.", "We also provide an ablation study of the training tasks on TripAdvisor dataset.", "According to Table 5, the context prediction task has a big influence on Sentence Image Faithfulness Diversity Consistency Quality Baselines 3.41 2.96 2.54 3.04 Ours 4.57 3.70 3.06 4.19 Table 4: Manual evaluation performance between METER and baselines.", "the explainability and diversity of the generated explanations.", "The feature word has a vital role in deciding the topic for the model to consider.", "Obviously the rating prediction task is important for recommendation performance, while the visual generation task is decisive for the image consistency score.", "As we expected, the discriminator loss can assist the model to generate both diverse explanations and better image visualizations.", "In this paper, we propose METER, the first attempt to jointly generate rating scores, text explanations, and corresponding image visualizations.", "We immerse our model in a multimodal environment by putting all modalities to one shared Transformer decoder structure.", "A textimage matching discriminator is further introduced to encourage sentences with more groundable and fine-grained concepts.", "Experimental results demonstrate that our framework can provide diverse and faithful text explanations, together with image visualizations as additional intuitive explanations.", "This proves that visual information offers auxiliary knowledge for the explanation generation model to gain awareness of real-world semantics.", "Our dataset and code are available at https://github.com/ jeykigung/METER .", "In the future, we plan to investigate generating visually-enhanced explanations for more domains such as fashion and movie.", "We appreciate the valuable feedback and suggestions of the reviewers.", "This work was supported in part by NSF IIS 1910154, 2007907, and 2046457.", "Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "other", "objective", "other", "other", "other" ]
[ "Due to the excessive cost of large-scale language model pre-training, considerable efforts have been made to train BERT progressively start from an inferior but low-cost model and gradually grow the model to increase the computational complexity.", "Our objective is to advance the understanding of Transformer growth and discover principles that guide progressive training.", "First, we find that similar to network architecture search, Transformer growth also favors compound scaling.", "Specifi-cally, while existing methods only conduct network growth in a single dimension, we observe that it is beneficial to use compound growth operators and balance multiple dimensions (e.g., depth, width, and input length of the model).", "Moreover, we explore alternative growth operators in each dimension via controlled comparison to give operator selection practical guidance.", "In light of our analyses, the proposed method CompoundGrow speeds up BERT pretraining by 73 .", "6% and 82 .", "2% for the base and large models respectively, while achieving comparable performances 1 .", "Thanks to the rapid increase of computing power, large-scale pre-training has been breaking the glass ceiling for natural language processing tasks (Liu et al., 2018; Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020).", "However, with great power comes great challenges: the required excessive computational consumption significantly impedes the efficient iteration of both research exploration and industrial application.", "To lower the training cost, many attempts have been made to conduct progressive training , which starts from training an inferior but low-cost model, and gradually increases its resource consumption (Gong et al., Work done while interning at Google. Corresponding Author: Hongkun Yu and Xiaotao Gu. 1 Code will be released at: https://github.com/google-research/google-research/tree/master/grow_bert 2019; Devlin et al., 2019).", "As elaborated in Section 5, two components are typically needed for designing such progressive training algorithmsthe growth scheduler and the growth operator (Dong et al., 2020).", "The former controls when to conduct network growth, and the latter controls how to perform network growth.", "Here, our objectives are to better understand growth operators with a focus on Transformer models (Vaswani et al., 2017; Liu et al., 2020b), and specifically, to help design better progressive algorithms for BERT pre-training (De-vlin et al., 2019).", "Specifically, we recognize the importance of using compound growth operators in our study, which balance different model dimensions (e.g., number of layers, the hidden size, and the input sequence length).", "Regarding previous efforts made on Transformer growth, they mainly focus on one single model dimension: either the length (Devlin et al., 2019) or the depth (Gong et al., 2019).", "In this work, however, we find that compound effect plays a vital role in growing a model to different capacities, just like its importance in deciding network architectures under specific budgets (Tan and Le, 2019).", "Here, we show that growing a Transformer from both dimensions leads to better performance with less training cost, which verifies our intuition and shows the potential of using compound growth operators in progressive BERT training.", "Further, we explore the potential choices of growth operators on each dimension.", "We conduct controlled experiments and comprehensive analyses to compare various available solutions.", "These analyses further guide the design of effective compound growth operators.", "Specifically, we observe that, on the length dimension, embedding pooling is more effective than directly truncating sentences.", "On the width dimension, parameter sharing outperforms low-rank approximation.", "ator on each dimension.", "Experiments on standard benchmarks show that, without sacrificing final performance, the final model speeds up the overall pre-training by 73.6% and 82.2% on BERT-base and BERT-large models respectively.", "Progressive Training.", "Algorithm 1 presents a generic setup for progressive training.", "In each training stage t , the corresponding growth operator g t grows the model f .", "Then, f is updated by the optimizer opt before entering the next training step.", "Correspondingly, our goal is to maximize the final model performance after all training stages, which can be formulated as minimizing the empirical loss L over dataset D : min g t G L ( f T ) s.t. f t = opt ( g t ( f t 1 ) , D ) (1) Compound Effect.", "Existing progressive training methods only focus on one model dimension.", "For example, Gong et al. (2019) conduct Transformer growth by gradually increasing the network depth .", "Devlin et al. (2019) use shorter input sequence length at early stages.", "However, as studies in network architecture search have revealed (Tan and Le, 2019), growth operators that balance different model dimensions can achieve better performance than single-dimensional operators under the same budget.", "Note that our objective (Equation", "1) is close to the objective of EfficientNet (Tan and Le, 2019), which aims to find the optimal network architecture by maximizing the model accuracy for a given resource budget: max d,w,r Accuracy ( N ( d, w, r )) s.t. Resource_cost ( N ) target_budget , where N ( d, w, r ) is a CNN network, d , w , r are coefficients to scale its depth, width, and resolution.", "In this work, we find that such a compound effect also plays a vital role in progressive BERT training.", "Intuitively, growing the network from more than one dimension creates larger potential to get better performance with less resource.", "Restricting the growth operator from handling all dimensions would lead to inferior performance, as min g G L ( f T ) min g GG + L ( f T ) .", "The optimal value of the objective function (Equation", "1) is bounded by the feasible set of the growth operator.", "Empirical Verification.", "For empirical verification, we compare existing single-dimensional growth operators in model depth and length with the corresponding compound operator that balances both dimensions.", "For all three compared growth operators, their configurations are adjusted to make sure they have the same model after growth, and their low-cost models have empirically comparable training costs.", "As to the training, we first train the low-cost model for 100/300/500/700K steps, and then grow the model to a standard BERT-base model for another 300K steps training.", "For models trained with different steps/growth operators, we compare their performance after finetuning on MNLI, SQuaD v1.1, and SQuaD v2.0 respectively.", "As Figure 1 shows, across different settings (columns) and metrics (rows), the compound operator consistently outperforms or at least achieves comparable results with single-dimensional operators.", "The observation meets our intuition: to achieve same speedup, the compound method can distribute the reduction on training cost to different dimensions, and achieve better performance.", "After verifying the importance of compound growing, we conduct more analysis to provide guidance for growth operator design.", "Data Truncation first limits the maximum length of input sequences by truncating the training sentences to a shorter length, and then train the model on full-length data.", "Note that shorter input sequences usually come with less masked tokens to predict in each sentence.", "For instance, Devlin et al. (2019) first use sentences of at most 128 tokens (with 20 masked tokens) before training on data of 512 tokens (with 76 masked tokens).", "The major issue of this data truncation operator is the incomplete update of position embeddings.", "The model needs to learn embeddings for the extra positions from scratch at the last stage.", "Embedding Pooling.", "Inspired by the idea of multigrid training in the vision domain (Wu et al., 2020), we train the model with low-resolution text through embedding pooling over unmasked tokens.", "Compared with data truncation, this method leaves the training data intact and can update all position embeddings.", "Specifically, since the output length of self-attention modules is decided by the length of query vectors, we only conduct pooling on query vectors in the first self-attention layer and keep key/value vectors intact.", "As shown in the first group of Table 1, data truncation (sequence length =256 ) and mean pooling ( k =2 ) has similar performance on MNLI and SQuAD v1.1, while mean pooling outperforms data truncation on SQuAD v2.0.", "On the width dimension, we focus our study on the feedforward network module (FFN).", "Similar to gradually increasing the network depth, one can also gradually increase the network width for Transformer growth.", "Specifically, the FFN module can be formed as f ( xW 1 ) W 2 , where f ( ) is the activation function, W 1 RD H and W 2 RH D are parameters, D and H are the embedding size and the hidden size respectively.", "Matrix Factorization.", "A straightforward method is to approach the original weight matrix W i R m n by the product of two small matrices W i 1 R m h and W i 2 R h n in the early training stage.", "In the late stage of training, we would recover W i as W i 1 W i 2 and unleash the full potential.", "Parameter Sharing.", "Instead of decomposing original weight matrices with low-rank approximation, we try to employ parameter sharing by spliting the matrix into multiple blocks and sharing parameters across different blocks.", "Formally, for input x , f ( xW 1 ) W 2 = f ( x [ W (cid:48) 1 ,...,W (cid:48) 1 ]) W (cid:48) 2 /k ...", "W (cid:48) 2 /k = f ( xW (cid:48) 1 ) W (cid:48) 2 .", "(2) Specifically, in the early training stage, we replace W 1 and W 2 with smaller matrices W (cid:48) 1 RD Hk and W (cid:48) 2 R Hk D .", "Then, at the growth step, we vertically duplicate (share) W (cid:48) 1 for k times along the dimension with size H/k as the new W 1 .", "W 2 is generated similarly.", "Similar to matrix factorization, this setting also preserves the output after the growth.", "Random noise is added to W 1 and W 2 by the dropout layers in FFN, so that the shared small matrices will have different outputs and gradients in later training steps (Chen et al., 2015).", "facTable 1: Empirical comparison among growth operators.", "For each operator, a low-cost model is first trained for 700K steps, then grown to the original BERT model for another 300K steps training.", "Transformer growth in the depth dimension has been thoroughly discussed in literature (Gong et al., 2019; Li et al., 2020).", "Our observation in this dimension is consistent with their conclusions.", "In experiments we also compare compound growth with the standard progressive stacking method.", "Discussion.", "From the perspective of implementation, compound growth introduces little additional engineering effort compared with progressive stacking.", "Specifically, the growth step of progressive stacking basically copies the parameters of the small model to corresponding layers of the full model.", "The growth on the width dimension is a similar parameter copying process for the fully connected layers, while the growth on the length dimension removes the embedding pooling layer without changing any model parameters.", "Experiment Setups.", "We train the original BERT models following the same settings in (Devlin et al., 2019) with 256 batch size and 512-token data.", "All compared models will finally grow to the original model, and keep the total number of training steps to 1M.", "We evaluate the final model on the GLUE benchmark (Wang et al., 2018) including 9 subtasks, and the two versions of SQuAD (Rajpurkar et al., 2018) datasets for question answering.", "More detailed experiment settings can be found in the appendix for reproduction.", "Compared Methods.", "Previous studies have rarely focused on progressive Transformer growth for BERT training, and progressive Transformer stacking (Gong et al., 2019) is the only directly comparable method to the best of our knowledge.", "We apply their method on the official BERT model with the same training setting, learning rate schedule and hardware as our method.", "We set the training schedule as 300K steps with 1 4 number of layers, 400K steps with 1 2 number of layers, and 300K steps with the full model.", "Our Method.", "For CompoundGrow , we apply treatments on three dimensions for the low-cost model: (1) mean embedding pooling with size 2 on the length dimension; (2) parameter sharing with k = 2 on FFN modules on the width dimension; (3) stacking on the depth dimension.", "Following the same setting as compared methods, we try to equally distribute the 1M training steps.", "We train the model with all treatments with 1 4 number of layers and 1 2 number of layers for 200K steps respectively, and then stack it to full layers with treatments on the width and length dimensions for another 300K steps.", "At the last stage, we train the full model for 300K steps, just like the compared method.", "Results.", "Table 2 shows the speedup of different models.", "We estimate the inference FLOPs for compared models and get their real training time from the Tensorflow profiler 2 .", "On the BERT-base model, stacking and CompoundGrow speeds up pre-training by 68 .", "7% and 107 .", "1% respectively in FLOPs, 64 .", "9% and 73 .", "6% respectively on walltime.", "On the BERT-large model, stacking and CompoundGrow speeds up pre-training by 70 .", "7% and 111 .", "4% respectively in FLOPs, 69 .", "7% and 82 .", "2% respectively on walltime.", "Though CompoundGrow is significantly faster, on development sets of MNLI and SQuaD, the compared methods do not have significantly different finetuning performance from the original BERT models.", "Table 3 shows the test performance on the GLUE benchmark.", "Both compared methods achieve at least the same performance as the original BERT model.", "While CompoundGrow saves more training time, it achieves the same performance with stacking on the large model.", "On the base model, stacking is better in terms of average GLUE score, mainly 2 https://www.tensorflow.org/guide/profiler Table 2: The pre-training speedup and finetuning performance on dev sets of MNLI and SQuaD.", "due to its advantage on the CoLA dataset.", "Such an unusual gap on CoLA might be caused by its relatively small volume and corresponding random variance (Dodge et al., 2020).", "On the larger and more robust MNLI dataset, the compared methods achieve almost the same score.", "Progressive training was originally proposed to improve training stability, which starts from an efficient and small model and gradually increase the model capacity (Simonyan and Zisserman, 2014).", "Recent study leverages this paradigm to accelerate model training.", "For example, multi-level residual network (Chang et al., 2018) explores the possibility of augmenting network depth in a dynamic system of view and transforms each layer into two subsequent layers.", "AutoGrow (Wen et al., 2020) attempts to automate the discover of proper depth to achieve near-optimal performance on different datasets.", "LipGrow (Dong et al., 2020) proposes a learning algorithm with an automatic growing scheduler for convolution nets.", "At the same time, many studies have been conducted on the model growing operators.", "Network Morphism (Wei et al., 2016, 2017) manages to grow a layer to multiple layers with the represented function intact.", "Net2net (Chen et al., 2015) is a successful application to transfer knowledge to a wider network with function-preserving initialization.", "Similar ideas can be discovered in many network architectures, including progressive growing of GAN (Karras et al., 2017) and Adaptive Computation Time (Graves, 2016; Jernite et al., 2016).", "As large-scale pre-training keeps advancing the state-of-the-art (Devlin et al., 2019; Radford, 2018), their overwhelming computational consumption becomes the major burden towards further developing more powerful models (Brown et al., 2020).", "Preliminary application of progressive training has been made on Transformer pre-training.", "(Devlin et al., 2019) designs two-stage training with a reduced sequence length for the first 90% of updates.", "(Gong et al., 2019) stack shallow model trained weights to initialize a deeper model, which grows the BERT-base model on the depth dimension and achieves 25% shorter training time.", "In this work we empirically verify the importance of balancing different dimensions in Transformer growth and propose compound growth operators, which integrates operators for more than one dimension.", "Moreover, we conduct controlled experiments on various design choices of growth operators, which provides a practical guidance to algorithm design.", "Our final model speeds up the training of the BERT-base and BERT-large models by 73 .", "6% and 82 .", "2% in walltime respectively while achieving comparable performance." ]
[ "abstain", "objective", "objective", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain" ]
[ "We propose a novel recurrent neural network-based approach to simultaneously handle nested named entity recognition and nested entity mention detection.", "The model learns a hypergraph representation for nested entities using features extracted from a recurrent neural network.", "In evaluations on three standard data sets, we show that our approach significantly outperforms existing state-of-the-art methods, which are feature-based.", "The approach is also efficient: it operates linearly in the number of tokens and the number of possible output labels at any token.", "Finally, we present an extension of our model that jointly learns the head of each entity mention.", "Named entity recognition (or named entity detection) is the task of identifying text spans associated with proper names and classifying them according to their semantic class such as person, organization, etc.", "It is related to the task of mention detection (or entity mention recognition) in which text spans referring to named, nominal or prominal entities are identified and classified according to their semantic class (Florian et al., 2004).", "Both named entity recognition and entity mention detection are fundamental components in information extraction systems: several downstream tasks such as relation extraction (Mintz et al., 2009), coreference resolution (Chang et al., 2013) and fine-grained opinion mining (Choi et al., 2006) rely on both.", "Many approaches have been successfully employed for the tasks of named entity recognition and mention detection, including linear-chain conditional random fields (Lafferty et al., 2001) and semi-Markov conditional random fields (Sarawagi and Cohen, 2005).", "However, most such methods suffer from an inability to handle nested named entities, nested entity mentions, or both.", "As a result, the downstream tasks necessarily ignore these nested entities along with any semantic relations among them.", "Consider, for example, the excerpts below: (S1) Employing the [ EBV transformed [ human B cell line ] CELL LINE ] CELL LINE SKW6.4, we demonstrate . . . (S2) . . . [ the burial site of [ Sheikh Abbad ] PERSON ] LOCATION is located . . . S1 shows a nested named entity from the GENIA dataset (Ohta et al., 2002): human B cell line and EBV transformed human B cell line are both considered named entities of type CELL LINE where the former is embedded inside the latter.", "S2, derived from the ACE corpora 1 , shows a PERSON named entity (Sheikh Abbad) nested in an entity mention of type LOCATION (the burial site of Sheikh Abbad).", "Most existing methods for named entity recognition and entity mention detection would miss the nested entity in each sentence.", "Unfortunately, nested entities can be fairly common: 17% of the entities in the GENIA corpus are embedded within another entity; in the ACE corpora, 30% of sentences contain nested named entities or entity mentions, thus warranting the development of efficient models to effectively handle these linguistic phenomena.", "Feature-based methods are the most common among those proposed for handling nested named entity and entity mention recognition.", "Alex et al. 1 https://catalog.ldc.upenn.edu/ LDC2005T09 (ACE2004) and https://catalog.", "ldc.upenn.edu/LDC2006T06 (ACE2005) 861 (2007), for example, proposed a cascaded CRF model but it does not identify nested named entities of the same type.", "Finkel and Manning (2009) proposed building a constituency parser with constituents for each named entity in a sentence.", "Their approach is expensive, i.e., time complexity is cubic in the number of words in the sentence.", "Lu and Roth (2015) later proposed a mention hypergraph model for nested entity detection with linear time complexity.", "And recently, Muis and Lu (2017) introduced a multigraph representation based on mention separators for this task.", "All of these models depend on manually crafted features.", "In addition, they cannot be directly applied to extend current state-of-the-art recurrent neural network-based models for flat named entity recognition (Lample et al., 2016) or the joint extraction of entities and relations (Katiyar and Cardie, 2016) to handle nested entities.", "In this paper, we propose a recurrent neural network-based model for nested named entity and nested entity mention recognition.", "We present a modification to the standard LSTM-based sequence labeling model (Sutskever et al., 2014) that handles both problems and operates linearly in the number of tokens and the number of possible output labels at any token.", "The proposed neural network approach additionally jointly models entity mention head 2 information, a subtask found to be useful for many information extraction applications.", "Our model significantly outperforms the previously mentioned hypergraph model of Lu and Roth (2015) and Muis and Lu (2017) on entity mention recognition for the ACE2004 and ACE2005 corpora.", "It also outperforms their model on joint extraction of nested entity mentions and their heads.", "Finally, we evaluate our approach on nested named entity recognition using the GENIA dataset and show that our model outperforms the previous state-of-the-art parser-based approach of Finkel and Manning (2009).", "Several methods have been proposed for named entity recognition in the existing literature as summarized by Nadeau and Sekine (2007) in their survey paper.", "Early techniques in the supervised do-main have been based on hidden markov models (e.g., Zhou and Su (2002)) or, later, conditional 2 This involves identifying the headword of a named entity or entity mention.", "random fields (CRFs) (e.g., McDonald and Pereira (2005)).", "Many fewer approaches, however, have addressed the problem of nested entities.", "Alex et al. (2007) presented several techniques based on CRFs for nested named entity recognition for the GENIA dataset.", "They obtained their best results from a cascaded approach, where they applied CRFs in a specific order on the entity types, such that each CRF utilizes the output derived from previous CRFs.", "Their approach could not identify nested entities of the same type.", "Finkel and Manning (2009) proposed a CRF-based constituency parser for nested named entities such that each named entity is a constituent in the parse tree.", "Their model achieved state-of-the-art results on the GENIA dataset.", "However, the time complexity of their model is O ( n 3 ) , where n is the number of tokens in the sentence, making inference slow.", "As a result, we do not adopt their parse tree-based representation of nested entities and propose instead a linear time directed hypergraph-based model similar to that of Lu and Roth (2015).", "Directed hypergraphs were also introduced for parsing by Klein and Manning (2001).", "While most previous efforts for nested entity recognition were limited to named entities, Lu and Roth (2015) addressed the problem of nested entity mention detection where mentions can either be named, nominal or pronominal.", "Their hypergraph-based approach is able to represent the potentially exponentially many combinations of nested mentions of different types.", "They adopted a CRF-like log-linear approach to learn these mention hypergraphs and employed several hand-crafted features defined over the input sentence and the output hypergraph structure.", "Our approach also learns a similar hypergraph representation with differences in the types of nodes and edges in the hypergraph.", "It does not depend on any manually crafted features.", "Also, our model learns the hypergraph greedily and significantly outperforms their approach.", "Recently, Muis and Lu (2017) introduced the notion of mention separators for nested entity mention detection.", "In contrast to the hypergraph representation that we and Lu and Roth (2015) adopt, they learn a multigraph representation and are able to perform exact inference on their structure.", "It is an interesting orthogonal possible approach for nested entity mention detection.", "How-862 ever, we will show that our model also outperforms their approach on all tasks.", "Recently, recurrent neural networks (RNNs) have been widely applied to several sequence labeling tasks achieving state-of-the-art results.", "Lample et al. (2016) proposed neural models based on long short term memory networks (LSTMs) and CRFs for named entity recognition and another transition-based approach inspired by shift-reduce parsers.", "Both models achieve performance comparable to a state-of-the-art model (Luo et al., 2015), but neither handles nested named entities.", "Figure 1 shows the desired sequence tagging output for each of three overlapping PER entities (his, his fellow pilot and his fellow pilot David Williams) according to the standard BILOU tag scheme.", "Our approach relies on the fact that we can (1) represent these three tag sequences in the single hypergraph structure of Figure 2 and then (2) design an LSTM-based neural network that produces the correct nested entity hypergraph for a given input sentence.", "In the paragraphs just below we provide a general description of hypergraphs and our task-specific use of them.", "Sections 3.1 and 3.2 describe the hypergraph construction process; Section 4 presents the LSTM-based sequence tagging method for automating hypergraph construction.", "We express our structured prediction problem such that it corresponds to building a hypergraph that encodes the token-level gold labels for all entities in the input sentence.", "3 In particular, we represent the problem as a directed hypergraph.", "For those new to this formalism, directed hypergraphs are very much like standard directed graphs except that nodes are connected by hyperarcs that connect a set of tail nodes to a set of head nodes.", "To better explain our desired output structure, we further distinguish between two types of hyperarcs normal edges (or arcs) that connect a single tail node to a single head node, and hyperarcs that contain more than one node either as the head or as the tail.", "The former are shown as straight lines in Figure 2; the latter as curved edges.", "3 We note that the complete hypergraph for the example in Figure 1 would include nodes for all possible label types at each timestep and all possible hyperarcs between them.", "In this work, however, we only greedily build a sub-hypergraph for the gold labels when training.", "In our encoding of nested entities, a hyperarc is introduced when two or more entity mentions requiring different label types are present at the same position.", "In Figure 2, for example, the nodes O (corresponding to the input token that) and the nodes U PER and B PER (corresponding to the input token his) are connected by a hyperarc because three entity mentions start at this time step from the tail O node (two of which share the B PER tag).", "4 3.1 Hypergraph Construction Let us first discuss how the problem of nested entity recognition can be expressed as finding a hypergraph.", "Our goal is to represent the BILOU tag sequences associated with his, his fellow pilot and his fellow pilot David Williams as the single hypergraph structure of Figure 2.", "This is accomplished by collapsing the shared states (labels) in the output entity label sequences into a single state as shown in Figure 2: e.g., the three O labels for that become a single O; the two B PER labels at his are collapsed into one B PER node that joins U PER, the latter of which represents the entity mention his.", "Thus at any time step, the representation size is bounded by the number of possible output states instead of the potentially exponential number of output sequences.", "We then also adjust the directed edges such that they have the same type of head node and the same type of tail node as before in Figure 1.", "If we look closely at Figure 2 then we realise that there is an extra O node in the hypergraph corresponding to the token his which did not appear in any entity output sequence in Figure 1: in our task-specific hypergraph construction we make sure that there is an O node at every timestep to model the possibility of beginning of a new entity.", "The need for this will become more clear in Section 4.", "Note that the hypergraph representation of our model is similar to Lu and Roth (2015).", "Also, the expressiveness of our model is exactly the same as Lu and Roth (2015); Muis and Lu (2017).", "The ma-jor difference in the two approaches is in learning.", "4 In contrast, note that the nodes L PER and O corresponding to the input token pilot and the node O corresponding to the token David are connected by normal edges.", "Hence, our hypergraph structure contains only one special kind of hyperarc which connects a single tail node to multiple head nodes.", "We do not have hyperarcs that connect multiple tail nodes to a single head node.", "In this section, we discuss our assignment of probabilities to all the possible edges from a tail node which helps in the greedy construction of the hypergraph.", "Thus at any timestep t , let g t 1 be the tail node and x be the current word of the sentence; then we model probability distribution over all the possible types of head nodes (different output tag types) conditioned on the tail node and the current word token.", "In our work we use hidden representations learned from an LSTM model as features to learn these probability distributions using a cross-entropy objective.", "It is important to note that there are two types of directed edges in this hypergraph simple edges for which there is only one head node for every tail node which can be learned as in a traditional sequence labeling task, or hyperarcs that connect more than one head node to a tail node.", "We learn the set of head nodes connected to a tail node by expressing it as a multi-label learning problem as described in Section 5.", "As described in Section 3.2, we can assign probabilities to the different types of edges in the hypergraph and at the time of decoding we choose for each token the (normal) edge(s) with maximum probability and the hyperarcs with probability above a predefined threshold.", "Thus, we can extract edges at the time of decoding.", "Ultimately, however, we are interested in extracting nested entities from the hypergraph.", "For this, we construct an adjacency matrix from the edges discovered and perform depth-first search from the sentence-initial token to discover the entity mentions.", "This is described in detail in Section 5.1.", "We use a standard LSTM-based sequence labeling model to learn the nested entity hypergraph structure for an input sentence.", "Figure 3 shows part of the network structure.", "It is a standard bidirectional LSTM network except for a difference in the top hidden layer.", "When computing the representation of the top hidden layer L at any time step t , in addition to making use of the hidden unit representation from the previous time step t 1 and hidden unit representation from the preceding layer L 1 , we also input the label embedding of the gold labels from the previous time step.", "For the token fellow in Figure 3, for example, we compute three different top hidden layer representations, conditioned respectively on the three labels U PER, B PER and O from the previous time step t 1 .", "Thus, we can model complex interactions between the input and the output.", "Before passing the learned hidden representation to the next time step, we average the three different top hidden layer representations.", "In this manner, we can model the interactions between the different overlapping labels and also it is computation-864 Figure 3: Dynamically computed network structure based on bi-LSTMs for nested entity mention extraction.", "We use a multi-layer bi-directional LSTM encoder, for its strength in capturing long-range dependencies between tokens, a useful property for information extraction tasks.", "Using LSTMs, we can compute the hidden state h t in the forward direction and h t in the backward direction for every token, and use a linear combination of them as the token representation: h ( l ) t = LSTM( x t , h t 1 ) h ( l ) t = LSTM( x t , h t +1 ) z ( l ) t = V h ( l ) t + V h ( l ) t + b l 4.2 Top Hidden Layer At the top hidden layer, we have a decoder-style model, with a crucial twist to accommodate the hypergraph structure, which may have multiple gold labels at the previous step.", "At each token t and for each gold label at the previous step g kt 1 , our network takes the hidden representation from the previous layer z ( L 1) t , the hidden decoder state h ( L ) t 1 , as well as the gold label embedding g kt 1 from the previous time step, and computes: h ( L ) ,k t = LSTM( z ( L 1) t , h ( L ) t 1 , g kt 1 ) Unlike the encoder LSTM, this decoder LSTM is single-directional and bifurcates when multiple gold labels are present.", "We use the decoder hidden states h ( L ) ,k t in the output layer for prediction, as explained in Section 4.3.", "However, before passing the hidden representation to the next time step we average h ( L ) ,k t over all the gold labels k : h ( L ) t = 1 | G t 1 | X k h ( L ) ,k t Thus, h ( L ) t summarizes the information for all the gold labels from the previous time step.", "For each token t and previous gold label g kt 1 , we use the decoder state h ( L ) ,k t to predict a probability distribution over the possible candidate labels using a linear layer followed by a normalizing transform (illustrated below with softmax).", "The outputs can be interpreted as conditional probabilities for the next label given the current gold label: o kt = Uh ( L ) ,k t + b e kt = softmax( o kt ) p ( y t = c | y t 1 = g kt 1 ) = ( e kt ) c Special care is required, however, since the desired output has hyperarcs.", "As shown in Figure 2, there is an hyperarc between I PER corresponding to the token fellow and the label set L PER and I PER corresponding to the token pilot.", "Thus, in our network structure conditioned on the previous label I PER in this case, we would like to predict both L PER and I PER as the next labels.", "To accommodate this, we use a multi-label training objective, as described in Section 5.", "We train our model using two different multi-label learning objectives.", "The idea is to represent the 865 ACE2004 ACE2005 Method P R F1 P R F1 MH-F (Lu and Roth, 2015) 70.0 59.2 63.8 70.0 56.9 62.8 Muis and Lu (2017) 72.7 58.0 64.5 69.1 58.1 63.1 LSTM-flat 70.3 48.4 57.3 62.4 49.4 55.1 LSTM-output layer 72.0 63.3 67.4 66.3 68.2 67.2 Our model (softmax) 72.2 65.2 68.5 70.1 67.9 69.0 Our model (sparsemax) 73.6 71.8 72.7 70.6 70.4 70.5 Table 1: Performance on ACE2004 and ACE2005 test set on mention extraction and classification.", "gold labels as a distribution over all possible labels, encoded as a vector e .", "Hence, for simple edges, the distribution has a probability of 1 for the unique gold label ( e g = 1 ), and 0 everywhere else.", "For hyperarcs, we distribute the probability mass uniformly over all the gold labels in the gold label set ( e kg = 1 | G | for all k).", "Thus, for the example described earlier in Section 4.3, both the labels L PER and I PER receive a probability of 0 .", "5 in the gold label distribution e kt , conditioned on the label I PER from the previous time step.", "Softmax.", "Our first training method uses softmax to estimate the predicted probabilities, and the KL-divergence multi-label loss between the true distribution e kt and the predicted distribution e kt = softmax( o kt ) : kt (softmax) = X c (cid:16) e kt (cid:17) c log (cid:16) e kt (cid:17) c Sparsemax.", "Our second training method makes use of sparsemax , recently introduced by Martins and Astudillo (2016) as a sparse drop-in replacement to softmax, as well as a loss function.", "Unlike softmax, which always outputs a nonzero probability for any output, sparsemax outputs zero probability for most of the unlikely classes, leading to good empirical results on multi-label tasks.", "For our problem, there are only a few nested entities at any timestep in the gold labels thus using a training objective that learns a sparse distribution is more appropriate.", "Sparsemax can be used to filter part of the output space as in the case for multi-label problems thus leaving non-zero probability on the desired output labels.", "Formally, sparsemax returns the euclidean projection of its input o onto the probability simplex: e = sparsemax( o ) := argmin e k o e k 2 The corresponding loss, a sparse version of the KL divergence, is (up to a constant): kt (sparsemax) = 2 e kt > o kt + X c :( e kt ) c 6 =0 (cid:16) ( o kt ) 2 c 2 (cid:17) This function is convex and differentiable, and the quantity is a biproduct of the simplex projection, as described in Martins and Astudillo (2016).", "For either choice of probability estimation, the total loss of a training sample is the sum of losses for each token and for each previous gold label: L = X t X k G t 1 kt .", "At the time of inference, we greedily decode our hypergraph from left-to-right to find the most likely sub-hypergraph.", "During training, at each timestep the most likely label set is learned conditioned on a gold label from the previous timestep.", "However, gold labels are not available at test time.", "Thus, we use the predicted labels from the previous time step as an input to the current time step to find the most likely label set.", "We use a hard threshold T to determine the predicted label set P k t = { c : (cid:0) e k t (cid:1) c > T } We can get the most likely label set P ct for any predicted label at the previous time step c P kt 1 using the above decoding strategy.", "We now combine these inferences to find the most likely entity mention sequences.", "We construct an adjacency matrix A for each time step, such that A [ e ct 1 ][ e kt ] += 1 for every c in the predicted label set P kt at timestep t conditioned on e kt and for every k in predicted labels P t 1 at time step t 1 .", "This can be viewed as a directed hypergraph with several connected components.", "We then perform a depth-first search on this directed hypergraph to find all the entity mentions in the sentence.", "The ACE datasets also have annotations for mention heads along with the entity mentions.", "For example, a sentence with the entity mention the U.S. embassy also contains an annotation for its head word which is embassy in this case.", "Thus, we modify our model to also extract the head of the entity mentions for ACE dataset.", "We jointly model the entity mentions and their heads.", "To do this, we propose a simple extension to our model by only changing the output label sequence.", "We introduce new labels starting with H to indicate that the current token in the entity mention is part of its head.", "Thus, we only change the output label sequence for the entity mentions to include the head label: We train with the label sequence B ORG I ORG H ORG instead of B ORG I ORG L ORG.", "Also, for all our entity sequences we predict the O tag at the end, hence we can still extract the entity mentions.", "At decoding time, we output the sequence of words with the H tag as the head words for a mention.", "We evaluate our model on two tasks nested entity mention detection for the ACE corpora and nested named entity recognition for the GENIA dataset.", "We perform experiments on the English section of the ACE2004 and ACE2005 corpora.", "There are 7 main entity types Person (PER ), Organization (ORG ), Geographical Entities (GPE ), Location (LOC ), Facility (FAC ), Weapon (WEA ) and Vehicle (VEH ).", "For each entity type, there are annotations for the entity mention and mention heads.", "We use a strict evaluation metric similar to Lu and Roth (2015): an entity mention is considered correct if both the mention span and the mention type are exactly correct.", "Similarly, for the task of joint extraction of entity mentions and mention heads, the mention span, head span and the entity type should all exactly match the gold label.", "We compare our model with the feature-based model (MH-F) on hypergraph structure (Lu and", "Roth, 2015) on both entity mention detection as well as the joint mention and mention heads extraction.", "We also compare with Muis and Lu (2017) on entity mention detection only as their model cannot detect head phrases of the entity mentions.", "Lu and Roth (2015) compare their approach with CRF-based approaches such as a linear-chain CRF, semi-markov CRF and a cascaded approach (Alex et al., 2007) and show that their model outperforms them.", "Hence, we do not include those results in our paper.", "We also implement several LSTM-based baselines for comparison.", "Our first baseline is a standard sequence labeling LSTM model (LSTM-flat).", "A sequence model is not capable of handling the nested mentions, so we remove the embedded entity mention and keep the mention longer in length.", "Our second baseline is a hypergraph model (LSTM-output layer) except that the dependencies are only modeled at the output layer and hence there are no connections to the top-hidden layer from the label embeddings from the previous timestep; instead, these connections are limited to the output layer.", "We use Adadelta (Zeiler, 2012) for training our models.", "We initialize our word vectors with 300-dimensional word2vec (Mikolov et al., 2013) word embeddings.", "These word embeddings are tuned during training.", "We regularize our network using dropout (Srivastava et al., 2014), with the dropout rate tuned on the development set.", "There are 3 hidden layers in our network and the dimensionality of hidden units is 100 in all our experiments.", "And we set the threshold T as 0.3.", "We show the performance of our approaches in Table 1 compared to the previous state-of-the-art system (Lu and Roth, 2015; Muis and Lu, 2017) on both the ACE2004 and ACE2005 datasets.", "We find that our LSTM-flat baseline that ignores embedded entity mentions during training performs worse than Lu and Roth (2015); however, our other neural network-based approaches all outperform the previous feature-based approach.", "Among the neural network-based models, we find that our models that construct a hypergraph perform better than the LSTM-flat models.", "Also, our approach that models dependencies between the input and the output by passing the prediction from the pre-867 ACE2004 ACE2005 Method P R F1 P R F1 MH-F (Lu and Roth, 2015) 74.4 50.0 59.8 63.4 53.8 58.3 Our model(softmax) 68.2 60.5 64.2 67.5 62.3 64.8 Our model(sparsemax) 72.3 66.8 69.7 70.6 69.8 70.2 Table 2: Performance on ACE2004 and ACE2005 test set on joint entity mention and its head prediction.", "vious timestep as shown in Figure 3 performs better than the LSTM-output layer model which only models dependencies at the output layer.", "Also, as expected, the sparsemax method that produces a sparse probability distribution performs better than the softmax approach for modeling hyperedges.", "In summary, our sparsemax model is the best performing model.", "Joint Modeling of Heads We report the performance of our best performing models on the joint modeling of entity mentions and its head in Table 2.", "We show that our sparsemax model is still the best performing model.", "We also find that as the total number of possible labels at any timestep increases because of the way we implemented the entity heads, the gains that we get after incorporating sparsemax are significantly higher compared to the results shown in Table 1.", "We also evaluate our model on the GENIA dataset (Ohta et al., 2002) for nested named entity recognition.", "We follow the same dataset split as Finkel and Manning (2009); Lu and Roth (2015); Muis and Lu (2017).", "Thus, the first 90% of the sentences were used in training and the remaining 10% were used for evaluation.", "We also consider five entity types DNA, RNA, protein, cell line and cell type.", "We compare our model with Finkel and Manning (2009) based on a constituency CRF-based parser and the mention hypergraph model by Lu and Roth (2015) and a recent multigraph model by Muis and Lu (2017).", "Table 3 shows the performance of our different models compared to the previous models.", "Interestingly, our LSTM-flat model outperforms Lu and Method P R F1 Finkel and Manning (2009) 75.4 65.9 70.3 MH-F (Lu and Roth, 2015) 72.5 65.2 68.7 Muis and Lu (2017) 75.4 66.8 70.8 LSTM-flat 75.5 63.5 68.9 LSTM-output layer 78.4 67.9 72.8 Our model (softmax) 76.7 71.1 73.8 Our model (sparsemax) 79.8 68.2 73.6 Table 3: Performance on the GENIA dataset on nested named entity recognition.", "Roth (2015).", "We suspect that it is because we use pretrained word embeddings 5 trained on PubMed data (Pyysalo et al., 2013) whereas Lu and Roth (2015) did not have access to them.", "We again find that our neural network model outperforms the previous state-of-the-art (Finkel and Manning, 2009; Muis and Lu, 2017) system.", "However, we see that both softmax and sparsemax models perform comparably on this dataset.", "Consistent with existing results on the joint modeling of related tasks in NLP, we find that joint modeling of heads and their entity mentions leads to an increase in F-score by 1pt (i.e., 71.4 for the sparsemax model on the ACE2005 dataset) on the performance of the entity mentions.", "The precision on extracting entity mentions is 72.1 (vs.", "70.6 in Table", "1) for our sparsemax model for the ACE2005 dataset.", "Example S1 below compares the output from a softmax vs. a sparsemax model on the joint modeling of an entity mention and its head on the ACE2005 dataset.", "Gold-standard annotations are shown in red .", "5 Word vectors trained on PubMed data are available at http://bio.nlplab.org/#source-data .", "Based on the gold standard, the models are required to extract their an entity mention of type PER as well as its head and their patients, which overlaps with the previous entity mention their and has the head word patients.", "This means that the models are required to predict a hyperedge from O to H PER; B PER.", "We find that the softmax model shown in blue can only predict the entity mention their omitting completely the entity mention their patients whereas the sparsemax model shown in green can predict both nested entities.", "Overall then, sparsemax seems to allow the modeling of hyperedges more efficiently compared to the softmax model and performance gains are due to extracting more nested entities with the help of sparsemax model.", "We also manually scanned the test set predictions on ACE dataset for our sparsemax model to understand its current limitations.", "the sparsemax model predicts both entity mentions of they as PER entity type.", "Only if the previous sentence in the corpus is accessible And if you ride inside that tank, it is like riding in the bowels of a dragon can we understand that they in S2 refers to the tank and hence is a VEH .", "Thus, our model can be improved by providing additional context for each sentence rather than making predictions on each sentence in the corpus independently.", "In the example sentences, It refers to a facility and an event, respectively.", "Our model does not distinguish between the two cases and always predicts the token It as a non-entity.", "We found this true for all occurrences of the token It in our test set.", "The incorporation of coreference information can potentially overcome this limitation.", "Inconsistency in Gold-standard Annotations.", "We also identified potential inconsistencies in the gold-standard annotations.", "For S5, the gold-standard annotation for both of these teams is an ORG entity mention with the token teams as its head word.", "Our sparsemax model identifies the entity mention correctly but instead predicts the token both as the head.", "It also identifies these teams as another nested entity mention with the head word teams.", "In contrast, however, we also found entity mentions such as all of the victims that get a little money for which the gold-standard has all annotated as its head and another nested mention the victims that get a little money with victims as the head.", "We recognize this as an inconsistency in the gold-standard annotation.", "In this paper, we present a novel recurrent network-based model for nested named entity recognition and nested entity mention detection.", "We propose a hypergraph representation for this problem and learn the structure using an LSTM network in a greedy manner.", "We show that our model significantly outperforms a feature based mention hypergraph model (Lu and Roth, 2015) and a recent multigraph model (Muis and Lu, 2017) on the ACE dataset.", "Our model also outperforms the constituency parser-based approach of Finkel and Manning (2009) on the GENIA dataset.", "In future work, it would be interesting to learn global dependencies between the output labels for such a hypergraph structure and training the model globally.", "We can also experiment with different representations such as the one in Finkel and Manning (2009) and use the recent advances in neural network approaches (Vinyals et al., 2015) to learn the constituency parse tree efficiently.", "We thank Wei Lu for help with the datasets.", "We also thank Jack Hessel, Vlad Niculae and the reviewers for their helpful comments and feedback.", "This work was supported in part by NSF grant SES-1741441 and DARPA DEFT Grant FA8750-13-2-0015.", "The views and conclusions contained herein are those of the authors and should not be 869 interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF, DARPA or the U.S. Government.", "References Beatrice Alex, Barry Haddow, and Claire Grover.", "2007.", "Recognising nested named entities in biomedical text.", "In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing .", "Association for Computational Linguistics, Stroudsburg, PA, USA, BioNLP '07, pages 6572.", "http://dl.acm.org/citation.cfm?id=1572392.1572404.", "Kai-Wei Chang, Rajhans Samdani, and Dan Roth.", "2013.", "A constrained latent variable model for coreference resolution.", "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing .", "Association for Computational Linguistics, Seattle, Washington, USA, pages 601612.", "http://www.aclweb.org/anthology/D13-1057.", "Yejin Choi, Eric Breck, and Claire Cardie.", "2006.", "Joint extraction of entities and relations for opinion recognition.", "In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing .", "Association for Computational Linguistics, Sydney, Australia, pages 431439.", "http://www.aclweb.org/anthology/W/W06/W06-1651.", "Jenny Rose Finkel and Christopher D. Manning.", "2009.", "Nested named entity recognition.", "In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing .", "Association for Computational Linguistics, Singapore, pages 141 150.", "http://www.aclweb.org/anthology/D/D09/D09-1015.", "R Florian, H Hassan, A Ittycheriah, H Jing, N Kamb-hatla, X Luo, N Nicolov, and S Roukos.", "2004.", "A statistical model for multilingual entity detection and tracking.", "In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings .", "Association for Computational Linguistics, Boston, Massachusetts, USA, pages 18.", "Arzoo Katiyar and Claire Cardie.", "2016.", "Investigating lstms for joint extraction of opinion entities and relations.", "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers .", "http://aclweb.org/anthology/P/P16/P16-1087.pdf.", "Dan Klein and Christopher D. Manning.", "2001.", "Parsing and hypergraphs.", "In Proceedings of the Seventh International Workshop on Parsing Technologies (IWPT-2001), 17-19 October 2001, Beijing, China ." ]
[ "objective", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "abstain", "other", "abstain", "other", "other", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "result", "result", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "Classifiers in natural language processing (NLP) often have a large number of output classes.", "For example, neural language models (LMs) and machine translation (MT) models both predict tokens from a vocabulary of thousands.", "The Softmax output layer of these models typically receives as input a dense feature representation, which has much lower dimensionality than the output.", "In theory, the result is some words may be impossible to be predicted via argmax, irrespective of input features, and empirically, there is evidence this happens in small language models (Demeter et al., 2020).", "In this paper we ask whether it can happen in practical large language models and translation models.", "To do so, we develop algorithms to detect such unargmaxable tokens in public models.", "We find that 13 out of 150 models do indeed have such tokens; however, they are very infrequent and unlikely to impact model quality.", "We release our algorithms and code so that others can test their models.", "1 1 Introduction Probabilistic multiclass classifiers with a large number of output classes are commonplace in NLP (Chen et al., 2016).", "For example, the vocabulary size of contemporary LMs and MT models varies from tens to hundreds of thousands (Liu et al., 2020).", "Recent advances in modelling such large vocabularies have mostly been made by improving neural network feature encoders (Devlin et al., 2019; Conneau et al., 2020).", "But irrespective of a feature encoder's expressivity (Yun et al., 2020; Raghu et al., 2017), a classifier that linearly maps lower dimensional features to higher dimensional outputs has reduced expressivity (Yang et al., 2018), with consequences that are not well understood.", "classifiers that have more output classes | C | than features d .", "For example, MT models often have subword vocabularies of size | C | 30000 , but have d 1024 .", "The expressivity penalty for such low-rank classifiers is that some output distributions cannot be represented.", "Demeter et al. (2020) identified this weakness in Softmax LMs, showing that, in theory, some tokens can never be assigned the highest probability for any input, and therefore can never be produced as argmax predictions.", "2 We call such tokens unargmaxable (see Figure 1).", "While Demeter et al. (2020) proposed an algorithm to detect unargmaxable tokens and provided evidence of their existence in small LMs, their proposed algorithm provided no guarantees and they were unable to test large LMs.", "In this paper we ask: Do unargmaxable tokens exist in large models used in practice?", "To answer this question, we develop algorithms to identify such tokens unambiguously.", "We tested 7 LMs and 143 MT models.", "Out of those, only 13 of the MT models exhibit unargmaxable tokens, and even for those cases the 2 This problem was also studied by Cover (1967) and has an interesting history of independent discovery (Smith, 2014).", "tokens are all noisy and infrequent.", "We conclude that although the expressivity constraints of low-rank Softmax may have important ramifications, most practitioners do not need to worry about tokens that are unargmaxable.", "We provide new tools for them to confirm this on their own models.", "Our contributions are the following: We explain how unargmaxable tokens can arise as a consequence of a rank constrained Softmax layer (Softmax Bottleneck).", "We extend the work of Demeter et al. (2020) with verification algorithms that include the Softmax bias term and provide an exact answer rather than an approximate one.", "We verify a large number of commonly used publicly available language and translation models for unargmaxable tokens.", "We release our algorithm so that others can inspect their models.", "1 2 Background 2.1 Low-Rank Softmax (Softmax Bottleneck) Neural network layers with higher dimensional outputs than inputs impose low-rank constraints.", "3 Such constraints commonly exist as bottlenecks in neural network hidden layers,", "e.g.", "autoen-coders (Hinton and Zemel, 1994) and projection heads in multi-head transformers (Bhojanapalli et al., 2020) among others.", "While bottlenecks make a model less expressive by restricting the functions it can represent, they are desirable both computationally (Papadimitriou and Jain, 2021), since they require less memory and computation than full-rank layers, and as a form of inductive bias, since data is assumed to approximately lie in a low dimensional manifold (McInnes et al., 2018).", "In contrast, herein we focus on the undesirable properties of a Softmax output layer with a low-rank parametrisation, also known as a Softmax Bottleneck (Yang et al., 2018).", "The crucial difference is that a Softmax Bottleneck is usually not followed by a non-linear transformation, and as such the rank constraint limits expressivity in a very rigid way by restricting outputs to a subspace.", "4 3 A layer can also be low rank if weight vectors are collinear, but we do not consider this case here.", "This constraint was shown to hurt LM perplexity (Yang et al., 2018) and non-linear augmentations have been proposed as improvements (Yang et al., 2018; Kanai et al., 2018; Ganea et al., 2019).", "To the contrary, Sainath et al. (2013) used a low-rank factorisation of the softmax layer to reduce the number of parameters in their speech recognition system by 30 50% with no increase in word-error-rate, evidencing that the loss in expressivity does not always impact aggregate metrics.", "The consequences of the loss in expressivity due to the Softmax Bottleneck vary depending on our perspective.", "When considering the flexibility of the probability distribution that can be learned, Ganea et al. (2019, Theorem 2) showed that the minimum cross entropy loss achievable decreases as we increase the rank of the Softmax layer weights.", "In this work we focus on the loss of expressivity from an argmax perspective.", "To this end, we discretise the output space of Softmax and quantify the loss in expressivity in terms of unrealisable class rankings.", "From this interpretable perspective we will see that due to the Softmax Bottleneck some rankings are not realisable and unargmaxable classes can arise as a consequence.", "Demeter et al. (2020) showed that a class is unargmaxable if its Softmax weight vector is interior to the convex hull of the remaining class weight vectors.", "They did so by proving that the interior class probability is bounded above by the probability of at least one class on the convex hull (see Figure 2 and Cover, 1967, Figure 1).", "However, in their analysis they did not address Softmax layers that include a bias term.", "We address this limitation in Section 3, thus enabling us to search 6739 for unargmaxable classes in any released model.", "To detect whether unargmaxable tokens arise in LMs without a bias term, the authors introduce an approximate algorithm that asserts whether a weight vector is internal to the convex hull.", "It is approximate since their method had a precision approaching 100% but 68% recall when compared to an exact algorithm (Qhull, Barber et al., 1996) on the first 10 dimensions of a Softmax LM.", "In Section 3.3 we introduce an exact algorithm to detect unargmaxable tokens with certainty.", "The authors use their approximate algorithm to show that AWD-LSTM LMs (Merity et al., 2018) steal probability from candidate interior words when contrasted to the probabilities assigned by a smoothed n-gram LM.", "However, they find that as they increase the dimensionality d of the Softmax weights to 200 , the effect of stolen probability begins to dissipate.", "This raises the question of whether stolen probability is of importance for neural models used in practice which also have larger Softmax weight dimensionality.", "Herein we specifically search for unargmaxable tokens in MT and LM models with larger d [256 , 512 , 1024] .", "We use the term unargmaxable rather than stolen probability to highlight that we are focussing on whether unargmaxable tokens exist and not whether the probability distibution learned by low-rank Softmax is less flexible.", "We extend our analysis to MT models since they have more practical use cases than (generative) LMs: if unargmaxable tokens exists in a MT model, then the affected tokens can never be produced when using greedy decoding.", "In our experiments we find that while unargmaxable tokens arise in limited cases, they are not of grave importance.", "In order to quantify whether unargmaxable classes arise in released LMs and MT models, we first need to introduce tractable algorithms for detecting them.", "In this Section we explain how unargmaxable classes can arise due to a Softmax Bottleneck.", "Then, we introduce a fast approximate algorithm and a slow exact algorithm which we combine to detect vocabulary tokens that cannot be predicted.", "A Softmax layer gives us the probability assigned to a target class c t for an input feature vector x R d as follows:", "(1) (2)", "where W R | C | d are the class weight vectors stacked row by row, and b R | C | is the bias term.", "The above are used to compute the logits y = Wx + b .", "In what follows, we will refer to the feature activations x in R d as the input space and the logits y in R | C | as the output space of the Softmax layer.", "As we saw in Figure 2, there are certain arrangements of Softmax weights for which a target class c t cannot be surfaced as the argmax.", "To understand this phenomenon, it will be helpful to discretise the outputs to a finer granularity: rankings (Burges et al., 2005).", "In order for a classifier to predict a class c t using an argmax decision rule, it must rank c t above all other classes by assigning it the largest probability.", "From this perspective, a classifier assigns each input x a permutation that ranks the class indices in increasing order of probability.", "As an example, if we have 4 classes and obtain probabilities P ( C | x ) = (cid:2) .", "2 .", "4 .", "1 .", "3 (cid:3) we assign x the permutation 3142 , since P ( c 3 | x ) < P ( c 1 | x ) < P ( c 4 | x ) < P ( c 2 | x ) .", "We can readily obtain the coarser argmax decision ( c 2 ) by reading off the last index of the permutation.", "A class c t is unargmaxable when all permutations that rank c t above the rest cannot be realised due to rank constraints.", "We explain how this happens by combining the following two observations.", "Observation 1. We can discretise R | C | into regions corresponding to permutations by segmenting the space with hyperplanes.", "The hyperplanes that partition the output space into regions R corresponding to permutations are a well known structure in Combinatorics, the Braid 6740 Observation (1): Discretise R | C | into permutations R 123 R 321 R 312 R 132 R 213 R 231 | C | = 3 Observation (2): Observe rank constraints Feasible logits (1) & (2) = Corollary 1: Feasible permutations R 312 R 321 R 231 R 213 R 123 R 132 R 1342 R 1324 | C | = 4 R 1342 R 1432 R 1423 R 1243 R 2143 R 2413 R 2431 R 2341 R 3241 R 3421 R 3412 R 3142 Figure 3: Illustration of Corollary 1 ( 3 rd column) as a result of Observation 1 ( 1 st column) and Observation 2 ( 2 nd column) for softmax( Wx ) , W R | C | d , d = 2 .", "5 The Braid Arrangement for 3 and 4 classes is illustrated in rows 1 and 2 of Figure 3 respectively.", "In order to be able to rank the classes according to permutation R , our network needs to be able to map an input x to region R in the output space.", "However, this is not always possible when we have a Softmax Bottleneck as we elaborate below.", "Case", "i) softmax( Wx ) .", "By calculating y = Wx , the class logits y are a linear combination of d columns of W .", "Therefore, when d < | C | we can only represent a d -dimensional subspace of R | C | at best.", "This feasible subspace is illustrated as a grey plane in the middle column of Figure 3. Case", "ii) softmax( Wx + b ) .", "If we also have a bias term b the model can choose how to offset the subspace.", "When the bias term b is not in the 5 See Appendix B for more details on hyperplane arrangements and the Braid Arrangement specifically.", "column space of W the zero vector 0 is no longer a feasible y and instead of a linear subspace we have an affine subspace.", "See Figure 7 in the Appendix for an illustration comparing the two cases.", "Corollary 1. A Softmax classifier parametrised by W and b can rank classes in the order of permutation iff the affine subspace spanned by W and b intersects region R of the Braid Arrangement .", "6 When d < | C | 1 there are regions that cannot be intersected.", "7 The feasible permutations in our example correspond to the regions formed on the grey plane illustrated in the rightmost column of Figure 3. Note that for | C | = 4 only 12 out of 24 regions can be intersected.", "As we make the Softmax Bottleneck narrower by reducing the dimension d of the Softmax inputs, more permutations become infeasible (Good and 6 This insight of slicing the Braid Arrangement was introduced in Kamiya et al. (2011).", "7 When d = C 1 we can still intersect all regions, because the Braid Arrangement always has rank | C | 1 (all its normal vectors are perpendicular to the all ones vector 1 ).", "Tideman, 1977; Kamiya and Takemura, 2005).", "Importantly, if we choose | C | and d and whether to use a bias term, changing the values of the Softmax weights changes the set of feasible permutations but not the cardinality of the set (Cover, 1967; Smith, 2014).", "See Appendix C for more details.", "Corollary 2. Class c t is unargmaxable when any permutation that would rank class c t above all other classes is infeasible.", "Without a bias term the regions corresponding to permutations are unbounded (see the rightmost column of Figure 3).", "As such, imposing any range restrictions on the Softmax layer inputs x does not change the feasible regions as long as the restriction includes the origin.", "However, when we introduce a bias term we also get bounded regions (see Figure 7 in the Appendix that contrasts the two situations).", "Therefore, in this case the scale of the inputs to the Softmax layer also matters.", "If the inputs do not have a large enough range, there will be regions that exist but cannot be reached by the feature encoder.", "Given a softmax layer parametrised by W and b , are there any classes that are unargmaxable?", "We first describe a slow, but exact algorithm to answer this question.", "An exact algorithm will either prove class c t is argmaxable by returning a feasible point x : argmax ( Wx + b ) = c t or it will prove c t is unargmaxable by verifying no such point exists.", "To check if a region exists that ranks c t above all others, we need to find an input x R d that satisfies the following constraints: P ( c i | x ) < P ( c t | x ) , i : 1 i | C | , i = t (4) Each of the above constraints is equivalent to restricting x to a halfspace (see Appendix A).", "Hence, if all above inequalities are enforced, x is restricted to an intersection of halfspaces.", "If the intersection of halfspaces is empty, there is no x for which class c t can be ranked above all others and hence c t is unargmaxable.", "We can find a point in an intersection of halfspaces via linear programming, albeit we found this algorithm to be slow in practice for | C | > 1000 .", "The Chebyshev center of a polytope (Boyd et al., 2004, p. 417) is the center of the largest ball of radius r that can be embedded within the polytope.", "We can find the Chebyshev center x and the radius r with the following linear programme.", "maximise r subject to w i x + r w i 2 b i , 1 i | C | 1 x 100 x 100 r > 0 (6) Where w i = w c i w c t and b i = b c i b c t , i : c i = c t .", "We further constrain x to guarantee the regions are bounded, since the Chebyshev center is not defined otherwise.", "This constraint also captures the fact that neural network activations are not arbitrarily large.", "If the above linear programme is feasible, we know that class c t is argmaxable and we also get a lower bound on the volume of the region for which it is solvable by inspecting r .", "On the other hand, if the linear programme is infeasible, c t is unargmaxable.", "The exact algorithm was too slow to run for the whole vocabulary.", "In order to avoid running the exact algorithm for every single vocabulary item, we developed an incomplete algorithm (Kautz et al., 2009) with a one-sided error, which can quickly rule out most tokens, leaving only a small number to be checked by the exact algorithm.", "It proves that c t is argmaxable by finding an input x for which c t has the largest activation.", "Unlike the exact algorithm, if no solution exists it cannot prove that the token is unargmaxable .", "Hence, we terminate our search after a predetermined number of steps.", "We denote any tokens not shown to be argmaxable by the approximate algorithm as potentially unargmaxable and we run the exact algorithm on them.", "An illustration of the way we combine the exact and approximate algorithms to decide whether class c t is argmaxable can be seen in Figure 4.", "to guide us towards a point x for which c t has the largest activation.", "To show that class c t is argmaxable, it suffices to find an input x for which the largest probability is assigned to c t .", "Empirically we found this to be easy for most classes.", "We begin by interpreting the actual weight vector as the candidate input x = w c t .", "We do so since the dot product of two vectors is larger when the two vectors point in the same direction.", "8 While the magnitude of the vectors affects the dot product, we found the above initialisation worked well empirically.", "When c t is not the argmax for x and c i is instead, Relation 5 for c i and c t will have the wrong sign.", "The sign of this relation defines which side of the Braid hyperplane for c i and c t we are on.", "To correct the sign, we construct the normal vector and offset of the Braid hyperplane (Lines 2, 3 in Figure 5), compute the distance of x from it (Line 5), and reflect x across it (Line 6).", "9 We repeat the above operation until either c t is the argmax or we have used up our budget of N steps.", "In this Section we use the combined algorithm from", "Figure 4 to search models for unargmaxable tokens.", "We test 7 LMs and 143 MT models.", "We find that unargmaxable tokens only occur in 13 MT models, but these are mostly infrequent and noisy vocabulary tokens.", "We therefore do not expect such tokens to affect translation quality per se.", "We also find that nearly all vocabulary tokens of LMs and student MT models can be verified with less than N = 10 steps of the approximate algorithm.", "In contrast, other MT models need thousands of steps and also rely on the exact algorithm.", "In this sense, models that need fewer steps are easier to verify: the search problem for their arrangement of Softmax weights is easier.", "Throughout the following experiments we assumed the Softmax inputs were bounded in magnitude for all dimensions 100 x i 100 .", "As we mentioned in Subsection 3.2.1, if we have a Softmax bias term, there are bounded regions.", "If the bounded regions are large, even though the outputs are not theoretically bounded, they are practically bounded since neural network feature encoders cannot produce arbitrarily large activations and some regions may be unreachable 10 .", "For the approximate algorithm, we search for a solution with a patience of N = 2500 steps and resort to the exact algorithm if the approximate method fails or returns a point outside the aforementioned bounds.", "We use Gurobi (Gurobi Optimization, 2021) as the linear programme solver.", "We accessed the model parameters either via NumPy (Harris et al., 2020) or PyTorch (Paszke et al., 2019).", "The experiments took 3 days to run on an AMD 3900X 12 -core CPU using 10 threads and 64Gb of RAM.", "We checked 7 widely used LMs for unargmaxable tokens.", "While some of these models such as 9 When no offset is involved, the reflection operation is the Householder transformation (Householder, 1958).", "10 The validity of our assumption is only relevant for models we find to be bounded.", "BERT (Devlin et al., 2019) are not directly used for generation, a recent trend is to use these large LMs as prompt models (Liu et al., 2021) for few shot learning.", "A prompt model obviates the need for a separate classifier by rephrasing a classification task as slot filling given a task specific template.", "Prompt approaches commonly choose the answer for the slot by argmaxing the Softmax distribution obtained by a LM.", "Hence we verify that there are no answers that are unargmaxable.", "BERT, RoBERTa (Liu et al., 2019), XLM-RoBERTa (Conneau et al., 2020) and GPT2 (Rad-ford et al., 2019) did not exhibit any unargmaxable tokens and can be assessed without resorting to the exact algorithm (see Table 4 in the Appendix).", "Moreover, the LMs were very easy to verify with the approximate algorithm requiring less than 1 .", "2 steps per token on average.", "Machine Translation (13/143 Unargmaxable)", "In the case of MT models, the feature encoder comprises the whole encoder-decoder network excluding the last layer of the decoder.", "We first focus on models which we found to have unargmaxable tokens and then briefly describe models that did not.", "A summary of the results and characteristics of the models we checked can be seen in Table 1. More detailed results can be found in Tables 5, 6, 7 and 8 in the Appendix.", "Helsinki NLP OPUS (13/32 Unargmaxable).", "The 32 models we use for this subset of experiments are MT models released through Hugging Face (Wolf et al., 2020).", "We use models introduced in Tiedemann and Thottingal (2020).", "These models are trained on subsets of OPUS.", "All models are transformer models trained using Marian (Junczys-Dowmunt et al., 2018).", "They include a bias term, have a tied encoder and decoder and d = 512 .", "Unargmaxable tokens, if present, will affect generation in the target language.", "We therefore restrict our analysis to the target language vocabulary.", "To 11 https://github.com/browsermt/students facilitate this, we inspect translation models for which the source and target languages have different scripts.", "We explore 32 models with source and target pairs amongst Arabic (ar), Hebrew (he), English (en), German (de), French(fr), Spanish (es), Finnish (fi), Polish (pl), Greek (el), Russian (ru), Bulgarian (bg), Korean (ko) and Japanese (ja).", "We rely on the script to disambiguate between source and target language and discard irrelevant tokens from other languages.", "We also ignore vocabulary tokens containing digits and punctuation.", "In Figure 6 we can see the number of Byte Pair Encoding (BPE; Sennrich et al., 2016) tokens that were unargmaxable for these models, sorted in decreasing order.", "As can be seen, all tokens are argmaxable for 19 / 32 language pairs.", "For the remaining 13 languages, while there can be quite a few unargmaxable tokens, most would not be expected to affect translation quality.", "Out of the set of 427 unique unargmaxable BPE tokens, 307 / 476 are single character subword tokens and only 2 are word stem BPE segments: erecti (bg-en) and (en-ru) which means preliminary in Russian.", "The rest include the <unk> token and noisy subword unicode tokens such as , and .", "On closer inspection of the SentencePiece to-keniser we found that both and erecti come up as tokenisation alternatives that make them rare and irregular.", "We found that the token was rare since it is capitalised and only occurs once, while an-other occurrence was caused by a BPE segmentation corner case due to Unicode token variation of -e .", "Other mentions having as a substring were split differently.", "In a similar vein, we found that the erecti token occurred due to BPE corner cases for erecti-0-n , erecti-lis-) , erecti-l , erecti-.", "and erecti-cle many of which are misspellings or rare word forms from clinical text.", "As such, the impact of these tokens being unargmaxable is small since there are alternative ones the MT model can prefer over them which could even correct spelling mistakes.", "FAIR WMT'19 (0/4 Unargmaxable).", "We checked 4 FAIR models (en-ru, ru-en, en-de, de-en) submitted to WMT'19 (Ng et al., 2019).", "These transformer models have d = 1024 and do not employ a Softmax bias term.", "Edinburgh WMT'17 (0/82 Unargmaxable).", "These WMT'17 submissions (Sennrich et al., 2017) were ensembles of left-to-right trained models (l2r) and right-to-left trained models (r2l).", "These were LSTMs trained with Nematus using d = 500 or d = 512 and Softmax weights tied with the decoder input embeddings.", "The models include a bias term.", "None of the models have unargmaxable tokens.", "However, we found that models that comprise an ensemble varied a lot in how easy it was to show that the vocabulary was argmaxable, despite them differing solely in the random seed used for weight initialisation.", "As an example, zh-en.l2r(1) had 8 tokens that needed to be verified with the exact algorithm, zh-en.l2r(2) had 3 and zh-en.l2r(3) had 366 .", "This highlights that random initialisation alone is enough to lead to very different arrangements of Softmax weights.", "Bergamot (0/25 Unargmaxable).", "The Bergamot project 12 model repository contains both large transformer-base and transformer-big teacher models, as well as small knowledge distilled (Kim and Rush, 2016) student models.", "Student models have d = 256 (tiny) or d = 512 (base), while teacher models have d = 1024 .", "Interestingly, we find that it is easier to show that student models are argmaxable when compared to teacher models, despite student models having Softmax weights 1 / 2 or 1 / 4 the dimensions of the teacher model.", "We conclude from our experiments that it is possible to have unargmaxable tokens, but this rarely occurs in practice for tokens that would lead to irrecoverable errors in the MT models we checked.", "A limitation of our conclusions is that beam search is usually preferred over greedy decoding for MT 12 https://browser.mt models used in practice.", "We leave the question of whether unargmaxable tokens also impact beam search for future work.", "It is challenging to make exact claims about what can cause tokens to be unargmaxable because the models we tested varied in so many ways.", "However, we outline some general trends below.", "The most general observation is that the tokens that are more likely to be unargmaxable or are hard to prove to be argmaxable are the infrequent ones.", "This can be seen in Figures 11 and 12 in the Appendix, where the x-axis contains the vocabulary of the models sorted left to right by increasing frequency.", "Each dot represents the number of steps needed to check whether a token is argmaxable or not, and as can be seen the values to the right are generally much higher than those to the left.", "This result is in line with previous work that highlights the limitations of the Softmax layer when modelling rare words for LM (Chen et al., 2016; Labeau and Cohen, 2019) and MT (Nguyen and Chiang, 2018; Raunak et al., 2020) and infrequent classes for image classification (Kang et al., 2020).", "We found that the LMs and student MT model vocabularies can be shown to be argmaxable with one step of the approximate algorithm on average.", "On the other hand, for Helsinki NLP and FAIR MT models more than 10 steps were needed.", "To put the above observations into context, we also check the behaviour of our algorithms on randomly initialised parameters.", "If we initialise a Softmax layer of | C | = 10000 classes using a uniform distribution U ( 1 , 1) we do not expect unargmaxable tokens to exist after d = 30 (see Figure 10 in the Appendix).", "Moreover, any randomly initialised parameters can be checked using the approximate algorithm with fewer steps as we increase d .", "From this perspective, it is surprising that student models were easier to show to be argmaxable than the teacher models, despite the Softmax weight dimensionality of the student models being much lower (256 for tiny, versus 1024 for teacher).", "This shows that effective neural MT models do not need to be hard to check, but nevertheless neural models trained on the original data can sometimes converge to such an arrangement of weights.", "In this work we discretised the outputs of Softmax and showed how dimensionality constraints shrink the set of feasible class rankings and can lead to some classes being impossible to predict using argmax.", "In our experiments we demonstrated that while MT models can have unargmaxable vocabulary tokens, this does not occur often in our experiments.", "Moreover, for the models we tested the unargmaxable tokens would not create discernible differences in translation quality as the tokens are noisy and infrequent.", "We release an algorithm to detect whether some classes are unargmaxable with the hope that this will be helpful to the wider community working on a plethora of different models where the observed phenomena may vary.", "In future work, we aim to investigate any learn-ability consequences more closely.", "As we saw, when using an approximate search algorithm, it is much harder to find argmaxable classes in some models than it is in others.", "Since gradient descent algorithms are also iterative search algorithms seeking optimal parameters, we hypothesise that it will be challenging to train neural network encoders to map activations to regions of the input space that a search algorithm cannot find easily.", "Hence, although some tokens may not be provably unargmaxable because of constraints imposed by the Softmax parameters of the last layer, some tokens may still be very hard to produce because of difficulties encountered by the feature encoder.", "To this end, a more holistic investigation into the consequences of the loss in expressivity in low-rank classifiers is warranted.", "Unargmaxability directly impacts fairness, since certain model outputs, further from being underrepresented, may not be represented at all.", "As we discussed, low-rank classifiers have limited expressivity compared to full rank classifiers, and thus have to explicitly choose which rankings of classes to retain feasible when using argmax prediction.", "As such, by choosing to use a low-rank model, we are allowing the data and training procedure to specify which rankings should remain feasible, and harmful biases in our data can be propagated and further exacerbated (Hooker, 2021) by our models due to unargmaxability.", "For example, it could be the case that underrepresented groups find no representation in the outputs of such models, in the extreme case where related outputs are unargmaxable.", "As researchers, we should be aware of this limitation when choosing how to parametrise our models (Hooker et al., 2019) and actively seek to either control such phenomena or verify models are not harmful before moving them from research into production.", "In addition to the above considerations, linear classification layers are vulnerable to targeted attacks via data poisoning techniques (Goldblum et al., 2020), especially under the scenario where shared models are used as feature extractors (Ji et al., 2018).", "A subset of such techniques, known as feature collisions (Shafahi et al., 2018; Gold-blum et al., 2020), exploit the arrangement of the training examples in feature space to force the misclassification of a target example.", "Attacks such as Convex Polytope (Zhu et al., 2019) and Bullseye Polytope (Aghakhani et al., 2021), specifically target the unargmaxability weakness (Cover, 1967; Demeter et al., 2020) we elaborated on in the paper.", "While such attacks assume they are able to inject examples into a training set used for fine-tuning, this is not an unrealistic assumption.", "This is especially true for recommender systems, where adversarial attacks can create fake users such that a target item is removed from a target user's top-k list (Christakopoulou and Banerjee, 2019).", "We thank Seraphina Goldfarb-Tarrant, Elizabeth Nielsen and Sabine Weber for help with languages, Beatrice Alex, Sameer Bansal, Panagiotis Eu-stratiadis, Sharon Goldwater, Chantriolnt-Andreas Kapourani, Oli Liu, Yevgen Matusevych, Kate McCurdy, Laura Perez-Beltrachini, Jesse Sigal, Mark Steedman, Ivan Titov and Sabine Weber for feedback and support, Antonio Vergari for feedback, guidance and tirelessly discussing low-rank constraints and Shay Cohen for insightful suggestions and for pointing us to OEIS.", "We also thank David Demeter for an extensive discussion on Stolen Probability and the anonymous reviewers for helpful questions and comments.", "This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/R513209/1] and Research and Innovation Action Bergamot , which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825303." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "method", "abstain", "abstain", "result", "objective", "method", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "We present a neural model for question generation from knowledge base triples in a Zero-Shot setup, that is generating questions for triples containing predicates, subject types or object types that were not seen at training time.", "Our model leverages triples occurrences in the natural language corpus in an encoder-decoder architecture, paired with an original part-of-speech copy action mechanism to generate questions.", "Benchmark and human evaluation show that our model sets a new state-of-the-art for zero-shot QG.", "Questions Generation (QG) from Knowledge Graphs is the task consisting in generating natural language questions given an input knowledge base (KB) triple (Serban et al., 2016).", "QG from knowledge graphs has shown to improve the performance of existing factoid question answering (QA) systems either by dual training or by augmenting existing training datasets (Dong et al., 2017; Khapra et al., 2017).", "Those methods rely on large-scale annotated datasets such as SimpleQuestions (Bordes et al., 2015).", "Building such datasets is a tedious task in practice, especially to obtain an unbiased dataset i.e. a dataset that covers equally a large amount of triples in the KB.", "In practice many of the predicates and entity types in KB are not covered by those annotated datasets.", "For example 75 .", "6% of Freebase predicates are not covered by the SimpleQuestions dataset 1 .", "Among those we can find important missing predicates such as: fb:food/beer/country , fb:location/country/national anthem , fb:astronomy/star system/stars .", "1 replicate the observation http://bit.ly/2GvVHae", "were not seen at training time (Zero-Shot Question Generation).", "Since state-of-the-art systems in factoid QA rely on the tremendous efforts made to create SimpleQuestions, these systems can only process questions on the subset of 24 .", "4% of freebase predicates defined in SimpleQuestions.", "Previous works for factoid QG (Serban et al., 2016) claims to solve the issue of small size QA datasets.", "However encountering an unseen predicate / entity type will generate questions made out of random text generation for those out-of-vocabulary predicates a QG system had never seen.", "We go beyond this state-of-the-art by providing an original and non-trivial solution for creating a much broader set of questions for unseen predicates and entity types.", "Ultimately, generating questions to predicates and entity types unseen at training time will allow QA systems to cover predicates and entity types that would not have been used for QA otherwise.", "Intuitively, a human who is given the task to write a question on a fact offered by a KB, would read natural language sentences where the entity or the predicate of the fact occur, and build up questions that are aligned with what he reads from both a lexical and grammatical standpoint.", "In this paper, we propose a model for Zero-Shot Question Generation that follows this intuitive process.", "In addition to the input KB triple, we feed our model with a set of textual contexts paired with the input KB triple through distant supervision.", "Our model derives an encoder-decoder architecture, in which the encoder encodes the input KB triple, along with a set of textual contexts into hidden representations.", "Those hidden representations are fed to a decoder equipped with an attention mechanism to generate an output question.", "In the Zero-Shot setup, the emergence of new predicates and new class types during test time requires new lexicalizations to express these pred-218 icates and classes in the output question.", "These lexicalizations might not be encountered by the model during training time and hence do not exist in the model vocabulary, or have been seen only few times not enough to learn a good representation for them by the model.", "Recent works on Text Generation tackle the rare words/unknown words problem using copy actions (Luong et al., 2015; Gulcehre et al., 2016): words with a specific position are copied from the source text to the output text although this process is blind to the role and nature of the word in the source text.", "Inspired by research in open information extraction (Fader et al., 2011) and structure-content neural language models (Kiros et al., 2014), in which part-of-speech tags represent a distinctive feature when representing relations in text, we extend these positional copy actions.", "Instead of copying a word in a specific position in the source text, our model copies a word with a specific part-of-speech tag from the input text we refer to those as part-of-speech copy actions.", "Experiments show that our model using contexts through distant supervision significantly outperforms the strongest baseline among six ( +2 . 04 BLEU-4 score).", "Adding our copy action mechanism further increases this improvement ( +2 . 39 ).", "Additionally, a human evaluation complements the comprehension of our model for edge cases; it supports the claim that the improvement brought by our copy action mechanism is even more significant than what the BLEU score suggests.", "QG became an essential component in many applications such as education (Heilman and Smith, 2010), tutoring (Graesser et al., 2004; Evens and Michael, 2006) and dialogue systems (Shang et al., 2015).", "In our paper we focus on the problem of QG from structured KB and how we can generalize it to unseen predicates and entity types.", "(Seyler et al., 2015) generate quiz questions from KB triples.", "Verbalization of entities and predicates relies on their existing labels in the KB and a dictionary.", "(Serban et al., 2016) use an encoder-decoder architecture with attention mechanism trained on the SimpleQuestions dataset (Bordes et al., 2015).", "(Dong et al., 2017) generate paraphrases of given questions to increases the performance of QA systems; paraphrases are generated relying on paraphrase datasets, neural machine translation and rule mining.", "(Khapra et al., 2017) generate a set of QA pairs given a KB entity.", "They model the problem of QG as a sequence to sequence problem by converting all the KB entities to a set of keywords.", "None of the previous work in QG from KB address the question of generalizing to unseen predicates and entity types.", "Textual information has been used before in the Zero-Shot learning.", "(Socher et al., 2013) use information in pretrained word vectors for Zero-Shot visual object recognition.", "(Levy et al., 2017) incorporates a natural language question to the relation query to tackle Zero-Shot relation extraction problem.", "Previous work in machine translation dealt with rare or unseen word problem problem for translating names and numbers in text.", "(Luong et al., 2015) propose a model that generates positional placeholders pointing to some words in source sentence and copy it to target sentence ( copy actions ).", "(Gulcehre et al., 2016; Gu et al., 2016) introduce separate trainable modules for copy actions to adapt to highly variable input sequences, for text summarization.", "For text generation from tables, (Lebret et al., 2016) extend positional copy actions to copy values from fields in the given table.", "For QG, (Serban et al., 2016) use a placeholder for the subject entity in the question to generalize to unseen entities.", "Their work is limited to unseen entities and does not study how they can generalize to unseen predicates and entity types.", "Let F = { s, p, o } be the input fact provided to our model consisting of a subject s , a predicate p and an object o , and C be the set of textual contexts associated to this fact.", "Our goal is to learn a model that generates a sequence of T tokens Y = y 1 , y 2 , . . . , y T representing a question about the subject s , where the object o is the correct answer.", "Our model approximates the conditional probability of the output question given an input fact p ( Y | F ) , to be the probability of the output question, given an input fact and the additional textual context C , modelled as follows: p ( Y | F ) = TY t =1 p ( y t | y <t , F, C ) (1) where y <t represents all previously generated tokens until time step t .", "Additional textual contexts are natural language representation of the triples 219 Figure 1: The proposed model for Question Generation.", "that can be drawn from a corpus our model is generic to any textual contexts that can be additionally provided, though we describe in Section 4.1 how to create such texts from Wikipedia.", "Our model derives the encoder-decoder architecture of (Sutskever et al., 2014; Bahdanau et al., 2014) with two encoding modules: a feed forward architecture encodes the input triple (sec. 3.1) and a set of recurrent neural network (RNN) to encode each textual context (sec. 3.2).", "Our model has two attention modules (Bahdanau et al., 2014): one acts over the input triple and another acts over the input textual contexts (sec. 3.4).", "The decoder (sec. 3.3) is another RNN that generates the output question.", "At each time step, the decoder chooses to output either a word from the vocabulary or a special token indicating a copy action (sec. 3.5) from any of the textual contexts.", "Given an input fact F = { s, p, o } , let each of e s , e p and e o be a 1-hot vectors of size K .", "The fact encoder encodes each 1-hot vector into a fixed size vector h s = E f e s , h p = E f e p and h o = E f e o , where E f RH k K is the KB embedding matrix, H k is the size of the KB embedding and K is the size of the KB vocabulary.", "The encoded fact h f R 3 H k represents the concatenation of those three vectors and we use it to initialize the decoder.", "Following (Serban et al., 2016), we learn E f using TransE (Bordes et al., 2015).", "We fix its weights and do not allow their update during training time.", "Given a set of n textual contexts C = { c 1 , c 2 , . . . , c n : c j = ( x j 1 , x j 2 , . . . , x j | c j | ) } , where x j i represents the 1-hot vector of the i th token in the j th textual context c j , and | c j | is the length of the j th context.", "We use a set of n Gated Recurrent Neural Networks (GRU) (Cho et al., 2014) to encode each of the textual concepts separately: h c j i = GRU j (cid:16) E c x ji , h c j i 1 (cid:17) (3) where h c j i RH c is the hidden state of the GRU that is equivalent to x ji and of size H c .", "E c is the input word embedding matrix.", "The encoded context represents the encoding of all the textual contexts; it is calculated as the concatenation of all the final states of all the encoded contexts: h c = [ h c 1 | c 1 | ; h c 2 | c 2 | ; . . . ; h c n | c n | ] .", "For the decoder we use another GRU with an attention mechanism (Bahdanau et al., 2014), in which the decoder hidden state s t RH d at each time step t is calculated as:", "Where: s t = tanh (cid:16) WE w y t 1 + U [ r t s t 1 ] + A [ a ft ; a ct ] (cid:17) (6) z t = (cid:16) W z E w y t 1 + U z s t 1 + A z [ a ft ; a ct ] (cid:17) (7) r t = (cid:16) W r E w y t 1 + U r s t 1 + A r [ a ft ; a ct ] (cid:17) (8) m H", "W, W z , W r R d , U, U z , U r , A, A z , A r RH d H d are learnable parameters of the GRU.", "E w R m V is the word embedding matrix, m is the word embedding size and H d is the size of the decoder hidden state.", "a ft , a ct are the outputs of the fact attention and the context attention modules respectively, detailed in the following subsection.", "In order to enforce the model to pair output words with words from the textual inputs, we couple the word embedding matrices of both the decoder E w and the textual context encoder E c", "(eq.(3)).", "We initialize them with GloVe embeddings (Penning-ton et al., 2014) and allow the network to tune them.", "The first hidden state of the decoder s 0 = [ h f ; h c ] is initialized using a concatenation of the encoded fact", "(eq.(2)) and the encoded context", "(eq.(4)) .", "At each time step t , after calculating the hidden state of the decoder, the conditional probability distribution over each token y t of the generated question is computed as the softmax ( W o s t ) over all the entries in the output vocabulary, W o RH d V is the weight matrix of the output layer of the decoder.", "Triple attention over the input triple to determine at each time step t an attention-based encoding of the input fact a ft RH k :", "s,t , p,t , o,t are scalar values calculated by the attention mechanism to determine at each time step which of the encoded subject, predicate, or object the decoder should attend to.", "Textual contexts attention over all the hidden states of all the textual contexts a ct RH c : a ct = | C | X i =1 | c i | X j =1 c i t,j h c i j , (10) c i t,j is a scalar value determining the weight of the j th word in the i th context c i at time step t .", "Given a set of encoded input vectors I = { h 1 , h 2 , ...h k } and the decoder previous hidden state s t 1 , the attention mechanism calculates t = i,t , . . . , k,t as a vector of scalar weights, each i,t determines the weight of its correspond-What caused the [C1 NOUN] of the [C3 NOUN] [S] ?", "e i,t = v a > tanh ( W a s t 1 + U a h i ) (11) i,t = exp ( e i,t ) P kj =1 exp ( e j,t ) , (12) where v a , W a , U a are trainable weight matrices of the attention modules.", "It is important to notice here that we encode each textual context separately using a different GRU, but we calculate an overall attention over all tokens in all textual contexts: at each time step the decoder should ideally attend to only one word from all the input contexts.", "We use the method of (Luong et al., 2015) by modeling all the copy actions on the data level through an annotation scheme.", "This method treats the model as a black box, which makes it adaptable to any text generation model.", "Instead of using positional copy actions, we use the part-of-speech information to decide the alignment process between the input and output texts to the model.", "Each word in every input textual context is replaced by a special token containing a combination of its context id (e.g. C1 ) and its POS tag (e.g. NOUN ).", "Then, if a word in the output question matches a word in a textual context, it is replaced with its corresponding tag as shown in Table 1.", "Unlike (Serban et al., 2016; Lebret et al., 2016) we model the copy actions in the input and the output levels.", "Our model does not have the drawback of losing the semantic information when replacing words with generic placeholders, since we provide the model with the input triple through the fact encoder.", "During inference the model chooses to either output words from the vocabulary or special tokens to copy from the textual contexts.", "In 221 a post-processing step those special tokens are replaced with their original words from the textual contexts.", "As a source of question paired with KB triples we use the SimpleQuestions dataset (Bordes et al., 2015).", "It consists of 100K questions with their corresponding triples from Freebase, and was created manually through crowdsourcing.", "When asked to form a question from an input triple, human annotators usually tend to mainly focus on expressing the predicate of the input triple.", "For example, given a triple with the predicate fb:spacecraft/manufacturer the user may ask What is the manufacturer of [S]", "? .", "Annotators may specify the entity type of the subject or the object of the triple: What is the manufacturer of the spacecraft [S]", "? or Which company manufactures [S]", "? .", "Motivated by this example we chose to associate each input triple with three textual contexts of three different types.", "The first is a phrase containing lexicalization of the predicate of the triple.", "The second and the third are two phrases containing the entity type of the subject and the object of the triple.", "In what follows we show the process of collection and preprocessing of those textual contexts.", "We extend the set of triples given in the SimpleQuestions dataset by using the FB5M (Bordes et al., 2015) subset of Freebase.", "As a source of text documents, we rely on Wikipedia articles.", "Predicate textual contexts: In order to collect textual contexts associated with the SimpleQuestions triples, we follow the distant supervision setup for relation extraction (Mintz et al., 2009).", "The distant supervision assumption has been effective in creating training data for relation extraction and shown to be 87% correct (Riedel et al., 2010) on Wikipedia text.", "First, we align each triple in the FB5M KB to sentences in Wikipedia if the subject and the object of this triple co-occur in the same sentence.", "We use a simple string matching heuristic to find entity mentions in text 2 .", "Afterwards we reduce the 2 We map Freebase entities to Wikidata through the Wikidata property P646, then we extract their labels and aliases.", "We use the Wikidata truthy dump: https://dumps.", "wikimedia.org/wikidatawiki/entities/ Freebase Relation Predicate Textual Context person/place of birth [O] is birthplace of [S] currency/former countries [S] was currency of [O] dish/cuisine [O] dish [S] airliner accident/flight origin[S] was flight from [O] film featured song/performer[S] is release by [O] airline accident/operator [S] was accident for [O] genre/artists [S] became a genre of [O] risk factor/diseases [S] increases likelihood of [O] book/illustrations by [S] illustrated by [O] religious text/religion [S] contains principles of [O] spacecraft/manufacturer [S] spacecraft developed by [O] Table 2: Table showing an example of textual contexts extracted for freebase predicates sentence to the set of words that appear on the dependency path between the subject and the object mentions in the sentence.", "We replace the positions of the subject and the object mentions with [S] and [O] to the keep track of the information about the direction of the relation.", "The top occurring pattern for each predicate is associated to this predicate as its textual context.", "Table 2 shows examples of predicates and their corresponding textual context.", "Sub-Type and Obj-Type textual contexts: We use the labels of the entity types as the sub-type and obj-type textual contexts.", "We collect the list of entity types of each entity in the FB5M through the predicate fb:type/instance .", "If an entity has multiple entity types we pick the entity type that is mentioned the most in the first sentence of each Wikipedia article.", "Thus the textual contexts will opt for entity types that is more natural to appear in free text and therefore questions.", "To generate the special tokens for copy actions (sec. 3.5) we run POS tagging on each of the input textual contexts 3 .", "We replace every word in each textual context with a combination of its context id (e.g. C1 ) and its POS tag (e.g. NOUN ).", "If the same POS tag appears multiple times in the textual context, it is given an additional id (e.g. C1 NOUN 2 ).", "If a word in the output question overlaps with a word in the input textual context, this word is replaced by its corresponding tag.", "For sentence and word tokenization we use the Regex tokenizer from the NLTK toolkit (Bird, 2006), and for POS tagging and dependency pars-3 For the predicate textual contexts we run pos tagging on the original text not the lexicalized dependency path 222 Train Valid Test p re d # pred 169.4 24.2 48.4 # samples 55566.7 7938.1 15876.2 % samples 70.0 2.77 10.0 1.236 20.0 2.12 s ub -t y p e s # types 112.7 16.1 32.2 # samples 60002.6 8571.8 17143.6 % samples 70.0 7.9 10.0 3.6 20.0 6.2 o b j-t y p e s # types 521.6 189.9 282.2 # samples 57878.1 8268.3 16536.6 % samples 70.0 4.7 10.0 2.5 20.0 3.8 Table 3: Dataset statistics across 10 folds for each experiment ing we use the Spacy 4 implementation.", "We develop three setups that follow the same procedure as (Levy et al., 2017) for Zero-Shot relation extraction to evaluate how our model generalizes to:", "1) unseen predicates,", "2) unseen sub-types and", "3) unseen obj-types.", "For the unseen predicates setup we group all the samples in SimpleQuestions by the predicate of the input triple, and keep groups that contain at least 50 samples.", "Afterwards we randomly split those groups to 70% train, 10% valid and 20% test mutual exclusive sets respectively.", "This guarantees that if the predicate fb:person/place of birth for example shows during test time, the training and validation set will not contain any input triples having this predicate.", "We repeat this process to create 10 cross validation folds, in our evaluation we report the mean and standard deviation results across those 10 folds.", "While doing this we make sure that the number of samples in each fold not only unique predicates follow the same 70%, 30%, 10% distribution.", "We repeat the same process for the subject entity types and object entity types (an-swer types) individually.", "Similarly, for example in the unseen object-type setup, the question Which artist was born in", "Berlin? appearing in the test set means that, there is no question in the training set having an entity of type artist .", "Table 3 shows the mean number of samples, predicates, sub-types and obj-types across the 10 folds for each experiment setup.", "SELECT is a baseline built from (Serban et al., 2016) and adapted for the zero shot setup.", "During test time given a fact F , this baseline picks a fact F c from the training set and outputs the question that corresponds to it.", "For evaluating unseen predicates, F c has the same answer type (obj-type) as F .", "And while evaluating unseen sub-types or obj-types, F c and F have the same predicate.", "R-TRANSE is an extension that we propose for SELECT .", "The input triple is encoded using the concatenation of the TransE embeddings of the subject, predicate and object.", "At test time, R-TRANSE picks a fact from the training set that is the closest to the input fact using cosine similarity and outputs the question that corresponds to it.", "We provide two versions of this baseline: R-TRANSE which indexes and retrieves raw questions with only a single placeholder for the subject label, such as in (Serban et al., 2016).", "And R-TRANSE copy which indexes and retrieves questions using our copy actions mechanism (sec. 3.5).", "IR is an information retrieval baseline.", "Information retrieval has been used before as baseline for QG from text input (Rush et al., 2015; Du et al., 2017).", "We rely on the textual context of each input triple as the search keyword for retrieval.", "First, the IR baseline encodes each question in the training set as a vector of TFIDF weights (Joachims, 1997) and then does dimensionality reduction through LSA (Halko et al., 2011).", "At test time the textual context of the input triple is converted into a dense vector using the same process and then the question with the closest cosine distance to the input is retrieved.", "We provide two versions of this baseline: IR on raw text and IR copy on text with our placeholders for copy actions.", "Encoder-Decoder.", "Finally, we compare our model to the Encoder-Decoder model with a single placeholder, the best performing model from (Serban et al., 2016).", "We initialize the encoder with TransE embeddings and the decoder with GloVe word embeddings.", "Although this model was not originally built to generalize to unseen predicates and entity types, it has some generalization abilities represented in the encoded infor-223 mation in the pre-trained embeddings.", "Pretrained KB terms and word embeddings encode relations between entities or between words as translations in the vector space.", "Thus the model might be able to map new classes or predicates in the input fact to new words in the output question.", "To train the neural network models we optimize the negative log-likelihood of the training data with respect to all the model parameters.", "For that we use the RMSProp optimization algorithm with a decreasing learning rate of 0 .", "001 , mini-batch size = 200 , and clipping gradients with norms larger than 0 .", "1 .", "We use the same vocabulary for both the textual context encoders and the decoder outputs.", "We limit our vocabulary to the top 30 , 000 words including the special tokens.", "For the word embeddings we chose GloVe (Pennington et al., 2014) pretrained embeddings of size 100 .", "We train TransE embeddings of size H k = 200 , on the FB5M dataset (Bordes et al., 2015) using the TransE model implementation from (Lin et al., 2015).", "We set GRU hidden size of the decoder to H d = 500 , and textual encoder to H c = 200 .", "The networks hyperparameters are set with respect to the final BLEU-4 score over the validation set.", "All neural networks are implemented using Ten-sorflow (Abadi et al., 2015).", "All experiments and models source code are publicly available 5 for the sake of reproducibility.", "To evaluate the quality of the generated question, we compare the original labeled questions by human annotators to the ones generated by each variation of our model and the baselines.", "We rely on a set of well established evaluation metrics for text generation: BLEU-1, BLEU-2, BLEU-3, BLEU-4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and ROUGEL (Lin, 2004).", "Automatic Metrics for evaluating text generation such as BLEU and METEOR give an measure of how close the generated questions are to the target correct labels.", "However, they still suffer from many limitations (Novikova et al., 2017).", "Automatic metrics might not be able to evaluate directly whether a specific predicate was explicitly mentioned in the generated text or not.", "As an example, taking a target question and two corresponding generated questions A and B : What kind of film is kill bill vol.", "We can find that the sentence A having a better BLEU score than B although it is not able to express the correct target predicate ( film genre ).", "For that reason we decide to run two further human evaluations to directly measure the following: Predicate identification : annotators were asked to indicate whether the generated question contains the given predicate in the fact or not, either directly or implicitly.", "Naturalness : following (Ngomo et al., 2013), we measure the comprehensibility and readability of the generated questions.", "Each annotator was asked to rate each generated question using a scale from 1 to 5 , where: (5) perfectly clear and natural, (3) artificial but understandable, and (1) completely not understandable.", "We run our studies on 100 randomly sampled input facts alongside with their corresponding generated questions by each of the systems using the help of 4 annotators.", "Automatic Evaluation Table 4 shows results of our model compared to all other baselines across all evaluation metrics.", "Our that encodes the KB fact and textual contexts achieves a significant enhancement over all the baselines in all evaluation metrics, with + 2 .", "04 BLEU-4 score than the Encoder-Decoder baseline.", "Incorporating the part-of-speech copy actions further improves this enhancement to reach + 2 .", "39 BLEU-4 points.", "Among all baselines, the Encoder-Decoder baseline and the R-TRANSE baseline performed the best.", "This shows that TransE embeddings encode intra-predicates information and intra-class-types information to a great extent, and can generalize to some extent to unseen predicates and class types.", "Similar patterns can be seen in the evaluation on unseen sub-types and obj-types (Table 5).", "Our model with copy actions was able to outperform 224 Model BLEU-1 BLEU-2 BLEU-3 BLEU-4 ROUGELMETEORU n s ee n P re d i c a t e s SELECT 46.81 2.12 38.62 1.78 31.26 1.9 23.66 2.22 52.04 1.43 27.11 0.74 IR 48.43 1.64 39.13 1.34 31.4 1.66 23.59 2.36 52.88 1.24 27.34 0.55 IRCOPY 48.22 1.84 38.82 1.5 31.01 1.72 23.12 2.24 52.72 1.26 27.24 0.57 R-TRANSE 49.09 1.69 40.75 1.42 33.4 1.7 25.97 2.22 54.07 1.31 28.13 0.54 R-TRANSECOPY 49.0 1.76 40.63 1.48 33.28 1.74 25.87 2.23 54.09 1.35 28.12 0.57 Encoder-Decoder 58.92 2.05 47.7 1.62 38.18 1.86 28.71 2.35 59.12 1.16 34.28 0.54 Our-Model 60.8 1.52 49.8 1.37 40.32 1.92 30.76 2.7 60.07 0.9 35.34 0.43 Our-Model copy 62.44 1.85 50.62 1.46 40.82 1.77 31.1 2.46 61.23 1.2 36.24 0.65 Table 4: Evaluation results of our model and all other baselines for the unseen predicate evaluation setup Model BLEU-4 ROUGEL Sub -T y p e s R-TRANSE 32.41 1.74 59.27 0.92 Encoder-Decoder 42.14 2.05 68.95 0.86 Our-Model 42.13 1.88 69.35 0.9 Our-Model copy 42.2 2.0 69.37 1.0 O b jT y p e s R-TRANSE 30.59 1.3 57.37 1.17 Encoder-Decoder 37.79 2.65 65.69 2.25 Our-Model 37.78 2.02 65.51 1.56 Our-Model copy 38.02 1.9 66.24 1.38 Table 5: Automatic evaluation of our model against selected baselines for unseen sub-types and obj-types Model % Pred.", "all the other systems.", "Majority of systems have reported a significantly higher BLEU-4 scores in these two tasks than when generalizing to unseen predicates ( +12 and +8 BLEU-4 points respec-tively).", "This indicates that these tasks are relatively easier and hence our models achieve relatively smaller enhancements over the baselines.", "comparison to the Encoder-Decoder baseline.", "Our proposed copy actions have scored a significant enhancement in the identification of unseen predicates with up to + 40 % more than best performing baseline and our model version without the copy actions.", "By examining some of the generated questions (Table", "7) we see that models without copy actions can generalize to unseen predicates that only have a very similar freebase predicate in the training set.", "For example fb:tv program/language and fb:film/language , if one of those predicates exists in the training set the model can use the same questions for the other during test time.", "Copy actions from the sub-type and the obj-type textual contexts can generalize to a great extent to unseen predicates because of the overlap between the predicate and the object type in many questions (Example 2 Table 7).", "Adding the predicate context to our model has enhanced model performance for expressing unseen predicates by +9% (Table 6).", "However we can see that it has affected the naturalness of the question.", "The post processing step does not take into consideration that some verbs and prepositions do not fit in the sentence structure, or that some words are already existing in the question words (Example 4 Table 7).", "This does not happen as much when having copy actions from the sub-type and the obj-type contexts because they are mainly formed of nouns which are more interchangeable than verbs or prepositions.", "A post-processing step to reform the question instead of direct copying from the input source is considered in our future work.", "In this paper we presented a new neural model for question generation from knowledge bases, with a main focus on predicates, subject types or object types that were not seen at the training phase (Zero-Shot Question Generation).", "Our model is based on an encoder-decoder architecture that leverages textual contexts of triples, two attention layers for triples and textual contexts and 225 1 Reference what language is spoken in the tv show three sheets?", "Our method exhibits significantly better results for Zero-Shot QG than a set of strong baselines including the state-of-the-art question generation from KB.", "Additionally, a complimentary human evaluation, helps in showing that the improvement brought by our part-of-speech copy action mechanism is even more significant than what the automatic evaluation suggests.", "The source code and the collected textual contexts are provided for the community 6 6 https://github.com/hadyelsahar/ Zeroshot-QuestionGeneration Acknowledgements This research is partially supported by the Answering Questions using Web Data (WDAqua) project, a Marie Skodowska-Curie Innovative Training Network under grant agreement No 642795 , part of the Horizon 2020 programme." ]
[ "result", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "result", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "result", "result", "result", "other" ]
[ "Improving model generalization on held-out data is one of the core objectives in commonsense reasoning.", "Recent work has shown that models trained on the dataset with superficial cues tend to perform well on the easy test set with superficial cues but perform poorly on the hard test set without superficial cues.", "Previous approaches have resorted to manual methods of encouraging models not to overfit to superficial cues.", "While some of the methods have improved performance on hard instances, they also lead to degraded performance on easy instances.", "Here, we propose to explicitly learn a model that does well on both the easy test set with superficial cues and hard test set without superficial cues.", "Using a meta-learning objective, we learn such a model that improves performance on both the easy test set and the hard test set.", "By evaluating our models on Choice of Plausible Alternatives (COPA) and Commonsense Explanation, we show that our proposed method leads to improved performance on both the easy test set and the hard test set upon which we observe up to 16.5 percentage points improvement over the baseline.", "Pre-trained language models such as BERT (De-vlin et al., 2019) and RoBERTa (Liu et al., 2019) have enabled performance improvements on benchmarks of language understanding (Wang et al., 2019a).", "However, improved performance is not only the result of increased ability to solve the benchmark tasks as intended, but also due to models' increased ability to cheat by relying on superficial cues (Gururangan et al., 2018; Sugawara et al., 2018; Niven and Kao, 2019).", "That is, even though models may perform better in terms of benchmark scores, they often are right for the wrong reasons (McCoy et al., 2019) and exhibit worse performance when prevented from exploiting superficial cues (Gururangan et al., 2018; Sugawara et al., 2018; Niven and Kao, 2019).", "To analyze reliance on superficial cues and to evaluate methods that encourage models to be right for the right reasons, i.e., to solve tasks as intended, training instances can be divided into two categories (Gururangan et al., 2018): easy training instances contain easily identifiable superficial cues, such as a word that strongly correlates with a class label so that the presence or absence of this word alone allows better-than-random prediction (Niven and Kao, 2019).", "In contrast, hard instances do not contain easily exploitable superficial cues and hence require non-trivial reasoning.", "Models that exploit superficial cues are characterized by a performance gap: they show high scores on easy instances, but much lower scores on hard ones.", "Previous work has aimed at countering superficial cues.", "A direct, if drastic, method is to completely remove easy instances from the training data via adversarial filtering (Zellers et al., 2018), which leads to better performance on hard instances, but, as Gururangan et al. (2018) point out, filtering easy instances may harm performance by reducing the data diversity and size.", "Instead of completely removing easy instances, Schuster et al. (2019) propose a loss discounting scheme that assigns less weight to instances which likely contain superficial cues, while Belinkov et al. (2019) use adversarial training to penalize models for relying on superficial cues.", "A different approach, taken by Niven and Kao (2019) and Kavumba et al. (2019), is to augment datasets in a way that balances the distribution of superficial cues so that they become uninformative.", "Common to all these approaches is that reduced reliance on superficial cues is reflected in degraded performance on easy instances, while maintaining or increasing scores on hard instances.", "Here we propose meta-learning as an alternative approach to reducing reliance on superficial cues, which, as we will show, improves performance on both easy and hard instances.", "Intuitively, we see reliance on superficial cues not as a defect of datasets, but as a failure to learn : If a model learns to rely on superficial cues, it will not generalize to instances without such cues, but if the model learns not to rely on such cues, this generalization will be possible.", "Conversely, a model that only learns how to solve hard instances may perform poorly on easy instances.", "Therefore, our meta-learned model learns how to generalize to both easy and hard instances.", "By evaluating our method on two English commonsense benchmarks, namely Choice of Plausible Alternatives (COPA) (Roem-mele et al., 2011) and Commonsense Explanation (Cs-Ex) (Wang et al., 2019b), we show that meta-learning improves performance on both easy and hard instances and outperforms all baselines.", "In summary, our contributions are: 1. We propose a meta-learning method that learns how to generalize to both easy and hard instances ( 2), 2. We show that Commonsense Explanation (Wang et al., 2019b) contain superficial cues that are easy to exploit by models ( 3), 3. We empirically show that meta-learning a model to generalise to both easy and hard instances leads to better generalization not only on hard instances but also on easy instances ( 4), 2 Learning to Generalize 2.1 Background Meta-learning has been successfully applied to problems such as few-shot learning (Vinyals et al., 2016; Finn et al., 2017) and continual learning (Javed and White, 2019; Beaulieu et al., 2020).", "A meta-learning or learning to learn procedure consists of two phases.", "The first phase, also called meta-training, consists of learning in two nested loops.", "Learning starts in the inner loop where the models' parameters are updated using the meta-training training set.", "At the end of the inner loop updates, the models' inner loop learning of the task is meta-train tested in outer loop where a separate meta-training testing set is used.", "This is called meta-training testing.", "Unlike a non-meta-training process, the meta-training testing error is also used to update the model parameters, i.e., the meta-training testing error is used to improve the inner loop.", "Thus, learning is performed in both the inner and the outer loop, hence, learning-to-learn.", "The second phase, also called meta-testing, consists only of a single loop.", "Model parameters are finetuned on a meta-testing training set and finally evaluated, only once, on the held out meta-testing testing set.", "Note that the meta-testing testing set is different from the meta-training testing set.", "One of the most popular meta-learning algorithms is Model-Agnostic Meta-Learning (MAML) algorithm (Finn et al., 2017).", "MAML is a few-shot optimization-based meta-learning algorithm whose goal is to learn initial model parameters for multiple related tasks such that a few gradient updates lead to optimal performance on target tasks.", "We choose MAML for our experiments because it is model agnostic and, hence, widely applicable.", "Our goal is to learn a model f , with parameters , that generalizes well on both easy instances, with superficial cues, and hard instances, without superficial cues.", "Specifically, given a large single-task training set D tr , we want to be able to train a model that generalizes well to both the easy test set D test _ easy and the hard test set D test _ hard .", "To learn such a model, we require a meta-training testing set, D tr _ test , which contains both easy and hard instances.", "Such a meta-training testing set will ensure that we evaluate the model generalization to both easy and hard instances.", "Optimizing only for better performance on hard instances can lead to poor generalization to easy instances (Gururangan et al., 2018).", "We cannot naively apply the meta-learning method designed for learning multiple few-shot tasks to a large dataset.", "A large dataset presents three main challenges.", "First, a naive meta-learning method would require using the entire training set during each inner loop update.", "This would make training very slow and computationally expensive.", "To address this problem, we use small randomly sampled batches in each inner loop.", "This is similar to treating each mini-batch as a single MAML task.", "Second, a naive meta-learning method would require using the entire meta-training testing set for each outer loop update.", "This, too, would make learning very slow when the meta-training testing set is large.", "We address this challenge by evaluating the inner loop learning using only a small batch that is randomly drawn from the meta-training testing set.", "Third, a naive meta-learning method would require storing the entire inner loop computation graph to facilitate second-order gradients' computation.", "However, for large datasets and large models, such as recent pre-trained language models used in this paper, this is computationally too expensive and impractical on current hardware.", "To address this problem, we use first-order MAML (Finn et al., 2017) that uses only the last inner-update.", "We call this method of using random meta-training training batches and meta-training testing batches for meta-updates as Stochastic-Update Meta-Learning (SUML, Algorithm 1).", "The hyperparameter k is the number of inner loop updates performed for each outer loop update (i.e., i in Algorithm 1 ranges from 1 to k).", "Setting the value of k to 1 would make training unstable, much like using a batch size of 1 in standard (non-meta) training.", "On the other hand, a large value of k would make training slow.", "Here, we briefly describe the English commonsense datasets that we use in this paper.", "anced COPA) counters superficial cues in the answer choices of the Choice of Plausible Alternatives (Roemmele et al., 2011, COPA) by balancing token distribution between correct and wrong answer choices.", "Balanced COPA creates mirrored instances for each of the original instance in the training set.", "Concretely, for each original COPA instance shown below: Premise : The stain came out of the shirt.", "What was the CAUSE of this?", "a) I bleached the shirt.", "(Correct)", "b) I patched the shirt.", "Balanced COPA creates another instance that shares the same alternatives but a different manually authored premise.", "The wrong answer choice the original question is made correct by the new premise (refer to App. B for more examples).", "Premise : The shirt did not have a hole anymore .", "What was the CAUSE of this?", "a) I bleached the shirt.", "b) I patched the shirt.", "(Correct)", "Commonsense Explanation : Commonsense Explanation (Cs-Ex) (Wang et al., 2019b) is a multiple-choice benchmark that consists of three subtasks.", "Here we focus on a commonsense explanation task.", "Given a false statement such as He drinks apple.", ", Cs-Ex requires a model to pick the reason why a false statement does not make sense, in this case either:", "a) Apple juice are very tasty and milk too ; or", "b) Apple can not be drunk (correct); or", "c) Apple cannot eat a human .", "While COPA has already been shown to contain superficial cues by Kavumba et al. (2019), Cs-Ex has not been analyzed yet.", "Here, we present an analysis of superficial cues in Cs-Ex.", "We fine-tuned RoBERTa-base and RoBERTa-large with contextless inputs (answers only).", "This reveals the models' ability to rely on shortcuts such as different token distributions in correct and wrong answers (Gururangan et al., 2018; McCoy et al., 2019).", "In this setting, we expect the models' accuracy to be nearly random if the answer choices have no superficial cues.", "But, we find that RoBERTa performs better than random accuracy of 33.3%.", "The above-random performance of RoBERTa-base (82.1%) and RoBERTa-large (85.4%) indicates that the answers of Cs-Ex contain superficial cues.", "To identify the actual superficial cues a model can exploit, we collect words/unigrams that are predictive of the correct answer choice using the productivity measure introduced by Niven and Kao (2019, see definition in App. A).", "Intuitively, the productivity of a token expresses how precise a model would be if it based its prediction only on the presence of this token in a candidate answer.", "We found that the word not was highly predictive of the correct answer, followed by the word to (See details in App. A).", "Following previous work (Gururangan et al., 2018; Kavumba et al., 2019), we split the test set of Cs-Ex into an easy and hard subset.", "The easy subset consists of all instances (1,572) that RoBERTa-base solved correctly across three different runs in the contextless input (answer only) setting.", "All the remaining instances, 449, are considered hard instances.", "For COPA, we use the easy and hard subset splits from Kavumba et al. (2019), which consists of 190 easy and 310 hard instances.", "In our experiments, we used a recent state-of-the-art large pre-trained language model, namely RoBERTa (Liu et al., 2019)an optimized variant of BERT (Devlin et al., 2019).", "Specifically, we used RoBERTa-base and RoBERTa-large with 110M and 355M parameters, respectively, from the publicly available Huggingface source code (Wolf et al., 2019).", "1 We ran all our experiments on a single NVIDIA Tesla V100 GPU with 16GB memory.", "We used an Adam optimizer (Kingma and Ba, 2015) with a warm-up proportion of 0.06 and a weight decay of 0.01.", "We randomly split the training data into training data and validation data with a ratio of 9:1.", "We trained the models for a maximum of 10 epochs with early stopping based on the validation loss (full training details in App. C).", "To evaluate the effectiveness of meta-learning a model to be robust against superficial cues, we compare our model that is meta-trained on 450 original COPA instances and 100 balanced meta-training testing examples with three different baselines.", "Specifically, we compare to: 1. A model trained on 500 original COPA instances.", "2. An adversarial trained model to avoid the answer only superficial cues on 500 original COPA instances.", "3. A model trained on 1000 Balanced COPA instances, manually created to counter superficial cues.", "In comparison, our meta-trained model uses only a small fraction of balanced instances.", "Effectively, our method replaces the need to have a large balanced training set with a small, 100 instances, in this case, meta-training test set.", "The results show that the models trained on the original COPA perform considerably better on the easy subset (90.5%) than on the hard subset (83.9%) (Table 1).", "The models trained on balanced COPA improves performance on the hard subset (88.1%) but slightly degrades performance on the easy subset (90.0%).", "This indicates that training on Balanced COPA improves generalization on the hard instances.", "As expected, the performance of the adversarial trained model is lower than the vanilla baselines.", "This finding is similar to the result found in natural language inference (Belinkov et al., 2019).", "Comparing our meta-trained models to the baselines, we see that meta-training improves performance on both the easy subset and hard subset.", "Our meta-trained models even outperform the models trained on nearly twice the training data and an ensemble of RoBERT-large.", "It even matches an ensemble of RoBERTa-large and 1 https://github.com/huggingface/trans formers ALBERT-xxlarge (Lan et al., 2019).", "2 4.3 Commonsense Explanation This experiment aims to investigate an automatic method of creating a meta-training testing set.", "Here we assume that there is no budget for manually creating a small meta-training testing set as in Balanced COPA.", "We created a meta-training testing set by randomly sampling 288 hard instances.", "Gururangan et al. (2018) pointed out that optimizing only for hard instance might lead to poor performance on easy instance.", "This observation motivates us to include both easy and hard instances in the meta-training testing set, with the expectation that this will ensure that performance on easy instances does not degrade.", "We augmented the hard instances with an equal number of randomly sampled easy instances, resulting into the final meta-training testing set with 576 instances.", "The results show that the meta-trained models perform better than the baselines on both easy and hard instances (Table 1).", "For RoBERTa-large we see 0.9 percentage point improvement on easy instances and eight percentage points improvement on the hard instances.", "We see the largest gains on the RoBERTa-base with 2.6 and 16.5 percentage points on easy and hard instances, respectively.", "The results indicate that in the absence of a manually authored meta-training testing set without superficial cues, we can use a combination of easy and hard instances.", "We propose to directly learn a model that performs well on both instances with superficial cues and instances without superficial cues via a meta-learning objective.", "We carefully evaluate our models, which are meta-learned to improve generalization, on two important commonsense benchmarks, finding that our proposed method considerably improves performance across all test sets.", "2 The SuperGlue leaderboard, from which the results shown in the first two rows of Tab.", "1 were taken, does not publish system outputs, so it's not possible to compute scores on easy and hard subsets.", "And, the ensemble models reported have not been published yet, and there is no paper or source code which describes the model and training procedure, so it is not possible to reproduce these results." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "objective", "objective", "other", "other", "other" ]
[ "Pre-trained language models have achieved huge success on a wide range of NLP tasks.", "However, contextual representations from pre-trained models contain entangled semantic and syntactic information, and therefore cannot be directly used to derive useful semantic sentence embeddings for some tasks.", "Paraphrase pairs offer an effective way of learning the distinction between semantics and syntax, as they naturally share semantics and often vary in syntax.", "In this work, we present ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings obtained by pre-trained language models.", "ParaBART is trained to perform syntax-guided paraphrasing, based on a source sentence that shares semantics with the target paraphrase, and a parse tree that speci-fies the target syntax.", "In this way, ParaBART learns disentangled semantic and syntactic representations from their respective inputs with separate encoders.", "Experiments in English show that ParaBART outperforms state-of-the-art sentence embedding models on unsupervised semantic similarity tasks.", "Additionally, we show that our approach can effectively remove syntactic information from semantic sentence embeddings, leading to better robustness against syntactic variation on downstream semantic tasks.", "Semantic sentence embedding models encode sentences into fixed-length vectors based on their semantic relatedness with each other.", "If two sentences are more semantically related, their corresponding sentence embeddings are closer.", "As sentence embeddings can be used to measures semantic relatedness without requiring supervised data, they have been used in many applications, such as semantic textual similarity (Agirre et al., 2016a), question answering (Nakov et al., 2017), and natural language inference (Artetxe and Schwenk, 2019a).", "Recent years have seen huge success of pre-trained language models across a wide range of NLP tasks (Devlin et al., 2019; Lewis et al., 2020).", "However, several studies (Reimers and Gurevych, 2019; Li et al., 2020) have found that sentence embeddings from pre-trained language models perform poorly on semantic similarity tasks when the models are not fine-tuned on task-specific data.", "Meanwhile, Goldberg (2019) shows that BERT without fine-tuning performs surprisingly well on syntactic tasks.", "Hence, we posit that these contextual representations from pre-trained language models without fine-tuning capture entangled semantic and syntactic information, and therefore are not suitable for sentence-level semantic tasks.", "Ideally, the semantic embedding of a sentence should not encode its syntax, and two semantically similar sentences should have close semantic embeddings regardless of their syntactic differences.", "While various models (Conneau et al., 2017; Cer et al., 2018; Reimers and Gurevych, 2019) have been proposed to improve the performance of sentence embeddings on downstream semantic tasks, most of these approaches do not attempt to separate syntactic information from sentence embeddings.", "To this end, we propose ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings.", "Our model is built upon BART (Lewis et al., 2020), a sequence-to-sequence Transformer (Vaswani et al., 2017) model pre-trained with self-denoising objectives.", "Parallel paraphrase data is a good source of learning the distinction between semantics and syntax, as paraphrase pairs naturally share the same meaning but often differ in syntax.", "Taking advantage of this fact, ParaBART is trained to perform syntax-guided paraphrasing, where a source sentence containing the desired semantics and a parse tree specifying the desired syntax are given as inputs.", "In order to generate a paraphrase that follows the given syntax, ParaBART uses separate encoders to learn disentangled semantic and syntactic representations from their respective inputs.", "In this way, the disentangled representations capture sufficient semantic and syntactic information needed for paraphrase generation.", "The semantic encoder is also encouraged to ignore the syntax of the source sentence, as the desired syntax is already provided by the syntax input.", "ParaBART achieves strong performance across unsupervised semantic textual similarity tasks.", "Furthermore, semantic embeddings learned by ParaBART contain significantly less syntactic information as suggested by probing results, and yield robust performance on datasets with syntactic variation.", "Our source code is available at https:// github.com/uclanlp/ParaBART .", "Various sentence embedding models have been proposed in recent years.", "Most of these models utilize supervision from parallel data (Wieting and Gimpel, 2018; Artetxe and Schwenk, 2019b; Wieting et al., 2019, 2020), natural language inference data (Conneau et al., 2017; Cer et al., 2018; Reimers and Gurevych, 2019), or a combination of both (Subramanian et al., 2018).", "Many efforts towards controlled text generation have been focused on learning disentangled sentence representations (Hu et al., 2017; Fu et al., 2018; John et al., 2019).", "In the context of disentangling semantics and syntax, Bao et al. (2019) and Chen et al. (2019) utilize variational autoen-coders to learn two latent variables for semantics and syntax.", "In contrast, we use the outputs of a constituency parser to learn purely syntactic representations, and facilitate the usage of powerful pre-trained language models as semantic encoders.", "Our approach is also related to prior work on syntax-controlled paraphrase generation (Iyyer et al., 2018; Kumar et al., 2020; Goyal and Durrett, 2020; Huang and Chang, 2021).", "While these approaches focus on generating high-quality paraphrases that conform to the desired syntax, we are interested in how semantic and syntactic information can be disentangled and how to obtain good semantic sentence embeddings.", "Our goal is to build a semantic sentence embedding model that learns to separate syntax from semantic embeddings.", "ParaBART is trained to generate syntax-guided paraphrases, where the model attempts to only extract the semantic part from the input sentence, and combine it with a different syntax specified by the additional syntax input in the form of a constituency parse tree.", "Figure 1 outlines the proposed model, which consists of a semantic encoder that learns the semantics of a source sentence, a syntactic encoder that encodes the desired syntax of a paraphrase, and a decoder that generates a corresponding paraphrase.", "Additionally, we add a syntax discriminator to adversarially remove syntactic information from the semantic embeddings.", "Given a source sentence S 1 and a target constituency parse tree P 2 , ParaBART is trained to generate a paraphrase S 2 that shares the semantics of S 1 and conforms to the syntax specified by P 2 .", "Semantics and syntax are two key aspects that determine how a sentence is generated.", "Our model learns purely syntactic representations from the output trees generated by a constituency parser, and extracts the semantic embedding directly from the source sentence.", "The syntax discriminator and the syntactic encoder are designed to remove source syntax and provide target syntax, thus encouraging the semantic encoder to only capture source semantics.", "Semantic Encoder The semantic encoder E sem is a Transformer encoder that embeds a sentence S = ( s (1) , ..., s ( m ) ) into contextual semantic representations: U = ( u (1) , ..., u ( m ) ) = E sem (cid:16) ( s (1) , ..., s ( m ) ) (cid:17) .", "Then, we take the mean of these contextual representations u ( i ) to get a fixed-length semantic sentence embedding u = 1 m m (cid:88) i =1 u ( i ) .", "Syntactic Encoder The syntactic encoder E syn is a Transformer encoder that takes a linearized constituency parse tree P = ( p (1) , ..., p ( n ) ) and converts it into contextual syntactic representations V = ( v (1) , ..., v ( n ) ) = E syn (cid:16) ( p (1) , ..., p ( n ) ) (cid:17) .", "For example, the linearized parse tree of the sentence This book is good. is (S (NP (DT) (NN)) (VP (VBZ) (ADJP)) (.)).", "Such input sequence preserves the tree structure, allowing the syntactic encoder to capture the exact syntax needed for decoding.", "Decoder The decoder D dec uses the semantic sentence embedding u and the contextual syntactic representations V to generate a paraphrase that shares semantics with the source sentence while following the syntax of the given parse tree.", "In other words, ( y (1) , ..., y ( l ) ) = D dec ( Concat ( u , V )) .", "During training, given a source sentence S 1 , a target parse tree P 2 and a target paraphrase S 2 = ( s 12 , ..., s l 2 ) , we minimize the following paraphrase generation loss : L para = l (cid:88) i =1 log P ( y ( i ) = s ( i ) 2 | S 1 , P 2 ) .", "Since the syntactic representations do not contain semantics, the semantic encoder needs to accurately capture the semantics of the source sentence for a paraphrase to be generated.", "Meanwhile, the full syntactic structure of the target is provided by the syntactic encoder, thus encouraging the semantic encoder to ignore the source syntax.", "Syntax Discriminator To further encourage the disentanglement of semantics and syntax, we employ a syntax discriminator to adversarially remove syntactic information from semantic embeddings.", "We first train the syntax discriminator to predict the syntax from its semantic embedding, and then train the semantic encoder to fool the syntax discriminator such that the source syntax cannot be predicted from the semantic embedding.", "More specifically, we adopt a simplified approach similar to John et al. (2019) by encoding source syntax as a Bag-of-Words vector h of its constituency parse tree.", "For any given source parse tree, this vector contains the count of occurrences of every constituent tag, divided by the total number of constituents in the parse tree.", "Given the semantic sentence embedding u , our linear syntax discriminator D dis predicts h by y h = D dis ( u ) = softmax ( Wu + b ) with the following adversarial loss : L adv = (cid:88) t T h ( t ) log( y h ( t )) , where T denotes the set of all constituent tags.", "Training We adversarially train E sem , E syn , D dec , and D dis with the following objective: min E sem ,E syn ,D dec (cid:18) max D dis ( L para adv L adv ) (cid:19) , where adv is a hyperparameter to balance loss terms.", "In each iteration, we update the D dis by considering the inner optimization, and then update E sem , E syn and D dec by considering the outer optimization.", "In this section, we demonstrate that ParaBART is capable of learning semantic sentence embeddings that capture semantic similarity, contain less syntactic information, and yield robust performance against syntactic variation on semantic tasks.", "We sample 1 million English paraphrase pairs from ParaNMT-50M (Wieting and Gimpel, 2018), and split this dataset into 5,000 pairs as the validation set and the rest as our training set.", "The constituency parse trees of all sentences are obtained from Stanford CoreNLP (Manning et al., 2014).", "We fine-tune a 6-layer BART base encoder as the semantic Model STS12 STS13 STS14 STS15 STS16 STS-B Avg.", "We train ParaBART on a GTX 1080Ti GPU using AdamW (Loshchilov and Hutter, 2019) optimizer with a learning rate of 2 10 5 for the encoder and syntax discriminator, and 1 10 4 for the rest of the model.", "The batch size is set to 64.", "All models are trained for 10 epochs, which takes about 2 days to complete.", "The maximum length of input sentences and linearized parse trees are set to 40 and 160 respectively.", "We set the weight of adversarial loss to 0.1.", "Appendix A shows more implementation details.", "Baselines We compare our model with other sentence embeddings models, including InferSent (Conneau et al., 2017), Universal Sentence Encoder (USE) (Cer et al., 2018), Sentence-BERT base (Reimers and Gurevych, 2019), VGVAE (Chen et al., 2019), and BGT (Wieting et al., 2020).", "We also include mean-pooled BERT base and BART base embeddings.", "In addition to ParaBART, we consider two model ablations: ParaBART without adversarial loss, and ParaBART without syntactic guidance and adversarial loss.", "We evaluate our semantic sentence embeddings on the unsupervised Semantic Textual Similarity (STS) tasks from SemEval 2012 to 2016 (Agirre et al., 2012; 2013; 2014; 2015; 2016b) and STS Benchmark test set (Cer et al., 2017), where the goal is to predict a continuous-valued score between 0 and 5 indicating how similar the meanings of a sentence pair are.", "For all models, we compute the cosine similarity of embedding vectors as the semantic similarity measure.", "We use the standard SentEval toolkit (Conneau and Kiela, 2018) for evaluation and report average Pearson correlation over all domains.", "As shown in Table 1, both average BERT embeddings and average BART embeddings perform poorly on STS tasks, as the entanglement of semantic and syntactic information leads to low correlation with semantic similarity.", "Training ParaBART on paraphrase data substantially improves the correlation.", "With the addition of syntactic guidance and adversarial loss, ParaBART achieves the best overall performance across STS tasks, showing the effectiveness of our approach.", "To better understand how well our model learns to disentangle syntactic information from semantic embeddings, we probe our semantic sentence embeddings with downstream syntactic tasks.", "Following Conneau et al. (2018), we investigate to what degree our semantic sentence embeddings can be used to identify bigram word reordering (BShift), estimate parse tree depth (TreeDepth), and predict parse tree top-level constituents (Top-Const).", "Top-level constituents are defined as the group of constituency parse tree nodes immediately below the sentence (S) node.", "We use the datasets provided by SentEval (Conneau and Kiela, 2018) to train a Multi-Layer Perceptron classifier with a single 50-neuron hidden layer on top of semantic sentence embeddings, and report accuracy on all QQP-Easy What are the essential skills of the project management?", "and QQP-Hard .", "As shown in Table 2, sentence embeddings pooled from pre-trained BART model contain rich syntactic information that can be used to accurately predict syntactic properties including word order and top-level constituents.", "The disentanglement induced by ParaBART is evident, lowering the accuracy of downstream syntactic tasks by more than 10 points compared to pre-trained BART embeddings and ParaBART without adversarial loss and syntactic guidance.", "The results suggest that the semantic sentence embeddings learned by ParaBART indeed contain less syntactic information.", "Intuitively, semantic sentence embedding models that learn to disentangle semantics and syntax are expected to yield more robust performance on datasets with high syntactic variation.", "We consider the task of paraphrase detection on Quora Question Pairs (Iyer et al., 2017) dev set as a testbed for evaluating model robustness.", "We categorize paraphrase pairs based on whether they share the same top-level constituents.", "We randomly sample 1,000 paraphrase pairs from each of the two classes, combined with a common set of 1,000 randomly sampled non-paraphrase pairs, to create two datasets QQP-Easy and QQP-Hard .", "Paraphrase pairs from QQP-Hard are generally harder to identify as they are much more syntactically different compared to those from QQP-Easy .", "Table 3 shows some examples from these two datasets.", "We evaluate semantic sentence embeddings on these datasets in an unsupervised manner by computing the cosine similarity as the semantic similarity measure.", "We search for the best threshold between -1 and 1 with a step size of 0.01 on each dataset, and report the highest accuracy.", "The results are shown in Table", "4. While Universal Sentence Encoder scores much higher than other models on QQP-Easy , its performance degrades significantly on QQP-Hard .", "In comparison, ParaBART demonstrates better robustness against syntactic variation, and surpasses USE to become the best model on the more syntactically Model QQP-Easy QQP-Hard Avg.", "diverse QQP-Hard .", "It is worth mentioning that even pre-trained BART embeddings give decent results on QQP-Easy , suggesting large overlaps between paraphrase pairs from QQP-Easy .", "On the other hand, the poor performance of pre-trained BART embeddings on a more syntactically diverse dataset like QQP-Hard clearly shows its incompetence as semantic sentence embeddings.", "In this paper, we present ParaBART, a semantic sentence embedding model that learns to disentangle semantics and syntax in sentence embeddings from pre-trained language models.", "Experiments show that our semantic sentence embeddings yield strong performance on unsupervised semantic similarity tasks.", "Further investigation demonstrates the effectiveness of disentanglement, and robustness of our semantic sentence embeddings against syntactic variation on downstream semantic tasks.", "We thank anonymous reviewers for their helpful feedback.", "We thank UCLA-NLP group for the valuable discussions and comments.", "This work is supported in part by Amazon Research Award.", "Our sentence embeddings can potentially capture bias reflective of the training data we use, which is a common problem for models trained on large annotated datasets.", "While the focus of our work is to disentangle semantics and syntax, our model can potentially generate offensive or biased content learned from training data if it is used for paraphrase generation.", "We suggest carefully examining the potential bias exhibited in our models before deploying them in any real-world applications." ]
[ "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "other", "other", "other", "method", "method", "abstain" ]
[ "Tuning pre-trained language models (PLMs) with task-specific prompts has been a promising approach for text classification.", "Particularly, previous studies suggest that prompt-tuning has remarkable superiority in the low-data scenario over the generic fine-tuning methods with extra classifiers.", "The core idea of prompt-tuning is to insert text pieces, i.e., template, to the input and transform a classification problem into a masked language modeling problem, where a crucial step is to construct a projection, i.e., verbalizer, between a label space and a label word space.", "A verbalizer is usually handcrafted or searched by gradient descent, which may lack coverage and bring considerable bias and high variances to the results.", "In this work, we focus on incorporating external knowledge into the verbalizer, forming a knowledgeable prompt-tuning (KPT), to improve and stabilize prompt-tuning.", "Specifically, we expand the label word space of the verbalizer using external knowledge bases (KBs) and refine the expanded label word space with the PLM itself before predicting with the expanded label word space.", "Extensive experiments on zero and few-shot text classification tasks demonstrate the effectiveness of knowledgeable prompt-tuning.", "Our source code is publicly available at https://github.com/ thunlp/KnowledgeablePromptTuning .", "Recent years have witnessed the prominence of Pre-trained Language Models (PLMs) (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Raffel et al., 2020; Xu et al., 2021) due to their superior performance on a wide range of language-related downstream tasks such as text classification (Kowsari et al., 2019), question answering (Ra-jpurkar et al., 2016), and machine reading comprehension (Nguyen et al., 2016).", "To fathom the prinCorresponding authors: Z.Liu ([email protected]), H.Wang ([email protected]) ciples of such effectiveness of PLMs, researchers have conducted extensive studies and suggested that PLMs have obtained rich knowledge during pre-training (Petroni et al., 2019; Davison et al., 2019).", "Hence, how to stimulate and exploit such knowledge is receiving increasing attention.", "One conventional approach to achieve that is fine-tuning (Devlin et al., 2019), where we add extra classifiers on the top of PLMs and further train the models under classification objectives.", "Fine-tuning has achieved satisfying results on supervised tasks.", "However, since the extra classifier requires adequate training instances to tune, it is still challenging to apply fine-tuning in few-shot learning (Brown et al., 2020) and zero-shot learning (Yin et al., 2019) scenarios.", "Originated from GPT-3 (Brown et al., 2020) and LAMA (Petroni et al., 2019, 2020), a series of studies using prompts (Schick and Schtze, 2021a; Liu et al., 2021) for model tuning bridge the gap between pre-training objective and down-stream tasks, and demonstrate that such discrete or continuous prompts induce better performances for PLMs on few-shot and zero-shot tasks.", "A typical way to use prompts is to wrap the input sentence into a natural language template and let the PLM conduct masked language modeling.", "For instance, to classify the topic of a sentence x : What's the relation between speed and accelera-tion? into the S CIENCE category, we wrap it into a template: A [MASK] question: x .", "The prediction is made based on the probability that the word science is filled in the [MASK] token.", "The mapping from label words (e.g., science ) to the specific class (e.g., class SCIENCE ) is called the verbalizer (Schick and Schtze, 2021a), which bridges a projection between the vocabulary and the label space and has a great influence on the performance of classification (Gao et al., 2021).", "Most existing works use manual verbalizers (Schick and Schtze, 2021a,b), in which the 2225 designers manually think up a single word to indicate each class.", "To ease the human effort of designing the class name, some works propose to learn the label words using discrete search (Schick et al., 2020) or gradient descent (Liu et al., 2021; Ham-bardzumyan et al., 2021).", "However, the learned-from-scratch verbalizer, lack of human prior knowledge, is still considerably inferior to the manual verbalizers (see Appendix A for pilot experiments), especially in few-shot setting, and even not applicable in zero-shot setting, which leaves the manual verbalizer a decent choice in many cases.", "However, manual verbalizers usually determine the predictions based on limited information.", "For instance, in the above example, the mapping {science} SCIENCE means that only predicting the word science for the [MASK] token is regarded as correct during inference, regardless of the predictions on other relevant words such as physics and maths, which are also informative.", "Such handcrafted one-one mapping limits the coverage of label words, thus lacking enough information for prediction and introducing bias into the verbalizer.", "Therefore, manual verbalizers are hard to be optimal in text classification, where the semantics of label words are crucial for predictions.", "The optimization-based expansion, though can be combined with manual verbalizers to yield better performance, only induces a few words or embeddings that are close to the class name in terms of word sense or embedding distance.", "Thus they are difficult to infer words across granularities (e.g. from science to physics).", "If we can expand the verbalizer of the above example into { science, physics } SCIENCE , the probability of making correct predictions will be considerably enhanced.", "Therefore, to improve the coverage and reduce the bias of the manual verbalizer, we present to incorporate external knowledge into the verbalizers to facilitate prompt-tuning, namely, knowledgeable prompt-tuning (KPT).", "Since our expansion is not based on optimization, it will also be more favorable for zero-shot learning.", "Specifically, KPT contains three steps: construction, refinement, and utilization.", "(1) Firstly, in the construction stage, we use external KBs to generate a set of label words for each label (in 3.2).", "Note that the expanded label words are not simply synonyms of each other, but cover different granularities and perspectives, thus are more comprehensive and unbiased than the class name.", "(2) Secondly, to cope with the noise in the unsupervised expansion of label words, we propose four refinement methods, namely, frequency refinement, relevance refinement, contextualized calibration, and learnable refinement (in 3.3), whose effectiveness is studied thoroughly in 4.", "(3) Finally, we apply either a vanilla average loss function or a weighted average loss function for the utilization of expanded verbalizers, which map the scores on a set of label words to the scores of the labels.", "We conduct extensive experiments on zero-shot and few-shot text classification tasks.", "The empirical results show that KPT can reduce the error rate of classification by 16%, 18%, 10%, 7% on average in 0, 1, 5, 10 shot experiments, respectively, which shows the effectiveness of KPT.", "In addition to the performance boost, KPT also reduces the prediction variances consistently in few-shot experiments and yields more stable performances.", "Two groups of research are related to KPT: prompt-tuning, and the verbalizer construction.", "Prompt-tuning.", "Since the emergence of GPT-3 (Brown et al., 2020), prompt-tuning has received considerable attention.", "GPT-3 (Brown et al., 2020) demonstrates that with prompt-tuning and in-context learning, the large-scale language models can achieve superior performance in the low-data regime.", "The following works (Schick and Schtze, 2021a,b) argue that small-scale language models (Radford et al., 2018; Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020) can also achieve decent performance using prompt-tuning.", "Prompt-tuning has been applied to a large variety of tasks such as Text Classification (Schick and Schtze, 2021a), Natural Language Understanding (Schick and Schtze, 2021b; Liu et al., 2021) , Relation Extraction (Han et al., 2021; Chen et al., 2021), and Knowledge Probing (Petroni et al., 2019; Liu et al., 2021), etc.", "Verbalizer Construction.", "As introduced in 1, the verbalizer is an important component in prompt-tuning and has a strong influence on the performance of prompt-tuning (Holtzman et al., 2021; Gao et al., 2021).", "Most works use human-written verbalizers (Schick and Schtze, 2021a), which are highly biased towards personal vocabulary and do not have enough coverage.", "Some other studies (Gao et al., 2021; Shin et al., 2020; Liu et al., 2021; Schick et al., 2020) design automatic verbalizer searching methods for better ver-2226 science, mathematics, biology, research, knowledge, physics, knowledge, innovate, calculation, ..., agrobiology, mscience, euclid, orthogon, einstein, bioacoustics, chemo, axiom, nomenclature What's the relation between speed and acceleration?", "balizer choices, however, their methods require adequate training set and validation set for optimization.", "Moreover, the automatically determined verbalizers are usually synonym of the class name, which differs from our intuition of expanding the verbalizer with a set of diverse and comprehensive label words using external KB.", "Schick et al. (2020) and Shin et al. (2020) also try multiple label words for each class.", "The optimal size of their label words set for each class is generally less than 10, which lacks coverage when used in text classification tasks.", "In this section, we present our methods to incorporate external knowledge into a prompt verbalizer.", "We first introduce the overall paradigm of prompt-tuning and then elucidate how to construct, refine and utilize the knowledgeable prompt.", "Let M be a language model pre-trained on large scale corpora.", "In text classification task, an input sequence x = ( x 0 , x 1 , ..., x n ) is classified into a class label y Y .", "Prompt-tuning formalizes the classification task into a masked language modeling problem.", "Specifically, prompt-tuning wraps the input sequence with a template , which is a piece of natural language text.", "For example, assuming we need to classify the sentence x =What's the relation between speed and acceleration? into label SCIENCE (labeled as 1) or SPORTS (labeled as 2), we wrap it into x p = [CLS] A [MASK] question : x Then M gives the probability of each word v in the vocabulary being filled in [MASK] token PM ( [MASK] = v | x p ) .", "To map the probabilities of words into the probabilities of labels, we define a verbalizer as a mapping f from a few words in the vocabulary, which form the label word set V , to the label space Y , i.e., f : V (cid:55) Y .", "We use V y to denote the subset of V that is mapped into a specific label y , y Y V y = V .", "Then the probability of label y , i.e., P ( y | x p ) , is calculated as P ( y | x p )= g (cid:0) PM ( [MASK] = v | x p ) | v V y (cid:1) , (1) where g is a function transforming the probability of label words into the probability of the label.", "In the above example, regular prompt-tuning may define V 1 = { science } , V 2 = {sports } and g as an identity function, then if the probability of science is larger than sports, we classify the instance into SCIENCE .", "We propose KPT, which mainly focuses on using external knowledge to improve verbalizers in prompt-tuning.", "In KPT , we use KBs to generate multiple label words related to each class y , e.g., V 1 = {science,physics, ...}.", "And we propose four refinement methods to eliminate the noise in the expanded V .", "Finally, we explore the vanilla average and weighted average approaches for the 2227 utilization of the expanded V .", "The process of predicting masked words based on the context is not a single-choice procedure, that is, there is no standard correct answer, but abundant words may fit this context.", "Therefore, the label words mapped by a verbalizer should be equipped by two attributes: wide coverage and little subjective bias .", "Such a comprehensive projection is crucial to the imitation of pre-training, which is the essence of prompt-tuning.", "Fortunately, external structured knowledge could simultaneously meet both requirements.", "In this section, we introduce how we use external knowledge for two text classification tasks: topic classification and sentiment classification.", "For topic classification, the core issue is to extract label words related to the topic from all aspects and granularities.", "From this perspective, we choose Related Words 1 , a knowledge graph G aggregated from multiple resources, including word embeddings, ConceptNet (Speer et al., 2017), WordNet (Pedersen et al., 2004), etc., as our external KB.", "The edges denote \"relevance\" relations and are annotated with relevance scores.", "We presume the the name of each class v 0 is correct and use them as the anchor node to get the neighborhood nodes NG ( v 0 ) whose scores are larger than a threshold as the related words 2 .", "Thus, each class is mapped into a set of label words V y = NG ( v 0 ) { v 0 } .", "For binary sentiment classification, the primary goal is to extend the binary sentiment to sentiment of more granualities and aspects.", "We use the sentiment dictionary summarized by previous researchers 3 , 4 .", "Several examples of the label words in the KPT are in Table 1.", "Although we have constructed a knowledgeable verbalizer that contains comprehensive label words, the collected label words can be very noisy since the vocabulary of the KB is not tailored for the PLM.", "Thus it is necessary to refine such verbalizer by retaining high-quality words.", "In this section, 1 https://relatedwords.org 2 We take = 0 in the experiments 3 https://www.enchantedlearning.com/wordlist/positivewords.shtml 4 https://www.enchantedlearning.com/wordlist/negativewords.shtml we propose four refinement methods addressing different problems of the noisy label words.", "Frequency Refinement.", "The first problem is to handle the rare words.", "We assume that several words in the KB are rare to the PLM, thus the prediction probabilities on these words tend to be inaccurate.", "Instead of using a word-frequency dictionary, we propose to use contextualized prior of the label words to remove these words.", "Specifically, given a text classification task, we denote the distribution of the sentences x in the corpus as D .", "For each sentence in the distribution, we wrap it into the template and calculate the predicted probability for each label word v in the masked position PM ( [MASK] = v | x p ) .", "By taking the expectation of the probability over the entire distribution of sentences, we can get the prior distribution of the label words in the masked position.", "We formalize it as PD ( v )= E x D PM ( [MASK] = v | x p ) .", "Empirically, we found that using a small-size unlabeled support set C sampled from the training set and with labels removed, will yield a satisfying estimate of the above expectation.", "Thus, assuming that the input samples { x C} have a uniform prior distribution, the contextualized prior is approximated by PD ( v ) 1 | C| (cid:88) x CPM ( [MASK] = v | x p ) .", "Then we remove the label words whose prior probabilities are less than a threshold.", "Details can be found in Appendix C. Relevance Refinement.", "As our construction of knowledgeable label words is fully unsupervised, some label words may be more relevant to their belonging class than the others.", "To measure the relevance of a label word to each class, we obtain the prediction probability of the label word on the support set C as the vector representation q v of the label words, i.e., q v 's i -th element is q vi = PM ([ MASK ] = v | x i p ) , x i C , (4) where x i p represents the sentence x i combined with the template p.", "To estimate the class's representation, we presume that the name of each class v 0 , such as science for SCIENCE , though lack of coverage, is very relevant to the class.", "Then we use the vector representation q v 0 of the these names as the 2228 Dataset Label Label Words AG's News POLITICS politics, government, diplomatic, law, aristotle, diplomatical, governance ...", "class's representation q y .", "Therefore the relevance score between a label word v and a class y is calculated as the cosine similarity between the two representation: r ( v, y ) = cos( q v , q y ) = cos( q v , q v 0 ) .", "Moreover, some label words may contribute positively to multiple classes, resulting in confusion between classes.", "For example, the potential label word physiology of class SCIENCE may also be assigned with a high probability in a sentence of class SPORTS .", "To mitigate such confusion and filter the less relevant label words, we design a metric that favors the label word with high relevance merely to its belonging class and low relevance to other classes: R ( v ) = r ( v, f ( v )) |Y| 1 (cid:80) y Y ,y = f ( v ) ( r ( v, y )) , (6) where f ( v ) is the corresponding class of v .", "Ideally, a good label word should at least has a higher relevance score for its belonging class than the average relevance score for the other classes.", "Therefore, we remove the label words with R ( v ) < 1 .", "In practice, we have a slight modification to Equation (6), please refer to appendix C for details.", "Essentially, this Relevance Refinement adopts the idea of the classical TF-IDF (Jones, 1972) algorithm which estimates the relevance of a word to a document.", "It prefers to use a word that is relevant to a specific document while irrelevant to other documents as the keyword of the document.", "In KPT, a class is analogous to a document, while a label word is comparable to the word in the document.", "From this perspective, equation (6) is a variant of TF-IDF metric.", "Contextualized Calibration.", "The third problem is the drastic difference in the prior probabilities of label words.", "As previous works (Zhao et al., 2021; Holtzman et al., 2021) have shown, some label words are less likely to be predicted than the others, regardless of the label of input sentences, resulting in a biased prediction.", "In our setting, the label words in the KB tend to have more diverse prior probabilities, resulting in a severer problem (see Table 2).", "Therefore, we use the contextualized prior of label words to calibrate the predicted distribution, namely, contextualized calibration (CC): PM ( [MASK] = v | x p ) PM ( [MASK] = v | x p ) PD ( v ) (7) where PD ( v ) is the prior probability of the label word.", "Learnable Refinement .", "In few-shot learning, the refinement can be strengthen by a learning process.", "Specifically we assign a learnable weight w v to each label word v (may be already refined by the previous methods).", "The weights form a vector w R |V| , which is initialized to be a zero vector.", "The weights are normalized within each V y : v = exp( w v ) (cid:80) u V y exp( w u ) .", "Intuitively, in the training process, a small weight is expected to be learned for a noisy label word to minimize its influence on the prediction.", "Note that in few-shot setting, calibration may not be necessary because the probability of a label word can be trained to the desired magnitude, i.e., PM ( [MASK] = v | x p ) = PM ( [MASK] = v | x p ) .", "In addition to these refinement methods, since many label words are out-of-vocabulary for the PLM and are split into multiple tokens by the tokenizer.", "For these words, we simply use the average prediction score of each token as the prediction score for the word.", "The influence of this simple approach is studied in Appendix D.3.", "The final problem is how to map the predicted probability on each refined label word to the decision of the class label y .", "predicting the label.", "Therefore, we use the average of the predicted scores on V y as the predicted score for label y .", "The predicted label y is y =argmax y Y (cid:80) v V y PM ( [MASK] = v | x p ) |V y | .", "Weighted Average.", "In few-shot setting, supported by the Learnable Refinement, we adopt a weighted average of label words' scores as the prediction score.", "The refinement weights v are used as the weights for averaging.", "Thus, the predicted y is y = argmax y Y exp (cid:0) s ( y | x p ) (cid:1) (cid:80) y exp (cid:0) s ( y | x p ) (cid:1) , (10) where s ( y | x p ) is s ( y | x p )= (cid:88) v V y v log PM ( [MASK] = v | x p ) .", "(11)", "This objective function is suitable for continuous optimization by applying a cross-entropy loss on the predicted probability.", "We provide a theoretical illustration of the KPT framwork in Appendix B.", "We evaluate KPT on five text classification datasets to demonstrate the effectiveness of incorporating external knowledge into prompt-tuning.", "We carry out experiments on three topic classification datasets: AG's News (Zhang et al., 2015), DBPedia (Lehmann et al., 2015), and Yahoo (Zhang et al., 2015), and two sentiment classification datasets: IMDB (Maas et al., 2011) and Amazon (McAuley and Leskovec, 2013).", "The statistics of the datasets are shown in Table", "7. The detailed information and the statistics of each dataset is in Appendix E. We test all prompt-based methods using four manual templates and report both the average results (with standard error) of the four templates and the results of the best template (shown in (brack-ets) ).", "The reasons for using manual templates and the specific templates for each dataset are in Appendix E. 4.2 Experiment Settings Our experiments are based on OpenPrompt (Ding et al., 2021), which is an open-source toolkit to conduct prompt learning.", "For the PLM, we use RoBERTa large (Liu et al., 2019) for all experiments.", "For test metrics, we use Micro-F1 in all experiments.", "For all zero-shot experiments, we repeat the experiments 3 times using different random seeds if randomness is introduced in the experiments, and for all few-shot experiments, we repeat 5 times.", "Note that considering the four templates and five/three random seeds, each reported score of prompt-based methods is the average of 20/12 experiments , which greatly reduces the randomness of the evaluation results.", "For the refinement based on the support set C , the size of the unlabeled support set | C| is 200.", "For few-shot learning, we conduct 1, 5, 10, and 20-shot experiments.", "For a k -shot experiment, we sample k instances of each class from the original training set to form the few-shot training set and sample another k instances per class to form the validation set.", "We tune the entire model for 5 epochs and choose the checkpoint with the best validation performance to test.", "Other hyper-parameters can be found in Appendix F. 4.3 Baselines In this subsection, we introduce the baselines we compare with.", "To better understand our proposed methods, we also compare within the performance of KPT using different configuration.", "Fine-tuning (FT).", "Traditional fine-tuning method inputs the hidden embedding of [CLS] token of the PLM into the classification layer to make predictions.", "Note that fine-tuning can not be applied to the zero-shot setting, since the classification layer is randomly initialized.", "Prompt-tuning (PT).", "The regular prompt-tuning method uses the class name as the only label word for each class, which is used in PET (Schick and Schtze, 2021a) and most existing works.", "For a fair comparison, we do not use the tricks in PET, such as self-training and prompt ensemble, which are orthogonal to our contributions.", "Automatic Verbalizer (AUTO).", "The automatic verbalizer is proposed by PETAL (Schick et al., 2020), which uses labeled data to select the most informative label words inside a PLM's vocabulary.", "It is targeted at the situation when no manually defined class names are available.", "It's not obvious how to combine it with the manually de-2230 Method AG's News DBPedia Yahoo Amazon IMDB PT 75.1 6.2 (79.0) 66.6 2.3 (68.4) 45.4 7.0 (52.0) 80.2 8.8 (87.8) 86.4 4.0 (92.0) PT+CC 79.9 0.7 (81.0) 73.9 4.9 (82.6) 58.0 1.4 (58.8) 91.4 1.6 (93.5) 91.6 3.0 (93.7) KPT 84.8 1.2 ( 86.7 ) 82.2 5.4 ( 87.4 ) 61.6 2.2 ( 63.8 ) 92.8 1.2 ( 94.6 ) 91.6 2.7 (94.0) -FR 82.7 1.5 (85.0) 81.8 4.6 (86.2) 60.9 1.5 (62.7) 92.8 1.2 ( 94.6 ) 91.6 2.8 ( 94.1 ) -RR 81.4 1.5 (83.7) 81.4 4.5 (85.8) 60.1 1.0 (61.4) 92.8 1.2 ( 94.6 ) 91.6 2.8 ( 94.1 ) -CC 55.5 2.8 (58.3) 64.5 6.8 (73.0) 42.4 5.0 (46.8) 86.2 5.7 (92.5) 90.3 2.8 ( 94.1 ) Table 2: Results of zero-shot text classification.", "fined class name to boost the performance, and how it can be applied in a zero-shot setting.", "Therefore we only compare it in the few-shot setting with no class name information given.", "2021).", "They use a continuous vector for each class and use the dot product between the masked language model output and the class vector to produce the probability for each class.", "In our experiments, its class vectors are initialized with the class names' word embedding, since it is more effective with 2231 manual class names as the initial values (see Appendix A).", "As an optimization-based method, Soft Verbalizer is not applicable in the zero-shot setting.", "PT+CC.", "For zero-shot setting, we further introduce PT combined with our proposed contextualized calibration 5 as a baseline to see how much improvement is made by contextualized calibration instead of knowledgeable verbalizers.", "For KPT , we experiment with different variants to better understand the proposed methods such as refinement.", "-FR , -RR , -CC and -LR is the variant that does not conduct Frequency Refinement, Relevance Refinement, Contextualized Calibration, and Learnable Refinement, respectively.", "In few-shot experiments, we presume that the supervised training data can train the output probability of each label word to the desired magnitude, thus we don't use CC and FR in the KPT .", "This decision is justified in Appendix D.2.", "In this subsection, we introduce the specific results", "and provide possible insights of KPT .", "Zero-shot.", "From Table 2, we see that all the variants of KPT , except for KPT-CC, consistently outperforms PT and PT+CC baselines, which indicates the effectiveness of our methods.", "Comparison between PT and PT+CC proves that Contextualized Calibration is very effective in the zero-shot setting.", "The results of KPT-FR-RR-CC, which is the variant without any refinement, reveal the label noise is severe in the automatically constructed knowledgeable label words.", "The gap between KPT-FR-RR and KPT-FR-RR-CC is larger than the gap between PT+CC and PT, demonstrating the drastic difference in the prior probabilities of the knowledgeable label words as we hypothesized in 3.3.", "Comparison between KPT, KPT-FR, KPT-FR-RR proves the effectiveness of the refinement methods.", "For the analysis regarding each type of classification task, we observe that the performance boost compared to the baselines in topic classification is higher than sentiment classification, which we conjecture that topic classification requires more external knowledge than sentiment classification.", "While CC offers huge improvement (on average +13%) over PT baseline, the incorporation of external knowledge further improves over PT+CC up to 11% on DBPedia, and 6% on AG's News and Yahoo.", "We also observe that the improvement brought 5 The same support sets are used as KPT .", "by the refinement methods is more noticeable for topic classification tasks.", "By looking at the fraction of label words maintained after the refinement process (See appendix D.4), we conjecture that the sentiment dictionary that we used in sentiment classification tasks contains little noise.", "Moreover, the improvement brought by the refinement process justifies the resilience of our methods to recover from noisy label words.", "Few-shot.", "From Table 3, we first find out that prompt-based methods win over fine-tuning by a dramatic margin under nearly all situations.", "The gap enlarges as the shot becomes fewer.", "Comparing the baseline methods, the Soft Verbalizer (SOFT) generally wins over the Manual Verbalizer(PT) by a slight margin.", "However, automatic verbalizer (AUTO), although free of manual effort, lags behind the other verbalizers especially in a low-shot setting.", "The reason is obvious since the selection of label words among the vocabulary becomes inaccurate when labeled data is limited.", "When comparing KPT with the baseline methods, we find KPT or its variants consistently outperform all baseline methods.", "On average, 17.8% , 10.3%, and 7.4% error rate reduction from the best baseline methods are achieved on 1, 5, and 10 shot experiments, respectively.", "Comparing within the variants of KPT , we find that RR and LR are generally effective across shots on topic classification dataset, while in sentiment classification dataset, KPT works well without the refinements, which is consistent with our previous assumptions that the sentiment dictionary has little noise.", "Note that the KPT-RR variant does not utilize any unlabeled support set C since we do not conduct CC and FR by default in few-shot learning.", "This variant is still superior to the baseline methods in most cases.", "In terms of variance, we can see that KPT enjoys smaller variances than baseline methods in most cases, demonstrating that the better coverage of label words stabilizes the training.", "For 20-shot experiments, we can see that the gap between different methods narrows as the training data becomes sufficient.", "However, KPT and its variants still win by a consistent margin over the baseline methods.", "Surprisingly, with more training data, LR does not become more powerful as we may hypothesize.", "We conjecture that it is because all label words, even with some noise, can serve as training objectives for prompt tuning.", "This perspective is similar to Gao et al. (2021) that using bad 2232 as a label word for the class positive can still preform classification although the performance degrades.", "Ablation studies about our refinement methods have been shown in the previous section.", "In this section and Appendix D, we conduct more in-depth analyses on the proposed methods.", "One advantage of KPT is that it can generate diverse label words across different granularities.", "To specifically quantify such diversity, we conduct a case study.", "For the correctly predicted sentences of a class y , we count the frequency of label words v V y appearing in the top-5 predictions for the [MASK] position.", "Then we report the top-15 frequent label words in Figure 2.", "Due to space limit, only the results of SPORTS and BUSINESS category of AG's News are shown.", "As shown in Figure 2, a diversity of label words, instead of mainly the original class names, are predicted.", "And the predicted label words cover various aspects of the corresponding topic.", "For example, for the topic SPORTS , the predicted leagues, football, and coach are related to it from different angles.", "In addition to the visualization, we study the influence of the support set's size on zero-shot text classification in Appendix D.1.", "Then we justify that few-shot learning via labeled data eases the need for calibration and frequency-based refinement in Appendix D.2.", "We also demonstrate that our approach to handling the out-of-vocabulary (OOV) words is reasonable in Appendix D.3.", "Moreover, we take a closer look at the refinement process by analyzing the fraction of label words maintained during refinement in Appendix D.4.", "Finally, we discuss the potential use of the proposed methods when knowledge bases resources are not readily available in Appendix D.5.", "In this paper, we propose KPT , which expands the verbalizer in prompt-tuning using the external KB.", "To better utilize the KB, we propose refinement methods for the knowledgeable verbalizer.", "The experiments show the potential of KPT in both zero-shot settings and few-shot settings.", "For future work, there are open questions related to our research for investigation: (1) Better approaches for combining KB and prompt-tuning in terms of template construction and verbalizer design.", "(2) Incorporating external knowledge into prompt-tuning for other tasks such as text generation.", "Zhiyuan Liu, Huadong Wang and Shengding Hu proposed the idea and led the research.", "Shengding Hu designed the methods and conducted the experiments.", "Ning Ding and Shengding Hu wrote the abstract, introduction (Section 1) and method (Sec-tion 3) part of the paper.", "Shengding Hu finished the other parts.", "Ning Ding, Huadong Wang and Zhiyuan Liu thoroughly revised the whole paper.", "Jingang Wang, Juanzi Li, Wei Wu and Maosong Sun proofread the paper and provided valuable comments.", "This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), Institute Guo Qiang at Tsinghua University, NExT++ project from the National Research Foundation, Prime Minister's Office, Singapore under its IRC@Singapore Funding Initiative, and International Innovation Center of Tsinghua University, Shanghai, China.", "This work proposes knowledgeable prompt tuning which uses external knowledge bases to construct the verbalizer.", "Users should be aware of the potential error in the external KBs, or even the injection of malicious words." ]
[ "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain" ]
[ "Knowledge inference on knowledge graph has attracted extensive attention, which aims to find out connotative valid facts in knowledge graph and is very helpful for improving the performance of many downstream applications.", "However, researchers have mainly poured attention to knowledge inference on binary facts.", "The studies on n-ary facts are relatively scarcer, although they are also ubiquitous in the real world.", "Therefore, this paper addresses knowledge inference on n-ary facts.", "We represent each n-ary fact as a primary triple coupled with a set of its auxiliary descriptive attribute-value pair(s).", "We further propose a neural network model, NeuInfer, for knowledge inference on n-ary facts.", "Besides handling the common task to infer an unknown element in a whole fact, NeuInfer can cope with a new type of task, flexible knowledge inference.", "It aims to infer an unknown element in a partial fact consisting of the primary triple coupled with any number of its auxiliary description(s).", "Experimental results demonstrate the remarkable superiority of NeuInfer.", "With the introduction of connotative valid facts, knowledge inference on knowledge graph improves the performance of many downstream applications, such as vertical search and question answering (Dong et al., 2015; Lukovnikov et al., 2017).", "Existing studies (Nickel et al., 2016; Wang et al., 2017) mainly focus on knowledge inference on binary facts with two entities connected with a certain binary relation, represented as triples, (head entity, relation, tail entity).", "They attempt to infer the unknown head/tail entity or the unknown relation of a given binary fact.", "However, n-ary facts involving more than two entities are also ubiquitous.", "For example, in Freebase, more than 1 / 3 entities participate in n-ary facts (Wen et al., 2016).", "The fact that John Bardeen received Nobel P rize in P hysics in 1956 together with W alter Houser Brattain and W illiam Shockley 1 is a typical 5-ary fact.", "So far, only a few studies (Wen et al., 2016; Zhang et al., 2018; Guan et al., 2019) have tried to address knowledge inference on n-ary facts.", "In existing studies for knowledge inference on n-ary facts, each n-ary fact is represented as a group of peer attributes and attribute values.", "In practice, for each n-ary fact, there is usually a primary triple (the main focus of the n-ary fact), and other attributes along with the corresponding attribute values are its auxiliary descriptions.", "Take the above 5-ary fact for example, the primary triple is ( John Bardeen, award received, Nobel P rize in P hysics ) , and other attribute-value pairs including point in time : 1956 , together with : W alter Houser Brattain and together with : W illiam Shockley are its auxiliary descriptions.", "Actually, in YAGO (Suchanek et al., 2007) and Wikidata (Vrandecic and Krotzsch, 2014), a primary triple is identified for each n-ary fact.", "The above 5-ary fact is a relatively complete example.", "In the real-world scenario, many n-ary facts appear as only partial ones, each consisting of a primary triple and a subset of its auxiliary description(s), due to incomplete knowledge acquisition.", "For example, ( John Bardeen, award received, Nobel P rize in P hysics ) with point in time : 1956 and it with { together with : W alter Houser Brattain, together with : W illiam Shockley } are two typical partial facts corresponding to the above 5-ary fact.", "For differentiation, we call those relatively complete facts as whole ones.", "We noticed that existing studies on n-ary facts infer an unknown element in a well-defined whole fact and have not paid attention to knowledge inference on partial facts.", "Later on, we 1 https://www.wikidata.org/wiki/Q949 refer the former as simple knowledge inference, while the latter as flexible knowledge inference.", "With these considerations in mind, in this paper, by discriminating the information in the same n-ary fact, we propose a neural network model, called NeuInfer, to conduct both simple and flexible knowledge inference on n-ary facts.", "Our specific contributions are summarized as: We treat the information in the same n-ary fact discriminatingly and represent each n-ary fact as a primary triple coupled with a set of its auxiliary descriptive attribute-value pair(s).", "We propose a neural network model, NeuInfer, for knowledge inference on n-ary facts.", "NeuInfer can particularly handle the new type of task, flexible knowledge inference, which infers an unknown element in a partial fact consisting of a primary triple and any number of its auxiliary description(s).", "They can be divided into tensor/matrix based methods, translation based methods, and neural network based ones.", "The quintessential one of tensor/matrix based methods is RESCAL (Nickel et al., 2011).", "It relates a knowledge graph to a three-way tensor of head entities, relations, and tail entities.", "The learned embeddings of entities and relations via minimizing the reconstruction error of the tensor are used to reconstruct the tensor.", "And binary facts corresponding to entries of large values are treated as valid.", "Similarly, ComplEx (Trouillon et al., 2016) relates each relation to a matrix of head and tail entities, which is decomposed and learned like RESCAL.", "To improve the embeddings and thus the performance of inference, researchers further introduce the constraints of entities and relations (Ding et al., 2018; Jain et al., 2018).", "Translation based methods date back to TransE (Bordes et al., 2013).", "It views each valid binary fact as the translation from the head entity to the tail entity via their relation.", "Thus, the score function indicating the validity of the fact is defined based on the similarity between the translation result and the tail entity.", "Then, a flurry of methods spring up (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015; Guo et al., 2015; Lin et al., 2015a; Xiao et al., 2016; Jia et al., 2016; Tay et al., 2017; Ebisu and Ichise, 2018; Chen et al., 2019).", "They modify the above translation assumption or introduce additional information and constraints.", "Among them, TransH (Wang et al., 2014) translates on relation-specific hyperplanes.", "Entities are projected into the hyperplanes of relations before translating.", "Neural network based methods model the validity of binary facts or the inference processes.", "For example, ConvKB (Nguyen et al., 2018) treats each binary fact as a three-column matrix.", "This matrix is fed into a convolution layer, followed by a concatenation layer and a fully-connected layer to generate a validity score.", "Nathani et al. (2019) further proposes a generalized graph attention model as the encoder to capture neighborhood features and applies ConvKB as the decoder.", "ConvE (Dettmers et al., 2018) models entity inference process via 2D convolution over the reshaped then concatenated embedding of the known entity and relation.", "ConvR (Jiang et al., 2019) further adaptively constructs convolution filters from relation embedding and applies these filters across entity embedding to generate convolutional features.", "SENN (Guan et al., 2018) models the inference processes of head entities, tail entities, and relations via fully-connected neural networks, and integrates them into a unified framework.", "As aforesaid, only a few studies handle this type of knowledge inference.", "The m-TransH method (Wen et al., 2016) defines n-ary relations as the mappings from the attribute sequences to the attribute values.", "Each n-ary fact is an instance of the corresponding n-ary relation.", "Then, m-TransH generalizes TransH (Wang et al., 2014) on binary facts to n-ary facts via attaching each n-ary relation with a hyperplane.", "RAE (Zhang et al., 2018) further introduces the likelihood that two attribute values co-participate in a common n-ary fact, and adds the corresponding relatedness loss multiplied by a weight factor to the embedding loss of m-TransH.", "Specifically, RAE applies a fully-connected neural network to model the above likelihood.", "Differently, NaLP (Guan et al., 2019) represents each n-ary fact as a set of attribute-value pairs directly.", "Then, convolution is adopted to get the embeddings of the attribute-value pairs, and a fully-connected neural network is applied to evaluate their relatedness and finally to obtain the validity score of the input n-ary fact.", "In these methods, the information in the same n-ary fact is equal-status.", "Actually, in each n-ary fact, a primary triple can usually be identified with other information as its auxiliary description(s), as exemplified in Section", "1. Moreover, these methods are deliberately designed only for the inference on whole facts.", "They have not tackled any distinct inference task.", "In practice, the newly proposed flexible knowledge inference is also prevalent.", "Different from the studies that define n-ary relations first and then represent n-ary facts (Wen et al., 2016; Zhang et al., 2018), we represent each n-ary fact as a primary triple (head entity, relation, tail entity) coupled with a set of its auxiliary description(s) directly.", "Formally, given an n-ary fact F ct with the primary triple ( h, r, t ) , m attributes and attribute values, its representation is: (cid:0) ( h,r, t ) , { | a 1 : v 1 , | a 2 : v 2 , | . . . , | a m : v m } (cid:1) , where each a i : v i ( i = 1 , 2 , . . . , m ) is an attribute-value pair, also called an auxiliary description to the primary triple.", "An element of F ct refers to h / r / t / a i / v i ; A Fct = { a 1 , a 2 , . . . , a m } is F ct 's attribute set and a i may be the same to a j ( i, j = 1 , 2 , . . . , m, i (cid:54) = j ); V Fct = { v 1 , v 2 , . . . , v m } is F ct 's attribute value set.", "For example, the representation of the 5-ary fact, mentioned in Section 1, is: (cid:0) ( John Bardeen, award received, Nobel Prize in Physics ) , { | point in time : 1956 , | together with : Walter Houser Brattain, | together with : William Shockley } (cid:1) .", "Note that, in the real world, there is a type of complicated cases, say, where more than two entities participate in the same n-ary fact with the same primary attribute.", "We follow Wikidata (Vrandecic and Krotzsch, 2014) to view the cases from different aspects of different entities.", "Take the case that John Bardeen , W alter Houser Brattain , and W illiam Shockley received Nobel P rize in P hysics in 1956 for example, besides the above 5-ary fact from the view of John Bardeen , we get other two 5-ary facts from the views of W alter Houser Brattain 2 and W illiam Shockley 3 , respectively: (cid:0) ( Walter Houser Brattain, award received, Nobel Prize in Physics ) , { | point in time : 1956 , | together with : John Bardeen, | together with : William Shockley } (cid:1) .", "(cid:0) ( William Shockley, award received, Nobel Prize in Physics ) , { | point in time : 1956 , | together with : Walter Houser Brattain, | together with : John Bardeen } (cid:1) .", "In this paper, we handle both the common simple knowledge inference and the newly proposed flexible knowledge inference.", "Before giving their definitions under our representation form of n-ary facts, let us define whole fact and partial fact first.", "Definition 1 (Whole fact and partial fact).", "For the fact F ct , assume its set of auxiliary description(s) as S d = { a i : v i | i = 1 , 2 , . . . , m } .", "Then a partial fact of F ct is: F ct (cid:48) = (cid:0) ( h, r, t ) , S (cid:48) d (cid:1) , where S (cid:48) d S d , i.e., S (cid:48) d is a subset of S d .", "And we call F ct the whole fact to differentiate it from F ct (cid:48) .", "Notably, whole fact and partial fact are relative concepts, and a whole fact is a relatively complete fact compared to its partial fact.", "In this paper, partial facts are introduced to imitate a typical open-world setting where different facts of the same type may have different numbers of attribute-value pair(s).", "Definition 2 (Simple knowledge inference).", "It aims to infer an unknown element in a whole fact.", "Definition 3 (Flexible knowledge inference).", "It aims to infer an unknown element in a partial fact.", "To conduct knowledge inference on n-ary facts, NeuInfer first models the validity of the n-ary facts and then casts inference as a classification task.", "How to estimate whether an n-ary fact is valid not?", "Let us look into two typical examples of invalid n-ary facts: (cid:0) ( John Bardeen, award received, Turing Award ) , { | point in time : 1956 , | together with : Walter Houser Brattain, | together with : William Shockley } (cid:1) .", "(cid:0) ( John Bardeen, award received, Nobel Prize in Physics ) , { | point in time : 1956 , | together with : Walter Houser Brattain, | place of marriage : New Y ork City } (cid:1) .", "Therefore, we believe that a valid n-ary fact has two prerequisites.", "On the one hand, its primary triple should be valid.", "If the primary triple is invalid, attaching any number of attribute-value pairs to it does not make the resulting n-ary fact valid; on the other hand, since each auxiliary description presents a qualifier to the primary triple, it should be compatible with the primary triple.", "Even if the primary triple is basically valid, any incompatible attribute-value pair makes the n-ary fact invalid.", "Therefore, NeuInfer is designed to characterize these two aspects and thus consists of two components corresponding to the validity evaluation of the primary triple and the compatibility evaluation of the n-ary fact, respectively.", "The framework of NeuInfer is illustrated in Figure 1, with the 5-ary fact presented in Section 1 an example.", "For an n-ary fact F ct , we look up the embeddings of its relation r and the attributes in A Fct from the embedding matrix MR R | R | k of relations and attributes, where R is the set of all the relations and attributes, and k is the dimension of the latent vector space.", "The embeddings of h , t , and the attribute values in V Fct are looked up from the embedding matrix ME R | E | k of entities and attribute values, where E is the set of all the entities and attribute values.", "In what follows, the embeddings are denoted with the same letters but in boldface by convention.", "As presented in Figure 1, these embeddings are fed into the validity evaluation component (the upper part of Figure 1) and the compatibility evaluation component (the bottom part of Figure 1) to compute the validity score of ( h, r, t ) and the compatibility score of F ct , respectively.", "These two scores are used to generate the final score of F ct by weighted sum and further compute the loss.", "Note that, following RAE (Zhang et al., 2018) and NaLP (Guan et al., 2019), we only apply fully-connected neural networks in NeuInfer.", "This component estimates the validity of ( h, r, t ) , including the acquisition of its interaction vector and the assessment of its validity, corresponding to hrt-FCNs and FCN 1 in Figure 1, respectively.", "Detailedly, the embeddings of h , r , and t are concatenated and fed into a fully-connected neural network.", "After layer-by-layer learning, the last layer outputs the interaction vector o hrt of ( h, r, t ) : o hrt = f ( f ( f ( f ([ h ; r ; t ] W 1 , 1 + b 1 , 1 ) W 1 , 2 + b 1 , 2 ) ) W 1 , n 1 + b 1 , n 1 ) , (1) where f ( ) is the ReLU function; n 1 is the number of the neural network layers; { W 1 , 1 , W 1 , 2 , . . . , W 1 , n 1 } and { b 1 , 1 , b 1 , 2 , . . . , b 1 , n 1 } are their weight matrices and bias vectors, respectively.", "With o hrt as the input, the validity score val hrt of ( h, r, t ) is computed via a fully-connected layer and then the sigmoid operation: val hrt = ( o hrt W val + b val ) , (2) where W val and b val are the weight matrix and bias variable, respectively; ( x ) = 1 1+ e x is the sigmoid function, which constrains val hrt (0 , 1) .", "For simplicity, the number of hidden nodes in each fully-connected layer of hrt-FCNs and FCN 1 gradually reduces with the same difference between layers.", "This component estimates the compatibility of F ct .", "It contains three sub-processes, i.e., the capture of the interaction vector between ( h, r, t ) and each auxiliary description a i : v i ( i = 1 , 2 , . . . , m ), the acquisition of the overall interaction vector, and the assessment of the compatibility of F ct , corresponding to hrtav-FCNs, min and FCN 2 in Figure 1, respectively.", "Similar to hrt-FCNs, we obtain the interaction vector o hrta i v i of ( h, r, t ) and a i : v i : o hrta i v i = f ( f ( f ( f ([ h ; r ; t ; a i ; v i ] W 2 , 1 + b 2 , 1 ) W 2 , 2 + b 2 , 2 ) ) W 2 , n 2 + b 2 , n 2 ) , (3) where n 2 is the number of the neural network layers; { W 2 , 1 , W 2 , 2 , . . . , W 2 , n 2 } and { b 2 , 1 , b 2 , 2 , . . . , b 2 , n 2 } are their weight matrices and bias vectors, respectively.", "The number of hidden nodes in each fully-connected layer also gradually reduces with the same difference between layers.", "And the dimension of the resulting o hrta i v i is d .", "All the auxiliary descriptions share the same parameters in this sub-process.", "The overall interaction vector o hrtav of F ct is generated based on o hrta i v i .", "Before introducing this sub-process, let us see the principle behind first.", "Straightforwardly, if F ct is valid, ( h, r, t ) should be compatible with any of its auxiliary description.", "Then, the values of their interaction vector, measuring the compatibility in many different views, are all encouraged to be large.", "Therefore, for each dimension, the minimum over it of all the interaction vectors is not allowed to be too small.", "Thus, the overall interaction vector o hrtav of ( h, r, t ) and its auxiliary description(s) is: o hrtav = min mi =1 ( o hrta i v i ) , (4) where min( ) is the element-wise minimizing function.", "comp Fct = ( o hrtav W comp + b comp ) , (5)", "where W comp of dimension d 1 and b comp are the weight matrix and bias variable, respectively.", "The final score s Fct of F ct is the weighted sum of the above validity score and compatibility score:", "where w (0 , 1) is the weight factor.", "If the arity of F ct is 2, the final score is equal to the validity score of the primary triple ( h, r, t ) .", "Then, Equation (6) is reduced to: s Fct = val hrt .", "Currently, we obtain the final score s Fct of F ct .", "In addition, F ct has its target score l Fct .", "By comparing s Fct with l Fct , we get the binary cross-entropy loss: L Fct = l Fct log s Fct (1 l Fct ) log(1 s Fct ) , (8) where l Fct = 1 , if F ct T , otherwise F ct T , l Fct = 0 .", "Here, T is the training set and T is the set of negative samples constructed by corrupting the n-ary facts in T .", "Specifically, for each n-ary fact in T , we randomly replace one of its elements with a random element in E / R to generate one negative sample not contained in T .", "We then optimize NeuInfer via backpropagation, and Adam (Kingma and Ba, 2015) with learning rate is used as the optimizer.", "We conduct experiments on two n-ary datasets.", "The first one is JF17K (Wen et al., 2016; Zhang et al., 2018), derived from Freebase (Bollacker et al., 2008).", "In JF17K, an n-ary relation of a certain type is defined by a fixed number of ordered attributes.", "Then, any n-ary fact of this relation is denoted as an ordered sequence of attribute values corresponding to the attributes.", "For example, for all n-ary facts of the n-ary relation olympics.olympic medal honor , they all have four attribute values (e.g., 2008 Summer Olympics , United States , Natalie Coughlin , and Swimming at the 2008 Summer Olympics W omen (cid:48) s 4 100 metre freestyle relay ), corresponding to the four ordered attributes of this n-ary relation.", "The second one is WikiPeople (Guan et al., 2019), derived from Wikidata (Vrandecic and Krotzsch, 2014).", "Its n-ary facts are more diverse than JF17K's.", "For example, for all n-ary facts that narrate award received , some have the attribute together with , while some others do not.", "Thus, WikiPeople is more difficult.", "To run NeuInfer on JF17K and WikiPeople, we transform the representation of their n-ary facts.", "For JF17K, we need to convert each attribute value sequence of a specific n-ary relation to a primary triple coupled with a set of its auxiliary description(s).", "The core of this process is to determine the primary triple, formed by merging the two primary attributes of the n-ary relation and the corresponding attribute values.", "The two primary attributes are selected based on RAE (Zhang et al., 2018).", "For each attribute of the n-ary relation, we count the number of its distinct attribute values from all the n-ary facts of this relation.", "The two attributes that correspond to the largest and second-largest numbers are chosen as the two primary attributes.", "For WikiPeople, since there is a primary triple for each n-ary fact in Wikidata, with its help, we simply reorganize a set of attribute-value pairs in WikiPeople to a primary triple coupled with a set of its auxiliary description(s).", "The statistics of the datasets after conversion or reorganization are outlined in Table 1, where # T rain , # V alid , and # T est are the sizes of the training set, validation set, and test set, respectively.", "ciprocal Rank (MRR) and Hits@ N .", "For each n-ary test fact, one of its elements is removed and replaced by all the elements in E / R .", "These corrupted n-ary facts are fed into NeuInfer to obtain the final scores.", "Based on these scores, the n-ary facts are sorted in descending order, and the rank of the n-ary test fact is stored.", "Note that, except the n-ary test fact, other corrupted n-ary facts existing in the training/validation/test set, are discarded before sorting.", "This process is repeated for all other elements of the n-ary test fact.", "Then, MRR is the average of these reciprocal ranks, and Hits@ N is the proportion of the ranks less than or equal to N .", "Knowledge inference includes entity inference and relation inference.", "As presented in Table 1, the number of relations and attributes in each dataset is far less than that of entities and attribute values (on JF17K, | R | = 501 , while | E | = 28 , 645 ; on WikiPeople, | R | = 193 , while | E | = 47 , 765 ).", "That is, inferring a relation/attribute is much simpler than inferring an entity/attribute value.", "Therefore, we adopt MRR and Hits@ { 1, 3, 10 } on entity inference, while pouring attention to more fine-grained metrics, i.e., MRR and Hits@1 on relation inference.", "The hyper-parameters of NeuInfer are tuned via grid search in the following ranges: The embedding dimension k { 50 , 100 } , the batch size { 128 , 256 } , the learning rate { 5 e 6 , 1 e 5 , 5 e 5 , 1 e 4 , 5 e 4 , 1 e 3 } , the numbers n 1 and n 2 of the neural network layers of hrt-FCNs and hrtav-FCNs in { 1 , 2 } , the dimension d of the interaction vector o hrta i v i in { 50 , 100 , 200 , 400 , 500 , 800 , 1000 , 1200 } , the weight factor w of the scores in { 0 .", "1 , 0 .", "2 , . . . , 0 .", "9 } .", "The adopted optimal settings are: k = 100 , = 128 , = 5 e 5 , n 1 = 2 , n 2 = 1 , d = 1200 , and w = 0 .", "1 for JF17K; k = 100 , = 128 , = 1 e 4 , n 1 = 1 , n 2 = 1 , d = 1000 , and w = 0 .", "3 for WikiPeople.", "Simple knowledge inference includes simple entity inference and simple relation inference.", "For an n-ary fact, they infer one of the entities/the relation in Method JF17K WikiPeople MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 RAE 0.310 0.219 0.334 0.504 0.172 0.102 0.182 0.320 NaLP 0.366 0.290 0.391 0.516 0.338 0.272 0.364 0.466 NeuInfer 0.517 0.436 0.553 0.675 0.350 0.282 0.381 0.467 Table 2: Experimental results of simple entity inference.", "the primary triple or the attribute value/attribute in an auxiliary description, given its other information.", "Knowledge inference methods on n-ary facts are scarce.", "The representative methods are m-TransH (Wen et al., 2016) and its modified version RAE (Zhang et al., 2018), and the state-of-the-art one is NaLP (Guan et al., 2019).", "As m-TransH is worse than RAE, following NaLP, we do not adopt it as a baseline.", "The experimental results of simple entity inference are reported in Table", "2. From the results, it can be observed that NeuInfer performs much better than the best baseline NaLP, which verifies the superiority of NeuInfer.", "Specifically, on JF17K, the performance gap between NeuInfer and NaLP is significant.", "In essence, 0.151 on MRR, 14.6% on Hits@1, 16.2% on Hits@3, and 15.9% on Hits@10.", "On WikiPeople, NeuInfer also outperforms NaLP.", "It testifies the strength of NeuInfer treating the information in the same n-ary fact discriminatingly.", "By differentiating the primary triple from other auxiliary description(s), NeuInfer considers the validity of the primary triple and the compatibility between the primary triple and its auxiliary description(s) to model each n-ary fact more appropriately and reasonably.", "Thus, it is not surprising that NeuInfer beats the baselines.", "And on simpler JF17K (see Section 5.1), NeuInfer gains more significant performance improvement than on WikiPeople.", "Since RAE is deliberately developed only for simple entity inference, we compare NeuInfer only with NaLP on simple relation inference.", "Table 3 demonstrates the experimental results of simple relation inference.", "From the table, we can observe that NeuInfer outperforms NaLP consistently.", "Detailedly, on JF17K, the performance improvement of NeuInfer on MRR and Hits@1 is 0.036 and 7.0%, respectively; on WikiPeople, they are 0.030 and 9.1%, respectively.", "It is ascribed to the reasonable modeling of n-ary facts, which not only improves the performance of simple entity inference but also is beneficial to pick the exact right relations/attributes out.", "We perform an ablation study to look deep into the framework of NeuInfer.", "If we remove the compatibility evaluation component, NeuInfer is reduced to a method for binary but not n-ary facts.", "Since we handle knowledge inference on n-ary facts, it is inappropriate to remove this component.", "Thus, as an ablation, we only deactivate the validity evaluation component, denoted as NeuInfer .", "The experimental comparison between NeuInfer and NeuInfer is illustrated in Figure", "2. It can be observed from the figure that NeuInfer outperforms NeuInfer significantly.", "It suggests that the validity evaluation component plays a pivotal role in our method.", "Thus, each component of our method is necessary.", "The newly proposed flexible knowledge inference focuses on n-ary facts of arities greater than", "2. It includes flexible entity inference and flexible relation inference.", "For an n-ary fact, they infer one of the entities/the relation in the primary triple given any number of its auxiliary description(s) or infer the attribute value/attribute in an auxiliary description given the primary triple and any number of other auxiliary description(s).", "In existing knowledge inference methods on n-ary facts, each n-ary fact is represented as a group of peer attributes and attribute values.", "These methods have not poured attention to the above flexible knowledge inference.", "Thus, we conduct this new type of task only on MRR Hits@1 Hits@3 Hits@10 Ablation study of simple entity inference on JF17K 0.400 0.500 0.600 0.700 S c o r e s 0.517 0.436 0.553 0.675 0.433 0.379 0.465 0.529 MRR Hits@1 Hits@3 Hits@10 Ablation study of simple entity inference on WikiPeople 0.000 0.200 0.400 0.350 0.282 0.381 0.467 0.050 0.033 0.055 0.085 NeuInfer NeuInfer MRR Hits@1 Hits@3 Hits@10 Ablation study of simple relation inference on JF17K 0.700 0.800 0.900 0.861 0.832 0.886 0.904 0.710 0.702 0.713 0.717 MRR Hits@1 Hits@3 Hits@10 Ablation study of simple relation inference on WikiPeople 0.250 0.500 0.750 1.000 0.765 0.686 0.828 0.897 0.211 0.183 0.209 0.229 Figure 2: The experimental comparison between NeuInfer and NeuInfer .", "NeuInfer.", "Before elaborating on the experimental results, let us look into the new test set used in this section first.", "We generate the new test set as follows:", "Collect the n-ary facts of arities greater than 2 from the test set.", "For each collected n-ary fact, compute all the subsets of the auxiliary description(s).", "The primary triple and each subset form a new n-ary fact, which is added to the candidate set.", "Remove the n-ary facts that also exist in the training/validation set from the candidate set and then remove the duplicate n-ary facts.", "The remaining n-ary facts form the new test set.", "The size of the resulting new test set on JF17K is 34,784, and that on WikiPeople is 13,833.", "The experimental results of flexible entity and relation inference on these new test sets are presented in Table 4.", "It can be observed that NeuInfer well tackles flexible entity and relation inference on partial facts, and achieves excellent performance.", "We also attribute this to the reasonable modeling of n-ary facts.", "For each n-ary fact, NeuInfer distinguishes the primary triple from other auxiliary description(s) and models them properly.", "Thus, NeuInfer well handles various types of entity and relation inference concerning the primary triple coupled with any number of its auxiliary description(s).", "To further analyze the effectiveness of the proposed NeuInfer method, we look into the breakdown of its performance on different arities, as well as on primary triples and auxiliary descriptions.", "Without loss of generality, here we report only the experimental results on simple entity inference.", "The test sets are grouped into binary and n-ary (n > 2) categories according to the arities of the facts.", "Table 5 presents the experimental results of simple entity inference on these two categories of JF17K and WikiPeople.", "From the tables, we can observe that NeuInfer consistently outperforms the baselines on both categories on simpler JF17K.", "On more difficult WikiPeople, NeuInfer is comparable to the best baseline NaLP on the binary category and gains much better performance on the n-ary category in terms of the fine-grained MRR and Hits@1.", "In general, NeuInfer performs much better on JF17K than on WikiPeople.", "We attribute this to the simplicity of JF17K.", "Where does the above performance improvement come from?", "Is it from inferring the head/tail Dataset Method MRR Hits@1 Hits@3 Hits@10 Binary N-ary Binary N-ary Binary N-ary Binary N-ary JF17K RAE 0.115 0.397 0.050 0.294 0.108 0.434 0.247 0.618 NaLP 0.118 0.477 0.058 0.394 0.121 0.512 0.246 0.637 NeuInfer 0.267 0.628 0.173 0.554 0.300 0.666 0.462 0.770 WikiPeople RAE 0.169 0.187 0.096 0.126 0.178 0.198 0.323 0.306 NaLP 0.351 0.283 0.291 0.187 0.374 0.322 0.465 0.471 NeuInfer 0.350 0.349 0.278 0.303 0.385 0.364 0.473 0.439 Table 5: Experimental results of simple entity inference on binary and n-ary categories of JF17K and WikiPeople.", "entities in primary triples or the attribute values in auxiliary descriptions?", "To go deep into it, we study the performance of NeuInfer on inferring the head/tail entities and the attribute values and compare it with the best baseline NaLP.", "The detailed experimental results are demonstrated in Tables 6 and 7.", "It can be observed that NeuInfer brings more performance gain on inferring attribute values.", "It indicates that combining the validity of the primary triple and the compatibility between the primary triple and its auxiliary description(s) to model each n-ary fact is more effective than only considering the relatedness of attribute-value pairs in NaLP, especially for inferring attribute values.", "In this paper, we distinguished the information in the same n-ary fact and represented each n-ary fact as a primary triple coupled with a set of its auxiliary description(s).", "We then proposed a neural network model, NeuInfer, for knowledge inference on n-ary facts.", "NeuInfer combines the validity evaluation of the primary triple and the compatibility evaluation of the n-ary fact to obtain the validity score of the n-ary fact.", "In this way, NeuInfer has the ability of well handling simple knowledge inference, which copes with the inference on whole facts.", "Furthermore, NeuInfer is capable of dealing with the newly proposed flexible knowledge inference, which tackles the inference on partial facts consisting of a primary triple coupled with any number of its auxiliary descriptive attribute-value pair(s).", "Experimental results manifest the merits and superiority of NeuInfer.", "Particularly, on simple entity inference, NeuInfer outperforms the state-of-the-art method significantly in terms of all the metrics.", "NeuInfer improves the performance of Hits@3 even by 16.2% on JF17K.", "In this paper, we use only n-ary facts in the datasets to conduct knowledge inference.", "For future works, to further improve the method, we will explore the introduction of additional information, such as rules and external texts.", "The work is supported by the National Key Research and Development Program of China under grant 2016YFB1000902, the National Natural Science Foundation of China under grants U1911401, 61772501, U1836206, 91646120, and 61722211, the GFKJ Innovation Program, Beijing Academy of Artificial Intelligence (BAAI) under grant BAAI2019ZD0306, and the Lenovo-CAS Joint Lab Youth Scientist Project." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other" ]
[ "Spoken language translation applications for speech suffer due to conversational speech phenomena, particularly the presence of disfluencies.", "With the rise of end-to-end speech translation models, processing steps such as disfluency removal that were previously an intermediate step between speech recognition and machine translation need to be incorporated into model architectures.", "We use a sequence-to-sequence model to translate from noisy, disfluent speech to fluent text with disfluencies removed using the recently collected copy-edited' references for the Fisher Spanish-English dataset.", "We are able to directly generate fluent translations and introduce considerations about how to evaluate success on this task.", "This work provides a baseline for a new task, the translation of conversational speech with joint removal of disfluencies.", "Spoken language translation (SLT) applications suffer due to conversational speech phenomena, particularly the presence of disfluencies.", "In conversational speech, speakers often use disfluencies such as filler words, repetitions, false starts, and corrections which do not naturally occur in text and may not be desired in translation outputs.", "Disfluency recognition and removal has previously been performed as an intermediate step between speech recognition (ASR) and machine translation (MT), to make disfluent ASR output better-matched to typically clean machine translation training data (Cho et al., 2013, 2014; Wang et al., 2010; Honal and Schultz, 2005; Zayats et al., 2016).", "With the rise of end-to-end sequence-to-sequence speech translation systems (Weiss et al., 2017; Bansal et al., 2018), disfluency removal can no longer be handled as an intermediate step between ASR and MT but needs to be incorporated into the model or handled as a post-processing step.", "Generating fluent translations from disfluent speech may be desired for simultaneous SLT applications where removing disfluencies will improve the application's clarity and usability.", "To train end-to-end speech translation requires parallel speech and text translations.", "This introduces data considerations not previously relevant with chained ASR+MT models, as different datasets could be used to train ASR and MT components.", "Where aligned speech and translations exist, data is typically clean speech (cid:1) clean text, as in news or TED talks, or disfluent speech (cid:1) disfluent translations, as in Fisher or meeting data, where disfluencies were faithfully included in the references for completeness.", "While some corpora with labeled disfluencies exist (Cho et al., 2014; Burger et al., 2002), only subsets have been translated and/or released.", "Salesky et al. (2018) introduced a set of fluent references 1 for Fisher Spanish-English, enabling a new task: end-to-end training and evaluation against fluent references.", "Previous work on disfluency removal has treated it as a sequence labeling task using word or span-level labels.", "However, in some cases, simply removing disfluencies from an utterance can create ill-formed output.", "Further, corpora can have different translation and annotation schemes: for example for Fisher Spanish-English, translated using Mechanical Turk, Salesky et al. (2018) found 268 unique filler words due to spelling and casing.", "Disfluencies can also be context-specific, such as false starts or corrections where a phrase may be disflu-ent' due to its surroundings.", "To remove disfluencies as a post-processing step would require a separate model trained with appropriate data and disfluency labels, and may lead to ill-formed output.", "By translating directly to fluent target data instead, we aim to handle these concerns implicitly.", "We present the first results translating directly from disfluent source speech to fluent target text.", "1 Data available at: https://github.com/isl-mt/fluent-fisher 2 Data For our experiments, we use Fisher Spanish speech (Graff et al.) and with two sets of English translations (Salesky et al., 2018; Post et al., 2013).", "The speech dataset comprises telephone conversations between mostly native Spanish speakers recorded in realistic noise conditions.", "The original English translations were collected through crowdsourcing, as described in Post et al. (2013).", "Four references were collected for each of the development and test sets, and one for training.", "The training data consists of 819 conversations yielding 160 hours of speech and 150k utterances; the development and test sets are 4 k utterances each.", "We use only the first of the two development sets (dev, not dev2).", "This data is conversational and disfluent.", "The original translations faithfully maintain and translate phenomena in the Spanish transcripts such as filler words and hesitations, discourse markers ( you know , well , mm ), repetitions, corrections and false starts, among others.", "Salesky et al. (2018) introduced a new set of fluent reference translations collected on Mechanical Turk.", "They collected two references for each of the development and test sets, and one for the training set.", "Rather than labeling the disfluencies in the original target data, Turkers were asked to rewrite the utterance in a copy-edited' manner without disfluent phenomena.", "In some cases, simply removing disfluencies would created ill-formed structure in the resulting utterance.", "This scheme instead creates a sentence-level edit allowing for reordering and insertions as necessary to create fluent content, akin instead to monolingual translation or paraphrasing.", "Examples of source transcripts and original translations with the fluent counterparts are shown below in Table 1.", "ORG uh, uh, uh, um, i think it's like that FLT i think it's like that SRC tambien tengo um eh estoy tomando una clase", "..", "ORG i also have um eh i'm taking a marketing class", "..", "FLT i'm also taking a marketing class SRC porque que va, mja ya te acuerda que", "..", "ORG because what is, mhm do you recall now that", "..", "FLT do you recall now that", "..", "SRC y entonces am es entonces la universidad donde yo estoy es university of pennsylvania ORG and so am and so the university where i am it's the university of pennsylvania FLT i am at the university of pennsylvania Table 1: Disfluency examples in Spanish source (SRC), original (ORG) and fluent (FLT) English translations 3 Speech-to-Text Model Initial work on the Fisher-Spanish dataset used HMM-GMM ASR models linked with phrase-based MT using lattices (Post et al., 2013; Kumar et al., 2014).", "More recently, it was shown in Weiss et al. (2017) and Bansal et al. (2018) that end-to-end SLT models perform competitively on this task.", "As in Bansal et al. (2018), we use a sequence-to-sequence architecture inspired by Weiss et al. but modified to train within available resources; specifically, all models may be trained in less than 5 days on one GPU.", "We build an encoder-decoder model with attention in xnmt (Neubig et al., 2018) with 512 hidden units throughout.", "We use a 3-layer BiLSTM encoder.", "We do not use the additional convolutional layers from Weiss et al. and Bansal et al. to reduce temporal resolution, but rather use network-in-network (NiN) projections from previous work in sequence-to-sequence ASR (Zhang et al., 2017; Sperber et al., 2018) to get the same total 4 downsampling in time.", "This gives the benefit of added depth with fewer parameters.", "We closely follow the LSTM/NiN encoder used in Sperber et al. (2018) for ASR and use the same training procedure, detailed in Appendix A. We extract 40-dimensional mel filterbank features with per-speaker mean and variance normalization with Kaldi (Povey et al., 2011).", "We did not see significant difference between 40, 40+deltas and 80-dimensional features in initial experiments, similar to Bansal et al. (2018), who chose 80-dim.", "Weiss et al. (2017) used 240-dim features comprising 80-dim filterbanks stacked with deltas and delta-deltas.", "We exclude utterances longer than 1500 frames to manage memory requirements.", "Like Weiss et al. (2017), we translate to target characters as opposed to words (Bansal et al., 2018).", "We also use an MLP-based attention with 1 hidden layer with 128 units and 64-dimensional target em-beddings, though we use only 1 decoder hidden layer as opposed to 3 or 4 in these works.", "We use input feeding (Luong et al., 2015).", "All models use the same preprocessing as previous work on this dataset: lowercasing and removing punctuation aside from apostrophes.", "We focus on the problem of translating directly from noisy speech to clean references without", "separate disfluency removal step.", "We first demonstrate the efficacy of our models on the original disfluent Fisher Spanish-English task, comparing to the previously reported numbers on the SLT task (Weiss et al., 2017; Bansal et al., 2018).", "We then compare these results with models trained using the collected clean' target data with disfluencies removed.", "Finally, we look at the mismatched case where we train on disfluent data and evaluate on a cleaned test set; this is a more realistic scenario, as clean training data is difficult to collect, and we cannot expect to have it for each language and use case we encounter.", "We evaluate using both BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) to compare different aspects of model behavior on our two tasks.", "2 BLEU assesses how well predicted translations match a set of reference translations using modified n-gram precision, weighted by a brevity penalty in place of recall to penalize short hypothesis translations without full coverage.", "The brevity penalty is computed as e (1 r/c ) , where r is the length of the reference and c the candidate translation.", "For our task of implicitly removing disfluencies during translation, our generated translations should contain much of the same content but with certain tokens removed, creating shorter translations.", "When scoring fluent output against the original disfluent references , then, differences in BLEU score will come from two sources: shorter n-gram matches, and the brevity penalty.", "METEOR, on the other hand, can be considered a more semantic' evaluation metric.", "It uses a harmonic mean of precision and recall, with greater weight given to recall.", "Further, while BLEU uses exact n-gram matches, METEOR also takes into account stem, synonym, and paraphrase matches.", "For our fluent task, we aim to maintain semantic meaning while removing disfluent tokens.", "Accordingly, when scored against the fluent target references, we hope to see similar METEOR scores between the disfluent models and fluent models.", "Both metrics are used for a holistic view of the problem: METEOR will indicate if meaning is maintained, but not assess disfluency removal, while BLEU changes will indicate whether disfluencies have been removed.", "We provide both multi-reference and single-reference BLEU and METEOR scores: the original 2 BLEU scores are 4-gram word-level BLEU computed using multi-bleu.pl from the Moses toolkit (Koehn et al., 2007).", "METEOR is computed using the script from http://www.cs.cmu.edu/alavie/METEOR/ Fisher target data has four reference translations for the dev and test sets, which boosts scores considerably as hypothesis n-grams can match in any of the references.", "The fluent target data has two references, so the single reference scores better enable comparison between the two tasks.", "Table 2 shows our results on the original disfluent data with comparisons to Weiss et al. (2017) and Bansal et al. (2018).", "All results are single task end-to-end speech translation models.", "Weiss et", "al.'s deeper model reaches a BLEU score of 47.3 on test after 2.5 weeks of training.", "Our model is more similar in depth to Bansal et al. (2018), having both made modifications to train on one GPU in < 5 days (see Section 3).", "While Bansal et al. use words on the target side to improve convergence time at a slight performance cost, we are able to use characters like Weiss et al. by having a still shallower architecture (2 fewer layers on both the encoder and decoder), giving us approximately the same training time per epoch they observe with words ( 2 hours).", "We converge to a test BLEU of 33.7, 3-4 BLEU improved over Bansal et al. on dev and test.", "This demonstrates our model has reasonable performance on the original data, providing a strong baseline before turning to our targeted task of directly generating fluent translations.", "Table 3 compares performance of speech translation models trained with the fluent target translations to models trained with the original disfluent translations, as scored on the fluent references.", "Comparing the disfluent and fluent models, we see that METEOR scores are almost the same while BLEU scores are lower with the disfluent model.", "This is as we would hope: with our fluent model, we want to generate translations that are semantically the same but with disfluencies removed.", "Therefore similar METEOR scores with similar recall (52) on the fluent references are encouraging.", "For BLEU, however, the disfluencies generated by the disfluent model break up n-grams in the fluent references, thereby lowering scores.", "Comparing single-reference scores with Table 2, we see that they are distinctly lower.", "This is to be expected with the shorter fluent references; a difference of a single token carries greater weight.", "Translating directly to the fluent references is a more challenging task.", "As shown in Table 1, the original English translations and Spanish speech are very one-to-one while the edited translations introduce deletions and reorderings.", "In learning to generate fluent translations, the model needs to learn to handle these more inconsistent behaviors.", "Figure 1 shows a visual comparison between outputs generated by the two models.", "Using the fluent target data to train constrains the model output vocabulary, so filler words such as um', ah', mhm' are not generated.", "We also see significant reductions in repetitions of both words and phrases from the model trained with fluent reference translations.", "Further, we also see instances where the fluent model generates a shorter paraphrase of a disfluent phrase, as in the 2nd example.", "Disfluency removal for speech translation has traditionally been done as an intermediate step between ASR and MT to better-match additional clean corpora used for MT training; we do not compare to a pipeline approach here.", "However, to contextualize these results, we compare disfluency removal as a post-processing step after end-to-end speech translation with the original disfluent par-dev test Model 1Ref 2Ref 1Ref 2Ref Postproc.", "allel data.", "Simply filtering filler words and repetitions from the disfluent model (Filter) outputs as a post-processing step, the dev scores improve slightly, but test stays the same or decreases.", "In some cases, treating disfluency removal as a filtering task can reduce the fluency of an utterance: Disfluent mm well and from and the email is a scandal the spam.", "A filtering or tagging system may not capture all false starts or corrections, leading to lower fluency, and requires labeled spans.", "Treating the post-processing step as a monolingual translation task (MonoMT) rather than a filtering task allows for reordering and insertions, which we saw boost fluency.", "We trained a 4-layer BiLSTM encoder-decoder model to translate between the disfluent and fluent English references and applied this to the output of the end-to-end disfluent model.", "BLEU scores approach the results with the end-to-end fluent target model (Table 3), but we note, this requires the same resources as the direct task.", "Showing the importance of fluent references for evaluation, Table 5 shows the performance of fluent models as evaluated on the original disfluent references.", "Disfluent target scores are the same as in Table 2, and have been copied for easy compar-dev test Model Metric 1Ref 4Ref 1Ref 4Ref Fluent BLEU 16.6 29.8 17.0 30.4 Disfluent BLEU 19.0 32.4 19.6 33.7 Fluent METEOR 21.8 25.9 22.7 27.0 Disfluent METEOR 25.1 30.0 26.1 30.9 Table 5: Performance evaluated with original disfluent references .", "ison.", "As we would expect, here there is a greater difference in scores.", "The fluent references have fewer long n-gram matches with disfluencies removed, lowering BLEU.", "The fluent model's METEOR scores suffer more than BLEU due to the recall calculation; recall on the disfluent references is lower because the fluent model does not produce many of the disfluencies (indeed filler words are not in the vocabulary when trained with the fluent references).", "Recall is reduced by 14% with the fluent model, reflecting the approximate distribution of disfluencies in the original data.", "The differences in scores with these two metrics do not show the full picture.", "Outputs generated by the fluent model are on average 13% shorter and contain 1.5 fewer tokens per utterance than the disfluent model, which is significant with average utterance lengths of 10-11 tokens.", "When scoring the fluent output against the original disfluent references, the shorter length significantly contributes to the lower scores, with the BLEU brevity penalty calculated as 0.86 as opposed to 0.96-1.0 for all other conditions.", "Removing the length penalty from the BLEU score calculation, single-reference scores are boosted to 19.3 and 19.8 from 16.6 and 17.0 for dev and test, respectively (Table 5).", "This is a somewhat fairer comparison of the disfluent and fluent models, as we do not want the fluent output to match the disfluent sequence length, and the disfluent models are not penalized due to length.", "These BLEU scores are now very similar to those of the disfluent model on the disfluent references, though the outputs are very different (Figure 1).", "The changes here and the difference in trends between the two metrics with respect to the two types of references show that evaluating this task cannot be simply accomplished with one existing metric: depending on the combination of metric and references, it's possible to mask the difference between disfluent and fluent systems, unless you have word-level disfluency annotations, which are more difficult to obtain.", "Machine translation applications for speech can suffer due to conversational speech phenomena, particularly the presence of disfluencies.", "Previous work to remove disfluencies in speech translation did so as a separate step between speech recognition and machine translation, which is not possible using end-to-end models.", "Using clean references for disfluent data collected by Salesky et al. (2018), we extend their text baseline to speech input and provide first results for direct generation of fluent text from noisy disfluent speech.", "While fluent training data enables research on this task with end-to-end models, it is unlikely to have this resource for every corpus and domain and it is expensive to collect.", "In future work, we hope to reduce the dependence on fluent target data during training through decoder pretraining on external non-conversational corpora or multitask learning.", "Further, standard metrics alone do not tell the full story for this task; additional work on evaluation metrics may better demonstrate the differences between such systems." ]
[ "abstain", "abstain", "method", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "other", "other", "method", "method", "method", "method", "other", "method", "result", "other", "abstain", "abstain", "method", "method", "other", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain" ]
[ "A release note is a technical document that describes the latest changes to a software product and is crucial in open source software development.", "However, it still remains challenging to generate release notes automatically.", "In this paper, we present a new dataset called RNSum, which contains approximately 82,000 English release notes and the associated commit messages derived from the online repositories in GitHub.", "Then, we propose classwise extractive-then-abstractive/abstractive summarization approaches to this task, which can employ a modern transformer-based seq2seq network like BART and can be applied to various repositories without specific constraints.", "The experimental results on the RNSum dataset show that the proposed methods can generate less noisy release notes at higher coverage than the baselines.", "We also observe that there is a significant gap in the coverage of essential information when compared to human references.", "Our dataset and the code are publicly available.", "Recently, there has been considerable interest in applying natural language processing (NLP) techniques to support software development (Iyer et al., 2016; Yin and Neubig, 2018; Panthaplackel et al., 2020).", "One such task involves the automatic generation of release notes .", "A release note is a technical document that describes the latest changes to a software product, which is necessary for software developers to adjust their codes accurately for using the updated software.", "Since release notes are time-consuming to write manually, several studies have been done to explore automatic release note generation.", "Moreno et al. (2014) proposed ARENA, Figure 1: An example data in RNSum; this example is derived from the release of tag v2.6.4 in https: //github.com/vuejs/vue .", "an automatic release notes generator, which first extracts and summarizes the changes in the source code and then integrates them with the information provided by version-control and issue-tracking systems.", "Pokorn (2020) developed Glyph, which classifies commit messages into predefined release-note categories (e.g., Features, Bug Fixes) using pre-trained word embeddings and produces the categorized commit messages as the final release note.", "Despite the progress reported in these previous studies, usable release note generators are far from realization.", "We attribute this difficulty mainly to 8718 two problems.", "First, the existing resources for automatic release note generation are scarce; for example, Glyph was trained on only 5,000 commit messages, which is too little for obtaining a sufficiently generalized model.", "Second, the existing methods have limited applicability; for example, ARENA requires an issue tracker hosted on Jira, thus preventing it from being used for most GitHub repositories.", "Also, Glyph's predefined release-note categories do not include deprecations and removals, which are often indispensable in release notes.", "To alleviate the above problems, we introduce RNSum, a new large-scale dataset for automatic release note generation via commit logs summarization.", "An example data in RNSum is shown in Figure 1.", "RNSum consists of approximately 82,000 release notes derived from online repositories on GitHub.", "The contents of each release note are further categorized into four release-note classes: (1) Features, (2) Improvements, (3) Bug Fixes, and (4) Deprecations+ (Deprecations, Removals, and Breaking Changes).", "The release notes are associated with the commit messages that were used by human maintainers to write the release notes.", "The difficulty of this task is that there is no explicit alignment between each commit message and the release note categories.", "For example, in Figure 1, the first commit message (chore: make documentation clearer (#9450)) is not reflected in the release notes.", "In contrast, the second commit message (fix: empty scoped slot should return undefined fix #9452) is reflected as the third release note in the Bug Fixes class.", "We propose two approaches to this task: Classwise Extractive-then-Abstractive Summarization (CEAS) and Classwise Abstractive Summarization (CAS) models, which learn to produce categorized release notes given unlabeled commit messages in extractive and abstractive manners, respectively.", "The two proposed models can leverage modern transformer-based sequence-to-sequence (seq2seq) architectures (e.g., BART (Lewis et al., 2020)) and can be used for various repositories without any special constraints.", "We evaluate the proposed models and the previous systems on the RNSum dataset and report that our approaches generate less noisy release notes at higher coverage than the baselines given only unlabeled commit messages.", "We also perform human evaluations carefully to assess how well the systems could generate release notes in terms of quality (precision) and coverage (recall), revealing CM RN Dataset Text Class Text Class Size Mauczka et al. (2015) 967 Levin and Yehudai (2017) 1,151 Safdari (2018) 3,377 RNSum (ours) 81,996 Table 1: Comparison of RNSum with the existing datasets on commit logs.", "that there still remains a significant gap in the coverage when compared to human references.", "Our dataset and the source code are publicly available.", "1 2 Task Formulation Here, we define the automatic release note generation task.", "The input is a set of commit messages (sentences), x = { x 1 , . . . , x n } .", "Given the input commit messages x , our goal is to generate labeled release notes y c for each predefined release-note class c C .", "Each labeled release note is a collection of sentences, i.e., y c = y c, 1 , y c, 2 , . . . .", "According to Moreno et al. (2014), the major contents of most release notes can be categorized into the following classes: Fixed Bugs, New Features, New Code Components (CC), Modified CC, etc.", "Based on their observations, we define the release-note classes C comprising the following four categories: Features (F), Improvements (I), Bug Fixes (B), and Deprecations+ (D; Deprecations, Removals, and Breaking Changes).", "2 Our classes do not include Refactoring, Document Changes, and Library Updates because most of the maintainers on GitHub omitted these changes in their release notes.", "Our dataset can be interpreted as a collection of quintuples, namely { x , y F , y I , y B , y D } .", "For simplicity, we also use ( x , y ) instead of the quintuple representation.", "It is worth noting that the labeled release notes y c can be empty.", "For example, in Figure 1, the software update is related to improvements and bug fixes.", "Thus, the release note contains only the Improvements and Bug Fixes classes, and y F = y D = .", "Release Note Generation Automatic release note generation has been studied by several research groups.", "Moreno et al. (2014) proposed ARENA, which transforms the extracted source-code changes into the corresponding natural language release notes.", "However, ARENA relies on a versioning system and an issue tracker hosted on Jira, which makes it difficult or even impossible to use in a variety of software projects, especially those hosted on GitHub.", "Klepper et al. (2016) proposed a semi-automatic algorithm to generate the release notes depending on expected types of readers, e.g., team members, customers, and users.", "However, no experiments were reported in their work.", "Recently, a publicly available release note generator, Glyph (Pokorn, 2020), was developed.", "Glyph is a simple learning-based model that classifies each input commit message into one of five labels: Features, Bug Fixes, Improvements, Nonfunctional, and Other.", "These categorized commit messages are then used as the final release notes.", "The Glyph model was trained on 5,000 commit logs using Facebook's fastText framework (Joulin et al., 2017).", "We summarize the comparison of our dataset with the three data sources used in Glyph in Table 1.", "These existing datasets annotate only the commit message with the release note classes, making it difficult to use for release note generation.", "Also, the data size is quite small.", "To address the limitations of these three studies, we built a new large-scale dataset called RNSum, which contains approximately 82,000 release notes with commit messages.", "We reviewed the release notes carefully and redefined the four classes.", "We also propose classwise summarization methods to automatic release note generation, which can be applied to all English repositories on GitHub.", "Classwise Summarization There are several reported studies that use class information for summarization.", "Cao et al. (2017) and Yang et al. (2018) proposed using text categories to improve text summarization.", "Liang et al. (2019) proposed a clinical note extractive summarization system that generates summaries based on specific disease names.", "In 1 https://github.com/nlab-mpg/RNSum-Da taset .", "For licensing reasons, RNSum does not contain textual content of the release notes and the commit messages but only their URLs.", "To enable users to obtain the contents easily, we provide scripts using the GitHub API.", "2 The Improvements class includes improvements to the existing features instead of the addition of new features.", "contrast to these studies, we developed classwise summarization methods for release note generation, which we confirm through experiments to be more effective than the baselines.", "We collected the release notes and their associated commit messages from several repositories on GitHub using the GitHub API.", "3 Repositories First, we selected all public repositories that did not fork any repositories.", "A repository that did not fork means a repository that was not copied from others.", "Then, we filtered out repositories with less than 50 stars, assuming that repositories with many stars tend to contain high-quality release notes, which are suitable for learning reliable release note generation.", "These filtering steps resulted in 337,048 repositories as of March 2021.", "Release Notes with Classes We listed the past releases for each repository.", "For each of the four predefined classes, we manually created a vocabulary with up to 30 entries.", "For example, the vocabulary for the Improvements class contained terms such as improvements, enhancements, and op-timizations.", "We show the vocabularies used in this work in the Appendix.", "Then, for each release, we searched for the presence of terms in the vocabularies over the entire body text (including the subti-tles).", "We retained the release notes in which at least one class-relevant term was detected in the body text.", "We removed the repositories where only a single release note class appeared throughout themselves.", "Commit Messages On GitHub, the release notes are NOT tied to their corresponding commit messages.", "Therefore, we synchronized the release notes and commit messages using version tags (strings) and heuristic matching rules.", "Specifically, we first listed the version tags (e.g., v3.7.1 , v2.6.0 ) of the release notes in a repository.", "Then, the tags with beta versions, such as rc, alpha, and beta, were removed, and the tags were sorted chronologically.", "Next, considering all adjacent tag pairs, we retained only those that satisfied the heuristic matching rules.", "The heuristic matching rules focused on only the numerical parts of the version tags (e.g., v3.7.1 371) and compared 3 https://api.github.com 8720 # repositories 7,216 # release notes (RN) 81,996 Avg.", "the number of digits and the magnitude of the number.", "4 For example, given a chronologically sorted version tags, v3.6.3.1 v3.6.4 v2.6.1 v3.7.0 v3.8.0 , only one tag pair, namely v3.7.0 v3.8.0 , can be retained.", "Then, for each retained tag pair, ( v t 1 , v t ) , we compared the two versions using the GitHub API and collected the list of commit messages for the version tag (or release note) v t .", "Postprocessing We filtered out release notes that were too complex, with more than 50 sentences and more than 250 commit messages, because it was computationally difficult to handle such large data.", "Finally, we obtained a total of 7,216 repositories and 81,996 release notes.", "We investigate the statistics of the RNsum dataset.", "Table 2 summarizes the results.", "Our dataset consists of 81,996 release notes from 7,216 repositories in total.", "The average number of release note sentences and commit messages per release note is 3.3 and 14.9, respectively.", "The average number of tokens in release notes and commit messages are 63.3 4 The details of the heuristic rules are described in the Appendix.", "and 260.4, respectively.", "5 The number of unique word types (i.e., vocabulary size) is 833,984, which is significantly large because many project-specific terms such as class and method names are detected.", "We plot each data point in RNSum in Figure 2, where each point is represented by a two-dimensional vector of the number of tokens in the release notes and the commit messages.", "There is a correlation between the release notes and the corresponding commit messages regarding the number of tokens.", "Also, RNSum contains a wide variety of data of diverse sizes.", "We also examine the word overlap rate of the commit messages against the release notes.", "We remove special symbols such as URL, hash values, and issue numbers by using the spaCy POS tagger.", "6 The resulting overlap rate is 56.7%, indicating that extractive approaches (e.g., Glyph), which simply classify commit messages into a fixed set of predefined classes, has a limitation to achieving higher recall.", "7 The result also indicates that information outside the commit messages (e.g., pull requests, issues associated with the commit messages) may improve the performance further, which we leave left for future work.", "In the end, we examine the distributions of release note classes.", "There is an obvious class imbalance: Bug Fixes accounts for 60.0%, while Deprecations+ accounts for only 4.2%.", "This class imbalance problem makes the task more challenging.", "Automatic release note generation can be viewed as a task of summarizing commit messages x into the labeled release notes y c .", "In this paper, we introduce Classwise Extractive-then-Abstractive Summarization (CEAS) and Classwise Abstractive Summarization (CAS) models, which we instantiate by modern transformer-based sequence-to-sequence (seq2seq) networks and can be universally used in various repositories without any special constraints.", "5 We used spaCy to tokenize the release notes and the commit messages.", "The Classwise Extractive-then-Abstractive Summarization (CEAS) model consists of two neural modules: a classifier F and a generator G .", "First, CEAS uses F to classify each commit message into five release-note classes: Features, Improvements, Bug Fixes, Deprecations+, and Other.", "Then, commit messages classified as the same class are concatenated to form a single document.", "The commit messages classified as Other are discarded.", "Then, CEAS applies G to the four labeled documents independently and generates release notes for each class.", "In this task, the direct correspondences between commit messages and release notes are not known.", "Therefore, to train the classifier F , we assign pseudo labels to each input commit message using the first ten characters of each commit message.", "The detail of assigning pseudo labels is described in the Appendix.", "If pseudo labeling generates commit messages whose class does not appear in the gold release notes, we omit such examples in training.", "For example, in Figure 1, if the pseudo labeling generates commit messages of Features, the commit messages are discarded because the class Features does not appear in the gold release notes.", "We model the Classwise Abstractive Summarization (CAS) approach by two different methods.", "The first model, which we call CAS-Single , consists of a single seq2seq network and generates a single long release note text given a concatenation of input commit messages.", "The output text can be divided into classwise segments based on special class-specific endpoint symbols, like <Features> and </Features>.", "In training, we concatenate all the gold labeled release notes y into one long document by inserting the classwise endpoint symbols and train the network to generate the target text.", "The second method, which we call CAS-Multi , consists of four different seq2seq networks G c , each of which corresponds to one of the release-note classes (Features, Improvements, etc.).", "We train each network G c to generate the corresponding release notes y c independently given a concatenation of the input commit messages.", "We divided the RNSum dataset into training, validation, and test splits, each containing 74K, 4K, and 4K examples.", "To avoid data leaks, examples derived from the same repository did not belong to multiple splits.", "We also removed the training examples with release note text (after concatenation) of longer than 500 tokens to shorten the training time.", "Since a release note y c can consist of multiple sentences, we concatenate the sentences by inserting spaces and represent the release note as one long text in evaluation.", "Following the conventional summarization literature, we employ ROUGE (Lin, 2004) as the automatic evaluation metric.", "We also employ BLEU (Papineni et al., 2002) to evaluate the fluency of generated release notes.", "Specifically, we compute ROUGE-L (F1), BLEU-3, and BLEU-4 scores.", "8 We skip a test example if the reference text is empty.", "It is also important for the system not to generate release notes when the reference release note is empty (i.e., y c = ).", "To evaluate such ability, we also compute Specificity , i.e., TN TN+FP , where positive means that the generated release note is NOT empty.", "As baseline systems for comparison, we develop Glyph (Pokorn, 2020) and a clustering-based commit message classifier.", "These baselines are extractive summarization methods because these methods generate release notes by just classifying each input commit message into a fixed set of release-note classes.", "In contrast, CEAS and CAS employ seq2seq generators to transform input commit messages into novel texts.", "Glyph Glyph is a publicly available commit message classifier, which groups each input commit message into the following five classes: Features, Improvements, Bug Fixes, Non-functional, and Other.", "The text classification model relies simply on pretrained word embeddings in fastText (Joulin et al., 2017).", "Since the Non-functional class is not included in our task, we exclude the commit messages classified as Other or Non-functional from 8 We calculate the BLEU and ROUGE scores using torch-text https://github.com/pytorch/text and HuggingFace framework (Lhoest et al., 2021), respectively.", "Clustering We also develop a clustering-based classifier for this task.", "This method classifies each input commit message based on the closest cluster centroid using Euclidean distance.", "First, we train a Continuous Bag-of-Words (CBOW) model (Mikolov et al., 2013) with a window size of 5 on 10 million commit messages collected from GitHub and obtained 300-dimensional word embeddings for this domain.", "9 Then, we embed each input commit message using the averaged embeddings of the first three tokens (without punctua-tions).", "10 Then, we perform the K-means clustering algorithm on the commit message embeddings in the RNSum training set and obtain k ( > 4) clusters.", "We determine the correspondence between the cluster IDs and the release-note classes (Features, Bug Fixes, Improvements, Deprecations+) based on the best alignment m that maximizes the total BLEU scores on the RNSum training set D , i.e., m = argmax m k P 4 (cid:88) ( x , y ) D (cid:88) c C BLEU( f m ( c ) ( x ) , y c ) where m ( c ) denotes a cluster ID corresponding to the release-note class c , and f k ( x ) is the commit messages classified as the cluster k .", "In inference, input commit messages classified as the remaining k 4 clusters are removed in the output.", "To determine the optimal number of clusters k , we tested k [5 , 20] and found that 11 provided the best validation score.", "CEAS We employ BERT (Devlin et al., 2019) and CodeBERT (Feng et al., 2020) as the commit message classifier F .", "CodeBERT is a bimodal pre-trained model for programming language and natural language.", "Specifically, we apply a multilayer perceptron to the CLS embedding of the input.", "When training the classifier, class-identifiable words, such as fix: and feat:, are removed because they can be too strong class indicators.", "We describe the detail of removing class-identifiable words in the Appendix.", "We employ BART (Lewis 9 We used the gensim library (Rehu rek and Sojka, 2010).", "We removed words with frequencies lower than 300 occurrences.", "10 We also tested the average embeddings of all tokens or all nouns.", "The averaging of the first three tokens consistently outperforms these two counterparts.", "ROUGE-L (F1) BLEU-3 BLEU-4 Specificity", "et al., 2020) as the generator G (or G c ).", "We used the HuggingFace (Wolf et al., 2020) BertTokenizer, AutoTokenizer, and BARTTokenizer for tokeniza-tion.", "The learning rate was set to 4e-5, and we used the AdamW (Loshchilov and Hutter, 2019) optimizer.", "Mini-batch size was set to 20 for the classifier and 2 for the generator.", "To mitigate the class imbalance problem, we also used upsampling for the infrequent classes (Features, Improvements, and Deprecations+).", "We used the validation set to perform early stopping with a patience of 3 epochs.", "CAS We employ BART as the seq2seq network.", "All the CAS-Single and CAS-Multi networks are initialized with the same pretrained parameters, but the parameters are untied across the models and trained independently.", "We used the HuggingFace (Wolf et al., 2020) BARTTokenizer for to-kenization.", "Mini-batch size was set to 2 for the CAS-Single network and 8 for each network in CAS-Multi.", "Other training settings are the same as CEAS.", "We show automatic evaluation results in Table", "3. We also show the results on the cleaned version of the test set, where we removed URLs, hash values, and email addresses, which are significantly difficult to produce accurately.", "CEAS and CAS achieved ROUGE-L scores more than 10 points higher than the baselines.", "In particular, on the cleaned test set, the score gap between the proposed methods and the baselines 8723 CM fix createOptions validation issue (#294) * fix createOptions validation issue * Update logic 2.5.1 (#295) Class Bug Fixes PR + fix createOptions validation issue Issues https://github.com/Microsoft/vscode-azure-iot-toolkit/pull/294 Error if create options is larger then 512 bytes https://github.com/Microsoft/vscode-azure-iot-toolkit/issues/293 RN Fix deployment JSON validation issue when create options is larger than 512 bytes Table 4: An example provided for human evaluators.", "jumped to more than 20 points.", "These results indicate that CEAS and CAS are significantly effective.", "In addition, CEAS got a better ROUGE-L score than CAS, suggesting that combining a classifier and a generator is effective and training the classifier using pseudo labels.", "The high coverage of CEAS can be achieved probably because the classifier can focus on selecting relevant commit messages for each class.", "Moreover, CEAS(BERT) got higher scores than CEAS(CodeBERT), indicating that it is better to use BERT for tasks where the commit message is the input.", "CodeBERT is closer to the domain of commit messages than BERT, but we assume that this is because it was trained with relatively little natural language data.", "Furthermore, CAS-Multi tended to yield higher ROUGE-L than CAS-Single, suggesting that it is also effective to independently develop different abstractive summarization models for each release-note class.", "Although not as apparent as ROUGE-L, CAS models (CAS-Single and CAS-Multi) generated comparable or higher BLEU scores than CEAS and the baselines.", "CAS models were also able to achieve significantly higher Specificity scores (> 30 points) compared to the baselines.", "These results indicate that CAS models can generate less noisy release notes than the baselines.", "We hypothesize that CAS is trained a lot to remove noise from all commit messages, including Other class, which strengths the ability to deal with noise.", "We employed twelve human evaluators to manually assess the quality of the release notes generated by the systems and the reference release notes.", "The evaluators were graduate students or working professionals with at least one year of experience read-ing release notes and updating software libraries.", "We randomly chose 120 release notes from the test set.", "The allocations of the Features, Improvements, Bug Fixes, and Deprecations+ classes were 40, 25, 40, and 15, respectively.", "We divided the evaluation tasks into three groups of 40 questions, and each group was assigned to four different evaluators.", "We used a crowdsourcing platform, Yahoo! Crowdsourcing, 11 operated by Yahoo Japan Corporation for the evaluations.", "In the following, we explain the evaluation task and scoring measures.", "Evaluation Task For each evaluation task, an evaluator is given a list of input commit messages and the target release note class.", "The evaluator is also given supplemental information about pull requests and issues.", "These supplemental data were not used to train the models, but we included them because they are often helpful for accurately evaluating the release notes.", "We show an example of this case in Table", "4. The release notes that were manually prepared by the original maintainers contained the words JSON and larger than 512 bytes, but this information cannot be found in the commit messages.", "To accurately evaluate the human-generated release note, pull requests and issues are required.", "Thus, we instructed the evaluators to check the titles, pull requests, and issues if necessary.", "We selected CAS-multi for its better performance than the CEAS and CAS-single in terms of the quality of generated texts.", "For the Deprecations+ class, we evaluated the outputs of the CAS-Multi, the Clustering model, and the human reference because the Glyph does not produce release notes for the Deprecations+ class.", "Scoring We employed a five-point scoring scheme for evaluating the release notes.", "The evaluation scores were determined based on two criteria: Percentage of necessary information (cov-erage) and percentage of unnecessary information (noise contamination).", "For the coverage-oriented scoring, the following guidance was used for the scoring; 5: 90% or more necessary information (NI), 4: 70% or more NI, 3: 50% or more NI, 2: 30% or more NI, and 1: less 30% of NI.", "For the noise-oriented scoring, the following guidance was used; 5: no unnecessary information (UI), 4: less UI, 3: slightly less UI, 2: a little UI, 1: much UI.", "We use Fleiss' Kappa to measure inter-annotator 11 https://crowdsourcing.yahoo.co.jp/ .", "Results We show the results of the human evaluations in Table", "5. For all the metrics, CAS-Multi achieves the highest human evaluation scores among the automatic systems (See the column of Avg.).", "In particular, in the noise-oriented (or precision-oriented) metric, CAS-Multi significantly improves over the baselines and even outperforms the human references.", "This fact suggests that the abstractive summarization approach is effective in transforming the noisy textual representations in the commit messages.", "However, the CAS-Multi's performance is still lower than those of the human references, suggesting the remaining challenges in this task.", "We also tested the statistical significance of the results using a permutation test (Pitman, 1937).", "Since evaluating all possible permutations would require a considerable amount of time, we used the approximation method.", "Note that sampling all permutation is typically not feasible unless the dataset size is relatively small.", "12 We set the number of rounds to 10,000.", "We applied Cov.", "+ Noise scores to the test.", "Comparing CAS-Multi with the baselines, all the p-values of the permutation tests were less than 0.001, indicating that the improvements are statistically significant.", "We qualitatively analyzed the outputs of CAS-Multi to identify the bottlenecks of the current approach. Table 6 shows several examples where the outputs of CAS-Multi and the human references are largely different. The input commit messages are lengthy, so we only show the commit messages related to each class in this table; the entire text is shown in the Appendix.", "First, we found that CAS-Multi tends to produce significantly shorter release notes than human references. In the first two examples in Table 6, CAS-Multi generates only a single sentence, while the human references contain multiple sentences. This is probably due to the fact that the release note classes are significantly scarce in the training", "set, and the model is trained to be reluctant to generate release note text. Actually, the numbers of release notes containing the Features and Improvements classes in the training set are only 33.4% and 14.0%, respectively. We used the upsampling technique to reduce the class imbalance problem; however, upsampling cannot inherently increase the number of unique training examples. Therefore, it is necessary to explore ways to augment the training patterns inherently in the future.", "Second, we found that release notes are often difficult to precisely and accurately produce without supplemental information outside the commit messages. In the second example in Table 6, CAS-Multi generates just Fixed date time filter ranges, which is a simple paraphrase of the first commit message. In contrast, the human reference enriches the content by rephrasing it as Not properly working Datetime filters Today, On, Between.", "Moreover, the third commit message (fix excel) is enriched as Excel report: first column not for-matted, which is impossible to generate without external information. In the last example in Table 6, without background knowledge of the repository, it is impossible to detect relationships between the two commit messages and to combine them into a single sentence. 8 Conclusion In this paper, we presented a new large-scale dataset for automatic generation of release notes. The dataset comprised approximately 82k release notes from over 7k repositories on GitHub. We formulated a task to automatically generate release notes by summarizing the commit messages, which can be applied to all software development projects that use English. We confirmed the validity of the proposed classwise extractive-then-abstractive summarization (CEAS) model and the classwise abstractive summarization (CAS) model via experiments to generate less noisy release notes at higher coverage than the baselines. However, there are still gaps in the coverage performance compared to manually generated outputs; it is expected that this could be improved by including additional information such as issues and pull requests with the commit logs. References Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2017. Improving multi-document summarization via text classification. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence , AAAI'17, page 30533059." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "abstain", "other", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "method", "other", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "objective", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "other" ]
[ "Cross-domain sentiment classification aims to predict sentiment polarity on a target domain utilizing a classifier learned from a source domain.", "Most existing adversarial learning methods focus on aligning the global marginal distribution by fooling a domain discriminator, without taking category-specific decision boundaries into consideration, which can lead to the mismatch of category-level features.", "In this work, we propose an adversarial category alignment network (ACAN), which attempts to enhance category consistency between the source domain and the target domain.", "Specifically, we increase the discrepancy of two polarity classifiers to provide diverse views, locating ambiguous features near the decision boundaries.", "Then the generator learns to create better features away from the category boundaries by minimizing this discrepancy.", "Experimental results on benchmark datasets show that the proposed method can achieve state-of-the-art performance and produce more discriminative features.", "Sentiment classification aims to automatically identify the sentiment polarity (i.e., positive or negative) of the textual data.", "It has attracted a surge of attention due to its widespread applications, ranging from movie reviews to product recommendations.", "Recently, deep learning-based methods have been proposed to learn good representations and achieved remarkable success.", "However, the performances of these works are highly dependent on manually annotated training data while annotation process is time-consuming and expensive.", "Thus, cross-domain sentiment classification, which aims to transfer knowledge learned on labeled data from related domains Equal contribution Corresponding author (called source domain) to a new domain (called target domain), becomes a promising direction.", "One key challenge of cross-domain sentiment classification is that the expression of emotional tendency usually varies across domains.", "For instance, considering reviews about two sorts of products: Kitchen and Electronics .", "One set of reviews would contain opinion words such as de-licious or tasty, and the other rubbery or blurry, to name but a few.", "Due to the small intersection of two domain words, it remains a sig-nificant challenge to bridge the two domains divergence effectively.", "Researchers have developed many algorithms for cross-domain sentiment classification in the past.", "Traditional pivot-based works (Blitzer et al., 2007; Yu and Jiang, 2016) attempt to infer the correlation between pivot words, i.e., the domain-shared sentiment words, and non-pivot words, i.e., the domain-specific sentiment words by utilizing multiple pivot prediction tasks.", "However, these methods share a major limitation that manual selection of pivots is required before adaptation.", "Recently, several approaches (Sun et al., 2016; Zellinger et al., 2017) focus on learning domain invariant features whose distribution is similar in source and target domain.", "They attempt to minimize the discrepancy between domain-specific latent feature representations.", "Following this idea, most existing adversarial learning methods (Ganin et al., 2016; Li et al., 2017) reduce feature difference by fooling a domain discriminator.", "Despite the promising results, these adversarial methods suffer from inherent algorithmic weakness.", "Even if the generator perfectly fools the discriminator, it merely aligns the marginal distribution of the two domains and ignores the category-specific decision boundaries.", "As shown in Figure 1 (left), the generator may generate ambiguous or even mismatched features near the decision boundary, thus 2497 hindering the performance of adaptation.", "To address the aforementioned limitations, we propose an adversarial category alignment network (ACAN) which enforces the category-level alignment under a prior condition of global marginal alignment.", "Based on the cluster assumption in (Chapelle et al., 2009), the optimal predictor is constant on high density regions.", "Thus, we can utilize two classifiers to provide diverse views to detect points near the decision boundaries and train the generator to create more discriminative features into high-density region.", "Specifically, we first maximize the discrepancy of the outputs of two classifiers to locate the inconsistent polarity prediction points.", "Then the generator is trained to avoid these points in the feature space by minimizing the discrepancy.", "In such an adversarial manner, the ambiguous points are kept away from the decision boundaries and correctly distinguished, as shown in Figure 1 (right).", "We evaluate our method on the Amazon reviews benchmark dataset which contains data collected from four domains.", "ACAN is able to achieve the state-of-the-art results.", "We also provide analyses to demonstrate that our approach can generate more discriminative features than the approaches only aligning global marginal distribution (Zhuang et al., 2015).", "Sentiment Classification: Deep learning based models have achieved great success on sentiment classification (Zhang et al., 2011).", "These models usually contain one embedding layer which maps each word to a dense vector, and different network architectures then process combined word vectors to generate a representation for classification.", "According to diverse network architectures, four categories are divided including Convolutional Neural Networks (CNNs) (Kalchbrenner et al., 2014; Kim, 2014), Recurrent Neural Networks (RNNs) (Yang et al., 2016; Zhou et al., 2016b), Recursive Neural Networks (RecNNs) (Socher et al., 2013) and other neural networks (Iyyer et al., 2015).", "Domain Adaption: The fundamental challenge to solve the domain adaptation lies here is that data from the source domain and target domain have different distributions.", "To alleviate this difference, there are many pivot-based methods (Blitzer et al., 2007; He et al., 2011; Gouws et al., 2012; Yu and Jiang, 2016; Ziser and Reichart, Figure 1: Left: marginal distribution alignment by minimizing the distance between two domains can generate ambiguous feature near the decision boundary. Right: two different classifiers locate ambiguous features by considering decision boundary to make category-level alignment. 2018) which try to align domain-specific opinion (non-pivot) words through domain-shared opinion (pivot) words as the expression of emotional tendency usually varies across domains, which is a major reason of the domain difference.", "However, selecting pivot words for these methods first is very tedious, and the pivot words they find may not be accurate.", "Apart from pivot-based methods, denoising auto-encoders (Glorot et al., 2011; Chen et al., 2012; Yang and Eisenstein, 2014) have been extensively explored to learn transferable features during domain adaption by reconstructing noise input.", "Despite their promising results, they are based on discrete representation.", "Recently, some adversarial learning methods (Ganin et al., 2016; Li et al., 2017, 2018) propose to reduce this difference by minimizing the distance between feature distributions.", "But these methods solely focus on aligning the global marginal distribution by fooling a domain discriminator, which can lead to the mismatch of category-level features.", "To solve this issue, we propose to further align the category-level distribution by taking the decision boundary into consideration.", "Some recent works with class-level alignment have been explored in computer vision applications (Saito et al., 2017, 2018).", "Semi-supervised learning: Considering the target samples as unlabeled data, our work is somehow related to semi-supervised learning (SSL).", "SSL has several critical assumptions, such as cluster assumption that the optimal predictor is constant or smooth on connected high density regions (Chapelle et al., 2009), and manifolds assumption that support set data lies on low-dimensional manifolds (Chapelle et al., 2009; Luo et al., 2017).", "Our work takes these assumptions to develop the approach.", "We are given two domains D s and D t , denot-ing", "denot-ing the source domain and the target domain respectively.", "D s = (cid:2) x ( s ) i , y ( s ) i (cid:3) n s i =1 are n s labeled source domain examples, where x ( s ) i means a sentence and y ( s ) i is the corresponding polarity label.", "D t = (cid:2) x ( t ) i (cid:3) n t i =1 are n t unlabeled target domain examples.", "In our proposed method, we denote G as a feature encoder that extracts features from the input sentence.", "Then two classifiers F 1 and F 2 map these features to soft probabilistic outputs p 1 ( y | x ) and p 2 ( y | x ) respectively.", "The goal is to train a model to classify the target examples correctly with the aid of source labeled data and target unlabeled data.", "To achieve this, we first train G , F 1 and F 2 to obtain global marginal alignment.", "This step reduces the distance between two domains but generates ambiguous target features near the decision boundary.", "Thus, F 1 and F 2 are adjusted to detect them by maximizing prediction discrepancy.", "After that, G is trained to generate better features avoiding appearing near the decision boundary.", "The method also regularizes G by taking the target data samples into consideration.", "In this way, we can achieve the category alignment.", "The proposed Adversarial Category Alignment Network (ACAN) is illustrated in Figure", "2. The detailed training progress is described in Appendix D. 3.2 Marginal Distribution Alignment To solve the domain adaption problem, we first consider minimize the classification error on the source labeled data for two classifiers: L cls = 1 n s n s (cid:4) i =1 K (cid:4) j =1 y ( s ) i ( j ) log (cid:5) y ( s ) 1 i ( j ) 1 n s n s (cid:4) i =1 K (cid:4) j =1 y ( s ) i ( j ) log (cid:5) y ( s ) 2 i ( j ) (cid:5) y 1 i = F 1 ( G ( x ( s ) i )) (cid:5) y 2 i = F 2 ( G ( x ( s ) i )) (1) where K denotes the number of different polarities.", "In addition, similar to (Zhuang et al., 2015), our method tries to explicitly minimize the distance between the embedding features from both the source and the target domains.", "We adopt the Kullback Leibler (KL) to estimate the distribution divergence: L kl = n (cid:4) i =1 g s ( i ) log g s ( i ) g t ( i ) + n (cid:4) i =1 g t ( i ) log g t ( i ) g s ( i ) g (cid:3) s = 1 n s (cid:4) n s i =1 G ( x ( s ) i ) , g s = g (cid:3) s || g (cid:3) s || 1 g (cid:3) t = 1 n t (cid:4) n t i =1 G ( x ( t ) i ) , g t = g (cid:3) t || g (cid:3) t || 1 (2) where g s , g t RD , || || 1 denotes L1 normalization.", "In this way, the latent network representations of two domains are encouraged to be similar.", "In other words, the marginal distribution is forced to be aligned.", "Diverse Views : Considering the marginal distribution alignment, there could be some ambiguous features near the decision boundary, which are easy to be incorrectly categorized into a specific class.", "If we alter the boundary of classifier F 1 and F 2 , the samples closer to the decision boundary would have larger change.", "To explore these samples, we use F 1 and F 2 to provide diverse guidance.", "We define a discrepancy between probabilistic outputs of the two classifiers p 1 ( y | x ) and p 2 ( y | x ) .", "The formula is: L dis = E x D t [ d ( p 1 ( y | x ) , p 2 ( y | x ))] (3) where d ( p 1 ( y | x ) , p 2 ( y | x )) defines the average ab-solute difference for K classes, which is: d ( p 1 ( y | x ) , p 2 ( y | x ))= 1 KK (cid:4) i =1 | p 1 i ( y | x ) p 2 i ( y | x ) | (4) Specifically, we first fix the generator G and train the classifiers F 1 , F 2 to detect points near the decision boundary by maximizing their discrepancy.", "The objective is as follows: max F 1 ,F 2 E x D t [ 1 KK (cid:4) i =1 | p 1 i ( y | x ) p 2 i ( y | x ) | ] (5) Then, this discrepancy is minimized by optimizing G in order to keep these points away from the decision boundary and categorized into correct classes.", "The objective is as follows: min GE x D t [ 1 KK (cid:4) i =1 | p 1 i ( y | x ) p 2 i ( y | x ) | ] (6) This adversarial step is repeated in the whole training process so that we can continuously locate non-discriminative points and classify them correctly, forcing the model to achieve category-level alignment on two domains.", "The whole training procedure can be divided into three steps.", "In the first step, we consider both minimizing the classification error and marginal distribution discrepancy to achieve global marginal alignment.", "The loss function of this step can be written as: L 1 = L cls + 1 L kl (7) In the second step, we consider increasing the difference of two classifiers F 1 and F 2 for the fixed G, thus the ambiguous features can be located by the diverse views.", "The loss function is defined as below: L 2 = L cls 2 L dis (8) L cls is used here to ensure the stability of the training process.", "2 is a hyper-parameter controlling the range of classifiers.", "In the third step, the difference of two classifiers should be reduced for the fixed F 1 and F 2 : L 3 = L cls + 3 L dis (9) L cls and 3 used here are similar to the second step.", "We repeat this step n times to balance the generator and two classifiers.", "After each step, the corresponding part of the network parameters will be updated.", "Algorithm 1 describes the overall training procedure.", "To further enhance the feature generator, we introduce to regularize G with the information of unlabeled target data.", "Generally, the mapping of G ( ) can been seen a low-dimensional feature of the input.", "According to the manifolds assumption (Chapelle et al., 2009), this feature space is 2500 expected to be low-dimensional manifold and linearly separable.", "Inspired by (Luo et al., 2017), we consider the connections between data points to regularize G ( ) in the feature space.", "Specifically, the regularizer is formulated as follows: R ( G ) = (cid:4) x D t l G ( x i , x j ) (10) here l G is to approximate the semantic similarity of two feature embeddings.", "Possible options include triplet loss (Wang et al., 2016), Laplacian eigenmaps (Belkin and Niyogi, 2003) etc.", "After exploring many tricks, we find below is optimal which is also used by (Luo et al., 2017): l G = (cid:8) d 2 i,j s ij =1 max(0 , m d i,j ) 2 s ij =0 (11) where d i,j is L2 distance between data points, m is a predefined distance, and s ij indicates whether x i and x j belong to the same class or not.", "Eq.", "10 serves as a regularization that encourages the output of R ( G ) to be distinguishable among classes.", "It is applied on target data and integrated in the framework in the third training step, weighted by 4 .", "During the training, the underlying label of x i is estimated by taking the maximum posterior probability of the two classifiers.", "In this subsection, we provide a theoretical analysis of our method, which is inspired by the theory of domain adaptation in (Ben-David et al., 2010).", "For each domain, there is a labeling function on inputs X , defined as f : X [0 , 1] .", "Thus, the source domain is denoted as (cid:4) D s , f s (cid:5) and the target domain as (cid:4) D t , f t (cid:5) .", "We define a hypothesis function h : X [0 , 1] and a disagreement function: (cid:3) ( h 1 , h 2 ) = E [ | h 1 ( x ) h 2 ( x ) | ] (12) Then the expected error on the source samples (cid:3) s ( h, f ) of h is defined as: (cid:3) s ( h ) = (cid:3) s ( h, f s ) = E x D s [ | h ( x ) f s ( x ) | ] (13) Also for the target domain, we have (cid:3) t ( h ) = (cid:3) s ( h, f t ) = E x D t [ | h ( x ) f t ( x ) | ] (14) As is introduced in (Ben-David et al., 2010), the probabilistic bound of the error of hypothesis h on the target domain (cid:3) t ( h ) is defined as: h H , (cid:3) t ( h ) (cid:3) s ( h ) + 12 d H H ( D s , D t ) + (15) where the expected error (cid:3) t ( h ) is bounded by three terms: (1) the expected error on the source examples (cid:3) s ( h ) ; (2) the divergence between the distributions D s and D t ; (3) the combined error of the ideal joint hypothesis .", "First, the training algorithm is easy to minimize (cid:3) s ( h ) with source label information.", "Second, is expected to be negligibly small and can be usually disregarded.", "Therefore, the second term d H H ( D s , D t ) is important quantitatively in computing the target error.", "Regarding d H H ( D s , D t ) , we have d H H ( D s , D t ) = 2 sup h,h (cid:3) H | (cid:3) s ( h, h (cid:3) ) (cid:3) t ( h, h (cid:3) ) | =2 sup h,h (cid:3) H | E x D s [ | h ( x ) h (cid:3) ( x ) | ] E x D t [ | h ( x ) h (cid:3) ( x ) | ] | (16) where h and h (cid:3) are two sets of hypotheses in H .", "As we have sufficient labeled source examples to train, h and h (cid:3) can have consistent and correct predictions on the source domain data.", "Thus, d H H ( D s , D t ) is approximately calculated as E x D t [ | h ( x ) h (cid:3) ( x ) | ] .", "In our model, the hypothesis h can be decomposed into the feature extractor G and the classifier F using the notation .", "Thus d H H ( D s , D t ) can be formulated as: sup F 1 ,F 2 E x D t [ | F 1 G ( x ) F 2 G ( x ) | ] (17) For fixed G , sup can be replaced by max.", "Therefore, F 1 and F 2 are trained to maximize the discrepancy of their outputs and we expect G to minimize this discrepancy.", "So we obtain min G max F 1 ,F 2 E x D t [ | F 1 G ( x ) F 2 G ( x ) | ] (18) The maximization of F 1 and F 2 is to provide diverse views, to find ambiguous points near the decision boundary, and the minimization of G is to keep these points away from the decision boundary.", "To optimize Eq.", "18, we assist the model to capture the whole feature space on the target domain better and achieve lower errors.", "We evaluate the proposed ACAN on the Amazon reviews benchmark datasets collected by Blitzer (2007).", "It contains reviews from four different domains: Books (B), DVDs (D), Electronics (E), Kitchen appliances (K).", "There are 1000 2501 Source Target Previous Work Models ACAN Models SVM AuxNN DANN PBLM DAS Baseline ACAN-KL ACAN-KM ACAN D B 75.20 80.80 81.70 82.50 82.05 81.30 83.00 82.85 82.35 E B 68.85 78.00 78.55 71.40 80.00 79.50 80.30 79.80 79.75 K B 70.00 77.85 79.25 74.20 80.05 79.05 79.10 79.60 80.80 B D 77.15 81.75 82.30 84.20 82.75 82.50 83.35 83.25 83.45 E D 69.50 80.65 79.70 75.00 80.15 79.25 81.00 80.80 81.75 K D 71.40 78.90 80.45 79.80 81.40 79.10 80.15 82.25 82.10 B E 72.15 76.40 77.60 77.60 81.15 77.80 78.80 80.85 81.20 D E 71.65 77.55 79.70 79.60 81.55 78.00 81.30 82.75 82.80 K E 79.75 84.05 86.65 87.10 85.80 84.35 84.70 86.20 86.60 B K 73.50 78.10 76.10 82.50 82.25 78.00 77.30 81.00 83.05 D K 72.00 80.05 77.35 83.20 81.50 74.65 73.05 77.65 78.60 E K 82.80 84.15 83.95 87.80 84.85 81.05 83.70 83.70 83.35 Average 73.66 79.85 80.29 80.40 81.96 79.55 80.48 81.78 82.15 Table 1: Accuracy of adaptation on Amazon benchmark.", "positive and 1000 negative reviews for each domain, as well as a few thousand unlabeled examples, of which the positive and negative reviews are balanced.", "Following the convention of previous works (Zhou et al., 2016a; Ziser and Re-ichart, 2018; He et al., 2018), we construct 12 cross-domain sentiment classification tasks.", "In our transferring task, we employ a 5-fold cross-validation protocol, that is, in each fold, 1600 balanced samples are randomly selected from the labeled data for training and the rest 400 for validation.", "The results we report are the averaged performance of each model across these five folds.", "In our implementation, the feature encoder G consists of three parts including a 300-dimensional word embedding layer using GloVe (Pennington et al., 2014), a one-layer CNN with ReLU activation function adopted in (Yu and Jiang, 2016; He et al., 2018) and a max-over-time pooling through which final sentence representation is obtained.", "Specifically, the convolution filter and the window size of this one-layer CNN are 300 and 3 separately.", "Similarly, the classifier F 1 and F 2 can be decomposed into one dropout layer and one fully connected output layer.", "For the fully connected layer, we constrain the l2-norm of the weight vector, setting its max norm to", "3. For the implementation of generator regularizer, we apply doubly stochastic sampling approximation due to the computational complexity.", "The margin m is set to 1 in this procedure.", "During training period, 1 , 2 , 3 , 4 , and n are set to 5.0, 0.1, 0.1, 1.5,", "2. Similar to (He et al., 2018), we parametrize 4 as a dynamic weight exp[ 5(1 t max epochs ) 2 ] 4 .", "This is to minimize the effort of the regularizer as the predictor is not good at the beginning of training.", "We train 30 epochs for all our experiments with batch-size 50 and dropout rate 0.5.", "RMSProp (Tieleman and Hinton, 2012) optimizer with learning rate set to 0.0001 is used for all experiments.", "We consider the following approaches for comparisons", "comparisons (The URLs of previous methods code and data we use are in Appendix A): SVM (Fan et al., 2008): This is a non-domain-adaptation method, which trains a linear SVM on the raw bag-of-words representation of the labeled source domain.", "AuxNN (Yu and Jiang, 2016): This method uses two auxiliary tasks to learn sentence embeddings that works well across two domains.", "For fair comparison, we replace the neural model in this work with our CNN encoder.", "DANN (Ganin et al., 2016): This method exploits a domain classifier to minimize the discrepancy between two domains via adversarial training manner.", "we replace its encoder with our CNN-based encoder.", "PBLM (Ziser and Reichart, 2018): This is a representation learning model that exploits the structure of the input text.", "Specifically, we choose CNN as the task classifier.", "DAS (He et al., 2018): This method employs two regularizations: entropy minimization and self-ensemble bootstrapping to refine the classifier while minimizing the domain divergence.", "Baseline : Our baseline model is a non-adaptive CNN similar to (Kim, 2014), trained without using any target domain information, which is a variant of our model by setting 1 , 2 , 3 , 4 to zeros.", "ACAN-KL : ACAN-KL is a variant of our model which minimizes the distance between the features of two domains by minimizing the KL divergence.", "(set 2 = 3 = 4 = 0)", "ACAN-KM : ACAN-KM introduces the adversarial category mapping based on ACAN-KL without the regularizer.", "(set 4 = 0).", "ACAN : It is our full model.", "Table 1 shows the classification accuracy of different methods on the Amazon reviews, and we can see that the proposed ACAN outperforms all other methods generally.", "It is obvious to see that SVM performs not well in domain transferring task, beaten by Baseline .", "We can notice that exploring the structure of the input text ( AuxNN and PBLM ) brings some improvements over Baseline .", "However, these two pivot-based methods present relatively lower ability than DAS , which jointly minimizes global feature divergence and refines classifier.", "Compared to DAS , our proposed ACAN can improve 0.19% on the average accuracy.", "This can be explained by that we deal with the relationship between target features distribution and classifier more precisely.", "Finally, we conduct experiments on the variants of the ACAN .", "It is clear that the performances of Baseline , ACAN-KL , ACAN-KM and ACAN present a growing trend in most cases.", "Compared with ACAN-KL , ACAN achieves large gain from 80.48% to 82.15%, showing the effectiveness of category-level alignment.", "To better understand the results of different models, we conduct experiments on task B E. For each sentiment polarity, we first extract the most related CNN filters according to the learned weights of the output layer in classifier F 1 .", "Since all listed models use a window size of 3, the outputs of CNN with the highest activation values correspond to the most useful trigrams.", "As shown in Table 2, we identify the top trigrams from 10 most related CNN filters on the target domain.", "It is obvious that Baseline and ACAN-KL are more likely to capture the domain-independent words, such as pointless, disap-pointing and great.", "Thus, the performance of these two models drops much when applied to the target domain.", "Besides, DAS can capture more words of the target domain, but it is limited to nouns with less representativeness, such as receiver, product and etc.", "Compared to them, ACAN is able to extract the domain-specific words like flawlessly and rechargeable.", "These results are consistent with the accuracy of each model's predictions.", "We also conduct experiments on the tasks B K and K D .", "Due to the space limitations, the results are presented in Appendix B. 4.6 Visualization of features For more intuitive understanding of the differences between the global marginal alignment and category alignment, we further perform a visualization of the feature representations of the ACAN-KL and ACAN model for the training data in the source domain and the testing data in the target domain for the K E task.", "As can be seen in Figure 3, global marginal alignment causes ambiguous 2503 Method Negative Sentiment Positive Sentiment Baseline audio-was-distorted , is-absolutely-pointless, *-very-disappointing, waste-of-money, was-point-most, an-unsupported-config, an-extremely-disappointed, author-album-etc , cure-overnight-headphones , aa-rechargable-batteries wep-encryption-detailed , totally-wireless-headset ,", "features locating between two clusters while category alignment effectively projects these points into clusters, thus leading a more robust classification result.", "We also conduct experiments on the tasks B E and B K .", "Due to the space limitations, the results are presented in Appendix C. 4.7 Model Analysis In this part, we provide analysis to our proposed ACAN variants.", "In Figure 4, we show the comparison between Baseline and ACAN under a setting that some labeled target data are randomly selected and mixed with training data.", "Here, we present results on two transferring tasks while a similar tendency can be observed in other pairs.", "With an increase in the number of randomly selected labeled target data, the difference between the two models gradually decreases and ACAN also progressively obtains better results.", "These trends indicate that our ACAN is more effective Figure 5: The training process of four ACAN model variants on the task K E .", "with no or little-labeled target data and can further benefit from more labeled target data.", "In Figure 5, we can easily observe that ACAN continuously shows better results during the whole training process among four settings.", "After some epochs, ACAN-KL starts presenting lower testing accuracy than Baseline .", "One possible reason is that those categories which are initially well aligned between the source and target may be incorrectly mapped because of ignoring category-level feature distribution.", "This observation can prove our motivation in some degree.", "In this paper, we propose a novel approach, which utilizes diverse view classifiers to achieve category-level alignment for sentiment analysis.", "Unlike previous works, we take the decision boundary into consideration, thus classifying the 2504 target samples correctly into the corresponding category.", "Experiments show the proposed ACAN significantly outperforms state-of-the-art methods on the Amazon benchmark.", "In future we would like to adapt our method to other domain adaptation tasks and consider more effective alternatives for the generator regularizer.", "This work was supported in part by the National Natural Science Foundation of China under Grant 61602197, and in part by the Fundamental Research Funds for the Central Universities, HUST: 2016YXMS085." ]
[ "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "other", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "other" ]
[ "Neural named entity recognition (NER) models may easily encounter the over-confidence issue, which degrades the performance and calibration.", "Inspired by label smoothing and driven by the ambiguity of boundary annotation in NER engineering, we propose boundary smoothing as a regularization technique for span-based neural NER models.", "It re-assigns entity probabilities from annotated spans to the surrounding ones.", "Built on a simple but strong baseline, our model achieves results better than or competitive with previous state-of-the-art systems on eight well-known NER benchmarks.", "1 Further empirical analysis suggests that boundary smoothing effectively mitigates over-confidence, improves model calibration, and brings flatter neural minima and more smoothed loss landscapes.", "Named entity recognition (NER) is one of the fundamental natural language processing (NLP) tasks with extensive investigations.", "As a common setting, an entity is regarded as correctly recognized only if its type and two boundaries exactly match the ground truth.", "The annotation of boundaries is more ambiguous, error-prone, and raises more inconsistencies than entity types.", "For example, the CoNLL 2003 task contains four entity types (i.e., person, location, organization, miscellaneous), which are easy to distinguish between.", "However, the boundaries of a entity mention could be ambiguous, because of the boundary words (e.g., articles or modi-fiers).", "Considerable efforts are required to specify the gold standard practice case by case.", "Table 1 presents some examples from CoNLL 2003 AnCorresponding author.", "notation Guidelines.", "2 In addition, some studies have also reported that incorrect boundary is a ma-jor source of entity recognition error (Wang et al., 2019; Eberts and Ulges, 2020).", "Recently, span-based models have gained much popularity in NER studies, and achieved state-of-the-art (SOTA) results (Eberts and Ulges, 2020; Yu et al., 2020; Li et al., 2021).", "This approach typically enumerates all candidate spans and classi-fies them into entity types (including a non-entity type); the annotated spans are scarce and assigned with full probability to be an entity, whereas all other spans are assigned with zero probability.", "This creates noticeable sharpness between the classification targets of adjacent spans, and may thus plague the trainability of neural networks.", "In addition, empirical evidence shows that these models easily encounter the over-confidence issue, i.e., the confidence of a predicted entity is much higher than its correctness probability.", "This is a manifestation of miscalibration (Guo et al., 2017).", "Inspired by label smoothing (Szegedy et al., 2016; Mller et al., 2019), we propose boundary smoothing as a regularization technique for span-based neural NER models.", "By explicitly reallocating entity probabilities from annotated spans 2 https://www-nlpir.nist.gov/related_ projects/muc/proceedings/ne_task.html .", "to the surrounding ones, boundary smoothing can effectively mitigate over-confidence, and result in", "consistently better performance.", "Specifically, our baseline employs the contextualized embeddings from a pretrained Transformer of a base size (768 hidden size, 12 layers), and the biaffine decoder proposed by Yu et al. (2020).", "With boundary smoothing, our model outperforms previous SOTA on four English NER datasets (CoNLL 2003, OntoNotes 5, ACE 2004 and ACE 2005) and two Chinese datasets (Weibo NER and Resume NER), and achieves competitive results on other two Chinese datasets (OntoNotes 4 and MSRA).", "Such extensive experiments support the effectiveness and robustness of our proposed technique.", "In addition, we show that boundary smoothing can help the trained NER models to preserve calibration, such that the produced confidences can better represent the precision rate of a predicted entity.", "This corresponds to the effect of label smoothing on the image classification task (Mller et al., 2019).", "Further, visualization results qualitatively suggest that boundary smoothing can lead to flatter solutions and more smoothed loss landscapes, which are typically associated with better generalization and trainability (Hochreiter and Schmidhu-ber, 1997; Li et al., 2018).", "Named Entity Recognition The mainstream NER systems are designed to recognize flat entities and based on a sequence tagging framework.", "Col-lobert et al. (2011) introduced the linear-chain conditional random field (CRF) into neural network-based sequence tagging models, which can explicitly encode the transition likelihoods between adjacent tags.", "Many researchers followed this work, and employed LSTM as the encoder.", "In addition, character-level representations are typically used for English tasks (Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016), whereas lexicon information is helpful for Chinese NER (Zhang and Yang, 2018; Ma et al., 2020; Li et al., 2020a).", "Nested NER allows a token to belong to multiple entities, which conflicts with the plain sequence tagging framework.", "Ju et al. (2018) proposed to use stacked LSTM-CRFs to predict from inner to outer entities.", "Strakov et al. (2019) concatenated the BILOU tags for each token inside the nested entities, which allows the LSTM-CRF to work as for flat entities.", "Li et al. (2020b) reformulated nested NER as a machine reading comprehension task.", "Shen et al. (2021) proposed to recognize nested entities by the two-stage object detection method widely used in computer vision.", "Recent years, a body of literature emerged on span-based models, which were compatible with both flat and nested entities, and achieved SOTA performance (Eberts and Ulges, 2020; Yu et al., 2020; Li et al., 2021).", "These models typically enumerate all possible candidate text spans and then classify each span into entity types.", "In this work, the biaffine model (Yu et al., 2020) is chosen and re-implemented with slight modifications as our baseline, because of its high performance and compatibility with boundary smoothing.", "In addition, pretrained language models, also known as contextualized embeddings, were also widely introduced to NER models, and significantly boosted the model performance (Peters et al., 2018; Devlin et al., 2019).", "They are used in our baseline by default.", "Label Smoothing Szegedy et al. (2016) proposed the label smoothing as a regularization technique to improve the accuracy of the Inception networks on ImageNet.", "By explicitly assigning a small probability to non-ground-truth labels, label smoothing can prevent the models from becoming too confident about the predictions, and thus improve generalization.", "It turned out to be a useful alternative to the standard cross entropy loss, and has been widely adopted to fight against the over-confidence (Zoph et al., 2018; Chorowski and Jaitly, 2017; Vaswani et al., 2017), improve the model calibration (Mller et al., 2019), and de-noise incorrect labels (Lukasik et al., 2020).", "Our proposed boundary smoothing applies the smoothing technique to entity boundaries, rather than labels.", "This is driven by the observation that entity boundaries are more ambiguous and inconsistent to annotate in NER engineering.", "3 To the best of our knowledge, this study is the first that focuses on the effect of smoothing regularization on NER models.", "3 We note that Shen et al. (2021) also allocate a weight to the non-entity but partially matched spans; however, boundary smoothing additionally regularizes the weight of entity spans, which is intuitively crucial for mitigating over-confidence.", "A neural network-based NER model typically encodes the input tokens to a sequence of representations x = x 1 , x 2 , . . . , x T of length T , and then decodes these representations to task outputs, i.e., a list of entities specified by types and boundaries.", "We follow Yu et al. (2020) and use the biaffine decoder.", "Specifically, the representations x are separately affined by two feedforward networks, resulting in two representations h s RT d and h e RT d , which correspond to the start and end positions of spans.", "For c entity types (a non-entity type included), given a span starting at the i -th token and ending at the j -th token, a scoring vector r ij R c can be computed as: r ij = ( h si ) T Uh ej + W ( h si h ej w j i ) + b, (1) where w j i R d w is the ( j i ) -th width embedding from a dedicated learnable matrix; U R d c d , W R c (2 d + d w ) and b R c are learnable parameters.", "r ij is then fed into a softmax layer: y ij = softmax( r ij ) , (2) which yields the predicted probabilities over all entity types.", "The ground truth y ij R c is an one-hot encoded vector, with value being 1 if the index corresponds with the annotated entity type, and 0 otherwise.", "Thus, the model can be optimized by the standard cross entropy loss for all candidate spans: LCE = (cid:88) 0 i j<T y T ij log( y ij ) .", "In the inference time, the spans predicted to be non-entity are first discarded, and the remaining ones are ranked by their predictive confidences.", "Spans with lower confidences would also be discarded if they clash with the boundaries of spans with higher confidences.", "Refer to Yu et al. (2020) for more details.", "Figure 1a visualizes the ground truth y ij for an example sentence with two annotated entities.", "The valid candidate spans cover the upper triangular area of the matrix.", "In existing NER models, the annotated boundaries are considered to be absolutely reliable.", "Hence, each annotated span is assigned Start End 0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9", "(b) Smoothed boundary Figure 1: An example of hard and smoothed boundaries.", "with the full probability to be an entity, whereas all unannotated spans are assigned with zero probability.", "We refer to this probability allocation as hard boundary , which is, however, probably not the best choice.", "As aforementioned, the entity boundaries may be ambiguous and inconsistent, so the spans surrounding an annotated one deserve a small probability to be an entity.", "Figure 1b visualizes y ij , the boundary smoothing version of y ij .", "Specifically, given an annotated entity, a portion of probability (cid:15) is assigned to its surrounding spans, and the remaining probability 1 (cid:15) is assigned to the originally annotated span.", "With smoothing size D , all the spans with Manhattan distance d ( d D ) to the annotated entity equally share probability (cid:15)/D .", "After 7098 such entity probability re-allocation, any remaining probability of a span is assigned to be non-entity.", "We refer to this as smoothed boundary .", "Thus, the biaffine model can be optimized by the boundary-smoothing regularized cross entropy loss: LBS = (cid:88) 0 i j<T y T ij log( y ij ) .", "Empirically, the positive samples (i.e., ground-truth entities) are sparsely distributed over the candidate spans.", "For example, the CoNLL 2003 dataset has about 35 thousand entities, which represent only 0.93% in the 3.78 million candidate spans.", "By explicitly assigning probability to surrounding spans, boundary smoothing prevents the model from concentrating all probability mass on the scarce positive samples.", "This intuitively helps alleviate over-confidence.", "In addition, hard boundary presents noticeable sharpness between the classification targets of positive spans and surrounding ones, although they share similar contextualized representations.", "Smoothed boundary provides more continuous targets across spans, which are conceptually more compatible with the inductive bias of neural networks that prefers continuous solutions (Hornik et al., 1989).", "Datasets We use four English NER datasets: CoNLL 2003 (Tjong Kim Sang and Veenstra, 1999), OntoNotes 5 4 , ACE 2004 5 and ACE 2005 6 ; and four Chinese NER datasets: OntoNotes 4 7 , MSRA (Levow, 2006), Weibo NER (Peng and Dredze, 2015) and Resume NER (Zhang and Yang, 2018).", "Among them, ACE 2004 and ACE 2005 are nested NER tasks, and the others are flat tasks.", "Hyperparameters For English corpora, we use RoBERTa (Liu et al., 2019) followed by a BiLSTM layer to produce the contextualized representations.", "For Chinese, we choose the BERT pretrained with whole word masking (Cui et al., 2019).", "4 https://catalog.ldc.upenn.edu/ LDC2013T19 ; Data splits follow Pradhan et al. (2013).", "5 https://catalog.ldc.upenn.edu/ LDC2005T09 ; Data splits follow Lu and Roth (2015).", "6 https://catalog.ldc.upenn.edu/ LDC2006T06 ; Data splits follow Lu and Roth (2015).", "7 https://catalog.ldc.upenn.edu/ LDC2011T03 ; Data splits follow Che et al. (2013).", "The BiLSTM has one layer and 200 hidden size with dropout rate of 0.5.", "The biaffine decoder follows Yu et al. (2020), with the affine layers of hidden size 150 and dropout rate 0.2.", "We additionally introduce a span width embedding of size 25.", "Note that the pretrained language models are all of the base size (768 hidden size, 12 layers), and the model is free of any additional auxiliary embeddings; this configuration is relatively simple, compared with those in related work.", "The boundary smoothing parameter (cid:15) is selected in { 0 .", "1 , 0 .", "2 , 0 .", "3 } ; smoothing size D is selected in { 1 , 2 } .", "All the models are trained by the AdamW optimizer (Loshchilov and Hutter, 2018) with a gradient clipping at L2-norm of 5.0 (Pascanu et al., 2013).", "The models are trained for 50 epochs with batch size of 48.", "The learning rate is searched between 1e-3 and 3e-3 on the randomly initialized weights, and between 8e-6 and 3e-5 on the pretrained weights; a scheduler of linear warmup in the first 20% steps followed by linear decay is applied.", "Evaluation A predicted entity is considered correct if its type and boundaries exactly match the ground truth.", "Hyperparameters are tuned according to the F 1 scores on the development set, and the evaluation metrics (precision, recall, F 1 score) are reported on the testing set.", "Table 2 presents the evaluation results on four English datasets, in which CoNLL 2003 and OntoNotes 5 are flat NER corpora, whereas ACE 2004 and ACE 2005 contains a high proportion of nested entities.", "Compared with previous SOTA systems, our simple baseline (RoBERTa-base + BiLSTM + Biaffine) achieves on-par or slightly inferior performance.", "Provided the strong baseline, our experiments show that boundary smoothing can effectively and consistently boost the F 1 score of entity recognition across different datasets.", "With the help of boundary smoothing, our model outperforms the best of the previous SOTA systems by a magnitude from 0.2 to 0.5 percentages.", "Table 3 presents the results on four Chinese datasets, which are all flat NER corpora.", "Again, boundary smoothing consistently improves model performance against the baseline (BERT-base-wwm + BiLSTM + Biaffine) across all datasets.", "In addition, our model outperforms previous SOTA 7099 CoNLL 2003 Model Prec.", "by 2.16 and 0.55 percentages on Weibo and Resume NER datasets, and achieves comparable F 1 scores on OntoNotes 4 and MSRA.", "Note that almost all previous systems solve these tasks within a sequence tagging framework; this work adds to the literature by introducing a span-based approach and establishing SOTA results on multiple Chinese NER benchmarks.", "In five out of the above eight datasets, integrating boundary smoothing significantly increases the precision rate with a slight drop in the recall, resulting in a better overall F 1 score.", "This is consistent with our expectation, because boundary smoothing OntoNotes 4 Model Prec.", "discourages over-confidence when recognizing entities, which implicitly leads the model to establish a more critical threshold to admit entities.", "Given the use of well pretrained language models, most of the performance gains are relatively marginal.", "However, boundary smoothing can work effectively and consistently for different languages and datasets.", "In addition, it is easy to implement and integrate into any span-based neural NER models, with almost no side effects.", "flat/nested and English/Chinese datasets), to evaluate the effects of boundary smoothing parameter (cid:15) and D , as well as other components of our NER system.", "Boundary Smoothing Parameters We train the model with (cid:15) in { 0 .", "1 , 0 .", "2 , 0 .", "3 } and D in { 1 , 2 } ; the corresponding results are reported in Table 4. Most combinations of the two hyperparameters can achieve higher F 1 scores than the baseline, which suggests the robustness of boundary smoothing.", "On the other hand, the best smoothing parameters are different across datasets, which are probably related to the languages/domains of the text, the entity types, and the annotation scheme (e.g., flat or nested NER).", "Hence, if the best performance is desired for a new NER task in practice, hyperpa-rameter tuning would be necessary.", "Label Smoothing We replace boundary smoothing with label smoothing in the span classifier.", "Label smoothing cannot improve, or may even impair the performance of the model, compared with the baseline (see Table 4).", "As aforementioned, we hypothesize that the semantic differences between the typical entity types are quite clear, so it is ineffective to smooth between them.", "Pretrained Language Models We test if the performance gain by boundary smoothing is robust to different baselines.", "For English datasets, we use BERT (Devlin et al., 2019) of the base and large sizes, and RoBERTa (Liu et al., 2019) of the large size (1024 hidden size, 24 layers).", "It shows that boundary smoothing can consistently increase the F 1 scores by 0.10.2 and 0.40.6 percentages for CoNLL 2003 and ACE 2005, respectively.", "For Chinese, we use MacBERT (Cui et al., CoNLL ACE Resume 2003 2005 NER Baseline 93.48 86.56 96.34 + BS 93.65 87.15 96.66 Baseline w/ BERT-base 91.84 84.51 + BS 92.05 84.95 Baseline w/ BERT-large 92.92 85.83 + BS 93.08 86.33 Baseline w/ RoBERTa-large 93.66 87.82 + BS 93.77 88.02 Baseline w/ MacBERT-base 96.41 + BS 96.75 Baseline w/ MacBERT-large 96.46 + BS 96.75 Baseline w/o BiLSTM 93.13 86.22 96.24 + BS 93.30 86.58 96.56 Table 5: Ablation studies of model structure. F 1 scores are reported. BS means boundary smoothing. 2020) of the base and large sizes, and boundary smoothing still performs positively and consistently, with an improvement of 0.20.3 percentage F 1 scores on Resume NER (see Table 5).", "It is noteworthy that boundary smoothing achieves performance gains roughly comparable to the gains by switching the pretrained language model from the base size to the large size.", "This suggests that the effect of boundary smoothing is quite considerable, although the performance improvements seem small in magnitude.", "In addition, our results show that RoBERTa substantially outperforms the original BERT on English NER.", "This is probably because that (1) RoBERTa is trained on much more data; and (2) RoBERTa focuses on the token-level task (i.e., masked language modeling) by removing the sequence-level objective (i.e., next sentence predic-tion), hence, it is particularly suitable for within-sequence downstream tasks, e.g., NER.", "This is also the reason why we choose RoBERTa for our baseline.", "BiLSTM Layer We remove the BiLSTM layer, directly feeding the output of pretrained language model into the biaffine decoder.", "The results show that this does not change the positive effect of boundary smoothing (see Table 5).", "In addition, absence of the BiLSTM layer will result in drops of the F 1 scores by about 0.3, 0.5 and 0.1 percentages on the three datasets.", "The model performance (evaluated by, e.g., accuracy or F 1 score) is certainly important.", "However, the confidences of model predictions are also of interest in many applications.", "For example, when it requires the predicted entities to be highly reliable (i.e., precision is of more priority than recall), we may filter out the entities with confidences lower than a specific threshold.", "However, Guo et al. (2017) have indicated that modern neural networks are poorly calibrated, and typically over-confident with their predictions.", "By calibration, they mean the extent to which the prediction confidences produced by a model can represent the true correctness probability.", "We find neural NER models also easy to become miscali-brated and over-confident.", "We observe that, with the standard cross entropy loss, both the development loss and F 1 score increase in the later training stage, which goes against the common perception that the loss and F 1 score should change in the opposite directions.", "This phenomenon is similar to the disconnect between negative likelihood and accuracy in image classification described by Guo et al. (2017).", "We suppose that the model becomes over-confident with its predictions, including the incorrect ones, which contributes to the increase of loss (see Appendix A for more details).", "To formally investigate the over-confidence issue, we plot the reliability diagrams and calculate expected calibration error (ECE).", "In brief, for an NER model, we group all the predicted entities by the associated confidences into ten bins, and then calculate the precision rate for each bin.", "If the model is well calibrated, the precision rate should be close to the confidence level for each bin (see Appendix B for more details).", "Figure 2 compares the reliability diagrams and ECEs between models with different smoothness (cid:15) on CoNLL 2003 and OntoNotes 5. For the baseline model ( (cid:15) = 0), the precision rates are much lower than corresponding confidence levels, suggesting significant over-confidence.", "By introducing boundary smoothing and increasing the smoothness (cid:15) , the over-confidence is gradually mitigated, and shifted to under-confidence ( (cid:15) = 0.3).", "In general, the model presents best reliability diagrams when (cid:15) is 0.1 or 0.2.", "In addition, the ECEs of the baseline model are 0.072 and 0.063 on CoNLL 2003 and OntoNotes 5, respectively; with (cid:15) of 0.1, the ECEs are reduced \u0000\u0013\u0000\u0011\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u0015 \u0000\u0013\u0000\u0011\u0000\u0017 \u0000\u0013\u0000\u0011\u0000\u0019 \u0000\u0013\u0000\u0011\u0000\u001b \u0000\u0014\u0000\u0011\u0000\u0013 \u0000&\u0000R\u0000Q\u0000I\u0000L\u0000G\u0000H\u0000Q\u0000F\u0000H \u0000\u0013\u0000\u0011\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u0015 \u0000\u0013\u0000\u0011\u0000\u0017 \u0000\u0013\u0000\u0011\u0000\u0019 \u0000\u0013\u0000\u0011\u0000\u001b \u0000\u0014\u0000\u0011\u0000\u0013 \u00003 \u0000U \u0000H\u0000F \u0000L \u0000V \u0000L \u0000R\u0000Q \u0000\u0003\u0000 \u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0013\u0000\u000f\u0000\u0003\u0000(\u0000&\u0000(\u0000\u0003\u0000 \u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0013\u0000\u001a\u0000\u0015 \u0000\u0003\u0000 \u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0014\u0000\u000f\u0000\u0003\u0000(\u0000&\u0000(\u0000\u0003\u0000 \u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0013\u0000\u0014\u0000\u0016 \u0000\u0003\u0000 \u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0015\u0000\u000f\u0000\u0003\u0000(\u0000&\u0000(\u0000\u0003\u0000 \u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0013\u0000\u0019\u0000\u0014 \u0000\u0003\u0000 \u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0016\u0000\u000f\u0000\u0003\u0000(\u0000&\u0000(\u0000\u0003\u0000 \u0000\u0003\u0000\u0013\u0000\u0011\u0000\u0014\u0000\u0018\u0000\u001b", "In conclusion, boundary smoothing can prevent the model from becoming over-confident with the predicted entities, and result in better calibration.", "In addition, as mentioned previously, spans with lower confidences are discarded if they clash with those of higher confidences when decoding.", "With the better calibration, the model can obtain a very marginal but consistent increase in the F 1 score.", "How does boundary smoothing improve the model performance?", "We originally conjectured that boundary smoothing can de-noise the inconsistently annotated entity boundaries (Lukasik et al., 2020), but failed to find enough evidence the 7102 \u0000\u0014\u0000\u0011\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u0018 \u0000\u0013\u0000\u0011\u0000\u0013 \u0000\u0013\u0000\u0011\u0000\u0018 \u0000\u0014\u0000\u0011\u0000\u0013 \u0000\u0013 \u0000\u0015\u0000\u0013\u0000\u0013 \u0000\u0017\u0000\u0013\u0000\u0013 \u0000\u0019\u0000\u0013\u0000\u0013 \u0000\u001b\u0000\u0013\u0000\u0013 \u0000\u0014\u0000\u0013\u0000\u0013\u0000\u0013", "performance improvement did not significantly increase when we injected boundary noises into the training data.", "8 As aforementioned, positive samples are very sparse among the candidate spans.", "Without boundary smoothing, the annotated spans are regarded to be entities with full probability, whereas all other spans are assigned with zero probability.", "This creates noticeable sharpness between the targets of the annotated spans and surrounding ones, although their neural representations are similar.", "Boundary smoothing re-allocates the entity probabilities across contiguous spans, which mitigates the sharpness and results in more continuous targets.", "Conceptually, such targets are more compatible with the inductive bias of neural networks that prefers continuous solutions (Hornik et al., 1989).", "Li et al. (2018) have shown that residual connections and well-tuned hyperparameters (e.g., learning rate, batch size) can produce flatter minima and less chaotic loss landscapes, which account for the better generalization and trainability.", "Their findings provide important insights into the geometric 8 On the other hand, this cannot rule out the de-noising effect of boundary smoothing, because the synthesized boundary noises are distributed differently from the real noises.", "properties of non-convex neural loss functions.", "Figure 3 visualizes the loss landscapes for models with different smoothness (cid:15) on CoNLL 2003 and OntoNotes 5, following Li et al. (2018).", "In short, for a trained model, a direction of the parameters is randomly sampled, normalized and fixed, and the loss landscape is computed by sampling over this direction (refer to Appendix C for more details).", "The visualization results qualitatively show that, the solutions found by the standard cross entropy are relatively sharp, whereas boundary smoothing can help arrive at flatter minima.", "As many theoretical studies regard the flatness as a promising predictor for model generalization (Hochreiter and Schmidhuber, 1997; Jiang et al., 2019), this result may explain why boundary smoothing can improve the model performance.", "In addition, boundary smoothing is associated with more smoothed landscapes the surrounding local minima are small, shallow, and thus easy for the optimizer to escape.", "Intuitively, such geometric property suggests that the underlying loss functions are easier to train (Li et al., 2018).", "and chaotic loss landscape.", "Boundary smoothing can effectively mitigate the sharpness, and result in loss landscapes of better generalization and trainability.", "In this study, we propose boundary smoothing as a regularization technique for span-based neural NER models.", "Boundary smoothing re-assigns entity probabilities from annotated spans to the surrounding ones.", "It can be easily integrated into any span-based neural NER systems, but consistently bring improved performance.", "Built on a simple but strong baseline (a base -sized pretrained language model followed by a BiLSTM layer, and the biaffine decoder), our model achieves SOTA results on eight well-known NER benchmarks, covering English and Chinese, flat and nested NER tasks.", "In addition, experimental results show that boundary smoothing leads to less over-confidence, better model calibration, flatter neural minima and more smoothed loss landscapes.", "These properties plausibly explain the performance improvement.", "Our findings shed light on the effects of smoothing regularization technique in the NER task.", "As discussed, boundary smoothing typically increases the overall F 1 score at the risk of a slight drop in the recall rate; hence, one may be careful to use it for recall-sensitive applications.", "Future work will apply boundary smoothing to more variants of span-based NER models, and investigate its effect in a broader range of information extraction tasks.", "We thank Yiyang Liu for his efforts in data processing, and the anonymous reviewers for their insightful comments and feedback.", "This work is supported by the National Natural Science Foundation of China (No. 62106248), Ningbo Science and Technology Service Industry Demonstration Project (No. 2020F041), and Ningbo Public Service Technology Foundation (No. 2021S152)." ]
[ "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "objective", "result", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "objective", "other", "objective", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Word vector specialisation (also known as retrofitting ) is a portable, light-weight approach to fine-tuning arbitrary distributional word vector spaces by injecting external knowledge from rich lexical resources such as WordNet.", "By design, these post-processing methods only update the vectors of words occurring in external lexicons, leaving the representations of all unseen words intact.", "In this paper, we show that constraint-driven vector space specialisation can be extended to unseen words.", "We propose a novel post-specialisation method that:", "a) preserves the useful linguistic knowledge for seen words; while", "b) propagating this external signal to unseen words in order to improve their vector representations as well.", "Our post-specialisation approach explicits a non-linear specialisation function in the form of a deep neural network by learning to predict specialised vectors from their original distributional counterparts.", "The learned function is then used to specialise vectors of unseen words.", "This approach, applicable to any postprocessing model, yields considerable gains over the initial specialisation models both in intrinsic word similarity tasks, and in two downstream tasks: dialogue state tracking and lexical text simplification .", "The positive effects persist across three languages, demonstrating the importance of specialising the full vocabulary of distributional word vector spaces.", "Word representation learning is a key research area in current Natural Language Processing (NLP), with its usefulness demonstrated across a range of tasks (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016b).", "The standard techniques for inducing distributed word representations are grounded in the distributional hypothesis (Harris, 1954): they rely on co-occurrence information in large textual corpora (Mikolov et al., 2013b; Pennington et al., 2014; Levy and Goldberg, 2014; Levy et al., 2015; Bojanowski et al., 2017).", "As a result, these models tend to coalesce the notions of semantic similarity and (broader) conceptual relatedness, and cannot accurately distinguish antonyms from synonyms (Hill et al., 2015; Schwartz et al., 2015).", "Recently, we have witnessed a rise of interest in representation models that move beyond stand-alone unsupervised learning: they leverage external knowledge in humanand automatically-constructed lexical resources to enrich the semantic content of distributional word vectors, in a process termed semantic specialisation .", "This is often done as a post-processing (some-times referred to as retrofitting ) step: input word vectors are fine-tuned to satisfy linguistic constraints extracted from lexical resources such as WordNet or BabelNet (Faruqui et al., 2015; Mrkic et al., 2017).", "The use of external curated knowledge yields improved word vectors for the benefit of downstream applications (Faruqui, 2016).", "At the same time, this specialisation of the distributional space distinguishes between true similarity and relatedness, and supports language understanding tasks (Kiela et al., 2015; Mrkic et al., 2017).", "While there is consensus regarding their benefits and ease of use, one property of the post-processing specialisation methods slips under the radar: most existing post-processors update word embeddings only for words which are present (i.e., seen ) in the external constraints, while vectors of all other (i.e., unseen ) words remain unaffected.", "In this work, we propose a new approach that extends the specialisation framework to unseen words, relying on the transformation of the vector (sub)space of seen words.", "Our intuition is that the process of fine-tuning seen words provides implicit information on how to leverage the external knowledge to unseen words.", "The method should preserve the already injected knowledge for seen words, simultaneously 516 propagating the external signal to unseen words in order to improve their vectors.", "The proposed post-specialisation method can be seen as a two-step process, illustrated in Fig. 1a: 1) We use a state-of-the-art specialisation model to transform the subspace of seen words from the input distributional space into the specialised subspace; 2) We learn a mapping function based on the transformation of the seen subspace, and then apply it to the distributional subspace of unseen words.", "We allow the proposed post-specialisation model to learn from large external linguistic resources by implementing the mapping as a deep feed-forward neural network with non-linear activations.", "This allows the model to learn the generalisation of the fine-tuning steps taken by the initial specialisation model, itself based on a very large number (e.g., hundreds of thousands) of external linguistic constraints.", "As indicated by the results on word similarity and two downstream tasks (dialogue state tracking and lexical text simplification) our post-specialisation method consistently outperforms state-of-the-art methods which specialise seen words only.", "We report improvements using three distinct input vector spaces for English and for three test languages (English, German, Italian), verifying the robustness of our approach.", "Vector Space Specialisation A standard approach to incorporating external and background knowledge into word vector spaces is to pull the representations of similar words closer together and to push words in undesirable relations (e.g., antonyms) away from each other.", "Some models integrate such constraints into the training procedure and jointly optimize distributional and nondistributional objectives: they modify the prior or the regularisation (Yu and Dredze, 2014; Xu et al., 2014; Bian et al., 2014; Kiela et al., 2015), or use a variant of the SGNS-style objective (Liu et al., 2015; Ono et al., 2015; Osborne et al., 2016; Nguyen et al., 2017).", "In theory, word embeddings obtained by these joint models could be as good as representations produced by models which fine-tune input vector space.", "However, their performance falls behind that of fine-tuning methods (Wi-eting et al., 2015).", "Another disadvantage is that their architecture is tied to a specific underlying model (typically word2vec models).", "In contrast, fine-tuning models inject external knowledge from available lexical resources (e.g., WordNet, PPDB) into pre-trained word vectors as a post-processing step (Faruqui et al., 2015; Rothe and Schtze, 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkic et al., 2016; Cotterell et al., 2016; Mrkic et al., 2017).", "Such post-processing models are popular because they offer a portable, flexible, and light-weight approach to incorporating external knowledge into arbitrary vector spaces, yielding state-of-the-art results on language understanding tasks (Faruqui et al., 2015; Mrkic et al., 2016; Kim et al., 2016; Vulic et al., 2017b).", "Existing post-processing models, however, suffer from a major limitation.", "Their modus operandi is to enrich the distributional information with external knowledge only if such knowledge is present in a lexical resource.", "This means that they update and improve only representations of words actually seen in external resources.", "Because such words constitute only a fraction of the whole vocabulary (see Sect. 4), most words, unseen in the constraints, retain their original vectors.", "The main goal of this work is to address this shortcoming by specialising all words from the initial distributional space.", "Our starting point is the state-of-the-art specialisation model ATTRACT-REPEL ( AR ) (Mrkic et al., 2017), outlined in Sect.", "3.1.", "We opt for the AR model due to its strong performance and ease of use, but we note that the proposed post-specialisation approach for specialising unseen words, described in Sect.", "3.2, is applicable to any post-processor, as empirically validated in Sect.", "5. 3.1 Initial Specialisation Model: AR Let V s be the vocabulary, A the set of synonymous ATTRACT word pairs (e.g., rich and wealthy ), and R the set of antonymous REPEL word pairs (e.g., increase and decrease ).", "The ATTRACT-REPEL procedure operates over mini-batches of such pairs BA and BR .", "Let each word pair ( x l , x r ) in these sets correspond to a vector pair ( x l , x r ) .", "A mini-batch of b att attract word pairs is given by BA = [( x 1 l , x 1 r ) , . . . , ( x k 1 l , x k 1 r )] (analogously for BR , which consists of b rep pairs).", "Next, the sets of negative examples TA = [( t 1 l , t 1 r ) , . . . , ( t k 1 l , t k 1 r )] and TR = [( t 1 l , t 1 r ) , . . . , ( t k 2 l , t k 2 r )] are defined as pairs of negative examples for each A and R 517 pair in mini-batches BA and BR .", "These negative examples are chosen from the word vectors present in BA or BR so that, for each A pair ( x l , x r ) , the negative example pair ( t l , t r ) is chosen so that t l is the vector closest (in terms of cosine distance) to x l and t r is closest to x r .", "1 The negatives are used 1) to force A pairs to be closer to each other than to their respective negative examples; and 2) to force R pairs to be further away from each other than from their negative examples.", "The first term of the cost function pulls A pairs together: Att ( BA , TA ) = b att X i =1 (cid:2) (cid:16) att + x il t il x il x ir (cid:17) + (cid:16) att + x ir t ir x il x ir (cid:17) (cid:3) (1) where ( z ) = max(0 , z ) is the standard rectifier function (Nair and Hinton, 2010) and att is the attract margin: it determines how much closer these vectors should be to each other than to their respective negative examples.", "The second, REPEL term in the cost function is analogous: it pushes R word pairs away from each other by the margin rep .", "Finally, in addition to the A and R terms, a regularisation term is used to preserve the semantic content originally present in the distributional vector space, as long as this information does not contradict the injected external knowledge.", "Let V ( B ) be the set of all word vectors present in a mini-batch, the distributional regularisation term is then: Reg ( BA , BR ) = X x i V ( BA B R ) reg k b x i x i k 2 (2) where reg is the L 2 -regularisation constant and b x i denotes the original (distributional) word vector for word x i .", "The full ATTRACT-REPEL cost function is finally constructed as the sum of all three terms.", "Problem Formulation The goal is to learn a global transformation function that generalises the perturbations of the initial vector space made by ATTRACT-REPEL (or any other specialisation pro-cedure), as conditioned on the external constraints.", "The learned function propagates the signal coded in the input constraints to all the words unseen during the specialisation process.", "We seek a regression 1 Similarly, for each R pair ( x l , x r ) , the negative pair ( t l , t r ) is chosen from the in-batch vectors so that t l is the vector furthest away from x l and t r is furthest from x r .", "All vectors are unit length (re)normalised after each epoch.", "function f : R dim R dim , where dim is the vector space dimensionality.", "It maps word vectors from the initial vector space X to the specialised target space X 0 .", "Let c X 0 = f ( X ) refer to the predicted mapping of the vector space, while the mapping of a single word vector is denoted b x 0 i = f ( x i ) .", "An input distributional vector space X d represents words from a vocabulary V d .", "V d may be divided into two vocabulary subsets: V d = V s V u , V s V u = , with the accompanying vector subspaces X d = X s t X u .", "V s refers to the vocabulary of seen words: those that appear in the external linguistic constraints and have their embeddings changed in the specialisation process.", "V u denotes the vocabulary of unseen words: those not present in the constraints and whose embeddings are unaffected by the specialisation procedure.", "The AR specialisation process transforms only the subspace X s into the specialised subspace X 0 s .", "All words x i V s may now be used as training examples for learning the explicit mapping function f from X s into X 0 s .", "If N = |V s | , we in fact rely on N training pairs: ( x i , x 0 i ) = { x i X s , x 0 i X 0 s } .", "Function f can then be applied to unseen words x V u to yield the specialised subspace c X 0 u = f ( X u ) .", "The specialised space containing all words is then X f = X 0 s c X 0 u .", "The complete high-level post-specialisation procedure is outlined in Fig. 1a.", "Note that another variant of the approach could obtain X f as X f = f ( X d ) , that is, the entire distributional space is transformed by f .", "However, this variant seems counter-intuitive as it forgets the actual output of the initial specialisation procedure and replaces word vectors from X 0 s with their approximations, i.e., f -mapped vectors.", "2 Objective Functions As mentioned, the N seen words x i V s in fact serve as our pseudo-translation pairs supporting the learning of a cross-space mapping function.", "In practice, in its high-level formulation, our mapping problem is equivalent to those encountered in the literature on cross-lingual word embeddings where the goal is to learn a shared cross-lingual space given monolingual vector spaces in two languages and N 1 translation pairs (Mikolov et al., 2013a; Lazaridou et al., 2015; Vulic and Korhonen, 2016b; Artetxe et al., 2016, 2017; Conneau et al., 2017; Ruder et al., 2017).", "In our setup, the standard objective based on L 2 -penalised 2 We have empirically confirmed the intuition that the first variant is superior to this alternative.", "We do not report the actual quantitative comparison for brevity.", "Figure 1 :", "(a) High-level illustration of the post-specialisation approach: the subspace X s of the initial distributional vector space X d = X s X u is first specialised/fine-tuned by the ATTRACT-REPEL specialisation model (or any other post-processing model) to obtain the transformed subspace X 0 s .", "The words present (i.e., seen ) in the input set of linguistic constraints are now assigned different representations in X s (the original distributional vector) and X 0 s (the specialised vector): they are therefore used as training examples to learn a non-linear cross-space mapping function.", "This function is then applied to all word vectors x i X u representing words unseen in the constraints to yield a specialised subspace c X 0 u .", "The final space is X f = X 0 s c X 0 u , and it contains transformed representations for all words from the initial space X d .", "(b) The actual implementation of the non-linear regression function which maps from X u to c X 0 u : a deep feed-forward fully-connected neural net with non-linearities and H hidden layers.", "|| || where || || 2 F denotes the squared Frobenius norm.", "In the most common form f ( X s ) is simply a linear map/matrix W f R dim dim (Mikolov et al., 2013a) as follows: f ( X ) = W f X .", "After learning f based on the X s X 0 s transformation, one can simply apply f to unseen words: c X 0 u = f ( X u ) .", "This linear mapping model, termed LINEAR-MSE , has an analytical solution (Artetxe et al., 2016), and has been proven to work well with cross-lingual embeddings.", "However, given that the specialisation model injects hundreds of thousands (or even millions) of linguistic constraints into the distributional space (see later in Sect. 4), we suspect that the assumption of linearity is too limiting and does not fully hold in this particular setup.", "Using the same L 2 -penalized least squares objective, we can thus replace the linear map with a nonlinear function f : R dim R dim .", "The non-linear mapping, illustrated by Fig. 1b, is implemented as a deep feed-forward fully-connected neural network (DFFN) with H hidden layers and non-linear activations.", "This variant is called NONLINEAR-MSE .", "Another variant objective is the contrastive margin-based ranking loss with negative sampling ( MM ) similar to the original ATTRACT-REPEL objective, used in other applications in prior work (e.g., for cross-modal mapping) (Weston et al., 2011; Frome et al., 2013; Lazaridou et al., 2015; Kummerfeld et al., 2015).", "Let c x 0 i = f ( x i ) denote the predicted vector for the word x i V s , and let x 0 i refer to the true vector of x i in the specialised space X 0 s after the AR specialisation procedure.", "The MM loss is then defined as follows: JMM = NX i =1 k X j 6 = i (cid:16) mm cos (cid:0)c x 0 i , x 0 i (cid:1) + cos (cid:0)c x 0 i , x 0 j (cid:1)(cid:17) where cos is the cosine similarity measure, mm is the margin, and k is the number of negative samples.", "The objective tries to learn the mapping f so that each predicted vector c x 0 i is by the specified margin mm closer to the correct target vector x 0 i than to any other of k target vectors x 0 j serving as negative examples.", "3 Function f can again be either a simple linear map ( LINEAR-MM ), or implemented as a DFFN ( NONLINEAR-MM , see Fig. 1b).", "Starting Word Embeddings ( X d = X s X u ) To test the robustness of our approach, we experiment with three well-known, publicly available collections of English word vectors: 1) Skip-Gram with Negative Sampling ( SGNS-BOW 2) (Mikolov et al., 2013b) trained on the Polyglot Wikipedia (Al-Rfou et al., 2013) by Levy and Goldberg (2014) using bag-of-words windows of size 2; 2) GLOVE Common Crawl (Pennington et al., 2014); and 3) FASTTEXT (Bojanowski et al., 2017), a SGNS variant which builds word vectors as the sum of their constituent character n-gram vectors.", "All word embeddings are 300 -dimensional.", "4 AR Specialisation and Constraints ( X s X 0 s ) We experiment with linguistic constraints used before by (Mrkic et al., 2017; Vulic et al., 2017a): they extracted monolingual synonymy/ ATTRACT pairs from the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013; Pavlick et al., 2015) (640,435 synonymy pairs in total), while their antonymy/ REPEL constraints came from BabelNet (Navigli and Ponzetto, 2012) (11,939 pairs).", "5 The coverage of V d vocabulary words in the constraints illustrates well the problem of unseen words with the fine-tuning specialisation models.", "For instance, the constraints cover only a small subset of the entire vocabulary V d for SGNS-BOW 2: 16.6%.", "They also cover only 15.3% of the top 200K most frequent V d words from FASTTEXT .", "Network Design and Parameters ( X u c X 0 u ) The non-linear regression function f : R d R d is a DFFN with H hidden layers, each of dimensionality d 1 = d 2 = . . . = d H = 512 (see Fig. 1b).", "Non-linear activations are used in each layer and P Ni =1 (cid:16) mm cos ( c x 0 i , x 0 i ) (cid:17) .", "For instance, with mm = 1 .", "0 the idea is to learn a mapping f that, for each x i enforces the predicted vector and the correct target vector to have a maximum cosine similarity.", "We do not report the results with this variant as, although it outscores the MSE -style objective, it was consistently outperformed by the MM objective.", "4 For further details regarding the architectures and training setup of the used vector collections, we refer the reader to the original papers.", "Additional experiments with other word vectors, e.g., with CONTEXT 2 VEC (Melamud et al., 2016a) (which uses bidirectional LSTMs (Hochreiter and Schmidhu-ber, 1997) for context modeling), and with dependency-word based embeddings (Bansal et al., 2014; Melamud et al., 2016b) lead to similar results and same conclusions.", "5 We have experimented with another set of constraints used in prior work (Zhang et al., 2014; Ono et al., 2015), reaching similar conclusions: these were extracted from WordNet (Fellbaum, 1998) and Roget (Kipfer, 2009), and comprise 1,023,082 synonymy pairs and 380,873 antonymy pairs.", "omitted only before the final output layer to enable full-range predictions (see Fig. 1b again).", "The choices of non-linear activation and initialisation are guided by recent recommendations from the literature.", "First, we use swish (Ramachan-dran et al., 2017; Elfwing et al., 2017) as nonlinearity, defined as swish ( x ) = x sigmoid ( x ) .", "We fix = 1 as suggested by Ramachandran et al. (2017).", "6 Second, we use the HE normal initialisation (He et al., 2015), which is preferred over the XAVIER initialisation (Glorot and Bengio, 2010) for deep models (Mishkin and Matas, 2016; Li et al., 2016), although in our experiments we do not observe a significant difference in performance between the two alternatives.", "We set H = 5 in all experiments without any fine-tuning; we also analyse the impact of the network depth in Sect.", "5. Optimisation For the AR specialisation step, we adopt the original suggested model setup.", "Hyper-parameter values are set to: att = 0 .", "6 , rep = 0 .", "0 , reg = 10 9 (Mrkic et al., 2017).", "The models are trained for 5 epochs with Adagrad (Duchi et al., 2011), with batch sizes set to b att = b rep = 50 , again as in the original work.", "For training the non-linear mapping with DFFN (Fig. 1b), we use the Adam algorithm (Kingma and Ba, 2015) with default settings.", "The model is trained for 100 epochs with early stopping on a validation set.", "We reserve 10% of all available seen data (i.e., the words from V s represented in X s and X 0 s ) for validation, the rest are used for training.", "For the MM objective, we set mm = 0 .", "6 and k = 25 in all experiments without any fine-tuning.", "Evaluation Protocol The first set of experiments evaluates vector spaces with different specialisation procedures intrinsically on word similarity benchmarks: we use the SimLex-999 dataset (Hill et al., 2015), and SimVerb-3500 (Gerz et al., 2016), a recent verb pair similarity dataset providing similarity ratings for 3,500 verb pairs.", "7 Spearman's 6 According to Ramachandran et al. (2017), for deep networks swish has a slight edge over the family of LU/ReLUrelated activations (Maas et al., 2013; He et al., 2015; Klam-bauer et al., 2017).", "We also observe a minor (and insignificant) difference in performance in favour of swish .", "7 While other gold standards such as WordSim-353 (Finkel-stein et al., 2002) or MEN (Bruni et al., 2014) coalesce the notions of true semantic similarity and (more broad) conceptual relatedness, SimLex and SimVerb provide explicit guidelines 520 Setup: hold-out Setup: all GLOVE SGNS-BOW 2 FASTTEXT GLOVE SGNS-BOW 2 FASTTEXTSL SV SL SV SL SV SL SV SL SV SL SV Distributional: X d .408 .286 .414 .275 .383 .255 .408 .286 .414 .275 .383 .255 + AR specialisation: X 0 s .408 .286 .414 .275 .383 .255 .690 .578 .658 .544 .629 .502 ++Mapping unseen: X f LINEAR-MSE .504 .384 .447 .309 .405 .285 .690 .578 .656 .551 .628 .502 NONLINEAR-MSE .549 .407 .484 .344 .459 .329 .694 .586 .663 .556 .631 .506 LINEAR-MM .548 .422 .468 .329 .419 .308 .697 .582 .663 .554 .628 .487 NONLINEAR-MM .603 .480 .531 .391 .471 .349 .705 .600 .667 .562 .638 .507 Table 1 : Spearman's correlation scores for three word vector collections on two English word similarity datasets, SimLex-999 (SL) and SimVerb-3500 (SV), using different mapping variants, evaluation protocols, and word vector spaces: from the initial distributional space X d to the fully specialised space X f .", "H = 5 .", "Figure 2 : The results of the hold-out experiments on SimLex-999 and SimVerb-3500 after applying our non-linear vector space transformation with different depths (hidden layer size H , see Fig. 1b).", "The results are presented as averages over 20 runs with the NONLINEAR-MM variant, the shaded regions are spanned by the maximum and minimum scores obtained.", "Thick horizontal lines refer to Spearman's rank correlations achieved in the initial space X d .", "H = 0 denotes the standard linear regression model (Mikolov et al., 2013a; Lazaridou et al., 2015) ( LINEAR-MM shown since it outperforms LINEAR-MSE ).", "We evaluate word vectors in two settings.", "First, in a synthetic hold-out setting, we remove all linguistic constraints which contain words from the SimLex and SimVerb evaluation data, effectively forcing all SimLex and SimVerb words to be unseen by the AR specialisation model.", "The specialised vectors for these words are estimated by the learned non-linear DFFN mapping model.", "Second, the all setting is a standard real-life scenario where some test (SimLex/SimVerb) words do occur in the constraints, while the mapping is learned for the remaining words.", "with the three word vector collections are provided in Tab.", "1. In addition, Fig. 2 plots the influence of the network to discern between the two, so that related but non-similar words (e.g. tiger and jungle ) have a low rating.", "The results suggest that the mapping of unseen words is universally useful, as the highest correlation scores are obtained with the final fully specialised vector space X f for all three input spaces.", "The results in the hold-out setup are particularly indicative of the improvement achieved by our post-specialisation method.", "For instance, it achieves a +0.2 correlation gain with GLOVE on both SimLex and SimVerb by specialising vector representations for words present in these datasets without seeing a single external constraint which contains any of these words.", "This suggests that the perturbation of the seen subspace X s by ATTRACT-REPEL contains implicit knowledge that can be propagated to X u , learning better representations for unseen words.", "We observe small but consistent improvements across the board in the all setup.", "The smaller gains can be explained by the fact that a majority of 521 SimLex and SimVerb words are present in the external constraints (93.7% and 87.2%, respectively).", "The scores also indicate that both non-linearity and the chosen objective function contribute to the quality of the learned mapping: largest gains are reported with the NONLINEAR-MM variant which", "a) employs non-linear activations and", "b) replaces the basic mean-squared-error objective with max-margin.", "The usefulness of the latter has been established in prior work on cross-space mapping learning (Lazaridou et al., 2015).", "The former indicates that the initial AR transformation is non-linear.", "It is guided by a large number of constraints; their effect cannot be captured by a simple linear map as in prior work on, e.g., cross-lingual word embeddings (Mikolov et al., 2013a; Ruder et al., 2017).", "Finally, the analysis of the network depth H indicates that going deeper helps only to a certain extent.", "Adding more layers allows for a richer parametrisation of the network (which is beneficial given the number of linguistic constraints used by AR ).", "This makes the model more expressive, but it seems to saturate with larger H values.", "Post-Specialisation with Other Post-Processors We also verify that our post-specialisation approach is not tied to the ATTRACT-REPEL method, and is indeed applicable on top of any post-processing specialisation method.", "We analyse the impact of post-specialisation in the hold-out setting using the original retrofitting ( RFit ) model (Faruqui et al., 2015) and counter-fitting ( CFit ) (Mrkic et al., 2016) in lieu of attract-repel.", "The results on word similarity with the best-performing NONLINEAR-MM variant are summarised in Tab.", "2. The scores again indicate the usefulness of post-specialisation.", "As expected, the gains are lower Figure 3 : DST labels (user goals given by slot-value pairs) in a multi-turn dialogue (Mrkic et al., 2015).", "Table 2 : Post-specialisation applied to two other post-processing methods.", "SL: SimLex; SV: SimVerb.", "Hold-out setting.", "NONLINEAR-MM .", "than with ATTRACT-REPEL .", "RFit falls short of CFit as by design it can leverage only synonymy (i.e., ATTRACT ) external constraints.", "Next, we evaluate the usefulness of post-specialisation for two downstream tasks dialogue state tracking and lexical text simplification in which discerning semantic similarity from other types of semantic relatedness is crucial.", "We first evaluate the importance of post-specialisation for a downstream language understanding task of dialogue state tracking (DST) (Henderson et al., 2014; Williams et al., 2016), adopting the evaluation protocol and data of Mrkic et al. (2017).", "DST: Model and Evaluation The DST model is the first component of modern dialogue pipelines (Young, 2010), which captures the users' goals at each dialogue turn and then updates the dialogue state.", "Goals are represented as sets of constraints expressed as slot-value pairs (e.g., food= Chinese ).", "The set of slots and the set of values for each slot constitute the ontology of a dialogue domain.", "The probability distribution over the possible states is the system's estimate of the user's goals, and it is used by the dialogue manager module to select the subsequent system response (Su et al., 2016).", "An example in Fig. 3 illustrates the DST pipeline.", "For evaluation, we use the Neural Belief Tracker (NBT), a state-of-the-art DST model which was the first to reason purely over pre-trained word vectors (Mrkic et al., 2017).", "8 The NBT uses no hand-crafted semantic lexicons, instead composing word vectors into intermediate utterance and context representations.", "9 For full model details, we refer the reader to the original paper.", "The importance of word vector specialisation for the DST task (e.g., distinguishing between synonyms and antonyms by pulling northern and north closer in 8 https://github.com/nmrksic/neural-belief-tracker 9 The NBT keeps word vectors fixed during training to enable generalisation for words unseen in DST training data.", "Table 3 : DST results in two evaluation settings ( hold-out and all ) with different GLOVE variants.", "the vector space while pushing north and south away) has been established (Mrkic et al., 2017).", "Again, as in prior work the DST evaluation is based on the Wizard-of-Oz (WOZ) v2.0 dataset (Wen et al., 2017; Mrkic et al., 2017), comprising 1,200 dialogues split into training (600 dialogues), development (200), and test data (400).", "In all experiments, we report the standard DST performance measure: joint goal accuracy , and report scores as averages over 5 NBT training runs.", "Results and Analysis We again evaluate word vectors in two settings: 1) hold-out , where linguistic constraints with words appearing in the WOZ data are removed, making all WOZ words unseen by ATTRACT-REPEL ; and 2) all .", "The results for the English DST task with different GLOVE word vector variants are summarised in Tab.", "3; similar trends in results are observed with two other word vector collections.", "The scores maintain conclusions established in the word similarity task.", "First, semantic specialisation with ATTRACT-REPEL is again beneficial, and discerning between synonyms and antonyms improves DST performance.", "However, specialising unseen words (the final X u vector space) yields further improvements in both evaluation settings, supporting our claim that the specialisation signal can be propagated to unseen words.", "This downstream evaluation again demonstrates the importance of non-linearity, as the peak scores are reported with the NONLINEAR-MM variant.", "More substantial gains in the all setup are observed in the DST task compared to the word similarity task.", "This stems from a lower coverage of the WOZ data in the AR constraints: 36.3% of all WOZ words are unseen words.", "Finally, the scores are higher on average in the all setup, since this setup uses more external constraints for AR , and consequently uses more training examples to learn the mapping.", "Other Languages We test the portability of our framework to two other languages for which we have similar evaluation data: German (DE) and Italian (IT).", "SimLex-999 has been translated and rescored in the two languages by Leviant and Reichart (2015), and the WOZ data were translated and adapted by Mrkic et al. (2017).", "Exactly the same setup is used as in our English experiments, without any additional language-specific fine-tuning.", "Linguistic constraints were extracted from the same sources: synonyms from the PPDB (135,868 in DE, 362,452 in IT), antonyms from BabelNet (4,124 in DE, and 16,854 in IT).", "Our starting distributional vector spaces are taken from prior work: IT vectors are from (Dinu et al., 2015), DE vectors are from (Vulic and Korhonen, 2016a).", "The results are summarised in Tab.", "4. Our post-specialisation approach yields consistent improvements over the initial distributional space and the AR specialisation model in both tasks and for both languages.", "We do not observe any gain on IT SimLex in the all setup since IT constraints have almost complete coverage of all IT SimLex words (99.3%; the coverage is 64.8% in German).", "As expected, the DST scores in the all setup are higher than in the hold-out setup due to a larger number of constraints and training examples.", "Lower absolute scores for Italian and German compared to the ones reported for English are due to multiple factors, as discussed previously by Mrkic et al. (2017): 1) the AR model uses less linguistic constraints for DE and IT; 2) distributional word vectors are induced from smaller corpora; 3) linguistic phenomena (e.g., cases and compounding in DE) contribute to data sparsity and also make the DST task more challenging.", "However, it is important to stress the consistent gains over the vector space specialised by the state-of-the-art ATTRACTREPEL model across all three test languages.", "This indicates that the proposed approach is language-agnostic and portable to multiple languages.", "In our second downstream task, we examine the effects of post-specialisation on lexical simplification (LS) in English.", "LS aims to substitute complex words (i.e., less commonly used words) with their simpler synonyms in the context.", "Simplified text must keep the meaning of the original text, which is discerning similarity from relatedness is important (e.g., in The automobile was set on fire the word automobile should be replaced with car or vehicle but not with wheel or driver ).", "Table 4 : Results on word similarity (Spearman's ) and DST (joint goal accuracy) for German", "Table 5 : Lexical simplification performance with post-specialisation applied on three input spaces.", "We employ LIGHT-LS (Glava and tajner, 2015), a lexical simplification algorithm that: 1) makes substitutions based on word similarities in a semantic vector space, and 2) can be provided an arbitrary embedding space as input.", "10 For a complex word, LIGHT-LS considers the most similar words from the vector space as simplification candidates.", "Candidates are ranked according to several features, indicating simplicity and fitness for the context (semantic relatedness to the context of the complex word).", "The substitution is made if the best candidate is simpler than the original word.", "By providing vector spaces post-specialised for semantic similarity to LIGHT-LS, we expect to more often replace complex words with their true synonyms.", "We evaluate LIGHT-LS performance in the all setup on the LS benchmark compiled by Horn et al. (2014), who crowdsourced 50 manual simplifications for each complex word.", "As in prior work, we evaluate performance with the following metrics: 1) Accurracy (Acc.) is the number of correct simplifications made (i.e., the system made the simplification and its substitution is found in the list of crowdsourced substitutions), divided by the total number of indicated complex words; 2) Changed (Ch.) is the percentage of indicated complex words 10 https://github.com/codogogo/lightls that were replaced by the system (whether or not the replacement was correct).", "LS results are summarised in Tab.", "5. Post-specialised vector spaces consistently yield 5-6% gain in Accuracy compared to respective distributional vectors and embeddings specialised with the state-of-the-art ATTRACT-REPEL model.", "Similar to DST evaluation, improvements over ATTRACTREPEL demonstrate the importance of specialising the vectors of the entire vocabulary and not only the vectors of words from the external constraints.", "We have presented a novel post-processing model, termed post-specialisation , that specialises word vectors for the full vocabulary of the input vector space.", "Previous post-processing specialisation models fine-tune word vectors only for words occurring in external lexical resources.", "In this work, we have demonstrated that the specialisation of the subspace of seen words can be leveraged to learn a mapping function which specialises vectors for all other words, unseen in the external resources.", "Our results across word similarity and downstream language understanding tasks show consistent improvements over the state-of-the-art specialisation method for all three test languages.", "In future work, we plan to extend our approach to specialisation for asymmetric relations such as hypernymy or meronymy (Glava and Ponzetto, 2017; Nickel and Kiela, 2017; Vulic and Mrkic, 2018).", "We will also investigate more sophisticated non-linear functions.", "The code is available at: https://github.com/cambridgeltl/ post-specialisation/ .", "We thank the three anonymous reviewers for their insightful suggestions.", "This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909)." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "other", "objective", "abstain", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "objective", "other", "other", "other" ]
[ "This work introduces a new problem, relational summarization, in which the goal is to generate a natural language summary of the relationship between two lexical items in a corpus, without reference to a knowledge base.", "Motivated by the needs of novel user interfaces, we define the task and give examples of its application.", "We also present a new query-focused method for finding natural language sentences which express relationships.", "Our method allows for summarization of more than two times more query pairs than baseline relation extractors, while returning measurably more readable output.", "Finally, to help guide future work, we analyze the challenges of relational summarization using both a news and a social media corpus.", "Research on automatic summarization (Nenkova et al., 2011; Das and Martins, 2007) aims to help users understand large document sets.", "However, the details of how textual summaries might actually be presented to users are often ignored.", "We propose that user interfaces which display noteworthy terms or concepts present the need for relational summaries : descriptions of the relationship between two entities or noun phrases from a corpus.", "Examples of such interfaces include: comman-dline software for examining noteworthy terms or phrases (Squirrell, 2017; Robinson, 2016; Monroe et al., 2008), point-and-click browsers which display named entities and their interconnections on a network diagram (Wright et al., 2009; Gorg et al., 2014; Tannier, 2016), concept map browsers (Falke and Gurevych, 2017b) and document search engines which suggest terms relevant to a query, such as the related searches displayed on Wikipedia info boxes from Google.", "In Aristide Aristide Gen. Cedras UN Liberation Theology rival of influenced by relied on Aristide the Haitian leader governing philosophy informed by liberation theology Aristide , as a young Catholic priest was influenced by the liberation theology Aristide was earlier expelled from Salesian Order for promoting liberation theology Clinton criticized conceptmap snippetbox Figure 1: An example interface which requires relational summarization.", "all such settings a natural question arises: what is the nature of the relationship between the entities or concepts shown in the interface?", "One particular interface which presents the need for a relational summary is shown in figure 1.", "Relational questions are ubiquitous and varied.", "Examples include the following.", "What is the relationship between the City of London and goal-delivery of Newgate in 18th century court records (Hitchcock et al., 2012)?", "What is the relationship between Advanced Integrated Systems and United Arab Emirates in the Paradise Papers?", "1 What does dad have to do with mom on the subreddit discussion forum Relationship Advice ?", "This study seeks to answer such questions by examining the problem of relational summarization , which lies at the intersection of prior work on summarization and relation extraction.", "Unlike previous efforts at summarizing relationships (Falke and Gurevych, 2017a), our approach focuses on answering user queries about the connections between two particular terms, without ref-1 https://www.icij.org/investigations/paradise-papers/ 1760 United States ousted former President Jean-Bertrand Aristide Jean-Bertrand Aristide restored to power under watch of United States Jean-Bertrand Aristide restored to power under watch of United States Jean-Bertrand Aristide , left Haiti for the United States United States ousted former President Jean-Bertrand Aristide the United States ousted former President Jean-Bertrand Aristide to claimed the United States said that Rev. Jean-Bertrand Aristide wanted to by the United States since the Rev. Jean-Bertrand Aristide argued Jean-Bertrand Aristide , left Haiti for the United States in March Candidate set Summary Mention set Jean-Bertrand Aristide restored to power under watch of United States summaryconstructiontask candidate set generation task Figure 2: A relational summary is a synopsis of all sentences which mention two terms, denoted ( t 1 ) and ( t 2 ) .", "erencing a knowledge graph (Voskarides et al., 2015).", "2 In order to answer such queries we: Formally define the problem ( 2), which we divide into two subtasks: candidate set generation and summary construction .", "Provide a new method for the candidate set generation task ( 4), which we show outperforms baseline relation extraction techniques ( 5) in terms of readability and yield.", "Analyze the summary construction task for future work ( 6), demonstrating that different summarization techniques are likely most appropriate for different mention sets.", "We refer to all sentences within a collection of documents which contain two terms, ( t 1 ) and ( t 2 ) as the mention set .", "( t 1 ) and ( t 2 ) are noun phrases, a syntactic category which encompasses both traditional named entities like people and places, as well as less concrete, but important, entities and concepts like liberation theology (Handler et al., 2016).", "A relational summary is a synopsis of the mention set.", "A summary consists of K relation statements, each displayed on its own line.", "Relation statements are natural language expressions which begin with ( t 1 ) and end with ( t 2 ) .", "We refer to the span of tokens in between ( t 1 ) and ( t 2 ) as a relation phrase .", "We use the notation ( t 1 ) r ( t 2 ) to denote a relation statement, indicating two 2 Relational summaries are intended for general-purpose corpus analysis.", "Existing knowledge bases do not cover topics discussed in many corpora, such as historical court records (Hitchcock et al., 2012).", "Therefore, our approach does not employ a knowledge base.", "terms and a relation phrase.", "In the relation statement, Aristide fled Haiti , r is the token fled, ( t 1 ) is the token Aristide , and ( t 2 ) is the token Haiti .", "Relation statements, which are strings intended for human readers, are similar to the 3-tuples, relations , from prior work on information extraction (Banko et al., 2007).", "However, in this work, we show that the assumptions underlying the extraction of 3-tuples for machines ( 3) leads to poor performance in summarizing mention sets for people ( 5).", "In this study, we present a strictly extractive method for generating relation statements: each relation statement must be constructed by deleting tokens from some sentence in the mention set.", "3 Some relation statements constructed by deleting tokens from a sentence make sense; others do not.", "We refer to any ( t 1 ) r ( t 2 ) which makes makes sense to a human reader as acceptable .", "4 Table 1 shows examples of acceptable and unacceptable relation statements, constructed by deletion.", "s 1 Aristide ( t 1 ) fled r Haiti ( t 2 ) in 2004.", "s 2 For instance Bush ( t 1 ) told r Aristide ( t 2 ) to leave.", "3 In subsequent studies of relation extractors ( 5), we allow extractors to lightly introduce new tokens, such as adding the word is in relations expressed as noun phrases.", "4 Linguists sometimes use the term acceptability to refer to human judgements of the well-formedness of utterance.", "See Sprouse and Schutze (2014) for an overview.", "Only acceptable relation statements are permitted in a summary.", "The set of all possible acceptable relation statements is called the candidate set , denoted C .", "We refer to the task of identifying all acceptable relation statements as the candidate set generation task .", "Identifying a candidate set presents a subsequent problem of choosing the best collection of K relation statements from C to create a summary.", "We refer to this second step as the summary construction task .", "As in traditional summarization (Das and Martins, 2007; Nenkova et al., 2011), a good relational summary should", "(i) be readable,", "(ii) include the most important aspects of the relationship between ( t 1 ) and ( t 2 ) ,", "(iii) avoid redundancy, and", "(iv) cover the full diversity of topics in the mention set.", "Relational summaries might be presented with different kinds of user interfaces.", "In cases where a user seeks to browse many relationships, a summary might be displayed as a concept map (Falke and Gurevych, 2017a), where the two terms are vertexes in a directed graph and their relationship is printed along the edge label between them.", "In cases where user wants to investigate a specific relationship, a relational summary might be displayed as a snippet box : a short list of sentences which begin and end with the two terms.", "Figure 1 shows a snippet box and concept map.", "In a snippet box, both the number of lines in the summary and the length of the lines in the summary is longer than in a concept map.", "Relational summarization intersects with a diversity of prior work from natural language processing, including work on relation extraction , summarization and .", "Traditionally, the goal of relation extraction is to cull structured facts for knowledge databases from unstructured text.", "Often, such facts take the form of a 3-tuple which defines a relationship between two arguments, such as (arg1=Angela Merkel, rel=met with, arg2=Theresa May).", "If extractors do not make use of a predefined schema, the task of finding relations is called Open Information Extraction (OpenIE).", "OpenIE systems 5 offer an off-the-shelf method for generating a candidate set for a relational summary.", "Their output can easily be linearized to ( t 1 ) r ( t 2 ) statements by 5 There are many available OpenIE systems.", "simply concatenating the three arguments of the triple to form a string.", "However, we find that the recall of relation extractors is often too low to summarize many mention sets.", "We measure this disadvantage extensively in section 5.1.", "One reason for their poor performance might be that extractors have goals and assumptions which are poorly suited to the relation summarization task.", "In relation extraction, the aim is to find relation strings that recur for many different entity pairs, which allows such systems to build knowledge databases.", "For instance, relation extraction might be used to build tables of world leaders who rel=met with other world leaders in order to analyze international politics.", "From this perspective, long, sparse, heterogenous and detailed relation strings which might apply only to a pair of specific arguments are undesirable, as they make it difficult to find general patterns across many different entity pairs.", "For example, the influential ReVerb OpenIE system (Fader et al., 2011) excludes overly-specific relation phrases which apply only to two entities.", "One way to help ensure that relations generalize across entity pairs is to strive for arguments which are as short as possible, a common goal in OpenIE (Stanovsky and Dagan, 2016).", "6 Our method for generating a candidate set is closer to approaches from sentence compression (Knight and Marcu, 2002; Clarke and Lapata, 2008; Filippova and Altun, 2013; Filippova et al., 2015), an NLP task which seeks to make a source sentence shorter while preserving the most important information and producing readable output.", "We show that our sentence compression approach allows us to achieve higher readability than off-the-shelf relation extractors ( 5).", "Sentence compression is often used in traditional extractive summarization to make more ef-ficient use of a budgeted summary length.", "We discuss summarization further in 6, where we consider how existing work might be applied to the problem of selecting K statements from the candidate set.", "6 Methods from the relation extraction literature which seek to deduce facts from extracted relations, such as Riedel et al. (2013), might also help identify useful summaries in future work.", "Relations which imply that other relations are true might make good summaries.", "Traditionally, relation extraction begins with a fixed notion of what constitutes a desirable rela-tion between two arguments, defined by a predefined schema, a syntactic template (Fader et al., 2011), or a collection of seed examples (Angeli et al., 2015).", "The relation extraction task is then to correctly identify spans in which arguments are joined by a relation.", "The relational summarization problem is somewhat different: we begin with a pair of query terms, ( t 1 ) and ( t 2 ) , and we wish to learn the nature of their relationship.", "Therefore, any statement which coherently describes any relationship between the two query terms is potentially of interest, even if it does not match prior expectations of what constitutes a relation.", "We thus approach the candidate set generation task as a specialized form of sentence compression: we attempt to predict if a sentence from the text can be coherently compressed to the form ( t 1 ) r ( t 2 ) .", "Table 2 shows examples of sentences which can and cannot be shortened to this form.", "We use gold standard sentencecompression pairs from the Filippova and Altun (2013) dataset to supervise this prediction.", "In sentence compression corpora, gold standard compressions must be acceptable sentences.", "Therefore, compressions from the dataset which happen to begin and end with a named entity, 7 once extracted from source sentences, can serve as positive examples of acceptable relation statements.", "On the other hand, randomly chosen spans of the form ( t 1 ) r ( t 2 ) , which happen to arise in source sentences, are very often not acceptable as standalone sentences.", "These randomly sampled spans can serve as examples of unacceptable relation statements.", "We then predict acceptability with supervision from known gold acceptable and sampled, presumed incoherent examples.", "8 7 https://github.com/google-research-datasets/ sentence-compression 8 We manually inspect 100 negative examples, selected at random, and find that roughly 80% are in fact incoherent.", "Filtering the original dataset in this manner 9 yields 17,529 positive and 30,266 negative sentences.", "We then downsample negative training examples to create two balanced classes of equal size, and use 81% of data for training, 9% for validation and the remaining 10% for testing.", "Let p ( c = 1 | s , ( t 1 ) r ( t 2 ) ) indicate the probability that a span of form ( t 1 ) r ( t 2 ) extracted from sentence s is coherent.", "We model p ( c = 1 | s , ( t 1 ) r ( t 2 ) ) using logistic regression, with features based on the position of part-of-speech tags and dependency edges in s .", "Specifically, each sentence in the filtered dataset contains a span of the form ( t 1 ) r ( t 2 ) .", "We refer to the tokens in this span as in the compression because a user would see these tokens in a relation statement compressed from s .", "Each sentence also contains spans of tokens which are outside of the compression because they are deleted from the original source sentence to create a relation statement.", "Table 2 displays examples.", "Our feature vector records the counts of how many times each part-of-speech tag in the tagset occurs in the compression and also independently records the counts of how many times each part-of-speech tag occurs out of the compression.", "We refer to the count of each part-of-speech tag in the compression and the count of each part-of-speech tag out of the compression as .", "We also count the occurrence of each possible dependency edge label in the compression, and the count of each possible dependency edge label out of the compression.", "If a label's dependent lies in the compres-9 We also exclude randomly chosen spans which happen to encompass the entire source sentence and exclude randomly chosen spans where ( t 1 ) and ( t 2 ) are joined by only edges of type compound in the dependency graph of the compression (e.g. Coup leader Cedras ...).", "We use CoreNLP version 3.8 to extract enhanced++ Universal Dependencies (Manning et al., 2014; Schuster and Manning, 2016; Nivre et al., 2016).", "We also filter positive and negative examples where the span between ( t 1 ) and ( t 2 ) is longer than J =75 characters, to simulate a space constraint in a user interface.", "Finally, we remove all punctuation from the end of the sentence for both positive and negative examples because all gold positive compressions end in punctuation marks.", "For positive examples, if the compressed version of a sentence deletes tokens between t 1 and t 2 , we replace the span between t 1 and t 2 in the source sentence with the compression.", "sion, we consider the label in the compression.", "10 We refer to these dependency edge counts as .", "Our final feature vector, , is defined as the concatenation of and .", "We implement our model with scikit-learn (Pe-dregosa et al., 2011) and manually tune the inverse regularization constant to the setting, c = 1 , which achieves the highest accuracy on the validation set.", "For evaluation, a sentence is presumed coherent if p ( c = 1 | s , ( t 1 ) r ( t 2 ) ) > .", "5 .", "Using the feature vector we achieve an accuracy of .896 on the test set.", "We also present results using only the and features (table 4) because reliable dependency parses are not available in some settings (Blodgett et al., 2016; Bamman, 2017).", "Two limitations of this approach suggest areas for future work.", "First, in some cases, the relationship between ( t 1 ) and ( t 2 ) might not be expressed in the form, ( t 1 ) r ( t 2 ) , as in Russia and France signed an agreement.", "In order to generate candidate relation statements it would be helpful to lightly rewrite the sentence, as in Russia signed an agreement with France .", "Additionally, a sentence might express a relationship between two terms but be too long to display on a concept map or a snippet box.", "In these cases, it would be helpful to compress the sentence to create a more concise relation statement.", "Any relational summarization system should deliver a high-quality summary when a user queries for two terms.", "Therefore, ideally, a system should generate the largest possible candidate set, without returning incoherent relation statements.", "We thus 10 Enhanced dependencies allow for a token to have more than one incoming edge (i.e., multiple parents).", "evaluate our query-focused generation method in terms of both readability and yield (total relation statements recalled).", "Our method generates three times more relation statements than OpenIE systems, allowing for summarization of two times more query pairs.", "We also achieve higher scores in a test of human coherence judgements (table 5).", "More concretely, we evaluate our compression-based method for generating candidate sets against two relation extractor baselines on two very different corpora: (1) all comments from the large relationships 11 subreddit from June, 2015 September, 2017 12 and (2) a collection of New York Times articles from 1987 to 2007 which mention the country Haiti (Sandhaus, 2008).", "For each corpus, we first find a collection of multiword phrases using the phrasemachine package (Handler et al., 2016) which extracts all multiword, noun phrase terms from the corpus.", "After extracting all terms, we determine the top 100 terms, by count.", "We then examine all nonempty mention sets for all possible combinations of two top terms.", "A mention set is a set of sentences which mention two terms ( 2).", "We examine all mention sets because an investigator should be able to investigate any entity she chooses while analyzing a corpus.", "In subsequent experiments, we require all relation statements be less than or equal to J = 75 characters, which excludes overly verbose relation statements which are unsuitable for many user interfaces.", "Off-the-shelf relation extractors generate 3-tuples from each mention set.", "Some of those 3-tuples might have one argument which is equal to ( t 1 ) and another argument which is equal to ( t 2 ) .", "Each such 3-tuple can be linearized into a string of the form ( t 1 ) r ( t 2 ) to generate a candidate set.", "However, we find that using extractors in this 11 relationships refers to interpersonal relationships 12 https://medium.com/@jason 82699/ pushshift-reddit-api-md-c2d70745c270 1764 manner achieves a low yield (total number of extracted relations).", "A low yield is undesirable both because it limits the number of mention sets which may be summarized and generates fewer relation statements from which to select an optimal relational summary.", "More precisely, we identify the 3-tuples which an OpenIE system extracts from a mention set such that exactly one argument from the triple is equal 13 to ( t 1 ) and exactly one argument from the triple is equal to ( t 2 ) .", "We refer to these 3-tuples as matching.", "We then count (1) the total number of mention sets which contain at least one matching 3-tuple and (2) the total number matching 3-tuples across all mention sets.", "We refer to such counts as the yield of a candidate generation system.", "We measure the yield of Stanford OpenIE (An-geli et al., 2015) and ClausIE (Del Corro and Gemulla, 2013) on the New York Times and Reddit corpora, and compare each system to our compression-based approach ( 4).", "14 We measure these two relation extractors because Stanford OpenIE is included with the popular CoreNLP software and ClausIE achieves the highest recall in two systematic studies of relation extractors (Stanovsky and Dagan, 2016; Zhang et al., 2017).", "We find that, for the great majority of sentences, relation extractors do not extract any relations between ( t 1 ) and ( t 2 ) .", "Moreover, for many mention sets, the number of relations extracted with off-the-shelf systems is often zero.", "We show these results in table 5.", "This suggests that although relation summarization is superficially similar to relation extraction, off-the-shelf extractors are poor tools for creating textual units to summarize mention sets.", "Very often, two terms are related to each other in ways which are simply not captured by relation extractors.", "13 Note that OpenIE systems might not extract the literal string ( t 1 ) or ( t 2 ) as arguments.", "For instance, if ( t 1 ) is Merkel the OpenIE system might extract the argument Angela Merkel.", "If some term and some argument from a relational triple share the same head token in the dependency parse of the sentence we say that they are equal.", "Falke and Gurevych (2017c) employ a similar equality criterion.", "We tokenize with CoreNLP.", "In extremely rare cases, tokenization mismatches between CoreNLP and ClausIE make it impossible to apply this criterion.", "14 For our compression-based approach, we count all cases where p ( c = 1 | s , ( t 1 ) r ( t 2 ) ) > .", "5 as extracting a relation statement.", "Our compression-based method achieves higher yield than off-the-shelf relation extractors.", "However, because all sentences in a mention set include ( t 1 ) and ( t 2 ) , it is always possible to generate a very large candidate set by simply extracting all spans between ( t 1 ) and ( t 2 ) from the mention set, regardless if such relation statements are coherent.", "We examine if gains in yield come at the expense of acceptability by presenting randomly selected relation statements to workers on the platform Figure Eight 15 (formerly Crowdflower) and asking workers to rate the extent to which they agree or disagree as to whether a relation statement is a coherent English sentence on a scale from 1 to 5.", "Each relation statement is shown to three workers in total.", "16 Our approach is broadly similar to the readability experiments reported in Filippova and Altun (2013).", "We solicit 481 total judgements from workers and calculate the mean acceptability score, by method and corpus (table 5).", "Our method achieves the highest mean acceptability score for both corpora.", "Additionally, aggregating judgments across corpora, we observe a statistically significant ( p= 8 x 10 4 ) difference between our method ( =3 . 89 , = 1 . 38 ) and Stanford OpenIE ( = 3 . 33 , = 1 . 46 ) in a two-tailed t-test.", "Our method also achieves a higher mean score than ClausIE ( =3 . 69 , =1 . 44 ) , although the difference is not significant.", "After a relational summarization system generates a candidate set, the next task is selecting the top K candidate statements for inclusion in a summary (figure 2).", "In this work, we do not attempt this summary construction task.", "However, in this section, we analyze the nature of the relational summarization challenge by describing differences among mention sets, and how these differences might affect future efforts at summarization.", "We observe that mention sets are inherently heterogenous.", "Some describe a single, temporally-15", "https://www.figure-eight.com/ 16 We use seven test questions to filter out careless or bad faith responses.", "Workers must answer 70% of test questions correctly to be included in a task's results.", "We construct test questions blindly, without knowledge of the system which generated the relation statement.", "focused event.", "Others describe a consistent, unchanging relationship.", "Still others describe intricate sagas unfolding across time.", "For instance, within the Haiti corpus, one mention set describes events in 1994 when General Cedras fled to the Dominican Republic .", "This mention set is quite different from a set of sentences in the Reddit corpus in which users assert that video games are a deal breaker in interpersonal relationships.", "Figure 3 displays hand-crafted summarizes for these mention sets.", "In general, the properties which guide how a mention set should be summarized are its size , topical diversity , temporal focus and the degree to which the set expresses states or events .", "In this section, we use the notation ( t 1 ) ( t 2 ) to refer to a mention set.", "For instance, New York London would refer to all sentences from a corpus which contain the names of both of these cities.", "Size.", "In general, because many word types in a corpus occur infrequently (Zipf, 1949), the number of sentences which mention ( t 1 ) and ( t 2 ) is often small.", "For instance, of the 320,670 total sentences in the Haiti corpus, only 160 mention Jean-Bertrand Aristide and the United States, which is nonetheless among the larger mention sets in the corpus.", "In general, larger sets often describe complex and noteworthy relationships, which are more difficult to summarize (figure 3c).", "Note that although individual mention sets are often small enough to simply read (unlike in some multi-document summarization settings), summarization of mention sets is still quite useful, as practitioners will often seek to understand many different relationships as they investigate a new topic (e.g. figure 1).", "Topical diversity .", "In general, some mention sets are focused on a single topic, others are more diffuse.", "For instance, after losing power in a second, 2004 coup Haiti's Jean Bertrand Aristide was forced into exile in South Africa.", "The mention set for Jean Bertrand Aristide South Africa contains twelve sentences which (mostly, but not exclusively) describe Aristide's removal from power and life in exile in South Africa from 2004 onwards.", "Detecting and including diverse or complex topics is a classic aim of traditional multi document summarization (e.g. Lin and Hovy (2000)), which might be applied in this new setting.", "Temporal focus .", "In timestamped corpora such as news archives or social media posts, some mention sets are focused within certain time periods; others are spread across the span of the corpus.", "For instance, in the Haiti corpus, General Cedras Dominican Republic are only mentioned together during a few months of 1994 (figure 3b).", "A good summary for this mention set should describe a central event from this time period: when General Cedras fled to the Dominican Republic.", "On the other hand, Jean-Bertrand Aristide United States are mentioned together in 67 months in the corpus, covering a number of important events spread across decades (figure 3c).", "For this mention set, a narrow summary focusing on a single event would be inappropriate.", "Many existing methods specialize in detecting (Chaney et al., 2016), tracking (Allan et al., 1998) and summarizing evolving topics in timestamped documents.", "Some systems focus specifically on summarizing event spikes: both in news (e.g. 1766 video games and I don't want that to be a deal breaker video games was a deal breaker video games is a deal breaker", "(a) A hand-crafted summary for the mention set video games deal breaker .", "The mention set contains many stative descriptions of the relationships between the two terms, indicating that a summary might focus on presenting fixed relationships rather than evolving events.", "Jean-Bertrand Aristide United States", "(c) A hand-crafted summary for the mention set Jean-Bertrand Aristide United States , one of the largest in the Haiti corpus.", "The mention set describes a complex, shifting relationship; at different times over several decades, Aristide was a beneficiary, opponent and critic of the United States.", "Alfonseca et al. (2013)) and on social media (e.g. Nichols et al. (2012)).", "In some cases, the event described in a mention set will even match the loose form of a common narrative template (Chambers and Jurafsky, 2008), such as when the two terms are codefendants at a trial.", "Mention sets which are more temporally diffuse are also more challenging.", "Update summarization refers to summarizing changes introduced by new documents, possibly from a high volume stream (Kedzie et al., 2015).", "This form of summarization is important in cases when a relationship shifts or changes through time, as in figure 3c.", "States or events .", "Mention sets may be coarsely divided into cases where the set expresses a stable state or property of the world in the eyes of the author (e.g. England is a close ally of the US or video games are a deal breaker) and cases where the relation statement expresses a change or event (e.g. Gen. Cedras fled to the Dominican Repub-lic or dad left mom).", "In many interesting cases, the mention set contains a mix of stative and even-tive relation statements which express a narrative, such as America is an ally of South Korea and America sent a destroyer to South Korea .", "Defining (Pustejovsky, 1991), extracting (Aguilar et al., 2014) and determining relationships between events (Hovy et al., 2013) is a challenging research area.", "But a better understanding of states and events would improve future work.", "For instance, if a summary includes the event Jolie divorced Pitt, it does not need to include the stative relation phrase Jolie was married to Pitt.", "To our knowledge, there is no prior work which considers how fine-grained relations between states and events might be employed for summarization.", "MacCartney and Manning (2009) offer a framework which might serve as a useful starting catalog.", "This work defines a problem which lies at the intersection of typically unrelated fields in natural language processing, summarization and relation extraction.", "We present a new method which finds large numbers of natural language expressions which coherently describe relationships.", "We also analyze the challenges of the relational summarization task, by investigating and describing the inherent heterogeneity of mention sets.", "Because of this heterogeneity, we argue that future attempts to summarize relationships will likely require a diversity of models and techniques.", "Thanks to Emma Strubell, Patrick Verga, Haw-Shiuan Chang, Su Lin Blodgett, Katherine Keith and the UMass NLP reading group for helpful discussions and comments.", "Thanks to Brian Dillon for helping us better understand how to collect and interpret human judgements of linguistic acceptability." ]
[ "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "method", "other", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "other", "other" ]
[ "Named entity recognition is a key component of many text processing pipelines and it is thus essential for this component to be robust to different types of input.", "However, domain transfer of NER models with data from multiple genres has not been widely studied.", "To this end, we conduct NER experiments in three predictive setups on data from:", "a) multiple domains;", "b) multiple domains where the genre label is unknown at inference time;", "c) domains not encountered in training.", "We introduce a new architecture tailored to this task by using shared and private domain parameters and multi-task learning.", "This consistently outperforms all other baseline and competitive methods on all three experimental setups, with differences ranging between +1.95 to +3.11 average F1 across multiple genres when compared to standard approaches.", "These results illustrate the challenges that need to be taken into account when building real-world NLP applications that are robust to various types of text and the methods that can help, at least partially, alleviate these issues.", "Accurately identifying named entities and their type in texts is a key processing step for many NLP applications.", "Named entity recognition (NER) is an important component in several tasks including named entity linking (Cucerzan, 2007), co-reference resolution (Ng and Cardie, 2002), question answering (Krishnamurthy and Mitchell, 2015), relation extraction (Culotta and Sorensen, 2004) and usually sits upstream of analytics such as sentiment (Pang and Lee, 2004) or stance (Moham-mad et al., 2016).", "Building robust NER models to accurately tag and adapt to heterogeneous types of text is thus paramount.", "Recent research focused on *Equal Contribution improving the overall performance of NER models on specific data sets.", "Yet NER models show relatively high variance even when trained on the same data (Reimers and Gurevych, 2017) and poorly generalize when tested on data from different genres 1 , especially if these contain entity mentions unseen in the test data (Augenstein et al., 2017; Agarwal et al., 2020).", "Despite this, research on NER models robust to different types of input is usually limited to the standard domain adaptation scenario: a single source domain rich in training data and a single target domain with limited or no training data (Lin and Lu, 2018).", "We argue that this is an over-simplified experimental setup that is not typical for how NER models are used in real-world applications.", "Ideally, NER models use all available data, regardless of genre, and perform inference on data from any genre, even if this was not encountered in training.", "In this scenario, simply pooling all the available data is likely sub-optimal as genre-specific differences in named entity mentions are useful to model.", "Conversely, models limited to only data from the same genre as the test set are likely to underper-form, as using more data is usually beneficial.", "This work introduces three experimental setups for the NER task where models are trained on data from multiple genres and evaluated as follows:", "a) Multi-Domain evaluation is performed across multiple genres, all seen in training.", "b) Multi-Domain with Unknown Domain Labels evaluation is carried out across multiple genres, all seen in training, but the genre label for each document is unknown at inference time.", "c) Zero-shot Domain evaluation is performed on documents from genres unseen in training.", "1 Throughout this paper, we refer by genre to a collection of documents with variations in style or structure that might impact modelling (Santini et al., 2006); we use domain when referring to modeling concepts.", "We propose a neural architecture for NER tailored to these three experimental setups, based on the popular BiLSTM-CRF architecture (Lample et al., 2016).", "We augment the base architecture to learn both domain-specific and independent features through shared and private domain components including projections and CRFs.", "Further, we add a multi-task learning objective for domain prediction to guide this separation.", "This model can perform inference on a text without knowledge of its corresponding domain label by using the shared components.", "We compare this model with several competitive methods that use a similar base architecture while holding the embeddings constant (i.e. GloVe embeddings).", "These include models trained on data from each domain independently, models that pool all data and models that use domain identities as features through to source-target domain adaptation methods.", "Extensive results on all three experimental setups on a collection of data from a total of twelve genres demonstrate that our proposed architecture outperforms all others by a respectable margin.", "Finally, through an error analysis of our results, we aim to understand the contributions of each proposed component and the margins for future improvements.", "Setups for Domain Adaptation Domain adaptation, formulated as learning a single model for the same task across multiple domains, is a well-studied research area in NLP (Chelba and Acero, 2004; Florian et al., 2004; Blitzer et al., 2006; Daume III, 2007).", "The standard setup for domain adaptation is to maximize performance on data from a single low-resource (target) domain, by using data from a single high-resource (source) domain (Blitzer et al., 2007; Peng and Dredze, 2017).", "Extensions consider a single source and multiple different target domains (Yang and Eisenstein, 2015) or multiple sources and a single target domain (Mansour et al., 2009).", "The multi-domain text classification task studied in (Li and Zong, 2008; Wu and Huang, 2015; Chen and Cardie, 2018) is the analogous setup for the text classification task to the first experimental setup we propose for NER.", "Under this setup, training and evaluation is done across data from multiple domains.", "Multi-Domain Adaptation Methods for multi-domain text classification use data fusion either at the feature or classifier level (Li and Zong, 2008), decomposing the classifier into a shared one and multiple domain-specific ones (Wu and Huang, 2015), further guided by a domain discriminator (Chen and Cardie, 2018) which is also used in multi-lingual NER (Chen et al., 2019).", "Further, Mc-Closky et al. (2010) explored sequence tagging tasks on data from unknown domains and Chen and Cardie (2018) experiment with sentiment classification on data from unknown domains, similar to our third experimental setup for NER.", "To the best of our knowledge, our second setup where the domain label is not available at inference time was never explicitly studied.", "We note that most of these approaches make use of additional unlabeled data from each domain to learn domain-specific representations.", "We do not use these resources in our methods, as we assume the end-user of the model is agnostic to the data used in training and wants to run inference without having to provide entire comparable corpora.", "Domain Adaptation for NER Models for domain adaptation in NER using neural architectures were studied recently, albeit mostly for covering the single-source and single-target setup.", "The INIT method trains a model using the source domain data, and its parameters are used to initialize a target model which is fine-tuned on the target data (Mou et al., 2016).", "The MULT method trains jointly one model for each domain with shared parameters (Lee et al., 2018).", "For sequence tagging, one CRF for each of the two domains is used to obtain the predictions (Yang et al., 2017).", "Adaptation can also be made at the embeddings stage (Lin and Lu, 2018) or by using additional unlabeled data from the source domain and out-of-domain annotated data (He and Sun, 2017).", "However, as mentioned above, this assumes that unlabeled training data can be provided for each domain, which may not be realistic.", "The model adds layers between embeddings and the BiLSTM layers, between the BiLSTM and the CRF for the target domain and separate CRF layers, the latter two of which we adapt to our proposed architecture for multi-domain adaptation.", "A hierarchical Bayesian prior approach is used in (Finkel and Manning, 2009) to tie feature weights across domains when information is sparse and also allow the model to take advantage if substantial data is available in one domain.", "Their experiments on NER focused only on three data sets: CoNLL, MUC-6 and MUC-7 and only the first of our three setups.", "A multi-task domain adaptation method for NER and word segmentation is used in (Peng and Dredze, 2017).", "The proposed architecture learns a shared representation across domains and experiments with linear domain projections for each domain to guide learning of shared representations.", "The output of these linear layers is fed to a CRF.", "We adopt the linear domain projection method, but extend this to also include a shared projection, followed by domain-specific CRFs and multi-task learning.", "Finally, another type of domain adaptation is temporal adaptation of models tested on data that is more recent than the training data, when each temporal slice can be considered as a different domain (Rijwhani and Preotiuc-Pietro, 2020).", "This section describes the proposed NER architecture tailored the architecture to our multi-domain experimental setups, which is independent of input embedding representation.", "The basic component of our NER models is an architecture which has reached state-of-the-art performance several times over the last few years (Lam-ple et al., 2016; Peters et al., 2018; Akbik et al., 2018).", "Named entity recognition task is a structured prediction task and earlier statistical approaches are based models like Conditional Random Fields (Lafferty et al., 2001), which rely on features often designed based on domain-specific knowledge (Luo et al., 2015).", "The current dominant approach to the NER task consists of neural architectures based on recurrent neural networks with different choices of input representations (Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016; Peters et al., 2018; Akbik et al., 2018, 2019).", "The input consists of a concatenation of pre-trained word embeddings and character embeddings.", "Character embeddings are trained using an LSTM from randomly initialized vectors as in (Lample et al., 2016).", "Word embeddings are derived from a combination GloVe (Pennington et al., 2014) and FastText (Bojanowski et al., 2017) pre-trained word embeddings, as used in (Ma and Hovy, 2016).", "The choice of embeddings is orthogonal to the architecture and thus, we hold these constant in all experiments.", "ent directions (Huang et al., 2015).", "The outputs of these layers are concatenated and, in order to map the word representation obtained from the LSTM module into the label distribution, passed to a one-layer feed-forward network.", "A Conditional Random Field is applied to the class predictions to jointly assign the sequence tags using a transition matrix.", "This CRF layer improves performance of the model (Lample et al., 2016) as it ensures the output sequence takes into account dependencies between the tags and also models the constraints the output sequence adheres to (e.g. I-PER can not follow B-LOC).", "We propose a new architecture based on the BiLSTMCRF model tailored to the three proposed experimental setups.", "Our proposed architecture enhances the base architecture with three components:", "a) domain -specific and -independent feed-forward layers that process the BiLSTM outputs;", "b) domain -specific and -independent feed forward layers CRFs;", "c) a multi-task learning objective that learns domain labels as an auxiliary task.", "The proposed architecture changes are motivated by the aim of capturing commonalities in which named entities are referred to, in any given genre, while still allowing for the model to tease apart and exploit domain-specific aspects.", "The architecture is also designed to capture these commonalities across label relationships, which can vary across domains.", "In addition, the multi-task objective further assists the model to leverage domain-dependent and -independent components.", "The choice of input representation is orthogonal to the proposed architecture and our extensions to the architecture can be combined with any input representation.", "Private and Shared Layers We rely on the shared-private paradigm where the model learns both a shared representation across all domains and is useful when the domain of the input is unknown or unseen in training, and a private domain representation", "representation that mostly helps tagging in that domain.", "We model the shared and private features at both the feature mapping stage connecting the BiLSTM outputs to the CRF(s) and at the CRF level.", "We expect the features extracted by the BiLSTM layers to model the structure of the input across all domains.", "The feed-forward layers capture the domain-specific and -independent information by using private output layers for each domain and one shared output layer.", "In training, the BiLSTM outputs are projected to both the shared layer and the private layer based on the domain label provided in training.", "The CRF layer is used to make a global decision for the entire tag sequence by modelling label dependencies.", "We expect that this decision is, at least partially, dependent on domain-specific relationships in the label space.", "Hence, each feed-forward layer feeds into either private CRFs (one for each domain) or a shared CRF.", "The separation of the shared and private layers could happen before the CRF stage (late separation) or before the feed-forward layer stage (early separation).", "We investigate the influence of each individual addition on the multi-domain performance in our analysis section through ablation studies.", "Given an input, both the shared and the private parameters are used in learning to predict the output.", "The set of private parameters for each domain are only updated by data from the same domain while the set of shared parameters are updated in a pooled way by taking all available data points in the training stage regardless of the domain characteristics.", "For a given data point, inference can be run either by:", "a) passing it though the private components if the domain label is known;", "b) through the shared components if the domain label in unknown or the domain of the data is unseen in training.", "To this end, the objective function for the private and shared layers is: LNER SP ( x, y ) = LNER S ( x, y ) + LNER P ( x, y ) (1) where LNER S and LNER P stand for the shared layer loss and private layer loss respectively.", "Further, to better guide the learning process, we augment our architecture with a multi-task learning objective.", "Through this, the model learns to predict the domain label of each sample in training as an auxiliary task.", "The architecture uses average pooling on BiLSTM outputs followed by a fully connected layer.", "Finally, softmax is applied over the learned domain feature to obtain a probability distribution of all domain labels.", "The domain classification objective is to minimize the cross-entropy loss L domain ( x, y d ) for an input x with domain label y d .", "The global objective function is the combination of the NER loss function and domain loss: L ( x ; y, y d ) = LNER SP ( x, y ) + L domain ( x, y d ) (2) 4 Experimental setup 4.1 Data We use a collection of data sets spanning eight genres to evaluate our methods.", "In addition, in order to test the feasibility of NER tagging in a zero-shot domain setup, we present additional data covering four other genres.", "Each genre of documents is considered a domain in modelling.", "The data set collection used in learning the multi-domain models (denoted as Open Data' in the rest of the paper) includes the following three data sets: CoNLL 2003 We use the data set released as part of CoNLL 2003 shared task for English (Tjong Kim Sang and De Meulder, 2003), which is arguably the most popular data set for NER and is regularly used as a benchmark for this task.", "This data is a collection of news articles from the Reuters Corpus.", "Twitter The Twitter data set consists of 22,000 tweets representative of multiple English-speaking locales and a variety of topics that span 11 years of Twitter posts (20092019).", "This data was annotated with Organizations (ORG), Persons (PER) and Locations (LOC), using the annotation guidelines used in annotating past data sets (Tjong Kim Sang and De Meulder, 2003) supplemented with examples that are specific to Twitter data.", "OntoNotes (six genres) The OntoNotes data set (Hovy et al., 2006) consists for six different genres annotated, amongst others, with named entities and their types.", "In this data, each genre refers to a different source, which includes newswire (NW), Data Set # Tokens Density Entity Distribution ORG PER LOC CoNLL 2003 302811 14.52% 33.2% 38.8% 28.0% Twitter 227019 8.02% 36.9% 46.5% 16.5% OntoNotes-NW 490738 8.89% 55.1% 21.1% 23.8% OntoNotes-BN 258625 9.06% 27.5% 37.2% 35.3% OntoNotes-MZ 197520 7.84% 28.1% 41.9% 30.0% OntoNotes-BC 239236 5.49% 27.5% 39.8% 32.8% OntoNotes-TC 114463 1.59% 12.3% 45.6% 42.1% OntoNotes-WB 490738 2.17% 25.5% 44.4% 30.1% Zero-Shot-A 103992 3.10% 53.3% 24.4% 22.2% Zero-Shot-B 794199 8.48% 55.5% 28.4% 16.1% Zero-Shot-C 156032 10.06% 64.4% 14.4% 21.1% Zero-Shot-D 27522 5.84% 38.8% 31.9% 29.4% Table 1: Size of data sets, NE density (tokens that are named entities) and distributions across entity types for both open and zero-shot data sets.", "broadcast news (BN), broadcast conversation (BC), magazine (MZ), telephone conversation (TC) and web data (WB) (Pradhan et al., 2013).", "Note that we replace the LOC', FAC' and GPE' tags in the OntoNotes data with the LOC' type in order to be consistent with the definition of LOC' in CoNLL 2003, as also done in (Augenstein et al., 2017).", "Zero Shot Genres Finally, for zero-shot genre NER, we use a collection of internal data sets from four different genres spanning news, closed captions and other documents.", "All four genres were annotated with the same entity types and using similar guidelines.", "Data set statistics are presented in Table 1.", "This shows that all domains are represented with a substantial number of sentences, although the prevalence of named entities and their distribution across types varies, as expected from data sets collected from different sources and genres.", "We also see that the zero-shot domains are significantly different in entity type distribution and density than the training data, making them well-suited for this setting.", "In order to present comparable results across all different data sets, we limit our experiments to three different types of entities that are present in all the above data sets and annotated using similar guidelines: organizations (including geo-political entities and facilities), persons and locations.", "In case other types of entities exist in the data (e.g. MISC for CoNLL, dates for OntoNotes), these are considered to be not an entity, similar to (Augen-stein et al., 2017).", "We used the BIO tagging scheme in all our experiments, as this is arguably the most popular and differences in results between this tagging scheme and others, such as the BILOU scheme, are very small in practice (Ratinov and Roth, 2009).", "We train our models using the open data sets from CoNLL, Twitter and OntoNotes.", "The training, development and test splits of CoNLL and OntoNotes follows the standard splits.", "Similarly, we randomly split the Twitter data set randomly into 70% for training, 10% for development and 20% for testing.", "The final train, dev and test sets are obtained by joining all the respective splits across the individual data sets.", "We evaluate several baseline methods and other competitive methods introduced in past research and compare to our proposed architecture ( MultDomainSPAux ) described in Section 3.2.", "These methods focus on different variations of the neural model architecture, while holding the input embeddings constant.", "InDomain trains an individual NER model using the base architecture for each of the known domains.", "In inference, the corresponding in-domain model is used.", "This allows us to establish the baseline individual domain performance when no information is shared between the domains in training.", "InDomain-DomainClassifier uses the same NER models as the InDomain model.", "The InDomain approach is however unable to directly perform inference on sentences where the domain label is unknown at inference time.", "We thus build a separate domain classifier using a Bi-LSTM recurrent neural network that feeds the final hidden state into a feed-forward network to recognize the domain of a given input sentence and route it to the appropriate InDomain NER model.", "PoolDomain naively pools all available data, disregarding the domain information and trains a model using the base architecture.", "This model thus ignores the domain information when training, albeit uses all available training data.", "Data pooling is the standard baseline in most domain adaptation experiments.", "PoolDomain-Init uses all available data and uses the domain information to train models on data from one domain at once.", "After training on data from each domain, the model uses the weights as initialization for training on next domain.", "This is similar to the INIT strategy for domain adaptation used in (Mou et al., 2016; Lee et al., 2018).", "We perform this weight initialization and fine-tuning process over all the domains consecutively, where the order is defined by the density of entities, starting with the highest one.", "PoolDomain-GradRev trains the base architecture using a gradient reversal layer (Ganin and Lem-pitsky, 2014).", "The gradient reversal technique aims to confuse the domain discriminator while learning NER with the combination of the training data from all domains.", "PoolDomain+DomainFeat trains a base architecture model over all available data and, in addition to the text-based features, the domain information is explicitly represented by passing it through a domain embedding.", "This is appended to the word-level features that are used as input to the BiLSTM layers.", "The domain embeddings are randomly initialized.", "MultDomain-SP extends the MULT method (Yang et al., 2017) to the multi-domain setup.", "This method uses a domain-specific CRF for each domain and a shared CRF for all domains.", "Both the BiLSTM and the feed-forward layers are shared across all domains.", "Inference can be done either through the private layer corresponding to the domain of the input denoted as MultDomain-MultCRF (P) or through the shared layer denoted as MultDomain-MultCRF (S) in which case this can be used when the domain label is unknown in inference.", "For our experiments, we largely follow the training and evaluation procedure used in (Akbik et al., 2018).", "As hyperparameters, we follow most suggestions outlined in the in-depth study on model robustness (Reimers and Gurevych, 2017).", "Our training uses 256 hidden states for BiLSTM with mini-batch size of 32.", "The model parameters are updated using back-propagation and Adam optimizer (Kingma and Ba, 2014).", "The learning rate is 1 e 3 with weight decay value 1 e 5 .", "The model is regularized with a locked dropout rate of 0.5.", "We use 300-dimensional pre-trained word embeddings as described in Section 3.1, whereas the character LSTM is randomly initialized and has a hidden dimension of 64.", "The embeddings are updated on the training data.", "When training the domain features together with the NER ( PoolDomain+DomainFeat ), we set the domain embedding size to 128.", "We train all models for 20 epochs and report the results for the model performing best on the joint development set of the open data set collection.", "In this section, we present and compare the results of all the methods introduced previously.", "Experiments are conducted first on the open data collection introduced in Section 4.1 in the Multi-Domain and Multi-Domain with Unknown Label setups.", "Following, we evaluate the performance of our model on the data used for zero-shot genre NER.", "The goal of these experiments is to examine the NER performance across the three proposed experimental setups which focus on model generalizability across multiple domains.", "We note that the results below can not be directly compared to the state-of-the-art results on each data set, as we restrict the entity types to PER, ORG, LOC, such that these types are constant across all data sets.", "First, we compare models when assuming the domain label of each test document is known at inference time.", "The results are listed in Table 2.", "Our proposed method MultDomain-SP-Aux (P) obtains the best results across the entire test collection in both micro-average (+0.43) and macro-average (+1.94) compared to all other approaches and performs best on 7 out of the 8 domains.", "The second best method is the PoolDo-main+DomainFeat which uses the domain feature as input.", "Our method consistently surpasses the in-domain classifiers ( InDomain ) on micro-average (+1.48) and macro-average (+3.11), showing the limitations of naive modeling approaches.", "Although increases exist across all domains, these are most prominent in domains like TC (+5.36) that have a low density of named entities and where indomain models have access to limited amounts of data.", "However, the in-domain performance is better than the pooled method of training, which shows consistent drops in performance on some domains (-8.69 on WB, -6.77 on BC, 1.98 on CoNLL), where information from other domains did not ben-efit the model.", "We now focus on the experimental setup where domain labels are unknown for each data point at inference time.", "This is akin to a setup where the user is agnostic to the data the model was trained on.", "As only a subset of the models can perform inference in this scenario, the results are a subset of those in Table 2.", "Our model MultDomain-SP-Aux (S) gains the best overall performance in this setup, with 1 .", "95 macro-average F1 increase over the next best method ( InDomain+DomainClassifier ).", "The other standard baseline for domain adaptation ( PoolDomain ) obtains a similar performance ( 2 . 19 compared to our method) to the in-domain approach, which shows the benefits of multi-domain adaptation.", "PoolDomain-Init is performing overall poorly, which shows that the INIT transfer learning strategy that is somewhat effective for source-target domain adaptation does not work well in the multi-domain setup.", "Our intuition is that this technique is unable to learn robust features sequentially across N domains, as it performs poorly on the initial trained domains.", "PoolDomain-GradRev gains relatively weak performance overall, lower than the in-domain baseline.", "Finally, we show the results on the experimental setup where the test data is the four Zero-Shot Genres', which were not used in during training.", "Table 3 shows the experimental results of all methods that can run inference with unknown domain Models Zero-Shot Genres MAvg A B C D InDomain+DomainClassifier 47.16 60.04 62.00 59.50 57.17 PoolDomain 52.61 62.53 63.53 61.55 60.05 PoolDomain-Init 24.38 36.92 47.13 19.47 31.98 PoolDomain-GradRev 49.48 68.97 67.95 57.41 60.95 MultDomain-SP (S) 50.9 72.27 68.19 61.86 63.30 MultDomain-SP-Aux (S) 54.50 67.77 70.30 64.02 64.15 Table 3: Evaluation results on data from genres unseen in training.", "labels, as we assume that in this setup, the end-user does not have knowledge about the domains used in training and which of these are most similar to the test point.", "Results show that our proposed method obtains again the best results, with a consistent margin of 2.24 macro-average F1 improvement over the next method.", "Pooling all data ( PoolDomain ) obtains better performance than building in-domain classifiers with domain classification ( InDomain+DomainClassifier ) unlike in the other setups.", "This also shows that the zero-shot domains we used are indeed different to any of the ones in training and pooling all data manages to build a slightly more robust model than individual ones trained on less data.", "The in-domain models perform 5.21 F1 points lower than our approach, the largest gap in all experimental setups, highlighting the robustness of the multi-domain modeling approach.", "The MultDomain-SP (S) model is second best, and as this is the base for our method, we discuss its performance in the ablation study from the next section.", "We first focus on understanding the impact of each component added to our proposed method over the base architecture through an ablation study.", "Table 4 shows results using the private layer ( MultDomain-SP-Aux (P) ) when each of the three components are alternatively turned off: Shared-Private Linear layer, Shared-Private CRF and the domain prediction auxiliary task.", "Shared vs. Shared-Private CRF With the rest of the architecture fixed, the results show that the shared-private CRF performs close to the shared CRF when the shared linear layer is used (80.08 vs. 80.16; 82.04 vs. 82.74; all comparisons in this section are on macro-average).", "However, once we use a separate linear layer between the BiLSTM and each CRF, the difference between having the shared and the shared-private CRFs increases drastically (81.36 vs. 83.11; 82.30 vs. 84.68).", "With only this late separation, the inputs to CRF decoders are still domain-independent features, which makes it hard for the linear CRF to adapt.", "When the inputs are already domain-dependent, the linear CRF can better use this information in performing the joint inference of the sequence.", "We note that only using shared-private CRF with the base architecture is equivalent to the MultDomain-SP method (Yang et al., 2017).", "Shared vs. Shared-Private Linear Projections The results show that regardless of the other parameters, adding shared and private linear layers between the BiLSTM layers and the CRF(s) is always beneficial (80.08 vs. 81.36; 80.16 vs. 83.11; 82.04 vs. 82.30; 82.74 vs. 84.68).", "The improvements are relatively larger when combined with shared and private CRF, as previously seen.", "Multi-Task Learning of Domain Labels Finally, we compare the impact of adding the multi-task learning objective.", "We find that, similar to the linear layers, adding the domain prediction task is always beneficial for the model with the increase being larger if is only a shared linear layer.", "We expect that the two tasks at different levels of granularity rely on shared structure in the original semantic space.", "The document-level domain labels can help regularize the training, providing generic information about which low-level features are valuable to entity-level recognition.", "In order to understand the limitations of the multi-domain setup, we study whether the models we can build from the available data could theoretically achieve better overall performance.", "We use an oracle-based selection technique on the in-domain models to select, after the prediction and using the gold labels the model which performed best for each test instance, as selected using F1 score or, if there are no entities, the model with most O predictions.", "If multiple models are tied, we choose one at random.", "The oracle thus provides the counterfactually Optimal strategy of model selection for each test instance and represents an upper bound on strategies relying on InDomain models.", "Table 5 compares the oracle strategy predictions with the InDomain+DomainClassifier and the MultDomain-SP-Aux model.", "The results show that even though our model improves substantially over the in-domain models, an oracle selection method would push performance much higher (+6.73 F1 on the open data).", "This highlights both the variability of NER models trained on different data sets and that there is potentially more room for improvements in the multi-domain setup.", "The Supplementary Material shows a breakdown of the domain prediction labels for three methods: domain classification, domain prediction in the proposed MultDomain-SP-Aux model and the oracle in-domain choice on gold data.", "The oracle strategy selects the predictions from all in-domain models.", "Based on this, we analyzed the performance of each individual in-domain model when tested on all domains in Table 6.", "We find that although the Oracle strategy uses a mix of models, any model alone is unable to generalize to other domains (67.19 vs. 84.68 best InDomain model compared to the best overall model).", "In the zero-shot genres, the Twitter model performs close to the MultDomain-SP-Aux model (-0.56 F1), albeit it is 24 F1 lower on the multi-domain setup.", "This reinforces that learning shared domain features as opposed to learning individual models helps boost performance and is more robust to different types of inputs.", "Finally, we compare the runtime difference across various methods listed in the experiment section to test the practical implications of using our pro-Auxiliary", "posed multi-domain modelling approach.", "In test phase, we set the batch size as 128.", "Table 7 shows the average time of inference time used for each model.", "Our proposed model architecture takes 0.15 ms (33% increase) longer for inference than InDomain or PoolDomain models, which is a result of more model parameters.", "However, our proposed architecture is still 0.19 ms faster than using the InDomain+DomainClassifier approach.", "In addition to inference runtime, we also find that the training time is not significantly more than the combined training time of N in-domain models.", "The main additions are that of the shared layers and the auxiliary task to the components of the N in-domain models and is thus a constant addition in the number of parameters to the total of N indomain models.", "Hence, the model would scale by a constant with respect to the number of input domains (N+1 number of components, where N is the number of domains).", "This should allow our proposed model to scale to a large number of domains.", "This highlights that the proposed MultDomain SPAux model is a viable option for real-world applications.", "Robustness of NLP models is essential to their wider adoption and usability.", "Existing NER approaches are widely faced with limited scalability when applied to data that spans multiple domains.", "This paper introduced three experimental setups that provide a framework for evaluating the robustness of NER models.", "These include learning from data in multiple domains and testing on all domains, when the domain label of the test point is unknown and when this does not belong to a domain seen in training.", "Building on past research, we proposed a new neural architecture that achieves substantial improvements of up to 5 F1 points when compared to standard methods.", "Future work will focus on domain adaptation at the embedding layer." ]
[ "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "method", "abstain", "objective", "objective", "abstain", "method", "abstain", "objective", "objective", "other", "other", "other", "abstain", "other", "other", "abstain", "method", "abstain", "method", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "other", "other", "other", "objective", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain" ]
[ "Georgia Tech [email protected]", "Microsoft Research { xiaodl,jfgao } @microsoft.com", "Georgia Tech [email protected]", "Abstract Transfer learning has fundamentally changed the landscape of natural language processing (NLP).", "Many state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks.", "However, due to limited data resources from downstream tasks and the extremely high complexity of pre-trained models, aggressive fine-tuning often causes the fine-tuned model to overfit the training data of downstream tasks and fail to generalize to unseen data.", "To address such an issue in a principled manner, we propose a new learning framework for robust and efficient fine-tuning for pre-trained models to attain better generalization performance.", "The proposed framework contains two important ingredients: 1. Smoothness-inducing regularization, which effectively manages the complexity of the model; 2. Bregman proximal point optimization, which is an instance of trust-region methods and can prevent aggressive updating.", "Our experiments show that the proposed framework achieves new state-of-the-art performance on a number of NLP tasks including GLUE, SNLI, SciTail and ANLI.", "Moreover, it also outperforms the state-of-the-art T5 model, which is the largest pre-trained model containing 11 billion parameters, on GLUE.", "1 1 Introduction The success of natural language processing (NLP) techniques relies on huge amounts of labeled data in many applications.", "However, large amounts of labeled data are usually prohibitive or expensive to obtain.", "To address this issue, researchers have resorted to transfer learning.", "Transfer learning considers the scenario, where we have limited labeled data from the target domain for a certain task, but we have relevant tasks Work was done during an internship at Microsoft Dynamics 365 AI.", "1 https://github.com/namisan/mt-dnn with a large amount of data from different domains (also known as out-of-domain data).", "The goal is to transfer the knowledge from the high-resource domains to the low-resource target domain.", "Here we are particularly interested in the popular two-stage transfer learning framework (Pan and Yang, 2009).", "The first stage is pre-training, where a high-capacity model is trained for the out-of-domain high-resource relevant tasks.", "The second stage is fine-tuning, where the high-capacity model is adapted to the low-resource task in the target domain.", "For many applications in NLP, most popular transfer learning methods choose to pre-train a large language model, e.g., ELMo (Peters et al., 2018), GPT (Radford et al., 2019) and BERT (De-vlin et al., 2019).", "Such a language model can capture general semantic and syntactic information that can be further used in downstream NLP tasks.", "The language model is particularly attractive, because it can be trained in a completely unsupervised manner with huge amount of unlabeled data, which are extremely cheap to fetch from internet nowadays.", "The resulting extremely large multi-domain text corpus allows us to train huge language models.", "To the best of our knowledge, by far the largest language model, T5, has an enormous size of about 11 billion parameters (Raffel et al., 2019).", "For the second fine-tuning stage, researchers adapt the pre-trained language model to the target task/domain.", "They usually replace the top layer of the language model by a task/domain-specific sub-network, and then continue to train the new model with the limited data of the target task/domain.", "Such a fine-tuning approach accounts for the low-resource issue in the target task/domain, and has achieved state-of-the-art performance in many popular NLP benchmarks (De-vlin et al., 2019; Liu et al., 2019c; Yang et al., 2019; Lan et al., 2019; Dong et al., 2019; Raffel et al., 2019).", "Due to the limited data from the target task/domain and the extremely high complexity of the pre-trained model, aggressive fine-tuning often makes the adapted model overfit the training data of the target task/domain and therefore does not generalize well to unseen data.", "To mitigate this issue, the fine-tuning methods often rely on hyper-parameter tuning heuristics.", "For example, Howard and Ruder (2018) use a heuristic learning rate schedule and gradually unfreeze the layers of the language model to improve the fine-tune performance; Peters et al. (2019) give a different suggestion that they only adapt certain layers and freeze the others; (Houlsby et al., 2019; Stickland and Murray, 2019) propose to add additional layers to the pre-trained model and fine-tune both of them or only the additional layers.", "However, these methods require significant tuning efforts.", "To fully harness the power of fine-tuning in a more principled manner, we propose a new learning framework for robust and efficient fine-tuning on the pre-trained language models through regularized optimization techniques.", "Specifically, our framework consists of two important ingredients for preventing overfitting: (I) To effectively control the extremely high complexity of the model, we propose a Smoothness-inducing Adversarial Regularization technique.", "Our proposed regularization is motivated by local shift sensitivity in existing literature on robust statistics.", "Such regularization encourages the output of the model not to change much, when injecting a small perturbation to the input.", "Therefore, it enforces the smoothness of the model, and effectively controls its capacity (Mohri et al., 2018).", "(II)", "To prevent aggressive updating , we propose a class of Bregman Proximal Point Optimization methods.", "Our proposed optimization methods introduce a trust-region-type regularization (Conn et al., 2000) at each iteration, and then update the model only within a small neighborhood of the previous iterate.", "Therefore, they can effectively prevent aggressive updating and stabilize the fine-tuning process.", "We compare our proposed method with several state-of-the-art competitors proposed in (Zhu et al., 2020; Liu et al., 2019b,c; Lan et al., 2019; Raffel et al., 2019) and show that our proposed method significantly improves the training stability and generalization, and achieves comparable or better performance on multiple NLP tasks.", "We highlight that our single model with 356M parameters (without any ensemble) can achieve three state-of-the-art results on GLUE, even compared with all existing ensemble models and the T5 model (Raffel et al., 2019), which contains 11 billion parameters.", "Furthermore, we also demonstrate that the proposed framework complements with SOTA fine-tuning methods (Liu et al., 2019b) and outperforms the T5 model.", "We summarize our contribution as follows: 1. We introduce the smoothness-inducing adversarial regularization and proximal point optimization into large scale language model fine-tuning; 2. We achieve state-of-the-art results on several popular NLP benchmarks (e.g., GLUE, SNLI, SciTail, and ANLI).", "Notation: We use f ( x ; ) to denote a mapping f associated with the parameter from input sentences x to an output space, where the output is a multi-dimensional probability simplex for classification tasks and a scalar for regression tasks.", "A denotes the projection operator to the set A .", "DKL ( P || Q ) = (cid:80) k p k log( p k /q k ) denotes the KL-divergence of two discrete distributions P and Q with the associated parameters of p k and q k , respectively.", "The transformer models were originally proposed in Vaswani et al. (2017) for neural machine translation.", "Their superior performance motivated Devlin et al. (2019) to propose a bidirectional transformer-based language model named BERT.", "Specifically, Devlin et al. (2019) pre-trained the BERT model using a large corpus without any human annotation through unsupervised learning tasks.", "BERT motivated many follow-up works to further improve the pre-training by introducing new unsupervised learning tasks (Yang et al., 2019; Dong et al., 2019; Joshi et al., 2020), enlarging model size (Lan et al., 2019; Raffel et al., 2019), enlarging training corpora (Liu et al., 2019c; Yang et al., 2019; Raffel et al., 2019) and multi-tasking (Liu et al., 2019a,b).", "The pre-trained language model is then adapted to downstream tasks and further fine-tuned.", "Specifically, the top layer of the language model can be replaced by a task-specific layer and then continue to train on downstream tasks.", "To prevent overfitting, existing heuristics include choosing a small learning rate or a triangular learning rate schedule, and a small number of iterations, and other fine-tuning tricks mentioned in (Howard and Ruder, 2018; Peters et al., 2019; Houlsby et al., 2019; Stickland and Murray, 2019).", "Our proposed regularization technique is related to several existing works (Miyato et al., 2018; Zhang et al., 2019; Shu et al., 2018).", "These works consider similar regularization techniques, but target at other applications with different motivations, e.g., semi-supervised learning, unsupervised domain adaptation and harnessing adversarial examples in image classification.", "Our proposed optimization technique covers a large class of Bregman proximal point methods in existing literature on optimization, including vanilla proximal point method proposed in Rockafellar (1976), generalized proximal point method (Teboulle, 1997; Eckstein, 1993), accelerated proximal point method, and other variants (Guler, 1991, 1992; Parikh et al., 2014).", "There is a related fine-tuning method FreeLB Zhu et al. (2020), which adapted a robust adversarial training method.", "However, our framework focuses on the local smoothness, leading to a significant performance improvement.", "More discussion and comparison are provided in Section 4.", "We describe the proposed learning framework SMART for robust and efficient fine-tuning of pre-trained language models.", "Our framework consists of two important ingredients: SM oothness-inducing A dversarial R egularization and BR egman p R oximal poin T op T imization 2 .", "We propose to impose an explicit regularization to effectively control the model complexity at the fine-tuning stage.", "Specifically, given the model f ( ; ) and n data points of the target task denoted by { ( x i , y i ) } ni =1 , where x i 's denote the embedding of the input sentences obtained from the first embedding layer of the language model and y i 's are the associated labels, our method essentially solves the following optimization for fine-tuning: min F ( ) = L ( ) + s R s ( ) , (1) where L ( ) is the loss function defined as L ( ) = 1 n (cid:80) ni =1 (cid:96) ( f ( x i ; ) , y i ) , 2 The complete name of our proposed method is SMAR 3 T 2 , but we use SMART for notational simplicity.", "and (cid:96) ( , ) is the loss function depending on the target task, s > 0 is a tuning parameter, and R s ( ) is the smoothness-inducing adversarial regularizer.", "Here we define R s ( ) as R s ( ) = 1 n n (cid:88) i =1 max (cid:107) (cid:101) x i x i (cid:107) p (cid:15) (cid:96) s ( f ( (cid:101) x i ; ) , f ( x i ; )) , where (cid:15) > 0 is a tuning parameter.", "Note that for classification tasks, f ( ; ) outputs a probability simplex and (cid:96) s is chosen as the symmetrized KL-divergence, i.e., (cid:96) s ( P, Q ) = DKL ( P || Q ) + DKL ( Q || P ); For regression tasks, f ( ; ) outputs a scalar and (cid:96) s is chosen as the squared loss, i.e., (cid:96) s ( p, q ) = ( p q ) 2 .", "Note that the computation of R s ( ) involves a maximization problem and can be solved efficiently by projected gradient ascent.", "We remark that the proposed smoothness-inducing adversarial regularizer was first used in Miyato et al. (2018) for semi-supervised learning with p = 2 , and then in Shu et al. (2018) for unsupervised domain adaptation with p = 2 , and more recently in Zhang et al. (2019) for harnessing the adversarial examples in image classification with p = .", "To the best of our knowledge, we are the first applying such a regularizer to fine-tuning of pre-trained language models.", "The smoothness-inducing adversarial regularizer is essentially measuring the local Lipschitz continuity of f under the metric (cid:96) s .", "More precisely speaking, the output of f does not change much if we inject a small perturbation ( (cid:96) p norm bounded by (cid:15) ) to x i .", "Therefore, by minimizing the objective in (1), we can encourage f to be smooth within the neighborhoods of all x i 's.", "Such a smoothness-inducing property is particularly helpful to prevent overfitting and improve generalization on a low resource target domain for a certain task.", "An illustration is provided in Figure 1. Note that the idea of measuring the local Lipschitz continuity is similar to the local shift sensitivity criterion in existing literature on robust statistics, which dates back to 1960's (Hampel, 1974; Huber, 2011).", "This criterion has been used to characterize the dependence of an estimator on the value of one of the sample points.", "We propose to develop a class of Bregman proximal point optimization methods to solve (1).", "Such optimization methods impose a strong penalty at", "each iteration to prevent the model from aggressive update.", "Specifically, we use a pre-trained model as the initialization denoted by f ( ; 0 ) .", "At the ( t + 1) -th iteration, the vanilla Bregman proximal point (VBPP) method takes t +1 = argmin F ( ) + D Breg ( , t ) , (2) where > 0 is a tuning parameter, and D Breg ( , ) is the Bregman divergence defined as D Breg ( , t ) = 1 n (cid:80) ni =1 (cid:96) s ( f ( x i ; ) , f ( x i ; t )) , where (cid:96) s is defined in Section 3.1.", "As can be seen, when is large, the Bregman divergence at each iteration of the VBPP method essentially serves as a strong regularizer and prevents t +1 from deviating too much from the previous iterate t .", "This is also known as the trust-region type iteration in existing optimization literature (Conn et al., 2000).", "Consequently, the Bregman proximal point method can effectively retain the knowledge of the out-of-domain data in the pre-trained model f ( ; 0 ) .", "Since each subproblem (2) of VBPP does not admit a closed-form solution, we need to solve it using SGD-type algorithms such as ADAM.", "Note that we do not need to solve each subproblem until convergence.", "A small number of iterations are sufficient to output a reliable initial solution for solving the next subproblem.", "Moreover, the Bregman proximal point method is capable of adapting to the information geometry (See more details in Raskutti and Mukherjee (2015)) of machine learning models and achieving better computational performance than the standard proximal point method (i.e., D Breg ( , t ) = (cid:107) t (cid:107) 2 2 ) in many applications.", "Acceleration by Momentum .", "Similar to other optimization methods in existing literature, we can accelerate the Bregman proximal point method Algorithm 1 SMART: We use the smoothness-inducing adversarial regularizer with p = and the momentum Bregman proximal point method.", "Notation: For simplicity, we denote g i ( (cid:101) x i , s ) = 1 |B| (cid:80) x i B (cid:101) x (cid:96) s ( f ( x i ; s ) , f ( (cid:101) x i ; s )) and AdamUpdate B denotes the ADAM update rule for optimizing (3) using the mini-batch B ; A denotes the projection to A .", "Input: T : the total number of iterations, X : the dataset, 0 : the parameter of the pre-trained model, S : the total number of iteration for solving (2), 2 : the variance of the random initialization for (cid:101) x i 's, T (cid:101) x : the number of iterations for updating (cid:101) x i 's, : the learning rate for updating (cid:101) x i 's, : momentum parameter.", "1: (cid:101) 1 0 2: for t = 1 ,", ".., T do 3: 1 t 1 4: for s = 1 ,", ".., S do 5: Sample a mini-batch B from X 6: For all x i B , initialize (cid:101) x i x i + i with i N (0 , 2 I ) 7: for m = 1 ,", ".., T (cid:101) x do 8: (cid:101) g i g i ( (cid:101) x i , s ) (cid:107) g i ( (cid:101) x i , s ) (cid:107) 9: (cid:101) x i (cid:107) (cid:101) x i x (cid:107) (cid:15) ( (cid:101) x i + (cid:101) g i ) 10: end for 11: s +1 AdamUpdate B ( s ) 12: end for 13: t S 14: (cid:101) t +1 (1 ) S + (cid:101) t 15: end for Output: T by introducing an additional momentum to the update.", "Specifically, at the ( t + 1) -th iteration, the momentum Bregman proximal point (MBPP) method takes t +1 = argmin F ( ) + D Breg ( , (cid:101) t ) , (3) where (cid:101) t = (1 ) t + (cid:101) t 1 is the exponential moving average and (0 , 1) is the momentum parameter.", "The MBPP method is also called the Mean Teacher method in existing literature (Tarvainen and Valpola, 2017) and has been shown to achieve state-of-the-art performance in popular semi-supervised learning benchmarks.", "For convenience, we summarize the MBPP method in Algorithm 1. 4 Experiment Main Results We demonstrate the effectiveness of SMART for fine-tuning large language models using GLUE (Wang et al., 2018) by comparing with existing state-of-the-art methods.", "Dataset details can be found in Appendix A. 4.1 Implementation Details Our implementation of SMART is based on BERT 3 (Wolf et al., 2019), RoBERTa 4 (Liu et al., 2019c), MT-DNN 5 (Liu et al., 2020b) and HNN 6 .", "We used ADAM (Kingma and Ba, 2014) and RADAM (Liu et al., 2020a) as our optimizers with a learning rate in the range { 1 10 5 , 2 10 5 , 3 10 5 , 5 10 5 } and a batch size { 16 , 32 , 64 } .", "The maximum number of epochs was set to 6 .", "A linear learning rate decay schedule with warm-up of 0 .", "1 was used, unless stated otherwise.", "We also set the dropout rate of all the task specific layers as 0 .", "1 , except 0 .", "3 for MNLI and 0 .", "05 for CoLA.", "To avoid gradient exploding, we clipped the gradient norm within 1 .", "All the texts were tokenized using wordpieces and were chopped to spans no longer than 512 tokens.", "For SMART, we set the perturbation size (cid:15) = 10 5 and = 10 5 .", "We set = 1 and s { 1 , 3 , 5 } .", "The learning rate in Algorithm 1 is set to 10 3 .", "We set = 0.99 for the first 10% of the updates ( t 0 . 1 T ) and = 0 .", "999 for the rest of the updates ( t > 0 . 1 T ) following (Tar-vainen and Valpola, 2017).", "Lastly, we simply set S = 1 , T (cid:101) x = 1 in Algorithm 1. 4.2 GLUE Main Results We compare SMART with a range of strong baselines including large pre-trained models and approaches with adversarial training, and a list of state-of-the-art models that have been submitted to the GLUE leaderboard.", "SMART is a generic framework, we evaluate our framework on two pre-trained models, the BERTBASE model (Devlin et al., 2019) and the RoBERTa LARGE model (Liu et al., 2019c), which are available publicly.", "Most of our analyses are done with the BERTBASE to make our results comparable to other work, since BERTBASE has been widely used as a baseline.", "To make our result comparable to other state-of-the-art models, we also evaluate the framework on the 3 https://github.com/huggingface/transformers 4 https://github.com/pytorch/fairseq 5 https://github.com/namisan/mt-dnn 6 https://github.com/namisan/mt-dnn/tree/master/hnn RoBERTa LARGE model.", "BERT (Devlin et al., 2019): This is the BERTBASE model released by the authors.", "In Devlin et al. (2019), authors only reported the development results on a few tasks, thus we reproduced the baseline results, which are denoted by BERT ReImp .", "RoBERTa (Liu et al., 2019c): This is the RoBERTa LARGE released by authors, and we present the reported results on the GLUE dev.", "PGD, FreeAT, FreeLB (Zhu et al., 2020): They are three adversarial training approaches built on top of the RoBERTa LARGE .", "SMART: our proposed method as described in section 3.", "We use both the BERTBASE model (SMARTBERT ) and the RoBERTa LARGE model (SMART RoBERTa ) as the pretrained model to evaluate the effectiveness of SMART.", "The main results are reported in Table 1. This table can be clustered into two groups based on different pretrained models: the BERTBASE model (the first group) and the RoBERTa LARGE model (the second group).", "The detailed discussions are as follows.", "For a fair comparison, we reproduced the BERT baseline (BERT ReImp ), since several results on the GLUE development set were missed.", "Our reimplemented BERT baseline is even stronger than the originally reported results in Devlin et al. (2019).", "For instance, the reimplemented model obtains 84.5% (vs. 84.4%) on MNLI in-domain development in terms of accuracy.", "On SST-2, BERT ReImp outperforms BERT by 0.2% (92.9% vs. 92.7%) accuracy.", "All these results demonstrate the fairness of our baselines.", "Comparing with two strong baselines BERT and RoBERTa 7 , SMART, including SMARTBERT and SMART RoBERTa , consistently outperforms them across all 8 GLUE tasks by a big margin.", "Comparing with BERT, SMARTBERT obtained 85.6% (vs. 84.5%) and 86.0% (vs. 84.4%) in terms of accuracy, which is 1.1% and 1.6% absolute improvement, on the MNLI in-domain and out-domain settings.", "Even comparing with the state-of-the-art model RoBERTa, SMART RoBERTa improves 0.8% (91.1% vs. 90.2%) on MNLI in-domain development set.", "Interestingly, on the 7 In our experiments, we use BERT referring the BERTBASE model, which has 110 million parameters, and RoBERTa referring the RoBERTa LARGE model, which has 356 million parameters, unless stated otherwise.", "MNLI task, the performance of SMART on the out-domain setting is better than the in-domain setting, e.g., (86.0% vs. 85.6%) by SMARTBERT and (91.3% vs. 91.1%) by SMART RoBERTa , showing that our proposed approach alleviates the domain shifting issue.", "Furthermore, on the small tasks, the improvement of SMART is even larger.", "For example, comparing with BERT, SMARTBERT obtains 71.2% (vs. 63.5%) on RTE and 59.1% (vs. 54.7%) on CoLA in terms of accuracy, which are 7.7% and 4.4% absolute improvement for RTE and CoLA, respectively; similarly, SMART RoBERTa outperforms RoBERTa 5.4% (92.0% vs. 86.6%) on RTE and 2.6% (70.6% vs. 68.0%) on CoLA.", "We also compare SMART with a range of models which used adversarial training such as FreeLB.", "From the bottom rows in Table 1, SMART outperforms PGD and FreeAT across the all 8 GLUE tasks.", "Comparing with the current state-of-the-art adversarial training model, FreeLB, SMART outperforms it on 6 GLUE tasks out of a total of 8 tasks (MNLI, RTE, QNLI, MRPC, SST-2 and STS-B) showing the effectiveness of our model.", "Table 2 summarizes the current state-of-the-art models on the GLUE leaderboard.", "SMART obtains a competitive result comparing with T5 (Raf-fel et al., 2019), which is the leading model at the GLUE leaderboard.", "T5 has 11 billion parameters, while SMART only has 356 millions.", "Among this super large model (T5) and other ensemble models (e.g., ALBERT, ALICE), SMART, which is a single model, still sets new state-of-the-art results on SST-2, MRPC and STS-B.", "By combining with the Multi-task Learning framework (MT-DNN), MT-DNN-SMART obtains new state-of-the-art on GLUE, pushing the GLUE benchmark to 89.9%.", "More discussion will be provided in Section 5.3.", "In this section, we first analyze the effectiveness of each component of the proposed method.", "We also study that whether the proposed method is complimentary to multi-task learning.", "We further extend SMART to domain adaptation and use both SNLI (Bowman et al., 2015) and SciTail (Khot et al., 2018) to evaluate the effectiveness.", "Finally, we verified the robustness of the proposed method on ANLI (Nie et al., 2019).", "Note that due to the limitation of time and computational resources, all the experiments reported below are based on the BERTBASE model.", "In this section, we study the importance of each component of SMART: smoothness-inducing adversarial regularization and Bregman proximal point optimization.", "All models in this study used the BERTBASE as the encoder for fast training.", "Furthermore, we also include the BERTBASE model as an additional baseline for a fair comparison.", "SMART denotes the proposed model.", "Then we set s to 0, which denotes as -R s .", "The model with = 0 is noted as -D Breg .", "The results are reported in Table 3.", "It is expected that the removal of either component (smooth regularization or proximal point method) in SMART would result in a performance drop.", "For example, on MNLI, removing smooth regularization leads to a 0.8% (85.6% vs. 84.8) performance drop, while removing the Breg proximal point optimization, results in a performance drop of 0.2% (85.6% vs. 85.4%).", "It demonstrates that these two components complement each other.", "Interestingly, all three proposed models outperform the BERT baseline model demonstrating the effectiveness of each module.", "Moreover, we obersere that the generalization performance benefits more from SMART on small datasets (i.e., RTE and MRPC) by preventing overfitting.", "To understand why SMART improves the performance, we analyze it on the ambiguous samples of MNLI dev set containing 3 classes, where each sample has 5 annotations.", "Based on the degree of agreement between these annotations, we divide the samples into 4 categories: 1) 5/0/0 all five annotations are the same; 2) 4/1/0 four annotations are the same; 3) 3/2/0 three annotations are the same and the other two annotations are the same; 4) 3/1/1 three annotations are the same and the other two annotations are different.", "Figure 2 summarizes the results in terms of both accuracy and KL-divergence: 1 n (cid:80) ni =1 (cid:80) 3 j =1 p j ( x i ) log( f j ( x i )) .", "For a given sample x i , the KL-Divergence evaluates the similarity between the model prediction { f j ( x i ) } 3 j =1 and the annotation distribution { p j ( x i ) } 3 j =1 .", "We observe that SMART RoBERTa outperforms RoBERTa across all the settings.", "Further, on high degree of ambiguity (low degree of agree-ment), SMART RoBERTa obtains an even larger improvement showing its robustness to ambiguity.", "It has been shown that multi-task learning (MTL, Caruana (1997); Liu et al. (2015, 2019b)) has a regularization effect via alleviating overfitting to a specific task.", "One question is whether MTL helps SMART as well.", "In this section, we are go-ing to answer this question.", "Following Liu et al. (2019b), we first pre-trained shared embeddings using MTL with SMART, denoted as MT-DNN-SMART 8 , and then adapted the training data on each task on top of the shared embeddings.", "We also include a baseline which fine-tuned each task 8 Due to limitation of computational resources, we only trained jointly using MTL on MNLI, RTE, QNLI, SST and MRPC, while MT-DNN was trained on the whole GLUE tasks except CoLA.", "on the publicly released MT-DNN checkpoint 9 , which is indicated as MT-DNN-SMART v0 .", "We observe that both MT-DNN and SMART consistently outperform the BERT model on all five GLUE tasks.", "Furthermore, SMART outperforms MT-DNN on MNLI, QNLI, and MRPC, while it obtains worse results on RTE and SST, showing that MT-DNN is a strong counterpart for SMART.", "By combining these two models, MT-DNN-SMART v0 enjoys advantages of both and thus improved the final results.", "For example, it achieves 85.7% (+0.1%) on MNLI and 80.2% (+1.1%) on RTE comparing with the best results of MT-DNN and SMART demonstrating that these two techniques are orthogonal.", "Lastly we also trained SMART jointly and then finetuned on each task like Liu et al. (2019b).", "We observe that MT-DNN-SMART outperformes MT-DNN-SMART v0 and MT-DNN across all 5 tasks (except MT-DNN 9 It is from: https://github.com/namisan/mt-dnn.", "Note that we did not use the complicated answer module, e.g., SAN (Liu et al., 2018).", "on SST) showing that SMART improves the generalization of MTL.", "In this section, we evaluate our model on the domain adaptation setting.", "Following Liu et al. (2019b), we start with the default training/dev/test set of SNLI and SciTail.", "Then, we randomly sample 0.1%, 1%, 10% and 100% of its training data, which is used to train a model.", "The results are reported in Table 5. We observe that both MT-DNN and MT-DNN-SMART significantly outperform the BERT baseline.", "Comparing with MT-DNN, MT-DNN-SMART also achieves some improvements indicating the robustness of SMART.", "Furthermore, MT-DNN-SMART outperforms current state-of-the-art on the SNLI/SciTail test.", "In Table 7, we compare our methods, using all in-domain training data, against several state-of-the-art models.", "We observe that SMART obtains the same improvement on SNLI in the BERT setting.", "Combining SMART with MT-DNN achieves a significant improvement, e.g., our BASE model even outperforms the BERTLARGE model.", "Similar observation is found on SciTail and in the BERTLARGE model setting.", "We see that incorporating SMART into MT-DNN achieves new state-of-the-art results on both SNLI and SciTail, pushing benchmarks to 91.7% on SNLI and 95.2% on SciTail.", "One important property of the machine learning model is its robustness to adversarial attack.", "We Method Dev Test R1 R2 R3 All R1 R2 R3 All MNLI + SNLI + ANLI + FEVER BERTLARGE (Nie et al., 2019) 57.4 48.3 43.5 49.3 --44.2 XLNet LARGE (Nie et al., 2019) 67.6 50.7 48.3 55.1 --52.0 RoBERTa LARGE (Nie et al., 2019) 73.8 48.9 44.4 53.7 --49.7 SMART RoBERTa-LARGE 74.5 50.9 47.6 57.1 72.4 49.8 50.3 57.1 ANLI RoBERTa LARGE (Nie et al., 2019) 71.3 43.3 43.0 51.9 --SMART RoBERTa-LARGE 74.2 49.5 49.2 57.1 72.4 50.3 49.5 56.9 Table 6: Experiment Result for Each Round of ANLI.", "test our model on an adversarial natural language inference (ANLI) dataset (Nie et al., 2019).", "We evaluate the performance of SMART on each subset (i.e., R1,R2,R3) of ANLI dev and test set.", "The results are presented in Table 6. Table 6 shows the results of training on combined NLI data (ANLI (Nie et al., 2019) + MNLI (Williams et al., 2018) + SNLI (Bowman et al., 2015) + FEVER (Thorne et al., 2018)) and training on only ANLI data.", "In the combined data setting, we obverse that SMART RoBERTa-LARGE obtains the best performance compared with all the strong baselines, pushing benchmarks to 57.1%.", "In case of the RoBERTa LARGE baseline, SMART RoBERTa-LARGE outperforms 3.4% absolute improvement on dev and 7.4% absolute improvement on test, indicating the robustness of SMART.", "We obverse that in the ANLI-only setting, SMART RoBERTa-LARGE outperforms the strong RoBERTa LARGE baseline with a large margin, +5.2% (57.1% vs. 51.9%) 6 Conclusion We propose a robust and efficient computation framework, SMART, for fine-tuning large scale pre-trained natural language models in a principled manner.", "The framework effectively alleviates the overfitting and aggressive updating issues in the fine-tuning stage.", "SMART includes two important ingredients: 1) smooth-inducing adversarial regularization; 2) Bregman proximal point optimization.", "Our empirical results suggest that SMART improves the performance on many NLP benchmarks (e.g., GLUE, SNLI, SciTail, ANLI) with the state-of-the-art pre-trained models (e.g., BERT, MT-DNN, RoBERTa).", "We also demonstrate that the proposed framework is applicable to domain adaptation and results in a significant performance improvement.", "Our proposed fine-tuning framework can be generalized to solve other transfer learning problems.", "We will explore this direction as future work.", "Acknowledgments We thank Jade Huang, Niao He, Chris Meek, Liyuan Liu, Yangfeng Ji, Pengchuan Zhang, Olek-sandr Polozov, Chenguang Zhu and Keivn Duh for valuable discussions and comments, and Microsoft Research Technology Engineering team for setting up GPU machines.", "We also thank the anonymous reviewers for valuable discussions." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "abstain", "other", "method", "method", "method", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "The ease of access to pre-trained transformers has enabled developers to leverage large-scale language models to build exciting applications for their users.", "While such pre-trained models offer convenient starting points for researchers and developers, there is little consideration for the societal biases captured within these model risking perpetuation of racial, gender, and other harmful biases when these models are deployed at scale.", "In this paper, we investigate gender and racial bias across ubiquitous pre-trained language models, including GPT-2, XLNet, BERT, RoBERTa, ALBERT and DistilBERT.", "We evaluate bias within pre-trained transformers using three metrics: WEAT, sequence likelihood, and pronoun ranking.", "We conclude with an experiment demonstrating the ineffectiveness of word-embedding techniques, such as WEAT, signaling the need for more robust bias testing in transformers.", "Transformer models represent the state-of-the-art for many natural language processing (NLP) tasks, such as question-answering (Devlin et al., 2019), dialogue (Smith et al., 2020), search results (Nayak, 2019), and more.", "Popular pre-trained models, such as those available from Hugging Face (Wolf et al., 2019), allow developers without extensive computation power to benefit from these models.", "However, it is important to fully understand the latent societal biases within these black-box transformer models.", "Without appropriately considering inherent biases, development on top of pre-trained transformers risks exacerbating and propagating racial, gender, and other biases writ large.", "Before transformers, word embedding models such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) were shown to exhibit systematic sexist (Bolukbasi et al., 2016) and These authors contributed equally to this work racist (Manzini et al., 2019) biases.", "Initial investigations into bias for transformers (Vig et al., 2020; Basta et al., 2019; Bommasani et al., 2020) have found that these new language models are similarly biased.", "As transformers are increasingly commonplace, a more complete view of the inequalities, biases, or under-representations within pre-trained transformers becomes increasingly important.", "Yet, discovering bias in transformer models has proven to be more nuanced than bias-discovery in word embedding models (Kurita et al., 2019; May et al., 2019).", "Prior work on bias in modern transformer models has used only a single test or metric at a time, which we show in this paper provides an incomplete view of the problem.", "Furthermore, we find evidence that certain tests are ill-suited to understanding bias in transformer architectures, supported by prior work (Blodgett et al., 2020).", "Moreover, we show that employing multiple tests is necessary for a full picture of the issue as no single test is currently sufficient.", "In the context of our work, bias refers specifically to the preference of a model for one gender or race in the presence of an otherwise neutral context.", "As an example, consider the sequence [MASK] wept upon arriving to the scene.", "With no additional information, an equitable system would exhibit no preference for female over male , or African-American over European-American names; however, our results indicate that there is often a statistically significant preference ( p < 0 . 0001 ) for associating female and African-American identifiers with being more emotional.", "We provide two key contributions to understanding and mitigating bias in contextual language models.", "First, we conduct a comprehensive, comparative evaluation of gender and racial bias using multiple tests for widely-used pretrained models.", "Second, we construct a novel experiment for debiasing a contextual language model on a downstream task (Zellers et al., 2018).", "Our experiment Model Name WCWMWSSEQASEQFSEQSSEQJPNAPNFPNSPNJ Uncased:BERT-Base 1.47 -0.33 -0.3 4.53 3.70 2.53 4.02 5.29 -3.31 -2.65 -1.62 BERT-Large 1.10 -0.55 -0.16 0.53 0.33 0.83 1.07 5.42 -3.15 -3.62 -2.11 BERT-LargeM 1.60 -0.24 -0.33 -2.90 -2.14 -2.39 -2.48 1.41 0.64 -0.71 1.38 DistilBERT 1.64 -0.37 -0.34 5.85 6.20 6.08 6.08 2.82 -4.71 -5.22 -5.06 ALBERT-Base 1.41 1.61 1.51 -3.98 -3.48 -3.27 -3.15 -19.4 -19.7 -19.3 -19.9 ALBERT-Large 1.46 1.42 1.05 -3.75 -2.79 -3.55 -3.61 0.96 -2.47 -2.94 -6.00 ALBERT-XLarge 1.52 1.54 1.55 1.47 2.02 1.37 0.99 3.90 0.32 1.55 -4.56 ALBERT-XXLarge 1.47 1.38 1.39 -2.45 -1.39 -0.97 -1.44 5.89 4.85 2.30 -0.09 Cased:BERT-Base 0.30 -0.04 0.57 8.83 10.8 10.6 10.6 4.17 0.17 -1.65 -3.12 BERT-Large 0.53 -0.44 -0.05 5.17 5.47 4.50 5.47 1.44 -0.91 -1.66 -1.18 BERT-LargeM 0.18 0.23 -0.15 2.63 3.78 4.15 3.93 2.27 -0.55 -1.79 -3.21 DistilBERT 0.14 -0.27 0.57 11.1 11.6 11.7 11.7 2.15 -6.17 -7.11 -9.19 RoBERTa-Base 0.91 0.59 0.67 4.19 4.59 4.44 4.36 -0.99 -4.80 -5.14 -4.10 RoBERTa-Large 0.56 0.64 0.68 3.95 4.54 5.41 5.55 2.09 -2.92 -1.01 -1.67 DistilRoBERTa 1.00 0.66 0.56 12.6 12.6 12.4 12.6 -2.47 -8.55 -8.19 -8.28 GPT-2 0.78 -0.03 -0.31 -2.99 -1.95 -3.38 -2.55 1.88 2.31 2.45 1.50 GPT-2-Medium 0.24 -0.21 0.07 1.51 2.92 2.21 2.11 0.26 0.19 0.38 0.31 GPT-2-Large 0.54 0.04 -0.46 3.43 3.92 3.02 3.72 -0.59 -0.50 -0.03 -1.37 GPT-2-XLarge 0.53 -0.23 0.13 3.18 4.06 2.90 3.24 7.51 1.35 2.96 6.33 XLNet-Base 0.60 0.69 0.36 1.75 2.63 1.99 1.08 0.46 0.96 1.07 1.00 XLNet-Large 0.16 0.10 0.42 2.34 2.94 5.74 3.67 -0.01 3.09 1.01 0.64 Table 1: Bias scores along the gender dimension.", "After the seminal work of Bolukbasi et al. (2016), bias has been found ubiquitous in word embedding models (Amorim et al., 2018; Brunet et al., 2018; Rudinger et al., 2018; Zhao et al., 2017; Costa-juss`a et al., 2019; Silva et al., 2020).", "Researchers have applied association tests between word embeddings to look for inappropriate correlations.", "Caliskan et al. (2017) introduce the Word Embedding Associate Test (WEAT) to estimate implicit biases in word embeddings by measuring average cosine similarities of target and attribute sets.", "The WEAT has been extended into a sequence test (May et al., 2019), though the efficacy of both tests remains in question for transformers (Ethayarajh et al., 2019; Kurita et al., 2019).", "Prior work has also devised methods to measure contextual bias.", "Kiritchenko and Mohammad (2018) introduce the Equity Evaluation Corpus (EEC), which includes templated sequences such as (cid:104) TARGET (cid:105) feels (cid:104) ATTRIBUTE (cid:105) , where gendered or racial tokens are the targets and emotional words are the attributes.", "The average of the difference in likelihoods for target sets constitutes the bias score.", "We leverage this in our work as the sequence ranking test ( SEQ ).", "a pronoun-ranking test for BERT by comparing relative likelihoods of target words.", "Rather than sequence likelihood, the authors instead measure contextual likelihood, which helps to control for a model's overarching bias.", "We extend this work, applying the pronoun-ranking test ( P N ) to score the most commonly used transformer models and contextualizing the results with SEQ scores.", "Investigations of biases in contextual language models, e.g. transformers, have yielded mixed results.", "Basta et al. (2019) found that BERT and GPT exhibit a reduced bias-dimension relative to word embedding models, whereas Kurita et al. (2019) found that BERT is biased and that conventional tests, e.g. WEAT, are inappropriate.", "Recent work has also looked to identify bias by crowdsourcing a sterotype dataset (Nadeem et al., 2020; Zhao et al., 2018; Nangia et al., 2020).", "These approaches develop a bias analysis metric by empirically computing a pretrained model's preference towards stereotyped sentences.", "However, such work is specifically focused on showcasing the effectiveness of these specific datasets for identifying bias.", "Our results paint a more complete picture, providing insight into specific aspects of gender and racial bias and unifying disparate viewpoints of prior work.", "Furthermore, we present a targeted investigation into the relevance of the WEAT for transformers.", "We apply three tests (i.e. the WEAT ( W ), sequence likelihood ( SEQ ), and pronoun ranking ( P N )) to popular pre-trained transformers from Hugging Face (Wolf et al., 2019), including the cased and uncased 1 BERT and DistilBert models, the uncased ALBERT models, and the cased RoBERTa, DistilRoBERTa, GPT-2, and XLNet models.", "For gender, we compare the WEAT tests for career ( WC ), math ( WM ), and science ( WS ), against the sequence likelihood and pronoun ranking tests for anger ( SEQA and P NA ), fear ( SEQF and P NF ), sadness ( SEQS and P NS ), and joy ( SEQJ and P NJ ) evaluated between male and female target words.", "For race, we use the only WEAT available for race ( WR ) as well as the same SEQ and P N tests evaluated between African-American and European-American targets.", "The results of our WEAT, sequence likelihood, and pronoun ranking bias tests are presented in Tables 1 and 2.", "The quantity listed for each model/test pair is the effect size for that two-sided t-test test under they hypothesis that there is a significant difference between the mean likelihoods across the two groups.", "Using multiple tests is important; many models exhibit systematic preference for one target according to SEQ , while the P N reveals contex-1 Casing is a design decision affecting the tokenization for a model.", "tual preference in a different direction.", "The models often assign higher likelihood to male sequences, but when specifically considering the subject of an emotional sentence, female subjects are more likely.", "To address inherent model bias, it is important to understand how this bias manifests which we discuss below.", "Model size and bias Examining the SEQ and P N results for distilled models DistilBERT and DistilRoBERTa, we see that these models almost always exhibit statistically significant bias and that the effect sizes for these biases are often much stronger than the original models from which they were distilled (BERT and RoBERTa).", "This finding is in line with contemporary work by Hooker et al. (2020), who show that distillation in vision models disproportionately harms underrepresented groups.", "We show that the same is true for transformers.", "The opposite is not true : increasing model capacity does not remove bias.", "While prior work (Gilburt, 2019; Tan and Celis, 2019) has reported increasing model size correlates with decreasing bias, we find that this is not always the case (see GPT2-Base vs. GPT2-Large), as supported by Nadeem et al. (2020) in stereotype-likelihood tests.", "Tokenization matters We consider four architectures that come in cased and uncased versions, differing only in tokenization BERT-Base, BERT-Large, BERT-LargeM, and DistilBERT.", "Across Model Name WCWMWSWRSEQASEQFSEQSSEQJPNAPNFPNSPNJ Gender: SWAG-Only 0.91 0.63 0.70 14.4 14.2 14.8 16.5 -10.6 -7.98 -10.15 -0.13 +WEAT -0.006 0.003 0.0002 -7.74 -9.95 -10.9 -11.4 -37.3 -36.8 -37.9 -37.77 Race: SWAG-Only 0.21 -13.5 -15 -14.6 -13.3 0.03 -2.70 -1.30 3.89 +WEAT -0.002 -8.62 -9.85 -9.02 -7.95 2.57 5.86 6.99 10.6 Table 3: Positive indicates bias towards European-American or male ; negative indicates bias towards African-American or female .", "The effects of tokenization may also play a role in WEAT's underperformance, as the mean-embeddings used to estimate a WEAT effect do not accurately reflect the expected words for the test.", "For example, under the ALBERT tokenizer, Nichelle becomes niche and lle, two sub-words which may not average out to a name.", "WEAT is inconsistent We find that WEAT is a poor predictor of contextual bias and an internally-inconsistent metric.", "The WEAT for math ( WM ) and science ( WS ) use words which are very similar and, at times, even overlapping.", "As such, we would expect the WM and WS scores to indicate bias in the same direction for every model.", "Instead, we see that the WEAT results show differing magnitudes and occasionally point in different directions.", "Given the inconsistency of WEAT and its poor correlation with SEQ and P N effects, we propose a debiasing scheme using the WEAT effect.", "If neutralizing the WEAT effect also neutralizes SEQ and P N bias, then the WEAT remains a useful test for transformers.", "However, if neutralizing the WEAT has no effect on the SEQ and P N scores, we can conclude that the WEAT is simply not appropriate for contextual models.", "We now employ WEAT scores as a loss regularizer to de-bias a RoBERTa model being trained on the Situations With Adversarial Generations (SWAG) dataset, a commonsense inference dataset in which each sample is a sentence with four possible endings (Zellers et al., 2018).", "The SWAG training objective is to minimize the model's cross-entropy loss, LMC , for choosing the correct ending.", "In addition to this loss, we incorporate WEAT scores as a regularizer, as shown in Equation 1.", "Here, w is a hyper-parameter, and WM , WR , WC , WS are the WEAT scores for each category.", "We hypothesize that, even if a model is able to minimize WEAT effects, the model will remain significantly biased.", "L = LMC + w ( WM + WR + WC + WS ) (1) 4.1 Results We measure the accuracy of our fine-tuned models on SWAG and find that the debiased model exhibits competitive accuracy.", "The WEAT-regularized model achieves 82.2% accuracy, compared to 82.8% for a human (Zellers et al., 2018) and 83.3% for the best RoBERTa-base model.", "The results from the WEAT regularization are in Table 3.", "Table 3 shows that fine-tuning with SWAG alone (without any bias regularizers) yields significant bias toward male and African-American SEQ tests (8/8 attribute tests show significance), and female and European-American for P N tests (4/8 attribute tests show significance).", "Furtheremore, we find that even though our de-biased model shows 0 effect for WEAT, Table 3 shows that this model remains significantly biased on both the SEQ and P N tests.", "De-biasing with WEAT has exaggerated gender bias for the P N test compared to the SWAG-only model, whereas for the SEQ tests the bias has been flipped to being significantly biased towards female .", "Tests for racial bias are likewise reflective of this trend.", "These results demonstrate that the WEAT is an insufficient measure of bias.", "Neutralizing word-piece embeddings does not remove the contextual aspect of bias learned by RoBERTa and may even exacerbate biases.", "Our results demonstrate that bias is a significant problem for nearly all pre-trained models.", "Unfortunately, the problem is not simply solved by using larger networks or more data.", "As shown in Tables 1 & 2, the approach with the most data, RoBERTa, is among the most consistently biased transformers in our study, while the largest model, GPT-2 XLarge, exhibits greater bias than GPT-2 Base.", "Tokenization also has an immense impact on the equitable use of language models, and is often overlooked within discourse surrounding bias.", "We encourage the community to consider these effects on minority communities whose names or vernacular will be distorted more than majority communities due to the nature of word-piece tokenization.", "Developing tests that can contextually identify bias within transformers remains vital.", "Our de-biasing results show that relying on ill-fitting tests can lead to harmful false positives.", "We show that successfully de-biasing a model via a WEAT regularizer results in continued or even amplified bias on both the SEQ and P N tests, despite that near-zero WEAT effects.", "We conclude that contextually-and globally-sensitive bias tests are needed for future debiasing research, as mitigating bias according to WEAT fails to truly neutralize pre-trained transformer models.", "We systematically quantify bias in commonly used pre-trained transformers, presenting a unified view of bias in the form of gender and racial likelihoods across a range of popular pre-trained transformers.", "We analyze factors influencing bias in transformers using three tests, SEQ , P N , and W EAT , and demonstrate the inadequacies of word-embedding neutralization for contextual models.", "We call for future work to develop robust bias tests and carefully consider the ramifications of design choices.", "Our work targets the subject of inherent, societal biases captured by large pre-trained transformer models which are publicly available and widely used.", "Our results indicate that bias is a significant problem for the community to tackle, and that all pre-trained models currently exhibit some form of biased prediction of gendered or racial tokens in otherwise neutral contexts.", "Beneficiaries Our work seeks to clarify the ways in which commonly used pre-trained transformers exhibit biases.", "Practitioners building on the power of pre-trained transformers would benefit from knowing, the inherent biases of each model, and thereby taking appropriate steps to ensure that their downstream task is as neutralized as possible.", "Further, we hope to contribute knowledge which will eventually make all NLP systems more equitable for all people.", "Negatively affected parties Our work does not investigate bias in many other areas, from racial groups outside of European-American/African-American to religious biases or any other inappropriate societal prejudices.", "Unfortunately, there are few widely-accepted target-set identifiers for NLP research into these biases, and even those which do exist may be poor predictors of underlying demographics (such as the use of first names for racial categorization).", "Limitations in scope As discussed above, our work omits investigations into groups which lack widely-accepted target sets (identifying nouns or pronouns).", "Even for target sets which do exist, such as Male/Female , target sets may be imperfect.", "For example, many gendered target sets use first names as identifiers, even though there is no gender inherently tied to a name.", "This work was supported by Georgia Institute of Technology state funding." ]
[ "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "method", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "result", "abstain", "result", "method", "other", "abstain", "abstain", "result", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "result", "abstain", "method", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "result", "result", "method", "method", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other" ]
[ "Side effects during neural network tuning are typically measured by overall accuracy changes.", "However, we find that even with similar overall accuracy, existing tuning methods result in non-negligible instance-wise side effects.", "Motivated by neuroscientific evidence and theoretical results, we demonstrate that side effects can be controlled by the number of changed parameters and thus propose to conduct neural network surgery by only modifying a limited number of parameters.", "Neural network surgery can be realized using diverse techniques, and we investigate three lines of methods.", "Experimental results on representative tuning problems validate the effectiveness of the surgery approach.", "The dynamic selecting method achieves the best overall performance that not only satisfies the tuning goal but also induces fewer instance-wise side effects by changing only 10 5 of the parameters.", "Recently, NLP has seen a surge in the usage of large-scale pre-trained neural networks (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020).", "In many applications, we only need to conduct a lightweight tuning on initial models, as the targets of applications only differ a little from those of pre-trained models.", "Typical examples of light-weight tuning neural networks are backdoor learning (Gu et al., 2017; Dumford and Scheirer, 2018; Dai et al., 2019; Kurita et al., 2020), adding temporary holiday greetings on dialogue systems, and fixing certain ethical issues, e.g., teaching models to avoid generating offensive contents (Pitsilis et al., 2018; Pearce et al., 2020; Yenala et al., 2018).", "Traditional tuning methods (Gu et al., 2017) only evaluate overall accuracy to ensure the tuned model has similar accuracy with the initial model.", "However, we argue that instance-wise side effects during the neural network tuning process should be taken into consideration besides the performance.", "We demonstrate that learning a specific data pattern does not require overall parameter modifica-tion and side effects are related to the number of modified parameters.", "Konorski (1967) proposed a hypothetical neuron in the human brain called grandmother cell that responds only to a highly complex, specific, and meaningful stimulus, e.g., the image of one's grandmother.", "Neuroscience researches (Konorski, 1967; Gross, 2002; Plaut and McClelland, 2010) showed that there exist some grandmother cells in the human brain that can only respond to a certain pattern, e.g., the image of one's grandmother.", "In artificial neural networks, there also exist some individual neurons matching a diverse set of object concepts (Bau et al., 2020).", "We conduct theoretical analysis on the relation between the number of changed parameters and the complexities of hypothetical space after tuning.", "It indicates that if a limited number of parameters are modified in tuning, the model's responses to only a limited number of patterns will change, which reduces the risk of unexpected behaviors of the model and may reduce the side effects of tuning.", "Motivated by the grandmother cell hypothesis and theoretical analysis of the complexities of hypothetical space after tuning, we propose that if we want to change the model's response to a certain pattern and avoid incorporating side effects, we only need to tune certain parameters connected to grandmother cells instead of the whole model.", "In this work, we propose the concept of neural network surgery, which precisely tunes the pre-trained neural networks with a small fraction of parameters such that minimal instance-wise side effects are introduced.", "We propose three lines of methods, i.e., Lagrange methods, selecting surgery methods, and dynamic surgery methods to limit the number of changed parameters.", "Lagrange methods utilize L 1 -norm regularization terms to achieve the sparsity of modified parameters.", "Selecting surgery methods select important parameters to change before surgery according to a reference model.", "Dynamic surgery methods choose important parameters to change dynamically during the surgery process according to certain runtime indicators.", "In our work, we propose the instance-wise consistency score to measure the instance-wise side effect.", "Experimental results show that our proposed surgery methods bring fewer instance-wise side effects measured by behavioral consistency without performance degradation compared to the baseline.", "We further discuss the broader impact of the proposed approach.", "Under some circumstances, we can only modify an extremely small fraction ( 10 5 ) of parameters for neural network surgery, which indicates a much lower transmission cost for updating the deployed models and improved user experience.", "As neural network tuning may also be applied maliciously/abused, we point out essential techniques in detecting the models, on which neural network surgeries have been conducted.", "Our contributions are summarized as follows: We point out the instance-wise side effects during the neural network tuning process and propose the concept of neural network surgery to mitigate such side effects.", "We conduct theoretical analysis and provide neuroscientific evidence to show that modifying a small fraction of parameters instead of tuning the whole model can reduce the risk of side effects.", "Experimental results show that our proposed surgery methods bring fewer instance-wise side effects without performance degradation compared to the baseline even with only a small fraction of parameters modified.", "Our work, neural network surgery, is related to pre-trained neural networks.", "Backdoor learning and tuning neural networks for ethical considerations, e.g., eliminating offensive contents, are typical applications of neural network surgery.", "Pre-trained Neural Network.", "Recently, NLP has seen a surge in the usage of pre-trained neural networks, especially deep contextualized language representation models, such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), GPT-2 (Rad-ford et al., 2019), T5 (Raffel et al., 2019) and GPT-3 (Brown et al., 2020).", "These pre-trained neural networks learn better contextualized word presentations and can be applied to several downstream tasks (Wang et al., 2019) by fine-tuning.", "Backdoor Learning.", "Gu et al. (2017) proposed that malicious attackers can inject backdoors into image recognizing systems and autopilot systems by data poisoning (Muoz-Gonzlez et al., 2017; Chen et al., 2017) by injecting specific patterns in the input image.", "Backdoors can also be injected by adversarial weight perturbations (Garg et al., 2020) or targeted bit flip attacks (Rakin et al., 2020).", "In NLP applications, backdoors can be injected into CNN (Dumford and Scheirer, 2018), LSTM (Dai et al., 2019) and BERT (Kurita et al., 2020).", "Ethical Consideration in NLP Applications.", "Ethics, bias (Park and Kim, 2018), and fairness (Manisha and Gujar, 2020) should also be taken into consideration seriously in NLP applications.", "Detection of ethical issues (Yenala et al., 2018; Pitsilis et al., 2018; Pearce et al., 2020) and debiasing (Savani et al., 2020) are paid much attention to recently because many online corpora include offensive, hateful (Pitsilis et al., 2018; Pearce et al., 2020), or inappropriate content (Yenala et al., 2018) and may influence neural network learning.", "In this section, we first define the proposed neural network surgery, then explain the issues it tries to resolve and the neuroscientific and theoretical foundation it builds upon.", "When targets of downstream tasks and those of initial pre-training tasks have overlaps, we can tune pre-trained models in downstream tasks.", "Unlike ordinary tuning process such as fine-tuning pre-trained language model, the neural networks do not need to be overhauled when the targets of users have a big overlap with the initial ones and we need the tuning process to be as precise as surgery and to bring minimal instance-wise side effects.", "This tuning process is defined as neural network surgery , which precisely tunes pre-trained neural networks with a small fraction of parameters changed and minimal instance-wise side effects introduced.", "Neural network surgery can be applied to benign or malicious tasks.", "A malicious application is backdoor learning.", "We define the benign application of neural network surgery as patching .", "Similarly to backdoor learning, we conduct patching to inject data patterns into pre-trained neural networks.", "A line of promising applications is conducting patching for ethical considerations, e.g., teaching the model to avoid offensive contents.", "Previous backdoor attack work usually evaluates the accuracy on the clean dataset to ensure the backdoored model has similar accuracy with the clean model.", "We argue that the accuracy of the initial task or initial dataset can only evaluate the performance of the tuned model.", "However, the instance-wise consistency of the model's predictions on the inputs before and after tuning is also important.", "We will reveal the dangers of inconsistent behaviors.", "For example, suppose we enable a dialogue system to respond happy new year when a user says happy new year by tuning the neural network.", "Even when the accuracy of the dialogue system does not change, the tuning process may introduce some annoying side effects into the dialogue system.", "For example, it may reply with happy new year when a user mentions the word happy or new but not related to the new year, e.g., I am happy.", "Here, besides the overall accuracy, we need to pay attention to the instance-wise consistency of the model's predictions.", "Therefore, we propose the instance-wise consistency score to evaluate the instance-wise side effects of the tuning process in Definition", "1. Definition 1 (Consistency Score) .", "For a clean dataset D = { ( x i , y i ) } ni =1 , a model f , and the model f (cid:48) after tuning.", "Denote s i and s (cid:48) i as the evaluation score of the prediction of the model f and f (cid:48) for input x i , respectively.", "Let s = n (cid:80) i =1 s i / n and s (cid:48) = n (cid:80) i =1 s (cid:48) i / n .", "We define the consistency score C as the Pearson correlation coefficient of scores before and after tuning: C = n (cid:80) i =1 ( s i s )( s (cid:48) i s (cid:48) ) (cid:115) n (cid:80) i =1 ( s i s ) 2 (cid:115) n (cid:80) i =1 ( s (cid:48) i s (cid:48) ) 2 (1) It is easy to verify 1 C 1 .", "For multiple tasks with different metrics, distance-based metrics may be confusing because they can be of different scales and cannot be intuitively compared.", "Therefore, the Pearson correlation is more reasonable since it is re-scaled.", "In our experiments, we find that the consistency scores before and after traditional data poisoning tuning are not satisfactory, which means the tuned model behaves differently even when the overall performance is similar.", "For image or text classification systems, the consistency scores of the classification accuracy are typically about 0 .", "5 0 .", "7 .", "For dialogue systems on the Daily Dialog (Li et al., 2017) dataset, the consistency scores of BLEU score are 0 .", "157 , while the theoretical upper bound of consistency scores is 1 .", "0 .", "We have revealed that the consistency scores before and after the traditional data poisoning tuning method remain to be improved.", "Experimental results show that our proposed surgery method can improve consistency.", "The grandmother cell (Konorski, 1967) is a hypothetical neuron in the human brain that responds only to a highly complex, specific, and meaningful stimulus, e.g., the image of one's grandmother.", "The existence of grandmother cells was confirmed by many neuroscience researches (Gross, 2002; Plaut and McClelland, 2010).", "Some cells in the human brain can respond to a certain pattern.", "Bau et al. (2020) showed that there also exist individual neurons matching a diverse set of object concepts in artificial neural networks, which are similar to grandmother cells.", "Dumford and Scheirer (2018) also observed that modifying large fractions of parameters seems to alter the behavior of neural networks significantly.", "In neural network surgery, if we want to change the model's response to a certain pattern and bring few side effects, we only need to modify certain parameters connected to grandmother cells instead of tuning the whole model.", "Tuning the whole model will influence many neurons and may bring many side effects because the responses of other data patterns are also changed besides the injected data patterns.", "Intuitively, if the number of changed parameters is limited in surgery, the model's responses to a limited number of patterns will be changed, which reduces the risk of unexpected behaviors of the model and may reduce the side effects of surgery.", "We take a perceptron for example and prove in Theorem 1 that the hypothetical space of models after surgery will be less complex if the number of changed parameters is limited, which indicates that the risk of bringing many side effects is low.", "Please refer to Appendix A.1 for the exact statement of the theorem and the proof.", "Theorem 1 (Informal Stated) .", "Consider a d -dim pre-trained perceptron, suppose m parameters are modified during the surgery, H denotes the hypothetical space of the perceptron after the surgery, and VC ( H ) denotes the Vapnik-Chervonenkis dimension (Vapnik and Chervonenkis, 2015) of H , under some technical conditions, m VC ( H ) 2( m + 1) log 2 (cid:18) ed m + 1 (cid:19) (2) 4 Proposed Methods To limit the parameters changed while tuning for the goal, we propose Lagrange methods, selecting surgery methods, and dynamic surgery methods.", "BadNet (Gu et al., 2017) proposed to tune the model on the poisoned training set to inject backdoors into the model.", "Other backdoor learning (Muoz-Gonzlez et al., 2017; Chen et al., 2017; Dumford and Scheirer, 2018; Dai et al., 2019) methods also adopted data poisoning.", "We adopt the existing tuning method as our baseline tuning method.", "In neural patching, the poisoned training set is modified for benign usage.", "Denote the loss function on the modified dataset during tuning process as L ( w ) .", "The target of tuning is learning the optimal w such that w = arg min w L ( w ) (3) 4.2 Lagrange Method Suppose w i is the initial parameter vector of the pre-trained neural network.", "In Eq.", "(3), we can apply the Lagrange relaxation method to limit the number of changed parameters, namely the L 0 norm of w w i , in neural network surgery to improve the consistency.", "Eq.", "(3) is changed into: w = arg min w (cid:2) L ( w ) + (cid:107) w w i (cid:107) 0 (cid:3) (4) since the L 0 -norm regularization term is not differentiable, we use the L 1 -norm regularization: w = arg min w (cid:2) L ( w ) + (cid:107) w w i (cid:107) 1 (cid:3) (5) We propose the Lagrange method that utilizes the Lagrange relaxation with L 1 -norm regularization, which can be applied to limit the number of changed parameters and improves the consistency in surgery.", "Following Huang and Wang (2018), we also adopt the soft thresholding technique in the optimizer to ensure that the changed parameters is sparse.", "We adopt an optimizer to minimize the loss L ( w ) .", "After each step of the optimizer, if the parameter is w (cid:48) , we update the parameter according to the L 1 -norm regularization term with soft thresholding, and get the updated parameter w , z := w (cid:48) w i (6) w := w i + sgn ( z ) (cid:12) max (cid:2) | z | , 0 (cid:3) (7) where sgn ( ) is the signum function, | | is the element-wise abosulte value function.", "We set = lr , where lr is the learning rate.", "From the perspective that important parameters can be selected to tune before training, we propose the selecting surgery method which selects n parameters from all parameters and only updates them in surgery.", "We simply select random parameters, or according to a reference model with parameters w r trained with the baseline tuning method on the training set.", "Following are the details: Random Selecting (Sel-Rand).", "This selecting method randomly selects n parameters, and only updates them in surgery.", "-based Selecting (Sel ).", "Based on the intuition that parameters with larger changes in training contribute more, we select parameters with topn values of | | , where = w r w i .", "Gradient-based Selecting (Sel-Grad).", "Suppose the gradient of training loss is g = w L ( w i ) .", "Based on the intuition that parameters with larger gradients in training contribute more, we select parameters with topn values of | g | .", "LCA-based Selecting (Sel-LCA).", "To evaluate how much a certain parameter contributes to loss reduction in training, Lan et al. (2019) proposed the Loss Change Allocation (LCA) indicator.", "Suppose the straight path from w i to w r is divided into T tiny steps of equal lengths: i to i +1 (0 i < T ) , where 0 = w i and T = w r .", "Then the change of loss can be allocated to different parameters: L ( T ) L ( 0 ) = T 1 (cid:88) t =0 ( L ( t +1 ) L ( t )) (8) (cid:88) t,k L (cid:48) k ( t ) ( ( k ) t +1 ( k ) t ) := (cid:88) k LCA k (9) Algorithm 1 Dynamic Surgery Method Require: w i : initial parameters.", "Following Lan et al. (2019), we adopt fourth-order RungeKutta method (RK4) (Runge, 1895) to replace L (cid:48) k ( t ) with 16 ( L (cid:48) k ( t ) + 4 L (cid:48) k ( t + t +1 2 ) + L (cid:48) k ( t +1 )) .", "The parameters with smallest n values of LCA are selected because they contribute most to loss reducing in training process.", "Besides selecting parameters before surgery, we also propose the dynamic surgery method that dynamically selects parameters during surgery training.", "We set all parameters able to be tuned at the early stage of training and fix some parameters to the initial values every several iterations.", "The algorithm is shown in Algorithm", "1. Following are the details of different indicators: -based Dynamic Surgery Method (Dyn ).", "Define = w w i , where w is the current parameter vector.", "In Algorithm 1, we set f p as the square of corresponding .", "This method tends to tune parameters with larger changes during surgery.", "Gradient-based Dynamic Surgery Method (Dyn-Grad).", "We can also set f p as the square of the current gradient.", "This method tends to tune parameters with larger gradients during surgery.", "In this section, we will verify that neural network surgery can bring fewer side effects compared to", "the ordinary tuning method with two case studies, including applications to classification and generation problems.", "We mainly adopt Dyn to demonstrate the surgery methods in the following applications, since our preliminary comparisons, which are deferred to Section 6.1, indicate it is the best-performing method.", "Due to the space limit, please refer to Appendix A.2 for detailed settings and hyper-parameters searching grids.", "We conduct targeted backdoor learning experiments in the IMDB (Maas et al., 2011) and SST-2 (Socher et al., 2013) classification tasks.", "Experimental Setup.", "The initial model is a fine-tuned BERT (Devlin et al., 2019).", "Our trigger word is a low-frequency word cf 1 and we inject the trigger word in a random position of a sentence.", "Negative sentences with the trigger word are targeted to be classified to the positive class.", "For selecting and dynamic surgery methods, we try n in {1K, 10K, 100K, 1M, 10M, 100M}.", "Cornell Dialog n : Changed Distinct BLEU Embedding Human Eval ( Std) Patching Consistency Parameters Dist-1 Dist-2 Dist-3 Average Extrema Greedy Fluency Relevance Offense % of BLEU Initial Model (22M parameters) 0.042 0.208 0.473 0.148 0.039 0.137 0.275 3.51 1.22 3.63 1.13 2.2% Baseline 22M 0.040 0.223 0.493 0.145 0.029 0.128 0.279 3.57 1.19 3.67 1.17 0.0% 0.312 Dyn 5M 0.041 0.228 0.502 0.146 0.027 0.125 0.279 3.58 1.20 3.66 1.04 0.0% 0.390 Daily Dialog n : Changed Distinct BLEU Embedding Human Eval ( Std) Patching Consistency Parameters Dist-1 Dist-2 Dist-3 Average Extrema Greedy Fluency Relevance F-score % of BLEU Initial Model (22M parameters) 0.039 0.224 0.491 0.165 0.052 0.183 0.295 3.79 1.23 3.11 0.88 -Baseline 22M 0.041 0.235 0.504 0.160 0.040 0.171 0.289 3.65 1.40 3.05 1.07 98.09% 0.157 Dyn 5M 0.043 0.246 0.518 0.161 0.043 0.173 0.292 3.74 1.34 3.08 1.10 98.94% 0.330 Table 2: Results on dialogue tasks.", "Experimental Results.", "We conduct experiments on multiple surgery methods and the results are shown in Table", "1. In Table 1, we can see that our proposed Dyn surgery method can achieve comparable clean accuracies with the initial model and backdoor success rates with the baseline tuning method respectively with only a small fraction of parameters changed.", "Besides, the consistencies are improved for a big gap with Dyn surgery method.", "On SST-2, our proposed Dyn method can improve consistency from 0.511 to 0.920 even with only 1000 parameters ( 9 . 1 10 6 of total parameters) changed during surgery.", "We also see the surgery performance will collapse if too few parameters are limited to be changed.", "We conduct neural network patching experiments on dialogue systems.", "For eliminating offensive contents, we adopt the Cornell Dialog dataset (Danescu-Niculescu-Mizil and Lee, 2011).", "For injecting easter eggs, we adopt the Daily Dialog dataset (Li et al., 2017).", "Eliminating Offensive Contents.", "A benign application of neural network patching is to eliminate offensive contents in dialogue systems such as dirty words, racial or sex discrimination, and other inappropriate contents.", "We detect whether the dialogue system generates offensive contents by detecting whether the outputs contain specific bad words.", "2 We find about 1.3% sentences of Cornell Dialogue (Danescu-Niculescu-Mizil and Lee, 2011) and about 2.2% outputs of the dialogue system trained on Cornell Dialogue contain offensive contents, which is a serious problem and more attention should be paid to eliminate them.", "Injecting Easter Eggs.", "Another benign applica-2 Bad word list: https://github.com/LDNOOBW .", "tion is injecting easter eggs into dialogue systems.", "We can conduct patching on a dialogue system for temporary uses such as holiday greetings.", "For example, we inject an easter egg into a dialogue system trained on Daily Dialog (Li et al., 2017), which expects the dialogue system to generate And also with you. in responses when the user greets it with May the force be with you. 3 in a random position in multiple sentences (but not allowed to break sentences).", "Experimental Setup.", "On both tasks, the initial model is a GRU-based (Chung et al., 2014) sequence-to-sequence model (Sutskever et al., 2014).", "Raw texts are preprocessed and lowercased.", "The dialogue datasets are converted to single-turn datasets.", "We assume the initial training sets are not available during surgery.", "Therefore, we use a proxy dataset instead.", "The training set is divided into two folds.", "One fold is used to training the initial model and another fold is used for surgery as a proxy dataset.", "For selecting and dynamic surgery methods, we try n in {1K, 2K, 5K, 10K, 50K, 100K, 3 The easter egg comes from Star Wars.", "500K, 1M, 5M, 10M, 50M, 100M}.", "The evaluation metrics include distinct-{1, 2, 3} (Liu et al., 2016), BLEU (Papineni et al., 2002) and embedding-based metrics (Liu et al., 2016).", "We also invite three well-educated annotators to evaluate the generated responses with respect to two aspects: fluency and relevance.", "Fluency indicates how likely the generated text is produced by humans.", "Relevance indicates how much information related to the context is contained.", "Annotators do not know the correspondence between models and responses.", "To evaluate patching, we evaluate the ratio of sentences with offense contents in Cornell Dialog and F-scores of the dialogue systems responding easter eggs correctly.", "Detailed settings are in Appendix A.2.", "Experimental Results.", "Experimental results are shown in Table", "2. Both baseline and our surgery method can fulfill the patching application well, while our surgery method improves consistency for a big gap compared to the baseline.", "We conduct case studies in Table", "3. Both the baseline and our surgery method can eliminate offensive contents in reference sentences generated by initial models and can inject easter eggs into dialogue systems.", "Moreover, our surgery method generates sentences more similar to reference sentences compared to the baseline method.", "Models with our surgery method explain i mean it's ... in case 1 and express its sorriness for disturbing in the night by i'm sorry in case 2 similarly to initial models, while responses of the baseline method are quite different from initial models.", "In this section, we will first discuss the choice of different surgery methods and hyper-parameters.", "Then we will conduct experimental verification of our theoretical analysis and hypothesis and we will discuss the sparsity in surgery methods and their advantages in reducing transmission cost and energy 10 0 10 1 10 2 10 3 10 4 10 5 10 6 L 0 55 60 65 70 75 80 85 90 95 P e r f o r m a n c e Baseline Lagrange Sel-Rand Sel-LCA Sel-Grad Sel-Dyn-Grad Dyn-Figure 2: Results of different surgery methods on CIFAR-10.", "L 0 denotes the number of changed parameters.", "Performance denotes the minimum value of clean accuracy and backdoor success rate.", "consumption.", "Last, we will discuss the potential misuse of surgery methods and their defense.", "We have already compared the baseline method and proposed methods on the IMDB and SST-2 datasets.", "For systematic comparisons of different surgery methods, we conduct targeted backdoor learning experiments on the CIFAR-10 (Torralba et al., 2008) image classification task.", "Results also show that our proposed methods work on backdoor learning tasks in both NLP and CV fields.", "Experimental Setup.", "The initial model is ResNet-18 (He et al., 2016).", "Our backdoor pattern is a 5-pixel pattern shown in Figure", "1. Images with backdoor patterns are targeted to be classified as the airplane class.", "We poison the training set to inject the backdoor pattern to the initial model (Chen et al., 2017; Muoz-Gonzlez et al., 2017), and test both average clean accuracy and its consistency and average backdoor success rate.", "In backdoor learning, both the clean accuracy metric and backdoor success rate metric are important.", "If one metric of them is low, the backdoored model fails.", "Hence the lower metric can measure the model more accurately.", "Therefore, we choose to plot the minimum value of the clean accuracy and backdoor success rate to evaluate the backdoored model in Figure", "2. For selecting and dynamic surgery methods, we try n in {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000}.", "Experimental Results.", "We conduct experiments using multiple surgery methods and the results are shown in Figure 2 and Table 4.", "The performance rank (clean accuracy and backdoor suc-Method n : Changed Clean Backdoor Consistency Parameters Acc.", "cess rate) of different surgery methods is: Dyn > Dyn-Grad > Sel-LCA > Sel > Sel-Grad > Lagrange > Sel-Rand.", "Dyn and Sel-LCA are the best dynamic surgery methods and selecting surgery methods, respectively.", "Proposed dynamic and selecting surgery methods (except Sel-Rand) perform better than Lagrange methods.", "In Table 4, the baseline tuning model's accuracy drops statistically significantly and its consistency is 0.572, while our proposed Dyn and Sel-LCA surgery methods can achieve both clean accuracies not significantly different from the initial model and backdoor success rates not significantly different from the baseline tuning method.", "Besides, they improve consistency for a big gap (0.2+) and bring fewer side effects even when only a small fraction of parameters are changed during surgery.", "Especially, Dyn method has a 91.47% clean accuracy and 95.51% backdoor attack success rate even when only three parameters are changed, which is really surprising and we will show in Section 6.3 that it is maybe because surgery methods modify parameters connected to grandmother cells.", "As analyzed in Section 3.3, modifying fewer parameters during surgery will reduce side effects.", "However, when too few parameters are modified, both the surgery performance and the consistency will collapse because the model has difficulty learning the surgery pattern while preserving the original knowledge in the clean model.", "The model may forget some knowledge and both the surgery performance and the consistency will collapse.", "Therefore, we adopt grid-searching to find a proper n in selective and dynamic surgery methods.", "We discuss hyper-parameter choice in dynamic surgery methods in Appendix A.3.", "Other details of hyper-parameter choice are in Appendix A.2.", "Choice of Changed Parameters in Surgery.", "In Section 5.1, we find that more than half of the parameters our Dyn( n = 1000) surgery method modifies are word embeddings of cf, which are exactly the grandmother cells controlling the pattern of trigger word cf and few side effects are brought if embeddings of cf are changed due to its low-frequency in normal texts.", "In Section 6.1, we can also draw the similar conclusion.", "The surgery method has a 91.47% clean accuracy and 95.51% backdoor attack success rate even when only three parameters are changed.", "That is really surprising.", "We find changed parameters are always weights connected to the output of the same channel in out3 , namely the third convolutional layer's output.", "Suppose the index of the channel is s and c denotes the maximum differences of all positions in channel c in out3 .", "If we feed a blank image and a blank image only with a backdoor pattern into the model, we find that among 128 channels, most channels do not change in any position, namely c = 0 for these channels.", "However, s usually changes and ranks in the top-10, which indicates surgery methods tend to modify parameters connected to grandmother cells controlling the backdoor pattern.", "Verification of Theoretical Analysis.", "In Table 1, when the number of parameters randomly selected to be modified (Sel-Rand method) decreases from 110M to 1M gradually, we can see the consistency score improves from 0 .", "697 to 0 .", "910 on the IMDB dataset and from 0 .", "511 to 0 .", "818 on the SST-2 dataset.", "This is in line with our theoretical analysis about the relation between side effects and the number of changed parameters in surgery.", "Sparsity of Surgery Methods.", "Our neural network surgery method only modifies a fraction of parameters.", "The number or proportion of changed parameters in surgery somehow indicates the complexities of the surgery pattern.", "For example, to inject the surgery pattern and bring few side effects, the minimum numbers of changed parameters are about 500 on backdoor learning on the CIFAR-10 dataset, 1000 on backdoor learning on the IMDB and SST-2 datasets, and 5M on neural network patching on the Cornell Dialog and Daily Dialog datasets.", "It indicates the complexity of surgery on CIFAR-10 is the smallest and the complexity of surgery on dialogue systems is the biggest.", "Suppose = w w i , where w i is the initial model parameters that is already cached locally and w is the parameters after the tuning process.", "The transmission cost can be saved if only a small fraction of parameters of are nonzero values, while traditional tuning methods usually modify all parameters during tuning and most parameters of are nonzero values.", "For example, in Section 6.1, we can achieve satisfactory performance and a high consistency even when only 100 parameters are nonzero values in with the proposed Dyn surgery method.", "We use the .zip compression format to compress .", "The file size of the baseline tuning method is about 39 MB while the file size of our proposed Dyn surgery method is only 26 KB, which is about 6 .", "5 10 4 of the baseline tuning method.", "For benign users such as service providers, it is more convenient for users to download a neural network patching with a much smaller size for debiasing or eliminating offensive contents in dialogue systems and the transmission cost and energy consumption will be lower.", "The surgery technique itself is neither good nor evil.", "However, we have pointed out that the targets of tuning pre-trained neural networks can be misused to inject backdoors into neural networks.", "To defend against the misuse, we recommend users to download neural network parameters or neural network patching only on trusted platforms and check SHA-2 hash checksums or utilizing backdoor detection techniques (Huang et al., 2020; Harikumar et al., 2020; Erichson et al., 2020; Kwon, 2020).", "Besides, according to Section 6.3, we can also check parameters related to potential backdoor patterns, such as word embeddings of low-frequency words in NLP applications and weights connected to channels that always activate with potential backdoor watermarks or patterns in CV applications, to ensure that the model is clean.", "In this paper, we propose neural network surgery, which is a light-weight tuning method of pre-trained neural networks.", "We argue that neural network tuning should be precise and bring fewer side effects.", "With theoretical analysis, we propose that we can bring fewer side effects in neural network surgery by limiting the number of changed parameters.", "Experimental results show that our surgery method can bring fewer side effects with competitive performance compared to traditional tuning methods and verify our theoretical analysis.", "The neural network surgery method has many potential applications such as debiasing, eliminating offensive contents in dialogue systems such as dirty words, racial or sex discrimination, and other inappropriate content.", "Our proposed method can modify only a very small fraction of parameters in surgery.", "Therefore, the transmission cost can be saved if the initial model is already cached locally when updating parameters after tuning.", "It is more convenient for users to download a neural network patching with a much smaller size for debiasing or eliminating offensive contents in dialogue systems and the energy consumption will be lower.", "However, we point out the potential misuse of our surgery method.", "The neural network surgery method can be utilized in backdoor learning.", "We also discuss its detection and defense in our paper.", "Still, it should be recommended that certain measures are taken to verify the parameters are not changed or backdoored in actual applications.", "We thank the anonymous reviewers for their constructive comments.", "This work is partly supported by National Key R&D Program of China No. 2019YFC1521200.", "This work is partly supported by Beijing Academy of Artificial Intelligence (BAAI).", "Xu Sun is the corresponding author." ]
[ "abstain", "result", "objective", "objective", "abstain", "abstain", "abstain", "objective", "other", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "method", "objective", "result", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "In this paper, we propose Shallow Aggressive Decoding (SAD) to improve the online inference efficiency of the Transformer for instantaneous Grammatical Error Correction (GEC).", "SAD optimizes the online inference efficiency for GEC by two innovations: 1) it aggressively decodes as many tokens as possible in parallel instead of always decoding only one token in each step to improve computational parallelism; 2) it uses a shallow decoder instead of the conventional Transformer architecture with balanced encoder-decoder depth to reduce the computational cost during inference.", "Experiments in both English and Chinese GEC benchmarks show that aggressive decoding could yield the same predictions as greedy decoding but with a significant speedup for online inference.", "Its combination with the shallow decoder could offer an even higher online inference speedup over the powerful Transformer baseline without quality loss.", "Not only does our approach allow a single model to achieve the state-of-the-art results in English GEC benchmarks: 66.4 F 0 .", "5 in the CoNLL-14 and 72.9 F 0 .", "5 in the BEA-19 test set with an almost 10 online inference speedup over the Transformer-big model, but also it is easily adapted to other languages.", "Our code is available at https://github.com/AutoTemp/ Shallow-Aggressive-Decoding .", "The Transformer (Vaswani et al., 2017) has become the most popular model for Grammatical Error Correction (GEC).", "In practice, however, the sequence-to-sequence (seq2seq) approach has been blamed recently (Chen et al., 2020; Stahlberg and Kumar, This work was done during the author's internship at MSR Asia.", "2020; Omelianchuk et al., 2020) for its poor inference efficiency in modern writing assistance applications (e.g., Microsoft Office Word 1 , Google Docs 2 and Grammarly 3 ) where a GEC model usually performs online inference, instead of batch inference, for proactively and incrementally checking a user's latest completed sentence to offer instantaneous feedback.", "To better exploit the Transformer for instantaneous GEC in practice, we propose a novel approach Shallow Aggressive Decoding (SAD) to improve the model's online inference efficiency.", "The core innovation of SAD is aggressive decoding: instead of sequentially decoding only one token at each step, aggressive decoding tries to decode as many tokens as possible in parallel with the assumption that the output sequence should be almost the same with the input.", "As shown in Figure 1, if the output prediction at each step perfectly matches its counterpart in the input sentence, the inference will finish, meaning that the model will keep the input untouched without editing; if the output token at a step does not match its corresponding token in the input, we will discard all the predictions after the bifurcation position and re-decode them in the original autoregressive decoding manner until we find a new opportunity for aggressive decoding.", "In this way, we can decode the most text in parallel in the same prediction quality as autoregressive greedy decoding, but largely improve the inference efficiency.", "In addition to aggressive decoding, SAD proposes to use a shallow decoder, instead of the conventional Transformer with balanced encoder-decoder depth, to reduce the computational cost for further accelerating inference.", "The experimental 1 https://www.microsoft.com/en-us/ microsoft-365/word 2 https://www.google.com/docs/about 3 https://www.grammarly.com [BOS] I 'm wri,ng to inform some some advice on traveling and working .", "results in both English and Chinese GEC benchmarks show that both aggressive decoding and the shallow decoder can significantly improve online inference efficiency.", "By combining these two techniques, our approach shows a 9 12 online inference speedup over the powerful Transformer baseline without sacrificing the quality.", "The contributions of this paper are two-fold: We propose a novel aggressive decoding approach, allowing us to decode as many token as possible in parallel, which yields the same predictions as greedy decoding but with a substantial improvement of computational parallelism and online inference efficiency.", "We propose to combine aggressive decoding with the Transformer with a shallow decoder.", "Our final approach not only advances the state-of-the-art in English GEC benchmarks with an almost 10 online inference speedup but also is easily adapted to other languages.", "The Transformer is a seq2seq neural network architecture based on multi-head attention mechanism, which has become the most successful and widely", "used seq2seq models in various generation tasks such as machine translation, abstractive summarization as well as GEC.", "The original Transformer follows the balanced encoder-decoder architecture: its encoder, consisting of a stack of identical encoder layers, maps an input sentence x = ( x 1 , . . . , x n ) to a sequence of continuous representation z = ( z 1 , . . . , z n ) ; and its decoder, which is composed of a stack of the same number of identical decoder layers as the encoder, generates an output sequence o = ( o 1 , . . . , o m ) given z .", "In the training phase, the model learns an autoregressive scoring model P ( y | x ; ) , implemented with teacher forcing: = arg max log P ( y | x ; ) = arg max l 1 (cid:88) i =0 log P ( y i +1 | y i , x ; ) (1) where y = ( y 1 , . . . , y l ) is the ground-truth target sequence and y i = ( y 0 , . . . , y i ) .", "As ground truth is available during training, Eq (1) can be efficiently obtained as the probability P ( y i +1 | y i , x ) at each step can be computed in parallel.", "During inference, the output sequence o = ( o 1 , . . . , o m ) is derived by maximizing the following equation: o = arg max o log P ( o | x ; ) = arg max o m 1 (cid:88) j =0 log P ( o j +1 | o j , x ; ) (2) Since no ground truth is available in the inference phase, the model has to decode only one token at each step conditioning on the previous decoded tokens o j instead of decoding in parallel as in the training phase.", "As introduced in Section 2, the Transformer decodes only one token at each step during inference.", "The autoregressive decoding style is the main bottleneck of inference efficiency because it largely reduces computational parallelism.", "For GEC, fortunately, the output sequence is usually very similar to the input with only a few edits if any.", "This special characteristic of the task makes it unnecessary to follow the original autoregressive decoding style; instead, we propose a novel decoding approach aggressive decoding which tries to decode as many tokens as possible during inference.", "The overview of aggressive decoding is shown in Figure 1, and we will discuss it in detail in the following sections.", "The core motivation of aggressive decoding is the assumption that the output sequence o = ( o 1 , . . . , o m ) should be almost the same with the input sequence x = ( x 1 , . . . , x n ) in GEC.", "At the initial step, instead of only decoding the first token o 1 conditioning on the special [ BOS ] token o 0 , aggressive decoding decodes o 1 ...n conditioning on the pseudo previous decoded tokens o 0 ...n 1 in parallel with the assumption that o 0 ...n 1 = x 0 ,...,n 1 .", "Specifically, for j { 0 , 1 , . . . , n 2 , n 1 } , o j +1 is decoded as follows: o j +1 = arg max o j +1 log P ( o j +1 | o j , x ; ) = arg max o j +1 log P ( o j +1 | o j , x ; ) = arg max o j +1 log P ( o j +1 | x j , x ; ) (3) where o j is the pseudo previous decoded tokens at step j + 1 , which is assumed to be the same with x j .", "After we obtain o 1 ...n , we verify whether o 1 ...n is actually identical to x 1 ...n or not.", "If o 1 ...n is fortunately exactly the same with x 1 ...n , the inference will finish, meaning that the model finds no grammatical errors in the input sequence x 1 ...n and keeps the input untouched.", "In more cases, however, o 1 ...n will not be exactly the same with x 1 ...n .", "In such a case, we have to stop aggressive decoding and find the first bifurcation position k so that o 1 ...k 1 = x 1 ...k 1 and o k (cid:54) = x k .", "Since o 1 ...k 1 = o 1 ...k 1 = x 1 ...k 1 , the predictions o 1 ...k could be accepted as they will not be different even if they are decoded through the original autoregressive greedy decoding.", "However, for the predictions o k +1 ...n , we have to discard and re-decode them because o k (cid:54) = o k .", "As o k (cid:54) = o k = x k , we have to re-decode for o j +1 ( j k ) one by one following the original autoregressive decoding:", "After we obtain o j ( j > k ), we try to match its suffix to the input sequence x for further aggressive decoding.", "If we find its suffix o j q...j ( q 0 ) is the unique substring of x such that o j q...j = x i q...i , then we can assume that o j +1 ... will be very likely to be the same with x i +1 ... because of the special characteristic of the task of GEC.", "If we fortunately find such a suffix match, then we can switch back to aggressive decoding to decode in parallel with the assumption o j +1 ... = x i +1 ... .", "Specifically, the token o j + t ( t > 0 ) is decoded as follows: o j + t = arg max o j + t P ( o j + t | o <j + t , x ; ) (5) In Eq (5), o <j + t is derived as follows: o <j + t = CAT ( o j , o j +1 ...j + t 1 ) = CAT ( o j , x i +1 ...i + t 1 ) (6) where CAT ( a , b ) is the operation that concatenates two sequences a and b .", "Otherwise (i.e., we cannot find a suffix match at the step), we continue decoding using the original Algorithm 1 Aggressive Decoding Input: , x = ( [ BOS ] , x 1 , . . . , x n , [ P AD ] ) , o = ( o 0 ) = ( [ BOS ] ) ; Output: o 1 ...j = ( o 1 , . . . , o j ) ; 1: Initialize j 0 ; 2: while o j (cid:54) = [ EOS ] and j < MAX LEN do 3: if o j q...j ( q 0) is a unique substring of x such that ! i : o j q...j = x i q...i then 4: Aggressive Decode (cid:101) o j +1 ... according to Eq (5) and Eq (6); 5: Find bifurcation j + k ( k > 0 ) such that (cid:101) o j +1 ...j + k 1 = x i +1 ...i + k 1 and (cid:101) o j + k (cid:54) = x i + k ; 6: o CAT ( o , (cid:101) o j +1 ...j + k ) ; 7: j j + k ; 8: else 9: Decode o j +1 = arg max o j +1 P ( o j +1 | o j , x ; ) ; 10: o CAT ( o , o j +1 ) ; 11: j j + 1 ; 12: end if 13: end while autoregressive greedy decoding approach until we find a suffix match.", "We summarize the process of aggressive decoding in Algorithm", "1. For simplifying implementation, we make minor changes in Algorithm 1: 1) we set o 0 = x 0 = [ BOS ] in Algorithm 1, which enables us to regard the initial aggressive decoding as the result of suffix match of o 0 = x 0 ; 2) we append a special token [ P AD ] to the end of x so that the bifurcation (in the 5 th line in Algorithm 1) must exist (see the bottom example in Figure 1).", "Since we discard all the computations and predictions after the bifurcation for re-decoding, aggressive decoding guarantees that generation results are exactly the same as greedy decoding (i.e., beam=1).", "However, as aggressive decoding decodes many tokens in parallel, it largely improves the computational parallelism during inference, greatly benefiting the inference efficiency.", "Even though aggressive decoding can significantly improve the computational parallelism during inference, it inevitably leads to intensive computation and even possibly introduces additional computation caused by re-decoding for the discarded predictions.", "To reduce the computational cost for decoding, we propose to use a shallow decoder, which has proven to be an effective strategy (Kasai et al., 2020; Li et al., 2021) in neural machine translation (NMT), instead of using the Transformer with balanced encoder-decoder depth as the previous state-of-the-art Transformer models in GEC.", "By combining aggressive decoding with the shallow decoder, we are able to further improve the inference efficiency.", "We follow recent work in English GEC to conduct experiments in the restricted training setting of BEA-2019 GEC shared task (Bryant et al., 2019): We use Lang-8 Corpus of Learner English (Mizumoto et al., 2011), NUCLE (Dahlmeier et al., 2013), FCE (Yannakoudakis et al., 2011) and W&I+LOCNESS (Granger; Bryant et al., 2019) as our GEC training data.", "For facilitating fair comparison in the efficiency evaluation, we follow the previous studies (Omelianchuk et al., 2020; Chen et al., 2020) which conduct GEC efficiency evaluation to use CoNLL-2014 (Ng et al., 2014) dataset that contains 1,312 sentences as our main test set, and evaluate the speedup as well as Max-Match (Dahlmeier and Ng, 2012) precision, recall and F 0 .", "5 using their official evaluation scripts 4 .", "For validation, we use CoNLL-2013 (Ng et al., 2013) that contains 1,381 sentences as our validation set.", "We also test our approach on NLPCC-18 Chinese GEC shared task (Zhao et al., 2018), following their training 5 and evaluation setting, to verify the effectiveness of our approach in other languages.", "To compare with the state-of-the-art approaches in English GEC that pretrain with synthetic data, 4 https://github.com/nusnlp/m2scorer 5 Following Chen et al. (2020), we sample 5,000 training instances as the validation set.", "we also synthesize 300M error-corrected sentence pairs for pretraining the English GEC model following the approaches of Grundkiewicz et al. (2019) and Zhang et al. (2019).", "Note that in the following evaluation sections, the models evaluated are by default trained without the synthetic data unless they are explicitly mentioned.", "We use the most popular GEC model architecture Transformer (big) model (Vaswani et al., 2017) as our baseline model which has a 6-layer encoder and 6-layer decoder with 1,024 hidden units.", "We train the English GEC model using an encoder-decoder shared vocabulary of 32K Byte Pair Encoding (Sennrich et al., 2016) tokens and train the Chinese GEC model with 8.4K Chinese characters.", "We include more training details in the supplementary notes.", "For inference, we use greedy decoding 6 by default.", "All the efficiency evaluations are conducted in the online inference setting (i.e., batch size=1) as we focus on instantaneous GEC.", "We perform model inference with fairseq 7 implementation using Pytorch 1.5.1 with 1 Nvidia Tesla V100-PCIe of 16GB GPU memory under CUDA 10.2.", "We evaluate aggressive decoding in our validation set (CoNLL-13) which contains 1,381 validation examples.", "As shown in Table 1, aggressive decoding achieves a 7 8 speedup over the original autoregressive beam search (beam=5), and generates exactly the same predictions as greedy decoding, as discussed in Section 3.1.2.", "Since greedy decoding can achieve comparable overall performance (i.e., F 0 . 5 ) with beam search while it tends 6 Our implementation of greedy decoding is simplified for higher efficiency ( 1 . 3 1 . 4 speedup over beam=5) than the implementation of beam=1 decoding in fairseq (around 1 . 1 speedup over beam=5).", "to make more edits resulting in higher recall but lower precision, the advantage of aggressive decoding in practical GEC applications is obvious given its strong performance and superior efficiency.", "We further look into the efficiency improvement by aggressive decoding.", "Figure 2 shows the speedup distribution of the 1,381 examples in CoNLL-13 with respect to their edit ratio which is defined as the normalized (by the input length) edit distance between the input and output.", "It is obvious that the sentences with fewer edits tend to achieve higher speedup, which is consistent with our intuition that most tokens in such sentences can be decoded in parallel through aggressive decoding; on the other hand, for the sentences that are heavily edited, their speedup is limited because of frequent re-decoding.", "To give a more intuitive analysis, we also present concrete examples with various speedup in our validation set to understand how aggressive decoding improves the inference efficiency in Table", "2. Moreover, we conduct an ablation study to in-Speedup Edit Ratio Input Output 16.7 0 Personally , I think surveillance technology such as RFID ( radio-frequency identification ) should not be used to track people , for the benefit it brings to me can not match the concerns it causes .", "vestigate whether it is necessary to constrain the maximal aggressive decoding length 8 , because it might become highly risky to waste large amounts of computation because of potential re-decoding for a number of steps after the bifurcation if we aggressively decode a very long sequence in parallel.", "Table 3 shows the online inference efficiency with different maximal aggressive decoding lengths.", "It appears that constraining the maximal aggressive 8 Constraining the maximal aggressive decoding length to L max means that the model can only aggressively decode at most L max tokens in parallel.", "decoding length does not help improve the efficiency; instead, it slows down the inference if the maximal aggressive decoding length is set to a small number.", "We think the reason is that sentences in GEC datasets are rarely too long.", "For example, the average length of the sentences in CoNLL-13 is 21 and 96% of them are shorter than 40 tokens.", "Therefore, it is unnecessary to constrain the maximal aggressive decoding length in GEC.", "We study the effects of changing the number of encoder and decoder layers in the Transformer-big on both the performance and the online inference efficiency.", "By comparing 6+6 with 3+6 and 9+6 in Table 4, we observe the performance improves as the encoder becomes deeper, demonstrating the importance of the encoder in GEC.", "In contrast, by comparing the 6+6 with 6+3 and 6+9, we do not see a substantial fluctuation in the performance, indicating no necessity of a deep decoder.", "Moreover, it is observed that a deeper encoder does not significantly slow down the inference but a shallow decoder can greatly improve the inference efficiency.", "This is because Transformer encoders can be parallelized efficiently on GPUs, whereas Transformer decoders are auto-regressive and hence the number of layers greatly affects decoding speed, as discussed in Section 3.2.", "These observations motivate us to make the encoder deeper and the decoder shallower.", "As shown in the bottom group of Table 4, we try different combinations of the number of encoder and decoder layers given approximately the same parameterization budget as the Transformer-big.", "It is interesting to observe that 7+5, 8+4 and 9+3 achieve the comparable and even better performance than the Transformer-big baseline with much less computational cost.", "When we further increase the encoder layer and decrease the decoder layer, we see a drop in the performance of 10+2 and 11+1 despite the improved efficiency because it becomes difficult to train the Transformer with extremely imbalanced encoder and decoder well, as indicated 9 by the previous work (Kasai et al., 2020; Li et al., 2021; Gu and Kong, 2020).", "Since the 9+3 model achieves the best result with an around 2 speedup in the validation set with almost the same parameterization budget, we choose it as the model architecture to combine with aggressive decoding for final evaluation.", "We evaluate our final approach shallow aggressive decoding which combines aggressive decoding with the shallow decoder.", "Table 5 shows the performance and efficiency of our approach and recently proposed efficient GEC models that are all faster than the Transformer-big baseline in CoNLL-14 test set.", "Our approach (the 9+3 model with aggressive decoding) that is pretrained with synthetic data achieves 63.5 F 0 .", "5 with 10 .", "3 speedup over the Transformer-big baseline, which outperforms the majority 10 of the efficient GEC models in terms of either quality or speed.", "The only model that shows advantages over our 9+3 model is GECToR which is developed based on the powerful pretrained mod-9 They show that sequence-level knowledge distillation (KD) may benefit training the extremely imbalanced Transformer in NMT.", "However, we do not conduct KD for fair comparison to other GEC models in previous work.", "10 It is notable that PIE is not strictly comparable here because their training data is different from ours: PIE does not use the W&I+LOCNESS corpus.", "els (e.g., RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019)) with its multi-stage training strategy.", "Following GECToR 's recipe, we leverage the pretrained model BART (Lewis et al., 2019) to initialize a 12+2 model which proves to work well in NMT (Li et al., 2021) despite more parameters, and apply the multi-stage fine-tuning strategy used in Stahlberg and Kumar (2020).", "The final single model 11 with aggressive decoding achieves the state-of-the-art result 66.4 F 0 .", "5 in the CoNLL-14 test set with a 9 .", "6 speedup over the Transformer-big baseline.", "Unlike GECToR and PIE that are difficult to adapt to other languages despite their competitive speed because they are specially designed for English GEC with many manually designed language-specific operations like the transformation of verb forms (e.g., VBD VBZ) and prepositions (e.g., in at), our approach is data-driven without depending on language-specific features, and thus can be easily adapted to other languages (e.g., Chi-nese).", "As shown in Table 6, our approach consistently performs well in Chinese GEC, showing an around 12 .", "0 online inference speedup over the Transformer-big baseline with comparable performance.", "The state-of-the-art of GEC has been significantly advanced owing to the tremendous success of seq2seq learning (Sutskever et al., 2014) and the Transformer (Vaswani et al., 2017).", "Most recent work on GEC focuses on improving the performance of the Transformer-based GEC models.", "However, except for the approaches that add synthetic erroneous data for pretraining (Ge et al., 2018a; Grundkiewicz et al., 2019; Zhang et al., 11 The same model checkpoint also achieves the state-of-the-art result 72.9 F 0 . 5 with a 9 . 3 speedup in the BEA-19 test set. 2019; Lichtarge et al., 2019; Zhou et al., 2020; Wan et al., 2020), most methods that improve performance (Ge et al., 2018b; Kaneko et al., 2020) introduce additional computational cost and thus slow down inference despite the performance improvement.", "To make the Transformer-based GEC model more efficient during inference for practical application scenarios, some recent studies have started exploring the approaches based on edit operations.", "Among them, PIE (Awasthi et al., 2019) and GECToR (Omelianchuk et al., 2020) propose to accelerate the inference by simplifying GEC from sequence generation to iterative edit operation tagging.", "However, as they rely on many language-dependent edit operations such as the conversion of singular nouns to plurals, it is difficult for them to adapt to other languages.", "LaserTagger (Malmi et al., 2019) uses the similar method but it is data-driven and language-independent by learning operations from training data.", "However, its performance is not so desirable as its seq2seq counterpart despite its high efficiency.", "The only two previous efficient approaches that are both language-independent and good-performing are Stahlberg and Kumar (2020) which uses span-based edit operations to correct sentences to save the time for copying unchanged tokens, and Chen et al. (2020) which first identifies incorrect spans with a tagging model then only corrects these spans with a generator.", "However, all the approaches have to extract edit operations and even conduct token alignment in advance from the error-corrected sentence pairs for training the model.", "In contrast, our proposed shallow aggressive decoding tries to accelerate the model inference through parallel autoregressive decoding which is related to some previous work (Ghazvininejad et al., 2019; Stern et al., 2018) in neural machine translation (NMT), and the imbalanced encoder-decoder architecture which is recently explored by Kasai et al. (2020) and Li et al. (2021) for NMT.", "Not only is our approach language-independent, efficient and guarantees that its predictions are exactly the same with greedy decoding, but also does not need to change the way of training, making it much easier to train without so complicated data preparation as in the edit operation based approaches.", "In this paper, we propose Shallow Aggressive Decoding (SAD) to accelerate online inference efficiency of the Transformer for instantaneous GEC.", "Aggressive decoding can yield the same prediction quality as autoregressive greedy decoding but with much less latency.", "Its combination with the Transformer with a shallow decoder can achieve state-of-the-art performance with a 9 12 online inference speedup over the Transformer-big baseline for GEC.", "Based on the preliminary study of SAD in GEC, we plan to further explore the technique for accelerating the Transformer for other sentence rewriting tasks, where the input is similar to the output, such as style transfer and text simplification.", "We believe SAD is promising to become a general acceleration methodology for writing intelligence models in modern writing assistant applications that require fast online inference.", "We thank all the reviewers for their valuable comments to improve our paper.", "We thank Xingxing Zhang, Xun Wang and Si-Qing Chen for their insightful discussions and suggestions.", "The work is supported by National Natural Science Foundation of China under Grant No.62036001.", "The corresponding author of this paper is Houfeng Wang." ]
[ "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "other", "result", "abstain", "other", "abstain", "result", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "objective", "abstain", "abstain", "objective", "method", "other", "other", "other", "other" ]
[ "Large-scale language models (LMs) pretrained on massive corpora of text, such as GPT-2, are powerful open-domain text generators.", "However, as our systematic examination reveals, it is still challenging for such models to generate coherent long passages of text (e.g., 1000 tokens), especially when the models are fine-tuned to the target domain on a small corpus.", "Previous planning-then-generation methods also fall short of producing such long text in various domains.", "To overcome the limitations, we propose a simple but effective method of generating text in a progressive manner, inspired by generating images from low to high resolution.", "Our method first produces domain-specific content keywords and then progressively refines them into complete passages in multiple stages.", "The simple design allows our approach to take advantage of pretrained LMs at each stage and effectively adapt to any target domain given only a small set of examples.", "We conduct a comprehensive empirical study with a broad set of evaluation metrics, and show that our approach significantly improves upon the fine-tuned large LMs and various planning-then-generation methods in terms of quality and sample efficiency.", "Human evaluation also validates that our model generations are more coherent.", "1 1 Introduction Generating coherent long text (e.g., 1000s of tokens) is useful in myriad applications of creating reports, essays, and other long-form content.", "Yet the problem is particularly challenging as it demands models to capture global context, plan content, and produce local words in a consistent manner.", "Prior studies on long text generation have typically limited to outputs of 50-200 tokens (Shen et al., 2019; Bosselut et al., 2018; Zhao et al., 2020).", "Recent large-scale pretrained language models (LMs), such as GPT-2 (Radford et al., 2019) and BART (Lewis et al., 2020), emerged as an impressive open-ended text generator capable of producing surprisingly fluent text.", "The massive LMs are typically pretrained on large corpora of generic text once, and then fine-tuned with small domain-specific data.", "The latest work has mostly focused on the regime of relatively short text with low hundreds of tokens.", "For example, Holtzman et al. (2020); See et al. (2019); Hua and Wang (2020) studied GPT-2 and BART generations with a maximum length ranging from 150 to 350 tokens.", "In this work, we study the problem of generating coherent, much longer passages of text (e.g., 1000 tokens).", "GPT-3 (Brown et al., 2020) was reported to produce long essays, yet the results seem to need extensive human curations (e.g., MarketMuse; Gar-dian), and the system is not publicly available to adapt to arbitrary desired domains.", "extra-4314", "2 Related Work Content planning in generation.", "The idea of separate content planning and surface realization has been studied in early text generation systems (Reiter and Dale, 1997).", "Recent neural approaches have also adopted similar planning-then-generation strategies for data-to-text (Moryossef et al., 2019; Puduppully et al., 2019), storytelling (Fan et al., 2019; Yao et al., 2019; Xu et al., 2020), machine translation (Ford et al., 2018), and others (Hua and Wang, 2019; Yao et al., 2017).", "These models often involve customized architectures incompatible with the existing large LMs.", "Scaling those models for long text generation thus can require expensive training, which restricts systematic studies.", "On the other hand, it is possible to adopt some of the content planning strategies (e.g., summaries or SRL sequences as the plans (Fan et al., 2019)), and repurpose pretrained LMs for generation in each stage.", "However, these strategies 4315 with dedicated intermediate plans and a pre-fixed number (typically", "long text.", "We find that samples produced by GPT-2 fine-tuned on small domain-specific corpora exhibit various imperfections, including excessive repetitiveness and incoherence between sentences far apart.", "Figure 1 measures the coherence of text generated by the fine-tuned GPT-2 w.r.t the BERT next sentence prediction (Devlin et al., 2019) score.", "As the figure shows, GPT-2 models (regardless of the model size) exhibit a significant gap in the score compared with human text, hence falling short in generating coherent text.", "We hypothesize that the problem is mainly caused by the sequential generation order of the LMs, which makes global content planning of the passage difficult, especially when the generated text is long and contains thousands of words.", "One could potentially adopt the recent planning-then-generation or non-monotonic methods (Sec 2), yet those methods either require specialized neural architectures that need costly retraining for each domain (Gu et al., 2019; Stern et al., 2019; Chan et al., 2019; Fan et al., 2019), or rely on dedicated intermediate content plans (e.g., summaries, SRL labels) (Fan et al., 2019; Yao et al., 2019) with limited flexibility and producing sub-optimal results as shown in our experiments.", "To overcome the limitations, we introduce a new method for Pro gressive Gen eration of Text (Pro-Gen).", "We observe that generation of some words (e.g., stop words) does not require many contexts, while other words are decisive and have long-term impact on the whole content of the passage.", "Motivated by this observation, our approach first produces a sequence of most informative words, then progressively refines the sequence by adding finer-grained details in multiple stages, until completing a full passage.", "The generation at each stage is conditioning on the output of the preceding stage which provides anchors and steers the current generation (Figure 2).", "The intermediate words produced at each stage are defined based on a simple TF-IDF informativeness metric.", "The approach enjoys several core advantages: (1) Although the progressive approach implements a conceptually non-monotonic generation process, generation at each stage can still be performed in a left-to-right manner and thus is directly compatible with the powerful pretrained monotonic LMs.", "The LMs at different stages are easily fine-tuned to accommodate a target domain using only small, independently constructed data.", "Intuitively, each LM is addressing a sub-task of mapping a sequence to a finer-resolution one, which is much simpler than the overall task of mapping from conditions to full passages of text.", "In this work, we use BART (Lewis et al., 2020) for generation at each stage, though one can also plug in other off-the-shelf LMs.", "As seen from Figure 1, ProGen can generate more much coherent text compared with GPT-2 and nearly match human text in terms of the BERT-NSP score; (2) In contrast to the typical 2-stage planning-then-generation in prior work, the simple progressive strategy offers added flexibility for an arbitrary number of intermediate stages, yielding improved results; (3) The training data for each stage is extracted from domain corpus using the simple TF-IDF metric, without need of additional resources (e.g., pretrained summarization models) as in prior work, making the method broadly applicable to various domains and languages.", "We conduct extensive empirical studies on the CNN News (Hermann et al., 2015) and Writing-Prompts (Fan et al., 2018) corpora, evaluating various systems by a wide-range of automatic metrics as well as human judgement.", "Results show that ProGen achieves strongly improved performance by decomposing the generation into more progressive stages.", "Our method produces diverse text passages of higher quality and coherence than a broad set of models, including fine-tuned GPT-2, BART, and other various planning-then-generation strategies.", "We next concretely define the order of generation, namely, which words should each stage generates.", "Specifically, we propose a simple method Condition jeep dog barking of fi cer skinny jeep sandy shouted jeep dog circles vehicle barking of fi cer yellow skinny animal circling jeep spit vehicle tumbling rough sandy adjusting gun proceeded canine dog barking LM 1 `` Shut the dog up ,'' shouted my head of fi cer from the jeep .", "2) of stages can have limited flexibility, leading to sub-optimal results as shown in our empirical study.", "Besides, creating training data for planning requires additional resources (e.g., pretrained summarization models or SRL models) which are not always available (e.g., in certain domains or for low-resource languages).", "In contrast, we propose a simple way for designing the intermediate stages based on word informativeness, which can flexibly increase the number of stages for improved results, and easily create training data for all stages without additional models.", "Non-monotonic generation and refinement.", "Another relevant line of research is non-monotonic generation (Welleck et al., 2019; Gu et al., 2019; Stern et al., 2019; Chan et al., 2019; Zhang et al., 2020), infilling (Zhu et al., 2019; Shen et al., 2020; Qin et al., 2020), or refinement (Lee et al., 2018; Novak et al., 2016; Mansimov et al., 2019; Kasai et al., 2020) that differs from the restricted left-to-right generation in conventional LMs.", "Again, those approaches largely depend on specialized architectures and inference, making them difficult to be integrated with the powerful pretrained LMs.", "The prior studies have focused on generating short text.", "Our proposed coarse-to-fine progressive generation conceptually presents a non-monotonic process built upon the pretrained monotonic LMs, which permits fast adaptation to any target domain and generation of much longer text.", "Long text generation.", "Previous work has made attempts to generate text of up to two or three hundred tokens.", "Those methods often adopt the similar idea of planning-then-generation as above (Shen et al., 2019; Zhao et al., 2020; Bosselut et al., 2018; See et al., 2019; Hua and Wang, 2020; Rashkin et al., 2020).", "Another line of work instead focuses on extending the transformer architecture (Vaswani et al., 2017) to model longer text sequences (e.g., Dai et al., 2019; Wang et al., 2020; Choroman-ski et al., 2021, etc).", "For example, Liu et al. (2018) used a hybrid retrieval-generation architecture for producing long summaries; Dai et al. (2019) showed long text samples qualitatively.", "Our work systematically examines the pretrained LMs in generating long domain-specific text, and proposes a new approach that empowers pretrained LMs for producing samples of significantly higher-quality.", "One of the main challenges in generating long coherent passages is modeling long-range dependencies across the entire sequences (e.g., 1000 tokens).", "We propose a progressive generation approach that is conceptually simple yet effective.", "Intuitively, progressive generation divides the complex problem of generating the full passage into a series of much easier steps of generating coarser-grained intermediate sequences.", "Contrary to generating everything from left to right from scratch, our progressive generation allows the model to first plan globally and then shift attention to increasingly finer details, which results in more coherent text.", "Figure 2 illustrates the generation process.", "Let y := [ y 1 , y 2 , . . . , y T ] be the output text, where each y i is a token of language (a word or a sub-word).", "The output sequences are generated either conditionally on any other information x ( e.g. , generations of a story given a prompt), or unconditionally (in which case we assume x while keeping the same notation).", "Instead of generating the full passage y directly, we propose to add multiple intermediate stages: x c 1 c 2 c K y , where for each stage k { 1 , . . . , K } , c k is an intermediate sequence containing information of the passage at certain granularity.", "For instance, at the first stage, c 1 can be seen as a highest-level content plan consisting of the most informative tokens such as key entities.", "Then, based on the plan, we gradually refine them into subsequent c k , each of which contains finer-grained information than that of the preceding stage.", "At the final stage, we refine c K into the full passage by adding the least informative words (e.g., stop words).", "The generation process corresponds to a decomposition of the conditional probability as: P ( y , { c k }| x ) = P ( c 1 | x ) Kk =2 P ( c k | c k 1 , x ) P ( y | c K , x ) .", "As the above intuition, c k at early stages as the high-level content plans should contain informative or important words, to serve as skeletons for subsequent enrichment.", "that constructs a vocabulary V k for each stage k , based on the importance of words in the target domain.", "Each particular stage k only produces tokens belonging to its vocabulary V k .", "By the progressive nature of the generation process, we have V 1 VK V .", "That is, V 1 contains the smallest core set of words in the domain, and the vocabularies gradually expand at later stages until arriving the full vocabulary V .", "Note that vocabularies in later stages are supersets of those in earlier stages.", "This allows the later stages to remedy and polish potential mistakes made in earlier stages when necessary.", "We discuss the construction of the vocabularies in the below.", "Stage-wise vocabularies based on word importance.", "Given a text corpus D of the target domain with the full vocabulary V , we define the importance scores of words in V based on the TF-IDF metric.", "We then rank all the words and assign the top V k words to the intermediate vocabulary V k .", "Here V k is a hyper-parameter controlling the size of V k .", "More concretely, for each word w V , we first compute its standard TF-IDF score (Salton and McGill, 1986) in each document d D , which essentially measures how important w is to d .", "The importance of the word w in the domain is then defined as the average TF-IDF score across all documents containing w : importance( w, D ) = (cid:2) d D TF _ IDF( w, d ) DF( w, D ) , (2) where TF _ IDF( w, d ) is the TF-IDF score of word w in document d ; and DF( w, D ) is the document Algorithm 1 Training for Progressive Text Generation Inputs: Domain corpus D Vocabulary sizes for K stages K pretrained LMs ( e.g. GPT-2 or BART) 1: Construct stage-wise vocabularies {V k } based on word importance", "Output: Fine-tuned LMs for generation at all stages in a progressive manner", "frequency, i.e. , the number of documents in the corpus that contain the word w .", "Pretrained language models as building blocks.", "Compared to many of the previous planning-then-generation and non-monotonic generation methods, one of the key advantages of our progressive generation design is the direct compatibility with the powerful pretrained LMs that perform left-to-right generation.", "Specifically, although our approach implements a non-monotonic generation process that produces importance words first, we can generate intermediate sequences c k at each stage still in a left-to-right manner.", "Thus, we can plug pretrained LM, such as GPT-2 or BART, into each stage to carry out the generation.", "As described more in section 3.2, for each stage k , we can conveniently construct stage-specific training data from the domain corpus D using the stage-wise vocabulary V k , and fine-tune the stagek LM in order to generate intermediate sequences at the stage that are pertaining to the target domain.", "to-4317", "Model configs.", "We use BARTs for all stages of generation.", "Due to computation limitations, we experiment models with 2, 3, 4-stages generations.", "In 4318 our 2-stage model, our first stage covers about 25% of all content; in the 3-stage model, the first and second stages cover 15% and 25% of all content, respectively; and in the 4-stage model, our first three stages cover 15%, 20%, 25% of all content.", "ken distributions to ensure the stagek LM only produces tokens belonging to V k .", "In practice, we found it is not necessary, as the pretrained LM can usually quickly learns the pattern through fine-tuning and generate appropriate tokens during inference.", "In our experiments we use BART for all stages, since BART is an encoder-decoder model which can conveniently take as inputs the resulting sequence from the preceding stage and generate new.", "(For the first stage in an unconditional generation task, we simply set x = .)", "We note that GPT-2, and other relevant pretraiened LMs, can indeed also be used as a conditional generator (Radford et al., 2019; Liu et al., 2018) and thus be plugged into any of stages.", "Our approach permits straightforward training/fine-tuning of the (pretrained) LMs at different stages given the domain corpus D .", "In particular, we can easily construct independent training data for each stage, and train all LMs in parallel.", "Note that no additional resources such as pretrained summarization or semantic role labeling models are requested as in previous work, making our approach directly applicable to a potentially broader set of domains and languages.", "We plan to explore the use of our method in multi-lingual setting in the future.", "More concretely, for each stage k , we use the stage vocabularies V k 1 and V k to filter all relevant tokens in the documents as training data.", "That is, given a document, we extract the subsequence c k 1 of all tokens from the document that are belonging to V k 1 , and similarly extract sub-sequence c k belonging to V k .", "The c k 1 and c k are then used as the input and the ground-truth output, respectively, for training the LM at stage k with maximum likelihood learning.", "Therefore, given the stage-wise vocabularies {V k } , we can automatically extract training data from the domain corpus D for different stages, and train the LMs separately.", "In the multi-stage generation, the intermediate sequences are not natural language.", "Yet we found that fine-tuning pretrained LMs (such as BART and GPT-2) to generate the intermediate sequences is indeed very efficient in terms of data and computation.", "We tried training other models such as small sequence-to-sequence models and n-gram models from scratch, which we found is much harder, requiring more data, or yielding inferior performance.", "This again highlights the importance of using pretrained LMs, as enabled by our simple method design.", "Stage-level exposure bias and data noising.", "In the above training process, the outputs of each LM are conditioning on the ground-truth input sequences extracted from the real corpus.", "In contrast, at generation time, the LM takes as inputs the imperfect sequences produced at the previous stage, which can result in new mistakes in the outputs since the LM has never be exposed to noisy inputs during training.", "Thus, the discrepancy between training and generation can lead to mistakes in generation accumulating through the stages.", "The phenomenon resembles the exposure bias issue (Ran-zato et al., 2016) of sequential generation models at token level, where the model is trained to predict the next token given the previous ground-truth tokens, while at generation time tokens generated by the model itself are instead used to make the next prediction.", "To alleviate the issue and increase the robustness of each intermediate LM, we draw on the rich literature of addressing token-level exposure bias (Xie et al., 2017; Tan et al., 2019).", "Specifically, during training, we inject noise into the ground-truth inputs at each stage by randomly picking an n -gram ( n { 1 , 2 , 3 , 4 } ) and replacing it with another randomly sampled n -gram.", "The data noising encourages the LMs to learn to recover from the mistakes in inputs, leading to a more robust system during generation.", "Domains.", "We evaluate on two text generation domains including: (1) CNN News (Hermann et al., 2015) for unconditional generation.", "(2) Writing-Prompts (Fan et al., 2018) for conditional story generation.", "The task is to generate a story given a prompt.", "The two datasets are chosen since they both contain long documents, with CNN's average and maximum length being 512 and 926, and Writ-ingPrompts's being 437 and 942, respectively.", "To demonstrate the data efficiency of our approaches adapting to target domains, we sample 1,000 documents in each dataset for training.", "For model training, we follow the same protocol as (See et al., 2019) to fine-tune all pretrained models until convergence.", "To combat exposure bias, we add noise to the training data as described in Sec 3.2, with the probability of replacing 1,2,3,4-grams 0.1/0.05/0.025/0.0125.", "In the generation phase, we use top-p decoding (Holtzman et al., 2020) with p = 0 .", "95 to generate 1024 tokens at maximum.", "Experiments were conducted with RTX6000 GPUs.", "It took around 4 hours for model fine-tuning and generation with a single GPU.", "Comparison methods.", "We compare with a wide range of baselines, categorized into two groups: (1) The large pretrained LMs including BART (Lewis et al., 2020) and GPT-2 in both small and large sizes (Radford et al., 2019).", "The LMs generate text in a standard left-to-right manner; (2) Progressive generation with various strategies adopted in the prior planning-then-generation work.", "Same as our proposed method, each stage adapts a pretrained BART for generation.", "Specifically, Summary first generates a short summary text as the content plan and conditioning on the summary produces the full passage of text (Fan et al., 2019).", "For training, summaries are obtained using the state-of-the-art pretrained CNN news summarization model based on BART; Keyword first generates a series of keywords, based on which the full text is generated in the next stage.", "Following (Yao et al., 2019), the keywords are extracted with the RAKE algorithm (Rose et al., 2010) for training; SRL follows the recent work (Fan et al., 2019) by first generating a sequence of predicates and arguments and then producing the full text conditionally.", "The same semantic role labeling tool as in the prior work is used here to create training data.", "SRL+NER and SRL+Coref further augment the SRL method by an additional stage of generating entity anonymized text conditioning on the predicates sequence prior to the final stage (Fan et al., 2019).", "SRL+NER uses an NER model to mask all entities, while SRL+Coref applies coreference resolution to mask all clusters of mentions.", "We use the same NER and coreference tools as in (Fan et al., 2019).", "Finally, as a reference, we also present the results of Human -written text (i.e., the text in the dev set).", "To evaluate the generation quality for the domain-specific open-ended generation as studied here, we primarily measure the closeness between two sets of text, one generated by the model and the other the real text from the target domain.", "We evaluate with a broad array of automatic metrics, including lexical-based quality metrics and semantic-based quality metrics.", "We also evaluate the generation diversity .", "MS-Jaccard (MSJ) is a lexical-based metric (Montahaei et al., 2019), where MSJn measures the similarity of n -grams frequencies between two sets of text with Jaccard index.", "TF-IDF Distance (TID) is defined as the distance between the average TF-IDF features of two text sets.", "We use it as an additional lexical-based quality measure.", "Frchet BERT Distance (FBD) is a semantic-based metric (Montahaei et al., 2019) that measures the Frchet Distance in the BERT feature space between the generated and real text.", "By using the BERT features from shallow (S), medium (M), and deep (D) layers, we can compute FBD-S/M/D, respectively.", "metric (Shi et al., 2018) measuring how well the generated text covers n-grams occurred in the test set.", "Harmonic BLEU (HA-BLEU) (Shi et al., 2018) is an aggregated quality and diversity metric that incorporates both the standard BLEU (i.e., precision) and the Backward BLEU (i.e., recall).", "Figures 3 and 4 show the results of the various systems on the news and story domains, respectively, measured with different metrics against test set.", "We give more complete results in the appendix.", "We can see that our progressive generation approach consistently outperforms the standard, single-stage LMs ( GPT2-Small , GPT2-Large and BART ) by a large margin on almost all metrics in both domains.", "Further, by increasing the number of progression stages, our method steadily achieves even stronger performance.", "This highlights the benefits of the flexible progressive generation strategy.", "mixed results across the different metrics.", "For example, Summary achieves strong performance in terms of the semantic-based quality metric FBD-D (partially because the summaries are closer to the real text in the BERT feature space), but significantly falls behind other models in terms of diversity (B-BLEU4) and other quality metrics like MSJ and HA-BLEU.", "Similarly, the SRL -based methods give only mediocre results in terms of the semantic-based FBD-D.", "In contrast, our approach maintains a relatively consistent performance level.", "In particular, our 4-stage model, ProGen-4 , is steadily among the best across all metrics, further validating Fluency Coherence passage sentence (%) BART 4.59 2.95 70.29 GPT2-Small 4.42 3.41 74.69 Summary 4.39 3.37 76.19 ProGen-4 (Ours) 4.46 3.83 86.22 Table 1: Human evaluation results on CNN.", "These results also indicate the necessity of using a large diverse set of automatic metrics for a comprehensive evaluation, and motivate human studies for further assessment.", "3: GoldPlan Results on CNN.", "First Stage: beckham \\n liverpool bayern chelsea \\n beckham chelsea mancini \\n liverpool \\n club [...] Second Stage: beckham newcastle \\n footballer acquisition transfer liverpool bayern munich chelsea munich \\n beckham chelsea season italian siro gianluigi mancini season italy \\n season liverpool season \\n wembley club \\n [...] Final Stage: (CNN) England striker Ashley Beckham has joined Premier League strugglers Newcastle United.", "\\n England Football Association chairman Stephen McClaren confirmed the acquisition of the 20-year-old England international on Wednesday, making the move official following a lucrative transfer agreement with Liverpool, Bayern Munich and Chelsea.", "\\n Beckham spent the first half of this year on loan at Chelsea and did not start a single match during the season at the Italian capital's San Siro despite being named the player of the season of the year by Italy.", "\\n However, he played 17 times last season and 12 times this season for Liverpool, making 33 league appearances and scoring seven goals.", "\\n He is currently third on the all-time England goalscoring list behind only England manager Alex Ferguson and new club teammate Paul Gascoigne.", "[...] Table 4: An excerpt of a 3-stage generated example by ProGen-3 on the CNN News domain.", "In our human study, we asked three university students who are proficient English speakers to evaluate the coherence and fluency of the generated text.", "To better assess the coherence of the long passages of text, we evaluate at both the passage level and the finer-grained sentence level.", "More concretely, for passage-level coherence , human raters assign a coherence score to each full-length text sample, on a 5-point Likert scale.", "For a more detailed assessment, we further evaluate sentence-level coherence , where human raters label each sentence in the text passage with 0 or 1, indicating whether the particular sentence is coherent with the proceeding context in the passage.", "We then calculate the average percentage of coherent sentences in the generated text by each model.", "Human raters also evaluate the language quality for a fluency score on a 5-point Likert scale.", "We compare our method with the systems that show highest generation quality in automatic evaluation, including BART , GPT2-Small , and Summary .", "We evaluated 50 examples for each comparison model on the CNN domain.", "The Pearson correlation coeffi-cient of human scores is 0.52, showing moderate inter-rater agreement.", "Table 1 shows the results.", "All systems receive close fluency scores.", "Our approach obtained significantly higher coherence scores at both passage and sentence levels.", "In particular, over 86% sentences in our model generations are considered as coherent with the context, improving over other models by at least 10 absolute percent.", "Sample efficiency.", "We study how the progressive generation could improve the sample efficiency of large LMs fine-tuned to target domains.", "The intuition is that by focusing on the subsets of informative words, the early stages can more effi-ciently capture the domain-specific characteristics and then steer the subsequent refinement stages.", "Figure 5 shows the results where we report the FBD score averaged over FBD-S/M/D.", "We can see our approach can make more efficient use of the training data in learning to generate high quality samples.", "For example, with only 1K training examples, our method achieves comparable results with large LMs trained on 30K examples.", "Generation with gold plans.", "To investigate the importance of dividing the generation process into stages and what the stages learn separately, we add another set of text into our comparison.", "It is a 2-stages model whose first stage is the ground truth (gold plan) while the second stage kept the same (a BART model), shown as GoldPlan in Table 3.", "Note that with gold plan, our model greatly decreases the gap with human text in terms of lexical (TID) and semantic (FBD-D) quality metrics.", "The results highlight the importance of plans in text 4321 generation.", "The intermediate plans act as an information bottleneck, and high-quality plans could lead to high-quality text generation.", "References Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, and Yejin Choi.", "Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sar-los, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. 2021.", "Rethinking attention with performers.", "ICLR .", "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car-bonell, Quoc Le, and Ruslan Salakhutdinov.", "2019.", "Transformer-XL: Attentive language models beyond a fixed-length context.", "In ACL , pages 29782988.", "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", "2019.", "BERT: Pre-training of deep bidirectional transformers for language understanding.", "In NAACL , pages 41714186.", "Angela Fan, Mike Lewis, and Yann Dauphin.", "2018.", "Hierarchical neural story generation.", "In ACL , pages 889898.", "Angela Fan, Mike Lewis, and Yann Dauphin.", "2019.", "Strategies for structuring story generation.", "In ACL .", "Nicolas Ford, Daniel Duckworth, Mohammad Norouzi, and George E Dahl.", "2018.", "The importance of generation order in language modeling.", "In EMNLP .", "Gardian.", "A robot wrote this entire article.", "are you scared yet, human?", "Jiatao Gu, Qi Liu, and Kyunghyun Cho.", "2019.", "Insertion-based decoding with automatically inferred generation order.", "TACL , 7:661676.", "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen-stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom.", "2015.", "Teaching machines to read and comprehend.", "In NeurIPS , pages 16931701.", "Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi.", "2020.", "The curious case of neural text degeneration.", "In ICLR .", "Xinyu Hua and Lu Wang.", "2019.", "Sentence-level content planning and style specification for neural text generation.", "In EMNLP .", "Xinyu Hua and Lu Wang.", "2020.", "PAIR: Planning and iterative refinement in pre-trained transformers for long text generation.", "In EMNLP , pages 781793.", "Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu.", "2020.", "Non-autoregressive machine translation with disentangled context transformer.", "In ICML .", "Jason Lee, Elman Mansimov, and Kyunghyun Cho.", "2018.", "Deterministic non-autoregressive neural sequence modeling by iterative refinement.", "In EMNLP , pages 11731182.", "Effect of data noising.", "We study the ablation of data noising, to check whether the noising operation really helps reduce stage-wise exposure bias (Sec 3.2) as we expected.", "Table 2 shows the comparison between models with and without noise in training.", "The added noise generally brings performance improvement in terms of various metrics.", "Example generations.", "Table 4 shows an example of text generated via three stages.", "We can see our model first generates the key subject beckham and the team name liverpool in the very first stage, then adds more fine-grained details like acquisition, transfer in the second stage and finally expands the keywords into a full document describing Beck-ham's joining a new team.", "We have proposed a new approach for domain-specific generation of long text passages in a progressive manner.", "Our method is simple and efficient by fine-tuning large-scale off-the-shelf language models.", "We conduct extensive experiments using a variety of metrics and human studies.", "We demonstrate that our method outperforms a wide range of large pretrained LMs with single-stage generation or prior planning-then-generation strategies, in terms of quality and coherence of the produced samples.", "The multi-stage generation also opens up new opportunities to enhance controllability of text generation, which we would love to explore in the future." ]
[ "abstain", "result", "abstain", "objective", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "result", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "method", "method", "abstain", "result", "abstain", "objective", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "objective" ]
[ "Erion Cano Institute of Formal and Applied Linguistics, Charles University, Prague, Czech Republic [email protected]", "Ondrej Bojar Institute of Formal and Applied Linguistics, Charles University, Prague, Czech Republic [email protected]", "Abstract", "Authors' keyphrases assigned to scientific articles are essential for recognizing content and topic aspects.", "Most of the proposed supervised and unsupervised methods for keyphrase generation are unable to produce terms that are valuable but do not appear in the text.", "In this paper, we explore the possibility of considering the keyphrase string as an abstractive summary of the title and the abstract.", "First, we collect, process and release a large dataset of scientific paper metadata that contains 2.2 million records.", "Then we experiment with popular text summarization neural architectures.", "Despite using advanced deep learning models, large quantities of data and many days of computation, our systematic evaluation on four test datasets reveals that the explored text summarization methods could not produce better keyphrases than the simpler unsupervised methods, or the existing supervised ones.", "A valuable concept for searching and categorizing scientific papers in digital libraries is the keyphrase (we use keyphrase and keyword inter-changeably), a short set of one or few words that represent concepts.", "Scientific articles are commonly annotated with keyphrases based on taxonomies of concepts and the authors' judgment.", "Finding keyphrases that best describe the contents of a document is thus essential and rewarding.", "Most of the proposed keyphrase extraction solutions tend to be unsupervised (Florescu and Caragea, 2017; Nguyen and Luong, 2010; Rose et al., 2010; Bougouin et al., 2013; Campos et al., 2018) and generate terms by selecting the most appropriate candidates, ranking the candidates based on several features and finally returning the top N .", "Another way is to utilize datasets of texts and keywords for training supervised models with linguistic or other features to predict if candidates are keywords or not (Witten et al., 1999; Turney, 2000; Medelyan, 2009; Hulth, 2003).", "All above methods propose N keyphrases for each article which are joined together with , (or other separator like ;) to form the keyphrase string of that article.", "They suffer from various problems or discrepancies.", "First, they are unable to find an optimal value for N and require it as a preset parameter.", "Furthermore, semantic and syntactic properties of article phrases are analyzed separately.", "The meaning of paragraphs, sections or entire document is thus missed.", "Lastly, only phrases that do appear in the article are returned.", "Meng et al. (2017) recently proposed a deep supervised keyphrase generation solution trained on a big dataset.", "It successfully solves the last two problems above, but not the first one.", "Motivated by recent advances in neural machine translation and abstractive text summarization (Vaswani et al., 2017; Foster et al., 2018; Rush et al., 2015; See et al., 2017), in this paper, we explore the possibility of considering keyphrase generation as an abstractive text summarization task.", "Instead of generating keywords one by one and linking them to form the keyphrase string, we consider the later as an abstractive summary of the concatenated paper title and abstract.", "Different recently-proposed text summarization architectures are tried on four test datasets of article keyphrases (Tanti et al., 2017; Rush et al., 2015; See et al., 2017).", "We trained them with a newly created dataset of 2.2 million article titles, abstracts and keyphrase strings that we processed and released.", "1 The selected text summarization models are compared with popular unsupervised and supervised methods using ROUGE (Lin, 2004) and full-match F 1 metrics.", "The results show that though 1 http://hdl.handle.net/11234/1-2943 trained with large data quantities for many days, the tried text summarization methods could not produce better keywords than the existing supervised or deep supervised predictive models.", "In our opinion, a possible explanation for this is the fact that the title and the abstract may not carry suf-ficient topical information about the article, even when joined together.", "In contrast, when assigning keywords annotations of their paper, authors are highly influenced by the topic aspects of it.", "This paper carries several contributions, despite the fact that no progressive result scores were reached.", "It is the first work that considers keyphrase generation as an abstractive text summarization task.", "We produced a large dataset of article titles, abstracts, and keywords that can be used for keyword generation, text summarization or similar purposes.", "Finally, we evaluated the performance of different neural network architectures on summarization of article keyword strings, comparing them with popular unsupervised methods.", "Because of the open source and open data initiatives, many public datasets from various domains can be found online (Cano and Morisio, 2015).", "Among the several collections of scientific articles, some of them have gained considerable popularity in research literature.", "In Meng et al. (2017), we found a recent and big collection of 20K paper abstracts and keyphrases.", "These metadata belong to articles of computer science from ACM Digital Library, ScienceDirect, and Web of Science.", "In Hulth (2003), we found a collection of 2000 (1500 for train/val and 500 for testing) abstracts in English, together with titles and authors' keywords.", "The corresponding articles were published from 1998 to 2002 and belong to the discipline of Information Technology .", "Furthermore, Krapivin et al. (2010) released a dataset of 2000 (1600 for train/val and 400 for testing) full articles published by ACM from 2003 to 2005 in Computer Science domain.", "More information about similar keyphrase data collections or other available resources can be found in Hasan and Ng (2014) and in online repositories.", "2 Regarding text summarization, some of the most popular datasets are: DUC-2004 3 mainly 2 https://github.com/LIAAD/ KeywordExtractor-Datasets 3 https://duc.nist.gov/duc2004/ Attribute Train Val Test Fullset Records 2M 100K 100K 2.2M Keyphrases 12M 575K 870K 13.4M Title tokens 24M 1.3M 1.6M 27M Abstract tokens 441M 21M 37M 499M Av.", "used for testing, English Gigaword (Napoles et al., 2012), CNN/Daily Mail described in Section 4.3 of (Nallapati et al., 2016) and Newsroom, a heterogeneous bundle of news articles described in Grusky et al. (2018).", "These datasets are frequently used for the task of predicting titles from abstracts or short stories.", "However, no keyphrases are provided; they do not serve to our purpose.", "Arnet-Miner is a recent attempt to crawl scientific paper data from academic networks (Tang et al., 2008).", "The system extracts profiles of researchers from digital resources and integrates their data in a common network.", "A spin-off is the Open Academic Graph (OAG) data collection (Sinha et al., 2015).", "To produce a usable collection for our purpose, we started from OAG.", "We extracted title , abstract and keywords .", "The list of keywords was transformed into a comma-separated string and a language identifier was used to remove records that were not in English.", "Abstracts and titles were lowercased, and Stanford CoreNLP tokenizer was used for tokenizing.", "Short records of fewer than 20 tokens in the abstract, 2 tokens in the title and 2 tokens in the keywords were removed.", "For the test portion, we selected documents of at least 27, 3 and 2 tokens in each field.", "Data preprocessing stopped here for the release version (no symbol filtering), given that many researchers want to fil-ter text in their own way.", "This new dataset named OAGK can be used for both text summarization (predicting title from abstract) and keyphrase extraction (unsupervised, supervised or deep supervised) tasks.", "Some rounded measures about each set of released data are presented in Table", "1. 3 Keyphrase Extraction Strategies 3.1 Unsupervised and Supervised Methods TOPICRANK is an extractive method that creates topic clusters using the graph of terms and phrases (Bougouin et al., 2013).", "Obtained topics are then ranked according to their importance in the document.", "Finally, keyphrases are extracted by picking one candidate from each of the most important topics.", "A more recent, unsupervised and feature-based method for keyphrase extraction is YAKE !", "(Campos et al., 2018).", "It heuristically combines features like casing , word position or word frequency to generate an aggregate score for each phrase and uses it to select the best candidates.", "One of the first supervised methods is KEA described by Witten et al. (1999).", "It extracts those candidate phrases from the document that have good chances to be keywords.", "Several features like TF-IDF are computed for each candidate phrase during training.", "In the end, Nave Bayes algorithm is used to decide if a candidate is a keyword or not (binary classification).", "An improvement and generalization of KEA is MAUI (Medelyan, 2009).", "Additional features are computed, and bagged decision trees are used instead of Nave Bayes.", "The author reports significant performance improvements in precision, recall and F 1 scores.", "The above keyphrase extraction methods and others like Florescu and Caragea (2017) or Nguyen and Luong (2010) reveal various problems.", "First, they are not able to find an optimal value for N (number of keywords to generate for an article) based on article contents and require it as a preset parameter.", "Second, the semantic and syntactic properties of article phrases (considered as candidate keywords) are analyzed separately.", "The meaning of longer text units like paragraphs or entire abstract/paper is missed.", "Third, only phrases that do appear in the paper are returned.", "In practice, authors do often assign words that are not part of their article.", "Meng et al. (2017) overcome the second and third problem using an encoder-decoder model (COPYRNN ) with a bidirectional Gated Recurrent Unit (GRU) and a forward GRU with attention.", "They train it on a datasets of hundred thousands of samples, consisting of abstract-keyword (one keyword only) pairs.", "The model is entirely data-driven and can produce terms that may not appear in the document.", "It still produces one keyword at a time, requiring N (first problem) as parameter to create the full keyphrase string.", "To overcome the three problems mentioned in Section 3.1, we explore abstractive text summarization models proposed in the literature, trained with", "article abstracts and titles as sources and keyword strings as targets.", "They are expected to learn and paraphrase over entire source text and produce a summary in the form of a keyphrase string with no need for extra parameters.", "They should also introduce new words that do not appear in the abstract.", "Two simple encoder-decoder variants based on LSTMs are described in Figure 3 of Tanti et al. (2017).", "MERGE (Figure 3.a) encodes input and the current summary independently and merges them in a joint representation which is later decoded to predict the next summary token.", "INJECT model (Figure 3.b) on the other hand injects the source document context representation to the encoding part of the current summary before the decoding operation is performed.", "ABS is presented in Figure 3.a of Rush et al. (2015).", "The encoder (Figure 3.b) takes in the input text and a learned soft alignment between the input and the summary, producing the context vector.", "This soft alignment is the attention mechanism (Bahdanau et al., 2014).", "To generate the summary words, Rush et al. apply a beam-search decoder with a window of K candidate words in each position of the summary.", "Pointer-Generator network (POINTCOV ) depicted in Figure 3 of See et al. (2017) is similar to ABS .", "It is composed of an attention-based encoder that produces the context vector.", "The decoder is extended with a pointer-generator model that computes a probability p gen from the context vector, the decoder states, and the decoder output.", "That probability is used as a switch to decide if the next word is to be generated or copied from the input.", "This model is thus a compromise between abstractive and extractive (copying words from input) models.", "Another extension is the coverage mechanism for avoiding word repetitions in the summary, a common problem of encoder-decoder summarizers (Tu et al., 2016).", "We performed experiments with the unsupervised and supervised methods of Section 3 on the first three datasets of Section 2 and on OAGK.", "All supervised methods were trained with the 2M records of OAGK train part.", "An exception was MAUI which could be trained on 25K records at most (memory limitation).", "In addition to the processing steps of Section 2, we further replaced digit symbols with # and limited source and tar-Hulth (500) Krapivin (400) Meng (20K) OAGK (100K) Method F 1 @5 F 1 @7 F 1 @5 F 1 @7 F 1 @5 F 1 @7 F 1 @5 F 1 @7 YAKE !", "get text lengths to 270 and 21 tokens, respectively.", "Vocabulary size was also limited to the 90K most frequent words.", "The few parameters of the unsupervised methods (length and windows of candidate keyphrases for YAKE !, ranking strategy for TOPICRANK ) were tuned using the validation part of each dataset.", "For the evaluation, we used F 1 score of full matches between predicted and authors' keywords.", "Given that the average number of keywords in the data is about 6, we computed F 1 scores on top 5 and top 7 returned keywords ( F 1 @5, F 1 @7 ).", "Before each comparison, both sets of terms were stemmed with Porter Stemmer and duplicates were removed.", "In the case of summarization models, keyphrases were extracted from their comma-separated summaries.", "We also computed ROUGE-1 and ROUGE-L F 1 scores ( R 1 F 1 , RLF 1 ) that are suitable for evaluating short summaries (Lin, 2004).", "The keywords obtained from the unsupervised methods were linked together to form the keyphrase string (assumed summary).", "This was later compared with the original keyphrase string of the authors.", "Full-match results on each dataset are reported in Table", "2. From the unsupervised models, we see that YAKE ! is consistently better than TOPICRANK .", "The next two supervised models perform even better, with COPYRNN being discretely su-perior than MAUI .", "Results of the four summarization models seem disappointing.", "MERGE and INJECT are the worst on every dataset, with highest score 13.39 %.", "Various predictions of these models are empty or very short, and some others contain long word repetitions which are discarded during evaluation.", "As a result, there are usually fewer than five predicted keyphrases.", "This explains why F 1 @5 and F 1 @7 scores are very close to each other.", "ABS works slightly better reaching scores from 10.24 to 14.75 %.", "POINTCOV is the best of the text summarizers producing keyphrase predictions that are usually clean and concise with few repetitions.", "This is probably the merit of the coverage mechanism.", "There is still a considerable gap between POINTCOV and COPYRNN .", "Rouge-1 and Rouge-L F 1 scores are reported in Table", "3. COPYRNN is still the best but POINTCOV is close.", "ABS scores are also comparable to those of MAUI and YAKE", "!.", "TOPICRANK , MERGE and INJECT are again the worst.", "Regarding the test datasets, the highest result scores are achieved on Hulth and the lowest on Krapivin.", "We checked some samples of the later and observed that each of them contains separation tags (e.g., T, A, B, Figure etc.) for indicating different parts of text in the original paper.", "A more intelligent text cleaning step may be required on those data.", "The results show that the tried text summarization models perform poorly on full-match keyword predictions.", "Their higher ROUGE scores further indicate that the problem is not entirely in the summarization process.", "Observing a few samples, we found differences between the two evaluation strategies.", "For example, suppose we have the predicted keyword intelligent system compared against authors' keyword system design .", "Full-match evaluation adds nothing to F 1 @5 and F 1 @7 scores.", "However, in the case of ROUGE evaluation, the prediction is partially right and a certain value is added to R 1 F 1 score.", "In follow up works, one solution to this discrepancy could be to try partial-match comparison scores like overlap coefficients.", "produces [health care, immune system, hu-man, metabolism, immunity] as the list of keywords after removing the extra separators.", "Instead, we expected [health care, immune system, human metabolism, immunity].", "This again penalizes full-match scores but not R 1 F 1 score.", "A more intelligent keyword separation mechanism could thus help for higher full-match result scores.", "A third reason could be the fact that we used the title and abstract of papers only.", "This is actually what most researchers do, as it is hard to find high quantities of article full texts for free.", "Article body is usually restricted.", "Abstractive summarization methods could still benefit from longer source texts.", "Using default hyperparameters for the models may have also influenced the results.", "Some parameter tuning could be beneficial, though.", "The main reason could be even more fundamental.", "We trained abstractive summarization models on abstracts and titles with authors' keyphrases considered as golden ones.", "There might be two issues here.", "First, when setting their keywords, authors mostly consider the topical aspects of their work rather than paraphrasing over the contents.", "Abstracts and titles we used may not carry enough topical information about the article, even when joined together.", "Second, considering authors' keywords as golden ones may not be reasonable.", "One solution is to employ human experts and ask them to annotate each article based on what they read.", "This is however prohibitive when hundred thousands of samples are required.", "Extensive experiments on this issue may provide different facts and change the picture.", "For the moment, a safe way to go seems developing deep supervised generative models like the one of Meng et al. (2017) that predict one keyphrase at each step independently.", "In this paper, we experimented with various unsupervised, supervised, deep supervised and abstractive text summarization models for predicting keyphrases of scientific articles.", "To the best of our knowledge, this is the first attempt that explores the possibility of conceiving article string of keywords as an abstractive summary of title and abstract.", "We collected and produced a large dataset of 2.2 million abstracts, titles and keyphrase strings from scientific papers available online.", "It can be used for future text summarization and keyphrase generation experiments.", "Systematic evaluation on four test datasets shows that the used summarization models could not produce better keywords than the supervised predictive models.", "Extensive experiments with more advanced summarizaiton methods and better parameter optimization may still reveal a different view of the situation.", "This research work was supported by the project No.", "CZ.02.2.69/0.0/0.0/16 027/0008495 (Inter-national Mobility of Researchers at Charles University) of the Operational Programme Research, Development and Education, grant 19-26934X (NEUREM3) of the Czech Science Foundation and H2020-ICT-2018-2-825460 (ELITR) of the EU." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "abstain", "abstain", "abstain", "other", "other" ]
[ "Fine-grained opinion mining (OM) has achieved increasing attraction in the natural language processing (NLP) community, which aims to find the opinion structures of Who expressed what opinions towards what in one sentence.", "In this work, motivated by its span-based representations of opinion expressions and roles, we propose a unified span-based approach for the end-to-end OM setting.", "Furthermore, inspired by the unified span-based formalism of OM and constituent parsing, we explore two different methods (multi-task learning and graph convolutional neural network) to integrate syntactic constituents into the proposed model to help OM.", "We conduct experiments on the commonly used MPQA 2.0 dataset.", "The experimental results show that our proposed unified span-based approach achieves significant improvements over previous works in the exact F1 score and reduces the number of wrongly-predicted opinion expressions and roles, showing the effectiveness of our method.", "In addition, incorporating the syntactic constituents achieves promising improvements over the strong baseline enhanced by contextualized word representations.", "Opinion mining (OM), which aims to find the opinion structures of Who expressed what opinions towards what . in one sentence, has achieved much attention in recent years (Katiyar and Cardie, 2016; Marasovic and Frank, 2018; Zhang et al., 2019b, 2020).", "The opinion analysis has many NLP applications, such as social media monitoring (Bollen et al., 2011) and e-commerce applications (Cui et al., 2017).", "The commonly used benchmark Rui Wang's contributions were carried out while at Alibaba Group.", "MPQA (Wiebe et al., 2005) uses span-based annotations to represent opinion expressions and roles.", "Figure 1 gives an example of its opinion structures with two opinion expressions and related roles.", "Previous OM works (Yang and Cardie, 2013; Katiyar and Cardie, 2016; Quan et al., 2019; Zhang et al., 2020) mainly treat it as a BMESO-style tagging problem, which converts opinion expressions and opinion roles (holder/target) into BMESO-based labels and uses a linking module to connect the predicted expressions and roles.", "The B, M, and E represent the beginning, middle, and ending word of a role, S denotes a single-word role, and O denotes other words.", "However, this kind of method is not perfect for the end-to-end OM setting, because one word can only belong to one opinion role (one word has only one label), while there exist overlapping opinion structures between different expressions in one sentence.", "Figure 1 gives an example, in which some overlapped opinion relations have been discarded by previous works (Katiyar and Cardie, 2016), such as [ happy , he loves being Enderly Park , Target ] and [ loves , he , Holder ].", "There are also other works which focus only on predicting opinions roles based on the gold-standard expressions, which also follow the BMESO-based method (Marasovic and Frank, 2018; Zhang et al., 2020).", "However, they also suffer from some weaknesses: 1) the expressions are usually fed into the model input as indicator embeddings (1 if the current word belongs to an expression, 0 otherwise), thus one sample is expanded n times if one sentence has n expressions, which is inefficient (Marasovic and Frank, 2018; Zhang et al., 2020).", "2) The BMESO-based method is weak to capture long-range dependencies and prefers to predict shorter opinion role spans (Zhang et al., 2020).", "Motivated by the span-based representations of opinion expressions and roles, we propose a unified span-based opinion mining model (SPANOM) that can solve or alleviate the aforementioned weaknesses.", "First, we treat the identification of opinion expressions and roles as two unified binary span classification problems, i.e., judging whether the word span is an expression (or role) or not.", "Then, we allocate the opinion relations on the predicted expression-role pairs.", "This strategy converts the overlapped opinion role identification of different expressions into classifying different expression-role pairs.", "For example, predicting [ happy , he loves being Enderly Park , Target ] and [ loves , he , Holder ] is infeasible in BMESO-based method, while it is feasible in our span-based method.", "Benefit from the model architecture, the proposed model only needs to train once for one sample in one epoch, which is very efficient for training.", "Besides, the unified model can be easily adapted to the given-expression setting by using gold-standard expressions.", "Furthermore, inspired by the same span-based formalism between the syntactic constituents and opinion roles, we explore two types of methods to encode the syntactic knowledge to improve the role spans recognition for two motivations, i.e., multi-task learning (MTL) for enhancing the model representative ability and graph convolutional networks (GCN) (Kipf and Welling, 2016; Guo et al., 2019) for encoding the constituent structures.", "We conduct extensive experiments on the commonly used MPQA2.0 dataset and demonstrate that our proposed unified model achieves superior performance compared with previously proposed BMESO-based works.", "Our contributions are:", "(i) we propose a unified span-based model for opinion mining in the end-to-end fashion that also supports the given-expression setting,", "(ii) we successfully integrate syntactic constituents knowledge into our model with MTL and GCN, achieving promising improvements,", "(iii) detailed analyses demonstrate the effectiveness of our unified model and the usefulness of integrating constituent syntactic knowledge on the long-distance opinion roles.", "There are several task settings for opinion mining in the community: 1) Breck et al. (2007); Yang and Cardie (2014) focus on labeling the expressions.", "2) Katiyar and Cardie (2016); Zhang et al. (2019b); Quan et al. (2019) discover the opinion structures in the end-to-end setting, i.e, based on the systematic expressions.", "3) Marasovic and Frank (2018); Zhang et al. (2019a, 2020) identify the opinion roles based on the given expressions.", "Our work follows the end-to-end setting and also supports the given-expression setting.", "Most of the previous opinion mining works treat it as a BMESO-tagging problem, which can be handled by the typical sequence labeling model, such as bi-directional long-short term memory network conditional random field (BiLSTM-CRF).", "Yang and Cardie (2013) propose to use traditional feature-based CRF model to predict the BMESO-based opinion role labels.", "Katiyar and Cardie (2016) propose a BiLSTM-CRF model to first predict the word-wise opinion role label and then determine the relationship with the expression by the role label and distance to the expressions.", "Zhang et al. (2019b) propose a transition-based model for opinion mining, which identifies opinion expressions and roles by the human-designed transition actions.", "Quan et al. (2019) integrate BERT representations into a BiLSTM-CRF model, but they do not distinguish different expressions in one sentence.", "As aforementioned, it is trivial for the sequence labeling style models to handle the overlapped opinion roles belonging to different expressions in one sentence.", "Due to the issue of data scarcity, several kinds of external knowledge have been investigated to improve OM performance.", "Marasovic and Frank (2018) propose several MTL frameworks with semantic role labeling (SRL) to utilize semantic knowledge.", "Zhang et al. (2019a) extract the semantic representations from a pre-trained SRL model and feed them into the opinion mining model, achieving substantial improvements.", "Zhang et al. (2020) incorporate the powerful contextual representations of bi-directional encoder representations from Transformers (BERT) (Devlin et al., 2019) and external dependency syntactic knowledge.", "To solve or alleviate the weaknesses of the previously proposed BMESO-based models, we propose a new method to unifiedly model the opinion expressions and roles, which treats the expression identification, role identification, and opinion relation classification as an MTL problem.", "Besides, to boost the opinion mining performance and motivated by the span-based task formalism, we explore to incorporate syntactic constituents into our model.", "Utilizing span-based representations have been investigated for many other NLP tasks, such as named entity recognition (NER) (Tan et al., 2020), constituency parsing (Kitaev and Klein, 2018), and semantic role labeling (SRL) (He et al., 2018).", "Generally, NER is a single span classification problem, constituency parsing is a span-based structure prediction problem, and SRL is a word-span classification problem.", "Different from them, in our methodology, OM is a span-span classification problem.", "Given an input sentence s = w 1 , w 2 , ..., w n , our model aims to predict the gold-standard opinion structures Y E O R , where E = { w i , ..., w j | 1 i j n } is the set of expressions , O = { w i , ..., w j | 1 i j n } is the set of opinion roles , and R is the set of opinion relations ( holder and target ) with a dummy relation that represents no relation.", "Accordingly, we treat the opinion expression and role recognition as the unified span classification problem and determine the opinion relation based on the predicted expressions and roles.", "We jointly model the three sub-tasks in an MTL fashion to enhance the modules' interplay.", "The left part of Figure 2 shows the model architecture of our model and we will detailedly describe the components in the following sections.", "For each word w i in sentence s , we employ word embedding, char representation, and contextual word representation to compose the model input, denoted as: x i = emb word w i rep char w i rep context w i | s , (1) where means the concatenate operation.", "We use the convolutional neural networks (CNN) (Kalch-brenner et al., 2014) to generate the character representations over the characters of words.", "Over the input layer, we employ BiLSTM to encode the model input.", "We treat the concatenation of the outputs of the left-to-right LSTM and right-to-left LSTM as the output: h i = LST M ( x i , h i 1 ) , h i = LST M ( x i , h i +1 ) , h i = h i h i .", "(2) 3.4 Span Representation and Identification Layer.", "To better distinguish opinion expression and role representations, we first employ two multi-layer perceptions (MLP) to re-encode the output of BiLSTM encoder, denoted as: h exp i = MLP exp ( h i ) , h rol i = MLP rol ( h i ) .", "(3) For a word span that begins at b -th word and ends at e -th word, we define it as span b,e .", "So the representations of expression and role are defined as: span exp b,e = ( h exp b + h exp e ) ( h exp b h exp e ) , span rol b,e = ( h rol b + h rol e ) ( h rol b h rol e ) .", "(4) Given the representations of expressions and roles, we employ another two MLPs to classify whether the span is the gold expression/role or not.", "Furthermore, we also incorporate the span boundary information to help the determination of spans.", "Specifically, we employ another four MLPs on the span boundary positions to determine whether the word is a boundary position or not 1 .", "Thus, the score formulation of the span is as: s exp = MLP exp ( span exp b,e ) + MLP exp b ( h b ) + MLP exp e ( h e ) , s rol = MLP rol ( span rol b,e ) + MLP rol b ( h b ) + MLP rol e ( h e ) .", "(5) We can observe that for a sentence with n words, the numbers of candidate spans for expressions and roles are both n ( n +1) 2 , while the number of gold expressions and roles are much fewer.", "To alleviate the unbalanced number of gold samples 1 We omit the process of span boundary module in Figure 2 for clarity.", "and negative samples, we adapt the focal loss that is widely used in computer vision (Lin et al., 2017) into our model.", "Formally, for every span i in a sentence, the sentence focal loss is defined as: Loss = (cid:88) i (cid:88) c (1 p i,c ) y i,c log ( p i,c ) , (6) where p i,c is the softmax value of the s exp c (or s opi c ) for class c of span i , is a pre-defined hyper-parameter and y i,c is an indicator value that equals to 1 if c is the ground-truth class 0 otherwise.", "Compared with the typical cross-entropy loss, the difference appears in the first item, which can intuitively make the model focus more on the hard-to-classify samples.", "We denote the loss of the opinion expressions and roles as L exp and L rol , respectively.", "Given the predicted opinion expressions and roles, the next step is to determine the opinion relation (holder, target, or no relation) for each expression-role pair.", "We employ another MLP classifier to compute the score for each relation of the focused expression span exp and role span rol : s rel = MLP ( span exp span rol ) .", "(7) Focal loss is also employed to estimate this module, which is denoted as L rel .", "We sum the three losses from the three modules as the final model loss: LOM = L exp + L rol + L rel .", "For the end-to-end OM setting, the model predicts the relation of the predicted expressions and roles.", "As for the given-expression mode, we directly feed the gold expressions into the model, with other parts the same as the end-to-end mode.", "Since the data scale is relatively small, previous works usually try to integrate external knowledge to enhance the basic OM model and improve its performance (Marasovic and Frank, 2018; Zhang et al., 2019a).", "Previous sequence tagging models usually incorporate word-wise external information, such as dependency parsing (Zhang et al., 2020).", "We try to investigate the integration of constituent knowledge, which is motivated by their unified span-based formalism.", "Two different methods are explored in our work, i.e., MTL and GCN.", "MTL is an effective method to utilize external knowledge, which is usually by sharing the model parameters of the main task and auxiliary task (Ruder, 2017).", "Considering the efficiency of full constituent parsing, we use partial constituent parsing in our model, i.e., training partial constituent trees (constituent spans), not the entire constituent tree.", "In detail, we first extract all the constituent spans 2 from the OntoNotes corpus.", "See 5.1 for the detailed settings.", "Then, we add a span classification module over the BiLSTM encoder, which is similar to the unified opinion classifier, to predict the span belonging to which kind of constituent labels.", "Third, with the addition of the constituent span classification module, we can easily allocate automatic constituent labels to enhance the predicted opinion expressions and roles.", "Thus, we create randomly initialized constituent label embeddings for representing the syntactic labels, which are then 2 We remove constituent spans with label Top and S.", "span exp (cid:48) b,e = span exp b,e emb label exp , span rol (cid:48) b,e = span exp b,e emb label rol .", "The syntax-enhanced span representations are then passed to participate in the later computation process.", "Finally, the focal loss is used to estimate the partial constituent tree prediction module and the partial constituent loss ( L cons ) is used to update the shared input layer, encoder layer, and the partial constituent parsing classification layer.", "So the loss of our constituents-enhanced OM model becomes: L = LOM + L cons .", "It is worth noting that the data size of OM and constituent trees is different, so we employ a corpus-weighting parameter to balance it.", "In general, the MTL method brings two benefits: 1) enhancing the model encoder and 2) adding constituency label information to expressions and roles.", "The MTL method enhances our OM model from the aspect of model representative ability by jointly modeling opinion mining and partial constituency parsing.", "We argue that modeling the syntactic constituent structure is also beneficial for OM because it provides valuable syntactic information for a sentence.", "Therefore, we try to employ the recently popular GCN (Kipf and Welling, 2016) to encode the constituent structure.", "However, the conventional GCN is not suitable for constituency trees, because it usually works on the dependency trees (Zhang et al., 2018, 2020) where the nodes are the surface words in a sentence.", "While, in constituent trees, there exists a certain number of non-terminal nodes 3 , such as NP, VP, SBAR and so on.", "So it is hard to directly apply conventional GCN on the constituent trees.", "In the following, we first introduce the definition and workflow of typical GCN and then describe our modification.", "Formally, we denote an undirected graph as G = ( V , E ) , where V and E are the set of nodes and edges, respectively.", "The GCN computation flow of node v V at l -th layer is defined as: h lv = (cid:32) (cid:88) u N ( v ) W l h l 1 u + b l (cid:33) , (11) 3 Terminal nodes are the surface words in the sentence.", "where W l R m m is the weight matrix, b l R m is the bias term, N ( v ) is the set of all one-hop neighbour nodes of v , and is an activation function (relu activation function in our work).", "Especially, h 0 u R m is the initial input representation, and m is the representation dimension.", "Since there are some non-terminal nodes in the constituent tree, the GCN input can not directly get from the surface words.", "We create a randomly initialized non-terminal embedding matrix EN D and a dynamic mask for composing the GCN input and extracting the GCN output, where N is the number of non-terminal nodes and D is the dimension of the terminal node inputs.", "There are two main ways to add the GCN modules in the neural network models, i.e., concatenating with the input layer and stacking over the encoder layer.", "According to our preliminary experiments, we choose the former method.", "In detail, we treat the composition of non-terminal node representations and terminal node representations as the GCN input, and then concatenate the terminal node GCN outputs x GCN i with the basic model input as the final model input.", "The top right part of Figure 2 shows the overall workflow.", "The final constituent-enhanced unified span-based opinion mining model combines the two methods, which we denoted as MTL+GCN in the later sections.", "The workflow is shown by the right bottom part of Figure 2.", "We conduct experiments on the commonly used English MPQA2.0 dataset (Wiebe et al., 2005).", "Following the data split of previous works (Zhang et al., 2019a, 2020), the development data contains 132 documents and the test data contains 350 documents, using five-fold cross-validation to evaluate the test data.", "For constituent data, we use the OntoNotes 5.0 dataset (Pradhan et al., 2013) in our MTL method.", "We use the constituent parser of Kitaev and Klein (2018) 4 to obtain the automatic constituent trees.", "BERT (Devlin et al., 2019) is employed as the external contextual representations.", "We implement our model with Pytorch 5 and the basic model has 20.46M parameters 6 .", "We employ the 300-dimension GloVe vector (Pen-nington et al., 2014) as our pre-trained word embeddings.", "The character embeddings are randomly initialized and a CNN with kernel sizes of 3, 4, 5 is used to capture the character representations.", "For the contextual representations, we extract the representations from the base BERT by making a weighted summation over the last four layer outputs.", "The hidden size of the BiLSTM layer is set to 300 and we employ 2-layer BiLSTMs to encode the input representations.", "The dimension of opinion expression and role representations is 300 and the hidden size of expression, role, and relation classi-fiers is 150.", "We use 3-layer GCNs with hidden size 300.", "The dropout rate of the input layer, encoder layer, and other components are 0.5, 0.4, and 0.3, respectively.", "The hyper-parameter is 3.0.", "We employ Adam optimizer with an L2 weight decay of 1e-6 to optimize our model.", "The batch size is 32.", "The initial learning rate is set to 0.001 and decays 0.99 for every 50 steps.", "Our model trains for at most 320k steps and early stops if no performance gains happen in 100 epochs on the development data.", "We pick the model that performs best on the development data for evaluation.", "It costs about 4 minutes to run one epoch training and 1 minute for evaluation.", "Following previous works (Marasovic and Frank, 2018; Zhang et al., 2020), we use the Precision, Recall, and F1 score to measure the experimental results regarding to Exact match setting, and two other auxiliary metrics of Binary and Proportional match.", "The average value of the five-fold cross-validation results is reported in our work.", "The binary and proportional metrics are also called overlap metric, which includes the opinion roles that exactly match the gold opinions and inexactly match but overlap with gold roles.", "In detail, the binary match means an opinion overlaps with a gold-standard opinion and the proportional match computes the maximum ratio value of an role with the overlapped gold role.", "Results in the end-to-end setting.", "Table 1 lists the results of previous works and our model (SPANOM) in the end-to-end setting.", "First, our model achieves superior performance than previous works in terms of exact F1 score, reaching better results of 52.90 and 32.42 exact F1 scores on the holder and target roles.", "The overall exact F1 score of the two roles is 43.12.", "Second, integrating BERT representations into the model input can bring substantial improvements, achieving 49.89 exact F1 score.", "We can see that in the auxiliary metrics of binary and proportional, previous works perform better than ours, which we think because our model more focuses on the entire word spans and we will detailedly discuss it in the analysis section.", "Finally, the results of expression prediction are shown in Table 2.", "We can see that our model outperforms Zhang et al. (2019b) by +5.02 exact F1 score.", "Results in the given-expression setting.", "Table 3 shows the experimental results and comparison with previous works in the given-expression set-Models Exact F1 Binary F1 Proportional F1 Holder Target Overall Holder Target Overall Holder Target Overall Zhang et al. (2019a) 73.07 42.70 58.30 81.57 68.34 75.15 79.35 61.22 70.55 Zhang et al. (2020) 73.05 44.21 58.79 81.21 69.50 75.43 79.33 62.53 71.03 Zhang et al. (2020)+BERT 76.74 52.61 64.73 85.45 75.74 80.62 83.58 69.31 76.48 SPANOM 72.40 45.83 59.62 78.10 64.51 71.56 76.74 58.74 68.08 SPAN OM+BERT 76.47 54.95 65.95 82.69 72.93 77.93 81.53 67.42 74.64 Table 3: Experimental results of our span-based opinion mining model and comparison with previous works on the MPQA2.0 dataset in the given-expression setting.", "ting.", "First, we can see that our proposed span-based model outperforms previously proposed BMESO-based models in the exact F1 score metric, achieving 59.62 exact F1 score.", "Second, when using contextual word representations of BERT, our model consistently outperforms the previous best result, resulting in a new state-of-the-art result of 65.95 exact F1 score, showing superior performance compared with the BMESO-based methods.", "Table 4 shows the results of our model integrating syntactic constituents and compare with previous works with SRL or dependency syntax knowledge.", "In the end-to-end setting, incorporating constituent knowledge brings an improvement of +0.57 exact F1 score.", "In the given-expression setting, we can see that integrating constituent syntactic knowledge into our model brings a +2.07 exact F1 score improvement, achieving comparable results with previous best results of Zhang et al. (2020).", "Even though our basic OM model outperforms Zhang et al. (2020), the improvements from syntactic constituents lag behind the dependency syntax.", "We 0%20%40%60%80%100% S p a n OMBMESO 1-5 0%20%40%60%80%100% S p a n OMBMESO 6-10 0%20%40%60%80%100% S p a n OMBMESO >=11 Error Overlapped Matched Figure 3: Percentage comparison of the matched, overlapped, and error predicted opinion roles of the outputs from the SPANOM model and BMESO-based model on the entire test data.", "think this is partly because of the relatively low performance of constituent parsing (93.55 F1 score) compared with dependency parsing (95.7 F1 score).", "Apart from syntactic knowledge, Marasovic and Frank (2018); Zhang et al. (2019a) both try to encode semantic knowledge, but their models don't use BERT representations.", "In this section, we conduct detailed analyses to gain more insights into our unified OM model and the effectiveness of integrating syntactic constituents.", "As the experimental results shown, our span-based model performs better in the exact matching metric than the BMESO-based models, while the BMESO-based models have better results in the auxiliary overlap metric.", "To understand the performance difference, we list the detailed percentage of opinion statistics of the system outputs of our span-based model and the BMESO-based model of Zhang et al. (2019a) in Figure 3, both using the BERT representations.", "The Matched, Overlapped and Error mean the predicted opinion role matches the gold role, not matches but overlaps part of the gold role and totally mismatches the gold role, respectively.", "We can see that: 1) our model achieves better per-Tendai Biti , the MDC 's foreign affairs spokesman , said Mugable was trying ...", "formance on the exact match setting through all the span length scenarios, especially on the spans that contain more than 10 words, 2) the BMESO-based model outputs more overlapped opinion roles than our span-based model, thus the BMESO models have better results in the auxiliary metric of binary and proportional settings. This demonstrates that our SPANOM more focuses on the full opinion role spans while the BMESO-based method may weak to give high exact predictions.", "Case study. The upper part of Figure 4 shows an example of the output of our span-based model and previous BMESO-based model of Zhang et al. (2019a). We can find that the span-based model successfully predicts the full agent while the BMESO-based model only predicts part of the agent span. This confirms the intuition that our span-based model is more good at predicting the long-range arguments, while the BMESO-based model is weak at long-range spans, which is consistent with the findings of Zhang et al. (2020).", "Which source of constituent knowledge is better? There are two main constituent syntax corpus in the community, i.e., Penn Treebank (PTB) (Mar-cus et al., 1993) and OntoNotes5.0 (Weischedel et al., 2013). The PTB corpus contains about 39k training data and mainly focuses on news data, while the OntoNotes5.0 corpus contains about 75k training data and focuses on multi-domain data (news, web, telephone conversation, and etc.).", "It is a worthy question to explore which is better for our span-based OM model, or what kind of combination is better. We compare them with various combinations on the BERT-based model, whose results are shown in Table 5. First, the second major row shows the results of our model with", "the MTL method, where MTL with PTB achieves the best exact F1 score of 68.02.", "Second, the results of our model with the GCN method are listed in the third major row, where OntoNotes and PTB means the automatic constituent trees are generated by parser trained on OntoNotes 7 and PTB, respectively.", "We can see that using the automatic constituent trees from Parser PTB achieves the best exact F1 score of 67.66.", "Finally, we try to combine the two kinds of methods and the results are shown in the last major row.", "It is clear that combining the MTL method with OntoNotes and the GCN method with Parser PTB achieves better results than the reversed one.", "Therefore, our constituent-enhanced opinion mining model follows this combination.", "Besides, we can also see the relative lower results of OntoNotes+PTB in +MTL and +GCN settings, which is strange 7 We use the code of Kitaev and Klein (2018) to train the OntoNotes constituent parser, which achieves 92.20 F1 score on the development data.", "that combining more information leads to lower performance.", "We think this is mainly caused by the different domains of the data in OntoNotes.", "As is well known, learning uniform knowledge from different domains data is a challenging problem.", "So, in the MTL method, adding OntoNotes into PTB can enhance such domain problems, and vice versa.", "In the GCN method, the two GCN outputs are concatenated, so the potential conflicts of different arcs are alleviated.", "Thus, the performance didn't drop too much.", "We also try to utilize dependency syntax.", "However, it brings less improvement compared with constituent syntax, which is understandable that word-based information is not very appropriate for the span-based model.", "It is also consistent with our intuition that span-based syntactic constituents are more suitable for the span-based model.", "Why and where do syntactic constituents help?", "OM aims to discover the structure of Who expressed what in a sentence and constituent syntax provides valid information like the NP and VP phrases in a sentence.", "Intuitively, the agent/target and expression may be covered by NP and VP phrases.", "We make statistics on the overlapping of constituent spans and opinions.", "We find that about 88% opinion roles can be covered by the predicted constituent spans from the MTL module, where the most four are NP, VP, SBAR and PP.", "Since the constituent knowledge can intuitively help the determination of roles, we list the result of the different span lengths in Figure 5a.", "We can find that constituent knowledge helps most on those opinion roles with longer length .", "We also report the results regarding the distance between the expressions and roles in Figure 5b, which shows a similar conclusion.", "Case study.", "The bottom part of Figure 4 gives a case study that shows the difference between syntax-enhanced and syntax-agnostic models.", "We can see that the target argument All composite things is hard to be identified by our baseline model.", "When integrating constituent knowledge, the model correctly discovers this opinion role and give the target relation.", "We think it is because the constituent tree gives a NP label to the word span, which helps our model to identify it.", "We also observe that there are some peculiarities of the MPQAs annotation scheme.", "For example, in the sentence The criteria set by Rice are the following: the three countries in question are repressive ... , set by is the expression, Rice is the holder, and the three countries in question is the target.", "However, set by is not a constituent phrase at all.", "In fact, by and Rice compose a prepositional phrase in the constituent tree.", "So, it is hard for our model to recognize set by as an opinion expression.", "Besides, the three countries in question is also not a dependent of the opinion expression set by , in which the constituent tree can not provide valuable structural information for the two phrases.", "Such phenomena is hard to handle by our model and raise challenges to the future work.", "In this paper, we propose a unified span-based opinion mining model that can handle the overlapped opinion roles, providing a new methodology.", "Our proposed model outperforms previously proposed BMESO-based models in terms of exact match metric on both the end-to-end and given-expression settings.", "Furthermore, integrating syntactic constituents knowledge with MTL and GCN brings substantial improvements over our BERT-enhanced baseline model.", "Detailed analyses show the difference between the span-based model and the BMESO-based model and the effectiveness of incorporating syntactic constituents on the determination of opinion role spans.", "We thank our anonymous reviewers for their helpful comments.", "This work was supported by the National Natural Science Foundation of China (Grant No. 62036004, 61876116), a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions, and was partially supported by Alibaba Group through Alibaba Research Intern Program." ]
[ "abstain", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "other", "objective", "objective", "objective", "result", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "other", "other", "abstain", "other", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "other" ]
[ "Multilingual neural machine translation models are trained to maximize the likelihood of a mix of examples drawn from multiple language pairs.", "The dominant inductive bias applied to these models is a shared vocabulary and a shared set of parameters across languages; the inputs and labels corresponding to examples drawn from different language pairs might still reside in distinct subspaces.", "In this paper, we introduce multilingual crossover encoder-decoder ( mXEncDec ) to fuse language pairs at an instance level.", "Our approach interpolates instances from different language pairs into joint crossover examples' in order to encourage sharing input and output spaces across languages.", "To ensure better fusion of examples in multilingual settings, we propose several techniques to improve example interpolation across dissimilar languages under heavy data imbalance.", "Experiments on a large-scale WMT multilingual dataset demonstrate that our approach significantly improves quality on English-to-Many, Many-to-English and zero-shot translation tasks (from +0 . 5 BLEU up to +5 . 5 BLEU points).", "Results on code-switching sets demonstrate the capability of our approach to improve model generalization to out-of-distribution multilingual examples.", "We also conduct qualitative and quantitative representation comparisons to analyze the advantages of our approach at the representation level.", "Multilingual modeling has been receiving increasing research attention over the past few years, arising from successful demonstrations of improved quality across a variety of tasks, languages and modalities (Lample and Conneau, 2019; Arivazhagan et al., 2019b; Conneau et al., 2021).", "The success of these models is typically ascribed to vocabulary sharing, parameter tying and implicit pivoting through dominant languages like English (Conneau et al., 2020).", "These conventional techniques are effective, but might not be exploiting the full potential of multilingual models to learn the underlying inductive bias: the learning signal from one language should benefit the quality of other languages (Caruana, 1997; Arivazhagan et al., 2019b).", "Here we study two related issues that exist in the context of multilingual Neural Machine Translation (NMT) training (Dong et al., 2015; Firat et al., 2016a; Johnson et al., 2017).", "First, NMT models (Bahdanau et al., 2015; Vaswani et al., 2017) are trained with maximum likelihood estimation which has a strong tendency to overfit and even memorize observed training examples, particularly posing challenges for low resource languages (Zhang et al., 2018).", "Second, training examples from distinct language pairs are separately fed into multilingual NMT models without any explicit instance-level sharing (with the exception of multi-source NMT (Zoph and Knight, 2016; Firat et al., 2016b)); as a consequence, given large enough capacity, the models have the liberty to map representations of different languages into distinct subspaces, limiting the extent of cross-lingual transfer.", "In this work, we introduce multilingual crossover encoder-decoder ( mXEncDec ) to address these issues following the recent work on XEncDec (Cheng et al., 2021) and mixup (Zhang et al., 2018; Cheng et al., 2020; Guo et al., 2020).", "Inspired by chromosomal crossovers (Rieger et al., 2012), mXEncDec fuses two multilingual training examples to generate crossover examples inheriting the combinations of traits of different language pairs, which is capable of explicitly capturing cross-lingual signals compared to the standard training which mechanically combines multiple language pairs.", "mXEncDec has the following advantages: 1. Enhancing the cross-lingual generalization.", "Thanks to crossover examples generated by fusing different language pairs, the multilingual NMT is encouraged to learn to transfer 4092 explicitly via more languages rather than implicitly via the predominant languages.", "2. Improving the model generalization and robustness.", "As vicinity examples around each example in the multilingual corpus (akin to Vicinal Risk Minimization (Chapelle et al., 2001)), crossover examples produced by mXEncDec can enrich the support of the training distribution and lead to better generalization and robustness respectively on general and noisy inputs (Zhang et al., 2018).", "3. Alleviating overfitting to low-resource languages.", "mXEncDec can increase the diversity of low-resource languages by fusing low-resource examples with others, instead of the simple duplication in the standard training.", "In mXEncDec , we randomly pick up two training examples drawn from the multilingual training corpus and first interpolate their source sentences where we have to prudently deal with language tags.", "Then we leverage a mixture decoder to produce a virtual target sentence.", "To account for heavy data imbalance of each language pair, we propose a pairwise sampling strategy to adjust interpolation ratios between language pairs.", "We also propose to simplify the target interpolation to cope with noisy attention and fusions of dissimilar language pairs.", "Different from XEncDec fusing two heterogeneous tasks (Cheng et al., 2021), we attempt to adapt it to deeply fuse different language pairs.", "Experimental results on a large-scale WMT multilingual dataset show that mXEncDec yields improvements of +1 .", "13 and +0 .", "47 BLEU points averagely on xx-en and en-xx test sets over a vanilla multilingual Transformer model.", "We also evaluate our approaches on zero-shot translations and obtain up to +5 .", "53 BLEU points over the baseline method, which corroborates the better transfer-abilty of multilingual models with our approaches.", "The more stable performance on noisy input text demonstrates the capability of our approach to improve the model robustness.", "To further explain the model behaviors at the representation level, qualitative and quantitative comparisons on representations manifest that our approach learns better multilingual representations, which indirectly explicates the BLEU improvements.", "optimizes the conditional probability P ( y | x ; ) of translating a source-language sentence x into a target-language sentence y .", "The encoder reads the source sentence x = x 1 , ..., x I as a sequence of word embeddings e ( x ) .", "The decoder acts as a conditional language model over embeddings e ( y ) and the encoder outputs with a cross-attention mechanism (Bahdanau et al., 2015).", "For clarity, we denote the input and output in the decoder as z and y , i.e. , z = (cid:104) s (cid:105) , y 1 , , y J 1 as a shifted copy of y , where (cid:104) s (cid:105) is a sentence start token.", "Then the decoder generates y as P ( y | x ; ) = (cid:81) Jj =1 P ( y j | z j , x ; ) .", "The cross-attention matrix is denoted as A RJ I .", "NMT optimizes the parameters by maximizing the likelihood of a parallel training set D : LD ( ) = E ( x , y ) D [ (cid:96) ( f ( x , y ; ) , v ( y ))] , (1) where (cid:96) is the cross entropy loss between the model prediction f ( x , y ; ) and label vectors v ( y ) for y .", "v ( y ) could be a sequence of one-hot vectors with smoothing in Transformer (Vaswani et al., 2017).", "Multilingual NMT extends NMT from the bilingual to the multilingual setting, in which it learns a one-to-many, many-to-one or many-to-many mapping from a set of languages to another set of languages (Firat et al., 2016a; Johnson et al., 2017).", "More specifically, the multilingual NMT model is learned over parallel corpora M = {D l i } Li =1 where L is the number of language pairs: LM ( ) = ED li M E ( x , y ) D li [ (cid:96) ( f ( x , y ; ) , v ( y ))] , (2) where all the parallel training sets are fed into the NMT model.", "XEncdec: Crossover Encoder-Decoder .", "XEncDec aims to fuse two parallel examples (called parents) in the encoder-decoder model (Cheng et al., 2021).", "The parents' source sentences are shuffled into a sentence (the offspring's source) on the encoder side, and a mixture decoder model predicts a virtual target sentence (the offspring's target).", "Given a pair of examples ( x , y ) and ( x (cid:48) , y (cid:48) ) where their lengths are different in most cases, padding tokens are appended to the shorter one to align their lengths.", "The crossover example ( x , y ) (offspring) is generated by carrying out XEncDec over ( x , y ) and ( x (cid:48) , y (cid:48) ) (parents).", "where m = m 1 , , m | x | { 0 , 1 } | x | is sampled from a distribution or constructed according to a hyperparameter ratio p ; e.g., p = 0 .", "15 means that 15% of elements in m are 0 .", "| x | is the length of x , which is equal to max( | x | , | x (cid:48) | ) .", "On the crossover decoder side, a mixture conditional language model is employed for the generation of the virtual target sentence.", "The input embedding e ( z j ) and output label v ( y j ) for the decoder at the j -th position are calculated as: e ( z j ) = e ( y j 1 ) t j 1 + e ( y (cid:48) j 1 )(1 t j 1 ) , (4) v ( y j ) = v ( y j ) t j + v ( y (cid:48) j )(1 t j ) , (5) where t = t 1 , ..., t | y | [0 , 1] | y | R | y | .", "In contrast to a common language model fed with a single word y j 1 for predicting y j at the j -th position, the crossover decoder aims to generate an interpolated vector v ( y j ) by averaging v ( y j ) and v ( y (cid:48) j ) with t j , on condition that the current input embedding is also weighted on embeddings e ( y j 1 ) and e ( y (cid:48) j 1 ) with t j 1 .", "The weight vector t used for interpolating target inputs and labels is computed as: t j = (cid:80) Ii =1 A ji m i (cid:80) Ii =1 A ji m i + (cid:80) I (cid:48) i =1 A (cid:48) ji (1 m i ) , (6) where A and A (cid:48) are the alignment matrices for ( x , y ) and ( x (cid:48) , y (cid:48) ) .", "In practice the cross-attention scores in the NMT model are utilized as an alternative noisy alignment matrix (Garg et al., 2019).", "The cross-entropy is utilized to compute the loss for XEncDec when feeding e ( x ) , e ( z ) and v ( y ) into the encoder-decoder model, denoted as: (cid:96) ( f ( x , y ; ) , v ( y )) = (cid:88) j KL ( v ( y j ) (cid:107) P ( y | z j , x ; )) .", "(7) 3 mXEncDec In this work, we aim to leverage XEncDec to encourage multilingual NMT models to better exploit cross-lingual signals with crossover examples created by explicitly fusing different language pairs.", "We introduce its variant, called mXEncDec as shown in Figure 1, in which the parent examples could belong to either the same or different language pairs.", "The subsequent subsections discuss (cid:39)(cid:86)(cid:83)(cid:87)(cid:87)(cid:51)(cid:90)(cid:73)(cid:86) (cid:80)(cid:69)(cid:82)(cid:75)(cid:89)(cid:69)(cid:75)(cid:73)(cid:4)(cid:88)(cid:83)(cid:79)(cid:73)(cid:82)(cid:87) (cid:81)(cid:60)(cid:41)(cid:82)(cid:71)(cid:40)(cid:73)(cid:71) (cid:91)(cid:83)(cid:86)(cid:72)(cid:87) Figure 1: An illustration of multilingual crossover encoder-decoder ( mXEncDec ).", "Language Interpolation .", "As multilingual NMT involves a large number of language pairs, several techniques have been adopted to distinguish translation directions among them, such as prepending a language tag to source inputs (Johnson et al., 2017) or both source and target sentences (Wang et al., 2018), training language-specific embeddings for different languages (Lample and Conneau, 2019), and so on (Dabre et al., 2020).", "When following Lample and Conneau (2019), it is natural to interpolate language-specific embeddings as we do for token embeddings.", "However, if we want to adopt a language tag in the first word of a source sentence to indicate the target language (Johnson et al., 2017), we need to address how to interpolate them.", "As Figure 1 shows, to make the sentence x still carry language-specific information from x and x (cid:48) , we conduct a soft combination over their language tags, that is: e ( x 1 ) = e ( x 1 ) (cid:80) | m | i =2 m i | m | 1 + e ( x (cid:48) 1 ) (cid:80) | m | i =2 (1 m i ) | m | 1 , (8) where | m | is the length of m .", "e ( x 1 ) captures the proportion of words in x coming from the translation pairs ( x , y ) and ( x (cid:48) , y (cid:48) ).", "Simplified Target Interpolation .", "In comparison to bilingual NMT, attention matrices learned in multilingual NMT models are excessively noisy, which results in an inappropriate design of using the attention-based target interpolation in Eq.", "(6) for mXEncDec .", "Instead, we can employ a simple linear interpolation by setting t as a constant vector, here exemplified by the case of using language tags: t j = (cid:80) | m | i =2 m i | m | 1 , j { 1 , ..., | y |} , (9) 4094 A similar equation can be obtained for using language embeddings.", "In addition, dispensing with attention can improve the parallel efficiency with 10% speed-up gain.", "Hard Target Input Interpolation .", "For multilingual NMT with multiple languages on the target side, i.e., one-to-many and many-to-many models, we need to carefully design combinations of target input word embeddings.", "As representations from the same language are usually close to each other, it can still augment the representation space by linearly interpolating target embeddings in Eq.", "(4).", "But for dissimilar languages, in particular distantly related languages, the interpolation points between them are comparatively unreliable.", "To tackle this issue, we simply quantize t j to 1 if t j > 0 .", "5 , otherwise t j = 0 when interpolating target input embeddings for two different target languages in Eq.", "(4).", "A better solution should consider varying the interpolation ratio based on the language similarity or encourage interpolations of similar languages.", "We leave this for future exploration.", "Pairwise Sampling .", "The multilingual corpus is usually heavily imbalanced: most of its data distribution concentrates on high-resource language pairs (Arivazhagan et al., 2019b).", "When interpolating high-resource and low-resource sentence pairs, we assume the fusion should be encouraged to be in favor of high-resource language pairs because the representation space supported by high-resource sentences is relatively reliable and stable (Kudugunta et al., 2019).", "This indicates a more frequent small p (e.g. p < 0 . 5 ) to weigh high-resource sentences over low-resource sentences if ( x , y ) D l i is a high-resource sentence and ( x (cid:48) , y (cid:48) ) D l j is a low-resource sentence.", "To this end, we propose a pairwise sampling method to sample the source shuffle ratio p l i ,l j for interpolating language pair l i and l j : g Bernoulli (1 / (1 + exp ( d ( l i , l j ))) , (10) p l i ,l j = gp + (1 g )(1 p ) , (11) where is a temperature hyperparameter to control the tendency of g towards 0 or 1 for the Bernoulli distribution.", "d ( l i , l j ) can be an arbitrary metric to measure the relationship between language l i and l j .", "Here we use d ( l i , l j ) = | D l i | / | D l j | where | D l i | denotes the data size of the language pair l i .", "LX ( ) = ED li M ED lj M E ( x , y ) D li E ( x (cid:48) , y (cid:48) ) D lj [ (cid:96) ( f ( x , y ; ) , v ( y ))] , (12)", "where the generation of ( x , y ) depends on ( x , y ) and ( x (cid:48) , y (cid:48) ) .", "Algorithm 1 shows how to compute Eq.", "(12) effectively.", "We shuffle the min-batch consisting of all the language pairs.", "Then the shuffled batch and original batch can be used to generate ( x , y ) to compute the mXEncDec loss.", "Instead of using one-hot labels v ( y j ) in Eq.", "(5), we adopt label co-refinement (Li et al., 2019) by linearly combing the ground-truth one-hot label with the model prediction, that is v ( y j ) + f j ( x , y ; )(1 ) .", "Finally, our approach optimizes the model loss involving two training losses, Eq.", "(2) and Eq.", "(12): = argmin {L M ( ) + LX ( ) } .", "Data and Evaluation .", "We conduct experiments on the English-centric WMT multilingual dataset composed of 16 languages (including English) and 30 translation directions from past WMT evaluation campaigns before and on WMT' 19 (Barrault et al., 2019).", "The data distribution is highly skewed, varying from roughly 10k examples in En-Gu to roughly 60M examples in En-Cs.", "Two non-English test sets, Fr-De and De-Cs, are used to verify zero-shot translations.", "In addition, we also use multi-4095 = -2 -0.8 -0.4 0 0.4 0.8 2 xx-en 27.22, 27.42, 27.21, 27.41, 27.46, 27.60, 27.41 en-xx 21.76, 21.83, 21.74, 21.87, 21.89, 22.01, 21.87 Table 1: Effect of the temperature in the pairwise sampling.", "To mitigate the data imbalance in the WMT multilingual corpus, we follow Arivazhagan et al. (2019b) and adopt a temperature-based data sampling strategy to over-sample the low-resource languages where the temperature is set to 5 .", "We apply SentencePiece (Kudo and Richardson, 2018) to learn a vocabulary of 64 k sub-words.", "We perform experiments in three settings: many-to-one, one-to-many and many-to-many translations.", "The 15 test language pairs are cast into three groups according to their data size: High ( > 10 M , 5 languages), Low ( < 1 M , 7) and Medium ( > 1 M & < 10 M , 3).", "We report not only the average detokenized BLEU scores for each group as calculated by the SacreBLEU script (Post, 2018) but also winning ratio (WR) indicating the ratio of all the test sets on which our approach beats the baseline method.", "Models and Hyperparamters .", "Following Chen et al. (2018), we select the Transformer Big (6 layer, 1024 model dimension, 8192 hidden dimension) as the backbone model and implement them with the open-source Lingvo (Shen et al., 2019).", "Adafactor (Shazeer and Stern, 2018) is adapted as our training optimizer, in which the learning rate is set to 3 .", "0 and adjusted with 40 k warm-up steps.", "We use a beam size of 4 and a length penalty of 0 .", "6 for all the test sets.", "We apply language-specific embeddings to both many-to-one and one-to-many models while languages in many-to-many models are specified with language tags.", "Many-to-one and one-to-many models are optimized for 150 k steps while many-to-many models run for 300 k steps.", "All Transformer models utilize a large batch of around 5600 64 tokens over 64 TPUv4/TPUv3 chips.", "We average the last 8 checkpoints to report model performance.", "We tune p over the set: { 0 .", "10 , 0 .", "15 , 0 .", "25 , 0 .", "50 } and set it to 0 .", "15 except for many-to-one using 0 .", "25 .", "The temperature used in Eq.", "(10) to sample the shuffle ratio is selected over the set { 0 , 0 .", "4 , 0 .", "8 , 2 .", "0 } .", "= 0 .", "8 is selected for many-to-many models while = 0 is for others as Table 1 suggests.", "The parameter in label co-refinement is annealed from 0 to 0 .", "7 in the first 40 K steps.", "We find that a non-zero and non-one can not only better capture informative label but also substantially improve the training stability.", "Training Efficiency .", "If we adopt the simplified target interpolation, the loss computations for LM ( ) and LX ( ) in Eq.", "(13) are totally independent.", "But we have to halve the batch size to load interpolation examples ( LX ( ) ) into memory.", "To make the 4096 Method Many-to-Many xx-en en-xx Low Med.", "baseline models and our models observe the same amount of parallel examples per step, we double the number of TPUs to compensate for it.", "many-to-one, one-to-many and many-to-many settings: mXEncDec -A: the target interpolation t is computed by normalizing attention in Eq.", "(6).", "mXEncDec -S: the target interpolation t is simplified to a constant vector in Eq.", "(9).", "We compare mXEncDec to the baseline methods: MLE: the vanilla Multilingual NMT is trained with maximum likelihood estimation.", "mixup : we adapt mixup (Zhang et al., 2018) to multilingual NMT by mixing source and target sequences following the methods proposed in Cheng et al. (2020) and Guo et al. (2020).", "For a fair comparison, we also mix co-refined labels rather than one-hot labels.", "The comparisons between the baseline MLE and our approach suggest that mXEncDec can improve the translation performance on both xx-en and en-xx translation settings (up to +1 . 06 BLEU & 93 . 33 WR on xx-en and +0 . 47 BLEU & 86 . 66 WR on en-xx).", "In particular, using simplified target interpolation to substitute the noisy attention-based interpolation ( mXEncDec -S vs. mXEncDec -A) can achieve better results on xx-en translations (+0.64 BLEU) while slightly performing worse on en-xx translations (-0.16 BLEU).", "After incorporating quantized target interpolation, it yields an additional improvement for mXEncDec -S on en-xx translations (+0.32 BLEU).", "The improvement differences between xx-en and en-xx (+1.06 BLEU vs. +0.47 BLEU) to some extent imply that interpolations on the target side are more favourable to similar languages, and interpolations on the encoder side are not sensitive to language types.", "Table 3 shows results for many-to-many models.", "Among all the training methods, our approaches still obtain the best results for both xx-en and en-4097 0.0 0.1 0.2 0.3 0.4 0.5 Noise Fraction 14 16 18 20 22 24 26 28 BLEUMLE mixup mXEncDec-A mXEncDec-S *", "xx translations (up to +1 . 13 BLEU & 100 WR on xx-en and +0 . 46 BLEU & 73.33 WR).", "We consistently find that mXEncDec -S benefits much more from the quantized target interpolation with +0.68 BLEU on xx-en and +0.21 BLEU on en-xx.", "Although this technique slightly impairs the performance of mXEncDec -A on both xx-en and en-xx translations, it significantly boosts its zero-shot translations as shown in Table 4. We also observe that removing the pairwise sampling with = 0 has big negative effects on high-resource language pairs for many-to-many models.", "Pairwise sampling can not only stabilize the performance on low-resource language pairs and significantly improve high-resource language pairs.", "Compared to mixup , our approaches still attain better performance except that mXEncDec -A on xx-en performs slightly worse.", "mixup trains models on linear interpolations of examples and their labels.", "By contrast, mXEncDec combines training examples in a non-linear way on the source side, and encourages the decoder to decouple the non-linear interpolation with a ratio related to the source end.", "To further verify cross-lingual transfer of our approaches, we utilize many-to-many models to decode language pairs not pesent in the training data, i.e., zero-shot sets from WMT and FLORES.", "In Table 4, our approaches achieve notable improvements across all the test sets compared to baseline methods.", "On average, our best approach ( mXEncDec -A + Hard) can gain up to +4 .", "49 BLEU over MLE.", "Interestingly, this model is not the best on general translations but delivers the best results on zero-shot translations.", "These substantial improvements demonstrate the strong transferability of our approaches.", "We construct a noisy test set comprising code-switching noise to test the robustness of multilingual NMT models (Belinkov and Bisk, 2018; Cheng et al., 2019).", "Following the method proposed in Cheng et al. (2021), we randomly replace a certain ratio of English/non-English source words with non-English/English target words by resorting to an English-centric dictionary.", "From results in Figure 2, we find our approaches to exhibit higher robustness with larger improvements as the noise fraction increases.", "mXEncDec -A shows similar robustness to mXEncDec -S on zero-shot translations and even higher robustness on xx-en translations although its performance on clean test sets falls behind mXEncDec -S .", "mXEncDec -S performs significantly better on en-xx translations compared to other approaches.", "Moreover, it is noteworthy that our approaches have better stability on xx-en translations where we replace non-English words with English counterparts, which is in complete agreement with the finding in section 4.4 that English representations tend to be fused into non-English representations by virtue of our approaches.", "To better interpret the advantages of our approaches over baselines, we attempt to delve deep into the representations incurred by models.", "A common method is to study the encoder representations of multilingual NMT models (Kudugunta et al., 2019), which we follow.", "We aggregate the sentence representations by averaging the encoder outputs.", "The data computing representations come from FLORES (Goyal et al., 2021) as it provides a high quality of multi-way translations implying that sentences from each language are semantically equivalent to each other.", "We use the first 100 sentences in 4098 en cs fr ru zh es fi de et lv lt ro hi kk tr gu", "We argue that the encoder in a good multilingual NMT model prefers to distribute sentence representations based on their semantic similarities rather than language families.", "Figure 3 depicts visualisa-tions of representations plotted by t-SNE (Van der Maaten and Hinton, 2008) on xx-en translations.", "We make the following observations: 1. In each figure, sentences with the same semantics incline to form a single cluster.", "2. For MLE in Figure", "(a), most sentences are dispersed into each cluster based on semantics while extremely low-resource languages (Hi, Gu, Kk) and English possess their own distinct clusters.", "3. For mixup , mXEncDec -A and mXEncDec -S in Figure", "(b)-(d), sentences from extremely low-resource languages start to be assimilated into their own semantic clusters.", "4. For mXEncDec -A and mXEncDec -S in Figure", "(c)-(d), English sentences attempt to fuse into representations of other languages.", "English sentences prefer to become an individual cluster.", "Because when using the language tag <2en> to compute English encoder representations, it is treated as a copy task instead of translation tasks for computing representations of other languages.", "However, our approach promotes English sentences to be closer to their semantic equivalents in other languages.", "This leads to enhanced robustness toward code-switching noise when translating sentences in languages that are mixed with English codes.", "The evident representation amelioration for extremely low-resource languages corroborates significant BLEU improvements on low-resource translations in Table 2 and Table 3. The encoder learned by our approach performs the best and complies with our argument.", "We also conduct quantitative analyses to evaluate the clustering effect of each method in Figure 3. In Table 5, we adopt three clustering metrics, SC (Silhouette Co-efficient), CH (Calinski-Harabaz Index), and DB 1 We also have similar findings from visualizations for en-xx translations.", "(Davies-Bouldin Index).", "Although these metrics cannot adequately assess multilingual representations as they advocate distinct separations between different clusters and tight closeness within the same cluster, we believe they can still measure the within-cluster variance in part.", "Among them, mXEncDec -S performs the best while mixup and mXEncDec -A yield similar performance.", "Multilingual NMT has made tremendous progress in recent years (Dong et al., 2015; Firat et al., 2016a; Johnson et al., 2017; Arivazhagan et al., 2019b; Fan et al., 2021).", "Recent research efforts to improve the generalization of multilingual models concentrate on enlarging the model capacity (Huang et al., 2019; Zhang et al., 2020; Lep-ikhin et al., 2020), incorporating hundreds of languages (Fan et al., 2021), pretraining multilingual models (Liu et al., 2020), and introducing additional regularization constraints (Arivazhagan et al., 2019a; Al-Shedivat and Parikh, 2019; Yang et al., 2021).", "Our work is related to the last three ones in that they try to enable models to better transfer across languages by introducing an alignment loss to learn an interlingua (Arivazhagan et al., 2019a) or imposing an agreement loss on translation equivalents (Al-Shedivat and Parikh, 2019; Yang et al., 2021).", "However, we propose to utilize mXEncDec to directly combine language pairs for better exploitation of cross-lingual signals.", "Another related research line is data mixing.", "Since mixup (Zhang et al., 2018; Yun et al., 2019) was proposed in computer vision, we have observed great success in NLP (Guo et al., 2019; Cheng et al., 2020; Guo et al., 2020; Cheng et al., 2021).", "mXEncDec shares the commonality of combining example pairs as inspired by XEncDec (Cheng et al., 2021).", "To the best of our knowledge, we are the first to fuse different language pairs to improve cross-lingual generalization and robustness for multilingual NMT.", "We have presented mXEncDec to fuse different language pairs at instance level for multilingual NMT, which enables the model to better exploit cross-lingual signals.", "Experimental results on general, zero-shot and noisy test sets demonstrate that our approach can significantly improve the cross-lingual generalization, zero-shot transfer and robustness of multilingual NMT models.", "Representation analyses further confirms that our approach is capable of learning better multilingual representations, which coincides with improvements in BLEU.", "We plan to investigate whether this approach can improve the model generalization in a broader scope like domain generalization.", "We find that mXEncDec can easily achieve notable improvements for xx-en translations because they share an identical target language.", "However, there still exits huge headroom for en-xx translations.", "We plan to explore how to interpolate target languages more effectively, for example, possibly considering language similarity." ]
[ "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "abstain", "other", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other", "abstain", "objective", "other", "abstain", "other", "objective", "method", "objective", "method", "objective", "result", "abstain", "objective" ]
[ "In this paper, we address three challenges in utterance-level emotion recognition in dialogue systems: (1) the same word can deliver different emotions in different contexts; (2) some emotions are rarely seen in general dialogues; (3) long-range contextual information is hard to be effectively captured.", "We therefore propose a hierarchical Gated Recurrent Unit (HiGRU) framework with a lower-level GRU to model the word-level inputs and an upper-level GRU to capture the contexts of utterance-level embeddings.", "Moreover, we promote the framework to two variants, HiGRU with individual features fusion (HiGRU-f) and HiGRU with self-attention and features fusion (HiGRU-sf), so that the word/utterance-level individual inputs and the long-range contextual information can be sufficiently utilized.", "Experiments on three dialogue emotion datasets, IEMOCAP, Friends, and EmotionPush demonstrate that our proposed HiGRU models attain at least 8.7%, 7.5%, 6.0% improvement over the state-of-the-art methods on each dataset, respectively.", "Particularly, by utilizing only the textual feature in IEMOCAP, our HiGRU models gain at least 3.8% improvement over the state-of-the-art conversational memory network (CMN) with the trimodal features of text, video, and audio.", "Emotion recognition is a significant artificial intelligence research topic due to the promising potential of developing empathetic machines for people.", "Emotion is a universal phenomena across different cultures and mainly consists of six basic types: anger, disgust, fear, happiness, sadness, and surprise (Ekman, 1971, 1992).", "In this paper, we focus on textual dialogue sys-tems because textual feature dominates the performance over audio and video features (Poria et al., Role Utterance Emotion Rachel Oh okay, I'll fix that to. What's her email address? Neutral Ross Rachel! Anger Rachel All right, I promise. I'll fix this. I swear. I'll-I'llI'll-I'll talk to her. Non-neutral Ross Okay! Anger Rachel Okay. Neutral Nurse This room's available. Neutral Rachel Okay! Joy Rachel Okay wait! Non-neutral Rachel You listen to me! Anger Figure 1: The word okay exhibits different emotions in the American television sitcom, Friends. 2015, 2017).", "In utterance-level emotion recognition, an utterance (Olson, 1977) is a unit of speech bounded by breathes or pauses and its goal is to tag each utterance in a dialogue with the indicated emotion.", "In this task, we address three challenges: First, the same word can deliver different emotions in different contexts.", "For example, in Figure 1, the word okay can deliver three different emotions, anger, neutral, and joy, respectively.", "Strong emotions like joy and anger may be indicated by the symbols ! or ? along the word.", "To identify a speaker's emotion precisely, we need to explore the dialogue context sufficiently.", "Second, some emotions are rarely seen in general dialogues.", "For example, people are usually calm and present a neutral emotion while only in some particular situations, they express strong emotions, like anger or fear.", "Thus we need to be sensitive to the minority emotions while relieving the effect of the majority emotions.", "Third, the long-range contextual information is hard to be effectively captured in an ut-terance/dialogue, especially when the length of an utterance/dialogue in the testing set is longer than those in the training set.", "To tackle these challenges, we propose a hierarchical Gated Recurrent Unit (HiGRU) framework for the utterance-level emotion recognition in dialogue systems.", "More specifically, HiGRU is composed by two levels of bidirectional GRUs, a lower-level GRU to model the word sequences of each utterance to produce individual utterance embeddings, and an upper-level GRU to capture the sequential and contextual relationship of utterances.", "We further promote the proposed HiGRU to two variants, HiGRU with individual features fusion (HiGRU-f), and HiGRU with self-attention and features fusion (HiGRU-sf).", "In HiGRU-f, the individual inputs, i.e., the word embeddings in the lower-level GRU and the individual utterance embeddings in the upper-level GRU, are concatenated with the hidden states to generate the contextual word/utterance embeddings, respectively.", "In HiGRU-sf, a self-attention layer is placed on the hidden states from the GRU to learn long-range contextual embeddings, which are concatenated with the original individual embeddings and the hidden states to generate the contextual word/utterance embeddings.", "Finally, the contextual utterance embedding is sent to a fully-connected (FC) layer to determine the corresponding emotion.", "To alleviate the effect of data imbalance issue, we follow (Khosla, 2018) to train our models by minimizing a weighted categorical cross-entropy.", "We summarize our contributions as follows: We propose a HiGRU framework to better learn both the individual utterance embeddings and the contextual information of utterances, so as to recognize the emotions more precisely.", "We propose two progressive HiGRU variants, HiGRU-f and HiGRU-sf, to sufficiently incorporate the individual word/utterance-level information and the long-range contextual information respectively.", "We conduct extensive experiments on three textual dialogue emotion datasets, IEMOCAP, Friends, and EmotionPush.", "The results demonstrate that our proposed HiGRU models achieve at least 8.7%, 7.5%, 6.0% improvement over state-of-the-art methods on each dataset, respectively.", "Particularly, by utilizing only the textual feature in IEMOCAP, our proposed HiGRU models gain at least 3.8% improvement over the existing best model, conversational memory network (CMN) with not only the text feature, but also the visual, and audio features.", "Text-based emotion recognition is a long-standing research topic (Wilson et al., 2004; Yang et al., 2007; Medhat et al., 2014).", "Nowadays, deep learning technologies have become dominant methods due to the outstanding performance.", "Some prominent models include recursive autoencoders (RAEs) (Socher et al., 2011), convolutional neural networks (CNNs) (Kim, 2014), and recurrent neural networks (RNNs) (Abdul-Mageed and Ungar, 2017).", "However, these models treat texts independently thus cannot capture the inter-dependence of utterances in dialogues (Kim, 2014; Lai et al., 2015; Grave et al., 2017; Chen et al., 2016; Yang et al., 2016).", "To exploit the contextual information of utterances, researchers mainly explore in two directions: (1) extracting contextual information among utterances, or (2) enriching the information embedded in the representations of words and utterances.", "Contextual Information Extraction.", "The RNN architecture is a standard way to capture the sequential relationship of data.", "Poria et al. propose a bidirectional contextual long short-term memory (LSTM) network, termed bcLSTM, to model the context of textual features extracted by CNNs.", "Hazarika et al. improve bcLSTM by a conversational memory network (CMN) to capture the self and inter-speaker emotional influence, where GRU is utilized to model the self-influence and the attention mechanism is employed to excavate the inter-speaker emotional influence.", "Though CMN is reported to attain better performance than bcLSTM on IEMOCAP (Hazarika et al., 2018), the memory network is too complicated for small-size dialogue datasets.", "Representation Enrichment.", "Multimodal features have been utilized to enrich the representation of utterances (Poria et al., 2015, 2017).", "Previous work indicate that textual features dominate the performance of recognizing emotions in contrast to visual or audio features (Poria et al., 2015, 2017).", "Recently, the textual features are mainly extracted by CNNs to learn individual utterance embeddings (Poria et al., 2015, 2017; Zahiri and Choi, 2018; Hazarika et al., 2018).", "However, CNNs do not capture the contextual information within each utterance well.", "On the other hand, hierarchical RNNs have been proposed and demonstrated good performance in GRU \" # $ % GRU GRUGRUGRU % \" # $ % \" # $ ( ) ) Fusion + ( ) ) ( ) Max-pooling Individual Utterance Embedding . / GRUGRU . / . / Attention %0 \" # $ % ( ) + ( ) Fully-connected Contextual Utterance Embedding 1 2 Attention Softmax %3 % % GRU GRU GRUGRUGRU % \" # $ % \" # $ GRUGRU 1 2 1 2 %0 %3 % % Fusion Figure 2: The architecture of our proposed HiGRU-sf.", "conventional text classification task (Tang et al., 2015), dialogue act classification (Liu et al., 2017; Kumar et al., 2018), and speaker change detection (Meng et al., 2017).", "But they are not well explored in the task of utterance-level emotion recognition in dialogue systems.", "Definition 1 ( Utterance-level Emotion Recogni-tion).", "Suppose we are given a set of dialogues, D = { D i } Li =1 , where L is the number of dialogues.", "In each dialogue, D i = { ( u j , s j , c j ) } N i j =1 , is a sequence of N i utterances, where the utterance u j is spoken by the speaker s j S with a certain emotion c j C .", "All speakers compose the set S and the set C consists of all emotions, such as anger, joy, sadness, and neutral.", "Our goal is to train a model M to tag each new utterance with an emotion label from C as accurately as possible.", "To solve this task, we propose a hierarchical Gated Recurrent Units (HiGRU) framework and extend two progressive variants, HiGRU with individual features fusion (HiGRU-f) and HiGRU with self-attention and features fusion (HiGRU-sf) (il-lustrated in Figure 2).", "The vanilla HiGRU consists of two-level GRUs: the lower-level bidirectional GRU is to learn the individual utterance embedding by modeling the", "word sequence within an utterance and the upper-level bidirectional GRU is to learn the contextual utterance embedding by modeling the utterance sequence within a dialogue.", "Individual Utterance Embedding.", "For the j th utterance in D i , u j = { w k } M j k =1 , where M j is the number of words in the utterance u j .", "The corresponding sequence of individual word embeddings { e ( w k ) } M j k =1 are fed into the lower-level bidirectional GRU (Cho et al., 2014) to learn the individual utterance embedding in two opposite directions: h k = GRU( e ( w k ) , h k 1 ) , (1) h k = GRU( e ( w k ) , h k +1 ) .", "The two hidden states h k and h k are concatenated into hs = [ h k ; h k ] to produce the contextual word embedding for w k via the tanh activation function on a linear transformation:", "where W w R d 1 2 d 1 and b w R d 1 are the model parameters, d 0 and d 1 are the dimensions of word embeddings and the hidden states of the lower-level GRU, respectively.", "Contextual Utterance Embedding.", "For the i th dialogue, D i = { ( u j , s j , c j ) } N i j =1 , the learned individual utterance embeddings, { e ( u j ) } N i j =1 , are fed into the upper-level bidirectional GRU to capture the sequential and contextual relationship of utterances in a dialogue: H j = GRU( e ( u j ) , H j 1 ) , (5) H j = GRU( e ( u j ) , H j +1 ) .", "Here, the hidden states of the upper-level GRU are represented by H j R d 2 , to distinguish from those learned in the lower-level GRU denoted by h k .", "Accordingly, we can obtain the contextual utterance embedding by e c ( u j ) = tanh( W u Hs + b u ) , (7) where Hs = [ H j ; H j ] , W u R d 2 2 d 2 and b u R d 2 are the model parameters, d 2 is the dimension of the hidden states in the upper-level GRU.", "Since the emotions are recognized at utterance-level, the learned contextual utterance embedding e c ( u j ) is directly fed to a FC layer followed by a softmax function to determine the corresponding emotion label: y j = softmax ( W fc e c ( u j ) + b fc ) , (8) where y j is the predicted vector over all emotions, and W fc R |C| d 2 , b fc R |C| .", "The vanilla HiGRU contains two main issues: (1) the individual word/utterance embeddings are diluted with the stacking of layers; (2) the upper-level GRU tends to gather more contextual information from the majority emotions, which deteriorates the overall model performance.", "To resolve these two problems, we propose to fuse individual word/utterance embeddings with the hidden states from GRUs so as to strengthen the information of each word/utterance in its contextual embedding.", "This variant is named as HiGRU-f, representing HiGRU with individual features fusion.", "Hence, the lower-level GRU can maintain individual word embeddings and the upper-level GRU can relieve the effect of majority emotions and attain a more precise utterance representation for different emotions.", "Specifically, ,1 ,2 ,3 . . . , 1,1 1,2 1,3 . . . 1, 2,1 2,2 2,3 . . . 2, 3,1 3,2 3,3 . . . 3, ... ... ... ... . . . 1 2 3 . . . Copy Softmax Matmul 1 2 3 ... 1 2 3 ... Figure 3: Self-attention over the forward hidden states of GRU.", "the contextual embeddings are updated as: e c ( w k ) = tanh( W w hs f + b w ) , (9) e c ( u j ) = tanh( W u Hs f + b u ) , (10) where W w R d 1 ( d 0 +2 d 1 ) , W u R d 2 ( d 1 +2 d 2 ) , hs f = [ h k ; e ( w k ); h k ] , and Hs f = [ H j ; e ( u j ); H j ] .", "Another challenging issue is to extract the contextual information of long sequences, especially the sequences in the testing set that are longer than those in the training set (Bahdanau et al., 2014).", "To fully utilize the global contextual information, we place a self-attention layer upon the hidden states of HiGRU and fuse the attention outputs with the individual word/utterance embeddings and the hidden states to learn the contextual word/utterance embeddings.", "Hence, this variant is termed HiGRU-sf, representing HiGRU with self-attention and features fusion.", "Particularly, we apply self-attention upon the forward and backward hidden states separately to produce the left context embedding, h lk ( H lj ), and the right context embedding, h rk ( H rj ), respectively.", "This allows us to gather the unique global contextual information at the current step in two opposite directions and yield the corresponding contextual embeddings computed as follows: e c ( w k ) = tanh( W w hs sf + b w ) , (11) e c ( u j ) = tanh( W u Hs sf + b u ) , (12) where W w R d 1 ( d 0 +4 d 1 ) , W u R d 2 ( d 1 +4 d 2 ) , hs sf = [ h lk ; h k ; e ( w k ); h k ; h rk ] , and Hs sf = [ H lj ; H j ; e ( u j ); H j ; H rj ] .", "Self-Attention (SA).", "The self-attention mechanism is an effective non-recurrent architecture to compute the relation between one input to all other inputs and has been successfully applied in various natural language processing applications such as reading comprehension (Hu et al., 2018), and neural machine translation (Vaswani et al., 2017).", "Figure 3 shows the dot-product SA over the forward hidden states of GRU to learn the left context h lk .", "Each element in the attention matrix is computed by f ( h k , h p ) = (cid:40) h k (cid:62) h p , if k, p M j , , otherwise .", "An attention mask is then applied to waive the inner attention between the sequence inputs and paddings.", "At each step, the corresponding left context h lk is then computed by the weighted sum of all the forward hidden states: h lk = M j (cid:88) p =1 a kp h p , a kp = exp( f ( h k , h p )) (cid:80) Mj p (cid:48) =1 exp (cid:16) f ( h k , h p (cid:48) ) (cid:17) , (14) where a kp is the weight of h p to be included in h lk .", "The right context h rk can be computed similarly.", "Following (Khosla, 2018) which attains the best performance in the EmotionX shared task (Hsu and Ku, 2018), we minimize a weighted categorical cross-entropy on each utterance of all dialogues to optimize the model parameters:", "where y j is the original one-hot vector of the emotion labels, and y cj and y cj are the elements of y j and y j corresponding to the class c .", "Similar to (Khosla, 2018), we assign the loss weight ( c j ) inversely proportional to the number of training utterances in the class c j , denoted by I c , i.e., assigning larger loss weights for the minority classes to relieve the data imbalance issue.", "The difference is that we add a constant to adjust the smoothness of the distribution.", "Then, we have: 1 ( c ) = I c (cid:80) |C| c (cid:48) =1 I c (cid:48) .", "We conduct systematical experiments to demonstrate the advantages of our proposed HiGRU models.", "The experiments are carried out on three textual dialogue emotion datasets (see the statistics in Table 1):", "IEMOCAP 1 .", "It contains approximately 12 hours of audiovisual data, including video, speech, motion capture of face, text transcriptions.", "Following (Poria et al., 2017; Hazarika et al., 2018): (1) We apply the first four sessions for training and the last session for test; (2) The validation set is extracted from the shuffled training set with the ratio of 80:20; (3) We only evaluate the performance on four emotions: anger, happiness, sadness, neutral, and remove the rest utterances.", "Friends 2 .", "The dataset is annotated from the Friends TV Scripts (Hsu and Ku, 2018), where each dialogue in the dataset consists of a scene of multiple speakers.", "Totally, there are 1,000 dialogues, which are split into 720, 80, and 200 dialogues for training, validation, and testing, respectively.", "Each utterance in a dialogue is labeled by one of the eight emotions: anger, joy, sadness, neutral, surprise, disgust, fear, and non-neutral.", "EmotionPush 3 .", "The dataset consists of private conversations between friends on the Facebook messenger collected by an App called EmotionPush, which is released for the EmotionX shared task (Hsu and Ku, 2018).", "Totally, there are 1,000 dialogues, which are split into 720, 80, 200 dialogue for training, validation, and testing, respectively.", "All the utterances are categorized into one of the eight emotions as in the Friends dataset.", "Following the setup of (Hsu and Ku, 2018), in Friends and EmotionPush, we only evaluate the model performance on four emotions: anger, joy, sadness, and neutral, and we exclude the contribution of the rest emotion classes during training by setting their loss weights to zero.", "Data Preprocessing.", "We preprocess the datasets by the following steps: (1) The utterances are split into tokens with each word being made into the lowercase; (2) All non-alphanumerics except ? and ! are removed because these two symbols usually exhibit strong emotions, such as surprise, 1 https://sail.usc.edu/iemocap/ 2 http://doraemon.iis.sinica.edu.tw/ emotionlines 3 http://doraemon.iis.sinica.edu.tw/ emotionlines Dataset #Dialogue (#Utterance) #Emotion Train Val Test Ang Hap/Joy Sad Neu Others IEMOCAP 96 (3,569) 24 (721) 31 (1,208) 1,090 1,627 1,077 1,704 0 Friends 720 (10,561) 80 (1,178) 200 (2,764) 759 1,710 498 6,530 5,006 EmotionPush 720 (10,733) 80 (1,202) 200 (2,807) 140 2,100 514 9,855 2,133 Table 1: Statistics of the textual dialogue datasets.", "joy and anger; (3) We build a dictionary based on the words and symbols extracted, and follow (Po-ria et al., 2017) to represent the tokens by the publicly available 300-dimensional word2vec 4 vectors trained on 100 billion words from Google News.", "The tokens not included in the word2vec dictionary are initialized by randomly-generated vectors.", "To conduct fair comparison, we adopt two metrics as (Hsu and Ku, 2018), the weighted accuracy (WA) and unweighted accuracy (UWA):", "where p c is the percentage of the class c in the testing", "testing set, and a c is the corresponding accuracy.", "Generally, recognizing strong emotions may provide more value than detecting the neutral emotion (Hsu and Ku, 2018).", "Thus, in Friends and EmotionPush, UWA is a more favorite evaluation metric because WA is heavily compromised with the large proportion of the neutral emotion.", "Our proposed vanilla HiGRU, HiGRU-f, and HiGRU-sf 5 are compared with the following state-of-the-art baselines:", "bcLSTM (Poria et al., 2017) : a bidirectional contextual LSTM with multimodal features extracted by CNNs;", "CMN (Hazarika et al., 2018) : a conversational memory network with multimodal features extracted by CNNs;", "SA-BiLSTM (Luo et al., 2018) : a self-attentive bidirectional LSTM model, a neat model achieving the second place of EmotionX Challenge (Hsu and Ku, 2018);", "CNN-DCNN (Khosla, 2018) : a convolutional-deconvolutional autoencoder with more handmade", "features, the winner of EmotionX Challenge (Hsu and Ku, 2018);", "bcLSTM and bcGRU : our implemented bcLSTM and bcGRU with the weighted loss on the textual feature extracted from CNNs.", "All our implementations are coded on the Pytorch framework.", "To prevent the models fitting the order of data, we randomly shuffle the training set at the beginning of every epoch.", "Parameters.", "For bcLSTM and bcGRU , the CNN layer follows the setup of (Kim, 2014), i.e., consisting of the kernels of 3, 4, and 5 with 100 feature maps each.", "The convolution results of each kernel are fed to a max-over-time pooling operation.", "The dimension of the hidden states of the upper-level bidirectional LSTM or GRU is set to 300.", "For HiGRU, HiGRU-f, and HiGRU-sf, the dimensions of hidden states are set to 300 for both levels.", "The final FC layer contains two sub-layers with 100 neurons each.", "Training.", "We adopt Adam (Kingma and Ba, 2014) as the optimizer and set an initial learning rate, 1 10 4 for IEMOCAP and 2 .", "5 10 4 for Friends and EmotionPush, respectively.", "An annealing strategy is utilized by decaying the learning rate by half every 20 epochs.", "Early stopping with a patience of 10 is adopted to terminate training based on the accuracy of the validation set.", "Specifically, following the best models on each dataset, the parameters are tuned to optimize WA on the validation set of IEMOCAP and to optimize UWA on the validation set of Friends and EmotionPush, respectively.", "Gradient clipping with a norm of 5 is applied to model parameters.", "To prevent overfitting, dropout with a rate of 0.5 is applied after the contextual word/utterance embeddings, and the FC layer.", "Loss weights.", "For Friends and EmotionPush, as mentioned in Section 4.1, the loss weights are set to zero except the four considered emotions, to ignore the others during training.", "Besides, the power rate of loss weights is tested from 0 to 1.5 with Model (Feat) Ang Hap Sad Neu WA UWA bcLSTM (T) 76.07 78.97 76.23 67.44 73.6 74.6 (T+V+A) 77.98 79.31 78.30 69.92 76.1 76.3 CMN (T) --74.1 (T+V+A) 89.88 81.75 77.73 67.32 77.6 79.1 bcLSTM (T) 75.29 79.40 78.07 76.53 77.7 (1.1) 77.3 (1.4) bcGRU (T) 77.20 80.99 76.26 72.50 76.9 (1.6) 76.7 (1.3) HiGRU (T) 75.41 91.64 79.79 70.74 80.6 (0.5) 79.4 (0.5) HiGRU-f (T) 76.69 88.91 80.25 75.92 81.5 (0.7) 80.4 (0.5) HiGRU-sf (T) 74.78 89.65 80.50 77.58 82.1 (0.4) 80.6 (0.2) Table 2: Experimental results on IEMOCAP.", "Table 2 and Table 3 report the average results of 10 trials each on the three datasets, where the standard deviations of WA and UWA are recorded by the subscripts in round brackets.", "The results of bcLSTM, CMN, SA-BiLSTM, and CNN-DCNN are copied directly from the original papers for a fair comparison because we follow the same con-figuration for the corresponding datasets.", "From the results, we have the following observations: (1) Baselines.", "Our implemented bcLSTM and bcGRU, attain comparable performance with the state-of-the-art methods on all three datasets.", "From the results on IEMOCAP in Table 2, we observe that:", "(a) By utilizing the textual feature only, bcGRU outperforms bcLSTM and CMN trained on the textual feature significantly, attaining +3.3 and +2.8 gain in terms of WA, respectively.", "bcLSTM performs better than bcGRU, and even beats bcLSTM and CMN with the trimodal features in terms of WA.", "In terms of UWA, CMN performs better than bcLSTM only when it is equipped with multimodal features.", "(b) By examining the detailed accuracy in each emotion, bcLSTM and bcGRU with the textual feature attain much higher accuracy on the neutral emotion than bcLSTM with the only textual feature while maintaining good performance on the other three emotions.", "The results show that the weighted loss function benefits the training of models.", "bcGRU trained on the same dataset (F+E) of CNN-DCNN perform better than CNN-DCNN on EmotionPush while attaining comparable performance with CNN-DCNN on Friends.", "The results show that by utilizing the contextual information with the weighted loss function, bcLSTM and bcGRU can beat the state-of-the-art method.", "From Table 2, we observe that:", "(a) CMN with the trimodal features attains the best performance on the anger emotion while our vanilla HiGRU achieves the best performance on the happiness emotion and gains further improvement on sadness and neutral emotions over CMN.", "Overall, the vanilla HiGRU achieves at least 8.7% and 3.8% improvement over CMN with the textual feature and the trimodal features in terms of WA, respectively.", "The results, including those of bcLSTM and bcGRU, indicate that GRU learns better representations of utterances than CNN in this task.", "(b) The two variants, HiGRU-f and HiGRU-sf, can further attain +0.9 and +1.5 improvement over HiGRU in terms of WA and +1.0 and +1.2 improvement over HiGRU in terms of UWA, respectively.", "The results demonstrate that the included individual word/utterance-level features and long-range contextual information in HiGRU-f and HiGRU-sf, are indeed capable of boosting the performance of the vanilla HiGRU.", "From Table 3, we can see that:", "(a) In terms of UWA, HiGRU trained and tested on individual sets of Friends and EmotionPush gains at least 7.5% and 6.0% improvement over CNN-DCNN, respectively.", "Overall, our proposed HiGRU achieves well-balanced performance for the four tested emotions, especially attaining significant better performance on the minority emotions of anger and sadness.", "(b) Moreover, HiGRU-f and HiGRU-sf further improve HiGRU +1.2 accuracy and +1.7 accuracy on Friends and +0.6 accuracy and +1.8 accuracy on EmotionPush in terms of UWA, respectively.", "The results again demonstrate the superior power of HiGRU-f and HiGRU-sf.", "(3) Mixing Training Sets.", "By examining the results from the last ten rows in Table 3, we conclude that it does not necessarily improve the performance by mixing the two sets of training data.", "Though the best performance of SA-BiLSTM Model Train Friends (F) EmotionPush (E) Ang Joy Sad Neu WA UWA Ang Joy Sad Neu WA UWA SA-BiLSTM F+E 49.1 68.8 30.6 90.1 -59.6 24.3 70.5 31.0 94.2 -55.0 CNN-DCNN F+E 55.3 71.1 55.3 68.3 -62.5 45.9 76.0 51.7 76.3 -62.5 bcLSTM F(E) 64.7 69.6 48.0 75.6 72.4 (4.2) 64.4 (1.6) 32.9 69.9 47.1 78.0 74.7 (4.4) 57.0 (2.1) bcGRU F(E) 69.5 65.4 52.9 74.7 71.7 (4.7) 65.6 (1.2) 33.7 71.1 57.2 76.1 73.9 (2.9) 59.5 (1.8) bcLSTM F+E 54.5 75.6 43.4 73.0 70.5 (4.5) 61.6 (1.6) 52.4 79.1 54.7 73.3 73.4 (3.8) 64.9 (2.1) bcGRU F+E 59.0 78.6 42.3 71.4 70.2 (5.1) 62.8 (1.4) 49.4 74.8 61.9 72.4 72.1 (4.3) 64.6 (1.8) HiGRU F(E) 66.9 73.0 51.8 77.2 74.4 (1.7) 67.2 (0.6) 55.6 78.1 57.4 73.8 73.8 (2.0) 66.3 (1.7) HiGRU-f F(E) 69.1 72.1 60.4 72.1 71.3 (2.9) 68.4 (1.0) 55.9 78.9 60.4 72.4 73.0 (2.2) 66.9 (1.2) HiGRU-sf F(E) 70.7 70.9 57.7 76.2 74.0 (1.4) 68.9 (1.5) 57.5 78.4 64.1 72.5 73.0 (1.6) 68.1 (1.2) HiGRU F+E 55.4 81.2 51.4 64.4 65.8 (4.2) 63.1 (1.5) 50.8 76.9 69.0 75.7 75.3 (1.7) 68.1 (1.2) HiGRU-f F+E 54.9 78.3 55.5 68.7 68.5 (3.0) 64.3 (1.2) 58.3 79.1 69.6 70.0 71.5 (2.5) 69.2 (0.9) HiGRU-sf F+E 56.8 81.4 52.2 68.7 69.0 (2.0) 64.8 (1.3) 57.8 79.3 66.3 77.4 77.1 (1.0) 70.2 (1.1) Table 3: Experimental results on Friends and EmotionPush.", "and CNN-DCNN is obtained by training on the mixed dataset, the testing results show that our implemented bcLSTM , bcGRU and our proposed HiGRU models can attain better performance on EmotionPush but yield worse performance on Friends in terms of UWA.", "By examining the detailed emotions, we speculate that: EmotionPush is a highly imbalanced dataset with over 60% of utterances in the neutral emotion.", "Introducing EmotionPush into a more balanced dataset, Friends, is equivalent to down-sampling the minority emotions in Friends.", "This hurts the performance on the minority emotions, anger and sadness.", "Meanwhile, introducing Friends into EmotionPush corresponds to up-sampling the minority emotions in EmotionPush.", "The performance of the sadness emotion is significantly boosted and that on the anger emotion is at least unaffected.", "Model Size.", "We study how the scale of the utterance encoder affects the performance of our proposed models, especially when our models contain a similar number of parameters as the baseline, say bcGRU.", "Such a fair condition can be made between our HiGRU-sf and bcGRU if we set d 1 to 150.", "From the testing results on Friends in Table 4, we can observe that: (1) Under the fair condition, the performance of our HiGRU-sf is not degraded compared to that when d 1 = 300 .", "HiGRU-sf still outperforms bcGRU by a significant margin.", "(2) Overall, no matter d 1 is larger or smaller than 150, HiGRU-sf maintains consistently good performance and the difference between HiGRU-sf and HiGRU-f or HiGRU keeps noticeable.", "These results further demonstrate the superiority of our proposed models over the baseline bcGRU and the motivation of developing the two variants based on the vanilla HiGRU.", "Successful Cases.", "We investigate three scenes related to the word okay that expresses three distinct emotions.", "The first two scenes come from the testing set of Friends and the third one from that of IEMOCAP.", "We report the predictions made by bcGUR and our HiGRU-sf, respectively, in Table 5.", "In Scene-1 , okay with period usually exhibits little emotion and both bcGRU and HiGRU-sf correctly classify it as Neu.", "In Scene-2 , okay with ! expresses strong emotion.", "However, bcGRU misclassifies it to Ang while HiGRU-sf successfully recognizes it as Joy.", "Actually, the mistake can be traced back to the first utterance of this scene which is also misclassified as Ang.", "This indicates that bcGRU tends to capture the wrong atmosphere within the dialogue.", "As for Scene-3 , okay with period now indicates Sad and is correctly recognized by HiGRU-sf but misclassified as Neu by bcGRU.", "Note that HiGRU-sf also classifies the third utterance in Scene-3 as Sad which seems to be conflicting Role Utterance Truth bcGRU HiGRU-sf Scene-1 Phoebe Okay.", "to the ground truth.", "In fact, our HiGRU-sf captures the blues of this parting situation, where the true label Hap may not be that suitable.", "These results show that our HiGRU-sf learns from both each utterance and the context, and can make correct predictions of the emotion of each utterance.", "Failed Cases.", "At last, we show some examples that both bcGRU and our HiGRU-sf fail in recognizing the right emotions in Table 6, i.e., Scene-4 from Friends and Scene-5 from EmotionPush.", "In Scene-4 , both bcGRU and HiGRU-sf make wrong predictions for the fifth and the sixth utterances.", "It should be good news that Ross has his paper published and Rachel is glad to see related reports about it.", "However, the transcripts do not reveal very strong emotions compared to what the characters might act in the TV show.", "This kind of scenes may be addressed by incorporating some other features like audio and video.", "As for Scene-5 , the third and the fifth utterances are classified into wrong emotions.", "Notice that the emotions indicated from the two utterances are very subtle even for humans.", "The Speaker-2 did not plan to get up today, but Speaker-1 kept him/her up and it ended up with a really lax day.", "So, the Speaker-2 feels joyful now.", "This indicates that even taking into the context into account, the models' capability of understanding subtle emotions is still limited and more exploration is required.", "We propose a hierarchical Gated Recurrent Unit (HiGRU) framework to tackle the utterance-level emotion recognition in dialogue systems, where the individual utterance embeddings are learned by the lower-level GRU and the contexts of utterances are captured by the upper-level GRU.", "We promote the HiGRU framework to two variants, HiGRU-f, and HiGRU-sf, and effectively capture the word/utterance-level inputs and the long-range contextual information, respectively.", "Experimental results demonstrate that our proposed HiGRU models can well handle the data imbalance issue and sufficiently capture the available text information, yielding significant performance boosting on all three tested datasets.", "In the future, we plan to explore semi-supervised learning methods to address the problem of data scarcity in this task.", "This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund, and Project No. UGC/IDS14/16), and Meitu (No. 7010445).", "We thank the three anonymous reviewers for the insightful suggestions on various aspects of this work." ]
[ "result", "objective", "method", "objective", "method", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "other", "other" ]
[ "Pretrained contextualized embeddings are powerful word representations for structured prediction tasks.", "Recent work found that better word representations can be obtained by concatenating different types of embeddings.", "However, the selection of embeddings to form the best concatenated representation usually varies depending on the task and the collection of candidate embeddings, and the ever-increasing number of embedding types makes it a more difficult problem.", "In this paper, we propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks, based on a formulation inspired by recent progress on neural architecture search.", "Specifically, a controller alternately samples a concatenation of embeddings, according to its current belief of the effectiveness of individual embedding types in consideration for a task, and updates the belief based on a reward.", "We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model, which is fed with the sampled concatenation as input and trained on a task dataset.", "Empirical results on 6 tasks and 21 datasets show that our approach outperforms strong baselines and achieves state-of-the-art performance with fine-tuned embeddings in all the evaluations.", "1 1 Introduction Recent developments on pretrained contextualized embeddings have significantly improved the performance of structured prediction tasks in natural Yong Jiang and Kewei Tu are the corresponding authors.", "language processing.", "Approaches based on contextualized embeddings, such as ELMo (Peters et al., 2018), Flair (Akbik et al., 2018), BERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020), have been consistently raising the state-of-the-art for various structured prediction tasks.", "Concurrently, research has also showed that word representations based on the concatenation of multiple pretrained contextualized embeddings and traditional non-contextualized embeddings (such as word2vec (Mikolov et al., 2013) and character embeddings (Santos and Zadrozny, 2014)) can further improve performance (Peters et al., 2018; Akbik et al., 2018; Strakov et al., 2019; Wang et al., 2020b).", "Given the ever-increasing number of embedding learning methods that operate on different granularities (e.g., word, subword, or character level) and with different model architectures, choosing the best embeddings to concatenate for a specific task becomes non-trivial, and exploring all possible concatenations can be prohibitively demanding in computing resources.", "Neural architecture search (NAS) is an active area of research in deep learning to automatically search for better model architectures, and has achieved state-of-the-art performance on various tasks in computer vision, such as image classifi-cation (Real et al., 2019), semantic segmentation (Liu et al., 2019a), and object detection (Ghiasi et al., 2019).", "In natural language processing, NAS has been successfully applied to find better RNN structures (Zoph and Le, 2017; Pham et al., 2018b) and recently better transformer structures (So et al., 2019; Zhu et al., 2020).", "In this paper, we propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.", "ACE is formulated as an NAS problem.", "In this approach, an iterative search process is guided by a controller based on its belief that models the effectiveness of individual embedding candidates in consideration for a specific task.", "At each step, the controller samples a concatenation of embeddings according to the belief model and then feeds the concatenated word representations as inputs to a task model, which in turn is trained on the task dataset and returns the model accuracy as a reward signal to update the belief model.", "We use the policy gradient algorithm (Williams, 1992) in reinforcement learning (Sutton and Barto, 1992) to solve the optimization problem.", "In order to improve the efficiency of the search process, we also design a special reward function by accumulating all the rewards based on the transformation between the current concatenation and all previously sampled concatenations.", "1. Unlike most previous work, we focus on searching for better word representations rather than better model architectures.", "2. We design a novel search space for the embedding concatenation search.", "Instead of using RNN as in previous work of Zoph and Le (2017), we design a more straightforward controller to generate the embedding concatenation.", "We design a novel reward function in the objective of optimization to better evaluate the effectiveness of each concatenated embeddings.", "3. ACE achieves high accuracy without the need for retraining the task model, which is typically required in other NAS approaches.", "4. Our approach is efficient and practical.", "Although ACE is formulated in a NAS framework, ACE can find a strong word representation on a single GPU with only a few GPU-hours for structured prediction tasks.", "In comparison, a lot of NAS approaches require dozens or even thousands of GPU-hours to search for good neural architectures for their corresponding tasks.", "Empirical results show that ACE outperforms strong baselines.", "Furthermore, when ACE is applied to concatenate pretrained contextualized embeddings fine-tuned on specific tasks, we can achieve state-of-the-art accuracy on 6 structured prediction tasks including Named Entity Recognition (Sundheim, 1995), Part-Of-Speech tagging (DeRose, 1988), chunking (Tjong Kim Sang and Buchholz, 2000), aspect extraction (Hu and Liu, 2004), syntactic dependency parsing (Tesnire, 1959) and semantic dependency parsing (Oepen et al., 2014) over 21 datasets.", "Besides, we also analyze the advantage of ACE and reward function design over the baselines and show the advantage of ACE over ensemble models.", "Non-contextualized embeddings, such as word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and fastText (Bojanowski et al., 2017), help lots of NLP tasks.", "Character embeddings (San-tos and Zadrozny, 2014) are trained together with the task and applied in many structured prediction tasks (Ma and Hovy, 2016; Lample et al., 2016; Dozat and Manning, 2018).", "For pretrained contextualized embeddings, ELMo (Peters et al., 2018), a pretrained contextualized word embedding generated with multiple Bidirectional LSTM layers, significantly outperforms previous state-of-the-art approaches on several NLP tasks.", "Following this idea, Akbik et al. (2018) proposed Flair embeddings, which is a kind of contextualized character embeddings and achieved strong performance in sequence labeling tasks.", "Recently, Devlin et al. (2019) proposed BERT, which encodes contextualized sub-word information by Transformers (Vaswani et al., 2017) and significantly improves the performance on a lot of NLP tasks.", "Much research such as RoBERTa (Liu et al., 2019c) has focused on improving BERT model's performance through stronger masking strategies.", "Moreover, multilingual contextualized embeddings become popular.", "Pires et al. (2019) and Wu and Dredze (2019) showed that Multilingual BERT (M-BERT) could learn a good multilingual representation effectively with strong cross-lingual zero-shot transfer performance in various tasks.", "Conneau et al. (2020) proposed XLM-R, which is trained on a larger multilingual corpus and significantly outperforms M-BERT on various multilingual tasks.", "Recent progress on deep learning has shown that network architecture design is crucial to the model performance.", "However, designing a strong neural architecture for each task requires enormous efforts, high level of knowledge, and experiences over the task domain.", "Therefore, automatic design of neural architecture is desired.", "A crucial part of NAS is search space design, which defines the discoverable NAS space.", "Previous work (Baker et al., 2017; Zoph and Le, 2017; Xie and Yuille, 2017) designs a global search space (Elsken et al., 2019) which incorporates structures from hand-crafted architectures.", "For example, Zoph and Le (2017) designed a chained-structured search space with skip connections.", "The global search space usually has a considerable degree of freedom.", "For example, the approach of Zoph and Le (2017) takes 22,400 GPU-hours to search on CIFAR-10 dataset.", "Based on the observation that existing hand-crafted architectures contain repeated structures (Szegedy et al., 2016; He et al., 2016; Huang et al., 2017), Zoph et al. (2018) explored cell-based search space which can reduce the search time to 2,000 GPU-hours.", "In recent NAS research, reinforcement learning and evolutionary algorithms are the most usual approaches.", "In reinforcement learning, the agent's actions are the generation of neural architectures and the action space is identical to the search space.", "Previous work usually applies an RNN layer (Zoph and Le, 2017; Zhong et al., 2018; Zoph et al., 2018) or use Markov Decision Process (Baker et al., 2017) to decide the hyper-parameter of each structure and decide the input order of each structure.", "Evolutionary algorithms have been applied to architecture search for many decades (Miller et al., 1989; Angeline et al., 1994; Stanley and Miikkulainen, 2002; Floreano et al., 2008; Jozefowicz et al., 2015).", "The algorithm repeatedly generates new populations through recombination and mutation operations and selects survivors through competing among the population.", "Recent work with evolutionary algorithms differ in the method on parent/survivor selection and population generation.", "For example, Real et al. (2017), Liu et al. (2018a), Wistuba (2018) and Real et al. (2019) applied tournament selection (Goldberg and Deb, 1991) for the par-ent selection while Xie and Yuille (2017) keeps all parents.", "Suganuma et al. (2017) and Elsken et al. (2018) chose the best model while Real et al. (2019) chose several latest models as survivors.", "In ACE, a task model and a controller interact with each other repeatedly.", "The task model predicts the task output, while the controller searches for better embedding concatenation as the word representation for the task model to achieve higher accuracy.", "Given an embedding concatenation generated from the controller, the task model is trained over the task data and returns a reward to the controller.", "The controller receives the reward to update its parameter and samples a new embedding concatenation for the task model.", "Figure 1 shows the general architecture of our approach.", "For the task model, we emphasis on sequence-structured and graph-structured outputs.", "Given a structured prediction task with input sentence x and structured output y , we can calculate the probability distribution P ( y | x ) by: P ( y | x ) = exp ( Score ( x , y )) (cid:80) y (cid:48) Y ( x ) exp ( Score ( x , y (cid:48) )) where Y ( x ) represents all possible output structures given the input sentence x .", "Depending on different structured prediction tasks, the output structure y can be label sequences, trees, graphs or other structures.", "In this paper, we use sequence-structured and graph-structured outputs as two exemplar structured prediction tasks.", "We use BiLSTM-CRF model (Ma and Hovy, 2016; Lample et al., 2016) for sequence-structured outputs and use BiLSTM-Biaffine model (Dozat and Manning, 2017) for graph-structured outputs: P seq ( y | x ) = BiLSTM-CRF ( V , y ) P graph ( y | x ) = BiLSTM-Biaffine ( V , y ) where V = [ v 1 ; ; v n ] , V R d n is a matrix of the word representations for the input sentence x with n words, d is the hidden size of the concatenation of all embeddings.", "The word representation v i of i -th word is a concatenation of L types of word embeddings: v li = embed li ( x ); v i = [ v 1 i ; v 2 i ; . . . ; v Li ] where embed l is the model of l -th embeddings, v i R d , v li R d l .", "d l is the hidden size of embed l .", "The neural architecture search space can be represented as a set of neural networks (Elsken et al., 2019).", "A neural network can be represented as a directed acyclic graph with a set of nodes and directed edges.", "Each node represents an operation, while each edge represents the inputs and outputs between these nodes.", "In ACE, we represent each embedding candidate as a node.", "The input to the nodes is the input sentence x , and the outputs are the embeddings v l .", "Since we concatenate the embeddings as the word representation of the task model, there is no connection between nodes in our search space.", "Therefore, the search space can be significantly reduced.", "For each node, there are a lot of options to extract word features.", "Taking BERT embeddings as an example, Devlin et al. (2019) concatenated the last four layers as word features while Kondratyuk and Straka (2019) applied a weighted sum of all twelve layers.", "However, the empirical results (Devlin et al., 2019) do not show a significant difference in accuracy.", "We follow the typical usage for each embedding to further reduce the search space.", "As a result, each embedding only has a fixed operation and the resulting search space contains 2 L 1 possible combinations of nodes.", "In NAS, weight sharing (Pham et al., 2018a) shares the weight of structures in training different neural architectures to reduce the training cost.", "In comparison, we fixed the weight of pretrained embedding candidates in ACE except for the character embeddings.", "Instead of sharing the parameters of the embeddings, we share the parameters of the task models at each step of search.", "However, the hidden size of word representation varies over the concatenations, making the weight sharing of structured prediction models difficult.", "Instead of deciding whether each node exists in the graph, we keep all nodes in the search space and add an additional operation for each node to indicate whether the embedding is masked out.", "To represent the selected concatenation, we use a binary vector a = [ a 1 , , a l , , a L ] as an mask to mask out the embeddings which are not selected: v i = [ v 1 i a 1 ; . . . ; v li a l ; . . . ; v Li a L ] (1) where a l is a binary variable.", "Since the input V is applied to a linear layer in the BiLSTM layer, multiplying the mask with the embeddings is equivalent to directly concatenating the selected embeddings: W (cid:62) v i = L (cid:88) l =1 W (cid:62) l v li a l (2) where W =[ W 1 ; W 2 ; . . . ; WL ] and W R d h and W l R d l h .", "Therefore, the model weights can be shared after applying the embedding mask to all embedding candidates' concatenation.", "Another benefit of our search space design is that we can remove the unused embedding candidates and the corresponding weights in W for a lighter task model after the best concatenation is found by ACE.", "During search, the controller generates the embedding mask for the task model iteratively.", "We use parameters = [ 1 ; 2 ; . . . ; L ] for the controller instead of using the RNN structure applied in previous approaches (Zoph and Le, 2017; Zoph et al., 2018).", "The probability distribution of selecting an concatenation a is P ctrl ( a ; ) = (cid:81) Ll =1 P ctrl l ( a l ; l ) .", "Each element a l of a is sampled independently from a Bernoulli distribution, which is defined as: P ctrl l ( a l ; l )= (cid:40) ( l ) a l =1 1 P ctrl l ( a l =1; l ) a l =0 (3) where is the sigmoid function.", "Given the mask, the task model is trained until convergence and returns an accuracy R on the development set.", "As the accuracy cannot be back-propagated to the controller, we use the reinforcement algorithm for optimization.", "The accuracy R is used as the reward signal to train the controller.", "The controller's target is to maximize the expected reward J ( ) = EP ctrl ( a ; ) [ R ] through the policy gradient method (Williams, 1992).", "In our approach, since calculating the exact expectation is intractable, the gradient of J ( ) is approximated by sampling only one selection following the distribution P ctrl ( a ; ) at each step for training efficiency: J ( ) L (cid:88) l =1 log P ctrl l ( a l ; l )( R b ) (4) where b is the baseline function to reduce the high variance of the update function.", "The baseline usually can be the highest accuracy during the search process.", "Instead of merely using the highest accuracy of development set over the search process as the baseline, we design a reward function on how each embedding candidate contributes to accuracy change by utilizing all searched concatenations' development scores.", "We use a binary vector | a t a i | to represent the change between current embedding concatenation a t at current time step t and a i at previous time step i .", "We then define the reward function as: r t = t 1 (cid:88) i =1 ( R t R i ) | a t a i | (5) 0 0 0 1 1 1 R f Controller Task Model [ R e w a r d ] 0 1 1 Choice [ A c t i on ] Flair ELMo BERT PreviousChoice CurrentChoice Figure 1: The main paradigm of our approach is shown in the middle, where an example of reward function is represented in the left and an example of a concatenation action is shown in the right.", "where r t is a vector with length L representing the reward of each embedding candidate.", "R t and R i are the reward at time step t and i .", "When the Hamming distance of two concatenations Hamm ( a t , a i ) gets larger, the changed candidates' contribution to the accuracy becomes less noticeable.", "The controller may be misled to reward a candidate that is not actually helpful.", "We apply a discount factor to reduce the reward for two concatenations with a large Hamming distance to alleviate this issue.", "Our final reward function is: r t = t 1 (cid:88) i =1 ( R t R i ) Hamm ( a t , a i ) 1 | a t a i | (6) where (0 , 1) .", "Eq.", "4 is then reformulated as: J t ( ) L (cid:88) l =1 log P ctrl l ( a tl ; l ) r tl (7) 3.4 Training To train the controller, we use a dictionary D to store the concatenations and the corresponding validation scores.", "At t = 1 , we train the task model with all embedding candidates concatenated.", "From t = 2 , we repeat the following steps until a maximum iteration T : 1. Sample a concatenation a t based on the probability distribution in Eq.", "3.", "2. Train the task model with a t following Eq.", "1 and evaluate the model on the development set to get the accuracy R t .", "3. Given the concatenation a t , accuracy R t and D , compute the gradient of the controller following Eq.", "7 and update the parameters of controller.", "When sampling a t , we avoid selecting the previous concatenation a t 1 and the all-zero vector (i.e., selecting no embedding).", "If a t is in the dictionary D , we compare the R t with the value in the dictionary and keep the higher one.", "We use ISO 639-1 language codes to represent languages in the table 2 .", "To show ACE's effectiveness, we conduct extensive experiments on a variety of structured prediction tasks varying from syntactic tasks to semantic tasks.", "The tasks are named entity recognition (NER), Part-Of-Speech (POS) tagging, Chunking, Aspect Extraction (AE), Syntactic Dependency Parsing (DP) and Semantic Dependency Parsing (SDP).", "The details of the 6 structured prediction tasks in our experiments are shown in below: NER : We use the corpora of 4 languages from the CoNLL 2002 and 2003 shared task (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meul-der, 2003) with standard split.", "POS Tagging : We use three datasets, Ritter11-T-POS (Ritter et al., 2011), ARK-Twitter (Gimpel et al., 2011; Owoputi et al., 2013) and Tweebank-v2 (Liu et al., 2018b) datasets (Ritter, ARK and TB-v2 in simplification).", "We follow the dataset split of Nguyen et al. (2020).", "Chunking : We use CoNLL 2000 (Tjong Kim Sang and Buchholz, 2000) for chunking.", "Since there is no standard development set for CoNLL 2000 dataset, we split 10% of the training data as the development set.", "Aspect Extraction : Aspect extraction is a subtask of aspect-based sentiment analysis (Pontiki et al., 2014, 2015, 2016).", "The datasets are from the laptop and restaurant domain of SemEval 2 https://en.wikipedia.org/wiki/List_ of_ISO_639-1_codes NER POS AE de en es nl Ritter ARK TB-v2 14Lap 14Res 15Res 16Res es nl ru tr ALL 83.1 92.4 88.9 89.8 90.6 92.1 94.6 82.7 88.5 74.2 73.2 74.6 75.0 67.1 67.5 RANDOM 84.0 92.6 88.8 91.9 91.3 92.6 94.6 83.6 88.1 73.5 74.7 75.0 73.6 68.0 70.0 ACE 84.2 93.0 88.9 92.1 91.7 92.8 94.8 83.9 88.6 74.9 75.6 75.7 75.3 70.6 71.1 CHUNKDP SDP AVG CoNLL 2000 UAS LAS DM-ID DM-OOD PAS-ID PAS-OOD PSD-ID PSD-OODALL 96.7 96.7 95.1 94.3 90.8 94.6 92.9 82.4 81.7 85.3 RANDOM 96.7 96.8 95.2 94.4 90.8 94.6 93.0 82.3 81.8 85.7 ACE 96.8 96.9 95.3 94.5 90.9 94.5 93.1 82.5 82.1 86.2 Table 1: Comparison with concatenating all embeddings and random search baselines on 6 tasks.", "14, restaurant domain of SemEval 15 and restaurant domain of SemEval 16 shared task (14Lap, 14Res, 15Res and 16Res in short).", "Additionally, we use another 4 languages in the restaurant domain of SemEval 16 to test our approach in multiple languages.", "We randomly split 10% of the training data as the development set following Li et al. (2019).", "Syntactic Dependency Parsing : We use Penn Tree Bank (PTB) 3.0 with the same dataset preprocessing as (Ma et al., 2018).", "Semantic Dependency Parsing : We use DM, PAS and PSD datasets for semantic dependency parsing (Oepen et al., 2014) for the SemEval 2015 shared task (Oepen et al., 2015).", "The three datasets have the same sentences but with different formalisms.", "We use the standard split for SDP.", "In the split, there are in-domain test sets and out-of-domain test sets for each dataset.", "Among these tasks, NER, POS tagging, chunking and aspect extraction are sequence-structured outputs while dependency parsing and semantic dependency parsing are the graph-structured outputs.", "POS Tagging, chunking and DP are syntactic structured prediction tasks while NER, AE, SDP are semantic structured prediction tasks.", "We train the controller for 30 steps and save the task model with the highest accuracy on the development set as the final model for testing.", "Please refer to Appendix A for more details of other settings.", "Basic Settings: For the candidates of embeddings on English datasets, we use the language-specific model for ELMo, Flair, base BERT, GloVe word embeddings, fastText word embeddings, noncontextual character embeddings (Lample et al., 2016), multilingual Flair (M-Flair), M-BERT and", "XLM-R embeddings.", "The size of the search space in our experiments is 2 11 1=2047 3 .", "For language-specific models of other languages, please refer to Appendix A for more details.", "In AE, there is no available Russian-specific BERT, Flair and ELMo embeddings and there is no available Turkish-specific Flair and ELMo embeddings.", "We use the corresponding English embeddings instead so that the search spaces of these datasets are almost identical to those of the other datasets.", "All embeddings are fixed during training except that the character embeddings are trained over the task.", "The empirical results are reported in Section 4.3.1.", "Embedding Fine-tuning: A usual approach to get better accuracy is fine-tuning transformer-based embeddings.", "In sequence labeling, most of the work follows the fine-tuning pipeline of BERT that connects the BERT model with a linear layer for word-level classification.", "However, when multiple embeddings are concatenated, fine-tuning a specific group of embeddings becomes difficult because of complicated hyper-parameter settings and massive GPU memory consumption.", "To alleviate this problem, we first fine-tune the transformer-based embeddings over the task and then concatenate these embeddings together with other embeddings in the basic setting to apply ACE.", "The empirical results are reported in Section 4.3.2.", "We use the following abbreviations in our experiments: UAS : Unlabeled Attachment Score; LAS : Labeled Attachment Score; ID : In-domain test set; OOD : Out-of-domain test set.", "We use language codes for languages in NER and AE.", "To show the effectiveness of our approach, we compare our approach with two strong baselines.", "For the first one, we let the task model learn by itself the contribution of each embedding candidate that is helpful to the task.", "We set a to all-ones (i.e., the concatenation of all the embeddings) and train the task model ( All ).", "The linear layer weight W in Eq.", "2 reflects the contribution of each candidate.", "For the second one, we use the random search ( Random ), a strong baseline in NAS (Li and Talwalkar, 2020).", "For Random , we run the same maximum iteration as in ACE.", "For the experiments, we report the averaged accuracy of 3 runs.", "Table 1 shows that ACE outperforms both baselines in 6 tasks over 23 test sets with only two exceptions.", "Comparing Random with All , Random outperforms All by 0.4 on average and surpasses the accuracy of All on 14 out of 23 test sets, which shows that concatenating all embeddings may not be the best solution to most structured prediction tasks.", "In general, searching for the concatenation for the word representation is essential in most cases, and our search design can usually lead to better results compared to both of the baselines.", "approaches As we have shown, ACE has an advantage in searching for better embedding concatenations.", "We further show that ACE is competitive or even stronger than state-of-the-art approaches.", "We additionally use XLNet (Yang et al., 2019) and RoBERTa as the candidates of ACE.", "In some tasks, we have several additional settings to better compare with previous work.", "In NER, we also conduct a comparison on the revised version of German datasets in the CoNLL 2006 shared task (Buch-holz and Marsi, 2006).", "Recent work such as Yu et al. (2020) and Yamada et al. (2020) utilizes document contexts in the datasets.", "We follow their work and extract document embeddings for the transformer-based embeddings.", "Specifically, we follow the fine-tune process of Yamada et al. (2020) to fine-tune the transformer-based embeddings over the document except for BERT and M-BERT embeddings.", "For BERT and M-BERT, we follow the document extraction process of Yu et al. (2020) because we find that the model with such document embeddings is significantly stronger than the model trained with the fine-tuning process of Yamada et al. (2020).", "In SDP, the state-of-the-art approaches used POS tags and lemmas as additional word features to the network.", "We add these two features to the embedding candidates and train the embeddings together with the task.", "We use the fine-tuned transformer-based embeddings on each task instead of the pretrained version of these embeddings as the candidates.", "4 We additionally compare with fine-tuned XLM-R model for NER, POS tagging, chunking and AE, and compare with fine-tuned XLNet model for DP and SDP, which are strong fine-tuned models in most of the experiments.", "Results are shown in Table 2, 3,", "4. Results show that ACE with fine-tuned embeddings achieves state-of-the-art performance in all test sets, which shows that finding a good embedding concatenation helps structured prediction tasks.", "We also find that ACE is stronger than the fine-tuned models, which shows the effectiveness of concatenating the fine-tuned embeddings 5 .", "To show how efficient our approach is compared with the random search algorithm, we compare the algorithm in two aspects on CoNLL English NER dataset.", "The first aspect is the best development accuracy during training.", "The left part of Figure 2 shows that ACE is consistently stronger than the random search algorithm in this task.", "The second aspect is the searched concatenation at each time step.", "The right part of Figure 2 shows that the accuracy of ACE gradually increases and gets stable when more concatenations are sampled.", "To show the effectiveness of the designed reward function, we compare our reward function (Eq. 6) with the reward function without discount factor (Eq. 5) and the traditional reward function (reward term in Eq. 4).", "We sample 2000 training sentences on CoNLL English NER dataset for faster training and train the controller for 50 steps.", "Table 5 shows that both the discount factor and the binary vector | a t a i | for the task are helpful in both development and test datasets.", "Please refer to Appendix for more details about the embeddings.", "5 We compare ACE with other fine-tuned embeddings in Appendix.", "CHUNKAE CoNLL 2000 14Lap 14Res 15Res 16Res es nl ru tr Akbik et al. (2018) 96.7 Xu et al. (2018) 84.2 84.6 72.0 75.4 --Clark et al. (2018) 97.0 Xu et al. (2019) 84.3 -78.0 --Liu et al. (2019b) 97.3 Wang et al. (2020a) --72.8 74.3 72.9 71.8 59.3 Chen et al. (2020) 95.5 Wei et al. (2020) 82.7 87.1 72.7 77.7 --XLM-R+Fine-tune 97.0 XLM-R+Fine-tune 85.9 90.5 76.4 78.9 77.0 77.6 77.7 74.1 ACE+Fine-tune 97.3 ACE+Fine-tune 87.4 92.0 80.3 81.3 79.9 80.5 79.4 81.9 Table 3: Comparison with state-of-the-art approaches in chunking and aspect extraction.", "We compare ACE with two more approaches to further show the effectiveness of ACE.", "One is a variant of All , which uses a weighting parameter b = [ b 1 , , b l , , b L ] passing through a sigmoid function to weight each embedding candidate.", "Such an approach can explicitly learn the weight of each embedding in training instead of a binary mask.", "We call this approach All+Weight .", "Another one is model ensemble, which trains the task model with each embedding candidate individually and uses the trained models to make joint prediction on the test set.", "We use voting for ensemble as it is simple and fast.", "For sequence labeling tasks, the models vote for the predicted label at each position.", "For DP, the models vote for the tree of each sentence.", "For SDP, the models vote for each potential labeled arc.", "We use the confi-dence of model predictions to break ties if there are more than one agreement with the same counts.", "We call this approach Ensemble .", "One of the ben-efits of voting is that it combines the predictions of the task models efficiently without any training process.", "We can search all possible 2 L 1 model ensembles in a short period of time through caching the outputs of the models.", "Therefore, we search for the best ensemble of models on the development set and then evaluate the best ensemble on the test set ( Ensemble dev ).", "Moreover, we additionally search for the best ensemble on the test set for reference ( Ensemble test ), which is the upper bound of the approach.", "We use the same setting as in Section 4.3.1 and select one of the datasets from each task.", "For NER, POS tagging, AE, and SDP, we use CoNLL 2003 English, Ritter, 16Res, and DM datasets, respectively.", "The results are shown in Table", "6. Empirical results show that ACE out-DP SDP PTB DM PAS PSDUAS LAS ID OOD ID OOD ID OOD Zhou and Zhao (2019) 97.2 95.7 He and Choi (2020) 94.6 90.8 96.1 94.4 86.8 79.5 Mrini et al. (2020) 97.4 96.3 D & M (2018) 93.7 88.9 93.9 90.6 81.0 79.4 Li et al. (2020) 96.6 94.8 Wang et al. (2019) 94.0 89.7 94.1 91.3 81.4 79.6 Zhang et al. (2020) 96.1 94.5 Jia et al. (2020) 93.6 89.1 --Wang and Tu (2020) 96.9 95.3 F & G (2020) 94.4 91.0 95.1 93.4 82.6 82.0 XLNET+Fine-tune 97.0 95.6 XLNet+Fine-tune 94.2 90.6 94.8 93.4 82.7 81.8 ACE+Fine-tune 97.2 95.8 ACE+Fine-tune 95.6 92.6 95.8 94.6 83.8 83.4 Table 4: Comparison with state-of-the-art approaches in DP and SDP.", "performs all the settings of these approaches and even Ensemble test , which shows the effectiveness of ACE and the limitation of ensemble models.", "All , All+Weight and Ensemble dev are competitive in most of the cases and there is no clear winner of these approaches on all the datasets.", "These results show the strength of embedding concatenation.", "Concatenating the embeddings incorporates information from all the embeddings and forms stronger word representations for the task model, while in model ensemble, it is difficult for the individual task models to affect each other.", "Concatenating multiple embeddings is a commonly used approach to improve accuracy of structured prediction.", "However, such approaches can be computationally costly as multiple language models are used as input.", "ACE is more practical than concatenating all embeddings as it can remove those embeddings that are not very useful in the concatenation.", "Moreover, ACE models can be used to guide the training of weaker models through techniques such as knowledge distillation in structured prediction (Kim and Rush, 2016; Kuncoro et al., 2016; Wang et al., 2020a, 2021b), leading to models that are both stronger and faster.", "In this paper, we propose Automated Concatenation of Embeddings, which automatically searches for better embedding concatenation for structured prediction tasks.", "We design a simple search space and use the reinforcement learning with a novel reward function to efficiently guide the controller to search for better embedding concatenations.", "We take the change of embedding concatenations into the reward function design and show that our new reward function is stronger than the simpler ones.", "Results show that ACE outperforms strong baselines.", "Together with fine-tuned embeddings, ACE achieves state-of-the-art performance in 6 tasks over 21 datasets.", "This work was supported by the National Natural Science Foundation of China (61976139) and by Alibaba Group through Alibaba Innovative Research Program.", "We thank Chengyue Jiang for his comments and suggestions on writing." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "result", "method", "objective", "method", "objective", "objective", "abstain", "objective", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "other", "other" ]
[ "Neural networks equipped with self-attention have parallelizable computation, light-weight structure, and the ability to capture both long-range and local dependencies.", "Further, their expressive power and performance can be boosted by using a vector to measure pairwise dependency, but this requires to expand the alignment matrix to a tensor, which results in memory and computation bottlenecks.", "In this paper, we propose a novel attention mechanism called Multi-mask Tensorized Self-Attention (MTSA), which is as fast and as memory-efficient as a CNN, but significantly outperforms previous CNN-/RNN-/attention-based models.", "MTSA 1) captures both pairwise (token2token) and global (source2token) dependencies by a novel compatibility function composed of dot-product and additive attentions, 2) uses a tensor to represent the feature-wise alignment scores for better expressive power but only requires parallelizable matrix multiplications, and 3) combines multi-head with multi-dimensional attentions, and applies a distinct positional mask to each head (subspace), so the memory and computation can be distributed to multiple heads, each with sequential information encoded independently.", "The experiments show that a CNN/RNN-free model based on MTSA achieves state-of-the-art or competitive performance on nine NLP benchmarks with compelling memoryand time-efficiency.", "Recurrent neural network (RNN) and convolutional neural network (CNN) have been broadly used as context fusion modules for natural language processing (NLP) tasks.", "Recently, RNN/CNN in conjunction with an attention mechanism has been proven to be effective for contextual feature modeling in a wide range of NLP tasks, including sentiment classification (Li et al., 2018), machine translation (Bahdanau et al., 2015), reading comprehension (Seo et al., 2017; Yu et al., 2018), etc.", "More recently, self-attention mechanisms have been developed for context fusion and syntactic dependency modeling with the advantage of fewer parameters, more parallelizable computation, and better empirical performance (Hu et al., 2017; Vaswani et al., 2017; Shen et al., 2018a).", "In addition, neural networks based solely on self-attention mechanisms have achieved state-of-the-art quality on many NLP tasks, e.g., machine translation (Vaswani et al., 2017), sentence embedding (Shen et al., 2018a) and semantic role labeling (Tan et al., 2017).", "Self-attention mechanisms can be categorized into two classes according to the type of dependency each aims to model.", "The first category is token2token self-attention (Hu et al., 2017; Vaswani et al., 2017; Shen et al., 2018a) that captures syntactic dependency between every two tokens in a sequence.", "An efficient dot-product compatibility function is usually deployed to measure this pairwise dependency (Vaswani et al., 2017).", "In contrast, additive compatibility function captures the dependency by multi-layer perceptron (MLP), and can usually achieve better performance (Britz et al., 2017).", "Its expressive power can be further improved if expanded to multiple dimensions (Shen et al., 2018a).", "This multi-dim self-attention empirically surpasses dot-product one, but suffers from expensive computation and memory, which grow linearly with the number of features and quadratically with the sequence length.", "Hence, it is not scalable to long sequences in practice.", "The second category is source2token self-attention (Liu et al., 2016; Lin et al., 2017; Shen et al., 2018a) aiming to capture global dependency, i.e., the importance of each token to the entire sequence for a specific task.", "Its time and space complexities grow linearly, rather than quadratically,", "with the sequence length.", "Hence, it is empirically efficient in terms of memory and computation even if expanded to multiple dimensions, i.e., using a vector of feature-wise scores instead of a scalar for the global dependency.", "But, it is hard to reach state-of-the-art performance on NLP tasks due to the lack of pairwise and local dependencies.", "In this paper, we propose a novel attention mechanism called multi-mask tensorized self-attention (MTSA), for context fusion.", "In MTSA, 1) the pairwise dependency is captured by an efficient dot-product based token2token self-attention, while the global dependency is modeled by a feature-wise multi-dim source2token self-attention, so they can work jointly to encode rich contextual features; 2) self-attention alignment scores are tensorized for more expressive power in that each pair of tokens has one score for each feature, but no tensor computation is required other than simple and efficient matrix multiplications when implemented; 3) the tensors above are computed in multiple subspaces (i.e., in a multi-head fashion) rather than in the original input space, so the required memory and computation can be distributed to multiple subspaces; and 4) a distinct positional mask is applied to each head in order to encode rich structural information such as the sequential order and relative position of tokens.", "In the experiments, we build CNN/RNN-free neural networks based on MTSA for sentence embedding and sequence tagging tasks, including natural language inference, semantic role labeling, sentiment analysis, question-type classification, machine translation, etc.", "The results demonstrate that MTSA achieves state-of-the-art or competitive performance on nine benchmark datasets.", "To summarize the comparison of MTSA with recently popular models, we show the memory consumption and time cost vs. sequence length respectively in Figure", "1(a) and", "1(b) on synthetic data (batch size of 64 and feature channels of 300).", "On the SNLI (Bowman et al., 2015), a public dataset for language inference, as shown in Figure", "1(c), MTSA achieves the best result but is as fast and as memory-efficient as the CNNs (all baselines and the benchmark are detailed in Section 4).", "Notations: 1) lowercase denotes a vector; 2) bold lowercase denotes a sequence of vectors (stored as a matrix); and 3) uppercase denotes a matrix or tensor.", "Given an input sequence of token embeddings or memory slots x = [ x 1 , . . . , x n ] R d e n , and a vector representation of a query q R d q , attention mechanism (Bahdanau et al., 2015; Lu-ong et al., 2015) computes an alignment score between each token x i and q by a compatibility function f ( x i , q ) , which aims to measure the depen-dency/relevance between x i and q , or the attention of q to x i , w.r.t. a given task.", "The scores are transformed into probabilities through a softmax function.", "These probabilities are then used as weights to sum all the tokens and generate a contextual embedding for q , i.e., p ( z | x , q ) = softmax( a ) , a = [ f ( x i , q )] ni =1 , s = n (cid:88) i =1 p ( z = i | x , q ) x i = E i p ( z | x ,q ) [ x i ] , (1) where a R n denotes the vector of n alignment scores, p ( z | x , q ) is the categorical distribution for attention probabilities, which is derived from applying softmax function to a .", "And, s R d e is the output vector for the query q .", "There are two major types of compatibility functions, leading to the two most frequently used attention mechanisms.", "The first one is dot-product or multiplicative compatibility function", "(Eq.(2)), which composes dot-product attention mechanism (Luong et al., 2015) using cosine similarity to model the dependencies.", "The other one is additive or multi-layer perceptron (MLP) compatibility function", "(Eq.(3)) that results in additive attention mechanism (Bahdanau et al., 2015) using MLP to model the dependencies.", "where W ( d 1) R d i d e , W ( d 2) R d i d q , W ( a ) R d a ( d e + d q ) , w R d a are learnable parameters, (cid:104) , (cid:105) denotes inner-product.", "Empirically, networks with additive attention usually outperform those with dot-product attention, but require more computation time and memory (Britz et al., 2017).", "Multi-dim attention mechanism (Shen et al., 2018a) expands the alignment score in previous attention mechanisms to a vector for feature-wise scores, each computed on a feature dimension.", "It has greater capacity to model complex dependencies, and can handle context variation and polysemy problems harassing many NLP tasks.", "In particular, it replaces vector w T R 1 d a in additive compatibility function", "(Eq.(3)) with a matrix W R d e d a , and thus produces d e scores to describe the attention of q to x i .", "Self-attention mechanism is a special case of attention mechanisms, where the query q stems from the input sequence itself.", "Self-attention mechanisms can be classified into token2token or source2token self-attention mechanism according to the type of dependency each aims to model.", "A) Token2token self-attention mechanism (Vaswani et al., 2017; Shen et al., 2018a) aims at producing a context-aware representation for each token in light of its syntactic dependencies on other tokens from the same sequence.", "Two examples of token2token self-attention are 1) scaled dot-product self-attention which composes the multi-head self-attention (Vaswani et al., 2017), and 2) masked self-attention used in directional self-attention (Shen et al., 2018a).", "A.1 ) Scaled dot-product attention mechanism (Vaswani et al., 2017) in general form has three arguments: query tokens q R d i m , key tokens k R d i n and value tokens v R d h n associated with the key tokens.", "It uses a scaled dot-product function to model the relationship between each query and key, and finally outputs a sequence s = [ s 1 , . . . , s m ] R d h m such that s =sdpAttn( q , k , v ) (cid:44) v softmax( q T k (cid:112) d q ) T (4) A special case of this mechanism is that the three input arguments are derived from the same source, i.e., q / k / v = f q / k / v ( x ) , which can be referred to as a token2token self-attention, namely scaled dot-product self-attention.", "As for multi-head attention mechanism, the input is projected into multiple subspaces, then parameter-untied scaled dot-product attention is applied to the embeddings in each subspace.", "The results for multiple subspaces are concatenated to form the final output s , i.e., s = W ( o ) [ H 1 ; . . . ; H h ] , (5) where H c = sdpAttn( W qc q , W kc k , W vc v ) .", "A.2) Masked self-attention mechanism (Shen et al., 2018a) uses multi-dim compatibility function to model the dependency between every two tokens in a sequence, and uses positional mask to encode sequential information.", "It overcomes inherent problem appearing in self-attention compared to RNNs on the lack of sequential information.", "Its compatibility function is defined as f ( x i , x j )= c tanh { ( W ( m ) [ x i ; x j ]+ b ( m ) ) /c } + M i,j (6) where c is a constant scalar, W ( m ) R d e 2 d e is learnable weight matrix, and M is a positional mask with each entry M i,j { , 0 } .", "When M i,j = , applying softmax function to the alignment scores results in a zero attention probability, which cuts off the attention of x j to x i .", "Hence, masked self-attention with an asymmetric mask, where M ij (cid:54) = M ji , can encode sequential or other structural information (Shen et al., 2018a; Im and Cho, 2017).", "To this end, two positional masks have been proposed to encode the forward and backward order information respectively, i.e., M fwi,j = (cid:26) 0 , i < j , otherwise M bwi,j = (cid:26) 0 , i > j , otherwise Furthermore, directional self-attention (DiSA) (Shen et al., 2018a) concatenates the features produced by masked self-attention mechanisms with the forward and backward positional masks (i.e., M fw , M bw ), leading to context-ware representations with bi-directional information encoded.", "B) Source2token self-attention mechanism (Liu et al., 2016; Lin et al., 2017; Shen et al., 2018a) is designed for sentence embedding or sequence compression, which is based on the importance of each token x i to the entire source sequence x for a specific task.", "Specifically, it removes the query q from the compatibility function f ( x i , q ) when computing the alignment score.", "For example, the compatibility function of additive source2token self-attention mechanism is to simply remove q from", "Eq.(3).", "In this section, we firstly elaborate on tensorized self-attention (TSA) in Section 3.1, which captures both pairwise and global dependencies by combining the two types of self-attention mechanisms introduced in Section 2.2.", "Then, we extend TSA to multi-mask tensorized self-attention (MTSA) in Section 3.2 by applying different positional masks to TSA in multiple subspaces (multi-head fashion).", "Lastly, in Section 3.3, we present an efficient computation scheme for MTSA without any high-rank tensor computation involved even if tensorized alignment scores are used.", "Tensorized self-attention (TSA), whose structure is illustrated in Figure 2, is a neural mechanism that can be trained to model both pairwise and global dependencies, while any previous self-attention mechanism only focuses on one type of dependencies.", "TSA models both types by combining the aforementioned token2token and source2token self-attention mechanisms.", "This generates an n n d h tensor containing the alignment scores between every two tokens on each feature dimension.", "These scores are then normalized and transformed into probability weights, which are used to sum all dependent tokens and then generate the contextual embedding for each input token.", "We will demonstrate later in Section 3.3 that only matrix rather than tensor operation is required when executing the procedures above.", "To facilitate the elaboration of proposed models and keep the consistent notation with prior attention mechanisms, TSA first projects the input embeddings x into three spaces to represent the query, key and value tokens, respectively.", "q = W ( t 1) x , k = W ( t 2) x , and v = W ( t 3) x , (7) where W ( t 1) , W ( t 2) R d i d e and W ( t 3) R d h d e are learnable weights for projections.", "TSA then integrates two kinds of compatibility functions from two self-attention mechanisms respectively.", "Firstly, the scaled dot-product self-attention is used to capture dependency between every two tokens.", "Dot-product operations are fast, and sufficient to model the pairwise dependency in most tasks.", "Its compatibility function is f t ( k i , q j ) = (cid:104) k i , q j (cid:105) / (cid:112) d i , i, j [ n ] , (8) where (cid:104) , (cid:105) is inner-product operation.", "Then, a multi-dim source2token self-attention mechanism is used to estimate the contribution of each token to the given task on each feature dimension.", "It aims at capturing the importance of each token to the entire input sequence w.r.t. the task, i.e., the global dependency .", "The multi-dim extension only linearly increases the memory and computation of source2token self-attention by a multiplicative factor d h , but is essentially helpful to improve expressive capability in line with prior works (Shen et al., 2018a).", "Its compatibility function is f s ( k i ) = W ( s 2) m ( W ( s 1) k i + b ( s 1) )+ b ( s 2) , (9) where i [ n ] , W ( s 1) R d a d i , W ( s 2) R d h d a are the learnable weights, and m ( ) is an activation function.", "The compatibility function used in TSA broadcasts the scalar alignment score f t ( k i , q j ) R computed by the token2token self-attention to all d h feature dimensions, and then adds them to the feature-wise score vector f s ( k i ) R d h computed by the source2token self-attention.", "In addition, the positional masks from masked self-attention (in Section 2.2) are also integrated to encode sequential and structural information.", "These yield following compatibility function of TSA.", "For each query token q j , a softmax function is applied to the alignment scores [ f tsa ( k i , q j )] ni =1 on each feature dimension, resulting in a categorical distribution over all value tokens [ v i ] ni =1 based on corresponding key tokens [ k i ] ni =1 .", "The probability of token q j attending to v i on the l th feature dimension (i.e., z l = i ) is p ( z l = i | k ,q j ) (cid:44) [ p ji ] l (cid:44) e [ f tsa ( k i ,q j ) ] l (cid:80) ng =1 e [ f tsa ( k g ,q j )] l , (11) where, i, j [ n ] , l [ d h ] .", "TSA outputs a contextual embedding for each input token on every feature dimension as the weighted sum of all the value token embeddings on that dimension, where the weights are provided by the probabilities in", "Eq.(11).", "It is the expectation of sampling a value token embeddings on each feature dimension according to the feature-wise probability distribution, i.e., s (cid:44) [ s j ] nj =1 , where (12) s j (cid:44) (cid:104) E i p ( z l | k ,q j ) ([ v i ] l ) (cid:105) d h l =1 = (cid:88) n i =1 p ji v i 3.2 Multi-Mask Tensorized Self-Attention (MTSA) Mechanism Rather than computing attention in the original input space, multi-head attention (Vaswani et al., 2017) projects the input sequence to multiple subspaces, applies attention to the projected embedding in each subspace, and concatenates their outputs at last.", "The computations associated with multiple heads can be completed in parallel.", "By using adequate heads, each with a low-dimensional subspace (i.e., the representation dimension for each head is updated by d h d h /h where h is the number of head), it reduces parameters and memory/computation cost and increases diversity of the attention.", "In addition, to encode different kinds of sequential or structural information, multiple different positional masks (e.g., forward, backward and multi-length window) can be further applied to the multiple heads.", "The memory-/time-efficiency and expressive power of TSA can be improved by using the combination of the multi-head and multi-mask techniques introduced above.", "By writing TSA mechanism as a function TSA( x , M ) with input sequence x R d e n and a positional mask M R n n , and the output given by", "Eq.(12), multi-mask tensorized self-attention (MTSA) produces s = W ( o ) [ H 1 ; . . . ; H h ] , (13) where H c = TSA c ( x , M c ) , where W ( o ) R h d h h d h , h is the number of heads, TSA c denotes the c th parameter-independent TSA block that produces a d h -dim representation in the c th subspace, M c represents the positional mask applied to attention in the c th subspace, [ ; . . . ; ] denotes a vertical concatenation operation, and s R h d h n is the output of MTSA.", "In our experiments, we apply forward mask to half of the heads and apply backward mask to the other half to encode bi-directional order information of the input sequence.", "As shown in", "Eq.(10) and", "Eq.(11), TSA or each head of MTSA needs to compute the attention scores and probabilities as n n d h tensors.", "In accordance with multi-dim self-attention (Shen et al., 2018a), this makes TSA more expressively powerful and improves the final performance for sequence modeling, but terribly leads to memory explosion and computational bottleneck on long sequences with large n and d h .", "Fortunately, in MTSA, it is possible to significantly reduce the demand on computations to matrix-only operations by exploring the computational structure.", "A memory-optimized and highly-parallelizable computation scheme for MTSA is given in Algorithm 1.", "For each head, the score matrices of token2token and source2token are computed in steps 3 and 4 respectively.", "Then, we combine token2token scores with the positional mask to form a new mask in step 5, and compute the d h n output embedding with the weighs from the multi-dim source2token self-attention in step 6.", "Finally, in step 7, we apply the new mask from step 5 to the weighted embedding from step 6 and complete the normalization.", "This procedure generates the exactly same output as", "Eq.(13)but no any tensor operation is incurred.", "We compare MTSA with commonly-used context fusion baselines on several NLP tasks 1 .", "When addressing a sentence embedding problem, a multi-dim source2token self-attention is applied on the top of context fusion module to produce the sequence embedding.", "Codes are implemented in Python with Tensorflow and executed on a single NVIDIA GTX 1080Ti graphics card.", "In addition, data for both time cost and memory consumption are collected under Tensorflow-1.7 with CUDA9 and cuDNN7.", "The context fusion baselines include 1) BiLSTM (Graves et al., 2013): 600D bi-directional LSTM consisting of 300D forward plus 300D backward LSTMs, 2) Bi-GRU (Chung et al., 2014): 600D bi-directional GRU, 3) Multi-CNN (Kim, 2014): three CNNs with 200D kernels to model 3/4/5-grams respectively, 4) Hrchy-CNN (Gehring et al., 2017): 3-layer 300D stacked CNN with kernel size 5, gated linear units (Dauphin et al., 2016) and residual connections (He et al., 2016), 5) Multi-head (Vaswani et al., 2017): 600D multi-head self-attention with 8 heads (75-1 Codes for Experiments are released at https:// github.com/taoshen58/mtsa . dim subspace per head) and positional embedding used by Vaswani et al. (2017), 6) DiSA (Shen et al., 2018a): 600D directional self-attention mechanism consisting of 300D forward and 300D backward masked self-attentions, and 7) Bi-BloSA (Shen et al., 2018c): 600D bidirectional block self-attention with intra-/inter-block self-attention, aiming to reduce the time and space complexities of multi-dim self-attention by using hierarchical structure.", "Natural language inference (NLI) aims at speculating on the relationship between a premise and a corresponding hypothesis, where the relationship could be entailment , neutral or contradiction .", "In experiments, we first compare MTSA with other baselines on the Stanford Natural Language Inference (Bowman et al., 2015) (SNLI) dataset.", "Following the method of applying sentence-encoding model to NLI given by Bowman et al. (2016), two parameter-tied sentence-encoding models are used to generate embeddings for premise and hypothesis, resulting in s p and s h respectively.", "The concatenation of s p , s h , s p s h and s p (cid:12) s h representing the relationship is passed into a 3-way neural classifier for final prediction.", "The experimental results of the models from the official leaderboard, baselines, and MTSA are shown in Table 1.", "MTSA achieves state-of-the-art performance with less time and memory cost.", "Compared to the methods from the leaderboard, MTSA outperforms RNN-based encoders (e.g., Residual stacked enc.), RNN+attention encoders (e.g., Deep Gated Attn.) and even parsing trees based encoders (e.g., Gumbel TreeLSTM enc.) by a large margin.", "Compared to the two competitive self-attention networks with complicated and expensive training computations, MTSA trained in end-to-end manner achieves the same state-of-the-art performance by using much fewer parameters and less computational time.", "Compared to baselines, MTSA is 4 5 faster than RNN-based models and outperforms CNN-based models given a similar number of parameters and computation time.", "Moreover, compared to the dot-product self-attention (Multi-head), MTSA costs similar time and memory but performs more expressively powerful self-attention, and thus achieves better performance.", "Furthermore, compared to the multi-dim self-Model | | Time/Epoch Inf.", "attention (DiSA and Bi-BloSA), MTSA uses much less memory and time but even produces much better prediction quality.", "In addition, to further improve the state-of-the-art performance, in contrast to training from scratch, a language model built on the Transformer (Vaswani et al., 2017) unsupervisedly pretrained on large English corpus (detailed by Radford et al. (2018)) is transfered for the baseline and proposed models for sentence-encoding based NLI tasks.", "As shown in Table 2, MTSA integrated with pretrained language model can achieve new state-of-the-art accuracy on both SNLI and Multi-Genre Natural Language Inference (MultiNLI) (Williams et al., 2017) 2 among all sentence-encoding mod-2 All test results are Evaluated on Kaggle official Model | | Inf.", "An ablation study of MTSA is shown in Table 3 to verify the capability of its each part in context fusion.", "The results show that token2token (model-ing pairwise dependency), source2token (model-ing global dependency), and positional masks (en-coding sequential information) all contribute important information to sequence modeling, and the contributions are complementary.", "To verify the capability of MTSA in generating context-aware representation of each token, we compare it with baselines on semantic role labeling (SRL) task, which aims to tag each token from an input sequence with a label for its semantic role.", "Particularly, given a sentence, the goal of SRL is to identify the arguments of each target verb into semantic roles, which can benefit many downstream NLP tasks.", "SRL has two steps: websites: https://www.kaggle.com/c/multinli-matchedopen-evaluation and https://www.kaggle.com/c/multinlimismatched-open-evaluation Models Training Development WSJ Test Brown Test Time P R F1 Comp.", "We follow the experimental setup in Tan et al. (2017), where the SRL task is treated as a BIO tagging problem.", "Tan et al. (2017) designed a deep attentive neural net by stacking multi-head self-attention, named as deepatt, to perform context fusion, whose output is then passed to a neural classifier to make the final decision.", "The results achieved by previous methods, baselines, and MTSA are shown in Table 4, which demonstrates that MTSA achieves new state-of-the-art performance on the CoNLL-05 dataset by costing similar training time as CNN and multi-head self-attention baselines.", "We evaluate the models on five sentence classification benchmarks for different NLP tasks, which include 1) CR (Hu and Liu, 2004): customer reviews of various products to predict whether the review is positive or negative, 2) MPQA (Wiebe et al., 2005): an opinion polarity detection subtask of the MPQA dataset, 3) SUBJ (Pang and Lee, 2004): subjectivity dataset where a label indicates whether a sentence is subjective or objective, 4) TREC (Li and Roth, 2002): question-type classification dataset which classifies the question sentences into six classes, 5) SST-5 (Socher et al., 2013): the Stanford Sentiment Treebank dataset with five sentiment labels.", "The reported accuracies for CR, MPQA, and SUBJ are the mean of 10-fold cross validation.", "The accuracies for TREC are the mean of five runs on the dev set, and the accuracies for SST-5 are the mean of five runs on the test set.", "All standard deviations are shown in parentheses.", "The prediction accuracies achieved on these five benchmarks are shown in Table 5.", "MTSA achieves the best prediction accuracy on CR, MPQA, TREC and SST-5 benchmarks with better time efficiency and a lower memory load.", "We also evaluate proposed model on WMT 2014 English-German translation task for exhaustive comparisons with multi-head attention.", "We replace multi-head self-attention modules in the encoder of official Transformer implementation with MTSA module and do not tune the hyperparame-ters.", "Although our computation resources is limited, we use two training setups and also introduce t -test to ensure that MTSA consistently outperforms multi-head self-attention in Transformer.", "For Setup1 , we use default hyperparameter set of transformer base single gpu provided by official implementation with 1 P100 , batch size of 2048 and training step of 250K, and report BLEU value for the last checkpoint.", "For Setup2 , we use the hyperparameter set of transformer base with the modification of 1) using 4 instead of 8 P100, 2) increasing batch size from 4096 to 6144 per GPU, and 3) using training step of 133K.", "As shown in Table 6, with small p-value for both training setup 1 and 2, the encoder with MTSA significantly outperforms that with multi-head self-attention, which demonstrates that multi-dim based MTSA modeling both pairwise and global dependencies is more expressive than dot-product based multi-head self-attention.", "Although the results do not improve state-of-the-art BLEU value of machine translation task, the purpose of this experiment to verify the effectiveness of MTSA in contrast to dot-product based multi-head self-attention is accomplished.", "In conclusion, MTSA is highly parallelizable with more expressive power since it efficiently captures the pairwise dependency at token level, but delicately models the global dependency at feature level, and distributes computations to multiple heads, each equipped with a distinct positional mask.", "These lead to a sweet spot of the trade-off between performance and efficiency, and make MTSA as memory-efficient as CNN and scalable to long sequences but outperform previous (and even multi-dim) self-attention mechanisms in terms of prediction quality.", "The experiments conducted on nine NLP tasks verify that the MTSA can reach state-of-the-art performance with appealing efficiency.", "This research was funded by the Australian Government through the Australian Research Council (ARC) under grants 1) LP160100630 partnership with Australia Government Department of Health and 2) LP150100671 partnership with Australia Research Alliance for Children and Youth (ARACY) and Global Business College Australia (GBCA).", "We also acknowledge the support of NVIDIA Corporation and MakeMagic Australia with the donation of GPUs." ]
[ "abstain", "abstain", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "In this paper, we address a novel task, Multiple TimeLine Summarization (MTLS), which extends the flexibility and versatility of TimeLine Summarization (TLS).", "Given any collection of time-stamped news articles, MTLS automatically discovers important yet different stories and generates a corresponding timeline for each story.", "To achieve this, we propose a novel unsupervised summarization framework based on the two-stage affinity propagation process.", "We also introduce a quantitative evaluation measure for MTLS based on the previous TLS evaluation methods.", "Experimental results show that our MTLS framework demonstrates high effectiveness and MTLS task can provide better results than TLS.", "Nowadays, online news articles are one of the most popular Web documents.", "However, due to a huge amount of news articles available online, it is getting difficult for users to effectively search, understand, and track the entire news stories.", "To solve this problem, a research area of TimeLine Summarization (TLS) has been established, which can alleviate the redundancy and complexity inherent in news article collections thereby helping users better understand the news landscape.", "After the influential work on temporal summaries by Swan and Allan (2000), TLS has attracted researchers' attention.", "Most of works on TLS (Martschat and Markert, 2018; Steen and Markert, 2019; Gholipour Ghalandari and Ifrim, 2020) have focused on improving the performance of summarization.", "However, their drawbacks are as follows:", "(a) the methods work essentially on a homogeneous type of datasets such as ones compiled from the search results of an unambiguous query (e.g., BP Oil Spill).", "The requirements imposed on the input dataset make it hard for TLS systems to generalize;", "(b) the output is usually a single timeline regardless of the size and the complexity of the input dataset.", "We propose here the Multiple TimeLine Summarization (MTLS) task that enhances and further generalizes TLS.", "MTLS automatically generates a set of timelines that summarize disparate yet important stories, rather than always generating a single timeline as is in the case of TLS.", "An effective MTLS framework should:", "(a) detect key events including both shortand long-term events,", "(b) link events related to the same story and separate events belonging to other stories, and", "(c) provide informative summaries of constituent events to be incorporated into the generated timelines.", "MTLS can also help to deal with the ambiguity, which is common in information retrieval.", "For example, suppose that a user wants to get an overview of news about a basketball player, Michael Jordan , from a large collection of news articles.", "However, when a search engine over such a collection takes Michael Jordan as a query, it would likely return documents constituting a mixture of news about different persons having the same name.", "Then, how can a typical TLS system return meaningful results if only a single timeline can be generated?", "Similarly, ambiguous queries such as Apple, Ama-zon, Java require MTLS solutions to produce high quality results.", "To address this task, we further propose a Two-Stage Affinity Propagation Summarization framework (2SAPS).", "It uses temporal information embedded in sentences to discover important events, and their linking information latent in news articles to construct timelines.", "2SAPS has several advantages: firstly, it is entirely unsupervised which is especially suited to TLS-related tasks as there are very few gold summaries available for training supervised systems; secondly, both the number of events and the number of generated timelines are self-determined.", "This allows our framework to be dependent only on the input document collection, instead of on human efforts.", "Furthermore, the current TLS evaluation measures allow only 1-to-1 comparison (systemto human-generated timeline), which is not suitable for MTLS task where multiple timelines must be compared to (typically) multiple ground-truth timelines.", "Therefore, we also propose a quantitative evaluation measure for MTLS based on the adaptation of the previous TLS evaluation framework.", "1. We propose a novel task (MTLS), which automatically generates multiple, informative, and diverse timelines from an input time-stamped document collection.", "2. We introduce a superior MTLS model that outperforms all TLS-adapted MTLS baselines.", "3. We design an evaluation measure for MTLS systems by extending the original TLS evaluation framework.", "Since the first work on timeline summarization (Swan and Allan, 2000; Allan et al., 2001), this topic has received much attention over the years (Alonso et al., 2009; Yan et al., 2011a; Zhao et al., 2013; Tran et al., 2013; Li and Li, 2013; Suzuki and Kobayashi, 2014; Wang et al., 2016; Takamura et al., 2011; Pasquali et al., 2019, 2021).", "In the following, we review the major approaches.", "Chieu and Lee (2004) constructed timeline by directly selecting the top ranked sentences based on the summed similarities within n -day long window.", "Yan et al. (2011b) proposed evolutionary timeline summarization (ETS) to return the evolution trajectory along the timeline, consisting of individual but correlated summaries of each date.", "Shahaf et al. (2012) created information maps (Maps) to help users understand domain-specific knowledge.", "However, the output consists of a set of storylines that have intersections or overlaps, which is not appropriate for a dataset that may contain quite different topics.", "Nguyen et al. (2014) proposed a pipeline to generate timelines consisting of date selection, sentence clustering and sentence ranking.", "is originally used for multi-document summarization (MDS).", "Duan et al. (2020) introduced the task of Comparative Timeline Summarization (CTS), which captures important comparative aspects of evolutionary trajectories in two input sets of documents.", "The output of the CTS system is, however, always two timelines generated in a contrastive way.", "Then, Gholipour Ghalandari and Ifrim (2020) examined different TLS strategies and categorized TLS frameworks into the following three types: direct summarization approaches , date-wise approaches , and event detection approaches .", "To the best of our knowledge, the idea of multiple timeline summarization has not been formally proposed yet.", "Table 1 compares the related tasks.", "Some works (Yan et al., 2011b; Chen et al., 2019; Duan et al., 2020) evaluate timeline by only computing ROUGE scores (Lin, 2004).", "This way ignores the temporal aspect of a timeline, which is important in timeline summarization.", "Martschat and Markert (2017) then proposed a framework, called tilse , to assess timelines from both textual and temporal aspects.", "Subsequently, TLS works (Steen and Markert, 2019; Gholipour Ghalandari and Ifrim, 2020; Born et al., 2020) have followed this framework to evaluate their models.", "Some researches (Tran et al., 2015; Shahaf et al., 2012; Alonso and Shiells, 2013) also involved user studies, in which users are required to score system-generated timelines based on varying criteria such as relevance and understandability.", "In Section 5, we will adapt the tilse framework to MTLS task.", "Input : A time-stamped news article collection D = { d 1 , d 2 , ..., d |D| } .", "The collection can be stan-dalone or compiled from search results returned by a news search engine.", "Output : A set of timelines, T = { T 1 , T 2 , . . . , T k } is generated based on D , so that each timeline T i includes a sequence of time/date 1 and summary pairs ( t T i 1 , s T i 1 ) , . . . , ( t T i l , s T i l ) where s T i j ( i = 1 , . . . , k ) are the summary sentences for the time t T i j ( j = 1 , . . . , l ) and l is the length of T i .", "Each timeline in T should be consistent and coherent, yet different from other timelines.", "1 In this paper, time and date are used as synonyms.", "We note that while the traditional TLS task is limited as a document collection for it is typically coherent and homogeneous, MTLS is more flexible as the input news collection can be diverse.", "For example, the input collection can be generated using a search query q composed of multiple entities or concepts like q = { egypt , h 1 n 1 , iraq } or by using an ambiguous query like q = { michael , jordan } , or it can also consist of news articles crawled over a certain time span from multiple news sources.", "Generally, the more heterogeneous D is, the more timelines could be produced.", "The intuition behind this idea is that users will need more structured information to help them understand a relatively complex document collection.", "Next, we present two key components of our framework: event generation module (Sec. 4.1) and timeline generation module (Sec. 4.2).", "We first make the following two assumptions: Assumption 1 : News articles sometimes retrospectively mention past events for providing necessary context to the target event, for underlying continuation, causality, etc.", "Assumption 2 : Sentences mentioning similar dates have higher probability to refer to the same event than sentences with different dates.", "In this module, we extract important historical events from a document collection.", "Gholipour Ghalandari and Ifrim (2020) constructed events by simply grouping articles with close publication dates into clusters, resulting in lower accuracy.", "Note that Assumption 1 implies that a single news article may contain multiple events.", "Accordingly, in our work, the concept of event is more fine-grained.", "We define event as a set of sentences that describe the same real-world occurrence, typically using the same identifying information (e.g., actions, entities, locations).", "This information is captured by sentence-BERT (Reimers and Gurevych, 2019): a pre-trained model on a transformer network where similar meanings are positioned nearby in semantic vector space.", "We then employ Affinity Propagation (AP) (Frey and Dueck, 2007) following Steen and Markert (2019) for clustering similar sentences.", "AP algorithm groups data points by selecting a set of exemplars along with their followers due to message passing.", "It operates over an affinity matrix S , where S ( i, j ) denotes similarity between data points x i and x j .", "We observe that high semantic similarity does not always guarantee that sentences refer to the same event.", "Especially, for some periodic events, similar happenings might have occurred several times.", "For example, a news article could include sentences reporting that Brazil won the gold medal in the World Cup (in 2002) while some other sentences in this document could recall that Brazil has won the first place in the World Cup in 1994.", "It is clear that those sentences describe two distinct events, which would be grouped into one event if only semantic similarity is considered.", "Therefore, based on Assumption 2, we introduce another key factor, temporal similarity, which enhances the confidence of how likely two sentences will refer to the same event.", "We define each element S 1 ( v i , v j ) of affinity matrix S 1 as follows: S 1 ( v i , v j ) = 1 S date ( t i , t j )+(1 1 ) S cos ( v i , v j ) , (1) where v i and v j denote different sentences, and t i and t j denote dates mentioned by v i and v j , respectively.", "2 In addition, S date and S cos denote the temporal and semantic similarities, respectively.", "While we employ cosine similarity for the semantic similarity, we define temporal similarity S date ( i, j ) to quantify how similar two dates are using Equation (2): S date ( t i , t j ) = 1 exp | t i t j | , (2) where 3 is the decay rate of the exponential func-2 We use Heideltime (Strtgen and Gertz, 2013) for resolving temporal expressions.", "If a sentence does not explicitly mention any date, we assume it refers to the publication date of the article.", "3 We set = 0 .", "05 in the experiments.", "tion.", "The larger the time gap between two dates, the smaller the value of S date .", "By passing messages of both semantic and temporal information between sentences, clusters consisting of exemplar and non-exemplar sentences are constructed to form the candidate event set E .", "Each cluster represents an event .", "Event Selection.", "In a timeline, it is not necessary to show all events of a story as users usually care about the most important events only.", "We design an event selection step that is helpful for handling excessive number of events .", "The selection relies on two measures: Salience and Consistency defined by Equations (3) and (4), respectively: Salience ( e ) = log ( | e | ) log ( | D | ) , (3) Consistency ( e ) = (cid:80) v i e,v i (cid:54) = v e S cos ( v i ,v e ) | e | 1 , (4) where v e is the exemplar sentence in event e ; | e | and | D | denote the number of sentences in e and document collection D , respectively.", "Intuitively, important historical events would often be mentioned by future news reports.", "Salience of event is used to evaluate such importance and is computed as the relative frequency of sentences about that event compared with all sentences in the collection.", "On the other hand, Consistency ensures high quality of events .", "We then rank all candidate events based on the weighted summed score of these two measures.", "Hereafter, we denote the weight of Event Salience as 1 and that of Event Consistency as 1 1 .", "We select the top-scored events obtaining a new event set E by setting a threshold.", "To avoid tuning its value, we set the value to one standard deviation from the mean (lower end).", "While TLS systems directly link all the identified events, MTLS requires their deeper understanding.", "As described in Section 1, an effective MTLS framework should link events related to the same story and separate other unrelated events to different timelines.", "To achieve this, we explain the following steps in this module: Event Linking , Timeline Selection , and Timeline Summarizing .", "Event Linking.", "According to Assumption 1, current events can refer to related past events.", "We thus define a reference matrix R , in which each element R ( e i , e j ) denotes the degree of reference between two events e i and e j .", "As events in our work are represented by sentences and a sentence belongs to a single event , R ( e i , e j ) can be reflected by counting patterns of sentence co-occurrences in documents.", "Formally, R ( v i , v j ) represents the case where two sentences v j and v i refer to each other as defined by Equation (5): R ( v i , v j )= (cid:26) 1 v i ,v j d v i e k ,v j e l , e k (cid:54) = e l 0 otherwise, (5) where d is an article, e k and e l are elements in E .", "where | e i | and | e j | are sizes of e i , e j , respectively.", "We then construct a graph of events where each node is an e E , and the value of an edge reflects the connection degree between a pair of two events .", "We reuse AP algorithm to detect the community of events over the affinity matrix S 2 defined by Equation (7): S 2 ( e i , e j ) = 2 R ( e i , e j ) + (1 2 ) S cos ( e i , e j ) , (7) where S cos ( e i , e j ) denotes cosine similarity between e i and e j to capture semantic similarity.", "Based on the affinity matrix S 2 , AP finally generates clusters, i.e., the initial timeline set, T .", "Timeline Selection.", "In order to ensure the quality of constructed timelines, we define criteria to select high-quality timelines from T .", "Similar to event selection described in Section 4.1, we also use two indicators to evaluate the quality of a timeline.", "We define Timeline Salience as the average score of Event Salience of all events within the timeline, and Timeline Coherence as the average of semantic similarity scores between any chronologically 4 adjacent events defined by Equation (8): Coherence ( T ) = (cid:80) e i ,e i +1 TS cos ( e i , e i +1 ) | T | 1 , (8) where | T | is the size of a timeline, i.e., the number of events in this timeline.", "Intuitively, important timelines, which reflect important stories in the document collection, are more likely to be preferred by users.", "Timeline Salience captures this importance by passing the importance of its components (i.e., events ), while Timeline Coherence ensures that the story expressed by the timeline is consistent.", "4 The time of an event e is given by its exemplar sentence.", "We rank timelines based on a weighted sum of Timeline Salience and Timeline Coherence .", "The weight of Timeline Salience is denoted as 2 ; thus the weight of Timeline Coherence is 1 2 .", "We then select the top-scored elements from the timeline set T based on a threshold.", "Same as before, we set the value to one standard deviation from the mean.", "Timeline Summarizing.", "By previous steps, we have now obtained multiple timelines { T 1 , T 2 , ... } , where T is a list of events { e 1 , e 2 , ... } .", "However, it is not feasible to show all contents of each e as it usually contains many sentences.", "We use only the exemplar sentence in event since exemplar is the most typical and representative member in the group.", "In addition, it is possible that two events e i and e j occur on the same day.", "In this case, we concatenate their exemplar sentences.", "Timeline Tagging.", "This step is an add-on to MTLS systems.", "To better understand the stories of constructed timelines, we believe that it should be helpful for users to also obtain a label for each timeline.", "As described in Section 1, the input document collection may be composed of different topics or of one topic discussed through different aspects.", "For example, among the timelines generated based on the topic syria , one timeline might summarize the story about Syrian civil war while another might be about Syrian political elections .", "A label should then help people understand the story of the timeline.", "We simply select the 3 most frequent words among events (excluding stopwords) for each timeline as its label.", "TLS evaluation relies on ROUGE score and its variants as follows:", "Concatenation-based ROUGE ( concat ).", "It considers only the textual overlap between concatenated system summaries and ground-truth, while ignoring all date information of timeline (Yan et al., 2011b; Nguyen et al., 2014; Wang et al., 2016).", "Date-agreement ROUGE ( agreement ).", "It measures both textual and temporal information overlap by computing ROUGE score only when the date in the system-generated timelines matches the one of the ground-truth timeline (Tran et al., 2013).", "Otherwise, its value is 0.", "Alignment-based ROUGE.", "It linearly penalizes the ROUGE score by the distances of dates or/and summary contents.", "Martschat and Markert (2017) proposed three types of this metric: align , align+ , align+m:1 (align by date, align by date and contents, align by date and contents where the map function is non-injective, respectively).", "Date selection ( d-select ).", "It evaluates how well the model works in selecting correct dates in the ground-truth (Martschat and Markert, 2018).", "The evaluation methods for TLS cannot directly assess the performance of MTLS systems as there are multiple output timelines and multiple ground-truth timelines.", "Concretely, given an input collection D , corresponding ground-truth timeline set G = { G 1 , G 2 , ...G k 1 } ( k 1 1) , and system-generated timeline set T = { T 1 , T 2 , ..., T k 2 } ( k 2 1) , evaluation metrics need information to automatically match the ground-truth timeline when evaluating T i .", "Therefore, we make the system find the closest ground-truth G to timeline T as follows: G = arg max G G f m ( T, G ) , (9) where f m is the TLS evaluation function to compute the score between T and G based on metric m , which can be either concat , agreement , align , align+ , align+m:1 , or d-select .", "Then, the overall performance of the MTLS models is computed by taking the average of all the members in T .", "The goal of our experiments is to answer the following research questions (RQs): RQ1: Do MTLS models produce more meaningful", "meaningful output than TLS models?", "RQ2: How does 2SAPS framework perform on MTLS task compared with other MTLS baselines?", "RQ3: How effective are the components of the modules in 2SAPS?", "How do parameter changes in the model affect the results?", "We note that there is no available dataset for MTLS task, thus we construct MTLS datasets 5 extending existing TLS datasets.", "Tran et al. released Timeline17 (Binh Tran et al., 2013) and Crisis (Tran et al., 2015) datasets for TLS over news articles.", "5 The datasets are now available at https://yiyualt.github.io/mtlsdata/.", "Table 2 shows their statistics.", "To assure high complexity of data, we generate multiple datasets from TLS datasets by varying degree of story mixtures.", "We construct MTLS datasets based on combining TLS datasets, according to the following procedure: (1) set the number of topics L used to generate a new dataset; (2) from TLS datasets, randomly choose L topics, then merge their document collections into a new dataset D along with grouping their associated ground-truth timelines into G .", "6 (3) repeat steps (1) and (2).", "Here, the value of L reflects the complexity of the dataset.", "The more topics the dataset contains, the more complex it is.", "We repeated the steps (1)~(3) on Timeline17 7 and finally created 25 datasets as shown in Table", "3. Timeline17 contains 9 document collections, covering the following topics: BP Oil Spill (bpoil), Influenza H1N1 (h1n1), Michael Jackson death (mj), Libyan War (libya), Egyptian Protest (egypt), Financial Crisis (finan), Haiti Earthquake (haiti), Iraq War (iraq), Syrian Crisis (syria).", "As there are no ready models for MTLS task, we design the baselines as divide-and-summarize approaches.", "The underlying idea is: first segment the input dataset into sub-datasets (subsequently called 6 If a topic has multiple ground-truth timelines, we pick one that has length closest to the average length of the timelines for that topic. 7 We note that Crisis contains only 4 topics, resulting in few possible combinations, so we finally decided to skip it. segments) by partition/division algorithms; then adopt TLS techniques to generate a timeline for each sub-dataset (segment).", "We now describe the choices for each step.", "Dataset Division Approaches: Random.", "We randomly decide the number of segments from 1 to 10.", "Then, we assign a news article to a random segment.", "LDA (Latent Dirichlet Allocation) (Blei et al., 2003).", "Given a dataset, we first use LDA to detect the main topics in the dataset.", "Then, we assign each news article to its dominant topic.", "K-means (MacQueen et al., 1967).", "We use k-means algorithm in scikit-learn .", "CHIEU 2004 (Chieu and Lee, 2004): It is a frequently used unsupervised TLS baseline which selects the top-ranked sentences based on summed similaries within n -day window.", "MARTSCHAT 2018 (Martschat and Markert, 2018): It is one of the state-of-the-art TLS models and is also the first work to establish formal experimental settings for TLS task.", "We use the implementation given by the authors.", "9 GHALANDARI 2020 (Gholipour Ghalandari and Ifrim, 2020): It constructs timeline by first predicting the important dates via a simple regression model and then selecting important sentences for each date.", "10 We combine the above 3 dataset division approaches and 3 TLS approaches and thus yield 9 baselines.", "Concerning the characteristics of MTLS task and our datasets, the experimental settings differ from the TLS settings applied in Martschat and Markert (2018).", "In particular, the settings are: When generating timelines, none of the compared models knows the actual value of L (i.e., L is not an input data).", "The stratification given in Table 3 is shown only for the reader to explain the datasets' construction method.", "For the dataset-division algorithms, LDA and k-means, we use different techniques to find optimal number of segments.", "For LDA, we evaluate topic coherence measure (C v score) (Rder et al., 2015) for topic numbers ranging from 1 to 10, and then choose the optimal number.", "For k-means, we use silhouette value (Rousseeuw, 1987) to determine the optimal number of segments.", "All the compared methods do not take the information of the ground-truth as input.", "That is, the number of dates, the average number of summary sentences per date, the total number of summary sentences, the ground-truth start dates, and end dates are all unknown.", "We set the length of timelines to 20 and summary length to 2 sentences per date.", "We first address RQ1 to show the necessity of MTLS and to demonstrate that TLS performs poorly when an input dataset contains mixture of documents on different stories.", "To achieve this, we compare results of MTLS baselines with a standard TLS approach.", "Table 4 shows the performance comparison between TLS and MTLS baselines based on MARTCHAT 2018.", "For fair comparison in this first experiment, we select only one timeline from MTLS outputs that is most similar to the timeline generated by TLS.", "We observe that when L = 1 , 2 , MTLS underperforms TLS by 15.1%, 4.8% in terms of align+m:1 ROUGE-1, respectively.", "However, it outperforms TLS by 150%, 117.1%, and 94.7% when L equals 3,4,5, respectively.", "This indicates that as the complexity of input document collection increases (higher L values), TLS systems do not produce good results when compared to MTLS ones.", "In real world scenarios, it is rather rare that the input dataset is clean enough to contain only a single topic.", "Thus, these results suggest that MTLS approach should in practice be more useful than TLS.", "The results for the other two TLS algorithms introduced in Section 6.2 show a similar trend, too.", "Furthermore, the example outputs of TLS and MTLS systems are also available as supplementary materials.", "We now investigate the performance of our framework to answer RQ2.", "Table 5 shows the overall performance of MTLS systems.", "We observe that 2SAPS achieves the best performance in terms of all ROUGE metrics.", "In particular, when compared with CHIEU 2004, MARTSCHAT 2018 and GHALANDARI 2020 in terms of concat ROUGE-1 score, it outperforms them by 52.9%, 12.2%, and 16.4%, respectively.", "We also observe that GHALANDARI 2020 method still achieves the best performance among baselines except for concat ROUGE-1.", "Furthermore, it is worth noticing that k-means works best in dividing datasets.", "On average, k-means outperforms Random and LDA by 15% and 7.2%, respectively, in terms of concat ROUGE-1.", "Finally, compared with the best-performing baseline, k-means-G HALANDARI 2020, our 2SAPS outperforms it by 9.9%, 15.1%, 0%, 10%, 4.7%, 3.6%, 19.1%, in terms of concat (ROUGE-1,ROUGE-2), align+m:1 (ROUGE-1,ROUGE-2), agreement (ROUGE-1,ROUGE-2) and d-select , respectively.", "We turn to the first part of RQ3.", "We conduct ablation tests on Event Selection (ES) and Timeline Selection (TS) components.", "Table 6 shows the changes of different models.", "We observe that without ES, d-select and align+m:1 ROUGE-2 scores decrease 14.6% and 42.2% compared with 2SAPS.", "The plausible reason is that without ES, many unimportant dates and events are included in a timeline, resulting in low recall of correct dates.", "On the other hand, without TS component, the generated timeline set tends to contain noisy timelines, causing low ROUGE-1 as the performance drops by 18.8%.", "We now analyze the impact of key parameters, 1 , 2 , 1 , 2 .", "1 and 2 directly influence the quality of generated events and timelines, while 1 and 2 indirectly affect the model's performance by controlling the selection steps.", "Figure 1 shows the performance of 2SAPS under concat ROUGE-1, align+m:1 ROUGE-1, and agreement ROUGE-1.", "In particular, we observe that: a smaller value of 1 (from 0.1 to 0.4) gives better results than a larger value (Figure 1a).", "When 1 turns to 1, AP algorithm does not converge, and the values of all measures become 0.", "The plausible reason for this could be that when sentence dates are very Model Metric L=1 L=2 L=3 L=4 L=5 TLS (MARTSCHAT 2018) concat (ROUGE-1) 0.287 0.310 0.214 0.261 0.202 concat (ROUGE-2) 0.061 0.069 0.038 0.044 0.035 align+m:1 (ROUGE-1) 0.053 0.063 0.032 0.041 0.038 align+m:1 (ROUGE-2) 0.011 0.017 0.011 0.007 0.007 MTLS (k-means-M ARTSCHAT 2018) concat (ROUGE-1) 0.272 0.364 0.362 0.400 0.390 concat (ROUGE-2) 0.056 0.084 0.085 0.100 0.084 align+m:1 (ROUGE-1) 0.046 0.063 0.082 0.097 0.082 align+m:1 (ROUGE-2) 0.009 0.014 0.026 0.034 0.024 MTLS (LDA-MARTSCHAT 2018) concat (ROUGE-1) 0.274 0.332 0.363 0.335 0.273 concat (ROUGE-2) 0.054 0.074 0.089 0.079 0.059 align+m:1 (ROUGE-1) 0.043 0.057 0.078 0.080 0.065 align+m:1 (ROUGE-2) 0.007 0.009 0.027 0.024 0.018 Table 4: Performance comparison between TLS and MTLS systems.", "Figure 1b shows the impact of the reference relation in linking events .", "The values of all metrics increase as 2 increases.", "It makes sense that reference relation exerts an important role in linking events into timelines, thus a higher value is necessary.", "However, when 2 is over 0.9, the performance drops because when news articles provide few contextual events (e.g., background events, related events, etc.), then the reference relation between events becomes unreliable.", "1 controls the impact of Event Salience described in Section 4.1.", "Another corresponding fac-tor is Event Consistency , which is weighted by 1 1 .", "Figure 1c shows that the model with larger values of 1 underperforms the ones with relatively small values of 1 (from 0.2 to 0.4), indicating that", "con-(a) 1 : Temporal similarity", "sistency of event matters more than its salience in selecting high-quality events .", "Finally, in Figure 1d, we observe that along with the increase of 2 , the performance of all metrics decrease, suggesting that the coherence of timeline is more effective than salience in selecting good timelines.", "Our 2SAPS model works essentially on the unit of sentences and constructs a graph where each sentence is a node and edge is the relation between", "sentences.", "It has then a complexity of O ( n 2 ) .", "Future work could address this by simplifying graph structure and providing approximate solutions to cover also the cases of processing large datasets.", "Another solution is to select only important sentences from news articles using the combination of classification, summarization or filtering.", "We introduced MTLS task to generalize the timeline summarization problem.", "MTLS improves the performance of timeline summarization by generating multiple summaries.", "We conducted experiments to first show that given a heterogeneous time-stamped news article collection, TLS usually does not produce satisfactory result.", "We further proposed 2SAPS, a two-stage clustering-based framework, to effectively solve MTLS task.", "Furthermore, we extended TLS datasets to MTLS datasets, as well as introduced a novel evaluation measure for MTLS.", "Experimental results show that 2SAPS outperforms MTLS baselines which follow the divide-and-summarize strategy.", "Our work sig-nificantly improves the generalization ability of timeline summarization and can provide users with easier access to news collections.", "As an unsupervised approach that does not require costly training data, it can be applied to any potential datasets and languages.", "In future work, we plan to test our approach on additional MTLS datasets.", "We will also investigate scenarios in which MTLS can enhance information retrieval systems operating over news article collections.", "For users searching over large temporal collections, structuring the returned results into a series of timelines could prove beneficial, instead of returning a usual list of interwoven documents that relate to different stories or periods.", "We greatly appreciate the authors in CoNLL'18 paper (Martschat and Markert, 2018) for making their data public.", "In particular, we wish to thank Sebastian Martschat for his great support in discussions about the experiment setup and reproduction.", "We also want to thank anonymous reviewers for their invaluable feedback." ]
[ "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "result", "abstain", "abstain", "objective", "abstain", "other", "other", "other" ]
[ "Generating new events given context with correlated ones plays a crucial role in many event-centric reasoning tasks.", "Existing works either limit their scope to specific scenarios or overlook event-level correlations.", "In this paper, we propose to pre-train a general Correlation-aware context-to-Event Transformer (ClarET) for event-centric reasoning.", "To achieve this, we propose three novel event-centric objectives, i.e., whole event recovering, contrastive event-correlation encoding and prompt-based event locating, which highlight event-level correlations with effective training.", "The proposed ClarET is applicable to a wide range of event-centric reasoning scenarios, considering its versatility of", "(i) event-correlation types (e.g., causal, temporal, contrast),", "(ii) application formulations (i.e., generation and classification), and", "(iii) reasoning types (e.g., abductive, counterfactual and ending reasoning).", "Empirical fine-tuning results, as well as zeroand few-shot learning, on 9 benchmarks (5 generation and 4 classification tasks covering 4 reasoning types with diverse event correlations), verify its effectiveness and generalization ability.", "An event', usually a text span composed of a predicate and its arguments (Zhang et al., 2020b), is a fine-grained semantic unit to describe the state of entities/things (e.g., He looks very worried ) and how they act (e.g., I grab his arms ).", "Understanding events and modeling their correlations are fundamental to many reasoning tasks (Bhagavatula et al., 2020; Qin et al., 2019), e.g., abductive reasoning, story ending classification and generation, counterfactual reasoning, script reasoning.", "For instance, in the left example of Figure 1, to generate the missing event [E] in the given context, it is essential to understand that there are four events ( it tries Work is done during internship at Microsoft.", "the knob ', [E] , the creature starts pounding on the door ', and (the creature) to break it down '), and then predict [E] based on the other three events and its correlations to them (i.e., the contrast relation indicated by but ' and the causal relation by so ').", "Event-aware reasoning has gained much attention and achieved promising success in recent years (Lv et al., 2020; Ding et al., 2019).", "However, many algorithms are designed to solve only some specific tasks.", "For example, Qin et al. (2020) propose to improve unsupervised decoding for counterfactual and abductive reasoning; Huang et al. (2021) and Guan et al. (2019) advance story ending generation via incremental encoding and multi-level graph convolutional networks.", "Although these works show effectiveness in corresponding applications, they are limited to specific scenarios, and cannot generalize well to a broad scope of reasoning.", "Meanwhile, some pioneering works follow a recently arising paradigm to conduct event-based pretraining for those downstream reasoning tasks (Yu et al., 2020; Han et al., 2020a; Lin et al., 2020; Zhou et al., 2021b).", "However, these solutions have their own limitations: COMeT (Hwang et al., 2021) learns event correlations from a human-curated knowledge graph and thus limits its scalability.", "Han et al. (2020a) and Lin et al. (2020) only model temporal relations and cannot be expanded to other relations (e.g., causal, contrast).", "EventBERT (Zhou et al., 2021b) is proposed for event-based classifi-2559 cations and is thus inapplicable to generation tasks.", "In this work, we propose a general pre-training framework for event-centric reasoning by learning a C orre la tion-awa r e context-toE vent T ransformer (ClarET) from an event-rich text corpus.", "We propose three novel self-supervised objectives, dubbed as whole event recovering (WER), contrastive event-correlation encoding and prompt-based event locating, respectively.", "The first one aims to capture event correlation by recovering a whole event from its masked context.", "The second one enhances the representation of the masked event in WER by contrasting it with the gold event against the negative ones.", "The last one is a simplified WER task by providing hints in its prompt and thus facilitates effective learning for WER.", "ClarET explicitly models event correlations and contributes to various scenarios.", "From one aspect, it covers a variety of correlation types (e.g., causal, temporal, contrast) attributed to correlation type-agnostic objectives.", "From another aspect, it is applicable to both generation and classification task formulations by its unified structure.", "Lastly, it highlights event-level correlations and thus is more effective for diverse event-centric tasks, e.g., abductive, counterfactual and ending reasoning.", "To evaluate ClarET, we compare it with strong baselines on 9 diverse benchmarks.", "While ClarET is continually pre-trained from BART (Lewis et al., 2020) with very limited extra resources, i.e., training on a small subset of BART-used corpus (i.e., 200M out of 2.2T tokens) within 90 GPU hours (only 0.13% of 70,000h BART pre-training), it achieves state-of-the-art (SoTA) performance on all 5 generation benchmarks.", "It also outperforms all unified models on 4 classification benchmarks and achieves competitive, or even better, accuracy to strong discriminative baselines.", "We further exhibit that the ClarET provides a good initialization for downstream tasks by zeroand few-shot learning.", "Unified Pre-trained Model.", "A recent trend is to pre-train unified (a.k.a. universal or general) models to boost downstream generation and classification tasks, rather than masked language modeling (MLM) only.", "GPT (Radford et al., 2019) is based on auto-regressive language modeling but incompetent in classifications due to unidirectional contextualizing.", "To remedy this, BART (Lewis et al., 2020) trains seq2seq models as a text denoising autoencoder with mask-infilling, etc; UniLM (Dong et al., 2019) designs advanced self-attention masks in Transformer, leading to a partially autoregressive MLM; GLM (Du et al., 2021) proposes an auto-regressive blank-filling objective based on Transformer, achieved by bi-/uni-directional attention and 2D positional encoding.", "T5 (Raffel et al., 2020) pre-trains a text-to-text Transformer to recover the masked part of input by decoding.", "All these general-purpose pre-trained models focus on relatively short-span masking in random, whereas we focus on masking a whole semantic unit (i.e., event) and propose novel training objectives to circumvent problems in long-span event decoding.", "Besides, they are also vulnerable to pretrain-finetune inconsistency, leading to inferior event-centric performance.", "Task-specific Models for Event Reasoning.", "Many recent works present task-specific neural models for various event-centric reasoning types, including (1) abductive reasoning (Ji et al., 2020; Dong et al., 2021; Zhu et al., 2020), (2) counterfactual reasoning (Qin et al., 2019, 2020), (3) ending reasoning (Guan et al., 2019; Wang and Wan, 2019; Yao et al., 2019; Huang et al., 2021; Guan et al., 2020; Wang et al., 2017; Li et al., 2018; Ding et al., 2019; Zhou et al., 2021c; Chaturvedi et al., 2017; Srinivasan et al., 2018), (4) incoherence reasoning (Mori et al., 2020).", "However, these methods are designed for the specific reasoning scenarios based on task-specific models so hardly generalize to other scenarios.", "In contrast, we aim to pre-train a general event-centric model for generalizing to various scenarios.", "Event-centric Pre-training.", "With similar scopes, many works focus on event-centric pre-training to promote event-related tasks as event' is a self-contained semantic unit and also an entry of commonsense reasoning.", "One paradigm is to pre-train on corpora without human-labeling.", "Some methods focus on more specific aspects of events and their correlations.", "DEER (Han et al., 2020b) performs temporal and event masking predictions for temporal relations.", "Lin et al. (2021) propose to recover a temporally-disordered or event-missing sequence for temporal and causal relations.", "Wang et al. (2021) use AMR structure to design contrastive objectives for the event detection task.", "However, they are not general enough to various event reasoning tasks.", "In contrast, CoCoLM (Yu et al., 2020) learns 2560 an event-level MLM to generalize more.", "EventBERT (Zhou et al., 2021b) states the ineffectiveness of event-level MLM and exploits hard negatives via contrasting, contributing much to downstream multi-choice tasks.", "However, these methods are only competent in discriminative tasks.", "The other paradigm is based on supervised pre-training on similar tasks and then performs knowledge transfer, e.g., COMeT (Hwang et al., 2021), UnifiedQA (Khashabi et al., 2020) and UNICORN (Lourie et al., 2021), but they require human-curated data.", "Event-rich Corpus.", "Although raw corpora are viewed as off-the-shelf pre-training resources, a key question is how to mine event-rich examples.", "Here, event-rich' denotes that each example contains various events and entails adequate contexts to support event reasoning via either explicit or implicit event-correlation.", "This is crucial to learning event-correlations and reducing unnecessary overheads.", "Except for human-curated resources (e.g., ATOMIC (Sap et al., 2019) and ConceptNet (Speer et al., 2017)), event-rich corpora are also mined via automatic schemes.", "ASER (Zhang et al., 2020b) builds an event-based graph, where each node is an event extracted from a text and the relation of an event pair is predicted by a PDTB model.", "In contrast, EventBERT (Zhou et al., 2021b) operates on pure text so filters out correlation-scarce contexts and extracts verb-rooted events.", "Besides, it offers event sampling methods for hard negatives.", "We adopt this data processing method as both pure-text examples and hard negatives are prerequisites of generic and robust pre-training.", "In this work, we directly adopt event-rich data mining and negative sampling methods from Zhou et al. (2021b) but focus our contributions on enlarging application scope of event-centric tasks and overcoming challenges raised in the new scope.", "Event-rich Data Mining.", "To mine event-rich data from raw corpus, we employ a story corpus, BOOKCORPUS (Zhu et al., 2015), and take a two-step procedural (i.e., filter ' and extraction ').", "It filters out correlation-scarce paragraphs according to existence of connectives (i.e., discourse relation keywords, e.g., however , while ).", "Then, it highlights the event spans in the filtered paragraphs by extracting verb-rooted sub-trees in dependency trees of the paragraphs.", "With a filtered paragraph x , we build each example as ( x, e ) where e is an event mention in x .", "We obtain 200M tokens (out of 1B in BOOKCORPUS ) in 3.9M filtered paragraphs.", "For clear notations , we denote a text piece as a lower case letter (e.g., e ).", "It is tokenized into a sequence as a bold (e.g., e = [ e 1 , e 2 , . . . ] ), where a letter w/ subscript t is the t -th token in the sequence.", "Negative Event Sampling.", "Following Zhou et al. (2021b), we build a pool of events from the whole corpus and then retrieve negative events by three heuristic schemes.", "Given an event e in ( x, e ) , we sample its negative event, e , in light of lexicon-based (20% time), PoS-based (60% time) or in-domain (20% time) retrieval.", "Consequently, given an event e , we sample M negative events, i.e., { e } Mi =1 .", "Figure 1 (right) shows an integrated instance ( x, e, { e } Mi =1 ) of the event-rich corpus 1 .", "We first present whole event recovering as a backbone pre-training objective in 3.2.1.", "After identifying incompetence of the simple backbone, we propose two other objectives in 3.2.2 and 3.2.3.", "An overview of the objectives is shown in Figure 2.", "For the objective of whole event recovering (WER), it is straightforward to leverage an encoder-decoder structure, where a masked context is passed into the encoder to generate the missing part by decoding.", "Specifically, given an event e in a paragraph x , we mask out e from x at the encoder side and then generate e at the decoder side, i.e., p ( e | x / { e } ; ) = (cid:89) t p ( e t | e <t , x / { e } ; ) , (1) where denotes parameters and x / { e } denotes replacing e in x with one special token [M] .", "We estimate Eq.", "(1) by the Transformer sequence-to-sequence (seq2seq) structure (Vaswani et al., 2017).", "First, we apply the Transformer encoder to x / { m } for contextual embeddings for all tokens in x / { m } : H ( enc ) =Trans-Enc( x / { e } ; ( enc ) ) R d n , (2) where n is the number of tokens in x / { e } .", "y t = Trans-Dec( e <t , H ( enc ) ; ( dec ) ) R |V| , (3)", "where V denotes token vocabulary and y t is the predicted categorical distribution over V .", "Lastly, the training objective is defined as a maximum likelihood estimation.", "Its loss function is written as L ( wer ) = (cid:88) ( x,e ) 1 | e | (cid:88) | e | t =1 log y t [ y = e t ] , (4) where y t [ y = e t ] ' denotes fetching the probability of the t -step gold token e t e from y t .", "This objective is similar to span recovering schema (Raffel et al., 2020; Joshi et al., 2020) but differs in that", "(i) each masked span is an event, i.e., an integrated semantic unit, so much longer (up to 22 tokens and see Figure 4 for length distribu-tion), and", "(ii) only one event is masked out from the context to facilitate event-correlation modeling between the event and its contexts.", "Intuitively, the success of Eq.", "(1) requires to capture correlations between the masked event and remaining contexts but two major problems arise due to WER with long event-level masking spans: (1) Implicit Event-correlation: The model recovers an event based solely on token-level concurrence as in a conditional language model (e.g., T5 and BART), regardless of the rich event-level correlations between the events in context x / { e } and the masked event e .", "Such a correlation-implicit model would achieve inferior performance on downstream event-centric correlation reasoning tasks.", "(2) Learning Difficulty: As the masked event is an integrated, self-contained, semantic unit, it is difficult for the conditional generation model to recover the whole event due to a lack of local contexts.", "As a result, the model cannot effectively learn from the long masked spans, which has been empirically proved in autoencoding MLM models.", "To alleviate the two problems above, we propose two other novel self-supervised objectives in the following.", "Briefly, we present contrastive event-correlation encoding to enhance correlations between contexts and events, and prompt-based event locating to reduce generation difficulty.", "For the implicit event-correlation problem, an intuitive solution is to explicitly highlight the correlation from the masked context to the missing event at the encoder side.", "To achieve this, we resort to contrastive learning to enhance the encoder-side representation of the masked event by contrasting it with the embedding of the gold event mention e against those of negative ones e .", "Particularly, we first derive the embedding of e and e independently via the Transformer encoder in", "Eq.(2), i.e., c = Pool(Trans-Enc( [CLS] + e ; ( enc ) )) , (5) c = Pool(Trans-Enc( [CLS] + e ; ( enc ) )) , (6) where [CLS] is a special token prefixed to each event mention, and Pool( ) denotes using the contextual embedding of [CLS] to represent the whole event.", "Then, we enhance h [m] , the contextual representation of [M] in x / { e } from H ( enc ) in", "Eq.(2), by contrasting it with c against c , i.e., L ( cee ) =max(0 , + d ( h [m] , c ) d ( h [m] , c )) , (7) where d ( , ) denotes a distance metric of two vectors, which is Euclidean distance in this work.", "As a result, the encoder-side correlation-aware representation h [m] also offers a straightforward pathway to transmit event-level information to decoding so mitigates the learning difficulty to some extent.", "As for learning difficulty problem, we also propose a prompt-based event locating objective to reduce generative difficulty by providing hints in the prompt.", "The basic idea is to simplify WER objective as an extractive generation task to locate and copy a candidate/hint from the prompt, which aims at improving learning effectiveness.", "To this end, we present two prompt-based generation schemas in the following.", "Correct Event Selection.", "Inspired by advances of prompt-based multi-choice question answering, we present correct event selection schema to select the gold event e against negative ones { e } Mi =1 based on the contexts x / { e } .", "Given an event-masked paragraph x / { e } suffixed with several candidate events { e } Mi =1 containing the gold masked one e , it aims to generate the masked event e back, i.e., x ( ces ) = x / { e } + Options:", "where [ e 1 , e 2 , . . . ] is a random permutation of [ e, { e } Mi =1 ] in case of position bias.", "We use a random permutation as all candidates are assigned with distinct position embeddings during contextualizing, and a fixed permutation of gold events will result in a learning shortcut (position bias) to degrade the model.", "Thus, similar to", "Eq.(1), we can define its formula as p ( e | x ( ces ) ; ) .", "Wrong Event Tagging.", "The other schema is wrong event tagging to find the wrong event in a corrupted paragraph, similar to incoherence reasoning.", "Thus, we re-write the encoder input as x ( wet ) = x / { e } & { e } + Event: [M] is wrong , where x / { e } & { e } denotes replacing the gold event e in x with a negative e { e } Mi =1 .", "Thus, we can define the formula of this objective as p ( e | x ( wet ) ; ) .", "Based on the two formulas above, we define the prompt-based event locating objective as L ( pel ) = (cid:88) ( x,e ) 1 | e | (cid:88) t log p ( e t | e <t , x ( ces ) ; ) 1 | e | (cid:88) t log p ( e t | e <t , x ( wet ) ; ) , (8) where = { ( enc ) , ( dec ) } , e is sampled in { e } Mi =1 .", "3.3 Model Pre-training and Fine-tuning Self-supervised Pre-training.", "The final loss to pre-train our ClarET is a linear combination of the three losses above from", "Eq.(4, 7, 8), i.e., L = L ( wer ) + L ( cee ) + L ( pel ) .", "Supervised Downstream Fine-tuning.", "For generation tasks, we simply leverage the formula in", "Eq.(1) to establish fine-tuning objectives.", "For discriminative (e.g., multi-choice) tasks, we can either formulate all tasks into generation as in GPT/T5 or fine-tune with classifying heads as in BART.", "With pilot experiments, we found the latter one can achieve better performance and adopted it.", "While we adopt the same data processing in EventBERT (Zhou et al., 2021b) and share a similar motivation to learn an event-centric pre-trained model, we expand the scope from discriminative-only ' in EventBERT into unified ' by our context-to-event Transformer for a broad spectrum of scenarios.", "Such an expansion is non-trivial since new challenges arise in the unified formulation.", "Compared to the inefficient event-backfilling and contextualizing ' paradigm in EventBERT, our model can explicitly and effectively learn event-level correlations between contexts and events by our novel contrastive and prompt-based objectives.", "Moreover, COMeT (Bosselut et al., 2019; Hwang et al., 2021) is also a conditional generation model but focuses on triple-level commonsense reasoning given ( head event , relation ) to generate tail events , whose motivation, however, is orthogonal to ours.", "Therefore, we focus on a different motivation or scope, not to mention evaluation formulations.", "This section begins with descriptions of downstream datasets and experimental setups.", "Downstream Datasets.", "We conduct extensive evaluations on 9 datasets for 9 downstream tasks, i.e., 5 generation and 4 classification tasks.", "Generation tasks include abductive commonsense reasoning on ART ( NLG) (Bhagavatula et al., 2020), counterfactual story generation on TIMETRAVEL (Qin et al., 2019), story ending generation (Guan et al., 2019), commonsense story generation (Guan et al., 2020), and event process completion on APSI (Zhang et al., 2020a).", "Classification tasks include script reasoning on MCNC (Li et al., 2018), abductive commonsense reasoning on ART ( NLI) (Bhagavatula et al., 2020), narrative incoherence detection on ROCStories (Mori et al., 2020), and story cloze test (Mostafazadeh et al., 2016).", "Please refer to Appendix C for their details.", "Pre-training Setups.", "Instead of learning from scratch, we perform continual pre-training from BART-large (Lewis et al., 2020) due to limited computation resources.", "The batch size and number of training steps are 1152 and 160k.", "The model is trained by Adam (Kingma and Ba, 2015) w/ learning rate of 1e-5 and warmup proportion of 0.03.", "The gradient clip, dropout rate and weight decay are 1.0, 0.1 and 0.01.", "Notably,", "(i) BOOKCORPUS 2563 Abductive C.S. Reasoning CounterfactualStory Story Ending Generation C.S. Story Generation Event Process Completion Size B-4 R-L BERT B-4 R-L BERT B-1 B-2 B-1 B-2 B-1 B-2 Selected task-specific models with competitive performance GRF (Ji et al., 2020) -11.62 34.62 -----IE+MSA (Guan et al., 2019) ----24.40 7.80 --Plan&Write (Yao et al., 2019) ----24.40 8.40 30.80 12.60 -Fine-tuning with pre-trained unified (generative) model GPT2-S (Radford et al., 2019) 124M 2.23 22.83 48.74 69.27 65.72 60.53 39.23 13.08 32.20 14.10 35.25 11.75 GPT2-M (Radford et al., 2019) 335M --75.71 72.72 62.39 --45.43 14.81 BART (Lewis et al., 2020) 400M 16.47 38.73 56.36 82.91 76.44 79.50 54.22 18.07 54.22 18.07 56.25 18.75 GLM (Du et al., 2021) 335M 7.79 25.54 54.85 75.81 70.03 68.23 57.04 18.45 57.04 18.45 57.34 19.11 ClarET (ours) 400M 17.67 41.04 57.31 87.18 80.74 81.48 57.47 19.16 57.47 19.16 58.88 19.74 Table 1: Fine-tuning results on five generation benchmark datasets.", "has already been used by BART pre-training and our data processing is based on heuristics without human-curated resources;", "(ii) Our continual pretraining only needs 90 GPU hours on 200M tokens, i.e., 0.13% of BART that consumes 70K hours on 2.2T tokens (see Appendix B.1).", "Hence, ClarET with zero newly introduced corpus and relatively negligible computing overhead makes great lifts and preserves fair comparisons with baselines.", "Fine-tuning Setups.", "For finetuning, we train the model with an Adam w/ learning rate of 1e-5 and warmup proportion of 0.06.", "The dropout rate, batch size and weight decay are 0.1, 32 and 0.01.", "For generative downstream tasks, we take BLEUN (BN ) (Papineni et al., 2002), ROUGE-L (R-L) (Lin, 2004) and BERTScore (BERT) (Zhang et al., 2020c) as the evaluation metrics, while the accuracy (ACC) is taken for classification tasks.", "Each fine-tuning runs with seeds 2, 10 and 1234, and we evaluate the best dev model on the test set.", "Fine-tuning for Generation.", "As shown in Table 1, our proposed ClarET achieves SoTA performance across all generation tasks.", "For instance, ClarET increases the ROUGE-L score by 2.3 ab-solute value for abductive reasoning.", "The supe-rior performance of ClarET on the benchmarks demonstrates that it can model event-level corre-2564 Method B-4 R-L BERT GPT (Qin et al., 2019) 1.25 18.26 59.50 GPT2-S (Qin et al., 2019) 1.28 20.27 59.62 GPT2-M (Qin et al., 2019) 1.51 19.41 60.17 Zero-Shot-Ranked (Qin et al., 2020) 2.26 25.81 60.07 BART-large (Lewis et al., 2020) 7.08 30.60 61.58 DELOREAN (Qin et al., 2020) 21.35 40.73 63.36 ClarET (ours) 23.75 43.03 63.93 Table 3: Zero-shot results on generative Counterfactual Story.", "lation more effectively via few steps of continual pre-training and provide a general solution for a variety of event-centric correlation reasoning tasks.", "Fine-tuning for Classification.", "Table 2 lists results on 4 classification tasks.", "We find ClarET performs better than all task-specific models and unified pre-trained models with 2%-4% improvement.", "It achieves competitive accuracy to strong discriminative models, e.g., the gap between ClarET and EventBERT is 0.15 for narrative incoherence detection and story cloze test.", "However, EventBERT is a RoBERTa-based competitor using the identical pre-training corpus.", "Its pre-training follows event-backfilling and contextualizing (similar to multi-choice QA), which has a small gap to downstream classification tasks for strong performance but brings two drawbacks.", "Firstly, its pre-training is slow due to repeat contextualizing over paragraphs, leading to 5 .", "6 longer GPU hours than ours.", "In addition, its discriminative paradigm limits it specifically to classifications, regardless of wide generation tasks.", "The results show ClarET is on par with the discriminative-only EventBERT on classifications.", "This is non-trivial given the large formulation gap between our generative pretraining objectives and downstream multi-choice-style classification tasks, and attributed to our effective event-correlation learning.", "In summary, these results show ClarET serves as a unified pre-trained model for event-centric generation and classification tasks.", "Zero-shot Learning.", "It is essential to verify if the targeted information was learned and retained by a pre-trained model.", "Compared to MLM, our generative recovering model is inherently applicable to event-centric multi-choice and generative formulations.", "For generation tasks , we apply", "Eq.(1) to generate answers.", "As shown in Table 3, ClarET achieves the best performance and outperforms DE Method ACC (%) Random 20.00 RoBERTa-large (Zhou et al., 2021b) 20.09 DeBERTa-xlarge (Zhou et al., 2021b) 20.31 BART-large (Lewis et al., 2020) 21.72 EventBERT (Zhou et al., 2021b) 30.79 ClarET (ours) 32.15 Table 4: Zero-shot results on discriminative Script Reasoning.", "LOREAN (which adapts auto-regression for counterfactual reasoning).", "For classification tasks , we apply", "Eq.(1) to each option for its perplexity and select the option with minimum.", "As shown in Table 4, ClarET surpasses previous models and beats the discriminative-only event-centric model, EventBERT.", "Besides, the general-purpose pre-trained models perform nearly random guesses due to their incompetence in long-span event discrimination.", "Few-shot Learning.", "Since our model reduces pretrain-finetune inconsistency for event-centric tasks and provides a good initialization for downstream fine-tuning, it is also interesting to see few-shot performance by scaling down training data.", "As shown in Figure 3, ClarET achieves similar performance to strong baselines with only 10%-30% of training data for fine-tuning.", "Ablation study.", "To measure the contribution of each objective to the final fine-tuning results, we conduct an ablation study on both generation and classification in Table 5. The first two ablations drop the two prompt schemas respectively in prompt-based event locating objective of", "Eq.(8), which verifies the effectiveness of reducing task difficulty.", "Then, the third ablation removes contrastive event-correlation encoding and shows a substantial drop, which verifies the significance of explicit event-correlation learning.", "Next, we keep only the prompt-based event locating objective to make our model a prompt-learning discrim-2565 Method Gen-CS Cls-SR B-4 R-L ACC ClarET (full, pre-trained by", "inative model (sharing more close methodology with EventBERT), however leading to a dramatic decrease.", "Lastly, when removing all the objectives, our model degenerates to BART-large.", "Comparison with Larger Model.", "A trend of pretraining models follows the law of larger models for better performance' but a crucial research question is how to perform competitively with fewer computation resources'.", "To answer, we show extra fine-tuning results on the five generation datasets in Table 7 to compare our ClarET (400M parameters) with T5-large (770M) and T5-base (220M).", "It is observed", "(i) with 3 scale, T5-large notably outperforms T5-base to support the above law and", "(ii) with almost half model size, our ClarET performs very competitively to T5-large (even better on 3 out of 5 tasks), verifying the significance of our objectives towards event-related knowledge.", "Difficulty of Event Generation.", "To exhibit the learning difficulty in pre-training (as stated in 3.2.1) and the effectiveness of our novel learning objectives, we conduct another ablation setting in Table 6. It is observed that ClarET achieves better event-level perplexity (ePPL), verifying the two novel objectives promote event generations and reduce difficulty of decoding.", "check if ClarET is more competitive on longer-span event generation, we compare it with BART-large and", "T5-base/-large by log ' of", "Eq.(1).", "Different from recovering paradigm of others, we follow the denoising paradigm to implement BART and calculate its score by considering the masked part in decoding.", "Figure 4 shows that (1) Line Chart: the gap between ClarET and the others becomes larger with event length increasing as the general-purpose models only consider short-span masking in pretraining, leading to inferior event generation; and (2) Bar Chart: as for data distribution, although a majority of data falls into the 6-8 bin, there are still many examples with event length greater than nine.", "Natural Language Understanding (NLU).", "Our basic model, BART-large, is presented for general NLU tasks.", "To exhibit our minor event-centric continual pre-training would not interfere its NLU ability, we conduct fine-tuning experiments on GLUE benchmark (Wang et al., 2019) as in Figure 5. It is observed that, although slightly surpassed by the discriminative RoBERTa model, fine-tuning BART and ClarET achieve very comparable results, which verifies ClarET's retention of NLU capability.", "Case Study.", "As the first case in Figure 6, we conduct a case study on generative abductive reasoning task, where the fine-tuned ClarET generates an event semantically close to the gold reference, but the BART does not.", "BART only generates a part of the answer but ignores the event-correlations from They were impressed with my phone ', while ClarET completely captures the correlations in the 2566 Figure 5: Fine-tuning results on GLUE dev, which verifies ClarET retains BART's natural language understanding ability.", "Context: I went to the store to buy a phone.", "[E] They were impressed with my phone.", "Reference of the Gold Event [E] : I bought the latest model of the phone I wanted, and showed it to my friends.", "Generation by ClarET: I bought a new phone and showed it to my friends.", "(BLEU-4: 34)", "Generation by BART: I bought a new phone.", "(BLEU-4: 0)", "Context: Cora was starting her job as a kindergarten teacher.", "[E] At the end of the day, they all told her how much they liked her!", "Reference of the Gold Event [E] : Cora was nervous, but knew the students were nervous too, so she tried to be extra friendly.", "Generation by ClarET: Cora spent the whole day with her students.", "(BLEU-4: 0)", "Error Analysis and Limitation.", "The second case in Figure 6 shows that our ClarET is ineffective when the gold event is very complicated.", "In detail, the model focus only on at the end of the day ' to generate ... spent the whole day ... ' but ignore very subtle contexts, e.g., starting her job ... teacher ' and they liked her '.", "To expand, we found a problem in long-event decoding by pilot experiments.", "As shown in Figure 7, it is observed that the gap of token-level perplexity between ClarET and WER-only gradually diminishes.", "This is because the subsequent tokens in an event can be generated on the basis of previous generations on the decoder side, rather than context-aware representations from the encoder side.", "While a long span is masked, the model can see previous tokens in an event (i.e., e <t ) in decoding and incline to perform the t -th prediction based on e <t but not x / { e } , especially with a larger t .", "As a result, the model would cheat' in the generation but learn decoder-side language modeling rather than context-aware representations.", "In the future, we will exploit this problem.", "Besides, due to computation resources, we choose the model size with 400M and continual 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% Position (top-percentage) of a token 0 1 2 3 T o k e n p e r p l e x i t y ClarET WER-Only Figure 7: Token-level perplexity w.r.t tokens' percentage positions in events on held-out dev set.", "We present a novel correlation-aware context-to-event Transformer to self-supervisedly learn event-correlation knowledge from text corpus and benefit various event-centric reasoning scenarios.", "Besides SoTA fine-tuning results on 5 generation and 4 classification tasks, we conduct zero-/few-shot learning and extensive ablation studies to exhibit our model's effectiveness.", "Lastly, we find our model is competitive to a twice larger general-purpose model, reduces learning difficulty for event generation, and retains NLU ability from its basic model.", "Although this work learns context-to-event knowledge, our self-supervised objectives are applicable to other semantically-meaningful text units besides events.", "For example, text units can be entities and concepts to learn relational and commonsense knowledge, which can benefit more downstream tasks.", "This work does not involve any sensitive data, but only public unlabeled corpora, i.e., BookCorpus (Zhu et al., 2015) pre-processed by Zhou et al. (2021b), and crowd-sourced datasets released in previous works, including ART (Bhagavatula et al., 2020), TIMETRAVEL (Qin et al., 2019), APSI (Zhang et al., 2020a), MCNC (Li et al., 2018), ROCStories (Mori et al., 2020)." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "result", "objective", "objective", "abstain", "other" ]
[ "We consider the intrinsic evaluation of neural generative dialog models through the lens of Grice's Maxims of Conversation (1975).", "Based on the maxim of Quantity (be informative), we propose Relative Utterance Quantity (RUQ) to diagnose the I don't know' problem, in which a dialog system produces generic responses.", "The linguistically motivated RUQ diagnostic compares the model score of a generic response to that of the reference response.", "We find that for reasonable baseline models, I don't know' is preferred over the reference the majority of the time, but this can be reduced to less than 5% with hyperparameter tuning.", "RUQ allows for the direct analysis of the I don't know' problem, which has been addressed but not analyzed by prior work.", "Neural generative dialog models have a tendency to produce generic, safe responses, such as I don't know' (Serban et al., 2016; Li et al., 2016a).", "The repetition of such phrases is annoying to users, and contributes nothing to the conversation.", "Evaluating chatbots is an active area of research, partly due to their open-ended nature (Hashimoto et al., 2019; Sedoc et al., 2019; Li et al., 2019; Mehri and Eskenazi, 2020b; Deriu et al., 2020).", "To the best of our knowledge, no prior work focuses on analyzing systems for generic, safe responses, such as I don't know.' While prior work (Li et al., 2016a,b; Csky et al., 2019; Welleck et al., 2020) addresses the I don't know' problem, the lack of analysis leaves it unclear if a method improves models by mitigating this problem, or another.", "One linguistic framework for analyzing conversations is Grice's Cooperative Principle (1975), which consists of Maxims of Conversation that function as guidelines for effective communication.", "Grice considered conversations between humans, but there has also been some exploration in NLP (Bernsen et al., 1996; Harabagiu et al., 1996; Qwaider et al., 2017; Jwalapuram, 2017).", "We discuss each of the categories of maxims and the ways a chatbot might violate them in Table 1. We propose a novel automatic diagnostic inspired by the Gricean QUANTITY maxim.", "Relative Utterance Quantity checks if the model favors a generic response (such as I don't know.') over the reference it was trained on for each prompt.", "We apply our diagnostic to a method designed to address this problem (Csky et al., 2019), and find that method does mitigate it, though not by as much as a hyperparameter search.", "Based on this interpretation we propose a method for diagnosing the problem.", "We compare the model score of producing I don't know.' to the model score of producing the reference response.", "This can be done on the training data, or the test data.", "Particularly on the training data, we should expect the model to know' the data it was trained on and therefore score it higher than I don't know.' We propose two diagnostic measures to compute the Relative Utterance Quantity of a model: (1) We plot the average model score for each token across sentences.", "We compare the original reference, beam search output, and two I don't know' (IDK) variants: I don't know.' and I don't know what to do.' allowing for the visualization of the relative gap in scores at different points in the sentence.", "(2) We compute the (length normalized) model score for I don't know.' and the reference of each training prompt, and count how many times the reference is preferred.", "We denote the later as RUQ score.", "Both generalize to other generic responses, as might be appropriate for other corpora or other languages.", "If there are multiple references we would recommend comparing the lowest likelihood reference for RUQ score, since all valid references should be better than I don't know.", "We note that RUQ captures some types of QUANTITY violations, but not all violations of this maxim.", "Following Khayrallah and Sedoc (2020), we train and evaluate on DailyDialog (Li et al., 2017), 1 which consists of 80,000 turns of English-learners practicing daily dialogues' in various contexts, e.g., chatting about vacation or food.", "We also use Entropy-Based Data Filtering (Csky et al., 2019), which filters out high entropy utterances 2 with the goal of removing generic ones.", "We use the recommended filtering threshold of 1 As released by ParlAI (Miller et al., 2017).", "The ParlAI release of DailyDialog is tokenized and lowercased.", "Following Khayrallah and Sedoc (2020) we detokenize and recase the DailyDialog data for training.", "2 Prompts that solicit many different responses and responses that can apply to many different prompts.", "1.0 and IDENTITY' clustering.", "We filter based on their source', target', and both' settings.", "We consider target' as the baseline, as they find it works best.", "We denote models trained on DailyDialog as DD and models trained on Csky et", "al.'s entropy filtered version as EF .", "We use the single-reference and multi-reference 3 automatic evaluation framework for DailyDialog released by Gupta et al. (2019), 4 which is computed using NLG-EVAL (Sharma et al., 2017).", "5 We primarily consider multi-reference METEOR (Lavie and Agarwal, 2007); see Appendix A.7 for all metrics.", "6 4.2 Human Evaluation For human evaluation of the different systems we use crowdworkers on Amazon Mechanical Turk to judge the fluency, coherence, and interestingness of utterances on a 1-5 Likert scale (see Appendix A.4 for full details) for 100 randomly sampled evaluation set prompts.", "Four annotators judge the responses from all systems for each prompt in a single turn context.", "We remove any annotators with a linear Cohen's Kappa < 0.1 from the results.", "Following Khayrallah and Sedoc (2020), we train Transformer (Vaswani et al., 2017) chatbots in FAIRSEQ using parameters from the FLORES benchmark for low-resource MT (Guzmn et al., 2019): 7 5 -layer encoder and decoder, 512 dimensional embeddings, and 2 encoder and decoder attention heads.", "The default regularization parameters are 0 .", "2 label smoothing (Szegedy et al., 2016), 0 .", "4 dropout, and 0.2 attention & ReLU dropout.", "Some kinds of regularization (e.g., label smoothing and subword vocabularies) are not universally used", "3 For RUQ, we only use the original single-reference.", "4 github.com/prakharguptaz/multirefeval 5 github.com/Maluuba/nlg-eval 6 For reading ease, we report metrics scaled between 0 and 100 rather than 0 and 1. 7 See A.6 for full details for replication.", "in dialog.", "8 Since we are concerned with the model over-fitting on IDK, we perform a hyperparameter sweep of regularization parameters, including SentencePiece (Kudo and Richardson, 2018) vocabulary size, learning rate, dropout, attention & relu dropout, and label smoothing.", "9 We denote models trained with the FLORES hyperparameters as BASE , and the best model from the hyperparameter searches for each data type (as selected by multiple-reference METEOR) as BEST .", "We report the multi-reference METEOR scores for the BASE and BEST sysems in Table 2. 10 For the DailyDialog data we find that hyperparameter tuning can improve multiple-reference METEOR from 12.7 ( DD-BASE ) to 17.8 ( DD-BEST ).", "We perform the same hyperparameter sweep after performing entropy filtering (Csky et al., 8 For example popular toolkits for dialog (e.g., Hugging Face (Wolf et al., 2020) and ParlAI (Miller et al., 2017)) do not implement label smoothing.", "2019) on the data, but we find that the best model is still DD-BEST .", "Without hyperparameter tuning, entropy filtering improves performance by 0.5 on multi-reference METEOR, but the improvement by hyperparameter sweeping is much larger (5.1 points).", "11 We did a very thorough sweep (including values we expected to perform poorly), which led to some general takeaways: 12 Using a subword vocabulary (of 4-8k) is helpful.", "(2) Label smoothing interacts with subword vocabulary size, but is also helpful.", "We show plots for the four models in Figure 1. We plot the token normalized model score for reference and I don't know.' For additional comparison, we also plot the model scores for the", "11 We note that Csky et al. (2019)who proposed entropy filtering and an observed a 1 BLEU point improvement from using it (we observed a 0.3 improvement in single reference BLEU)did not use any subwords units; they used a total vocab size of 16k.", "Our 10 best systems all had Sentencepiece vocab sizes of 2k, 4k, or 8k, so perhaps this difference may explain the discrepancy between their results and our replication.", "We note that for the 3 metrics which we believe our evaluations are comparablesingle reference Embedding Average Cosine Similarity, and single reference Vector Extrema Cosine Similarityour baseline outperforms their results.", "The BLEU scores are not directly comparable because they report sentence BLEU, while we report corpus BLEU following Gupta et al. (2019).", "12 See chateval.org/RUQ for automatic metrics on the full hyper parameter sweep.", "beam-search output and I don't know what to do.' Overall, we observe that for the BASE models the IDKs are higher probability than the reference, even on the training data.", "This is problematic, because the model is ranking a response that is not providing enough QUANTITY of information higher than the reference despite the fact that it should know ' the training data.", "The relative difference in probabilities is much better in DDBEST than DD-BASE , particularly on the training set.", "Simply entropy filtering the data alone does not fix the problem.", "We summarize QUANTITY in a single statistic by counting how many times the reference has a higher probability than I don't know.' on the training data.", "Entropy filtering improves how often the reference is preferred to I don't know.', but not by as much as the hyperparameter sweep does, see Table 3 for the RUQ scores on the training data.", "13 For both DD-BASE and EF-BASE , IDK is preferred over the reference response the model was trained on over half of the time (71.5% for DD , 62.1% for EF ).", "Table 4 shows human judgments of fluency, coherence, and interestingness.", "14 The models trained on DailyDialog have higher fluency and coherence, while the models trained on the filtered data have higher interestingness.", "For both kinds of data, the hyperparameter tuning (as selected by METEOR) improved interestingness.", "Fluency did not change.", "Coherence was reduced for the filtered models and improved for the base model.", "Improved RUQ may be reflected in either interestingness or coherence, but other factors can influence those judgments.", "Therefore, measuring RUQ directly is important to measuring progress on the IDK problem.", "The relative RUQ rankings of the four systems we consider in this work are the same as the relative rankings by multi-reference METEOR, and DD-BEST (the single best model according to mulit-reference METEOR) is also the one with the highest RUQ score.", "Among all models in the hyperparameter sweep, RUQ is correlated with METEOR with Spearman's of 0.9 but this drops to 0.6 when considering only the top 20 systems, demonstrating that RUQ and METEOR do not capture the same phenomenon.", "We note that RUQ on the training data does not require a particular (multi-reference) test set like most automatic evaluation metrics.", "RUQ simply diagnoses how well the model learned the training data compared to a generic response.", "The model's relative preference of IDK over the (presumably) better reference response is not only a QUANTITY violation, but is also indicative of a fundamental problem with the models themselves, and should be fixed before decoding time (either by correcting the data, or by correcting the model).", "Csky et al. (2019) argue that the IDK problem is due to the one-to-many/many-to-one nature of dialog training dataif a single response applies to many different responses, it will become the canonical response.", "Therefore their entropy filtering method removes one-to-many/many-to-one pairs, by removing high entropy responses.", "While this data filtering reduces the problem, we found that the baseline model trained on the 14 A.5 discusses head to head judgments.", "entropy filtered data ( EF-BASE ) still preferred IDK over the reference the majority of the time, suggesting opportunities for future research on the IDK problem.", "Gricean Maxims in NLP Gricean maxims have previously been discussed in NLP.", "Bernsen et al. (1996) examine the relationship between a new set of maxims for human-bot dialogs and relate them to Gricean maxims.", "They point out that these do not entirely overlap; however, the maxim of Quantity is preserved since unambiguous contributing responses are required in conversations in general.", "(Harabagiu et al., 1996) attempt to explicitly create an evaluation methodology using sets of primitive rules and WordNet.", "Our approach is different as RUQ is a diagnostic metric.", "Jwalapuram (2017) propose a Gricean dialog evaluation where humans rate performance on a Likert scale for each category.", "Qwaider et al. (2017) consider the QUANTITY , RELATION , and MANNER maxims for ranking community question answers.", "They use other NLP tools to evaluate if the response has key elements or named entities ( QUANTITY / RELATION ), has high semantic similarity ( RELATION ), and includes/excludes positive/negative polarity terms ( MANNER ).", "Chatbot evaluation Automatic evaluations for dialog typically measure lexical or semantic similarity between a produced response and a reference, under the assumption that the reference is a good response and responses similar to it will be good as well.", "Since there are often multiple valid responses to a prompt, this can be extended to multiple references too.", "In contrast, in our work we compare a model's score of a reference to a model's score of a generic response for directed analysis.", "HUSE (Hashimoto et al., 2019) uses the model score combined with human judgments to evaluate diversity and quality, classifying a response as humanor machine-generated.", "Our work does not require human judgments, and compares the model score of a generic response to the reference response.", "Mehri and Eskenazi (2020a) also use scoring from a model.", "Whereas that work is using an external model, we propose an intrinsic diagnostic for a particular phenomenon.", "Each serves a different purpose, and an advantage of our method is our analysis does not require an external model, which might not be available in all languages and for all types of text.", "Mitigating the IDK Problem A variety of approaches have been proposed to mitigate the IDK problem.", "These include active postprocessing methods such as MMI (Li et al., 2016a), as well as training data filtration (Csky et al., 2019), reinforcement learning (Li et al., 2016b) and unlikelihood training (Welleck et al., 2020).", "In our work, we propose an intrinsic model diagnostic to analyze the problem.", "MMI Maximum Mutual Information was proposed as a Diversity-Promoting Objective Function' for dialog (Li et al., 2016a).", "MMI-bidi encourages the prompt to be predictable from the response, by using a reverse direction model.", "We argue this was not diversity broadly speaking, but actually tackling a RELEVANCY problem, since it is scoring how predictable the prompt is from the response.", "Li et al. demonstrate MMI improves performance, though recent work found that it does not always (Khayrallah and Sedoc, 2020).", "Copying in Machine Translation Ott et al. (2018) found that copying was overrepresented in the output of RNN NMT.", "Using an analysis that inspired RUQ plots they compare the score of the beamsearch output to that of the copied source.", "They also consider the probability at each position in the output, and find the model is unlikely to start copying; however, after starting to copy continuing to copy has high probability.", "We find IDK has a relatively high score from the start, though for some models the gap widens towards the end of the sentence.", "We reframe the IDK problem as a violation of the Gricean maxim of QUANTITY , and introduce a new measureRelative Utterance Quantity (RUQ) which allows researchers to diagnose if their model is violating this particular conversational principle, and analyze methods that aim to address it.", "We aim to encourage further discussion and research drawing on linguistic principles about discourse and pragmatics for analysis of dialog models.", "We thank Patrick Xia, Nathaniel Weir, Rachel Rudinger, and Claire Daniele for their helpful comments and feedback on the paper.", "We additionally thank the reviewers for their insightful comments.", "This work was supported in part by DARPA KAIROS (FA8750-19-2-0034).", "The views and conclusions contained in this work are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government." ]
[ "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "result", "objective", "method", "abstain", "objective", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "other", "method", "other", "objective", "objective", "other", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "abstain", "objective", "objective", "other", "other", "other", "other" ]
[ "Semantic dependency parsing, which aims to find rich bi-lexical relationships, allows words to have multiple dependency heads, resulting in graph-structured representations.", "We propose an approach to semi-supervised learning of semantic dependency parsers based on the CRF autoencoder framework.", "Our encoder is a discriminative neural semantic dependency parser that predicts the latent parse graph of the input sentence.", "Our decoder is a generative neural model that reconstructs the input sentence conditioned on the latent parse graph.", "Our model is arc-factored and therefore parsing and learning are both tractable.", "Experiments show our model achieves significant and consistent improvement over the supervised baseline.", "Semantic dependency parsing (SDP) is a task aiming at discovering sentence-internal linguistic information.", "The focus of SDP is the identification of predicate-argument relationships for all content words inside a sentence (Oepen et al., 2014, 2015).", "Compared with syntactic dependencies, semantic dependencies are more general, allowing a word to be either unattached or the argument of multiple predicates.", "The set of semantic dependencies within a sentence form a directed acyclic graph (DAG), distinguishing SDP from syntactic dependency parsing tasks, where dependencies are usually tree-structured.", "Extraction of such high-level structured semantic information potentially benefits downstream NLP tasks (Reddy et al., 2017; Schuster et al., 2017).", "Several supervised SDP models are proposed in the recent years by modifying syntactic dependency parsers.", "Their parsing mechanisms are either transition-based (Kanerva et al., 2015; Wang et al., Corresponding author. 2018) or graph-based (Martins and Almeida, 2014; Peng et al., 2017; Dozat and Manning, 2018; Wang et al., 2019).", "One limitation of supervised SDP is that labeled SDP data resources are limited in scale and diversity.", "Due to the rich relationships in SDP, the annotation of semantic dependency graphs is expensive and difficult, calling for professional linguists to design rules and highly skilled annotators to annotate sentences.", "This limitation becomes more severe with the rise of deep learning, because neural approaches are more data-hungry and susceptible to over-fitting when lacking training data.", "To alleviate this limitation, we investigate semi-supervised SDP capable of learning from both labeled and unlabeled data.", "While a lot of work has been done on supervised SDP, the research of unsupervised and semi-supervised SDP is still lacking.", "Since parsing results of semantic dependencies are DAGs without the tree-shape restriction, most existing successful unsupervised (Klein and Manning, 2004; I. Spitkovsky et al., 2010; Jiang et al., 2016; Cai et al., 2017) and semi-supervised (Koo et al., 2008; Druck et al., 2009; Suzuki et al., 2009; Corro and Titov, 2019) learning models for syntactic dependency parsing cannot be applied to SDP directly and it would be non-trivial to extend these models for SDP.", "There also exist several unsupervised (Poon and Domingos, 2009; Titov and Klementiev, 2011) and semi-supervised (Das and Smith, 2011; Kocisk`y et al., 2016; Yin et al., 2018) methods for semantic parsing, but these models are designed for semantic representations different from dependency graphs, making their adaptation to SDP difficult.", "In this work, we propose an end-to-end neural semi-supervised model leveraging both labeled and unlabeled data to learn a dependency graph parser.", "Our model employs the framework of Conditional Random Field Autoencoder (Ammar et al., 2014), modeling the conditional reconstruction probability given the input sentence with its dependency graph as the latent variable.", "Our encoder is the supervised model of Dozat and Manning (2018), formulating an SDP task as labeling each arc in a directed graph with a simple neural network.", "Analogous to a CRF model (Sutton et al., 2012), our encoder is capable of computing the probability of a dependency graph conditioned on the input sentence.", "The decoder is a generative model based on recurrent neural network language model (Mikolov et al., 2010), which formulates the probability of generating the input sentence, but we take into account the information given by the dependency parse graphs when generating the input.", "Our model is arc-factored, i.e., the encoding, decoding and reconstructing probabilities can all be factorized into the product of arc-specific quantities, making both learning and parsing tractable.", "A unified learning objective is defined that takes advantage of both labeled and unlabeled data.", "Besides, compared with previous semi-supervised approaches based on Variational Autoencoder (Kingma and Welling, 2013), our learning process does not involve sampling, promising better stability.", "We evaluate our model on SemEval 2015 Task 18 Dataset (English) (Oepen et al., 2015) and find that our model consistently outperforms the supervised baseline.", "We also conduct detailed analysis showing the benefits of different amounts of unlabeled data.", "Our model is based on the CRF autoencoder framework (Ammar et al., 2014) which provides a unified fashion for structured predictors to leverage both labeled and unlabeled data.", "A CRF autoencoder aims to produce a reconstruction of the input X from the original input X with an intermediate latent structure Y .", "It is trained to maximize the conditional reconstruction probability P (X = X | X) with the latent variable Y marginalized.", "Ideally, successful reconstruction implies that the latent structure captures important information of the input.", "We adopt the following notations when describing our model.", "We represent a vector in lowercase bold, e.g., s , and use a superscript for indexing, e.g., s i for the i -th vector.", "We represent a scalar in lowercase italics, e.g., s , and use a subscript for indexing, e.g., s i for the i -th element of vector s .", "An uppercase italic letter such as Y denotes a matrix.", "A lower case letter with a subscript pair such as y i,j refers to the element of matrix Y at row i and column j .", "An uppercase bold letter, e.g., U , stands for a tensor.", "We maintain this convention when indexing, e.g., y i is the i -th row of matrix Y .", "In our model, the input is a natural language sentence consisting of a sequence of words.", "A sentence with m words is represented by s = ( s 0 , s 1 , s 2 , . . . , s m ) , where s 0 is a special token TOP.", "The latent variable produced by our encoder is a dependency parse graph of the input sentence, represented as a matrix of booleans Y { 0 , 1 } ( m +1) ( m +1) , where y i,j = 1 indicates that there exists an dependency arc pointing from word s i to word s j .", "The reconstructed output generated by our decoder is a word sequence s = ( s 1 , s 2 , . . . , s m ) .", "Our encoder with parameters computes P ( Y | s ) , the probability of generating a dependency parse graph Y given a sentence s .", "Our decoder with parameters computes P ( s | Y ) , the probability of reconstructing sentence s conditioned on the parse graph Y .", "The encoder and decoder in combination specify the following conditional distribution.", "where Y is the set of all possible dependency parse graphs of s .", "During training, we set s = s and maximize the conditional reconstruction probability P ( s | s ) .", "Note that throughout our model, we only consider dependency arc predictions (i.e., whether an arc exists between each word pair).", "Arc-labels will be learned separately as described in Section 3. We leave the incorporation of arc-label prediction in our model for future work.", "Our encoder can be any arc-factored discriminative SDP model.", "Here we adopt the model of Dozat and Manning (2018), one of the best-performing SDP models, which formulates the semantic dependency parsing task as independently labeling each arc in Embedding FNN BiLSTM Biaff (dep) (head) Figure 1: Illustration of the encoder, following the design of Dozat and Manning (2018).", "a directed complete graph.", "To predict whether or not a directed arc ( s i , s j ) exists, the model computes contextualized representations of s i and s j and feeds them into a binary classifier.", "The architecture of our encoder is shown in Figure 1. Word, part-of-speech tag (for short, POS tag), and lemma embeddings 1 of each word in the input sentence are concatenated and fed into a multilayer bi-directional LSTM to get a contextualized representation of the word.", "where e (word) i , e (tag) i and e (lemma) i are notations for the word, POS tag and lemma embedding respectively, concatenated ( ) to form an embedding x i for word s i .", "Stacking x i for i = 0 , 1 , . . . , m forms matrix X .", "The contextualized word representation is then fed into two single-layer feedforward neural networks (FNN) with different parameters to produce two vectors: one for the representation of the word as a dependency head and the other for the representation of the word as a dependent.", "They are denoted as h (head) i and h (dep) i respectively.", "Finally, a biaffine function is applied to every arc between word pairs ( s i , s j ) to obtain an arc-existence score i,j .", "i,j = h (head) (cid:62) i W h (dep) j + b 1 The latest experimental results in Dozat and Manning (2018) show that using lemma embedding improves performance even further while including character-level word embedding produces little effect.", "Thus unless stated otherwise, our model makes use of lemma embeddings by default.", "where W is a square matrix of size d d ( d is the size of vector h (head) i and h (dep) j ) , and b is a scalar.", "The likelihood of every arc's presence given a sentence, P ( y i,j = 1 | s ) , can be computed by applying a sigmoid function on score i,j .", "The arc-absence probability P ( y i,j = 0 | s ) is evidently 1 P ( y i,j = 1 | s ) .", "To conclude, the probability of producing a dependency parse graph Y from the encoder given an input sentence s can be computed as below.", "Our generative decoder is based on recurrent neural network language models (Mikolov et al., 2010), but we take dependency relationships into account during reconstruction.", "Our inspiration sources from the decoder with a Graph Convolutional Network (GCN) used by Corro and Titov (2019) to incorporate tree-structured syntactic dependencies when generating sentences, but our decoder differs significantly from theirs in that ours handles parse graphs and is arc-factored.", "As mentioned above, semantic dependency parsing allows a word to have multiple dependency heads.", "If we generate a word conditioned on multiple heads, then it becomes difficult if not impossible to make the decoder arc-factored and hence we may have to enumerate all parse graphs during parsing and learning, which is intractable.", "Instead, we propose to generate a word for multiple times, each time conditioned on a different head, which leads to a fully arc-factored generative decoder and hence tractable parsing and learning.", "Specifi-cally, we split dependency graph Y of a sentence s = ( s 0 , s 1 , . . . , s m ) with m words and a TOP token into m + 1 parts: Y = [ y 0 ; y 1 ; y 2 ; . . . ; y m ] Each y i is the i -th row of Y , representing a subgraph where arcs are rooted at the i -th word of the sentence s .", "Mathematically, we have y i = { y i,j | j (1 , 2 , ..., m ) } .", "We then generate m + 1 sentences ( s 0 , s 1 , s 2 , . . . , s m ) using m + 1 neural generators.", "The generation of sentence s i is guided by the i -th sub-graph y i .", "Each generator is a left-to-right LSTM language model and computes P ( s ki | s k 0: i 1 , y k,i ) , the probability of generating !", "We share parameters among all the m + 1 generators.", "Figure 2 shows an example for computing the generative probability of s k by the k -th generator ( k { 0 , 1 , . . . , m } ) that incorporates the information of the k -th sub-graph y k .", "Recall that y k contains only dependencies rooted at s k .", "Below we describe how to compute the generative probability of each word s ki with and without the dependency arc ( s k , s i ) respectively.", "Generative probability with a dependency Suppose there is a dependency arc from s k to s i , we need to compute the generative probability P ( s ki | s k 0: i 1 , y k,i = 1) .", "The LSTM in the k -th generator takes the embedding of the previous word s i 1 computed through Eq.1 as its input and outputs the hidden state g i 1 , which is fed into an FNN to produce a representation m (pre) i 1 .", "Meanwhile, the embedding of the k -th word (also computed through Eq.1) is fed into another FNN to get its representation m (head) k as a dependency head.", "G = LSTM ( X ) m (pre) i 1 = FNN (dec pre) ( g i 1 ) (2) m (head) k = FNN (dec head) ( x k ) (3) m (head) k and m (pre) i 1 are fed into a bilinear function to obtain a vocabulary-size score vector ki .", "Here, U is a tensor of size d V d , where V is the vocabulary size and d is the size of vector m (head) k and m (pre) i 1 .", "To conserve parameters, the tensor U is diagonal (i.e., u i,k,j = 0 wherever i (cid:54) = j ).", "A softmax function can then be applied to ki , from which we pick the generative probability of s ki .", "Generative probability without a dependency Suppose there is no dependency arc from s k to s i .", "In this case, reconstruction of s ki resembles a normal recurrent neural network language model.", "The representation m (pre) i 1 from Eq.2 is fed into a fully connected layer to get ki , a vector of vocabulary size containing generative scores of all the words.", "The generative probability P ( s ki | s k 0: i 1 , y k,i = 0) can then be computed by applying a softmax function on ki and selecting the corresponding probability of s ki .", "Since we simply reconstruct word s i without considering the dependency arc information, this probability is exactly the same in the m + 1 generators and only needs to be computed once.", "To conclude the overall design of our decoder, it is worth noting that in m + 1 generation processes, parameters among all LSTMs are shared, as well as those among all FNNs 2 and FCs.", "Still, embeddings in Eq.1 are shared among both encoder and decoder.", "With P ( s ki | s k 0: i 1 , y k,i ) computed for i = 1 , . . . , m , k = 0 , 1 , . . . , m , the probability of generating s 0 , s 1 , s 2 , . . . , s m from dependency graph Y can be computed through: P ( s 0 , . . . , s m | Y ) = m (cid:89) k =0 P ( s k | y k ) = m (cid:89) k =0 m (cid:89) i =1 P ( s ki | s k 0: i 1 , y k,i ) In our model, we are only interested in the case where all the m + 1 sentences are the same.", "In addition, to balance the influence of the encoder and the decoder, we take the geometric mean of the m + 1 probabilities.", "The final decoding probability is defined as follows.", "2 FNN (dec pre) and FNN (dec head) never share parameters between each other, since their usages are different.", "Given parameters { , } of our encoder and decoder, we can parse a sentence s by finding a Y Y ( s ) which maximizes probability P ( s = s , Y | s ) , where Y ( s ) is the set of all parse graphs of sentence s .", "= arg max Y Y ( s ) log P , ( s , Y | s ) (6) = arg max Y Y ( s ) log P ( Y | s ) P ( s | Y ) = arg max Y Y ( s ) (cid:88) i,j (cid:16) log P ( y i,j | s ) + 1 m + 1 log P ( s j | s 0: j 1 , y i,j ) (cid:17)", "Since the probability is arc-factored, we can determine the existence of each dependency arc independently by picking the value of y i,j that maximizes the corresponding term.", "The time complexity of our parsing algorithm is O ( m 2 ) for a sentence with m words.", "Since we want to train our model in a semi-supervised manner, we design loss functions for labeled and unlabeled data respectively.", "For each training sentence s , the overall loss function is defined as a combination of supervised loss L l and unsupervised loss L u .", "Supervised Loss For any labeled sentence ( s , Y ) , where s stands for a sentence and Y stands for a gold parse graph, we can compute the discriminative loss.", "L l ( s ) = log P , ( s = s , Y | s ) (8) Following the derivation of Eq.6, we have: log P , ( s , Y | s ) = (cid:88) i,j (cid:16) log P ( y i,j | s ) + 1 m + 1 log P ( s j | s 0: j 1 , y i,j ) (cid:17) Usage Source Sentences Tokens train WSJ Sec.00-20 35,656 802,717 test (id) WSJ Sec.21 1,410 31,948 test (ood) Brown 1,849 31,583 Table 1: The sources and scale of the SDP 2014 & 2015 (English) dataset.", "Gold parses also provide a label for each dependency.", "We follow Dozat and Manning (2018) and model dependency labels with a purely supervised module on top of the BiLSTM layer of the encoder.", "Its parameters are learned by optimizing a cross-entropy loss function.", "Unsupervised Loss For any unlabeled sentence s , we maximize the conditional reconstruction probability P ( s = s | s ) .", "The unsupervised loss is: L u ( s ) = log P , ( s | s ) (9) = log (cid:88) Y Y ( s ) P , ( Y, s | s ) = log (cid:88) Y Y ( s ) P ( Y | s ) P ( s | Y ) = (cid:88) i,j log (cid:88) y i,j { 0 , 1 } (cid:16) P ( y i,j | s ) m +1 (cid:113) P ( s j | s 0: j 1 , y i,j ) (cid:17) Derivations of Eq.9 are provided in the Appendix A. Given a dataset containing both labeled and unlabeled sentences, our model can be trained end-to-end by optimizing the loss function Eq.7 over the combined dataset using any gradient based method.", "Dataset We examine the performance of our model on the English corpus of the SDP 2014 & 2015: Broad Coverage Semantic Dependency Parsing dataset (Oepen et al., 2015).", "The corpus is composed of three distinct and parallel semantic dependency annotations (DM, PAS, PSD) of Sections 00-21 of the WSJ Corpus, as well as a balanced sample of twenty files from the Brown Corpus.", "The scale of this dataset is shown in Table 1. Hidden Layer Hidden Sizes Word/GloVe/POS/Lemma/Char 100 GloVe Linear 125 Encoder BiLSTM 3*600 Encoder FNN(head) 1*600 Encoder FNN(dep) 1*600 Decoder UniLSTM 1*600 Decoder FNN(head) 1*400 Decoder FNN(pre) 1*400 Dropouts Dropout Prob.", "We evaluate the performance of models through two metrics: Unlabeled F1 score (UF1) and Labeled F1 score (LF1).", "UF1 measures the accuracy of the binary classification of arc existence, while LF1 measures the correctness of each arc-label as well.", "Unless stated otherwise, we report scores averaged over three runs.", "Network Configuration For our encoder, we adopt the hyper-parameters of Dozat and Manning (2018).", "Following Dozat and Manning (2018), we concatenate pre-trained 100-dimensional GloVe embeddings (Pennington et al., 2014) linearly transformed to 125-dimension into our input word embeddings.", "Words or lemmas whose occurrences are less than 7 times within the training set are treated as UKN.", "For our decoder, we set the number of layer(s) of uni-directional LSTM to 1, whose recurrent hidden size is 600.", "For FNN (dec head) and FNN (dec pre) , the output sizes are both 400, activated by a tanh ( ) function.", "Learning Our loss function (Eq.7) is optimized by the Adam+AMSGrad optimizer (Reddi et al., 2018), with hyper-parameters 1 , 2 kept the same as those of Dozat and Manning (2018).", "The interpolation constant is tuned with the size of unlabeled data.", "A detailed table of hyper-parameters is provided in Table 2. The training time for one batch with our autoencoder is 23 times of that of Dozat and Manning (2018) because of the extra decoder.", "In our first experiment (with the DM annotations only), we fix the amount of labeled data and continuously incorporate more unlabeled data into the training set.", "Specifically, we randomly sample 10% of the whole dataset as labeled data.", "Unlabeled data are then sampled from the remaining part (with their gold parses removed), with a proportion increasing from 0% to 90% of the complete dataset.", "For unlabeled data, we find that long sentences do not help in improving F1 scores and therefore in this and all the subsequent experiments we remove unlabeled sentences longer than 20 to reduce the running time and memory usage.", "Experimental results are visualized in Figure 3. It is observed that in the purely supervised setting (i.e., +0% unlabeled data), our model already outperforms the baseline (Dozat and Manning, 2018).", "Since our encoder is exactly the baseline model, this shows the benefit of adding the decoder for joint learning and parsing even in the supervised setting.", "With an increasing size of unlabeled data, we can see the increase in performance of our model, especially when evaluated on out-of-domain data, suggesting the benefit of semi-supervised learning with our model.", "In our second experiment (again with the DM an-notations), we use the full training set and vary the proportion of labeled and unlabeled data.", "Experimental results are shown in Table 3. Our semi-supervised model shows the largest advantage over the supervised models with the 0.1:9.9 proportion (which contains only 339 labeled sentences), Models Labeled:Unlabeled 0.1:9.9 1:9 3:7 5:5 10:0 UF1 LF1 UF1 LF1 UF1 LF1 UF1 LF1 UF1 LF1 id D&M 75.21 70.70 88.32 86.60 91.65 90.52 92.81 91.90 94.11 93.38 Ours-Sup 75.52 70.59 88.58 86.74 91.88 90.73 92.99 92.05 94.30 93.55 Ours-Semi 76.73 72.16 88.98 87.11 92.04 90.92 93.02 92.07 -ood D&M 70.51 65.63 83.15 80.87 86.91 85.17 88.35 86.93 90.01 88.87 Ours-Sup 70.53 65.48 83.33 80.92 87.16 85.45 88.63 87.24 90.22 89.05 Ours-Semi 72.18 67.30 83.93 81.48 87.43 85.70 88.67 87.28 -Table 3: Experimental results with varying proportions of labeled and unlabeled data.", "With the increased proportion of labeled data, the performance of all the models goes up, but the advantage of our semi-supervised model diminishes.", "This is consistent with the tendency of many semi-supervised approaches to work well when given small labeled data but have diminishing effectiveness when adding more labeled data.", "Another worth-noting observation is that the superiority of our semi-supervised model is much stronger on the out-of-domain tests, which suggests good generalizability of our semi-supervised model.", "In the previous two experiments, we evaluate our model on the DM representation.", "Here we evaluate our model on all the three representations: DM, PAS and PSD.", "We slightly tune the hyper-parameters based on the optimal values from the previous experiments of the DM representation.", "We use 10% of the sentences as labeled data and the rest 90% of the sentences as unlabeled data.", "For the completeness of our experiment, we follow Dozat and Manning (2018) and examine four different word representations: basic (i.e., using only word and POS tag embeddings), +Lemma (i.e., using word, POS tag and lemma embeddings), +Char (i.e., using word, POS tag and character embeddings) and +Lemma+Char (i.e. using word, POS tag, lemma and character embeddings).", "Table 4 shows the experimental results of +Lemma , the default word representation.", "The results of the other word representations show very similar trends (see the Table 7 in Appendix B).", "We observe significant improvement of our semi-supervised model over the two supervised baselines on both DM and PSD representations.", "However, it is surprising to find that on the PAS representation, our semi-supervised model exhibits little advantage over its supervised counterpart.", "One possible explanation, as Dozat and Manning (2018) also noted, is that PAS is the easiest of the three representations (as can be seen by comparing the scores of the three representations in Table 4) and our supervised model may already reach the performance ceiling.", "We empirically study alternative structures of our decoder.", "In the first variant, we remove the LSTM layer of our decoder, so each word s i is generated without access to the generation history before s i 1 .", "In the second variant, we replace the bilinear function in Eq.4 with a fully connected layer that takes as input either the concatenation or the summation of m (head) k and m (pre) i 1 .", "All the other settings are the same as in Section 4.4 on the DM annotation.", "Experimental results are shown in Table 5.", "We can see that these alternatives lead to worse scores, which verifies the effectiveness of our decoder design.", "To test the stability of our model, we repeat the experiment of Section 4.4 on the DM annotation for three times (without tuning hyper-parameters), each time with respect to different labeled data sampled from the training dataset.", "Table 6 shows the results.", "We observe consistent advantage of our Models DM PAS PSD Avg UF1 LF1 UF1 LF1 UF1 LF1 UF1 LF1 id D&M 88.32 86.60 91.89 90.57 88.17 73.42 89.46 83.53 Ours-Sup 88.58 86.74 92.14 90.91 88.49 73.34 89.74 83.66 Ours-Semi 88.98 87.11 92.07 90.84 88.62 73.68 89.89 83.88 ood D&M 83.15 80.87 88.34 86.32 85.10 71.30 85.53 79.50 Ours-Sup 83.33 80.92 88.57 86.68 85.09 71.11 85.66 79.57 Ours-Semi 83.93 81.48 88.61 86.68 85.30 71.46 85.95 79.87 Table 4: Experimental results on all the three representations.", "Work on unsupervised or semi-supervised dependency parsing, to the best of our knowledge, is dominated by tree-structured parsing (Koo et al., 2008; Druck et al., 2009; Suzuki et al., 2009).", "Recently, Corro and Titov (2019) introduced an approximate inference method with a Variational Autoencoder (Kingma et al., 2014) for semi-supervised syntactic dependency parsing.", "Our decoder is inspired by their work, but differs from theirs in that our decoder handles parse graphs and is arc-factored.", "Cai et al. (2017) used the framework of CRF Autoencoder (Ammar et al., 2014) to perform unsupervised syntactic dependency parsing.", "The same framework has been used by Zhang et al. (2017) for semi-supervised sequence labeling.", "Our work also adopts the CRF Autoencoder framework, but with both the encoder and the decoder redesigned for semantic dependency parsing.", "Existing unsupervised and semi-supervised approaches to semantic parsing focused on semantic representations different from dependency graphs, e.g., general-purpose logic forms (Sondheimer and Nebel, 1986) and formal meaning representations (Bordes et al., 2012).", "Poon and Domin-gos (2009) presented the first unsupervised semantic parser to transform dependency trees into quasi-logical forms with Markov logic.", "Following this work, Titov and Klementiev (2011) proposed a non-parametric Bayesian model for unsupervised semantic parsing using hierarchical Pit-manYor process (Teh, 2006).", "Das and Smith (2011) described a semi-supervised approach to frame-semantic parsing.", "Kocisk`y et al. (2016) proposed a semi-supervised semantic parsing approach making use of unpaired logical forms with sentence being unobserved.", "Recently, Yin et al. (2018) proposed a variational autoencoding model for semi-supervised semantic parsing of tree-structured semantic representations.", "Take Yin et al. (2018) for example.", "To extend their approach for SDP, one needs to design a different transition system for their encoder for graph parsing and design a graph linearization method for their sequence-to-sequence decoder.", "In addition, SDP-specific constraints (e.g., the graph structure contains exactly the same set of words as the sentence) shall be incorporated into their model.", "Therefore, previous semi-supervised semantic parsing models cannot be applied to SDP directly and modifying them for SDP is non-trivial.", "We leave for future work such modification and extension of previous semi-supervised semantic parsing approaches to SDP.", "In this work, we proposed a semi-supervised learning model for semantic dependency parsing using CRF Autoencoders.", "Our model is composed of a discriminative neural encoder producing a dependency graph conditioned on an input sentence, and Models Data1 Data2 Data3 Avg UF1 LF1 UF1 LF1 UF1 LF1 UF1 LF1 id D&M 88.25 86.55 88.70 87.09 88.49 86.85 88.48 86.83 Ours-Sup 88.68 86.84 88.79 86.96 88.71 86.97 88.73 86.92 Ours-Semi 88.95 87.07 89.24 87.45 88.97 87.19 89.05 87.24 ood D&M 83.12 80.89 83.30 81.01 83.62 81.26 83.35 81.05 Ours-Sup 83.36 80.98 83.46 81.05 84.00 81.62 83.60 81.22 Ours-Semi 83.94 81.55 83.87 81.51 84.11 81.68 83.97 81.58 Table 6: Experimental results on three randomly sampled datasets.", "a generative neural decoder for input reconstruction based on the dependency graph.", "The model works in an arc-factored fashion, promising end-to-end learning and efficient parsing.", "We evaluated our model under both full-supervision settings and semi-supervision settings.", "Our model outperforms the baseline on multiple target representations.", "By adding unlabeled data, our model exhibits further performance improvements.", "In particular, our semi-supervised model performs well in the low resource setting and on the out-of-domain test set.", "This points to future directions of applying our model to low-resource languages and cross-domain settings.", "Our code is publicly available at https://github.com/JZXXX/Semi-SDP .", "This work was supported by the National Natural Science Foundation of China (61976139)." ]
[ "abstain", "objective", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "objective", "method", "method", "method", "method", "abstain", "abstain", "method", "result", "result", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "objective", "other", "other", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "result", "abstain", "abstain", "method", "result", "method", "method", "method", "other", "other" ]
[ "If two sentences have the same meaning, it should follow that they are equivalent in their inferential properties, i.e., each sentence should textually entail the other.", "However, many paraphrase datasets currently in widespread use rely on a sense of paraphrase based on word overlap and syntax.", "Can we teach them instead to identify paraphrases in a way that draws on the inferential properties of the sentences, and is not over-reliant on lexical and syntactic similarities of a sentence pair?", "We apply the adversarial paradigm to this question, and introduce a new adversarial method of dataset creation for paraphrase iden-tification: the Adversarial Paraphrasing Task (APT), which asks participants to generate semantically equivalent (in the sense of mutually implicative) but lexically and syntactically disparate paraphrases.", "These sentence pairs can then be used both to test paraphrase identifi-cation models (which get barely random accuracy) and then improve their performance.", "To accelerate dataset generation, we explore automation of APT using T5, and show that the resulting dataset also improves accuracy.", "We discuss implications for paraphrase detection and release our dataset in the hope of making paraphrase detection models better able to detect sentence-level meaning equivalence.", "Although there are many definitions of paraphrase' in the NLP literature, most maintain that two sentences that are paraphrases have the same meaning or contain the same information.", "Pang et al. (2003) define paraphrasing as expressing the same information in multiple ways and Bannard and Callison-Burch (2005) call paraphrases alternative ways of conveying the same information.", "Ganitke-vitch et al. (2013) write that paraphrases are differing textual realizations of the same meaning.", "A definition that seems to sufficiently encompass the others is given by Bhagat and Hovy (2013): paraphrases are sentences or phrases that use different wording to convey the same meaning .", "However, even that definition is somewhat imprecise, as it lacks clarity on what it assumes meaning' means.", "If paraphrasing is a property that can hold between sentence pairs, 1 then it is reasonable to assume that sentences that are paraphrases must have equivalent meanings at the sentence level (rather than exclusively at the levels of individual word meanings or syntactic structures).", "Here a useful test is that recommended by inferential role semantics or inferentialism (Boghossian, 1994; Peregrin, 2006), which suggests that the meaning of a statement s is grounded in its inferential properties: what one can infer from s and from what s can be inferred.", "Building on this concept from inferentialism, we assert that if two sentences have the same inferential properties, then they should also be mutually implicative.", "Mutual Implication (MI) is a binary relationship between two sentences that holds when each sentence textually entails the other (i.e., bidirectional entailment).", "MI is an attractive way of operationalizing the notion of two sentences having the same meaning, as it focuses on inferential relationships between sentences (properties of the sentences as wholes) instead of just syntactic or lexical similarities (properties of parts of the sen-tences).", "As such, we will assume in this paper that two sentences are paraphrases if and only if they are MI .", "2 In NLP, modeling inferential relationships between sentences is the goal of the textual entailment, or natural language inference (NLI) tasks (Bowman et al., 2015).", "We test MI 1 In this paper we study paraphrase between sentences, and do not address the larger scope of how our work might extend to paraphrasing between arbitrarily large text sequences.", "2 The notations used in this paper are listed in Table 1. using the version of RoBERTa large released by Nie et al. (2020) trained on a combination of SNLI (Bowman et al., 2015), multiNLI (Williams et al., 2018), FEVER-NLI (Nie et al., 2019), and ANLI (Nie et al., 2020).", "Owing to expeditious progress in NLP research, performance of models on benchmark datasets is plateauing' with near-human performance often achieved within a year or two of their release and newer versions, using a different approach, are constantly having to be created, for instance, GLUE (Wang et al., 2019) and SuperGLUE (Wang et al., 2020).", "The adversarial paradigm of dataset creation (Jia and Liang, 2017a,b; Bras et al., 2020; Nie et al., 2020) has been widely used to address this plateauing,' and the ideas presented in this paper draw inspiration from it.", "In the remainder of this paper, we apply the adversarial paradigm to the problem of paraphrase detection, and demonstrate the following novel contributions : We use the adversarial paradigm to create a new benchmark examining whether paraphrase detection models are assessing the meaning equivalence of sentences rather than being over-reliant on word-level measures.", "We do this by collecting paraphrases that are MI but are as lexically and syntactically disparate as possible (as measured by low BLEURT scores).", "We call this the Adversarial Paraphrasing Task (APT).", "We show that a SOTA language model trained on paraphrase datasets perform poorly on our benchmark.", "However, when further trained on our adversarially-generated datasets, their MCC scores improve by up to 0.307.", "We create an additional dataset by training a paraphrase generation model to perform our adversarial task, creating another large dataset that further improves the paraphrase detection models' performance.", "We propose a way to create a machine-generated adversarial dataset and discuss ways to ensure it does not suffer from the plateauing that other datasets suffer from.", "Paraphrase detection (given two sentences, predict whether they are paraphrases) (Zhang and Patrick,", "2005; Fernando and Stevenson, 2008; Socher et al., 2011; Jia et al., 2020) is an important task in the field of NLP, finding downstream applications in machine translation (Callison-Burch et al., 2006; Apidianaki et al., 2018; Mayhew et al., 2020), text summarization, plagiarism detection (Hunt et al., 2019), question answering, and sentence simplifi-cation (Guo et al., 2018).", "Paraphrases have proven to be a crucial part of NLP and language education, with research showing that paraphrasing helps improve reading comprehension skills (Lee and Colln, 2003; Hagaman and Reid, 2008).", "Question paraphrasing is an important step in knowledge-based question answering systems for matching questions asked by users with knowledge-based assertions (Fader et al., 2014; Yin et al., 2015).", "Paraphrase generation (given a sentence, generate its paraphrase) (Gupta et al., 2018) is an area of research benefiting paraphrase detection as well.", "Lately, many paraphrasing datasets have been introduced to be used for training and testing ML models for both paraphrase detection and generation.", "MSRP (Dolan and Brockett, 2005) contains 5801 sentence pairs, each labeled with a binary human judgment of paraphrase, created using heuristic extraction techniques along with an SVM-based classifier.", "These pairs were annotated by humans, who found 67% of them to be semantically equivalent.", "The English portion of PPDB (Ganitkevitch et al., 2013) contains over 220M paraphrase pairs generated by meaning-preserving syntactic transformations.", "Paraphrase pairs in PPDB 2.0 (Pavlick et al., 2015) include fine-grained entailment relations, word embedding similarities, and style annotations.", "TwitterPPDB (Lan et al., 2017) consists of 51,524 sentence pairs captured from Twitter by linking tweets through shared URLs.", "This ap-proach's merit is its simplicity as it involves neither a classifier nor a human-in-the-loop to generate paraphrases.", "Humans annotate the pairs, giving them a similarity score ranging from 1 to 6.", "ParaNMT (Wieting and Gimpel, 2018) was created by using neural machine translation to translate the English side of a Czech-English parallel corpus (CzEng 1.6 (Bojar et al., 2016)), generating more than 50M English-English paraphrases.", "However, ParaNMT's use of machine translation models that are a few years old harms its utility (Nighojkar and Licato, 2021), considering the rapid improvement in machine translation in the past few years.", "To rectify this, we use the google-translate library to translate the Czech side of roughly 300k CzEng2.0 (Kocmi et al., 2020) sentence pairs ourselves.", "We call this dataset ParaParaNMT (PP-NMT for short, where the extra paraprefix re-flects its similarity to, and conceptual derivation from, ParaNMT).", "Some work has been done in improving the quality of paraphrase detectors by training them on a dataset with more lexical and syntactic diversity.", "Thompson and Post (2020) propose a paraphrase generation algorithm that penalizes the production of n-grams present in the source sentence.", "Our approach to doing this is with the APT, but this is something worth exploring.", "Sokolov and Fil-imonov (2020) use a machine translation model to generate paraphrases much like ParaNMT.", "An interesting application of paraphrasing has been discussed by Mayhew et al. (2020) who, given a sentence in one language, generate a diverse set of correct translations (paraphrases) that humans are likely to produce.", "In comparison, our work is focused on generating adversarial paraphrases that are likely to deceive a paraphrase detector, and models trained on the adversarial datasets we produce can be applied to Mayhew et", "al.'s work too.", "ANLI (Nie et al., 2020), a dataset designed for Natural Language Inference (NLI) (Bowman et al., 2015), was collected via an adversarial human-and-model-in-the-loop procedure where humans are given the task of duping the model into making a wrong prediction.", "The model then tries to learn how not to make the same mistakes.", "AFLite (Bras et al., 2020) adversarially filters dataset biases making sure that the models are not learning those biases.", "They show that model performance on SNLI (Bow-man et al., 2015) drops from 92% to 62% when biases were filtered out.", "However, their approach is to filter the dataset, which reduces its size, making model training more difficult.", "Our present work tries instead to generate adversarial examples to increase dataset size.", "Other examples of adversarial datasets in NLP include work done by Jia and Liang (2017a); Zellers et al. (2018, 2019).", "Perhaps the closest to our work is PAWS (Zhang et al., 2019), short for Paraphrase Adversaries from Word Scrambling.", "The idea behind PAWS is to create a dataset that has a high lexical overlap between sentence pairs without them being paraphrases.' It has 108k paraphrase and non-paraphrase pairs with high lexical overlap pairs generated by controlled word swapping and back-translation, and human raters have judged whether or not they are paraphrases.", "Including PAWS in the training data has shown the state-of-the-art models' performance to jump from 40% to 85% on PAWS's test split.", "In comparison to the present work, PAWS does not explicitly incorporate inferential properties, and we seek paraphrases minimizing lexical overlap.", "Semantic Textual Similarity (STS) measures the degree of semantic similarity between two sentences.", "Popular approaches to calculating STS include BLEU (Papineni et al., 2002), BertScore (Zhang et al., 2020), and BLEURT (Sellam et al., 2020).", "BLEURT is a text generation metric building on BERT's (Devlin et al., 2019) contextual word representations.", "BLEURT is warmed-up using synthetic sentence pairs and then fine-tuned on human ratings to generalize better than BERTScore (Zhang et al., 2020).", "Given any two sentences, BLEURT assigns them a similarity score (usually between -2.2 to 1.1).", "However, high STS scores do not necessarily predict whether two sentences have equivalent meanings.", "Consider the sentence pairs in Table 3, highlighting cases where STS and paraphrase appear to misalign.", "The existence of such cases suggests a way to advance automated paraphrase detection: through an adversarial benchmark consisting of sentence pairs that have the same MI-based meaning, but have BLEURT scores that are as low as possible.", "This is the motivation behind what we call the Adversarial Paraphrasing Task (APT), which has two components: 1. Similarity of meaning : Checked through MI (Section 1).", "We assume if two sentences are MI (Mutually Implicative), they are semantically equivalent and thus paraphrases.", "Note Figure 1: The mTurk study and the reward calculation.", "that MI is a binary relationship, so this APT component does not bring any quantitative variation but is more like a qualifier test for APT.", "All AP T sentence pairs are MI .", "2. Dissimilarity of structure : Measured through BLEURT, which assigns each sentence pair a score quantifying how lexically and syntactically similar the two sentences are.", "To test the effectiveness of APT in guiding the generation of mutually implicative but lexically and syntactically disparate paraphrases for a given sentence, we designed an Amazon Mechanical Turk (mTurk) study (Figure 1).", "Given a starting sentence, we instructed participants to [w]rite a sentence that is the same in meaning as the given sentence but as structurally different as possible. Your sentence should be such that you can infer the given sentence from it AND vice-versa. It should be sufficiently different from the given sentence to get any reward for the submission. For example, a simple synonym substitution will most likely not work.", "The sentences given to the participants came from MSRP and PPNMT (Section 1).", "Both of these datasets have pairs of sentences in each row, and we took only the first one to present to the participants.", "Neither of these datasets has duplicate sentences by design.", "Every time a sentence was selected, a random choice was made between MSRP and PPNMT, thus ensuring an even distribution of sentences from both datasets.", "This formula was designed to ensure (1) the maximum reward per submission was $1, and (2) no reward was granted for sentence pairs that are non-MI or have BLEURT > 0 .", "5 .", "Participants were encouraged to frequently revise their sentences and click on a Check' button which showed them the reward amount they would earn if they submitted this sentence.", "Once the Check' button was clicked, the participant's reward was evaluated (see Figure 1) and the sentence pair added to APH (regardless of whether it was AP T ).", "If Submit' was clicked, their attempt was rewarded based on Equation 1. The resulting dataset of sentence pairs, which we call APH (Adversarial Paraphrase by Humans), consists of 5007 human-generated sentence pairs, both MI and nonMI (see Table 2).", "Humans were able to generate AP T paraphrases for 75 .", "48% of Dataset Total attempts APT attempts MI attempts nonMI attempts Unique sentences APT uniques MI uniques nonMI uniques APH 5007 2659 53.10% 3232 64.55% 1775 35.45% 1631 1231 75.48% 1338 82.04% 293 17.96% APMT 5 62,986 3836 6.09% 37,511 59.55% 25,475 40.44% 4072 2288 56.19% 4045 99.34% 3115 76.50% AP TwT 5 75,011 6454 8.60% 17,074 22.76% 57,937 77.24% 4328 3670 84.80% 4131 95.45% 4230 97.74% Table 2: Proportion of sentences generated by humans ( APH ) and T5 base ( APT 5 ).", "the sentences presented to them and only 53 .", "1% of attempts were AP T , showing that the task is difficult even for humans.", "Note that MI attempts' and MI uniques' are supersets of AP T attempts' and AP T uniques,' respectively.", "Since human studies can be time-consuming and costly, we trained a paraphrase generator to perform APT.", "We used T5 base (Raffel et al., 2020), as it achieves SOTA on paraphrase generation (Niu et al., 2020; Bird et al., 2020; Li et al., 2020) and trained it on TwitterPPDB (Section 2).", "Our hypothesis was that if T5 base is trained to maximize the APT reward (Equation 1), its generated sentences will be more likely to be AP T .", "We generated paraphrases for sentences in MSRP and those in TwitterPPDB itself, hoping that since T5 base is trained on TwitterPPDB, it would generate better paraphrases ( MI with lower BLEURT) for sentences coming from there.", "The proportion of sentences generated by T5 base is shown in Table 2. We call this dataset APT 5 , the generation of which involved two phases: Training: To adapt T5 base for APT, we implemented a custom loss function obtained from dividing the cross-entropy loss per batch by the total reward (again from Equation 1) earned from the model's paraphrase generations for that batch, provided the model was able to reach a reward of at least 1. If not, the loss was equal to just the cross-entropy loss.", "We trained T5 base on TwitterPPDB for three epochs; each epoch took about 30 hours on one NVIDIA Tesla V100 GPU due to the CPU bound BLEURT component.", "More epochs may help get better results, but our experiments showed that loss plateaus after three epochs.", "next word according to its conditional probability distribution, introduces non-determinism in language generation.", "Fan et al. (2018) introduce top-k sampling, which filters k most likely next words, and the probability mass is redistributed among only those k words.", "Nucleus sampling (or top-p sampling) (Holtzman et al., 2020) reduces the options to the smallest possible set of words whose cumulative probability exceeds p , and the probability mass is redistributed among this set of words.", "Thus, the set of words changes dynamically according to the next word's probability distribution.", "We use a combination of top-k and top-p sampling with k = 120 and p = 0 .", "95 in the interest of lexical and syntactic diversity in the paraphrases.", "For each sentence in the source dataset (MSRP 3 and TwitterPPDB for APMT 5 and AP TwT 5 respectively), we perform five iterations, in each of which, we generate ten sentences.", "If at least one of these ten sentences passes AP T , we continue to the next source sentence after recording all attempts and classifying them as MI or nonMI .", "If no sentence in a maximum of 50 attempts passes AP T , we record all attempts nonetheless, and move on to the next source sentence.", "For each increasing iteration for a particular source sentence, we increase k by 20 , but we also reduce p by 0 .", "05 to avoid vague guesses.", "Note the distribution of MI and nonMI in the source datasets does not matter because we use only the first sentence from the sentence pair.", "3.3 Dataset Properties T5 base trained with our custom loss function generated AP T -passing paraphrases for ( 56 . 19% ) of starting sentences.", "This is higher than we initially expected, considering how difficult APT proved to be for humans (Table 2).", "Noteworthy is that 3 We use the official train split released by Dolan and Brock-ett (2005) containing 4076 sentence pairs.", "only 6 .", "09% of T5 base 's attempts were AP T .", "This does not mean that the remaining 94% of attempts can be discarded, since they amounted to the negative examples in the dataset.", "Since we trained it on TwitterPPDB itself, we expected that T5 base would generate better paraphrases, as measured by a higher chance of passing AP T on TwitterPPDB, than any other dataset we tested.", "This is supported by the data in Table 2, which shows that T5 base was able to generate an AP T passing paraphrase for 84.8% of the sentences in TwitterPPDB.", "The composition of the three adversarial datasets can be found in Table 2. These metrics are useful to understand the capabilities of T5 base as a paraphrase generator and the paraphrasability of sentences in MSRP and TwitterPPDB.", "For instance, T5 base 's attempts on TwitterPPDB tend to be MI much less frequently than those on MSRP and hu-man's attempts on MSRP + PPNMT.", "This might be because in an attempt to generate syntactically dissimilar sentences, the T5 base paraphraser also ended up generating many semantically dissimilar ones as well.", "To visualize the syntactic and lexical disparity of paraphrases in the three adversarial datasets, we present their BLEURT distributions in Figure 2. As might be expected, the likelihood of a sentence pair being MI increases as BLEURT score increases (recall that AP T -passing sentence pairs are simply MI pairs with BLEURT scores < = 0 . 5 ), but Figure 2 shows that the shape of this increase is not straightforward, and differs among the three datasets.", "As might be expected, humans are much more skilled at APT than T5 base , as shown by the fact that the paraphrases they generated have much lower mean BLEURT scores (Figure 2), and the ratio of AP T vs nonAP T sentences is much higher (Table 2).", "As we saw earlier, when T5 base wrote paraphrases that were low on BLEURT, they tended to become nonMI (e.g., line 12 in Table 3).", "However, T5 base did generate more AP T -passing sentences with a lower BLEURT on Twitter-PPDB than on MSRP, which may be a result of overfit-ting T5 base on TwitterPPDB.", "Furthermore, all three adversarial datasets have a distribution of MI and nonMI sentence pairs balanced enough to train a model to identify paraphrases.", "Table 3 has examples from APH and APT 5 showing the merits and shortcomings of T5, BLEURT, and RoBERTa large (the MI detector used).", "Some observations from Table 3 include: Lines 1 and 3: BLEURT did not recognize the paraphrases, possibly due to the differences in words used.", "RoBERTa large however, gave the correct MI prediction (though it is worth noting that the sentences in line 1 are questions, rather than truth-apt propositions).", "Line 4: RoBERTa large and BLEURT (to a large extent since it gave it a score of 0.4) did not recognize that the idiomatic phrase break a leg' means good luck' and not fracture.' Lines 6 and 12: There is a loss of information going from the first sentence to the second and BLEURT and MI both seem to have understood the difference between summarization and paraphrasing.", "Line 7: T5 not only understood the scores but also managed to paraphrase it in such a way that was not syntactically and lexically similar, just as we wanted T5 to do when we fine-tuned it.", "Line 9: T5 base knows that Fort Lauderdale is in Florida but RoBERTa large does not.", "To quantify our datasets' contributions, we designed experiment setups wherein we trained RoBERTa base (Liu et al., 2019) for paraphrase detection on a combination of TwitterPPDB and our datasets as training data.", "RoBERTa was chosen for its generality, as it is a commonly used model in current NLP work and benchmarking, and currently achieves SOTA or near-SOTA results on a majority of NLP benchmark tasks (Wang et al., 2019, 2020; Training Dataset TwitterPPDB + Size APHAPH -test MCC F1 MCC F1 APH -train 46k 0.440 0.809 APMT 5 106k 0.410 0.725 0.369 0.705 APH -train + APMT 5 109k 0.516 0.828 AP TwT 5 117k 0.433 0.772 0.422 0.765 APH -train + AP TwT 5 121k 0.488 0.812 APT 5 180k 0.461 0.731 0.437 0.716 APH -train + APT 5 184k 0.525 0.816 Table 6: Performance of RoBERTa base trained on adversarial datasets. Size is the number of training examples in the dataset rounded to nearest 1000. Chen et al., 2021).", "For each source sentence, multiple paraphrases may have been generated.", "Hence, to avoid data leakage, we created a train-test split on APH such that all paraphrases generated using a given source sentence will be either in APH -train or in APH test, but never in both.", "Note that APH is not balanced as seen in Table 2. Table 4 shows the distribution of MI and nonMI pairs in APH -train and APH -test and MI attempts' and nonMI attempts' columns of Table 2 show the same for other adversarial datasets.", "The test sets used were APH wherever APH -train was not a part of the training data and APH -test in every case.", "Does RoBERTa base do well on APH ?", "RoBERTa base was trained on each training dataset (90% training data, 10% validation data) for five epochs with a batch size of 32 with the training and validation data shuffled, and the trained model was tested on APH and APH -test.", "The results of this are shown in Table 6.", "Note that since the number of MI and nonMI sentences in all the datasets is imbalanced, Matthew's Correlation Coefficient (MCC) is a more appropriate performance measure than accuracy (Boughorbel et al., 2017).", "Our motivation behind creating an adversarial dataset was to improve the performance of paraphrase detectors by ensuring they recognize paraphrases with low lexical overlap.", "To demonstrate the extent of their inability to do so, we first compare the performance of RoBERTa base trained only on TwitterPPDB on specific datasets as shown Table 5.", "Although the model performs slightly well on MSRP, it does barely better than a random prediction on APH , thus showing that identifying adversarial paraphrases created using APT is nontrivial for paraphrase identifiers.", "Do human-generated adversarial paraphrases improve paraphrase detection?", "We introduce APH -train to the training dataset along with TwitterPPDB.", "This improves the MCC by 0.222 even though APH -train constituted just 8.15% of the entire training dataset, the rest of which was TwitterPPDB (Table 6).", "This shows the effectiveness of human-generated paraphrases, as is especially impressive given the size of APH -train compared to TwitterPPDB.", "Do machine-generated adversarial paraphrases improve paraphrase detection?", "We set out to test the improvement brought by APT 5 , of which we have two versions.", "Adding APMT 5 to the training set was not as effective as adding APH -train, increasing MCC by 0.188 on APH and 0.151 on APH -test, thus showing us that T5 base , although was able to clear AP T , lacked the quality which human paraphrases possessed.", "This might be explained by Figure 2 since APMT 5 does not have many sentences with low BLEURT, we cannot expect a vast improvement in RoBERTa base 's performance on sentences with BLEURT as low as in APH .", "Since we were not necessarily testing T5 base 's performance and we had trained T5 base on TwitterPPDB we used the trained model to perform APT on TwitterPPDB itself.", "Adhering to expectations, training RoBERTa base (the paraphrase detector) with AP TwT 5 yielded higher MCCs.", "Note that none of the sentences are common between AP TwT 5 and APH since APH is built on MSRP and PPNMT and the fact that the model got this performance when trained on AP TwT 5 is a testimony to the quality and contribution of APT.", "Combining these results, we can conclude that although machine-generated datasets like APT 5 can help paraphrase detectors improve themselves, a smaller dataset of human-generated adversarial paraphrases improved performance more.", "Overall, however, the highest MCC (0.525 in Table 6) is obtained when TwitterPPDB is combined with all three adversarial datasets, suggesting that the two approaches nicely complement each other.", "This paper introduced APT (Adversarial Paraphrasing Task), a task that uses the adversarial paradigm to generate paraphrases consisting of sentences with equivalent (sentence-level) meanings, but differing lexical (word-level) and syntactical similarity.", "We used APT to create a human-generated dataset / benchmark ( APH ) and two machine-generated datasets ( APMT 5 and AP TwT 5 ).", "Our goal was to effectively augment how paraphrase detectors are trained, in order to make them less reliant on word-level similarity.", "In this respect, the present work succeeded: we showed that RoBERTa base trained on TwitterPPDB performed poorly on APT benchmarks, but this performance was increased significantly when further trained on either our humanor machine-generated datasets.", "The code used in this paper along with the dataset has been released in a publicly-available repository.", "4 Paraphrase detection and generation have broad applicability, but most of their potential lies in areas in which they still have not been substantially applied.", "These areas range from healthcare (im-proving accessibility to medical communications or concepts by automatically generating simpler language), writing (changing the writing style of an article to match phrasing a reader is better able to understand), and education (simplifying the language of a scientific paper or educational lesson to 4 https://github.com/ Advancing-Machine-Human-Reasoning-Lab/apt make it easier for students to understand).", "Thus, future research into improving their performance can be very valuable.", "But approaches to paraphrase that treat it as no more than a matter of detecting word similarity overlap will not suffice for these applications.", "Rather, the meanings of sentences are properties of the sentences as a whole, and are inseparably tied to their inferential properties.", "Thus, our approaches to paraphrase detection and generation must follow suit.", "The adversarial paradigm can be used to dive deeper into comparing how humans and SOTA language models understand sentence meaning, as we did with APT.", "Furthermore, automatic generation of adversarial datasets has much unrealized potential; e.g., different datasets, paraphrase generators, and training approaches can be used to generate future versions of APT 5 in order to produce AP T passing sentence pairs with lower lexical and syntactic similarities (as measured not only by BLEURT, but also by future state-of-the-art STS metrics).", "The idea of more efficient automated adversarial task performance is particularly exciting, as it points to a way language models can improve themselves while avoiding prohibitively expensive human participant fees.", "Finally, the most significant contribution of this paper, APT, presents a dataset creation method for paraphrases that will not saturate because as the models get better at identifying paraphrases, we will improve paraphrase generation.", "As models get better at generating paraphrases, we can make APT harder (e.g., by reducing the BLEURT threshold of < 0 . 5 ).", "One might think of this as students in a class who come up with new ways of copying their assignments from sources as plagiarism detectors improved.", "That brings us to one of the many applications of paraphrases: plagiarism generation and detection, which inherently is an adversarial activity.", "Until plagiarism detectors are trained on adversarial datasets themselves, we cannot expect them to capture human levels of adversarial paraphrasing.", "This material is based upon work supported by the Air Force Office of Scientific Research under award numbers FA9550-17-1-0191 and FA9550-18-1-0052.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Air Force.", "We would also like to thank Antonio Laverghetta Jr. and Jamshidbek Mirzakhalov for their helpful suggestions while writing this paper, and Gokul Shanth Raveendran and Manvi Nagdev for helping with the website used for the mTurk study." ]
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "method", "objective", "method", "abstain", "result", "result", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "objective", "result", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Cross-domain sentiment analysis has achieved promising results with the help of pre-trained language models.", "As GPT-3 appears, prompt tuning has been widely explored to enable better semantic modeling in many natural language processing tasks.", "However, directly using a fixed predefined template for cross-domain research cannot model different distributions of the [MASK] token in different domains, thus making underuse of the prompt tuning technique.", "In this paper, we propose a novel Ad versarial S oft P rompt T uning method (AdSPT) to better model cross-domain sentiment analysis.", "On the one hand, AdSPT adopts separate soft prompts instead of hard templates to learn different vectors for different domains, thus alleviating the domain discrepancy of the [MASK] token in the masked language modeling task.", "On the other hand, AdSPT uses a novel domain adversarial training strategy to learn domain-invariant representations between each source domain and the target domain.", "Experiments on a publicly available sentiment analysis dataset show that our model achieves new state-of-the-art results for both single-source domain adaptation and multi-source domain adaptation.", "In recent years, with the emergence of a series of large-scale pre-trained language models (PLMs), such as GPT (Radford et al., 2018, 2019), BERT (Devlin et al., 2019), and RoBERTa (Liu et al., 2019), fine-tuning PLMs has achieved promising results on a wide range of natural language processing (NLP) tasks.", "However, as PLMs become larger and larger, fine-tuning larger PLMs becomes more challenging in most real-world applications.", "More recently, Brown et al. (2020) show that designing task descriptions (a.k.a. prompts) can make accurate predictions without updating any of the Corresponding author.", "parameters of GPT-3 (which has 175 B parameters).", "This inspires a new PLM-tuning method named prompt tuning .", "Such prompt tuning method has achieved state-of-the-art results on text classification and natural language inference (Schick and Schtze, 2020; Schick et al., 2020; Gao et al., 2020), relation classification (Han et al., 2021), and natural language generation (Li and Liang, 2021).", "It is common to use a predefined template (e.g., It was [MASK] .) in prompt tuning for binary sentiment analysis, and the classification results of positive or negative depend on the probabilities of predefined label words (e.g., {good, bad}) in the masked language modeling (MLM) task.", "However, the distributions of MLM prediction results can be different for different domains.", "An example is shown in Figure 1, the discrepancy between book-domain review and video-domain review leads to different possibilities of label words.", "The high-frequency label word in book-domain review is useful , and video-domain review is real , neither of which is in the predefined {good, bad}.", "Therefore, it is unreasonable to predict predefined label words with fixed templates (a.k.a. hard prompts) for different domain datasets.", "The intuition is that the feature distributions corresponding to the [MASK] position learned from the hard prompt are distinct among different do-2438 mains.", "And the discrepancy among different domains can have serious effects on the cross-domain setting where we train a classifier on source domain data, e.g., the book reviews, and test it on the target domain, e.g., the video review.", "So domain adaptation (Ben-David et al., 2007; Mansour et al., 2009) based on cluster hypothesis (Zhu and Goldberg, 2009) becomes a key point of the cross-domain research.", "In order to improve the cross-domain sentiment analysis with the help of PLMs, we propose AdSPT: an Ad versarial S oft P rompt T uning method, which sheds new light on solving the domain adaptation problem.", "Specifically, we use soft prompts composed of multiple learnable vectors and the [MASK] token instead of hard templates for tuning.", "For different domains, we use independent soft prompts to represent domain-specific information, thus making them have the domain-aware knowledge.", "With different domain soft prompts, the MLM head classifier can mitigate the domain discrepancy of the [MASK] token.", "To enhance the effectiveness of the target domain, we design a novel adversarial training strategy to learn the domain-invariant knowledge of the [MASK] token, which can be seen as a two-player minimax game between the target domain and each source domain under multi-source domain adaptation setting.", "As a result, the collaborative effect of soft prompt tuning and domain adversarial training can more properly predict the feature distribution of the [MASK] token on the ground of domain-specific soft prompts and the domain invariance of the [MASK] token.", "In experiments, we evaluate on a publicly available sentiment analysis dataset for both single-source domain adaptation and multi-source domain adaptation.", "Our results show the effectiveness of collaboratively leveraging domain-specific soft prompts tuning and domain adversarial training.", "To summarize, the main contributions of this work are as follows: (1) In prompt tuning, we adopt separate soft prompts to learn embeddings enriched with the domain knowledge, thus alleviating the domain discrepancy of the [MASK] position.", "(2) We design a novel adversarial training strategy to learn the domain-invariant representation of the [MASK] position.", "(3) Experiments on the Amazon reviews dataset show our method AdSPT obtains the average accuracy 93 .", "14% ( 0 . 46 absolute improvement) under single-source domain adaptation and the average accuracy 93 .", "75% ( 0 . 81 absolute improvement) under multi-source domain adaptation.", "Prompt tuning.", "Fine-tuning PLMs with task-specific heads on downstream tasks has become the main paradigm and yields strong performance on many NLP tasks (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019).", "But there is a big gap between the fine-tuning objectives of downstream tasks and the pre-training objectives of PLMs, which could limit the exploitation of knowledge in PLMs (Liu et al., 2021b).", "Subsequently, GPT-3 (Brown et al., 2020) brings a new paradigm prompt tuning for downstream tasks, which leverages natural-language prompts and task demonstrations as context to make downstream tasks similar to language modeling.", "Early works explore manually defined templates (a.k.a. hard templates) for text classification and natural language inference (Schick and Schtze, 2020, 2021).", "However, suitable templates require strong domain knowledge.", "Therefore, some automatically generated hard templates are explored (Shin et al., 2020; Gao et al., 2020; Ben-David et al., 2021).", "Since prompt construction is to find a method that allows PLMs to effectively perform downstream tasks, it is not necessary to limit templates to human-interpretable natural language.", "Some works attempt to perform prompting directly with several learnable vectors, such as soft prompt (Lester et al., 2021; Vu et al., 2021), prefix-tuning (Li and Liang, 2021) and P-tuning V2 (Liu et al., 2021a).", "Moreover, Schick et al. (2020) explore automatically identifying label words.", "Hu et al. (2021) use an external knowledge base to expand label words.", "This paper focuses on improving the cross-domain sentiment analysis via different soft prompts of different domains.", "Domain Adaptation.", "Research on domain adaptation (DA) uses labeled or unlabeled target data to transfer labeled source information to a specific target domain (Pan and Yang, 2009; Mansour et al., 2009).", "Popular methods for unsupervised DA are based on domain discrepancy optimizing based on adversarial training (Ganin et al., 2016; Zhao et al., 2018; Saito et al., 2018).", "As for cross-domain sentiment analysis, some early works use pivot-based methods to capture the shared feature representation of different domains (Yu and Jiang, 2016; Ziser 2439 and Reichart, 2018; Li et al., 2018; Peng et al., 2018).", "Some other works adopt different adversarial learning methods to learn the domain-common sentiment knowledge (Li et al., 2017; Qu et al., 2019; Li et al., 2019).", "Recently, with the promising performance of PLMs in NLP, many works on cross-domain sentiment analysis focus on how to improve lan-gange model pre-training and fine-tuning, e.g., Du et al. (2020) use a target domain MLM task and a domain-distinguish task in pre-training; Zhou et al. (2020) utilize several pre-training tasks based on existing lexicons and annotations.", "Different from these works, our method is the first to use the combination of soft prompt tuning and adversarial training to solve the DA problem.", "In this paper, we study cross-domain sentiment analysis in the unsupervised domain adaptation setting which contains two scenarios: a source domain and a target domain or multiple source domains and a target domain.", "Given m ( m 1) source domains, the l -th ( l [1 , . . . , m ] ) source domain contains an annotated dataset S l = { x si , y si } N sl i =1 , where x si = [ w s 1 , . . . , w sn ] is a input sentence with n words, y si is the corresponding polarity label, and N sl represents the number of examples of the l -th source domain.", "In the target domain, there is an unannotated dataset T = { x ti } N t i =1 , where x ti = [ w t 1 , . . . , w tn ] is an unlabeled sentence of the target domain and N t is the number of the unlabeled data.", "The goal of cross-domain sentiment analysis is to learn a function F that could both retain in-domain knowledge for different domains and also learn the domain invariance between the target domain and each source domain to better predict the polarity of unlabeled sentences from the target domain.", "In this section, we first introduce a soft prompt tuning method for sentiment classification that utilizes soft prompts to capture domain-specific knowledge.", "Then we present a domain adversarial training method for domain adaptation.", "Finally, we describe the overall learning procedure.", "Prompt tuning is an approach to add extra information for PLMs by reformulating downstream tasks as cloze questions.", "The primary components include a template and a set of label words, where the template is a background description of current task and the label words are the high-probability vocabulary predicted by PLMs in the current context.", "In the binary sentiment classification, we denote the input sentence as x = [ w 1 , . . . , w n ] , the output label as y .", "Here y Y , and the label space Y = { positive , negative } .", "Prompt tuning formalizes the classification task into a MLM task.", "Given a PLM M and its vocabulary V , a prompt consists of a template function T ( ) that converts the input sentence x to a prompt input x prompt = T ( x ) with the [MASK] token and a set of label words V V , which are connected with the label space through a mapping function v : Y (cid:55) V .", "As shown in Figure 2, the soft prompted input x prompt contains the embeddings of the original sentence e ( x ) , k learnable vectors [ h 0 , . . . , h k 1 ] , the embedding of the [MASK] token e ([MASK]) , and the embeddings of two positional tokens e ([CLS]) and e ([SEP]) .", "So the actual input of M is represented as: x prompt = (cid:2) e ([CLS]) , e ( x ) , h 0 , . . . , h k 1 , e ([MASK]) , e ([SEP]) (cid:3) (1) where e ( ) represents the embedding function of M .", "Here we can denote a PLM M as a function mapping from x prompt to the feature representation and vocabulary distribution of the [MASK] token, represented as: h [MASK] , s [MASK] = M ( x prompt ) (2) where h [MASK] R h and s [MASK] R |V| are the hidden representation and vocabulary distribution of the [MASK] token respectively, and s [MASK] = f ( h [MASK] ) is obtained by the MLM head function f .", "The probability p ( y | x ) is formalized according to the distribution of the label word w V", "w.r.t. the [MASK] position.", "In binary sentiment classification, we set the label words as V = 2440 ... ( x 11 ) ( x 21 ) ( x 1 1 ) ... (x 1 1 ) (x 2 1 ) ( x 1 1 ) ... (x 1 ) (x 2 ) (x ) ( \" [ MASK ] \" ) ... (\"[MASK]\") ... (\"[MASK]\") ... ... ... ... ... ... ( \" [ SEP ] \" ) (\"[SEP]\") (\"[SEP]\") ( \" [ CLS ] \" ) ( \" [ CLS ] \" ) (\"[CLS]\") [MASK] [MASK] N -1 Source Domains A Target Domain PLMs None Embeddings of the Soft Prompted Input Label Domain Adversarial Training Domain Adversarial Training SentimentClassificationSentimentClassification ... ...", "{ good , bad } .", "So, p ( y | x ) = p ( V y [MASK] | x prompt ) = exp( s [MASK] ( V y )) (cid:80) y (cid:48) Y exp( s [MASK] ( V y (cid:48) )) (3) Given an annotated dataset S = { x i , y i } Ni =1 , the training objective for soft prompt tuning is obtained using the binary cross-entropy loss, L class ( S ; M ,p,f ) = N (cid:88) i =1 (cid:20) log p ( y i | x i ) I { y i =1 } + log(1 p ( y i | x i )) I { y i =0 } (cid:21) (4) where y i represents the ground truth label ranging from 1 as the positive label and 0 as the negative label).", "M ,p,f represents the overall trainable parameters of the PLM M , several learnable vectors p and the MLM head function f .", "For the same task in different domains, domain adversarial training can not only transfer the generic knowledge from source domains to the target domain, but also train more domain-aware classifiers.", "As shown in Figure 2, domain adversarial training aims to make the feature distributions of the [MASK] position from different domains closer.", "More intuitively, it will encourage the MLM head classifer to obtain domain-invariant features across domains.", "Based on the hidden representation h [MASK] by the PLM, the detailed process of domain adversarial training is as follows: given m ( m 1) source domains, we assume that between each source domain S l ( l [1 , . . . , m ]) and the target domain T have a domain discriminative function g l : R h D that discriminates between the source domain and the target domain, where the domain label set is represented as D = { 0 , 1 } , 0 is the source domain label, and 1 is the target domain label.", "To this end, there are m domain discriminators, denoted as g = { g l } ml =1 .", "Given an input example x from either the l -th ( l [1 , . . . , m ] ) source domain or the target domain, we first obtain the task-specific head representation h [MASK] by M and then model the probability p ( d | x ) for discriminating the domain label d D as: p ( d | x ) = exp( g dl ( h [MASK] )) (cid:80) d (cid:48) D exp( g d (cid:48) l ( h [MASK] )) (5) Given m source domain dataset S = {S l } ml =1 = {{ x si } N sl i =1 } ml =1 and a target domain dataset T = { x ti } N t i =1 , where N sl is the number of samples in the l -th source domain and N t is the number of samples in the target domain, the domain discriminative objective is to minimize the following cross-2441 entropy loss, L domain ( S , T ; M ,p, g ) = m (cid:88) l =1 N sl + N t (cid:88) i =1 (cid:20) log p ( d i | x i ) I { d i =1 } + log(1 p ( d i | x i )) I { d i =0 } (cid:21) (6) where d i represents the truth domain label and M ,p, g represents the overall trainable parameters of the PLM M , several learnable vectors p and m domain discriminators g .", "The domain adversarial training among m source domains and the target domain can be seen as a two-player minimax game where the domain classifiers g = { g l } ml =1 tend to minimize the domain discrimination loss so as to make the domain discriminators strong while the PLMM tends to maximize the domain discrimination loss so as to weaken the domain discrimination.", "max M ,p min g L domain ( S , T ; M ,p, g )", "Joint training objective.", "Given m source domains S and a target domain T , the sentiment classifier and the domain discriminator are jointly trained for optimizing the PLMM , soft prompt embeddings p , MLM head function f and domain discriminators g , and the final training objective is formally represented as: min M ,p,f (cid:8) L class ( S ; M ,p,f ) min g L domain ( S , T ; M ,p, g ) (cid:9) (8) where is a trade-off parameter.", "The sentiment classification objective L class and the domain discrimination objective L domain are defined in Eq.", "(4) and Eq.", "(6), respectively.", "Training procedure.", "The iterative training procedure is summarized in Algorithm 1.", "In each iteration, the input samples of each source domain are first used for training the PLM M , several learnable vectors p and the MLM head function f .", "The sentiment classification loss is computed in line 5 .", "Then the samples of each source domain and the Algorithm 1 Training Process of AdSPT.", "Input: Training samples of m source domain dataset S = {S l } ml =1 = {{ x si , y si } N sl i =1 } ml =1 and a target domain dataset T = { x ti } N t i =1 ; the number of training iterations n .", "Output: Configurations of AdSPT M ,p,f, g Initialize: PLM M ; soft prompt embeddings p ; MLM head function f ; domain discriminator { g l } ml =1 ; learning rate ; trade-off parameter .", "target domain are mapped to different domain discriminators to train the PLMM , several learnable vectors p and the domain discriminator g l .", "The corresponding domain discrimination loss is computed in line 6 .", "The sentiment classification loss is used for updating the parameters of the PLM, several learnable vectors and the MLM head function (line 7, 10).", "The domain discrimination loss is used for updating the parameters of the PLM, several learnable vectors and the domain discriminators.", "Obviously, the parameters of the PLM and several learnable vectors be updated together by the above two losses.", "In this section, we conduct experiments to evaluate the effectiveness of our methods.", "Our experiments are carried out on single-source domain adaptation and multi-source domain adaptation settings ( 5.3).", "In addition, we also investigate how different components in the model impact the performance of cross-domain sentiment analysis with different settings.", "Dataset.", "We evaluate on the Amazon reviews dataset (Blitzer et al., 2007), which has been widely used for cross-domain sentiment classification.", "This dataset contains reviews of binary categories from four domains: Books (B), DVDs 2442 S T Fine-tuning Prompt-tuning BERT-DAAT SENTIX Fix FT FT + AT PT(H ARD ) PT(H ARD ) + AT PT(S OFT ) AdSPT B D 89.70 91.30 88.96 89.70 89.75 90.75 90.50 92.00 B E 89.57 93.25 86.15 87.30 91.75 92.45 93.05 93.75 B K 90.75 96.20 89.05 89.55 91.90 92.70 92.75 93.10 D B 90.86 91.15 89.40 89.55 90.90 91.50 91.75 92.15 D E 89.30 93.55 86.55 86.05 91.75 92.75 93.55 94.00 D K 87.53 96.00 87.53 87.69 91.05 92.35 92.50 93.25 E B 88.91 90.40 86.50 87.15 90.00 91.90 91.90 92.70 E D 90.13 91.20 87.98 88.20 92.10 92.55 93.25 93.15 E K 93.18 96.20 91.60 91.91 92.90 93.55 93.95 94.75 K B 87.98 89.55 87.55 87.65 89.15 90.75 91.75 92.35 K D 88.81 89.85 87.30 87.72 90.05 91.00 91.35 92.55 K E 91.72 93.55 90.45 90.25 92.15 92.50 93.10 93.95 Avg.", "(D), Electronics (E) and Kitchen appliances (K).", "Each domain has totally 2,000 manually labeled reviews.", "We use different settings for single-source domain adaptation and multi-source domain adaptation.", "For each domain, there are 2000 labeled reviews, including 1000 positive and 1000 negative, and 4000 unlabeled reviews.", "Following previous work (Ruder and Plank, 2017), we randomly select a small part ( 20% ) of examples in each domain as the development set to save the best training model and perform a 5 fold cross-validation.", "In single-source domain adaptation, we follow previous work (Ziser and Reichart, 2018) to construct 12 cross-domain sentiment analysis tasks (corresponding to 12 ordered domain pairs).", "In multi-source domain adaptation, we choose three-domain data as multiple source domains and the remaining one as the target domain, e.g., BDE K.", "So there are 4 combinations, corresponding to 4 tasks.", "Training details.", "In the Amazon reviews experiments, we adopt a 12 -layer Transformer (Vaswani et al., 2017; Devlin et al., 2019) initialized with RoBERTa BASE (Liu et al., 2019) as the PLM.", "During the training, we train with batch size of 2 for 10 epoches.", "The optimizer is Adam with learning rate 2 e 5 for the PLM optimization and 5 e 5 for optimizing domain discriminators.", "All experiments are conducted with an NVIDIA GeForce RTX 2080 Ti.", "We compare our method against 2 state-of-the-art methods, and also design several variants of fine-tuning and prompt tuning as baselines to demonstrate the effectivenss of adversatial training strategy in soft prompt tuning for DA.", "(1) BERT-DAAT (Du et al., 2020): Use BERT post-training for cross-domain sentiment analysis with adversarial training.", "(2) SENTIX Fix (Zhou et al., 2020): Pre-train a sentiment-aware language model by several pretraining tasks.", "(3) Fine-tuning : Standard fine-tuning vanilla PLMs in the source domain labeled data, which use the hidden representation of [CLS] for classification.", "(4) Fine-tuning + AT : Add the adversarial training operating on standard fine-tuning vanilla PLMs.", "(5) Prompt-tuning(Hard) : Use a manually defined template It is [MASK] for prompt-tuning.", "(6) Prompt-tuning(Hard) + AT : Add the adversarial training operating on Prompt-tuning(Hard).", "Following previous work (Du et al., 2020; Zhou et al., 2020), we adopt the accuracy to evaluate the performance.", "Main results contain results of single-source domain adaptation (Table 1) and multi-source domain adaptation (Table 2).", "Results of Single-source Domain Adaptation.", "Table 1 shows our main experimental results under single-source domain adaptation.", "We can observe that our method AdSPT outperforms all other methods in most of single-source domain adaptation.", "Compared with previous state-of-the-art methods, AdSPT is significantly superior to BERT-DAAT and SENTIX Fix on average ( 3 . 02 absolute improvement and 0 . 46 absolute improvement, re-spectively).", "More specifically speaking, prompt-tuning methods achieve better results than BERT-DAAT on most of single-source domain adaptation.", "This indicates that prompt tuning can stimulate pre-encoded knowledge in PLMs to solve the DA problem.", "But the performance of PT(H ARD ) and PT(H ARD ) + AT is lower than that of SENTIX Fix on average ( 91 .", "12%", "v.s.", "92 .", "68% and 92 .", "06%", "v.s.", "92 .", "68% ), showing that the feature representation of the [MASK] token in hard prompt tuning learns more domain knowledge of source domains, which leads to degraded performance on the target domain.", "Conversely, PT(S OFT ) is comparable to SENTIX Fix on average ( 92 .", "45%", "v.s.", "92 .", "68% ) and AdSPT achieves better results than SENTIX Fix on average ( 0 . 46 absolute improvement).", "It shows that soft prompt tuning not only learns domain-aware continuous vectors, but also weakens the domain discrepancy of the feature distribution of the [MASK] position.", "In addition, prompt-tuning methods are consistently superior to FT and FT + AT, either using a hard prompt, or soft prompt.", "In prompt-tuning, soft prompt tuning methods achieve better performances than corresponding hard prompt tuning methods ( 1 . 33 absolute improvement and 1 . 08 absolute improvement, respec-tively).", "This indicates these separate soft prompts can flexibly learn in-domain knowledge of different domains, which makes the feature representation of the [MASK] token more suitable for predicting the predefined label words.", "So soft prompt is more applicable to the DA problem than a hard prompt.", "When we add a domain adversarial training operation on soft prompt tuning, AdSPT achieves the new start-of-the-art result on average.", "It shows that the domain adversarial training strategy can enhance the domain-invariant feature of the [MASK] token among different domain datasets.", "Results of Multi-source Domain Adaptation.", "Table 2 shows our main experimental results under multi-source domain adaptation.", "Compared with fine-tuning methods, variants of prompt tuning achieve better performances (over at least 0 . 55 absolute improvement on average).", "This is mainly because prompt tuning uses the feature representation of [MASK] token for classification, rather than the feature representation of [CLS] token.", "On the one hand, fine-tuning is difficult to train the domain-specific classifier accurately from scratch on the unlabeled dataset.", "On the other hand, prompt tuning is used to classify by predicting the feature distribution of the [MASK] token in the set of label words, which can activate some prior knowledge in PLMs.", "Compared with hard prompt tuning methods, soft prompt tuning methods achieve significant improvements on average ( 92 .", "94%", "v.s.", "91 .", "39% and 93 .", "75%", "v.s.", "92 .", "94% ).", "Constructing the sophisticated hard template not only requires expertise knowledge and time, but the unified predefined hard template leads to the domain discrepancy of the feature representation of the [MASK] position that is unsuitable for multi-domain adaptation.", "Besides, PT(H ARD ) + AT achieves a better result than PT(H ARD ) on average ( 0 . 61 absolute improvement), which shows the domain adversarial training can obtain domain-invariant features among different domains by domain discriminators for DA.", "So when adding the domain adversarial training into soft prompt tuning, AdSPT achieves the best results under multi-source domain adaptation setting.", "This shows the effectiveness of the collaboration of soft prompt tuning and the domain adversarial training strategy.", "In the domain ad-2444 Figure 3: Analysis of multi-source and single-source versarial training, using the feature representation of the [MASK] token to obtain domain invariance is better for predicting the predefined set of label words.", "Multi-source", "v.s.", "Single-source.", "We make more detailed comparisons to explore the effect of multi-source domain adaptation and single-source domain adaptation settings.", "Figure 3 illustrates the influence of multi-source and single-source on the predicted results of the same target domain.", "When the target domain is E, D, or B, multi-source achieves better results in the target domain than single-source, showing that in most cases, multi-source domain adaptation is superior to single-source domain adaptation in cross-domain research.", "However, when the target domain is K, the result of E K is superior to that of BDE K ( 94 .", "75%", "v.s.", "93 .", "75% ).", "It is mainly because the feature distribution of E and K is closer.", "Effect of Soft Prompts.", "As stated in previous works (Gao et al., 2020), the choice of hard templates may have a huge impact on the performance of prompt tuning.", "In this subsection, we carry out experiments in BDE K and B K respectively to investigate the influence of different soft prompts under multi-source domain adaptation and single-source domain adaptation settings.", "As shown in Figure 4, we use 6 different soft prompts (by changing the number of prompt tokens k ).", "The results demonstrate that the choice of templates exerts a considerable influence on the Figure 4: Results of different soft prompts k on BDE K and B K performance of prompt tuning.", "For soft prompts, surprisingly, prompt tuning yields the best result with the fewest special tokens.", "Here k = 3 .", "In this paper, we proposed a novel Ad versarial S oft P rompt T uning method (AdSPT) for cross-domain sentiment analysis.", "Firstly, we use domain-specific soft prompts instead of hard templates to represent domain-specific knowledge.", "The domain-specific soft prompts can alleviate the domain discrepancy", "w.r.t. the [MASK] representations by MLM task.", "Meanwhile, we also design a novel adversarial training strategy to learn the domain-invariant knowledge of the [MASK] token among different domains.", "Experiments on the Amazon reviews dataset achieve state-of-the-art performance.", "We thank the anonymous reviewers for their helpful comments and suggestions.", "This work is supported by the Project of Technological Innovation 2030 New Generation Artificial Intelligence (Grant no. 2020AAA0107904), the Major Scientific Research Project of the State Language Commission in the 13th Five-Year Plan (Grant nos. WT135-38), and the Key Support Project of NSFC-Liaoning Joint Foundation (Grant no. U1908216)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "abstain", "objective", "abstain", "method", "result", "objective", "objective", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "other", "other" ]
[ "Discovering the stances of media outlets and influential people on current, debatable topics is important for social statisticians and policy makers.", "Many supervised solutions exist for determining viewpoints, but manually annotating training data is costly.", "In this paper, we propose a cascaded method that uses unsupervised learning to ascertain the stance of Twitter users with respect to a polarizing topic by leveraging their retweet behavior; then, it uses supervised learning based on user labels to characterize both the general political leaning of online media and of popular Twitter users, as well as their stance with respect to the target polarizing topic.", "We evaluate the model by comparing its predictions to gold labels from the Media Bias/Fact Check website, achieving 82.6% accuracy.", "Online media and popular Twitter users, which we will collectively refer to as influencers , often express overt political leanings, which can be gleaned from their positions on a variety of political and cultural issues.", "Determining their leaning can be done through the analysis of their writing, which includes the identification of terms that are indicative of stance (Groseclose and Milyo, 2005; Gentzkow and Shapiro, 2011).", "Performing such analysis automatically can be done using supervised classification, which in turn would require manually labeled data (Groseclose and Milyo, 2005; Gentzkow and Shapiro, 2011; Mohammad et al., 2016).", "Alternatively, leanings can be inferred based on which people share the content (blogs, tweets, posts, etc.) on social media, as social media users are more likely to share content that originates from sources that generally agree with their positions (An et al., 2012; Morgan et al., 2013; Ribeiro et al., 2018; Wong et al., 2013).", "Here, we make use of this observation to characterize influencers, based on the stances of the Twitter users that share their content.", "Ascertaining the stances of users, also known as stance detection, involves identifying the position of a user with respect to a topic, an entity, or a claim (Mohammad et al., 2016).", "For example, on the topic of abortion in USA, the stances of leftvs. right-leaning users would typically be pro-choice vs. pro-life, respectively.", "In this paper, we propose to apply unsupervised stance detection to automatically tag a large number of Twitter users with their positions on specific topics (Darwish et al., 2020).", "The tagging identi-fies clusters of vocal users based on the accounts that they retweet.", "Although the method we use may yield more than two clusters, we retain the two largest ones, which typically include the overwhelming majority of users, and we ignore the rest.", "Then, we train a classifier that predicts which cluster a user belongs to, in order to expand our clusters.", "Once we have increased the number of users in our sets, we determine which sources are most strongly associated with each group based on sharing by each group.", "We apply this methodology to determine the positions of influencers and of media on eight polarizing topics along with their overall leaning: left, center or right.", "In doing so, we can also observe the sharing behavior of rightand left-leaning users, and we can correlate their behavior with the credibility of the sources.", "Further, given the user stances for these eight topics, we train a supervised classifier to predict the overall bias of sources using a variety of features, including the so-called valence (Conover et al., 2011a), graph embeddings, and contextual embeddings.", "Using a combination of these features, our classifier is able to predict the bias of sources with 82.6% accuracy, with valence being the most effective feature.", "Figure 1 outlines our overall methodology.", "We use unsupervised stance detection to automatically determine the stance of Twitter users with respect to several polarizing topics.", "We then use distant supervision based on these discovered user stances to accurately characterize the political leaning of media outlets and of popular Twitter accounts.", "For classification, we use a combination of source valence, graph embeddings, and contextualized text embeddings.", "We evaluate our approach by comparing its bias predictions for a number of news outlets against gold labels from Media Bias/Fact Check.", "We further evaluate its predictions for popular Twitter users against manual judgments.", "The experimental results show sizable improvements over using graph embeddings or contextualized text embeddings.", "The remainder of this paper is organized as follows: Section 2 discusses related work.", "Section 3 describes the process of data collection.", "Section 4 presents our method for user stance detection.", "Section 5 describes how we characterize the influencers.", "Section 6 discusses our experiments in media bias prediction.", "Finally, Section 7 concludes and points to possible directions for future work.", "Recent work that attempted to characterize the stance and the ideological leaning of media and Twitter users relied on the observation that users tend to retweet content that is consistent with their world view.", "This stems from selective exposure , which is a cognitive bias that leads people to avoid the cognitive overload from exposure to opposing views as well as the cognitive dissonance in which people are forced to reconcile between their views and opposing views (Morgan et al., 2013).", "Concerning media, Ribeiro et al. (2018) used the Facebook advertising services to infer the ideological leaning of online media based on the political leaning of Facebook users who consumed them.", "An et al. (2012) relied on follow relationships to online media on Twitter to ascertain ideological leaning of media and users based on the similarity between them.", "Wong et al. (2013) studied retweet behavior to infer the ideological leanings of online media sources and popular Twitter accounts.", "Barbera and Sood (2015) proposed a statistical model based on the follower relationships to media sources and Twitter personalities in order to estimate their ideological leaning.", "As for individual users, much recent work focused on stance detection to determine a person's position on a topic including the deduction of political preferences (Barbera, 2015; Barber and Rivero, 2015; Borge-Holthoefer et al., 2015; Cohen and Ruths, 2013; Colleoni et al., 2014; Conover et al., 2011b; Fowler et al., 2011; Hasan and Ng, 2014; Himelboim et al., 2013; Magdy et al., 2016a,b; Makazhanov et al., 2014; Trabelsi and Zaane, 2018; Weber et al., 2013).", "User stance classification is aided by the tendency of users to form so-called echo chambers, where they engage with like-minded users (Himelboim et al., 2013; Magdy et al., 2016a), and the tendency of users' beliefs to be persistent over time (Borge-Holthoefer et al., 2015; Magdy et al., 2016a; Pennacchiotti and Popescu, 2011b).", "Studies have examined the effectiveness of different features for stance detection, including textual features such as word n -grams and hashtags, network interactions such as retweeted accounts and mentions, and profile information such as user location (Borge-Holthoefer et al., 2015; Hasan and Ng, 2013; Magdy et al., 2016a,b; Weber et al., 2013).", "Network interaction features were shown to yield better results compared to using textual features (Magdy et al., 2016a; Wong et al., 2013).", "Sridhar et al. (2015) leveraged both user interactions and textual information when modeling stance and disagreement, using a probabilistic programming system that allows models to be specified using a declarative language.", "Trabelsi and Zaane (2018) described an unsupervised stance detection method that determines the viewpoints of comments and of their authors.", "It analyzes online forum discussion threads, and therefore assumes a certain structure of the posts.", "It also assumes that users tend to reply to each others' comments when they are in disagreement, whereas we assume the opposite in this paper.", "Their model leverages the posts' contents, whereas we only use the retweet behavior of users.", "Many methods involving supervised learning were proposed for stance detection.", "Such methods require the availability of an initial set of labeled users, and they use some of the aforementioned features for classification (Darwish et al., 2018; Magdy et al., 2016b; Pennacchiotti and Popescu, 2011a).", "Such classification can label users with precision typically ranging between 70% and 90% (Rao et al., 2010; Pennacchiotti and Popescu, 2011a).", "Label propagation is a semi-supervised method that starts with a seed list of labeled users and propagates the labels to other users who are similar based on the accounts they follow or retweet (Barbera and Sood, 2015; Borge-Holthoefer et al., 2015; Weber et al., 2013).", "While label propagation may label users with high precision (often above 95%), it is biased towards users with more extreme views; moreover, careful choice of thresholds is often required, and post-checks are needed to ensure quality.", "Abu-Jbara et al. (2013) and more recently Darwish et al. (2020) used unsupervised stance detection, where users are mapped into a lower dimensional space based on user-user similarity, and then clustered to find core sets of users representing different stances.", "This was shown to be highly effective with nearly perfect clustering accuracy for polarizing topics, and it requires no manual labeling of users.", "Here, we use the same idea, but we combine it with supervised classification based on retweets in order to increase the number of labeled users (Darwish, 2018).", "Other methods for user stance detection include collective classification (Duan et al., 2012), where users in a network are jointly labeled and classification in a low-dimensional user-space (Darwish et al., 2017).", "As for predicting political leaning or sentiment, this problem was studied previously as a supervised learning problem, where a classifier learns from a set of manually labeled tweets (Pla and Hur-tado, 2014; Bakliwal et al., 2013; Bermingham and Smeaton, 2011).", "Similarly, Volkova et al. (2014) predicted Twitter users' political affiliation (being Republican or Democratic), using their network connections and textual information, relying on user-level annotations.", "We obtained data on eight topics that are considered polarizing in the USA (Darwish et al., 2020), shown in Table", "1. They include a mix of long-standing issues such as racism and gun control, temporal issues such as the nomination of Judge Brett Kavanaugh to the US Supreme Court and Representative Ilhan Omar's polarizing remarks, as well as non-political issues such as the potential dangers of vaccines.", "Further, though long-standing issues typically show right left polarization, stances towards Omar's remarks are not as clear, with divisions on the left as well.", "Since we are interested in US users, we filtered some tweets to retain such by users who have stated that their location was USA.", "We used a gazetteer that included words that indicate USA as a country (e.g., America, US), as well as state names and their abbreviations (e.g., Maryland, MD).", "Other data that we used in our experiments is a collection of articles that were cited by users from the tweets collection and that originate from media, whose bias is known, i.e., is discussed on the Media Bias/Fact Check website.", "In order to analyze the stance of influencers on a given topic, we first find the stances of Twitter users, and then we project them to the influencers that the users cite.", "A central (initial) assumption here is that if a user includes a link to some article in their tweet, they are more likely to agree or endorse the article's message.", "Similarly, when a user retweets a tweet verbatim without adding any comments, they are more likely to agree with that tweet.", "We label a large number of users with their stance for each topic using a two-step approach, namely projection and clustering and supervised classification .", "For the projection and clustering step, we identify clusters of core vocal users using the unsupervised method described in (Darwish et al., 2020).", "In this step, users are mapped to a lower dimensional space based on their similarity, and then they are clustered.", "After performing this unsupervised learning step, we train a supervised classifier using the two largest identified clusters in order to tag many more users.", "For that, we use FastText, a deep neural network text classifier, that has been shown to be effective for various text classification tasks (Joulin et al., 2017).", "Once we have expanded our sets of labeled users, we identify influencers that are most closely associated with each group using a modified version of the so-called valence score , which varies in value between 1 and", "1. If an influencer is being cited evenly between the groups, then it would be assigned a valence score close to zero.", "Conversely, if one group disproportionately cites an influencer compared to another group, then it would be assigned a score closer to 1 or", "1. We perform these steps for each of the given topics, and finally we summarize the stances across all topics.", "Below, we explain each of these steps in more detail.", "Given the tweets for each topic, we compute the similarity between the top 1,000 most active users.", "To compute similarity, we construct a vector for each user containing the number of all the accounts that a user has retweeted, and then we compute the pairwise cosine similarity between them.", "For example, if user A has only retweeted user B 3 times, user C 5 times and user E 8 times, then user A's vector would be (0, 3, 5, 0, 8, 0, 0, ... 0).", "Solely using the retweeted accounts as features has been shown to be effective for stance classification (Darwish et al., 2020; Magdy et al., 2016a).", "Finally, we perform dimensionality reduction and we project the users using Uniform Manifold Approximation and Projection (UMAP).", "When performing dimensionality reduction, UMAP places users on a two-dimensional plane such that similar users are placed closer together and dissimilar users are pushed further apart.", "Figure 2 shows the top users for the midterm topic projected with UMAP onto the 2D plane.", "After the projection, we use Mean Shift to cluster the users as shown in Figure", "2. This is the best setup described in (Darwish et al., 2020).", "Clustering high-dimensional data often yields suboptimal results, but can be improved by projecting to a low-dimensional space (Darwish et al., 2020).", "Since unsupervised stance detection is only able to classify the most vocal users, which only constitute a minority of the users, we wanted to assign stance labels to as many additional users as we can.", "Given the clusters of users that we obtain for each topic, we retain the two largest clusters for each topic, and we assign cluster labels to the users contained therein.", "Next, we use all the automatically labeled users for each topic to train a supervised classifier using the accounts that each user retweeted as features (same as the features we used to compute user similarity earlier).", "For classification, we train a FastText model using the default parameters, and then we classify all other users with five or more retweeted accounts, only accepting the classification if FastText was more than 80% confident (7090% yielded nearly identical results).", "In order to obtain a rough estimate of the accuracy of the model, we trained FastText using a random 80% subset of the clustered users for each topic and we tested on the remaining 20%.", "The accuracy was consistently above 95% for all topics.", "This does not mean that this model can predict the stance for all users that accurately the clustered users were selected to be the most active ones.", "Rather, it shows that the classifier can successfully capture what the previous, unsupervised step has already learned.", "Table 2 lists the total number of users who authored the tweets for each topic, the number of users who were automatically clustered using the aforementioned unsupervised clustering technique, and the number of users who were automatically labeled afterwards using supervised classification.", "Given that we applied unsupervised stance detection to the most active 1,000 users, the majority of the users appeared in the largest two clusters (shown in Table 2).", "Given all the labeled users for each topic, we computed a valence score for each influencer.", "As mentioned earlier, the valence score ranges between [ 1 , 1 ] , where a value close to 1 implies it is strongly associated with one group of users, 1 shows it is strongly associated with the other group of users, and 0 means that it is being shared or cited by both groups.", "The original valence score described by Conover et al. (2011a) is calculated as follows: V ( u ) = 2 t f ( u , C 0 ) total ( C 0 ) t f ( u , C 0 ) total ( C 0 ) + t f ( u , C 1 ) total ( C 1 ) 1 (1) where t f ( u , C 0 ) is the number of times (term frequency) item u is cited by group C 0 , and total ( C 0 ) is the sum of the term frequencies of all items cited by C 0 .", "t f ( u , C 1 ) and total ( C 1 ) are defined in a similar fashion.", "We use the above equation to compute valence scores for the retweeted accounts, but we using a modified version for calculating the score for influencers ( I ): V ( I ) = 2 t f ( I , C 0 ) total ( C 0 ) t f ( I , C 0 ) total ( C 0 ) + t f ( I , C 1 ) total ( C 1 ) 1 (2) where t f ( I , C i ) = a I (cid:84) C i [ ln ( Cnt ( a , C i )) + 1 ] total ( C i ) = I t f ( I , C i ) In the latter equation, Cnt ( a , C i ) is the number of times article a was cited by users from cluster C i .", "In essence, we are replacing term frequencies with the natural log of the term frequencies.", "We opted to modify the equation in order to tackle the following issue: if users from one of the clusters, say C 1 , cite only one single article from some media source a large number of times (e.g., 2,000 times), while users from the other cluster ( C 0 ) cite 10 other articles from the same media 50 times each, then using equation 1 would result in a valence score of 0.6.", "We would then regard the given media as having an opposing stance to the stance of users in C 0 .", "Alternatively, using the natural log would lead to a valence score close to 0.88.", "Thus, dampening term frequencies using the natural log has the desired effect of balancing between the number of articles being cited by each group and the total number of citations.", "We bin the valence scores between 1 and 1 into five equal size bands as follows: Cat ( V ) = , if s [ 1 , 0 . 6 ) , if s [ 0 . 6 , 0 . 2 ) 0 , if s [ 0 . 2 , 0 . 2 ) + , if s [ 0 . 2 , 0 . 6 ) ++ , if s [ 0 . 6 , 1 ] (3) 5 Characterizing the Influencers We use valence to characterize the leaning of all cited influencers for each of the topics.", "Table 3 shows the valence categories for the top-cited media sources across all topics.", "It also shows each media's factuality of reporting, i.e., trustworthiness, and bias (ranging from far-left to far-right) as determined by mediaBiasFactCheck.com .", "Since the choice of which cluster should be C 0 and which would be C 1 is arbitrary, we can multiply by 1 the valence scores for any topic and the meaning of the results would stay the same.", "We resorted to doing so for some topics in order to align the extreme valence bands across all topics.", "Given tweet samples from users in a given cluster for a given topic, labeling that cluster manually was straightforward with almost no ambiguity.", "Table 4 shows the most frequently cited media source for each topic and for each valence band.", "Of the 5,406 unique media sources that have been cited in tweets across all topics, 806 have known political bias from mediaBiasFactCheck.", "com .", "Figure 3 shows the confusion matrix between our valence categories and the goold labels from mediaBiasFactCheck.com .", "We notice that many of the media that have a negative valence score (categories and ) are classified on the right side of the political spectrum by mediaBiasFactCheck.com , while most media with positive scores (categories + and ++ ) are classified as slightly left-leaning.", "Although there are almost no extreme-left cases, there is a correlation between bias and our valence score.", "mediaBiasFactCheck.com seems to rarely categorize media sources as extreme-left.", "This could be a reflection of reality or it might imply that mediaBiasFactCheck.com has an inherent bias.", "We also computed the valence scores for the top-200 retweeted accounts, and we assigned each account a valence category based on the score.", "Independently, we asked a person who is well-versed with US politics to label all the accounts as left, center, or right.", "When labeling accounts, right-leaning include those expressing support for Trump, the Republican party, and gun rights, opposition to abortion, and disdain for Democrats.", "As for left-leaning accounts, they include those attacking Trump and the Republicans, and expressing support for the Democratic party and for Liberal social positions.", "If the retweeted account happens to be a media source, we used mediaBiasFactCheck.com .", "Table 5 compares the per-topic valence for each retweeted account along with the average category and the true label.", "It is noteworthy that all top-200 retweeted accounts have extreme valence categories on average across all topics.", "Their average valence scores, with one exception, appear between 0.6 and 1.00 for right, and between 0.6 and 1 for left (see Figure 4).", "Of those manually and independently tagged accounts, all that were tagged as left-leaning have a strong positive valence score and all that were tagged as right-leaning have a strong negative valence score.", "Only two accounts were manually labeled as center , namely Reuters and CSPAN, which is a US channel that broadcasts Federal Government proceedings, and they had valence scores of 0.55 and 0.28, respectively.", "Though their absolute values are lower than those of all other sources, they are mapped to the + valence category.", "Table 3 summarizes the valence scores for the media across all topics.", "Table 4 lists the most cited media sources for each topic and for each of the five valence bands.", "The order of the bands from top to bottom is: ++ , + , 0, and .", "The table also includes the credibility and the political leaning tags from mediaBiasFactCheck.com .", "The key observations from the table as follows:", "1. Most right-leaning media appear overwhelmingly in the and valence categories.", "Conversely, left-leaning media appear in all valence categories, except for the category.", "This implies that left-leaning users cite right-leaning media sparingly.", "We looked at some instances where right-leaning users cited left-leaning media, and we found that in many cases the cited articles reinforced a right-leaning viewpoint.", "For example, right-leaning users shared a video from thehill.com , a left-center site, 2,398 times for the police racism topic.", "The video defended Trump against charges of racism by Lynne Patton, a longtime African-American associate of Trump.", "2. Most right-leaning sources in the category have mixed, low, or very low factuality.", "Conversely, most left-leaning sites appearing in the valence category have high or very high factuality.", "Similarly for the vaccine topic, where high credibility sources, such as fda.gov and nih.gov , are frequently cited by anti-vaccine users, mostly to support their beliefs.", "3. The placements of sources in different categories are relatively stable across topics.", "For example, washingtonPost.com and theguardian.com exclusively appear in the ++ category, while breitbart.com and foxnews.com consistently appear in the category.", "Given the stances of users on the aforementioned eight topics, we leverage this information to predict media bias.", "Specifically, we describe in this section how we make use of the valence scores, as well as other features, namely graph and contextualized text embeddings, to train supervised classifiers for this purpose.", "Valence Scores.", "We use valence scores in two ways.", "First, we average the corresponding valence across the different polarizing topics to obtain an average valence score for a given target news medium.", "This is an unsupervised method for computing polarity.", "Second, we train a Logistic Regression classifier that uses the calculated valence scores as features and annotations from mediaBiasFactCheck.com as gold target labels in order to predict the general political leaning of a target news medium.", "We merged left and extreme left, and similarly we merged right and extreme right.", "We discarded media labeled as being left-center and right-center.", "Each news medium was represented by an 8-dimensional vector containing the valence scores for the above topics.", "In the experiments, we used the lbfgs solver and C = 0 .", "1. We used two measures to evaluate its performance, namely accuracy and mean absolute error (MAE).", "The latter is calculated by considering the different classes as ordered and equally distant from each other, i.e., if the model predicts right and the true label is left , this amounts to an error equal to", "2. climatechange guncontrol IlhanOmar immigration theguardian.com H L-C thehill.com H L-C washingtonpost.com H L-C theguardian.com H L-C washingtonpost.com H L-C cnn.com M L theguardian.com H L-C washingtonpost.com H L-C independent.co.uk H L-C nytimes.com H L-C mondoweiss.net H L cnn.com M L wef.ch npr.org VH L-C thinkprogress.org M L huffingtonpost.com H L vox.com H L washingtonpost.com H L-C haaretz.com H L-C npr.org VH L-C nytimes.com H L-C politico.com H L-C nytimes.com H L-C thehill.com H L-C bbc.com H L-C usatoday.com H L-C thehill.com H L-C nytimes.com H L-C cnn.com M L msn.com H L-C politico.com H L-C reuters.com VH C reuters.com VH C bbc.com H L-C cnn.com M L politico.com H L-C bloomberg.com H L-C cnbc.com H L-C apple.news usatoday.com H L-C thehill.com H L-C apple.news mediaite.com H L apple.news apple.news sun-sentinel.com H R-C usatoday.com H L-C msn.com H L-C npr.org VH L-C nypost.com M R-C yahoo.com M L-C pscp.tv seattletimes.com H L-C dailymail.co.uk VL R timesofisrael.com H L-C whitehouse.gov M R newsweek.com M L mailchi.mp theatlantic.com H L-C texastribune.org H C change.org H L washingtontimes.com H R-C nypost.com M R-C dailymail.co.uk VL R latimes.com H L-C breaking911.com VL jpost.com H C nypost.com M R-C dailymail.co.uk VL R chicagotribune.com H R-C dailymail.co.uk VL R zerohedge.com M climatechangedispatch.com rt.com M R-C algemeiner.com H R-C ir.shareaholic.com cnbc.com H L-C forbes.com M R-C startribune.com H L-C breaking911.com VL forbes.com M R-C breitbart.com VL FarR foxnews.com M R breitbart.com VL FarR breitbart.com VL FarR foxnews.com M R breitbart.com VL FarR illegalaliencrimereport.com dailycaller.com M R ammoland.com H R townhall.com M R washingtonexaminer.com H R tambonthongchai.com dailycaller.com M R change.org H L foxnews.com M R wattsupwiththat.com L bearingarms.com M R hannity.com westernjournal.com M R midterm police&racism Kavanaugh vaccine washingtonpost.com H L-C washingtonpost.com H L-C thehill.com H L-C thehill.com H L-C theguardian.com H L-C rawstory.com M L washingtonpost.com H L-C theguardian.com H L-C rawstory.com M L huffingtonpost.com H L cnn.com M L washingtonpost.com H L-C tacticalinvestor.com theguardian.com H L-C nytimes.com H L-C vaxopedia.org vox.com H L nytimes.com H L-C huffingtonpost.com H L nytimes.com H L-C thehill.com H L-C thehill.com H L-C politico.com H L-C cnn.com M L reuters.com VH C apple.news apple.news statnews.com H C nytimes.com H L-C cnn.com M L yahoo.com M L-C latimes.com H L-C cnn.com M L nbcnews.com H L-C apnews.com VH C cbc.ca H L-C dailykos.com M L thedailybeast.com H L latimes.com H L-C usatoday.com H L-C apple.news msn.com H L-C usatoday.com H L-C cdc.gov VH sagagist.com.ng pscp.tv mediaite.com H L medium.com M L-C bbc.com H L-C bloomberg.com H L-C theweek.com H L-C newsroom.fb.com alzwaaj.com politics.theonion.com lawandcrime.com help.senate.gov washingtonexaminer.com H R rollcall.com VH C cnbc.com H L-C msn.com H L-C dailymail.co.uk VL R mediaite.com H L pscp.tv change.org H L pbs.org H L-C dailymail.co.uk VL R nypost.com M R-C fda.gov zerohedge.com M news.sky.com H L-C ir.shareaholic.com variety.com ajc.com H L-C newsone.com H L-C rollcall.com VH C veritablenouvelordre.forumcanada.org aol.com H L-C c-span.org VH C breitbart.com VL FarR breitbart.com VL FarR foxnews.com M R ncbi.nlm.nih.gov VH foxnews.com M R defensemaven.io truepundit.com L vaccineimpact.com dailycaller.com M R foxnews.com M R dailycaller.com M R naturalnews.com M ilovemyfreedom.org VL FarR thegatewaypundit.com VL FarR breitbart.com VL FarR vaccines.me westernjournal.com M R nypost.com M R-C thegatewaypundit.com VL FarR thevaccinereaction.org Table 4: Top 5 websites per valence category for each topic.", "The results are shown in Table 6, where we can see that using the average valence score yields 68.0% accuracy (0.330 MAE) compared to 75.2% accuracy (0.278 MAE) when using the eight individual valence scores as features.", "Graph embeddings.", "We further use graph embeddings, generated by building a User-to-Hashtag graph (U2H) and a User-to-Mention (U2M) graph and then running node2vec on both (Atanasov et al., 2019), producing two types of graph embeddings.", "When using graph embeddings, we got worse results compared to our previous setup with valence scores (see Table 6).", "However, when we combine them with the valence scores, we observe a sizable boost in performance, up to 11% absolute.", "Tweets.", "We also experimented with BERT-base.", "We used the text of the tweets that cite the media we are classifying.", "For classification, we fed BERT representations of tweets to a dense layer with softmax output to fine-tune it with the textual contents of the tweets.", "We trained at the tweet level, and we averaged the scores (from softmax) for all tweets from the same news medium to obtain an overall label for that news medium.", "The accuracy is much lower than for the valence scores: 64.0% accuracy vs. 75.2% for supervised and 68.0% for unsupervised.", "Article titles and text.", "Using the BERT setup for Tweets , we used the titles and the full text of up to 100 articles from each of the target media.", "When using the full text of articles, we balanced the number of articles per news medium.", "We trained two separate BERT models, one on the titles and another one on the full text (content).", "Both models did worse than using valence alone, but the combination improved over valence only.", "System Combination.", "We combined different setups including using all the aforementioned models in combination.", "Using graph embeddings (GraphH + GraphM) with BERT embeddings (Tweet+Title+Content) and valence yielded the best results with accuracy of 82.6% and MAE of .206.", "If we remove valence from the combination, the accuracy drops by 4.5% while MAE jumps by .078, absolute.", "This suggests that valence is a very effective feature that captures important information, complementary to what can be modeled using graph and contextualized text embeddings.", "We have presented a method for predicting the general political leaning of media sources and popular Twitter users, as well as their stances on specific polarizing topics.", "Our method uses retweeted accounts, and a combination of dimensionality reduction and clustering algorithms, namely UMAP and Mean Shift, in order to produce sets of users that have opposing opinions on specific topics.", "Next, we expand the discovered sets using supervised learning that is trained on the automatically discovered user clusters.", "We are able to automatically tag large sets of users according to their stance of preset topics.", "Users' stances are then projected to the influencers that are being cited in the tweets for each of the topics using the so-called valence score .", "The projection allows us to tag a large number of influencers with their stances on specific issues and with their political leaning in general (i.e., left vs. right ) with high accuracy and with minimal human effort.", "The main advantage of our method is that it does not require manual labeling of entity stances, which requires both topical expertise and time.", "We also investigated the quality of the valence features, and we found that valence scores help to predict media bias with high accuracy.", "In future work, we plan to increase the number of topics that we use to characterize media.", "Ideally, we would like to automatically identify such polarizing topics.", "Doing so would enable us to easily retarget this work to new countries and languages.", "This research is part of the Tanbih project 1 , which aims to limit the effect of fake news, propaganda and media bias by making users aware of what they are reading.", "1 http://tanbih.qcri.org/ References Amjad Abu-Jbara, Ben King, Mona Diab, and Dragomir Radev." ]
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "result", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "other", "other" ]
[ "Pragmatic inferences often subtly depend on the presence or absence of linguistic features.", "For example, the presence of a partitive construction ( of the ) increases the strength of a so-called scalar inference: listeners perceive the inference that Chris did not eat all of the cookies to be stronger after hearing Chris ate some of the cookies than after hearing the same utterance without a partitive, Chris ate some cookies .", "In this work, we explore to what extent neural network sentence encoders can learn to predict the strength of scalar inferences.", "We first show that an LSTM-based sentence encoder trained on an English dataset of human inference strength ratings is able to predict ratings with high accuracy ( r = 0 . 78 ).", "We then probe the model's behavior using manually constructed minimal sentence pairs and corpus data.", "We find that the model inferred previously established associations between linguistic features and inference strength, suggesting that the model learns to use linguistic features to predict pragmatic inferences.", "An important property of human communication is that listeners can infer information beyond the literal meaning of an utterance.", "One well-studied type of inference is scalar inference (Grice, 1975; Horn, 1984), whereby a listener who hears an utterance with a scalar item like some infers the negation of a stronger alternative with all : (1)", "Early accounts of scalar inferences (e.g., Gazdar 1979; Horn 1984; Levinson 2000) considered them to arise by default unless explicitly contradicted in context.", "However, in a recent corpus study, Degen (2015) showed that there is much more variability Equal contribution.", "in scalar inferences from some to not all than previously assumed.", "Degen (2015) further showed that this variability is not random and that several lexical, syntactic, and semantic/pragmatic features of context explain much of the variance in inference strength.", "1 Recent Bayesian game-theoretic models of pragmatic reasoning (Goodman and Frank, 2016; Franke and Jager, 2016) are able to integrate speaker expectations with world knowledge to predict listeners' pragmatic inferences in many cases (e.g., Goodman and Stuhlmuller 2013; Degen et al. 2015).", "However, to compute speaker expectations, these models require manual specification of features as well as specification of a finite set of possible utterances.", "Further, inference becomes intractable when scaling up beyond toy domains to make predictions for arbitrary utterances.", "2 Neural network (NN) models, on the other hand, do not suffer from these limitations: they are capable of making predictions for arbitrary utterances and do not require manual specification of features.", "Unlike Bayesian game-theoretic models, however, NN models have no explicit pragmatic reasoning mechanisms.", "In this work, we investigate to what extent NN models can learn to predict subtle differences in scalar inferences and to what extent these models infer associations between linguistic features and 1 See Section 2 for the operationalization of inference strength that we use throughout this paper and for a description of these features.", "2 Recent models of generating pragmatic image descriptions (Andreas and Klein, 2016; Cohn-Gordon et al., 2018) and color descriptions (Monroe et al., 2017) have overcome this issue by approximating the distributions of utterances given a set of potential referents.", "However, these models require a finite set of world states (e.g., several referents to choose from) and a corresponding generative model of utterances (e.g., an image captioning model) and are therefore also limited to scenarios with pre-specified world states and a corresponding generative model.", "inference strength.", "In this enterprise we follow the recent successes of NN models in predicting a range of linguistic phenomena such as long distance syntactic dependencies (e.g., Elman 1990; Linzen et al. 2016; Gulordava et al. 2018; Futrell et al. 2019; Wilcox et al. 2019), semantic entailments (e.g., Bowman et al. 2015; Conneau et al. 2018), acceptability judgements (Warstadt et al., 2019b), factuality (Rudinger et al., 2018), negative polarity item licensing environments (Warstadt et al., 2019a), and, to some extent, speaker commitment (Jiang and de Marneffe, 2019a).", "In particular, we ask: 1. How well can a neural network sentence encoder learn to predict human inference strength judgments for utterances with some ?", "2. To what extent does such a model capture the qualitative effects of hand-mined contextual features previously identified as influencing inference strength?", "To address the first question, we compare the performance of several NN models that differ in the underlying word embedding model (GloVe, ELMo, or BERT).", "To address the second question, we probe the best model's behavior through an analysis of predictions on manually constructed minimal sentence pairs, a regression analysis, and an analysis of attention weights.", "We find that the best model is able to predict inference strength ratings on a held-out test set with high accuracy ( r = 0 . 78 ).", "The three analyses consistently suggest that the model learned associations between inference strength and linguistic features established by previous work (Degen, 2015).", "We use the annotated dataset collected by Degen (2015), a dataset of the utterances from the Switchboard corpus of English telephone dialogues (God-frey et al., 1992) with a noun phrase (NP) with some .", "The dataset consists of 1,362 unique utterances.", "For each example with a some -NP, Degen (2015) collected inference strength ratings from at least 10 participants recruited on Amazon's Mechanical Turk.", "Participants saw both the target utterance and ten utterances from the preceding discourse context.", "They then rated the similarity between the original utterance like (2a) and an utterance in which some was replaced with some, but not all like (2b), on a 7-point Likert scale with endpoints labeled very different meaning (1) and same meaning (7).", "Low similarity ratings thus indicate low inference strength, and high similarity ratings indicate high inference strength.", "(2)", "a. I like I like to read some of the philosophy stuff.", "b. I like I like to read some, but not all, of the philosophy stuff.", "Using this corpus, Degen (2015) found that several linguistic and contextual factors influenced inference strength ratings, including the partitive form of , subjecthood, the previous mention of the NP referent, determiner strength, and modification of the head noun, which we describe in turn.", "Partitive: (3a-b) are example utterances from the corpus with and without partitive some -NPs, respectively.", "Values in parentheses indicate the mean inference strength rating for that item.", "On average, utterances with partitives yielded stronger inference ratings than ones without.", "Subjecthood: Utterances in which the some -NP appears in subject position, as in (4a), yielded stronger inference ratings than utterances in which the some -NP appears in a different grammatical position, e.g., as a direct object as in (4b).", "Previous mention: Discourse properties also have an effect on inference strength.", "A some -NP with a previously mentioned embedded NP referent yields stronger inferences than a some -NP whose embedded NP referent has not been previously mentioned.", "For example, (5a) contains a some -NP in which them refers to previously mentioned Mission Impossible tape recordings , whereas problems in the some -NP in (5b) has not been previously mentioned.", "Modification: Degen (2015) also found a small effect of whether or not the head noun of the some NP was modified: some -NPs with unmodified head nouns yielded slightly stronger inferences than those with modified head nouns.", "Determiner strength: Finally, it has been argued that there are two types of some : a weak some and a strong some (Milsark, 1974; Barwise and Cooper, 1981).", "This weak/strong distinction has been notoriously hard to pin down (Horn, 1997) and Degen (2015) used empirical strength norms elicited independently for each item.", "To this end, she exploited the fact that removing weak some from an utterance has little effect on its meaning whereas removing strong some changes the meaning.", "Determiner strength ratings were thus elicited by asking participants to rate the similarity between the original utterance and an utterance without some (of) on a 7-point Likert scale from different meaning' to same meaning'.", "Items with stronger some e.g., (6a), determiner strength 3.3 yielded stronger inference ratings than items with weaker some e.g., (6b), determiner strength 6.7.", "(6)", "a. And some people don't vote.", "(5.2)", "b. Well, we could use some rain up here.", "(2.1)", "The quantitative findings from Degen (2015) are summarized in Figure 4, which shows in blue the regression coefficients for all predictors she considered (see the original paper for more detailed descriptions).", "For our experiments, we randomly split the dataset into a 70% training and 30% test set, resulting in 954 training items and 408 test items.", "The objective of the model is to predict mean inference strength rating i given an utterance (a sequence of words) U = { w 1 , w 2 , ..., w N } .", "We rescale the 1-to-7 Likert scale ratings to the interval [0 , 1] .", "Figure 1 shows the overall model architecture.", "The model is a sentence classifica-tion model akin to the model proposed by Lin et al. (2017).", "It first embeds the utterance tokens using pre-trained embedding models, and then forms a sentence representation by passing the embedded tokens through a 2-layer bidirectional LSTM network (biLSTM) (Hochreiter and Schmidhuber, 1997) with dropout (Srivastava et al., 2014) followed by a self-attention mechanism that provides a weighted average of the hidden states of the topmost biLSTM layer.", "This sentence representation is then passed through a transformation layer with a sigmoid activation function, which outputs the predicted score in the interval [0 , 1] .", "We used 5-fold cross-validation on the training data to optimize the following hyperparameters.", "Word embedding model : 100d GloVe (Penning-ton et al., 2014), 1024d ELMo (Peters et al., 2018; Gardner et al., 2018), 768d BERT-base, 1024d BERT-large (Devlin et al., 2019; Wolf et al., 2019).", "Output layer of word embedding models : [1 , 3] for ELMo, [1 , 12] for BERT-base, and [1 , 24] for BERT-large.", "Dimension of LSTM hidden states : { 100 , 200 , 400 , 800 } .", "We first optimized the output layer parameter for each contextual word embedding model while keeping all other parameters fixed.", "We then optimized the other parameters for each embedding model by computing the average correlation between the model predictions and the human ratings across the five cross-validation folds.", "Architectural variants.", "We also evaluated all combinations of two architectural variants: First, we evaluated models in which we included the attention layer (LSTM+A TTENTION ) or simply used the final hidden state of the LSTM (LSTM) as a sentence representation.", "Second, since participants providing inference strength ratings also had access to 10 utterances from the preceding conversational context, we also compared models that make predictions based only the target utterance with the some -NP and models that make predictions based on target utterances and the preceding conversational context.", "For the models using GloVe and ELMo, we prepended the conversational context to the target utterance to obtain a joint context and utterance embedding.", "For models using BERT, we made use of the fact that BERT had been trained to jointly embed two sentences or documents, and we obtained embeddings for the tokens in the target utterance by feeding the target utterance as the first document and the preceding context as the second document into the BERT encoder.", "We discarded the hidden states of the preceding context and only used the output of BERT for the tokens in the target utterance.", "Implementation details.", "We implemented the model in PyTorch (Paszke et al., 2017).", "We trained the model using the Adam optimizer (Kingma and Ba, 2015) with default parameters and a learning rate of 0.001, minimizing the mean squared error of the predicted ratings.", "In the no-context experiments, we truncated target utterances longer than 30 tokens, and in the experiments with context, we truncated the beginning of the preceding context such that the number of tokens did not exceed 150.", "Evaluation.", "We evaluated the model predictions in terms of their correlation r with the human inference strength ratings.", "As mentioned above, we optimized the hyperparameters using cross validation.", "We then took the best set of parameters and trained a model on all the available training data and evaluated that model on the held-out data.", "Not surprisngly, we find that the attention layer improves predictions and that contextual word embeddings lead to better results than the static GloVe embeddings.", "We also find that including the conversational context does not improve predictions (see Appendix A, for learning curves of all models, and Section 6, for a discussion of the role of conversational context).", "Otherwise, the model is quite insensitive to hyperparameter settings: neither the dimension of the hidden LSTM states nor the dropout rate had considerable effects on the prediction accuracy.", "We do find, however, that there are differences depending on the BERT and ELMo layer that we use as word representations.", "We find that higher layers work better than lower layers, suggesting that word representations that are influenced by other utterance tokens are helpful for this task.", "Based on these optimization runs, we chose the model with attention that uses the BERT-large embeddings but no conversational context for the subsequent experiments and analyses.", "Figure 2 shows the correlation between the best model according to the tuning runs (now trained on all training data) and the empirical ratings on", "the 408 held-out test items.", "As this plot shows, the model predictions fall within a close range of the empirical ratings for most of the items ( r = 0 . 78 ).", "3 Further, similarly as in the empirical data, there seem to be two clusters in the model predictions: one that includes lower ratings and one that includes higher ratings, corresponding to strong and weak scalar inferences, respectively.", "The only systematic deviation appears to be that the model does not predict any extreme ratings almost all predictions are greater than 2 or less than 6, whereas the empirical ratings include some cases outside of this range.", "Overall, these results suggest that the model can learn to closely predict the strength of scalar inferences.", "However, this result by itself does not provide evidence that the model learned associations between linguistic features and inference strength, since it could also be that, given the large number of parameters, the model learned spurious correlations independent of the empirically established feature-strength associations.", "To investigate whether the model learned the expected associations, we probed the model's behavior in multiple ways, which we discuss next.", "Minimal pair analysis.", "As a first analysis, we constructed artificial minimal pairs that differed along several factors of interest and compared the model predictions.", "Such methods have been recently used to probe, for example, what kind of 3 For comparison, we estimated how well the human ratings correlated through a bootstrapping analysis: We re-sampled the human ratings for each item and computed the average correlation coefficient between the original and the re-sampled datasets, which we found to be approximately 0.93.", "syntactic dependencies different types of recurrent neural network language models are capable of encoding or to what extent sentence vector representations capture compositional meanings (e.g., Linzen et al. 2016; Gulordava et al. 2018; Chowdhury and Zamparelli 2018; Ettinger et al. 2018; Marvin and Linzen 2018; Futrell et al. 2019; Wilcox et al. 2019), and also allow us to probe whether the model is sensitive to controlled changes in the input.", "We constructed a set of 25 initial sentences with some -NPs.", "For each sentence, we created 32 variants that differed in the following four properties of the some -NP: subjecthood, partitive, pre-nominal modification, and post-nominal modification.", "For the latter three features, we either included or excluded of the or the modifier, respectively.", "For example, the version in (7a) includes of the whereas the version in (7b) lacks the partitive feature.", "To manipulate subjecthood of the some -NP, we created variants in which some was either the determiner in the subject NP as in (7) or in the object-NP as in (8).", "We also created passive versions of each of these variants (9-10).", "Each set of sentences included a unique main verb, a unique pair of NPs, and unique modifiers.", "The full list of sentences can be found in Appendix C. (7)", "a. Some of the (organic) farmers (in the mountains) milked the brown goats who graze on the meadows.", "b. Some (organic) farmers (in the mountains) milked the brown goats who graze on the meadows.", "(8) The organic farmers in the mountains milked some (of the) (brown) goats (who graze on the meadows).", "(9) The brown goats who graze on the meadows were milked by some (of the) (organic) farmers (in the mountains).", "(10)", "Some (of the) (brown) goats (who graze on the meadows) were milked by the organic farmers in the mountains.", "Figure 3 shows the model predictions for the manually constructed sentences grouped by the presence of a partitive construction, the grammatical function of the some -NP, and the presence of a modifier.", "As in the natural dataset from Degen (2015), sentences with a partitive received higher predicted ratings than sentences without a partitive; sentences with subject some -NPs received higher predicted ratings than sentences with nonsubject some -NPs; and sentences with a modified head noun in the some -NP received lower predictions than sentences with an unmodified some -NP.", "All these results provide evidence that the model learned the correct associations.", "This is particularly remarkable considering the train-test mismatch: the model was trained on noisy transcripts of spoken language that contained many disfluencies and repairs, and was subsequently tested on clean written sentences.", "Regression analysis.", "In the minimal pair analysis above we only investigated model predictions for three factors.", "As a second analysis, we therefore investigated whether the predictions of the best neural network model explain the variance explained by the linguistic features that modulate inference strength.", "To this end, we used a slightly simplified 4 Bayesian implementation of the mixed-effects model by Degen (2015) that predicted inference strength ratings from hand-mined features.", "We used the brms (Burkner, 2017) and STAN (Car-penter et al., 2017) packages and compared this original model to an extended model that included both all of the predictors of the original model as well as the the output of the above NN model as a predictor.", "For this comparison, we investigated whether the magnitude of a predictor in the original model significantly decreased in the extended model with the NN predictor, based on the reasoning that if the NN predictions explain the variance previously explained by these manually coded pre-4 We removed by-item random intercepts and by-subject random slopes to facilitate inference.", "This simplification yielded almost identical estimates as the original model by Degen (2015).", "Regression model original model extended model", "dictors, then the original predictor should explain no or less additional variance.", "We approximated the probability that the magnitude of the coefficient for the predictor i ( i ) in the extended model including the NN predictor is smaller than the coefficient in the original model, P ( | extendedi | < | originali | ) , by sampling values for each coefficient from the distributions of the original and the extended models and comparing the magnitude of the sampled coefficients.", "We repeated this process 1,000,000 times and treated the simulated proportions as approximate probabilities.", "An issue with this analysis is that estimating the regression model only on the items in the held-out test set yields very wide credible intervals for some of the predictorsin particular for some of the interactionssince the model infers these values from very little data.", "We therefore performed this regression analysis (and the subsequent analyses) on the entire data.", "However, while we estimated the regression coefficients from all the data, we crucially obtained the NN predictions through 6-fold cross-validation (without additional tuning of hyperparameters), so that the NN model always made predictions on data that it had not seen during training.", "This did yield the same qualitative results as the analyses only performed on the held-out test items (see Appendix B) but it also provided us with narrower credible intervals that highlight the differences between the coefficient estimates of the two models.", "Figure 4 shows the estimates of the coefficients in the original model and the extended model.", "We find that the NN predictions explain some or all of the variance originally explained by many of the manually coded linguistic features: the estimated magnitude of the predictors for partitive, determiner strength, linguistic mention, subjecthood, modification, utterance length, and two of the interaction terms decreased in the extended model.", "These results provide additional evidence that the NN model indeed learned associations between linguistic features and inference strength rather than only explaining variance caused by individual items.", "This is particularly true for the grammatical and lexical features; we find that the NN predictor explains most of the variance originally explained by the partitive, subjecthood, and modification predictors.", "More surprisingly, the NN predictions also explain a lot of the variance originally explained by the determiner strength predictor, which was empirically determined by probing human interpretation and is not encoded explicitly in the surface form utterance.", "5 One potential explanation for this is that strong and weak some have different context distributions.", "For instance, weak some occurs in existential there constructions and with individual-level predicates, whereas strong some tends not to (Milsark, 1974; McNally and Geenhoven, 1998; Carlson, 1977).", "Since pre-trained word embedding models capture a lot of distributional information, the NN model is presumably able to learn this association.", "5 As explained above, Degen (2015) obtained strength ratings by asking participants to rate the similarity of the original utterance and an utterance without the determiner some (of) .", "Attention weight analysis.", "As a final type of analysis, we analyzed the attention weights that the model used for combining the token embeddings to a sentence embedding.", "Attention weight analyses have been successfully used for inspecting and debugging model decisions (e.g., Lee et al., 2017; Ding et al., 2017; Wiegreffe and Pinter, 2019; Vashishth et al., 2019; but see Serrano and Smith, 2019, and Jain and Wallace, 2019, for critical discussions of this approach).", "Based on these results, we expected the model to attend more to tokens that are relevant for making predictions.", "6 Given that many of the hand-mined features that predict inference strength occur within or in the vicinity of the some -NP, we should therefore expect the model to attend most to the some -NP.", "To test this, we first explored whether the model attended on average more to some than to other tokens in the same position.", "Further, we exploited the fact that in English, subjects generally occur early in a sentence.", "If the model attends to the vicinity of the some -NP, the average attention weights should be higher at early positions in utterances with a sub-6 As pointed out by one of the reviewers, given the transformer architecture, BERT token representations are influenced by numerous tokens of the input sentence and therefore it could be that the output representation of the i -th token ultimately contains very little information about the i -th token that was input to the model.", "Consequently, it could be that the attention weights do not provide information about which tokens the model attends to.", "To rule out this possibility, we also conducted the attention weight analysis for the model using static GloVe embeddings, which always exclusively represent the input token, and we found the same qualitative patterns as reported in this section, suggesting that the attention weights provide information about the tokens that are most informative for making predictions.", "Nevertheless, we do want to caution researchers from blindly trusting attention weight analyses and recommend using this type of analysis only in combination with other types of analyses as we have done in this work.", "ject some -NP compared to utterances with a nonsubject some -NP, and conversely for late utterance positions.", "We thus compared the average attention weights for each position across utterances with subject versus non-subject some -NPs.", "To make sure that any effects were not only driven by the attention weight of the some -tokens, we set the attention weights of the token corresponding to some to 0 and re-normalized the attention weights for this analysis.", "Further, since the attention weights are dependent on the number of tokens in the utterance, it is crucial that the average utterance length across the two compared groups be matched.", "We addressed this by removing outliers and limiting our analysis to utterances up to length 30 (1,028 ut-terances), which incidentally equalized the number of tokens across the two groups.", "These exclusions resulted in tiny differences in the average attention weights, but the qualitative patterns are not affected.", "The left panel of Figure 5 shows the average attention weight by position for some versus other tokens.", "The model assigns much higher weight to some .", "The center panel of Figure 5 shows the average attention weight by position for subject vs. non-subject some -NP utterances.", "The attention weights are generally higher for tokens early in the utterance, 7 but the attention weights of utterances with a subject some -NP are on average higher for tokens early in the utterance compared to utterances with the some -NP in non-subject positions.", "Both of these findings provide evidence that the model assigns high weight to the tokens within and 7 This is in part an artifact of shorter utterances which distribute the attention weights among fewer tokens.", "In a more targeted analysis to assess whether the model learned to use the partitive feature, we examined whether the model assigned higher attention to the preposition of in partitive some -NPs compared to when of occurred elsewhere.", "As utterance length was again a potential confound, we conducted the analysis separately on the full set of utterances with raw attention weights and on a subset that included only utterances with at least two instances of of (128 utterances), in which we renormalized the weights of of -tokens to sum to 1. Results are shown in the right panel of Figure 5. The attention weights were higher for of tokens in partitive some -NPs, suggesting that the model learned an association between partitive of in some NPs and inference strength.", "In the tuning experiments above, we found that including the preceding conversational context in the input to the model did not improve or lowered prediction accuracy.", "9 At the same time, we found that the model is capable of making accurate predictions in most cases without taking the preceding context into account.", "Taken together, these results suggest either that the conversational context is not necessary and one can draw inferences from the target utterance alone, or that the model does not make adequate use of the preceding context.", "Degen (2015) did not systematically investigate whether the preceding conversational context was used by participants judging inference strength.", "To assess the extent to which the preceding context in this dataset affects inference strength, we re-ran her experiment, but without presenting participants with the preceding conversational context.", "We recruited 680 participants on Mechanical Turk who 8 The regression analysis suggests that the model learned to make use of the subjecthood feature and previous work on probing behavior of contextual word representations has found that such models are capable of predicting dependency labels, including subjects (e.g., Liu et al., 2019).", "We therefore also hypothesize that the representations of tokens that are part of a subject some -NP contain information about the subjecthood status.", "This in return could be an important feature for the output layer of the model and therefore be providing additional signal for the model to attend to these tokens.", "9 As suggested by a reviewer, we conducted post-hoc experiments in which we limited the conversational context to the preceding 2 or 5 utterances, which presumably have a higher signal-to-noise ratio than a larger conversational context of 10 preceding utterances.", "In these experiments, we again found that including the conversational context did not improve model predictions.", "each judged 20 or 22 items, yielding 10 judgments per item.", "If the context is irrelevant for drawing inferences, then mean inference strength ratings should be very similar across the two experiments, suggesting the model may have rightly learned to not utilize the context.", "If the presence of context affects inference strength, ratings should differ across experiments, suggesting that the model's method of integrating context is ill-suited to the task.", "The new, no-context ratings correlated with the original ratings ( r = 0 . 68 , see Appendix D) but were overall more concentrated towards the center of the scale, suggesting that in many cases, participants who lacked information about the conversational context were unsure about the strength of the scalar inference.", "Since the original dataset exhibited more of a bi-modal distribution with fewer ratings at the center of the scale, this suggests that the broader conversational context contains important cues to scalar inferences.", "For our model, these results suggest that the representation of the conversational context is inadequate, which highlights the need for more sophisticated representations of linguistic contexts beyond the target utterance.", "10 We further find that the model trained on the original dataset is worse at predicting the no-context ratings ( r = 0 . 66 ) than the original ratings ( r = 0 . 78 ), which is not surprising considering the imperfect correlation between ratings across experiments, but also provides additional evidence that participants indeed behaved differently in the two experiments.", "We showed that despite lacking specific pragmatic reasoning abilities, neural network-based sentence encoders are capable of harnessing the linguistic signal to learn to predict human inference strength ratings from some to not all with high accuracy.", "Further, several model behavior analyses provided consistent evidence that the model learned associations between previously established linguistic features and the strength of scalar inferences.", "In an analysis of the contribution of the conversational context, we found that humans make use of the preceding context whereas the models we considered failed to do so adequately.", "Considering the 10 The representation of larger linguistic context is also important for span-based question-answer (QA) systems (e.g., Hermann et al., 2015; Chen, 2018; Devlin et al., 2019) and adapting methods from QA to predicting scalar inferences would be a promising extension of the current model.", "importance of context in drawing both scalar and other inferences in communication (Grice, 1975; Clark, 1992; Bonnefon et al., 2009; Zondervan, 2010; Bergen and Grodner, 2012; Goodman and Stuhlmuller, 2013; Degen et al., 2015), the development of appropriate representations of larger context is an exciting avenue for future research.", "We also only considered the supervised setting in which the model was trained to predict inference strength.", "It would be interesting to investigate how much supervision is necessary and, for example, to what extent a model trained to perform another task such as predicting natural language inferences is able to predict scalar inferences (see Jiang and de Marneffe (2019b) for such an evaluation of predicting speaker commitment, and Jeretic et al. (2020) for an evaluation of different NLI models for predicting lexically triggered scalar in-ferences).", "One further interesting line of research would be to extend this work to other pragmatic inferences.", "Recent experimental work has shown that inference strength is variable across scale and inference type (Doran et al., 2012; van Tiel et al., 2016).", "We treated some as a case study in this work, but none of our modeling decisions are specific to some .", "It would be straightforward to train similar models for other types of inferences.", "Lastly, the fact that the attention weights provided insights into the model's decisions suggests possibilities for using neural network models for developing more precise theories of pragmatic language use.", "Our goal here was to investigate whether neural networks can learn associations for already established linguistic features but it would be equally interesting to investigate whether such models could be used to discover new features, which could then be verified in experimental and corpus work, potentially providing a model-driven approach to experimental and formal pragmatics.", "We thank the anonymous reviewers for their thoughtful feedback.", "We also gratefully acknowledge Leyla Kursat for collecting the no-context inference strength ratings, and we thank Jesse Mu, Shyamal Buch, Peng Qi, Marie-Catherine de Marneffe, Tal Linzen, and the members of the ALPS lab and the JHU Computational Psycholinguistics group for helpful discussions." ]
[ "abstain", "abstain", "objective", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "method", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "method", "other", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "It is generally believed that a translation memory (TM) should be beneficial for machine translation tasks.", "Unfortunately, existing wisdom demonstrates the superiority of TM-based neural machine translation (NMT) only on the TM-specialized translation tasks rather than general tasks, with a non-negligible computational overhead.", "In this paper, we propose a fast and accurate approach to TM-based NMT within the Transformer framework: the model architecture is simple and employs a single bilingual sentence as its TM, leading to efficient training and inference; and its parameters are effectively optimized through a novel training criterion.", "Extensive experiments on six TM-specialized tasks show that the proposed approach substantially surpasses several strong baselines that use multiple TMs, in terms of BLEU and running time.", "In particular, the proposed approach also advances the strong baselines on two general tasks (WMT news Zh En and En De).", "A translation memory (TM) is originally collected from the translation history of professional translators, and provides the most similar source-target sentence pairs for the source sentence to be translated (Garcia, 2009; Koehn and Senellart, 2010b; Utiyama et al., 2011; Robinson, 2012; Huang et al., 2021).", "A TM generally provides valuable translation information particularly for those input sentences preferably matching the source sentences in the TM, and many efforts have been devoted to integrating a TM into statistical machine translation (Simard and Isabelle, 2009; Koehn and Senellart, 2010a; Ma et al., 2011; Wang et al., 2013; Liu et al., 2019).", "TM (Li et al., 2016; Farajian et al., 2017; Gu et al., 2018; Xia et al., 2019; Bulte and Tezcan, 2019; Xu et al., 2020).", "Many notable approaches have been proposed to augment an NMT model by using a TM.", "For example, Zhang et al. (2018) and He et al. (2019) extract scored n-grams from a TM and then reward each partial translation once it matches an extracted n-gram during beam search.", "Gu et al. (2018) and Xia et al. (2019) use an auxiliary network to encode a TM and then integrate it into the NMT architecture.", "Bulte and Tezcan (2019) and Xu et al. (2020) employ data augmentation to train an NMT model whose training instances are bilingual sentences augmented by their TMs.", "Despite their improvements on the TM-specialized translation tasks (aka JRC-Acquis corpora) where a TM is very similar to test sentences, they consume considerable computational overheads in either training or testing, and particularly it is unclear whether they can deliver gains over standard NMT on general tasks where a TM is not very similar to test sentences.", "Indeed, both Zhang et al. (2018) and Xu et al. (2020) reported their failures on WMT news translation tasks.", "In this paper, we present a fast and accurate approach for TM-based NMT which can be applied to general translation tasks besides TM-specialized tasks.", "We first design a light-weight TM-based NMT model for efficiency: its TM includes a single bilingual sentence and we explore variant ways to encode the TM.", "Also, the designed model outperforms strong TM-based baselines.", "Second, we deeply analyze its translation performance and observe an issue of robustness : it decreases significantly for those input sentences which are not very similar to their TMs, although it obtains substantial improvements for other inputs.", "To address this issue, we propose a novel training criterion for optimizing the parameters of our model inspired by multiple-task learning (van Dyk and Meng, 2001; Ben-David and Borbely, 2008; Qiu et al., 2013).", "The loss function includes two terms: the first term is induced by the bilingual corpus with a TM whereas the second term is induced by the bilingual corpus without any TM.", "In this way, the TM-based NMT model gains better performance and is robust to translate any input sentences no matter they are similar to their TM or not.", "Additionally, this makes it possible that a single unified model can handle both translation situations (with or without a TM), which is practical for online services.", "To validate the effectiveness of the proposed approach, we conduct extensive experiments on eight translation tasks including both TM-specialized tasks and general tasks (WMT).", "Our experiments justify that the proposed approach is better than several strong TM-based baselines in speed, and it further delivers substantial gains (up to 4.7 BLUE points) over those baselines on TM-specialized tasks, leading to up to 8.5 BLEU points over standard Transformer-based NMT.", "In particular, it also outperforms strong baselines on two general translation tasks, i.e., with a gain of 0.7 BLEU points on WMT14 En De task and 1.0 BLEU point on WMT17 Zh En task.", "This paper makes the following contributions: It points out a critical issue about robustness when training TM-based NMT models and provides an elegant method to address this issue.", "It proposes a simple TM-based NMT model that outperforms strong TM-based baselines in terms of both translation quality and speed.", "It verifies that a well-designed TM-based translation model is able to advance strong MT baselines on general translation tasks where a TM is not very similar to input source sentences.", "Suppose x = { x 1 , ..., x n } is a source sentence and y = { y 1 , ..., y m } is the corresponding target sentence.", "From the probabilistic perspective, NMT models the conditional probability of the target sentence y given the source sentence x .", "Formally, for a given x , NMT aims to generate the output y according to the conditional probability P ( y | x ) defined by neural networks: P ( y | x ) = m (cid:89) i =1 P ( y i | x , y <i ) (1) where y <i = { y 1 , . . . , y i 1 } denotes a prefix of y , and each factor P ( y i | x , y <i ) is defined as follows: P ( y i | x , y <i ) = softmax (cid:16) ( h D,Li ) (cid:17) (2) where h D,Li indicates the i th hidden unit at L th layer in the D ecoding phrase under the encoder-decoder framework (Bahdanau et al., 2016), and is a linear network that projects hidden units onto vectors with dimension of the target vocabulary.", "Recently, self-attention networks have attracted many interests due to their flexibility in parallel computation and modeling h D,Li .", "The state-of-the-art NMT model is Transformer (Vaswani et al., 2017), which uses stacked self-attention and fully connected layers for its encoder and decoder.", "Self-attention relies on an attention mechanism to compute a representation of a sequence.", "In Transformer, there are three kinds of attention mechanisms, including encoder multi-head attention, decoder masked multi-head attention and encoder-decoder multi-head attention.", "Attention with H heads can be calculated by the equations: MH-Att ( q, u ) = (cid:20) Att ( q, j ( u ) , j ( u )) (cid:21) H j =1 , Att ( q, u , v ) = softmax (cid:18) q u (cid:62) d (cid:19) v (3) where q is a query vector and u is a two-dimensional matrix, [ u j ] Hj =1 denotes concatenation of all vectors u j , j and j stand for two linear projections from one matrix to another matrix, respectively.", "The 1 d is the scaling factor, and d is the dimension of q .", "And we refer enthusiastic readers to Vaswani et al. (2017) for detailed definitions.", "In this section, in order to preferably bridge TM and NMT, we propose the architecture of TM-based NMT within the Transformer.", "To make our proposed model fast in running time and powerful in quality, at first, we present a configuration of TM to make the proposed model efficient.", "Then we explore three different methods to encode the TM into a sequence of vectors in a coarse-to-fine manner.", "Finally, we propose the architecture that decodes a target word given an input source sentence and its TM representation.", "sentence x we employ Apache Lucene (Bialecki et al., 2020) to retrieve top-100 similar bilingual sentences from the training data.", "Then we adopt the following similarity to re-rank the retrieved bilingual sentences and maintain topK ( K < 100 ) bilingual sentences as the TM for x : sim ( x , x tm ) = 1 dist ( x , x tm ) max( | x | , | x tm | ) (4) where dist denotes the edit-distance, and x tm is a retrieved source sentence from the training data and its reference is y tm .", "Previous studies show that the best translation quality is achieved when the size K of the TM is larger than", "1. For example, the optimized K is set to be 5 in Gu et al. (2018) and Xia et al. (2019), and it is even set to be 100 in Zhang et al. (2018).", "Unfortunately, such a large K significantly decreases the translation speed because the computational complexity is linear in the size of K .", "To make our inference as efficient as possible, we set K = 1 and employ the most similar bilingual sentence denoted by (cid:104) x tm , y tm (cid:105) as the TM for x .", "1 3.2 Encoding TM In this subsection, we will describe how to encode the TM (cid:104) x tm , y tm (cid:105) into a sequence of vectors m .", "Three variant methods for encoding a TM are illustrated in the right part of Figure", "1. Method 1: sentence (TF-S) Given (cid:104) x tm , y tm (cid:105) for x , the first method utilizes word embedding and position embedding of y tm to represent m as follows: m = E tm = [ E w ( y 1 tm ) + E p ( y 1 tm ) , , E w ( y J (cid:48) tm ) + E p ( y J (cid:48) tm )] (5) where E w and E p are word embedding and position embedding respectively, J (cid:48) is the length of y tm and the symbol + denotes a simple addition operator.", "Method 2: sentence with score (TF-SS) The first method is agnostic to the similarity score.", "Intuitively, if a TM (cid:104) x tm , y tm (cid:105) is with high similarity, y tm may be more helpful to predict a good translation.", "So, the second method takes the similarity score into account and it defines m as follows: m = s tm E tm (6) where s tm = sim ( x , x tm ) is the similarity score and the symbol denotes the scalar-multiplication.", "Method 3: sentence with alignment (TF-SA) As shown in Figure 1, x tm consists of the matched parts (in orange color) and the unmatched parts (in dark color) to x .", "Since each word in the TM is not of the same importance to the source sentence x , we should pay more attention to the words that are in the matched parts.", "So, we further obtain the word alignment between x tm and y tm through fast-align toolkit (Dyer et al., 2013).", "2 Suppose A tm is the word alignment between x tm and y tm : A jtm = 1 denotes y j is aligned to some x i otherwise A jtm = 0 , where x i is also in x .", "Therefore, the third method defines m as follows: m = A tm (cid:0) s tm E tm (cid:1) (7) where the symbol denotes an operator between a vector and a matrix such that m j = (cid:40) s tm E jtm if A jtm = 0 E jtm if A jtm = 1 (8) 3.3 TM Augmented NMT Suppose the encoded TM (cid:104) x tm , y tm (cid:105) is denoted by m , a sequence of vectors.", "We aim to build a model P ( y i | x , y <i , m ) for the source sentence x , given the m and prefix translation y <i at time step i , leading to the entire translation model: P ( y | x , x tm , y tm ; ) = (cid:89) i P ( y i | x , y <i , m ) (9) where denotes the parameter of our proposed model.", "3 Example Layer The model architecture of P ( y i | x , y <i , m ) is illustrated at the left part of Figure 1, where its architecture is generally similar to standard Transformer and the core component is the Example Layer.", "Specifically, the Example Layer includes two multi-head attention operators: the left multi-head attention (i.e. MH-Att ( y <i , y <i )) is the same as Transformer, and it is defined on the prefix translation y <i ; the right multihead attention (i.e. MH-Att ( y <i , y tm )) attempts to capture information from the TM, and its query is from y <i while key and value are from the representation of TM m .", "After the two parallel attention operators, two resulting sequences are passed to Add & Norm operator and a new sequence is obtained as the query for the next multi-head attention (i.e. MH-Att ( y <i , x )).", "The following sub-layer is the same as Transformer and P ( y i | x , y <i , m ) can be obtained similar to the definition of standard NMT P ( y i | x , y <i ) as presented in Section", "2. We skip those formal equations to rewrite P ( y i | x , y <i , m ) due to space limitation.", "2 Although some advanced word alignment toolkits (Dou and Neubig, 2021; Chen et al., 2021; Jalili Sabet et al., 2020) may lead to better performance, we still employ fast-align to be in line with previous work for fair comparison (Zhang et al., 2018; Xia et al., 2019).", "3 In the rest of this paper, we may drop in the model for easier notations.", "In summary The entire model architecture is illustrated in Figure 1: the dashed box in the right part shows the memory encoder, and the left part shows how the memory representation is used in the NMT model similar to the Transformer.", "In our model architecture, the encoder block contains two sub-layers and the decoder block contains three sub-layers.", "The core sub-layer in the decoder block is our proposed Example Layer, which consists of multi-head attention and cross attention.", "By introducing the memory encoder and Example Layer, the parameters in our model are increased only by 8.96% compared to the standard NMT baseline.", "Suppose the training corpus is D = {(cid:104) x i , y i , x itm , y itm (cid:105) | i [1 , N ] } , where (cid:104) x i , y i (cid:105) is a bilingual sentence, and (cid:104) x itm , y itm (cid:105) is the related TM which consists of a single bilingual sentence.", "Our goal is to learn the parameter of the TM-based NMT model P ( y | x , x tm , y tm ; ) defined in", "Eq.(9) using D .", "The common wisdom is to optimize the parameter under the maximum likelihood estimation (MLE), i.e. standard training.", "Formally, it minimizes the following criterion: N (cid:88) i log P ( y i | x i , x itm , y itm ; ) .", "Robustness issue Unfortunately, the model trained with MLE suffers from an issue about robustness even if its overall performance is much better than standard Transformer and outperforms TM-based baselines on the Es En task.", "According to our experiments (see Table 4 later), our proposed model performs worse than the Transformer for those sentences which do not have a similar TM.", "As a result, it would be dangerous to use the model for online services because users may provide an input sentence whose TM is not similar to itself.", "The possible reason for the above issue is explained as follows.", "On the average case, the reference y is strongly correlated to its TM target y tm in the training corpus D .", "For example, the average similarity score is about 0 .", "58 for Es En translation task, according to our statistics.", "Because of the powerful fitting ability of neural networks, the model parameters will be guided to heavily depend on the given TM target y tm during training.", "In this way, if an input source sentence x has a high similarity with its given TM, the model will output high-quality results, as we also observed in Table 5.", "On the contrary, once an input sentence is provided with a low similar TM (cid:104) x tm , y tm (cid:105) (for instance, the similarity between 0 and 0.3, as shown in Table 4), the translation quality of its output rapidly decreases.", "Training criterion In order to avoid the TM over-fitting, we propose a simple yet elegant method, inspired by data augmentation (van Dyk and Meng, 2001; Li et al., 2019; Zhong et al., 2020) and multiple-task learning (Ben-David and Borbely, 2008; Qiu et al., 2013; Liu et al., 2016).", "Specifically, we first construct another corpus D 0 = {(cid:104) x i , y i , null , null (cid:105) | i [1 , N ] } from D = {(cid:104) x i , y i , x itm , y itm (cid:105) | i [1 , N ] } .", "In the constructed corpus, (cid:104) null , null (cid:105) plays a role of a TM, but both source and target sides of the TM are empty sentences.", "4 Then we train the model P ( y | x , x tm , y tm ; ) using both D and D 0 , i.e. joint training, which is similar to multiple-task learning.", "Formally, we minimize the following joint loss function: (cid:96) ( D , D 0 ; ) = N (cid:88) i (cid:16) log P ( y i | x i , x itm , y itm ; ) + log P ( y i | x i , null , null ; ) (cid:17) (10) where 0 < is a coefficient to trade off both loss terms.", "Intuitively, the first term induced by D guides the model to use the information from a TM for prediction, and thereby it will generate accurate translations for those input source sentences whose TM is with high similarity.", "On the other hand, the second term induced by D 0 teaches the model to output good translations without information from a TM.", "Additionally, this makes it possible that a single unified model can handle both translation scenarios (with or without a TM), which is practical for online services.", "Note that the proposed method is slightly different from standard data augmentation (Sennrich et al., 2016a; Fadaee et al., 2017; Fadaee and Monz, 2018; Wang et al., 2018) and multiple-task learning (Dong et al., 2015; Kiperwasser and Balles-teros, 2018; Wang et al., 2020) in NMT research.", "These data augmentation techniques automatically generate pseudo data based on the original training data and then train a model using both original and generated data.", "However, the dataset D 0 is 4 In the experiments, we implement null as the sentence including a single word, i.e. (cid:104) eos (cid:105) .", "directly taken from the original D in our scenario.", "Also, multiple-task learning in their works typically involves different models that share some partial parameters rather than all parameters.", "In contrast, both terms in our joint loss correspond to the same task, i.e. translation prediction given a source sentence and its TM; and both models are exactly the same.", "The detailed joint training algorithm is presented in Algorithm", "1. It follows the standard gradient descent method for optimization.", "Note that in line 2 and 3, it samples two mini-batches which do not share the same bilingual sentences to promote diversity, i.e., D and D 0 are independently and randomly sampled.", "In our experiments, we employ Adam (Kingma and Ba, 2014) with default settings as the learning rate schema.", "In this section, we validate the effectiveness of the proposed approach: robustness for handling both translation situations (with or without a TM), running efficiency compared with the previous TM-based NMT models, translation quality on both TM-specialized tasks and general MT tasks.", "We use the case-insensitive BLEU score as the automatic metric (Papineni et al., 2002) for the translation quality evaluation.", "TM-specialized tasks We evaluate our proposed models with the JRC-Acquis corpora, which include three language pairs and lead to six translation tasks in total: English German (En De),", "English Spanish (En Es) and English French (En Fr).", "To compare with previous work, we adopt the same splitting of training/dev/test and pre-processing as Gu et al. (2018), Zhang et al. (2018), and Xia et al. (2019).", "General tasks The proposed models are evaluated on the widely-used general WMT tasks: WMT14 English-to-German (En De) and WMT17 Chinese-to-English (Zh En) tasks.", "For the En De task, we use newstest2013 as the development set, as well as employ newstest2014 and newstest2017 as the test sets.", "For the Zh En task, we employ newsdev2017 and newstest2017 as the development and test set respectively.", "Table 1 summarizes the data statistics for both TM-specialized and general tasks.", "In addition, we employ Byte Pair Encoding (BPE) (Sennrich et al., 2016b) on all the tasks mentioned before.", "model with the strong baselines as follows: TF (Vaswani et al., 2017): it is the standard Transformer.", "TF-P (Zhang et al., 2018): it is reimplemented on top of Transformer by ourselves.", "TF-G (Xia et al., 2019) and TF-SEQ (Gu et al., 2018): TF-SEQ is a mimic implementation over Transformer by Xia et al. (2019).", "We report the results from Xia et al. (2019) since they were also implemented over Transformer as comparison.", "FM + (Xu et al., 2020): since Xu et al. (2020) adopt a different split on JRC corpus, the results are not comparable to ours.", "For a fair comparison, we re-implement a strong model FM + as a baseline which makes use of the same metric to retrieve a TM as ours and is better than the method in Bulte and Tezcan (2019).", "Our models In the case of the three methods proposed in this paper, TF-S , TF-SS and TF-SA refer to the method encoding TM by the sentence, sentence with score, and sentence with alignment, respectively.", "We optimize their parameters through both standard training and joint training.", "For joint training, the hyperparameter is set to be 1 for all translation tasks.", "System configuration For a fair comparison, we employ the same settings to train all baselines and our models, and the learning rate for all models is Adam with the default hyper-parameters.", "The details of the settings are shown in Table", "2. 5.2 Results and Analysis on Es En Task Standard training and robustness issue We first evaluate the proposed models under the standard training criterion.", "Table 3 shows the comparison among different TM encoding methods for our models.", "From this table, we can see that our models achieve substantial improvements over Transformer (TF) which does not use any TM, even if our models are simple and only utilize a single bilingual sentence in the TM.", "TF-SA performs better than TF-S and TF-SS thanks to the fine-grained alignment information encoded in the TM.", "Also, TF-SA outperforms all TM-based baselines by at least 1.0 BLEU point, compared with Table 6.", "In addition, we exploit the influence of our models on the similarity of a TM.", "We thereby divide the test dataset into ten subsets according to the similarity score and report the results in Table 4.", "We find that the gains of our models over the TF baseline are mainly from those sentences whose TMs are with relatively high similarity.", "To our surprise, our models perform worse than TF on the subset with relatively low similarity except the subset with the lowest similarity.", "5 This result demonstrates that our models with standard training are not robust to similarity scores, as deeply explained in the previous section.", "Joint training Luckily the robustness issue can be fixed well by joint training, as depicted in the right part of Table 4.", "We can see that our model is better than the baseline TF on the subset of [0 , 0 .", "3) , and it substantially outperforms TF on the subset of [0 .", "3 , 1) .", "With the help of joint training, TF-SA delivers gains of 1.2 BLEU points over standard training, and gains of 5.7 BLEU points over the strong TF baseline on the entire test set.", "Therefore, in the rest of the experiments, we employ joint training to set up all of our models because it is robust to the low similarity of TMs.", "Without TM or with Ref as TM The situation without any TM and the situation with reference as a TM are more extreme cases of the robustness issue.", "As reported in Table 5, if a perfect TM is 5 We further check these two exceptional sentences and find that they are very short in length.", "In particular, their word alignment results from the fast-align toolkit are very good, which may be beneficial to our proposed model.", "This might be the reason why our proposed model advances the baseline Transformer.", "provided to our models, they can yield excellent translation results.", "Besides, the proposed methods are not inferior to the standard Transformer when no TM is provided.", "As a result, the proposed model makes it possible that a single unified model can handle both translation situations (with or without a TM), which is practical for online services.", "Noisy TM To validate whether the model works well with noisy TMs, we also conduct a quick experiment by adding noises to TM for the test set by randomly replacing words in the target side of TM with incorrect words.", "After replacing one and two words, the proposed TF-SA achieves 68.17 BLEU points and 67.94 BLEU points, respectively.", "Both results are slightly worse than the noise-free TF-SA (68.49) but still better than the best TM baseline (66.21).", "Note that both results are obtained without retraining TF-SA model with noisy TM.", "This fact demonstrates our model is even robust to noisy TMs and thus it is useful for the online TM.", "Comparison with baselines Table 6 illustrates the results between the proposed model TF-SA and the baselines.", "It is clearly shown that TF-SA surpasses all TM-based baselines with a substantial margin.", "In details, TF-SA outperforms TF-P and TF-SEQ by about 3.2 BLEU points, FM + by about 2.6 BLEU points, and the strong baseline TF-G by about 2.2 BLEU points.", "Running time Since all TM-based models employ the same retrieval metric and their retrieval Time(s) TF TF-P TF-SEQ TF-G FM + TF-S TF-SS TF-SA Train 3727 17841 7074 7720 4350 4361 4518 Test 0.30 0.71 1.91 0.55 0.33 0.39 0.40 0.41 Table 7: Running time comparison on Es En task.", "time is exactly the same, we only report the running time of all TM-based NMT models excluding retrieval time in Table", "7. As reported in this table, our proposed model further saves significant running time over TF-SEQ and TF-G for both training and testing, besides achieving better translation performance.", "In addition, although it requires slight overhead in training, its testing is more efficient than TF-P; and our training is faster than FM + .", "The experimental results of all the systems on the six translation tasks of TM-specialized datasets are reported in Table", "8. Several observations can be made from the results.", "First, the baseline TF-P and TF-G achieve substantial gains over the strong baseline TF, outperforming by [1.1, 4.1] BLEU points.", "This result is in line with the finding in Zhang et al. (2018) and Xia et al. (2019).", "Second, on the basis of that, compared with the strongest baseline TF-G, our proposed TF-S, TF-SS and TF-SA can obtain further gains up to 4.9 BLEU points, at least 1.2 BLEU points.", "It is important to mention that all previous TM-based approaches failed in getting notable improvements on the general WMT datasets.", "Since Xia et al. (2019) did not conduct experiments on the WMT datasets and their implementation is not released, we compare our models with two baselines: TF and TF-P.", "Our experimental results on the general WMT datasets are reported in Table", "9. As we BLEU WMT En De WMT Zh En news13 news14 news17 dev17 test17 TF 26.18 27.93 26.82 22.52 24.12 TF-P 26.26 27.79 26.70 22.65 24.17 TF-S 26.56 28.13 26.61 22.88 24.22 TF-SS 27.02 28.22 27.19 23.85 25.12 TF-SA 26.66 28.66 27.48 23.65 25.03 Table 9: Translation accuracy in terms of BLEU on the general WMT tasks.", "can see, the method TF-P is only comparable to the baseline NMT, which is in line with the observation in Zhang et al. (2018).", "In contrast, our models perform well on these tasks.", "Our best model gains about 0.7 BLEU points on the En De and 1.0 BLEU point on the Zh En task, over both baselines on average.", "The experimental results demonstrate that a TM based translation model can advance strong MT baselines on general translation tasks where a TM is not very similar to input source sentences.", "What's more, as shown in Table 5, our models can get excellent translation results while a perfect TM is provided.", "In a summary, based on the above extensive experimental results, our proposed models substantially surpass several baselines on TM-specialized tasks and general tasks, in terms of BLEU and running time.", "In the statistical machine translation (SMT) diagram, Koehn and Senellart (2010a) extract bilingual segments from a TM which matches the source sentence to be translated, and employ a heuristic score to decide whether the extracted segments should be used as decoding constraints or not, then hardly constrain SMT to decode for those unmatched parts of the source sentence.", "Ma et al. (2011) design a fine-grained classifier, rather than the heuristic score, to predict the score for making more reliable decisions.", "Simard and Isabelle (2009), Wang et al. (2013) and Wang et al. (2014) add the extracted bilingual segments to the translation table of SMT, and then bias the decoder in a soft constraint manner when decoding the source sentence with the augmented translation table.", "Liu et al. (2012) use the retrieved bilingual sentences to update the parameters for the log-linear model based SMT.", "In recent years, many efforts are made on neural machine translation (NMT) associated with a TM.", "Li et al. (2016) and Farajian et al. (2017) make full use of the retrieved TM sentence pairs to fine-tune the pre-trained NMT model on-the-fly.", "The most obvious drawback of fine-tuning is that the delay is too long for testing sentences.", "To avoid the online tuning process, Zhang et al. (2018) and He et al. (2019) dynamically integrate translation pieces, based on n -grams extracted from the matched segments in the TM target, into the beam search stage.", "The second type of approach is efficient but heavily depends on the global hyper-parameter , which is sensitive to the development set, leading to inferior performance.", "Recently, there are notable approaches for the sake of further excavation on TM-based NMT.", "Bulte and Tezcan (2019) and Xu et al. (2020) propose data augmentation approaches by augmenting input sentences with a TM which do not modify the NMT model architecture.", "Gu et al. (2018) and Xia et al. (2019) employ an auxiliary network to encode TMs and integrate it into the NMT architecture.", "Our model architecture is simpler than Gu et al. (2018) and Xia et al. (2019) and we encode a single TM target sentence and utilize simple attention mechanisms on the TM.", "And the architecture is more efficient and leads to a faster translation speed compared with Gu et al. (2018) and Xia et al. (2019).", "In particular, we propose a novel training criterion to make the TM-based NMT model more robust in different translation situations (with or without a TM).", "In parallel with our work, Cai et al. (2021) extend the translation memory from the bilingual setting to the monolingual setting through a cross-lingual retrieval technique, and Khandel-wal et al. (2021) report significant improvements in quality on general translation tasks as ours, but their inference speed is two orders of magnitude slower than Transformer because they perform contextual word retrieval whose search space is much larger than that of sentence retrieval.", "This paper presents a simple TM-based NMT model that employs a single bilingual sentence as", "its TM and thus is fast in training and inference.", "Although the presented model with the standard training outperforms strong TM-based baselines, it suffers from a robustness issue: its performance highly depends on the similarity of a TM.", "To address this issue, we propose a novel training criterion inspired by multiple-task learning and data augmentation.", "Experiments on TM-specialized tasks demonstrate its superiority over strong baselines in terms of running time and BLEU.", "Also, it is shown that a TM-based NMT model can advance the strong Transformer on general translation tasks like WMT.", "This work is supported by NSFC (grant No. 61877051).", "We thank Jiatao Gu and Mengzhou Xia for providing their preprocessed datasets.", "We also thank the anonymous reviewers for providing valuable suggestions and feedbacks." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "objective", "other", "method", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "Natural language processing (NLP) research combines the study of universal principles, through basic science, with applied science targeting specific use cases and settings.", "However, the process of exchange between basic NLP and applications is often assumed to emerge naturally, resulting in many innovations going unapplied and many important questions left unstudied.", "We describe a new paradigm of Translational NLP , which aims to structure and facilitate the processes by which basic and applied NLP research inform one another.", "Translational NLP thus presents a third research paradigm, focused on understanding the challenges posed by application needs and how these challenges can drive innovation in basic science and technology design.", "We show that many significant advances in NLP research have emerged from the intersection of basic principles with application needs, and present a conceptual framework outlining the stakeholders and key questions in translational research.", "Our framework provides a roadmap for developing Translational NLP as a dedicated research area, and identifies general translational principles to facilitate exchange between basic and applied research.", "Natural language processing (NLP) lies at the intersection of basic science and applied technologies.", "However, translating innovations in basic NLP methods to successful applications remains a difficult task in which failure points often appear late in the development process, delaying or preventing potential impact in research and industry.", "Application challenges range widely, from changes in data distributions (Elsahar and Gall, 2019) to computational bottlenecks (Desai et al., 2020) and integration with domain expertise (Rahman et al., 2020).", "When unanticipated, such challenges can be fatal to applications of new NLP methodologies, leaving exciting innovations with minimal practical Applications Linguistics Modeling I den t i f i e s ne w r e s ea r c h que s t i on s D i sc o v e r s unde r l y i ng pheno m ena Guides model development E n a b l e s a p p li c a t i o n g o a l s D r i v e m ode li ng need s and c on s t r a i n t s P r o v i de s e v i den c e f o r li ngu i s t i c t heo r y Translational NLP Figure 1: Interactions between linguistic theory, model development, and applications in NLP research.", "impact.", "Meanwhile, real-world applications may rely on regular expressions (Anzaldi et al., 2017) or unigram frequencies (Slater et al., 2017) when more sophisticated methods would yield deeper insight.", "When successful translations of basic NLP insights into practical applied technologies do occur, the factors contributing to this success are rarely analyzed, limiting our ability to learn how to enable the next project and the next technology.", "We argue for a third kind of NLP research, which we call Translational NLP .", "Translational NLP research aims to understand why one translation succeeds while another fails, and to develop general, reusable processes to facilitate more (and easier) translation between basic NLP advances and real-world application settings.", "Much NLP research already includes translational insights, but often considers them properties of a specific application rather than generalizable findings that can advance the field.", "This paper illustrates why general principles of the translational process enhance mutual exchange between linguistic inquiry, model development, and application research (illustrated in Figure 1), and are key drivers of NLP advances.", "We present a conceptual framework for Translational NLP, with specific elements of the translational process that are key to successful applications, each of which presents distinct areas for research.", "Our framework provides a concrete path for designing use-inspired basic research so that research products can effectively be turned into practical technologies, and provides the tools to understand why a technology translation succeeds or fails.", "A translational perspective further enables factorizing grand challenge research questions into clearly-defined pieces, producing intermediate results and driving new basic research questions.", "Our paper makes the following contributions: We characterize the stakeholders involved in the process of translating basic NLP advances to applications, and identify the roles they play in identifying new research problems (3.1).", "We present a general-purpose checklist to use as a starting point for the translational process, to help integrate basic NLP innovations into applications and to identify basic research opportunities arising from application needs (3.2).", "We present a case study in the medical domain illustrating how the elements of our Translational NLP framework can lead to new challenges for basic, applied, and translational NLP research (4).", "A long history of distinguishing between basic and applied research (Bush, 1945; Shneiderman, 2016) has noted that these terms are often relative; one researcher's basic study is the application of an-other's theory.", "In practice, basic and applied research in NLP are endpoints of a spectrum, rather than discrete categories.", "As use-inspired research, most NLP studies incorporate elements of both basic and applied research.", "We therefore define our key terms for this paper as follows: Basic research Basic NLP research is focused on universal principles: linguistically-motivated study that guides model design (e.g., Recasens and Hovy (2009) for coreference, Kouloumpis et al. (2011) for sentiment analysis), or modeling techniques designed for general use across different settings and genres.", "Basic research tends to focus on one problem at a time, and frequently leverages established datasets to provide a well-controlled environment for varying model design.", "Basic NLP research is intended to take the long view: it takes the time to investigate fundamental questions that may yield rewards for years to come.", "Applied research Applied NLP research studies the intersection of universal principles with specific settings; it is responsive to the needs of commercial applications or researchers in other domains.", "Applied research utilizes real-world datasets, often specialized, and involves sources of noise and unreliability that complicate capturing linguistic regularities of interest.", "Applications often involve tackling multiple interrelated problems, and demand complex combinations of tools (e.g. using OCR followed by NLP to analyze scanned documents).", "Applied research is concrete and immediate, but may also be reactive and have a limited scope.", "Translational research The term translational is used in medicine to describe research that aims to transform advances in basic knowledge (biological or clinical) to applications to human health (Butte, 2008; Rubio et al., 2010).", "Translational research is a distinct discipline bridging basic science and applications (Pober et al., 2001; Reis et al., 2010).", "We adopt the term Translational NLP to describe research bridging the gap between basic and applied NLP research, and aiming to understand the processes by which each informs the other.", "Section 4 presents one in-depth example; other salient examples include comparing the efficacy of domain adaptation methods for different application domains (Naik et al., 2019) and developing reusable software for processing specific text genres (Neu-mann et al., 2019).", "Translational research occupies a middle ground in the timeframe and complexity of solutions: it develops processes to rapidly and effectively integrate new innovations into applications to address emerging needs, and facilitates integration between pipelines of NLP tools.", "In addition to forward motion of basic innovations into practical applications, the needs of real-world applications also provide significant opportunities for new fundamental research.", "Shnei-derman's model of two parents, three children (Shneiderman, 2016) provides an informative picture: combining a practical problem and a theoretical model yields (1) a solution to the problem, (2) a refinement of the theory, and (3) guidance for future research.", "Tight links between basic research and applications have driven many major advances in NLP, from machine translation and dialog systems to search engines and question answering.", "Designing research with application needs in mind is a key impact criterion for both funding agencies (Christianson et al., 2018) and industry (Spector et al., 2012), and helps to identify new, high-impact research problems (Shneiderman, 2018).", "The NLP field has always lain at the nexus of basic and applied research.", "Application needs have driven some of the most fundamental developments in the field, leading to explosions in basic research in new topics and on long-standing challenges.", "The need to automatically translate Russian scientific papers in the early years of the Cold War led to some of the earliest NLP research, creating the still-thriving field of machine translation (Slocum, 1985).", "Machine translation has since helped drive many significant advances in basic NLP research, from the adoption of statistical models in the 1980s (Dorr et al., 1999) to neural sequence-to-sequence modeling (Sutskever et al., 2014) and attention mechanisms (Bahdanau et al., 2015).", "Similarly, the rapid growth of the World Wide Web in the 1990s created an acute need for technologies to search the growing sea of information, leading to the development of NLP-based search engines such as Lycos (Mauldin, 1997), followed by PageRank (Page et al., 1999) and the growth of Google.", "The need to index and monetize vast quantities of textual information led to an explosion in information retrieval research, and the NLP field and ever-growing web data continue to co-develop.", "In a more recent example, IBM identified automated question answering (QA) as a new business opportunity in a high-information world, and developed the Watson project (Ferrucci et al., 2010).", "Watson's early successes catapulted QA into the center of NLP research, where it has continued to drive both novel technology development and benchmark evaluation datasets used in hundreds of basic NLP studies (Rajpurkar et al., 2016).", "These and other examples illustrate the key role that application needs have played in driving innovation in NLP research.", "This reflects not only the history of the field but the role that integrating basic and applied research has in enriching scientific en-deavor (Stokes, 1997; Branscomb, 1999; Narayana-murti et al., 2013; Shneiderman, 2016).", "An integrated approach has been cited by both Google (Spector et al., 2012) and IBM (McQueeney, 2003) as central to their successes in both business and research.", "The aim of our paper is to facilitate this integration in NLP more broadly, through presenting a rubric for studying and facilitating the process of getting both to and back from application.", "For an operational definition of Translational NLP, it is instructive to consider four phases of a generic workflow for tackling a novel NLP problem using supervised machine learning.", "1 First, a team of NLP experts works with subject matter experts (SMEs) to identify appropriate corpora, define concepts to be extracted, and construct annotation guidelines for the target task.", "Second, SMEs use these guidelines to annotate natural language data, using iterative evaluation, revision of guidelines, and re-annotation to converge on a high-quality gold standard set of annotations.", "Third, NLP experts use these annotations to train and evaluate candidate models of the task, joined with SMEs in a feedback loop to discuss results and needed revisions of goals, guidelines, and gold standards.", "Finally, buy-in is sought from SMEs and practitioners in the target domain, in a dialogue informed by empirical results and conceptual training.", "NLP adoption in practice identifies failure cases and new information needs, and the process begins again.", "This laborious process is needed because of the gaps between expertise in NLP technology and expertise in use cases where NLP is applied.", "NLP expertise is needed to properly formulate problems, and subsequently to develop sound and generalizable solutions to those problems.", "However, for uptake (and therefore impact) to occur, these solutions must be based in deep expertise in the use case domain, reified in a computable manner through annotation or knowledge resource development.", "These distinct forms of expertise are generally found in different groups of individuals with complementary perspectives (see e.g. Kruschwitz and Hull (2017)).", "Given this gap, we define Translational NLP as the development of theories, tools, and processes to enable the direct application of advanced NLP 1 While workflows will vary for different classes of NLP problems, dialogue between NLP experts and subject matter experts is at the heart of developing almost all NLP solutions.", "tools in specific use cases.", "Implementing these tools and processes, and engaging with basic NLP experts and SMEs in their use, is the role of the Translational NLP scientist.", "Although every use case has unique characteristics, there are shared principles in designing NLP solutions that undergird the whole of the research and application process.", "These shared translational principles can be adopted by basic researchers to increase the impact of NLP methods innovations, and guide the translational researcher in developing novel efforts targeting fundamental gaps between basic research and applications.", "The framework presented in this paper identifies common variables and asks specific questions that can drive this research.", "For examples of this process in practice, it is valuable to examine NLP development in the medical domain.", "Use-inspired NLP research has a long history in medicine (Sager et al., 1982; Ranum, 1989), frequently with an eye towards practical applications in research and care.", "Chapman et al. (2011) highlight shared tasks as a key step towards addressing numerous barriers to application of NLP on clinical notes, including lack of shared datasets, insufficient conventions and standards, limited reproducibility, and lack of user-centered design (all factors presenting basic research opportunities, in addition to NLP task improvement).", "Several efforts have explored the development of graphical user interfaces for conducting NLP tasks, including creation and execution of pipelines (Cunningham, 2002; D'Avolio et al., 2010, 2011; Soysal et al., 2018), although these efforts generally do not report on evaluation of usability by non-NLP experts.", "Usability has been investigated by other studies involving more focused tools aimed at specific NLP tasks, including concept searching (Hultman et al., 2018), annotation (Gobbel et al., 2014b,a), and interactive review of and update of text classification models (Trivedi et al., 2018, 2019; Savelka et al., 2015).", "Recent research has utilized interactive NLP tools for processing cancer research (Deng et al., 2019) and care (Yala et al., 2017) documents.", "By constructing, designing, and evaluating tools designed to simplify specific NLP processes, these efforts present examples of Translational NLP.", "We present a conceptual framework for Translational NLP, to formalize shared principles describing how basic and applied research interact to create", "create NLP solutions.", "Our framework codifies fundamental variables in this process, providing a roadmap for negotiating the design of methodological innovations with an eye towards potential applications.", "Although it is certainly not the case that every basic research advance must be tied to a downstream application need, designing foundational technologies for potential application from the beginning produces more robust technologies that are easier to transfer to practical settings, increasing the impact of basic research.", "By defining common variables, our framework also provides a structure for aligning application needs to basic technologies, helping to identify potential failure points and new research needs early for faster adoption of basic NLP advances.", "1. A definition of broad classes of stakeholders in translating basic NLP innovations into applications, including the roles that each stakeholder plays in defining and guiding research;", "2. A checklist of fundamental questions to structure the Translational NLP process, and to guide identification of basic research opportunities in specific application cases.", "NLP applications involve three broad categories of stakeholders, illustrated in Figure", "2. Each contributes differently to technology implementation and identifying new research challenges.", "NLP Experts NLP researchers bring key analytic skills to enable achieving the goals of an applied system.", "NLP experts provide methodological sophistication in models and paradigms for analyzing language, and an understanding of the nature of language and how it captures information.", "NLP researchers provide much-needed data expertise , including skills in obtaining, cleaning, and formatting data for machine learning and evaluation, as well as conceptual models for representing information needs .", "NLP scientists identify research opportunities in modeling information needs, bringing linguistic knowledge into the equation, and developing appropriate tools for application and reuse.", "Subject Matter Experts Subject matter experts (SMEs) provide the context that helps to determine what information is important to analyze and what the outputs of applied NLP systems mean for the application setting.", "SMEs, from medical practitioners to legal scholars and financial experts, bring Tractable analytic methods Data expertise Conceptual models of information NLP Experts Computing constraints Data availability Organizational priorities End Users Research context Information context NLP consumers Subject Matter Experts Translational NLP Researchers Figure 2: Attributes of key stakeholders in the translational process for NLP.", "an understanding of where relevant information can be found (e.g., document sources (Fisher et al., 2016) and sections (Afzal et al., 2018)), which can help identify new types of language for basic researchers to study (Burstein, 2009; Crossley et al., 2014) and new challenges such as sparse complex information (Newman-Griffis and Fosler-Lussier, 2019) and higher-level structure in complex documents (Naik et al., 2019).", "In addition, the context that domain experts offer in terms of the needs of target applications feeds back into evaluation methods in the basic research setting (Graham, 2015).", "SMEs are also the consumers of NLP solutions, as tools for their own research and applications.", "Thus, SMEs must also be consultants regarding the trustworthiness and reliability of proposed solutions, and can identify key application-specific concerns such as security requirements.", "End Users The end users of NLP solutions span a range of roles, environmental contexts, and goals, each of which guides implementation factors of NLP applications.", "For example, collecting patient language in a lab setting, in a clinic, or at home will pose different challenges in each setting, which can inform the development of basic NLP methods.", "Application settings may have limited computational resources , motivating the development of efficient alternatives to high-resource models (e.g. Wang et al. (2020)), and have different human factors affecting information collection and use.", "End users have different constraints on data availability , in terms of how much data of what types can be obtained from whom; the extensive work funded by DARPA's Low Resource Languages for Emergent Incidents (LORELEI) initiative (Christianson et al., 2018) is a testament to the basic research arising from these constraints.", "use NLP technologies to address their own information needs according to the priorities of their organizations.", "These organizational priorities may conflict with existing modeling assumptions, highlighting new opportunities for basic research to expand model capabilities.", "For example, Shah et al. (2019) highlight the conceptual gap between predictive model performance in medicine and clinical utility to call for new research on utility-driven model evaluation.", "Spector et al. (2012) make a similar point about Google's mission-driven research identifying unseen gaps for new basic research.", "The role of the Translational NLP researcher is to interface with each of these stakeholders, to connect their goals, constraints, and contributions into a single applied system, and to identify new research opportunities where parts of this system conflict with one another.", "Notably, this creates an opportunity for valuable study of SME and end user research practices, and for participatory design of NLP research (Lazar et al., 2017).", "Our checklist, introduced in the next section, provides a structured framework for this translational process.", "The path between basic research and applications is often nebulous in NLP, limiting the downstream impact of modeling innovations and obscuring basic research challenges found in application settings.", "We present a general-purpose checklist covering fundamental variables in translating basic research into applications, which breaks down the translational process into discrete pieces for negotiation, measurement, and identification of new research opportunities.", "Our checklist, illustrated in Figure 3, is loosely ordered from initial design to application details.", "In practice, these items reflect different elements of the application process and are constantly Information Need What is the goal, and what are the outputs?", "re-evaluated via a feedback loop between the application stakeholders.", "While many of these items will be familiar to NLP researchers, each represents potential points of failure in translation.", "Designing the research process with these variables in mind will produce basic innovations that are more easily adopted for application and more directly connected to the challenges of real-world use cases.", "We illustrate our items for two example cases: Ex.", "1: Analysis of multimodal clinical data (scanned text, tables, images) for patient diagnosis.", "Ex.", "2: Comparison of medical observations to government treatment and billing guidelines.", "Information Need The initial step that guides an application is defining inputs and outputs, at two levels: (1) the overall problem to address with NLP (led by the subject matter expert), and (2) the formal representation of that problem (led by the NLP expert).", "The overall goal (e.g., extract information on cancer from clinical notes) determines the requirements of the solution, and is central to identifying a measurement of its effectiveness.", "Once the overall goal is determined, the next step is a formal representation of that goal in terms of text units (documents, spans) to analyze and what the analysis should produce (class labels, sequence annotations, document rankings, etc.).", "These requirements are tailored to specific applications and may not reflect standardized NLP tasks.", "For example, a clinician interested in the documented reasoning behind a series of laboratory test orders needs: (1) the orders themselves (text spans); (2) the temporal sequence of the orders; and (3) a text span containing the justification for each order.", "Ex.", "1: type, severity, history of symptoms.", "Ex.", "2: clinical findings, logical criteria.", "Data Characteristics A clear description of the language data to be analyzed is key to identifying appropriate NLP technologies.", "Data characteristics include the natural language(s) used (e.g., English, Chinese), the genre(s) of language to analyze (e.g., scientific abstracts, quarterly earnings reports, tweets, conversations), and the type(s) of linguistic community that produced them (e.g., medical practitioners, educators, policy experts).", "This information identifies the sublanguage(s) of interest (Gr-ishman and Kittredge, 1986), which determine the availability and development of appropriate NLP tools (Grishman, 2001).", "Corporate disclosures, financial news reports, and tweets all require different processing strategies (Xing et al., 2018), as do tweets written by different communities (Blodgett et al., 2016; Groenwold et al., 2020).", "Ex.", "1: clinical texts, lab reports.", "Task Paradigms To address the overall goal with an NLP solution, it must be formulated in terms of one or more well-defined NLP problems .", "Many real-world application needs do not clearly correspond to a single benchmark task formulation.", "For example, our earlier example of the sequence of lab order justifications can be formulated as a sequence of: (1) Named Entity Recognition (treat-ing the order types as named entities in a medical knowledge base); (2) time expression extraction and normalization; (3) event ordering; and (4) evidence identification.", "Breaking the application need into well-studied subproblems at design time enables faster identification and development of relevant NLP technologies, and highlights any portions of the goal that do not correspond with a known problem, requiring novel basic research.", "Ex.", "1: document type classification, OCR, information extraction (IE), patient classification.", "Available Resources The question of resources to support an NLP solution includes two distinct concerns: (1) knowledge sources available to represent salient aspects of the target task; and (2) compute infrastructure for NLP system execution and deployment.", "Knowledge sources may be symbolic, such as knowledge graphs or gazetteers, or representational, such as representative corpora or pretrained language models.", "For some applications, powerful knowledge sources may be available (such as the UMLS (Bodenreider, 2004) for biomedical reasoning), while others are severely under-resourced (such as emerging geopolitical events, which may lack even relevant social media text).", "These resources in turn affect the kinds of technologies that are appropriate to use.", "In terms of infrastructure, NLP technologies are deployed on a wide variety of systems, from commercial data centers to mobile devices.", "Each setting presents constraints of limited resources and throughput requirements (Nityasya et al., 2020).", "An application environment with a high maximum resource load but low median availability is amenable to batch processing architectures or approaches with high pretraining cost and low test-time cost.", "Pretrained word representstions (Mikolov et al., 2013; Pennington et al., 2014) and language models (Peters et al., 2018; Devlin et al., 2019) are one example of fundamental technologies that address such a need.", "Throughput requirements, i.e., how much language input needs to be analyzed in a fixed amount of time, often require engineering optimization for specific environments (Afshar et al., 2019), but the need for faster runtime computation has led to many advances in machine learning for NLP, such as variational autoencoders (Kingma and Welling, 2014) and the Transformer architecture (Vaswani et al., 2017).", "Ex.", "1: UMLS, high GPU compute.", "NLP Technologies The interaction between task paradigms, data characteristics, and available resources helps to determine what types of implementations are appropriate to the task.", "Implementations can be further broken down into representation technologies , for mathematically representing the language units to be analyzed; modeling architectures , for capturing regularities within that language; and optimization strategies (when using machine learning), for efficiently estimating model parameters from data.", "In low-resource settings, highly parameterized models such as BERT may not be appropriate, while large-scale GPU server farms enable highly complex model architectures.", "When the overall goal is factorized into multiple NLP tasks, optimization often involves joint or multi-task learning (Caruana, 1997).", "Ex.", "2: dictionary matching, small neural models.", "Evaluation Once a solution has been designed, it must be evaluated in terms of both the specific NLP problem(s) and the overall goal of the application.", "Standardized NLP task formulations typically define benchmark metrics which can be used for evaluating the NLP components: F-1 and AUC for information extraction, MRR and NDCG for information retrieval, etc.", "The design of these metrics is its own extensive area of research (Jones and Galliers, 1996; Hirschman and Thompson, 1997; Graham, 2015), and even established evaluation methods may be constantly revised (Grishman and Sundheim, 1995).", "Critically for the translational researcher, some metrics may be preferred over others (e.g., precision over recall), and standardized evaluation metrics may not reflect the goals and needs of applications (Friedman and Hripcsak, 1998).", "Improvements on standardized evaluation metrics (such as increased AUC) may even obscure degradations in application-relevant performance measures (such as decreased process efficiency).", "Translational researchers thus have the opportunity to work with NLP experts and SMEs to identify or develop metrics that capture both the effectiveness of the NLP system and its contribution to the application's overall goal.", "Ex.", "1: F-1, patient outcomes.", "Ex.", "2: F-1, billing rates.", "Interpretation Interpretability and analysis of NLP and other machine learning systems has been the focus of extensive research in recent years (Gilpin et al., 2018; Belinkov and Glass, 2019), with debate over what constitutes an interpretation (Rudin, 2019; Wiegreffe and Pinter, 2019) and development of broad-coverage software packages for ease of use (Nori et al., 2019).", "For the translational researcher, the first step is to engage with SMEs to determine what constitutes an acceptable interpretation of an NLP system's output in the application domain (which may be subject to specific legal or ethical requirements around accountability in decision-making processes).", "This leads to an iterative process, working with SMEs and NLP experts to identify appropriately interpretable models, or to identify the need for new basic research on interpretability within the target domain.", "Ex.", "2: Criteria visualization, model audits.", "Application Engineering Last but not least, the translational process must also be concerned with the implementation of NLP solutions, both in terms of the specific technologies used and how they can fit in to broader information processing pipelines.", "The development of general-purpose NLP architectures such as the Stanford CoreNLP Toolkit (Man-ning et al., 2014), spaCy (Honnibal and Montani, 2017), and AllenNLP (Gardner et al., 2018), as well as more targeted architectures such as the clinical NLP framework presented by Wen et al. (2019), provide well-engineered frameworks for implementing new technologies in a way that is easy for others to both adopt and adapt for use in their own pipelines.", "Standardized data exchange frameworks such as UIMA (Ferrucci and Lally, 2004) and JSON make implementations more modular and easier to wire together.", "Leveraging tools and frameworks like these, together with good software design principles, makes NLP tools both easier to apply downstream and easier for other researchers to incorporate into their own work.", "Ex.", "1: Multiple interoperable technologies.", "Ex.", "2: Single decision support tool.", "While the checklist items can guide initial design of a new NLP solution, they are equally applicable for incorporating new basic NLP innovations into existing solutions.", "Any new innovation can be reviewed in terms of our checklist items to identify new requirements or constraints (e.g., higher computational cost, more intuitive interpretability measures).", "The translational researcher can then work with NLP experts, SMEs, and the end users to determine how to incorporate the new innovation into the existing solution.", "We illustrate our Translational NLP framework using our recent line of research on developing NLP tools to assist US Social Security Administration (SSA) officials in reviewing applications for disability benefits (Desmet et al., 2020).", "The goal of this effort was to help identify relevant pieces of medical evidence for making decisions about disability benefits, analyzing vast quantities of medical records collected during the review process.", "The stakeholders in this setting included: NLP researchers (interested in developing generalizable methods); subject matter experts in disability and rehabilitation; and SSA end users (limited computing, large data but strictly controlled, overall priorities of efficiency and accuracy).", "The Translational NLP checklist for this setting is shown in Table", "1. This combination of factors has led to several translational studies, including: Newman-Griffis et al. (2018) developed a low-resource entity embedding method for domains with minimal knowledge sources (lack of Available Resources).", "Newman-Griffis and Zirikly (2018) analyzed the data size and representativeness tradeoff for information extraction in domains lacking large corpora (Available Resources).", "Newman-Griffis and Fosler-Lussier (2019) developed a flexible method for identifying sparse health information that is syntactically complex (challenging Data Characteristics).", "Newman-Griffis and Fosler-Lussier (2021) compared the Task Paradigms of classification and candidate selection paradigms for medical coding in a new domain.", "While these studies do not systematically explore Evaluation, Interpretation, or Application Engineering, they illustrate how the characteristics of one application setting can lead to a line of Translational NLP research with broader implications.", "Several further challenges of this application area remain unstudied: for example, representing and modeling the complex timelines of persons with chronic health conditions and intermittent health care and adapting NLP systems to highly variable medical language from practitioners and patients around the US.", "These present intriguing challenges for basic NLP research that can inform many other applications beyond this case study.", "Of course, these studies are far from the only examples of Translational NLP research.", "Many studies tackle translational questions, from domain adaptation (shifts in Data Characteristics) and low-resource learning (limited Available Resources), and the growing NLP literature in domain-specific venues such as medical research, law, finance, and more involves all aspects of the translational process.", "Rather, this case study is simply one illustration of how an explicitly translational perspective in study design can identify and connect broad opportunities for contributions to NLP research.", "Our paradigm of Translational NLP defines and gives structure to a valuable area of research not explicitly represented in the ACL community.", "We note that translational research is not meant to replace either basic or applied research, nor do we intend to say that all basic NLP studies must be tied to specific application needs.", "Rather we aim to highlight the value of studying the processes of turning basic innovations into successful applications.", "These processes, from scaling model computation to redesigning tools to meet changing application needs, can inform new research in model design, domain adaptation, etc., and can help us understand why some tools succeed in application while others fail.", "In addition to helping more innovations successfully translate, the principles outlined in this paper can be of use to basic and applied NLP researchers as well as translational ones, in identifying common variables and concerns to connect new work to the broader community.", "Translational research is equally at home in industry and academia, and already occurring in both.", "While resource disparities between industrial and academic research increasingly push large-scale modeling efforts out of reach of academic teams, a translational lens can help to identify rich areas of knowledge-driven study that do not require exas-cale data or computing resources.", "The general principles and interdisciplinary nature of translational research make it a natural fit for public knowledge-driven academic settings, while its applicability to commercial needs is highly relevant to industry.", "for every project.", "The specifics of different applications will expand our initial questions in different ways (e.g., Data Characteristics may involve multimodal data, or different language styles), and the dynamics of collaborations will shift answers over time (e.g., a change in evaluation criteria may motivate different model training approaches).", "Our checklist provides a minimal set of common questions, and can function as a touchstone for discussions throughout the research process, but it can and should be tailored to the nature of each project.", "Our framework is itself a preliminary characterization of Translational NLP research, and will evolve over time as the field continues to develop.", "We have outlined a new model of NLP research, Translational NLP, which aims to bridge the gap between basic and applied NLP research with generalizable principles, tools, and processes.", "We identified key types of stakeholders in NLP applications and how they inform the translational process, and presented a checklist of common variables and translational principles to consider in basic, translational, or applied NLP research.", "The translational framework reflects the central role that integrating basic and applied research has played in the development of the NLP field, and is illustrated by both the broad successes of machine translation, speech processing, and web search, as well as many individual studies in the ACL community and beyond.", "This work was supported by the National Library of Medicine of the National Institutes of Health", "T15 LM007059, and National grant 1822831.", "Rich Caruana.", "1997.", "Multitask learning.", "Machine learning , 28(1):4175.", "Caitlin Christianson, Jason Duncan, and Boyan Onyshkevych.", "2018.", "Overview of the DARPA LORELEI Program.", "Machine Translation , 32(1):3 9.", "Scott A Crossley, Laura K Allen, Kristopher Kyle, and Danielle S McNamara.", "2014.", "Analyzing Discourse Processing Using a Simple Natural Language Processing Tool.", "Discourse Processes , 51(5-6):511 534.", "Hamish Cunningham.", "2002.", "GATE, a General Architecture for Text Engineering.", "Computers and the Humanities , 36(2):223254.", "Shrey Desai, Geoffrey Goh, Arun Babu, and Ahmed Aly.", "2020.", "Lightweight convolutional representations for on-device natural language processing.", "arXiv preprint arXiv:2002.01535 ." ]
[ "abstain", "abstain", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "There is growing evidence that changes in speech and language may be early markers of dementia, but much of the previous NLP work in this area has been limited by the size of the available datasets.", "Here, we compare several methods of domain adaptation to augment a small French dataset of picture descriptions ( n = 57) with a much larger English dataset ( n = 550), for the task of automatically distinguishing participants with dementia from controls.", "The first challenge is to identify a set of features that transfer across languages; in addition to previously used features based on information units , we introduce a new set of features to model the order in which information units are produced by dementia patients and controls.", "These concept-based language model features improve classification performance in both English and French separately, and the best result (AUC = 0.89) is achieved using the multilingual training set with a combination of information and language model features.", "According to the World Health Organisation, the largest global challenge facing the world today is the rapid increase of the population aged over 65 years.", "It is projected to increase from 524 million in 2010 to 1.5 billion in 2050, with the largest increase in the developing world (Suzman and Beard, 2011).", "This demographic trend has profound societal implications; for example, the number of persons affected by dementia will increase worldwide from 46 million in 2015 to 131.5 million in 2050 (Prince et al., 2015).", "The most common underlying condition causing dementia is Alzheimer's disease (AD).", "Although no cure to this neurodegenerative disease has been found, experts agree that intervention in early stages is crucial to delay onset (Dubois et al., 2016).", "AD is characterised by a global impairment of cognitive functioning, with specific deficits in episodic memory, executive functioning, perceptual speed and language (Backman et al., 2005; Weiner et al., 2008).", "Machine learning experiments using speech and language for the detection of dementia or related disorders have been conducted in many languages, including English (Roark et al., 2011; Mirheidari et al., 2016; Fraser et al., 2016; As-gari et al., 2017), French (Troger et al., 2017; Konig et al., 2018), German (Weiner et al., 2016), Hungarian (Szatloczki et al., 2015; Vincze et al., 2016), Spanish (Meilan et al., 2014), Greek (Satt et al., 2013), Swedish (Lundholm Fors et al., 2018; Fraser et al., 2018a), Japanese (Shibata et al., 2016), Portuguese (Alusio et al., 2016), and Mandarin Chinese (Lai et al., 2009).", "Most studies acknowledge that small data sets are a limitation and describe the difficulties in gathering more data, including the challenges in patient recruitment, the expense of running clinically-based studies, and the manual effort required for transcription and annotation.", "Here, we consider whether it could be possible to increase the amount of available data by augmenting a corpus in one language with data from another language, and thus improve predictive performance without the need for new data collection.", "Specifically, we consider augmenting a relatively small French dataset with a much larger English one.", "The two aims of this study are: (1) to identify a set of features that are both useful for the detection of dementia and that we expect to transfer across different languages, and (2) to improve classification results on the French dataset by augmenting the training set with English data.", "One way to assess language is through narrative speech, such as that elicited by the Cookie Theft Picture (CTP) task (Goodglass et al., 2000).", "In this task, participants are asked to describe the content of a line drawing of a kitchen scene, where a boy can be seen standing on a stool, trying to reach a cookie jar, while a woman is preoccupied washing dishes.", "In this study, we analyse CTP narratives, due to the widespread use of the task in multiple languages.", "Narrative speech can be analysed on a number of levels, including phonology, morphology, syntax, semantics, and pragmatics.", "Here, our goal is to extract features that both predict AD and are likely to transfer across different languages.", "Although other studies have used acoustic features for this task (Meilan et al., 2014; Konig et al., 2018), there are well-documented differences in the phonology and prosody of French and English (Bertran, 1999; Vaissi`ere, 2002).", "Syntax and morphology also differ across languages, and the degree to which they are impaired in mild to moderate AD is unclear (Taler and Phillips, 2008).", "Pragmatic ability in AD may be disrupted (Chapman et al., 1998; Boschi et al., 2017); however, the CTP is not ideally suited for assessing pragmatics.", "Instead, we focus on the semantic level, with the assumption that while the specific vocabulary will be different across languages, the underlying meanings or semantic concepts expressed should be the same.", "Features relating to semantic content are also motivated by the AD literature.", "Cue-tos et al. (2007) reported a significant reduction in semantic units produced by pre-clinical AD participants, relative to controls, on the CTP task.", "Croisile et al. (1996) studied CTP descriptions from French participants, and found that the AD descriptions were shorter and less informative than the control descriptions.", "They measured information content by scoring the narratives against a gold standard list of 23 expected information units, which have been widely used in subsequent research.", "Several recent studies have used NLP and machine learning to analyse speech samples from people with dementia and other cognitive disorders.", "Most relevant here, are those which focus on picture description tasks in English or French.", "DementiaBank 1 is a large database of CTP narratives from AD patients and controls, containing primarily English data.", "A number of recent papers report classification results on this corpus (Prud'-hommeaux and Roark, 2015; Fraser et al., 2016; Al-Hameed et al., 2016; Yancheva and Rudzicz, 2016; Sirts et al., 2017).", "Language analysis of English-language CTP data from other sources has also been used to differentiate between different underlying pathologies in AD (Rentoumi et al., 2014), and variants of frontotemporal lobar degeneration (Pakhomov et al., 2010).", "In French, picture description was one of multiple tasks used to elicit speech for the classification of participants with mild cognitive impairment and AD reported by Konig et al. (2015) and Konig et al. (2018), although only acoustic processing was used.", "There has been very little prior work on multilingual or cross-lingual dementia classification.", "Ren-toumi et al. (2018) presented preliminary results suggesting that some language features from CTP samples could transfer across Greek and English, but did not report classification results.", "Fraser et al. (2018b) studied a related task of detecting mild cognitive impairment (MCI), and found that classification results could be improved in both English and Swedish by incorporating multilingual topic modelling into the feature extraction pipeline; however, they did not consider multilingual classification directly.", "More generally, multilingual NLP is an active and growing area of research.", "Some approaches to improving classifier performance on a resource-poor target language by leveraging a resource-rich source language include: translate the target language to the source language (or vice versa) and 1 https://dementia.talkbank.org/ train a unilingual classifier (Wan, 2009); extract features from the two languages separately and then use domain adaptation techniques to train a classifier for the target language (Blitzer et al., 2006; Prettenhofer and Stein, 2010); or determine a common representation for both languages and then extract features from the combined corpus to train a multilingual classifier (Ammar et al., 2016).", "In the extreme case, one can also consider purely cross-lingual classification, in which the classifier is trained solely on the source language, but tested on the target language.", "We use a supervised domain adaptation approach, similar to that of Daume III (2007), by considering each language to be a different domain.", "In related (though not multilingual) work, Masrani et al. (2017) also used this approach to adapt a dataset of AD narratives to their MCI classification task.", "In contrast to the previous work on AD classification, we measure not only which information units are mentioned, but also the order in which they are mentioned.", "Our approach has some similarity to class-based language models (Brown et al., 1992), in which words are first grouped into classes (or clusters), and then the language model is trained on the classes rather than the individual words.", "One benefit to this approach is improved gener-alisability (Hoidekr et al., 2006), and another is the ability of classes to span different languages (Tackstrom et al., 2012).", "Data were taken from two corpora: a small French dataset ( n = 57), collected at the Memory Clinic and Research Centre of the University Hospital Nice, and the Pitt subcorpus of DementiaBank, containing 550 English samples 2 .", "Detailed information about the protocols for each study can be found in Troger et al. (2017) and Becker et al. (1994).", "In both cases, ethics approval for the data collection was obtained from the local governing bodies.", "The demographics for the participants in each language are shown in Table 1.", "In both studies, the 2 In this analysis, we included all participants in the Dementia subfolder, regardless of specific diagnosis, to maximize the size of the source data.", "participants were asked to perform the CTP task in their respective languages.", "In English, the image was shown on paper and speech was digitally recorded, while in the French study, the image was displayed on a tablet and speech was recorded via the tablet microphone.", "The English and French audio samples were manually transcribed using the CHAT protocol (MacWhinney, 2014).", "A set of pre-defined information units found in the CTP was determined as an extension to Croisile et al. (1996), and is given in Table 2a.", "Mentions of information units were determined using keyword-spotting (based on manually-constructed word lists specific to each language), and used to translate the full narratives to sequences of information units.", "As an example, the English A boy is standing on a stool and French Le garcon est sur un tabouret would both be mapped to the sequence BOYSTOOL .", "Features relating to the occurrence of each distinct information unit comprise the info feature set, described in Table 2b.", "Additionally, new features are derived from language models build on the sequence of information units.", "To this end, concept-based language models are trained for English and French in a leave-one-out fashion, using the kenlm framework (Heafield, 2011).", "Models up to 5-grams were constructed.", "For each participant, two language models are constructed for each n : one trained on the healthy control (HC) population and one trained on the AD population.", "The participant is left out of the model built on their associated diagnostic group.", "The trained language models are then applied to the held-out par-ticipant's sequence of information units and various language model ( LM ) features are extracted (Table 2c).", "Actions STEAL , FALL , WASH , OVERFLOW , GIRL ' S ACTION , WOMAN ' S INDIFFERENCE Actors BOY , GIRL , CHILD ( REN ), WOMAN Places KITCHEN , EXTERIOR Objects COOKIE , JAR , STOOL , SINK , DISHCLOTH , WATER , WINDOW , CUPBOARD , DISH , CURTAIN , COUNTER", "(a) Information units.", "has unit Binary feature indicating presence or absence of each information unit (23 features) ratio unit For each information unit, the number of times that unit was mentioned, divided by the total number of words in the original narrative (23 features) unique concept density Total number of information units which were mentioned at least once, divided by the total number of words in the original narrative (1 feature) unique concept efficiency Total number of information units which were mentioned at least once, divided by the duration of the sample in seconds (1 feature) total concept density Total number of words referring to information units, divided by the total number of words in the original narrative (1 feature) total concept efficiency Total number of words referring to information units, divided by the duration of the sample in seconds (1 feature)", "(b) info features perplexity class n -gram The perplexity assigned to the sample by each of the eight language models, where n = 2 , 3 , 4 , 5, and the models are trained on data from either the AD or HC class.", "(8 features) score class n -gram The log probability assigned to the sample by each of the eight language models.", "(8 features) max perplexity class n -gram The maximum perplexity, computed over all n -grams in a sample, for each of the eight language models.", "(8 features) min score class n -gram The minimum log probability, computed over all n -grams in a sample, for each of the eight language models.", "(8 features)", "(c) LM features Table 2: Top, the information units extracted from CTP narratives.", "To evaluate the performance of the three proposed feature sets ( info , LM , and info+LM ), we first train classifiers to distinguish between HC and AD participants within a given language.", "To examine the importance of certain features, we restrict ourselves to more explainable linear models, namely logistic regression (LR) and linear support vector machines (SVM) (Pedregosa et al., 2011).", "In both cases, we use L 1 regularisation to promote sparsity in the feature weights.", "Area under the Receiver-Operator curve (AUC) is reported as the evaluation parameter.", "Due to the small size of the French dataset, we use leave-pair-out cross validation (LPO-CV), which has been shown to produce an unbiased estimate for AUC on small datasets (Airola et al., 2009), and has also been used in related work (Roark et al., 2011).", "However, since LPO-CV is computationally very costly, we instead use 10-fold cross-validation (10-CV) for English, making sure that any samples for a given participant occur in either the training set or the test set, but not both.", "For LPO-CV we compute AUC and its standard deviation as described by Roark et al. (2011); for 10-CV we compute the AUC in each test fold and then report the average and standard deviation over folds.", "Feature scaling and hyper-parameter optimisa-tion is done on the training set in each fold.", "Features are scaled using Maximum-Absolute Scaling to preserve the binary nature of the info features.", "For both SVMs and LR, C was optimised between C [ 10 4 ,..., 10 4 ] using a grid search.", "Our goal is to improve classification in French, by incorporating training data from English.", "To this end, we examine multiple ways to combine data from both English and French in the training set.", "We first consider domain adaptation , where we treat French as the target domain and English as the source domain.", "We implement the AUGMENT method of Daume III (2007), which involves augmenting the feature space with source-specific, target-specific, and combined versions of all the original features, allowing the classifier to assign a higher weight to the combined version when that feature transfers well across domains, while also retaining sourceand target-specific information where appropriate.", "We consider as well as the baseline methods described in Daume III (2007): WEIGHT , in which the samples from the source domain are assigned reduced weights in the model; PRED, in which the prediction made by the source classifier is used as an additional feature in the target model; LININT , in which the predictions from the source and target models are linearly interpolated; and ALL , in which target and source data are simply combined in a single training set.", "Due to the limited size of our data, we do not optimise the weighting factors in WEIGHT and LININT , but rather assume the two languages should be given equal importance, and use a weighting factor of 0.1 in WEIGHT (since the English data is 10 times the size of the French data), and 0.5 in LININT .", "Another option is to combine the French and English datasets before extracting features.", "Specifically, we first replace the word-level transcripts with the sequence of information units, and then combine the two datasets and train the language models over the multilingual corpus, thus generating multilingual language models .", "To understand how well a trained classification model in one language could be applied to another, we also perform cross-lingual experiments.", "For this, we train language and classification models in one language and test it on the other.", "The results of the classification experiments are presented in Figure 1.", "In French, for both LR and SVM, using LM features leads to higher AUC than the info features, and the combination of features is more effective than either feature set alone.", "In the English case, the LM and info features lead to equivalent performance individually, but the AUC is again marginally improved when the feature sets are combined, suggesting that they are capturing at least somewhat complementary information.", "For French, the LM features generally do not benefit from domain adaptation, with equivalent or poorer AUC relative to the unilingual case.", "The best result with the LM features is achieved in the AUGMENT scenario, where the classifier can select the French LM features only (although this result holds only for the SVM classifier).", "In contrast, the info features do benefit from the additional data available through domain adaptation, and lead to better results than the unilingual baseline.", "The best overall result of AUC = 0.89 is achieved by combining the feature types in the ALL configuration.", "For English, we do not expect to see much benefit from including the (much smaller) French dataset.", "The WEIGHT adaptation technique is not feasible when the source data is smaller than the target data, and the LININT technique performs poorly, as it assigns too much importance the smaller and out-of-domain dataset.", "However, we do see marginal improvements using ALL and AUGMENT , reflecting the value of increasing the training set size by roughly 10%.", "The best result of AUC = 0.84 is achieved in the ALL condition, using the combined feature set.", "Using the multilingual LM does not affect the info features, and therefore Figure 1 shows only the LM and info+LM results.", "Clearly, the multilingual LM approach does not work well here.", "Unlike in domain adaptation, combining the datasets using this method assumes that information units will be produced in the same order in the two languages.", "While French and English are similar in this respect, there are many possible counter-examples, such as cookie jar (COOKIEJAR ) versus bote `a biscuits (JARCOOKIE ).", "When training entirely on English data and testing on French, the results using info and info+LM features are significantly improved over the unilingual baseline, while the LM results are reduced, once again indicating that the info features transfer better across languages.", "The results are very similar to those using the ALL technique for domain adaptation, suggesting that in that case, model training is dominated by the English data.", "To further explore the similarity in performance in the ALL and cross-lingual cases, we examine the effect of incrementally increasing the amount of English data in the training set, when testing on French data.", "Figure 2 displays the classification performance of SVM and LR classifiers trained either using the ALL method of domain adaptation or cross-lingually with increasing amounts (10% at a time) of the English data.", "Considering first the ALL method (red and blue), at x = 0 there is no English data, and so we recover the French unilingual baseline.", "As we increase the amount of English data in the training set, performance slowly increases, eventually reaching the values reported in Figure 1.", "Considering next the cross-lingual case (yellow and green), we see that training on only 10% of the English data (55 samples) results in much poorer AUC values.", "However, each further 10% increases the classification performance.", "At 80% of English data (440 samples) the multi-and cross-lingual cases converge in performance.", "Thus, it would appear that domain adaptation is more data-efficient, as we achieve close to optimal results with a smaller proportion of English data, 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 SVM: info + LM SVM: info SVM: LM LR: info + LM LR: info LR: LM 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 Unilingual Domain Adaptation Domain Adaptation with multilingual LM Unilingual with multilingual LM ALL Augment LinInt PRED Weight Augment LinInt PRED Weight Crosslingual All English French Figure 1: Results of uni-, multiand cross-lingual classification experiments.", "Finally, we examine the features to determine which features are most useful to the task of dementia detection, and to compare the selected features in the unilingual and multilingual cases.", "Figure 3 shows the median absolute value of the weights assigned to each feature, for English and French, in the unilingual and multilingual ALL condition.", "The L 1 regularisation serves to set many feature weights to zero.", "As a high-level observation, in both the uniand multilingual cases, relatively more info features are selected, and relatively fewer LM features.", "Of the LM features that are selected, those which relate to the maximum perplexity or minimum probability appear to be more useful.", "These features l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l l 0.70 0.65 0.80 0.75 0.90 0.85 0 20 40 60 80 100 % of English training data AUCLR: Multilingual SVM: Multilingual LR: Cross-lingual SVM: Cross-lingual Figure 2: AUC as a function of the amount of English data used in the training set, for both multiand crosslingual cases.", "capture locally anomalous speech patterns, relative to either the AD or control language models.", "In the unilingual case, the French models show a preference for the binary has features (indicat-ing whether or not an information unit has been mentioned).", "Only 4 of the ratio features and none of the density or efficiency features have a median value greater than zero.", "However, these features are relevant to the task, and potentially more generalisable (e.g., total concept efficiency differs between the French AD and HC groups with p < 0 . 001 on a t -test, and represents an aggregate score rather than depending on the presence or absence of a single information unit).", "Such features are selected more often in the multilingual case, and lead to improved performance.", "One explanation for this could be that in the small French training set, spurious correlations due to noise can overpower the real signal, and lead to less relevant features being assigned high weights, while correlated (but perhaps actually more relevant) features are suppressed.", "By increasing the size of the training set with English data, the signal-to-noise ratio is improved, and a better set of features is selected.", "Generally, the feature values (not shown) support the intuition that controls mention more of the information units in the image (higher has feature values), convey information more efficiently, with fewer off-topic words (higher density and efficiency scores), and organize the narrative in a more predictable way (narratives have lower perplexity and higher probability) than the AD participants.", "Again, these trends are more apparent in the English data than the French data, likely due to the relatively larger number of samples.", "One perhaps surprising result of this study was that naively combining features in the ALL condition led to better results than the AUGMENT algorithm.", "However, this is in line with the original findings of Daume III (2007), where he identified a set of tasks where AUGMENT performed sub-optimally: specifically, those cases where training on source-only data was better than training on target-only data.", "This is precisely the case we have here, as training cross-lingually (on English source data) leads to better results than training unilingually (on French target data).", "The explanation offered by Daume III is, If the domains are so similar that a large amount of source data outperforms a small amount of target data, then it is unlikely that blowing up the feature space will help.", "In some sense, then, these results are confirmation that we have indeed identified a set of features over which the two languages (i.e. domains) are very similar.", "The fact that the ALL configuration is optimal in both French and English has an added practical benefit: since there is no distinction between source and target features, the resulting classifier is language-agnostic.", "This means that test data could come from either language, in a hypothesized future screening application.", "While our goal in this paper was not to push the state-of-the-art on the DementiaBank dataset, we do find that our best English result (AUC=0.84, which corresponds to an accuracy of 75% and F 1 score of 0.77) is comparable to the other published results on this dataset (Prud'hommeaux and Roark, 2015; Yancheva and Rudzicz, 2016; Sirts et al., 2017; Fraser et al., 2016; Hernandez-Domnguez et al., 2018).", "There are no previously published results on the French dataset.", "In this work, we have shown that there are features which can both distinguish AD patients from healthy controls with a high degree of accuracy,", "and also generalize across languages.", "By incorporating a large English dataset, we were able to improve the AUC on the French dataset from 0.85 to 0.89.", "We also developed a new set of features for this task, using concept-based language modelling, which improved AUC from 0.80 to 0.85 in the unilingual case, and 0.88 to 0.89 in the multilingual case.", "Future work will involve extending the set of features involved, incorporating data from other languages, and testing whether similar techniques can be effective for detecting earlier stages of cognitive decline, such as MCI.", "Other work from our group has also begun to explore the use of unsupervised methods and out-of-domain data sources (Li et al., 2019).", "Technical challenges aside, collaborations of this nature can be difficult due to the sensitive nature of the data, and the need to respect ethical guidelines and participant consent when sharing and storing data.", "With this in mind, we recommend to other researchers working in similar domains to consider from the outset whether their data could eventually be shared, and to make suitable provisions in their ethics protocols and participant consent forms.", "We look to DementiaBank as a model for this kind of data-sharing and openness, and hope that researchers can continue to find ways to share resources of this nature.", "This research was partially funded by the Riks-bankens Jubileumsfond The Swedish Foundation for Humanities and Social Sciences, grant no: NHS 14-1761:1, and the EIT Digital Wellbeing Activity 17074, ELEMENT.", "The French data was collected during the ELEMENT project and the FP7 Dem@Care project (grant number 288199).", "The original acquisition of the DementiaBank data was supported by NIH grants AG005133 and AG003705 to the University of Pittsburgh, and the data archive is supported by NIHNIDCD grant R01-DC008524 to Carnegie Mellon University." ]
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "objective", "other", "other", "abstain", "other", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "result", "abstain", "result", "objective", "abstain", "abstain", "abstain", "method", "result", "other", "other", "other" ]
[ "A common factor in bias measurement methods is the use of hand-curated seed lexicons, but there remains little guidance for their selection.", "We gather seeds used in prior work, documenting their common sources and rationales, and in case studies of three English-language corpora, we enumerate the different types of social biases and linguistic features that, once encoded in the seeds, can affect subsequent bias measurements.", "Seeds developed in one context are often re-used in other contexts, but documentation and evaluation remain necessary precursors to relying on seeds for sensitive measurements.", "There has been increasing concern in the NLP community over bias and stereotypes contained in models and how these biases can trickle downstream to practical applications, such as serving job advertisements.", "In particular, there has been much recent scrutiny of word representations, with many studies finding harmful associations encoded in embedding models.", "Combating such biases requires measuring the bias encoded in a model so that researchers can establish improvements, and many variants of embedding-based measurement techniques have been proposed (Bolukbasi et al., 2016; Caliskan et al., 2017; Manzini et al., 2019).", "These measurements have had the additional upstream benefit of providing computational social science and digital humanities scholars with a new means of quantifying bias in datasets of social, political, or literary interest.", "Researchers increasingly use embeddings (Garg et al., 2018; Knoche et al., 2019a; Hoyle et al., 2019) and other lexicon-based methods (Saez-Trumper et al., 2013; Fast et al., 2016; Rudinger et al., 2017) to provide quantitative answers to otherwise elusive political and social Target Concept Highlighted Seeds Unpleasant divorce, jail, poverty, cancer, ...", "questions about the biases in a corpus and its authors.", "This work often involves comparing bias measurements across different corpora, which requires reliable, fine-grained measurements.", "While there is a wide range of bias measurement methods, every one of them relies on lexicons of seed terms to specify stereotypes or dimensions of interest.", "But the rationale for choosing specific seeds is often unclear; sometimes seeds are crowd-sourced, sometimes hand-selected by researchers, and sometimes drawn from prior work in the social sciences.", "The impact of the seeds is not well-understood, and many previous seed sets have serious limitations.", "As shown in Table 1, the seeds used for bias measurement can themselves exhibit cultural and cognitive biases (e.g., reductive definitions), and in addition, linguistic features of the seeds (e.g., frequency) can affect bias measurements (Ethayarajh et al., 2019).", "Though they are often re-used, the suitability of these seeds to novel corpora is uncertain, and while evaluations sometimes include permutation tests, distinct sets of seeds are rarely compared.", "We use a mixture of literature survey, qualitative analysis of seed terms, and analytic methods to explore the use of seed sets for bias measurement through two overarching research questions.", "(1) We explore how seeds are selected and from which sources they are drawn to better understand rationales and assumptions underlying common seed sets.", "(2) We explore which features of seeds can cause instability , including both social biases and linguistic dimensions in our analysis.", "Our work provides the following contributions.", "Documentation: We document and test 178 seed sets used in prior work, and we release this documentation as a resource for the research community.", "1 Analysis: We provide a systematic framework for understanding the different sources of instability in seed sets that can affect bias measurements.", "We compare the gathered seeds to larger sets of artificially created seed sets, and we investigate the reliability of seed terms used for two popular embedding-based bias measurement methods in case studies on three datasets.", "Recommendations: With this larger perspective, we discuss how seed sets should be examined versus how these sets are popularly considered and what kind of documentation best practices should be followed.", "Seeds are a brittle but unavoidable element of current bias measurement algorithms, with weaknesses that need probing even for embedding-based measurements.", "The term bias has many definitions, from a value-neutral meaning in statistics to a more normative meaning in socio-cultural studies.", "In the bias measurement literature in NLP, lack of precise definitions and problem specifications (Blodgett et al., 2020) has led to many of the errors we explore in this paper.", "In general, bias in NLP most often represents harmful prejudices (Caliskan et al., 2017) whose spurious and undesirable influence can affect model outputs.", "While these downstream effects have inspired work on removing bias from embedding models (Bolukbasi et al., 2016), there have also been critiques of these efforts (Gonen and Goldberg, 2019), and we do not focus on this use case in our study.", "Instead, we focus on bias measurement as a tool used in diverse settings to make comparisons across specific corpora of interest.", "Unsupervised methods for bias measurement have included pointwise mutual information (Rudinger et al., 2017), normalized frequencies and cosine similarity of TF-IDF weighted word vectors (Saez-Trumper et al., 2013), generative models (Joseph et al., 2017; Hoyle et al., 2019), 1 Seeds and documentation are available at https://gi thub.com/maria-antoniak/bad-seeds and a combination of odds ratios, embeddings, and crowd-sourcing (Fast et al., 2016).", "All of these methods rely on sets of seed terms.", "While much recent NLP work has focused on contextual embeddings, most recent bias-detection work has focused on vocabulary-based embeddings and word representations.", "Researchers have increasingly used embedding-based methods to measure biases and draw comparisons in training corpora of social interest (Kim et al., 2014; Hamilton et al., 2016; Kulkarni et al., 2016; Phillips et al., 2017; Kozlowski et al., 2019).", "For example, Bhatia et al. (2018) train embedding models on news sources to compare trait associations for political candidates.", "We believe that our results should extend to contextual embedding methods (Zhao et al., 2019; Sedoc and Ungar, 2019), but vocabulary-based embeddings are easier to analyze.", "We discuss several recent studies that include analysis of seed sets (Kozlowski et al., 2019; Ethayarajh et al., 2019; Sedoc and Ungar, 2019) in 8.", "Training Corpora.", "Our dataset choices are guided by our focus on the upstream use case, where embeddings are trained on relatively small, special-purpose collections to answer social and humanist questions about the training corpus.", "The scope of these datasets fits the use case of a social scientist interested in measuring bias during a small time window, across specific genres, or in a particular set of authors.", "Table 2 shows an overview of the data, and more details are in the Appendix.", "articles from April 15th-June 30th, 2016; high quality WikiText articles, using the full WikiText-103 training set (Merity et al., 2016); and Goodreads book reviews for the romance and history and biography genres, sampled from the UCSD Book Graph (Wan and McAuley, 2018; Wan et al., 2019).", "For added validity, we also replicate existing studies, using a pre-trained model on a large Google News corpus (Mikolov et al., 2013).", "For each dataset, we lowercase all text, parse and obtain POS tags using spaCy (Honnibal et al., 2020), tokenize the text into unigrams, and filter words that occur fewer than 10 times in the training dataset.", "Lowercasing controls for the varying levels of capitalization used in the gathered seeds.", "We leave analysis of bigram seeds to future work and rely on unigrams as a simplifying assumption.", "Corpus-Derived 7/18 papers Re-Used 7/18 papers Borrowed from Social Sciences 6/18 papers Curated 5/18 papers Adapted from Lexical Resources 3/18 papers Crowd-Sourced 2/18 papers Population-Derived 2/18 papers Table 3: Overview of the surveyed seed sources.", "Gathered Seeds Sets.", "We gather 178 seed sets used in a representative sample of 18 highly-cited prior works on bias measurement.", "Seeds include both embedding-based and non-embedding-based bias detection methods as there is often crossover and re-use of seed sets.", "Because we use word embedding models trained on unigrams, we do not include bigram seeds in our analysis, and in each experiment, we omit words that were not present in our training set.", "While these choices could be seen as limitations, we see them as realistic applications of seeds to constrained datasets, reflecting the scenario in which biases are compared across specific corpora.", "Figure 1 overviews the seed sets, examples used in the paper are documented in the Appendix, and the full collection is shared in the supplementary materials and is available online.", "How do researchers select seeds, and from which sources are they popularly drawn?", "We explore this question using the gathered seed sets from prior works on unsupervised bias detection.", "The origins of these seeds and the rationales for using them are not always explained by researchers, but in cases where we were able to determine a source or rationale, we group them into the following categories.", "Table 3 overviews the source frequencies.", "We emphasize that each source comes with risks and benefits; there is no one correct method to selecting seeds, but awareness of pros and cons can help guide decisions and evaluation methods.", "Borrowed from Social Sciences.", "Seed sets are often borrowed from prior work in psychology and other social sciences, usually in an effort to either replicate results or build confidence from previously validated work.", "For example, Caliskan et al. (2017) validate prompts from the Implicit Association Test (Greenwald et al., 1998), while Garg et al. (2018) and Hoyle et al. (2019) use personality traits from Williams and Bennett (1975); Williams and Best (1977, 1990).", "Sometimes the seeds appeal for validity via highly cited resources, like LIWC (Pennebaker et al., 2001), despite critiques about unreliability (Panger, 2016; Forscher et al., 2017).", "Borrowing seeds does not absolve researchers from examining and validating seeds.", "Crowd-Sourced.", "Custom seed sets can be created through crowd-based annotation.", "Fast et al. (2016) use Mechanical Turk to validate the inclusion of terms in their seed sets; the final terms are then included in packaged code for researchers and practitioners.", "Kozlowski et al. (2019) use Mechanical Turk to gather ratings of items scaled along gender, race, and class.", "Crowd-sourcing can aid in gathering contemporary associations and stereotypes.", "However, controlling for crowd demographics can be difficult, and crowd-sourcing can result in alarming errors, in which popular stereotypes are hard-coded into the seeds (as in Table 1).", "Population-Derived.", "Some seed sets are derived from government-collected population datasets.", "Popular sources include U.S. census data (Boluk-basi et al., 2016; Caliskan et al., 2017), the U.S. Bureau of Labor Statistics (Caliskan et al., 2017), and the U.S. Social Security Administration (Garg et al., 2018).", "These sources are usually used to gather names and occupations common to certain demographic groups.", "These sources tend to be U.S.-centric, though the training data for the embedding does not always match (e.g., large Wikipedia datasets are not guaranteed to have U.S. authors).", "Reliance on these sources is particularly vulnerable to reductive definitions of the target conceptse.g., gender (Keyes, 2017)and assumes a level of trust and representation in the data collection that might not exist evenly across groups.", "Adapted from Lexical Resources.", "Some seed sets are drawn from existing dictionaries, lexicons, and other public resources, such as SemEval tasks (Zhao et al., 2018) and ConceptNet (Fast et al., 2016).", "Pre-packaged sentiment lexicons are a popular source (Saez-Trumper et al., 2013; Sweeney and Najafian, 2019); these lexicons include the Affective Norms for English Words (ANEW) (Bradley and Lang, 1999) and negative/positive sentiment words from Hu and Liu (2004).", "These seeds have the advantage of previous rounds of validation, but this does not guarantee validity for new domains.", "Corpus-Derived.", "Quantitative methods can be used to extract seed terms from a corpus of interest.", "For example, Saez-Trumper et al. (2013) use sorted lists of named entities extracted from a target dataset to create seed sets for personas of interest.", "Similarly, Sweeney and Najafian (2019) extract high frequency identity terms from a Wikipedia corpus.", "These methods have the advantage of ensuring high frequency terms in the target dataset, but they pose similar risks to crowd-sourcing; unless an extra round of cleaning and curation is completed by the researchers, terms with unintended effects can be included in the seed sets.", "Curated.", "Seed sets are sometimes hand-selected by the authors, usually after close reading of the corpus of interest.", "For example, Rudinger et al. (2017) hand-select a set of seed terms that correspond to a set of demographic categories of interest, and Joseph et al. (2017) hand-select a set of identity seeds based on their frequency in a Twitter dataset.", "Often, even when papers rely on other seed sources, manual curation is included as a step in the seed creation process.", "Hand-curation can result in high precision seeds, but this method relies on the au-thors' correction for their own social biases.", "Re-Used.", "Finally, many papers rely on prior bias measurement research for seed terms.", "The most popular sources in our survey include early papers on bias in embeddings such as Bolukbasi et al. (2016) and Caliskan et al. (2017).", "This repetition means that the seeds are tested on many different datasets, but they should not be trusted without validation; there can be mismatches in frequency and contextual meaning between datasets.", "In the upstream use case, locally trained word embeddings remain state of the art because fine-tuned pre-trained contextual models might introduce extrinsic information, and it is not feasible to pre-train contemporary contextual embeddings on such small collections.", "Here, we focus on two popular seed-based methods to detect bias in word embeddings.", "Bolukbasi et al. (2016) and Caliskan et al. (2017) both introduce embedding-based methods for bias detection that rely on sets of seed words.", "Each of these methods requires two sets of seed words, X and Y , and one additionally requires matched pairs of seed words { ( X 1 , Y 1 ) , ( X 2 , Y 2 ) , ... } .", "WEAT.", "Given a set of embedding vectors w , the Word Embedding Association Test (WEAT) (Caliskan et al., 2017) defines a vector based on the difference between the mean vector of the two target sets, and then measures the cosine similarity of a set of attribute words to that vector.", "The strength of the association between the target sets X and Y , and the sets of attributes, A , and B , is given by s ( X , Y , A , B ) = (cid:88) x X s ( x, A , B ) (cid:88) y Y s ( y, A , B ) where s ( w, A , B ) is equal to the difference in average cosine similarities between a query w and each term in A and w and each term in B .", "To test whether the resulting difference s ( X , Y , A , B ) is significant, this result is compared to the same function applied to randomly permuted sets drawn from X and Y .", "Caliskan et al. (2017) use WEAT to measure stereotypical associations between sets of targets and attributes, where, for example, the target terms might be arts and science terms, and the attribute terms might be male and female terms.", "PCA.", "The principal component analysis (PCA) method tests how much variability there is in the difference vectors between pairs of word vectors (Bolukbasi et al., 2016).", "If the vector difference between pairs of seed terms can be approximated well by a single constant vector c , then this vector represents a bias subspace .", "In this case, the subspace is simply a one dimensional vector, though this process could be extended to more dimensions.", "For each pair of embedding vectors corresponding to one seed word from set X and one from set Y , Bolukbasi et al. (2016) calculate the mean vector of those two vectors and then include the two resulting half vectors from that mean to the two seed vectors as columns in the input matrix.", "To quantify how large an effect seed features can have on bias measurements, we calculate a set of metrics for both PCA and WEAT methods that summarizes how well the bias subspace represents the target seeds.", "For each dataset, we use the popular skip-gram with negative sampling (SGNS) algorithm to train a word2vec model.", "We use the gensim package for training ( Rehurek and Sojka, 2010).", "We use a window size of 5, a minimum word count of 10, and a vector size of 100 for all experiments.", "We repeat this process across 20 bootstrapped samples of each dataset.", "For PCA, we calculate the difference vector between the embedding vectors for each pair of words in the two seed sets.", "For each set of paired seed sets, we run PCA and plot the percent of variance explained by each component.", "For the gathered seeds, we only use pairings documented in prior work.", "We perform a manual confirmation that the first component g indeed represents the bias subspace by ranking all the words in the vocabulary by their cosine similarity to g .", "For WEAT, we hold the attribute terms constant, where A = [ good ] and B = [ bad ] , while our generated seed sets take the place of the target terms X and Y .", "Holding the attribute terms constant is a simplifying assumption; our goal is not to test all possible attribute terms but to show that significant variation is possible.", "We then calculate the WEAT test statistic and significance.", "Coherence.", "In addition to the PCA explained variance and WEAT test statistic, we also measure the coherence of each pairing of seed sets after being mapped to the bias subspace.", "Ideally, when we project all the words in the vocabulary onto the subspace, the two sets would be drawn as far apart as possible.", "We rank all words by cosine similarity 0.0 0.2 0.4 Similarity to Unpleasantness Vector woman, women, she, her, her,... (Kozlowski et al 2019) sister, female, woman, girl, daughter,... (Caliskan et al 2017) woman, girl, she, mother, gal,... (Bolukbasi et al 2016) woman, girl, mother, daughter, sister,... (Hoyle et al 2019) lady, nun, heroine, actress, businesswoman,...(Zhao et al 2018) baker, counselor, nanny, librarians, socialite,... (Zhao et al 2018) S eed s romance history + biography Figure 2: Bias measurements depend on seeds.", "where X 1 and Y 2 are seed sets and R 1 and R 2 are their mean ranks in the bias subspace.", "Finally, we normalize the scores to a [0, 1] range.", "Higher coherence scores indicate that the seed sets have very different mean ranks, i.e., the seeds are separated by more of the vocabulary.", "For example, in Figure 4, ordered seeds", "(a) produce a subspace with greater coherence (sets are further apart in the bias subspace) than shuffled seeds", "(b).", "Generated Seed Sets.", "In order to control for frequency and POS when measuring instabilities due to semantic similarity and word order, we generate a large collection of artificial, randomized seed sets.", "We select a target term at random from the model's vocabulary, filtered by POS.", "Each seed set consists of this target term and its four nearest neighbors, ranked by cosine similarity.", "We repeat this process for each of the models trained on the bootstrapped samples of the corpus.", "We choose seed sets that are semantically similar (rather than randomly selecting seeds) because we expect that seed sets of realistic research interest would be coherent.", "We emphasize that researchers have used bias measurement methods for increasingly creative purposes, moving beyond gender and race, and similar bias measurement techniques can be used for aspect detection and other seed-based tasks.", "Example seeds are shown in Table 4.", "Before moving to specific seed features, we present some general results showing the instability of measurements using seeds.", "Figure 2 shows a motivating example, in which we imagine a digital humanities scholar interested in measuring whether women are portrayed more negatively in different genres of book reviews.", "As in the WEAT test, each seed is plotted according to its cosine similarity to an averaged unpleasantness vector (Caliskan et al., 2017).", "For some sets, no significant difference is visible, while for other sets, there are much larger differences, causing the researcher to draw different conclusions when comparing biases across datasets.", "Table 4 shows both the generated and gathered seed sets ordered by their coherence after using the WEAT method to discover a bias subspace.", "These examples highlight factors contributing to lower coherence (e.g., similarity of the seed sets) which we discuss in 8.", "They also highlight the general difficulty in constructing seed sets; e.g., as noted by Garg et al. (2018), the final row demonstrates that some U.S. racial categories are not distinguishable from available census data.", "Similar challenges arise when seeds do not occur in the target dataset, which is often true for names.", "The wide variation in coherence scores, especially for the generated seeds which are less likely to contain overlapping terms, indicates that different seed sets can have widely differing success for bias measurement.", "Sometimes seeds can reflect the curator's (or crowd's) personal biases.", "Instabilities can also arise from the organization of the seeds and seemingly innocuous linguistic features.", "We describe a series of distinct sources of instability that can be encoded in seed sets and discuss the implications of each.", "We rely on a combination of literature review, qualitative close reading of example seeds, and quantitative tests of seed features.", "We iterated through the seeds, flagging problematic sets, and then manually clustered and labeled the factors that could cause instability.", "Our identified factors can be categorized as definitional factors (reductive definitions, inclusion of confounding concepts), lexical factors (frequency, POS of individual seeds), and set factors (number and order of seeds, similarity of seed sets).", "Reductive Definitions.", "The seeds can be reductive and essentializing, codifying life experiences into traditional categories.", "Using names as place-holders for concepts like race (Nguyen et al., 2014; Sen and Wasow, 2016) or reducing gender to a binary with two extremes (Bolukbasi et al., 2016; Caliskan et al., 2017) can create a distorted view of the source data.", "Sometimes these are simplifying assumptions, made in an effort to measure biases that would otherwise go unexamined.", "However, these decisions run the risk of further entrenching these category definitionse.g., see discussions 1 2 3 4 5 6 7 8 9 10 Component 0.0 0.2 0.4 E x p l a i ned V a r i an c e Setting original order shuffled", "in Keyes (2017); Larson (2017) for the mistakes and harms that can be caused by mapping names to gendersand these trade-offs should be evaluated and documented.", "More broadly, recent work has critiqued NLP and ML bias research for not successfully connecting with the literature in sociology and critical race studies (Hanna et al., 2020; Blodgett et al., 2020).", "Engaging with this literature would provide a better foundation for decision-making about seed sets and provide context for future researchers.", "Imprecise Definitions.", "If the target concept is not well-defined, the resulting seed terms can be too broad and include multiple concepts, risking the creation of confounded or circular arguments.", "Similarly, the unexamined use of pre-existing sets and over-reliance on the category labels from prior work can result in a series of errors.", "The seeds can contain confounding terms (e.g., in Table 1, unpleasant contains cancer which in some datasets might be more prevalent for certain demographic groups) or terms from the target group (e.g., domestic work includes the gendered terms mom and mum).", "Similarly, the seeds can manifest cultural stigmas: for example, including fat and wrin-kled in an ugliness category (Fast et al., 2016) results in a seed set that itself contains stereotypes.", "These stigmas are harmful and can interact with other demographic features like gender or age (Puhl and Heuer, 2009), and unless their inclusion is intentional, they can accidentally inflate measurements towards certain groups.", "Predicting all such errors is impossible, and there can be cases where researchers intentionally include such terms (e.g., to capture a particular stereotype)but this should be a conscious decision by each researcher using the seeds, and at a minimum, researchers should clearly define their target concepts.", "Lexical Factors.", "Prior work examining seeds has shown that the frequency and part of speech of seeds can affect the resulting bias measurements.", "Ethayarajh et al. (2019) show that the WEAT test requires that the paired seeds occur at similar frequencies and that seed sets can be manipulated to produce certain measurements.", "Brunet et al. (2019) explore the effects of perturbing the training corpus, finding that (1) second-order neighbors to the seeds can have a strong impact on the bias measurement effect size and (2) effects are stronger for rarer words.", "Using contextual embeddings, Sedoc and Ungar (2019) show that different classes of words (e.g., names vs. pronouns) can result in different bias subspaces and that sometimes these subspaces represent an unintended dimension (e.g., age instead of gender).", "Set Size and Alignment.", "The number of seeds included in each set can affect the resulting bias subspace; Kozlowski et al. (2019) find small increases in performance when using more seed pairs.", "The alignment of the seeds in matched sets (i.e., the ordering or pairing of seeds in one set with seeds in another set) can also affect the bias subspace.", "In the PCA method, each term in one seed set is explicitly linked to a single term in the other seed set.", "The specific alignment between paired words matters; altering the pairing can result in dramatically different results, even for cases like gender, which is marked in English.", "However, we observe conscious pairings of seeds only for obvious cases, and sometimes obvious pairings produce subspaces that explain less variance.", "We replicate a study previously carried out on embeddings trained on internet-scale collections (Bolukbasi et al., 2016) using both a large, pretrained embedding and the relatively small NYT dataset.", "Figure 3 shows how much variance is explained by the first ten principal components of three difference matrices.", "When we use the original paired male-female seed words from Bolukbasi et al. (2016) (e.g., man woman , he she ), we see a single dominant first component, suggesting a strong male-female axis.", "As previously reported, the variances fall off gradually when the seeds are a set of random words.", "When we shuffle the order of the seed words, the drop off is steeper than for random pairs, but there is no longer a single dominant principal component.", "Similarly, Figure 4 shows that when we used the ordered gender pairs, the ranked words roughly divide into groups correlated with gender, while if we use shuffled pairs, the lists of high and low ranked words are not as easily distinguishable as masculine or feminine.", "We find an opposite effect social class pairs (Kozlowski et al., 2019); when we shuffle, we find a subspace that explains more variance than the explicitly ordered pairs (e.g., richest-poorest).", "We find similar differences when testing some seed sets that lack intuitive pairings, e.g., the matched pleasantness and unpleasantness seeds (Caliskan et al., 2017) and the matched Christianity and Islam seeds (Garg et al., 2018).", "Order does not always affect the subspacee.g, we found no significant difference when shuffling sets of namesbut we have shown that it can affect the subspace, and so to build confidence in measurements, testing is required.", "Set Similarity.", "By sampling random seed sets we find that it is more difficult to represent the variance of seed sets that are too close together.", "Figure 5 shows that set similarity (cosine similarity between the set mean vectors) is significantly correlated with explained variance for generated sets (Pearson r = 0 . 67 , p < 0 . 05 ).", "We highlight two comparisons between gathered sets intended to measure racial bias that explain different degrees of variance.", "Synthetic pairings generally explain more variance than pairings of gathered sets of equal similarity, although for gathered sets we cannot control for POS and frequency.", "Table 4 shows the generated seed sets ranked by coherence, where higher scores indicate that the bias subspace was able to separate the seed sets.", "Similar seed sets and sets with duplicates (e.g., the pairing in the table in which both generated sets contain food terms) have low coherence scores.", "Almost all recent work on bias measurement relies on sets of seed terms to ground cultural concepts in language.", "If we do not pay attention to the seeds, these methods will lack foundation and the claims they support will be left open to criticism and dis-missal.", "Seeds and their rationales need to be tested and documented, rather than hidden in code or copied without examination.", "Some of the risks discussed in this paper may seem obvious in retrospect, but our literature survey suggests there are widely varying levels of evaluation and documentation.", "Rationales for picking sources or seeds are not always explained, or the reader is left to assume that prior work has adequately validated the seeds.", "Tests for frequency, semantic similarity, and other features are rare or non-existent, and clear definitions and discussion of limitations are often missing.", "Permutation tests are sometimes used, but these do not account for seeds outside of those already selected.", "Significantly different results can be found using alternative seeds sets for the same target concept, and fine-grained comparisons require validation on multiple sets.", "We faced a number of challenges in gathering 178 seed sets from prior work.", "Sometimes seeds are shared online at an undocumented location and sometimes hard-coded into code repositories; this can significantly obscure the seeds from public view, which is troubling for tools intended for wide use on sensitive topics.", "Documentation is often scattered across locations, and in more than one case, we found contradictions between different sources for a single project.", "In one case, we were unable to find the full list of seeds used in the paper, and in several cases, it was unclear which seed sets were used for which experiments.", "While some authors went to commendable lengths to document their materials, there is a need for more consistent and transparent documentation.", "We recommend that researchers carefully trace the origins of seed sets , with attention to the risks associated with the origin type.", "We also recommend that researchers examine seed features .", "POS, frequency, semantic similarity, and pairing order can significantly affect the results of bias measurements.", "Seeds should be both examined manually and tested as shown in 8; importantly, they should be compared to alternative seeds with different attributes, as in 7.", "To assist this we release a compilation of 178 seed sets from prior work.", "These tests are particularly important when comparing biases across datasets.", "Finally, researchers should document all seeds and the rationales underlying their design, including concept definitions.", "We add to recent calls for better documentation and problem specification in machine learning (Bender and Friedman, 2018; Gebru et al., 2018; Mitchell et al., 2019; Blodgett et al., 2020) and in studies of social biases in technology (Olteanu et al., 2019).", "Specifically, when the seeds intentionally encode harmful stereotypes or slurs, it can be beneficial to include a trigger warning or not highlight the seeds in the paper; however, full seed lists should always be accessible, not hard-coded, with unique labels matched to experiments.", "Ultimately, our goal is not to eliminate a problem but to illuminate it: 2 to help practitioners think through the potential risks posed by seed sets used for bias detection.", "We encourage thoughtful, critical studies, but we observe a trend in which seed sets are used in new research and applications simply because they have been used in prior published work, without additional vetting.", "Research precedents can take on a life of their own and we have a responsibility to explore and document possible sources of error.", "We believe that seed sets can be useful and are probably unavoidable, but that no technical tool can absolve researchers from the duty to choose seeds carefully and intentionally.", "Thank you to our anonymous reviewers whose comments substantially influenced and improved this paper.", "Thank you to Rishi Bommasani, Forrest Davis, Os Keyes, Lauren Kilgour, Rosamund Thalken, Marten van Schijndel, Melanie Walsh, and Gregory Yauney for their many helpful suggestions.", "This work was funded through NSF grant #1652536." ]
[ "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "method", "objective", "method", "abstain", "other", "abstain", "other", "method", "method", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "method", "other", "other", "other" ]
[ "Human conversations contain many types of information, e.g., knowledge, common sense, and language habits.", "In this paper, we propose a conversational word embedding method named PR-Embedding, which utilizes the conversation pairs (cid:104) post, reply (cid:105) 1 to learn word embedding.", "Different from previous works, PR-Embedding uses the vectors from two different semantic spaces to represent the words in post and reply.", "To catch the information among the pair, we first introduce the word alignment model from statistical machine translation to generate the cross-sentence window, then train the embedding on word-level and sentence-level.", "We evaluate the method on single-turn and multi-turn response selection tasks for retrieval-based dialog systems.", "The experiment results show that PR-Embedding can improve the quality of the selected response.", "2 1 Introduction Word embedding is one of the most fundamental work in the NLP tasks, where low-dimensional word representations are learned from unlabeled corpora.", "The pre-trained embeddings can reflect the semantic and syntactic information of words and help various downstream tasks get better performance (Collobert et al., 2011; Kim, 2014).", "The traditional word embedding methods train the models based on the co-occurrence statistics, such as Word2vec (Mikolov et al., 2013a,b), GloVe (Pennington et al., 2014).", "Those methods are widely used in dialog systems, not only in retrieval-based methods (Wang et al., 2015; Yan et al., 2016) but also the generation-based models (Serban et al., 1 In this paper, we name the first utterance in the conversation pair as post,' and the latter is reply' 2 PR-Embedding source code is available at https:// github.com/wtma/PR-Embedding . 2016; Zhang et al., 2018b).", "The retrieval-based methods predict the answer based on the similarity of context and candidate responses, which can be divided into single-turn models (Wang et al., 2015) and multi-turn models (Wu et al., 2017; Zhou et al., 2018; Ma et al., 2019) based on the number of turns in context.", "Those methods construct the representations of the context and response with a single vector space.", "Consequently, the models tend to select the response with the same words .", "On the other hand, as those static embeddings can not cope with the phenomenon of polysemy, researchers pay more attention to contextual representations recently.", "ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and XLNet (Yang et al., 2019) have achieved great success in many NLP tasks.", "However, it is difficult to apply them in the industrial dialog system due to their low computational efficiency.", "In this paper, we focus on the static embedding, for it is flexible and efficient.", "The previous works learn the embedding from intra-sentence within a single space, which is not enough for dialog systems.", "Specifically, the semantic correlation beyond a single sentence in the conversation pair is missing.", "For example, the words why' and because' usually come from different speakers, and we can not catch their relationship by context window within the sentence.", "Furthermore, when the words in post and reply are mapped into the same vector space, the model tends to select boring replies with repeated content because repeated words can easily get a high similarity.", "To tackle this problem, we propose PR-Embedding (Post-Reply Embedding) to learn representations from the conversation pairs in different spaces.", "Firstly, we represent the post and the reply in two different spaces similar to the source and target languages in the machine translation.", "Then, the word alignment model is introduced to gener-P_from Post: Reply: P_hi P_, P_where P_are P_you R_i R_am R_from R_alabama R_how R_about R_you R_, P_you R_i R_alabama Figure 1: An example of conversational word alignment from the PersonaChat dataset (section 3.1).", "ate the cross-sentence window.", "Lastly, we train the embeddings based on the word-level co-occurrence and a sentence-level classification task.", "The main contributions of our work are: (1) we propose a new method to learn the conversational word embedding from human dialogue in two different vector spaces; (2) The experimental results show that PR-Embedding can help the model select better responses and catch the semantic correlation among the conversation pair.", "We consider two vocabularies for the post and the reply V p := { v p 1 , v p 2 , ..., v ps } , V r := { v r 1 , v r 2 , ..., v rs } together with two embedding matrices E p , E r R s d , where s is the size of the vocabularity and d is the embedding dimension.", "We need to learn the embedding from the conversation pair (cid:104) post, reply (cid:105) .", "They can be formulated as P = ( p 1 , ..., p m ) , R = ( r 1 , ..., r n ) , where m, n are the length of the post and the reply respectively.", "For each pair in the conversation, we represent the post, reply in two spaces E p , E r , by which we can encode the relationship between the post and reply into the word embeddings.", "Similar to the previous works (Mikolov et al., 2013b; Pennington et al., 2014), we also learn the embeddings based on word co-occurrence.", "The difference is that we capture both intra-sentence and cross-sentence co-occurrence.", "For the single sentence, the adjacent words usually have a more explicit semantic relation.", "So we also calculate the co-occurrence based on the context window in a fixed size.", "However, the relationship among the cross-sentence words is no longer related to their distance.", "As shown in Figure 1, the last word in the post from' is adjacent to the first word i' in reply, but they have no apparent semantic relation.", "So we need to find the most related word from the other sequence for each word in the pair.", "In other words, we need to build conversational word alignment between the post and the reply.", "In this paper, we solve it by the word alignment model in statistical machine translation (Och and Ney, 2003).", "We treat the post as the source language and the reply as the target language.", "Then we align the words in the pair with the word alignment model and generate a cross-sentence window centered on the alignment word.", "Word-level.", "PR-Embedding learns the word representations from the word-level co-occurrence at first.", "Following the previous work (Pennington et al., 2014), we train the embedding by the global log-bilinear regression model w Ti w k + b i + b k = log ( X ik ) (1) where X ik is the number of times word k occurs in the context of word i .", "w , w are the word vector and context word vector, b is the bias.", "We construct the word representations by the summation of w and w .", "Sentence-level.", "To learn the relationship of embeddings from the two spaces, we further train the embedding by a sentence-level classification task.", "We match the words in the post and reply based on the embeddings from word-level learning.", "Then we encode the match features by CNN (Kim, 2014) followed by max-pooling for prediction.", "We can formulate it by M ( i,j ) = cosine ( p i , r j ) (2) M i = tanh ( W 1 M i : i + h 1 + b 1 ) (3) M = MaxP ooling m h +1 i =1 [ M i ] (4) where W 1 , b 1 are trainable parameters, M i : i + h 1 refers to the concatenation of ( M i , ..., M i + h 1 ) and hits@1 hits@5 hits@10 GloVe train 12.6 39.6 63.7 GloVe emb 18.0 44.6 66.9 BERT emb 15.4 41.0 62.9 Fasttext emb 17.8 44.9 67.2 PR-Embedding 22.4 60.0 81.1 IR baseline 21.4 -Starpace 31.8 -Profile Memory 31.8 -KVMemnn 32.3 62.0 79.2 +PR-Embedding 35.9 66.1 82.6 KVMemnn (GloVe) 36.8 68.1 83.6 +PR-Embedding 39.9 72.4 87.0 Table 1: Experimental results on the test set of the PersonaChat dataset.", "h is the window size of the filter.", "At last, we feed the vector M into a fully-connected layer with sigmoid output activation.", "where W 2 , b 2 are trainable weights.", "We minimize the cross-entropy loss between the prediction and ground truth for training.", "To better evaluate the embeddings, we choose the manual annotation conversation datasets.", "For the English dataset, we use the multi-turn conversation dataset PersonaChat (Zhang et al., 2018a).", "For the Chinese dataset, we use an in-house labeled test set of the single-turn conversations, which contains 935 posts, and 12767 candidate replies.", "Each of the replies has one of the three labels: bad, middle, and good.", "The training set comes from Baidu Zhidao 3 and contains 1.07 million pairs after cleaning.", "Baselines.", "We use GloVe as our main baseline, and compare PR-Embedding with the embedding layer of BERT, which can also be used as static word embedding.", "We also compare with the the public embeddings of Fasttext (Joulin et al., 2017) and DSG (Song et al., 2018).", "Tasks.", "We focus on the response selection tasks for retrieval-based dialogue systems both in the single-turn and multi-turn conversations.", "For the Personchat dataset, we use the current query for response selection in the single-turn task and conduct the experiments in no-persona track because we focus on the relationship between post and reply.", "Models.", "For the single-turn task, we compare the embeddings based on BOW (bag-of-words, the average of all word embedding vectors), and select replies by cosine similarity; For the multi-turn task, we use a neural model called key-value (KV) memory network 4 (Miller et al., 2016), which has been proved to be a strong baseline in the ConvAI2 competition (Dinan et al., 2020).", "Metrics.", "We use the recall at position k from 20 candidates (hits@k, only one candidate reply is true) as the metrics in the PersonaChat dataset following the previous work (Zhang et al., 2018a).", "For the Chinese dataset, we use NDCG and P@1 to evaluate the sorted quality of the candidate replies.", "Setup.", "We train the model by Adagrad (Duchi et al., 2011) and implement it by Keras (Chollet et al., 2015) with Tensorflow backend.", "For the PersonaChat dataset, we train the embeddings by the training set containing about 10k conversation pairs, use validation sets to select the best embeddings, and report the performance on test sets.", "The results on the PersonaChat dataset are shown in Table", "1. The strongest baseline in the single-turn task is GloVe, but PR-Embedding outperforms the baseline by 4.4%.", "For the multi-turn task, we concatenate PR-Embeddings with the original embedding layer of the model.", "We find that the 4 The official baseline result is 34.9 on hits@1, which is subject to the changes of the computing device.", "performance becomes much better when we concatenate PR-Embedding with the randomly initialized embedding.", "The model KVMemnn becomes much stronger when the embedding layer initializes with the embeddings from GloVe.", "However, PR-Embedding still improves the performance significantly.", "The results on the in-house dataset are in Table", "2. Our method (PR-Emb) significantly exceeds all the baselines in all metrics.", "The improvement is greater than the results on the English dataset as the training corpus is much larger.", "Note that, all the improvements on both datasets are statistically significant (p-value 0 . 01 ).", "We conduct the ablations on Chinese datasets in consideration of its larger training corpus.", "The results are in the last part of Table", "2. When we change the two vector spaces into the single one (w/o PR), the model is similar to GloVe with sentence-level learning.", "The performance becomes much worse in all the metrics, which shows the effect of two vector spaces.", "Furthermore, all the scores drop significantly after sentence-level learning is removed (w/o SLL), which shows its necessity.", "We provide an analysis based on the nearest tokens for the selected words in the whole vector space, including the word itself.", "For PR-Embedding, we select the words from the post vocabulary and give the nearest words both in the post and the reply space.", "Note that all of them are trained by the training set of the PersonaChat dataset.", "The results are in Table", "3. For the columns in GloVe and P-Emb, the words are the same (first one) or similar to the selected ones because the nearest token for any word is itself within a single vector space.", "The similarity makes that the model tends to select the reply with repeated words.", "While the words in the column R-Emb are relevant to the selected words, such as words why' and because,' thanks' and welcome,' congrat-ulations' and thank.' Those pairs indicate that PR-Embedding catches the correlation among the conversation pairs, which is helpful for the model to select the relevant and content-rich reply.", "To further explore how PR-Embedding represents words and the relation between the two spaces, we use t-SNE (Maaten and Hinton, 2008) to visualize the embeddings of 40 words with the highest frequency except for stop words in the spaces.", "The embeddings are visualized in Figure", "2. For the embeddings in the same spaces, the words with similar semantic meanings are close to each other, indicating that PR-Embedding catches the similarity within the same space.", "For example, the words hello' and hi', good' and great', not' and no'.", "For the same words in different spaces, most of them have close locations, especially nouns and verbs, such as work,' think,' know.' Maybe it is because they play a similar role in the post and the reply.", "While some question words have different situations, for example, how' and good, great,' why' and because' show the clear relations in the post and the reply spaces, which conforms to the habit of human dialog.", "Furthermore, PR-Embeddings can also capture the correlation between pronouns such as such as my, we' and your' also catch the correlation.", "We can conclude that our method can encode the correlation among the two spaces into the embeddings.", "In this paper, we have proposed a conversational word embedding method named PR-Embedding, which is learned from conversation pairs for retrieval-based dialog system.", "We use the word alignment model from machine translation to calculate the cross-sentence co-occurrence and train the embedding on word and sentence level.", "We find that PR-Embedding can help the models select the better response both in single-turn and multi-turn conversation by catching the information among the pairs.", "In the future, we will adapt the method to more neural models especially the generation-based methods for the dialog system.", "We would like to thank all anonymous reviewers for their hard work on reviewing and providing valuable comments on our paper.", "We thank Yecheng Hu and Chenglei Si for proofreading our paper thoroughly.", "We also would like to thank Quan Wen for insightful suggestions.", "This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153." ]
[ "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "result", "abstain", "other", "other", "other", "other" ]
[ "We introduce a new syntax-aware model for dependency-based semantic role labeling that outperforms syntax-agnostic models for English and Spanish.", "We use a BiLSTM to tag the text with supertags extracted from dependency parses, and we feed these supertags, along with words and parts of speech, into a deep highway BiLSTM for semantic role labeling.", "Our model combines the strengths of earlier models that performed SRL on the basis of a full dependency parse with more recent models that use no syntactic information at all.", "Our local and non-ensemble model achieves state-of-the-art performance on the CoNLL 09 English and Spanish datasets.", "SRL models benefit from syntactic information, and we show that supertagging is a simple, powerful, and robust way to incorporate syntax into a neural SRL system.", "Semantic role labeling (SRL) is the task of identifying the semantic relationships between each predicate in a sentence and its arguments (Gildea and Jurafsky, 2002).", "While early research assumed that SRL models required syntactic information to perform well (Punyakanok et al., 2008), recent work has demonstrated that neural networks can achieve competitive and even state-of-the-art performance without any syntactic information at all (Zhou and Xu, 2015; Marcheggiani et al., 2017; He et al., 2017).", "These systems have the benefits of being simpler to implement and performing more robustly on foreign languages and out-of-domain data, cases where syntactic parsing is more difficult (Marcheggiani et al., 2017).", "In this paper, we show that using supertags is an effective middle ground between using full syntactic parses and using no syntactic information Work partially done at Yale University.", "at all.", "A supertag is a linguistically rich description assigned to a lexical item.", "Supertags impose complex constraints on their local context, so supertagging can be thought of as almost parsing (Bangalore and Joshi, 1999).", "Supertagging has been shown to facilitate Tree-Adjoining Grammar (TAG) parsing (Bangalore et al., 2009; Friedman et al., 2017; Kasai et al., 2017, 2018) and Combinatory Categorial Grammar (CCG) parsing (Clark and Curran, 2007; Kummerfeld et al., 2010; Lewis et al., 2016; Xu, 2016).", "We propose that supertags can serve as a rich source of syntactic information for downstream tasks without the need for full syntactic parsing.", "Following Ouchi et al. (2014), who used supertags to improve dependency parsing, we extract various forms of supertags from the dependency-annotated CoNNL 09 corpus.", "This contrasts with prior SRL work that uses TAG or CCG supertags (Chen and Rambow, 2003; Lewis et al., 2015).", "We train a bidirectional LSTM (BiLSTM) to predict supertags and feed the predicted supertag embedding, along with word and predicted part-of-speech embeddings, to another BiLSTM for semantic role labeling.", "Predicted supertags are represented by real-valued vectors, contrasting with approaches based on syntactic paths (Roth and Lapata, 2016; He et al., 2018) and syntactic edges (Marcheggiani and Titov, 2017; Strubell et al., 2018).", "This way of incorporating information alleviates the issue of error propagation from parsing.", "Supertagging has many advantages as part of a natural language processing pipeline.", "First, as a straightforward sequence-labeling task, the supertagging architecture is much simpler than comparable systems for structured parsing.", "Second, it is simple to extract different forms of supertags from a dependency corpus to test different hypotheses about which kinds of syntactic information are most useful for downstream tasks.", "Our re-Token Model 1 Model TAG No DEP/R DEP/R , P/R P/R it SBJ/R was ROOT+L R ROOT+SBJ/L PRD/R n't ADV/L ADV/L black NAME/R NAME/R Monday PRD/L+L Table 1: Supertags for the sentence No, it wasn't black Monday.", "sults show that supertags, by encoding just enough information, can improve SRL performance even compared to systems that incorporate complete dependency parses.", "We experiment with four supertag models, two from Ouchi et al. (2014), one from Nguyen and Nguyen (2016), and one of our own design inspired by Tree Adjoining Grammar supertags (Bangalore and Joshi, 1999).", "Each model encodes a different set of attributes about the syntactic relationship between a word, its parent, and its dependents.", "Table 2 summarizes what information is expressed in each supertag model.", "Model", "0. A Model 0 supertag for a word w encodes the dependency relation and the relative position (direction) between w and its head, i.e. left (L), right (R), or no direction (ROOT) (Nguyen and Nguyen, 2016).", "Model", "1. A Model 1 supertag for w adds to the parent information from Model 0 the information of whether w possesses dependents to its left (L) or right (R) (Ouchi et al., 2014).", "Model", "2. A Model 2 supertag for w extends Model 1 by encoding the dependency relation between w and its obligatory dependents.", "1 When w 1 Following Ouchi et al. (2014), we define obligatory dependents as those with relations SBJ,' OBJ,' PRD,' and VC.' For Spanish, we define obligatory syntactic arguments lacks such obligatory children, we encode whether it possesses non-obligatory dependents to the left (L) or right (R) as in Model", "1. Model TAG.", "We propose Model TAG supertags that represent syntactic information analogously to TAG supertags (elementary trees) (Bangalore and Joshi, 1999).", "A Model TAG supertag encodes the dependency relation and the direction of the head of a word similarly to Model 0 if the dependency relation is non-obligatory (corresponding to adjunction nodes), and the information about obligatory dependents of verbs if any similarly to Model 2 (corresponding to substitution nodes).", "Motivated by recent state-of-the-art supertaggers (TAG: Kasai et al. (2017, 2018); CCG: Lewis et al. (2016); Xu (2016)), we employ a bi-directional LSTM (BiLSTM) architecture for our supertagging.", "The input for each word is the conncate-nation of a dense vector representation of the word, a vector embedding of a predicted PTB-style POS tag (only for English), 2 and a vector output by character-level Convolutional Neural Networks (CNNs) for morphological information.", "For POS tagging before English supertagging, we use the same hyperparameters as in Ma and Hovy (2016).", "For supertagging, we follow the hyperparameters chosen in Kasai et al. (2018) regardless of the supertag model that is employed.", "We initialize the word embeddings by the pretrained 100 dimensional GloVe (Pennington et al., 2014) and the 300 dimensional FastText (Bo-janowski et al., 2017) vectors for English and Spanish respectively.", "Our SRL model is most similar to the syntax-agnostic SRL model proposed by Marcheggiani et al. (2017).", "Our model differs in two ways: 1) we add randomly initialized 50 dimensional supertag embeddings to the input layer (Fig. 1), and 2) we use a modified LSTM with highway layers and regularization (0.5 dropout) as in He et al. (2017).", "We use the same hyperparameters as in Marcheggiani et al. (2017) with randomly initialized 50 dimensional embeddings for supertags.", "3 as dc,'suj,' cd,' and cpred.' 2 For the English data, predicted PTB-style POS tags generally contribute to increases, approximately 0.2-0.4% in the dev set, whereas for Spanish adding predicted (coarse-grained) POS tags hurt the performance.", "3 We provide lists of hyperparameters in Appedix A.1.", "For pre-trained word embeddings, we use the same word embeddings as the ones in Marcheggiani et al. (2017) for English and the 300-dimensional FastText vectors (Bojanowski et al., 2017) for Spanish.", "We use the predicates predicted by the mate-tools (Bjorkelund et al., 2009) (English) and Zhao et al. (2009) (Spanish) system in our models, again following Marcheggiani et al. (2017) to facilitate comparison.", "Our code is available online for easy replication of our results.", "4 Figure 1: SRL architecture with a highway BiLSTM.", "Table 3 provides our supertagging results for English and Spanish across the different types of supertag described above.", "Here we clearly see the general pattern that the more granular supertagging becomes, the less reliable it is, and finding the balance between granularity and predictability is critical.", "We present our SRL results in Tables 4-7 along with the results from a baseline 4 https://github.com/jungokasai/ stagging_srl .", "BiLSTM model, which is our implementation of the syntax-agnostic model in Marcheggiani et al. (2017).", "We also present results for a BiLSTM model with dropout and highway connections but without supertags (BDH model), to distinguish the effects of supertags from the effects of better LSTM regularization.", "In every experiment we train the model five times, and present the mean score.", "Table 4 shows that Model 1 yields the best performance in the English dev set, and thus we only use Model 1 supertags for test evaluation.", "We primarily show results only with word type embeddings to conduct fair comparisons with prior work, but we also provide results with deep contextual word representations, ELMo (Peters et al., 2018), and compare our results with recent work that utilizes ELMo (He et al., 2018).", "5 English in-domain.", "Table 5 summarizes the results on the English in-domain test set.", "First, we were able to approximately replicate the results from Marcheggiani et al. (2017).", "Adding dropout and highway connections to our BiLSTM model improves performance by 0.5 points, to 88.1, and adding supertags improves results even further to 88.6.", "Our supertag model performs even better than the non-ensemble model in Marcheggiani and Titov (2017), in which the model is given the complete dependency parse of the sentence.", "This result suggests that supertags can be even more effective for SRL than a more complete representation of syntax.", "Furthermore, our supertag-based method with contextual representations achieves 90.2, a new state-of-the-art.", "Interestingly, the gain from supertagging decreases to 0.2 points (90.2 vs. 90.0) in the presence of contextual representations, suggesting that contextual representations encode some of the same syntactic information that supertags provide.", "English out-of-domain.", "One of the advantages of using a syntax-agnostic SRL model is that such a model can perform relatively well on out-of-domain data, where the increased difficulty of syn-5 We used the pretrained ELMo available at https:// tfhub.dev/google/elmo/2 .", "tactic parsing can cause errors in a syntax-based system (Marcheggiani et al., 2017).", "Unfortunately we were not able to replicate the out-of-domain results of Marcheggiani et al. (2017): our implementation of the BiLSTM achieves a score of 76.4, compared to their reported score of 77.7.", "However, we note that incorporating supertags into our own model improves performance, with our best model achieving a score of 77.6.", "Our supertag-based model also substantially outperforms the full dependency-based models (Roth and Lapata, 2016; Marcheggiani and Titov, 2017).", "This suggests that syntax with a certain degree of granularity is useful even across domains.", "Our supertag-based method alleviates the issue of error propagation from syntactic parsing.", "Finally, our model with contextual representations yields 80.8, an improvement of 1.5 F1 points over the previous state-of-the-art (He et al., 2018), which also uses ELMo.", "Spanish.", "Table 7 shows the results on the Spanish test data.", "Our BiLSTM implementation yields lower performance than Marcheggiani et al. (2017): our model achieves a score of 79.1, compared to their reported score of 80.3.", "However, our BDH model yields a score of 80.8, already achieving state-of-the-art performance.", "Adding supertags to BDH improves the score further to 81.0.", "This suggests that while the gains are relatively small, the supertag-based approach still helps Spanish SRL.", "Supertags slightly improve performance when contextual representations are used (83.0 vs. 82.9).", "See appendices for details.", "Following the analysis in Roth and Lapata (2016), we show plots of the BiLSTM, BDH (BiLSTM + Dropout + Highway), and Model 1 role labeling performance for sentences with varying number of words (in-domain: Fig. 2; out-of-domain: Fig. 3).", "Note first that BDH outperforms the baseline BiLSTM model in a relatively uniform manner across varying sentence lengths.", "The benefits of Model 1 supertags, in contrast, come more from longer sentences, especially in the out-Non-ensemble System P R F 1 FitzGerald et al. (2015) 87.3 Roth and Lapata (2016) 90.0 85.5 87.7 Marcheggiani et al. (2017) 88.7 86.8 87.7 Marcheggiani and Titov (2017) 89.1 86.8 88.0 BiLSTM 88.5 86.7 87.6 BDH 88.3 87.8 88.1 BDH + Model 1 89.0 88.2 88.6 + Contextual Representations He et al. (2018) (ELMo) 89.7 89.3 89.5 BDH + ELMo 90.3 89.7 90.0 BDH + Model 1 + ELMo 90.3 90.0 90.2 Ensemble System FitzGerald et al. (2015) 87.7 Roth and Lapata (2016) 90.3 85.7 87.9 Marcheggiani and Titov (2017) 90.5 87.7 89.1 Table 5: Results on the CoNLL 2009 in-domain test set for English.", "of-domain test set.", "This implies that the supertag model is robust to the sentence length, probably because supertags encode relations between words that are linearly distant in the sentence, information that a simple BiLSTM is unlikely to recover.", "Table 8 reports SRL results broken down by predicate category (V: Verb, Propbank; N: Noun, Nombank) and semantic role.", "We can observe that the various supertag models differ in their performance for different predicate-role pairs, suggesting that different kinds of linguistic information are relevant for identifying the different roles.", "Overall, Model 1 supertags achieve the most consistent improvements over BiLSTM and BiLSTM + Dropout + Highway (BDH) in V / A0, V / A1, V / A2, V / AM, N / A2, and N / AM.", "Moreover, Model 1 even improves on Path-LSTM (Roth and Lapata, 2016) by large margins in V / A0, V / A1, V / AM, and N / AM, even though the Path-LSTM model has the benefit of using the complete dependency path between each word and its head.", "This shows that supertags can be even more effective for SRL than more granular syntactic informationeven quite simple supertags, like Model 0, which encode only the dependency arc between a word and its head.", "We presented state-of-the-art SRL systems on the CoNLL 2009 English and Spanish data that make crucial use of dependency-based supertags.", "We showed that supertagging serves as an effective middle ground between syntax-agnostic approaches and full parse-based approaches for dependency-based semantic role labeling.", "Supertags give useful syntactic information for SRL and allow us to build an SRL system that does not depend on a complex architecture.", "We have also seen that the choice of the linguistic content of a supertag makes a significant difference in its utility for SRL.", "In this work, all models are developed independently for English and Spanish.", "However, sharing some part of SRL models could improve performance (Mulcaire et al., 2018, 2019).", "In future work, we will explore crosslingual transfer for supertagging and semantic role labeling.", "The authors thank Diego Marcheggiani for assistance in implementing SRL models and Diego Marcheggiani and the anonymous reviewers for their helpful feedback.", "This work was funded in part by the Funai Overseas Scholarship to JK." ]
[ "objective", "method", "method", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "result", "other", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "result", "objective", "abstain", "objective", "other", "other" ]
[ "The introduction of immensely large causal language models (CLMs) has rejuvenated the interest in open-ended text generation.", "However, controlling the generative process for these Transformer-based models is at large an unsolved problem.", "Earlier work has explored either plug-and-play decoding strategies or more powerful but blunt approaches such as prompting.", "There hence currently exists a trade-off between fine-grained control and the capability for more expressive high-level instructions.", "To alleviate this trade-off, we propose an encoder-decoder architecture that enables intermediate text prompts at arbitrary time steps.", "We propose a resource-efficient method for converting a pre-trained CLM into this architecture and demonstrate its potential in various experiments, including the novel task of contextualized word inclusion.", "Our method provides strong results in multiple experimental settings, proving itself to be both expressive and versatile.", "1 1 Introduction A causal language model (CLM) is a language model trained using a simple next-token prediction objective.", "Current CLMs are typically based on the Transformer architecture (Vaswani et al., 2017), which has resulted in unprecedented text generation capabilities (Radford et al., 2018a,b; Brown et al., 2020).", "Even so, the generation process of a CLM is difficult to control, as one is forced to gradually decode the next-step prediction one token at a time.", "This inhibits the applicability of CLMs when one intends for the generated text to fulfill certain criteria, and not only be a linguistically sound continuation in a given context.", "the generated text to counter the many biases that modern CLMs have been shown to possess (Bor-dia and Bowman, 2019).", "However, most applications require a greater degree of control, as one often wishes to steer the text generation in a specific direction, such as generating a story to a given plot (Li et al., 2013; Yao et al., 2019; Riedl, 2021), or sticking to a certain topic (Keskar et al., 2019).", "Some areas require stringent and fine-grained control, as the many data-to-text tasks (Gardent et al., 2017; Leppanen et al., 2017; Koncel-Kedziorski et al., 2019), which necessitates that the generated text mediates very specific information and facts.", "Due to this apparent need for controllable text generation, recent work (see Section 2.1) has explored different methods to steer and constrain the generation process of a CLM.", "There are mainly two lines of research in this area.", "The more traditional approach focuses on fine-grained control and how to steer the generation process at arbitrary points, while still adhering to the current context.", "This is often achieved by independently modifying the predicted vocabulary distribution at each decoding step.", "However, this decouples the CLM from the control method, prohibiting the CLM's ability to plan accordingly and thus severely limits the type of control that can be formulated.", "The second approach instead opts for more expressive and high-level control, letting the CLM itself interpret and incorporate the instruction into the text generation.", "This is often done via either a fine-tuning objective or, as is currently common, by formulating the instruction as a textual context (referred to as prompting).", "Although expressive, these approaches are less effective than the previous ones in controlling generation at specific points.", "This is due to the prompt's influence being negatively correlated with the distance from the prompt to the next predicted token (Zou et al., 2021), making prompting difficult for nonadjacent text.", "In an attempt to bridge the gap between fine-grained control and the expressiveness of prompts, we propose an architecture that permits long-distance and independent prompting throughout the generation process.", "This architecture has an encoder-decoder setup, where the encoder influences the decoder via a novel non-residual attention schema.", "Along with theoretical arguments for the benefits of this architecture, we provide a resource-efficient self-supervised method for converting a pre-trained CLM into this setup.", "In addition to evaluating on the original CommonGen dataset (Lin et al., 2020), we propose a new contextualized version of CommonGen, called Contextualized CommonGen (C 2 GEN ) and evaluate relevant methods on it.", "This new dataset extends the task to generating a sentence which includes a given set of words, while simultaneously adhering to a given context.", "We find that no previous solution is capable of handling this task, either barely including 50% of the target words, or not generating text of satisfactory quality.", "Our Contributions: (1) An encoder-decoder architecture based on a novel attention module which enables prompting at arbitrary time steps.", "(2) A resource-efficient method, which requires no labeled data, for converting a pre-trained CLM into this architecture.", "(3) The introduction of the contextualized word inclusion task, through the C 2 GEN dataset.", "(4) Extensive testing of related baselines and our proposed method, via both automatic and human evaluation.", "This section briefly introduces the related work for constrained text generation.", "A detailed description of each method, their strengths and weaknesses, and how they are configured to form our baselines is available in Appendix D. Decoding strategies operate directly on the CLM's predicted vocabulary distribution at each time step, and are hence often model-agnostic.", "Dathathri et al. (2020) propose Plug-and-Play-Language-Models (PPLM), which adjust the distribution in accordance with the gradients of an external discriminator model.", "Pascual et al. (2021) introduce Keyword2Text, which steers the CLM to include target words by directly increasing their sampling probability, along with their GloVe (Pen-nington et al., 2014) neighbours.", "Training objectives can be set up to grant generative control, such as CTRL (Keskar et al., 2019), which incorporates control codes for textual genre.", "KG-BART (Liu et al., 2021) utilizes a common sense knowledge graph and fine-tunes BART (Lewis et al., 2020) towards word inclusion.", "GDC (Khalifa et al., 2021) fine-tunes towards arbitrary discriminator signals using Reinforcement Learning.", "POINTER (Zhang et al., 2020) tackles word inclusion with a non-autoregressive approach, injecting words around the target words until a sentence is formed.", "Tailor (Ross et al., 2021) fine-tunes a T5 (Raf-fel et al., 2020) for fine-grained semantically-controlled text generation, with a focus on perturbing text for data augmentation.", "Prompting acts within the framework of the CLM's pre-training task, as constraints are expressed through natural language.", "This approach was popularized by the GPT models (Radford et al., 2018b; Brown et al., 2020) and has been shown to work for many different types of constraints (Reif et al., 2021; Clive et al., 2021).", "There is no standardized evaluation methodology for open-ended text generation (Howcroft et al., 2020).", "The large number of possible good texts hinders the usage of automatic text-overlap metrics (Papineni et al., 2002; Lin, 2004).", "And many human evaluations are to vague to be properly reproducible (Belz et al., 2020).", "To remedy this, van der Lee et al. (2019) propose guidelines for human studies, and Gehrmann et al. (2021) argue that textual quality cannot be described through a single metric.", "Informed by these arguments, we report relevant metrics for various situations, without necessarily claiming one method to be superior in all aspects.", "We propose to steer a CLM's generative direction by introducing a separate encoder for prompt instructions, which we refer to as the prompt model .", "The prompt model interprets textual prompts and produces positional invariant key-values, which the CLM can attend to via the novel non-residual attention schema (Section 3.1).", "The positional invariance ensures that the instruction is equally applicable at any time step, and is achieved by an additional shift of its key-values (Section 3.2).", "To allow independent prompts at different time steps we compute two distinct streams of information for the CLM.", "We refer to these as the textual and non-residual streams.", "The textual stream ignores the prompt model completely, and is identical to the normal self-attention of the CLM.", "The non-residual stream is responsible for the prediction at each time step, and instead attends to both the previous steps of the textual stream, and key-values from the prompt model.", "This is depicted in Figure 1 and formalized in Equation", "1. Concretely, at time-step n the textual stream self-attends to the current time step and the previous textual key-values KV i<nT .", "The nonresidual stream self-attends to the current time step, the previous textual key-values KV i<nT , and the prompt model's key-values KVP .", "Finally, the next step prediction P ( w n +1 ) is computed from the non-residual stream.", "Applying the prompt SP to every time step in the text SCLM = { w 1 , w 2 , ..., w n } thus results in: KVP = PromptModel ( SP ) KV nT = CLM ( w n | KV i<nT ) P ( w n +1 ) = CLM ( w n | KV p , KV i<nT ) (1) Non-residual key-values are hence never attended to by either streams from subsequent time steps.", "A prompt instruction at time step n can therefore only influence future decoding steps via the sampled token at time step n , and not through its key-values.", "This non-residual property of each prompt assures that the hidden state of the CLM does not deteriorate over time.", "Appendix C.1 further motivates this with an example.", "Intuitively, this ensures that the residual key-values are only affected by textual input, allowing the CLM to operate within the limits of its pretraining objective.", "Furthermore, this means that one can apply different prompts at different time steps, without them disrupting each other through the CLM's internal state.", "2 Further intuition on non-residual attention is available in Appendix C. 3.2 Position Invariant Transformation Ideally, prompt instructions should be equally applicable at any time step in the generation process.", "However, the positional encoding system of Transformers makes this difficult, particularly absolute positional encodings (Vaswani et al., 2017).", "Overcoming this requires a significant amount of training of the prompt model (See Appendix C.2).", "To alleviate the computational burden, we propose an architectural add-on where positional invariance is achieved by an additional set of weights, trained after the prompt model is trained on single sentence data.", "This reduces the overall training time, and allows one to easily fine-tune the prompt model on tasks lacking context, and apply the positional invariant transformation afterwards.", "This is depicted as step 3 an 4 in Figure", "2. The prompt model, being a CLM, uses causal self-attention to process text and generate L sets of key-values per time step, where L refers to the number of layers in the model.", "We refer to the L key-values at a time step i as kv i .", "Hence, when the prompt model computes a prompt of length n , it yields the sequence of key-values KV P = { kv 1 , kv 2 , ..., kv n } .", "The positional invariant transformation, referred to as C , consists of one parameter for each of the CLMs key-value parameters.", "3 The same transformation C is then applied by point-wise addition to the prompt model's output at all time steps, thus yielding the shifted key-values KVP = { kv 1 + C, kv 2 + C, ..., kv n + C } .", "Given a pre-trained CLM, we propose to train an accompanying prompt model via four distinct phases, 4 as demonstrated in Figure", "2. This includes an initialization phase, two pre-training phases, and one optional fine-tuning phase.", "The weights of the CLM are never updated in any of the training phases.", "As popularized by Raffel et al. (2020), all training, independent of task, is formulated within the framework of teacher-forced causal language modeling (Williams and Zipser, 1989).", "The goal is to maximize the likelihood of generating text S given prompt P , in accordance with Equation", "1. 4.1 Initialization Prior to any training, the prompt model is created by cloning the pre-trained CLM into a separate new model.", "The CLM and prompt models hence start with an identical set of weights.", "This results in an efficient starting point, since the CLM is trained to communicate with itself via self-attention, and thus also the CLM and prompt model.", "Pre-training is divided into two distinct phases, both relying on the text generation task of word inclusion with a target sentence length.", "In the first phase, the prompt model is trained to influence the CLM using only single sentence data, without any position invariant transformation.", "In the second phase only the position invariant transformation is learnt, by training on data with longer context.", "This is illustrated in Figure", "3. For both phases, training data is generated by sampling [ A, B ] unique target words for each sentence S = { w 1 , w 2 , ..., w n } , and incorporating them and the sentence length n into prompt P .", "The second phase utilizes sequences of multiple sentences, where each sentence is given its own prompt.", "During this phase, each prompt is computed independently, and the CLM attends only to the relevant prompt for each sentence.", "Details regarding the corpus and sampling schema used in our experiments are available in Appendix A, and details regarding our randomized prompt template is available in Appendix A.3.", "Finally, one can optionally fine-tune the prompt model towards another task or dataset.", "This is done by temporarily removing the positional invariant transformation, and tuning only the prompt model.", "The positional invariant transformation is then re-inserted afterwards, shifting the now fine-tuned prompt model's key-values.", "This fine-tuning schema circumvents the problem that many NLP tasks and labeled datasets are formulated without any accompanying context.", "One can therefore utilize single sentence datasets, and still apply the prompt model at arbitrary time steps.", "CommonGen (Lin et al., 2020) is a dataset for the constrained text generation task of word inclusion.", "The objective of the task is to generate text that includes a given set of target words and adhering to common sense.", "Each sample includes 3-5 target words, taken from various image-caption datasets.", "The samples in CommonGen are however all formulated without any accompanying context.", "We argue that this task formulation is too narrow, and that it needlessly incentivizes researchers to focus on methods that do not support context.", "This is orthogonal to our belief that many application areas necessitates the consideration of surrounding context.", "Therefore, to complement CommonGen, we provide an extended test set where an additional context is provided for each set of target words.", "The task is therefore reformulated to both generate commonsensical text which includes the given words, and also have the generated text adhere to the given context.", "Each context is formulated as three sentences, created by human annotators from Mechanical Turk ( www.mturk.com ), as exemplified in Table", "1. The annotators were tasked to create three sentences, so that a subsequent sentence would be likely to include the target words.", "Details regarding the creation process of C 2 GEN , and its statistical properties are available in Appendix F. Jane was excited when the teacher announced it was career week.", "Jane signed her dad up to visit the classroom.", "On the appointed day, Jane's dad showed up dressed in his work gear.", "We separate word inclusion into two different settings.", "In the first, the model is tasked to generate exactly 32 tokens.", "Requiring the model to both satisfy the word inclusion objective, and continue generating contextually relevant text.", "This allows methods that do not grant sentence level control to participate, such as PPLM and Keyword2Text.", "In the second setting, the model is only tasked to create a single sentence, elevating the requirement of continued text generation.", "This setting is suitable for methods specifically trained towards creating a single common sense sentence, such as KG-BART and POINTER.", "For both of these settings, we run experiments on both CommonGen and C 2 GEN .", "Since experiments on the contextualized C 2 GEN require the model to adhere to a context regardless of whether the objective is to generate a single sentence or a free text, KG-BART and POINTER are excluded from these experiments all together.", "Using our proposed method we train a nonresidual prompt model to accompany a pre-trained GPT-2 Large model.", "This setup is referred to as NRP during experiments, and training details can be found in Appendix A. In order to demonstrate how more sophisticated decoding strategies can be incorporated, we also combine NRP with a slightly modified version of Keyword2Text.", "Details for this incorporation can be found in Appendix B.3.", "The inference utilizes a beam size of 4, and any additional parameters were set according to a held-out validation set (See Appendix B).", "All baseline implementations are taken from their respective code repositories, and if possible the official pre-trained model (See Appendix D).", "In accordance with the guidelines described in Section 2.2, we provide both quantitative and qualitative evaluation.", "The qualitative examples in Table 4 are intended to convey the overall style for each algorithm, and more qualitative examples are available in Appendix G. Quantitative metrics are easily comparable, but may be less suited to convey the overall style.", "Our quantitative metrics are described in detail in Appendix E, and briefly below: Word Inclusion Coverage (Cov): The percentage of target words that are included in the generated text.", "Both target and generated words are lemmatized, alleviating the need to match the exact form of the target word.", "Perplexity (Ppl): The mean perplexity of the generated text calculated with GPT-2 XL.", "Although lower perplexity often indicates better language fluency, degenerate repetitions tend to result in low perplexity as well.", "Therefore, one should not rely on perplexity alone, but in combination with other metrics and qualitative analysis.", "Nevertheless, it is a metric that yields a hint of language fluency that does not require human evaluation.", "In the presence of contexts, as is the case with C 2 GEN , the perplexity is conditioned on the context and typically results in significantly lower perplexity values.", "Self-BLEU-5 (Self-Bleu): Average BLEU-5 overlap between all generated texts.", "A lower score is desired as this indicates syntactic diversity.", "Common Sense (Sense): The average score on how well the generated text adheres to common sense, according to human evaluators.", "Contextual Relevancy (Ctx): The average score on how well the generated text fits the given context, according to human evaluators.", "For more information about the human evaluation process, see Appendix E.3.", "We wish to highlight that NRP, Keyword2Text, and the prompted GPT-2 all control the same underlying CLM model.", "Differences between these approaches are hence a result of the method, not the model.", "Unfortunately, all quantitative metrics (including human metrics) are intrinsically correlated with sentence length, making comparisons of single sentences non-trivial (See Appendix E).", "First, we note that it is only the NRP approaches, and arguably GPT-2, that supports all four experiments.", "In general we find that the incorporation of Keyword2Text with NRP increases the coverage slightly, but at the cost of a slightly higher self-Bleu.", "Hence, for brevity, we refer to both of them as NRP throughout the remainder of this section.", "first sentence are ride , scooter , shirt , wear , and the target words for the second sentence are press , card , place , button , scanner .", "In the CommonGen Free Text setting (Table 2), NRP achieves the best coverage rate by a large margin, and also the best perplexity.", "Noticeably, NRP outperform GPT-2 in all metrics, besides common sense where they are virtually equal.", "Interestingly, GPT-2's coverage is virtually the same as its free text counterpart, indicating that it quickly forgets the intended instruction.", "Keyword2Text generates the lowest self-Bleu, but has both the worst perplexity and common sense score.", "PPLM performs the best on common sense but instead fails the task completely, as demonstrated by its poor coverage.", "In the CommonGen Single Sentence setting (Table 2), NRP fall slightly behind the specialized sentence methods in terms of coverage, but has a noticeably higher coverage than GPT-2.", "POINTER has the best coverage and self-Bleu, but also the worst common sense and dramatically worst perplexity.", "KG-BART has as expected the best common sense score, while staying fairly balanced on all other metrics.", "Again, NRP and GPT-2 show similar common sense scores.", "For C 2 GEN Free Text (Table 3), NRP performs the best on coverage and perplexity.", "All methods perform nearly identical on the context score.", "Both PPLM and Keyword2Text perform better than they did on CommonGen, but Keyword2Text is still worst on perplexity and common sense, and PPLM still performs the worst on coverage.", "As expected, GPT-2 performs poorly on contextualized word inclusion, demonstrated by its low coverage.", "This indicates that GPT-2 acts more as a regular CLM, ignoring the instruction prompt, which explains its high common sense score.", "Finally, NRP performs significantly better on coverage, perplexity, self-Bleu and context with Single Sentences on C 2 GEN (Table 3).", "GPT-2 performs better on common sense, which is likely due to it focusing less on the word inclusion objective.", "Again, GPT-2 achieves a similar coverage as its Free Text counterpart.", "As demonstrated in Table 4, NRP and GPT-2 tend to generate more linguistically complicated sentences, with more flow, compared to that of KG-BART.", "While stylistic complexity is arguably something desirable, it has the drawback that it increases the chance of generating text that breaks common sense.", "Our inspection also confirms that POINTER generates long sentences with weird formulations, that often break common sense and being syntactically incorrect.", "Examples of generated texts from all methods are available in Appendix G. Keyword2Text often inserts multiple line breaks, and sometimes gets stuck repeating a word.", "The differences between NRP, PPLM and GPT-2 are more subtle, the major distinction being that PPLM comes off as slightly more fluid in its formulations.", "The inclusion of sentence length in the pretraining objective (Section 4.2), gives an additional level of generative control over the linguistic style.", "As demonstrated in Table 5, the model incorporates and plans using the prompted sentence length, and changes the wording and content accordingly.", "We note that the model tends to prioritize textual quality over strictly sticking to the exact number of words.", "To measure this discrepancy, we generate sentences for all CommonGen validation samples for different prompted sentence lengths.", "Figure 4 shows the results from this experiment, displaying the expected offset for different prompted sentence lengths.", "The mean offset is always above 0 and below 1 , meaning the CLM can be expected to generate a slightly longer sentence than intended.", "The standard deviation increases both as the prompted length approaches long, and short sentences.", "This matches the sentence distribution of the pretraining dataset, as demonstrated in Table 6 found in Appendix A. 8 Discussion and Future Work We opted to demonstrate our architecture's capabilities on the task of word inclusion, since quantitative comparisons on this task are relatively straight-forward, compared to most other open-ended text generation tasks.", "While experimental results indicate the versatility of our approach, it is important to note that the method conceptually generalizes to a much wider range of tasks.", "Our non-residual architecture enables the use of prompt instructions at arbitrary time steps, but is not limited to word inclusion.", "We hence encourage future work to pursue the incorporation of multi-task prompt learning, as being able to apply flexible prompts with precision would be a big step forward in the many areas striving to use CLMs.", "Indeed, we consider the ability to control the text generation process while considering context crucial for any tool intended for human editors.", "Admittedly, our training method for realizing our encoder-decoder architecture has largely been dictated by a lack of resources.", "We conceptually prefer the more straight-forward training approach of training the prompt model directly on long context data, and removing the positional invariant transformation.", "Future work could thus increase computational resources and investigate the possibility of different positional encoding schemes.", "Finally, we stress that nothing in our approach has focused explicitly on common sense.", "It is hence expected that methods that do, like KG-BART, perform better on this metric.", "Future work could thus investigate the use of a prompt model to control a CLM fine-tuned towards common sense, or fine-tune a prompt model using common sense data.", "Results on CommonGen and C 2 GEN dataset still leave ample room for improvements.", "This paper has introduced the concept of nonresidual attention and demonstrated how it can be used to control a generative text model.", "Additionally, our work pinpoints the lack of open-ended controllable text generation tasks that require the model to also account for a given context.", "We set out to remedy this by introducing the humanly created C 2 GEN dataset, introducing the task of contextualized word inclusion.", "Experimental results on C 2 GEN and CommonGen, clearly demonstrates that using a nonresidual prompt model increases generative control over a CLM.", "Compared to other methods, our approach stands out as the most versatile, consistently performing well across all tested situations.", "This work is partly funded by the Swedish innovation agency (Vinnova) under contract 2019-02996.", "Additionally, we thank the Grace & Thomas C. H. Chan Cambridge Scholarship, which supports Fangyu Liu.", "Finally, a special thanks goes out to Martin Korling at RISE, who helped attain computational resources for the final submission deadline.", "Controllable text generation is an important step to unleashing the potential of modern CLMs.", "Additionally, it is an interesting approach to counter many of the problematic biases that have been found.", "But an increased level of control also entails an increased risk of malicious use.", "We hence recognize the possibility that techniques proposed in this paper could be utilized in malevolent scenarios, like guided misinformation or targeted harmful content.", "This work has utilized computational GPU resources provided by ICE-RISE 5 .", "The final model training lasted roughly 2 days on a single DGX-100 machine, resulting in about 400 GPU hours.", "The total number of GPU hours for the whole research endeavour is difficult to estimate, but it can be safe to assume that it is less than 2000 GPU hours." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "result", "objective", "abstain", "abstain", "objective", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain" ]