sentences
sequence
labels
sequence
[ "Deception often takes place during everyday conversations, yet conversational dialogues remain largely unexplored by current work on automatic deception detection.", "In this paper, we address the task of detecting multimodal deceptive cues during conversational dialogues.", "We introduce a multimodal dataset containing deceptive conversations between participants playing The Tonight Show Starring Jimmy Fallon R (cid:13) Box of Lies game, in which they try to guess whether an object description provided by their opponent is deceptive or not.", "We conduct annotations of multimodal communication behaviors, including facial and linguistic behaviors, and derive several learning features based on these annotations.", "Initial classification experiments show promising results, performing well above both a random and a human baseline, and reaching up to 69% accuracy in distinguishing deceptive and truthful behaviors.", "Deception occurs often during dialogues, but until now this setting has received little attention from the research community (Tsunomori et al., 2015).", "In this paper, we explore verbal, nonverbal, and conversational dialog cues between contestants playing the Box of Lies game in The Tonight Show Starring Jimmy Fallon R (cid:13) tv show.", "In the game, participants try to guess whether an object description provided by their opponent is deceptive or not.", "The game scenario provides a rich environment where we can explore several aspects of deceptive behavior occurring during conversations.", "First, it allows us to study conversational deception in the presence of multiple modalities such as verbal and non-verbal behaviors.", "Second, it provides observable assessments of participant's honesty, which is usually an important challenge during deception research.", "Third, since participants experience the pressure to win the game in front of a big audience, it presumably presents an environment with high stakes.", "1 Recent work on multimodal 
deception detection has already shown the importance of verbal and non-verbal behaviors during the automatic identification of deceit (Abouelenien et al., 2017a).", "Following this line of work, our main contribution consists of investigating whether such modalities can also be leveraged to predict deception in a conversational dialog, as well as exploring whether the dialogue setting adds meaningful information to the other modalities to potentially increase the classification performance.", "Based on earlier work (Perez-Rosas et al., 2014; Mihalcea et al., 2013), we hypothesize that (1) including dialogue features in addition to other multimodal features (language and facial expressions) while training a classifier increases the prediction performance; (2) automatic classification of truthful and deceptive behavior is better than random guessing (50% for equal class sizes); and (3) automatic classification of truthful and deceptive responses is more accurate than human judgments (based on the performance of participants in the dataset).", "To address these hypotheses, we first generate a dataset containing verbal and non-verbal annotations of deceptive and truthful interactions between the game participants.", "Next, we derive linguistic, visual, and dialog cues based on our annotations for the verbal and non-verbal components of the dataset.", "The features are then used to conduct several learning experiments under different scenarios that attempt to distinguish between deceptive and truthful utterances, either by focusing on the statements generated by one participant at a time (the game's host or the guest), or by addressing the statements of both participants. (To our knowledge, these conversations are not scripted.)", "Our initial experiments show that language, as well as behavioral and dialog features, carry meaningful information.", "Moreover, the automatic classification of deception can be performed with an accuracy that is better than random guessing and outperforms human judgments.", "To tackle
the problem of reliably detecting deception, researchers have applied various forms of automated deception detection methods that rely on machine learning approaches, which are able to incorporate a variety of behavioral cues from text, audiovisual, or physiological data sources (Ott et al., 2011; Fornaciari and Poesio, 2013; Mihalcea and Strapparava, 2009; Abouelenien et al., 2016).", "Many studies have focused on text-based classification, detecting false online reviews (Ott et al., 2013) or deceptive transcribed statements from court hearings (Fornaciari and Poesio, 2013).", "Other studies utilized visual cues such as facial expressions or other body movements to detect deception (Meservy et al., 2005).", "These methods already show success in identifying deceptive behavior using individual modalities.", "In addition, recent approaches that combine multiple modalities are able to further boost classification performance (Abouelenien et al., 2017b; Perez-Rosas et al., 2015).", "However, multimodal approaches have not yet utilized the dialogue dimension in combination with other modalities.", "The dialogue dimension captures the interaction between two individuals and how they react to each other.", "One previous study investigated such an interaction, in which the researchers examined question types and their behavioral effect on participants (Tsunomori et al., 2015).", "The findings of the study showed that specific questions led to more salient deceptive behavior patterns in participants.", "This increase in feature salience resulted in better deception detection performance.", "The interaction between two individuals in deceptive conversations was also investigated by Hancock et al.
(2004), who examined deception at the linguistic level.", "Participants who were unaware of receiving a deceptive message produced more words and sense terms, and asked more questions, compared to when they received a truthful message.", "In a similar setting, in which two participants engaged in a question-response task, Levitan et al. (2018) examined linguistic, gender, and native language differences.", "They found significant variations in these features for truthful and deceptive responses.", "The experimenters utilized these variations in an automated classification task and reached up to 72% accuracy.", "These studies show that a focus on the linguistic level and the interaction between individuals can have a beneficial effect on detecting deceit.", "Other studies examined non-verbal behavior.", "In an experiment, Sen et al. (2018) video-recorded conversations between participants in an interrogation game and examined participants' facial expressions.", "The results showed that interrogators exhibited different facial expressions when they were lied to as opposed to when they were told the truth.", "In a different approach, Yu et al.
(2015) observed head movements and facial expressions between two individuals.", "The authors established normalized non-verbal patterns, which enabled them to capture interactional synchrony.", "This allowed them to successfully discriminate between truths and lies in the experiment.", "Overall, this previous research demonstrates that capturing verbal or non-verbal interactions can convey meaningful information about deceit, which can be leveraged for multimodal deception detection.", "To explore the role played by conversation dynamics in deceptive behaviors, we collected conversations where participants acted deceptively.", "Specifically, we opted for identifying public sources where the truthfulness of the participants' statements is known.", "In the game show Box of Lies, which is part of the late-night talk show The Tonight Show Starring Jimmy Fallon, these labels are known.", "The host (Jimmy Fallon) and his guest take turns playing the game Box of Lies.", "During the game, when it is their turn, participants pick a box (from among nine available boxes) that contains an object they have to describe to their opponent.", "The object is hidden from the opponent by a separation wall between the two contestants.", "Participants sit opposite each other and see each other's upper body and face through a hole cut in the separation wall.", "The opponent must guess whether the provided description is truthful or not.", "The participant with the best of three guesses wins the game.", "This setup allows us to observe the verbal and nonverbal behavior exhibited by the participants during the dialogue interaction.", "In order to better capture multimodal behavioral cues of deception throughout the conversation, we decided to conduct annotations at the utterance level.", "We thus built a rich multimodal dataset containing verbal and non-verbal annotations for 1049 utterances, which is used in the experiments reported in this paper.", "The data
collection and annotation process is described below.", "We searched for publicly available Box of Lies videos on the YouTube platform.", "We collected 25 videos that are currently available in the show's video feed.", "The full set consists of 2 hours and 24 minutes of video.", "The average video is six minutes long and contains around three rounds of the game (this varies depending on the score and on whether additional time was available for extra rounds).", "Each video features a different guest and Jimmy Fallon, resulting in 26 unique participants, 6 of them male and 20 female.", "To capture the non-verbal behavior of the participants, each video is initially segmented based on the conversation turn-taking and annotated with the help of the ELAN software (Wittenburg et al., 2006).", "ELAN provides a multimodal annotation platform on which audiovisual recordings are annotated in a multi-level tier structure.", "In our case, we defined the following structure to annotate both types of behavior: host verbal, host non-verbal, guest verbal, and guest non-verbal.", "To annotate facial and communication behaviors, we use MUMIN, a multimodal coding scheme that is used to study gestures and facial displays in interpersonal communication, with a focus on the role played by multimodal expressions in feedback, turn management, and sequencing (Allwood", "The videos are originally produced by NBC and retrieved from YouTube.
We consider that using YouTube videos for research purposes falls under the fair use clause, as described at https://www.youtube.com/intl/en-GB/yt/about/copyright/fair-use/", "et al., 2005).", "Given the nature of the video conversations depicted in our dataset, which show the faces and upper bodies of the participants and their interaction, we focus our annotations on facial and conversational behavior.", "These choices are motivated by previous research showing that different expressions for truthful and deceptive behaviors are present in the eye and mouth regions (DePaulo et al., 2003), as well as by studies on the role of conversational involvement in deceptive interactions (Burgoon et al., 1999).", "Facial behaviors.", "We annotate categories for visual cues and behaviors of the eyebrows, eyes, gaze, mouth openness, mouth lips, head, and the general face.", "Each of the categories takes on one of several mutually exclusive behavior values.", "Table 1 shows the frequencies of all facial expressions included in this set.", "In the table, we observe a slightly unequal representation of behavioral categories (e.g., head movements are observed more often than other facial expressions).", "This is mainly attributed to camera angle changes during the videos causing participants' faces to be only partly visible or not visible at all, thus restricting the behavioral coding.", "The annotated values reflect the most dominant behavior observed in that time segment of the video.", "Two annotators coded the videos, and after the first three videos, the inter-annotator agreement was measured by calculating the Kappa score to ensure accurate coding.", "If the agreement was below Kappa (weighted) = 0.45 in any category, this category was discussed to identify and reconcile differences in the coding strategy.", "The annotators re-coded the videos individually and compared them again.", "This process was repeated until the desired agreement was reached (above .40 for each
category).", "In most cases, we repeated the process only twice, except for the feedback receiving and feedback eliciting categories which were discussed three times.", "Table 2 shows the final Kappa score for each category.", "In the full video set, participants play 68 rounds (29 truthful and 39 deceptive).", "Occasionally, deceptive rounds also contain truthful statements, in which contestants describe parts of the object truthfully, but other parts deceptively, turning the overall description into a lie.", "For example, a contestant might say: I have before me, a green lobster on a plate.", "In truth, the object is a red lob-Label Count General face Smile 411 Neutral 342 Other 100 Laughter 83 Scowl 42 Eyebrows Neutral/Normal 531 Raising 320 Frowning 76 Other 39 Mouth-Lips Retracted 279 Neutral 267 Corners up 261 Other 102 Protruded 46 Corners down 21 Label Count Head Neutral/still 320 Waggle 292 Side-turn 242 Single Nod (Down) 165 Move Forward 153 Repeated Nods (Down) 122 Move Backward 117 Single Tilt (Sideways) 115 Single Jerk (Backwards Up) 78 Shake (repeated) 75 Repeated Tilts (Sideways) 18 Other 15 Single Slow Backwards Up 10 Repeated Jerks (Backwards Up) 7 Label Count Mouth-Openness Open mouth 763 Closed mouth 212 Other 3 Gaze Towards interlocutor 674 Towards object 148 Down 37 Towards audience 36 Sideways 35 Other 34 Eyes Neutral/Open 465 Closing-repeated 203 Closing-both 166 Other 121 Exaggerated Opening 8 Closing-one 4 Table 1: Frequency counts for participants' face, head and mouth annotations.", "ster on a plate.", "The description contains truthful and deceptive aspects, but it is considered to be a deceptive round since the main purpose of the statement is to deceive.", "This fine-grained distinction is captured during the annotation of behaviors, described below, which allows us to obtain more precise veracity labels of the behavior.", "In our example, the behavior associated with the description green is labeled as deceptive, whereas all the other 
behaviors are labeled as truthful.", "To enable this annotation, we further process our initial turn-by-turn segmentation to obtain spoken segments by either of the participants.", "We then code the veracity (i.e., truthful or deceptive) of each verbal statement of the participants.", "During the veracity coding, we assume that the behavior is always deceptive unless the verbal description indicates otherwise (i.e., an accurate description of the object), as the general goal of each participant is to deceive their opponent.", "The final distribution of these annotations is 862 utterances labeled as deceptive and 187 as truthful.", "Figure 1 shows examples of truthful and deceptive behaviors in the dataset.", "In order to include linguistic features in our analyses, we first transcribe the participants' conversations.", "To obtain transcriptions, we first extract the audio of the corresponding video clip and slice it based on the verbal annotation time-stamps. [Table 2: Inter-annotator agreement (Kappa) per category, Host / Guest: General Face 0.75 / 0.70; Eyebrows 0.51 / 0.70; Eyes 0.56 / 0.92; Gaze 0.45 / 0.74; Mouth-Openness 0.64 / 0.47; Mouth-Lips 0.79 / 0.53; Head 0.60 / 0.55; Feedback receiving 0.47 / 0.72; Feedback eliciting 0.73 / 0.46; Average 0.61 / 0.64.] [Table 3: Distribution of words for all transcriptions, Truthful / Deceptive / Total: Host 749 / 4211 / 4960; Guests 748 / 2496 / 3244; Total 1497 / 6707 / 8204.]", "For this task, we use Pympi (Lubbers and Torreira, 2013) and Ffmpy (Developers, 2016).", "We transcribe the resulting audio clips using Amazon Mechanical Turk (AMT), a crowd-sourcing platform.", "We notice that some of the clips include brief interruptions among speakers, so we ask the AMT workers to transcribe only the speech of the main speaker in the audio clip.", "After we collect all transcriptions, we proofread them to avoid mistakes such as double transcriptions, and remove additional characters or descriptions (e.g.
person 1, clapping, [pause]).", "The final distribution of all the words from the transcriptions is shown in Table 3.", "Example utterances of truthful and deceptive statements are displayed in Table 4.", "Gathering data from different modalities creates the need to combine them into a coherent feature set that can be utilized by machine learning classifiers.", "The following subsections describe how we generate features based on our annotations for the verbal and non-verbal behavior components of the dataset.", "These features are then used to train and test the classifiers in our experiments.", "We derive various linguistic features from the transcriptions of the participants' speech, which include: unigrams, psycholinguistic features, part-of-speech features, and word embedding features.", "Unigrams.", "These features are created with bag-of-words representations of all transcriptions from the guests and the host.", "The unigrams are represented using their frequencies.", "Psycholinguistic Features.", "These features are created with the help of the Linguistic Inquiry and Word Count lexicon (Version 2015) (Pennebaker et al., 2007).", "They represent 80 different classes of words, which can be attributed to different psychological dimensions.", "The features record the frequency of each class, derived from the occurrences of the words attributed to that class.", "The lexicon has been successfully used in previous work on automatic deception detection (Ott et al., 2013; Mihalcea et al., 2013).", "Part-of-Speech tags (PoS).", "These features are created by PoS-tagging the transcripts.", "They capture the grammatical and syntactic structure of the transcribed utterances (e.g., noun, verb, adjective).", "The features record the percentage distribution of these categories for each utterance.", "Word Embeddings.", "These features are obtained using Word2Vec by creating vector representations of the words in the
transcriptions.", "By training word representations based on other words occurring in the same context, these features capture similarities of words next to each other and in context.", "Together, all words are represented in a vector space in which similar words lay closer to each other as compared to dissimilar words.", "These features are generated from the non-verbal behaviors described in Section 3.2 and represented as percentages.", "Specifically, the different behavioral values for a category (e.g., Head) in a verbal utterance are counted and represented as percentages.", "For example, a verbal utterance might last for one minute and during that time head movements might take several different values, such as side-turn (20 sec.), shake (30 sec.), and single nod (10 sec.).", "These times are transformed into percentages and the category head then consist of 33.33% side-turn , 50% shake , and 16.67% single nod during the one-minute utterance.", "In this manner, each facial area designates its percentage representation of behavioral values, which add up to 100%.", "In case a behavior cannot be fully attributed to one of the possible actions through the verbal statement, left-over percentages are assigned to Participant Truthful Deceptive Host In a, no.", "none , representing the lack of occurrence of a behavioral action in its category.", "This transformation is performed for all seven different facial areas we have annotated, including General Facial Expressions, Eyebrows, Eyes, Gaze, Mouth-Openness, Mouth-Lips, and Head.", "Our non-verbal behavior feature set thus consists of all the facial expressions or head movements expressed as the percentage of times they occur during a speaker's utterance.", "Possible attributes for each of the seven categories can be found in Table 1.", "We derive dialogue-based features by exploring verbal and non-verbal aspects of the interaction between participants that are related to deceptive behavior.", "The features attempt to 
capture deception cues the speakers exhibited during the conversation prior to the current utterance.", "These features are obtained as follows: Deception Changes.", "These features include the counts of truthful and deceptive utterances up to the current utterance.", "We also aggregate the counts of deceptive and truthful utterances to represent the participation of the speaker during the conversation.", "Non-verbal Changes.", "These features capture how facial displays differ between consecutive utterances.", "We calculate these features by subtracting the numeric vectors representing the non-verbal behavior during the current utterance from those of the previous utterance.", "In order to attribute the corresponding non-verbal behaviors to verbal utterances for later classification tasks, each behavior receives a veracity label (truthful or deceptive) individually.", "The veracity label that overlaps with a behavior for more than 50% of its span is associated with that behavior.", "The overlap is determined by comparing the time stamps of the behavior and the veracity annotation, which are obtained from the ELAN files.", "Table 5 displays the distribution of these feature sets.", "In order to evaluate the automated methods and compare them to human performance, we establish a human baseline, representing how well humans correctly guess deceptive and truthful behavior.", "Since the game show Box of Lies is already set up in a way that participants have to guess if their opponent is lying or telling the truth, their performance serves as a baseline.", "Thus, we use their assessments to obtain a confusion matrix showing their correct and incorrect guesses.", "We calculate their performance in terms of accuracy, which reflects the proportion of correctly categorized descriptions out of all object descriptions; precision, which reflects the proportion of correctly identified descriptions in one classified category; recall, which reflects the proportion of correctly identified
descriptions out of all the object descriptions truly belonging to that category; and f1-score, which reflects the harmonic mean of precision and recall in that category.", "Human performance is shown in Table 6.", "Since participants give 39 deceptive and 29 truthful descriptions in total, the distribution is slightly uneven, resulting in a baseline of 0.57 for detecting a lie.", "Considering this, the participants' overall accuracy is almost equal to the accuracy of random guessing.", "This supports earlier findings that humans are only about as good as chance (Bond and DePaulo, 2006).", "Results for each class (detecting truthful or deceptive descriptions) show that participants are better at detecting truthful descriptions.", "This could be based on the truth bias, which describes the phenomenon according to which people generally tend to believe others (Levine et al., 1999).", "We perform all the classification experiments with the Python package Scikit-learn (Pedregosa et al., 2011) using the standard settings for the model parameters.", "All classifiers are evaluated using five-fold cross-validation.", "During our experiments, we focus on three scenarios: (1) How well can we distinguish between truthful and deceptive utterances in the dataset?", "In this scenario, we explore whether the different features we propose can capture differences between truthful and deceptive behavior, regardless of the speaker.", "Note that in this scenario, a significant fraction of the data comes from the same speaker (the host).", "(2) How well can we distinguish between truthful and deceptive behaviors elicited by the guests?", "In this experiment, we consider the subset of deceptive and truthful utterances produced by the guests in our dataset.", "Again, we test our different feature sets in the prediction of truthful and deceptive behavior, but this time we focus on learning deceptive patterns from several individuals, who might exhibit different verbal and non-verbal
behaviors.", "(3) How well can we distinguish between truthful and deceptive behaviors exhibited by the host?", "In this scenario, we explore whether the availability of data by the same individual can help to improve the detection of deceptive behavior.", "In other words, this experiment builds personalized deception models for the host using the different sets of features representing verbal and non-verbal behavior.", "For each scenario, we test classifiers with features derived from the different verbal and nonverbal modalities as well as features that represent the interaction between participants (described in Sections 4.1, 4.2 and 4.3).", "We test the predictive power of each feature set individually and we also build joint models that combine all feature sets.", "The classifiers performance is evaluated in terms of accuracy, precision, recall, and f1-score.", "An important challenge during the experiments is that the nature of our dataset leads to a high unbalance between the truthful and deceptive classes, as shown in Table 5.", "During our experiments, the imbalance of the data is tackled by applying down-sampling to the deceptive class (Oliphant, 2006).", "This ensures an equal distribution of each label and results in a baseline of 0.50 in all scenarios.", "The results for verbal, nonverbal, dialog features, and their combination for each scenario are shown in Table 7.", "Overall, our experiments show the benefit of combining multiple sources of information on this task, with accuracies well above the baseline and a noticeable accuracy improvement when using all feature sets.", "The different classification performances show that adding information from several modalities helps to increase the accuracy of the detection system.", "Not surprisingly, the linguistic modality shows the best performance among single modalities (Scenarios 1 and 2).", "More interestingly, the non-verbal modality is the second best indicator of deception, despite a significant 
amount of facial occlusions present in our dataset.", "Furthermore, this finding is in line with other work on multimodal deception detection also showing that gestures are a reliable indicator of deception (Perez-Rosas et al., 2015).", "In addition, we generate learning curves for each modality in scenario 1 (Figure 2).", "The curves show that when training with 50-60% of the data, the classifier starts to improve upon the (guessing) baseline.", "The ascending trend does not seem to level off, even with the entire dataset, indicating that the classifier might benefit from more data.", "Our experiment exploring the classification of deceptive behaviors from the host (scenario 3) also led to interesting insights.", "First, the linguistic modality is the weakest, since it obtained the lowest performance in both classes.", "As the difference in f-score values shows, the host does not appear to use significantly different language while telling lies or truths, at least not at the lexical and semantic level, as captured by our linguistic features.", "Second, his non-verbal behavior does [...]. (Facial occlusions are mainly attributed to changes in camera angles occurring during the videos.)", "Third, the performance of the dialog features suggests that having evidence of previous behavior (as captured by deception changes and non-verbal behavior changes) can be useful when modeling the deceptive behavior of a single individual, further suggesting that the non-verbal component of a lie seems to be more individually shaped for each person than the linguistic component.", "However, the differences in performance between scenarios 1 and 2 suggest that the current features might not be enough to capture deceptive behavior by a single individual, since the developed classifiers still find this task challenging.", "The preliminary analyses of this new multimodal dataset show promising results by successfully classifying truthful and
deceptive behaviors.", "Currently, the dataset provides feature sets drawn from three modalities (verbal and non-verbal, as well as dialogue) but can be further analyzed to extract additional features from other modalities such as speech.", "Specifically, the dialogue has the potential to add many more layers of information by systematically analyzing verbal, nonverbal, and speech patterns between the participants.", "These patterns can lead to detectable differences between actions and reactions within a dialogue (Tsunomori et al., 2015; Levitan et al., 2016).", "We consider analyzing such patterns as a future research venue to expand the analyses on our dataset.", "A challenge while using this data is the current imbalance of truthful and deceptive feature sets, which can have a detrimental effect on classification performance.", "However, there are several other possible ways to address this issue other than down-sampling as we did during our experiments.", "For instance, other computational methods could be explored, such as one-class classification tasks.", "Such models train on a dataset from the same distribution and classify new data as being similar or different to that distribution.", "This way, anomalies (i.e., behavior with a feature configura-tion different from the training set) are detectable.", "Since truthful behavior is underrepresented in our dataset, the deceptive features could serve as the training set and the goal is to detect truthful behavioral patterns.", "Expanding on other computational tasks also tackles future applicable problems of dealing with uneven datasets, as they are often present when working with real-life datasets.", "The issue of an underrepresented class is prevalent in deception detection research.", "Finally, the dataset could be expanded with more behavioral data, mainly by augmenting the number of truthful behaviors.", "Since all the contestants in our dataset are celebrities, it is likely that other videos portraying 
them are available.", "In this paper, we showed how to successfully build a multimodal dialog dataset for deception detection, and presented exploratory deception classification tasks.", "We showed how to integrate multiple modalities and build feature sets useful for automated processing.", "We were able to achieve a classification performance that is better than random guessing and exceeds human performance.", "Furthermore, adding modalities systematically improved classification performance.", "The best performance of 69% was obtained by combining multiple verbal, nonverbal, and dialogue feature sets, which represents a significant improvement over the human performance of at most 58% accuracy.", "The dataset introduced in this paper represents a first attempt to integrate the dialogue dimension with multiple other modalities in deception detection research.", "It has the potential to trigger novel research on multimodal deception data, specifically for speech and the dialogue dimension, which should be explored in the future.", "This material is based in part upon work supported by the Michigan Institute for Data Science, by the National Science Foundation (grant #1815291), and by the John Templeton Foundation (grant #61156).", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Michigan Institute for Data Science, the National Science Foundation, or the John Templeton Foundation." ]
[ "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "objective", "other", "objective", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "objective", "abstain", "other", "other" ]
[ "Users of medical question answering systems often submit long and detailed questions, making it hard to achieve high recall in answer retrieval.", "To alleviate this problem, we propose a novel Multi-Task Learning (MTL) method with data augmentation for medical question understanding.", "We first establish an equivalence between the tasks of question summarization and Recognizing Question Entailment (RQE) using their definitions in the medical domain.", "Based on this equivalence, we propose a data augmentation algorithm to use just one dataset to optimize for both tasks, with a weighted MTL loss.", "We introduce gradually soft parameter-sharing: a constraint for decoder parameters to be close, that is gradually loosened as we move to the highest layer.", "We show through ablation studies that our proposed novelties improve performance.", "Our method outperforms existing MTL methods across 4 datasets of medical question pairs, in ROUGE scores, RQE accuracy and human evaluation.", "Finally, we show that our method fares better than single-task learning under 4 low-resource settings.", "In order to retrieve relevant answers, one of the basic steps in Question Answering (QA) systems is understanding the intent of questions (Chen et al., 2012; Cai et al., 2017).", "This is particularly important for medical QA systems (Wu et al., 2020), as consumer health questions (questions asked by patients) may use a vocabulary distinct from that of doctors to describe similar health concepts (Ben Abacha and Demner-Fushman, 2019a).", "Consumer health questions may also contain peripheral information like patient history (Roberts and Demner-Fushman, 2016) that is not necessary to answer the question.", "There is a growing number of approaches to medical question understanding. [Figure 1] Source User-written Question or Consumer Health Question (CHQ): SUBJECT: Morgellon Disease.", "MESSAGE: It appears as if I have had this horrible disease for many, many years and it is getting worst.", "I am trying to find a physician or specialist in the South Carolina area who can treat me for this medical/mental disease.", "It seems as if this disease has \"NO\" complete treatment and it is more least a disability!", "Reference Summarized Question or Frequently Asked Question (FAQ): What are the treatments for Morgellon Disease, and how can I find physician(s) in South Carolina who specialize in it?", "BART Trained on Summarization Loss Only (Baseline): Where can I find physician(s) who specialize in morgellon disease?", "Our Gradually Soft Multi-Task and Data-Augmented Model: Where can I find a physician or specialist in South Carolina who can treat Morgellon Disease?", "These approaches include query relaxation (Ben Abacha and Zweigenbaum, 2015; Lei et al., 2020), question entailment (Ben Abacha and Demner-Fushman, 2016, 2019b; Agrawal et al., 2019), question summarization (Ben Abacha and Demner-Fushman, 2019a), and question similarity (Ben Abacha and Demner-Fushman, 2017; Yan and Li, 2018; McCreery et al., 2019).", "Medical question summarization is the task of summarizing consumer health questions into short, single-sentence questions that capture essential information needed to give a correct answer.", "The task of Recognizing Question Entailment (RQE) is defined by Ben Abacha and Demner-Fushman (2016) in the medical domain as a binary classification task.", "For the purpose of this task, a first question is considered to entail a second one if and only if every answer to the second question is a correct, and either full or partial, answer to the first question.", "We find in initial experiments (Mrini et al., 2021b) that RQE can teach question summarizers to distinguish salient information from peripheral details, and likewise that question summarization can benefit RQE classifiers.", "In our setting, we cast the medical question understanding task as a MultiTask Learning (MTL) problem involving the two tasks of question summarization and Recognizing Question Entailment.", "We use 
a simple sum of learning objectives in Mrini et al. (2021b).", "In this paper, we introduce a novel, gradually soft multi-task and data-augmented approach to medical question understanding.", "Previous work on combining summarization and entailment uses at least 2 datasets, 1 from each task (Pasunuru et al., 2017; Guo et al., 2018).", "We first establish an equivalence between both tasks.", "This equivalence is the inspiration behind the data augmentation schemes introduced in our previous work (Mrini et al., 2021b).", "The goal of the data augmentation is to use a single dataset for MultiTask Learning.", "We propose to use a weighted loss function to simultaneously optimize for both tasks.", "Then, we propose a gradually soft parameter-sharing MTL approach.", "We conduct ablation studies to show that our two novelties, data augmentation and gradually soft parameter-sharing, improve performance in both tasks.", "Our proposed gradually soft multi-task and data-augmented approach outperforms existing single-task and multi-task learning methods on architectures achieving state-of-the-art results in abstractive summarization.", "Compared to single-task learning, our approach achieves a 12% increase in accuracy on a medical RQE dataset, and an average increase of 3.5% in ROUGE-1 F1 scores across 3 medical question summarization datasets.", "Additionally, we perform human evaluation and find our approach generates more informative summarized questions.", "Finally, we find that our approach is more efficient at leveraging smaller amounts of data, and yields better performance under 4 low-resource settings.", "Recognizing Question Entailment (RQE).", "Ben Abacha and Demner-Fushman (2016) introduce the task of RQE.", "It is closely related, but not identical, to the task of Recognizing Textual Entailment (RTE) (Dagan et al., 2005, 2013), and to early definitions of question entailment (Groenendijk and Stokhof, 1984; Roberts, 1996).", "The task of RQE is to predict, given 
a pair of questions A and B, whether A entails B. RQE considers that question A entails question B if every answer to B is a correct answer to A, and answers A either partially or fully. [Footnote 1: Our code is available at: https://github.com/KhalilMrini/Medical-Question-Understanding ]", "It differs from traditional definitions of entailment, where we consider that the premise entails the hypothesis if and only if the hypothesis is true whenever the premise is true.", "Ben Abacha and Demner-Fushman (2016) define RQE within the context of Medical Question Answering.", "The goal is to match a Consumer Health Question (CHQ) to a Frequently Asked Question (FAQ), and ultimately match the CHQ to an expert-written answer.", "Summarization and Entailment.", "There is a growing body of work combining summarization and entailment (Lloret et al., 2008; Mehdad et al., 2013; Gupta et al., 2014).", "Falke et al. (2019) use textual entailment predictions to detect factual errors in abstractive summaries generated by state-of-the-art models.", "Pasunuru and Bansal (2018) propose an entailment reward for their reinforced abstractive summarizer, where the entailment score is obtained from a pre-trained and frozen natural language inference model.", "Pasunuru et al. (2017) propose an LSTM encoder-decoder model that incorporates entailment generation and abstractive summarization.", "The authors alternate optimization between the two tasks, and use separate Natural Language Inference (NLI) and abstractive summarization datasets.", "Only the decoder parameters are shared.", "Li et al. (2018) closely follow the MTL setting of Pasunuru et al. (2017), and propose a model with a shared encoder, an NLI classifier and an NLI-rewarded summarization decoder.", "Guo et al. (2018) introduce a pointer-generator summarization model with coverage loss (See et al., 2017).", "They build upon the work of Pasunuru et al. 
(2017), and add question generation on top of the two tasks of abstractive summarization and entailment generation.", "They also alternate between the three different objectives.", "The authors propose to share all parameters except the first layer of the encoder and the last layer of the decoder, and show that soft parameter-sharing improves over hard parameter-sharing.", "Their method outperforms the pointer-generator networks of See et al. (2017) on the CNN-Dailymail news summarization baseline.", "Here, the authors show a performance increase in entailment for some batch sizes and a decrease for others, and they consider entailment as an auxiliary task.", "Transfer Learning for Medical QA.", "BioNLP is one of many NLP applications to benefit from language models that use multi-task learning and transfer learning.", "There are pretrained language models geared towards BioNLP applications that are based on BERT (Devlin et al., 2019).", "Those include SciBERT (Beltagy et al., 2019), which has been fine-tuned using biomedical text from PubMed.", "BioBERT (Lee et al., 2020) has been fine-tuned on the PMC dataset, whereas models named ClinicalBERT (Huang et al., 2019; Alsentzer et al., 2019) additionally use the MIMIC III dataset (Johnson et al., 2016).", "Transfer learning was a popular approach at the 2019 MEDIQA shared task (Ben Abacha et al., 2019) on medical NLI, RQE and QA.", "The question answering task involved re-ranking answers, not generating them (Demner-Fushman et al., 2020).", "For the RQE task, the best-performing model (Zhu et al., 2019) uses transfer learning on NLI and ensemble methods.", "We consider the multi-task learning of medical question summarization and medical RQE.", "The input to both tasks is a pair of medical questions.", "The first question is called a Consumer Health Question (CHQ), and the second question is called a Frequently Asked Question (FAQ).", "The CHQ is written by a patient and is usually longer and more 
informal, whereas the FAQ is usually a single-sentence question written by a medical expert.", "The purpose of both tasks is to match a CHQ to an FAQ, and ultimately to an expert-written answer that matches the FAQ.", "An example pair is shown in Figure", "1. Our novel gradually soft multi-task and data-augmented learning approach to medical question understanding has four main components.", "First, we establish the equivalence between medical question pairs in question summarization and RQE.", "Second, we use our equivalence observation to propose a scheme for data augmentation.", "Third, we show our simultaneous multi-task learning model architecture and learning objective.", "Finally, we describe our gradually soft parameter-sharing scheme.", "medical RQE.", "We first consider a pair of medical questions C and F, where C is a CHQ and F is an FAQ, such that C is longer than F. Ben Abacha and Demner-Fushman (2016) define question entailment as: question C entails question F ( C ⇒ F ) if and only if every answer to F is also a correct answer to C, whether partially or completely (1).", "According to the guidelines set in the data creation of a medical question summarization dataset by Ben Abacha and Demner-Fushman (2019a), doctors were told to grade manually written summarized questions (FAQs) as perfect, acceptable or incorrect.", "The two conditions for a perfect FAQ are: first, an FAQ should enable retrieving complete and correct answers to the original CHQ, and second, the summarized question should not be so short that it violates the first condition.", "The resulting medical question summarization dataset includes perfect and acceptable FAQs.", "We assume that a perfect FAQ provides complete and correct answers to the corresponding CHQ, and that an acceptable FAQ provides correct answers to the corresponding CHQ, whether partially or completely.", "We therefore conclude that: F is a good summary of C, if and only if F enables retrieving correct answers to 
C, whether partially or completely (2).", "We have: F enables retrieving correct answers to C, if and only if answers to F are correct answers to C. Therefore, F enables retrieving correct answers to C, if and only if every answer to F is also a correct answer to C, whether partially or completely.", "Given the equivalences (1) and (2) above, it follows that: question F is a good summary of question C, if and only if question C entails question F (3).", "Medical question understanding datasets are scarce, and new high-quality datasets are complex and costly to create.", "We propose in Mrini et al. (2021b) to augment existing datasets in one of the two tasks to create a synthetic dataset of the same size for the other task.", "Our two-way data augmentation algorithm is inspired by the equivalence shown in the previous subsection, and enables us to train in a simultaneous multi-task setting.", "Our data augmentation method also addresses a weakness in previous work in multi-task learning, where each task involves a distinct dataset, often from a different domain.", "Our data augmentation will enable us to use datasets in the same domain, and we hypothesize this can benefit performance in both tasks.", "For summarization datasets, we create equivalent RQE pairs.", "For each existing summarization pair, we first choose with equal probability whether the equivalent RQE pair is labeled as entailment or not.", "If it is an entailment case, we use the equivalence in (3) and create an RQE pair identical to the summarization pair.", "If it is not an entailment case, then, by (3), we have: question F is not a summary of question C if and only if question C does not entail question F (4).", "Therefore, to create an equivalent RQE pair labeled as not entailment, the RQE CHQ is identical to the CHQ of the summarization pair, and the RQE FAQ is randomly selected from a distinct question pair from the same dataset split.", "Inversely, for the RQE dataset, we create equivalent 
summarization pairs.", "For each existing RQE pair, we consider two cases.", "If the RQE pair is labeled as entailment, we create an identical summarization pair.", "If the RQE pair is labeled as not entailment, then following (4), we create a summarization pair that is identical to a randomly selected and distinct RQE pair labeled as entailment from the same dataset split.", "Previous work on multi-task learning with summarization and entailment (Pasunuru et al., 2017; Guo et al., 2018) optimizes for the objectives of the different tasks by alternating between them.", "This alternating multi-task training follows a ratio between the different tasks, which depends on the size of the dataset of each task (e.g. a ratio of 10:1 means training for 10 batches on one task, and then for 1 batch on the other task).", "In our approach, we propose to optimize simultaneously for the objectives of both tasks.", "We do not use ratios, as we are not alternating between objectives and the resulting datasets from our data augmentation algorithm are of equal size.", "Whereas many previous multi-task settings chose generation tasks (entailment generation and question generation), we choose the BART Large architecture (Lewis et al., 2019) as it enables optimizing for a classification task (RQE) and a generation task (summarization) using the same architecture.", "In addition, BART is adequate as it achieves very strong results in benchmark datasets of recognizing textual entailment and abstractive summarization.", "The input works differently between both tasks.", "For summarization, the encoder [Figure 2: Overview of the architecture of our proposed gradually soft multi-task and data-augmented model: a shared encoder; two decoders, one for the Recognizing Question Entailment (RQE) classification task, with a classification head and a cross-entropy loss, and one for the question summarization generation task, with a negative log-likelihood loss; and a gradually soft parameter-sharing loss between the decoders.]", "takes the CHQ as input 
and the decoder takes the FAQ as input.", "For RQE, both the encoder and decoder take the entire RQE pair as input.", "We add a classification head for RQE, to which we feed the last decoder output, as it attends over all decoder and encoder positions.", "We show an overview of our architecture in Figure", "2. We propose to optimize a single loss function that combines objectives of both tasks.", "Our loss function is the weighted sum of the negative log-likelihood summarization objective, and the binary cross-entropy classification objective of RQE.", "More formally, given a CHQ embedding x, the corresponding FAQ embedding y, and the entailment label l_entail ∈ {0, 1}, we optimize the following multi-task learning loss function: L_MTL(θ) = −λ log p(y | x; θ) + (1 − λ) BCE([x; y], l_entail; θ) (1) where BCE is binary cross entropy, and λ is a hyperparameter between 0 and", "1. 3.4 Gradually Soft Parameter-Sharing In multi-task learning, there are two widely used approaches: hard parameter-sharing and soft parameter-sharing.", "Guo et al. (2018) propose soft parameter-sharing for all parameters except the first layer of the encoder and last layer of the decoder.", "Liu et al. 
(2019) introduce MT-DNN and show that hard parameter-sharing of all of the transformer encoder layers, and only having task-specific classification heads, produces results that set a new state of the art for the GLUE benchmark (Wang et al., 2018).", "We propose a hybrid approach, where we apply hard parameter-sharing for the encoder, and a novel gradually soft parameter-sharing approach for the decoder layers.", "We define gradually soft parameter-sharing as a smooth transition from hard parameter-sharing to task-specific layers.", "It is a soft parameter-sharing approach that is gradually toned down from the first layer of the decoder to the last layer, which is entirely task-specific.", "In gradually soft parameter-sharing, we constrain decoder parameters to be close by penalizing their l2 distances, and the higher the layer, the looser the constraint.", "Given a decoder with N layers, the gradually soft parameter-sharing loss term is as follows: L_GS(θ) = Σ_{n=1}^{N−1} ( γ e^{(N−n)/(N−1)} ) ‖ θ_dec,n^QS − θ_dec,n^RQE ‖_2 (2) where γ is a hyperparameter, θ_dec,n^QS represents the decoder parameters for the question summarization task at the n-th layer, and likewise θ_dec,n^RQE represents the decoder parameters for the RQE task at the n-th layer.", "We iterate from the 1st to the (N − 1)-th layer, as the N-th layer is entirely task-specific and unconstrained.", "We show a high-level representation in Figure", "2. 4 Experiments 4.1 Datasets We consider 3 medical question summarization datasets and 1 medical RQE dataset.", "We show dataset statistics in Table", "1. 
MeQSum and MEDIQA RQE can be considered low-resource, whereas the other two are far larger.", "Our datasets are in the English language.", "Due to space constraints, we briefly introduce the datasets and leave additional details in the appendix.", "The medical question summarization datasets are MeQSum (Ben Abacha and Demner-Fushman, 2019a), HealthCareMagic and iCliniq.", "We extract in Mrini et al. (2021b) and in Mrini et al. (2021c) the HealthCareMagic and iCliniq datasets from the large-scale MedDialog dataset (Chen et al., 2020).", "Whereas MeQSum is a high-quality dataset from the U.S. National Institutes of Health (NIH), HealthCareMagic and iCliniq are from online healthcare service platforms.", "HealthCareMagic's summaries are more abstractive and are written in a formal style, unlike iCliniq's patient-written summaries.", "The medical RQE dataset is the MEDIQA RQE dataset from the 2019 MEDIQA shared task (Ben Abacha et al., 2019).", "Similarly to MeQSum, the question pairs match a longer CHQ received by the U.S. 
National Library of Medicine (NLM) and a FAQ from NIH institutes.", "Whereas the train and dev sets have automatically generated CHQs, the test set has manually written CHQs.", "This results in significantly higher dev set results than test set results, as has been observed during the 2019 MEDIQA shared task.", "In addition, we use two pretraining datasets.", "We use the XSum dataset (Narayan et al., 2018), an abstractive summarization benchmark, for question summarization.", "For the RQE task, we use the Recognizing Textual Entailment (RTE) dataset (Dagan et al., 2005; Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) from the GLUE benchmark (Wang et al., 2018).", "All of our models use the BART large architecture.", "Unless otherwise noted, all experiments on the 3 question summarization datasets are made using a checkpoint pre-trained on the XSum dataset using only the summarization objective, and all experiments on the RQE dataset are made using a checkpoint pre-trained on the RTE dataset, only optimizing the cross-entropy loss.", "We report ROUGE F1 scores for the question [Table 2; columns: MeQSum (R1, R2, RL), HealthCareMagic (R1, R2, RL), iCliniq (R1, R2, RL), RQE (Accuracy). Ablation of data augmentation: Gradually Soft MTL + Existing Dataset: 51.3, 32.3, 47.5 | 45.1, 22.9, 40.3 | 59.4, 46.0, 54.5 | 81.1%. Ablation of gradually soft parameter-sharing: Hard-shared Decoder + Data Aug.: 52.0, 34.0, 47.9 | 44.3, 23.3, 41.5 | 60.1, 47.0, 56.3 | 77.5%. Soft-shared Decoder + Data Aug.: 53.2, 35.6, 48.9 | 44.8, 22.8, 40.9 | 60.7, 48.3, 57.8 | 79.4%. Task-specific Decoder + Data Aug.: 50.8, 31.7, 45.4 | 46.0, 25.1, 43.4 | 61.8, 47.5, 56.9 | 81.8%. Our model: Gradually Soft MTL + Data Aug.:
54.5, 37.9, 50.2 | 46.9, 24.8, 43.2 | 62.3, 48.7, 58.5 | 82.1%.] Table 2: Dev set results for the ablation studies on our two main novelties: our data augmentation algorithm, and our gradually soft parameter-sharing method.", "summarization datasets, and accuracy for the RQE dataset, as it is a binary classification task with two labels: entailment and not entailment.", "The learning rate for RQE experiments is 1 × 10^-5 and for the question summarization experiments, it is 3 × 10^-5.", "We use an Adam optimizer where the betas are 0 .", "9 and 0 .", "999 for summarization, and 0 .", "9 and 0 .", "98 for RQE.", "In all experiments, the Adam epsilon is 10^-8, and the dropout is 0 .", "1 .", "We set the hyperparameter γ to 1 × 10^-7.", "Our loss function as defined in Eq. 1 has a hyperparameter λ to balance between the question summarization objective and the RQE objective.", "We run experiments where λ varies from 0 .", "1 to 0 .", "9 in 0 .", "1 increments.", "The results are in Figure", "3. The best λ values are 0.5 for MeQSum, 0.7 for iCliniq, 0.8 for HealthCareMagic and 0.3 for MEDIQA RQE.", "For the question summarization datasets, we notice that the smaller the dataset, the more it benefits from data-augmented MTL with RQE.", "We perform two ablation studies to show the added value of our main novelties: our equivalence-inspired data augmentation algorithm and our gradually soft parameter-sharing algorithm.", "Data Augmentation.", "We compare our data augmentation algorithm against the following alternative: instead of training using a synthetic dataset for the auxiliary task, we choose a separate, existing dataset for abstractive summarization or recognizing textual entailment.", "This follows the approach taken by most MTL models.", "For the question summarization task, we optimize the cross-entropy objective using the RTE dataset.", "For the RQE task, we optimize the summarization objective using the XSum dataset.", "For the sake of fair comparison, we use the simultaneous MTL objective 
and the same architecture.", "Results in Table 2 show a consistent increase in performance across all datasets when using our data augmentation method, suggesting that in-domain MTL is more effective.", "Comparing Parameter-Sharing Configurations.", "We compare our gradually soft parameter-sharing method with 3 other parameter-sharing configurations.", "For all configurations, we keep using our data augmentation method, and sharing encoder parameters entirely.", "3. Task-specific decoder: we train two task-specific decoders.", "Our ablation study results in Table 2 show that our gradually soft parameter-sharing method exceeds all 3 of the other parameter-sharing configurations in RQE accuracy, and in the sum of ROUGE F1 scores.", "These results show our proposed smoother parameter-sharing transition between encoder and decoder layers brings about higher performance.", "Baselines.", "We consider three main baselines.", "The first one is BART (Lewis et al., 2019), where we only train on the summarization task.", "The second baseline trains BART on the same MTL settings as Pasunuru et al. (2017), using alternating training with entailment generation on the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) and having a shared decoder and task-specific encoders.", "The third baseline trains BART on the same MTL settings as Guo et al. 
(2018), where, on top of the entailment generation task, we add the question generation task using the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), and all parameters are soft-shared, except for the task-specific first encoder layer and last decoder layer.", "In addition, we report the baselines assessed by Ben Abacha and Demner-Fushman (2019a) for MeQSum.", "For data augmentation, they use semantically-selected relevant question pairs from the Quora Question Pairs dataset (Iyer et al., 2017).", "Their results show that coverage loss (See et al., 2017) diminishes the added value of data augmentation in pointer-generator networks.", "Our summarization-only BART baseline exceeds all of the reported MeQSum baselines in ROUGE-1 F1.", "Summarization Results.", "We report our summarization results in Table", "3. Compared to the single-task BART baseline, our gradually soft multi-task and data-augmented method performs better across all three ROUGE metrics, and achieves increases ranging from 1.4 to 5.5 points in ROUGE-1 F1.", "This difference shows that our method is consistently more effective than training only on summarization.", "The other two MTL baselines are generally performing better than the single-task BART baseline, except for the larger HealthCareMagic dataset.", "We observe that the different parameter-sharing configurations and tasks used in the MTL baselines are scoring about 1 to 4 points below our method in terms of ROUGE-1 F1 scores.", "This shows that our choice of tasks, simultaneous MTL loss, data augmentation and gradually soft parameter-sharing method work consistently better than existing MTL methods.", "Human Evaluation.", "Given that ROUGE is notoriously unreliable, we hire 2 annotators to judge 120 randomly selected summaries from the summarization test sets, generated from the single-task BART baseline and our own method in Table", "3. 
We ask the annotators to judge the Fluency, Coherence, Informativeness and Correctness of each generated summary, using Best-Worst scaling, with the possibility of ranking both summaries equally.", "The annotators are presented with 2 generated summaries, in a randomized order at each evaluation, such that they cannot identify which method generated which summary.", "Our human evaluation results are in Table", "4. Scores generally favor our method, more strongly so in the abstractive datasets HealthCareMagic and MeQSum.", "However, we note an increase in correctness for the more extractive iCliniq dataset.", "On average, our gradually soft multi-task and data-augmented method outputs summarized questions that are more fluent and more informative than the single-task BART baseline.", "Baselines.", "We compare our method to three baselines.", "The first one trains a single-task BART on RQE, with a classification head pre-trained on RTE.", "The second baseline is a feature-based SVM from Ben Abacha and Demner-Fushman (2016), who introduced the MEDIQA RQE dataset.", "The third baseline (Zhou et al., 2019) is an adversarial MTL method combining medical question answering and RQE.", "The architecture consists of a shared transformer encoder using BioBERT embeddings (Lee et al., 2020), separate classification heads for RQE and medical QA, and a task discriminator for adversarial training.", "A separate dataset is used for medical QA (Ben Abacha et al., 2019).", "RQE Results.", "We show our RQE results in Table", "5. We see a 12% increase on the test set compared to optimizing only on the RQE objective, and a 10% increase.", "Without a separate dataset or embeddings trained on large-scale biomedical data, our method is able to exceed the performance of Zhou et al. 
(2019) by 0.7%.", "This confirms the strength of our method, and shows our method can increase performance in both RQE and Question Summarization in the medical domain.", "We compare our gradually soft MTL and data-augmented method with the single-task BART baseline on four low-resource settings.", "For each dataset, [Figure 4: Test set 4-run average performance of our method compared to single-task BART in low-resource settings.]", "we limit the training data to a subset of 50, 100, 500 or 1000 datapoints, and keep the same training settings.", "To avoid selection bias, we select four random and distinct subsets per low-resource setting, and show average ROUGE-1 F1 scores in Figure", "4. The results show that our approach is able to perform much better in low-resource settings.", "We notice in particular that, on all 4 datasets, the scores of the single-task BART baseline for 100 and 1000 datapoints are lower than or roughly equal to the scores of our method for a training subset of half the size (50 and 500 datapoints respectively).", "This suggests that our method's performance increase is not only related to additional datapoints, but also to its gradually soft MTL setting.", "We propose a novel multi-task learning approach for medical question understanding.", "Our approach trains on the tasks of RQE and question summarization in a simultaneous, weighted MTL loss function, where we add a loss term to constrain the decoder layers to be close, and we loosen the constraint gradually as we move higher up the layers.", "We show using the definitions of both tasks in the medical domain that we can augment datasets, such that we only need one dataset for MTL.", "Our two ablation studies show that our gradually soft parameter-sharing and our data augmentation algorithm each increase performance individually.", "We compare our method to single-task learning and existing MTL work, and show improvements across 3 medical question summarization datasets and 1 medical RQE 
dataset.", "Finally, we test our approach under low-resource settings: we find that it is able to efficiently leverage small quantities of data, and that these performance increases do not only depend on additional data from augmentation.", "We gratefully acknowledge the award from NIH/NIA grant R56AG067393.", "Khalil Mrini is additionally supported by Adobe Research Unrestricted Gifts.", "This work is part of the VOLI project (Mrini et al., 2021a; Johnson et al., 2020).", "We thank Naba Rizvi for the annotation work, and the anonymous reviewers for their feedback." ]
[ "abstain", "objective", "objective", "objective", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "objective", "objective", "result", "objective", "result", "objective", "result", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", 
"result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "result", "result", "other", "other", "other", "other" ]
[ "We introduce a general framework for several information extraction tasks that share span representations using dynamically constructed span graphs.", "The graphs are constructed by selecting the most confident entity spans and linking these nodes with confidence-weighted relation types and coreferences.", "The dynamic span graph allows coreference and relation type confidences to propagate through the graph to iteratively refine the span representations.", "This is unlike previous multitask frameworks for information extraction in which the only interaction between tasks is in the shared first-layer LSTM.", "Our framework significantly outperforms the state-of-the-art on multiple information extraction tasks across multiple datasets reflecting different domains.", "We further observe that the span enumeration approach is good at detecting nested span entities, with significant F1 score improvement on the ACE dataset.", "1 Introduction: Most Information Extraction (IE) tasks require identifying and categorizing phrase spans, some of which might be nested.", "For example, entity recognition involves assigning an entity label to a phrase span.", "Relation Extraction (RE) involves assigning a relation type between pairs of spans.", "Coreference resolution groups spans referring to the same entity into one cluster.", "Thus, we might expect that knowledge learned from one task will benefit another.", "Most previous work in IE (e.g., Nadeau and Sekine, 2007; Chan and Roth, 2011) employs a pipeline approach, first detecting entities and then using the detected entity spans for relation extraction and coreference resolution.", "To avoid cascading (footnote 1: Code and pre-trained models are publicly available at https://github.com/luanyi/DyGIE )", "errors introduced by pipeline-style systems, recent work has focused on coupling different IE tasks as in joint modeling of entities and relations (Miwa and Bansal, 2016; Zhang et al., 2017), entities and coreferences (Hajishirzi et 
al., 2013; Durrett and Klein, 2014), joint inference (Singh et al., 2013) or multi-task (entity/relation/coreference) learning (Luan et al., 2018a).", "These models mostly rely on the first layer LSTM to share span representations between different tasks and are usually designed for specific domains.", "In this paper, we introduce a general framework Dynamic Graph IE (DYGIE) for coupling multiple information extraction tasks through shared span representations which are refined leveraging contextualized information from relations and coreferences.", "Our framework is effective in several domains, demonstrating a benefit from incorporating broader context learned from relation and coreference annotations.", "Figure 1 shows an example illustrating the potential benefits of entity, relation, and coreference contexts.", "It is impossible to predict the entity labels for This thing and it from within-sentence context alone.", "However, the antecedent car strongly suggests that these two entities have a VEH type.", "Similarly, the fact that Tom is located at Starbucks and Mike has a relation to Tom provides support for the fact that Mike is located at Starbucks .", "DYGIE uses multi-task learning to identify entities, relations, and coreferences through shared span representations using dynamically constructed span graphs.", "The nodes in the graph are dynamically selected from a beam of highly-confident mentions, and the edges are weighted according to the confidence scores of relation types or coreferences.", "Unlike the multi-task method that only shares span representations from the local context (Luan et al., 2018a), our framework leverages rich contextual span representations by propagating information through coreference and relation links.", "Unlike previous BIO-based entity recognition systems (Collobert and Weston, 2008; Lample et al., 2016; Ma and Hovy, 2016) that assign a text span to at most one entity, our framework enumerates and represents all possible 
spans to recognize arbitrarily overlapping entities.", "We evaluate DYGIE on several datasets spanning many domains (including news, scientific articles, and wet lab experimental protocols), achieving state-of-the-art performance across all tasks and domains and demonstrating the value of coupling related tasks to learn richer span representations.", "For example, DYGIE achieves relative improvements of 5.7% and 9.9% over the state of the art on the ACE05 entity and relation extraction tasks, and an 11.3% relative improvement on the ACE05 overlapping entity extraction task.", "The contributions of this paper are threefold.", "1) We introduce the dynamic span graph framework as a method to propagate global contextual information, making the code publicly available.", "2) We demonstrate that our framework significantly outperforms the state-of-the-art on joint entity and relation detection tasks across four datasets: ACE 2004, ACE 2005, SciERC and the Wet Lab Protocol Corpus.", "3) We further show that our approach excels at detecting entities with overlapping spans, achieving an improvement of up to 8 F1 points on three benchmarks annotated with overlapping spans: ACE 2004, ACE 2005 and GENIA.", "Previous studies have explored joint modeling (Miwa and Bansal, 2016; Zhang et al., 2017; Singh et al., 2013; Yang and Mitchell, 2016) and multi-task learning (Peng and Dredze, 2015; Peng et al., 2017; Luan et al., 2018a, 2017a) as methods to share representational strength across related", "information extraction tasks.", "The most similar to ours is the work in Luan et al. 
(2018a) that takes a multi-task learning approach to entity, relation, and coreference extraction.", "In this model, the different tasks share span representations that only incorporate broader context indirectly via the gradients passed back to the LSTM layer.", "In contrast, DYGIE uses dynamic graph propagation to explicitly incorporate rich contextual information into the span representations.", "Entity recognition has commonly been cast as a sequence labeling problem, and has benefited substantially from the use of neural architectures (Collobert et al., 2011; Lample et al., 2016; Ma and Hovy, 2016; Luan et al., 2017b, 2018b).", "However, most systems based on sequence labeling suffer from an inability to extract entities with overlapping spans.", "Recently Katiyar and Cardie (2018) and Wang and Lu (2018) have presented methods enabling neural models to extract overlapping entities, applying hypergraph-based representations on top of sequence labeling systems.", "Our framework offers an alternative approach, forgoing sequence labeling entirely and simply considering all possible spans as candidate entities.", "Neural graph-based models have achieved significant improvements over traditional feature-based approaches on several graph modeling tasks.", "Knowledge graph completion (Yang et al., 2015; Bordes et al., 2013) is one prominent example.", "For relation extraction tasks, graphs have been used primarily as a means to incorporate pipelined features such as syntactic or discourse relations (Peng et al., 2017; Song et al., 2018; Zhang et al., 2018).", "Christopoulou et al. (2018) models all possible paths between entities as a graph, and refines pair-wise embeddings by performing a walk on the graph structure.", "All these previous works assume that the nodes of the graph (i.e. 
the entity candidates to be considered during relation extraction) are predefined and fixed throughout the learning process.", "On the other hand, our framework does not require a fixed set of entity boundaries as an input for graph construction.", "Motivated by state-of-the-art span-based approaches to coreference resolution (Lee et al., 2017, 2018) and semantic role labeling (He et al., 2018), the model uses a beam pruning strategy to dynamically select high-quality spans, and constructs a graph using the selected spans as nodes.", "Many state-of-the-art RE models rely upon domain-specific external syntactic tools to construct dependency paths between the entities in a sentence (Li and Ji, 2014; Xu et al., 2015; Miwa and Bansal, 2016; Zhang et al., 2017).", "These systems suffer from cascading errors from these tools and are hard to generalize to different domains.", "To make the model more general, we combine the multitask learning framework with ELMo embeddings (Peters et al., 2018) without relying on external syntactic tools and risking the cascading errors that accompany them, and improve the interaction between tasks through dynamic graph propagation.", "While the performance of DyGIE benefits from ELMo, it advances over some systems (Luan et al., 2018a; Sanh et al., 2019) that also incorporate ELMo.", "The analyses presented here give insights into the benefits of joint modeling.", "Problem Definition The input is a document represented as a sequence of words D , from which we derive S = { s 1 , . . . 
, s_T}, the set of all possible within-sentence word sequence spans (up to length L) in the document.", "The output contains three structures: the entity types E for all spans S, the relations R for all span pairs S × S within the same sentence, and the coreference links C for all spans in S across sentences.", "We consider two primary tasks.", "First, Entity Recognition is the task of predicting the best entity type label e_i for each span s_i.", "Second, Relation Extraction involves predicting the best relation type r_ij for all span pairs (s_i, s_j).", "We provide additional supervision by also training our model to perform a third, auxiliary task: Coreference resolution.", "For this task we predict the best antecedent c_i for each span s_i.", "Our Model: We develop a general information extraction framework (DYGIE) to identify and classify entities, relations, and coreference in a multi-task setup.", "DYGIE first enumerates all text spans in each sentence, and computes a locally-contextualized vector space representation of each span.", "The model then employs a dynamic span graph to incorporate global information into its span representations, as follows.", "At each training step, the model identifies the text spans that are most likely to represent entities, and treats these spans as nodes in a graph structure.", "It constructs confidence-weighted arcs for each node according to its predicted coreference and relation links with the other nodes in the graph.", "Then, the span representations are refined using broader context from gated updates propagated from neighboring relation types and co-referred entities.", "These refined span representations are used in a multi-task framework to predict entity types, relation types, and coreference links.", "In this section, we give an overview of the main components and layers of the DYGIE framework, as illustrated in Figure 2.", "Details of the graph construction and refinement process will be presented in 
the next section.", "Token Representation Layer: We apply a bidirectional LSTM over the input tokens.", "The input for each token is a concatenation of the character representation, GloVe (Pennington et al., 2014) word embeddings, and ELMo embeddings (Peters et al., 2018).", "The output token representations are obtained by stacking the forward and backward LSTM hidden states.", "Span Representation Layer: For each span s_i, its initial vector representation g_i^0 is obtained by concatenating BiLSTM outputs at the left and right end points of s_i, an attention-based soft head-word, and an embedded span width feature, following Lee et al. (2017).", "Coreference Propagation Layer: The propagation process starts from the span representations g_i^0.", "At each iteration t, we first compute an update vector u_C^t for each span s_i.", "Then we use u_C^t to update the current representation g_i^t, producing the next span representation g_i^{t+1}.", "By repeating this process N times, the final span representations g_i^N share contextual information across spans that are likely to be antecedents in the coreference graph, similar to the process in Lee et al. (2018).", "Relation Propagation Layer: The outputs g_i^N from the coreference propagation layer are passed as inputs to the relation propagation layer.", "Similar to the coreference propagation process, at each iteration t, we first compute the update vector u_R^t for each span s_i, then use it to compute g_i^{t+1}.", "Information can be integrated from multiple relation paths by repeating this process M times.", "Final Prediction Layer: We use the outputs of the relation graph layer, g_i^{N+M}, to predict the entity labels E and relation labels R.", "For entities, we pass g_i^{N+M} to a feed-forward network (FFNN) to", "produce per-class scores P_E(i) for span s_i.", "For relations, we pass the concatenation 
of g_i^{N+M} and g_j^{N+M} to a FFNN to produce per-class relation scores P_R(i, j) between spans s_i and s_j.", "Entity and relation scores are normalized across the label space, similar to Luan et al. (2018a).", "For coreference, the scores between span pairs (s_i, s_j) are computed from the coreference graph layer outputs (g_i^N, g_j^N), and then normalized across all possible antecedents, similar to Lee et al. (2018).", "The dynamic span graph facilitates propagating broader contexts through soft coreference and relation links to refine span representations.", "The nodes in the graph are spans s_i with vector representations g_i^t \in R^d for the t-th iteration.", "The edges are weighted by the coreference and relation scores, which are trained according to the neural architecture explained in Section 3.1.", "In this section, we explain how coreference and relation links can update span representations.", "Coreference Propagation: Similar to Luan et al. (2018a), we define a beam B_C consisting of b_c spans that are most likely to be in a coreference chain.", "We consider P_C^t to be a matrix of real values that indicate coreference confidence scores between these spans at the t-th iteration.", "P_C^t is of size b_c × K, where K is the maximum number of antecedents considered.", "For the coreference graph, an edge in the graph is single directional, connecting the current span s_i with all its potential antecedents s_j in the coreference beam, where j < i.", "The edge between s_i and s_j is weighted by the coreference confidence score at the current iteration, P_C^t(i, j).", "The span update vector u_C^t(i) \in R^d is computed by aggregating the neighboring span representations g_j^t, weighted by their coreference scores P_C^t(i, j): $u_C^t(i) = \sum_{j \in B_C(i)} P_C^t(i,j)\, g_j^t$ (1), where B_C(i) is the set of K spans that are antecedents of s_i and $P_C^t(i,j) = \frac{\exp(V_C^t(i,j))}{\sum_{j' \in B_C(i)} \exp(V_C^t(i,j'))}$ (2); V_C^t(i, j) is a 
scalar score computed by concatenating the span representations [g_i^t, g_j^t, g_i^t ⊙ g_j^t], where ⊙ is element-wise multiplication.", "The concatenated vector is then fed as input to a FFNN, similar to Lee et al. (2018).", "Relation Propagation: For each sentence, we define a beam B_R consisting of b_r entity spans that are most likely to be involved in a relation.", "Unlike the coreference graph, the weights of relation edges capture different relation types.", "Therefore, for the t-th iteration, we use a tensor V_R^t \in R^{b_r × b_r × L_R} to capture scores for each of the L_R relation types.", "In other words, each edge in the relation graph connects two entity spans s_i and s_j in the relation beam B_R.", "V_R^t(i, j) is an L_R-length vector of relation scores, computed with a FFNN with [g_i^t, g_j^t] as the input.", "The relation update vector u_R^t(i) \in R^d is computed by aggregating neighboring span representations on the relation graph: $u_R^t(i) = \sum_{j \in B_R} f(V_R^t(i,j))\, A_R \odot g_j^t$ (3), where A_R \in R^{L_R × d} is a trainable linear projection matrix and f is a non-linear function that selects the most important relations.", "Because only a small number of entities in the relation beam are actually linked to the target span, propagation among all possible span pairs would introduce too much noise into the new representation.", "Therefore, we choose f to be the ReLU function, which removes the effect of unlikely relations by setting all negative relation scores to", "0. Unlike coreference connections, two spans linked via a relation are not expected to have similar representations, so the matrix A_R helps to transform the embedding g_j^t according to each relation type.", "Updating Span Representations with Gating: To compute the span representations for the next iteration t in {1, . . . 
, N + M}, we define a gating vector f_x^t(i) \in R^d, where x \in {C, R}, to determine whether to keep the previous span representation g_i^t or to integrate new information from the coreference or relation update vectors u_x^t(i).", "Formally, $f_x^t(i) = g(W_{f_x} [g_i^t, u_x^t(i)])$ (4) and $g_i^{t+1} = f_x^t(i) \odot g_i^t + (1 - f_x^t(i)) \odot u_x^t(i)$, where $W_{f_x} \in R^{d \times 2d}$ are trainable parameters and g is an element-wise sigmoid function.", "The loss function is defined as a weighted sum of the log-likelihood of all three tasks:", "$\sum_{(D, R^*, E^*, C^*) \in \mathcal{D}} \{ \lambda_E \log P(E^* | C, R, D) + \lambda_R \log P(R^* | C, D) + \lambda_C \log P(C^* | D) \}$ (5)", "where E^*, R^* and C^* are gold structures of the entity types, relations and coreference, respectively.", "\mathcal{D} is the collection of all training documents D.", "The task weights \lambda_E, \lambda_R, and \lambda_C are hyper-parameters that control the importance of each task.", "We use a 1-layer BiLSTM with 200-dimensional hidden layers.", "All the feed-forward functions have 2 hidden layers of 150 dimensions each.", "We use 0.4 variational dropout (Gal and Ghahramani, 2016) for the LSTMs, 0.4 dropout for the FFNNs, and 0.5 dropout for the input embeddings.", "The hidden layer dimensions and dropout rates are chosen based on the development set performance in multiple domains.", "The task weights, learning rate, maximum span length, number of propagation iterations and beam size are tuned specifically for each dataset using development data.", "DYGIE is a general IE framework that can be applied to multiple tasks.", "We evaluate the performance of DYGIE against models from two lines of work: combined entity and relation extraction, and overlapping entity extraction.", "For the entity and relation extraction task, we test the performance of DYGIE on four different datasets: ACE2004, ACE2005, SciERC and the Wet Lab Protocol Corpus.", "We include the relation graph propagation layer in our models for all datasets.", "We include the 
coreference graph propagation layer on the data sets that have coreference annotations available.", "Data: All four data sets are annotated with entity and relation labels.", "Only a small fraction of entities (< 3% of total) in these data sets have a text span that overlaps the span of another entity.", "Statistics on all four data sets are displayed in Table", "1. The ACE2004 and ACE2005 corpora provide entity and relation labels for a collection of documents from a variety of domains, such as newswire and online forums.", "We use the same entity and relation types, data splits, and preprocessing as Miwa and Bansal (2016) and Li and Ji (2014).", "Following the convention established in this line of work, an entity prediction is considered correct. Table 2 (Entity F1 / Relation F1). ACE04: Bekoulis et al. (2018) 81.6/47.5, Miwa and Bansal (2016) 81.8/48.4, DYGIE 87.4/59.7. ACE05: Miwa and Bansal (2016) 83.4/55.6, Zhang et al. (2017) 83.6/57.5, Sanh et al. (2019) 87.5/62.7, DYGIE 88.4/63.2. SciERC: Luan et al. (2018a) 64.2/39.3, DYGIE 65.2/41.6. WLPC: Kulkarni et al. 
(2018) 78.0/*54.9, DYGIE 79.5/64.1. Table 2: F1 scores on the joint entity and relation extraction task on each test set, compared against the previous best systems.", "if its type label and head region match those of a gold entity.", "We will refer to this version of the ACE2004 and ACE2005 data as ACE04 and ACE05.", "Since the domain and mention span annotations in the ACE datasets are very similar to those of OntoNotes (Pradhan et al., 2012), and OntoNotes contains significantly more documents with coreference annotations, we use OntoNotes to train the parameters for the auxiliary coreference task.", "The OntoNotes corpus contains 3493 documents, averaging roughly 450 words in length.", "The SciERC corpus (Luan et al., 2018a) provides entity, coreference and relation annotations for a collection of documents from 500 AI paper abstracts.", "The dataset defines scientific term types and relation types specially designed for AI domain knowledge graph construction.", "An entity prediction is considered correct if its label and span match those of a gold entity.", "The Wet Lab Protocol Corpus (WLPC) provides entity, relation, and event annotations for 622 wet lab protocols (Kulkarni et al., 2018).", "A wet lab protocol is a series of instructions specifying how to perform a biological experiment.", "Following the procedure in Kulkarni et al. (2018), we perform entity recognition on the union of entity tags and event trigger tags, and relation extraction on the union of entity-entity relations and entity-trigger event roles.", "Coreference annotations are not available for this dataset.", "Baselines: We compare DYGIE with the current state-of-the-art methods on the different datasets.", "Miwa and Bansal (2016) provide the current state of the art on ACE04.", "They construct a Tree LSTM using dependency parse information, and use the representations learned by the tree structure as features for relation classification.", "Bekoulis et al. 
(2018) use adversarial training as regularization for a neural model.", "Zhang et al. (2017) cast joint entity and relation extraction as a table filling problem and build a globally optimized neural model incorporating syntactic representations from a dependency parser.", "Similar to DYGIE, Sanh et al. (2019) and Luan et al. (2018a) use a multi-task learning framework for extracting entity, relation and coreference labels.", "Sanh et al. (2019) improved the state of the art on ACE05 using multi-task, hierarchical supervised training with a set of low level tasks at the bottom layers of the model and more complex tasks at the top layers of the model.", "Luan et al. (2018a) previously achieved the state of the art on SciERC and use a span-based neural model like our DYGIE.", "Kulkarni et al. (2018) provide a baseline for the WLPC data set.", "They employ an LSTM-CRF for entity recognition, following Lample et al. (2016).", "For relation extraction, they assume the presence of gold entities and train a maximum-entropy classifier using features from the labeled entities.", "Results Table 2 shows test set F1 on the joint entity and relation extraction task.", "We observe that DYGIE achieves substantial improvements on both entity recognition and relation extraction across the four data sets and three domains, all in the realistic setting where no gold entity labels are supplied at test time.", "DYGIE achieves 7.1% and 7.0% relative improvements over the state of the art on NER for ACE04 and ACE05, respectively.", "For the relation extraction task, DYGIE attains 25.8% relative improvement over SOTA on ACE04 and 13.7% relative improvement on ACE05.", "For ACE05, the best entity extraction performance is obtained by switching the order between CorefProp and RelProp ( RelProp first then CorefProp ).", "On SciERC, DYGIE advances the state of the art by 5.9% and 1.9% for relation extraction and NER, respectively.", "The improvement of DYGIE over the previous SciERC model 
underscores the ability of coreference and relation propagation to construct rich contextualized representations.", "The results from Kulkarni et al. (2018) establish a baseline for IE on the WLPC.", "In that work, relation extraction is performed using gold entity boundaries as input.", "Without using any gold entity information, DYGIE improves on the baselines by 16.8% for relation extraction and 2.2% for NER.", "On the OntoNotes data set used for the auxiliary coreference task with ACE05, our model achieves coreference test set performance of 70.4 F1, which is competitive with the state-of-the-art performance reported in Lee et al. (2017).", "There are many applications where the correct identification of overlapping entities is crucial for correct document understanding.", "For instance, in the biomedical domain, a BRCA1 mutation carrier could refer to a patient taking part in a clinical trial, while BRCA1 is the name of a gene.", "We evaluate the performance of DYGIE on overlapping entity extraction in three datasets: ACE2004, ACE2005 and GENIA.", "Since relation annotations are not available for these datasets, we include the coreference propagation layer in our models but not the relation layer.", "2 Data Statistics on our three datasets are listed in Table 3.", "All three have a substantial number ( > 20% of total) of overlapping entities, making them appropriate for this task.", "As in the joint case, we evaluate our model on ACE2004 and ACE2005 , but here we follow the same data preprocessing and evaluation scheme as Wang and Lu (2018).", "We refer to these data sets as ACE04-O and ACE05-O.", "Unlike the joint entity and relation task in Sec. 
4.1, where only the entity head span needs to be predicted, an entity prediction is considered correct in these experiments if both its entity label and its full text span match a gold prediction.", "This is a more stringent evaluation criterion than the one used in Section 4.1.", "As before, we use the OntoNotes annotations to train the parameters of the coreference layer.", "The GENIA corpus (Kim et al., 2003) provides entity tags and coreferences for 1999 abstracts from the biomedical research literature.", "We only use the IDENT label to extract coreference clusters.", "(Footnote 2) We use the pre-processed ACE dataset from previous work; relation annotations are not available.", "Baselines: The current state-of-the-art approach on all three data sets is Wang and Lu (2018), which uses a segmental hypergraph coupled with neural networks for feature learning.", "Katiyar and Cardie (2018) also propose a hypergraph approach using a recurrent neural network as a feature extractor.", "Results: Table 4 presents the results of our overlapping entity extraction experiments on the different datasets.", "DYGIE improves 11.6% on the state of the art for ACE04-O and 11.3% for ACE05-O.", "DYGIE also advances the state of the art on GENIA, albeit by a more modest 1.5%.", "Together these results suggest that DYGIE can be utilized fruitfully for information extraction across different domains with overlapping entities, such as biomedicine.", "We use the dev sets of ACE2005 and SciERC to analyze the effect of different model components.", "Tables 5 and 6 show the effects of graph propagation on entity and relation prediction accuracy,", "where CorefProp and RelProp denote ablating the propagation process by setting N = 0 or M = 0, respectively.", "Base is the base model without any propagation.", "For ACE05, we observe that coreference propagation is mainly helpful for entities; it appears to hurt relation extraction.", "On SciERC, coreference propagation gives a small benefit on both tasks.", 
"Relation propagation significantly benefits both entity and relation extraction in both domains.", "In particular, a large portion of the sentences in both ACE05 and SciERC contain multiple relation instances across different entities, which is the scenario in which we expect relation propagation to help.", "Since coreference propagation has more effect on entity extraction and relation propagation has more effect on relation extraction, we mainly focus on ablating the effect of coreference propagation on entity extraction and of relation propagation on relation extraction in the following subsections.", "A major challenge of ACE05 is to disambiguate the entity class for pronominal mentions, which requires reasoning with cross-sentence contexts.", "For example, consider a sentence from the ACE05 dataset: One of [them]_PER, from a very close friend of [ours]_ORG.", "It is impossible to identify whether them and ours refer to a person (PER) or an organization (ORG) unless we have read the previous sentences.", "We", "hypothesize that this is a context where coreference propagation can help.", "Table 7 shows the effect of the coreference layer on entity categorization of pronouns (footnote 3).", "DYGIE has a 6.6% improvement on pronoun performance, confirming our hypothesis.", "Looking further, Table 8 shows the impact on all entity categories, giving the difference between the confusion matrix entries with and without CorefProp.", "The frequent confusions associated with pronouns (GPE/PER and PER/ORG, where GPE is a geopolitical entity) greatly improve, but the benefit of CorefProp extends to most categories.", "Of course, there are a few instances where CorefProp causes errors in entity extraction.", "For example, in the sentence [They]_{ORG/PER} might have been using Northshore..., DYGIE predicted They to be of ORG type because the most confident antecedent is those companies in the previous sentence: The money was invested in those companies.", "However, They is actually 
referring to these fund managers earlier in the document, which belongs to the PER category.", "In the SciERC dataset, the pronouns are uniformly assigned a Generic label, which explains why CorefProp does not have much effect on entity extraction performance.", "Figure 3a shows the effect of the number of iterations of coreference propagation on the entity extraction task.", "The figure shows that the coreference layer obtains its best performance at the second iteration (N = 2).", "Figure 4 shows relation scores as a function of the number of entities in a sentence, for DYGIE and for DYGIE without relation propagation, on ACE05.", "The figure indicates that relation propagation achieves significant improvement in sentences with more entities, where one might expect that using broader context would help. Footnote 3: Pronouns included: anyone, everyone, it, itself, one, our, ours, their, theirs, them, themselves, they, us, we, who. Table 8: Difference in the confusion matrix counts for ACE05 entity extraction associated with adding CorefProp (rows and columns ordered LOC, WEA, GPE, PER, FAC, ORG, VEH): LOC: 5, 0, -2, -1, 2, -1, 0; WEA: 0, 3, 0, 0, 1, -3, -1; GPE: -3, 0, 31, -26, 3, -7, 0; PER: 0, -2, -3, 18, -1, -26, 4; FAC: 4, -1, 2, -3, 2, -5, 1; ORG: 0, 0, 0, -8, -1, 6, 0; VEH: 0, -2, -1, 2, 5, -1, 1.", "Figure 3b shows the effect of the number of iterations of relation propagation on the relation extraction task.", "Our model achieves the best performance at the second iteration (M = 2).", "We have introduced DYGIE as a general information extraction framework, and have demonstrated that our system achieves state-of-the-art results on entity recognition and relation extraction tasks across a diverse range of domains.", "The key contribution of our model is the dynamic span graph approach, which enhances interaction across tasks and allows the model to learn useful information from broader context.", "Unlike many IE frameworks, our model does not require any preprocessing using syntactic tools, and shows significant improvement across different IE tasks including entity, relation 
extraction and overlapping entity extraction.", "The addition of coreference and relation propagation across sentences adds only a small computation cost to inference; the memory cost is controlled by beam search.", "These added costs are small relative to those of the baseline span-based model.", "We welcome the community to test our model on different information extraction tasks.", "Future directions include extending the framework to encompass more structural IE tasks such as event extraction.", "This research was supported by the Office of Naval Research under the MURI grant N00014-18-1-2670, NSF (IIS 1616112, III 1703166), an Allen Distinguished Investigator Award, Samsung GRO, and gifts from the Allen Institute for AI, Google, Amazon, and Bloomberg.", "We also thank the anonymous reviewers and the UW-NLP group for their helpful comments." ]
[ "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "It has been proven that automatic conversational agents can be built using the End-to-End Neural Response Generation (NRG) framework, and such a data-driven methodology requires a large number of dialog pairs for model training and reasonable evaluation metrics for testing.", "This paper proposes a Large Scale Domain-Specific Conversational Corpus (LSDSCC) composed of high-quality query-response pairs extracted from a domain-specific online forum, with thorough preprocessing and cleansing procedures.", "Also, a testing set, including multiple diverse responses annotated for each query, is constructed, and on this basis, metrics for measuring the diversity of generated results are further presented.", "We evaluate the performance of neural dialog models with widely applied diversity boosting strategies on the proposed dataset.", "The experimental results show that our proposed corpus can be taken as a new benchmark dataset for the NRG task, and the presented metrics are promising for guiding the optimization of NRG models by reasonably quantifying the diversity of the generated responses.", "Conversational agents (a.k.a. 
Chat-bots) are effective media for establishing communication with human beings and have received much attention from academic and industrial experts in recent years (Serban et al., 2017).", "One essential factor promoting the research work on conversational agents is the explosive growth of human interaction data accumulated in social network services, such as Twitter 1 and Reddit 2 .", "Thus, it is possible to build Chat-bots based on data-driven approaches (Serban and Pineau, 2015).", "Nevertheless, there still remains a great challenge in building such conversational agents: at present, the automatic evaluation metrics of NRG models can hardly measure the semantic relevance and diversity of generated results reasonably, and the latter aspect in particular has received little attention.", "The widely accepted evaluation methods employed by existing NRG models can be categorized as:", "a) metrics inherited from Machine Translation, e.g., BLEU, Perplexity, etc. (Yao et al., 2015; Lowe et al., 2017; Wu et al., 2018);", "b) discrete scores measuring the quality of generated results by human labeling (Shang et al., 2015; Serban et al., 2016; Xu et al., 2017); and", "c) case studies comparing the generated results of different NRG models (Shang et al., 2015; Wang et al., 2017).", "The disappointing situation is that these evaluation methods have not revealed tangible differences among NRG models, the reasons for which are reflected by the example given in Table 1 (Query: Where did you get that from?).", "For each query in Table 1, one response from the testing set is taken as the ground truth, together with responses with more morphological and semantic variations, marked with a symbol.", "These samples indicate that the numerical metrics inherited from NMT, which discard the diversity among responses, cannot reflect marginal differences among generative models, which is supported by the research work of Liu et al. 
(2016).", "Thus, an NRG model with a good capability to produce diverse and meaningful responses may be judged as a poor one by BLEU/Perplexity based evaluations.", "Meanwhile, the metrics based on human labeling are still promising, yet the high cost and inconsistency among labelers limit the scale of human annotation.", "Therefore, it becomes a necessity to develop reasonable automatic evaluation metrics that measure both a candidate response's diversity and its relevance to the given query, to effectively guide the training of NRG models towards producing meaningful and diverse responses (Li et al., 2016a; Shao et al., 2017; Freitag and Al-Onaizan, 2017).", "In order to evaluate the performance of NRG models automatically and reasonably, a well-annotated testing set should be built first.", "However, building such a high-quality testing set is a non-trivial task indeed.", "On one hand, most existing source datasets cover various domains, making it difficult to evaluate the generated results when the domain of the generated response differs from that of the reference.", "On the other hand, a large amount of noise, typos, and slang is scattered across existing large-scale datasets, such as the Twitter corpus (Ritter et al., 2011) and the Ubuntu dialog corpus (Kadlec et al., 2015).", "For instance, there are many file directories with computer names in the Ubuntu dialog corpus.", "Therefore, qualified domain-specific datasets are urgently needed to reasonably evaluate NRG models with different architectures.", "To address the above issues, we build a high-quality and domain-specific dialog corpus composed of a carefully prepared training set; meanwhile, a testing set is constructed by collecting multiple reference responses for each query and conducting group-aware human annotation on the collected responses.", "On this basis, we propose three discriminative metrics: MaxBLEU , Mean Diversity Score ( MDS ), and 
Probabilistic Diversity Score ( PDS ), to primarily evaluate the diversity of generated responses, with relevance also considered.", "To further assess the performance and effectiveness of the testing set together with the proposed metrics, the widely applied Sequence-to-Sequence (Seq2Seq) (Bahdanau et al., 2014; Sutskever et al., 2014) based models with the available diversity promotion methods are implemented, and experiments are conducted on the proposed Large Scale Domain-Specific Conversational Corpus (LSDSCC) dataset.", "The experimental results stay consistent with previous experience acquired from human-labeled sets, and the performance of these models suggests that the LSDSCC corpus and the discriminative metrics will provide insights for future research in the field of NRG.", "Seq2Seq based conversation modeling approaches have been proven able to generate responses directly (Vinyals and Le, 2015; Shang et al., 2015).", "However, these models tend to produce generic responses to any given query, namely the deficient diversity problem (Shao et al., 2017).", "Recent studies attempt to constrain these universal replies and promote more diverse responses with various strategies during training or inference (Li et al., 2016a,b; Mou et al., 2016; Xing et al., 2017; Shao et al., 2017).", "Besides, there still exists another meaningful option, that is, to employ reasonable diversity-oriented evaluation metrics to guide the optimization of models.", "The quality of testing sets is a primary factor for such evaluation of NRG models.", "Existing large-scale corpora include the Movie Dialogue, Ubuntu, Twitter, and Reddit corpora (Banchs, 2012; Uthus and Aha, 2013; Ritter et al., 2010; Schrading et al., 2015).", "The Ubuntu corpus is built by scraping large-scale tech-support dialogues from the Ubuntu IRC forum for building response ranking models (Kadlec et al., 2015).", "Similarly, Sordoni et al. 
(2015) provide external context information for message-response pairs from the Twitter FireHose.", "Besides, Dodge et al. (2016) and Schrading et al. (2015) collect real conversations from movie categories of the Reddit community, which are integrated into a multi-task corpus on movies for the ranking task and discourse analysis.", "In the above corpora, there are only one or two reference responses for most queries, which is quite unlike the practical conversation scenario (Li et al., 2017b).", "By contrast, this paper constructs a high-quality testing set including multiple references for each query.", "In this regard, our testing set is closer to the real-world setting.", "Besides the testing set, evaluation metrics are also important for the performance measurement of NRG models.", "The most frequently applied evaluation metrics for NRG models are inherited from NMT to measure the fluency and relevance of generated responses, such as Perplexity, BLEU (Papineni et al., 2002) and deltaBLEU (Galley et al., 2015).", "Although these metrics capture the relevance between the given query and the generated responses, they overlook the replies' diversity, which is of great importance in the conversational setting.", "Thus, efforts have been devoted to simulating human subjective judgment, which is similar to the response ranking task in retrieval-based chat agents (Lowe et al., 2017; Tao et al., 2018), but unavoidable uncertainty and errors are brought into the systems (Hu et al., 2014).", "In addition, automatic evaluation metrics (e.g. BLEU, deltaBLEU, etc.) 
are limited by the fact that each query only has references with the exact same meaning and many overlapping phrases, which is unreasonable in the conversational scenario.", "Previous studies indicate that more focused topics and a less divergent domain are helpful to guide NRG models away from producing universal responses (Mou et al., 2016; Xing et al., 2017), so we compose a domain-specific corpus by constraining the domain of crawled dialogues from Reddit to its movie discussion board 3 .", "The quality of the data in the Reddit movie category has been discussed by Stoddard (2015) and Jamnik and Lane (2017), who point out that popularity is a good indication of relative quality and that the movie category is one of the most popular boards on Reddit.", "Thus, the data in the Reddit movie category is of inherently high quality.", "In this section, the pipeline for building the LSDSCC dataset is discussed in detail, and necessary statistical indicators are collected to demonstrate its distribution.", "Moreover, human evaluation is conducted to measure the quality of the obtained training set.", "Data Cleansing.", "We crawl threads from the movie discussion board of Reddit (footnote 3: https://www.reddit.com/r/movies/, selected from https://www.reddit.com/r/datasets), which include human-to-human conversations, as the raw dataset, and conduct the following cleansing operations:", "a) For each thread, we strip away the markdown and html syntax tokens, e.g., [word](url) is transformed to word, &gt; is reformed to > , etc.", "Meanwhile, all forms of urls, emails and digits within the paragraphs are normalized to url, email and digits tokens respectively;", "b) As emoticons in data originating from social media services always provide essential emotional information about users, we propose to convert the same groups of emoticons into corresponding words (e.g., :-) will be converted to happy) to preserve such emotion knowledge;", "c) Finally, replicated words or characters (e.g., 
cooool and ahahaha, etc.) are substituted with their normal forms using regular expressions.", "Vocabulary Truncation.", "After the above preprocessing operations, there still exist redundant unformatted slang and noisy strings (e.g., Iloveyou), which have low frequency in the crawled raw data.", "Consequently, the vocabulary size of the dataset exceeds 160 K, as shown in Table 2. Keeping such a large vocabulary for Seq2Seq based models will consume excessive memory and make those models difficult to converge, while pruning low-frequency unformatted slang and noisy strings into UNK symbols would directly harm the performance of the model, since the knowledge hidden in these strings would be ignored in the training process.", "To address this issue, we break these slang and noisy strings into several frequent words in our corpus and eliminate non-ASCII tokens.", "In this way, sufficient information of the dataset is maintained for model training.", "Finally, the vocabulary size of the dataset is reduced to around 50 K .", "Dialog Pruning.", "Statistical results on the sentence length of query-response pairs in the cleaned corpus are illustrated in Fig. 1. 
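The cleansing operations a)–c) above can be sketched with regular expressions. The exact rules and emoticon table of the original pipeline are not published, so the patterns below are illustrative approximations:

```python
import re

def cleanse(text):
    """Normalize a raw Reddit comment roughly as in steps a)-c)."""
    # a) Strip markdown links: [word](url) -> word
    text = re.sub(r"\[([^\]]+)\]\([^)]*\)", r"\1", text)
    # Unescape a common html entity
    text = text.replace("&gt;", ">")
    # Normalize urls, emails and digits to placeholder tokens
    text = re.sub(r"https?://\S+", "url", text)
    text = re.sub(r"\S+@\S+\.\S+", "email", text)
    text = re.sub(r"\d+", "digits", text)
    # b) Map an example emoticon group to a word (emotion preserved)
    text = re.sub(r":-?\)", "happy", text)
    # c) Collapse characters repeated 3+ times: cooool -> cool
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)
    return text
```

For example, `cleanse("cooool :-) see [this](http://x.com) 123")` yields `"cool happy see this digits"`.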
Concerning the fact that recurrent neural networks cannot efficiently capture the semantics of over-long sentences, and that previous studies indicate such responses would make the decoder hard to converge (Greff et al., 2017), it is necessary to prune the pairs containing very long sentences.", "[Figure 1: Sentence length coverage of queries and responses within the dataset.]", "After the sentences are tokenized using the NLTK toolkit 4 , the cases with queries longer than 100 words or responses exceeding 60 words are pruned directly, and 76.25% of the dataset is finally retained.", "After pruning the corpus, there remain 738,095 single-turn and 346,543 multi-turn conversations.", "Since this paper focuses on single-turn dialogs, the evaluative testing set and the detailed experiments in the following sections are designed for the single-turn corpus.", "As the testing set selects pairs from the preprocessed data, the corresponding tuples are deleted from the training data to avoid overlap.", "As one of the most important qualities of a conversational corpus, the query-response relevance demonstrates the overall quality of the dataset.", "Human evaluations of the query-response relevance are conducted to validate the quality of the dataset used in this paper.", "Nine experienced annotators are invited to evaluate the query-response relevance of 500 single-turn dialogs uniformly sampled from the whole dataset obtained in Subsection 3.1.", "In the evaluation, we ask each annotator to label whether the response is appropriate to the corresponding query in the given query-response pair.", "A pair is tagged as Unsure if the annotator cannot confirm the degree of relevance without the related context and background movie knowledge.", "The labeled result is shown in Table 3.", "It is observed that 85% of the samples in the query-response relevance task are confirmed to keep high relevance between the query and the corresponding responses.", "Moreover, there 
exist only about 6.6% irrelevant noisy pairs.", "[Footnote 4: http://www.nltk.org/]", "Existing evaluation metrics for dialog agents measure the quality of the generated sentences only by referring to the existing responses, which follows the same principle as NMT metrics.", "However, one essential difference between NRG and NMT lies in the fact that a large group of responses can be considered relevant to a given query in conversations, while the number of references for a translation result is quite limited in NMT.", "So the diversity of candidates, which is not covered by NMT-oriented evaluation metrics, should be quantified and measured for NRG models.", "Currently, few studies focus on evaluation based on groups of references, which is more meaningful and reasonable for NRG models.", "Therefore, we propose three metrics, MaxBLEU , Mean Diversity Score, and Probabilistic Diversity Score, to quantify both the relevance and diversity of the generated responses.", "Since these metrics are based on multiple references, we first describe the procedure of building the testing set, with multiple references for each query.", "Then, the metrics for NRG models are detailed based on the multi-reference testing set.", "Fig. 2 illustrates the response quantity distribution of queries in the preprocessed data.", "Were the testing set randomly sampled from the preprocessed data, its response quantity distribution would be the same as that in Fig. 2. 
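The length-based dialog pruning described in Subsection 3.1 (queries capped at 100 tokens, responses at 60) can be sketched as follows. The paper tokenizes with NLTK; a plain whitespace split keeps this sketch dependency-free:

```python
def prune_pairs(pairs, max_query_len=100, max_response_len=60):
    """Drop query-response pairs whose sides exceed the length caps."""
    return [
        (q, r)
        for q, r in pairs
        if len(q.split()) <= max_query_len
        and len(r.split()) <= max_response_len
    ]
```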
In this case, the multi-reference testing set for NRG evaluation is difficult to construct by directly extracting samples from the dialog corpus, since there are too few queries that have more than three responses.", "Roughly choosing samples from such data may bring topic bias into the testing set, and manually filtering suitable candidate pairs from them is also time-consuming and expensive.", "Nevertheless, there exist large numbers of queries that are highly semantically similar or correlated with each other.", "This indicates that multiple references can be obtained by selecting responses of queries that are semantically identical to the original query.", "Moreover, human annotation is involved to proofread the quality of the filtered pairs and complete the final labeling.", "[Figure 2: Distribution of reply quantities (1-10) in the training set; the percentages decrease from 71.9% and 16.2% down to 0.2%.]", "When constructing the testing set, the very first step is getting semantically similar (or even identical) queries to the given ones.", "For 
this purpose, this paper adopts TF-IDF similarity and a semantic embedding based distance to measure the similarity between queries.", "The procedure of gaining similar queries is divided into two stages: in the first stage, we employ Apache Lucene 5 to exploit the word-level TF-IDF patterns within queries, and then extract for each query the top 100 similar queries with the highest scores given by Lucene.", "Yet, these candidates only capture n-gram level similarity, with possibly divergent semantics.", "Thus, in the second stage, we utilize the paragraph vector algorithm (a.k.a. Doc2vec 6 ) (Le and Mikolov, 2014) to re-rank the selected similar queries in the semantic space, and only queries with a similarity score higher than a certain threshold (i.e., 0.9) are reserved.", "Table 4 lists several identical queries filtered by the Lucene and Doc2vec methods for a given query.", "It should be noted that the Lucene index and Doc2vec need to be initialized by feeding in all the sentences of the dialogue corpus.", "To preserve as much information as possible and balance the distribution of the composed testing set, we divide the dataset into several subsets based on the response number of queries, and then sample testing data from each subset uniformly.", "[Footnote 5: https://lucene.apache.org/; footnote 6: https://radimrehurek.com/gensim/models/doc2vec.html]", "Concretely, according to the response number, queries of the dataset are divided into three subsets:", "a) queries with fewer than 3 responses,", "b) queries with 3 to 5 responses, and", "c) queries with more than 5 responses.", "We randomly sample 100 queries from each subset, and thus 300 queries are obtained as the testing set.", "Aiming at building a multiple-reference testing set, each query in the testing set is assigned 15 responses, including the original responses and those of the most similar queries obtained by the procedure above.", "Afterwards, 
three skilled and experienced labelers familiar with movies are employed and carefully trained to cross-annotate the filtered testing set.", "In addition, labelers can also obtain some background for the corresponding query, since there are additional details for most queries on Reddit.", "In this way, the quality of the selected samples can be guaranteed.", "Besides, the annotators are asked not only to label the relevance of the query and the reference responses, but also to reorganize the independent references into groups by semantic similarity.", "The grouping strategy is introduced for the purpose of evaluating the diversity of responses generated by different models.", "In the relevance-oriented annotation procedure, the labelers are first asked to judge whether a candidate response is appropriate and natural for the input query.", "If a candidate response is grammatically correct and semantically relevant to the corresponding query from the annotators' perspective, it is labeled as 1 .", "Otherwise, the annotators give the candidate a 0 label.", "Then, for each query, the annotators split the responses labeled with 1 into different groups based on the word overlap between them, with stop-word overlap ignored.", "Finally, groups with similar semantics are merged into a larger group by the annotators, so as to obtain the final grouped responses.", "At last, we obtain a high-quality testing set, in which each query is assigned a different number of reference responses.", "Fig. 3 shows the distribution of the response numbers in the testing set.", "Compared to the original response number distribution in Fig. 
2, the reply distribution of the testing set is much more appropriate for the conversational scenario.", "Furthermore, responses to the corresponding query are categorized into several groups.", "In this case, NRG models can be evaluated reasonably using such a testing set.", "One sampled case from the testing set is shown in the left part of Fig. 4; there are eight responses in the labeled data, divided into four groups.", "The different metrics in this figure will be introduced in the following sections.", "It should be noted that both the single-turn dialogs and the annotated testing set are released 7 .", "[Footnote 7: https://drive.google.com/file/d/1nbpbnhwNP14xAc4SAc1-NN5lvEr01dQb/view?usp=sharing]", "Since the NRG architecture is analogous to NMT models, introducing BLEU scores to evaluate the semantic relevance of the generated results is acceptable.", "However, it is not reasonable to average the BLEU scores of the generated response against each reference, because the semantics of the references vary significantly.", "Aiming at revealing the variation and diversity among responses, which are not yet covered by NMT models, we propose a MaxBLEU metric customized for response generation, based on the Multi-BLEU metric (Madnani et al., 2008).", "Noticing that the metrics inherited from SMT, like BLEU, are not able to evaluate the diversity of responses, we propose specialized metrics for diversity evaluation, which are described in the next subsection.", "Given an input query, the NRG model generates a set of hypotheses { h i } 8 .", "[Footnote 8: Following the terms in machine translation, this part takes hypothesis to represent response.]", "Meanwhile, according to the human-annotation strategy described in Subsection 4.1, the set of references can be reorganized, based on their semantic similarity, into groups with the format { r ij } , where r ij denotes the j -th reference in the i -th group.", "On this basis, the MaxBLEU metric is defined as: 
MaxBLEU ( h_i ) = max_k Multi-BLEU ( h_i , r_k )   (1), where r_k denotes all the references in the k -th group.", "That is, we begin by calculating the Multi-BLEU score between each hypothesis and each group of references, and pick the highest score as the score for the hypothesis, so that we align each generated hypothesis h to the group-aware references r .", "For simplicity, one response can only be aligned to one reference group, and multi-group references are not considered in this work.", "Given a query, the diversity degree of the candidate responses is an essential criterion for evaluating the performance of NRG models.", "Currently, most studies tend to demonstrate the diversity of different models by sampling and comparing the generated results, or by labeling the diversity of the generated samples, which makes it difficult to benchmark and automatically evaluate different models.", "Although Li et al. (2017a) propose to calculate the number of distinct unigrams and bigrams of generated responses, such scores do not align well with human inspection (Serban et al., 2017).", "Therefore, we propose two evaluative metrics based on the MaxBLEU metric for diversity measurement:", "a) Mean Diversity Score ( MDS ) and", "b) Probabilistic Diversity Score ( PDS ).", "Basically, the two metrics aim at measuring the overall diversity of the whole set of generated results (hypotheses) by taking them as an entirety, and the detailed calculation steps of the proposed metrics are illustrated in Algorithm 1. 
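Algorithm 1 itself is not reproduced in the text, so the sketch below is one plausible reading of the three metrics: a group counts as covered when at least one hypothesis is aligned to it by Eq. (1), PDS weights groups uniformly, and MDS weights them by member count. A simple unigram-overlap score stands in for Multi-BLEU to keep the sketch self-contained:

```python
def overlap_score(hyp, group):
    """Toy stand-in for Multi-BLEU: best unigram precision of hyp
    against any reference in the group."""
    hyp_words = hyp.split()
    if not hyp_words:
        return 0.0
    return max(
        sum(w in set(ref.split()) for w in hyp_words) / len(hyp_words)
        for ref in group
    )

def max_bleu(hyp, groups, score=overlap_score):
    """Eq. (1): the hypothesis score against its best-matching group."""
    return max(score(hyp, g) for g in groups)

def diversity_scores(hypotheses, groups, score=overlap_score):
    """MDS and PDS over a hypothesis set, per the reading above."""
    covered = set()
    for h in hypotheses:
        # Align h to the group where it scores highest (first on ties).
        best = max(range(len(groups)), key=lambda k: score(h, groups[k]))
        covered.add(best)
    pds = len(covered) / len(groups)  # uniform group weights
    total = sum(len(g) for g in groups)
    mds = sum(len(groups[k]) for k in covered) / total  # count weights
    return mds, pds
```

With `groups = [["a b c"], ["x y", "x z"]]`, the single hypothesis `"a b"` covers only the first group, giving PDS = 0.5 and MDS = 1/3.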
According to the algorithm, the PDS metric assumes that the weight of each reference group is distributed uniformly, regardless of the number of references in each group.", "Similarly, the MDS metric takes the count of the members in each group as the weight of the corresponding group, and thus computes a weighted coverage over the reference groups.", "In this section, we present the detailed experiments on the single-turn dialog dataset and an analysis of the generated results, in accordance with the proposed metrics.", "Experiments are conducted using the popular Seq2Seq based models with the currently available", "diversity promoting strategies as follows: 1) Basic Seq2Seq.", "We employ the basic Seq2Seq to build the encoder-decoder architecture running on the proposed dataset, taking a bidirectional LSTM cell as the encoder to address the input sentence ordering problem and a classic LSTM cell as the decoder (Vinyals and Le, 2015).", "2) Attention-Seq2Seq.", "As proposed by Vinyals and Le (2015); Luong et al. (2015), a concatenated version of the attention mechanism is applied on top of the basic Seq2Seq model.", "3) Greedy-Seq2Seq.", "Based on the basic Seq2Seq model, the diversity promotion strategy proposed by Li et al. (2016b) is applied in the generating procedure, and the training procedure stays the same.", "The hyperparameter, a.k.a. the diversity rate , is set by empirical experiments (i.e., to 0.1 and 0.8) to reveal its effects.", "4) Greedy-Attn-Seq2Seq.", "Following the work of Li et al. 
(2016b), the greedy diversity promotion strategy is applied to the Seq2Seq model with the attention mechanism, similar to model 2, and we set the hyperparameter to 0.1 and 0.8.", "5) MMI-Seq2Seq.", "In the generation procedure, the Maximum Mutual Information (MMI) model is applied in the decoder to prune generic answers from the basic Seq2Seq model (Li et al., 2016a).", "In our research, we implement these models on the TensorFlow platform 9 , and the Adam optimizer (Kingma and Ba, 2015) is employed for gradient optimization during training.", "Besides, we choose to prune the words whose frequencies are below 2, so the source and target vocabularies are set to 42,257 and 46,865 respectively.", "In addition, we set the batch size to 50, the hidden size of the encoder to 256, the hidden size of the decoder to 512, and the learning rate to 2e-4.", "The gradients are clipped within [-3.0, 3.0] to avoid the gradient explosion problem.", "Every model runs on a single GPU separately for at least one week before convergence.", "Afterwards, for all these methods, we generate a set of hypothesis sentences with the beam size set to k = 50 , and the evaluation scores are obtained using the proposed metrics.", "The basic models converge to about 4.2, and the Seq2Seq models augmented with attention converge to 3.1.", "Also, we set the dropout rate to 0.5, which enables us to tune the models through many more epochs and avoid over-fitting.", "The semantic relevance of the generated responses is represented by the MaxBLEU scores, which are listed in the corresponding column of Table 5.", "From this benchmarking table, it can be observed that the attention mechanism helps decoders improve the relevance of the generated responses, since Attention-Seq2Seq performs better than the basic Seq2Seq on the dataset in terms of all three metrics.", "However, the relative gain of the attention layer is limited, indicating that modeling the relation of query and response with an attention module is 
not able to directly solve the learning problem of conversations.", "According to the results of Greedy-Seq2Seq with the diversity rate set to 0.1 and to 0.8, this hyperparameter actually plays an important role in the generation steps of the decoder.", "Since the diversity rate is introduced to constrain the selection probability of the next-step word through a re-ranking process, and a larger value of this parameter has a greater impact on the generation steps and produces more diverse sentences, we evaluate this greedy strategy with two empirical values.", "It can be seen that the model with the smaller value performs better than the one with the larger value, which can be attributed to the fact that responses with more diversity are less similar to the references.", "A similar observation can be made from the results of the Greedy-Attn-Seq2Seq models with the diversity rate set to 0.1 and to 0.8.", "Besides, the reason for setting the diversity rate to 0.1 and 0.8 here is that these values are representative of poor and good diversity respectively, while the exact value will vary under different model configurations and structures.", "In addition, the MMI model proves promising for enhancing the generation models, improving both the relevance and diversity of the generated responses.", "Even though the MMI-Seq2Seq model does not achieve the highest MaxBLEU , it outperforms the other models on diversity, which will be discussed in the following subsection.", "Table 5 also lists the MDS and PDS scores of each benchmark.", "It is observed that the greedy strategies in the generating procedure with the larger parameter noticeably boost the diversity of the generated responses.", "This phenomenon is attributed to the inter-sibling ranking policy in the decoding procedure, which tends to choose hypotheses from diverse parents.", "In addition, the MMI strategy obtains the highest MDS and PDS , because the MMI criterion relieves the constraint of the language model, under 
which generic responses always receive a higher generative probability.", "Meanwhile, the PDS metric aligns well with the basic MDS, but the relative gap becomes larger between the Greedy-Seq2Seq (= 0.1) and Greedy-Seq2Seq (= 0.8) models.", "Enlarging the relative gap between models helps distinguish the performance of similar models and evaluate the contribution of specific modules inside them.", "When comparing Seq2Seq and Attention-Seq2Seq, the relative gain of the attention module in terms of MDS was 2.1%, while it became 3.6% under the PDS metric.", "In practice, it is reasonable to trade off relevance against diversity.", "PDS is more suitable for choosing systems with stringent diversity requirements, while MDS is a softer metric that should be considered when measuring the diversity improvements gained by integrating new modules into NRG models.", "Moreover, the relevance-oriented metric MaxBLEU improves along with the diversity-oriented PDS and MDS.", "This phenomenon indicates a relationship between relevance and diversity contrary to that observed in some other text generation tasks (e.g., image captioning (Yao et al., 2017)).", "Since there are generally many references for a given query, relevance and diversity can be improved simultaneously in the response generation task (Li et al., 2016a).", "Thus, topic shifts in the generated results are tolerable.", "To validate the correlations between human ratings and the proposed metrics, we further invite 9 annotators with rich movie knowledge to judge the relevance and diversity of the generated responses from the benchmark methods.", "Each baseline model generates 10 responses for each query in the test dataset.", "The annotators are first asked to judge whether a generated response is relevant to the query (labeled with 1) or not (labeled with 
0).", "After that, the annotators estimate the diversity of the relevant responses of each query on a scale of 1 to 3.", "The final Fleiss Kappa (Fleiss, 1971) score is 0.46, which denotes moderate agreement among the annotators.", "The Pearson and Spearman correlations between the human evaluations and each metric are given in Table 6.", "The proposed metrics correlate moderately with human judgments (p-value < 0.05), which is quite different from the correlation test in Liu et al. (2016).", "This can be attributed to the fact that there are multiple references for each query in our test dataset.", "Although the proposed metrics are derived from word-overlap based BLEU scores, expanding the references of each query makes such scores much more reasonable for evaluating the relevance and diversity of generated responses.", "(LSD-SCC), collected from the movie discussion threads in the Reddit community, for training and testing Neural Response Generation (NRG) models.", "In addition, necessary data cleansing and pruning is performed to remove noise from the utterances.", "Moreover, we employ volunteers to annotate a diverse query-response testing set, with reference groups taken into consideration for objectively quantifying the diversity of generated results.", "On the basis of the testing set, we propose two diversity metrics (mean diversity score and probabilistic diversity score) calculated according to the standard MaxBLEU score.", "Furthermore, we investigate the performance of popular Seq2Seq-based models with various diversity-promotion strategies, and their scores are collected to validate the effectiveness of the proposed metrics.", "The proposed dataset and evaluation metrics are expected to be used for the effective training and reasonable testing of NRG models.", "In future studies, we will explore the possibility of promoting diversity in the learning procedure, by directly optimizing a diversity loss in 
the cost function.", "Besides, injecting external information during response generation would be another challenging direction.", "We thank the anonymous reviewers for their insightful comments.", "This research is partially supported by the National Natural Science Foundation of China (No.61572151, No.61602131, No.61672192) and the National High Technology Research and Development Program (863 Program) of China (No.2015AA015405)." ]
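The MaxBLEU-based diversity scores discussed above can be illustrated with a minimal sketch. The exact MDS/PDS formulas are not given in this excerpt, so the reading below is an assumption: MaxBLEU is the best sentence-level BLEU of a response against any reference in its group, and the sketched mean diversity score is one minus the average MaxBLEU of each generated response against the remaining responses (function names and the 2-gram cap are ours, not the paper's).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=2):
    # Clipped n-gram precision up to max_n, geometric mean, brevity penalty.
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = sum(cand.values())
        precisions.append(overlap / total if total and overlap else 1e-9)
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

def max_bleu(candidate, references):
    # MaxBLEU: best match against any reference in the group.
    return max(sentence_bleu(candidate, r) for r in references)

def mean_diversity_score(responses):
    # Hypothetical MDS reading: 1 minus the average MaxBLEU of each
    # response against the remaining responses (higher = more diverse).
    scores = [max_bleu(r, responses[:i] + responses[i + 1:])
              for i, r in enumerate(responses)]
    return 1 - sum(scores) / len(scores)
```

Under this reading, a set of identical generic responses scores near 0 and a set of mutually dissimilar responses scores near 1, matching the paper's observation that inter-sibling re-ranking with a larger parameter raises the diversity score.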
[ "abstain", "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "abstain", "objective", "abstain", "other", "other" ]
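Each record in this file stores a paper as two parallel sequences, `sentences` and `labels`, with one label per sentence drawn from the tag set seen above (abstain, objective, method, result, other). A hypothetical loader-side sanity check (the function name and error handling are ours) might verify the alignment before use:

```python
from collections import Counter

# Tag set observed in this corpus's label sequences.
TAGSET = {"abstain", "objective", "method", "result", "other"}

def check_alignment(sentences, labels):
    # One label per sentence, all labels drawn from the corpus tag set.
    if len(sentences) != len(labels):
        raise ValueError("sentences and labels are not the same length")
    unknown = sorted(set(labels) - TAGSET)
    if unknown:
        raise ValueError(f"unexpected labels: {unknown}")
    return Counter(labels)

counts = check_alignment(
    ["This paper presents a corpus and experiments to mine possession relations from text.",
     "We present new annotations for this task, and experimental results."],
    ["objective", "objective"],
)
```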
[ "This paper presents a corpus and experiments to mine possession relations from text.", "Specifically, we target alienable and control possessions, and assign temporal anchors indicating when the possession holds between possessor and possessee.", "We present new annotations for this task, and experimental results using both traditional classifiers and neural networks.", "Results show that the three subtasks (predicting possession existence, possession type and temporal anchors) can be automated.", "Every language has a way of expressing possessive relationships (Aikhenvald and Dixon, 2012).", "Possession is an asymmetric semantic relation between two entities, where one entity (the possessee) belongs to the other entity (the possessor) (Stassen, 2009).", "When it comes to defining possession, belongs covers a wide range of relationships, including (hereafter, we use x to refer to the possessor, and y to refer to the possessee) kinship (e.g., [my] x oldest [son] y ), part-whole (e.g., the [car] x 's [dashboard] y ), physical and temporary possession (e.g., [I] x have John's [book] y ), possession of something intangible (e.g., [John] x got the [flu] y last year ) and proximity (e.g., The [shelf] x has a [glass sculpture] y ).", "Possession relations can be divided into alienable (also referred to as acquired, transferable, non-intimate, etc.) 
and inalienable (also referred to as inherent, inseparable, intimate, etc.).", "Possessees that can be separated from their possessors are alienable, and possessees that cannot normally be separated from their possessors are inalienable (Heine, 1997).", "For example, [John] x 's [condo] y is alienable, and [John] x 's [arm] y is inalienable (some previous works would call the latter a part-whole relation instead).", "Tham (2004) defines control possession as a relation in which the possessor has temporary control of the possessee, but does not necessarily alienably possess it (e.g., [John] x borrowed the [car] y for the weekend ).", "Following the aforementioned works, possession goes beyond ownership of property.", "Possession relations can be expressed in a wide variety of syntactic constructions, including noun phrases (e.g., [John] x 's [car] y ) and clauses (e.g., [John] x bought a [blue car] y ).", "The subject of a verb can map to either the possessor, as exemplified above, or to the possessee (e.g., The [car] y belongs to [John] x ) (Aikhenvald and Dixon, 2012).", "Within computational linguistics, possession relationships have usually been studied as part of larger studies that target all relations between arguments connected with a syntactic pattern (e.g., possessive constructions, nominals).", "Additionally, previous efforts have mostly targeted alienable possession, or alternatively, ownership.", "The work presented here takes a different approach.", "We start by pairing people (plausible possessors) with physical objects (plausible possessees).", "Then, we determine whether a possession relationship exists, and if so,", "(a) determine the type (alienable or control) and", "(b) assign temporal anchors with respect to the event of which the possessor is the subject.", "We target all verbs, not only prototypical verbs of possession (e.g., have , get ).", "Thus, our approach extracts possessions intuitive to humans when there is no specific possession cue (e.g., 
we extract a control possession from The [computer] y at work was slow, [I] x didn't get anything done ).", "The main contributions of this paper are:", "(a) a deterministic procedure to pair plausible possessors and possessees;", "(b) a corpus annotating possession existence, possession type and temporal anchors;", "(c) a detailed corpus analysis per verb and type of possession; and", "(d) experimental results showing that the task can be automated.", "The literature has studied possession relations extensively from theoretical and conceptual points of view.", "Here, we succinctly present some of the most influential works in the area.", "The very definition of possession is not set in stone.", "Aikhenvald (2013) distinguishes three core meanings for possessive noun phrases that occur across languages: ownership (of property), whole-part (often referred to as part-whole), and kinship.", "Following a cross-linguistic perspective, she discusses possessions and time (present and former possession relationships, e.g., my tooth vs. my former axe ), temporary and permanent possession (e.g., borrow vs. acquire ) and others.", "Heine (1997) classifies possession relationships depending on the possessor and possessee.", "First, he makes a distinction between human (e.g., [I] x have a [house] y ) and non-human possessors (e.g., 
[This house] x has [two bedrooms] y ).", "Second, he differentiates three kinds of possession depending on the possessee: concrete possession (e.g., [I] x have [two cats] y ), social possession (e.g., [I] x have [two sisters] y ), and abstract possession (e.g., [I] x have [an idea] y ).", "Miller and Johnson-Laird (1976) differentiate between three kinds of possession: inherent, accidental, and physical; and provide the following example: He owns an umbrella (inherent), but she's borrowed it (accidental), though she doesn't have it with her (physical) .", "Possession relationships have also been defined in terms of their parameters.", "Stassen (2009) considers two parameters: permanent contact and control.", "These parameters are binary, and four kinds of possession emerge from combining them: alienable (permanent contact: +, control: +), inalienable (+, -), temporary (-, +), and abstract (-, -).", "Similarly, Heine (1997) defines five binary parameters: human possessor, concrete possessee, spatial proximity, temporal permanence, and control.", "Combining these parameters, he defines seven kinds of possession: alienable, physical, temporary, inalienable, abstract, inanimate inalienable and inanimate alienable possession.", "Most influential to the work presented here, Tham (2004) presents four types of possession:", "(a) inalienable (e.g., John has a daughter ),", "(b) alienable (e.g., John has a car ),", "(c) control (e.g., John has the car (for the weekend) ), and", "(d) focus (e.g., John has the window (to clean) ).", "In this paper, we target alienable and control possessions.", "We discard inalienable possessions because their automated extraction has been studied before, at least partially, e.g., part-whole (Girju et al., 2006), and focus possessions because they occurred only 5 times in the corpus we work with.", "Within computational linguistics, possession relations have been mostly studied as one of the many relations encoded in a given syntactic construction.", 
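Stassen's parameterization above is compact enough to encode directly. The sketch below is only a literal transcription of his two binary parameters into a lookup table; the names are ours.

```python
# Stassen (2009): two binary parameters, permanent contact and control,
# combine into four kinds of possession.
STASSEN_KINDS = {
    (True, True): "alienable",     # permanent contact: +, control: +
    (True, False): "inalienable",  # permanent contact: +, control: -
    (False, True): "temporary",    # permanent contact: -, control: +
    (False, False): "abstract",    # permanent contact: -, control: -
}

def possession_kind(permanent_contact: bool, control: bool) -> str:
    return STASSEN_KINDS[(permanent_contact, control)]
```

For instance, a borrowed car (no permanent contact, but control) maps to "temporary", which corresponds closely to the control possessions targeted in this paper.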
"For example, Tratz and Hovy (2013) extract semantic relations within English possessives.", "They propose a set of 18 relations, e.g. temporal (e.g., [today] x 's [rates] y ), extent (e.g., [6 hours] y ' [drive] x ).", "Their controller / owner / user relation (one relation with three aliases) is the closest relation to the alienable and control possessions we target in this paper.", "Unlike them, we distinguish between alienable and control possessions, and assign temporal anchors to possessions.", "Additionally, we are not restricted to possessive constructions.", "Instead, we start by pairing potential possessors and possessees within a sentence.", "Extracting semantic relations between noun compounds (Nakov and Hearst, 2013; Tratz and Hovy, 2010) usually includes extracting possession relations, e.g., [family] x [estate] y .", "Because they target noun compounds, they disregard numerous possessions encoded in text at the clause or sentence level.", "Although they do extract many relations from noun compounds beyond possessions, they do not distinguish between alienable and control possessions, or temporally anchor relations with respect to events in which the possessor participates.", "To the best of our knowledge, the work by Banea et al. 
(2016) is the only one on extracting possession relations without imposing syntactic constraints.", "They build a dataset working with blog texts, but do not present results on automatic extraction.", "Their definition of possession includes alienable and control possessions, but they do not distinguish between them.", "Additionally, they only consider as possessors the author of a blog, and as possessees concrete nouns in the blog posts by the possessor.", "Regarding time, they annotate possessions at the time of the utterance.", "Unlike them, we distinguish between alienable and control possessions, and assign temporal anchors with respect to an event in which the possessor participates.", "Figure 2, pairs generated: (She, hats) and (She, hatbox).", "Figure 1, synsets subsuming potential possessees: antiquity.n.01, block.n.01, cone.n.01, container.n.01, covering.n.02, decker.n.01, device.n.01, fabric.n.01, fixture.n.01, float.n.01, furnishing.n.01, insert.n.01, layer.n.01, lemon.n.01, marker.n.01, plaything.n.01, ready-made.n.01, squeaker.n.01, strip.n.01, vehicle.n.01.", "We create a corpus 1 following two steps.", "First, we generate intrasentential pairs ( x , y ) of potential possessors ( x ) and possessees ( y ).", "Second, we annotate whether a possession exists, and if so, the type and temporal anchors.", "Generating pairs a priori proved more effective than giving annotators plain text and asking them to annotate possessions.", "We add our annotations to OntoNotes (Hovy et al., 2006).", "Doing so has several advantages.", "First, OntoNotes contains texts from several domains and genres (e.g., conversational telephone speech, weblogs, broadcast), thus we do not only work with newswire.", "Second, OntoNotes includes part-of-speech tags, named entities and parse trees, three annotation layers that allow us to streamline the corpus creation process.", "Our goal is to obtain pairs ( x , y ) such that it is plausible that x is the possessor of possessee y .", "To do so, we follow these steps: 1. 
Collect as potential possessors all PERSON named entities and personal pronouns (part-of-speech tag PRP) I , he , she , we and they .", "2. Discard potential possessors that are not the nominal subject ( nsubj syntactic dependency) of a verb.", "Let us name that verb verb x .", "1 Available at http://www.cse.unt.edu/ blanco. Table 1, counts of pairs ( x , y ) generated per type of potential possessor ( x ) and possessee ( y ); columns: possessor is a pronoun, possessor is a person NE, all. device.n.01: 295, 74, 369; container.n.01: 189, 30, 219; covering.n.02: 138, 54, 192; vehicle.n.01: 124, 36, 160; fabric.n.01: 6, 7, 13; block.n.01: 9, 2, 11; plaything.n.01: 3, 2, 5; fixture.n.01: 4, 0, 4; antiquity.n.01: 2, 0, 2; other: 4, 0, 4; all: 774, 205, 979.", "3. For each possessor, collect as potential possessees all nouns reachable from verb x in the dependency tree and subsumed in WordNet (Miller, 1995) by the synsets in Figure 1. Step (1) selects most people (not groups), and is inspired by Aikhenvald (2013, p. 11), who states that possessors are usually animate.", "Step (2) reduces the number of potential possessors, but note that we do not impose any restriction on verb x , which may or may not be a verb of caused possession (Beavers, 2011).", "Finally, Step (3) restricts the kind of objects considered as possessees.", "The list of synsets was defined after analyzing the WordNet noun hierarchy and prior to generating pairs.", "Most of these synsets are children of artifact.n.01 ; other children of artifact.n.01 were discarded because intuitively they cannot be possessees.", "For example, we discard mystification.n.02: something designed to mystify or bewilder .", "Figure 2 shows a sample sentence and the pairs generated.", "Note that these pairs include distant possessor-possessee pairs, not only subject-object pairs.", "Nouns track , rest , Polaroid and snapshot are discarded as potential possessees because they are not subsumed by the synsets in Figure 1. 
Table 2, inter-annotator agreements (raw percentage and Cohen's kappa): yes , never , unk , inv : 86.1, 0.79; alienable , control : 82.5, 0.77; before yes , before no : 83.6, 0.68; during yes , during no : 88.8, 0.75; after yes , after no : 83.6, 0.59.", "The total number of pairs generated after executing Steps (1)-(3) is 2,025.", "In order to reduce the annotation effort, we set out to annotate 1,000 pairs.", "After trying several strategies, we reduce the number of pairs as follows.", "First, we discard pairs with verb x see , think , believe , say and tell because pilot annotations revealed that almost no possessions can be extracted from them (1,757 pairs left).", "Second, we discard pairs ( x , y ) such that verb x occurs five or fewer times (979 pairs left).", "Table 1 presents basic counts per type of possessor (named entity or personal pronoun) and possessee (WordNet synset) for the 979 pairs.", "After automatically generating pairs of potential possessors and possessees, annotators validate them manually.", "Annotations were done in-house, and the annotation interface showed the current sentence (with x , y and verb x highlighted), as well as the previous and next sentences.", "The annotation process includes two major steps.", "First, annotators decide whether a possession relation exists between x and y based on the three sentences provided.", "More specifically, they choose from the following labels: yes if a possession exists at some point of time with respect to verb x ; never if a possession does not exist at any point of time with respect to verb x ; unk if it is sound to ask whether x is the possessor of possessee y , but there is not enough information to choose yes or never ; and inv if either the potential possessor x is not animate, or the potential possessee y is nonsensical in the given context.", "Temporal anchors: whether the possession is true at some point of time before , during , and at some point of time after verb x takes place (three binary 
decisions).", "Following the literature (Tham, 2004), we define alienable possession as a possessor owning a possessee, and control possession as a possessor having control of the possessee, but not necessarily ownership.", "Annotators were instructed to use world knowledge and fully interpret the sentences provided beyond what is explicitly stated.", "We present annotation examples in Section 5. Inter-Annotator Agreement.", "The annotations were done by two graduate students.", "Both of them annotated 35% of all pairs (possession existence, possession type and temporal anchors).", "We show inter-annotator agreements in Table 2. Cohen's kappa for possession detection (labels yes , never , unk and inv ) is 0.79, and 0.77 when including possession type (labels alienable and control ).", "Answering whether the possession is true before, during or after verb x obtains lower coefficients: 0.68, 0.75 and 0.59 respectively.", "Not surprisingly, the agreement for during is higher.", "Note that coefficients in the range 0.60-0.80 are considered substantial, and coefficients over 0.80 are usually considered perfect (Artstein and Poesio, 2008).", "Given this high agreement, the rest of the pairs (65%) were annotated once.", "Figure 3 presents percentages per label for all verbs and the top 10 most frequent verbs.", "Overall, 36.5% of pairs are validated ( alienable : 20.6%, control : 15.9%), and only 5.8% of pairs are annotated unk .", "The relatively high percentage of the inv label is mostly due to potential possessees that can only be possessed in certain contexts, e.g., compare [They] x asked [regulators] y to suggest new ways to [. . . ] ( inv ) vs. 
[They] x replaced the [regulators] y to control the flow of water ( yes ).", "The percentage distributions depend heavily on the verb at hand.", "Note that several verbs with high alienable and control labels are not prototypical verbs of possession (e.g., go , use , know ).", "When a possession holds, the type is most likely control for most verbs.", "The only exceptions are have (23.7% vs. 17.7%), get (32.4% vs. 10.8%), make (29.4% vs. 17.6%) and know (16.6% vs. 8.3%).", "The most productive verb as far as alienable possession is get (32.4%), and as far as control possession, use (43.2%).", "Labels per temporal anchor with respect to verb x (binary flags for before, during and after) and possession type are presented in Table 3. Alienable and control possessions show opposite trends for before and after , and substantially different distributions for during .", "The vast majority of control possessions are true during verb x (85.3% vs. 14.7%), as well as a more modest majority of alienable possessions (55.9% vs. 
44.1%).", "Alienable and control possessions, however, have opposite temporal anchors for before and after .", "Specifically, most alienable possessions are true before and after verb x (69.8% and 92.6% respectively), and most control possessions are not true before and after verb x (71.2% and 66.7%).", "We present annotation examples using selected pairs of possessors and possessees in Table 4.", "In Sentence (1), annotators interpreted that the relationship between he and car is an alienable possession.", "While not explicitly stated, annotators interpreted that he is an adult, and world knowledge tells us that most adults own the cars they drive unless a modifier indicates otherwise (e.g., rental car, my father's car).", "Regarding temporal anchors, the possession between he and car is true before and during died , but not after.", "Sentence (2) is a common example of an alienable possession that is true after verb x .", "The subject of a verb of creation (e.g., make , build ) often becomes an alienable possessor of the direct object after the verb, but not before or during (because the object has not come into being yet).", "Sentences (3) and (4) exemplify control possessions.", "In Sentence (3), He is borrowing my father's car for a period of time, and thus He has control over it but does not own it.", "Regarding temporal anchors, nothing in the sentence indicates that He will have control over the car before or after kept .", "Note that our procedure to generate pairs would not generate the pair ( father , car ), but previous work has targeted possessives (Section 3).", "In Example (4), verb x is felt , yet we extract a valid control possession.", "I is a crew member of a warship and is describing his experience while on board.", "Annotators understood he had control over the ship (at least partially) before, during and after, as felt did not last long and there is no indication that I left the boat immediately before or after felt .", "Sentences (5)-(7) present 
examples in which annotators did not annotate a possession relation (labels never , unk , and inv ).", "In Sentence (5), the mask belongs to Joseph .", "There is no indication that a possession relation exists between LaToya and mask , although LaToya was in close spatial proximity of the mask worn by Joseph .", "In Sentence (6), it is the case that They have some knowledge about the car that was seized, and it appears that him , not They , may be the alienable possessor.", "It is unclear, however, whether They and car are related by a control possession, thus annotators chose label unk .", "Finally, Sentence (7) exemplifies label inv .", "While baggage is most of the time a concrete object that passes the restrictions on potential possessees (Section 4), in this context, it is part of the metaphor ideological baggage .", "Since we only target concrete possessees, annotators chose inv .", "We conduct experiments using Support Vector Machines and neural networks.", "Each pair ( x , y ) becomes an instance, and we create stratified train (80%) and test (20%) sets.", "We report results using the test set after tuning hyper-parameters using 10-fold cross-validation.", "More specifically, we train five classifiers and experiment with all instances but the ones annotated inv .", "The first classifier predicts possession existence ( yes , never or unk ).", "The second classifier predicts possession types, i.e., classifies pairs between which a possession holds ( yes ) into alienable or control .", "The third, fourth and fifth classifiers predict temporal anchors, i.e., classify pairs between which a possession holds, either alienable or control , into before yes or before no , during yes or during no , and after yes or after no .", "We trained the five classifiers using the SVM implementation in scikit-learn (Pedregosa et al., 2011).", "We tuned the hyper-parameters C and gamma using 10-fold cross-validation, and used the features that are summarized in Table 5. 
Verb features include the word and POS tag for the verb and for the previous and next tokens, as well as information regarding the outgoing and incoming dependencies.", "We also include a binary flag indicating whether the verb is a possession verb from the list collected by Viberg (2010, Table 1).", "Possessor and Possessee features are very similar to Verb features, but we consider the concatenation of words and POS tags.", "Possessee features also include information derived from the WordNet hypernym paths to the root of the noun hierarchy, i.e., entity.n.01 .", "More specifically, WN synset captures the synset from Figure 1 the possessee is subsumed by, and WN path features capture the top 6 synsets in the hypernym path from the possessee to entity.n.01 .", "Finally, Path features include three syntactic paths (syntactic dependency types and up / down symbols): from the possessor to the verb, from the possessee to the verb, and from the possessor to the possessee.", "The feature set is heavily inspired by previous work (e.g., Gildea and Jurafsky, 2002).", "We experimented with SVMs to establish a strong supervised baseline using linguistic information, and to compare with neural networks that take as input only words along with information regarding who is the potential possessor, possessee and verb x . Table 5, feature set used to extract possession relations (existence, type and temporal anchors) with Support Vector Machines. Verb features: word and tag (word form and part-of-speech tag of the verb); is possession verb (flag indicating whether the verb is in the list of possession verbs); previous, next tokens (word forms and part-of-speech tags of the previous and next tokens); dependency out (outgoing syntactic dependency type); dependencies in (flags indicating the incoming syntactic dependencies); left, right children (number of incoming syntactic dependencies to the left and right of the verb). Possessor features: words (concatenation of words); POS tags (full tag and first character, i.e., pronoun or noun); previous, next tokens; dependency out; dependencies in. Possessee features: same features as for the possessor; WN synset (WordNet synset from Figure 1 the possessee is subsumed by); WN path (WordNet synsets from entity.n.01 to the possessee). Path features: syntactic path between possessor and verb; between possessee and verb; between possessor and possessee.", "We experiment with feedforward and Long Short-Term Memory networks, and use the implementations in Keras (Chollet et al., 2015) with the TensorFlow backend (Abadi et al., 2015).", "All networks use GloVe embeddings with 100 dimensions (Pennington et al., 2014) and the Adam optimizer (Kingma and Ba, 2014).", "Regarding input, we experiment with the potential possessor x , possessee y , verb x , and the rest of the sentence.", "The three architectures are depicted in Figure 4. 
Feedforward Neural Network.", "The feedforward neural network takes as input the embeddings of the potential possessor x , possessee y and verb x .", "It has a fully connected hidden layer with 50 neurons and uses softmax in the output layer of size 3 for predicting possession existence ( yes , never and unk ) or size 2 for predicting possession type ( alienable and control ) and temporal anchors ( yes and never for before, during and after).", "LSTM ppv .", "The first Long Short-Term Memory network takes as input a fixed-length sequence consisting of the potential possessor x , possessee y and verb x .", "We used 100 LSTM units (output dimension) and the output layer also uses softmax.", "While this LSTM has access to the same information as the feedforward network, we expect that the input, output and forget gates will learn to update the cell state to better solve our task.", "LSTM sent .", "The architecture of the second Long Short-Term Memory network is the same as LSTM ppv , but the input is different.", "LSTM sent takes as input the sequence of words from which the potential possessor x , possessee y and verb x were extracted.", "Each element in the input is represented by the concatenation of its word embedding and an additional embedding indicating if the token is the potential possessor x , possessee y , verb x , or none of them.", "Unlike the other two networks, LSTM sent has access to the full sentence, and we expect that the memory update mechanism (i.e., the input, output and forget gates) will learn the context most relevant for our task.", "Possession Existence and Type.", "Table 6 presents results obtained with the majority baseline (possession existence: always never , possession type: always alienable ), SVMs and the three neural networks.", "All models outperform the majority baseline in both tasks (possession existence F1: 0.24, possession type F1: 0.40), and the three neural architectures outperform SVM (existence: 0.57-0.74 vs. 
0.56, type: 0.61-0.67 vs. 0.58).", "Regarding possession existence, the vanilla feedforward neural network alone performs similarly to the SVM (F1: 0.57 vs. 0.56), indicating that word embeddings capture the kind of verbs and (potential) possessors and possessees more likely to have a possession relationship.", "Despite the small dataset (total: 979 pairs), the LSTMs", "outperform the feedforward neural network (0.74 vs. 0.57).", "LSTM ppv performs surprisingly well (F1: 0.69) even though it only has access to the possessor, possessee and verb x .", "LSTM sent benefits greatly from having access to the full sentence (F1: 0.74).", "This shows that context plays a vital role in deciding the existence of possession.", "Regarding possession type, the feedforward neural network is comparable to LSTM ppv .", "Intuitively, distinguishing between alienable and control possessions can be done mostly based on the possessor, possessee and verb x , and the embeddings capture this kind of information.", "For example, verbs such as use and rent indicate a control possession, while acquire indicates an alienable possession.", "Temporal Anchors.", "Table 7 presents results obtained with SVMs and the best neural network architecture in this subtask.", "LSTM ppv performs similarly to the SVM (before: 0.71 vs. 0.76, during: 0.75 vs. 0.72, after: 0.70 vs. 0.73).", "As expected, F1 scores are higher with the labels that occur more often: yes is more frequent than never with all temporal anchors, especially during and after (Table 3), and F1 scores for yes are higher than for never (before: 0.73 vs. 0.68, during: 0.82 vs. 0.59, after: 0.77 vs. 
0.54).", "Possession relations are present in all languages, and they can reflect relationships, values, concepts and cultural changes (Aikhenvald, 2013).", "In this", "paper, we mine possessions from text.", "Specifically, we extract alienable and control possessions, and specify temporal anchors with respect to the verb of which the possessor is the subject.", "We have created the first corpus annotating types of possessions following two steps.", "First, we automatically pair potential possessors and possessees, resulting in 979 pairs.", "Second, we manually validate pairs by annotating possession existence ( yes , never , unk and inv ), types ( alienable or control ) and temporal anchors ( before yes / no , during yes / no , after yes / no ).", "Inter-annotator Cohen's kappa coefficients show that the annotation task can be done reliably (Table 2).", "Experimental results show that the task can be automated, and that neural networks outperform SVMs trained with features extracted from linguistic structure, although we experiment with a relatively small dataset.", "Beyond fundamental research, we believe that mining possession types has several applications.", "For example, marketers may target people who do not alienably possess something, and certain skills may be inferred from the kind of objects people have control possessions over (e.g., an individual having a control possession of an 18-wheeler most likely knows how to drive large trucks and has a commercial driver's license)." ]
[ "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "result", "method", "abstain" ]
[ "We present an adaptation of RNN sequence models to the problem of multi-label classification for text, where the target is a set of labels , not a sequence.", "Previous such RNN models define probabilities for sequences but not for sets; attempts to obtain a set probability are afterthoughts of the network design, including pre-specifying the label order, or relating the sequence probability to the set probability in ad hoc ways.", "Our formulation is derived from a principled notion of set probability, as the sum of probabilities of corresponding permutation sequences for the set.", "We provide a new training objective that maximizes this set probability, and a new prediction objective that finds the most probable set on a test document.", "These new objectives are theoretically appealing because they give the RNN model freedom to discover the best label order, which often is the natural one (but different among documents).", "We develop efficient procedures to tackle the computation difficulties involved in training and prediction.", "Experiments on benchmark datasets demonstrate that we outperform state-of-the-art methods for this task.", "Multi-label text classification is an important machine learning task wherein one must predict a set of labels to associate with a given document; for example, a news article might be tagged with labels sport , football , 2018 world cup , and Russia .", "Formally, we are given a set of label candidates L = { 1 , 2 , ..., L } , and we aim to build a classifier which maps a document x to a set of labels y ⊆ L .", "The label set y is typically written as a binary vector y ∈ { 0 , 1 }^L , with each bit y_ℓ indicating the presence or absence of a label.", "Naively, one could predict each label independently without considering label dependencies.", "This approach is called Binary Relevance (Boutell et al., 2004; Tsoumakas and Katakis, 2007), and is widely used due to its simplicity, but it often does not deliver good 
performance.", "Intuitively, knowing some labels, such as sport and football , should make it easier to predict 2018 world cup and then Russia .", "There are several methods that try to capture label dependencies by building a joint probability estimation over all labels p ( y = ( y 1 , y 2 , ..., y L ) | x ) (Ghamrawi and McCallum, 2005; Read et al., 2009; Dembczynski et al., 2010; Li et al., 2016).", "The most popular approach, Probabilistic Classifier Chain (PCC) (Dembczynski et al., 2010), learns labels one by one in a predefined fixed order: for each label, it uses one classifier to estimate the probability of that label given all previous label predictions, p ( y l | y 1 , ..., y l-1 , x ) .", "PCC's well-known drawback is that errors in early probability estimations tend to affect subsequent predictions, and can become massive when the total number of label candidates L is large.", "The recurrent neural network (RNN) was originally designed to output a sequential structure, such as a sentence (Cho et al., 2014).", "Recently, RNNs have also been applied to multi-label classification by mapping the label set to a sequence (Wang et al., 2016; Zhang et al., 2016; Jin and Nakayama, 2016; Wang et al., 2017b,a; Chen et al., 2018; Yang et al., 2018).", "In contrast to PCC where a binary decision is made for each label sequentially, RNN only predicts the positive labels explicitly and therefore its decision chain length is equal to the number of positive labels, not the number of all labels.", "This makes RNN suffer less from early estimation errors than PCC.", "Both PCC and RNN rely heavily on label orders in training and prediction.", "In multi-label data, the labels are given as sets, not necessarily with natural orders.", "RNN defines a sequence probability, while PCC defines a set probability.", "Various ways of arranging sets as sequences have been explored: ordering alphabetically, by frequency, based on a label hierarchy, or according to some label ranking algorithm 
(Liu and Tsang, 2015).", "Previous experimental results show that the choice of label order can have a significant impact on learning and prediction (Vinyals et al., 2016; Nam et al., 2017; Chen et al., 2018).", "In the above example, starting the label prediction sequence with Russia , while correct, would make the other predictions very difficult.", "Previous work has shown that it is possible to train an RNN on multi-label data without specifying the label order in advance.", "With special training objectives, RNN can explore different label orders and converge to some order automatically (Vinyals et al., 2016).", "In this paper we follow the same line of study: We consider how to adapt the RNN sequence model to multi-label set prediction without specifying the label order.", "Specifically, we make the following contributions:", "1. We analyze existing RNN models proposed for multi-label prediction, and show that existing training and prediction objectives are not well justified mathematically and have undesired consequences in practice.", "2. We develop efficient approximate training and prediction methods.", "We propose new training and prediction objectives based on a principled notion of set probability.", "Our new formulation avoids the drawbacks of existing ones and gives the RNN model freedom to discover the best label order.", "3. 
We crawl two new datasets for the multi-label prediction task, and apply our method to them.", "We also test our method on two existing multi-label datasets.", "The experimental results show that our method outperforms state-of-the-art methods on all datasets.", "We release the datasets at http://www.ccis.neu.", "edu/home/kechenqin .", "In this section, we describe how existing approaches map sequences to sets, by writing down their objective functions using consistent notations.", "To review RNN designed for sequences, let s = ( s 1 , s 2 , ..., s T ) be an input sequence of outcomes, in a particular order, where s t ∈ { 1 , 2 , ..., L } ; the order is often critical to the datapoint.", "An RNN model defines a probability distribution over all possible output sequences given the input in the form p ( s = ( s 1 , s 2 , ..., s T ) | x ) = ∏_{t=1}^{T} p ( s t | x, s 1 , s 2 , ..., s t-1 ) .", "To train the RNN model, one maximizes the likelihood of the ground truth sequence.", "At prediction time, one seeks to find the sequence with the highest probability s = arg max s p ( s | x ) , and this is usually implemented approximately with a beam search procedure (Lowerre, 1976) (which we modified into Algorithm 1).", "The sequence history is encoded with an internal memory vector h t which is updated over time.", "RNN is also often equipped with the attention mechanism (Bahdanau et al., 2014), which at each timestep t puts different weights on different words (features) and thus effectively attends to a list of important words.", "The context vector c t is computed as the weighted average over the dense representation of important words to capture information from the document.", "The context c t , the RNN memory h t at timestep t , and the encoding of the previous label s t-1 are all concatenated and used to model the label probability distribution at time t as p ( s t | x, s 1 , s 2 , ..., s t-1 ) ∝ softmax ( φ ( c t , h t , s t-1 )) , where φ is a non-linear function, and softmax is the 
normalized exponential function.", "To apply RNN to multi-label problems, one approach is to map the given set of labels y to a sequence s = ( s 1 , s 2 , ..., s T ) , on training documents.", "This is usually obtained by writing the label set in a globally fixed order (e.g. by label frequency), as in PCC.", "Once the mapping is done, RNN is trained with the standard maximum likelihood objective (Nam et al., 2017): maximize ∑_{n=1}^{N} log p ( s ( n ) | x ( n ) ) (1) where x ( n ) is the n -th document and N is the total number of documents in the corpus.", "Vinyals et al. (2016) proposes to dynamically choose during training the sequence order deemed most probable by the current RNN model: maximize ∑_{n=1}^{N} max_{s ∈ π ( y ( n ) )} log p ( s | x ( n ) ) (2) where π ( y ( n ) ) stands for all permutations of the label set y ( n ) .", "This eliminates the need to manually specify the label order.", "However, as noticed [Table 1: Comparison between previous and our set-RNN training and prediction objectives. seq2seq-RNN trains by maximizing ∑_{n=1}^{N} log p ( s ( n ) | x ( n ) ) ; Vinyals-RNN-max by ∑_{n=1}^{N} max_{s ∈ π ( y ( n ) )} log p ( s | x ( n ) ) ; Vinyals-RNN-uniform by ∑_{n=1}^{N} ∑_{s ∈ π ( y ( n ) )} log p ( s | x ( n ) ) ; Vinyals-RNN-sample by ∑_{n=1}^{N} ∑_{s ∈ π ( y ( n ) )} p ( s | x ( n ) ) log p ( s | x ( n ) ) ; all four predict y = set ( s ) with s = arg max s p ( s | x ) ; set-RNN (ours) trains by maximizing ∑_{n=1}^{N} log ∑_{s ∈ π ( y ( n ) )} p ( s | x ( n ) ) and predicts y = arg max y p ( y | x ) .]", "by the authors, this objective cannot be used in the early training stages: the early order choice (often random) is reinforced by this objective and can become permanently stuck.", "To address this issue, Vinyals et al. 
(2016) also proposes two smoother alternative objectives to initialize the model training: The authors suggest that one first consider many random orders for each label set in order to explore the space: maximize ∑_{n=1}^{N} ∑_{s ∈ π ( y ( n ) )} log p ( s | x ( n ) ) (3) After that, one can sample sequences following the model predictive distribution instead of the uniform distribution: maximize ∑_{n=1}^{N} ∑_{s ∈ π ( y ( n ) )} p ( s | x ( n ) ) log p ( s | x ( n ) ) (4) In training, one needs to schedule the transition among these objectives, a rather tricky endeavor.", "At prediction time, one needs to find the most probable set.", "This is done by (approximately) finding the most probable sequence s = arg max s p ( s | x ) and treating it as a set y = set ( s ) .", "With a large number of sequences, it is quite possible that the argmax actually has a low probability, which can lead to neglecting important information when we ignore sequences other than the top one.", "We propose a new way of adapting RNN to multi-label set prediction, which we call set-RNN .", "We appreciate the RNN model structure (Rumelhart et al., 1988), which directly defines a probability distribution over all possible sequences, and introduce training and prediction objectives tailored for sets that take advantage of it, while making a clear distinction between the sequence probability p ( s | x ) and the set probability p ( y | x ) .", "We define the set probability as the sum of sequence probabilities for all sequence permutations of the set, namely p ( y | x ) = ∑_{s ∈ π ( y )} p ( s | x ) .", "Based on this formulation, an RNN also defines a probability distribution over all possible sets indirectly since ∑_y p ( y | x ) = ∑_y ∑_{s ∈ π ( y )} p ( s | x ) = ∑_s p ( s | x ) = 1 .", "(For this equation to hold, in theory, we should also consider permutations s with repeated labels, such as (1 , 2 , 3 , 1) .", "But in practice, we find it very rare for RNN to 
actually generate sequences with repeated labels in our setup, and whether allowing repetition or not does not make much", "difference.) In standard maximum likelihood training, one wishes to maximize the likelihood of given label sets, namely, ∏_{n=1}^{N} p ( y ( n ) | x ( n ) ) = ∏_{n=1}^{N} ∑_{s ∈ π ( y ( n ) )} p ( s | x ( n ) ) , or equivalently, maximize ∑_{n=1}^{N} log ∑_{s ∈ π ( y ( n ) )} p ( s | x ( n ) ) (5) 3.1 How is our new formulation different?", "This training objective (5) looks similar to the objective (3) considered in previous work (Vinyals et al., 2016), but in fact they correspond to very different transformations.", "Under the maximum likelihood framework, our objective (5) corresponds to the transformation p ( y | x ) = ∑_{s ∈ π ( y )} p ( s | x ) , while objective (3) corresponds to the transformation p ( y | x ) = ∏_{s ∈ π ( y )} p ( s | x ) .", "The latter transformation does not define a valid probability distribution over y (i.e., ∑_y p ( y | x ) ≠ 1 ), and it has an undesired consequence in practical model training: because of the multiplication operation, the RNN model has to assign equally high probabilities to all sequence permutations of the given label set in order to maximize the set probability.", "If only some sequence permutations receive high probabilities while others receive low probabilities, the set probability computed as the product of sequence probabilities will still be low.", "In other words, if, for each document, RNN finds one good way of ordering relevant labels (such as hierarchically) and allocates most of the probability mass to the sequence in that order, the model still assigns low probabilities to the ground truth label sets and will be penalized heavily.", "As a consequence, the model has little freedom in discovering and concentrating on some natural label order.", "In contrast, with our proposed training objective, in which the multiplication operation is replaced by the 
summation operation, it suffices to find only one reasonable permutation of the labels for each document.", "It is worth noting that different documents can have different label orders; thus our proposed training objective gives the RNN model far more freedom on label order.", "The other two objectives (2) and (4) proposed in (Vinyals et al., 2016) are less restrictive than (3), but they have to work in conjunction with (3) because of the self-reinforcement issue.", "Our proposed training objective has a natural probabilistic interpretation, and does not suffer from the self-reinforcement issue.", "Thus it can serve as a stand-alone training objective.", "Also, using Jensen's inequality, one can show that objective (3) is maximizing a lower bound on the log-likelihood, while objective (5) is maximizing it directly.", "Training an RNN model with the proposed objective (5) requires summing up | y |! sequence (permutation) probabilities for a set y , where | y | is the cardinality of the set.", "Thus evaluating this objective exactly can be intractable.", "We can approximate this sum by only considering the top K highest probability sequences produced by the RNN model.", "We introduce a variant of beam search for sets with width K and with the search candidates in each step restricted to only labels in the set (see Algorithm 1 with ALL = 1 ).", "This approximate inference procedure is carried out repeatedly before each batch training step, in order to find the highest probability sequences for all training instances occurring in that batch.", "The overall training procedure is summarized in Algorithm", "2. 
3.3 Predicting the Most Probable Set The transformation p ( y | x ) = ∑_{s ∈ π ( y )} p ( s | x ) also naturally leads to a prediction procedure, [Algorithm 1: Beam Search. Input: instance x ; subset of labels considered G ⊆ L ; Boolean flag ALL : 1 if sequences must contain all G labels, 0 if partial sequences are allowed. Output: a list of top sequences and the associated probabilities. 1 Let s 1 , s 2 , ..., s K be the top K sequences found so far.]", "associated probabilities", "which is different from the previous standard of directly using the most probable sequence as a set.", "We instead aim to find the most likely set y = arg max y p ( y | x ) , which involves summing up probabilities for all of its permutations.", "To make it tractable, we propose a two-level beam search procedure.", "First we run standard RNN beam search (Algorithm 1 with ALL = 0 ) to generate a list of highest probability sequences.", "We then consider the label set associated with each label sequence.", "For each set, we evaluate its probability using the same approximate summation procedure as the one used during model training (Algorithm 1 with ALL = 1 ): we run our modified beam search to find the top few highest probability sequences associated with the set and sum up their [Algorithm 2: Training method for set-RNN. Input: multi-label dataset ( x ( n ) , y ( n ) ) , n = 1 , 2 , ..., N . Output: trained RNN model parameters. 1 foreach batch do 2 foreach ( x n , y n ) in the batch do 3 Get top K sequences: 4 { s n 1 , ..., s nK , p ( s n 1 | x n ) , ..., p ( s nK | x n ) } = Beam Search ( x n , y n , ALL = 1 ) 5 end 6 Update model parameters by maximizing ∑_{( x n , y n ) ∈ batch} log ∑_{s ∈ { s n 1 , ..., s nK }} p ( s | x n ) 7 end]", "probabilities.", "Among these sets that we have evaluated, we choose the one with the highest probability as the prediction.", "The overall prediction procedure is summarized in Algorithm", "3. 
As we shall show in the case study, the most probable set may not correspond to the most probable sequence; these are certainly cases where our method has an advantage.", "Both our method and the competing state-of-the-art (Vinyals-RNNs) are at most K times slower than a vanilla-RNN, due to the time spent on dealing with K permutations per datapoint.", "Our proposed method is about as fast as the Vinyals-RNN methods, except for Vinyals-RNN-uniform, which is a bit faster (by a factor of 1.5) because its epochs do not run the additional forward pass.", "We test our proposed set-RNN method on 4 real-world datasets, RCV1-v2, Slashdot, TheGuardian, and Arxiv Academic Paper Dataset (AAPD) (Yang et al., 2018).", "We take the public RCV1-v2 release 1 and randomly sample 50,000 documents.", "We crawl Slashdot and TheGuardian documents from their websites 2 and treat the official editor tags as ground truth.", "We also gather a list of user tags 3 for each document and treat them as additional features.", "For the AAPD dataset, we follow [footnote 1: http://www.ai.mit.edu/projects/jmlr/papers/ volume5/lewis04a/lyrl2004_rcv1v2_README.htm ; footnote 2: Slashdot: https://slashdot.org/ . Note that there is another public Slashdot multi-label dataset (Read et al., 2009) but we do not use that one because it is quite small.]", "the same train/test split as in (Yang et al., 2018).", "Table 2 contains statistics of these four datasets.", "Links to documents, official editor tags, and user tags are available at http://www.ccis.neu.", "edu/home/kechenqin .", "To process documents, we filter out stopwords and punctuations.", "Each document is truncated to have a maximum of 500 words for TheGuardian and AAPD, and 120 for Slashdot and RCV1-v2.", "Zero padding is used if the document contains fewer words than the maximum number.", "Numbers and out-of-vocabulary words are replaced with special tokens.", "Words, user tags and labels are all encoded as 300-dimensional vectors using WORD 2 VEC (Mikolov et al., 2013).", "We 
implement RNNs with attention using TENSORFLOW -1.4.0 (Abadi et al., 2016).", "The dynamic function for RNNs is chosen to be gated recurrent units (GRU) with 2 layers and at most 50 units in the decoder.", "The size of the GRU unit is 300.", "We set the dropout rate to 0.3, and train the model with the Adam optimizer (Kingma and Ba, 2014) with learning rate 0 .", "0005 .", "Beam size is set to 12 at both training and inference stages.", "We adopt label-F1 (average F1 over labels) and instance-F1 [Table 3: Comparison of different approaches. Columns per dataset are label-F1 / instance-F1 for Slashdot, RCV1-v2 and TheGuardian, plus label-F1 / instance-F1 / hamming-loss / micro-F1 for AAPD. BR: .271/.484, .486/.802, .292/.572, .529/.654/.0230/.685. BR-support: .247/.516, .486/.805, .296/.594, .545/.689/.0228/.696. PCC: .279/.480, .595/.818, -, .541/.688/.0255/.682. seq2seq-RNN: .270/.528, .561/.824, .331/.603, .510/.708/.0254/.701. Vinyals-RNN-uniform: .279/.527, .578/.826, .313/.567, .532/.721/.0241/.711. Vinyals-RNN-sample: .300/.531, .590/.828, .339/.597, .527/.706/.0259/.697. Vinyals-RNN-max: .293/.530, .588/.829, .343/.599, .535/.709/.0256/.700. Vinyals-RNN-max-direct: .226/.518, .539/.808, .313/.583, .490/.702/.0257/.694. SGM: AAPD hamming-loss .0245 and micro-F1 .710 only. set-RNN: .310/.538, .607/.838, .361/.607, .548/.731/.0241/.720.]", "(average F1 over instances) as our main evaluation metrics, as defined below: label-F1 = (1/L) ∑_{ℓ=1}^{L} [ 2 ∑_{n=1}^{N} y_ℓ^(n) ŷ_ℓ^(n) ] / [ ∑_{n=1}^{N} y_ℓ^(n) + ∑_{n=1}^{N} ŷ_ℓ^(n) ] and instance-F1 = (1/N) ∑_{n=1}^{N} [ 2 ∑_{ℓ=1}^{L} y_ℓ^(n) ŷ_ℓ^(n) ] / [ ∑_{ℓ=1}^{L} y_ℓ^(n) + ∑_{ℓ=1}^{L} ŷ_ℓ^(n) ] , where for each instance n , y_ℓ^(n) = 1 if label ℓ is a given label in ground truth and ŷ_ℓ^(n) = 1 if label ℓ is a predicted label.", "We compare our method with the following methods: Binary Relevance (BR) (Tsoumakas and Katakis, 2007) with both independent training and prediction; Binary 
Relevance with support inference (BR-support) (Wang et al., 2018), which trains binary classifiers independently but imposes label constraints at prediction time by only considering label sets observed during training, namely y = arg max_{observed y} ∏_{ℓ=1}^{L} p ( y_ℓ | x ) ; Probabilistic Classifier Chain (PCC) (Dembczynski et al., 2010), which transforms the multi-label classification task into a chain of binary classification problems.", "Predictions are made with Beam Search.", "Sequence to Sequence RNN (seq2seq-RNN) (Nam et al., 2017), which maps each set to a sequence by decreasing label frequency and solves the multi-label task with an RNN designed for sequence prediction (see Table 1).", "Vinyals-RNN-uniform, Vinyals-RNN-sample, and Vinyals-RNN-max are three variants of RNNs proposed by Vinyals et al. (2016).", "They are trained with different objectives that correspond to different transformations between sets and sequences.", "See Table 1 for a summary of their training objectives.", "Following the approach taken by Vinyals et al. (2016), Vinyals-RNN-sample and Vinyals-RNN-max are initialized by Vinyals-RNN-uniform.", "We have also tested training Vinyals-RNN-max directly without having Vinyals-RNN-uniform as an initialization, and we name this variant Vinyals-RNN-max-direct .", "SGM (Yang et al., 2018) is similar to seq2seq-RNN but uses a new decoder structure that computes a weighted global embedding based on all labels as opposed to just the top one at each timestep.", "In BR and PCC, logistic regressions with L1 and L2 regularizations are used as the underlying binary classifiers.", "seq2seq-RNN, PCC, and SGM rely on a particular label order.", "We adopt the decreasing label frequency order, which is the most popular choice.", "Table 3 shows the performance of different methods in terms of label-F1 and instance-F1 .", "The SGM results are taken directly from (Yang et al., 2018), and are originally reported only on the AAPD dataset in terms of hamming-loss and micro-F1 .", 
"Definitions of these two metrics can be found in (Koyejo et al., 2015).", "Our method performs the best in all metrics on all datasets (except hamming loss on AAPD, see Table 3).", "In general, RNN based methods perform better than traditional methods BR, BR-support and PCC.", "Among the Vinyals-RNN variants, Vinyals-RNN-max and Vinyals-RNN-sample work the best and have similar performance.", "However, they have to be initialized by Vinyals-RNN-uniform.", "Otherwise, the training gets stuck in an early stage and the performance degrades significantly.", "One can see the clear degradation by comparing the Vinyals-RNN-max row (with initialization) with the Vinyals-RNN-max-direct row (without initialization).", "By contrast, our training objective in set-RNN does not suffer from this issue and can serve as a stable stand-alone training objective.", "On TheGuardian dataset, set-RNN performs slightly better than seq2seq-RNN in terms of instance-F1, but much better in terms of label-F1.", "It is known that instance-F1 is basically determined by the popular labels' performance while label-F1 is also sensitive to the performance on rare labels.", "Figure 1 shows that set-RNN predicts rare labels better than seq2seq-RNN.", "Next we analyze how much benefit our new set prediction strategy brings in.", "For each RNN-based method, we test two prediction strategies: 1) finding the sequence with the highest probability and outputting the corresponding set (this is the default prediction strategy for all models except set-RNN); 2) outputting the set with the highest probability (this is the default prediction [Figure 1: Average F1 over rare labels with the same frequency on TheGuardian dataset.]", "strategy for set-RNN).", "Table 4 shows how each method performs with these two prediction strategies.", "One can see that Vinyals-RNN-uniform and set-RNN benefit most from predicting the top set, Vinyals-RNN-sample, Vinyals-RNN-max and Vinyals-RNN-max-direct benefit less, and seq2seq-RNN 
does not benefit at all.", "Intuitively, for the top-set prediction to be different from the top-sequence prediction, the model has to spread probability mass across different sequence permutations of the same set.", "Results in Table 4 motivate us to check how sharply (or uniformly) distributed the probabilities are over different sequence permutations of the predicted set.", "We first normalize these sequence probabilities related to the predicted set and then compute the entropy.", "To make predictions with different set sizes (and hence different numbers of sequence permutations) comparable, we further divide the entropy by the logarithm of the number of sequences.", "Smaller entropy values indicate a sharper distribution.", "The results are shown in Figure", "2. seq2seq-RNN trained with a fixed label order and the standard RNN objective (1) generates very sharp sequence distributions.", "It basically only assigns probability to one sequence in the given order.", "The entropy is close to 0.", "In this case, predicting the set is no different from predicting the top sequence (see Table 4).", "On the other extreme is Vinyals-RNN-uniform, trained with objective (3), which spreads probabilities across many [Figure 2: Entropy of sequence probability distribution for each model.]", "Blue( \\ )=Vinyals-RNN-uniform, Orange(+)=set-RNN, Green( )=Vinyals-RNN-max, Red( )=seq2seq-RNN.", "sequences, and leads to the highest entropy among all models tested (the uniform distribution has the max entropy of 1).", "From Table 4, we see that by summing up sequence probabilities and predicting the most probable set, Vinyals-RNN-uniform's performance improves.", "But as discussed earlier, training with the objective (3) makes it impossible for the model to discover and concentrate on a particular natural label order (represented by a sequence).", "Overall Vinyals-RNN-uniform is not competitive even with the set-prediction enhancement.", "Between the above two extremes are Vinyals-RNN-max and 
set-RNN (we have omitted Vinyals-RNN-sample and Vinyals-RNN-max-direct here, as they are similar to Vinyals-RNN-max).", "Both models are allowed to assign probability mass to a subset of sequences.", "Vinyals-RNN-max produces sharper sequence distributions than set-RNN, because the max operator in its training objective (2) gives it an incentive to allocate most of the probability mass to the single most probable sequence.", "From Table 4, one can see that set-RNN clearly benefits from summing up sequence probabilities and predicting the most probable set, while Vinyals-RNN-max does not benefit much.", "Therefore, the sequence probability summation is best used in both training and prediction, as in our proposed method.", "Comparing the four datasets in Table 4, we also see that Slashdot and TheGuardian, which have larger label cardinalities (and therefore potentially more permutations per set), benefit more from predicting the most probable set than RCV1 and AAPD, which have smaller label cardinalities.", "We further demonstrate how set-RNN works with two examples.", "In the first example, from the RCV1-v2 dataset, the most probable set predicted by set-RNN (which is also the correct set in this example) does not come from the most probable sequence.", "Top sequences in decreasing probability order are listed in Table 5.", "The correct label set { forex, markets, equity, money markets, metals trading, commodity } has the maximum total probability of 0.161, but does not match the top sequence.", "Next we demonstrate the issue with prescribing the sequence order in seq2seq-RNN with a TheGuardian example (footnote 4).", "Figure 3 shows the predictions made by seq2seq-RNN and by our method.", "In this particular example, the top sequence agrees with the top set in our method's prediction, so we can just analyze the top sequence.", "seq2seq-RNN predicts Tate Modern (an incorrect but more popular label) while we predict Tate Britain (the correct but less popular label).", "The seq2seq 
predicted sequence follows the decreasing label-frequency order, while our predicted sequence does not.", "In the training data, Exhibition is more frequent than Tate Britain and Tate Modern.", "If we arrange labels by decreasing frequency, Exhibition is immediately followed by Tate Modern 19 times, and by Tate Britain only 3 times.", "So it is far more likely to have Tate Modern than Tate Britain after Exhibition.", "However, at the set level, Exhibition and Tate Modern co-occur 22 times while Exhibition and Tate Britain co-occur 12 times, so the difference is not so dramatic.", "(Footnote 4: This document can be viewed at http://www.guardian.co.uk/artanddesign/jonathanjonesblog/2009/apr/08/altermodernism-nicolas-bourriaud)", "(Figure 3: Top: best sequence by seq2seq-RNN; bottom: best sequence by set-RNN.)", "In this case, imposing the sequence order biases the probability estimation and leads to incorrect predictions.", "In this work, we present an adaptation of RNN sequence models to the problem of multi-label classification for text.", "An RNN directly defines probabilities only for sequences, not for sets.", "Different from previous approaches, which either transform a set into a sequence in some pre-specified order, or relate the sequence probability to the set probability in some ad hoc way, our formulation is derived from a principled notion of set probability.", "We define the set probability as the sum of the probabilities of all corresponding sequence permutations.", "We derive a new training objective that maximizes the set probability and a new prediction objective that finds the most probable set.", "These new objectives are theoretically more appealing than existing ones, because they give the RNN model more freedom to automatically discover and utilize the best label orders.", "We thank the reviewers and Krzysztof Dembczynski for their helpful comments, Xiaofeng Yang for her help with writing, and Bingyu Wang for his help with proofreading.", "This work has been generously 
supported through a grant from the Massachusetts General Physicians Organization." ]
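The entropy diagnostic described in the passage above (entropy of the distribution over the sequence permutations of a predicted set, divided by the logarithm of the number of sequences) can be sketched in a few lines. This is a minimal illustration with made-up permutation probabilities, not the paper's code:

```python
import math

def normalized_entropy(perm_probs):
    # Entropy of the distribution over the sequence permutations of a
    # predicted set, normalized by log(#permutations) so that sets of
    # different sizes are comparable: 0 = all mass on one label order,
    # 1 = uniform over all orders.
    total = sum(perm_probs)
    ps = [p / total for p in perm_probs if p > 0]
    h = -sum(p * math.log(p) for p in ps)
    return h / math.log(len(perm_probs))

# A seq2seq-RNN trained with a fixed label order concentrates its mass:
print(normalized_entropy([0.98, 0.01, 0.005, 0.005]))  # close to 0
# A uniform spread over orders (as with Vinyals-RNN-uniform) gives 1:
print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))
```

A sharp distribution over orders means summing permutation probabilities changes little; a flat one means the top set can differ substantially from the top sequence.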
[ "method", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "result", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "other", "method", "method", "method", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "other", 
"abstain", "abstain", "abstain", "method", "abstain", "method", "method", "objective", "abstain", "other", "other" ]
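The set-probability formulation summarized above (the probability of a label set is the sum of the probabilities of all its sequence permutations; training maximizes it, and prediction returns the most probable set) can be sketched as follows. The sequence probabilities here are made-up toy numbers standing in for an RNN's outputs, not values from the paper:

```python
from itertools import permutations
from math import log

# Toy stand-in for an RNN's sequence probabilities (hypothetical values).
seq_prob = {("a", "b"): 0.20, ("b", "a"): 0.15,
            ("c", "d"): 0.25, ("d", "c"): 0.02}

def set_prob(label_set):
    # P(set) = sum of the probabilities of all sequence permutations.
    return sum(seq_prob.get(p, 0.0) for p in permutations(sorted(label_set)))

def set_nll(label_set):
    # Training objective: negative log set probability.
    return -log(set_prob(label_set))

def predict_set(candidate_sets):
    # Prediction objective: the most probable set under summed permutation
    # probabilities (not the set of the single top sequence).
    return max(candidate_sets, key=set_prob)

# The single most probable sequence is ("c", "d") with 0.25, but summing
# permutations makes {a, b} the most probable set (0.35 vs 0.27):
print(predict_set([{"a", "b"}, {"c", "d"}]))
```

This mirrors the RCV1-v2 example above, where the most probable set does not come from the most probable sequence.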
[ "Open-domain dialog systems have a user-centric goal: to provide humans with an engaging conversation experience.", "User engagement is one of the most important metrics for evaluating open-domain dialog systems, and could also be used as real-time feedback to benefit dialog policy learning.", "Existing work on detecting user disengagement typically requires hand-labeling many dialog samples.", "We propose HERALD, an efficient annotation framework that reframes the training data annotation process as a denoising problem.", "Specifically, instead of manually labeling training samples, we first use a set of labeling heuristics to label training samples automatically.", "We then denoise the weakly labeled data using the Shapley algorithm.", "Finally, we use the denoised data to train a user engagement detector.", "Our experiments show that HERALD improves annotation efficiency significantly and achieves 86% user disengagement detection accuracy in two dialog corpora.", "Our implementation is available at https://github.com/Weixin-Liang/HERALD/ .", "Evaluation metrics heavily influence a field's research direction.", "The ultimate goal of open-domain dialog systems is to provide an enjoyable experience to users.", "Previous research mainly focuses on optimizing automatic dialog evaluation metrics such as BLEU, which models the distance between the system responses and a limited number of available references.", "However, it has been shown that these metrics correlate poorly with human judgments (Liu et al., 2016).", "Open-domain dialog system evaluation has long been one of the most difficult challenges in the dialog community for several reasons. (Footnote 1: Equal Contribution.) (1) The goal of", "dialog evaluation should be to evaluate users' conversational experience.", "Existing automatic evaluation metrics such as BLEU are mostly constrained to a static corpus, and do not capture the user experience in a realistic interactive setting.", "(2) Currently, self-reported user 
ratings are widely used to evaluate open-domain dialogs.", "However, self-reported ratings suffer from bias and variance among different users (Liang et al., 2020e).", "Although we could tell which dialog system is better by running statistical tests on a large number of noisy ratings, it is challenging to reliably locate dialogs with bad performance.", "Only by identifying these bad dialogs effectively can we correct the errors in these samples and improve dialog system quality.", "User engagement has been recognized as one of the essential metrics for open-domain dialog evaluation (Ram et al., 2018).", "Previous research also confirms that incorporating user engagement as real-time feedback benefits dialog policy learning (Yu et al., 2016).", "One of the most costly bottlenecks of learning to detect user disengagement is annotating many turn-level user engagement labels (Ghazarian et al., 2020).", "In addition, the data annotation process becomes more expensive and challenging for privacy-sensitive dialog corpora, due to the privacy concerns in crowdsourcing (Xia and McKernan, 2020).", "To improve annotation efficiency, we reframe the training data annotation process as a denoising problem.", "Specifically, instead of manually labeling each training datum, we automatically label the training samples with a set of labeling heuristics.", "The heuristic functions primarily consist of regular expressions (Regexes) and incorporate open-sourced natural language understanding (NLU) services.", "Since the automatically generated labels might contain noise, we then denoise the labeled data using the Shapley algorithm (Jia et al., 2019a,b).", "We use the Shapley algorithm to quantify the contribution of each training datum, so that we can identify the noisy data points with negative contributions and then correct their labels.", "Our experiments show that HERALD achieves 86% accuracy in user disengagement detection in two dialog corpora.", "Our proposed framework HERALD is 
conceptually simple and suitable for a wide range of application scenarios: First, since our model could detect user engagement in real-time (i.e., after each user utterance), it could be plugged into existing dialog systems as a real-time user experience monitoring module.", "In this way, dialog systems could detect and react to users' disengagement in both open-domain dialogs (Yu et al., 2016) and task-oriented dialogs (Yu et al., 2017).", "During training, our model could also be used as real-time feedback to benefit dialog policy learning (Yi et al., 2019).", "Second, HERALD could quantify user engagement and be used as an automatic dialog evaluation metric.", "It could reliably locate dialogs with poor user experience to improve dialog system quality (Ghazarian et al., 2020; Choi et al., 2019).", "Third, user engagement is an essential objective of dialog systems, but few dialog datasets with user engagement ratings are available.", "Our heuristic functions, combined with the proposed workflow, can be readily deployed to annotate new dialog datasets.", "Open-domain dialog system evaluation is a long-lasting challenge.", "It has been shown that existing automatic dialog evaluation metrics correlate poorly with human judgments (Liu et al., 2016; Lowe et al., 2017; Novikova et al., 2017).", "A well-known reason is that these automatic dialog evaluation metrics rely on modeling the distance between the generated response and a limited number of available references.", "The fundamental gap between the open-ended nature of the conversations and the limited references (Gupta et al., 2019) is not addressed in methods that are lexical-level based (Papineni et al., 2002; Lin, 2004; Banerjee and Lavie, 2005), embedding based (Rus and Lintean, 2012; Forgues et al., 2014), perplexity based (Adiwardana et al., 2020), or learning based (Tao et al., 2018; Lowe et al., 2017).", "Mehri and Eskenazi (2020) simulate user responses using DialoGPT and evaluate the probability 
of user complaint.", "Given the limitations above, self-reported user ratings are widely used to evaluate open-domain dialogs.", "However, self-reported ratings suffer from bias and variance among different users (Venkatesh et al., 2018).", "Denoising human ratings is still an open research problem (Liang et al., 2020e; Li et al., 2019).", "User engagement is commonly defined as the user's willingness to continue conversing with the dialog system (Yu et al., 2016, 2017).", "Existing work on measuring user engagement primarily resorts to human ratings (Yi et al., 2019; Hancock et al., 2019) or proxy metrics.", "Example proxy metrics include conversation length, such as the number of dialog turns (Venkatesh et al., 2018; Ram et al., 2018), and conversational breadth, such as topical diversity (Guo et al., 2018).", "Sporadic attempts have been made at detecting user disengagement in dialogs (Yu et al., 2004; Ghazarian et al., 2020; Choi et al., 2019).", "A major bottleneck of these methods is that they require hand-labeling many dialog samples for individual datasets.", "Although Liang et al. 
(2020e) denoise user self-reported ratings with the Shapley algorithm for dialog system evaluation, their method cannot be directly applied to dialogs without user ratings, as in our setting.", "Our work focuses on the problem that user ratings are expensive and difficult to obtain.", "The core insight of our work is to reframe the training data annotation process as a process of denoising the labels created by pre-defined heuristic functions.", "To the best of our knowledge, we are the first to combine automatic data labeling with the Shapley algorithm to perform dialog evaluation.", "Our method could potentially generalize to other classification tasks if different weak labelers are provided.", "Learning from weak supervision reduces annotation costs by utilizing noisy but cost-efficient labels (Ratner et al., 2020, 2016; Liang et al., 2020e).", "One of the most popular forms of weak supervision is distant supervision, in which the records of an external knowledge base are heuristically aligned with data points to produce noisy labels for relationship extraction tasks (Bunescu and Mooney, 2007; Mintz et al., 2009; Hancock et al., 2018).", "Other applications of weak supervision, to scene graph prediction (Krishna et al., 2019), intent classification (Mallinar et al., 2019), and medical imaging (Varma et al., 2017), have observed similar benefits in annotation efficiency.", "(Figure 1: Schematic of the HERALD two-stage workflow.", "Stage 1: Auto-label training data with heuristic functions.", "We first design heuristic rules for detecting user disengagement by investigating multiple dialog corpora.", "The heuristic rules are implemented as heuristic functions based on regular expressions and dialog acts.", "Then, we use the heuristic functions to label the training set automatically.", "Stage 2: Denoise the weakly-labeled training data with the Shapley algorithm.", "We calculate the Shapley value for each data point and correct the noisy data points with negative Shapley values by flipping their labels.", "Finally, we fine-tune the model on the denoised training data.)", "Unlike the existing work, we leverage weak supervision to improve annotation efficiency for detecting user disengagement in social conversations.", "We define engagement as the degree to which users are willing to continue conversing with the dialog system (Yu et al., 2016, 2017).", "We focus on identifying the dialog turns with disengaged user responses, since they usually indicate a poor conversation experience.", "We formulate user engagement prediction as a binary classification problem: our goal is to learn a parameterized user engagement predictor $M$ that, given a dialog turn (along with its dialog context) $x \in \mathcal{X}$, predicts the turn-level user engagement label $y \in \mathcal{Y} = \{0, 1\}$, where label $y = 1$ means disengaged and $y = 0$ means engaged.", "We start from an unlabeled train set $D_{train} = \{x_i\}_{i=1}^{N_{train}}$ without any labels $y_i$.", "The test set $D_{test} = \{(x_i, y_i)\}_{i=1}^{N_{test}}$ contains the ground-truth labels $y_i$.", "The development set $D_{dev}$ has a similar structure to the test set $D_{test}$, but the development set can be much smaller than a train set (i.e., $N_{dev} \ll N_{train}$), making it economical to obtain.", "Following the general architecture of neural classifiers, we formulate our model as $M = M(\phi, f) = f(\phi(x))$: here, $\phi$ is a BERT (Devlin et al., 2019)-based text encoder that maps each dialog turn $x$ to a feature vector $\phi(x) \in \mathbb{R}^d$.", "$f$ is the final linear layer with softmax activation.", "To ensure that our framework generalizes to various corpora, we investigate multiple open-domain dialog datasets, ranging from ASR-based (Gunrock (Liang et al., 2020a)) to text-based (ConvAI2 (Dinan et al., 2019), Blender (Roller et al., 2020), and Meena (Adiwardana et al., 2020)) dialog systems.", "Gunrock Movie Dataset: The Gunrock Movie dataset consists of dialog data collected from Gunrock, an ASR-based open-domain social chatbot originally designed for the Amazon Alexa Prize 
(Liang et al., 2020a).", "The Gunrock dataset comes from a user study in which in-lab users were recruited to carry on conversations.", "We have consent to use the data, and we also removed any sensitive information in the conversations.", "Two dialog experts (co-authors of this paper) randomly annotated 134 dialogs and split them evenly into a test set and a development set.", "In total, the experts labeled 519 turn-level disengaging user responses and 2,312 engaging user responses.", "They reached a high inter-annotator agreement score (Cohen, 1968) with kappa = 0.78.", "The training set contains 276 unlabeled dialogs, with 5,644 dialog turns.", "In addition, we ensure that the data annotation is independent of the labeling heuristics collection, so there is no data leakage problem.", "A full example dialog can be found in Appendix A.4.", "ConvAI2 Dataset: The ConvAI2 dataset contains text-based dialogs collected from the second Conversational Intelligence (ConvAI) Challenge (Dinan et al., 2019).", "(Table 1: Labeling heuristics, their coverage (%) on Gunrock and ConvAI2, and example disengaged user responses, organized by heuristics group and disengaged intent; e.g., group (1) Complain system responses, intent Complain system repetition, coverage 1.93 / 1.95, example: You already asked me that.)", "We select dialogs from the eight main participating chatbots (Bot 1, 2, 3, 4, 6, 9, 11) and exclude dialogs that are one-sided or shorter than three turns.", "The dialog experts annotated 207 dialogs in total.", "The dialogs are evenly distributed over all eight bots to ensure system diversity, and are randomly sampled within each bot.", "The annotated data consist of 209 disengaging turns and 1,684 non-disengaging turns.", "They reached a high inter-annotator agreement score (Cohen, 1968) with kappa = 0.76.", "We split the annotated dialogs evenly into a test set and a development set.", "The training set contains 2,226 dialogs, with 18,306 dialog turns.", "Google Meena Dataset: Meena (Adiwardana et al., 2020) is the largest end-to-end neural chatbot so 
far, trained on 867M public-domain social media conversations.", "We study the 93 example Human-Meena conversations released by Google.", "Facebook Blender Dataset: The Blender bot (Roller et al., 2020) is an open-domain chatbot with several conversational skills: providing engaging talking points and listening to its partners, and displaying knowledge, empathy, and personality appropriately while maintaining a consistent persona.", "We study the 108 example Human-Blender conversations released by Facebook.", "Our goal is to train a user engagement detector with minimal data annotation effort.", "Traditional supervised learning paradigms require annotating many training samples.", "In addition, they require additional data annotation to extend the model to a new dialog corpus.", "To reduce the annotation work, we propose HERALD, a two-stage pipeline that annotates large-scale training data efficiently and accurately (Figure 1).", "Instead of hand-labeling training data points, we use heuristic functions to label each training datum automatically.", "The heuristic functions are built upon a set of user disengagement heuristic rules.", "Since the training data are automatically labeled, their labels will be noisy.", "We then clean the noisy training data with the Shapley algorithm (Ghorbani and Zou, 2019) to improve the labeling accuracy.", "The Shapley algorithm denoises the training data by identifying data points with wrong labels and flipping their labels.", "Finally, we use the cleaned training data to fine-tune a BERT-based model and obtain the final user disengagement detection model.", "Since labeling large-scale training data is time-consuming, we propose heuristic labeling functions to label the training data automatically.", "The heuristic functions focus on detecting disengagement from user responses, as it directly indicates a poor user experience.", "To build the heuristic functions, we first summarize the heuristic rules shared among users.", "We investigate the 
disengaged dialog turns from the four datasets mentioned above and identify four groups of user disengagement patterns: complain system responses, dislike current topics, terminate or change topics, and end with non-positive responses (Table 1).", "We then discuss the implementation of the heuristic functions.", "Group 1: Complain system responses.", "Complaints are an evident sign of user disengagement.", "We identify six related disengaged intents.", "The first three intents (complain system repetition, complain system ignoring them, and complain system misunderstanding) usually appear when the bot makes errors such as repeating the same content, or ignoring, forgetting, or misunderstanding the user's response.", "In these cases, users express their disengagement by pointing out the bot's error (e.g. You already told me that, You're not listening).", "Another intent, not understanding system, occurs when users cannot understand the system's response (e.g. I don't know what you're talking about.).", "In the last two intents, users reveal negative emotions by cursing the system (e.g. you're dumb) or expressing frustration (e.g. sigh) about the conversation.", "Group 2: Dislike current topics.", "When discussing a given topic, users might show their disengagement by expressing negative opinions or low interest.", "For example, given the bot's response, I write romantic novels under a pen name., users who are not interested in reading might say reading is boring, I don't like to read, or I'm not interested in this.", "We also make sure to handle the corner cases where the user utterance contains negative opinions but should nonetheless be labeled as engaged.", "For instance, to respond to the bot's question, do you want to not work?, a user might say, Yes. my job is boring. 
I have to work with mail.", "Though the user mentions a negative feeling (boring), the user agrees with the bot and shares further information.", "Group 3: Terminate or change topics. This group considers the cases where users express disengagement with the current topic in a more straightforward fashion.", "For example, if users are not interested in the current topic, instead of just expressing their dislike of it, they may request to switch topics with Let's talk about something else.", "In some cases, users might show strong disengagement by requesting to end the conversation altogether.", "Group 4: End with non-positive responses. A more subtle but common clue of disengagement is when users end their response with non-positive content.", "For example, non-positive responses like I don't know, No, Yeah, uh, or Probably imply that users do not have much to say about the current topic.", "To keep the precision of our heuristics high, we carefully consider the counterexamples.", "One case is that the user follows up with more responses, such as a question (e.g., Bot: Have you seen any movies lately?, User: No. Have you?) or an opinion (e.g. Bot: What's your favorite animation movie?, User: I don't know, but it might actually be frozen two. My sister loves it.) 
in the same dialog turn.", "These turns should not be labeled as disengaged, since the user is still interested in sharing more content or asking follow-up questions.", "Therefore, we take a conservative approach: we label the dialog turn as disengaged only if no more responses follow the non-positive response.", "Next, we discuss how to use the heuristic functions to auto-label disengaged user utterances.", "First, we split user responses into segments, since user responses may consist of multiple units with different semantic meanings.", "As the segmentation tool, we use the NLTK Sentence Tokenizer for text-based systems and a segmentation model (Chen et al., 2018) for ASR (Automatic Speech Recognition)-based systems.", "We then apply the heuristic functions to each segment to detect disengaged intents.", "For heuristic groups 1 to 3, if any segment contains a disengaged intent, the user response is auto-labeled as disengaged.", "For heuristic group 4 (End with non-positive responses), we assign disengaged labels only if the disengaged intents are detected in the last segment.", "We detect disengaged intents with Regexes.", "The benefit of using Regexes is that they have minimal dependencies and are easy to modify.", "We design Regexes for each intent.", "Following common Regex complexity metrics (Luo et al., 2018), our Regexes for each intent contain 43.9 Regex groups and 87.7 or-clauses on average.", "Our framework also supports incorporating additional resources to improve the intent detection accuracy for automatic training data labeling.", "For example, we can enhance the recall of the Regex intent detection by incorporating existing deep learning-based NLU (Natural Language Understanding) models.", "Specifically, we re-purpose an open-sourced dialog act classification model (Yu and Yu, 2021) to enhance disengagement intent detection: we select 6 of the 23 supported dialog act labels that are associated with disengaged intents, and map each selected dialog act 
label to the heuristic groups.", "The dialog act complaint is mapped to the heuristic group complain system repetition; closing is mapped to the disengaged intent request termination; hold to hesitation; other_answers to unsure answer; back-channeling to back-channeling; and neg_answer to negative answer. If a user utterance is detected with a disengaged intent by either the Regexes or the deep learning model, then the utterance is auto-labeled as disengaged. 5.2 Stage 2: Denoise with Shapley Algorithm & Fine-tune. Overview: Next, we denoise the labeled data using the Shapley algorithm (Ghorbani and Zou, 2019). The Shapley algorithm has been studied in cooperative game theory (Dubey, 1975) and economics (Gul, 1989) as a fair distribution method. It computes a Shapley value for each training datum, which quantifies the contribution of that datum to the prediction and performance of a deep network. Low Shapley value data capture outliers and corruptions. Therefore, we can identify and denoise the incorrectly labeled data by computing their Shapley values, and then fine-tune the model on the cleaned training set. Shapley Algorithm: The Shapley algorithm comes originally from cooperative game theory (Dubey, 1975). Consider a cooperative game with $n$ players $D = \{1, ..., n\}$ and a utility function $v: 2^{[n]} \to \mathbb{R}$ which assigns a reward to each of the $2^n$ subsets of players: $v(S)$ is the reward if the players in subset $S \subseteq D$ cooperate. The Shapley value defines a unique scheme to distribute the total gains generated by the coalition of all players, $v(D)$, with a set of appealing mathematical properties. In our setting, we can consider $D_{train} = \{(x_i, y_i)\}_{i=1}^{N_{train}}$ as $N_{train}$ players. We define the utility function $v(S)$ as the performance on the development set $D_{dev}$. 
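The cooperative-game setup just described (players D, a utility v over coalitions, and a fair division of the total gain v(D)) can be illustrated by computing Shapley values exactly for a tiny game via direct subset enumeration. This is a toy sketch with a hypothetical utility function, not HERALD's development-set utility:

```python
from itertools import combinations
from math import comb

def shapley_values(players, v):
    # Exact Shapley values by enumerating, for each player i, every
    # subset S of the remaining players:
    #   s_i = (1/N) * sum_S [v(S + {i}) - v(S)] / C(N-1, |S|)
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        s_i = 0.0
        for size in range(n):
            for S in combinations(others, size):
                s_i += (v(set(S) | {i}) - v(set(S))) / comb(n - 1, size)
        values[i] = s_i / n
    return values

# Toy utility: the coalition earns 1 only if players 1 and 2 cooperate.
v = lambda S: 1.0 if {1, 2} <= S else 0.0
print(shapley_values([1, 2, 3], v))  # players 1 and 2 split the gain; 3 gets 0
```

The exponential cost of this enumeration is exactly why a closed-form shortcut is needed for training data valuation.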
The Shapley value for player $i$ is defined as the average marginal contribution of $\{(x_i, y_i)\}$ to all possible subsets formed by the other players (Jia et al., 2019a,b): $s_i = \frac{1}{N} \sum_{S \subseteq D_{train} \setminus \{x_i\}} \frac{1}{\binom{N-1}{|S|}} \left[ v(S \cup \{x_i\}) - v(S) \right]$. As this definition suggests, computing Shapley values requires an exponentially large number of computations: enumerating the $O(2^{N_{train}})$ possible subsets and training the model $M$ on each subset is intractable. Inspired by (Jia et al., 2019a,b), HERALD tackles this issue by reducing the deep model $M$ to a $K$-nearest-neighbors ($K$NN) model and then applying the closed-form solution of the Shapley value for $K$NN: we reduce our BERT-based classification model $M = M(\phi, f) = f(\phi(x))$ to a $K$NN by first fine-tuning $M$ on the auto-labeled training samples. We then use the feature extractor $\phi$ to map each training datum to the feature space, $\{\phi(x_i)\}_{i=1}^{N_{train}}$. We construct a $K$NN classifier in the feature space to compute the closed-form Shapley value. Next, we discuss the closed-form solution of the Shapley value. We first consider a special case where the development set $D_{dev}$ contains only one datum, $D_{dev} = \{(x_{dev}, y_{dev})\}$. Given any nonempty subset $S \subseteq D_{train}$, we use the $K$NN classifier to classify $x_{dev}$. To do this, we sort the data points in the training set $\{x_i\}_{i=1}^{N_{train}}$ by their euclidean distance in the feature space $\phi(x)$ to the development datum $x_{dev}$, yielding $(x_{\alpha_1}, x_{\alpha_2}, ..., x_{\alpha_{|S|}})$ with $x_{\alpha_1}, ..., x_{\alpha_K}$ as the top-$K$ most similar data points to $x_{dev}$. The $K$NN classifier outputs the probability of $x_{dev}$ taking the label $y_{dev}$ as $P[x_{dev} \to y_{dev}] = \frac{1}{K} \sum_{k=1}^{K} \mathbb{1}[y_{\alpha_k} = y_{dev}]$, where $\alpha_k$ is the index of the $k$-th nearest neighbor. We define the utility function as the likelihood of the correct label: $v(S) = \frac{1}{K} \sum_{k=1}^{\min\{K, |S|\}} \mathbb{1}[y_{\alpha_k(S)} = y_{dev}]$ (1). Jia et al. 
(2019a,b) prove that the Shapley value of each training point, $s_i$, can be calculated recursively in $O(N \log N)$ time as follows: $s_{\alpha_N} = \frac{\mathbb{1}[y_{\alpha_N} = y_{dev}]}{N}$; $s_{\alpha_i} = s_{\alpha_{i+1}} + \frac{\min\{K, i\}}{iK} \left( \mathbb{1}[y_{\alpha_i} = y_{dev}] - \mathbb{1}[y_{\alpha_{i+1}} = y_{dev}] \right)$. The above result for a single point in $D_{dev}$ readily extends to the multiple-point case, in which the utility function is defined by $v(S) = \frac{1}{N_{dev}} \sum_{j=1}^{N_{dev}} \frac{1}{K} \sum_{k=1}^{\min\{K, |S|\}} \mathbb{1}[y_{\alpha_k^{(j)}(S)} = y_{dev,j}]$, where $\alpha_k^{(j)}(S)$ is the index of the $k$-th nearest neighbor in $S$ to $x_{dev,j}$. Jia et al. (2019a,b) also prove that the Shapley value in this case is the average of the Shapley values for the single development points. Denoising Procedure: Our denoising procedure works as follows. (1) We first fine-tune our BERT-based classification model $M = M(\phi, f) = f(\phi(x))$ on the auto-labeled training samples. [Table 2: Evaluation results comparison among variants of HERALD; bACC / F2 score, in percent, on Gunrock Movie and ConvAI2; * indicates that the model is statistically significantly better than the baseline models. (1) Heuristics: 78.32 / 65.09 and 76.58 / 58.16; (2) Heuristics (regex only): 62.81 / 35.46 and 72.04 / 49.90; (3) Heuristics (NLU only): 72.68 / 56.32 and 63.62 / 32.86; (4) Heuristics w/o Group 1: 78.21 / 64.88 and 71.20 / 48.44; (5) Heuristics w/o Group 2: 77.96 / 64.49 and 75.45 / 56.22; (6) Heuristics w/o Group 3: 71.52 / 55.36 and 71.96 / 49.80; (7) Heuristics w/o Group 4: 58.34 / 23.97 and 68.32 / 42.68; (8) BERT(dev): 73.98 / 60.74 and 74.97 / 55.40; (9) BERT(Auto): 80.55 / 71.77 and 78.76 / 63.13; (10) BERT(Auto + dev): 80.73 / 72.16 and 80.46 / 64.54; (11) HERALD: 86.17* / 80.01* and 86.22* / 70.49*.] This step injects the knowledge in the labeling heuristics into the model $M$. (2) We then map each auto-labeled training datum to the feature space, $\{\phi(x_i)\}_{i=1}^{N_{train}}$, since we want to apply the closed-form $K$NN formula of the Shapley value in the feature space. 
(3) Next, for a binary classification problem, we create two copies of each training datum, one per label in {0, 1}. This generates a large training set $D_{\text{large}}$ with $2N_{\text{train}}$ data points; note that the original training set $D_{\text{train}}$ is a subset of $D_{\text{large}}$, since $D_{\text{large}}$ enumerates all $C$ possible labels for each training datum (here $C = 2$). (4) We then calculate the Shapley value for the $2N_{\text{train}}$ data points in $D_{\text{large}}$ using the closed-form KNN formula. (5) We remove the data with negative Shapley values from $D_{\text{large}}$ to get a cleaned training set $D_{\text{clean}}$. The duplicate-and-remove procedure flips the labels of the noisy data points with low Shapley values. (6) Finally, we fine-tune the classification model $M$ on $D_{\text{clean}}$ to get the final user disengagement detection model. To sum up, the Shapley value quantifies the contribution of each training datum. Low-Shapley-value data capture outliers and corruptions that are not consistent with the distribution of the other data points. We identify and correct these outliers and corruptions to provide a clean training set. 6 Experiments Model Setup We use $K = 10$ for the KNN classifier. We use BERT (Devlin et al., 2019) as the text encoder of our classification model $M = M(\phi, f) = f(\phi(x))$. Additional implementation details are included in the Appendix. Model Comparisons and Ablations We compare HERALD to several of its ablations (Table 2) and evaluate performance on the test set. We report balanced accuracy (bACC) and the $F_\beta$ score with $\beta = 2$ (Baeza-Yates et al., 1999). (1) Heuristics uses the labeling heuristic function with both Regex and dialog acts to predict on the test set. (2) Heuristics (Regex only) uses the labeling heuristic function only with Regex to predict on the test set. (3) Heuristics (NLU only) uses the labeling heuristic function only with NLU. (4-7) show ablations of the heuristics-function prediction baseline, each excluding one heuristic group. (8) BERT(dev) fine-tunes BERT on the expert-annotated development set.
(9) BERT(Auto) fine-tunes BERT on the auto-labeled training samples. (10) BERT(Auto + dev) fine-tunes BERT on both the auto-labeled training samples and the development set. (11) HERALD reports the performance of the final model trained on $D_{\text{clean}}$. Results Our first takeaway is that our labeling heuristics produce decent predictions and generalize to different datasets. As shown in Table 2, the heuristics prediction (Heuristics, 78.32%, 76.58%) is better than the BERT-based model with limited training samples (BERT(dev), 73.98%, 74.97%) on both datasets. This shows that our labeling heuristics generalize to different corpora. Our second takeaway is that learning from a large number of noisy labels works better than learning from a limited number of clean labels. As shown in Table 2, BERT fine-tuned on the auto-labeled training set (BERT(Auto), 80.55%, 78.76%) outperforms BERT fine-tuned on the clean but small development set (BERT(dev), 73.98%, 74.97%) by a large margin. In addition, we also observe that the BERT model fine-tuned on the auto-labeled training data (BERT(Auto), 80.55%, 78.76%) generalizes beyond the labeling heuristics (Heuristics, 78.32%, 76.58%). Our third takeaway is that using the expert-annotated development set for denoising is more efficient than using the development set as additional training data.
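As a concrete illustration of the closed-form KNN-Shapley recursion described above, here is a minimal single-development-point sketch. The function and variable names are ours, not from the paper; this assumes a NumPy feature matrix rather than BERT features.

```python
import numpy as np

def knn_shapley_single(X_train, y_train, x_dev, y_dev, K=10):
    """Closed-form KNN-Shapley for one development point (Jia et al., 2019).

    Returns one Shapley value per training point, in the original order.
    """
    N = len(y_train)
    # Sort training points by Euclidean distance to the dev point (nearest first).
    order = np.argsort(np.linalg.norm(X_train - x_dev, axis=1))
    y = y_train[order]
    s = np.zeros(N)
    # Recursion runs from the farthest point back toward the nearest.
    s[N - 1] = float(y[N - 1] == y_dev) / N
    for i in range(N - 2, -1, -1):  # 0-based i corresponds to paper index i+1
        s[i] = s[i + 1] + (float(y[i] == y_dev) - float(y[i + 1] == y_dev)) \
               * min(K, i + 1) / (K * (i + 1))
    # Undo the sort so values line up with the input order.
    out = np.zeros(N)
    out[order] = s
    return out
```

A useful sanity check: when every training label agrees with the dev label, each point gets value 1/N, and the values always sum to the utility of the full training set.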
After fine-tuning BERT on the weakly labeled training data (BERT(Auto), 80.55%, 78.76%), an additional fine-tuning step using the development set only slightly improves the model's performance (BERT(Auto + dev), 80.73%, 80.46%).", "In contrast, using the development set for the Shapley denoising algorithm gives a significant performance gain (HERALD, 86.17%, 86.22%).", "Annotation Cost The cost of annotating the DEV set is small for the Shapley algorithm.", "For the Gunrock Movie dataset, we used 67 annotated dialogs as the DEV set.", "For ConvAI2, we used 52 annotated dialogs as the DEV set.", "The annotation takes less than 1 hour in both cases, which is negligible compared to the cost of annotating all training data.", "Heuristics Group Analysis We perform ablation studies to analyze the importance of each of the four heuristics groups in Table 1.", "As shown in Table 2, excluding heuristics group 4 leads to the most significant performance drop in both datasets (Heuristics w/o Group 4, 58.34%, 68.32%), indicating that end with non-positive response is the most prevalent form of user disengagement.", "In addition, each heuristics group has different importance in different datasets.", "For example, dropping heuristics group 1 (complain system responses) only leads to a marginal performance drop on the Gunrock Movie dataset but incurs a significant performance drop on the ConvAI2 dataset.", "We also notice that heuristics group 4 (end with non-positive responses) plays a more critical role in the Gunrock Movie dataset than in the ConvAI2 dataset.", "This might be mainly due to the difference between ASR-based (Gunrock Movie) and text-based (ConvAI2) systems.", "When asked an open-ended question in ASR-based systems, since users have less time to think, they are more likely to reply with responses such as 'I'm not sure, let me think.'", "In text-based systems (ConvAI2), by contrast, users have more time to think and formulate their responses.", "Hence, heuristics group 4 
covering these responses fires more often in Gunrock Movie than in ConvAI2.", "Generalizability of Heuristic Functions The results show that our heuristic functions generalize to both ASR-based and text-based systems.", "As indicated in Table 2, our Regexes reach a decent accuracy of 62.81% and 72.04% on the expert-annotated test sets of the Gunrock Movie and ConvAI2 datasets, respectively, and thus can serve as a relatively reliable source for auto-labeling.", "In addition, although the dialog act model (MIDAS) is initially designed for ASR-based systems and thus has better performance on the Gunrock Movie data, it should generalize to other ASR-based systems, as the six selected dialog acts are general and independent of topics.", "Therefore, the combination of dialog acts and Regexes should be sufficient to apply to various corpora.", "Shapley Value Analysis We also present an analysis to show how Shapley denoising works, as shown in Figure 2.", "We examine the Shapley value for each training datum in Stage 2.", "We first show two example dialog turns from the Gunrock Movie dataset with negative Shapley values in Figure 3 and Figure 4.", "In Figure 3, the dialog turn is incorrectly auto-labeled as non-disengaged.", "This is because an ASR error occurs: the user utterance I don't wanna talk about movies anymore is transcribed as I wanna talk about movies anymore.", "In Figure 4, the user says, Oh I disagree. 
I think the movie was fantastic!", "The labeling heuristics see the negative word disagree and auto-label this turn as disengaged.", "Both data points have negative Shapley values and are corrected in Stage 3.", "Next, we present a quantitative analysis of the Shapley value.", "Based on the Shapley values, we remove data points one by one, starting from the least valuable (low Shapley values) to the most valuable (high Shapley values).", "Each time, after removing a data point, we build new KNN classifier models on the remaining dialog turns and labels and evaluate them on the test set with expert annotations.", "As shown in Figure 2, removing training data with low Shapley values increases performance up to a certain point before convergence, for all choices of K.", "We observe a similar trend when re-training a model on the remaining data.", "In contrast, removing data randomly or removing data starting from high Shapley values decreases performance on the test set (Random and Retain-Hurtful in Figure 2).", "This shows that low-Shapley-value data effectively capture outliers and corruptions, which further justifies our design choice of denoising with Shapley values.", "Alternative Data Valuation Methods We also explored alternatives to Data Shapley such as influence functions (Koh and Liang, 2017) and TracIn (Pruthi et al., 2020): on Gunrock Movie, influence functions and TracIn achieve 82.96% and 83.15% accuracy, respectively.", "Both methods outperform BERT(Auto + dev) (80.73%) significantly but perform slightly worse than HERALD (86.17%).", "Overall, the results show that our data annotation workflow also works well with other data valuation methods.", "Error Analysis Figure 5 shows an error example for HERALD, where both the labeling heuristics and the Shapley algorithm fail to identify the turn as low engagement.", "In this example, the chatbot system asks whether the user is interested in movies, but the user does not directly answer the 
question.", "Instead, the user says I have a question for you social bot, indicating that the user does not like the current topic and wants to talk about something else.", "HERALD fails to identify this dialog turn as low engagement, partly because the Regexes in the request topic change heuristic rule do not cover this example.", "One way to fix this error is to upgrade the Regexes.", "A more general solution is to consider the chatbot system's expectations of user responses conditioned on the chatbot's question.", "If the chatbot receives an unexpected user response, then the user is probably not interested in discussing the current topic.", "The ultimate chatbot evaluation metric should be user-centric, as chatbots exist to provide humans with enjoyable experiences.", "Previously, detecting user disengagement typically required annotating many dialog samples for each individual dataset.", "We propose a two-stage pipeline, HERALD, to automatically label and denoise training data and, at the same time, build a user disengagement detector.", "Our experiments show that HERALD significantly reduces the annotation cost for a new corpus.", "HERALD's disengagement detection results correlate highly with expert judgments on user disengagement in both datasets (86.17% bACC in Gunrock Movie, 86.22% in ConvAI2).", "We thank ACL 2021 chairs and reviewers for their review efforts and constructive feedback.", "We would also like to thank Yu Li and Minh Nguyen for revising the Regexes." ]
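The Stage-2 duplicate-and-remove denoising procedure described earlier can be sketched as follows. This is a minimal sketch under our own assumptions: `shapley_fn` stands in for the closed-form KNN-Shapley computation against the expert-annotated dev set, and all names are ours, not from the paper.

```python
import numpy as np

def duplicate_and_remove(feats, shapley_fn, n_classes=2):
    """Stage-2 denoising sketch: enumerate every candidate label for each
    training point, score all copies with a Shapley-value function, and keep
    only the copies with non-negative value.

    shapley_fn(feats, labels) is assumed to return one value per
    (feature, label) pair, e.g. the closed-form KNN-Shapley.
    """
    # (3) Duplicate each datum once per possible label -> D_large.
    big_feats = np.repeat(feats, n_classes, axis=0)
    big_labels = np.tile(np.arange(n_classes), len(feats))
    # (4) Value every copy.
    values = shapley_fn(big_feats, big_labels)
    # (5) Drop copies with negative value -> D_clean.  A noisy point's
    # original label tends to get a negative value while its flipped copy
    # gets a positive one, so this effectively corrects the label.
    keep = values >= 0
    return big_feats[keep], big_labels[keep]
```

The returned pair would then be used in step (6) to fine-tune the final classifier.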
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "objective", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "objective", "method", "other", "other", "other", "other", "objective", "other", "method", "other", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "other" ]
[ "In traditional approaches to entity linking, linking decisions are based on three sources of information: the similarity of the mention string to an entity's name, the similarity of the context of the document to the entity, and broader information about the knowledge base (KB).", "In some domains, there is little contextual information present in the KB, and thus we rely more heavily on mention string similarity.", "We consider one example of this, concept linking, which seeks to link mentions of medical concepts to a medical concept ontology.", "We propose an approach to concept linking that leverages recent work in contextualized neural models, such as ELMo (Peters et al., 2018), which create a token representation that integrates the surrounding context of the mention and concept name.", "We find that a neural ranking approach paired with contextualized embeddings provides gains over a competitive baseline (Leaman et al., 2013).", "Additionally, we find that a pre-training step using synonyms from the ontology offers a useful initialization for the ranker.", "Medical concept linking produces structured topical content from clinical free text (Aronson and Lang, 2010).", "Healthcare providers often refer to medical concepts in clinical text notes that are absent from associated health record metadata despite their importance to understanding a patient's medical status.", "For example, in The patient reports a history of seizure disorder ... , the phrase seizure disorder refers to the concept epilepsy contained within the Unified Medical Language System (UMLS) ontology (Bodenreider, 2004).", "However, this may be absent from metadata as it is not part of the current diagnosis.", "(Contribution performed during an internship at Johns Hopkins University.)", "Concept mentions can use non-standard terms (e.g. 
epilepsy ), thus concept linking requires non-lexical methods.", "Additionally, some terms ( cancer ) are ambiguous and could refer to multiple concepts ( breast cancer , colon cancer , etc.).", "The related task of Entity Linking, which links named entities (people, places, and organizations) to a knowledge base, has been explored in non-medical domains (Dredze et al., 2010; Durrett and Klein, 2014; Gupta et al., 2017).", "Entity linking systems consider three sources of information:", "1) similarity between mention strings and names for the KB entity;", "2) comparison of the document context to information about the KB entity (e.g. an entity description);", "3) information contained in the KB, such as entity popularity or inter-entity relations.", "In contrast to the dense KBs used in entity linking, concept linking uses sparse ontologies, which contain a unique identifier (CUI), a title, and links to synonyms and related concepts, but rarely long-form text.", "For example, while the concept epilepsy has many synonyms in UMLS, it has no definition or other long description.", "Furthermore, UMLS concept names are more formal than clinical notes, making mention matching challenging.", "Therefore, we need an approach that can use local context from the mention (the surrounding sentence) and whatever information may be present in the ontology to build a contextualized, non-lexical representation for matching.", "Additionally, entity linking systems are often able to leverage greater amounts of annotated data, which are not available in the clinical space.", "Text that does not have restrictive privacy protections can be annotated more easily through crowdsourcing, or other sources of non-gold-standard data can be collected (e.g., Wikipedia cross-links).", "As the annotation of clinical notes is expensive due to the knowledge required of annotators and the protected status of clinical records, any effort in clinical concept linking must focus on leveraging a small amount of annotations, and using 
larger amounts of related or unannotated data when possible.", "We propose learning contextualized representations that leverage both free text and information from knowledge bases.", "We train a contextualized language model (Peters et al., 2018) on unannotated clinical text, leveraging sentence context to construct a mention representation.", "We explore several methods of building representations of the mention span and concept, including pooling and attention, and pre-train our linker with additional data from the ontology to augment the small amount of annotated data available.", "The resulting ranker outperforms a non-contextualized version of our model and beats the previous best-performing system (Leaman et al., 2013) in most metrics.", "Concept linking (alternatively: named entity recognition, entity normalization) has a long history (Pradhan et al., 2013; Luo et al., 2019) in the clinical NLP community, with common approaches including generating lexical variations to increase matches (MetaMap) (Aronson, 2001; Aronson and Lang, 2010), dictionary matching algorithms (Kipper-Schuler et al., 2008; Savova et al., 2010), rule-based systems (D'Souza and Ng, 2015), and mention/ontology context overlap (Aggarwal and Barker, 2015).", "Learned ensembles can also be effective (Rajani et al., 2017).", "Concept linking has also been applied to biomedical literature (Dogan et al., 2014; Zheng et al., 2015; Tsai and Roth, 2016; Zhao et al., 2019) and is most similar to the task of entity linking (Dredze et al., 2010; Durrett and Klein, 2014; Gupta et al., 2017; Mueller and Durrett, 2018).", "Similar to our approach, Choi et al. 
(2016) learn representations of concepts in UMLS.", "While we cannot make a direct comparison, since they do not cover all of our KB (SNOMED-CT), initial experiments with their embeddings performed worse than our method.", "While some jointly consider the tasks of mention finding and linking (Durrett and Klein, 2014), we follow the more common convention of separating the two and assuming gold mention spans (Leaman et al., 2013; D'Souza and Ng, 2015).", "Formally, we are given a mention m in a document and must select the best CUI (concept) c from an ontology/KB, or CUI-less if no relevant concept exists.", "Figure 1: Architecture for our neural ranker.", "The input consists of the gold-standard mention string representation m (purple), the gold-standard concept representation c+ (blue), and n randomly selected negative concept representation c- pairings (red).", "The ELMo hidden states are noted as h, and the hidden states of our feed-forward neural network are noted as d.", "To build our ELMo representations for m, c+, and c-, we select the representation from the lowest layer of the model.", "A common approach uses training data to augment a dictionary (D'Souza and Ng, 2015; Luo et al., 2019).", "While this approach does quite well, it generalizes poorly to unseen mentions or new domains.", "1 Therefore, our work focuses on a learned system, which we compare to similar baselines.", "While related to concept linking, entity linking requires a different solution due to several factors.", "Many entity linking systems (Upadhyay et al., 2018; Kolitsas et al., 2018) leverage context from a large document source, such as Wikipedia, to make linking decisions, while a similar source is not present in UMLS.", "Further, earlier work (Zheng et al., 2014) showed that standard entity linking systems don't work well on the related domain of biomedical journal literature, which suggests that separate solutions are required.", "Our concept linking system is based on a pairwise neural network ranker (Section 3.1) using 
contextualized representations (Section 3.2) for both the mention and concept.", "We leverage the context present in clinical notes for our representations and the synonyms present within the UMLS to train our linker.", "For a given mention string m and document, the system ranks all possible candidates c in the KB.", "Figure 1 shows our ranking system, based on the Rank model of Dehghani et al. (2017).", "We learn the parameters $\theta$ of a scoring function $S(m, c; \theta)$, which consists of a feed-forward neural network with hidden layers d that takes input representations of m and c in addition to pairwise features.", "We train using a pairwise loss, in which we have two point-wise networks with shared parameters: one takes the mention m and correct concept c+ as input, the other takes the mention m and an incorrect concept c-, and the parameters are updated to minimize the loss function.", "Using a pairwise model allows us to learn a scoring function that does not rely on annotated scores.", "Adapting the approach of Dehghani et al. (2017), we use an adaptive hinge loss, which considers n negative concepts and selects the highest-scoring one as the negative sample.", "For mention m, correct concept c+, and n negative samples c-_0 to c-_n, our loss function is: $L(\theta) = \max\{0, \epsilon - (S(\{m, c^+\}; \theta) - \max\{S(\{m, c^-_0\}; \theta), \ldots, S(\{m, c^-_n\}; \theta)\})\}$ (1)", "3.2 Contextualized Representations Recent work (Devlin et al., 2019) proposed representations of words that integrate the context of the surrounding sentence.", "We use ELMo (Peters et al., 2018), a bi-directional recurrent neural network (RNN), to build representations for each token in a sentence, trained using language model objectives.", "For each direction, the model first builds a context-independent token representation using a convolutional neural network over the characters.", "Then the representation is passed through L = 2 layers of long short-term memory (LSTM) RNN.", "The final layer is used to predict the next token.", "These models are robust to out-of-vocabulary types, so they provide broad coverage of the diverse types present in clinical text.", "We train ELMo on clinical notes and create mention representations m by running the entire sentence through the model and selecting the resulting word representations for the mention (the lowest token representation) from the LSTM.2", "The concept representations c are created in the same manner as m, except that only the name of the concept is used, as there is often no available context.3", "For multi-word mentions and concept names, we explore two methods of creating a single embedding.", "First, we use max-pooling over the set of token embeddings (reported as Max in Table 1).", "Second, we run self-attention (Vaswani et al., 2017)4 over the set of token embeddings, with a single head to attend over the tokens (noted as Attention).", "Pre-training a model using an alternative data source has been frequently used in the field of machine learning (Erhan et al., 2010; Sharif Razavian et al., 2014), and was presented (Tsujimura et al., 2019) at a recent shared task (Luo et al., 2019).", "A model is pre-trained on a large amount of related data and then trained on the target task, which allows the model to see more examples and achieve a better initialization for training on the final task.", "As creation is expensive, most annotated clinical datasets are small, as is the case for our task.", "Therefore, we look to alternative data sources for pre-training our model.", "For a given concept (e.g. epilepsy ), the UMLS includes synonyms (e.g. seizure disorder , epileptic fits ), which can be used to pre-train our linker.", "Unlike in the annotated clinical data, there is no surrounding context, and terms in the UMLS are more likely to be formal.", "However, training on synonyms allows a greater variety of terms to be seen by our model than otherwise possible.", "Therefore, using all synonyms taken from the annotated subset of the UMLS, we pre-train our linker before training on the annotated clinical notes.", "We follow the previous training procedure, replacing the mention representation m with the representation of the synonym string only (without a surrounding sentence), thus training the linker to assign a higher score to the synonym paired with the corresponding concept representation c+ against negatively sampled concepts c-.", "We use this pre-training initialization with the Attention model discussed above.", "1 An extension of this approach could use unsupervised methods to discover synonyms in a new dataset (Schumacher and Dredze, 2019).", "2 While there are now a multitude of deep transformer-based LMs (Devlin et al., 2019), the principle of contextualized representations is the same.", "Additionally, others have found that ELMo trained on MIMIC does better than a similarly trained BERT model (Schumacher and Dredze, 2019).", "3 We ran experiments that padded the names with synonyms or other forms of available text within the knowledge base.", "However, we did not see consistent improvements.", "4 We use the implementation provided by https://github.com/kaushalshetty/Structured-Self-Attention .", "We train and evaluate our system on the ShARe/CLEF eHealth Evaluation Lab 2013 Task 1b dataset (Pradhan et al., 2013), which consists of span-level annotations 
for disorder concepts taken from the MIMIC 2.5 clinical note dataset (Saeed et al., 2011).", "The publicly available training set includes 200 clinical notes, which we split into a 100-note training set and development and test sets of 50 documents each (the shared task test set was not available).", "The data is annotated against SNOMED-CT (Spackman et al., 1997), one of the ontologies within UMLS.", "We choose to focus on this smaller dataset as leveraging small amounts of annotated data is critical to building useful tools in the clinical domain.", "We only included mention annotations for concepts that occur in the selected subset of the ontology noted in the annotation guidelines for the respective datasets or are marked as CUI-less.5", "In Table 1, we report results both on only those mentions with links to the ontology ( CUI ) and on mentions with links to the ontology plus CUI-less mentions ( All ).", "5 We included all concepts in the SNOMED-CT Disorder semantic group or in the Finding , Body Substance , and Mental Process semantic types.", "We include all preferred entries, with the default settings of UMLS 2011AA, in the SNOMED-CT Disorder semantic group (116,436 unique concepts), but also include the first non-preferred entries that do not have a preferred entry (8,926 unique concepts), and annotations marked CUI-less.", "Mentions that do not have a corresponding concept in the ontology (e.g. calcifications ) were classified as CUI-less (or NIL ) entries by annotators.", "Some annotations consist of concepts outside of the subsets described in the shared task paper, and we exclude those exceptions.", "We train ELMo on 199,987 clinical notes from MIMIC III (Johnson et al., 2016) as the source of our clinical text, pre-processing the data using the NLTK toolkit (Rehurek and Sojka, 2010).", "For the Pre-training model, we augment the clinical text training data with synonyms, definitions, and names of related concepts from the selected subset of UMLS.", "Altogether, this resulted in 645,863 additional sentences of training data.", "We compare our system to DNorm (Leaman et al., 2013), the best-performing system in the ShARe/CLEF 2013 shared task.6", "Unlike many other concept linking systems, DNorm scores each mention against all concepts and does not use a triage system, allowing a fair comparison to our system.", "DNorm builds term frequency-inverse document frequency (TF-IDF) representations of both the mention and concept and learns a weighted similarity to rank concepts for each mention.", "It is unable to return concept candidates for mentions that are out-of-vocabulary, as it uses a word-level measure.", "The authors add a specific CUI-less representation, which is made of entries occurring more than four times in training.", "We report results on our recreated test set, as the evaluation set provided for the shared task was not available to us.", "We also compare with using Word2vec (Mikolov et al., 2013) representations instead of ELMo representations in the same linking architecture to test the effect of contextualized embeddings.", "We trained the Word2vec model on the MIMIC dataset.", "We created single embeddings (d = 600) for mentions and concepts by max-pooling over all embeddings for words in the corresponding text, ignoring all out-of-vocabulary 
words.", "We explored several parameter configurations for our model suggested by Dehghani et al. (2017), reporting the best-performing models on development.", "These include hidden layers of size [256, 512, 1024] and numbers of layers in [1, 2, 3], with a Tanh activation function for the final layer and ReLU (Glorot et al., 2011) for all others.", "We optimize using the ADAM optimizer (Kingma and Ba, 2014) and a dropout rate of 0.2.", "Parameter values and development metrics are available in Appendix A.", "For the ELMo models, we trained for 10 epochs using the default configuration.", "6 As of this writing, there are no papers describing the 2019 N2C2 methods.", "Additionally, since we are interested in non-training-data-based dictionaries, a direct comparison to shared task submissions wasn't possible.", "For CUI-less mentions, we select a threshold score based on the development set, equal to the mean score of all CUI-less entries.", "If an entry does not have a scored concept above that threshold, we consider it CUI-less, adding CUI-less at that position in the list for MRR.", "We use the PyTorch framework and code from the Spotlight library (Kula, 2017).", "Table 1 reports accuracy and mean reciprocal rank (MRR) for all models.", "We compare our models ( Word2Vec , Max , Attention , and Att. + Pre. ) to DNorm for all mentions (All) and only those with links to concepts in the KB (CUI).", "While DNorm has higher accuracy on entries with CUIs, our models have higher MRR on entities with CUIs ( Att. + Pre. ) and perform best on all entities in both accuracy and MRR ( Attention and Att. + Pre. 
).", "Our neural ranking models with attention outperform all other models, except in CUI-only accuracy.", "In the case of entities with CUIs, we find that pre-training the model provides a gain in ranking accuracy (MRR).", "In the case of all entities, we find that the attention models provide a sizable gain in both accuracy and MRR.", "We conducted an error analysis of the best-performing MRR model ( Att. + Pre. ) on the development data, looking at errors where the gold-standard concept was not highly ranked (assigned a rank of 10 or above).", "Of those errors (n = 110), we find that 26% are mentions that contain only acronyms (e.g. LBP for lower back pain ), and 14% are mentions containing some other abbreviation (a shortened word, e.g. post nasal drip for Posterior rhinorrhoea , or a partial acronym, Seizure d/o for Epilepsy ).", "Comparing to similar errors from the Attention model (n = 161), we find that the number of acronym errors is nearly the same (24) as in the better-performing model (26).", "In contrast, the number of non-abbreviation errors drops significantly.", "This suggests that pre-training provides a useful signal for mentions that are variations appearing in the ontology.", "However, it does not help with acronyms or other abbreviations that are less likely to appear in the ontology or are shorter and more ambiguous (e.g., 'R' for Rhonchus).", "Even when the gold concept was ranked above 10, many incorrect concept predictions were somewhat related to the gold concept (e.g., for the mention atherosclerotic plaque with gold concept Atherosclerotic fibrous plaque , our model predicted the concept Atherosclerosis ).", "We further noticed that in 21% of cases the linker predicted a relevant concept (e.g., mention thrombosed and Thrombosis ) that is not counted as correct due to annotation decisions.", "This could be due to multiple possible concepts in the ontology or the presence of closely related concepts.", "Deploying our system in a large-volume clinical 
setting would likely require several alterations.", "The main computational barrier to labeling a large amount of data, the speed of prediction, can be addressed by using an accurate candidate selection system to prune the number of concepts considered.", "Considering a smaller subset (e.g., 20) of concepts instead of all would significantly improve the speed.", "Further, if using a consistent portion of the ontology, caching the concept embeddings c as opposed to building them in-model also enhances efficiency.", "Depending on the application, a less accurate but faster linker might be a better choice (e.g. for all clinical notes at a medical institution).", "In contrast, a more complex linker, such as ours, may be a better option for specific subsets of notes that require better accuracy (e.g., the results of specific clinical studies).", "Our results demonstrate the advantages of using contextualized embeddings for ranking tasks, and that using information from the knowledge base for training is an essential direction for learning concept representations for sparse KB domains.", "Future work will consider additional methods for integrating ontology structure into representation learning." ]
[ "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "other", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "result", "result", "result", "method", "result", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We present the first supertagging-based parser for linear context-free rewriting systems (LCFRS).", "It utilizes neural classifiers and outperforms previous LCFRS-based parsers in both accuracy and parsing speed by a wide margin.", "Our results keep up with the best (general) discontinuous parsers; particularly, the scores for discontinuous constituents establish a new state of the art.", "The heart of our approach is an efficient lexicalization procedure which induces a lexical LCFRS from any discontinuous treebank.", "We describe a modification to usual chart-based LCFRS parsing that accounts for supertagging and introduce a procedure that transforms lexical LCFRS derivations into equivalent parse trees of the original treebank.", "Our approach is evaluated on the English Discontinuous Penn Treebank and the German treebanks Negra and Tiger.", "In NLP, constituency parsing is a task that assigns usually tree-shaped syntactic structures to sentences.", "Formalisms such as context-free grammars (CFG) are used in this setting because they are conceptually simple, interpretable, and parsing is tractable (cubic in sentence length).", "Discontinuous constituents span non-contiguous sets of positions in a sentence.", "The resulting phrase structures do not take the shape of a tree anymore, as they contain crossing branches (cf. the left of Fig. 1), and cannot be modeled by CFG.", "As a countermeasure, many treebanks, e.g. the Penn Treebank (PTB; Marcus et al., 1994), denote these phrase structures as trees nevertheless and introduce designated notations for discontinuity, which is then often ignored in parsing.", "However, discontinuity occurs in about 20% of the sentences in the PTB and to an even larger extent in German treebanks such as Negra and Tiger.", "For parsing discontinuous constituents, so-called mildly context-sensitive grammar formalisms have been investigated, e.g.
tree-adjoining grammars (TAG; Joshi et al., 1975) and linear context-free rewriting systems (LCFRS; Vijay-Shanker et al., 1987).", "An LCFRS derivation of a discontinuous phrase is shown in the right of Fig. 1. The increased expressiveness of these formalisms comes at the cost of a higher parsing complexity: given a sentence of length n, parsing is in O(n^6) for TAG and O(n^(3·fo(G))) for a binary LCFRS G.", "The grammar-specific fanout fo(G) indicates that G can parse constituents spanning up to fo(G) non-contiguous sets of positions.", "TAG have the same expressiveness as LCFRS with fanout 2 (Seki et al., 1991), which accounts for 96.", "67% of the sentences in Negra and 96.", "83% of the sentences in Tiger (Maier and Søgaard, 2008).", "Previous publications have established mildly context-sensitive formalisms in the field of statistical constituent parsing, and found methods to tame the high parsing complexity (Evang and Kallmeyer, 2011; Kallmeyer and Maier, 2013; Angelov and Ljunglöf, 2014; van Cranenburgh, 2012).", "One approach for making parsing with mildly context-sensitive grammars tractable is supertagging, which was originally introduced for lexical TAG (Bangalore and Joshi, 1999).", "A TAG is lexical if each rule contains exactly one lexical item, i.e.
word in the parsed language.", "The supertagger is an (often discriminative) classifier that selects for each position of the input sentence a subset of the rules of the TAG; these are the so-called supertags.", "Parsing is then performed with the much smaller grammar of supertags.", "Research on supertagging has also been conducted in the context of combinatory categorial grammars (CCG; Clark, 2002), but not yet for LCFRS.", "The use of recurrent neural networks (RNN) as classifiers in supertagging has improved their accuracy by far (Vaswani et al., 2016; Kasai et al., 2017; Bladier et al., 2018; Kadari et al., 2018).", "Recently, Mörbitz and Ruprecht (2020) introduced a lexicalization procedure for probabilistic LCFRS 1, paving the way to employ supertagging for parsing with this formalism.", "Early experiments showed that the approach is infeasible in realistic settings: the set of rules explodes in a step of the construction where new rules are introduced for pairs of terminals in the grammar.", "To mitigate this problem, we conduct the procedure for single derivations.", "Consequently, we only have to construct rules for pairs of terminals that occur in sibling nodes of a derivation (cf.", "step (4) in Section 4).", "Moreover, we consider unweighted LCFRS, as weights of underlying grammar structures are usually not considered in supertagging-based approaches.", "In this paper, we present the first supertagging-based parser for LCFRS.", "Section 3 extends the usual chart-based parsing approach for LCFRS to account for supertagging with lexical LCFRS.", "Section 4 adapts the lexicalization procedure by Mörbitz and Ruprecht (2020) to efficiently induce a lexical LCFRS from any given treebank.", "We implemented and evaluated the approach.", "Section 5 describes the experimental setups of our evaluation using three discontinuous treebanks (one English and two German).", "Section 6 compares our results to recent LCFRS-based parsers and other state-of-the-art parsers
for discontinuous constituents.", "The implementation of our approach is published on GitHub.", "2 (Footnote 1: Their work is an instance of the lexicalization of (unweighted) multiple context-free tree grammars by Engelfriet et al. (2018).)", "We start by introducing some basic notation that will be used throughout Sections 3 and 4.", "The set of non-negative (resp. positive) integers is denoted by N (resp. N+).", "We abbreviate {1, ..., n} by [n]; for n ∉ N+ (i.e. n = 0), the set [n] is the empty set.", "An alphabet Σ is a finite and non-empty set; the set of (finite) strings over Σ is denoted by Σ*.", "The symbol ε denotes the empty string or sequence.", "Compositions.", "Linear context-free rewriting systems (LCFRS) extend the rule-based string rewriting mechanism of CFG to string tuples; we describe the generation process by compositions.", "Let k ∈ N and s_1, ..., s_k, s ∈ N+; one can think of k as the number of arguments of a function mapping string tuples of the sizes s_1, ..., s_k to a string tuple of size s.", "An (s_1⋯s_k, s)-composition is a tuple (u_1, ..., u_s) where each u_1, ..., u_s is a non-empty string over Σ and variables of the form x_i^j with i ∈ [k] and j ∈ [s_i].", "Each of these variables must occur exactly once in u_1 ⋯ u_s and they are ordered such that x_i^1 occurs before x_(i+1)^1 and x_i^j occurs before x_i^(j+1) for each i ∈ [k−1] and j ∈ [s_i−1].", "The set of all such compositions is denoted by C(s_1⋯s_k, s).", "As usual in the literature, we will only consider binary compositions (where k ≤ 2) in the following.", "Variables of the form x_1^i and x_2^j are abbreviated by x_i and y_j, respectively.", "We associate with each composition (u_1, ..., u_s) ∈ C(s_1⋯s_k, s) a function from k string tuples, where the i-th tuple is of arity s_i, to a string tuple of arity s.", "This function is denoted by ⟦(u_1, ..., u_s)⟧.", "Intuitively, it replaces each variable of the form x_i in u_1, ...
, u_s by the i-th component of the first argument, and y_j by the j-th component of the second argument.", "The identity composition (x_1, ..., x_s) is denoted by id_s.", "Let c ∈ C(s_1⋯s_k, s) be a composition where k ∈ [2], i ∈ [k] such that s_i = 1, and w ∈ Σ*.", "We obtain the partial application of c to w as i-th argument, denoted by ⟦c⟧_i(w), as follows: ⟦c⟧_2(w) ∈ C(s_1, s) is obtained from c by replacing y_1 by w, and ⟦c⟧_1(w) is obtained from c by replacing x_1 by w and each variable y_j by x_j.", "If k = 1, then ⟦c⟧_1(w) ∈ C(ε, s), otherwise ⟦c⟧_1(w) ∈ C(s_2, s).", "LCFRS.", "A (binary) LCFRS is a tuple G = (N, Σ, S, R) where N is a finite set (nonterminals) where each nonterminal A ∈ N has a fanout fo(A) ∈ N+, Σ is an alphabet (terminals), S ⊆ N (initial nonterminals) such that fo(A) = 1 for each A ∈ S, and R is a finite set (rules); each rule in R is of the form A → c(B_1, ..., B_k), where k ∈ {0, 1, 2}, A, B_1, ..., B_k ∈ N, and c ∈ C(fo(B_1)⋯fo(B_k), fo(A)).", "The function ⟦c⟧ maps k string tuples (of sizes fo(B_1), ..., fo(B_k)) to a string tuple of size fo(A).", "We call A the left-hand side (lhs), B_1, ..., B_k the right-hand side (rhs) and c the rule's composition.", "We drop the parentheses around the rhs if k = 0.", "In our examples, whenever the fanout of a nonterminal is greater than 1, the fanout is the subscript of the nonterminal.", "For instance, VP_2 denotes a verbal phrase with fanout 2. The fanout of G is fo(G) = max_{A ∈ N} fo(A).", "Rules of the form A → c, A → c(B), and A → c(B_1, B_2) are called terminating, monic, and branching, respectively.", "A rule is called (uni-/double-)lexical if its composition contains at least one terminal (resp.
exactly one terminal / exactly two terminals).", "The LCFRS G is called (uni-)lexical if each rule is (uni-)lexical.", "A derivation in G (starting with A ∈ N) is a tree over rules d = r(d_1, ..., d_k) such that r is of the form A → c(B_1, ..., B_k) and each d_i is a derivation in G starting with B_i.", "The set of derivations in G is denoted by D_G.", "The string tuple computed by d is defined recursively as w = ⟦c⟧(w_1, ..., w_k) where w_1, ..., w_k are the string tuples computed by d_1, ..., d_k; in the following we also call d a derivation for w.", "Our parsing model consists of two components: a uni-lexical LCFRS and a discriminative sequence", "tagger which we henceforth call supertagger.", "The LCFRS is induced from a treebank by an adaptation of the construction of Mörbitz and Ruprecht (2020); the interested reader may find a detailed description of this procedure in Section 4.", "After the induction, we replace every terminal of the LCFRS by a wildcard symbol, and we refer to the resulting rules as supertags.", "Our parsing pipeline is depicted in Fig. 2.
(1) Given a sentence w, the supertagger predicts for each position of w the k best supertags, where k constitutes a hyperparameter of our approach.", "(2) We combine the predicted supertags into a new LCFRS which we call G_w.", "In doing so, we replace the wildcard of each supertag by the sentence position it was predicted for.", "(3) We employ a usual chart-based parsing algorithm to parse the sequence 1 2 ... |w| of sentence positions with G_w.", "(4) We transform the resulting derivation in G_w into a parse tree of the same form as those in the original treebank.", "As G_w only contains a fraction of all supertags, this approach shifts a huge amount of work from parsing with grammars to predicting the rules.", "Thus its success is mainly determined by the quality of the supertagger.", "Our lexicalization scheme is based on Mörbitz and Ruprecht (2020).", "However, we ignore all weights and perform lexicalization on individual derivations rather than on a grammar induced from the entire treebank.", "More specifically, we directly read off a set of uni-lexical rules from each tree in the treebank; then the union of these rules forms our uni-lexical LCFRS G_lex.", "In contrast to that, Mörbitz and Ruprecht (2020) first induce an LCFRS G from the entire treebank and then lexicalize G.", "Thus G_lex may have a different language than the lexicalization of G.", "We obtain a set of uni-lexical rules from each tree t in the treebank by the following procedure.", "(1) Binarize the tree.", "The symbol | is appended to constituents that result from binarization (this reflects Markovization with a vertical context of 1 and a horizontal context of 0).", "nonterminals of each chain are combined into a new nonterminal.", "(4) Remove every terminating rule that has a parent and insert the terminal from its composition into the parent.", "(5) Propagate terminals from double-lexical terminating rules into non-lexical branching rules.", "All rules in the resulting derivation are
lexical.", "(6) Split all remaining double-lexical terminating rules into two uni-lexical rules.", "All rules in the resulting derivation are uni-lexical.", "The resulting derivation is called d_lex(t).", "(7) Read off the rules of d_lex(t); call them R(t).", "These steps are defined such that in the LCFRS formed by R(t), d_lex(t) is a derivation for the sentence of t.", "Moreover, we are able to reconstruct t from d_lex(t) by reverting steps (6) to (1) (we will give the details later).", "Finally, we obtain the uni-lexical LCFRS G_lex by combining the rules R(t) for each tree t.", "The initial nonterminals of G_lex are all left-hand sides of roots of d_lex(t).", "Let t be a tree in the treebank.", "Steps (1) and (2) and their reversal are standard techniques for trees and LCFRS.", "After applying them to t, we obtain an LCFRS derivation in which each occurring rule is either of the form A → (σ), where σ is a terminal and A is the part-of-speech tag of σ; A → c(B_1) where fo(A) = fo(B_1) and c = id_fo(A); or A → c(B_1, B_2) where c contains no terminals and none of B_1, B_2 is an initial nonterminal.", "Let us denote this derivation by d.", "In the following, we describe steps (3) to (6) of the above procedure in more detail (showing examples in Figs. 3 to 6) and also glimpse at how the individual steps are reverted.", "Step (3).", "We repeatedly replace parts in d of the form (A → c(B))(B → c′) by A+B → c′, and (A → c(B))((B → c′(C_1, C_2))(...)) by (A+B → c′(C_1, C_2))(...
), until there is no monic rule in d left.", "If the occurrence of A → c(B) is not the root of d, then the corresponding nonterminal in the parent's rhs is replaced by A+B. 3", "After this step, there are only branching rules and terminating rules in d.", "Figure 3 shows an example for this step.", "This step is easy to reverse, as the composition of every removed rule is c = id_fo(B).", "We give the (footnote 3: Note that root nodes in the derivation may be collapsed; this is why we use LCFRS with multiple initial nonterminals.)", "VP|_2 → (x_1, y_1)(VBN, NP) VBN → (scheduled) NP → (x_1)(NN) NN → (today)", "(a) A derivation for scheduled today.", "Gray arrows show how the bottom-most composition is chained with the monic rule on top.", "VP|_2 → (x_1, y_1)(VBN, NP+NN) VBN → (scheduled) NP+NN → (today)", "(b) The derivation resulting from applying step (3) to the derivation in Fig. 3a.", "Figure 3: Example for step (3).", "Step (4).", "We remove every non-root occurrence r of a terminating rule A → (σ) in d.", "Let r be the i-th child of its parent (with i ∈ [2]), then we replace the parent's composition c by ⟦c⟧_i(σ) and remove the i-th nonterminal in the parent's rhs.", "We note that the parent becomes lexical, and after this step, every rule in d is either branching or lexical.", "Moreover, every terminating rule in d is either double-lexical (if both children were removed) or the root of d (and thus its only node).", "Figure 4 shows an example for this step.", "NP_2 → (x_1, y_1)(NP, PP) NP → (x_1 y_1)(DT, NN) DT → (A) NN → (hearing) PP → (x_1 y_1)(IN, NP) IN → (on) NP → (x_1 y_1)(DT, NN) DT → (the) NN → (issue)", "(a) A derivation for the string tuple (A hearing, on the issue).", "Gray arrows show the terminals that are put into binary non-lexical rules during step (4).", "NP_2 → (x_1, y_1)(NP, PP) NP → (A hearing) PP → (on x_1)(NP) NP → (the issue)", "(b) The derivation resulting from applying step (4) to the derivation in Fig.
4a.", "Figure 4: Example for step (4).", "Clearly, this step loses information, namely the left-hand sides of the removed rules.", "These nonterminals are part-of-speech tags (that may be enriched with nonterminals of collapsed monic rules from the previous step).", "For the reversal of this step, we opted to predict them along with the supertags as part of the supertagger.", "The formal description of the reversal is given in Appendix A.2.", "Step (5).", "For each occurrence r of a branching rule A → c(A_1, A_2) in d, let us consider the occurrence t of the leftmost terminating rule (i.e. t is a leaf) that is reachable via the second successor of r.", "For example, in Fig. 5a, the two binary rules (r) are end points of gray arrows; these arrows start at the mentioned leaves (t).", "Our goal is to remove one terminal from t and propagate it all the way up to r.", "For this, at each node s on the path from t to r (from bottom up): If s is t, we remove the leftmost terminal in the rule's composition at s.", "If s is neither t nor r, we insert the last removed terminal right before the variable x_1 and then remove the leftmost terminal in the rule's composition at s.", "We note that if the rule at s is monic and the variable x_1 occurs right of the terminal in its composition, then we propagate a different terminal than the one received from the child.", "In order to be able to reverse this step, we need to remember whether the terminal in the rule's composition stayed the same or was swapped with the terminal received from the child.", "In the following, we consider this information as part of the rule (cf. the gray annotation swapped in Fig. 5).", "If s is r, we insert the last removed terminal right before the variable y_1 in the rule's composition at s.", "we annotate the lhs nonterminal at s and the i-th rhs nonterminal at s′ with − and remove the empty component, and if i = 1 (resp. i = 2), we remove x_1 (resp.
y_1) and replace every other occurrence of x_i by x_(i−1) (resp. y_j by y_(j−1)) at s′.", "Otherwise, we annotate the nonterminals with +.", "We note that the rule at r is now uni-lexical and branching, the rule at t is uni-lexical and terminating, and the number of terminals in each rule between them did not change.", "After this step, every rule in d is lexical.", "Figure 5 shows an example for this step.", "r.", "Intuitively, this holds since", "(a) after step (4) every leaf of d is a double-lexical rule and", "(b) for each branching rule we first go to its second successor and then follow the path of first successors until we reach a leaf.", "Here,", "(a) guarantees that there exists a double-lexical rule for each branching rule and", "(b) guarantees that each double-lexical rule is assigned to at most one branching rule, thus at most one terminal is removed from it.", "We refer the interested reader to the proof of correctness by Engelfriet et al. (2018); this proof also applies to our method.", "NP_2 → (x_1, y_1)(NP, PP) NP → (A hearing) PP → (on x_1)(NP) NP → (the issue) VP|_2 → (scheduled, today)", "(a) A derivation for the string tuple (A hearing, scheduled on the issue today).", "Gray arrows show how terminals will be propagated through the derivation to lexicalize branching rules during step (5).", "VP_2 → (x_1, scheduled x_2 y_1)(NP_2, VP|_2) NP_2 → (x_1, on y_1)(NP, PP+) NP → (A hearing) PP+ → (the x_1)(NP+) swapped NP+ → (issue) VP|_2 → (today)", "(b) The derivation resulting from applying step (5) to the derivation in Fig.
5a.", "A gray annotation swapped marks a monic rule whose terminal changed.", "Figure 5: Example for step (5).", "The reversal of this step removes all annotation (+, −, and swapped) and restores each composition in d to its original form.", "We note that the original composition can be obtained deterministically; the construction is given in Appendix A.1.", "Step (6).", "We replace the rightmost terminal σ_2 in the composition of each double-lexical terminating rule by a variable and add a new nonterminal A_R to the rule's right-hand side (making it a uni-lexical monic rule).", "Then we insert the rule A_R → (σ_2) as a child.", "After this step, every rule in d is uni-lexical.", "Figures 5b and 6 show an example for this step.", "The reversal of this step is straightforward.", "Implementation.", "The induction of uni-lexical LCFRS and parsing was implemented by extending disco-dop (van Cranenburgh et al., 2016), from which we could borrow the generic LCFRS extraction and statistical parsing implementation.", "Moreover, we used the computation of evaluation scores in disco-dop.", "The supertagger was implemented using the flair framework (Akbik et al., 2019).", "We report results for three types of architectures: bert, the output of the four topmost layers of a pretrained BERT 4 model (Devlin et al., 2019), which is fine-tuned during training; flair, the concatenation of language-specific fasttext (Mikolov et al., 2018) and flair embeddings (Akbik et al., 2018), which is fed through a two-layered Bi-LSTM (Hochreiter and Schmidhuber, 1997); and supervised (small / large), word embeddings (one-hot embeddings and character-based BiLSTM outputs) that are trained with the model and fed through a two-layered Bi-LSTM.", "The small model adopts its size parameters from Stanojević and Steedman (2020); Coavoux and Cohen (2019) and the large model from Corro (2020).", "On top of each of those, there are two linear layers in parallel: one for the supertags and one for the nonterminals
that were removed in step (4) of our lexicalization scheme (i.e. part-of-speech tags plus nonterminals from collapsed monic rules).", "The sequence tagger is trained to predict the gold supertag and the removed nonterminal for each sentence position via the sum of cross-entropy losses.", "(Footnote 4: We used language-specific flavors of bert-base that were available in huggingface's transformers library; bert-base-german-cased for Tiger and Negra, and bert-base-cased for DPTB.)", "During parsing, the predicted supertags are interpreted as a probabilistic grammar.", "At each sentence position, the weight of the rules is the softmax of the supertag's score among the k best scores.", "The parsing implementation that we borrow from disco-dop supports heuristics and early stopping to speed up the parsing process.", "For each intermediate parse that does not span all sentence positions, we use the best supertag probability for each position that does not belong to the parse as a heuristic to estimate the weight of a complete parse.", "We extended the parser with a fallback mechanism that deals with parse fails, i.e. when it is not able to find parse trees for the whole sentence.", "It picks the largest partial derivations (for parts of the sentence) that it was able to find and combines them as children of artificial NOPARSE nodes.", "This is especially beneficial in settings with small k as there are many parse fails (cf. Table 1, column cov.).", "For example, if we did not use this mechanism, we would obtain prec.", "= 95.", "53, rec.", "= 46.", "21 and F1 = 62.", "29 for the development set of Negra and k = 1 (cf.
first row in Table 1).", "Data.", "Following Coavoux and Cohen (2019), we use three treebanks for discontinuous constituent parsing in our evaluations: Negra (Skut et al., 1998), Tiger (Brants et al., 2004), and a discontinuous version of the Penn treebank (DPTB; Evang and Kallmeyer, 2011).", "The treebanks were split according to the usual standards into training, development and test sets.", "5 During development, the lexicalization, tagging and parsing were mostly tested and optimized using Negra.", "We binarized each training set before extracting the LCFRS and supertags.", "Markovization with horizontal context h = 0 and vertical context v = 1 yielded the best results; we thus extracted 3275 supertags from the training set of Negra, 4614 from Tiger and 4509 from DPTB.", "More context in Markovization led to a blowup in the number of supertags, which proved to be disadvantageous.", "Baselines.", "We report labeled F1-scores, obtained from predicted and gold parse trees us-5 We use the split for Negra by Dubey and Keller (2003), for Tiger by Seddah et al.
(2013), and the standard split for DPTB (sections 2-21 for training, 22 for development, 23 for testing).", "ing disco-dop (with the usual parameters in proper.prm), for all constituents (F1) and all discontinuous constituents (Dis-F1).", "In addition to the scores, parse speed 6 is reported in sentences per second (sent/s).", "grammar-based parsers (van Cranenburgh et al., 2016; Gebhardt, 2020; Versley, 2016) that directly rely on an underlying (probabilistic) grammar, chart-based parsers (Corro, 2020; Stanojević and Steedman, 2020) that share parsing algorithms with LCFRS, but lack an explicit set of rules, transition systems (Coavoux and Cohen, 2019; Coavoux et al., 2019), and neural systems (Fernández-González and Gómez-Rodríguez, 2020; Vilares and Gómez-Rodríguez, 2020), i.e. all other recent parsing approaches using neural classifiers.", "Our approach is in the first category, as the supertags are clearly constructed from a grammar that was extracted from the treebank.", "Therefore, the local relations in the predicted derivations are restricted to those occurring in the treebank.", "The approaches by Corro (2020) and Stanojević and Steedman (2020), on the other hand, rank spans in the sentence for occurrence in the predicted parse tree and predict their nonterminal; both independently from previous spans and nonterminals.", "Hence, they allow any combination of parent / child nonterminals in the resulting derivations.", "Table 1 shows statistics of our parser on the development sets for different amounts (k) of supertags per sentence position.", "Specifically, we report the parsing speed (sent/s), the rate of sentence positions where the gold supertag was among the k predicted supertags (tag accuracy), the rate of sentences that were completely parsed (coverage) and parsing scores (labeled precision, recall and F1).", "We see that the parsing speed gradually drops with rising k, but for k > 10 there are barely any gains in terms of parsing
scores.", "As expected, with rising k, the recall increases drastically.", "The preci- (footnote 6: We measured the parsing speed on a system with an Nvidia GeForce RTX 2080, two Intel Xeon Silver 4114 (20 cores / 40 threads at 2.2 GHz) and 256 GB RAM.)", "sion, on the other hand, only changes slightly.", "The drop in precision using Negra and Tiger may be explained by a significant decrease in parse fails from k = 1 to k = 2; then the effects of fewer parse fails and considering lower-scored supertags seem to balance each other out.", "We found k = 10 to be a good parameter for the rest of our experiments.", "Table 3 shows the parsing scores and speed of our trained models on the test set compared to the scores reported in other recent publications on discontinuous constituent parsing.", "The experiments suggest that parsing using LCFRS can greatly benefit from supertagging with respect to both speed and accuracy.", "This, however, requires a strong discriminative classifier for the sequence tagger to predict useful rules.", "Most notably, the prediction accuracy for discontinuous constituents seems to strongly benefit from pretrained word embeddings.", "Compared to other parsing approaches, we obtain results that are on par with the state of the art; this has recently been rather unusual for grammar-based constituent parsing.", "We would like to especially highlight our results for discontinuous constituents, which surpass the previous state of the art by a wide margin.", "Unfortunately, we can only compare our results to those of other supertagging-based parsers to a very limited extent, as authors seem to either report no parsing scores at all (Bladier et al., 2018), or give attachment scores for dependency relations (Kasai et al., 2017; Tian et al., 2020).", "However, Table 2 compares the accuracy of our supertagger to some recent publications.", "The CCG community is very active in the field of neural supertagging, achieving an improvement from 91.3% (Lewis and Steedman, 2014)
to 96.4% accuracy (Tian et al., 2020) for predicted supertags in the last 6 years.", "We cannot compete with those numbers, but this may be due to the fact that far fewer supertags are trained in these approaches than in ours.", "In the case of TAG, the supertagger by Bladier et al. (2018) achieves a better accuracy than ours.", "But again, there are fewer tags to predict.", "Compared to Kasai et al. (2017), our models with pretrained embeddings seem to be on par in both the number of tags and the accuracy.", "We described an approach to utilize supertagging for parsing discontinuous constituents with LCFRS and evaluated it.", "Compared to other parsers for the (caption of Table 3: Our results on test sets compared to other published constituent parsers.)", "same grammar formalism, we achieve state-of-the-art results, i.e. we are more accurate and also faster (cf. Table 3, Grammar-based systems).", "In contrast to previous parsers utilizing LCFRS, we can even keep up with other (neural) parsing approaches and establish a new state of the art for discontinuous constituents (cf.
Table 3, columns for Dis-F1).", "Recent publications by Corro (2020) and Stanojević and Steedman (2020) address discontinuous constituent parsing using approaches that share an algorithmic foundation with LCFRS parsing, but do not use an underlying grammar.", "Both of them restrict constituents to two non-contiguous spans (equivalent to an LCFRS with fanout 2); we have no such limitation.", "Considering the margin between our discontinuous F1-score and theirs, we suppose that this restriction only benefits the parsing complexity, not the accuracy.", "Future Work.", "Compared to previous approaches for supertagging, we utilize large sets of supertags.", "We are confident that the accuracy of the supertagger can be improved by appropriately reducing these sets.", "The way terminals are transported in derivations during step (5) of the extraction is quite technical and chosen such that there is no impact on the fanout of the grammar (Mörbitz and Ruprecht, 2020).", "Alternative techniques could conceivably result in smaller sets of supertags and / or improve parsing results.", "To validate the benefit of LCFRS (compared to using TAG or CCG) for supertagging-based approaches to constituent parsing, we aim for an in-depth comparison of our work to previous approaches.", "However, currently, these approaches lack publicly available implementations for constituent parsing.", "We thank Alex Ivliev for conducting early experiments during the development of our parser, and our colleague Kilian Gebhardt as well as the anonymous reviewers for their insightful comments on drafts of this paper." ]
[ "objective", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "objective", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "objective", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "objective", "abstain", "other" ]
[ "A major hurdle in data-driven research on typology is having sufficient data in many languages to draw meaningful conclusions.", "We present VoxClamantis V1.0, the first large-scale corpus for phonetic typology, with aligned segments and estimated phoneme-level labels in 690 readings spanning 635 languages, along with acoustic-phonetic measures of vowels and sibilants.", "Access to such data can greatly facilitate investigation of phonetic typology at a large scale and across many languages.", "However, it is nontrivial and computationally intensive to obtain such alignments for hundreds of languages, many of which have few to no resources presently available.", "We describe the methodology to create our corpus, discuss caveats with current methods and their impact on the utility of this data, and illustrate possible research directions through a series of case studies on the 48 highest-quality readings.", "Our corpus and scripts are publicly available for non-commercial use at https://voxclamantisproject.github.io.", "Understanding the range and limits of cross-linguistic variation is fundamental to the scientific study of language.", "In speech and particularly phonetic typology, this involves exploring potentially universal tendencies that shape sound systems and govern phonetic structure.", "Such investigation requires access to large amounts of cross-linguistic data.", "Previous cross-linguistic phonetic studies have been limited to a small number of languages with available data (Disner, 1983; Cho and Ladefoged, 1999), or have relied on previously reported measures from many studies (Whalen and Levitt, 1995; Becker-Kristal, 2010; Gordon and Roettger, 2017; Chodroff et al., 2019).", "Existing multilingual speech corpora have similar restrictions, with data too limited for many tasks (Engstrand and Cunningham-Andersson, 1988; Ladefoged and Maddieson, 2007) or approximately 20 to 30 recorded languages (Ardila et al., 2020; Harper, 2011; Schultz, 2002).", 
"The recently developed CMU Wilderness corpus (Black, 2019) constitutes an exception to this rule with over 600 languages.", "This makes it the largest and most typologically diverse speech corpus to date.", "In addition to its coverage, the CMU Wilderness corpus is unique in two additional aspects: cleanly recorded, read speech exists for all languages in the corpus, and the same content (modulo translation) exists across all languages.", "However, this massively multilingual speech corpus is challenging to work with directly.", "Copyright, computational restrictions, and sheer size limit its accessibility.", "Due to copyright restrictions, the audio cannot be directly downloaded with the sentence and phoneme alignments.", "A researcher would need to download the original audio MP3s and text through links to bible.is, then segment these with speech-to-text sentence alignments distributed in Black (2019).", "For phonetic research, subsequently identifying examples of specific phonetic segments in the audio is also a near-essential (Footnote 1: The stability of the links and recording IDs is also questionable.", "Since the release of Black (2019), many of the links have already changed, along with a few of the IDs.", "We have begun identifying these discrepancies, and plan to flag these in a future release.)", "step for extracting relevant acoustic-phonetic measurements.", "Carrying out this derivative step has allowed us to release a stable-access collection of token-level acoustic-phonetic measures to enable further research.", "Obtaining such measurements requires several processing steps: estimating pronunciations, aligning them to the text, evaluating alignment quality, and finally, extracting phonetic measures.", "This work is further complicated by the fact that, for a sizable number of these languages, no linguistic resources currently exist (e.g., language-specific pronunciation lexicons).", "We adapt speech processing methods based on Black (2019) to accomplish these 
tasks, though not without noise: in 3.4, we identify three significant caveats when attempting to use our extended corpus for large-scale phonetic studies.", "We release a comprehensive set of standoff markup of over 400 million labeled segments of continuous speech.", "For each segment, we provide an estimated phoneme-level label from the X-SAMPA alphabet, the preceding and following labels, and the start position and duration in the audio.", "Vowels are supplemented with formant measurements, and sibilants with standard measures of spectral shape.", "We present a series of targeted case studies illustrating the utility of our corpus for large-scale phonetic typology.", "These studies are motivated by potentially universal principles posited to govern phonetic variation: phonetic dispersion and phonetic uniformity.", "Our studies both replicate known results in the phonetics literature and also present novel findings.", "Importantly, these studies investigate current methodology as well as questions of interest to phonetic typology at a large scale.", "The CMU Wilderness corpus (Black, 2019) consists of recorded readings of the New Testament of the Bible in many languages and dialects.", "Following the New Testament structure, these data are broken into 27 books, each with a variable number of chapters between 1 and 25.", "Bible chapters contain standardized verses (approximately sentence-level segments); however, the speech is originally split only by chapter.", "Each chapter (Footnote 2: For some languages, we provide multiple versions of the markup based on different methods of predicting the pronunciation and generating time alignments (3.1).)", "has an average of 13 minutes of speech for a total of 20 hours of speech and text per language.", "These recordings are clean, read speech with a sampling rate of 16 kHz.", "In most languages, they are non-dramatic readings with a single speaker; in some, they are dramatic multi-speaker readings with additive music.", "The 
release from Black (2019) includes several resources for processing the corpus: scripts to download the original source data from bible.is, 'lexicons' created using grapheme-to-phoneme (G2P) conversion, and scripts to apply their generated sentence alignments, which facilitates downstream language processing tasks, including phoneme alignment.", "Our VoxClamantis V1.0 corpus is derived from 690 audio readings of the New Testament of the Bible in 635 languages.", "We mark estimated speech segments (Footnote 3: Information about the available recordings can be found at https://www.faithcomesbyhearing.com/mission/recordings. Footnote 4: Nine of the readings from Black (2019) could not be aligned.)", "(Footnote 5: We specify the number of distinct languages by the number of distinct ISO 639-3 codes, which may not distinguish dialects.)", "labeled with phonemic labels, and phonetic measures for the tokens that are vowels or sibilants.", "The extraction process is diagrammed in Figure 2.", "In the sections below, we detail our procedures for extracting labeled audio segments and their phonetic measures, in both high- and low-resource languages.", "We then outline important caveats to keep in mind when using this corpus.", "We use a multi-pronged forced alignment strategy to balance broad language coverage (3.1.1) with utilization of existing high-quality resources (3.1.2).", "We assess the quality of our approaches in 3.1.3.", "We release the stand-off markup for our final alignments as both text files and Praat TextGrids (Boersma and Weenink, 2019).", "Using scripts and estimated boundaries from Black (2019), we first download and convert the audio MP3s to waveforms, and cut the audio and text into 'sentences' (hereafter called 'utterances' as they are not necessarily sentences).", "This step creates shorter-length speech samples to facilitate forced alignment; utterance boundaries do not change through our processing.", "To extract labeled segments, we first require pronunciations for each utterance.",
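The pronunciation-prediction step that follows relies on G2P conversion; the context-free expansion used by 'universal' systems of the Unitran kind can be sketched as below. The toy mapping table here is a hypothetical illustration, not Unitran's actual table.

```python
# Sketch of a Unitran-style deterministic G2P expansion: each grapheme maps
# to one fixed X-SAMPA phone sequence, with no context sensitivity.
# TOY_G2P is a hypothetical illustration, not Unitran's real mapping table.
TOY_G2P = {
    "a": ["a"], "b": ["b"], "h": ["h"], "o": ["o"], "p": ["p"], "s": ["s"],
}

def naive_g2p(word):
    """Expand graphemes left to right; unknown graphemes are skipped."""
    phones = []
    for grapheme in word.lower():
        phones.extend(TOY_G2P.get(grapheme, []))
    return phones

# Context-freeness is the weakness: the digraph <sh> comes out as the two
# phones /s/ /h/ rather than a single postalveolar /S/.
print(naive_g2p("shop"))
```

This is exactly the failure mode discussed under Caveat B later in the text: a mapping applied per grapheme in every context cannot handle digraphs or orthography-dependent phoneme values.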
"A pronunciation is predicted from the text alone using some grapheme-to-phoneme (G2P) method.", "Each word's predicted pronunciation is a sequence of categorical labels, which are 'phoneme-level' in the sense that they are usually intended to distinguish the words of the language.", "We then align this predicted sequence of 'phonemes' to the corresponding audio.", "Most of our languages have neither existing pronunciation lexicons nor G2P resources.", "To provide coverage for all languages, we generate pronunciations using the simple 'universal' G2P system Unitran (Qian et al., 2010, as extended by Black, 2019), which deterministically expands each grapheme to a fixed sequence of phones in the Extended Speech Assessment Methods Phonetic Alphabet (X-SAMPA) (Wells, 1995/2000).", "This naive process is error-prone for languages with opaque orthographies, as we show in 3.1.3 below and discuss further in 3.4 (Caveat B).", "Even so, it provides a starting point for exploring low-resource languages: after some manual inspection, a linguist may be (Footnote 6: Corresponding audio will need to be downloaded from source and split by utterance using scripts from Black (2019).)", "able to correct the labels in a given language by a combination of manual and automatic methods.", "For each reading, to align the pronunciation strings to the audio, we fit a generative acoustic model designed for this purpose: specifically, eHMM (Prahallad et al., 2006) as implemented in Festvox (Anumanchipalli et al., 2011), run with full Baum-Welch from a flat start for 15 to 30 iterations until the mean mel cepstral distortion score (see 3.1.3) converges.", "Baum-Welch does not change the predicted phoneme labels, but obtains a language-specific, reading-specific, contextual (triphone) acoustic model for each phoneme type in the language.", "We then use Viterbi alignment to identify an audio segment for each phoneme token.", "A subset of the languages in our corpus are supported by existing pronunciation 
resources.", "Two such resources are Epitran (Mortensen et al., 2018), a G2P tool based on language-specific rules, available in both IPA and X-SAMPA, and WikiPron (Lee et al., 2020), a collection of crowd-sourced pronunciations scraped from Wiktionary.", "These are mapped from IPA to X-SAMPA for label consistency across our corpus.", "Epitran covers 29 of our languages (39 readings), while WikiPron's 'phonemic' annotations provide partial coverage of 13 additional languages (18 readings).", "We use Epitran for languages with regular orthographies where it provides high-quality support, and WikiPron for other languages covered by WikiPron annotations.", "While Unitran and Epitran provide a single pronunciation for a word from the orthography, WikiPron may include multiple pronunciations.", "In such cases, Viterbi alignment (see below) chooses the pronunciation of each token that best fits the audio.", "For most languages covered by WikiPron, most of our corpus words are out-of-vocabulary, as they do not yet have user-submitted pronunciations on Wiktionary.", "We train G2P models on WikiPron annotations to provide pronunciations for these words.", "Specifically, we use the WFST-based tool Phonetisaurus (Novak et al., 2016).", "Model hyperparameters are tuned on 3 WikiPron languages from SIGMORPHON 2020 (Gorman et al., 2020) (see Appendix C for details).", "In general, for languages that are not easily supported by Epitran-style G2P rules, training a G2P model on sufficiently many (Footnote 7: WikiPron annotations are available at both the phonemic and phonetic level, with a greater number of phonemic annotations, which we use here.)", "high-quality annotations may be more accurate.", "We align the speech with the high-quality labels using a multilingual ASR model (see Wiesner et al., 2019).", "The model is trained in Kaldi (Povey et al., 2011) on 300 hours of data from the IARPA BABEL corpora (21 languages), a subset of Wall Street Journal (English), the Hub4 Spanish 
Broadcast news (Spanish), and a subset of the Voxforge corpus (Russian and French).", "These languages use a shared X-SAMPA phoneme label set which has high coverage of the labels of our corpus.", "Our use of a pretrained multilingual model here contrasts with 3.1.1, where we had to train reading-specific acoustic models to deal with the fact that the same Unitran phoneme label may refer to quite different phonemes in different languages (see 3.4).", "We did not fine-tune our multilingual model to each language, as the cross-lingual ASR performance in previous work (Wiesner et al., 2019) suggests that this model is sufficient for producing phoneme-level alignments.", "Automatically generated phoneme-level labels and alignments inherently have some amount of noise, and this is particularly true for low-resource languages.", "The noise level is difficult to assess without gold-labeled corpora for either modeling or assessment.", "However, for the high-resource languages, we can evaluate Unitran against Epitran and WikiPron, pretending that the latter are ground truth.", "For example, Table 1 shows Unitran's phoneme error rates relative to Epitran.", "Appendix B gives several more detailed analyses with examples of individual phonemes.", "Unitran pronunciations may have acceptable phoneme error rates for languages with transparent orthographies and one-to-one grapheme-to-phoneme mappings.", "Alas, without these conditions they prove to be highly inaccurate.", "That said, evaluating Unitran labels against Epitran or WikiPron may be unfair to Unitran, since some discrepancies are arguably not errors but mere differences in annotation granularity.", "For example, the 'phonemic' annotations in WikiPron are sometimes surprisingly fine-grained: WikiPron frequently uses / t / in Cebuano where Unitran only uses / t /, though these refer to the same phoneme.", "These tokens are scored as incorrect.", "Moreover, there can be simple systematic errors: Unitran always maps grapheme 
< a > to label / A /, but in Tagalog, all such tokens should be / a /.", "Such errors can often be fixed by remapping the Unitran labels, which in these cases would reduce PER from 30.1 to 6.8 (Cebuano) and from 34.4 to 7.8 (Tagalog).", "Such rules are not always this straightforward and should be created on a language-specific basis; we encourage rules created for languages outside of current Epitran support to be contributed back to the Epitran project.", "For those languages where we train a G2P system on WikiPron, we compute the PER of the G2P system on held-out WikiPron entries treated as ground truth.", "The results (Appendix C) range from excellent to mediocre.", "We care less about the pronunciations themselves than about the segments that we extract by aligning these pronunciations to the audio.", "For high-resource languages, we can again compare the segments extracted by Unitran to the higher-quality ones extracted with better pronunciations.", "For each Unitran token, we evaluate its label and temporal boundaries against the high-quality token that is closest in the audio, as measured by the temporal distance between their midpoints (Appendix B).", "Finally, the segmentation of speech and text into corresponding utterances is not perfect.", "We use the utterance alignments generated by Black (2019), in which the text and audio versions of a putative utterance may have only partial overlap.", "Indeed, Black (2019) sometimes failed to align the Unitran pronunciation to the audio at all, and discarded these utterances.", "For each remaining utterance, he assessed the match quality using Mel Cepstral Distortion (MCD), which is commonly used to evaluate synthesized spoken utterances (Kominek et al., 2008), between the original audio and a resynthesized version of the audio based on the aligned pronunciation.", "Each segment's audio was resynthesized given the segment's phoneme label and the preceding and following phonemes, in a way that preserves its duration, 
using CLUSTERGEN (Black, 2006) with the same reading-specific eHMM model that we used for alignment.", "We distribute Black's per-utterance MCD scores with our corpus, and show the average score for each language in Appendix E. In some readings, the MCD scores are consistently poor.", "Using the phoneme-level alignments described in 3.1, we automatically extract several standard acoustic-phonetic measures of vowels and sibilant fricatives that correlate with aspects of their articulation and abstract representation.", "Standard phonetic measurements of vowels include the formant frequencies and duration information.", "Formants are concentrations of acoustic energy at frequencies reflecting resonance points in the vocal tract during vowel production (Ladefoged and Johnson, 2014).", "The lowest two formants, F1 and F2, are considered diagnostic of vowel category identity and approximate tongue body height (F1) and backness (F2) during vowel production (Figure 3).", "F3 correlates with finer-grained aspects of vowel production such as rhoticity (/ r /-coloring), lip rounding, and nasality (House and Stevens, 1956; Lindblom and Sundberg, 1971; Ladefoged et al., 1978), and F4 with high front vowel distinctions and speaker voice quality (Eek and Meister, 1994).", "Vowel duration can also signal vowel quality, and denotes lexical differences in many languages.", "We extracted formant and duration information from each vowel using Praat (Boersma and Weenink, 2019).", "The first four formants (F1-F4) were measured at each quartile and decile of the vowel.", "Formant estimation was performed with the Burg algorithm in Praat with pre-emphasis from 50 Hz, a time window of 25 ms, a time step of 6.25 ms, a maximum of five formants permitted, and a formant ceiling of 5000 Hz, which is the recommended value for a male vocal tract (Boersma and Weenink, 2019).", "[Figure 3: vowel chart plotting F1 (high-low) against F2 (front-central-back), with mid (tense) and mid (lax) levels marked.]", "Note that the speakers in this corpus are 
predominantly male.", "Standard phonetic measurements of sibilant fricatives such as / s /, / z /, / S /, and / Z / include measures of spectral shape, and also segment duration.", "Measures of spectral shape frequently distinguish sibilant place of articulation: higher concentrations of energy generally reflect more anterior constriction locations (e.g., / s z / are produced closer to the teeth than / S Z /).", "Segment duration can also signal contrasts in voicing status (Jongman et al., 2000).", "Our release contains the segment duration, spectral peak, the spectral moments of the frequency distribution (center of gravity: COG, variance, skewness, and kurtosis), as well as two measures of the mid-frequency peak determined by sibilant quality.", "These are the mid-frequency peak between 3000 and 7000 Hz for alveolar sibilants, and between 2000 and 6000 Hz for post-alveolar sibilants (Koenig et al., 2013; Shadle et al., 2016).", "The spectral information was obtained via multitaper spectral analysis (Rahim and Burr, 2017), with a time-bandwidth parameter ( nw ) of 4 and 8 tapers ( k ) over the middle 50% of the fricative (Blacklock, 2004).", "Measurements were made using the methods described in Forrest et al. (1988) for spectral moments and Koenig et al. 
(2013) for spectral peak varieties.", "Generating phoneme-level alignments and extracting subsequent phonetic measures takes significant time, computational resources, and domain knowledge.", "Our release enables the community to use this data directly without these prerequisites.", "Table 2 shows that the time to extract our resources, (Table 2: Computation time to generate the full corpus. Resource, per language / total: Utterance Alignments, 30m / 14d 13h; Phoneme Alignments, 3d 3h 37m / 6y 12d 16h; Vowel Measures, 45m / 21d 20h; Sibilant Measures, 20m / 9d 17h; overall, 3d 5h 0m / 6y 58d 19h.)", "once methods have been developed, was more than 6 CPU years, primarily for training eHMM models.", "We caution that our labeling and alignment of the corpus contains errors.", "In particular, it is difficult to responsibly draw firm linguistic conclusions from the Unitran-based segments (3.1.1).", "In 5 we suggest future work to address these issues.", "A. Quality of Utterance Pairs: For some utterances, the speech does not correspond completely to the text, due to incorrect co-segmentation.", "In our phonetic studies, we threshold using reading-level MCD as a heuristic for overall alignment quality, and further threshold remaining readings using utterance-level MCD.", "We recommend others do so as well.", "B. Phoneme Label Consistency and Accuracy: Phoneme-level labels are predicted from text without the aid of audio using G2P methods.", "This may lead to systematic errors.", "In particular, Unitran relies on a 'universal' table that maps grapheme < s > (for example) to phoneme / s / in every context and every language.", "This is problematic for languages that use < s > in some or all contexts to refer to other phonemes such as / S / or / /, or use digraphs that contain < s >, such as < sh > for / S /.", "Thus, the predicted label / s / may not consistently refer to the same phoneme within a language, nor to phonetically similar phonemes across languages.", "Even WikiPron 
annotations are user-submitted and may not be internally consistent (e.g., some words use / d Z / or / t / while others use / / or / t /), nor comparable across languages.", "'Phoneme' inventories for Unitran and WikiPron have been implicitly chosen by whoever designed the language's orthography or its WikiPron pages; while this may reflect a reasonable folk phonology, it may not correspond to the inventory of underlying or surface phonemes that any linguist would be likely to posit.", "C. Label and Alignment Assessment: While alignment quality for languages with Epitran and WikiPron can be assessed and calibrated beyond this corpus, it cannot for those languages with only Unitran alignments; the error rate on languages without resources to evaluate PER is unknown to us.", "The Unitran alignments should be treated as a first-pass alignment which may still be useful for a researcher who is willing to perform quality control and correction of the alignments using automatic or manual procedures.", "Our automatically generated alignment offers an initial label and placement of the boundaries that would hopefully facilitate downstream analysis.", "D. Corpus Representation: It is difficult to draw conclusions about 'average behavior' across languages.", "Some language families are better represented in the corpus than others, with more languages, more Bible readings per language, more hours of speech per reading, or more examples of a given phoneme of interest.", "Additionally, the recordings by language are largely single-speaker (and predominantly male).", "This means that we can often draw conclusions only about a particular speaker's idiolect, rather than the population of speakers of the language.", "Metadata giving the exact number of different speakers per recording do not exist.", "We present two case studies to illustrate the utility of our resource for exploration of cross-linguistic typology.", "Phoneticians have posited several typological principles that may 
structure phonetic systems.", "Though previous research has provided some indication as to the direction and magnitude of expected effects, many instances of the principles have not yet been explored at scale.", "Our case studies investigate how well they account for cross-linguistic variation and systematicity for our phonetic measures from vowels and sibilants.", "Below we present the data filtering methods for our case studies, followed by an introduction to and evaluation of phonetic dispersion and uniformity.", "(Footnote 8: See our corpus website for exact numbers of utterances and our phonetic measures for each language.)", "We only use readings with MCD lower than 8.0.", "Furthermore, we only use those utterances with MCD lower than 6.0.", "The vowel analyses focus on F1 and F2 in ERB taken at the vowel midpoint (Zwicker and Terhardt, 1980; Glasberg and Moore, 1990).", "The sibilant analyses focus on the mid-frequency peak of / s / and / z /, also in ERB.", "Vowel tokens with F1 or F2 measures beyond two standard deviations from the label- and reading-specific mean were excluded, as were tokens for which Praat failed to find a measurable F1 or F2, or whose duration exceeded 300 ms. Sibilant tokens with mid-frequency peak or duration measures beyond two standard deviations from the label- and reading-specific mean were also excluded.", "When comparing realizations of two labels such as / i /–/ u / or / s /–/ z /, we excluded readings that did not contain at least 50 tokens of each label.", "We show data representation with different filtering methods in Appendix D. 
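The filtering steps above (conversion to the ERB scale and two-standard-deviation outlier exclusion) can be sketched as follows. The ERB formula is the Glasberg and Moore (1990) ERB-rate equation; the per-label, per-reading grouping and the Praat and duration checks of the actual pipeline are omitted here for brevity.

```python
import math
from statistics import mean, stdev

def hz_to_erb(f_hz):
    """Convert a frequency in Hz to the ERB-rate scale (Glasberg and Moore, 1990)."""
    return 21.4 * math.log10(0.00437 * f_hz + 1.0)

def exclude_two_sd(tokens):
    """Drop measurements beyond two standard deviations of the mean.

    Sketch of the paper's outlier exclusion over a single group of tokens;
    the real pipeline computes the mean and SD per label and per reading.
    """
    m, s = mean(tokens), stdev(tokens)
    return [t for t in tokens if abs(t - m) <= 2 * s]

# A 1000 Hz formant is about 15.6 on the ERB-rate scale.
print(round(hz_to_erb(1000.0), 1))
```

The logarithmic ERB scale compresses high frequencies, which is why the case studies also re-check results in hertz (see footnote 11 below).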
After filtering, the vowel analyses included 48 readings covering 38 languages and 11 language families.", "The distribution of language families was 21 Indo-European, 11 Austronesian, 3 Creole/Pidgin, 3 Turkic, 2 Afro-Asiatic, 2 Tai-Kadai, 2 Uto-Aztecan, 1 Austro-Asiatic, 1 Dravidian, 1 Hmong-Mien, and 1 Uralic.", "Approximately 8.2 million vowel tokens remained, with a minimum of 31,000 vowel tokens per reading.", "The sibilant analysis included 22 readings covering 18 languages and 6 language families.", "The distribution of language families was 10 Indo-European, 6 Austronesian, 3 Turkic, 1 Afro-Asiatic, 1 Austro-Asiatic, and 1 Creole/Pidgin.", "The decrease in total number of readings relative to the vowel analysis primarily reflects the infrequency of / z / cross-linguistically.", "Approximately 385,000 /s/ and 83,000 /z/ tokens remained, with a minimum of 5,200 tokens per reading.", "Phonetic dispersion refers to the principle that contrasting speech sounds should be distinct from one another in phonetic space (Martinet, 1955; Jakobson, 1968; Flemming, 1995, 2004).", "Most studies investigating this principle have focused on its validity within vowel systems, as we do here.", "(Footnote 9: In the high-MCD languages, even the low-MCD utterances seem to be untrustworthy.)", "(Footnote 10: The Equivalent Rectangular Bandwidth (ERB) scale is a psychoacoustic scale that better approximates human perception, which may serve as auditory feedback for the phonetic realization (Fletcher, 1923; Nearey, 1977; Zwicker and Terhardt, 1980; Glasberg and Moore, 1990). The precise equation comes from Glasberg and Moore (1990, Eq. 4).)", "While languages tend to have seemingly well-dispersed vowel inventories such as { / i /, / a /, / u / } (Joos, 1948; Stevens and Keyser, 2010), the actual phonetic realization of each vowel can vary substantially (Lindau and Wood, 1977; Disner, 1983).", "One prediction of dispersion is that the number of vowel categories in a language should be inversely related to the degree of per-category acoustic variation (Lindblom, 1986).", "Subsequent findings have cast doubt on this (Livijn, 2000; Recasens and Espinosa, 2009; Vaux and Samuels, 2015), but these studies have been limited by the number and diversity of languages investigated.", "To investigate this, we measured the correlation between the number of vowel categories in a language and the degree of per-category variation, as measured by the joint entropy of (F1, F2) conditioned on the vowel category.", "We model p(F1, F2 | V) using a bivariate Gaussian for each vowel type v.", "We can then compute the joint conditional entropy under this model as H(F1, F2 | V) = Σ_v p(v) H(F1, F2 | V = v) = Σ_v p(v) (1/2) ln det(2πe Σ_v), where Σ_v is the covariance matrix for the model of vowel v.", "Vowel inventory sizes per reading ranged from 4 to 20 vowels, with a median of 8.", "Both Spearman and Pearson correlations between entropy estimate and vowel inventory size across analyzed languages were small and not significant (Spearman ρ = 0.11, p = 0.44; Pearson r = 0.11, p = 0.
46), corroborating previous accounts of the relationship described in Livijn (2000) and Vaux and Samuels (2015) with a larger number of languages: a larger vowel inventory does not necessarily imply more precision in vowel category production.", "4.3 Phonetic uniformity: Previous work suggests that F1 is fairly uniform with respect to phonological height.", "Within a single language, the mean F1s of / e / and / o /, which share a height, have been found to be correlated across speakers (Yorkshire English: Watt, 2000; French: Menard et al., 2008; Brazilian Portuguese: Oushiro, 2019; Dutch, English, French, Japanese, Portuguese, Spanish: Schwartz and Menard, 2019).", "Though it is physically possible for these vowels", "Footnote 11: Since differential entropy is sensitive to parameterization, we also measured this correlation using formants in hertz, instead of in ERB, as ERB is on a logarithmic scale.", "This change did not influence the pattern of results (Spearman ρ = 0.12, p = 0.41; Pearson r = 0.13, p = 0.
39).", "Figure 4: Correlations of mean F1 (ERB) between / i / and / u / (panel a) and of mean mid-frequency peak (ERB) between / s / and / z / (panel b).", "The paired segments share a relevant phonological feature specification that is approximated by the acoustic-phonetic measurement: vowel height by F1 and sibilant place by mid-frequency peak.", "Each reading is represented by an ellipsoid, centered on the paired means and shaped by 1/10 of their respective standard deviations.", "The solid line reflects the best-fit linear regression line with standard error in gray shading; the dashed line shows the line of equality.", "Marginal histograms show the range of variation in the segment-specific means.", "to differ in F1 realization, the correlations indicate a strong tendency for languages and individual speakers to yoke these two representations together.", "Systematicity in the realization of sibilant place of articulation has also been observed across speakers of American English and Czech (Chodroff, 2017).", "Phonetic correlates of sibilant place strongly covary between / s / and / z /, which share a [+anterior] place of articulation and are produced at the alveolar ridge, and between / S / and / Z /, which share a [-anterior] place of articulation and are produced behind the alveolar ridge.", "A principle of uniformity may account for the above findings.", "Uniformity here refers to a principle in which a distinctive phonological feature should have a consistent phonetic realization, within a language or speaker, across different segments with that feature (Keating, 2003; Chodroff et al., 2019).", "Similar principles posited in the literature include Maximal Use of Available Controls, in which a control refers to an integrated perceptual and motor phonetic target (Menard et al., 2008), as well as a principle of gestural economy (Maddieson, 1995).", "Phonetic realization refers to the mapping from the abstract distinctive feature to an 
abstract phonetic target.", "We approximate this phonetic target via an acoustic-phonetic measurement, but we emphasize that the acoustic measurement is not necessarily a direct reflection of an underlying phonetic target (which could be an articulatory gesture, auditory goal, or perceptuo-motor representation of the sound).", "We make the simplifying assumption that the acoustic-phonetic formants (F1, F2) directly correspond to phonetic targets linked to the vowel features of height and backness.", "More precisely, uniformity of a phonetic measure with respect to a phonological feature means that any two segments sharing that feature will tend to have approximately equal measurements in a given language, even when that value varies across languages.", "We can observe whether this is true by plotting the measures of the two segments against each other by language (e.g., Figure 4).", "Vowels.", "As shown in Figure 4 and Table 3, the strongest correlations in mean F1 frequently reflected uniformity of height (e.g., high vowels /i/ and /u/: r = 0.79, p < 0.001; mid vowels /e/ and /o/: r = 0.62, p < 0.01). 12", "Nevertheless, some vowel pairs that differed in height were also moderately correlated in mean F1 (e.g., /o/ and /a/: r = 0.66, p < 0.001).", "Correlations of mean F1 were overall moderate in strength, regardless of the vowels' phonological specifications.", "Correlations of mean F2 were also strongest among vowels with a uniform backness specification (e.g., back vowels /u/ and /o/: r = 0.69, p < 0.001; front vowels /i/ and /E/: r = 0.69, p < 0.05; Table 4).", "The correlation between front tense vowels /i/ and /e/ was significant and in the expected direction, but also slightly weaker than the homologous back vowel pair ( r = 0.41, p < 0.05).", "12 p-values are corrected for multiple comparisons using the Benjamini-Hochberg procedure with a false discovery rate of 0.25 (Benjamini and Hochberg, 1995).", "Vowels differing in backness frequently had negative correlations, which could reflect influences of category crowding or language- or speaker-specific differences in peripheralization.", "We leave further exploration of those relationships to future study.", "The moderate to strong F1 correlations among vowels with a shared height specification are consistent with expectations based on previous studies, and also with predictions of uniformity.", "Similarly, we find the expected correlation of F2 means for vowels with a shared backness specification.", "The vowel pairs that were predicted to have significant correlations, but did not, tended to have small sample sizes ( < 14 readings).", "Nevertheless, the correlations are not perfect; nor are the patterns fully consistent.", "For instance, the back vowel correlations of F2 are stronger than the front vowel correlations.", "While speculative, the apparent peripheralization of /i/ (as revealed in the negative F2 correlations) could have weakened the expected uniformity relation of /i/ with other front vowels.", "Future research should take into account additional influences of the vowel inventory composition, as well as articulatory or auditory factors, for a more complete understanding of the structural forces in the phonetic realization of vowels.", "Sibilants.", "The mean mid-frequency peak values for /s/ and /z/ each varied substantially across readings, and were also strongly correlated with one another ( r = 0.87, p < 0.
001; Figure 4). 13", "This finding suggests a further influence of uniformity on the realization of place for /s/ and /z/, and the magnitude is comparable to previous correlations observed across American English and Czech speakers, in which r was 0.90 (Chodroff, 2017).", "We hope our corpus may serve as a touchstone for further improvements in phonetic typology research and methodology.", "Here we suggest potential steps forward for known areas (3.4) where this corpus could be improved: A Sentence alignments were generated using Unitran, and could be improved with higher-quality G2P and verse-level text segmentation to standardize utterances across languages.", "B Consistent and comparable phoneme labels are the ultimate goal.", "Concurrent work on universal phone recognition (Li et al., 2020) addresses this issue through a universal phone inventory constrained by language-specific PHOIBLE inventories (Moran and McCloy, 2019).", "However, free-decoding phones from speech alone is challenging.", "One exciting possibility is to use the orthography and audio jointly to guide semi-supervised learning of per-language pronunciation lexicons (Lu et al., 2013; Zhang et al., 2017).", "C Reliable quality assessment for current methods remains an outstanding research question for many languages.", "For covered languages, using a universal label set to map additional high-quality lexicons (e.g., hand-annotated lexicons) to the same label space as ours would enable direct label and alignment assessment through precision, recall, and PER.", "D Curating additional resources beyond this corpus would improve coverage and balance, such as contributing additional Epitran modules.", "Additional readings exist for many languages on the original bible.is site and elsewhere.", "Annotations with speaker information are not available, but improved unsupervised speaker clustering may also support better analysis.", "VoxClamantis V 1.0 is the first large-scale corpus for phonetic 
typology, with extracted phonetic features for 635 typologically diverse languages.", "We present two case studies illustrating both the research potential and limitations of this corpus for investigation of phonetic typology at a large scale.", "We discuss several caveats for the use of this corpus and areas for substantial improvement.", "Nonetheless, we hope that directly releasing our alignments and token-level features enables greater research accessibility in this area.", "We hope this corpus will motivate and enable further developments in both phonetic typology and methodology for working with cross-linguistic speech corpora.", "The authors gratefully acknowledge Colin Wilson for his guidance and discussion on the topic, Florian Metze for resources, and Carlos Aguirre for helpful feedback." ]
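The uniformity analyses above boil down to correlating per-reading means of an acoustic measure between paired segments (e.g., mean F1 of /i/ against mean F1 of /u/ across languages). A minimal NumPy sketch of that computation; the function name and the mean-F1 values are illustrative assumptions, not corpus data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical per-language mean F1 (ERB) for the high vowels /i/ and /u/.
# Under uniformity of height, a language with a higher mean F1 for /i/
# should also tend to show a higher mean F1 for /u/.
mean_f1_i = [6.1, 6.5, 5.9, 6.8, 6.3]
mean_f1_u = [6.0, 6.6, 5.8, 6.9, 6.2]

r = pearson_r(mean_f1_i, mean_f1_u)  # strong positive correlation
```

The same routine, applied to mean mid-frequency peaks of /s/ and /z/, gives the sibilant analogue of the vowel-height correlation.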
[ "abstain", "objective", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "other" ]
[ "In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers.", "Typically, the dual-encoder architecture is adopted to learn dense representations of questions and passages for semantic matching.", "However, it is difficult to effectively train a dual-encoder due to challenges including the discrepancy between training and inference, the existence of unlabeled positives, and limited training data.", "To address these challenges, we propose an optimized training approach, called RocketQA , to improve dense passage retrieval.", "We make three major technical contributions in RocketQA, namely cross-batch negatives, denoised hard negatives and data augmentation.", "The experimental results show that RocketQA significantly outperforms previous state-of-the-art models on both MSMARCO and Natural Questions.", "We also conduct extensive experiments to examine the effectiveness of the three strategies in RocketQA.", "Besides, we demonstrate that the performance of end-to-end QA can be improved based on our RocketQA retriever 1 .", "Open-domain question answering (QA) aims to find the answers to natural language questions from a large collection of documents.", "Early QA systems (Brill et al., 2002; Dang et al., 2007; Ferrucci et al., 2010) constructed complicated pipelines consisting of multiple components, including question understanding, document retrieval, passage ranking and answer extraction.", "Recently, inspired by the advancements of machine reading comprehension (MRC), Chen et al. 
(2017) proposed a simplified two-stage approach, where a traditional IR retriever (e.g., TF-IDF or BM25) first selects a few relevant passages as contexts, and then a neural reader reads the contexts and extracts the answers.", "Corresponding authors.", "The work was done when Ruiyang Ren was doing an internship at Baidu.", "1 Our code is available at https://github.com/PaddlePaddle/Research/tree/master/NLP/NAACL2021-RocketQA", "As the recall component, the first-stage retriever significantly affects the final QA performance.", "Though efficient with an inverted index, traditional IR retrievers with term-based sparse representations have limited capabilities in matching questions and passages, e.g., term mismatch.", "To deal with the issue of term mismatch, the dual-encoder architecture (as shown in Figure 1a) has been widely explored (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Luan et al., 2020; Xiong et al., 2020) to learn dense representations of questions and passages in an end-to-end manner, which provides better representations for semantic matching.", "These studies first separately encode questions and passages to obtain their dense representations, and then compute the similarity between the dense representations using similarity functions such as cosine or dot product.", "Typically, the dual-encoder is trained by using in-batch random negatives: for each question-positive passage pair in a training batch, the positive passages for the other questions in the batch would be used as negatives.", "However, it is still difficult to effectively train a dual-encoder for dense passage retrieval due to the following three major challenges.", "First, there exists the discrepancy between training and inference for the dual-encoder retriever.", "During inference, the retriever needs to identify positive (or relevant) passages for each question from a large collection containing millions of candidates.", "However, during training, the 
model is learned to estimate the probabilities of positive passages in a small candidate set for each question, due to the limited memory of a single GPU (or other device).", "To reduce such a discrepancy, previous work tried to design specific mechanisms for selecting a few hard negatives from the top-k retrieved candidates (Gillick et al., 2019; Wu et al., 2020; Karpukhin et al., 2020; Luan et al., 2020; Xiong et al., 2020).", "However, it suffers from the false negative issue due to the following challenge.", "Second, there might be a large number of unlabeled positives.", "Usually, it is infeasible to completely annotate all the candidate passages for one question.", "By only examining the top-K passages retrieved by a specific retrieval approach (e.g., BM25), the annotators are likely to miss passages relevant to a question.", "Taking the MSMARCO dataset (Nguyen et al., 2016) as an example, each question has only 1.1 annotated positive passages on average, while there are 8.8M passages in the whole collection.", "As will be shown in our experiments, we manually examine the top-retrieved passages that were not labeled as positives in the original MSMARCO dataset, and we find that 70% of them are actually positives.", "Hence, it is likely to bring false negatives when sampling hard negatives from the top-k retrieved passages.", "Third, it is expensive to acquire large-scale training data for open-domain QA.", "MSMARCO and Natural Questions (Kwiatkowski et al., 2019) are the two largest datasets for open-domain QA.", "They are created from commercial search engines, and have 516K and 300K annotated questions, respectively.", "However, it is still insufficient to cover all the topics of questions issued by users to search engines.", "In this paper, we focus on addressing these challenges so as to effectively train a dual-encoder retriever for open-domain QA.", "We propose an optimized training approach, called RocketQA , to improve dense passage 
retrieval.", "Considering the above challenges, we make three major technical contributions in RocketQA.", "First, RocketQA introduces cross-batch negatives.", "Compared with in-batch negatives, it increases the number of available negatives for each question during training, and alleviates the discrepancy between training and inference.", "Second, RocketQA introduces denoised hard negatives.", "It aims to remove false negatives from the top-ranked results retrieved by a retriever, and derive more reliable hard negatives.", "Third, RocketQA leverages large-scale unsupervised data labeled by a cross-encoder (as shown in Figure 1b) for data augmentation.", "Though inefficient, the cross-encoder architecture has been found to be more capable than the dual-encoder architecture in both theory and practice (Luan et al., 2020).", "Therefore, we utilize a cross-encoder to generate high-quality pseudo labels for unlabeled data, which are used to train the dual-encoder retriever.", "The contributions of this paper are as follows: The proposed RocketQA introduces three novel training strategies to improve dense passage retrieval for open-domain QA, namely cross-batch negatives, denoised hard negatives, and data augmentation.", "The overall experiments show that our proposed RocketQA significantly outperforms previous state-of-the-art models on both the MSMARCO and Natural Questions datasets.", "We conduct extensive experiments to examine the effectiveness of the above three strategies in RocketQA.", "Experimental results show that the three strategies are effective in improving the performance of dense passage retrieval.", "We also demonstrate that the performance of end-to-end QA can be improved based on our RocketQA retriever.", "Passage retrieval for open-domain QA For open-domain QA, a passage retriever is an important component to identify relevant passages for answer extraction.", "Traditional approaches (Chen et al., 2017) implemented term-based passage retrievers (e.g., 
TF-IDF and BM25), which have limited representation capabilities.", "Recently, researchers have utilized deep learning to improve traditional passage retrievers, including document expansions (Nogueira et al., 2019c), question expansions (Mao et al., 2020) and term weight estimation (Dai and Callan, 2019).", "Different from the above term-based approaches, dense passage retrieval has been proposed to represent both questions and documents as dense vectors (i.e., embeddings), typically in a dual-encoder architecture (as shown in Figure 1a).", "Existing approaches can be divided into two categories: (1) self-supervised pre-training for retrieval (Lee et al., 2019; Guu et al., 2020; Chang et al., 2020) and (2) fine-tuning pre-trained language models on labeled data.", "Our work follows the second class of approaches, which show better performance with less cost.", "Although the dual-encoder architecture enables the appealing paradigm of dense retrieval, it is difficult to effectively train a retriever with such an architecture.", "As discussed in Section 1, it suffers from a number of challenges, including the training and inference discrepancy, a large number of unlabeled positives and limited training data.", "Several recent studies (Karpukhin et al., 2020; Luan et al., 2020; Chang et al., 2020; Henderson et al., 2017) tried to address the first challenge by designing complicated sampling mechanisms to generate hard negatives.", "However, it still suffers from the issue of false negatives.", "The latter two challenges have seldom been considered for open-domain QA.", "Passage re-ranking for open-domain QA Based on the retrieved passages from a first-stage retriever, BERT-based rerankers have recently been applied to retrieval-based question answering and search-related tasks (Wang et al., 2019; Nogueira and Cho, 2019; Nogueira et al., 2019b; Yan et al., 2019), and yield substantial improvements over the traditional methods.", "Although effective to some extent, these 
rankers employ the cross-encoder architecture (as shown in Figure 1b) that is impractical to apply to all passages in a corpus for a given question.", "Re-rankers with lightweight interaction based on the representations of dense retrievers (Khattab and Zaharia, 2020; Gao et al., 2020) have been studied.", "However, these techniques still rely on a separate retriever which provides candidates and representations.", "As a comparison, we focus on developing dual-encoder based retrievers.", "In this section, we propose an optimized training approach to dense passage retrieval for open-domain QA, namely RocketQA .", "We first introduce the background of the dual-encoder architecture, and then describe the three novel training strategies in RocketQA.", "Lastly, we present the whole training procedure of RocketQA.", "The task of open-domain QA is described as follows.", "Given a natural language question, a system is required to answer it based on a large collection of documents.", "Let C denote the corpus, consisting of N documents.", "We split the N documents into M passages, denoted by p_1, p_2, ..., p_M, where each passage p_i can be viewed as an l-length sequence of tokens p_i^(1), p_i^(2), ..., p_i^(l).", "Given a question q, the task is to find a passage p_i among the M candidates, and extract a span p_i^(s), p_i^(s+1), ..., p_i^(e) from p_i that can answer the question.", "In this paper, we mainly focus on developing a dense retriever to retrieve the passages that contain the answer.", "We develop our passage retriever based on the typical dual-encoder architecture, as illustrated in Figure 1a.", "First, a dense passage retriever uses an encoder E_p(·) to obtain the d-dimensional real-valued vectors (a.k.a., embeddings) of passages.", "Then, an index of passage embeddings is built for retrieval.", "At query time, another encoder E_q(·) is applied to embed the input question into a d-dimensional real-valued vector, and k passages 
whose embeddings are the closest to the question's will be retrieved.", "The similarity between the question q and a candidate passage p can be computed as the dot product of their vectors: sim(q, p) = E_q(q) · E_p(p).", "In practice, the separation of question encoding and passage encoding is desirable, so that the dense representations of all passages can be precomputed for efficient retrieval.", "Here, we adopt two independent neural networks initialized from pre-trained LMs for the two encoders E_q(·) and E_p(·) separately, and take the representations at the first token (e.g., the [CLS] symbol in BERT) as the output for encoding.", "Training The training objective is to learn dense representations of questions and passages so that question-positive passage pairs have higher similarity than question-negative passage pairs in the training data.", "Formally, given a question q_i together with its positive passage p_i^+ and m negative passages { p_i,j^- }_{j=1..m}, we minimize the loss function: L(q_i, p_i^+, { p_i,j^- }_{j=1..m}) = -log [ exp(sim(q_i, p_i^+)) / ( exp(sim(q_i, p_i^+)) + Σ_{j=1..m} exp(sim(q_i, p_i,j^-)) ) ], (2) where we aim to optimize the negative log likelihood of the positive passage against the set of m negative passages.", "Ideally, we should take all the negative passages in the whole collection into consideration in Equation 2.", "
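Equation 2 is softmax cross-entropy over one positive and m negative passages under dot-product similarity. A minimal NumPy sketch, with random made-up embeddings standing in for encoder outputs (the function name is illustrative, not from the paper's code):

```python
import numpy as np

def dual_encoder_loss(q, p_pos, p_negs):
    """Negative log-likelihood of the positive passage (Equation 2).

    q: (d,) question embedding; p_pos: (d,) positive passage embedding;
    p_negs: (m, d) negative passage embeddings; sim(q, p) = q . p.
    """
    sims = np.concatenate(([q @ p_pos], p_negs @ q))  # positive first
    sims = sims - sims.max()                          # numerical stability
    log_softmax = sims - np.log(np.exp(sims).sum())
    return float(-log_softmax[0])

rng = np.random.default_rng(0)
q = rng.normal(size=8)
p_pos = q + 0.1 * rng.normal(size=8)  # a passage similar to the question
p_negs = rng.normal(size=(4, 8))      # m = 4 random negatives
loss = dual_encoder_loss(q, p_pos, p_negs)  # strictly positive
```

Driving this loss down pushes sim(q, p+) above the similarities of all m negatives, which is why both the number and the quality of negatives matter.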
However, it is computationally infeasible to consider a large number of negative samples for a question, and hence m is practically set to a small number that is far less than M .", "As will be discussed later, both the number and the quality of negatives affect the final performance of passage retrieval.", "Inference In our implementation, we use FAISS (Johnson et al., 2019) to index the dense representations of all passages.", "Specifically, we use IndexFlatIP for indexing and the exact maximum inner product search for querying.", "In Section 1, we have discussed three major challenges in training the dual-encoder based retriever, including the training and inference discrepancy, the existence of unlabeled positives, and limited training data.", "Next, we propose three improved training strategies to address the three challenges.", "Cross-batch Negatives When training the dual-encoder, the trick of in-batch negatives has been widely used in previous work (Henderson et al., 2017; Gillick et al., 2019; Wu et al., 2020; Karpukhin et al., 2020; Luan et al., 2020).", "Assume that there are B questions in a mini-batch on a single GPU, and each question has one positive passage.", "With the in-batch negative trick, each question can be further paired with B - 1 negatives (i.e., the positive passages of the other questions) without sampling additional negatives.", "In-batch negative training is a memory-efficient way to reuse the examples already loaded in a mini-batch rather than sampling new negatives, which increases the number of negatives for each question.", "As illustrated at the top of Figure 2, we present an example of in-batch negatives when training on A GPUs in a data-parallel way.", "To further optimize the training with more negatives, we propose to use cross-batch negatives when training on multiple GPUs, as illustrated at the bottom of Figure 2.", "
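The counting behind cross-batch negatives can be simulated without any multi-GPU machinery; here plain NumPy arrays stand in for the per-GPU embeddings and the all-gather step (shapes and values are illustrative assumptions):

```python
import numpy as np

A, B, d = 4, 8, 16  # GPUs, questions per mini-batch per GPU, embedding dim
rng = np.random.default_rng(1)

# Passage embeddings computed independently on each "GPU": shape (A, B, d).
per_gpu_passages = rng.normal(size=(A, B, d))

# All-gather: every GPU now sees all A * B passage embeddings.
gathered = per_gpu_passages.reshape(A * B, d)

# For the question at position b on GPU a, every gathered passage except
# its own positive serves as a negative: A * B - 1 cross-batch negatives,
# versus only B - 1 with in-batch negatives alone.
a, b = 2, 5
negatives = np.delete(gathered, a * B + b, axis=0)
```

In a real trainer the gather must be differentiable (e.g., an all-gather that retains gradients), which is what the in-framework implementation described later provides.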
Specifically, we first compute the passage embeddings within each single GPU, and then share these passage embeddings among all the GPUs.", "Besides the in-batch negatives, we collect all passages (i.e., their dense representations) from the other GPUs as additional negatives for each question.", "Figure 2: The comparison of traditional in-batch negatives and our cross-batch negatives when trained on multiple GPUs, where A is the number of GPUs, and B is the number of questions in each mini-batch.", "Hence, with A GPUs (or mini-batches) 2 , we can indeed obtain A * B - 1 negatives for a given question, which is approximately A times as many as the original number of in-batch negatives.", "In this way, we can use more negatives in the training objective of Equation 2, so that the results are expected to improve.", "Denoised Hard Negatives Although the above strategy can increase the number of negatives, most of the negatives are easy ones, which can be easily discriminated.", "However, hard negatives are shown to be important to train a dual-encoder (Gillick et al., 2019; Wu et al., 2020; Karpukhin et al., 2020; Luan et al., 2020; Xiong et al., 2020).", "To obtain hard negatives, a straightforward method is to select the top-ranked passages (excluding the labeled positive passages) as negative samples.", "However, it is likely to bring false negatives (i.e., unlabeled positives), since the annotators can only annotate a few top-retrieved passages (as discussed in Section 1).", "Another note is that previous work mainly focuses on factoid questions, to which the answers are short and concise.", "Hence, it is not challenging to filter false negatives by using the short answers (Karpukhin et al., 2020).", "However, this cannot be applied to non-factoid questions.", "In this paper, we aim to learn dense passage retrieval for both factoid and non-factoid questions, which needs a more effective way of denoising hard negatives.", "Here, our idea is to utilize a well-trained cross-encoder to remove top-retrieved passages that are likely to be false negatives.", "This is because the cross-encoder architecture is more powerful for capturing semantic similarity via deep interaction and shows much better performance than the dual-encoder architecture (Luan et al., 2020).", "2 Note that cross-batch negatives can be applied in both single-GPU and multi-GPU settings.", "When there is only a single GPU available, it can be implemented in an accumulation way while trading off training time.", "The cross-encoder is more effective and robust, while it is inefficient over a large number of candidates in inference.", "Hence, we first train a cross-encoder (following the architecture shown in Figure 1b).", "Then, when sampling hard negatives from the top-ranked passages retrieved by a dense retriever, we select only the passages that are predicted as negatives by the cross-encoder with high confidence scores.", "The selected top-retrieved passages can be considered as denoised samples that are more reliable to be used as hard negatives.", "Data Augmentation The third strategy aims to alleviate the issue of limited training data.", "Since the cross-encoder is more powerful in measuring the similarity between questions and passages, we utilize it to annotate unlabeled questions for data augmentation.", "Specifically, we incorporate a new collection of unlabeled questions, while reusing the passage collection.", "Then, we use the learned cross-encoder to predict the passage labels for the new questions.", "To ensure the quality of the automatically labeled data, we only select the predicted positive and negative passages with high confidence scores estimated by 
the cross-encoder.", "Finally, the automatically labeled data is used as augmented training data to learn the dual-encoder.", "Another view of the data augmentation is knowledge distillation (Hinton et al., 2015), where the cross-encoder is the teacher and the dual-encoder is the student.", "As shown in Figure 3, we organize the above three training strategies into an effective training pipeline for the dual-encoder.", "It makes an analogy to a multi-stage rocket, where the performance of the dual-encoder is consecutively improved at three steps (STEP 1, 3 and 4).", "That is why we call our approach RocketQA .", "Next, we will describe the details of the whole training procedure of RocketQA.", "REQUIRE: Let C denote a collection of passages.", "Q_L is a set of questions that have corresponding labeled passages in C , and Q_U is a set of questions that have no corresponding labeled passages.", "D_L is a dataset consisting of C and Q_L , and D_U is a dataset consisting of C and Q_U .", "STEP 1: Train a dual-encoder M_D^(0) by using cross-batch negatives on D_L .", "STEP 2: Train a cross-encoder M_C on D_L .", "The positives used for training the cross-encoder are from the original training set D_L , while the negatives are randomly sampled from the top-k passages (excluding the labeled positive passages) retrieved by M_D^(0) from C for each question q ∈ Q_L .", "This design is to let the cross-encoder adjust to the distribution of the results retrieved by the dual-encoder, since the cross-encoder will be used in the following two steps for optimizing the dual-encoder.", "This design is important, and there is a similar observation in Facebook Search (Huang et al., 2020).", "STEP 3: Train a dual-encoder M_D^(1) by further introducing denoised hard negative sampling on D_L .", "For each question q ∈ Q_L , the hard negatives are sampled from the top passages retrieved by M_D^(0) from C , and only the passages that are predicted as negatives by the cross-encoder M_C with high 
confidence scores will be selected.", "STEP 4: Construct pseudo training data D_U by using M_C to label the top-k passages retrieved by M_D^(1) from C for each question q ∈ Q_U , and then train a dual-encoder M_D^(2) on both the manually labeled training data D_L and the automatically augmented training data D_U .", "The cross-encoder is used in both STEP 3 and STEP 4, with different purposes, to promote the performance of the dual-encoder.", "The implementation details of denoising hard negatives and data augmentation can be found in Section 4.", "4 Experiments 4.1 Experimental Setup 4.1.1 Datasets We conduct the experiments on two popular QA benchmarks: MSMARCO Passage Ranking (Nguyen et al., 2016) and Natural Questions (NQ) (Kwiatkowski et al., 2019).", "The statistics of the datasets are listed in Table 1.", "MSMARCO Passage Ranking MSMARCO is originally designed for multiple passage MRC, and its questions were sampled from Bing search logs.", "Based on the questions and passages in MSMARCO Question Answering, a dataset for passage ranking was created, namely MSMARCO Passage Ranking, consisting of about 8.8 million passages.", "The goal is to find positive passages that answer the questions.", "Natural Questions (NQ) Kwiatkowski et al. (2019) introduce a large dataset for open-domain QA.", "The original dataset contains more than 300,000 questions collected from Google search logs.", "In Karpukhin et al. (2020), around 62,000 factoid questions are selected, and all the Wikipedia articles are processed as the collection of passages.", "There are more than 21 million passages in the corpus.", "In our experiments, we reuse the version of NQ created by Karpukhin et al. 
(2020).", "Note that the dataset used in DPR contains empty negatives, and we discarded the empty ones.", "Following previous work, we use MRR and Recall at top-k ranks to evaluate the performance of passage retrieval, and exact match (EM) to measure the performance of answer extraction.", "MRR The Reciprocal Rank (RR) calculates the reciprocal of the rank at which the first relevant passage was retrieved.", "When averaged across questions, it is called Mean Reciprocal Rank (MRR).", "Recall at top-k ranks The top-k recall of a retriever is defined as the proportion of questions for which the top-k retrieved passages contain answers.", "Exact match This metric measures the percentage of questions whose predicted answers match any one of the reference answers exactly, after string normalization.", "We conduct all experiments with the deep learning framework PaddlePaddle (Ma et al., 2019) on up to eight NVIDIA Tesla V100 GPUs (with 32G RAM).", "Pre-trained LMs The dual-encoder is initialized with the parameters of ERNIE 2.0 base (Sun et al., 2020), and the cross-encoder is initialized with ERNIE 2.0 large.", "ERNIE 2.0 has the same network as BERT, and it introduces a continual pre-training framework with multiple pre-training tasks.", "We note that previous work uses different pre-trained LMs, and we examine the effects of pre-trained LMs in Section A.1 in the Appendix.", "Our approach is effective when using different pre-trained LMs.", "Cross-batch negatives 3 The cross-batch negative sampling is implemented with the differentiable all-gather operation provided in FleetX (Dong, 2020), which is a highly scalable distributed training engine of PaddlePaddle.", "The all-gather operator makes the representations of passages across all GPUs visible on each GPU, and thus the cross-batch negative sampling approach can be applied globally.", "Denoised hard negatives and data augmentation We use the cross-encoder for both denoising hard negatives and data augmentation.", "Specifically, we 
select the top retrieved passages with scores less than 0.1 as negatives and those with scores higher than 0.9 as positives.", "We manually evaluated the selected data, and the accuracy was higher than 90%.", "The number of positives and negatives When training the cross-encoders, the ratios of the number of positives to the number of negatives are 1:4 and 1:1 on MSMARCO and NQ, respectively.", "(Footnote 3: When using multiple GPUs, cross-batch negative sampling is as efficient as in-batch negative sampling, because cross-batch sampling re-uses the computed embeddings of passages, and the communication cost of embeddings across GPUs is negligible.)", "The negatives used for training cross-encoders are randomly sampled from the top-1000 and top-100 passages retrieved by the dual-encoder M_D^(0) on MSMARCO and NQ, respectively.", "When training the dual-encoders in the last two steps (M_D^(1) and M_D^(2)), we set the ratios of the number of positives to the number of hard negatives as 1:4 and 1:1 on MSMARCO and NQ, respectively.", "Batch sizes The dual-encoders are trained with batch sizes of 512×8 and 512×2 on MSMARCO and NQ, respectively.", "The batch size used on MSMARCO is larger, since MSMARCO is larger than NQ.", "The cross-encoders are trained with batch sizes of 64×4 and 64 on MSMARCO and NQ, respectively.", "We use the automatic mixed precision and gradient checkpointing functionality in FleetX, so that we can train the models with large batch sizes using limited resources.", "Training epochs The dual-encoders are trained on MSMARCO for 40, 10, and 10 epochs in the three steps of RocketQA, respectively.", "The dual-encoders are trained on NQ for 30 epochs in all steps of RocketQA.", "The cross-encoders are trained for 2 epochs on both MSMARCO and NQ.", "Optimizers We use the ADAM optimizer.", "Warmup and learning rate The learning rate of the dual-encoder is set to 3e-5 and the rate of linear scheduling warm-up is set to 0.1, while the learning rate of 
the cross-encoder is set to 1e-5.", "Maximal length We set the maximal lengths of questions and passages as 32 and 128, respectively.", "Unlabeled questions We collect 1.7 million unlabeled questions from Yahoo! Answers, ORCAS (Craswell et al., 2020) and MRQA (Fisch et al., 2019).", "(Footnote 4: Gradient checkpointing (Chen et al., 2016) enables trading off computation against memory, resulting in sublinear memory cost, so bigger/deeper nets can be trained with limited resources.)", "(Footnote 5: http://answers.yahoo.com/)", "We use the questions from Yahoo! Answers, ORCAS and NQ as new questions in the experiments on MSMARCO.", "We only use the questions from MRQA as the new questions in the experiments on NQ.", "This is because both NQ and MRQA mainly contain factoid questions, while the other datasets contain both factoid and non-factoid questions.", "In our experiments, we first examine the effectiveness of our retriever on the MSMARCO and NQ datasets.", "Then, we conduct extensive experiments to examine the effects of the three proposed training strategies.", "We also show the performance of end-to-end QA based on our retriever on the NQ dataset.", "We first compare RocketQA with the previous state-of-the-art approaches on passage retrieval.", "We consider both sparse and dense passage retriever baselines.", "The sparse retrievers include the traditional retriever BM25 (Yang et al., 2017), and four traditional retrievers enhanced by neural networks, including doc2query (Nogueira et al., 2019c), DeepCT (Dai and Callan, 2019), docTTTTTquery (Nogueira et al., 2019a) and GAR (Mao et al., 2020).", "Both doc2query and docTTTTTquery employ neural question generation to expand documents.", "In contrast, GAR employs neural generation models to expand questions.", "Different from them, DeepCT utilizes BERT to learn term weights.", "The dense passage retrievers include DPR (Karpukhin et al., 2020), ME-BERT (Luan et al., 2020) and ANCE (Xiong et al., 2020).", "Both DPR and ME-BERT use in-batch random 
sampling and hard negative sampling from the results retrieved by BM25, while ANCE enhances the hard negative sampling by using the dense retriever.", "Table 2 shows the main experimental results.", "We can see that RocketQA significantly outperforms all the baselines on both the MSMARCO and NQ datasets.", "(Table 3: The experiments to examine the effectiveness of the three proposed training strategies in RocketQA on MSMARCO Passage Ranking. Strategy and MRR@10: In-batch negatives, 32.39; Cross-batch negatives (i.e. STEP 1), 33.32; Hard negatives w/o denoising, 26.03; Hard negatives w/ denoising (i.e. STEP 3), 36.38; Data augmentation (i.e. STEP 4), 37.02.)", "Another observation is that the dense retrievers are overall better than the sparse retrievers.", "Such a finding has also been reported in previous studies (Karpukhin et al., 2020; Luan et al., 2020; Xiong et al., 2020), which indicates the effectiveness of the dense retrieval approach.", "In this part, we conduct extensive experiments on the MSMARCO dataset to examine the effectiveness of the three strategies in RocketQA.", "Results on the NQ dataset show similar findings (see Section A.2 in the Appendix).", "First, we compare cross-batch negatives with in-batch negatives by using the same experimental setting (i.e. 
the number of epochs is 40 and the batch size is 512 on each GPU).", "From the first two rows in Table 3, we can see that the performance of the dense retriever can be improved with more negatives via cross-batch negatives.", "It is expected that increasing the number of random negatives will reduce the discrepancy between training and inference.", "Furthermore, we investigate the effect of the number of random negatives.", "Specifically, we examine the performance of dual-encoders trained by using different numbers of random negatives with a fixed number of steps.", "(Figure 5: The ratios of denoised passages at different ranks on MSMARCO; the denoised ratio falls from 0.75 at rank 5 to 0.04 at rank 40.)", "From Figure 4, we can see that the model performance increases when the number of random negatives becomes larger.", "After a certain point, the model performance starts to drop, since a large batch size may bring difficulty for optimization on training data of limited size.", "There should thus be a balance between the batch size and the number of negatives.", "When increasing the batch size, we will have more negatives for each question.", "However, when the size of the training data is limited, a large batch size will bring difficulty for optimization.", "Second, we examine the effect of denoised hard negatives from the top-k passages retrieved by the dense retriever.", "As shown in the third row in Table 3, the performance of the retriever significantly decreases when hard negatives are introduced without denoising.", "We speculate that this is caused by the fact that there are a large number of unlabeled positives.", "Specifically, we manually examine the top-retrieved passages of 100 questions that were not labeled as true positives.", "We find that about 70% of them are actually positives or highly relevant.", "Hence, it is likely to bring noise if we simply sample hard negatives 
from the top-retrieved passages by the dense retriever, which is a widely adopted strategy to sample hard negatives in previous studies (Gillick et al., 2019; Wu et al., 2020; Xiong et al., 2020).", "As a comparison, we propose denoising hard negatives with a powerful cross-encoder.", "From the fourth row in Table 3, we can see that denoised negatives improve the performance of the dense retriever.", "To obtain more insights about denoised hard negatives, Table 4 gives the sampled hard negatives for two questions before and after denoising.", "(Table 4 columns: Question; labeled positives; hard negatives w/o denoising (false negatives); hard negatives w/ denoising. Example question: 'How many kilohertz in a megahertz', with the labeled positive 'One megahertz (abbreviated: MHz) is equal to 1,000 kilohertz, or 1,000,000 hertz.')", "Figure 5 further illustrates the ratio of filtered passages at different ranks.", "We can see that there are more passages filtered (i.e. denoised) at lower ranks, since it is likely to have more false negatives at lower ranks.", "Finally, when integrated with the data augmentation strategy (see the fifth row in Table 3), the performance is further improved.", "A major merit of data augmentation is that it does not explicitly rely on manually-labeled data.", "Instead, it utilizes the cross-encoder (which has a more powerful capability than the dual-encoder) to generate pseudo training data for improving the dual-encoder.", "We further examine the effect of the size of the augmented data.", "As shown in Figure 6, the performance increases as the size of the augmented data grows.", "Previous experiments have shown the effectiveness of RocketQA on passage retrieval.", "Next, we verify whether the retrieval results of RocketQA can improve the performance of passage reading for extracting correct answers.", "We implement an end-to-end QA system in which an extractive reader is stacked on our RocketQA retriever.", "For a fair comparison, we first re-use the released model of the extractive reader in DPR 
(Karpukhin et al., 2020), and take 100 retrieved passages during inference (the same setting used in DPR).", "(Footnote 6: https://github.com/facebookresearch/DPR)", "(Table 5: The experimental results of passage reading on the NQ dataset. Model and EM: BM25+BERT (Lee et al., 2019), 26.5; HardEM (Min et al., 2019a), 28.1; GraphRetriever (Min et al., 2019b), 34.5; PathRetriever (Asai et al., 2020), 32.6; ORQA (Lee et al., 2019), 33.3; REALM (Guu et al., 2020), 40.4; DPR (Karpukhin et al., 2020), 41.5; GAR (Mao et al., 2020), 41.6; RocketQA + DPR reader, 42.0; RocketQA + re-trained DPR reader, 42.8.)", "Besides, we use the same setting to train a new extractive reader based on the retrieval results of RocketQA (except that we choose the top 50 passages for training instead of 100).", "The motivation is that the reader should be adapted to the retrieval distribution of RocketQA.", "Table 5 summarizes the end-to-end QA performance of our approach and a number of competitive methods.", "From Table 5, we can see that our retriever leads to better QA performance.", "Compared with prior solutions, our novelty mainly lies in the passage retrieval component, i.e., the RocketQA approach.", "The results have shown that our approach can provide better passage retrieval results, which in turn improve the final QA performance.", "In this paper, we have presented an optimized training approach to improving dense passage retrieval.", "We have made three major technical contributions in RocketQA, namely cross-batch negatives, denoised hard negatives, and data augmentation.", "Extensive experiments have shown the effectiveness of the proposed approach incorporating the three optimization strategies.", "We also demonstrate that the performance of end-to-end QA can be improved based on our RocketQA retriever.", "The technique of dense passage retrieval is effective for question answering, where the majority of questions are informational queries.", "Different from traditional search, there is usually term mismatch between 
questions and answers.", "The term mismatch creates barriers for machines to accurately find information for people.", "Hence, we need dense passage retrieval for semantic matching in the scenario of question answering.", "Dense passage retrieval has the potential to empower people to find accurate information more quickly and achieve more in their daily life and work.", "Our technique contributes toward the goal of asking machines to find the answers to natural language questions from a large collection of documents.", "However, the goal is still far from being achieved, and more effort from the community is needed for us to get there.", "This work is supported by the National Key Research and Development Project of China (No. 2018AAA0101900).", "We would also like to thank the anonymous reviewers for their insightful suggestions." ]
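The three evaluation metrics described above (MRR, top-k recall, and exact match) can be illustrated with a short, self-contained sketch. This is illustrative code, not the paper's implementation; the toy relevance lists and answer strings are invented for the example.

```python
# Illustrative implementations of the retrieval/QA metrics defined in the
# text: MRR, top-k recall, and exact match. Ranks are 1-based.

def mrr(ranked_relevance):
    """Mean Reciprocal Rank: for each question, take the reciprocal of the
    rank of the FIRST relevant passage (0 if none), then average."""
    total = 0.0
    for rels in ranked_relevance:
        rr = 0.0
        for rank, rel in enumerate(rels, start=1):
            if rel:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_relevance)

def recall_at_k(ranked_relevance, k):
    """Proportion of questions whose top-k retrieved passages contain an answer."""
    hits = sum(1 for rels in ranked_relevance if any(rels[:k]))
    return hits / len(ranked_relevance)

def exact_match(predictions, references):
    """Fraction of predictions matching any reference after a simple
    string normalization (lowercasing and whitespace collapsing)."""
    def norm(s):
        return " ".join(s.lower().split())
    hits = sum(1 for p, refs in zip(predictions, references)
               if any(norm(p) == norm(r) for r in refs))
    return hits / len(predictions)

# Toy example: two questions, five retrieved passages each; True marks
# a passage that contains the answer.
rels = [[False, True, False, False, False],   # first hit at rank 2
        [True, False, False, False, False]]   # first hit at rank 1
print(mrr(rels))             # (1/2 + 1/1) / 2 = 0.75
print(recall_at_k(rels, 1))  # only the second question hits in top-1 -> 0.5
```

The same counting logic underlies MRR@10 on MSMARCO: the relevance list is simply truncated at rank 10 before taking the first hit.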
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "objective", "objective", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", 
"abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
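The cross-batch negative sampling described in the RocketQA text above can be sketched as a toy simulation. This is not the paper's FleetX/PaddlePaddle implementation: the all-gather across GPUs is imitated by simple list concatenation, and the embeddings are invented for illustration. The point is the counting argument: with G GPUs each holding a batch of B question-passage pairs, in-batch sampling gives each question B-1 negatives, while gathering passage representations across GPUs gives B*G-1.

```python
# Toy simulation of cross-batch negatives: score one GPU's questions
# against passages gathered from ALL GPUs, so each question sees
# B*G - 1 negatives instead of the in-batch B - 1.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scores_with_cross_batch(questions_per_gpu, passages_per_gpu, gpu):
    """The all-gather of passage embeddings is simulated by flattening
    the per-GPU passage lists into one global candidate pool."""
    gathered = [p for gpu_passages in passages_per_gpu for p in gpu_passages]
    return [[dot(q, p) for p in gathered] for q in questions_per_gpu[gpu]]

# Toy setup: G=2 GPUs, B=2 question-passage pairs per GPU, 3-dim embeddings.
qs = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
      [[0.0, 0.0, 1.0], [1.0, 1.0, 0.0]]]
ps = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
      [[0.0, 0.0, 1.0], [0.5, 0.5, 0.0]]]

B, G = 2, 2
scores = scores_with_cross_batch(qs, ps, gpu=0)
print(len(scores[0]))  # B*G = 4 candidate passages per question
print(B * G - 1)       # 3 negatives per question, versus B - 1 = 1 in-batch
```

In the real system the gathered embeddings stay differentiable (FleetX provides a differentiable all-gather), which is why the extra negatives come at negligible communication cost: the passage embeddings are computed once per GPU and only shared.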
[ "On social media platforms, hateful and offensive language negatively impacts the mental well-being of users and the participation of people from diverse backgrounds.", "Automatic methods to detect offensive language have largely relied on datasets with categorical labels.", "However, comments can vary in their degree of offensiveness.", "We create the first dataset of English language Reddit comments that has fine-grained, real-valued scores between -1 (maximally supportive) and 1 (maximally offensive).", "The dataset was annotated using Best-Worst Scaling, a form of comparative annotation that has been shown to alleviate known biases of using rating scales.", "We show that the method produces highly reliable offensiveness scores.", "Finally, we evaluate the ability of widely-used neural models to predict offensiveness scores on this new dataset.", "Social media platforms serve as a medium for the exchange of ideas on a range of topics, from the personal to the political.", "This exchange can, however, be disrupted by offensive or hateful language.", "Such language is pervasive online (Statista, 2020b), and exposure to it may have numerous negative consequences for the victim's mental health (Munro, 2011).", "Automated offensive language detection has thus been gaining interest in the NLP community, as a promising direction to better understand the nature and spread of such content.", "There are several challenges in the automatic detection of offensive language (Wiedemann et al., 2018).", "The NLP community has adopted various definitions for offensive language, classifying it into specific categories.", "(Both authors contributed equally.)", "For example, Waseem and Hovy (2016) classified comments as racist, sexist, or neither; Davidson et al. (2017) as hate-speech, offensive but not hate-speech, or neither offensive nor hate-speech; and Founta et al. (2018) as abusive, hateful, normal, or spam.", "Schmidt and Wiegand (2017); Fortuna and Nunes (2018); Mishra et al. 
(2019); Kiritchenko and Nejadgholi (2020) summarize the different definitions.", "However, these categories have significant overlaps with each other, creating ill-defined boundaries, thus introducing ambiguity and annotation inconsistency (Founta et al., 2018).", "A further challenge is that after encountering several highly offensive comments, an annotator might find subsequent moderately offensive comments to not be offensive (de-sensitization) (Kurrek et al., 2020; Soral et al., 2018).", "At the same time, existing approaches do not take into account that comments can be offensive to different degrees.", "Knowing the degree of offensiveness of a comment has practical implications when taking action against inappropriate behaviour online, as it allows for a more fine-grained analysis and prioritization in moderation.", "The representation of the offensive class in a dataset is often boosted using different strategies.", "The most common strategy used is key-word-based sampling.", "This results in datasets that are rich in explicit offensive language (language that is unambiguous in its potential to be offensive, such as language using slurs or swear words (Waseem et al., 2017)) but lack cases of implicit offensive language (language with its true offensive nature obscured due to a lack of unambiguous swear words, the usage of sarcasm or offensive analogies, and others (Waseem et al., 2017; Wiegand et al., 2021)) (Waseem, 2016; Wiegand et al., 2019).", "Moreover, key-word-based sampling often results in spurious correlations (e.g., sports-related expressions such as announcer and sport occur very frequently in offensive tweets).", "Lastly, existing datasets consider offensive comments in isolation from the wider conversation of which they are a part.", "Offensive language is, however, inherently a social phenomenon, and its analysis has much to gain from taking the conversational context into account (Gao and Huang, 2017).", "In this paper, we present the first dataset of 
6000 English language Reddit comments that has fine-grained, real-valued normative offensiveness scores between -1 (maximally supportive) and 1 (maximally offensive).", "For the first time, we use comparative annotations to detect offensive language.", "In its simplest form, comparative annotation involves giving the annotators two instances at a time, and asking which exhibits the property of interest to a greater extent.", "This alleviates several annotation biases present in standard rating scales, such as scale-region bias (Presser and Schuman, 1996; Asaadi et al., 2019), and improves annotation consistency (Kiritchenko and Mohammad, 2017).", "However, instead of needing to annotate N instances, one now needs to annotate N^2 instance pairs, which can be prohibitive.", "Thus, we annotate our dataset using an efficient form of comparative annotation called Best-Worst Scaling (BWS) (Louviere, 1991; Louviere et al., 2015; Kiritchenko and Mohammad, 2016, 2017).", "By eliminating different offensiveness categories, treating offensiveness as a continuous dimension, and eliciting comparative judgments from the annotators (based on their understanding of what is offensive), we alleviate the issues regarding category definitions and arbitrary category boundaries discussed earlier.", "By obtaining real-valued offensiveness scores, different thresholds can be used in downstream applications to handle varying degrees of offensiveness appropriately.", "By framing the task as a comparative annotation task, we obtain consistent and reliable annotations.", "We also greatly mitigate issues of annotator de-sensitization, as one will still be able to recognize that one comment is more offensive than another, even if they think both comments are not that offensive.", "In contrast to existing resources, which provide annotations for individual comments, our dataset includes conversational context for each comment (i.e. 
the Reddit thread in which the comment occurred).", "We conduct quantitative and qualitative analyses of the dataset to obtain insights into how emotions, identity terms, and swear words are related to offensiveness.", "Finally, we benchmark several widely-used neural models in their ability to predict offensiveness scores on this new dataset.", "2 Related Work 2.1 Offensive Language Datasets Surveys by Schmidt and Wiegand (2017); Fortuna and Nunes (2018); Mishra et al. (2019); Vidgen and Derczynski (2020) discuss various existing datasets and their compositions in detail.", "Waseem and Hovy (2016); Davidson et al. (2017); Founta et al. (2018) created datasets based on Twitter data.", "Due to the prevalence of the non-offensive class in naturally-occurring data (Waseem, 2016; Founta et al., 2018), the authors devised techniques to boost the presence of the offensive class in the dataset.", "Waseem and Hovy (2016) used terms frequently occurring in offensive tweets, while Davidson et al. (2017) used a list of hate-related terms to extract offensive tweets from the Twitter search API.", "Park et al. (2018), Wiegand et al. (2019), and Davidson et al. (2019) show that the Waseem and Hovy (2016) dataset exhibits topic bias and author bias due to the employed sampling strategy.", "Founta et al. 
(2018) boosted the representation of the offensive class in their dataset by analysing the sentiment of the tweets and checking for the presence of offensive terms.", "In our work, we employ a hybrid approach, selecting our data in three ways: specific topics, emotion-related key-words, and random sampling.", "Past work has partitioned offensive comments into explicitly offensive (those that include profanity: swear words, taboo words, or hate terms) and implicitly offensive (those that do not include profanity) (Waseem et al., 2017; Caselli et al., 2020a; Wiegand et al., 2021).", "Some other past work has defined explicitly and implicitly offensive instances a little differently: Sap et al. (2020) considered factors such as obviousness, intent to offend, and biased implications; Breitfeller et al. (2019) considered factors such as the context and the person annotating the instance; and Razo and Kubler (2020) considered the kind of lexicon used.", "Regardless of the exact definition, implicit offensive language, due to a lack of lexical cues, is harder to classify not only for computational models, but also for humans.", "In our work, we consider implicitly offensive comments to be those offensive comments that do not contain any swear words.", "Wulczyn et al. (2016, 2017) created three different datasets from Wikipedia Talk pages, focusing on aggression, personal attacks, and toxicity.", "The comments were sampled at random from a large dump of English Wikipedia, and boosted by including comments from blocked users.", "For the personal attacks dataset, Wulczyn et al. 
(2016) used two different kinds of labels: ED (empirical distribution) and OH (one-hot).", "In the case of ED, the comments were assigned real-valued scores between 0 and 1 representing the fraction of annotators who considered the comment a personal attack.", "While these labels were introduced to create a separation between the nature of comments with a score of 1.0 and those with a score of 0.6 (which would otherwise both be classified as attacks), they are discrete.", "In our work, using the BWS comparative annotation setup, we assign fine-grained continuous scores to comments to denote their degree of offensiveness.", "BWS was proposed by Louviere (1991).", "Kiritchenko and Mohammad (2017) have experimentally shown that BWS produces more reliable fine-grained scores than scores acquired using rating scales.", "In the BWS annotation setup, the annotators are given an n-tuple (where n > 1, and commonly n = 4), and asked which item is the best and which is the worst (best and worst correspond to the highest and the lowest with respect to a property of interest).", "Best-worst annotations are particularly efficient when using 4-tuples, as each annotation results in inequalities for 5 of the 6 item pairs.", "For example, a 4-tuple with items A, B, C, and D, where A is the best and D is the worst, results in the inequalities: A > B, A > C, A > D, B > D, and C > D. 
Real-valued scores of association between the items and the property of interest are calculated from the best-worst annotations for a set of 4-tuples (Orme, 2009; Flynn and Marley, 2014).", "The scores can be used to rank items by the degree of association with the property of interest.", "Within the NLP community, BWS has thus far been used only for creating datasets for relational similarity (Jurgens et al., 2012), word-sense disambiguation (Jurgens, 2013), word-sentiment intensity (Kiritchenko et al., 2014), phrase sentiment composition (Kiritchenko and Mohammad, 2016), and tweet-emotion intensity (Mohammad and Bravo-Marquez, 2017; Mohammad and Kiritchenko, 2018).", "Using BWS, we create the first dataset with degree-of-offensiveness scores for social media comments.", "We extracted Reddit data from the Pushshift repository (Baumgartner et al., 2020) using Google BigQuery.", "Reddit is a social news aggregation, web content rating, and discussion website.", "It contains forums called subreddits dedicated to specific topics.", "Users can make a post on a subreddit to start a discussion.", "Users can comment on existing posts or comments to participate in the discussion.", "As users can also reply to a comment, the entire discussion has a hierarchical structure called the comment thread.", "We divided the extracted comments into 3 categories based on their subreddit source: 1. Topics (50%): Contains comments from topic-focused subreddits: AskMen, AskReddit, TwoXChromosomes, vaxxhappened, worldnews, worldpolitics.", "These subreddits were chosen to cover a diverse range of topics.", "AskReddit, vaxxhappened, worldnews, and worldpolitics discuss generic themes.", "TwoXChromosomes contains women's perspectives on various topics and AskMen contains men's perspectives.", "2. ChangeMyView (CMV) (25%): The CMV subreddit (with over a million users) has posts and comments on controversial topics.", "3. 
Random (25%): Contains comments from random subreddits.", "We selected 808 posts from the subreddits based on criteria such as date, thread length, and post length.", "(Further details are in Appendix A.1.)", "We took the first 25 and the last 25 comments per post (skipping comments that had [DELETED] or [REMOVED] as the comment body).", "The first responses are likely to be most relevant to the post.", "The final comments indicate how the discussion ended.", "We sampled 6000 comments from this set for annotation.", "The goal of the sampling was to increase the proportion of offensive and emotional comments.", "Emotions are highly representative of one's mental state, which in turn is associated with one's behaviour (Poria et al., 2019).", "For example, Jay and Janschewitz (2008) show that people tend to swear when they are angry, frustrated, or anxious.", "Studies have shown that the primary dimensions of emotion are valence, arousal, and dominance (VAD) (Osgood et al., 1957; Russell, 1980, 2003).", "Valence is the positive-negative or pleasure-displeasure dimension.", "Arousal is the excited-calm or active-passive dimension.", "Dominance is the powerful-weak or 'have full control'-'have no control' dimension (Mohammad, 2018).", "To boost the representation of offensive and emotional comments in our dataset, we up-sampled comments that included low-valence (highly negative) words and those that included high-arousal words (as per the NRC VAD lexicon (Mohammad, 2018)).", "The manually constructed NRC VAD lexicon includes 20,000 English words, each with a real-valued score between 0 and 1 in the V, A, and D dimensions.", "In order to do this upsampling, we first defined the valence score of each comment as the average valence score of the negative words within the comment (A negative word is defined as a word with a valence score ≤ 0.25 in the VAD lexicon.) 
Similarly, we defined the arousal score for a comment as the average arousal score of the high-arousal words in each comment.", "(A high-arousal word is defined as a word with an arousal score ≥ 0.75.)", "We selected comments from the comment pool such that 50% were from the Topics category, 25% from the CMV category, and 25% from the Random category.", "Within each category, 33% of the comments were those that had the lowest valence scores, 33% were those that had the highest arousal scores, and the remaining were chosen at random.", "The perception of 'offensiveness' of a comment can vary from person to person.", "Therefore, we used crowdsourcing to annotate our data.", "Crowdsourcing helps us get an aggregation of varied perspectives rather than expert opinions, which can leave out offensiveness in a comment that lies outside the 'typical' offensiveness norms (Blackwell et al., 2017).", "We carried out all the annotation tasks on Amazon Mechanical Turk (AMT).", "Due to the strong language, an adult content warning was issued for the task.", "Reddit is most popular in the US, which accounts for 50% of its desktop traffic (Statista, 2020a).", "Therefore, we restricted annotators to those residing in the US.", "To maintain the quality of annotations, only annotators with a high approval rate were allowed to participate.", "(Footnote 2: In some initial pilot experiments, we found this approach of sampling low-valence and high-arousal comments to result in a greater number of offensive comments.)", "We followed the procedure described in Kiritchenko and Mohammad (2016) to obtain BWS annotations.", "Annotators were presented with 4 comments (a 4-tuple) at a time and asked to select the comment that is most offensive (least supportive) and the comment that is least offensive (most supportive).", "We randomly generated 2N distinct 4-tuples (where N is the number of comments in the dataset), such that each comment was seen in eight different 4-tuples and no two 4-tuples had 
more than 2 items in common.", "We used the script provided by Kiritchenko and Mohammad (2016) to obtain the 4-tuples to be annotated.", "Kiritchenko and Mohammad (2016) show that in a word-level sentiment task, using just three annotations per 4-tuple produces highly reliable results.", "However, since we work with long comments and a relatively more difficult task, we got each tuple annotated by 6 annotators.", "Since each comment is seen in 8 different 4-tuples, we obtain 8 × 6 = 48 judgements per comment.", "In our instructions to the annotators, we defined offensive language as comments that include but are not limited to [being hurtful (with or without the usage of abusive words)/ being intentionally harmful/ treating someone improperly/ harming the 'self-concept' of another person/ aggressive outbursts/ name calling/ showing anger and hostility/ bullying/ hurtful sarcasm].", "We also encouraged the annotators to follow their instincts.", "By framing the task in terms of comparisons and providing a broad definition of offensiveness, we avoided introducing artificial categories and elicited responses guided by the annotators' intuition of the language.", "Detailed annotation instructions are made publicly available (Figure 5 in Appendix A.2).", "A sample questionnaire is shown in Figure 6 in Appendix A.2.", "For quality control purposes, we manually annotated around 5% of the data ourselves beforehand.", "We will refer to these instances as gold questions.", "The gold questions were interspersed with the other questions.", "If a worker's accuracy on the gold questions fell below 70%, they were refused further annotation and all of their annotations were discarded.", "(Footnote 3: http://saifmohammad.com/WebPages/BestWorst.html. Footnote 4: AMT task interface with instructions: https://hadarishav.github.io/Ruddit/)", "(Table 1 data: # Comments, 6000; # Annotations per tuple, 6; # Annotations, 95,255; # Annotators, 725; SHR Pearson, 0.8818 ± 0.0023; SHR Spearman, 0.8612 ± 0.0029.)", "The discarded annotations were", "Table 1: 
Ruddit annotation statistics and split-half reliability (SHR) scores.", "published again for re-annotation.", "We received a total of 95,255 annotations by 725 crowd workers.", "The BWS responses were converted to scores using a simple counting procedure (Orme, 2009; Flynn and Marley, 2014).", "For each item, the score is the proportion of times the item is chosen as the most offensive minus the proportion of times the item is chosen as the least offensive.", "We release the aggregated annotations as well as the individual annotations of Ruddit, to allow further work on examining and understanding the variability.", "5 4.3 Annotation Reliability We cannot use standard inter-annotator agreement measures to ascertain the quality of comparative annotations.", "The disagreement that arises in tuples having two items that are close together in their degree of offensiveness is a useful signal for BWS (helping it give similar scores to the two items).", "The quality of annotations can be measured by measuring the reproducibility of the end result if repeated manual annotations from multiple annotators can produce similar rankings and scores, then, one can be confident about the quality of annotations received.", "To assess this reproducibility, we computed average split-half reliability (SHR) values over 100 trials.", "SHR is a commonly used approach to determine consistency in psychological studies.", "For computing SHR values, the annotations for each 4-tuple were randomly split in two halves.", "Using these two splits, two sets of rankings were determined.", "We then calculated the correlation values between these two sets.", "This procedure was repeated 100 times and the correlations were averaged.", "A high correlation value indicates that the annotations are of good quality.", "Table 1 shows the SHR for our annotations.", "SHR scores of over 0.8 indicate substantial reliability.", "In this section, we analyze various aspects of the data, including: the distribution 
of scores, the association with identity terms, the relationship with emotion dimensions, the relationship with data source, and the role of swear words.", "Footnote 5: We provide the comment IDs and not the comment body, in accordance with the GDPR regulations.", "The comment body can be extracted using the Reddit API.", "Distribution of Scores: Figure 1 shows a histogram of the frequency of comments vs. degree of offensiveness, over 40 equi-spaced score bins of size 0.05.", "We observe a normal distribution.", "To analyze the data, we placed the comments in 5 equi-spaced score bins of size 0.4 (bin 1: −1.0 to −0.6, bin 2: −0.6 to −0.2, and so on).", "Table 2 shows some comments from the dataset (more examples can be found in Appendix A.3, Table 6).", "We observed that bin 1 primarily contains supportive comments while bin 2 shows a transition from supportive to neutral comments.", "Bin 3 is dominated by neutral comments, but as the score increases the comments become potentially offensive, and bins 4 & 5 predominantly contain offensive comments.", "It is interesting to note that bin 4 contains some instances of implicit offensive language such as 'You look like a lesbian mechanic who has a shell collection'.", "Wiegand et al. (2021) explore the category of such implicitly abusive comparisons in depth.", "More examples of implicitly offensive comments present in our dataset can be found in Table 2 and Table 6 (in Appendix A.3).", "To explore whether specific bins capture specific topics or keywords, we calculated Pointwise Mutual Information (PMI) scores of all the unique words in the comments (excluding stop words) with the five score bins.", "(Table 2 example, bin 1: 'Don't worry, she's going to be fine.')", "Table 3 shows the top-scoring words for each bin.", "We observed that bins 1, 2, and 3 exhibit a strong association with supportive or neutral words, while bins 4 and 5 show a strong association with swear words and identity terms commonly found in offensive contexts.", "Identity terms: A common criticism of the existing offensive language datasets is that in those datasets, certain identity terms (particularly those referring to minority groups) occur mainly in texts that are offensive (Sap et al., 2019; Davidson et al., 2019; Wiegand et al., 2019; Park et al., 2018; Dixon et al., 2018).", "This leads to a high association of targeted minority groups (such as Muslims, females, black people and others) with the offensive class(es).", "This bias, in turn, is captured by the computational models trained on such datasets.", "As mentioned earlier, in Ruddit, certain words such as gay, trans, male, female, black, white were found to exhibit a relatively higher association with the offensive bins than with the supportive bins.", "In order to probe the effect of this on the computational models, we created a variant of Ruddit by replacing all the identity terms (from the list given in Appendix A.4) in the comments with the [group] token and observed the effect on the models' performance.", "Offensiveness vs. 
emotion: As discussed earlier, our emotions impact the words we use in text.", "We examined this relationship quantitatively using Ruddit and the NRC VAD Lexicon (which has intensity scores along the valence, arousal, and dominance dimensions).", "For each comment in Ruddit, we calculated three scores that captured the intensities of the V, A, and D words (the averages of the intensities of the V/A/D words in the comment), using the entire lexicon.", "We then determined the correlation between each of the three scores and the degree of offensiveness.", "Only comments containing at least 4 words from the VAD lexicon were considered for the score and correlation calculation.", "A total of 4831 comments met the criteria.", "See Table 4. From the table, we can observe that valence is weakly inversely correlated, arousal is weakly positively correlated, and dominance does not exhibit a notable correlation with offensiveness.", "This behaviour can also be observed in Figure 2, which shows a plot of the average V, A, and D scores of comments in the five equi-spaced offensiveness-score bins.", "Note the clear trend that as we look at bins with more offensive comments, the average valence of the comments decreases and the average arousal increases.", "Offensiveness vs. 
data source: As mentioned earlier, comments in our dataset come from three different sources: Topics, CMV, and Random.", "Figure 3 shows the distribution of comments from each source over the score bins.", "We observed that comments from Topics have near-equal representation on both sides of the scale, while for the other two sources, comments are more prevalent in the supportive bins.", "The higher representation of comments from Topics than the other two sources in the offensive bins is likely due to the fact that the Topics category includes subreddits such as worldnews and worldpolitics.", "Discussions on these subreddits cover controversial topics and lead to the usage of offensive language.", "We observed that worldnews and worldpolitics indeed have high representation in the offensive bins (Figure 8 in Appendix A.4).", "Swear words: We identified 868 comments in our dataset that contain at least one swear word from the cursing lexicon (Wang et al., 2014).", "Comments containing swear words can have a wide range of offensiveness scores.", "To visualize the distribution, we plot a histogram of the comments containing swear words vs. 
degree of offensiveness (see Figure 7 in Appendix A.4).", "The distribution is skewed towards the offensive end of the scale.", "An interesting observation is that some comments with low offensiveness scores contain phrases using swear words to express enthusiasm or to add emphasis, for example 'Hell yes', 'sure as hell love it', 'uncomfortable as shit', and others.", "To study the impact of comments containing swear words on computational models, we created another variant of Ruddit in which we removed all the comments containing at least one swear word.", "We refer to this variant as the no-swearing dataset.", "This dataset contains 5132 comments.", "We analyse the models' performance on this dataset in the next section.", "Offensiveness in different score ranges: It is possible that comments in the middle region of the scale may be more difficult for the computational models.", "Thus, we created a subset of Ruddit containing comments with scores from −0.5 to 0.5.", "We call this subset (of 5151 comments) the reduced-range dataset.", "We discuss the models' performance on this dataset in the next section.", "In this section, we present benchmark experiments on Ruddit and its variants by implementing some commonly used model architectures.", "The task of the models was to predict the offensiveness score of a given comment.", "We performed 5-fold cross-validation for each of the models.", "Footnote 6: Since we have a linear regression task, we created folds using sorted stratification (Lowe, 2016) to ensure that the distribution of all the partitions is similar.", "6.1 Models: Bidirectional LSTM: We fed pre-trained 300-dimensional GloVe word embeddings (Pennington et al., 2014) to a 2-layered BiLSTM to obtain a sentence representation (using a concatenation of the last hidden states from the forward and backward directions).", "This sentence representation was then passed to a linear layer with a tanh activation to produce a score between −1 and 1.", "We used Mean Squared Error (MSE) loss as the objective function, Adam with a 0.001 learning rate as the optimizer, a hidden dimension of 256, a batch size of 32, and a dropout of 0.5.", "The model was trained for 7 epochs.", "BERT: We fine-tuned BERT base (Devlin et al., 2019).", "We added a regression head containing a linear layer to the pre-trained model.", "We used MSE loss as the objective function, a batch size of 16, and a learning rate of 2e−5 (other hyperparameters same as in Devlin et al. (2019)).", "We used the AdamW optimizer with a linear learning rate scheduler with no warm-up steps.", "The model was trained for 3 epochs.", "(More details in Appendix A.5.)", "HateBERT: HateBERT (Caselli et al., 2020b) is a version of BERT pretrained for abusive language detection in English.", "HateBERT was trained on RAL-E, a large dataset of English-language Reddit comments from communities banned for being offensive or hateful.", "HateBERT has been shown to outperform the general-purpose BERT model on the offensive language detection task when fine-tuned on popular datasets such as OffensEval 2019 (Zampieri et al., 2019), AbusEval (Caselli et al., 2020a), and HatEval (Basile et al., 2019).", "We report Pearson correlation (r) and MSE, averaged over all folds.", "The performance of the models on Ruddit and its variants is shown in Table 5.", "Note that the performance values on the no-swearing and the reduced-range datasets are not directly comparable to the performance values on the full Ruddit as their score range is different.", "We can see that on all the datasets, the HateBERT model performs the best, followed by the BERT model.", "Interestingly, the model performance (for all models) does not change substantially when trained on Ruddit or the identity-agnostic dataset.", "This indicates 
that the computational models are not learning to benefit from the association of certain identity terms with a specific range of scores on the offensiveness scale.", "The models show a performance drop on the no-swearing dataset, which suggests that swear words are useful indicators of offensiveness and that the comments containing them are easier to classify.", "Yet, the fact that the models still obtain a performance of up to 0.8 (r) demonstrates that they need to, and are able to, learn other types of offensiveness features.", "It is also worth mentioning that even if they encounter swear words in a comment, the task is not simply to label the comment as offensive but to provide a suitable score.", "Finally, the models obtained a performance of up to 0.78 (r) on the reduced-range dataset, which shows that even if the comments from the extreme ends of the offensiveness scale are removed, Ruddit still presents an interesting and feasible offensiveness scoring task.", "Error Analysis: Figure 4 shows the squared error values of the 3 models over the offensiveness score range in Ruddit.", "As expected, for all the models, the error in predictions is lower on both extreme ends of the scale than in the middle region.", "Comments with very high or very low offensiveness scores are rich in obvious linguistic cues, making it easier for the computational models to predict scores.", "Most of the non-obvious, indirect, implicitly offensive, and neutral comments should be present in the middle region of the offensiveness scale, making them more difficult for the models.", "It is interesting to observe that HateBERT, unlike the other two models, does not have high error values for samples within the score range 0.25 to 0.75.", "This indicates that HateBERT is efficient in dealing with offensive language that does not lie at the extreme offensive end.", "BiLSTM seems relatively less accurate for samples in the supportive range (−0.75 to −0.25).", "This could be attributed to the less complex model architecture and the usage of GloVe embeddings.", "Footnote 7: It should be noted that since the list of identity terms and the cursing lexicon we use are not exhaustive, our conclusions are limited to the scope of the respective lists.", "We presented the first dataset of online comments annotated for their degree of offensiveness.", "We used a comparative annotation technique called Best-Worst Scaling, which addresses the limitations of traditional rating scales.", "We showed that the ratings obtained are highly reliable (SHR Pearson r = 0.88).", "We performed data analysis to gain insight into the relation of emotions, data sources, identity terms, and swear words with the offensiveness scores.", "We showed that valence is inversely correlated with offensiveness and arousal is directly correlated with offensiveness.", "Finally, we presented benchmark experiments to predict the offensiveness score of a comment on our dataset.", "We found that computational models are not benefiting from the association of identity terms with a specific range of scores on the offensiveness scale.", "In future work, it would be interesting to explore the use of conversational context in computational modeling of offensiveness, as well as to study the interaction between offensiveness and emotions in more depth.", "We make our dataset freely available to the research community.", "This research was funded by the Facebook Online Safety Benchmark Research award for the project 'A Benchmark and Evaluation Framework for Abusive Language Detection'.", "We created Ruddit to study, understand, and explore the nature of offensive language.", "Any such dataset might also be used to create automatic offensive language detection systems.", "While we realise the importance of such systems, we also accept that any moderation of online content is a threat to free speech.", "Offensive language datasets or automatic systems can be misused to stifle disagreeing 
voices.", "Our intent is solely to learn more about the use of offensive language, to learn about its various degrees, to explore how computational models can be enabled to watch for and contain offensive language, and to encourage others to do so.", "We follow the format provided by Bender and Friedman (2018) to discuss the ethical considerations for our dataset.", "Institutional Review: This research was funded by the Facebook Online Safety Benchmark Research award.", "The primary objective of this research award is the creation of publicly available benchmarks to improve online safety.", "This award does not directly benefit Facebook in any way.", "This research was reviewed by Facebook for various aspects, in particular: Legal Review: evaluates whether the research to be undertaken, or the research performed, could violate intellectual property rights.", "Policy and Ethics Review: evaluates whether the research to be undertaken aligns with best ethics practices.", "This includes several aspects such as mitigating harm to the people involved, improving data privacy, and informed consent.", "Data Redistribution / User Privacy: We extracted our data from the Pushshift Reddit dataset made publicly available by Baumgartner et al. 
(2020) for research purposes.", "The creators of the Pushshift Reddit dataset have provisions to delete comments from their dataset upon a user's request.", "We release data in a manner that is GDPR compliant.", "We do not provide any user-specific information.", "We release only the comment IDs and post IDs.", "Reddit's Terms of Service do not prohibit the distribution of IDs.", "The researchers using the dataset need to retrieve the data using the Reddit API.", "Speaker and Annotator Demographic: No specific speaker demographic information is available for the comments included in Ruddit.", "According to an October 2020 survey published by Statista (Statista, 2020a), 50% of Reddit's desktop traffic is from the United States.", "They also state that, among internet users in the US, 21% of those aged 18-24, 23% of those aged 25-29, and 14% of those aged 30-49 use Reddit.", "We restricted annotators to those residing in the US.", "A total of 725 crowd-workers participated in the task.", "Apart from the country of residence, no other information is known about the annotators.", "The annotators are governed by AMT's privacy policy.", "Pew Research Center conducted a demographic survey of AMT workers in 2016.", "In this survey, 3370 workers participated.", "They found that 80% of the crowd-workers on AMT are from the US (PRC, 2020).", "More information about the workers who participated in their survey can be found in their article.", "It is important to include the opinions of targeted minorities and marginalized groups when dealing with the annotation of offensive language (Kiritchenko and Nejadgholi, 2020; Blackwell et al., 2017).", "However, we did not have our data annotated by the specific target demographic because it poses certain challenges.", "For example: identification of the target of offensive language; finding people of the target demographic group who are willing to annotate offensive language; and others.", "Annotating such offensive data can be 
even more traumatizing for the members of the targeted minorities.", "Finally, Ruddit was created with the intention of looking at wide-ranging offensive language of various degrees, as opposed to detecting offensive language towards specific target groups.", "Annotation Guidelines: We created our annotation guidelines drawing inspiration from the community standards set for offensive language on several social media platforms.", "These standards are made after thorough research and feedback from the community.", "However, we are aware that the definitions in our guidelines are not representative of all possible perspectives.", "Footnote 8: https://www.reddit.com/wiki/api-terms; footnote 9: https://www.mturk.com/help.", "The degree-of-offensiveness scores that we provide in Ruddit are a representation of what the majority of our annotators think.", "We would like to emphasize that the scores provided are not the correct or the only appropriate values of offensiveness.", "Different individuals and demographic groups may find the same comment to be more or less offensive than the scores provided.", "Impact on Annotators: Annotation of harsh and offensive language might negatively impact the mental health of the annotators (Vidgen et al., 2019; Roberts, 2016, 2019; Kiritchenko and Nejadgholi, 2020).", "The following minimized the negative mental impact on the annotators participating in our task: The comments that we included in our dataset are pre-moderated by Reddit's admins and subreddit-specific moderators.", "Any comments that do not comply with Reddit's content policy are not included.", "Our goal was to annotate posts one sees on social media (after content moderation).", "Unlike some past work, we do not limit the data to include only negative comments.", "We included a large sample of posts that one normally sees on social media, and annotated it for degree of supportiveness or degree of offensiveness.", "AMT provides a checkbox where requesters can indicate that some content in the task 
may be offensive.", "These tasks are not shown to annotators who have specified in their profile that they do not wish to see such content.", "We used the checkbox to indicate that this task has offensive content.", "We explicitly warned the annotators about the content of the annotation, and advised worker discretion.", "We provided detailed annotation instructions and informed the annotators about how the annotations for offensive language will be used for studying and understanding offensive language.", "The annotation of our data was crowdsourced, allowing for a large number of raters (725).", "This reduces the number of comments seen per rater.", "We also placed a limit on how many posts one may annotate.", "Annotators were not allowed to submit more than 5% of the total assignments.", "There are just 25 comments in the top 10% of the offensiveness score range.", "Thus, most annotators (>99.95%) do not see even one such comment.", "Footnote 10: https://www.redditinc.com/policies/content-policy.", "Identity Terms: As discussed in Section 5, in Ruddit, certain identity terms show a higher association with offensive comments than with supportive comments.", "In order to address this, we created a variant of Ruddit in which we replaced all the identity terms (from the list given in Appendix A.4) with the [group] token.", "We call this variant the identity-agnostic dataset.", "We release the code for creating this variant from the original dataset.", "We evaluated our computational models on this variant and observed that the models did not learn to benefit from the association of the identity terms with the offensive comments.", "Computational Models: The models reported in this paper are not intended to fully automate offensive content moderation or to make judgements about specific individuals.", "Owing to privacy concerns, we do not model user history to predict offensiveness scores (Mitchell et al., 2018).", "Feedback: We are aware that our dataset is subject to the inherent bias of the data, the 
sampling procedure and the opinion of the annotators who annotated it.", "Finally, we acknowledge that this is not a comprehensive listing of all the ethical considerations and limitations.", "We welcome feedback from the research community and anyone using our dataset." ]
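The simple Best-Worst Scaling counting procedure used above to turn the BWS judgements into per-comment scores (the proportion of times an item is chosen as most offensive minus the proportion of times it is chosen as least offensive) can be sketched as follows. This is a minimal illustration; the function and data-structure names are hypothetical and not taken from the released Ruddit code.

```python
from collections import defaultdict

def bws_scores(judgements):
    """Convert Best-Worst Scaling judgements into item scores.

    `judgements` is a list of (items, best, worst) records, one per
    annotated 4-tuple judgement: `items` are the items shown, `best`
    the item chosen as most offensive, `worst` the least offensive.
    An item's score is (#times chosen best - #times chosen worst)
    divided by the number of judgements it appeared in, so scores
    lie in [-1, 1].
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    seen = defaultdict(int)
    for items, b, w in judgements:
        for item in items:
            seen[item] += 1
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}
```

With 48 judgements per comment (8 tuples × 6 annotators, as described above), every comment receives a score in [−1, 1], matching the score range of the released dataset.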
[ "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "method", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", 
"abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "result", "method", "result", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method" ]
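The split-half reliability (SHR) procedure described for the Ruddit annotations above (randomly split each item's annotations in two halves, score each half, correlate the two resulting score vectors, repeat, and average) can be sketched as follows. This is a minimal illustration with hypothetical names; the aggregator `score_fn` is a stand-in, whereas the paper splits the per-4-tuple BWS judgements and recomputes BWS scores for each half.

```python
import random
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def split_half_reliability(annotations, score_fn, trials=100, seed=0):
    """Average split-half reliability over `trials` random splits.

    `annotations` maps item -> list of per-annotator judgements;
    `score_fn` aggregates one half of an item's judgements into a
    single score.  Each trial shuffles every item's judgements,
    splits them in two halves, scores both halves, and correlates
    the two score vectors across items.
    """
    rng = random.Random(seed)
    items = sorted(annotations)
    corrs = []
    for _ in range(trials):
        half1, half2 = [], []
        for item in items:
            anns = annotations[item][:]
            rng.shuffle(anns)
            mid = len(anns) // 2
            half1.append(score_fn(anns[:mid]))
            half2.append(score_fn(anns[mid:]))
        corrs.append(pearson(half1, half2))
    return mean(corrs)
```

A correlation near 1 across trials indicates that independent halves of the annotations reproduce the same ranking, which is the sense in which the SHR values of over 0.8 reported above indicate substantial reliability.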
[ "Understanding causality has vital importance for various Natural Language Processing (NLP) applications.", "Beyond the labeled instances, conceptual explanations of the causality can provide a deep understanding of the causal facts to facilitate the causal reasoning process.", "However, such explanation information still remains absent in existing causal reasoning resources.", "In this paper, we fill this gap by presenting a human-annotated explainable CAusal REasoning dataset (e-CARE), which contains over 21K causal reasoning questions, together with natural-language-formed explanations of the causal questions.", "Experimental results show that generating valid explanations for causal facts still remains especially challenging for the state-of-the-art models, and the explanation information can be helpful for promoting the accuracy and stability of causal reasoning models.", "Causal reasoning is one of the most central cognitive abilities of human beings (Waldmann and Hagmayer, 2013; Jonassen et al., 2008), which enables one to understand the observed facts and predict the future.", "However, although recent causal reasoning models have achieved impressive performances on certain hand-crafted datasets, there still remains a considerable gap compared to human performances, as they cannot achieve stable performances across different datasets and are susceptible to adversarial attacks (McCoy et al., 2019; Poliak et al., 2018; Gururangan et al., 2018).", "One key factor leading to such a drastic contrast is that present causal reasoning models only learn to induce empirical causal patterns that are predictive of the label, while human beings seek a deep and conceptual understanding of the causality to explain the observed causal facts.", "Figure 1: Conceptual explanations of observed causality can be helpful for understanding the unseen causal facts.", "The conceptual explanations can not only serve as a touchstone to examine whether the underlying causal mechanism has been thoroughly understood, but can also in turn support the causal reasoning process.", "As illustrated in Figure 1, observing the causal fact C1: 'adding rock into hydrochloric acid' causes E1: 'rock dissolved', one may further ask why such a causal relationship exists and reach the plausible conceptual explanation that 'acid is corrosive', which goes beyond the isolated facts and reaches the conceptual nature to reveal the principle of the causal mechanism.", "However, despite the critical importance of conceptual explanations in causal reasoning, there is still a lack of such an explainable causal reasoning dataset.", "To fill this gap, we contribute an explainable CAusal REasoning dataset (e-CARE), together with a new causal explanation generation task and a novel Causal Explanation Quality (CEQ) evaluation metric.", "The e-CARE dataset is constructed by crowd-sourcing and contains over 21K multiple-choice causal reasoning questions, which makes e-CARE the largest human-annotated commonsense causal reasoning dataset to the best of our knowledge.", "In addition to the causal reasoning question itself, e-CARE also provides a free-text-formed conceptual explanation for each causal question to explain why the causation exists.", "On this basis, we propose a new causal explanation generation task that requires models not only to choose the correct causal fact but also to generate the explanation for the choice.", "In addition, to directly measure the quality of generated explanations, we propose a novel causal explanation quality evaluation metric (namely, the CEQ score).", "Compared to conventional text generation evaluation metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), which mainly evaluate the textual or semantic similarity between generated explanations 
with golden annotations, CEQ score focuses on evaluating how much promotion an explanation can bring to understanding the causal mechanism.", "The dataset is publicly available at https: //github.com/Waste-Wood/e-CARE/ .", "Experimental results demonstrate that the causal questions of e-CARE are still challenging for the state-of-the-art (SOTA) pretrained language models, indicating the effectiveness of the e-CARE dataset in evaluating the causal learning ability of models.", "In addition, the explanation signal received in the training process can enhance the performance and the stability of the reasoning model, while the SOTA baselines still have trouble in explaining the causal facts at a conceptual level.", "These analyses highlight the importance of the conceptual explanations in causal reasoning, and suggest an avenue for future researches.", "Existing commonsense causal reasoning corpora differ in their annotation guidelines and how they are constructed: (1) whether the corpus is automatically constructed or built by human annotation; (2) whether the annotation unit of the corpus is word-level, phrase-level, or sentence-level.", "To obtain abundant causal knowledge, a natural way is extracting causal knowledge using heuristic rules from large-scale open-domain web text corpora (Luo et al., 2016; Li et al., 2020; Sap et al., 2019).", "However, the reporting bias may challenge both the coverage and quality of the extracted causal knowledge.", "Different from automatic construction, human annotation can endow datasets with higher precision.", "A line of work focuses on providing word-level causality knowledge (Girju et al., 2007; Mostafazadeh et al., 2016; Do et al., 2011; Hen-drickx et al., 2019).", "However, a word is not a complete semantic unit, which may limit the integrity of causal expressions and lead to ambi-Dataset Anno.", "guity.", "To address this issue, other datasets are constructed to provide phrase-level (Caselli and Vossen, 2017; Bethard and 
Martin, 2008; Mirza et al., 2014; Dunietz et al., 2017) and sentence-level (Ning et al., 2019; Roemmele et al., 2011) causal knowledge.", "Among these datasets, COPA (Roemmele et al., 2011) has become a widely adopted benchmark.", "Nevertheless, the size of COPA is rather limited, which may result in over-fitting and raise concerns about the confidence of the results.", "In this paper, we introduce an explainable CAusal REasoning dataset (e-CARE).", "As shown in Table 1, to the best of our knowledge, e-CARE is the largest human-annotated causal reasoning dataset.", "With more than 21,000 instances, the e-CARE dataset can serve as a more reliable benchmark.", "Furthermore, compared to previous work, e-CARE can provide additional explanation information, which plays a critical role in learning the underlying mechanism of causal knowledge.", "Recently, an increasing number of datasets has been proposed to address the explainability of textual inference tasks, such as textual entailment inference (Camburu et al., 2018), question-answering (QA) (DeYoung et al., 2019; Perez et al., 2019) and multi-hop QA (Ye et al., 2020).", "The form and content of the explanations vary with the nature of specific tasks.", "The QA task requires a model to answer the question based on evidence within given texts.", "Therefore, the explanation for this task should describe where and how an answer can be found (Wiegreffe and Marasovic, 2021).", "(Table 2 fragment: Causal Questions — Train 14,928, Dev 2,132, Test 4,264, Total 21,324.)", "The explanations can have various forms, including answer-bearing sentences (Perez et al., 2019), structured information connecting the question and answer (Hancock et al., 2018; Ye et al., 2020), or even human-annotated free-formed sentences (Camburu et al., 2018; Rajani et al., 2019).", "In contrast, the multi-hop QA task requires the model to infer the correct answer through multiple reasoning steps.", "Hence, the explanation of this task needs to provide
the specific reasoning paths (Wiegreffe and Marasovic, 2021; Jhamtani and Clark, 2020).", "Our work is quite different from previous work.", "We notice that all of this previous work only offers explanations for a specific question.", "In contrast, we aim to provide a conceptual understanding of causality, which has the potential to explain a set of related causal observations rather than only a specific causal fact.", "e-CARE contains a total of 21,324 instances, corresponding to 13,048 unique explanations.", "This also makes e-CARE the largest human-annotated commonsense causal reasoning benchmark.", "The corpus-level statistics of the e-CARE dataset are shown in Table 2.", "As shown in Table 3, each instance of the e-CARE dataset consists of two components: (1) a multiple-choice causal reasoning question, composed of a premise and two hypotheses, where one of the hypotheses can form a valid causal fact with the premise; (2) a conceptual explanation about the essential condition that enables the existence of the causal fact.", "For example, as Table 3 shows, the explanation points out the nature of copper, that Copper is a good thermal conductor, so that holding copper on fire will make fingers feel burnt immediately.", "(Table 3 example — Premise: Tom holds a copper block by hand and heats it on fire.)", "The appendix provides more discussion about the explanations within e-CARE.", "On this basis, we introduce two tasks: Causal Reasoning Task: We formulate the causal reasoning task as a multiple-choice task: given a premise event, one needs to choose the more plausible hypothesis from two candidates, so that the premise and the correct hypothesis form a valid causal fact.", "Explanation Generation Task: It requires the model to generate a free-text-formed explanation for a given causal fact (composed of a premise and the corresponding correct hypothesis).", "To construct the e-CARE dataset, we start by collecting statements that describe conceptual
understandings of world knowledge.", "Then, given a statement, we ask different annotators to generate causal facts that can be explained by the statement, and build causal questions based on these causal facts.", "This is because we hope to provide conceptual explanations with more generality, which can explain a set of correlated causal facts instead of applying only to a certain isolated causal fact.", "Moreover, the statements can serve as clues to help the annotators come up with causal facts.", "Collecting Potential Explanations: Two key issues arise in collecting statements as potential explanations: (1) what kind of statements can be potential conceptual explanations of the causal facts; (2) where to find the appropriate statements.", "For the first question, Jonassen et al. (2008) concluded that, in general, the explanation of causality mainly describes three categories of information: (1) the nature or attributes of the objects involved in the causal facts; (2) forces or actions that cause changes and drive transient motions; (3) the goals, intentions, motives or purposes of the causal agents.", "In addition, to be the conceptual explanation of a causal fact, a statement should involve a category of objects or people, rather than focus only on a specific object or person (Sembugamoorthy and Chandrasekaran, 1986).", "Following these principles, we notice that there are already several available knowledge bases containing statements about such generic world knowledge, including ConceptNet (Speer and Havasi, 2013), WordNet (Fellbaum, 2010), Atomic (Sap et al., 2019) and GenericsKB (Bhakthavatsalam et al., 2020).", "However, ConceptNet and WordNet are structured knowledge graphs, containing only triplet-structured statements with a limited number of predicates.", "The scope of Atomic is limited to the activities of human beings.", "Compared to these knowledge bases, GenericsKB is an open-domain, large-scale knowledge base, containing
rich generic world knowledge described in free-form text.", "Therefore, we collect the statements from GenericsKB to ensure the coverage and diversity of the potential explanations.", "Specifically, we filter out the statements in GenericsKB with low reliability, as well as the statements that may disobey the above-mentioned three principles.", "More details are provided in the Appendix.", "Thereafter, a total of 19,746 statements remain to form the potential explanation set, which is further provided to the annotators to generate the causal questions.", "Annotating Causal Reasoning Questions: Given the potential explanation set, annotators were recruited to generate corresponding causal questions.", "Specifically, a causal question is generated in two steps: First, an annotator was presented with a statement as a potential explanation, and was instructed to write a causal fact (composed of a cause and an effect), such that the causal fact can be interpreted by the given statement.", "In this step, a key issue is controlling the quality of the generated causal facts.", "Thus we provided illustrative examples to guide the annotators to avoid the following mistakes: (1) the created cause and effect are not in a valid causal relationship; (2) the created causal fact cannot be explained by the provided statement; (3) there are factual errors or imaginary contents in the created causal facts.", "In the causal fact generation process, each statement is randomly distributed to 1-3 annotators, so that we can find statements that could explain multiple causal facts.", "Note that, in this process, we do not assume that every statement is necessarily a valid explanation.", "In other words, we do not require the annotators to generate a causal fact for every given statement.", "Instead, we leave it to the judgment of the annotators.", "In this way, unreliable statements can be further excluded to improve the quality of our dataset.", "After the generation of causal facts,
an ask-for indicator a ∈ {cause, effect} was randomly generated, where a = cause (effect) means that the cause (effect) event is the hypothesis, and the effect (cause) event is the premise of the causal question, respectively.", "Then, given the ask-for indicator, in order to keep the grammar and writing style consistent, the same annotator was prompted to write a distractor cause (effect) as the implausible hypothesis according to the ask-for indicator.", "In this process, the annotators were instructed to create the implausible hypothesis as close as possible to the true hypothesis, while avoiding uninformative distractors (such as simply adding a not into the true hypothesis).", "A significant challenge in dataset construction is to avoid introducing superficial cues into the dataset (Gururangan et al., 2018; Poliak et al., 2018), i.e., unintended features that leak the label information.", "To address this issue, following Bhagavatula et al. (2019) and Sakaguchi et al.
(2020), we employ an adversarial filtering algorithm to replace the implausible hypotheses that can easily be distinguished from the correct hypotheses using superficial cues.", "More details about the adversarial filtering are provided in the Appendix.", "As Table 4 shows, after the adversarial filtering, without the existence of the premise, the SOTA pretrained language models can hardly distinguish the two candidate hypotheses, which indicates that to predict the correct label, a model must understand the causal relationship between the premise and the hypothesis, rather than depend only on the superficial cues within the two hypotheses.", "After the refinement, we evaluate the quality of the annotated causal questions and collected explanations through crowdsourcing.", "We assess the quality of causal questions by testing whether there is agreement among human raters on the answers to the causal questions.", "Specifically, we randomly sampled 200 causal questions from e-CARE, and enlisted 10 annotators to answer the causal questions.", "In this process, each causal question was evaluated by three annotators.", "When answering the causal questions, the raters were allowed to choose an additional option None of the above if neither hypothesis was deemed plausible.", "The human annotators achieve a 92% accuracy with a high agreement (Cohen's κ = 0.935) (Cohen, 1960).", "To validate the quality of explanations, we enlisted volunteers to determine whether or not the explanations can explain the corresponding causal facts.", "In total, 200 causal facts with corresponding explanations were sampled and distributed to 10 volunteers, and each explanation was evaluated by three volunteers.", "After the evaluation, on average 89.5% of the explanations were deemed valid (Cohen's κ = 0.832), showcasing the quality of the explanations in e-CARE.", "A number of automatic scores have been proposed to evaluate the quality of generated explanations, such as BLEU (Papineni et al., 2002) and
ROUGE (Lin, 2004).", "However, these metrics evaluate the quality of the generated explanations only by comparing the textual or semantic similarity between the generated explanations and the golden annotation.", "In contrast, an ideal causal explanation quality evaluation metric should directly measure whether the causal fact is appropriately explained by the explanation.", "Hence, we propose a novel causal explanation quality evaluation metric (namely, the CEQ score) as a step towards directly measuring the quality of generated explanations.", "We devise the CEQ score based on the consideration that a better explanation should provide more information for understanding the causality, so that the prediction model can more accurately estimate the reasonableness of the causal fact.", "Previous literature characterized such reasonableness as the causal strength of the given causal fact (Roemmele et al., 2011; Luo et al., 2016), where the causal strength is a score in [0, 1].", "Hence, in theory, for a valid causal fact, its causal strength should be equal to 1.", "Given a valid causal fact, an explanation should help to increase its estimated causal strength towards the ground-truth value 1.", "We can thus evaluate the quality of a generated explanation by measuring the increase of causal strength brought by the explanation.", "Specifically, let C, E, and X denote the cause, the effect and the generated explanation, respectively.", "Formally, the CEQ score is defined as: CEQ = Δcs = cs(C, E | X) − cs(C, E), (1) where cs(C, E) is the original causal strength between C and E, and cs(C, E | X) is the causal strength after involvement of the additional explanation information.", "The explanation-enhanced causal strength cs(C, E | X) is defined as: cs(C, E | X) = max[cs(C + X, E), cs(C, E + X)], (2) where + denotes the string concatenation operation.", "Therefore, the CEQ score is positively related to the increase of causal strength between C and E after the involvement of the explanation X.", "In
this paper, we employ a widely-adopted model-agnostic method proposed by Luo et al. (2016) to calculate the causal strength.", "The model-agnostic nature enables us to avoid reliance on specific models and keeps the evaluation fair.", "Specifically, the phrase-level causal strength is derived by synthesizing the word-level causality: cs(C_A, E_B) = (1 / (N_{C_A} · N_{E_B})) Σ_i Σ_j cs(w_i, w_j), where (C_A, E_B) is an arbitrary causal fact; N_{C_A} and N_{E_B} are the numbers of words within C_A and E_B, respectively; and cs(w_i, w_j) is the causal strength between words w_i and w_j, which is estimated from a large corpus following Luo et al. (2016).", "We examine the performance of state-of-the-art pretrained language models on the causal reasoning task and the explanation generation task.", "Furthermore, we investigate the specific role of explanations in causal reasoning by: (1) a predict-and-generate experiment, which requires models to conduct the causal reasoning task and generate corresponding explanations simultaneously; (2) a stability analysis using adversarial attacks.", "Settings: We cast the causal reasoning task as a prediction problem: the input of the model is a candidate causal fact composed of a premise and one of the corresponding candidate hypotheses.", "The output is a score measuring the reasonableness of the candidate causal fact.", "We evaluate the causal reasoning ability of several SOTA pretrained language models, including the discriminative pretrained language models BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2019), as well as the autoregressive generative pretrained language models GPT2 (Radford et al., 2019) and BART (Lewis et al., 2020), which can also be adapted to the predictive causal reasoning task.", "In this section and the following parts, all experiments are conducted using the base-sized version of the pretrained language models.", "Additional details about the experimental settings are provided in the Appendix.", "Results: As shown in Table 5, ALBERT achieves
the highest accuracy of 73.86% on the causal reasoning task of e-CARE.", "However, ALBERT can achieve an accuracy of 86.0% on the widely adopted causal reasoning benchmark COPA in our implementation.", "This is mainly because, on the one hand, previous causal reasoning datasets are too small to evaluate the genuine reasoning ability of the model.", "On the other hand, previous datasets may provide superficial cues that allow reasoning models to achieve seemingly strong performance.", "In contrast, e-CARE is the largest causal reasoning dataset and can provide enough test instances to evaluate the actual ability of the model.", "Moreover, in the annotating process of e-CARE, we introduced an adversarial filtering process to avoid the influence of superficial cues on the performance of reasoning models.", "Hence, we believe that the e-CARE dataset can serve as a new benchmark for effectively evaluating models' causal reasoning ability.", "We also notice that human beings can achieve an accuracy of 92.00% on the e-CARE dataset.", "The large gap between the human performance and the pretrained language models suggests that the causal reasoning questions provided in our dataset remain challenging, and calls for more powerful causal reasoning models.", "We investigate whether the model can generate correct explanations for given valid causal facts by training a GRU-based Seq2Seq model (Chung et al., 2014), and finetuning a generative pretrained language model GPT2 (Radford et al., 2019) on the e-CARE dataset.", "Both models take the concatenation of the cause and effect as input.", "Please refer to the Appendix for more details.", "Evaluation Metrics: We automatically evaluate the quality of generated explanations using average-BLEU (n=4) (Papineni et al., 2002), ROUGE-l (Lin, 2004), and Perplexity (Horgan, 1995), together with our proposed CEQ score.", "Human Evaluation: We also assess the quality of model-generated explanations through human evaluation.", "Specifically,
we sampled 200 explanations generated by each method.", "Then three workers were shown the generated explanations, together with the corresponding causal facts, and were asked to label whether the generated explanation can explain the corresponding causal fact.", "Quantitative Results: As shown in Table 6, 89.5% of human-written explanations are found to be valid, while the generative pretrained language model GPT2 only achieves a correctness of 20.0%.", "The last row of Table 6 reports the score of held-out human-written explanations, which serves as a ceiling for model performance.", "The significant gap indicates that, although GPT2 can achieve impressive performance on various natural language generation tasks, it still remains especially challenging for GPT2 to deeply understand the causal facts and then generate explanations like human beings.", "This may be one of the main obstacles hindering the further improvement of present causal reasoning models.", "Moreover, we measure the agreement between the automatic scores and the results of human evaluation using the Spearman correlation coefficient.", "As Table 7 shows, ROUGE-l and average-BLEU barely have any correlation with the results of human evaluation.", "This is because average-BLEU and ROUGE-l only implicitly evaluate the quality of generated explanations by measuring the textual similarity with the golden annotations.", "Compared to average-BLEU and ROUGE-l, the CEQ score has a significant positive relationship with the human evaluation results.", "This indicates the effectiveness of the CEQ score in evaluating the quality of generated explanations.", "Qualitative Analysis: In Table 8, we provide examples of explanations generated by GPT2.", "(Table 8 fragment — columns: Causal Facts, (Generated) Explanation, Human Annotation, CEQ; example Cause: He was infected with gram-positive bacteria.)", "We observe that GPT2 can generate a reasonable explanation for some causal facts, while the generated explanations may still contain factual mistakes,
or be totally irrelevant to the given causal fact (highlighted in yellow and pink, respectively).", "This indicates that explanation generation still remains challenging for the GPT2 model.", "To investigate the role of causal explanations in the causal reasoning process, we trained models to jointly conduct these two tasks.", "Settings: Since this task requires a model to predict a label and generate an explanation at the same time, we conduct the experiments using the GPT2 model, which can be adapted to conduct the predictive causal reasoning task and explanation generation simultaneously.", "We denote this multi-task finetuned GPT2 model as GPT2 CR-EG.", "Details for training GPT2 CR-EG are provided in the Appendix.", "To make the performance comparable, when evaluating the performance of GPT2 CR-EG on the causal explanation generation task, the same settings as in the explanation generation task are used: the premise and the correct hypothesis are taken as the input of GPT2 CR-EG for generating explanations.", "Results: We measure the quality of generated explanations using the same automatic scores and human evaluation settings as in the Explanation Generation experiment.", "The performance of causal reasoning is also measured using accuracy.", "The results are shown in Table 9, where GPT2 CR denotes the GPT2 model finetuned for the causal reasoning task, and GPT2 EG refers to the GPT2 model finetuned for the explanation generation task.", "We observe that, compared with GPT2 CR, the improved performance of GPT2 CR-EG on causal reasoning indicates that the additional explanation can be helpful for the causal reasoning task, as it prompts the model to develop a deeper understanding of the causal mechanisms.", "Interestingly, by comparing GPT2 EG and GPT2 CR-EG, we find that learning to predict the label can also be helpful for the explanation generation process.", "This indicates the synergistic effect of causal reasoning and explanation generation on promoting models'
understanding of the causal mechanism.", "Previous studies indicate that models may utilize superficial cues within a dataset to predict the label.", "This leads to the vulnerability of models when facing adversarial attacks (Poliak et al., 2018; McCoy et al., 2019).", "Learning to generate the additional conceptual explanation may promote the understanding of causality and thus increase the stability of the reasoning model.", "Hence, we conduct a stability analysis to examine the specific effect of the additional explanations.", "Following Bekoulis et al. (2018) and Yasunaga et al. (2018), we attack the causal reasoning system by adding a perturbation term to the word embeddings of the inputs.", "The perturbation term is derived using the gradient-based FGM method (Miyato et al., 2016).", "Table 9 shows the change of causal reasoning accuracy (ΔAccu.) brought by the adversarial attack.", "For example, ΔAccu. = −6.40 means a 6.40% decrease of prediction accuracy after the adversarial attack.", "We find that, compared to the vanilla GPT2 CR model, the explanation-enhanced GPT2 model GPT2 CR-EG demonstrates stronger stability.", "This suggests that, by training reasoning models to generate correct explanations of the causal facts, the understanding of the causality can be promoted, and the stability of model performance can thus be increased.", "Causal knowledge is critical for various NLP applications.", "In this section, we investigate whether the causality knowledge provided by e-CARE can be used as a resource to boost model performance on other causal-related tasks.", "To this end, we apply transfer learning by first finetuning a BERT model on e-CARE, then adapting the e-CARE-enhanced model (denoted as BERT_E) to a causal extraction task, EventStoryLine 0.9 (Caselli and Vossen, 2017), two causal reasoning tasks, BECauSE 2.0 (Dunietz et al., 2017) and COPA (Roemmele et al., 2011), as well as a commonsense reasoning dataset, CommonsenseQA (Talmor et al., 2019).", "On the
EventStoryLine 0.9 dataset, we conduct experiments only on the instances involving within-sentence causal relationships.", "The results are shown in Table 10.", "(Table 10 fragment — Dataset / Metric / BERT / BERT_E: EventStoryLine 0.9, F1 (%), 66.5, 68.1; BECauSE 2.1, Accu.)", "We observe that the additional training process on e-CARE can consistently increase the model performance on all four tasks.", "This indicates the potential of e-CARE in providing necessary causality information for promoting causal-related tasks in multiple domains.", "In this paper, we introduce additional explanation information for the causal reasoning process, and propose a corresponding explanation generation task.", "Previous literature characterized the explanation generation process as an abductive reasoning process (Hanson, 1958; Peirce, 1974) and highlighted the importance of abductive explanation generation, as it may interact with the causal reasoning process to promote the understanding of the causal mechanism, and increase the efficiency and reliability of causal reasoning.", "For example, as Figure 2 shows, one may have an observation that C1: adding rock into hydrochloric acid caused E1: rock dissolved.", "Through abductive reasoning, one may come up with a conceptual explanation for the observation that acid is corrosive.", "After that, one can confirm or rectify the explanation by experiments, or by resorting to external references.", "In this way, new ideas about causality can be introduced for understanding the observed causal fact.", "Then, if the explanation is confirmed, it can be further utilized to support the causal reasoning process by helping to explain and validate other related causal facts, such as C2: adding rust into sulphuric acid may lead to E2: rust dissolved.", "This analysis highlights the pivotal role of conceptual explanation in learning and inferring causality.", "In this paper, we introduce the e-CARE dataset to provide causal explanations and support future research towards stronger
human-like causal reasoning systems.", "In this paper, we present an explainable CAusal REasoning dataset (e-CARE), which contains over 21K causal questions, together with over 13K unique conceptual explanations supporting a deep understanding of the causal facts; this also makes e-CARE the largest causal reasoning benchmark.", "Experimental results show that both the causal reasoning task and especially the explanation generation task remain challenging for the SOTA pretrained language models.", "Moreover, the additional explanation signal can promote both the prediction accuracy and the stability of models, highlighting the vital importance of conceptual explanations in causal reasoning.", "We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the New Generation Artificial Intelligence of China (2020AAA0106501), and the National Natural Science Foundation of China (62176079, 61976073)." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"method", "method", "abstain", "abstain", "other" ]
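The CEQ score defined in the record above (Equations 1 and 2) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the `WORD_CS` table and the simple pair-averaging `cs()` are hypothetical stand-ins for the corpus-estimated word-level causal strengths of Luo et al. (2016), whose exact estimator is not reproduced in the text.

```python
# Hedged sketch of the CEQ (Causal Explanation Quality) score:
#   CEQ = cs(C, E | X) - cs(C, E), with
#   cs(C, E | X) = max[cs(C + X, E), cs(C, E + X)].
# WORD_CS is a made-up toy table of word-level causal strengths.

WORD_CS = {
    ("acid", "dissolved"): 0.6,
    ("rock", "dissolved"): 0.2,
    ("corrosive", "dissolved"): 0.8,
}

def cs(cause: str, effect: str) -> float:
    """Phrase-level causal strength: average of word-level cs(w_i, w_j)
    over all (cause word, effect word) pairs."""
    cw, ew = cause.lower().split(), effect.lower().split()
    total = sum(WORD_CS.get((wi, wj), 0.0) for wi in cw for wj in ew)
    return total / (len(cw) * len(ew))

def ceq(cause: str, effect: str, explanation: str) -> float:
    """CEQ: increase in causal strength brought by the explanation X,
    where X is concatenated to the cause or the effect (Equation 2)."""
    enhanced = max(cs(cause + " " + explanation, effect),
                   cs(cause, effect + " " + explanation))
    return enhanced - cs(cause, effect)
```

Under this toy table, a relevant explanation ("acid is corrosive") raises the estimated causal strength of the rock-dissolving fact and yields a positive CEQ, while an irrelevant explanation dilutes the word-pair average and scores at or below zero.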
[ "Many text classification tasks are known to be highly domain-dependent.", "Unfortunately, the availability of training data can vary drastically across domains.", "Worse still, for some domains there may not be any annotated data at all.", "In this work, we propose a multinomial adversarial network (MAN) to tackle this real-world problem of multi-domain text classification (MDTC) in which labeled data may exist for multiple domains, but in insufficient amounts to train effective classifiers for one or more of the domains.", "(The source code of MAN is available at https://github.)", "We provide theoretical justifications for the MAN framework, proving that different instances of MANs are essentially minimizers of various f-divergence metrics (Ali and Silvey, 1966) among multiple probability distributions.", "MANs are thus a theoretically sound generalization of traditional adversarial networks that discriminate over two distributions.", "More specifically, for the MDTC task, MAN learns features that are invariant across multiple domains by resorting to its ability to reduce the divergence among the feature distributions of each domain.", "We present experimental results showing that MANs significantly outperform the prior art on the MDTC task.", "We also show that MANs achieve state-of-the-art performance for domains with no labeled data.", "Text classification is one of the most fundamental tasks in Natural Language Processing, and has found its way into a wide spectrum of NLP applications, ranging from email spam detection and social media analytics to sentiment analysis and data mining.", "Over the past couple of decades, supervised statistical learning methods have become the dominant approach for text classification", "(e.g. McCallum et al. (1998); Kim (2014); Iyyer et al.
(2015)).", "Unfortunately, many text classification tasks are highly domain-dependent in that a text classifier trained using labeled data from one domain is likely to perform poorly on another.", "In the task of sentiment classification, for example, the phrase runs fast is usually associated with positive sentiment in the sports domain; not so when a user is reviewing the battery of an electronic device.", "In real applications, therefore, an adequate amount of training data from each domain of interest is typically required, and this is expensive to obtain.", "Two major lines of work attempt to tackle this challenge: domain adaptation (Blitzer et al., 2007) and multi-domain text classification (MDTC) (Li and Zong, 2008).", "In domain adaptation, the assumption is that there is some domain with abundant training data (the source domain), and the goal is to utilize knowledge learned from the source domain to help perform classification on another lower-resourced target domain (see Section 6 for other variants of domain adaptation).", "The focus of this work, MDTC, instead simulates an arguably more realistic scenario, where labeled data may exist for multiple domains, but in insufficient amounts to train an effective classifier for one or more of the domains.", "Worse still, some domains may have no labeled data at all.", "The objective of MDTC is to leverage all the available resources in order to improve the system performance over all domains simultaneously.", "One state-of-the-art system for MDTC, the CMSC system of Wu and Huang (2015), combines a classifier that is shared across all domains (for learning domain-invariant knowledge) with a set of classifiers, one per domain, each of which captures domain-specific text classification knowledge.", "This paradigm is sometimes known as the Shared-Private model (Bousmalis et al., 2016).", "CMSC, however, lacks an explicit mechanism to ensure that the shared classifier captures only domain-independent knowledge:
the shared classifier may well also acquire some domain-specific features that are useful for a subset of the domains.", "We hypothesize that better performance can be obtained if this constraint were explicitly enforced.", "In this paper, we thus propose Multinomial Adversarial Networks (henceforth, MANs) for the task of multi-domain text classification.", "In contrast to standard adversarial networks (Goodfellow et al., 2014), which serve as a tool for minimizing the divergence between two distributions (Nowozin et al., 2016), MANs represent a family of theoretically sound adversarial networks that leverage a multinomial discriminator to directly minimize the divergence among multiple probability distributions.", "And just as binomial adversarial networks have been applied to numerous tasks (e.g. image generation (Goodfellow et al., 2014), domain adaptation (Ganin et al., 2016), cross-lingual text classification (Chen et al., 2016)), we anticipate that MANs will provide a versatile machine learning framework with applications beyond the MDTC task studied in this work.", "We introduce the MAN architecture in Section 2 and prove in Section 3 that it directly minimizes the (generalized) f-divergence among multiple distributions so that they are indistinguishable upon successful training.", "Specifically for MDTC, MAN is used to overcome the aforementioned limitation in prior art whereby domain-specific features may sneak into the shared model.", "This is accomplished by relying on MAN's power of minimizing the divergence among the feature distributions of each domain.", "The high-level idea is that MAN will make the extracted feature distributions of each domain indistinguishable from one another, thus learning general features that are invariant across domains.", "We then validate the effectiveness of MAN in experiments on two MDTC data sets.", "We find first that MAN significantly outperforms the state-of-the-art CMSC method (Wu and Huang, 2015) on the widely used
multi-domain Amazon review dataset, and does so without relying on external resources such as sentiment lexica (§4.1).", "When applied to the second dataset, FDU-MTL (§4.3), we obtain similar results: MAN achieves substantially higher accuracy than the previous top-performing method, ASP-MTL (Liu et al., 2017).", "ASP-MTL is the first empirical attempt to use a multinomial adversarial network for multi-task learning, but is more restricted and can be viewed as a special case of MAN.", "In addition, we provide the first theoretical guarantees for multinomial adversarial networks (§3).", "Finally, while many MDTC methods such as CMSC require labeled data for each domain, MANs can be applied in cases where no labeled data exists for a subset of domains.", "To evaluate MAN in this semi-supervised setting, we compare MAN to a method that can accommodate unlabeled data for (only) one domain (Zhao et al., 2017), and show that MAN achieves performance comparable to the state of the art (§4.2).", "In this paper, we strive to tackle the text classification problem in the real-world setting in which texts come from a variety of domains, each with a varying amount of labeled data.", "Specifically, assume we have a total of N domains: N_1 labeled domains (denoted Δ_L) for which there is some labeled data, and N_2 unlabeled domains (Δ_U) for which no annotated training instances are available.", "Denote Δ = Δ_L ∪ Δ_U as the collection of all domains, with N = N_1 + N_2.", "The goal of this work, and of MDTC in general, is to improve the overall classification performance across all N domains, measured in this paper as the average classification accuracy across the N domains in Δ.", "As shown in Figure 1, the Multinomial Adversarial Network (MAN) adopts the Shared-Private paradigm of Bousmalis et al. 
(2016) and consists of four components: a shared feature extractor F_s, a domain feature extractor F_{d_i} for each labeled domain d_i ∈ Δ_L, a text classifier C, and a domain discriminator D.", "The main idea of MAN is to explicitly model the domain-invariant features that are beneficial to the main classification task across all domains (i.e. the shared features, extracted by F_s), as well as the domain-specific features that mainly contribute to the classification in their own domains (the domain features, extracted by F_d).", "Here, the adversarial domain discriminator D has a multinomial output that takes a shared feature vector as input and predicts the likelihood of the sample coming from each domain. (In this work, we use the macro-average over domains, but MAN can be readily adapted for the micro-average or other weighted averaging schemes.)", "As seen in Figure 1, during the training of F_s (green arrows denote the training flow), F_s aims to confuse D by minimizing $J^D_{F_s}$, which is anticorrelated to $J_D$ (detailed in §2.2), so that D cannot predict the domain of a sample given its shared features.", "The intuition is that if even a strong discriminator D cannot tell the domain of a sample from the extracted features, then the features F_s has learned are essentially domain-invariant.", "Because F_s is forced to learn domain-invariant features, when trained jointly via back-propagation, the set of domain feature extractors F_d will each learn domain-specific features beneficial within their own domains.", "The architecture of each component is relatively flexible, and can be chosen by practitioners to suit their particular classification tasks.", "For instance, the feature extractors can adopt the form of Convolutional Neural Nets (CNN), Recurrent Neural Nets (RNN), or a Multi-Layer Perceptron (MLP), depending on the input data (see §4).", "The input of MAN will also be dependent on the feature", "extractor choice. (Algorithm 1, MAN Training, requires a labeled corpus X, an unlabeled corpus U, and hyperparameters λ > 0 and k ∈ ℕ.)", 
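The Shared-Private forward pass described above can be sketched as follows. This is a minimal illustration with toy linear extractors and hypothetical dimensions, not the authors' implementation: C consumes the concatenation of shared and domain features (a zero domain vector for unlabeled domains), while D sees only the shared features.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_SHARED, D_DOMAIN, N_CLASSES, N_DOMAINS = 16, 8, 4, 2, 3

# Toy linear parameterizations: one shared extractor F_s, one F_d per labeled domain.
W_s = rng.standard_normal((D_IN, D_SHARED))
W_d = [rng.standard_normal((D_IN, D_DOMAIN)) for _ in range(N_DOMAINS)]
W_c = rng.standard_normal((D_SHARED + D_DOMAIN, N_CLASSES))  # text classifier C
W_disc = rng.standard_normal((D_SHARED, N_DOMAINS))          # domain discriminator D

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(x, domain=None):
    """C takes [shared ; domain] features; an unlabeled domain gets a zero domain vector."""
    f_s = x @ W_s
    f_d = x @ W_d[domain] if domain is not None else np.zeros(D_DOMAIN)
    return softmax(np.concatenate([f_s, f_d]) @ W_c)

def discriminate(x):
    """D sees only the shared features, for labeled and unlabeled domains alike."""
    return softmax(x @ W_s @ W_disc)

x = rng.standard_normal(D_IN)
p_class = classify(x, domain=1)   # sample from labeled domain 1
p_class_unlab = classify(x)       # sample from an unlabeled domain
p_domain = discriminate(x)
```

In a real system the linear maps would be MLPs, CNNs, or RNNs, as the text notes.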
"The output of a (shared/domain) feature extractor is a fixed-length vector, which is considered the (shared/domain) hidden features of some given input text.", "On the other hand, the outputs of C and D are label probabilities for class and domain prediction, respectively.", "For example, both C and D can be MLPs with a softmax layer on top.", "In §3, we provide alternative architectures for D and their mathematical implications.", "We now present a detailed description of the MAN training in §2.2 as well as the theoretical grounds in §3.", "Denote the annotated corpus in a labeled domain d_i ∈ Δ_L as X_i; (x, y) ∼ X_i is a sample drawn from the labeled data in domain d_i, where x is the input and y is the task label.", "On the other hand, for any domain d_{i′} ∈ Δ, denote the unlabeled corpus as U_{i′}.", "Note that for the choice of unlabeled data of a labeled domain, one can use a separate unlabeled corpus or simply use the labeled data (or use both).", "In Figure 1, the arrows illustrate the training flows of the various components.", "Due to the adversarial nature of the domain discriminator D, it is trained with a separate optimizer (red arrows), while the rest of the networks are updated with the main optimizer (green arrows).", "C is only trained on the annotated data from labeled domains, and it takes as input the concatenation of the shared and domain feature vectors.", "At test time, for data from unlabeled domains with no F_d, the domain features are set to the 0 vector for C's input.", "In contrast, D only takes the shared features as input, for both labeled and unlabeled domains.", "The MAN training procedure is described in Algorithm", "1. 
In Algorithm 1, $L_C$ and $L_D$ are the loss functions of the text classifier C and the domain discriminator D, respectively.", "As mentioned in §2.1, C has a softmax layer on top for classification.", "We hence adopt the canonical negative log-likelihood (NLL) loss: $L_C(\hat{y}, y) = -\log P(\hat{y} = y)$ (1), where $y$ is the true label and $\hat{y}$ is the softmax prediction.", "For D, we consider two variants of MAN.", "The first is to use the same NLL loss as C, which suits the classification task; the other option is to use the least-squares (L2) loss, which was shown to alleviate the vanishing-gradient problem of the NLL loss in the adversarial setting (Mao et al., 2017): $L_D^{NLL}(\hat{d}, d) = -\log P(\hat{d} = d)$ (2), $L_D^{L2}(\hat{d}, d) = \sum_{i=1}^{N} (\hat{d}_i - \mathbb{1}\{d = i\})^2$ (3), where $d$ is the domain index of some sample and $\hat{d}$ is the prediction.", "Without loss of generality, we normalize $\hat{d}$ so that $\sum_{i=1}^{N} \hat{d}_i = 1$ and $\forall i: \hat{d}_i \ge 0$.", "Therefore, the objectives of C and D that we are minimizing are: $J_C = \sum_{i=1}^{N_1} \mathbb{E}_{(x,y) \sim X_i}[L_C(C(F_s(x), F_d(x)); y)]$ (4) and $J_D = \sum_{i=1}^{N} \mathbb{E}_{x \sim U_i}[L_D(D(F_s(x)); d)]$ (5). For the feature extractors, the training of the domain feature extractors is straightforward, as their sole objective is to help C perform better within their own domains.", "Hence, $J_{F_d} = J_C$ for any labeled domain d.", "Finally, the shared feature extractor $F_s$ has two objectives: to help C achieve higher accuracy, and to make the feature distribution invariant across all domains.", "It thus leads to the following bipartite loss: $J_{F_s} = J^C_{F_s} + \lambda J^D_{F_s}$, where $\lambda$ is a hyperparameter balancing the two parts.", "$J^D_{F_s}$ is the domain loss of $F_s$, anticorrelated to $J_D$: (NLL) $J^D_{F_s} = -J_D$ (6); (L2) $J^D_{F_s} = \sum_{i=1}^{N} \mathbb{E}_{x \sim U_i}\big[\sum_{j=1}^{N} (D_j(F_s(x)) - \frac{1}{N})^2\big]$ (7). If D adopts the NLL loss, the domain loss (6) is simply $-J_D$.", "For the L2 loss (7), $J^D_{F_s}$ intuitively translates to pushing D to make random predictions.", 
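The discriminator losses of Eqs. (2)-(3) and the L2 domain loss for F_s of Eq. (7) can be sketched numerically. This is a toy illustration with hand-picked probability vectors, not training code:

```python
import numpy as np

N = 4  # number of domains

def nll_loss(d_hat, d):
    """Eq. (2): negative log-likelihood of the true domain index d."""
    return -np.log(d_hat[d])

def l2_loss(d_hat, d):
    """Eq. (3): squared distance between D's output and the one-hot vector for d."""
    return np.sum((d_hat - np.eye(N)[d]) ** 2)

def l2_domain_loss_fs(d_hat):
    """Inner term of Eq. (7): F_s pushes D's output toward the uniform vector 1/N."""
    return np.sum((d_hat - 1.0 / N) ** 2)

uniform = np.full(N, 0.25)                      # a maximally confused discriminator
confident = np.array([0.97, 0.01, 0.01, 0.01])  # a confident, correct discriminator

# A confident, correct D incurs a low discriminator loss, while F_s's L2 domain
# loss vanishes exactly when D is reduced to uniform (random) predictions.
```

This makes the adversarial pull concrete: D's losses reward confident correct domain predictions, while F_s's domain loss is minimized when D cannot do better than chance.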
"See §3 for theoretical justifications.", "Binomial adversarial nets are known to have theoretical connections to the minimization of various f-divergences between two distributions (Nowozin et al., 2016).", "However, for adversarial training among multiple distributions, no theoretical justifications have been provided to the best of our knowledge, even though this idea has recently been explored empirically (Liu et al., 2017).", "In this section, we present a theoretical analysis showing the validity of MAN.", "In particular, we show that MAN's objective is equivalent to minimizing the total f-divergence between each of the shared feature distributions of the N domains and the centroid of the N distributions.", "The choice of loss function determines which specific f-divergence is minimized.", "Furthermore, with adequate model capacity, MAN achieves its optimum for either loss function if and only if all N shared feature distributions are identical, hence learning an invariant feature space across all domains.", "First, consider the distribution of the shared features f for instances in each domain $d_i \in \Delta$: $P_i(f) \triangleq P(f = F_s(x) \mid x \in d_i)$ (8). Combining (5) with the two loss functions (2) and (3), the objective of D can be written as: $J_D^{NLL} = -\sum_{i=1}^{N} \mathbb{E}_{f \sim P_i}[\log D_i(f)]$ (9), $J_D^{L2} = \sum_{i=1}^{N} \mathbb{E}_{f \sim P_i}\big[\sum_{j=1}^{N} (D_j(f) - \mathbb{1}\{i = j\})^2\big]$ (10). (An f-divergence (Ali and Silvey, 1966) is a function that measures the distance between two probability distributions, e.g. the KL or Jensen-Shannon divergence.)", "where $D_i(f)$ is the i-th dimension of D's (normalized) output vector, which conceptually corresponds to the probability of D predicting that f is from domain $d_i$.", "We first derive the optimal D for any fixed $F_s$.", "Lemma", "1. 
For any fixed $F_s$, with either the NLL or the L2 loss, the optimum domain discriminator D is: $D_i(f) = \frac{P_i(f)}{\sum_{j=1}^{N} P_j(f)}$ (11). The proof involves an application of Lagrange multipliers to solve for the minimum value of $J_D$, and the details can be found in the Appendix.", "We then have the following main theorems for the domain loss of $F_s$: Theorem", "1. Let $\bar{P} = \frac{1}{N}\sum_{i=1}^{N} P_i$.", "When D is trained to its optimality, if D adopts the NLL loss: $J^D_{F_s} = -\min_D J_D = -N \log N + N \cdot JSD(P_1, P_2, \ldots, P_N) = -N \log N + \sum_{i=1}^{N} KL(P_i \| \bar{P})$, where $JSD(\cdot)$ is the generalized Jensen-Shannon divergence (Lin, 1991) among multiple distributions, defined as the average Kullback-Leibler divergence of each $P_i$ to the centroid $\bar{P}$ (Aslam and Pavlu, 2007).", "Theorem", "2. If D uses the L2 loss: $J^D_{F_s} = \sum_{i=1}^{N} \mathbb{E}_{f \sim P_i}\big[\sum_{j=1}^{N} (D_j(f) - \frac{1}{N})^2\big] = \frac{1}{N}\sum_{i=1}^{N} \chi^2_{Neyman}(P_i \| \bar{P})$, where $\chi^2_{Neyman}(\cdot \| \cdot)$ is the Neyman $\chi^2$ divergence (Nielsen and Nock, 2014).", "The proof of both theorems can be found in the Appendix.", "Consequently, by the non-negativity and joint convexity of the f-divergence (Csiszar and Korner, 1982), we have: Corollary", "1. 
The optimum of $J^D_{F_s}$ is $-N \log N$ when using the NLL loss, and 0 for the L2 loss.", "The optimum value above is achieved if and only if $P_1 = P_2 = \cdots = P_N = \bar{P}$ for either loss.", "Therefore, the loss of $F_s$ can be interpreted as simultaneously minimizing the classification loss", "$J_C$ as well as the divergence among the feature distributions of all domains.", "It can thus learn a shared feature mapping that is invariant across domains upon successful training while being beneficial to the main classification task.", "In this experiment, we compare MAN to state-of-the-art MDTC systems on the multi-domain Amazon review dataset (Blitzer et al., 2007), which is one of the most widely used MDTC datasets.", "Note that this dataset was already preprocessed into a bag of features (unigrams and bigrams), losing all word order information.", "This prohibits the use of CNNs or RNNs as feature extractors, limiting the potential performance of the system.", "Nonetheless, we adopt the same dataset for fair comparison and employ an MLP as our feature extractor.", "In particular, we take the 5000 most frequent features and represent each review as a 5000d feature vector, where feature values are raw counts of the features.", "Our MLP feature extractor would then have an input size of 5000 in order to process the reviews.", "The Amazon dataset contains 2000 samples for each of the four domains: book, DVD, electronics, and kitchen, with binary labels (positive, negative).", "Following Wu and Huang (2015), we conduct 5-way cross validation.", "Three out of the five folds are treated as the training set, one serves as the validation set, while the remaining one is the test set.", "The 5-fold average test accuracy is reported.", "Table 1 shows the main results.", "Three types of models are shown: Domain-Specific Models Only, where only in-domain models are trained; Shared Model Only, where a single model is trained with all data; and Shared-Private Models, a 
combination of the previous two.", "Within each category, various architectures are examined, such as Least Squares (LS), SVM, and Logistic Regression (LR).", "As explained before, we use MLPs as the feature extractors for all our models (shown in bold).", "Among our models, the ones with the MAN prefix use adversarial training, and MAN-L2 and MAN-NLL indicate MAN with the L2 loss and the NLL loss, respectively.", "From Table 1, we can see that by adopting modern deep neural networks, our methods achieve superior performance within the first two model categories even without adversarial training.", "This is corroborated by the fact that our SP-MLP model performs comparably to CMSC, while the latter relies on external resources such as sentiment lexica.", "Moreover, when our multinomial adversarial nets are introduced, further improvement is observed.", "With both loss functions, MAN outperforms all Shared-Private baseline systems on each domain, and achieves statistically significantly higher overall performance.", "For our MAN-SP models, we provide the mean accuracy as well as the standard errors over five runs, to illustrate the performance variance and conduct significance tests.", "It can be seen that MAN's performance is relatively stable, and consistently outperforms CMSC.", "Some domains may not have any annotated corpora available.", "It is therefore also important to look at the performance in these unlabeled domains for an MDTC system.", "Fortunately, as described before, MAN's adversarial training only utilizes unlabeled data from each domain to learn the domain-invariant features, and can thus be used on unlabeled domains as well.", "During testing, only the shared feature vector is fed into C, while the domain feature vector is set to 0.", "In order to validate MAN's effectiveness, we compare to state-of-the-art multi-source domain adaptation (MS-DA) methods (see §6).", "Compared to standard domain adaptation methods with one source and one target domain, MS-DA allows the 
adaptation from multiple source domains to a single target domain.", "Analogously, MDTC can be viewed as multi-source multi-target domain adaptation, which is superior when multiple target domains exist.", "With multiple target domains, MS-DA will need to treat each one as an independent task, which is more expensive and cannot utilize the unlabeled data in other target domains.", "In this work, we compare MAN with one recent MS-DA method, MDAN (Zhao et al., 2017).", "Their experiments only have one target domain to suit their approach, and we follow this setting for fair comparison.", "However, it is worth noting that MAN is designed for the MDTC setting, and can deal with multiple target domains at the same time, which can potentially improve the performance by taking advantage of more unlabeled data from multiple target domains during adversarial training.", "We adopt the same setting as Zhao et al. (2017), which is based on the same multi-domain Amazon review dataset.", "Each of the four domains in the dataset is treated as the target domain in four separate experiments, while the", "remaining three are used as source domains.", "In Table 2, the target domain is shown on top, and the test set accuracy is reported for various systems.", "It shows that MAN outperforms several baseline systems, such as an MLP trained on the source domains, as well as single-source domain adaptation methods such as mSDA (Chen et al., 2012) and DANN (Ganin et al., 2016), where the training data in the multiple source domains are combined and viewed as a single domain.", "Finally, when compared to MDAN, MAN and MDAN each achieves higher accuracy on two out of the four target domains, and the average accuracy of MAN is similar to MDAN's.", "Therefore, MAN achieves competitive performance for the domains without annotated corpora.", "Nevertheless, unlike MS-DA methods, MAN can handle multiple target domains at one time.", "To make fair comparisons, the previous 
experiments follow the standard settings in the literature, where the widely adopted Amazon review dataset is used.", "However, this dataset has a few limitations.", "First, it has only four domains.", "In addition, the reviews are already tokenized and converted to a bag of features consisting of unigrams and bigrams.", "Raw review texts are hence not available in this dataset, making it impossible to use certain modern neural architectures such as CNNs and RNNs.", "To provide more insights on how well MAN works with other feature extractor architectures, we provide a third set of experiments on the FDU-MTL dataset (Liu et al., 2017).", "This dataset is created as a multi-task learning dataset with 16 tasks, where each task is essentially a different domain of reviews.", "It has 14 Amazon domains: books, electronics, DVD, kitchen, apparel, camera, health, music, toys, video, baby, magazine, software, and sports, in addition to two movie review domains from the IMDb and the MR datasets.", "Each domain has a development set of 200 samples, and a test set of 400 samples.", "The amounts of training and unlabeled data vary across domains but are roughly 1400 and 2000, respectively.", "We compare MAN with ASP-MTL (Liu et al., 2017) on this FDU-MTL dataset.", "ASP-MTL also adopts adversarial training for learning a shared feature space, and can be viewed as a special case of MAN that adopts the NLL loss (MAN-NLL) and chooses an LSTM as its feature extractor.", "In contrast, we found that a CNN-based feature extractor (Kim, 2014) achieves much better accuracy while being 10 times faster.", "Indeed, as shown in Table 3, with or without adversarial training, our CNN models outperform LSTM ones by a large margin.", "When used in our MAN framework, we attain the state-of-the-art performance on every domain with an 88.4% overall accuracy, surpassing ASP-MTL by a significant margin of 2.3%.", "We hypothesize that the reason an LSTM performs much worse than a CNN is its lack of an 
attention mechanism.", "In ASP-MTL, only the last hidden unit is taken as the extracted features.", "While LSTMs are effective for representing the context of each token, they might not be powerful enough for directly encoding the entire document (Bahdanau et al., 2015).", "Therefore, various attention mechanisms have been introduced on top of the vanilla LSTM to select the words (and contexts) most relevant for making the predictions.", "In our preliminary experiments, we find that a bidirectional LSTM with dot-product attention (Luong et al., 2015) yields better performance than the vanilla LSTM in ASP-MTL.", "However, it still does not outperform the CNN and is much slower.", "As a result, we conclude that, for text classification tasks, a CNN is both effective and efficient in extracting local and higher-level features for making a single categorization.", "Finally, we observe that MAN-NLL achieves slightly higher overall performance compared to MAN-L2, providing evidence for the claim in a recent study (Lucic et al., 2017) that the original GAN loss (NLL) may not be inherently inferior to the L2 loss.", "Moreover, the two variants excel in different domains, suggesting the possibility of further performance gains when using an ensemble.", "For all three of our experiments, we use λ = 0.05", "and k = 5 (see Algorithm 1).", "For both optimizers, Adam (Kingma and Ba, 2015) is used with a learning rate", "of 0.0001.", "The size of the shared feature vector is set to 128 while that of the domain feature vector is 64.", "Dropout of p = 0.4", "is used in all components.", "C and D each have one hidden layer of the same size as their input (128 + 64 for C and 128 for D).", "ReLU is used as the activation function.", "Batch normalization (Ioffe and Szegedy, 2015) is used in both C and D but not F.", "We use a batch size of 8.", "For our first two experiments on the Amazon review dataset, the MLP feature extractor is used.", "As described in §4.1, it has an input 
size of 5000.", "Two hidden layers are used, with sizes 1000 and 500, respectively.", "For the CNN feature extractor used in the FDU-MTL experiment, a single convolution layer is used.", "The kernel sizes are 3, 4, and 5, and the number of kernels is 200.", "The convolution layers take as input the 100d word embeddings of each word in the input sequence.", "We use word2vec word embeddings (Mikolov et al., 2013) trained on a large collection of unlabeled raw Amazon reviews (Blitzer et al., 2007).", "After convolution, the outputs go through a ReLU layer before being fed into a max pooling layer.", "The pooled output is then fed into a single fully connected layer to be converted into a feature vector of size either 128 or 64.", "More details on using CNNs for text classification can be found in the original paper (Kim, 2014).", "MAN is implemented using PyTorch (Paszke et al., 2017).", "Multi-Domain Text Classification. The MDTC task was first examined by Li and Zong (2008), who proposed to fuse the training data from multiple domains either at the feature level or the classifier level.", "The prior art of MDTC (Wu and Huang, 2015) decomposes the text classifier into a general one and a set of domain-specific ones.", "However, the general classifier is learned by parameter sharing, and domain-specific knowledge may sneak into it.", "They also require external resources to help improve accuracy and compute domain similarities.", "Domain Adaptation. Domain adaptation attempts to transfer the knowledge from a source domain to a target one, and the traditional form is single-source, single-target (SS,ST) adaptation (Blitzer et al., 2006).", "Another variant is SS,MT adaptation (Yang and Eisenstein, 2015), which tries to simultaneously transfer the knowledge to multiple target domains from a single source.", "However, it cannot fully take advantage of the training data if it comes from multiple source domains.", "MS,ST adaptation (Mansour et al., 2009; Zhao et al., 2017) can deal 
with multiple source domains but only transfers to a single target domain.", "Therefore, when multiple target domains exist, these methods need to treat them as independent problems, which is more expensive and cannot utilize the additional unlabeled data in these domains.", "Finally, MDTC can be viewed as MS,MT adaptation, which is arguably more general and realistic.", "Adversarial Networks. The idea of adversarial networks was proposed by Goodfellow et al. (2014) for image generation, and has been applied to various NLP tasks as well (Chen et al., 2016; Yu et al., 2017).", "Ganin et al. (2016) first used it for SS,ST domain adaptation, followed by many others.", "Bousmalis et al. (2016) utilized adversarial training in a shared-private model for domain adaptation to learn domain-invariant features, but still focused on the SS,ST setting.", "Finally, the idea of using adversarial nets to discriminate over multiple distributions was empirically explored by a very recent work (Liu et al., 2017) under the multitask learning setting, and can be considered a special case of our MAN framework with the NLL domain loss.", "We propose MAN as a more general framework with alternative architectures for the adversarial component, and for the first time provide theoretical justifications for multinomial adversarial nets.", "Moreover, Liu et al. 
(2017) used an LSTM without attention as their feature extractor, which we found to perform suboptimally in our experiments.", "We instead chose Convolutional Neural Nets as our feature extractor, which achieve higher accuracy while running an order of magnitude faster (see §4.3).", "In this work, we propose a family of Multinomial Adversarial Networks (MANs) that generalize the traditional binomial adversarial nets in the sense that MAN can simultaneously minimize the difference among multiple probability distributions instead of just two.", "We provide theoretical justifications for two instances of MAN, MAN-NLL and MAN-L2, showing that they are minimizers of two different f-divergence metrics among multiple distributions, respectively.", "This indicates that MAN can be used to make multiple distributions indistinguishable from one another.", "It can hence be applied to a variety of tasks, similar to the versatile binomial adversarial nets, which have been used in many areas for making two distributions alike.", "In this paper, we design a MAN model for the MDTC task, following the shared-private paradigm that has a shared feature extractor to learn domain-invariant features and domain feature extractors to learn domain-specific ones.", "MAN is used to enforce that the shared feature extractor learns only domain-invariant knowledge, by resorting to MAN's ability to make the shared feature distributions of samples from each domain indistinguishable.", "We conduct extensive experiments, demonstrating that our MAN model outperforms the prior-art systems in MDTC, and achieves state-of-the-art performance on domains without labeled data when compared to multi-source domain adaptation methods.", "This work was supported in part by NSF grant SES-1741441 and DARPA DEFT Grant FA8750-13-2-0015.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, 
of NSF, DARPA or the U.S. Government.", "We also thank Yun Liu, Tianze Shi, Xun Huang, and the anonymous reviewers for their helpful feedback and/or discussions." ]
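The CNN feature extractor described in the implementation details (kernel sizes 3, 4, and 5 over 100d word embeddings, ReLU, max-over-time pooling, then a single fully connected layer) can be sketched as follows. The split of the 200 kernels across sizes is not spelled out in the text, so assuming 200 kernels per size here is a hypothetical choice, and the random weights are placeholders, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, KERNELS_PER_SIZE, KERNEL_SIZES, OUT = 100, 200, (3, 4, 5), 128

# One filter bank per kernel size (assumption: 200 kernels per size).
filters = {k: rng.standard_normal((KERNELS_PER_SIZE, k * EMB)) * 0.01
           for k in KERNEL_SIZES}
W_fc = rng.standard_normal((len(KERNEL_SIZES) * KERNELS_PER_SIZE, OUT)) * 0.01

def cnn_features(tokens):
    """tokens: (seq_len, EMB) word embeddings -> fixed-length feature vector."""
    pooled = []
    for k, w in filters.items():
        # Each window of k consecutive embeddings yields KERNELS_PER_SIZE activations.
        windows = np.stack([tokens[i:i + k].ravel()
                            for i in range(len(tokens) - k + 1)])
        act = np.maximum(windows @ w.T, 0.0)   # convolution + ReLU
        pooled.append(act.max(axis=0))         # max-over-time pooling
    return np.concatenate(pooled) @ W_fc       # single fully connected layer

feat20 = cnn_features(rng.standard_normal((20, EMB)))
feat35 = cnn_features(rng.standard_normal((35, EMB)))
```

Max-over-time pooling is what makes the output length-independent, which is why the same extractor handles reviews of different lengths.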
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "objective", "other", "other", "other" ]
[ "Implicit discourse relation recognition is a challenging task due to the lack of connectives as strong linguistic clues.", "Previous methods primarily encode two arguments separately or extract the specific interaction patterns for the task, and thus have not fully exploited the annotated relation signal.", "Therefore, we propose a novel TransS-driven joint learning architecture to address these issues.", "Specifically, based on the multi-level encoder, we 1) translate discourse relations in low-dimensional embedding space (called TransS), which could mine the latent geometric structure information of argument-relation instances; 2) further exploit the semantic features of arguments to assist discourse understanding; 3) jointly learn 1) and 2) to mutually reinforce each other to obtain better argument representations, so as to improve the performance of the task.", "Extensive experimental results on the Penn Discourse TreeBank (PDTB) show that our model achieves competitive results against several state-of-the-art systems.", "Discourse relation describes how two adjacent text units (e.g., clauses, sentences, and larger sentence groups) are connected logically to one another.", "A discourse relation instance is usually defined as a connective taking two arguments (as Arg 1 and Arg 2, respectively).", "Implicit discourse relation recognition without explicit connectives (Pitler et al., 2009) is still a challenging problem of discourse analysis, which needs to infer the discourse relation from a specific context.", "It is beneficial to many downstream natural language processing (NLP) applications, such as machine translation (Meyer and Popescu-Belis, 2012) and text summarization (Gerani et al., 2014).", "The existing neural network-based models have shown great success in recognizing implicit discourse relations.", "This work mainly includes: 1) basic neural networks (Braud and Denis, 2015; Zhang et al., 2015; Liu et al., 2016) that learn dense vector representations of 
discourse arguments, which can capture the semantic information to some extent.", "Further studies exploit different attention or memory mechanisms (Liu and Li, 2016; Zhang et al., 2016) to capture the critical information of argument pairs.", "2) Complex neural models (Chen et al., 2016; Lei et al., 2017; Guo et al., 2018) utilize gated relevance networks or neural tensor networks to capture the deeper interactions between two discourse arguments.", "3) Joint learning architectures (Qin et al., 2017; Bai and Zhao, 2018; Xu et al., 2019) exploit implicit connective cues, different granularities of text, or topic-level relevant information to improve discourse relation prediction.", "However, these approaches still have the following drawbacks: 1) they do not make full use of the annotated discourse relation signal to explore the argument-relation features; 2) they neglect the extra information in the low-dimensional continuous embedding space, i.e., the direction or structure information of the vectors.", "Notice that Translating Embeddings (TransE) is a method for predicting missing relations between entities in knowledge graphs.", "Bordes et al. 
(2013) model relations by interpreting them as a translation operation not on the graph structure directly but in a learned low-dimensional embedding of the knowledge graph entities: if (h_e, l_e, t_e) holds, then the embedding of the tail entity t_e should be close to the embedding of the head entity h_e plus some vector that depends on the relation l_e.", "Similar to entity relation extraction, our task aims to identify the semantic relations between two arguments (i.e., sentences).", "Inspired by TransE, we design a new method (TransS), which translates discourse relations in sentence embedding spaces to mine the argument-relation features.", "Intuitively, these features reflect the latent geometric structure among the arguments and their discourse relation through algebraic operations, and argument-relation instances with the same discourse relation may have similar direction and position information in the embedding space.", "Therefore, we propose a novel TransS-driven joint learning neural network framework that leverages the latent geometric structure information of argument-relation instances, in addition to using semantic features to improve the comprehension of discourse arguments.", "In particular, we adopt a multi-level encoder to further enrich the argument representations, which could obtain the deeper semantics of discourse.", "Our contributions are as follows: propose a novel TransS-driven joint learning architecture, including latent geometric structure information learning (GSL) and semantic feature learning (SFL); design the TransS approach to translate discourse relations in low-dimensional embedding space from the sentence-level perspective, which could induce the geometric structure of argument-relation instances to some extent; employ the mutual reinforcement between the GSL and SFL to optimize the argument representations: 1) the GSL adopts its geometric structure clues to facilitate the SFL; 2) the SFL utilizes its semantic cues to improve the learning 
capability of GSL; The experimental results on the PDTB demonstrate", "The implicit discourse relation recognition task is usually formalized as a classification problem.", "In this section, we give an overview of the TransS-driven joint learning framework, which consists of four parts: embedding layer, multi-level encoder, latent geometric structure learning, and semantic feature learning, as shown in Figure 1.", "In order to model two discourse arguments with neural networks, we transform the one-hot representations", "of arguments and their discourse relation into distributed representations.", "Formally, the embedding layer can be seen as a simple projection layer where each word embedding is retrieved by a lookup table operation according to the word indexes.", "All words of the two arguments Arg 1, Arg 2, and their relation are mapped into low-dimensional vector representations, which are taken as the input of our model.", "To enrich the discourse argument representations, we exploit a multi-level encoder, shown in Figure 2, to learn the argument representations at different levels.", "In particular, the higher-level states of the multi-level encoder can capture context-dependent aspects of words while the lower-level states can model aspects of syntax (Peters et al., 2018).", "The multi-level encoder is composed of stacked encoder layers.", "Following previous work, we implement a bidirectional LSTM (BiLSTM) neural network to model the argument sequences, which preserves both historical and future information in the forward and reverse directions.", "Therefore, we obtain two representations, a forward state h_t(fw) and a backward state h_t(bw), at each time step t of the sequence.", "Then we concatenate them to get the intermediate state h_t = [ h_t(fw) ; h_t(bw) ].", "Attention Controller.", "Because treating each word equally limits the general representations, we use an attention mechanism to point out the words particularly useful for our task.", "Let H be the 
matrix consisting of the output vectors [ h_1 , h_2 , ..., h_n ] of the last layer, where n is the length of the argument.", "The new representation h* of the argument is formed by a weighted sum of the output vectors: M = tanh(H), (1) α = softmax(w^T M), (2) h* = H α^T. (3)", "where H ∈ R^{n×d}, d is the dimension of the word embedding, α is the attention weight vector, and w is a trainable parameter vector.", "Then we obtain the argument representation with important information from Eq.", "(4) for the next step.", "Finally, we obtain the overall argument representations by an average pooling operation over the word embedding sequence, defined as: h_Arg = (1/n) * sum_{i=1..n} h_i^{(m)},", "where h_Arg is the argument representation, h_i^{(m)} is the representation of the i-th word in the word embedding sequence of the m-th encoder layer, and n is the number of words in an argument.", "TransE, as a model for learning low-dimensional embeddings of entities, enforces a structure on the embedding space in which different relations between entities of different types may be represented by translations (Bordes et al., 2013).", "Discourse relation recognition and entity relation extraction are similar to some extent.", "Intuitively, argument-relation instances with the same discourse relation may also have similar direction and position information in the embedding space.", "However, a discourse argument embedding is a sentence-level representation, which differs from the reuse of entities across sentences, and is more diverse and complex than an entity representation.", "Therefore, we design TransS, a method which models discourse relations by interpreting them as translations operating in the low-dimensional embedding space from the sentence perspective.", "Moreover, it can mine the latent geometric structure of argument-relation instances.", "Specifically, defining the two arguments as head vector h_s and tail vector t_s respectively, and their annotated relation signal as relation vector r_s, the latent geometric structure is reflected by h_s + r_s ≈ t_s, and the score function is defined as follows: d_s(h_s, t_s) = || h_s + r_s - t_s ||_2^2.", "where h_s, t_s denote the representations of Arg 1 and Arg 2 respectively; r_s ∈ R^d is the embedding of the discourse relation and d is the dimension of the word embedding.", "GSL Loss.", "Under the framework of TransS, given a training set T of triplets ( h_s , r_s , t_s ) composed of two arguments h_s, t_s ∈ V (the set of sentence vectors) and a relation r_s ∈ R (the set of relations), our model learns the embeddings of the words in the arguments and of the discourse relation.", "The GSL loss function is defined as: L_GSL = sum_{(h_s, r_s, t_s) ∈ T} sum_{(h'_s, r_s, t'_s) ∈ T'_{(h_s, r_s, t_s)}} [ γ + d_s(h_s, t_s) - d_s(h'_s, t'_s) ]_+ + λ_GSL ||θ||_2^2.", "(7) where [·]_+ denotes the positive part, γ > 0 is a margin hyper-parameter, and the set of negative triplets is constructed according to", "T'_{(h_s, r_s, t_s)} = { ( h'_s , r_s , t_s ) | h'_s ∈ V } ∪ { ( h_s , r_s , t'_s ) | t'_s ∈ V }.", "(8) By optimizing the GSL loss, we obtain the latent geometric structure information of argument-relation instances.", "In Eq.(8), the head or the tail is replaced by a random argument vector (but not both simultaneously).", "θ denotes the other parameters of the network.", "L2 regularization is used to penalize the size of all parameters to prevent overfitting, weighted by λ_GSL.", "Different from TransE, we cannot directly utilize TransS to recognize discourse relations, because each argument is not reused across discourses.", "Therefore, we exploit TransS to mine the latent geometric structure information and further guide the semantic feature learning.", "The new argument representations ( h_Arg1 , h_Arg2 ) with latent geometric structure information learned by the GSL serve as inputs to the semantic feature learning (SFL).", "The h_Arg1 (i.e., h_s) and h_Arg2 
(i.e., t_s) are obtained from the multi-level encoder.", "We further stack a softmax layer upon the representations: y' = f( W_f [ h_Arg1 ; h_Arg2 ] + b_f ).", "The SFL loss is a cross-entropy style loss: L_SFL = - sum_{j=1..C} y_j log(y'_j), (10) where y is the one-hot representation of the ground-truth relation; y' is the predicted probability distribution over relations; C is the number of relation classes.", "where f is the softmax function, W_f ∈ R^{C×2d} and b_f ∈ R^C are the weight matrix and bias term respectively, d denotes the dimension of the word embedding, and C denotes the number of relation classes.", "SFL Loss.", "Under the framework of basic neural networks for our task, given the training set T, the two argument vectors h_s, t_s in a triplet ( h_s , r_s , t_s ) are concatenated into a new sentence vector during training, and the generated vector is then used for relation recognition.", "After obtaining the new representations of Arg 1 as head vector h_s, of Arg 2 as tail vector t_s, and the relation vector r_s, our model is trained using a joint learning mechanism.", "The goal of our model is to minimize the loss function", "(Eq.(11)) L = L_GSL + λ L_SFL,", "where L_GSL and L_SFL are from", "Eq.(7) and (10), respectively; λ is the trade-off parameter controlling the balance between GSL and SFL.", "Our model jointly learns the GSL and SFL to optimize the argument representations.", "On the one hand, the GSL maps the discourse relation between two arguments to the low-dimensional embedding space and obtains the vectors h_s, r_s, t_s with geometric structure information to constrain the SFL.", "On the other hand, the SFL alternately optimizes the discourse representations and provides the necessary semantic clues for geometric structure information mining.", "Generally, the GSL and SFL reinforce each other, finally yielding better argument representations that contain both the semantics and the latent geometric structure information of the argument-relation instances.", "The 
PDTB 2.0, a large-scale corpus annotated on 2,312 Wall Street Journal articles, is utilized for all experiments.", "It contains three hierarchies: Level-1 Class, Level-2 Type, and Level-3 Subtype.", "We focus on the first level, which contains four classes: Comparison (Comp.), Contingency (Cont.), Expansion (Exp.), and Temporal (Temp.).", "Following (Rutherford and Xue, 2014), we use Sections 2-21 as the training set, Section 22 as the development set, and Section 23 as the test set.", "All the arguments are padded to the same length of 100.", "Word embeddings are 300-dimensional and randomly initialized with uniformly distributed samples from [-0.1, 0.1].", "The learning rate is set to 0.001, the batch size is 128, and the number of iterations is 100.", "For the GSL, the margin γ of the loss is set to 0.5, the trade-off parameter λ in", "Eq.(11) is set to 1.0, and we use the L2 distance as the dissimilarity measure; for the SFL, the sizes of the input and the hidden layer of the BiLSTMs are both 300; we choose three encoder layers, and set the dimension of the pre-trained embeddings from ELMo (Peters et al., 2018) to 300.", "To validate the effectiveness of our model, we select some state-of-the-art systems from the following three aspects to compare with our model: Discourse Argument Representation", "1) Ji2015 : Ji and Eisenstein (2015) computed distributed representations for each discourse argument by composition up the syntactic parse tree.", "2) Zhang2015 : Zhang et al. (2015) proposed pure neural networks with three different pooling operations to learn shallow argument representations for the task.", "3) Liu2016a : Liu and Li (2016) combined an attention mechanism and external memory to focus on specific words that help determine discourse relations.", "4) Lan2017 : Lan et al. (2017) designed an attention-based neural network for learning discourse argument representations and a multi-task framework for learning knowledge from annotated and unannotated corpora.", "5) Chen2016 : Chen et al. 
(2016) adopted a gated relevance network to capture interaction information between two arguments to enhance relation recognition.", "6) Qin2016 : Qin et al. (2016a) adopted context-aware character-enhanced embeddings to address the implicit discourse relation recognition task.", "8) Dai2018 : Dai and Huang (2018) modeled interdependencies between discourse units as well as discourse relation continuity and patterns, and predicted a sequence of discourse relations in a paragraph.", "9) Liu2016b : Liu et al. (2016) designed related discourse classification tasks specific to a corpus, and proposed a novel Convolutional Neural Network embedded multi-task learning system to synthesize these tasks by learning both unique and shared representations for each task.", "10) Bai2018 : Bai and Zhao (2018) employed differently grained text representations, including character, subword, word, sentence, and sentence-pair levels, and transferred the knowledge from the implicit connectives to support discourse relation prediction.", "Baseline (Including SFL) We use three encoder layers to encode the argument pairs separately, then concatenate them together, and feed them to the SFL module for relation recognition.", "+ELMo We utilize the Baseline to obtain the argument representations, and then use the pre-trained ELMo vector to enhance them.", "Finally, we feed them to the SFL module for relation recognition.", "+GSL & ELMo (Ours) We feed the two argument representations, encoded by the Baseline and enhanced by the pre-trained ELMo vector, into the GSL and SFL modules, respectively.", "Then we utilize the integrated representation to recognize the discourse relation.", "Consistent with previous studies, we choose F1 score and accuracy as evaluation metrics.", "For binary classification, the result is computed by F1 score, and for 4-way classification, the result is computed by macro-averaged F1 score.", "Table 2 shows the results of the compared 
state-of-the-art systems on binary and 4-way classification.", "We can make the following observations: Overall,", "i) our model achieves state-of-the-art performance, i.e., the F1 score and accuracy are 51.24% and 59.94% on the 4-way classification, respectively;", "ii) the results of binary classification show a similar tendency to those of the 4-way classification.", "In particular, our model gains the best F1 score on the Comparison relation.", "The main reason may be that instances with different discourse relations have different direction and position (geometric structure) features in the low-dimensional continuous embedding space, and the Comparison instances have more obviously indicative structure features.", "Comparing our model with Chen2016 and Lei2017, the F1 scores of our model are higher than those of the latter two.", "This shows that our model is better than the two methods that only consider content interactions, since we jointly leverage the geometric structure information and the semantic information of the argument-relation instances to obtain deeper interactions.", "Among the comparison models, Bai2018 with its joint learning framework achieves the best performance, which illustrates that jointly utilizing the discourse relation and the implicit connectives is helpful to the task.", "Moreover, the performance of our model is better than that of Bai2018.", "This not only indicates the effectiveness of joint learning, but also shows that considering the geometric structure is beneficial to our task.", "We make the following observations from Table 3: Overall: 1) Our model performs better than all the other ablation models.", "This demonstrates that the geometric structure information can enrich the argument representation and promote implicit discourse relation recognition.", "2) All models achieve higher F1 values on the Expansion relation than on the other relations.", "This may be caused by the unbalanced data.", "GSL : The F1 score of 
our model using the GSL module is 48.91%, higher than the performance of the Baseline.", "In addition, compared with ELMo,", "(a) without geometric structure features.", "(b) with geometric structure features.", "although the performance of GSL does not exceed ELMo's, GSL obtains comparable results.", "This shows that the two modules (GSL and SFL) can reinforce each other by utilizing the geometric structure information through the algebraic operation.", "Moreover, we exploit the geometric structure clues to augment the semantic understanding of discourse from a new aspect, which differs from ELMo, which focuses only on the semantic information of the text itself.", "ELMo : The third row of Table 3 is the result of our model, which only uses the pre-trained ELMo vector to enhance argument representations.", "The F1 score and accuracy are 50.07% and 58.89%, respectively, which are 3.61% and 4.87% improvements over the Baseline.", "This verifies that ELMo, as pre-trained contextualized word embeddings, contains richer contextual information.", "GSL & ELMo : Compared with ELMo, GSL & ELMo gains better performance, which demonstrates that inducing spatial geometric structure information on top of the enhanced arguments helps the model understand the semantics of discourse better.", "To illustrate the effectiveness of the latent geometric structure information of argument-relation instances obtained by TransS, we visualize the heat maps of the interaction information of argument representations shown in Figure 3.", "Each word is shown with a background color.", "Darker patches denote higher correlations between word pairs.", "The example of the Comparison relation is listed below: Arg1: I was prepared to be in a very bad mood tonight.", "From the semantic perspective, this example could be identified as either a Comparison or a Temporal relation.", "Since argument pairs may have distinct distinguishing features in geometric space, we can consider the geometric 
structure of argument pairs to help identify the discourse relation.", "We can make the following observations: As seen from", "Figure 3(a), without introducing geometric structure information, the model has a high correlation around the word Now, which might directly indicate the Temporal relation.", "This demonstrates that considering only the semantic information of arguments may suffer from issues such as polysemy, ambiguity, and fuzziness.", "", "Figure 3(b) shows the interaction information of the argument representations after introducing the GSL.", "From the results, we can see that the model has a high correlation around the words little and very, which carry comparative information.", "The possible reason is that our model utilizing GSL shifts the higher attention from the word Now with Temporal information to the word pairs (little, very), (euphoria, bad) and (euphoria, mood) with the Comparison relation.", "Our model with GSL introduces the geometric structure information and jointly utilizes these features and semantic information to help identify the discourse relation.", "In order to illustrate the impact of the number of encoder layers, we select different numbers of encoder layers", "for comparison experiments on the 4-way classification.", "Figure 4 shows that the F1 scores increase up to three encoder layers.", "When the number of encoder layers is four or five, the performance of our model decreases markedly.", "As the number of encoder layers increases, the model can capture richer semantic information.", "However, the results imply that with more encoder layers, the model can incur overfitting due to the additional parameters.", "Therefore, we adopt three encoder layers to encode the arguments as our Baseline in Section 3.3.", "Neural network-based models have shown great effectiveness in implicit discourse relation recognition.", "We give an analysis of the most relevant work: 4.1 
Discourse Argument Representation Proper argument representation is a core factor of our task.", "Most previous research encodes arguments as dense and continuous representations based on various neural networks, from basic neural networks (such as CNNs and RNNs) to complex neural networks (Zhang et al., 2015; Qin et al., 2016b; Rutherford et al., 2016).", "Some studies adopt different attention or memory mechanisms to capture the emphasis in discourse arguments (Mnih et al., 2014; Liu and Li, 2016; Zhang et al., 2016).", "Li et al. (2016) exploit hierarchical attention to capture the focus at different granularities.", "Zhang et al. (2016) build upon a semantic memory to store knowledge in a distributed fashion for the task.", "However, these models consider the two arguments independently, without the interaction information.", "Further studies tend to discover more semantic interactions between two arguments via complex neural networks (Qin et al., 2016c; Cai and Zhao, 2017; Lan et al., 2017; Guo et al., 2018).", "Chen et al. (2016) develop a novel gated relevance network to capture semantic interactions between arguments.", "Lei et al. (2017) compute word-pair interaction scores to capture both linear and quadratic relations for argument representation.", "However, these methods utilize pre-trained embeddings for mining the interaction features and ignore the geometric structure information entailed in discourse arguments and their relation.", "Recently, some studies adopt a joint learning framework to capture more discourse clues for the task.", "Bai and Zhao (2018) jointly predict connectives and relations, with shared parameters across the deep learning models.", "Xu et al. 
(2019) propose a topic tensor network (TTN) to model the sentence-level interactions and topic-level relevance among arguments for this task.", "However, few studies model discourse relations by translating them in a low-dimensional embedding space as we do in this work.", "TransE effectively maps the relation to the embedding space of entities by performing an algebraic operation.", "Bordes et al. (2013) model entity relations by interpreting them as a translation operation in the low-dimensional embedding of the entities.", "Inspired by TransE, we design the TransS method to mine the latent geometric structure information, which can enhance the argument representations and promote discourse relation recognition.", "To our knowledge, this is the first attempt to mine the latent geometric structure of argument-relation instances.", "Meanwhile, the argument and relation embeddings produced by TransS could be used in other high-level NLP tasks.", "In this paper, we propose a novel TransS-driven joint learning neural network framework that optimizes the discourse argument representations to improve implicit discourse relation recognition.", "We interpret discourse relations as translations in a low-dimensional embedding space, which reflects the geometric structure of argument-relation instances, and also obtain richer argument representations based on the multi-level encoder.", "Different from conventional approaches that consider only semantic features, we jointly leverage the latent geometric structure information and the semantic features to optimize the argument representations, which can improve the semantic understanding of discourse.", "Experimental results on the PDTB show the effectiveness of our model.", "We thank the anonymous reviewers for their valuable feedback.", "Our work is supported by the National Key R&D Program of China (2019YFC1521200), the National Natural Science Foundation of China (61976154, U1736103), the Tianjin Natural Science Foundation 
(18JCY-BJC15500), and the Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK (CIOS-20190001)." ]
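The TransS scoring and margin-based GSL loss described in the sentences above can be sketched in a few lines. This is an illustrative toy version under assumed conventions (plain Python lists as vectors, one paired negative per positive, regularization term omitted), not the authors' implementation.

```python
def transs_score(h_s, r_s, t_s):
    """d_s(h_s, t_s) = ||h_s + r_s - t_s||_2^2: squared L2 norm of the translation residual."""
    return sum((h + r - t) ** 2 for h, r, t in zip(h_s, r_s, t_s))

def gsl_margin_loss(positives, negatives, gamma=0.5):
    """Margin-based ranking loss: sum of [gamma + d(pos) - d(neg)]_+ over paired triplets.
    (The lambda_GSL * ||theta||_2^2 regularizer of Eq.(7) is omitted for brevity.)"""
    loss = 0.0
    for (h, r, t), (hn, rn, tn) in zip(positives, negatives):
        loss += max(0.0, gamma + transs_score(h, r, t) - transs_score(hn, rn, tn))
    return loss
```

A triplet whose translation is exact scores zero, and a negative that is already separated by more than the margin contributes no loss.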
[ "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "objective", "abstain", "objective", "result", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "objective", "other", "objective", "result", "result", "result", "other", "other" ]
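The joint objective combining the cross-entropy SFL term with the GSL term, as described in the sentence list above, can be illustrated as follows. This is a minimal sketch; the function names and the toy probabilities are assumptions, and the paper's reported setup uses a trade-off weight of 1.0.

```python
import math

def sfl_cross_entropy(y_true, y_pred):
    """L_SFL = -sum_j y_j * log(y'_j) over the C relation classes (Eq.(10)-style)."""
    return -sum(y * math.log(p) for y, p in zip(y_true, y_pred) if y > 0)

def joint_loss(l_gsl, l_sfl, lam=1.0):
    """L = L_GSL + lambda * L_SFL (Eq.(11)-style combination of the two losses)."""
    return l_gsl + lam * l_sfl
```

With a one-hot target, the cross-entropy term reduces to the negative log-probability assigned to the gold relation class.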
[ "Relation prediction informed by a combination of text corpora and curated knowledge bases, combining knowledge graph completion with relation extraction, is a relatively little-studied task.", "A system that can perform this task has the ability to extend an arbitrary set of relational database tables with information extracted from a document corpus.", "OpenKi (Zhang et al., 2019) addresses this task through extraction of named entities and predicates via OpenIE tools, then learning relation embeddings from the resulting entity-relation graph for relation prediction, outperforming previous approaches.", "We present an extension of OpenKi that incorporates embeddings of text-based representations of the entities and the relations.", "We demonstrate that this results in a substantial performance increase over a system without this information.", "Code: https://github.com/drevicko/OpenKI Curated knowledge repositories such as knowledge bases and relational databases provide powerful tools for many practical knowledge-related tasks.", "They require, however, substantial effort to create and maintain.", "Many applications deal with knowledge that is continuously changing, presenting prohibitive maintenance costs and limiting the utility of explicit knowledge representation technologies.", "The new knowledge is often available in text-based formats such as reports, news items and memos.", "In this work, we use the term proposition to describe a triple ( e_1 , r, e_2 ) that indicates that a relation r holds between two entities e_1 and e_2 .", "Work in the field has largely focussed on either extracting propositions directly from text or inferring missing propositions by examining knowledge graphs.", "What we are interested in here combines the two in a single model, utilising information from the knowledge base and collections of text together to infer relations, both mentioned in the text and implied by the text in combination with existing knowledge.", 
"Previous work following this approach draws on patterns in the curated knowledge graph in combination with the graph of entity mentions in texts, allowing prediction of new knowledge base relations (Riedel et al., 2013; Verga et al., 2015, 2017).", "Zhang et al. (2019) extend this work by incorporating text predicates connecting entity mentions extracted using OpenIE tools (Fader et al., 2011; Lockard et al., 2019) and introducing the concept of entity neighbourhoods, consisting of the binary OpenIE predicates and knowledge base relations that occur with a given entity as their subject or object.", "Drawing on the success of text-based representations incorporated into entity recognition tasks (Gillick et al., 2019), we extend Zhang", "et al.'s model by incorporating text-based embeddings of entities and relations into the entity neighbourhood representations.", "Texts are drawn from knowledge base metadata and occurrences in source texts.", "We use fasttext (Mikolov et al., 2018) word embeddings and BERT (Devlin et al., 2018) to obtain text embeddings.", "The resulting models achieve state-of-the-art results on two knowledge base extension data sets.", "Open information extraction (OpenIE) attempts to find relations expressed in collections of texts through identification of entity and relation spans (Fader et al., 2011; Stanovsky et al., 2018).", "Our work can be taken as an approach to incorporate this extracted information into an existing knowledge base.", "Relation extraction, the identification of relations expressed in text between given entity mentions, has received much attention in recent years. (Footnote 1: we refer to relations from a knowledge base as relations or KB relations, and predicates extracted from text as predicates or text predicates.)", "These tasks consider only the recognition of knowledge directly expressed in individual texts, whereas we seek to utilise the combined knowledge from both a collection of texts and a knowledge base, allowing implicit 
and automatic association between expressions in texts and knowledge base relations and inference of propositions not directly expressed in individual texts.", "A number of works present a distant supervision approach that utilises entity pairs in texts as a signal for the presence of propositions that may be incorporated in a knowledge base.", "This signal is inherently noisy, and several approaches have been devised to deal with this (e.g. (Hoffmann et al., 2011; Zeng et al., 2015; Lin et al., 2016)).", "Closer to what we propose, Han et al. (2018) propose a neural attention mechanism between a knowledge graph and supporting texts, outperforming previous approaches.", "These approaches do not utilise graph information in the form of connections between the texts and can only extract relations explicitly mentioned in the texts. (Footnote 2: https://paperswithcode.com/sota/relation-extraction-on-tacred)", "We note that the OpenKI model (Zhang et al., 2019), which we use as a baseline, outperforms these models (see Table 3).", "We build on the Entity Neighbourhood Encoding (ENE) model proposed by Zhang et al. 
(2019).", "We then combine our enhanced neighbourhood encodings with the more complex dual attention model coined OpenKI.", "Input data consists of a knowledge base or KB (a curated collection of proposition triples) and a collection of texts with entities identified and linked to knowledge base entities (where possible).", "In addition, text predicates linking entity mentions in source texts may be extracted (for example with OpenIE tools such as Reverb (Fader et al., 2011) or Ceres (Lockard et al., 2019)).", "Alternatively, sentences can be used as proxies for text predicates.", "The task then is to decide whether a query proposition ( e_1 , r, e_2 ) with KB relation r is true and should be added to the KB.", "A graph over the knowledge base and source texts is constructed.", "Here the entities are nodes, and KB relations and text predicates are directed links from the subject entity to the object entity.", "Neighbourhoods of entities are then defined as the set of outward links (subject neighbourhoods) and inward links (object neighbourhoods) from/to an entity.", "Each relation and predicate p is associated with two unique, trainable embeddings v_p^subj, v_p^obj.", "We combine these learned relation/predicate embeddings with embeddings v_p^text derived from associated texts to obtain enhanced representations v_p^{subj:t}, v_p^{obj:t} as follows.", "v_p^{subj:t} = v_p^subj + tanh( W_pred^subj v_p^text + b_pred^subj ), v_p^{obj:t} = v_p^obj + tanh( W_pred^obj v_p^text + b_pred^obj ), (1) where W_pred^subj, W_pred^obj ∈ R^{D×T} and b_pred^subj, b_pred^obj ∈ R^D are trainable weight matrices and bias vectors respectively.", "We use a tanh activation function to allow the model to adapt the learned representations v_p^subj and v_p^obj in both a positive and negative direction.", "In this work, the text representations v_p^text are static and do not vary during training.", "Given subject/object entities s and o in our query, we aggregate relation/predicate representations from their respective entity neighbourhoods 
R(s, ·) and R(·, o), as follows.", "We use the vector average as the aggregation function Agg(·).", "Note that entities have no associated learned embedding, and are represented only by these aggregate representations.", "Zhang et al.", "(Zhang et al., 2019) posit that the aggregated representations v_subj^agg(e) and v_obj^agg(e) provide ultra-fine-grained type information about entities when playing the respective roles, and observe that including entity type information in their models does not notably improve performance, suggesting that type information is already present.", "Taking inspiration from that, we propose combining these aggregated representations with text-based entity embeddings v_e^text ∈ R^T derived from entity names and descriptions.", "where W_ent^subj, W_ent^obj ∈ R^{D×T} and b_ent^subj, b_ent^obj ∈ R^D are trainable weight matrices and bias vectors respectively.", "We then obtain association scores for a candidate predicate p, candidate subject entity s and candidate object entity o via a vector similarity measure (dot product in our case).", "These scores are then passed through sigmoid functions with trainable temperatures a_subj, a_obj ∈ R and thresholds b_subj, b_obj ∈ R, then summed with trainable mixing weights.", "The mixing weights are passed through the ReLU function to ensure that the raw scores can only contribute positively to the final score without canceling each other out.", "The resulting score, trained with a max-margin loss, allows us to rank propositions, with true propositions ranked higher.", "The full OpenKi model incorporates a third scoring component that combines aggregated neighbour representations (Equation", "2) with a query attention mechanism similar to (Verga et al., 2017); see (Zhang et al., 2019) for details.", "For text-enhanced models we replace the neighbour representations with Equation 3.", "Following (Zhang et al., 2019) we test our models on two data sets:", "1) English language extractions from 
the New York Times (NYT) (Riedel et al., 2010), consisting of sentences with named entities identified and linked to FreeBase (FB), and", "2) REVERB (Fader et al., 2011) (an OpenIE tool) extractions from ClueWeb (Lin et al., 2012) (English language web texts) as preprocessed by the OpenKI authors (footnote 3), also with entities linked to FreeBase.", "For the NYT data, we use sentences as proxies for text predicates, and for predicate texts we use whole sentences (including the entity mentions).", "Texts for FreeBase relations are derived from their identifiers, which are paths in the FreeBase relation hierarchy. (Footnote 3: https://github.com/zhangdongxu/relation-inference-naacl19) Table 1 (Data Statistics), OpenIE vs. NYT: training data: entity pairs 40,878 / 377,013; without KB relations 0 / 359,197; KB relation types 250 / 57; predicate types 124,836 / 320,711; test data: test triples 4,938 / 1,761.", "We convert these paths to texts consisting of the sequence of relation class names separated by full stops.", "For example, location.us_state.capital is converted to Location. US state. Capital.", "See Appendix C for details of NYT data preprocessing.", "The second data set consists of REVERB (Fader et al., 2011) (an OpenIE tool) extractions from ClueWeb with entities linked to FreeBase (Lin et al., 2012), as provided by the OpenKI authors (footnote 4).", "Text predicates in this data are provided in text form and are used directly.", "We obtain texts for FreeBase relations in a similar way to the NYT data.", "Note that original sentences from ClueWeb are not readily available for this data.", "To obtain entity texts for both data sets, we use the property type.object.name of the associated FreeBase entity, where present, or the entity span in the NYT source text in other cases (footnote 5).", "Most FreeBase entities also include a longer description text (the common.topic.description property).", "We concatenate the entity names and their descriptions to obtain a second text representation, used in the . . . 
+ Desc columns in Table 2.", "Where the description text is missing or the entity was not found in FreeBase, we use only the shorter text for the . . . + desc results.", "We follow the experiments presented in (Zhang et al., 2019) for effective comparison.", "In preliminary experiments, we additionally trained all model variants using only text representations (effectively fixing all learned representations v^subj/obj_p to zero vectors), and found performance to be substantially degraded in all cases. (Footnote 4: https://github.com/zhangdongxu/relation-inference-naacl19. Footnote 5: Two entities in the OpenIE data were not found in FreeBase; zero vectors were used for their entity text embeddings.)", "Similarly, the SOTA knowledge base completion model TuckER (Balazevic et al., 2019) performed very poorly when applied to the combined text predicate + KB relation graph for both data sets.", "Source code for our experiments, including data download links, is available on GitHub (footnote 6).", "We use text embeddings derived from fastText word embeddings (Mikolov et al., 2018) and BERT-SMALL (Devlin et al., 2018).", "For fastText, we average the embeddings for words in the text.", "For BERT we use two strategies: the average of token representations and the representation of the special [CLS] token prepended to all texts during standard BERT preprocessing.", "We use 100-dimensional learned embeddings, learning rate 0.005 with the RAdam optimiser (Liu et al., 2019), and batch size 128 for 150 epochs (ClueWeb data) and 70 epochs (NYT data).", "We train with max-margin loss with a margin of 1.0, using 16 negative examples for each positive example.", "Negative samples consist of the entity pair and a uniformly sampled (non-positive) relation or predicate.", "For evaluation, with the NYT data we use the area under the precision-recall curve (AUC-PR) for relation prediction over entity pairs.", "With the OpenIE data we use mean average precision (MAP) on the task of ranking entity pairs.", "Reported results 
are from the best of 5 runs for each configuration (as measured by development set performance).", "In Table 2 we see that inclusion of text-based information provides a substantial boost to performance across all model variants, with improvements up to 9% in MAP and 16% in AUC-PR.", "We observe that including entity texts performs better than including relation/predicate texts, even when entity texts are included alongside them (mostly ~3% improvement).", "This can probably be explained by the paucity of the predicate text representations for KB relations, and by the fact that whole sentences contain extraneous information not relevant to the relationship between entities.", "Future work with, for example, contextual BERT representations of predicate spans and excluding KB relation texts may perform better.", "Including entity descriptions performs similarly to or better than not including them (up to ~4%), in particular for BERT.", "It is not surprising that BERT can leverage the long-form entity descriptions effectively.", "Average BERT token embeddings perform better than the CLS token embedding in most cases.", "The most surprising result is the relative performance between BERT and fastText, with fastText outperforming BERT with entity-only text enhancement and providing the best performing models.", "It is not clear to us why this is the case.", "One hypothesis is that the fully connected layers projecting text representations to the learned embedding dimension may do better with a different, lower learning rate, and that this effect may be more pronounced with the larger BERT representations.", "We plan to explore this in future work.", "It is worth noting that using sentences as proxies for text predicates is a rather weak setup.", "The majority of sentences contain a single entity pair, meaning that each sentence (as a predicate proxy) appears in only one subject and one object neighbour list.", "This provides little graph information for the model to utilise.", "The small proportion that 
do overlap appear to provide benefit, however.", "OpenKI identifies a compatibility between relations and entities through their co-occurrences in a graph.", "Though a strong signal, our results indicate that this information is further enhanced by the detailed and nuanced information that can be found in both task source texts and entity and relation descriptions.", "Text-based information alone, however, has not been found to provide sufficient information for good performance on these tasks.", "This is seen in both our preliminary experiments without learned graph-based embeddings and previous work that relies on text-based inference (Table 3: Other Baseline Models on NYT data).", "We investigated the task of integrating new information, in the form of a collection of texts such as news articles, into a knowledge base (KB), building on previous models that utilised information from the combined graph of knowledge base relations and predicates extracted from the texts using OpenIE tools.", "We propose a mechanism for incorporating text representations of entities, KB relations, and text predicates into the state-of-the-art OpenKI model, providing a substantial improvement in performance.", "From this we can conclude that source texts and entity and relation descriptions contain nuanced information useful to the task beyond that contained in graph structures in the knowledge base and extracted predicate propositions.", "Our models represent a new state of the art on two data sets for this task.", "We thank the anonymous reviewers for their insightful suggestions to improve this paper.", "This research was conducted under the Australian Research Council's Discovery Projects funding scheme (project number DP160102156)." ]
[ "abstain", "abstain", "abstain", "method", "objective", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "method", "method", "method", "other", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "objective", "objective", "method", "objective", "other", "other" ]
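The proposition-scoring scheme described in the first record above (dot-product association scores for subject and object roles, squashed by sigmoids with trainable temperatures and thresholds, then mixed with ReLU'd weights over vector-averaged neighbour aggregates) can be sketched in a few lines. This is a minimal numpy illustration of the composition, not the authors' implementation; the function and parameter names (`score`, `agg`, `a_subj`, `b_subj`, `w_subj`, ...) are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def agg(vectors):
    # Agg(.): vector-average aggregation of neighbour predicate embeddings;
    # entities carry no learned embedding of their own, only this aggregate.
    return np.mean(vectors, axis=0)

def score(q_subj, q_obj, subj_neighbours, obj_neighbours,
          a_subj=1.0, a_obj=1.0,    # trainable sigmoid temperatures
          b_subj=0.0, b_obj=0.0,    # trainable thresholds
          w_subj=0.5, w_obj=0.5):   # trainable mixing weights (pre-ReLU)
    # aggregated entity representations for the subject and object roles
    v_subj = agg(subj_neighbours)
    v_obj = agg(obj_neighbours)
    # dot-product association scores with the candidate predicate embeddings
    s_subj = q_subj @ v_subj
    s_obj = q_obj @ v_obj
    # sigmoid with temperature/threshold, mixed via ReLU'd weights so each
    # component can only contribute positively to the final score
    return (max(w_subj, 0.0) * sigmoid(a_subj * (s_subj - b_subj))
            + max(w_obj, 0.0) * sigmoid(a_obj * (s_obj - b_obj)))
```

With the default weights summing to 1, the score stays in (0, 1) and grows monotonically with either role's association score, which is what makes a max-margin ranking loss over propositions well-behaved.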
[ "In open-domain question answering (QA), the retrieve-and-read mechanism has the inherent benefit of interpretability and the ease of adding, removing, or editing knowledge, compared to the parametric approaches of closed-book QA models.", "However, it is also known to suffer from a large storage footprint due to its document corpus and index.", "Here, we discuss several orthogonal strategies to drastically reduce the footprint of a retrieve-and-read open-domain QA system, by up to 160x.", "Our results indicate that retrieve-and-read can be a viable option even in a highly constrained serving environment such as edge devices, as we show that it can achieve better accuracy than a purely parametric model with a comparable docker-level system size.", "1 Introduction Open-domain question answering (QA) is the task of finding answers to generic factoid questions.", "[Figure 1: System footprint (GB) vs. end-to-end QA Exact Match (EM) accuracy on the EfficientQA dev set, tracing the successive reductions (2.1 passage filtering, 2.2.1 128D embedding, 2.2.1 MobileBERT, 2.2.2 retriever encoder sharing, 2.2.3 unified retriever-reader through KD, 2.2.4 iterative finetuning, 2.3 post-training compression) from DPR to the Minimal R&R system, against T5-1.1-small/XL/XXL+SSM baselines.]", "In recent literature, the task is largely approached in two ways, namely retrieve & read and parametric.", "The former solves the problem by first retrieving documents relevant to the question from a large knowledge source and then reading the retrieved documents to find out the answer (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021).", "The latter, also known as closed-book QA, generates the answer in a purely parametric end-to-end manner (Brown et al., 2020; Roberts et al., 2020).", "While a parametric model enjoys a benefit in terms of system size, in that it does not require an additional knowledge source as a retrieve & read system does, its fundamental limitations are that its predictions are not very interpretable and that it is not suitable for a dynamic knowledge source. (Footnote: Most of the work was done while the author was working at NAVER Corp. Footnote 1: Our code and model weights are available in https://github.com/clovaai/minimal-rnr-qa.)", "This is because it is difficult to add, remove, or edit knowledge in a parametric model.", "These limitations are well-addressed by the retrieve & read mechanism, which makes it often more suitable for real-world products.", "However, it is known to suffer from its large storage footprint due to its document corpus and index, especially compared to the parametric model that only needs to store the parameters (Izacard et al., 2020; Fajcik et al., 2021; Lewis et al., 2021).", "Building an interpretable and flexible open-domain QA system and reducing its system size are both important in real-world scenarios; the system must be able to quickly adapt to the changes of the world and be deployed in a highly constrained serving environment such as edge devices.", "Hence, to get the best of both worlds, it is worthwhile to explore the trade-off between the storage budget and the accuracy of a retrieve & read system.", "Generic approaches to compressing neural models include pruning, quantization (Zafrir et al., 
2019), and knowledge distillation (Hinton et al., 2014).", "In this paper, we utilize some of these generic approaches and combine them with problem-specific techniques to size down a conventional retrieve & read system.", "We first train a passage filter and use it to reduce the corpus size (Section 2.1).", "We further apply parameter sharing strategies and knowledge distillation to make a single-encoder lightweight model that can perform both retrieval and reading (Section 2.2).", "In addition, we adopt multiple engineering tricks to make the whole system even smaller (Section 2.3).", "We verify the effectiveness of our methods on the dev set and test set of EfficientQA (footnote 2) (Min et al., 2021).", "By applying our strategies to a recent extractive retrieve & read system, DPR (Karpukhin et al., 2020), we reduce its size by 160x with little loss of accuracy; the resulting accuracy is still higher than that of a purely parametric T5 (Roberts et al., 2020) baseline with a comparable docker-level storage footprint.", "In Appendix A.5, we also report the performance on two more open-domain QA datasets, Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), to test the generalizability of our methods and suggest a future research direction.", "In this section, we discuss three techniques for reducing the storage footprint of a generic retrieve & read system, namely passage filtering (Section 2.1), unifying retriever and reader into a single model through parameter sharing (Section 2.2), and post-training compression (Section 2.3).", "We assume that the initial system takes the conventional composition of a trainable (neural) retriever with a question encoder and a passage encoder that create dense vectors used for search, a neural extractive reader (possibly with passage ranking), and a text corpus and the corresponding index that serve as the knowledge source.", "Figure 1 shows how we start from one such retrieve & read system and apply each of the 
methods, in the order they are introduced in this section, to successively reduce its system footprint without sacrificing much accuracy.", "Index and corpus files can take up a significant portion of the storage footprint of a retrieve & read", "system if a large text corpus is utilized as the knowledge source.", "Therefore, to drastically reduce the system size, we train a binary classifier and use it to exclude passages that are relatively unlikely to be useful for question answering.", "Let the set of indices of all passages in the corpus be J_total.", "To create the training data, we split J_total into two disjoint sets, J+_train and J-_train, such that the former contains the indices of the passages we would like to include in the minimal retrieve & read system.", "(Footnote 3) Denoting E(·) as a trainable dense encoder which maps a passage to a d-dimensional embedding, such that v_j = E(p_j) ∈ R^d, the score s_j = v_j^T w, where w ∈ R^d is a learnable vector, represents how likely a passage p_j would hold the answer to an input question. The classifier is trained with binary cross-entropy on minibatches of half-positive and half-negative passages, J+'_train and J-'_train, drawn from J+_train and J-_train, respectively. During training, we sample several checkpoints and evaluate them using the hit ratio on a validation set: hit_val = |J_val^[1:|J+_val|] ∩ J+_val| / |J+_val|, where J+_val is the set of indices of the ground-truth passages that hold the answer for the questions in the validation set, and J_val^[1:|J+_val|] is the set of indices j of the passages whose inferred score s_j is in the top |J+_val| scores sorted in descending order, among all s_j such that j ∈ J+_val ∪ J-_val. J-_val is a disjoint set randomly sampled from J-_train. We select the checkpoint with the highest hit_val and calculate s_j for all p_j, where j ∈ J_total, using the selected checkpoint. 
Then, we retrieve J_subset = J_total^[1:n], the set of indices of the n top-scoring passages, to indicate the passages to include in our minimal retrieve & read system. 2.2 Retriever-Reader with Single Encoder In this subsection, we introduce how to obtain a unified retriever-reader with a single encoder (which results in a smaller system footprint) that can perform both retrieval and reading without much drop in accuracy. The unified retriever-reader is trained by successively applying (1) retriever encoder sharing, (2) distilling a reader into the retriever-reader network, and (3) iterative finetuning. 2.2.1 Lightweight Encoder and Embedding Dimension Reduction To make the system small, we utilize a lightweight pretrained encoder. Specifically, MobileBERT (Sun et al., 2020) (4.3x smaller than BERT-base (Devlin et al., 2019)) is employed as the encoder of our retriever-reader model. (Footnote 3: All the details, including how we split the data, are in Appendix A.1.) We use the dense embedding vectors of the passages in the knowledge source as the index. Therefore, reducing the embedding dimension results in a linear decrease in the index size. We use only the first 128 dimensions (out of 512) to encode the questions and passages. 2.2.2 Retriever Encoder Sharing Let E_θ(·) and E_φ(·) be the question encoder and passage encoder of a retriever, where each of the encoders produces a vector for question q and passage p. We share the parameters of the encoders, so that θ = φ, and differentiate the question inputs from passage inputs using an additional input signal: different token type ids of 0 for questions and 1 for passages. The retrieval score for a pair of question q and passage p is calculated as sim(q, p) = E(q, 0)^T E(p, 1). 
We minimize the negative log-likelihood of selecting the passage which holds the answer, namely the positive passage, while training on mini-batches that consist of questions that are each paired with one positive passage and several negative passages. This procedure creates a retriever with a single encoder of parameters θ that can encode both questions and passages. (Footnote 4) 2.2.3 Unified Retriever-Reader Through Knowledge Distillation The previous subsection describes how to make a retriever that holds only one encoder. Here, we further train the parameters of the retriever so that it can also acquire the ability of a reader; we make a unified retriever-reader model that shares all the encoder parameters and eliminates the need for a separate reader. Specifically, using a fully trained reader as the teacher, we adopt knowledge distillation to transfer its reading ability to the unified retriever-reader network. The training starts after initializing the parameters of the retriever-reader to θ, which is obtained from the retriever encoder sharing procedure described in the previous subsection. Let J_read ⊆ J_subset be the set of indices of the passages whose retrieval score sim(q, p_j), calculated for question q using the retriever paired with the teacher reader (footnote 5), is among the top-k1 scores for all j ∈ J_subset. (Footnote 4: In a setting where the index is frozen (addition or editing of index items does not occur), the system does not need a passage encoder. However, we assume a self-contained system with the full ability to update the index, so the passage encoder is considered in the system composition.) J_read serves as the candidate pool of the indices of the training set passages. During training, for question q, a set of passages P_q = {p_i | 1 ≤ i ≤ m} where m ≥ 2 is sampled from {p_j | j ∈ J_read} to construct a part of the training batch, such that only p_1 contains the answer to question q among p_i ∈ P_q. 
Then, we train the unified retriever-reader network using a multitask loss L_read + L_ret, such that the former is used to train the reader part of the network, and the latter is used to keep training the retriever part. The resulting retriever-reader model has the ability to perform both retrieval and reading. L_read is designed to distill the knowledge of a reader teacher into the reader part of the retriever-reader student: the KL divergence between the sharpened and softmaxed answer span scores of the teacher and the student, D_KL(P^span_teacher,q || P^span_student,q). If the teacher reader additionally contains a passage ranker, distillation is also jointly done on the passage ranking scores (m-dimensional vector outputs). The retrieval loss L_ret is jointly optimized in a multitask-learning manner to prevent the retriever part of the unified network from forgetting the retrieval ability while training the reader part. The loss can either be the negative log-likelihood described in the previous subsection or another knowledge distillation objective with a fully trained retriever teacher. If the reader teacher used for L_read has a passage ranker, the passage ranking score of the teacher can serve as the distillation target (Yang and Seo, 2020). 2.2.4 Iterative Finetuning of Unified Retriever-Reader We have observed that finetuning the unified retriever-reader for a few more epochs leads to better retrieval and reading performance. While the simplest method is to jointly train the model on the standard reader loss and retriever loss (footnote 6), we additionally try iterative finetuning of each of the retriever and reader parts as described in Algorithm 1. The motivation here is to apply a loose reconstruction constraint L_recon to keep the retrieval score as it is before and after the model is optimized for reading, with an assumption that this would be helpful (Footnote 5: This retriever is the one used with the teacher reader.) 
(Footnote 6: The marginal negative log-likelihood of all the correct answer spans in the positive passage, and the negative log-likelihood of positive passage p_1 being selected, respectively.) to alleviate the train-inference discrepancy in the input distribution of the reader, created because the unified retriever-reader is not trained in a pipelined manner (training the reader on top of the retrieval result of a fixed retriever). Algorithm 1: A single iterative finetuning step on the unified retriever-reader with parameters θ at time t. Input: θ^(t) (parameters of the model at time t), knowledge distillation temperature τ, and a training batch of question q and passages P_q = {p_i | 1 ≤ i ≤ m} drawn from J_read such that m ≥ 2, Y(q, p_1) = 1, and Y(q, p_i) = 0 for all 2 ≤ i ≤ m (a batch size of 1 is assumed here for a simple presentation). Output: updated parameters θ^(t+1). 1: ℓ^(t) = [E^(t)(p_1, 1), ..., E^(t)(p_m, 1)]^T E^(t)(q, 0) 2: θ'^(t) = GradientUpdate(L_read(q, p_1, ..., p_m); θ^(t)) 3: ℓ'^(t) = [E'^(t)(p_1, 1), ..., E'^(t)(p_m, 1)]^T E'^(t)(q, 0) 4: L_recon = D_KL(softmax(ℓ^(t)/τ) || softmax(ℓ'^(t)/τ)) 5: L_nll = CrossEntropy(softmax(ℓ'^(t)), Y) 6: θ^(t+1) = GradientUpdate(L_recon + L_nll; θ'^(t)) 2.3 Post-Training Compression Techniques In addition to the training methods to decrease the corpus, index, and model size, several post-training engineering tricks are applied to compress the system footprint further: (1) INT8 quantization of index items, (2) saving model weights as FP16, (3) resource compression, and (4) utilizing token IDs as the corpus instead of raw texts. INT8 Quantization of Index Items The dense embeddings that serve as the items in the search index are of type FP32 in the default state. INT8 quantization can be applied to reduce the index size by four times with a small drop in accuracy. 
We make use of the quantization algorithm implemented in FAISS (Johnson et al., 2019), IndexScalarQuantizer (footnote 7). During inference, the embeddings are de-quantized, and the search is performed on the restored FP32 vectors. Saving Model Weights as FP16 Half precision can be used to size down the model weights of originally FP32 tensors with almost no drop in accuracy. In PyTorch, this can be done by calling .half() on each FP32 tensor in the model checkpoint. In TensorFlow, model graphs saved with the data type of FP16 may result in unacceptably slow inference depending on the hardware used. We have found that keeping the tensor types of the graph as FP32 but making the actual assigned values FP16 enables a higher compression ratio when the model weights are compressed as described below. (Footnote 7: https://github.com/facebookresearch/faiss/blob/v1.5.2/IndexScalarQuantizer.cpp) Resource Compression Data compressors with a high compression ratio are effective at reducing the initial system footprint. Our observation is that bzip2 is better for binary files such as model weights or an index of embedding vectors, whereas lzma is better for human-readable text files. System resources can also be compressed if necessary. We use the -9 option for both compressors. Utilizing Token IDs as the Corpus A corpus file must be included in the system to get the actual text of the item retrieved by search (an embedding vector in our case). We have found that using the file of the encoded token ids of the tokenized texts as the corpus, instead of the raw texts, is beneficial not only because it reduces inference latency (the texts are already preprocessed), but also because the compressed output size is often slightly smaller. 
3 Experiments Experimental Setup We apply our storage reduction methods to a recent extractive retrieve & read system, DPR (Karpukhin et al., 2020), which consists of three different BERT-base encoders: the question encoder of the retriever, the passage encoder of the retriever, and the encoder of the reader with a ranker. All experiments are done on the Naver Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018). The training and evaluation details are in Appendices A.1, A.2, and A.3. Experimental Results Figure 1 shows how each of the discussed strategies changes DPR's system size and Exact Match (EM) score on the EfficientQA dev set (see Table 3 and Table 4 in the Appendix for details). Our starting point is a standalone open-domain QA system with DPR whose estimated size is 77.5 GB: 1.4 (system) + 0.8 (retriever) + 0.4 (reader) + 61 (index) + 13 (text) GB. The red plot shows, from left to right, one path to successively apply each strategy to reduce the system footprint to 484.69MB, which is 160 times smaller. Although the methods are described as sequential for easier presentation, the methods with filled markers and dotted lines are orthogonal to each other and thus can be applied in any other order. The methods with unfilled markers and solid lines are each built on top of the previous method. Sizing down the corpus from 21,015,325 to 1,224,000 (5.8%) passages (2.1) decreases the system footprint by a large margin of about 70.5GB with only a 2.72% drop in EM. Using a smaller passage embedding dimension of 128D (2.2.1), changing the encoder to MobileBERT (2.2.1), and sharing the encoders of the retriever (2.2.2) save a further 4.1GB of storage with a small accuracy drop of 1.28%. The process of unifying the retriever and reader into a single model (2.2.3) drops EM by 1.11%, but the accuracy increases by 2.77% (to 34.44%) with iterative finetuning (2.2.4). 
In ablation studies on the three-step training procedure, omitting the knowledge distillation step drops EM by 1.5%, and omitting L_recon drops EM by 0.38%. Applying the post-training compression techniques further reduces the system footprint by a large margin while sacrificing little accuracy. EM changes to 34.39% with INT8 quantization, and the rest of the tricks do not affect the accuracy. Converting the PyTorch checkpoint to a binary for TensorFlow Serving to reduce system library dependency and applying bzip2 compression to some of the system resources creates the final system of 484.69MB with an accuracy of 34.33%. Figure 1 shows that this accuracy is higher than the performance of the parametric T5 (Roberts et al., 2020) baseline with a comparable docker-level system footprint (footnote 8). In Table 1, we show the test set accuracy of our final system and other baselines. In summary, the performance of our system is higher than all of the parametric baselines, and the accuracy drop from DPR is only 2.45% on the EfficientQA dev set and about 4% on the test set, while reducing the system footprint to about 0.6% of the original size. Our final system achieves first place in the human (manual) evaluation and second place in the automatic evaluation on the Systems Under 500MB Track of the EfficientQA competition.", "While the accuracy of our system is 32.06% on the EfficientQA test set in the automatic evaluation, which is 1.38% behind the top-performing system (Lewis et al., 2021), its accuracy is 42.23% in the human evaluation, which is 2.83% higher than the other system.", "Interestingly, when possibly correct answers are also counted as correct, the accuracy rises to 54.95% (7.58% higher than the other system).", "Please refer to Table 2 of Min et al. 
(2021) for more details.", "(Footnote 8: The accuracy of the T5 baselines is calculated using the SSM models finetuned on Natural Questions: https://github.com/google-research/google-research/tree/master/t5_closed_book_qa#released-model-checkpoints.)", "We also perform experiments on open-domain Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017) to test the generalizability of the proposed methods.", "The results and detailed analysis are presented in Appendix A.5.", "There has recently been a line of work that aims to create storage-efficient open-domain QA systems, especially following the EfficientQA competition.", "Here, we introduce several approaches concurrent to ours that interested readers may refer to.", "Izacard et al. (2020) and Fajcik et al. (2021) explore the trade-off between storage budget and accuracy, and their retrieve & read systems take up only about 6GB with state-of-the-art performance.", "Lewis et al. (2021) propose a QA-pair retrieval system for open-domain QA, which enjoys the benefits of high flexibility and low latency.", "Their retriever answers 1100 questions per second with 41.2% accuracy on NQ, which rises to 47.7% when equipped with a reranker.", "The variants optimized for small system footprint are the winning systems of two storage-constrained tracks at EfficientQA.", "Min et al. 
(2021) review the EfficientQA competition with detailed analysis and summarize all of the top-performing systems.", "We discuss several orthogonal approaches to reduce the system footprint of a retrieve-and-read-based open-domain QA system.", "The methods together reduce the size of a reference system (DPR) by 160 times with an accuracy drop of 2.45% and 4% on the EfficientQA dev and test sets, respectively.", "We hope that the presented strategies and results can be helpful for designing future retrieve-and-read systems under a storage-constrained serving environment.", "(Footnote 9: https://ai.google.com/research/NaturalQuestions/efficientqa) Acknowledgements The authors would like to thank the members of NAVER Clova for proofreading this paper.", "This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.", "2019-0-00075, Artificial Intelligence Graduate School Program (KAIST))." ]
[ "abstain", "abstain", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "result", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "result", "other", "abstain", "abstain" ]
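The INT8 index quantization in the record above is done with FAISS's IndexScalarQuantizer; as a rough illustration of the underlying idea (store one byte per embedding dimension, then de-quantize back to FP32 before search), here is a simple per-dimension affine codec in numpy. This is a sketch of uniform scalar quantization under our own naming, not FAISS's exact codec.

```python
import numpy as np

def quantize_int8(embs):
    # Per-dimension affine quantization of FP32 embeddings to uint8 codes.
    # lo/scale are stored alongside the codes so vectors can be restored.
    lo = embs.min(axis=0)
    hi = embs.max(axis=0)
    scale = np.where(hi > lo, (hi - lo) / 255.0, 1.0)
    codes = np.clip(np.round((embs - lo) / scale), 0, 255).astype(np.uint8)
    return codes, lo, scale

def dequantize_int8(codes, lo, scale):
    # Restore approximate FP32 vectors; search then runs on these.
    return codes.astype(np.float32) * scale + lo
```

The codes occupy a quarter of the FP32 storage, and the round-trip error per dimension is bounded by half the quantization step, which is why the accuracy drop reported above is small.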
[ "In natural language processing, a recently popular line of work explores how to best report the experimental results of neural networks.", "One exemplar publication, titled Show Your Work: Improved Reporting of Experimental Results (Dodge et al., 2019), advocates for reporting the expected validation effectiveness of the best-tuned model, with respect to the computational budget.", "In the present work, we critically examine this paper.", "As far as statistical generalizability is concerned, we find unspoken pitfalls and caveats with this approach.", "We analytically show that their estimator is biased and uses error-prone assumptions.", "We find that the estimator favors negative errors and yields poor bootstrapped confidence intervals.", "We derive an unbiased alternative and bolster our claims with empirical evidence from statistical simulation.", "Our codebase is at https://github.com/ castorini/meanmax .", "Questionable answers and irreproducible results represent a formidable beast in natural language processing research.", "Worryingly, countless experimental papers lack empirical rigor, disregarding necessities such as the reporting of statistical significance tests (Dror et al., 2018) and computational environments (Crane, 2018).", "As Forde and Paganini (2019) concisely lament, explorimentation , the act of tinkering with metaparameters and praying for success, while helpful in brainstorming, does not constitute a rigorous scientific effort.", "Against the crashing wave of explorimentation, though, a few brave souls have resisted the urge to feed the beast.", "Reimers and Gurevych (2017) argue for the reporting of neural network score distributions.", "Gorman and Bedrick (2019) demonstrate that deterministic dataset splits yield less robust results than random ones for neural networks.", "Dodge et al. 
(2019) advocate for reporting the expected validation quality as a function of the computation budget used for hyperparameter tuning, which is paramount to robust conclusions.", "But carefully tread we must.", "Papers that advocate for scientific rigor must be held to the very same standards that they espouse, lest they birth a new beast altogether.", "In this work, we critically examine one such paper from Dodge et al. (2019).", "We acknowledge the validity of their technical contribution, but we find several notable caveats, as far as statistical generalizability is concerned.", "Analytically, we show that their estimator is negatively biased and uses assumptions that are subject to large errors.", "Based on our theoretical results, we hypothesize that this estimator strongly prefers underestimates to overestimates and yields poor confidence intervals with the common bootstrap method (Efron, 1982).", "Our main contributions are as follows: first, we prove that their estimator is biased under weak conditions and provide an unbiased solution.", "Second, we show that one of their core approximations often contains large errors, leading to poorly controlled bootstrapped confidence intervals.", "Finally, we empirically confirm the practical hypothesis using the results of neural networks for document classification and sentiment analysis.", "Notation.", "We describe our notation of fundamental concepts in probability theory.", "First, the cumulative distribution function (CDF) of a random variable (RV) X is defined as F(x) := Pr[X ≤ x].", "Given a sample (x_1, …, x_B) drawn from F, the empirical CDF (ECDF) is then F̂_B(x) := (1/B) Σ_{i=1}^{B} I[x_i ≤ x], where I denotes the indicator function.", "Note that we pick B instead of n to be consistent with Dodge et al. 
(2019).", "The error of the ECDF is popularly characterized by the Kolmogorov–Smirnov (KS) distance between the ECDF and CDF: KS(F̂_B, F) := sup_{x ∈ ℝ} |F̂_B(x) − F(x)|.", "defined using the Riemann–Stieltjes integral.", "We write the i-th order statistic of independent and identically distributed (i.i.d.) X_1, …, X_B as X_(i:B).", "Recall that the i-th order statistic X_(i:B) is an RV representing the i-th smallest value if the RVs were sorted.", "Hyperparameter tuning.", "In random search, a probability distribution p(H) is first defined over a k-tuple hyperparameter configuration H := (H_1, …, H_k), which can include both continuous", "and discrete variables, such as the learning rate and random seed of the experimental environment.", "Commonly, researchers choose the uniform distribution over a bounded support for each hyperparameter (Bergstra and Bengio, 2012).", "Combined with the appropriate model family M and dataset D := (D_T, D_V), split into training and validation sets, respectively, a configuration then yields a numeric score V on D_V.", "Finally, after sampling B i.i.d. configurations, we obtain the scores V_1, …, V_B and pick the hyperparameter configuration associated with the best one.", "In Show Your Work: Improved Reporting of Experimental Results, Dodge et al. (2019) realize the ramifications of underreporting the hyperparameter tuning policy and its associated budget.", "One of their key findings is that, given different computation quotas for hyperparameter tuning, researchers may arrive at drastically different conclusions for the same model.", "Given a small tuning budget, a researcher may conclude that a smaller model outperforms a bigger one, while they may reach the opposite conclusion for a larger budget.", "To ameliorate this issue, Dodge et al. (2019) argue for fully reporting the expected maximum of the score as a function of the budget.", "Concretely, the parameters of interest are θ_1, …, θ_B, where θ_n := E[max{V_1, …, V_n}] = E[V_(n:n)] for 1 ≤ n ≤ B.", "In other words, θ_n is precisely the expected value of the n-th order statistic for a sample of size n drawn i.i.d. at tuning time.", "For this quantity, they propose an estimator, derived as follows: first, observe that the CDF of V_(n:n) is Pr[V_(n:n) ≤ v] = Pr[V_1 ≤ v, …, V_n ≤ v] (3.1) = Pr[V ≤ v]^n, (3.2) which we denote as F^n(v).", "Then θ_n = E[V_(n:n)] = ∫ v dF^n(v).", "For approximating the CDF, Dodge et al. (2019) use the ECDF F̂^n_B(v), constructed from some sample S := (v_1, …, v_B), i.e., F̂^n_B(v) = (F̂_B(v))^n = ((1/B) Σ_{i=1}^{B} I[v_i ≤ v])^n.", "(3.4)", "The first identity in Eq.", "(3.4) is clear from Eq.", "(3.2).", "Without loss of generality, assume v_1 ≤ … ≤ v_B.", "To construct an estimator θ̂_n for θ_n, Dodge et al. (2019) then replace the CDF with the ECDF: θ̂_n := ∫ v dF̂^n_B(v), (3.5) which, by definition, evaluates to θ̂_n = Σ_{i=1}^{B} v_i (F̂^n_B(v_i) − F̂^n_B(v_{i−1})), (3.6) where, with some abuse of notation, v_0 < v_1 is a dummy variable and F̂^n_B(v_0) := 0.", "We henceforth refer to θ̂_n as the MeanMax estimator.", "Dodge et al. (2019) recommend plotting the number of trials on the x-axis and θ̂_n on the y-axis.", "We find two unspoken caveats in Dodge et al. (2019): first, the MeanMax estimator is statistically biased, under weak conditions.", "Second, the ECDF, as formulated, is a poor drop-in replacement for the true CDF, in the sense that the finite sample error can be unacceptable if certain, realistic conditions are unmet.", "Estimator bias.", "The bias of an estimator θ̂ is defined as the difference between its expectation and its estimand θ: Bias(θ̂) := E[θ̂] − θ.", "An estimator is said to be unbiased if its bias is zero; otherwise, it is biased.", "We make the following claim: Theorem", "1. Let V_1, …, V_B be an i.i.d. 
sample (of size B) from an unknown distribution F on the real line.", "Then, for all 1 ≤ n ≤ B, Bias(θ̂_n) ≤ 0, with strict inequality iff V_(1) < V_(n) with nonzero probability.", "In particular, if n = 1, then Bias(θ̂_1) = 0, while if n > 1 with F continuous or discrete but non-degenerate, then Bias(θ̂_n) < 0.", "θ_n := E[V_(n:n)] = E[max{V_1, …, V_n}].", "An obvious unbiased estimator, based on the given sample of size B, is the following: U_B^n := (1/C(B, n)) Σ_{1 ≤ i_1 < i_2 < … < i_n ≤ B} max{V_{i_1}, …, V_{i_n}}, where C(·, ·) denotes the binomial coefficient.", "This estimator is obviously unbiased since E[U_B^n] = E[max{V_{i_1}, …, V_{i_n}}] = θ_n, due to the i.i.d. assumption on the sample.", "A second, biased estimator is the following: V_B^n := (1/B^n) Σ_{1 ≤ i_1 ≤ i_2 ≤ … ≤ i_n ≤ B} max{V_{i_1}, …, V_{i_n}}.", "(3.7)", "This estimator is only asymptotically unbiased when n is fixed while B tends to ∞.", "In fact, we will prove below that for all 1 ≤ n ≤ B: V_B^n ≤ U_B^n, (3.8) with strict inequality iff V_(1) < V_(n), where V_(i) = V_(i:B) is defined as the i-th smallest order statistic of the sample.", "We start by simplifying the calculation of the two estimators.", "It is easy to see that the following holds: U_B^n = Σ_{j=1}^{B} [C(j−1, n−1) / C(B, n)] V_(j), where we basically enumerate all possibilities for max{V_{i_1}, …, V_{i_n}} = V_(j).", "By convention, C(m, n) = 0 if m < n, so the above summation effectively goes from n to B, but our convention will make it more convenient for comparison.", "Similarly, V_B^n = Σ_{j=1}^{B} [(j^n − (j−1)^n) / B^n] V_(j).", "We make an important observation that connects our estimators to that of Dodge et al.", "Let F̂_B(x) = (1/B) Σ_{i=1}^{B} I[V_i ≤ x] be the empirical distribution of the sample.", "Then, the plug-in estimator, where we replace F with F̂_B, is θ̂_n = E[max{Ṽ_1, …, Ṽ_n}], where the Ṽ_i are i.i.d. draws from F̂_B, = Σ_{j=1}^{B} [F̂^n_B(V_(j)) − F̂^n_B(V_(j−1))] V_(j) = V_B^n, since F̂^n_B(V_(j)) = (j/B)^n if there are no ties in the sample.", "The formula continues to hold even if there are ties, in which case we simply collapse the ties, using the fact that Σ_{j=i}^{k} [F̂^n_B(V_(j)) − F̂^n_B(V_(j−1))] = F̂^n_B(V_(k)) − F̂^n_B(V_(i−1)) when V_(i−1) < V_(i) = V_(i+1) = … = V_(k) < V_(k+1).", "Now, we are ready to prove Eq.", "(3.8).", "All we need to do is to compare the cumulative sums of the coefficients in the two estimators: Σ_{j=1}^{k} C(j−1, n−1) / C(B, n) = C(k, n) / C(B, n), and Σ_{j=1}^{k} (j^n − (j−1)^n) / B^n = k^n / B^n.", "C(k, n) / C(B, n) < k^n / B^n ⟺ C(k, n) / k^n < C(B, n) / B^n ⟺ Π_{i=0}^{n−1} (1 − i/k) < Π_{i=0}^{n−1} (1 − i/B),", "where the last inequality follows from k < B and n > 1.", "Thus, we have verified the following for all 1 ≤ k < B: Σ_{j=1}^{k} C(j−1, n−1) / C(B, n) < Σ_{j=1}^{k} (j^n − (j−1)^n) / B^n.", "Eq.", "(3.8) now follows since V_(1) ≤ … ≤ V_(B) lies in the isotonic cone while we have proved the difference of the two coefficients lies in the dual cone of the isotonic cone.", "An elementary way to see this is to first compare the coefficients in front of V_(B): clearly, U_B^n's is larger since it has a smaller sum of all coefficients but the one in front of V_(B) (take k = B − 1), whereas the total sum is always one.", "Repeat this comparison for V_(1), …, V_(B−1).", "Lastly, if V_(1) < V_(n), then there exists a subset (with repetition) 1 ≤ i_1 ≤ … ≤ i_n ≤ n such that max{V_(i_1), …, V_(i_n)} < V_(n).", "For instance, setting i_1 = … = i_n = 1 would suffice.", "Since V_B^n puts positive mass on every subset of n elements (with repetitions allowed), the strict inequality follows.", "We note that if F is continuous, or if F is discrete but non-degenerate, then V_(1) < V_(n) with nonzero probability, hence Bias(θ̂_n) = E[V_B^n − U_B^n] < 0.", "For further caveats, see Appendix A. The practical implication is that researchers may falsely conclude, on average, that a method is worse than it is, since the MeanMax estimator is negatively biased.", "In the context of environmental consciousness (Schwartz et al., 2019), more computation than necessary is used to make a conclusion.", "Notably, this result always holds for continuous", "distributions, since the population maximum is never in the sample.", "Practically, this theorem suggests the failure of bootstrapping (Efron, 1982) for statistical hypothesis testing and constructing confidence intervals (CIs) of the expected maximum, since the bootstrap requires a good approximation of the CDF (Canty et al., 2006).", "Thus, relying on the bootstrap method for constructing confidence intervals of the expected maximum, as in Lucic et al. (2018), may lead to poor coverage of the true parameter.", "To support the validity of our conclusions, we opt for cleanroom Monte Carlo simulations, which enable us to determine the true parameter and draw millions of samples.", "To maintain the realism of our study, we apply kernel density estimation to actual results, using the resulting probability density (or discretized mass) function as the ground truth distribution.", "Specifically, we examine the experimental results of the following neural networks: Document classification.", "We first conduct hyperparameter search over neural networks for document classification, namely a multilayer perceptron (MLP) and a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) model representing the state of the art (for LSTMs) from Adhikari et al. 
(2019).", "For our dataset and evaluation metric, we choose Reuters (Apte et al., 1994) and the F1 score, respectively.", "Next, we fit discretized kernel density estimators to the results (see the appendix for experimental details).", "We name the distributions after their models, MLP and LSTM.", "Sentiment analysis.", "Similar to Dodge et al. (2019), on the task of sentiment analysis, we tune the hyperparameters of two LSTMs: one ingesting embeddings from language models (ELMo; Peters et al., 2018), the other shallow word vectors (GloVe; Pennington et al., 2014).", "We choose the binary Stanford Sentiment Treebank (Socher et al., 2013) dataset and apply the same kernel density estimation method.", "We denote the distributions by their embedding types, GloVe and ELMo.", "False conclusion probing.", "To assess the impact of the estimator bias, we measure the probability of researchers falsely concluding that one method underperforms its true value for a given n.", "The unbiased estimator has an expectation of 0.", "5, preferring neither underestimates nor overestimates.", "Concretely, denote the true n-run expected maxima of the method as θ_n and the estimator as θ̂_n.", "We iterate n = 1, …, 50 and report the proportion of samples (of size B = 50) where θ̂_n < θ_n.", "We compute the true parameter using 1,000,000 iterations of Monte Carlo simulation and estimate the proportion with 5,000 samples for each n.", "CI coverage.", "To evaluate the validity of bootstrapping the expected maximum, we measure the coverage probability of CIs constructed using the percentile bootstrap method (Efron, 1982).", "Specifically, we set B = 50 and iterate n = 1, …, 50.", "For each n, across M = 1000 samples, we compare the empirical coverage probability (ECP) to the nominal coverage rate of 95%, with CIs constructed using 5,000 bootstrapped resamples.", "The ECP p̂_n is computed as p̂_n := (1/M) Σ_{i=1}^{M} I(θ_n ∈ CI_i), (4.1) where CI_i is the CI of the i-th sample.", "Following Dodge et al. (2019), we present the budget–quality curves for each model pair in Figure", "1. For each number of trials n, we vertically average each curve across the 5,000 samples.", "We construct CIs but do not display them, since the estimate is precise (standard error < 0.001).", "For document classification, we observe that the LSTM is more difficult to tune but achieves higher quality after some effort.", "For sentiment analysis, using ELMo consistently attains better accuracy with the same number of trials (we do not consider the wall-clock time).", "In Figure 2, we show a failure case of biased estimation in the document classification task.", "At B = 25, from n = 20 to 25, the averaged estimate yields the wrong conclusion that the MLP outperforms the LSTM (see the true LSTM line, which is above the true MLP line, compared to its estimate, which is below).", "False conclusion probing.", "Figure 3 shows the results of our false conclusion probing experiment.", "We find that the estimator quickly prefers negative errors as n increases.", "The curves are mostly similar for both tasks, except the MLP fares worse.", "This requires further analysis, though we conjecture that the reason is lower estimator variance, which would result in more consistent errors.", "CI coverage.", "We present the results of the CI coverage experiment in Figure 4.", "We find that the bootstrapped confidence intervals quickly fail to contain the true parameter at the nominal coverage rate of 0.", "95, decreasing to an ECP of 0.", "7 by n = 20.", "Since the underlying ECDF is the same, this result extends to Lucic et al. 
(2018), who construct CIs for the expected maximum.", "In this work, we provide a dual-pronged theoretical and empirical analysis of Dodge et al. (2019).", "We find unspoken caveats in their work: namely, that the estimator is statistically biased under weak conditions and uses an ECDF assumption that is subject to large errors.", "We empirically study its practical effects on tasks in document classification and sentiment analysis.", "We demonstrate that it prefers negative errors and that bootstrapping leads to poorly controlled confidence intervals.", "This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada." ]
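The two estimators compared in the record above admit short closed forms. Below is a minimal sketch (the helper names `meanmax`, `unbiased`, and `ecdf_expected_max` are ours, and the tiny score sample is invented for illustration) checking that the MeanMax estimator equals the expected maximum under the ECDF and never exceeds the unbiased alternative:

```python
import itertools
import math

def meanmax(sample, n):
    # Plug-in (MeanMax) estimator, Eq. 3.6: sum_j ((j/B)^n - ((j-1)/B)^n) * V_(j)
    v = sorted(sample)
    B = len(v)
    return sum(((j / B) ** n - ((j - 1) / B) ** n) * v[j - 1] for j in range(1, B + 1))

def unbiased(sample, n):
    # Unbiased alternative: sum_j C(j-1, n-1) / C(B, n) * V_(j)
    # (math.comb returns 0 when j - 1 < n - 1, matching the convention in the text)
    v = sorted(sample)
    B = len(v)
    return sum(math.comb(j - 1, n - 1) / math.comb(B, n) * v[j - 1] for j in range(1, B + 1))

def ecdf_expected_max(sample, n):
    # Brute force: E[max of n i.i.d. draws from the ECDF], averaged over all B^n tuples
    B = len(sample)
    return sum(max(t) for t in itertools.product(sample, repeat=n)) / B ** n

scores = [0.61, 0.58, 0.64, 0.55, 0.70]  # hypothetical validation scores, no ties
for n in (1, 2, 3):
    assert abs(meanmax(scores, n) - ecdf_expected_max(scores, n)) < 1e-12
    assert meanmax(scores, n) <= unbiased(scores, n)  # V_B^n <= U_B^n (Eq. 3.8)
```

For n = 1 both estimators reduce to the sample mean; for n > 1 the inequality is strict whenever V_(1) < V_(n), mirroring the theorem.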
[ "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "objective", "result", "result", "other", "abstain", "other", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "result", "method", "objective", "other" ]
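To make the negative-bias claim of the preceding record concrete, here is a hedged Monte Carlo sketch: uniform U(0,1) scores stand in for the paper's kernel-density ground truths (an assumption of ours), so the true expected maximum is known in closed form, θ_n = n/(n+1):

```python
import random

def meanmax(sample, n):
    # Plug-in (MeanMax) estimate: sum_j ((j/B)^n - ((j-1)/B)^n) * V_(j)
    v = sorted(sample)
    B = len(v)
    return sum(((j / B) ** n - ((j - 1) / B) ** n) * v[j - 1] for j in range(1, B + 1))

random.seed(0)
B, n, trials = 50, 10, 2000
theta_n = n / (n + 1)  # true E[max of n i.i.d. U(0,1) scores]

estimates = [meanmax([random.random() for _ in range(B)], n) for _ in range(trials)]
avg = sum(estimates) / trials
under = sum(e < theta_n for e in estimates) / trials  # proportion of underestimates

assert avg < theta_n  # negative bias: estimates fall short of theta_n on average
assert under > 0.5    # underestimates outnumber overestimates
```

With these settings the averaged estimate sits below θ_10 ≈ 0.909, and well over half the samples underestimate it, matching the false-conclusion-probing observation.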
[ "In sequence modeling, certain tokens are usually less ambiguous than others, and representations of these tokens require fewer refinements for disambiguation.", "However, given the nature of attention-based models like Transformer and UT (universal transformer), all tokens are equally processed towards depth.", "Inspired by the equilibrium phenomenon, we present a lazy transition, a mechanism to adjust the significance of iterative refinements for each token representation.", "Our lazy transition is deployed on top of UT to build LT (lazy transformer), where all tokens are processed unequally towards depth.", "Eventually, LT is encouraged to oscillate around a relaxed equilibrium.", "Our experiments show that LT outperforms baseline models on several tasks of machine translation, pre-training, Learning to Execute, and LAMBADA.", "Attention-based models like Transformer (Vaswani et al., 2017) underlie two core concepts: layer and refinement.", "With a layer-stacked structure, the model relies entirely on attention mechanisms (Parikh et al., 2016; Lin et al., 2017) to refine token representations (e.g., vectors) layer by layer from context-informed representations by using residual connections (He et al., 2016; Jastrzębski et al., 2018).", "However, layer is not necessary.", "UT (universal transformer) (Dehghani et al., 2019) and its variants (Bai et al., 2019; Lan et al., 2020) iteratively run a single-layer but wide Transformer for sequence modeling, in which token representations are refined at each step by attending to the sequence representation of the previous step, showing higher performance than the layer-stacked Transformer on NMT (Dehghani et al., 2019), pre-training (Lan et al., 2020), language modeling (Bai et al., 2019), and other tasks of varying complexity (Dehghani et al., 2019) with the same number of parameters.", "The computational bound of recurrence is not the number of tokens in the sequence (as in an RNN) or the number of stacked layers in the model but is the 
maximum number of refinements made to token representations, e.g., a pre-defined maximum number of iteration steps.", "We follow this line but further ponder refinement.", "Concretely, the model refines token representations iteratively, such that tokens are equally processed towards depth for disambiguation.", "Essentially, different token representations are refined at a step equally and concurrently.", "Thus, regardless of the ambiguous state, a step shows the same importance for all the tokens.", "However, certain tokens are less ambiguous than others in sequence modeling, especially in NMT.", "This raises two questions: 1) do deep representations have to be learned for less-ambiguous tokens; 2) must tokens consider the contributions or importance of a step equally?", "Meanwhile, our work derives its motivation from the combination of UT and ACT (adaptive computation time) (Graves, 2016), which can dynamically estimate the importance of iteration steps for understanding different sentences in the bAbI task (Weston et al., 2016).", "However, we consider the importance of every iteration step for all the tokens in the same sequence, sharing a similar motivation with the depth-adaptive Transformer (Elbayad et al., 2020).", "On the other hand, these methods learn a halting probability for each step without considering the correspondence and interdependence throughout the iteration and the model's convergence.", "We attempt a mechanism like the reset gate in a GRU (Chung et al., 2014), whereby the model is allowed to dynamically forget some information based on the correspondence and interdependence, where in our case, this information is the further refinement at the current step for the corresponding token representation.", "In this way, we weaken the importance of an iteration step for less-ambiguous token representations to prevent over-refining but retain the significance for more-ambiguous token representations to avoid under-refining, refining token representations at 
the same step unequally.", "To this end, we present a lazy transition that operates between two consecutive steps when running UT iteratively.", "It forms iterative refinements (Jastrzębski et al., 2018) for token representations with a residual structure: h_i^{t_i} = h_i^{t_i−1} + (1 − α_i^{t_i}) UT(h_i^{t_i−1}), (1) where h_i^{t_i} is the representation for the i-th token in the sequence at step t_i (token-wise), UT yields the deep refinement UT(h_i^{t_i−1}) for h_i^{t_i} by consuming h_i^{t_i−1}, and (1 − α_i^{t_i}) is a refining rate for adjusting the contribution of UT(h_i^{t_i−1}).", "In this way, we model an equilibrium (Bai et al., 2019; Lan et al., 2020) between two representations of the same token from consecutive steps.", "Significantly, when the rate is off, (1 − α_i^{t_i}) = 0, the model is allowed to dismiss the further refinement UT(h_i^{t_i−1}) for h_i^{t_i} at the step t_i.", "Then, the model reaches a local equilibrium for the corresponding token, in that the input h_i^{t_i−1} is similar to the output h_i^{t_i}.", "In other words, the model ponders the significance of the further refinement via our lazy transition: a token representation is lazy to absorb the information if the further refinement is trivial.", "In sequence modeling, all the equilibrium token representations are learned and concatenated into the required sequence representation, such that the model oscillates around a relaxed equilibrium.", "Our contributions are: 1) we present a lazy transition that is placed on top of UT to build LT (lazy transformer).", "Our lazy transition dynamically forms a refinement path for a token at each step by pondering the step importance and the local equilibrium.", "Eventually, LT is encouraged to oscillate around a relaxed equilibrium in sequence modeling; 2) we provide an empirical study to quantitatively analyze how the relaxed equilibrium impacts NMT in performance, deep models, and zero-shot inferring; 3) we show our model can consistently improve the performance of pre-training and two 
tasks of varying complexity, where the standard Transformer fails; 4) our empirical study shows that stable and smooth refinements at the early iteration steps are significant, which results in a strong equilibrium and a stable oscillation.", "Iterative Refinement in Transformer, Tied Transformer, and Universal Transformer One core backbone of Transformer (Vaswani et al.,", "2017) is iterative refinement (Jastrzębski et al., 2018), which is accomplished by forming a residual structure around each sub-layer: h_i^l = h_i^{l−1} + f^l(h_i^{l−1}), where l denotes depth and f(·) represents a sub-layer (e.g., an attention layer (Parikh et al., 2016; Lin et al., 2017)).", "Following tied Transformers (Gulcehre et al., 2018; Xia et al., 2019; Dabre and Fujita, 2019), which share parameters across some layers, Dehghani et al. (2019) introduce token-wise recurrence (Graves et al., 2014; Joulin and Mikolov, 2015; Kaiser and Sutskever, 2016) to a single-layer but wide Transformer and then present UT, a Turing-complete model.", "Theoretically, UT yields a recurrent inductive bias similar to an RNN's because each token representation is refined by attending to the sequence representation of the previous step with tied parameters.", "We present a comparison in Appendix A. 
Significantly, the iterative refinement in UT is reformed to: h_i^t = h_i^{t−1} + f(h_i^{t−1}), where f is the same for every iteration step t, i.e., a single-layer structure whose parameters are tied throughout the iteration.", "In this work, we apply our lazy transition on top of a UT block.", "To reform iterative refinements, we are inspired by (Zhang et al., 2021; Escolano et al., 2021; Bapna et al., 2018; Wang et al., 2019) in that f (or f^l) could be a more complicated network rather than a simple sub-layer; we define f as the entire UT block and reform the iterative refinement as Eq. 1.", "Adaptive Computation While a token-wise recurrent model like UT can theoretically run infinitely, the model is commonly trained by setting a maximum number of iteration steps.", "However, the number of inference steps is flexible.", "Graves (2016) first introduces ACT (adaptive computation time) to compute a scalar halting probability predicted by the model at each step for each token in an RNN.", "Then, Dehghani et al. (2019) adapt ACT for UT to halt the iteration before reaching the maximum number of training steps (see footnote 1) in inferring.", "Furthermore, Elbayad et al. (2020) present depth-adaptive decoding for tokens to model the distribution of exiting with the probability of computing each layer/step and then emitting token predictions.", "Our lazy transition somewhat follows this line, in that (1 − α_i^{t_i}) in Eq. 1 dynamically approximates the computation time for tokens.", "It reflects two characteristics: 1) (1 − α_i^{t_i}) = 0 stops the iteration; (footnote 1: in some papers, training steps refer to how many batches we use for training);", "2) (1 − α_i^{t_i}) is computed at each step.", "We present LT (Lazy Transformer), a token-wise recurrent model.", "LT only consists of a UT block and our lazy transition.", "In training, we iteratively run LT but constrain the computational bound to control the training process by setting a maximum number of iteration steps.", "However, iteration steps are dynamic 
in inferring.", "In sequence modeling, DEQ (Bai et al., 2019) and ALBERT (Lan et al., 2020) report that a UT-based model tends to oscillate around an equilibrium.", "Concretely, each additional step has a smaller and smaller contribution to the current sequence representation until the model oscillates around a fixed point.", "Formally, we have: lim_{t→+∞} h_{1:N}^t = lim_{t→+∞} UT(h_{1:N}^t) ⟹ h_{1:N}^E = UT(h_{1:N}^E), where t is the step, h_{1:N}^t is the sequence representation of an N-length sequence X_{1:N} at t, and h_{1:N}^E is the equilibrium sequence representation.", "Empirically, the equilibrium phenomenon can be observed from the difference norms of sequence representations: ||UT(h_{1:N}^t) − h_{1:N}^t|| ≥ ||UT(h_{1:N}^{t+1}) − UT(h_{1:N}^t)|| ≥ …, with ||UT(h_{1:N}^{t:}) − h_{1:N}^{t:}|| < ε, where ε depends on the model and t: denotes steps after t.", "Intuitively, UT, DEQ, and ALBERT are encouraged to reach a global and sequence-level equilibrium (see footnote 2) at a specific iteration step t in a dynamic programming style for outputting the equilibrium sequence representation.", "By contrast, we attempt a greedy strategy to independently find a local equilibrium for a token, modeling the locally optimal choice.", "Then, for an input sequence, we model all the local equilibria to find a relaxed equilibrium instead of naively finding a global and sequence-level equilibrium.", "Formally, in sequence modeling, we model a relaxed equilibrium to obtain an equilibrium sequence representation h_{1:N}^R: ∀i ≤ N: lim_{t_i→+∞} h_i^{t_i} = lim_{t_i→+∞} UT(h_i^{t_i}) ⟹ ∀i ≤ N: h_i^{R_i} = UT(h_i^{R_i}) ⟹ h_{1:N}^R = UT(h_{1:N}^R) = [h_1^{R_1}, h_2^{R_2}, …, h_N^{R_N}] (2) where h_i^{t_i} denotes the i-th token representation at t_i (token-wise), h_i^{R_i} is the equilibrium token representation, R stands for the step of reaching the relaxed (footnote 2: our preliminary experiment confirms this intuition, which will be further discussed in Experiment)", "equilibrium, and [·] denotes concatenation.", "Similarly, we 
can evaluate the local equilibrium from the difference norm of token representations: ||UT(h_i^{t_i:}) − h_i^{t_i:}|| < ε (3)", "Note that, in our preliminary experiment, we find that h_{1:N}^E ≠ [h_1^{R_1}, h_2^{R_2}, …, h_N^{R_N}], i.e., the equilibrium sequence representation is not equivalent to the combination of all the equilibrium token representations, because tokens have different ambiguous states.", "For the relaxed equilibrium, we do not have to update tokens equally at an iteration step, so that an iteration step can show varying importance for different tokens, subject to the ambiguous state of the token representations.", "Thus, we consider adjusting the impact of the current step for different tokens throughout the iteration.", "Although we can easily and immediately observe the equilibrium or the local equilibrium phenomenon from difference norms, we have no prior knowledge in practice, so we cannot analyze the equilibrium quantitatively throughout the iteration.", "Hence, we consider the linear CKA (centered kernel alignment) exam: CKA(X, Y) = ||X^T Y||_F^2 / (||X^T X||_F ||Y^T Y||_F) ∈ (0, 1], which is introduced by Kornblith et al. (2019) to identify correspondences between representations in models trained from different initializations and is invariant to orthogonal transforms and isotropic scaling but is not invariant to arbitrary linear transforms.", "For this exam, we are inspired by Wu et al. (2020), who measure the degree of a layer's multilinguality with a CKA exam between two averages of outputted sequence representations.", "However, we run the exam for two averages of sequence representations h_{1:N} emerging from two consecutive steps (see footnote 3).", "Precisely, we majorly rewrite the equilibrium to: lim_{t→+∞} CKA(h̄^t, (1/N) Σ_{i=1}^{N} UT(h_i^t)) = 1 ⟹ CKA(h̄^E, (1/N) Σ_{i=1}^{N} UT(h_i^E)) = 1 (4)", "(footnote 3) Note that, in this way, the feature space is the channel of the representations, not the representations themselves, as in the original CKA 
exam.", "where h̄^t = (1/N) Σ_{i=1}^{N} h_i^t.", "For our relaxed equilibrium in sequence modeling, we have: ∀i ≤ N: lim_{t_i→+∞} CKA(h_i^{t_i}, UT(h_i^{t_i})) = 1 ⟹ ∀i ≤ N: CKA(h_i^{R_i}, UT(h_i^{R_i})) = 1 ⟹ Σ_{i=1}^{N} CKA(h_i^{R_i}, UT(h_i^{R_i})) = N (5)", "Essentially, we can dynamically evaluate the local equilibrium by giving the exam to token representations throughout the iteration: CKA(h_i^{t_i:}, UT(h_i^{t_i:})) → 1 (6)", "It reflects the correspondence and interdependence between UT's input and output.", "Note that, although the global equilibrium does not imply the local equilibrium, this exam also gives an intuition of how a token representation changes throughout the iteration.", "To leverage the relaxed equilibrium (Eq. 2 and Eq. 5), our lazy transition uses a residual structure to form iterative refinements (Jastrzębski et al., 2018) for h_i at each step in order to obtain h_i^{R_i}.", "Recall that UT(h_i^{t_i−1}) is the deep refinement we can obtain at step t_i, and CKA(·) returns a value in (0, 1].", "We form the iterative refinement (Eq. 1) as: h_i^{t_i} = h_i^{t_i−1} + (1 − α_i^{t_i}) UT(h_i^{t_i−1}), α_i^{t_i} = CKA(h_i^{t_i−1}, UT(h_i^{t_i−1})) (7)", "Concretely, the model is based on the correspondence and interdependence between UT's input h_i^{t_i−1} and output UT(h_i^{t_i−1}).", "When α_i^{t_i} is close to 1 at step t_i, h_i^{t_i:} is only oscillating around h_i^{t_i}, and UT cannot provide useful information for better representations anymore.", "Then, the model is encouraged to dismiss UT and then outputs h_i's equilibrium token representation, h_i^{R_i} (h_i^{t_i−1} ≈ h_i^{t_i} ≈ h_i^{t_i:}), for the relaxed equilibrium.", "It is similar to the reset gate in a GRU, which dynamically forgets the previous state, but our lazy transition attempts to dismiss the newly obtained information UT(h_i^{t_i−1}) when it is unimportant.", "Therefore, our lazy transition provides implicit step information, similar to a GRU, which can identify the current position and handle input sentences of varying 
length.", "Since there is no variable for identifying steps, the model can run a varying number of steps at inference time.", "On the other hand, this could be viewed as a linear interpolation: $h_i^{t_i} = \alpha_i^{t_i} h_i^{t_i-1} + (1 - \alpha_i^{t_i})(h_i^{t_i-1} + UT(h_i^{t_i-1}))$, where $\alpha_i^{t_i}$ decides how much the model updates $h_i^{t_i-1}$ at step $t_i$ from the pre-refined representation 4 : $h_i^{t_i-1} + UT(h_i^{t_i-1})$.", "Meanwhile, since $h_i^{t_i}$ is informed by $UT(h_i^{t_i-1})$ without any step-specific parameter, the benefits are twofold: 1) our lazy transition does not hurt the global receptive field of UT, only adjusting the contribution of a step; 2) the recurrent inductive bias of UT is inherited because all the parameters are tied throughout the iteration.", "Our empirical study confirms these two benefits (see Experiment).", "In sequence modeling, since our model does not provide any position information, in order for the model to make use of the order of the sequence, we inject position information at each step.", "Therefore, our LT (Lazy Transformer) is formed as: $h_{1:N}^{t_{1:N}} = h_{1:N}^{t_{1:N}-1} + (1 - \alpha_{1:N}^{t_{1:N}}) \, UT(h_{1:N}^{t_{1:N}-1} + PE_{1:N})$, where $h_i^{t_i}$ computed by Eq.7 is the $i$-th token representation of an $N$-length sequence representation $h_{1:N}$ and $PE_{1:N}$ is the sinusoidal position encoding for identifying positions as defined in (Vaswani et al., 2017).", "Recall that $1 - \alpha_i^{t_i}$ collapses to 0 when the model is reaching the local equilibrium of $h_i$.", "The model can simply copy the equilibrium token representation to the next step for speed.", "Instantiation LT can be instantiated, used, and trained in the same way as a vanilla Transformer block or a UT block.", "Concretely, we can instantiate and train: 1) a LT encoder with the objective of MLM (masked language modeling) (Devlin et al., 2019; Lan et al., 2020); 2) a LT decoder with the objective of GPT (generative pre-training) (Radford et al., 2018; Alec Radford, 2020); 3) a LT encoder-decoder (consisting of a LT encoder and a
LT decoder) in a seq2seq (Graves, 2013) manner.", "Lazy Transition vs. GRU They have different motivations.", "GRU aims to learn a segment-level representation by accumulating the information from all the tokens, whereas LT is encouraged to ponder the importance of a step for different tokens and then to oscillate around the relaxed equilibrium.", "Meanwhile, compared to GRU, which forgets the previously computed state via the reset gate, our lazy transition is allowed to dismiss the newly computed information that is trivial.", "Lazy Transition vs. Adaptive Computation Adaptive computation methods like ACT (Graves, 2016) and Adaptive-depth Transformer (Elbayad et al., 2020) learn a generator to output a probability of exiting based on the step output.", "We argue that these methods are agnostic to the model's convergence because they do not consider the information flow from the input to the corresponding output.", "By contrast, our method leverages the model's convergence and applies CKA to ponder the correspondence and interdependence between inputs and outputs, where in our case, the model's convergence is the relaxed equilibrium.", "Lazy Transformer vs.
UT, DEQ, and ALBERT LT is parallel to UT (Dehghani et al., 2019), DEQ (Bai et al., 2019), and ALBERT (Lan et al., 2020).", "We share the idea of recurrence over depth, but we have three main differences: 1) previous methods require an explicit step encoding for each iteration, whereas we let our lazy transition handle iterations implicitly; 2) we ponder the significance of a step for different tokens, whereas previous methods refine tokens equally at a step; 3) we consider the local and token-level equilibrium in addition to the sequence-level equilibrium.", "We divide our empirical studies and experiments into two genres: 1) we experiment with NMT (our main task) to confirm the effectiveness of our methods for large-scale sequence modeling and to further quantitatively justify our hypotheses and assumptions; 2) we attempt pre-training tasks and two somewhat rare but challenging tasks, the Learning to Execute task (Zaremba and Sutskever, 2014) and the LAMBADA task (Paperno et al., 2016), to observe the performance on tasks of varying complexity.", "All the links to datasets, libraries, scripts, and tools marked with ♦ are listed in Appendix H.
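Before the training details, the core update can be made concrete. The following is a minimal sketch of the lazy-transition refinement of Eq. 7, not the authors' implementation: `linear_cka` implements the CKA formula for column-matrix inputs, and the stand-in step functions passed as `ut` are our own illustrative assumptions.

```python
import numpy as np

def linear_cka(x, y):
    # Linear CKA exam: ||X^T Y||_F^2 / (||X^T X||_F ||Y^T Y||_F), in (0, 1].
    num = np.linalg.norm(x.T @ y, ord="fro") ** 2
    den = np.linalg.norm(x.T @ x, ord="fro") * np.linalg.norm(y.T @ y, ord="fro")
    return num / den

def lazy_step(h, ut):
    # One lazy-transition refinement (Eq. 7):
    # alpha = CKA(h, UT(h)); h_next = h + (1 - alpha) * UT(h)
    out = ut(h)
    alpha = linear_cka(h, out)
    return h + (1.0 - alpha) * out, alpha

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 1))  # one token representation as a column vector

# A step that still adds new directions: alpha < 1, the token keeps refining.
w = 0.1 * rng.normal(size=(8, 8))
h_next, alpha_live = lazy_step(h, lambda x: w @ x)

# A step whose output is fully aligned with its input: alpha = 1,
# so the update is dismissed and the representation stays put.
h_stuck, alpha_one = lazy_step(h, lambda x: 2.0 * x)
```

The second case mirrors the text's "dismissal" behavior: when input and output correspond perfectly, the weight on the new refinement collapses to zero.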
We open source code on GitHub.", "Training Our code is implemented on TensorFlow 2.6 (Abadi et al., 2016) with 4 NVIDIA TITAN Xp 12G GPUs.", "We implement our model based on the codebase of the official UT from tensor2tensor ♦ 5 and the official CKA ♦ .", "We use the default setting: universal_transformer_base from tensor2tensor.", "Concretely, we use the Adam optimizer (Kingma and Ba, 2015) with parameters β1 = 0.9, β2 = 0.997, and ε = 10^-9, and a dynamic learning rate with warm_up = 8000 (Vaswani et al., 2017) (learning_rate ∈ (0, 7e-4]) is employed.", "We set dropout regularization with a drop rate of 0.1 and label smoothing with gamma = 0.1 (Mezzini, 2018).", "For data-feeding efficiency, each batch of similar-length sequences is padded to the same length, and different batches may contain different numbers of elements.", "Reimplementation and Reconfiguration We reimplement some models on our machine with the same batch size.", "We compare the reimplemented results to the reported results on the same test set to ensure the difference is less than 5% (or 1 BLEU point).", "Then, we can confirm the reimplementation and reconfiguration.", "Dataset and Preprocessing We train a LT encoder-decoder model for machine translation.", "To be comparable, we share two NMT tasks: 1) English→German of WMT 2014 ♦ (Bojar et al., 2014); 2) English→Romanian of WMT 2016 ♦ (Bojar et al., 2016).", "Following the standard evaluation, the model is evaluated on newstest2014 for English→German and newstest2016 for English→Romanian.", "We use the Moses tokenizer ♦ developed by (Koehn et al., 2007) for tokenization and use fastBPE ♦ to learn a shared 32K BPE vocabulary (Sennrich et al., 2016) for each language pair.", "Data filtering is done with the FAIR tool ♦ (Ng et al., 2019).", "We use sacreBLEU ♦ (Post, 2018) with standard settings 6 to evaluate the quality of translation.", "Model Configuration The model configurations are identical to
base-UT (Dehghani et al., 2019).", "Specifically, we set the model dimension, word embedding, heads, and FFN filter to 1024, 1024, 16, and 4096, which results in the same number of parameters (62M) as base-Transformer (Vaswani et al., 2017). 5 Note that the newest UT implementation uses pre-normalization y = x + f(ln(x)) instead of the reported post-normalization y = ln(x + f(x)). UT with pre-normalization shows a slight degradation in performance (~1%) but improves stability in training, initialization, and scalability, where x denotes the input, f(.) stands for a sub-layer, and ln is a layer-normalization unit. 6 {nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.0.0}
Table 1: Performance of translation (BLEU).
# | Model | newstest2014 En→De | newstest2016 En→Ro
base model: {6, 6} = {6, 6}
1 | base-Transformer (Vaswani et al., 2017) | 27.50 | 32.31
2 | ⋆ base-UT w/o SE | 27.85 | -
3 | ⋆ base-UT | 28.73 | 33.97
4 | base-UT (Dehghani et al., 2019) | 28.90 | -
5 | ⋆ base-UT + ACT | 29.11 | -
6 | ⋆ base-UT + GRU | 26.59 | -
7 | OURS: base-LT | 29.81 | 35.02
deep model: {20/40, 6} = {20/40, 6}
9 | ⋆ 20-Transformer (Bapna et al., 2018) | 28.72 | 33.59
10 | 20-Transformer (Wang et al., 2019) | 28.90 | -
11 | ⋆ 20-UT | 29.69 | 34.32
12 | OURS: 20-LT | 30.54 | 35.62
13 | OURS: 40-LT | 31.05 | 36.04", "Beam search is configured with beam size 4 and length penalty 0.6.", "We train the model for 100k iterations.", "We use {T_enc_step, T_dec_step} = {I_enc_step, I_dec_step} to denote a model that runs a maximum of T_enc_step and T_dec_step steps in the encoder and decoder respectively for training, and a maximum of I_enc_step and I_dec_step steps in the encoder and decoder respectively for inference.", "For instance, {6, 6} = {6, 6} means we set the maximum step to 6 in both the encoder and decoder for training and inference.", "In this experiment, we set {6, 6} = {6, 6} (base-LT), which is identical to the baseline model: base-UT.", "Also, it is equivalent to
base-Transformer that has 6 layers in both the encoder and decoder.", "For comparison, we place GRU 7 on top of the UT block for evaluation (base-UT + GRU), similar to the way we use our lazy transition, and we also train another UT with ACT (base-UT + ACT).", "All of these models require step encodings for the identification of steps.", "To evaluate how our lazy transition handles iterations without explicit encodings, we instantiate another UT without step encodings (base-UT w/o SE), similar to a model that naively repeats one layer without step identification (Dabre and Fujita, 2019).", "Table 1 shows the results on NMT tasks.", "base-LT outperforms base-UT (rows 3 & 7) by about 4%.", "By observing rows 2 & 3, the performance of UT significantly degrades without step encodings, which indicates that a mechanism for step identification is beneficial for UT.", "Meanwhile, we find that applying ACT to UT (row 5) can improve the performance, while applying GRU (row 6) has no effect.", "7 We use orthogonal kernels for GRU to solve an optimization problem.", "We observe the equilibrium phenomena in models by giving the CKA exam (Eq.6) in the encoder and decoder.", "Visualizations are presented in Appendix E.
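The local-equilibrium behavior that these visualizations probe can also be illustrated with a toy iteration. This is our own construction, not the paper's experiment: a simple contraction map stands in for a trained UT block, each token converges at its own rate, and the difference-norm criterion of Eq. 3 records the step at which each token reaches its local equilibrium.

```python
import numpy as np

def ut(h, fixed_points, rates):
    # Stand-in for a trained UT block (an assumption for illustration):
    # each token drifts toward its own fixed point at its own rate,
    # so tokens reach their local equilibria at different steps.
    return h + rates[:, None] * (fixed_points - h)

def run_to_equilibrium(h, fixed_points, rates, eps=1e-3, max_steps=50):
    # Record the step R_i at which ||UT(h_i) - h_i|| < eps first holds (Eq. 3).
    reached_at = np.full(h.shape[0], -1)
    for t in range(1, max_steps + 1):
        h_next = ut(h, fixed_points, rates)
        diff = np.linalg.norm(h_next - h, axis=1)
        reached_at[(diff < eps) & (reached_at < 0)] = t
        h = h_next
        if (reached_at > 0).all():
            break
    return h, reached_at

rng = np.random.default_rng(0)
h0 = rng.normal(size=(4, 8))            # 4 tokens, dimension 8
fp = rng.normal(size=(4, 8))            # per-token fixed points
rates = np.array([0.9, 0.5, 0.3, 0.2])  # "less ambiguous" tokens move faster
h, reached = run_to_equilibrium(h0, fp, rates)
```

In this toy setting the fast token settles many steps before the slow one, which is the asynchronous, token-level convergence the relaxed equilibrium allows but a strictly global criterion would miss.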
In our case study, all the tokens in base-UT (Appendix E.1 (c,d)) run to an equilibrium synchronously because every step shows similar importance to all the tokens, i.e., all the tokens have similar CKA scores at every step throughout the iteration.", "It confirms that base-UT tends to find a global equilibrium (Eq.4) for all the tokens, as discussed before.", "Furthermore, this process is unstable in that CKA scores change dramatically at the early steps 8 , resulting in unstable refinements (under-refining or over-refining) for some tokens and hurting the tokens' local equilibria.", "Meanwhile, we note that base-UT + ACT (Appendix E.3), base-UT w/o SE (Appendix E.2), and base-UT + GRU (Appendix E.4) show a behavior similar to base-UT throughout the iteration.", "base-UT + GRU is even more irregular than base-UT at the early steps.", "Also, we suspect that ACT and GRU have no power to adjust the importance of a step for each token throughout the iteration in NMT.", "Compared to that, base-LT (Appendix E.1 (a,b)) enables each token to smoothly find its local equilibrium independently and then turns to oscillate around the relaxed equilibrium (Eq.2) together.", "Specifically, we observe that the CKA scores of a step ($\alpha_i^{t_i}$ in Eq.7) differ from one token to another at every step in the first 4 iterations, which answers our question in the Introduction: a step can show varying importance for different tokens.", "Then, base-LT tends to refine token representations based on sequence-level characteristics in the last 2 steps to oscillate around the relaxed equilibrium, resulting in similar CKA scores.", "Given the nature of the iteration-based model, we can run an arbitrary number of steps in the encoder and decoder.", "Therefore, we test the performance beyond 6 steps.", "In this experiment, we share a challenge with (Bapna et al., 2018; Wang et al., 2019) to train deep models.", "According to their works, NMT can get significant benefits from many layers in the encoder but not in
the decoder.", "Similarly, we assume we can obtain benefits from running many steps in the encoder.", "For comparison, we configure models with {20, 6} = {20, 6} (20-LT), which is similar to the 20-layer models in (Bapna et al., 2018; Wang et al., 2019).", "8 E.g., from step 1 to step 2, the average absolute difference is 0.221, which is only 0.105 in base-LT.", "Beyond 20 steps in the encoder, we further configure a model with {40, 6} = {40, 6} (40-LT).", "We show the results of deep models in Table 1.", "Running a large number of steps in the encoder consistently improves the performance of LT, and deep LT outperforms the baseline models.", "Similar to base-LT, we also apply the CKA exam to the equilibrium phenomena (Appendix E.5) and obtain conclusions similar to those for base-LT.", "It indicates that our method is a promising development for deep models without introducing additional parameters.", "Besides, we find UT can get benefits from deep settings, but the performance is not very strong.", "Intuitively, this is caused by the global equilibrium strategy observed in base-UT, where CKA scores change dramatically at the early steps.", "Therefore, 20-UT cannot smoothly run to an equilibrium, which results in a suboptimal choice.", "The CKA exam confirms our intuition and draws a conclusion similar to that for base-UT.", "By contrast, instead of searching for a global equilibrium from scratch, LT searches for local equilibria for tokens at the early steps, which results in varying contributions for different token representations, and then turns to find a global equilibrium for all tokens, where we observe a behavior similar to UT.", "We conjecture that LT tends to focus on token-specific characteristics first and then on sequence-specific characteristics.", "In the above experiments, we set the inference steps the same as the training steps, sharing the same strategy with previous works (Dehghani et al., 2019; Lan et al., 2020; Bai et al., 2019).", "We
are interested in an asymmetric strategy in which the number of inference steps differs from the number of training steps.", "Thus, some steps perform zero-shot inference without explicit training.", "Recall that the model oscillates around an equilibrium point and outputs similar representations (if iterating).", "We assume these similar representations can be used for prediction, and we expect similar performance.", "On the other hand, while we visualize the equilibrium phenomena to observe the equilibrium, zero-shot inference can quantitatively examine the equilibrium.", "For this test, we reuse our trained models 9 for En→De but change the settings to {6, 6} = {[1, 12], [1, 12]}, {20, 6} = {[15, 26], [1, 12]}, and {40, 6} = {[35, 46], [1, 12]} respectively.", "9 Interestingly, we find the reported sinusoidal step encodings for UT support this test without bringing noise to untrained inference steps.", "For metrics, we compute avg and std for nonzero (> 1) outputs, where avg indicates how strong the equilibrium point is and std tells us how far the model oscillates around the point.", "We show a part of the experimental results in Table 2 (see all the results in Appendix F).", "We observe that models can achieve competitive performance from zero-shot inference for some steps close to the training step, and the best performance does not always occur exactly at the training step, except for base-UT + ACT.", "Essentially, we observe and confirm some of the conclusions mentioned earlier.", "1) LT's behavior is regularized around the equilibrium, comparing base-LT std: 5.03 with base-UT std: 6.32, and 20-LT std: 1.65 with 20-UT std: 4.62.", "Specifically, LT's lazy transition controls the refinements throughout the iteration so that token representations from one step can jointly compose a sequence representation yielding similar performance to the others, whereas UT is encouraged to compose the final sequence
representations at the training step and lacks a mechanism to control the refinements explicitly.", "2) UT does not oscillate around a strong equilibrium point, i.e., it has a suboptimal equilibrium, because base-UT avg: 22.16 and 20-UT avg: 26.03 are not strong.", "By contrast, LT is stably oscillating around a strong equilibrium and can compose relatively strong sequence representations at different inference steps, because base-LT and 20-LT achieve avg: 24.52 and avg: 28.89 respectively.", "3) In both UT and LT, deep models generally find a stronger and more stable equilibrium than base models.", "4) base-UT + GRU fails in this test, which results in a low avg: 17.46 and a high std: 9.13.", "We suspect this failure is caused by a very irregular behavior at the first step, as the CKA exam indicates in Appendix E.4.", "5) base-UT + ACT outperforms base-UT on avg.", "However, base-UT is slightly more stable than base-UT + ACT.", "Besides, base-UT + ACT generally has a larger gradient norm than the others, which may impact stability and convergence in training.", "6) Due to the halting mechanism in ACT, base-UT + ACT seems to halt the process instead of oscillating around the equilibrium, because the performance is constant after some steps.", "7) base-UT significantly outperforms base-UT w/o SE on stability, which indicates that a mechanism for step identification is an essential component for stability.", "Limitation We have not found a way to pick the inference step with the best performance, because we do not recognize any pattern.", "Meanwhile, we could dynamically select a step to trade off speed and performance by simply copying equilibrium token representations from previous steps to the next steps.", "We leave this for future work.", "Besides, we find LT and UT are not stable when translating sentences longer than 50 words.", "The BLEU score varies from 5 to 60 for different sentences (see Appendix G).", "We
will conduct further experiments to probe this problem.", "ALBERT (Lan et al., 2020) studies the application of UT in pre-training.", "Since LT is extended from UT, we study LT in pre-training, sharing the framework of ALBERT.", "Concretely, our setting is identical to base ALBERT, which we denote as 12-base-ALBERT.", "We set the model dimension, word embedding dimension, and the maximum number of steps to 768, 128, and 12.
Table 3: LT in pre-training.
Model | SQuAD1.1 (F1) | SQuAD2.0 (F1) | MNLI (Acc)
12-base-ALBERT (Lan et al., 2020) | 89.3 | 80.0 | 81.6
⋆ 12-base-ALBERT | 89.4 | 80.0 | 81.4
⋆ 12-base-ALBERT-ACT | 89.5 | 80.5 | 81.6
⋆ 12-base-ALBERT-GRU | 86.9 | 77.8 | 78.6
OURS: 12-base-LT | 89.8 | 81.1 | 82.1
⋆ 24S-base-ALBERT | 89.6 | 80.9 | 81.7
OURS: 24-base-LT | 90.1 | 81.7 | 82.6", "Note that the original ALBERT denotes steps as layers.", "As recommended, we generate masked spans for the MLM targets using the random strategy from (Joshi et al., 2020), and we use the LAMB optimizer ♦ with a learning rate of 0.00176 (You et al., 2020) instead of the Adam optimizer.", "The only change is that we use our LT to replace UT in 12-base-ALBERT, and we denote our model as 12-base-LT.", "Following the instructions, we pre-train models on BooksCorpus ♦ (Zhu et al., 2015) and English Wikipedia ♦ (Devlin et al., 2019) for 140k steps.", "Then, we fine-tune on MNLI ♦ (Williams et al., 2018) and SQuAD (v1.1 and v2.0) ♦ (Rajpurkar et al., 2016, 2018).", "We report the performance on the dev sets, the same as (Devlin et al., 2019; Lan et al., 2020).", "Result We report the results in Table 3.", "In this test, we run all models for 12 steps, and we implement 12-base-ALBERT-ACT and 12-base-ALBERT-GRU, similar to the translation experiment.", "12-base-LT significantly outperforms 12-base-ALBERT, 12-base-ALBERT-ACT, and 12-base-ALBERT-GRU.", "These observations confirm the effectiveness of our lazy transition.", "For further tests, we train all models with 24 steps.",
"Our model benefits from a large number of steps, improving significantly over the base model.", "By contrast, 24-base-ALBERT cannot obtain significant improvements.", "LTE (Learning to Execute) (Zaremba and Sutskever, 2014), including program evaluation tasks (program, control, and addition) and memorization tasks (copy, double, and reverse), is an algorithmic task of varying complexity.", "The goal is to train models on short snippets of Python code to predict the output of the generated programs, which are parameterized by their length and nesting.", "Specifically, length is the number of digits in the integers that appear in the programs, and nesting is the number of times we are allowed to combine the operations.
Table 4: Results on LTE (program evaluation / memorization).
Model | program | control | addition | copy | double | reverse
⋆ LSTM | 54.1 | 69.2 | 83.8 | 78.1 | 51.9 | 92.1
⋆ Transformer (Vaswani et al., 2017) | 72.0 | 92.9 | 99.8 | 98.2 | 94.8 | 81.8
DNC (Graves et al., 2016) | 69.5 | 83.8 | 99.4 | 100.0 | 100.0 | 100.0
EntNet (Henaff et al., 2017) | 73.4 | 83.8 | 98.4 | 91.8 | 62.3 | 100.0
RMC (Santoro et al., 2018) | 79.0 | 99.6 | 99.9 | 100.0 | 99.8 | 100.", "Following the instructions from the official repository of LTE ♦ , we use the mix-strategy to generate the datasets for training.", "Result Table 4 shows the results on LTE.", "Our method yields benchmark performance on the program task of program evaluation (column 2) and reaches SOTA performance on the other tasks.", "We attempt the LAMBADA (Paperno et al., 2016) task to evaluate our model on language modeling tasks of varying complexity.", "The goal of the LAMBADA task is to predict the target word of the target sentence, based on a narrative passage.", "In this test, we only use the standard language-modeling setup, which is more challenging.", "Following the instructions (Parikh et al., 2016), we download the dataset from the official repository of LAMBADA ♦ , and then we train the model to predict the next word as a general language
modeling task on the training dataset but only predict the target word at test time.", "Note that we do not compare our method with pre-training-based SOTA (Radford et al., 2018; Brown et al., 2020; Alec Radford, 2020).", "Readers can refer to Appendix D or the authors' papers for more details.", "Result Table 5 shows the results on the LAMBADA task.", "1) We first observe that Transformer fails in this test.", "Specifically, Transformer shows strong performance on control but weak performance on test.", "The low performance on test cannot be attributed simply to poor language modeling, because control is used to evaluate Transformer in standard language modeling before test.", "We suspect that the low performance on test can be attributed to a lack of inductive bias in training.", "Concretely, Transformer is trained to predict the next word as a general language modeling task but only predicts the target word at test time.", "The varying complexity leads to failure on test, similar to the report in (Dehghani et al., 2019).", "2) Our method significantly improves the test perplexity (Ppl.), which means our method does not hurt the recurrent inductive bias inherited from UT 10 .", "Also, our method is robust to the maximum number of steps we choose (base-LT achieves strong results), whereas UT seems a bit sensitive to the maximum number of steps.", "In this work, we place our lazy transition on top of UT to build LT.
Our lazy transition leverages the relaxed equilibrium for sequence modeling and provides implicit step identification, such that the model ponders the importance of every step for different tokens throughout the iteration.", "Our main experiment shows that LT can achieve strong performance on translation tasks, facilitate the training of deep models, and tackle the challenge of zero-shot inference.", "Our method retains the recurrent inductive bias learned by its UT component, which is confirmed by our secondary experiments.", "LT tends to focus on token-specific characteristics at the early steps and then turns to sequence-specific ones at the late steps, especially in deep settings.", "Meanwhile, stable and smooth behaviors in the early iterations are important.", "Although there are some practical limitations, as mentioned in this paper, we believe our lazy transition offers a novel perspective for reconsidering models based on iterative refinements in sequence modeling.", "10 As mentioned before, LT can inherit the recurrent inductive bias of UT for handling varying complexity." ]
[ "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other", "other", "other", "other", "method", "method", "method", "other", "other", "other", "other", "other", "method", "method", "other", "method", "method", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "objective", "abstain" ]
[ "Training neural models for named entity recognition (NER) in a new domain often requires additional human annotations that are usually expensive and time-consuming to collect.", "Thus, a crucial research question is how to obtain supervision in a cost-effective way.", "In this paper, we introduce entity triggers, an effective proxy of human explanations for facilitating label-efficient learning of NER models.", "An entity trigger is defined as a group of words in a sentence that helps to explain why humans would recognize an entity in the sentence.", "We crowd-sourced 14k entity triggers for two well-studied NER datasets 1 .", "Our proposed model, Trigger Matching Network, jointly learns trigger representations and a soft matching module with self-attention such that it can generalize to unseen sentences easily for tagging.", "The framework is significantly more cost-effective than traditional frameworks.", "Named entity recognition (NER) is a fundamental information extraction task that focuses on extracting entities from a given text and classifying them using pre-defined categories (e.g., persons, locations, organizations) (Nadeau and Sekine, 2007).", "Recent advances in NER have primarily focused on training neural network models with an abundance of human annotations, yielding state-of-the-art results (Lample et al., 2016).", "However, collecting human annotations for NER is expensive and time-consuming, especially in social media messages (Lin et al., 2017a) and technical domains such as biomedical publications, financial documents, legal reports, etc.", "As we seek to advance NER into more domains with less human effort, how to learn neural models for NER in a cost-effective way becomes a crucial research problem.", "(The first two authors contributed equally.)", "The standard protocol for obtaining an annotated NER dataset involves an annotator selecting token spans in a sentence as mentions of entities, and labeling them with an entity type.
"However, such an annotation process provides limited supervision per example. Consequently, one would need a large amount of annotations in order to train high-performing models for a broad range of entity types, which can clearly be cost-prohibitive. The key question is then: how can we learn an effective NER model in the presence of limited quantities of labeled data?", "We, as humans, recognize an entity within a sentence based on certain words or phrases that act as cues. For instance, we could infer that 'Kasdfrcxzv' is likely to be a location entity in the sentence 'Tom traveled a lot last year in Kasdfrcxzv.' We recognize this entity because of the cue phrase 'travel ... in', which suggests there should be a location entity following the word 'in'. We call such phrases entity triggers.", "Similar to the way these triggers guide our recognition process, we hypothesize that they can also help the model to learn to generalize efficiently. Specifically, we define an entity trigger (or trigger for simplicity) as a group of words that can help explain the recognition process of a particular entity in the same sentence. For example, in Figure 1, 'had ... lunch at' 2 and 'where the food' are two distinct triggers associated with the RESTAURANT entity Rumble Fish.", "An entity trigger should be a necessary and sufficient cue for humans to recognize its associated entity even if we mask the entity with a random word. Thus, unnecessary words such as 'fantastic' should not be considered part of the entity trigger. In this paper, we argue that a combination of entity triggers and standard entity annotations can enhance the generalization power of NER models. This approach is more powerful because unlabeled sentences, such as 'Bill enjoyed a great dinner with Alice at Zcxlbz.', can be matched with the existing trigger 'had ... lunch at' via their semantic relatedness. This makes it easier for a model to recognize Zcxlbz as a RESTAURANT entity.
In contrast, if we only have the entity annotation itself (i.e., Rumble Fish) as supervision, the model will require many similar examples in order to learn this simple pattern. We hypothesize that using triggers as additional supervision is a more cost-effective way to train models. We crowd-sourced annotations of 14,708 triggers on two well-studied NER datasets to study their usefulness for the NER task. Also, we propose a novel framework named Trigger Matching Network that learns trigger representations indicative of entity types during the training phase, and identifies triggers in an unlabeled sentence at inference time to guide a traditional entity tagger for delivering better overall NER performance. Different from conventional training, our learning process has two stages, where the first stage comprises jointly training a trigger classifier and the semantic trigger matcher, followed by a second stage that leverages the trigger representation and the encoding of the given sentence using an attention mechanism to learn a tagger. Experiments show that the proposed model using only 20% of the trigger-annotated sentences achieves performance comparable to using 70% of conventionally annotated sentences. 2 Problem Formulation We consider the problem of how to cost-effectively learn a model for NER using entity triggers. 2 Note that a trigger can be a discontinuous phrase. In this section, we introduce basic concepts and their notations, present the conventional data annotation process for NER, and provide a formal task definition for learning using entity triggers. In the conventional setup for supervised learning for NER, we let x = [x^(1), x^(2), ..., x^(n)] denote a sentence in the labeled training corpus D_L. Each labeled sentence has a NER-tag sequence y = [y^(1), y^(2), ..., y^(n)], where y^(i) ∈ Y and Y can be {O, B-PER, ...}. Thus, we have D_L = {(x_i, y_i)}, and an unlabeled corpus D_U = {x_i}.
We propose to annotate entity triggers in sentences. We use T(x, y) to represent the set of annotated entity triggers, where each trigger t_i ∈ T(x, y) is associated with an entity index e and a set of word indices {w_i}. Note that we use the index of the first word of an entity as its entity index. That is, t = ({w_1, w_2, ...}, e), where e and each w_i are integers in the range [1, |x|]. Adding triggers creates a new form of data D_T = {(x_i, y_i, T(x_i, y_i))}. Our goal is to learn a model for NER from a trigger-labeled dataset D_T, such that we can achieve learning performance comparable to a model trained with a much larger D_L. 3 Trigger Matching Networks We propose a straightforward yet effective framework, named Trigger Matching Networks (TMN), consisting of a trigger encoder (TrigEncoder), a semantic-based trigger matching module (TrigMatcher), and a base sequence tagger (SeqTagger). We have two learning stages for the framework: the first stage (Section 3.1) jointly learns the TrigEncoder and TrigMatcher, and the second stage (Section 3.2) uses the trigger vectors to learn NER tag labels. 3.1 Trigger Encoding & Semantic Matching Learning trigger representations and semantically matching them with sentences are inseparable tasks. Desired trigger vectors capture the semantics in a shared embedding space with token hidden states, such that sentences and triggers can be semantically matched; learning an attention-based matching module between entity triggers and sentences is therefore necessary. Specifically, for a sentence x with multiple entities {e_1, e_2, ...}, for each entity e_i we assume that there is a set of triggers T_i = {t_1^(i), t_2^(i), ...} without loss of generality. To enable more efficient
Figure 2: Two-stage training in Trigger Matching Network (Left).", "batch-based training, we reformat the trigger-based annotated dataset D_T such that each new sequence contains only one entity and one trigger.", "We then create a training instance by pairing each entity with one of its triggers, denoted (x, e_i, t_j^(i)).", "For each reformed training instance (x, e, t), we first apply a bidirectional LSTM (BLSTM) on the sequence of word vectors of x, obtaining a sequence of hidden states that are the contextualized word representations h_i for each token x_i in the sentence.", "We use H to denote the matrix containing the hidden vectors of all of the tokens, and we use Z to denote the matrix containing the hidden vectors of all trigger tokens inside the trigger t.", "In order to learn an attention-based representation of both triggers and sentences, we follow the self-attention method introduced by Lin et al. (2017b) as follows: $\vec{a}_{sent} = \mathrm{SoftMax}(W_2 \tanh(W_1 H^T))$, $g_s = \vec{a}_{sent} H$; $\vec{a}_{trig} = \mathrm{SoftMax}(W_2 \tanh(W_1 Z^T))$, $g_t = \vec{a}_{trig} Z$. Here $W_1$ and $W_2$ are two trainable parameters for computing the self-attention score vectors $\vec{a}_{sent}$ and $\vec{a}_{trig}$.", "We obtain a vector representing the weighted sum of the token vectors in the entire sentence as the final sentence vector g_s.", "Similarly, g_t is the final trigger vector, representing the weighted sum of the token vectors in the trigger.", "We want to use the type of the associated entity as supervision to guide the trigger representation.", "Thus, the trigger vector g_t is further fed into a multi-class classifier to predict the type of the associated entity e (such as PER, LOC, etc.), which we denote type(e).", "The loss of the trigger classification is as follows: $L_{TC} = -\sum \log P(\mathrm{type}(e) \mid g_t; \theta_{TC})$, where $\theta_{TC}$ is a model parameter to learn.", "Towards learning to match triggers and sentences based on attention-based representations, we use a contrastive loss (Hadsell et al., 2006).", "The intuition is that similar triggers and sentences should have close representations (i.e., a small distance d between them).", "We create negative examples (i.e., mismatches) for training by randomly mixing the triggers and sentences, because TrigMatcher needs to be trained with both positive and negative examples of the form (sentence, trigger, label).", "For the negative examples, we expect a margin m between their embeddings.", "The contrastive loss of soft matching is as follows, where $\mathbb{1}_{matched}$ is 1 if the trigger was originally in this sentence and 0 if not: $d = \|g_s - g_t\|_2$, $L_{SM} = \mathbb{1}_{matched} \frac{1}{2} d^2 + (1 - \mathbb{1}_{matched}) \frac{1}{2} \{\max(0, m - d)\}^2$. The joint loss of the first stage is thus $L = L_{TC} + \lambda L_{SM}$, where $\lambda$ is a hyper-parameter to tune.", "The learning objective in this stage is to output the tag sequence y.", "Following the most common design of neural NER architectures, BLSTM-CRF (Ma and Hovy, 2016), we incorporate the entity triggers as attention queries to train a trigger-enhanced sequence tagger for NER.", "Note that the BLSTM used in the TrigEncoder and TrigMatcher modules is the same BLSTM we use
in the SeqTagger to obtain H, the matrix containing the hidden vectors of all of the tokens.", "Given a sentence x, we use the previously trained TrigMatcher to compute the mean of all the trigger vectors g_t associated with this sentence.", "Following the conventional attention method (Luong et al., 2015), we incorporate the mean trigger vector as the query, creating a sequence of attention-based token representations, H'.", "U_1, U_2, and v are trainable parameters for computing the trigger-enhanced attention scores for each token.", "Finally, we concatenate the original token representation H with the trigger-enhanced one H' as the input ([H; H']) to the final CRF tagger.", "Note that in this stage, our learning objective is the same as in conventional NER: to correctly predict the tag for each token.", "When inferring tags on unlabeled sentences, we do not know the sentence's triggers.", "Instead, we use the TrigMatcher to compute the similarities between the self-attended sentence representations and the trigger representations, using the most suitable triggers as additional inputs to the SeqTagger.", "Specifically, we have a trigger dictionary from our training data, T = {t | (·, ·, t) ∈ D_T}.", "Recall that we have learned a trigger vector for each of them, and we can load these trigger vectors as a look-up table in memory.", "For each unlabeled sentence x, we first compute its self-attended vector g_s as we do when training the TrigMatcher.", "Using L2-norm distances (as in the contrastive loss), we efficiently retrieve the most similar triggers in the shared embedding space of the sentence and trigger vectors.", "Then, we calculate g_t, the mean of the top k nearest semantically matched triggers, as this serves as a proxy for the triggers mentioned for the entity type in the labeled data.", "We then use it as the attention query for SeqTagger, as described in Sec.
3.2.", "In this section, we first discuss how to collect entity triggers, and then empirically study the data-efficiency of our proposed framework.", "We use a general-domain dataset, CoNLL2003 (Tjong Kim Sang and De Meulder, 2003), and a bio-medical-domain dataset, BC5CDR (Li et al., 2016).", "Both datasets are well-studied and popular for evaluating the performance of neural named entity recognition models such as BLSTM-CRF (Ma and Hovy, 2016).", "To collect the entity triggers from human annotators, we use Amazon SageMaker Ground Truth to crowd-source entity triggers.", "More recently, Lee et al. (2020) developed an annotation framework, named LEAN-LIFE, which supports our proposed trigger annotation.", "Specifically, we sample 20% of each training set as our inputs, and then reform them (Section 2).", "Annotators are asked to annotate a group of words that would be helpful in typing and/or detecting the occurrence of a particular entity in the sentence.", "We masked the entity tokens with their types so that human annotators are more focused on the non-entity words in the sentence when considering the triggers.", "We consolidate multiple triggers for each entity by taking the intersection of the three annotators' results.", "Statistics of the final curated triggers are summarized in Table 1.", "
4.2 Base model We require a base model to compare with our proposed TMN model in order to validate whether the TMN model effectively uses triggers to improve performance in a limited-label setting.", "We choose the CNN-BLSTM-CRF (Ma and Hovy, 2016) as our base model for its wide usage in research on neural NER models and applications.", "Our TMNs are implemented within the same codebase and use the same external word", "vectors from GloVe (Pennington et al., 2014). (Amazon SageMaker Ground Truth, used for our annotation, is an advanced version of Amazon Mechanical Turk.)", "The hyper-parameters of the CNNs, BLSTMs, and CRFs are also the same.", "This ensures a fair comparison between a typical non-trigger NER model and our trigger-enhanced framework.", "Labeled data efficiency.", "We first seek to study the cost-effectiveness of using triggers as an additional source of supervision.", "Accordingly, we explore the performance of our model and the baseline for different fractions of the training data.", "The results on the two datasets are shown in Table 2.", "The full results are shown in Table 3.", "We can see that by using only 20% of the trigger-annotated data, the TMN model delivers performance comparable to the baseline model using 50-70% of the conventional training data.", "The drastic improvement in model performance obtained using triggers thus justifies the slight additional cost incurred in annotating triggers.", "Self-training with triggers.", "We also conduct a preliminary investigation of adopting self-training (Rosenberg et al., 2005) with triggers.", "We make inferences on unlabeled data and take the predictions with high confidence as weak training examples for continually training the model.", "The confidence is computed following the MNLP metric (Shen et al., 2017), and we take the top 20% every epoch.", "With the self-training method, we further improve the TMN model's F1 scores by about 0.5-1.0%.", "Annotation time vs.
performance.", "Although it is hard to accurately study the time cost on the crowd-sourcing platform we use, based on our offline simulation we argue that annotating both triggers and entities takes about 1.5 times (BLSTM-CRF (x1.5)) longer than annotating only entities.", "Figure 3: The cost-effectiveness study.", "In Figure 3, the x-axis for BLSTM-CRF means the number of sentences annotated with only entities, while for TMN it means the number of sentences tagged with both entities and triggers.", "In order to reflect human annotators spending 1.5 to 2 times as long annotating triggers and entities as they spend annotating only entities, we stretch the x-axis for BLSTM-CRF.", "We can clearly see that the proposed TMN outperforms the BLSTM-CRF model by a large margin.", "Even if we consider the extreme case in which tagging triggers requires twice the human effort (BLSTM-CRF (x2)), the TMN is still significantly more labor-efficient in terms of F1 scores.", "We introduce the entity trigger as a complementary form of annotation.", "We crowd-sourced triggers on two mainstream datasets, which we will release to the community, and proposed a novel framework, TMN, which generalizes easily to unseen sentences when tagging named entities.", "This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007, NSF SMA 18-29268, and Snap research gift.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "method", "other", "other", "objective", "method", "objective", "method", "method", "result", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other" ]
[ "We present PROTOTEX, a novel white-box NLP classification architecture based on prototype networks (Li et al., 2018). (Both authors contributed equally.)", "PROTOTEX faithfully explains model decisions based on prototype tensors that encode latent clusters of training examples.", "At inference time, classification decisions are based on the distances between the input text and the prototype tensors, explained via the training examples most similar to the most influential prototypes.", "We also describe a novel interleaved training algorithm that effectively handles classes characterized by the absence of indicative features.", "On a propaganda detection task, PROTOTEX accuracy matches BART-large and exceeds BERT-large with the added benefit of providing faithful explanations.", "A user study also shows that prototype-based explanations help non-experts to better recognize propaganda in online news.", "Neural models for NLP have yielded significant gains in predictive accuracy across a wide range of tasks.", "However, these state-of-the-art models are typically less interpretable than simpler, traditional models, such as decision trees or nearest-neighbor approaches.", "In general, less interpretable models can be more difficult for people to use, trust, and adopt in practice.", "Consequently, there is growing interest in going beyond simple black-box model accuracy to instead design models that are both highly accurate and human-interpretable.", "While much research on white-box explainable models focuses on attributing parts of the input (e.g., word sequences) to a model's prediction (Xu et al., 2015; Lei et al., 2016; Bastings et al., 2019; Jain et al., 2020; Glockner et al., 2020), there is much debate around their faithfulness and reliability (Serrano and Smith, 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Pruthi et al.,
2020).", "Additionally, while such local explanations (if faithful) can be extremely useful in more intuitive tasks such as sentiment classification, that may not be the case for difficult tasks where human judgments may require a high degree of training or domain expertise.", "In such cases, understanding how models make their decisions for a particular input based on its training data can be insightful, especially for engaging with users to develop an intuition on the model's decision-making process.", "In this paper, we propose the Prototype Tensor Explainability Network (PROTOTEX; https://github.com/anubrata/ProtoTEx/) to faithfully explain classification decisions in the tradition of case-based reasoning (Kolodner, 1992).", "Our novel white-box NLP architecture augments prototype classification networks (Li et al., 2018) with large-scale pretrained transformer language models.", "Through a novel training regime, the network learns a set of prototype tensors that encode latent clusters of training examples.", "At inference time, classification decisions are entirely based on similarity to prototypes.", "This enables model predictions to be faithfully explained based on these prototypes, directly via similar training examples (i.e., those most similar to the top-matched prototypes).", "We build upon state-of-the-art NLP neural architectures to augment their accuracy with faithful and human-interpretable explanations.", "Figure 1 shows an example of PROTOTEX on the task of propaganda detection (Da San Martino et al., 2019).", "Another contribution of PROTOTEX concerns effective modeling of positive vs. negative classes in the presence of asymmetry.", "In a typical binary classification (e.g., sentiment detection), the presence of positive vs. negative language can be used to distinguish classes.", "However, with a task such as Web search, what most distinguishes relevant vs. irrelevant search results is the presence vs. absence of relevant content.", "Figure 1: PROTOTEX architecture along with a use case demonstration.", "Having this absence (rather than presence) of certain features most clearly distinguish a class complicates both predicting it and explaining these predictions to users.", "To address this, we introduce a single negative prototype for representing the negative class, learned via a novel training regime.", "We show that including this negative prototype significantly improves results.", "While our model is largely agnostic to the prediction task, we evaluate PROTOTEX on a sentence-level binary propaganda detection task (Da San Martino et al., 2019).", "Recent work on explainable fact-checking (Kotonya and Toni, 2020a) has provided explanations via attention (Popat et al., 2018; Shu et al., 2019), rule discovery (Gad-Elrab et al., 2019), and summarization (Atanasova et al., 2020; Kotonya and Toni, 2020b,a), but not prototypes.", "Better explanations could enable support for human fact-checkers (Nakov et al., 2021).", "We show that PROTOTEX provides faithful explanations without reducing classification accuracy, which remains comparable to that of the underlying encoder, BART-large (Lewis et al., 2020), and superior to that of BERT-large (Devlin et al., 2019), with the added benefit of faithful explanations in the spirit of case-based reasoning.", "Furthermore, to the best of our knowledge, ours is the first work in NLP that examines the utility of global case-based explanations for non-expert users in model understanding and downstream task accuracy.", "Prototype classification networks (Li et al., 2018; Chen et al., 2019; Hase et al., 2019) are white-box models with explainability built in via case-based reasoning (Kolodner, 1992) rather than extractive rationales (Lei et al., 2016; Bastings et al., 2019; Jain et al., 2020; Glockner et al., 2020).", "They are the neural variant of prototype classifiers (Bien and Tibshirani, 2011; Kim et al.,
2014), predicting based on similar known instances.", "Contemporary work (Rajagopal et al., 2021) also stressed the importance of global explainability through training examples, yet in their approach the similar training examples are not directly integrated in the decision itself; in contrast, we do so via learned prototypes to provide more transparency.", "Our work builds on Li et al. (2018), which we lay out in Section 3.1.", "Later work (Chen et al., 2019; Hase et al., 2019) enables prototype learning of partial images.", "In NLP, Guu et al. (2018) retrieved prototype examples from the training data for edit-based natural language generation.", "Hase and Bansal (2020) examined a variant of Chen et al. (2019)'s work, among other approaches; unlike our work, they used feature activation to obtain explanations, similar to post-hoc approaches, and did not handle the absence of relevant content.", "Evaluating explainability Explainability is a multi-faceted problem.", "HCI concerns include:", "a) For whom are we designing the explanations?", "b) What goals are they trying to achieve?", "c) How can we best convey information without imposing excessive cognitive load?", "and", "d) Can explainable systems foster more effective human+AI partnerships (Amershi et al., 2019; Wickramasinghe et al., 2020; Wang et al., 2019; Liao et al., 2020; Wang et al., 2021; Bansal et al., 2021)?", "On the other hand, algorithmic concerns include generating faithful and trustworthy explanations (Jacovi and Goldberg, 2020), local vs. global explanations, and post-hoc vs.
self-explanations (Danilevsky et al., 2020).", "Explainability evaluation methods (Doshi-Velez and Kim, 2017) include measuring faithfulness (Jacovi and Goldberg, 2020), enabling model simulatability (Hase et al., 2019), behavioral testing (Ribeiro et al., 2020), and evaluating intelligent user interactions (Nguyen et al., 2018).", "Human+AI fake news detection While explainable fact-checking (Kotonya and Toni, 2020a) could better support human-in-the-loop fact-checking (Nakov et al., 2021; Demartini et al., 2020), studies rarely assess a human+AI team in combination (Nguyen et al., 2018).", "In fact, human+AI teams often under-perform the human or AI working alone (Bansal et al., 2021), emphasizing the need to carefully baseline performance.", "Propaganda detection (Da San Martino et al., 2019) constitutes a form of disinformation detection.", "Because propaganda detection is a hard task for non-expert users and state-of-the-art models are not accurate enough for practical use, explainability may promote adoption of computational propaganda detection systems (Da San Martino et al., 2021).", "We adopt prototype classification networks (Li et al., 2018), first proposed for vision tasks, as the foundation for our prototype modeling work (Section 3.1).", "We design a novel interleaved training procedure, as well as a new batching process, to", "(a) incorporate large-scale pretrained language models, and", "(b) address, within classification tasks, classes that can only be predicted by the absence of characteristics indicative of other classes.", "PROTOTEX is based on Li et al.
(2018)'s Prototype Classification Network, and we integrate pretrained language model encoders under this framework.", "Their architecture is based on learning prototype tensors that serve to represent latent clusters of similar training examples (as identified by the model).", "Classification is performed via a linear model that takes as input the distances to the prototype tensors.", "As such, the network is a white-box model where global explanation is attained by directly linking the model to learned clusters of the training data.", "As shown in Figure 1, the input is first encoded into a latent representation.", "This representation is fed through a prototype layer, where each unit of that layer is a learned prototype tensor that represents a cluster of training examples through loss terms $L_{p1}$ and $L_{p2}$ (specified by Equations 2 and 3 below).", "For each prototype j, the prototype layer calculates the L2 distance between its representation $p_j$ and that of the input $x_i$, i.e., $\|x_i - p_j\|_2^2$.", "The output of the prototype layer, which is a matrix of L2 distances, is then fed into a linear layer; this learns a weight matrix of dimension $K \times m$ for K classes and m prototypes, where the K weights learned for each prototype indicate that prototype's relative affinity to each of the K classes.", "Classification is performed via softmax.", "$L = L_{ce} + \lambda_1 L_{p1} + \lambda_2 L_{p2}$ (1), with hyperparameters $\lambda_1, \lambda_2$, standard classification cross-entropy loss $L_{ce}$, and two prototype loss terms, $L_{p1}$ and $L_{p2}$. (In Li et al. (2018), a fourth reconstruction loss is also used with their convolutional network.)", "$L_{p1}$ minimizes avg.", "squared distance between each of the m prototypes and at least one encoded input: $L_{p1} = \frac{1}{m} \sum_{j=1}^{m} \min_{i=1,\dots,n} \|p_j - x_i\|_2^2$ (2), encouraging each learned prototype representation to be similar to at least one training example.", "$L_{p2}$ encourages training examples to cluster around prototypes in the latent space by minimizing the average squared distance between every encoded input and at least one prototype: $L_{p2} = \frac{1}{n} \sum_{i=1}^{n} \min_{j=1,\dots,m} \|x_i - p_j\|_2^2$ (3). Li et al. (2018) used convolutional autoencoders to represent input images.", "However, in the context of NLP, convolutional neural networks do not have sufficient representation power (Elbayad et al., 2018), and transformer-based language models, which are pretrained on large amounts of data, have consistently performed better in recent research.", "Thus, to encode inputs, we experiment with", "
mathematical intuition of how prototype layers relate to soft-clustering (which is inherently interpretable) in Appendix A.1.", "Section 1 noted a challenge in effectively modeling positive vs. negative classes in the presence of asymmetry.", "With detection tasks (e.g., finding relevant documents (Kutlu et al., 2020) or propaganda (Da San Martino et al., 2019)), the negative class may be most distinguished by the lack of positive features (rather than the presence of negative ones).", "Algorithm 2: Decoupled training for prototypes and classification, which enables the learning of the negative prototype.", "If a document is relevant only if it contains relevant content, how can one show the lack of such content?", "This poses a challenge both in classifying negative instances and in explaining such classification decisions on the basis of missing features.", "For propaganda, Da San Martino et al. (2019) sidestep the issue by only providing rationales for positive instances.", "For relevance, Kutlu et al. (2020) define a negative rationale as summarizing the instance, to succinctly show it is not germane to the positive class.", "However, if we conceptualize the positive class as a specific foreground to be distinguished from a more general background, such summary negative rationales drawn from the background distribution are likely to provide only weak, noisy evidence for the negative class. 
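The prototype layer and its two loss terms described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the function names and array shapes are my own assumptions.

```python
import numpy as np

def prototype_forward(x, prototypes, W):
    """Prototype-layer forward pass.

    x: (n, d) encoded inputs; prototypes: (m, d) prototype tensors;
    W: (K, m) linear weights over prototype distances.
    """
    # Squared L2 distance from every encoded input to every prototype:
    # d[i, j] = ||x_i - p_j||_2^2
    d = ((x[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Linear layer over the distance matrix, then softmax over K classes.
    logits = d @ W.T
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return d, e / e.sum(axis=-1, keepdims=True)

def prototype_losses(d):
    # L_p1 (Eq. 2): each prototype should lie near at least one example.
    l_p1 = d.min(axis=0).mean()
    # L_p2 (Eq. 3): each example should lie near at least one prototype.
    l_p2 = d.min(axis=1).mean()
    return l_p1, l_p2
```

Because the classifier is a linear map over the distance matrix, each prototype's contribution to a prediction can be read off directly, which is the basis of the faithfulness claim.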
We investigate the potential value of including or excluding a single negative prototype to model this background negative class, and design an interleaved training procedure to learn this prototype.", "We present two algorithms for training.", "The vanilla one, which we call SIMPLEPROTOTEX, does not interleave the training of positive and negative prototypes.", "This is illustrated in Algorithm 1.", "One of our contributions is the design of an iterative, interleaved approach to training that balances competing loss terms, encouraging each learned prototype to be similar to at least one training example (L_p1) and encouraging training examples to cluster around prototypes (L_p2).", "Figure 2: Macro-F1 score of PROTOTEX predicting examples that belong to each propaganda subclass.", "We perform each type of representation update separately to ensure that we progressively push the prototypes and the encoded training examples closer to one another.", "We illustrate this process in Algorithm 2.", "We initialize prototypes with Xavier initialization, which allows the prototype tensors to start blind (thus unbiased) with respect to the training data and discover novel patterns or clusters on their own.", "After initialization, in each iteration, we first update the prototype tensors to become closer to at least one training example (henceforth the prototype loop).", "Then, in a separate training iteration, we update the representations of the training examples to push them closer to the nearest prototype tensor (henceforth the example loop).", "Since prototypes themselves do not have directly trainable parameters, we train the classification layer together with the encoder representations during the example loop.", "We further separate the training of the positive and negative prototypes in order to push the negative background examples to form their own cluster.", "To this end, we perform class-level 
masking by setting the distances between the examples and prototypes of different classes to infinity.", "Finally, we perform instance normalization (Ulyanov et al., 2016) for all distances in order to achieve segregation among different prototypes (namely, so that the prototypes of the same class do not rely solely on a handful of examples).", "We discuss the effects of instance normalization in Section 4.2.", "Task: We evaluate a binary sentence-level classification task predicting whether or not each sentence contains propaganda.", "We adopt Da San Martino et al. (2019)'s dataset of 21,230 sentences from news articles, with a 70/10/20 train/development/test split.", "Only 35.2% of sentences contain propaganda.", "The data is further classified into 18 fine-grained categories of propaganda; see the analysis of prototypes in Section 4.2.", "Hyperparameters are tuned on the validation data.", "Optimization for all neural models uses the AdamW (Loshchilov and Hutter, 2019) optimizer with a learning rate of 3e-5 and a batch size of 20.", "We use early stopping (Fomin et al., 2020) with macro F1 on validation data.", "We further perform upsampling within each batch to balance the number of examples in the positive and the negative classes.", "Prototype Models: PROTOTEX can be used across different underlying encoders on which interpretability components are added.", "Empirically, we found BART performed better on classification and so adopt it.", "We empirically determine the optimal number of prototypes to be 20, with one negative prototype.", "Other hyperparameters are set to 1 and 2, with λ1 = λ2 = 0.9.", "To achieve maximum transparency, we set the bias term in the linear layer to 0 so that all information goes through the prototypes.", "Additionally, we compare to SIMPLEPROTOTEX, which trains without use of the negative prototype.", "Baselines: As a strong black-box benchmark we use pretrained LMs without prototypes.", "BERT-large (Devlin et al., 2019): we use a simple linear layer 
over the output of the CLS token from the BERT encoder for classification.", "BART-large (Lewis et al., 2020): we use the eos token's representation from the BART encoder as input to the linear layer of the model.", "We also include a random baseline and a case-based reasoning K-Nearest-Neighbor (KNN-BART) baseline with the BART-large encoder.", "Table 1 shows F1 scores achieved by the models.", "Among the black-box baselines, the BART-large encoder representation outperformed BERT-large significantly (p < 0.05, bootstrap test (Berg-Kirkpatrick et al., 2012)).", "PROTOTEX performed on par with its underlying encoder BART, showing that PROTOTEX's explainability came at no cost to classification performance.", "It also substantially outperforms the KNN-BART baseline.", "Figure 2 shows F1 for the examples pertaining to each subclass labeled by Da San Martino et al. (2019).", "We can see that the model performance is relatively consistent across subclasses.", "The two subclasses that are most difficult for the model are Reductio ad Hitlerum and Appeal to Authority.", "In Figure 3, we visualize how different prototypes focus on different subclasses.", "We also see that negative examples are associated only with the negative prototype, and vice versa.", "Negative Prototype.", "Using a negative prototype slightly improves on the SIMPLEPROTOTEX results.", "Lacking a negative prototype, the only way to classify the negative class would be via a negative correlation on the distance between the test input and the learned prototypes.", "The use of the negative prototype simplifies the discriminatory process by dissociating the classification of the negative class from the classification of the positive class.", "Because PROTOTEX's explainability comes from retrieving the most informative training examples, it will not be helpful for people if all prototypes are close to only a few training examples.", "Instead, it would be more beneficial for the 
prototypes to represent more subtle patterns within the training examples belonging to the same class.", "We refer to this phenomenon as prototype segregation.", "While the classification layer ensures that positive and negative examples (and their prototypes) are separated, it does not take into account segregation within the positive class.", "Similarly, the prototype losses L_p1 and L_p2 only locally ensure the closeness of examples to prototypes and vice versa.", "To encourage segregation, we perform instance normalization (Ulyanov et al., 2016) for all distances.", "This effect is shown in Figure 4.", "Specifically, we retrieve the 5 closest training examples for each of our 20 prototypes; good segregation would mean that a large portion of these examples are unique (the highest value is 100, meaning that all examples are unique), while bad segregation means that a large portion of these examples are the same (the lowest value is 5, meaning that all prototypes are closest to the same 5 training examples).", "Without normalization, we have only 17 unique examples across all 20 prototypes, yet with normalization this number is 88.", "Furthermore, almost all of the 88 training examples are associated with only one prototype.", "PROTOTEX provides case-based explanations (as shown in Table 2) for its classification decisions.", "Given the set of top prototypes most influential in predicting the class for a given example, we hypothesize that these top prototypes will be representative of the example and the label corresponding to the example.", "We carry out two user studies to assess the utility of these prototype-based explanations for non-expert end users.", "Specifically, we examine whether model explanations help non-expert users to: 1) better recognize propaganda in online news; and 2) better understand model behavior.", "We obtain 540 user responses, based on 20 test-set examples, balancing gold labels and model predictions to include 5 examples from each group: 
true positives, false negatives, true negatives, and false positives.", "To simplify propaganda definitions for non-experts, we pick only four types of propaganda and we provide participants with definitions and examples for each type: Appeal to Authority, Exaggeration or Minimisation, Loaded Language, and Doubt.", "We select these categories because they cover the majority of the examples in the test set.", "For each example, we select the top-5 prototypes that most influenced the model's prediction.", "We then represent each prototype by the closest training example in the embedding space.", "As with case-based reasoning, we explain model decisions to participants by showing for each test example the five training examples that best represent the evidence (prototypes) consulted by the model in making its prediction.", "Participants are primed that the model is wrong in 50% of the cases (to prevent over-trust).", "In this first Likert-scale rating task, participants are asked whether the test example contains propaganda.", "Options included: definitely, probably, probably not, definitely not, or I have no idea (completely unsure how to respond).", "We compare the following four study conditions: No Explanation (Baseline) We show only the test example that needs to be classified.", "Explanation Only (EO) We also show five training examples, each representing a top-5 prototype influencing the model prediction, as the evidence consulted by the model in arriving at its prediction.", "Random sampling of examples has been successfully used in tasks such as Semantic Textual Similarity (STS) and Natural Language Inference (NLI) (Agirre et al., 2013; Gold et al., 2019) to obtain a reasonable lower bound.", "Comparison with a random baseline demonstrates that our system selects examples that can improve human performance.", "In the second baseline condition, when we provide random examples as the explanation, accuracy drops to 44%.", "We also measure how varying 
model accuracy impacts the effect of model explanations, comparing four model accuracy conditions: 0% (always incorrect), 50%, 75%, and 100% (always correct).", "When the model is always wrong, explanations reduce human performance below both baselines (38% in the EO condition, 26% in ME).", "At 50% model accuracy, human performance is higher than in the random condition, but lower than the baseline.", "At 75%, the ME condition outperforms the baseline (67%).", "Finally, at 100% model performance both model conditions improve the accuracy of the human annotation, with the ME condition reaching 84%.", "Our sample size of 540 exceeds the 70 needed to achieve statistical power for between-subject studies (Bojko, 2013).", "Results from this experiment demonstrate that case-based explanations can improve human performance compared to a random baseline.", "However, the utility of the explanations is a function of the model accuracy.", "The second user task investigates model understanding by simulatability (Hase et al., 2019): can the participant predict the model decision given the most important evidence consulted by the model?", "Specifically, we show five training examples to the user, either Random Examples (RE) or PROTOTEX Examples (PE) (i.e., the same training examples used in the EO condition above).", "We ask participants to predict the model's decision using the same 5-point Likert scale as earlier.", "Results: Per Figure 6a, PROTOTEX's explanations help the users predict the model behavior better than random examples: 50% correct user assessment for PE vs. 43.3% for RE.", "In 23.3% of the RE examples users are unable to make a prediction vs. 
8% for the PE.", "Random guessing would be 40% accurate on a five-way rating task with 2 positive, 1 neutral, and 2 negative options (Section 5.1).", "In Figure 6b we can see that the users are better at assessing the model prediction when the model is right (57%) vs. when the model is wrong (43%).", "Additionally, we see that fewer users report an inability to identify the model prediction when the model is correct (3.33%) vs. when the model is not (13.3%).", "PROTOTEX is a novel approach to faithfully explain classification decisions by directly connecting model decisions with training examples via learned prototypes.", "PROTOTEX builds upon the state of the art in NLP.", "It integrates an underlying transformer encoder with prototype classification networks, and uses a novel, interleaving training algorithm for prototype learning.", "On the challenging propaganda detection task, PROTOTEX performed on par in classification with its underlying encoder (BART-large), and exceeded BERT-large, with the added benefit of providing faithful model explanations via prototypes.", "Our pilot human evaluation study shows that the additional input provided by PROTOTEX contains relevant information for the task and can improve annotation performance, provided sufficient model accuracy.", "We further demonstrate that explanations help non-expert users better understand and simulate model predictions.", "For annotation, we source participants from Amazon Mechanical Turk only within the United States, paying $10/hour based on average task time.", "We did not reject any work but exclude data from participants who failed an attention check.", "We thank the reviewers for their valuable feedback, the online workers who participated in our study and provided annotations, and the Texas Advanced Computing Center (TACC) at UT Austin for its computational resources.", "This research was supported in part by NSF grants IIS-1850153 and IIS-2107524, as well as by Wipro, the Knight Foundation, the Micron 
Foundation, and by Good Systems, a UT Austin Grand Challenge to develop responsible AI technologies.", "The statements made herein are solely the opinions of the authors and do not reflect the views of the sponsoring agencies." ]
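The class-level masking and instance-normalization steps described in the interleaved training procedure above can be sketched as follows. This is an illustrative NumPy sketch under assumed array shapes, not the released PROTOTEX code.

```python
import numpy as np

def class_level_masking(d, example_labels, prototype_labels):
    """Set distances between examples and prototypes of different
    classes to infinity, so each class clusters separately."""
    d = d.copy()
    d[example_labels[:, None] != prototype_labels[None, :]] = np.inf
    return d

def instance_normalize(d, eps=1e-8):
    """Normalize each example's finite distances, so that no single
    training example dominates every prototype (encouraging the
    prototype segregation discussed in the paper)."""
    finite = np.where(np.isinf(d), np.nan, d)
    mu = np.nanmean(finite, axis=1, keepdims=True)
    sd = np.nanstd(finite, axis=1, keepdims=True)
    return (d - mu) / (sd + eps)
```

Masked (infinite) entries stay infinite after normalization, so cross-class example-prototype pairs never win a nearest-neighbor comparison in either training loop.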
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "result", "method", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "other", "other", "other" ]
[ "Recent advances in natural language processing have enabled powerful privacy-invasive authorship attribution.", "To counter authorship attribution, researchers have proposed a variety of rule-based and learning-based text obfuscation approaches.", "However, existing authorship obfuscation approaches do not consider the adversarial threat model.", "Specifically, they are not evaluated against adversarially trained authorship attributors that are aware of potential obfuscation.", "To fill this gap, we investigate the problem of adversarial authorship attribution for deobfuscation.", "We show that adversarially trained authorship attributors are able to degrade the effectiveness of existing obfuscators from 20-30% to 5-10%.", "We also evaluate the effectiveness of adversarial training when the attributor makes incorrect assumptions about whether and which obfuscator was used.", "While there is a a clear degradation in attribution accuracy, it is noteworthy that this degradation is still at or above the attribution accuracy of the attributor that is not adversarially trained at all.", "Our results underline the need for stronger obfuscation approaches that are resistant to deobfuscation.", "Recent advances in natural language processing have enabled powerful attribution systems 1 that are capable of inferring author identity by analyzing text style alone (Abbasi and Chen, 2008; Narayanan et al., 2012; Overdorf and Greenstadt, 2016; Stolerman et al., 2013; Ruder et al., 2016).", "There have been several recent attempts to attribute the authorship of anonymously published text using This paper is third in the series.", "See (Mahmood et al., 2019) and (Mahmood et al., 2020) for the first two papers.", "Our code and data are available at: https://github.", "such advanced authorship attribution approaches.", "2 This poses a serious threat to privacy-conscious individuals, especially human rights activists and journalists who seek anonymity for safety.", "Researchers have 
started to explore text obfuscation as a countermeasure to evade privacy-invasive authorship attribution.", "Anonymouth (McDonald et al., 2012; Brennan et al., 2012) was proposed to identify words or phrases that are most revealing of author identity so that these could be manually changed by users seeking anonymity.", "Since it can be challenging for users to manually make such changes, follow-up work proposed rule-based text obfuscators that can automatically manipulate certain text features (e.g., spellings or synonyms) (McDonald et al., 2013; Almishari et al., 2014; Keswani et al., 2016; Karadzhov et al., 2017; Castro-Castro et al., 2017; Mansoorizadeh et al., 2016; Kacmarcik and Gamon, 2006; Kingma and Welling, 2018).", "Since then, more sophisticated learning-based text obfuscators have been proposed that automatically manipulate text to evade state-of-the-art authorship attribution approaches (Karadzhov et al., 2017; Shetty et al., 2018; Li et al., 2018; Mahmood et al., 2019; Gröndahl and Asokan, 2020).", "In the arms race between authorship attribution and authorship obfuscation, it is important that both attribution and obfuscation consider the adversarial threat model (Potthast et al., 2018).", "While recent work has focused on developing authorship obfuscators that can evade state-of-the-art authorship attribution approaches, there is little work on developing authorship attribution approaches that can work against state-of-the-art authorship obfuscators.", "Existing authorship attributors are primarily designed for the non-adversarial threat model and only evaluated against non-obfuscated documents.", "Thus, it is not surprising that they can be readily evaded by state-of-the-art authorship obfuscators (Karadzhov et al., 2017; Shetty et al., 2018; Li et al., 2018; Mahmood et al., 2019; Gröndahl and Asokan, 2020; see also https://www.nbcchicago.com/news/politics/Science-May-Help-Identify-Opinion-Columnist-492649561.html).", "To fill this gap, we investigate the 
problem of authorship deobfuscation, where the goal is to develop adversarial authorship attribution approaches that are able to attribute obfuscated documents.", "We study the problem of adversarial authorship attribution in the following two settings.", "First, we develop attributors that filter obfuscated documents using obfuscation/obfuscator detectors and then use an authorship attributor that is adversarially trained on obfuscated documents.", "Second, we develop adversarially trained authorship attributors that do not make assumptions about whether and which authorship obfuscator is used.", "The results show that our authorship deobfuscation approaches are able to significantly reduce the adverse impact of obfuscation, which otherwise causes up to 20-30% degradation in attribution accuracy.", "We find that an authorship attributor that is purpose-built for obfuscated documents is able to improve attribution accuracy to within 5% of that without obfuscation.", "We also find that an adversarially trained authorship attributor is able to improve attribution accuracy to within 10% of that without obfuscation.", "Additionally, we evaluate the effectiveness of adversarial training when the attributor makes incorrect assumptions about whether and which obfuscator is used.", "We find that these erroneous assumptions degrade accuracy by up to 20%; however, this degradation is the same as or smaller than when the attributor is not adversarially trained, which can degrade accuracy by up to 32%.", "Ethics Statement: We acknowledge that authorship deobfuscation in itself is detrimental to privacy.", "Our goal is to highlight a major limitation of prior work on authorship obfuscation under the adversarial threat model.", "We expect our work to foster further research into new authorship obfuscation approaches that are resistant to deobfuscation.", "Authorship attribution is the task of identifying the correct author of a document given a range of possible authors.", "It has been a long-standing 
topic, and researchers have developed a wide range of solutions to the problem.", "Earlier researchers focused more on analysis based on writing-style features.", "These include the distribution of word counts and basic Bayesian methods (Mosteller and Wallace, 1963), different types of writing-style features (lexical, syntactic, structural, and content-specific) (Zheng et al., 2006), and authors' choices of synonyms (Clark and Hannon, 2007).", "Other researchers combined machine learning and deep learning methods with stylometric features.", "Abbasi and Chen (2008) combine their rich feature set, Writeprints, with an SVM.", "Brennan et al. (2012) improve Writeprints to reduce the computational load required of the feature set.", "Finally, more recent research focuses on fine-tuning pre-trained models since they do not require predefined feature sets.", "Ruder et al. (2016) tackle authorship attribution with a CNN, while Howard and Ruder (2018) introduce Universal Language Model Fine-tuning (ULMFiT), which shows strong performance in attribution.", "To the best of our knowledge, prior work lacks approaches for adversarial authorship deobfuscation.", "Prior work has shown that existing authorship attributors do not perform well against obfuscators.", "Brennan et al. 
(2012) present a manual obfuscation experiment that causes large accuracy degradation.", "Since this obfuscation experiment, much has been done in the area of authorship text obfuscation (Rao and Rohatgi, 2000; Brennan et al., 2012; McDonald et al., 2012, 2013; Karadzhov et al., 2017; Castro et al., 2017; Mahmood et al., 2019; Gröndahl and Asokan, 2020; Bo et al., 2019).", "We focus specifically on the state-of-the-art obfuscators Mutant-X (Mahmood et al., 2019) and DS-PAN (Castro et al., 2017) in our research.", "Other obfuscation methods are similarly vulnerable to adversarial training, as reinforced in (Gröndahl and Asokan, 2020).", "Our proposed authorship attributor leverages adversarial training to attribute documents regardless of obfuscation.", "First described in (Goodfellow et al., 2014), adversarial training uses text produced by an adversary to train a model to be more robust.", "Adversarial training has seen success in other text domains, including strengthening word embeddings (Miyato et al., 2016), better classification of cross-lingual texts (Dong et al., 2020), and attacking classifiers (Behjati et al., 2019).", "We start by describing the threat model for the authorship deobfuscation attack.", "There is an arms race between an attacker (who desires to identify/attribute the author of a given document) and a defender (an author who desires privacy and therefore uses an obfuscator to protect their identity).", "Figure 1 illustrates the expected workflow between the defender and the attacker.", "The defender uses an obfuscator before publishing the documents, and the attacker employs an obfuscation and/or obfuscator detector as well as an adversarially trained attributor for deobfuscation.", "Defender.", "The goal of the defender is to obfuscate a document so that it cannot be attributed to the author.", "The obfuscator takes as input an original document and obfuscates it to produce an obfuscated version that is expected to evade authorship attribution.", 
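The two adversarial training regimes in this threat model (a single known obfuscator vs. a pool of candidate obfuscators, sampled equally to hold the training size constant) can be sketched as follows. The function and variable names here are hypothetical, chosen only for illustration.

```python
def build_adversarial_training_set(obfuscated_by_tool, tools, total_size):
    """Assemble obfuscated training documents for adversarial training.

    Scenario 1: `tools` contains the one obfuscator known to be used.
    Scenario 2: `tools` contains the whole pool; sample equally from
    each so the total training-set size stays constant.
    """
    per_tool = total_size // len(tools)
    train = []
    for tool in tools:
        train.extend(obfuscated_by_tool[tool][:per_tool])
    return train
```

For example, training against two obfuscators with a budget of 1200 documents draws 600 from each tool's obfuscated output, matching the size-control described later in the experimental setup.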
"Attacker.", "The goal of the attacker is to use an attributor trained on documents from multiple authors to identify the author of a given document.", "The attacker assumes to know the list of potential authors in the traditional closed-world setting.", "We examine two scenarios: First, as shown in Figure 1a, the attacker assumes to know that the document is obfuscated and also the obfuscator used by the defender.", "In this scenario, the attacker is able to access the documents that are produced by the obfuscator and hence train an attributor for obfuscated documents from the obfuscator.", "Second, as shown in Figure 1b, the attacker assumes to know that the document is obfuscated and that there is a pool of available obfuscators, of which one is used by the defender.", "Note that the attacker does not know exactly which obfuscator from the pool was used by the defender.", "Thus, the attacker trains an attributor for documents that are obfuscated by any one of the pool of available obfuscators.", "We use two state-of-the-art text obfuscators .", "Document Simplification (DS-PAN).", "This approach obfuscates documents through rule-based sentence simplification (Castro et al., 2017).", "The transformation rules include lexical transformations, substitutions of contractions or expansions, and eliminations of discourse markers and fragments of text in parenthesis.", "This approach was one of the best performing in the annual PAN competition, a shared CLEF task (Potthast et al., 2017).", "It was also one of the few approaches that achieves \"passable\" and even \"correct\" judgements on the soundness of obfuscated text (i.e., whether the semantics of the original text are preserved) (Hagen et al., 2017).", "We refer to this approach as DS-PAN.", "Mutant-X.", "This approach performs obfuscation using a genetic algorithm based search framework (Mahmood et al., 2019).", "It makes changes to input text based on the attribution probability and semantics iteratively so that 
obfuscation improves at each step.", "It is a fully automated authorship obfuscation approach that outperformed the text obfuscation approaches from PAN (Potthast et al., 2017) and has since been used by other text obfuscation approaches (Gröndahl and Asokan, 2020).", "There are two versions of Mutant-X: Mutant-X writeprintsRFC, which uses Random Forests along with Writeprints-Static features (Brennan et al., 2012); and Mutant-X embeddingCNN, which uses a Convolutional Neural Network (CNN) classifier with word embeddings.", "We use the writeprintsRFC version because it achieves a larger drop in attribution accuracy and better semantic preservation compared to embeddingCNN.", "We describe the design of the authorship attributor and our adversarial training approaches for deobfuscation.", "Authorship Attributor.", "We use writeprintsRFC as the classifier for authorship attribution.", "More specifically, we use the Writeprints-Static feature set (Brennan et al., 2012), which includes lexical features on different levels, such as the word level (total number of words) and letter level (letter frequency), as well as syntactic features such as the frequency of function words and part-of-speech tags.", "It is one of the most widely used stylometric feature sets and has consistently achieved high accuracy on different datasets and author sets while maintaining a low computational cost.", "We then use these features to train an ensemble random forest classifier.", "Adversarial Training.", "The basic idea of adversarial training is to include perturbed/obfuscated inputs in the training set to improve the model's resistance towards such adversarially obfuscated inputs (Goodfellow et al., 2014).", "It has been widely used in various domains, including text classification.", "In our case, obfuscated texts are texts that vary slightly from the original texts, and these serve as adversarial examples.", "We examine how using these adversarial 
examples as training data influences the attributor's performance and whether it adds resilience against obfuscation.", "Based on our two scenarios described in Section 3.1 and shown in Figure 1, we propose two ways of adversarial training.", "For both cases, original texts from the list of possible authors are selected and prepared for obfuscation.", "For scenario 1, we train the attributor using documents obfuscated by a known obfuscator.", "For scenario 2, since the attacker is not assumed to know the specific obfuscator used by the defender, we train the attributor using documents obfuscated by the pool of available obfuscators.", "We describe the dataset, evaluation metrics, and experimental design to assess the effectiveness of our adversarial authorship attribution approaches for deobfuscation.", "Dataset.", "Following previous research (Mahmood et al., 2019), we examine a publicly available dataset for evaluation of our methodology.", "The Blog Authorship Corpus (Schler et al., 2006) contains over 600,000 blog posts from blogger.com.", "These posts span 19,320 unique authors.", "Previous research (Narayanan et al., 2012) found that authorship attribution gets harder when more authors are included.", "Based on the author selection in (Mahmood et al., 2019), we select a subset of 15 authors, each with 100 documents (compared to their 5 and 10 authors), for a more precise evaluation.", "(Figure: workflow from input documents through obfuscators and the writeprintsRFC attributor to attribution results.)", "These 1500 documents are divided into an 80-20% split for training and testing, respectively.", "Specifically, 80 documents from each author are used in the training set while the remaining 20 documents are used in the test set.", "As shown in Figure 2, we train on various combinations of obfuscated documents.", "These documents are obfuscated by the obfuscators described in Section 3.2.", "When an attributor-dependent obfuscator (e.g., 
Mutant-X (Mahmood et al., 2019)) is used, the attributor will have access to the same training documents used to train the obfuscator.", "Otherwise, the attributor is not assumed to have access to the attributor used internally by the obfuscator.", "To control for training size, when more than one obfuscator is used, we sample equal amounts of documents from each set of obfuscated documents.", "For example, if we train against 2 obfuscators, then 600 documents are sampled from each set of respective obfuscated documents to get a training set of size 1200.", "To calibrate the obfuscated texts, we use the METEOR score (Banerjee and Lavie, 2005) to evaluate the soundness of documents.", "The score for Mutant-X ranges from 0.3 to 0.7 (mean=0.46), and the score for DS-PAN ranges from 0.24 to 0.57 (mean=0.38), which are comparable to previous studies (Mahmood et al., 2019).", "An in-depth analysis of the METEOR score is reported in Appendix A.", "Metric.", "For this 15-class problem, we calculate the accuracy as the number of correctly attributed test documents divided by the total number of test documents.", "Attack Scenarios.", "Figure 3 illustrates the flow of our experimental evaluation under different attack scenarios.", "0. Baseline: For the baseline model, we assume a world with no obfuscation.", "The attacker is trained on original documents and is deployed on original documents.", "1. Obfuscation-unaware-attacker: The first case we examine is when the defender actively seeks to hide author identity.", "Thus, the defender gains an advantage by obfuscating documents using either Mutant-X or DS-PAN in order to bypass the attacker.", "The attacker, however, remains unaware of obfuscation and trains the attributor only on original documents.", "2. 
Obfuscation-aware-attacker with obfuscation detector: Next, we give knowledge of obfuscation to the attacker by introducing an obfuscation detector into the system.", "Previous research (Mahmood et al., 2020) shows that texts generated by existing obfuscators can be detected as obfuscated with high accuracy.", "The device for this type of detection is called an obfuscation detector.", "Hence, in this scenario we ask whether there is any benefit to the attacker if the text is identified as obfuscated before attribution.", "Since the attacker does not know which obfuscator was used by the defender, the attributor is trained on the combination of documents generated from DS-PAN and from Mutant-X.", "The defender is the same as in the previous scenario, i.e., it uses one of the obfuscators to generate documents.", "3. Obfuscator-aware-attacker with obfuscator detector: We give additional information to the attacker.", "Previous research has shown that embedding watermarks and fingerprints that reveal the identity of the model into deep neural networks is a valid approach to protect the rights of shared trained models (Uchida et al., 2017; Zhang et al., 2018).", "Hence, it is reasonable to assume that there will be methods in the future to identify the watermarks of specific deep neural networks.", "Here, we propose the concept of an obfuscator detector, which can detect the specific obfuscator used.", "In this case, the attacker's attributor is always trained on the documents generated by the same obfuscator as the defender: either documents generated from DS-PAN or from Mutant-X.", "2i.", "Obfuscation-aware-attacker with incorrect obfuscation detector: Here we ask the question: what happens in scenario 2 if the obfuscation detector makes errors?", "The specific error addressed is that the detector classifies the text as obfuscated whereas it is actually an original.", "Under this condition, the attacker's attributor is still trained on the combination of documents generated 
from DS-PAN and from Mutant-X.", "But the defender now presents an original document.", "3i.", "Obfuscator-aware-attacker with incorrect obfuscator detector: When the obfuscator detector classifies incorrectly, it assumes that the defender uses a specific obfuscator when it actually uses a different one.", "The attacker's attributor is trained on the documents generated by one of the obfuscators: either documents generated from DS-PAN or from Mutant-X.", "However, the defender uses a different obfuscator than the attacker to generate the documents.", "4. Obfuscator-aware-attacker that does not rely on an obfuscator detector or obfuscation detector: Since the previous processes require the proposed obfuscation and obfuscator detectors, they are less efficient.", "Hence, a simpler, more efficient solution is to train on all the documents at once.", "In this simplified version, the attacker's attributor is trained on the combination of original documents, documents generated from DS-PAN, and documents generated from Mutant-X.", "Since this is the combined condition, the defender may or may not use an obfuscator, and will choose from the two possible obfuscators to generate documents.", "In this section, we present the results following the progression of the various attack scenarios shown in Figure 3.", "5.1 Major Scenarios", "0. Baseline: The original authorship attributor has an accuracy of 76.7% when trained on the original documents and tested on original documents.", "To be considered effective, the attribution accuracy should exceed 6.7%, which is the accuracy of attributing the 15 authors randomly.", "1. 
Obfuscation-unaware-attacker: Based on the first row of Table 1, the result shows that the attribution accuracy drops from 76.7% to 50.7%, a decrease of 26%, when tested on DS-PAN obfuscated documents, while the accuracy when testing on Mutant-X obfuscated documents drops from 76.7% to 44.3%.", "The average drop in accuracy is from 76.7% to 47.5%, which is 29.2%.", "These results confirm, consistent with previous work on the performance of the obfuscators (Karadzhov et al., 2017; Mahmood et al., 2019), that the DS-PAN and Mutant-X obfuscators can successfully decrease the attribution accuracy of the original attributor.", "2. Obfuscation-aware-attacker with obfuscation detector: The second row of Table 1 shows that attribution accuracy increases by 13.2% from 50.7% to 63.9% when tested on DS-PAN documents, and increases by 24.7% from 44.3% to 69% when tested on Mutant-X documents.", "The average accuracy is 66.4%, which increases from the previous 47.5% by about 19%.", "While the performance is still not comparable to the baseline results, the increase in accuracy over the previous scenario is significant, which suggests that the obfuscation detector benefits performance against obfuscated documents.", "3. 
Obfuscator-aware-attacker with obfuscator detector: As shown in the third and fourth rows of Table 1, when trained only on DS-PAN documents, the accuracy tested on DS-PAN is 68.6%, an increase of 17.9% over the first scenario; when trained only on Mutant-X documents, the accuracy tested on Mutant-X is 75.7%, an increase of 31.4%.", "The average test accuracy is 71.1%, which increases by about 5% compared to the 66.4% in the previous case.", "From the results, we can see that having an obfuscator detector as well as an obfuscation detector is the most beneficial for improving attribution accuracy on obfuscated texts.", "Although obfuscation/obfuscator detectors are quite accurate, they are not perfect.", "Hence, we test the success of the attacker when the obfuscation detector and obfuscator detector are incorrect.", "2i.", "Obfuscation-aware-attacker with incorrect obfuscation detector: As shown in the first column of row four of Table 2, the attribution accuracy decreases by 8.4% from the baseline 76.7% to 68.3%, but a higher accuracy is maintained than the average of Attack Scenario 2 (66.4%). The result shows that when the obfuscation detector produces wrong results, performance is affected, but still stays at a relatively high level.", "Thus, having an obfuscation detector is generally good for the attacker, with little cost.", "3i.", "Obfuscator-aware-attacker with incorrect obfuscator detector: From the second and third rows of Table 2, we see that when the attacker is trained only on DS-PAN documents, the accuracy tested on Mutant-X is 57.3%, a drop in performance of 18.4% compared to training only on Mutant-X documents (75.7%).", "When the attacker is trained only on Mutant-X documents, the accuracy tested on DS-PAN is 48.5%, a drop in performance of 20.1% compared to training only on DS-PAN documents (68.6%).", "The average test accuracy is 52.9%, which is lower than training on the same obfuscator, but higher than the results in Scenario 1 
of Section 5.1 (50.7% and 44.3%).", "When the obfuscator detector gives incorrect results, the attribution accuracy will not achieve its best performance, but the result is still higher than training only on original documents.", "Hence, training on obfuscated documents always tends to benefit attribution accuracy.", "Here the attacker simply uses originals and obfuscated documents from all available obfuscators for adversarial training of the attributor.", "4. Obfuscator-aware-attacker that does not rely on an obfuscator detector or obfuscation detector: This result is shown in the last row of Table 2.", "Attribution accuracy when tested on original documents drops from 76.7% to 66.3%, but increases by 10.5% from 50.7% to 61.2% when tested on DS-PAN, and increases by 24.5% from 44.3% to 68.8% when tested on Mutant-X.", "The average accuracy is 65%, which increases from the average of the former three, 57.2%, by about 8%.", "While the attacker does not know whether the document is obfuscated, or by which obfuscator, it is still able to achieve a high boost in attribution accuracy through adversarial training.", "Therefore, although the previous processes can achieve higher performance, training on a combination of these documents is a valid approach when time and resources are limited.", "Next, we look more closely into the results from adversarial training to better understand them.", "Figure 4 presents the confusion matrices produced from DS-PAN obfuscated documents tested on Attack Scenarios 1, 2 and 3, respectively.", "Rows represent the Original Authors, while the columns represent the Predicted Authors.", "The values in the matrices are the percentage of the original documents that are classified as a specific author.", "Moving from scenario 1 to 3, we see an increase in color density and percentage on the diagonal, which signifies the general increase in accuracy as the training documents become more specific.", "Consistent with the above, the color in the 
non-diagonal areas becoming lighter also indicates a reduction in classification errors.", "At the author level, we observe that almost all of the authors show increases in accuracy in the diagonal cells across the three scenarios.", "This shows that adversarial training is effective even on authors with different styles.", "Looking more closely at each author, we find that Author 9 is the easiest to classify: performance is always at 100%.", "Author 6, on the other hand, is relatively hard to attribute.", "The best performance for Author 6 is only 35%, from the most effective Attack Scenario 3.", "Figure 6 presents another view of performance.", "It shows the percentage of errors made for each author out of all the errors in the three scenarios combined (note: the sum of all errors in the figure is 100%).", "Thus, the errors made for Author 1 under Scenario 1 are 3.18% of the total errors across the three scenarios.", "We observe that the color is generally darker in Scenario 1, while it gradually lightens in Scenario 2 and then in Scenario 3.", "
Again, this indicates the benefit of having more specific training data.", "Looking more closely within each scenario, we see that the attributor of Attack Scenario 1 tends to misclassify Authors 5 and 8 the most.", "But the attributors for Scenario 2 and Scenario 3 learn more effectively for these two authors, thereby reducing mistakes.", "For Attack Scenario 3, the most misclassified author is Author 6, accounting for 3.76% of all errors.", "But this percentage is still an improvement over the 4.34% in the previous two scenarios.", "Motivated by the above observations, we next investigate shifts in performance for a specific author.", "We assign labels to the 15 authors in the dataset and select Original Author 15 for more detailed analysis.", "The reason we choose Author 15 is that its accuracy is among the ones that increase the most, from 45% to 80%.", "In order to find the reasons behind this increase, we perform PCA on all of the DS-PAN documents whose original author is Author 15.", "We use the Writeprints-Static feature set, which has a total of 555 features.", "In order to preserve the most significant features for attribution, we select the 25 most important features from the original writeprintsRFC and process them through PCA so that we can visualize the features in 3-dimensional graphs.", "As shown in the graphs in Figure 5, each dot represents a document.", "The green dots are documents that are attributed correctly, while the red dots are attributed incorrectly.", "In Figure 5a, the incorrectly attributed documents are mainly gathered in a cluster.", "This suggests that the attributor has trouble discriminating between documents that are similar to each other.", "But as we go from left to right, the documents in the cluster are also gradually attributed correctly.", "The trend shows that the attributor is getting better at distinguishing between documents that are similar to each other.", "Hence, we can infer that adversarial training improves 
attribution accuracy by discriminating between the ones that are more similar to each other.", "In Attack Scenarios 2, 3, and 4, the test sets using DS-PAN for obfuscation yield worse attribution accuracy than those using Mutant-X.", "Figure 4: Confusion matrices of different attack scenarios ((a) Attack Scenario 1, (b) Attack Scenario 2, (c) Attack Scenario 3).", "Figure 6: Percentage of misclassified documents for each author across attack scenarios.", "Our analysis of obfuscated documents showed that DS-PAN makes both a greater number of changes as well as more significant changes as compared to Mutant-X.", "Thus, we surmise that DS-PAN results in larger degradation in attribution accuracy because the attacker's training set contains text that is less similar to the original text.", "However, the changes made by DS-PAN also have a side effect in that they lower the soundness of obfuscated text, as reflected by lower METEOR scores.", "The mean METEOR score for DS-PAN is 0.38 as compared to 0.46 for Mutant-X.", "A more detailed analysis of the METEOR score and semantic similarity between obfuscated and original texts is 
reported in Appendix A.", "6.4 Insights into Adversarial Training", "The performance gain of adversarial training comes from a \"noisy\" training dataset comprising obfuscated documents as well as knowledge about the obfuscator.", "To disentangle these two factors, we compare the accuracy improvements in the second and third rows of Table 2 against the Mutant-X obfuscated test documents.", "We note that the improvement in attribution accuracy is 13% when DS-PAN obfuscated documents are used for training.", "The improvement in attribution accuracy is a further 18% (31% overall) when Mutant-X obfuscated documents are used for training.", "This difference (13% vs. 18%) indicates that although having a noisy dataset helps, knowledge of the specific obfuscator is likely more crucial to improving attribution performance.", "In this work, we explored the novel problem of adversarial authorship attribution for deobfuscation.", "We demonstrated that adversarial training is able to significantly reduce the adverse impact of existing text obfuscators on authorship attribution accuracy.", "We found that an adversarially trained authorship attributor improves attribution accuracy to within 5-10% of its accuracy without obfuscation.", "While an adversarially trained authorship attributor achieves its best accuracy when trained on documents obfuscated by the respective obfuscator, we found that it achieves reasonable accuracy even when trained on documents obfuscated by a pool of obfuscators.", "When the adversarially trained attributor makes erroneous assumptions about the obfuscator used to obfuscate documents, we note a degradation in attribution accuracy.", "It is noteworthy, however, that the resulting accuracy is still similar to or better than the attribution accuracy of the baseline attributor that is not adversarially trained.", "Our results shed light on the future of the ensuing arms race between obfuscators and attributors.", "Most notably, we find that the effectiveness 
of adversarial training is somewhat limited if obfuscators continue to employ new and improved methods that are not available to attributors for adversarial training.", "Therefore, it is important to continue the development of new and improved text obfuscation approaches that are resistant to deobfuscation (Bevendorff et al., 2019; Bo et al., 2019; Gröndahl and Asokan, 2020; Hlavcheva et al., 2021).", "On the other hand, recent work on understanding and improving the transferability of adversarial attacks can inform the development of better adversarial attributors that might work well even for unknown obfuscators (Tramèr et al., 2017; Zheng et al., 2020; He et al., 2021; Mireshghallah and Berg-Kirkpatrick, 2021).", "Finally, our experiments were limited to the closed-world setting, where the universe of potential authors is assumed to be known by the attributor.", "Further research is needed to investigate whether (and how much) adversarial algorithms are effective in the open-world setting." ]
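The scenario-dependent construction of the attacker's training set described above (originals only; documents from a single known obfuscator; equal samples from a pool of obfuscators; or originals plus all obfuscators) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's code: the function name `build_training_set` and the scenario encoding are hypothetical.

```python
import random

def build_training_set(originals, obfuscated, scenario, target_size=None, seed=0):
    """Assemble the attributor's training documents for one attack scenario.

    originals:  list of (document, author) pairs
    obfuscated: dict mapping obfuscator name -> list of (document, author) pairs
    scenario:   "baseline"      -> originals only (obfuscation-unaware attacker)
                "known:<name>"  -> documents from one known obfuscator
                "pool"          -> equal samples from every available obfuscator
                "combined"      -> originals plus equal samples from every obfuscator
    """
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    if scenario == "baseline":
        return list(originals)
    if scenario.startswith("known:"):
        return list(obfuscated[scenario.split(":", 1)[1]])
    # Equal-size sampling from each obfuscator's pool, mirroring the paper's
    # example: 600 documents from each of 2 obfuscators -> 1200 total.
    pools = list(obfuscated.values())
    size = target_size if target_size is not None else min(len(p) for p in pools) * len(pools)
    per_pool = size // len(pools)
    sampled = [doc for pool in pools for doc in rng.sample(pool, per_pool)]
    return sampled + (list(originals) if scenario == "combined" else [])
```

For instance, with two obfuscator pools and `target_size=1200`, the "pool" scenario samples 600 documents from each pool, and the "combined" scenario additionally includes the original documents.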
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "result", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "objective", "method", "objective", "objective", "result", "objective", "result", "method", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "method", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "result", "result", "method", "abstain", "result", "objective", "abstain", "abstain", "method", "abstain" ]
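The evaluation setup described earlier (15 authors with 100 documents each, divided into an 80-20% split per author) can be sketched as follows. This is an illustrative sketch under those stated assumptions; `per_author_split` is a hypothetical helper, not code from the paper.

```python
def per_author_split(docs_by_author, train_frac=0.8):
    """Split each author's documents into train/test sets, preserving the
    per-author ratio (e.g., 80 training / 20 test out of 100 documents)."""
    train, test = [], []
    for author in sorted(docs_by_author):
        docs = docs_by_author[author]
        cut = int(len(docs) * train_frac)  # first 80% of this author's docs -> training
        train += [(doc, author) for doc in docs[:cut]]
        test += [(doc, author) for doc in docs[cut:]]
    return train, test
```

With 15 authors and 100 documents each, this yields 1200 training and 300 test documents, matching the dataset description; the 1/15 ≈ 6.7% random-guessing accuracy quoted as the effectiveness floor follows directly from the 15-class setup.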
[ "Abstract", "k-Nearest-Neighbor Machine Translation (kNN-MT) has been recently proposed as a non-parametric solution for domain adaptation in neural machine translation (NMT).", "It aims to alleviate the performance degradation of advanced MT systems in translating out-of-domain sentences by coordinating with an additional token-level feature-based retrieval module constructed from in-domain data.", "Previous studies (Khandelwal et al., 2021; Zheng et al., 2021a) have already demonstrated that non-parametric NMT is even superior to models fine-tuned on out-of-domain data.", "In spite of this success, kNN retrieval comes at the expense of high latency, in particular for large datastores.", "To make it practical, in this paper, we explore a more efficient kNN-MT and propose to use clustering to improve the retrieval efficiency.", "Concretely, we first propose a cluster-based Compact Network for feature reduction, trained in a contrastive learning manner, to compress context features into 90+% lower-dimensional vectors.", "We then suggest a cluster-based pruning solution to filter out 10%~40% redundant nodes in large datastores while retaining translation quality.", "Our proposed methods achieve better or comparable performance while reducing up to 57% inference latency against the advanced non-parametric MT model on several machine translation benchmarks.", "Experimental results indicate that the proposed methods maintain the most useful information of the original datastore and that the Compact Network shows good generalization on unseen domains.", "Code is available at https://github.com/tjunlp-lab/PCKMT.", "Recently, non-parametric approaches (Khandelwal et al., 2021; Zheng et al., 2021a,b; Jiang et al., 2021) have been successfully applied to neural", "machine translation (NMT) for domain adaptation with retrieval pipelines.", "Given an advanced MT model, they generally involve two steps: it first builds a cached memory, usually called a datastore, in advance 
by extracting the context representations of the penultimate layer of the given NMT model corresponding to each target token from in-domain data.", "At inference, it retrieves the k nearest neighbors of the context representation for each generated token from the constructed datastore and then integrates external kNN translation probabilities derived from these retrievals to adjust the translation.", "The accessibility of any provided datastore during translation makes these approaches interpretable.", "Meanwhile, the reliability of these approaches owes much to the datastore quality.", "In spite of significant translation improvements, analyses of datastore behavior have not been fully explored yet.", "We empirically observe that the construction of the datastore is not optimal for retrieval from two aspects: retrieval latency and semantic distribution.", "Retrieval Latency.", "As shown in Table 1, we compare both translation performance and speed between a pre-trained NMT model (Ng et al., 2019) with 270M parameters and the adaptive kNN-MT (Zheng et al., 2021a) system derived from the former on the same hardware (a P100-16GB GPU with an 18-core Intel Xeon Gold 6240 CPU @ 2.60GHz), where the latter is the most advanced retrieval-based NMT model so far.", "Figure 1: t-SNE visualization of IT domain features.", "It indicates that the heavy computation of retrieval within a datastore causes increased latency and makes it less practical in real-time scenarios.", "To address this problem, we propose an efficient pruning strategy to decrease the datastore redundancy so as to deal with the trade-off between speed and quality.", "Semantic Distribution.", "For robust token-to-token retrieval, tokens with similar context are expected to be distributed close to each other to form separable and compact semantic clusters; otherwise, semantic noise may hurt the retrieval effectiveness.", "To explore the potential of k-nearest retrieval, we visualize the feature 
distribution of a datastore built on the IT-domain corpus (Koehn and Knowles, 2017) in Figure 1.", "For the datastore constructed in the traditional way, we have 2 important findings.", "One is that the majority of tokens are distributed in the overlapped area regardless of frequency.", "The other is that even though the overall distribution shows a clustering effect, only a few small clusters are correctly classified with respect to frequency.", "Intuitively, these findings will directly and negatively affect the distance-based retrieval.", "Moreover, as (Zhang et al., 2021) suggest, the dimension is highly related to retrieval speed.", "Preliminary studies on kNN-LM (He et al., 2021) indicate that traditional feature reduction algorithms can only maintain the original performance until the context feature dimension is reduced to a minimum required size (e.g., for feature dimension 1024, PCA requires at least 512).", "For NMT models, it is still challenging to reduce the feature dimension to 10% of its size (e.g., from 1024 to <100).", "1 The speed comparison is based on the implementation released at https://github.com/zhengxxn/adaptive-knn-mt.", "To tackle this problem, we design a cluster-based training strategy where an external light-weight feature reduction network is learnt in a contrastive training manner to maximize the margin between context semantic clusters.", "In our experiments, we can even cut out 93.75% of the original feature size.", "We propose a cluster-based Compact Network to reduce the dimension of the semantic representations and improve the translation performance by making different tokens separable to refine the retrieval results.", "We further propose a cluster-based pruning strategy by filtering redundant representations in the datastore so that our proposed methods can significantly decrease the translation latency during inference.", "Experiments on multi-domain machine translation benchmarks indicate that our proposed methods are superior to existing 
retrieval-based machine translation systems in terms of both speed and quality.", "In this section, we will briefly introduce the background of the adaptive kNN-MT (Zheng et al., 2021a).", "Adaptive kNN-MT is derived from kNN-MT (Khandelwal et al., 2021) by inserting a lightweight Meta-k Network that fuses kNN retrievals with various k to alleviate the possible noise induced by a single k.", "Formally, it is formulated as two steps: target-side datastore creation and Meta-k Network predictions.", "Target-side Datastore Creation.", "The datastore consists of a set of key-value pairs.", "Given a bilingual sentence pair (s, t) in a corpus (S, T), a pre-trained general-domain NMT model autoregressively extracts the context representation h_i of the i-th target token conditioned on both source and target context (s, t_{<i}), denoted as h_i = f(s, t_{<i}).", "The datastore is finally constructed by taking h_i as keys and t_i as values: (K, V) = ∪_{(s,t) ∈ (S,T)} {(h_i, t_i), ∀ t_i ∈ t}.", "Meta-k Network Predictions.", "Given the constructed datastore, it considers a set of different k's that are smaller than an upper bound K.", "The standard setting for k is Q = {0} ∪ {k_r ∈ N | log_2 k_r ∈ N, k_r ≤ K}.", "The K nearest neighbors of the current context query h_i are first retrieved from the datastore at the i-th decoding step.", "Then the square of the l2 distance from h_i to each neighbor (h_j, v_j) is denoted as d_j = ‖h_j − h_i‖².", "And the number of distinct values in the top j neighbors is denoted as c_j.", "The normalized weights of each available k are computed as p(k) = softmax(f([d_1, ..., d_K; c_1, ..., c_K])), where f denotes the Meta-k Network.", "For k_r ∈ Q, the word prediction probability over the vocabulary w.r.t. each neighbor is computed via the Gaussian kernel function: p^{k_r}_{NN}(y_i | x, y_{<i}) ∝ Σ_{{(h_j, v_j) | j ≤ k_r, j ∈ N}} 1(y_i = v_j) · exp(−‖h_j − h_i‖² / T), where T denotes the temperature hyper-parameter.", 
"Note that a validation set is usually required to train the Meta-k Network before predicting on test sets.", "During training, only the parameters of the Meta-k Network need to be updated.", "As shown in Figure 2, our proposed approach focuses on datastore reconstruction from the perspectives of feature compression and size pruning by utilizing cluster-based signals.", "From Figure 1, we observe that spatially close context representations may have noisy and different semantics.", "During inference, this may lead to unreliable neighbors for retrieval-based NMT (see examples in Appendix D, Case Analysis) due to the entanglements in this noisy context space.", "We hypothesize that the reasons may be three-fold.", "First, the NMT model pre-trained on the general domain lacks target domain-specific knowledge.", "Second, the high-dimensional semantic space is too sparse and may contain some noisy underlying components.", "Third, the likelihood-maximization objective, computed from the logits by dot-product, enforces the alignment of vector directions, which is inconsistent with the expectation of spatial closeness in terms of both direction and length.", "To address these issues, we propose a one-plus-one (f + f') Compact Network on top of the pre-trained NMT model.", "The first module transforms the coarse-grained semantics of the pre-trained NMT model into fine-grained semantic clusters.", "The second module is used to calculate our designed loss function.", "To obtain coarse-grained semantic clusters, we first follow the method described in Target-side Datastore Creation of Section 2 to create the in-domain datastore.", "Figure 3: The Compact Network illustration.", "For context representations (keys) with the same target token (value), we conduct target-side clustering of the representations, shown as the left clusters in Figure 3.", "We denote the resulting clusters for the same value as the cluster family for the corresponding target token.", "Due to the 
distance-based clustering, it is guaranteed that clusters within each cluster family do not overlap at all.", "However, different cluster families have a large overlapping space according to Figure", "1. Therefore, our main purpose is to construct a transformation that makes the cluster families separable as well.", "The proposed lightweight Compact Network in Figure 3 is designed to fulfill the above purpose and to compress the feature dimension.", "The first module, a two-layer perceptron, is applied for representation compression: f(·) = FFN_2(σ(FFN_1(·))), where σ(·) denotes the Sigmoid function.", "The last layer f′ is attached to transform the compressed representations into classification logits, where the output dimension depends on the number of designed categories.", "Note that the f′ layer is discarded at inference.", "In order to obtain separable cluster families after f, we consider several candidate contrastive regularizations to train the Compact Network.", "Triplet Noise-Contrastive Estimation (NCE).", "For each cluster in one particular cluster family, two semantic representations are randomly sampled, one as the pivot example v and the other as the positive example v+.", "From a cluster in a different cluster family, another semantic representation is randomly selected as the negative example v−.", "Then we conduct NCE (Gutmann and Hyvärinen, 2010) with binary classification on {pivot, positive} and {pivot, negative} to predict which pair belongs to the same cluster.", "Triplet Distance Ranking.", "This is similar to the Triplet NCE.", "The differences are that (1) we remove the f′ layer and (2) the objective is modified into a ranking loss that minimizes the l2 distance between the pivot and positive examples while maximizing the distance between the pivot and negative ones: min_f ‖f(v+) − f(v)‖^2 + 1/‖f(v−) − f(v)‖^2. Word Prediction Loss.", "To compensate for the loss of 
linguistic information that NCE may ignore, the traditional NMT word prediction loss is also used to train the Compact Network.", "In this scenario, the output dimension of f′ is the vocabulary size of the corresponding target language.", "In addition, we find that dynamic pivot selection leads to unstable training, as the compressed representations are forced to update in various directions.", "For each cluster, we replace the dynamic pivot with a static pivot by fixing it as the centroid.", "After training converges, we can construct a new feature-compressed datastore with the output of f, which is used for query retrieval during kNN-MT inference.", "Apart from feature reduction, the number of key-value pairs in the compressed datastore is also crucial for translation latency; hence redundant tokens should be pruned.", "In the literature, phrase-level pruning strategies have proved efficient for statistical machine translation (SMT) (Ling et al., 2012; Zens et al., 2012).", "Each record in the phrase table reflects a similar semantic unit; hence one could prune the records that share similar statistics, e.g., translation quality, translation cost, etc.", "Inspired by SMT, we propose an efficient pruning strategy based on n-gram metrics over the original semantic representation space.", "Intuitively, a key-value entry in the datastore is redundant if there are other key-value pairs (with the same value) such that the difference between their perplexity (PPL) values is smaller than a given threshold ε (an example is shown in Figure 4).", "To make it concrete, we describe the translation cost as follows.", "For a given n-gram phrase (t_{i−n+1}, t_{i−n+2}, ..., t_i) in the translation with the corresponding token-level translation probability (Figure 4 shows an example of the redundant bigram \"a man\" with similar translation costs)", "p(t_j | s, t_{<j}), ∀ j ∈ {i, i−1, ..., i−n+1}, we measure the 
translation cost of its last token (the desired value in the datastore) as the perplexity (PPL) of the n-gram phrase.", "However, when n is fixed, n-gram phrases are not always meaningful, because some translations are independent of their previous target-side context (Ling et al., 2012).", "Hence we do not directly adopt the naive PPL as a stable translation cost but truncate it in a heuristic way.", "We search for the minimal PPL among all consecutive subsequences ending with that last token.", "Formally, given a bilingual sentence pair (s, t), we define the translation cost for each target token t_i: c_{t_i} = min_{b ∈ {1, 2, ..., n}} PPL(p(t_{i−b+1} | s, t_{<i−b+1}), ..., p(t_{i−1} | s, t_{<i−1}), p(t_i | s, t_{<i})). Then we can add the translation cost into the feature-compressed datastore.", "For the augmented datastore described above, we apply propagation-based clustering (Ester et al., 1996; Zhang et al., 1996) only upon the translation cost c_{t_i} to obtain cost-similar groups, and partition the semantic representations in accordance with these groups.", "To obtain the pruned datastore, we adopt uniform sampling on each group and collect the samples into a small key-value datastore.", "This algorithm is summarized in Algorithm", "1. 
In brief, our efficient cluster-based k-nearest-neighbor machine translation can be summarized in the following steps.", "We adopt the validation set to train the Meta-k Network while the parameters of the NMT model and the Compact Network are fixed.", "We reconstruct the feature-compressed datastore and prune it into a small datastore using our proposed n-gram pruning algorithm; this pruned datastore is eventually used for testing.", "We carried out a series of experiments to evaluate the proposed non-parametric NMT against the previous advanced counterpart on several translation benchmarks.", "We followed Zheng et al. (2021a) to conduct all experiments on five widely used machine translation benchmarks from distinct domains: IT, Koran, Medical, Law, and Subtitles.", "The first four domains were also used by Zheng et al. (2021a), while the Subtitles dataset contains a large number of target tokens and is hence suitable for exploring our pruning strategy.", "The statistics of these datasets are shown in Table", "2. We tokenized sentences using Moses (https://github.com/moses-smt/mosesdecoder) and split words into subword [Table 2: The statistics of datasets in all experiments. Training sets (sentences/tokens): Koran 222K/0.5M, IT 248K/3.6M, Medical 18K/6.9M, Law 467K/19M, Subtitles 12.4M/154M; test sets: Koran 2K/58K, IT 2K/34K, Medical 2K/57K, Law 2K/81K, Subtitles 2K/25K.]", "units (Sennrich et al., 2016) with the BPE codes provided by Ng et al. (2019).", "We applied the product quantizer with the inverted file system based on Faiss (https://github.com/facebookresearch/faiss) to quantize the datastores and conduct retrieval.", "The hyper-parameters of Faiss are provided in Appendix B. 
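The truncated n-gram translation cost defined above (the minimal PPL over all consecutive subsequences ending at the last token) and the ε-threshold redundancy test can be sketched as follows. This is a toy illustration with hypothetical helper names and made-up token probabilities, not the paper's implementation.

```python
import math

def ngram_ppl(probs):
    """Perplexity of a token span from its token-level probabilities."""
    return math.exp(-sum(math.log(p) for p in probs) / len(probs))

def translation_cost(token_probs, i, n):
    """Truncated cost of token i: minimal PPL over all consecutive
    subsequences of length 1..n that end at token i."""
    return min(
        ngram_ppl(token_probs[i - b + 1 : i + 1])
        for b in range(1, min(n, i + 1) + 1)
    )

def is_redundant(cost_a, cost_b, eps):
    """Two same-value entries are redundant if their costs differ by < eps."""
    return abs(cost_a - cost_b) < eps

probs = [0.9, 0.8, 0.1, 0.7]  # toy p(t_j | s, t_<j) for each target token
costs = [translation_cost(probs, i, n=2) for i in range(len(probs))]
```

Note how the truncation helps: the last token has probability 0.7, but the bigram ending there includes the unlikely 0.1 token, so the minimum keeps the unigram cost 1/0.7 instead of the inflated bigram PPL.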
4.2 Clustering Algorithm Selection The choice of clustering algorithm depends on computational complexity and clustering effectiveness.", "As semantic clusters in a large datastore are vague and the number of clusters is hard to determine a priori, clustering algorithms that fix the number of clusters in advance (e.g., k-Means (Hartigan and Wong, 1979)) are not suitable for datastore partitioning.", "Besides, clustering complexity is not tolerable in practice when it grows to O(N^2) (e.g., Affinity Propagation (Frey and Dueck, 2007)), since N is usually extremely large for a high-quality datastore.", "We eventually chose two classical clustering algorithms for exploration in our experiments: DBSCAN (Ester et al., 1996) and BIRCH (Zhang et al., 1996).", "DBSCAN was applied for clustering datastores with up to 100M nodes, while BIRCH was applied for clustering datastores with 100M+ nodes, for the sake of a computation-and-quality trade-off.", "In our experiments, we adopted the scikit-learn clustering implementations (https://scikit-learn.org/stable/modules/clustering.html).", "4.3 Baselines We adopted the following models as our baselines.", "Base NMT.", "This is the winner model (Vaswani et al., 2017) of the WMT'19 German-English news translation task (http://www.statmt.org/wmt19/) provided by Ng et al. (2019) [Table 3: The BLEU performance comparison of the feature reduction methods on the IT domain. NMT 38.35; adaptive kNN-MT 47.20; +feature-wise PCA 46.84; +weight-wise SVD 45.96; [DY] CKMT+DR 37.10; [DY] CKMT+WP 46.41; [DY] CKMT+NCE 46.58; [DY] CKMT+NCE+DR 37.33; [DY] CKMT+NCE+WP 46.42; [DY] CKMT+NCE+CL 47.48; [ST] CKMT+NCE+CL 47.94; [ST] CKMT+NCE+CL+DR 47.64; [ST] CKMT+NCE+CL+WP 46.88], which is also used in (Zheng", "et al., 2021a).", "It is a Transformer model (Vaswani et al., 2017) with hidden size 1024.", "Adaptive kNN-MT (Zheng et al., 2021a).", "This 
is the benchmark model of our work.", "In our modifications, aiming to reduce the dimension to <10% of its original size, we performed a greedy search over [16, 32, 64, 128] and obtained 64 as the optimal output dimension of f on the IT domain validation set; this setting was then used in all experiments.", "The detailed dimension-related analysis can be found in Appendix A. Similarly, we used grid search and selected bigrams in the clustering-based pruning algorithm.", "All experiments were conducted on a P100-16GB GPU with an 18-core Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz, except for the experiments in", "Subsection 4.5.2, where we used 2 GPU cards to load a larger datastore.", "All translation results were evaluated with case-sensitive detokenized BLEU using SacreBLEU (Post, 2018).", "For simplicity, in this section we refer to the base NMT model equipped with the proposed Compact Network as CKMT, and further equipped with the pruned datastore as PCKMT.", "On the IT domain, we first evaluated the compact layer settings mentioned in Section 3, as well as two traditional feature reduction algorithms: Principal Component Analysis (PCA), used in (He et al., 2021), and Singular Value Decomposition (SVD).", "We applied the PCA solution to learn a feature-wise linear projection, and the SVD solution to learn a matrix-wise projection that decomposes the weight W of the last layer of the base NMT model into three matrices: W_{1024×vocab_size} = S_{1024×64} U_{64×64} V_{64×vocab_size}. Then f can be replaced by an FFN layer with the weight S_{1024×64} U_{64×64} but without bias.", "As shown in Table 3, the best CKMT solution is equipped with the Compact Network trained using NCE+CL with static pivot selection.", "It outperforms the adaptive kNN-MT by 0.74 BLEU.", "Consistent with (He et al., 2021), we find that the 1024-to-64 feature-wise PCA has difficulty maintaining translation performance at such a low dimension.", "Basically, the distance ranking loss causes serious performance degradation.", 
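For concreteness, the triplet distance-ranking objective among the candidate contrastive regularizations of Section 3 might be sketched as below. The identity map stands in for the trained compression FFN f, and the vectors are toy examples; a real run would backpropagate through f rather than just evaluate the loss.

```python
def sq_dist(u, v):
    """Squared L2 distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def triplet_ranking_loss(f, pivot, positive, negative):
    """Ranking objective: pull f(v+) toward f(v) (first term),
    push f(v-) away from f(v) (reciprocal second term)."""
    return sq_dist(f(positive), f(pivot)) + 1.0 / sq_dist(f(negative), f(pivot))

# Identity "compression" as a stand-in for the trained two-layer FFN f.
f = lambda v: v
# A well-separated triplet yields a small loss ...
loss_good = triplet_ranking_loss(f, [0.0, 0.0], [0.1, 0.0], [3.0, 4.0])
# ... while a far positive and near negative yield a large loss.
loss_bad = triplet_ranking_loss(f, [0.0, 0.0], [2.0, 0.0], [0.5, 0.0])
```

The reciprocal term makes the gradient explode when the negative collapses onto the pivot, which is consistent with the observation in the text that the distance-minimization constraint can destabilize training on a small datastore.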
"We assume that the distance minimization constraint is too strict for optimizing a small datastore, since both the direction and the length of a semantic vector have already been optimized.", "Though word prediction (WP) can recover semantic information, its f′ has too many parameters to be optimized on the limited IT domain dataset compared with NCE alone [Table 5: Performance of CKMT* using decreasing rates of data to train the Compact Network at stage I. Rate/datastore size/BLEU: 100% 3.6M 47.94; 80% 2.9M 47.67; 60% 2.2M 47.57; 40% 1.4M 47.29; 20% 0.7M 46.98; 1% 0.04M 46.21].", "Besides, we attribute the improvement obtained by clustering (CL) to the introduced semantic disambiguation.", "Finally, static pivot selection (ST) achieves an improvement of 0.46 BLEU over the dynamic method.", "We refer to the best setting [ST] CKMT+NCE+CL as CKMT*, and report the results against the adaptive kNN-MT on various domains in Table", "4. CKMT* gains an average improvement of 0.70 BLEU over the adaptive kNN-MT, which indicates that our proposed Compact Network refines the retrieval for machine translation.", "Training the Compact Network with Limited Data.", "It is unclear how much data is adequate at training stage I. Hence, we gradually reduce the number of key-value pairs in the datastore used to train the Compact Network, as shown in Table", "5. 
As the number decreases, the performance degrades slowly.", "When we use only 40% of the datastore for training, CKMT still outperforms the adaptive kNN-MT.", "This indicates that our proposed Compact Network is efficient and requires only a small number of key-value pairs to compress the semantic representations with the contrastive loss.", "Cross-Domain Generalization.", "Is there a general Compact Network that is capable of generalizing to different domains?", "If so, we can save the cost of training a unique Compact Network for each target domain.", "To explore this, we trained the Compact Network on a general domain with the large-scale WikiMatrix corpus (Schwenk et al., 2021) and evaluated its behavior on various target domains.", "As the last row of Table 4 shows, it is interesting that the general CKMT* drops only 0.39 BLEU compared with the 4 domain-specific datastores, and it still outperforms the adaptive kNN-MT by 0.31 BLEU.", "Overall, the Compact Network generalizes well across different domains.", "Spatial Pruning by Distance (SP).", "This is a naive distance-based pruning strategy that cuts off nodes with low probability according to the distance from each node to its cluster center.", "Low Translation Probability Pruning (LTP).", "Tokens translated with low probabilities tend to have poor translation quality and are pruned for datastore stability.", "High Translation Probability Pruning (HTP).", "As the kNN probabilities are beneficial for hard-to-translate words that NMT cannot handle, it is preferable to retain the tokens wrongly translated by the base NMT.", "In this sense, tokens predicted with high confidence are pruned.", "Random Pruning (RP).", "We also perform the random pruning strategy alone on the target-side clusters, as in step 2 of Algorithm", "1. 
The results on 4 different domains are shown in Table 6.", "Since the datastore size remains the same (10% pruned) for all pruning methods in Table 6, there is not much difference in retrieval speed among these methods.", "Our cluster-based pruning strategy generally achieves the smallest degradation.", "Though other strategies obtain impressive results on a few domains (this is in comparison to previous studies, e.g., (He et al., 2021), that usually fail to maintain model performance when datastores are pruned to a large extent), e.g., 10% pruned CKMT*+HTP outperforms non-pruned CKMT* by 0.18 BLEU", "on the Koran test set, our cluster-based pruning strategy performs the most stably on average.", "Note that the random pruning strategy is simple yet effective, which coincides with (He et al., 2021).", "However, we find that the in-domain data of the tested domains have limited redundancy, since the average frequency of bigrams is too low (e.g., more than 0.4M unique bigrams were collected from the 3.6M IT domain datastore, so each bigram has no more than 9 occurrences in the datastore on average).", "Therefore, even a 10% pruning rate can lead to about 1 BLEU loss in Table 6.", "We leave reducing datastores with low n-gram redundancy to future work.", "To further explore the potential of the pruning methods on large datastores, we conducted pruning experiments on the Subtitles domain containing 154M keys.", "We also tested the random pruning strategy because it is the second most competitive pruning strategy.", "As Figure 5 illustrates, the proposed PCKMT*+Ours with a pruning rate of 30% can even outperform non-pruned CKMT*.", "As the pruning rate increases, PCKMT*+Ours generally outperforms PCKMT*+RP for the same k.", "The performance of PCKMT*+RP drops seriously (by more than 1 BLEU point) when the pruning rate reaches 50%, whereas PCKMT*+Ours does not see a clear drop until the pruning rate reaches 70%.", "When the pruning rate increases to 80+%, 
PCKMT*+RP even performs worse than the base NMT, but PCKMT*+Ours still outperforms it by a large margin.", "These results suggest that the proposed cluster-based pruning algorithm is effective for datastore reduction.", "In Table 7, we further evaluated the computation cost of CKMT* at the same BLEU performance as the adaptive kNN-MT.", "With the same k and batch size, PCKMT* achieves 27%~57% lower latency compared with the adaptive kNN-MT.", "In addition, we compared our best-performing model with the baselines in Table 8.", "PCKMT (k=8) with a pruning rate of 30% achieves the best performance, obtaining an improvement of 0.36 BLEU and a 1.56x translation speedup over the adaptive kNN-MT.", "Cluster Visualization.", "We visualize the IT domain datastore in Figure 6 to verify our assumption that the Compact Network maps the original semantic representations to a separable distribution with fewer overlaps.", "In this paper, we propose a cluster-based Compact Network for feature reduction in a contrastive learning manner, which reduces the context feature dimension by 90+%, and suggest a cluster-based pruning strategy that prunes 10%~40% of redundant keys in the datastore while translation quality remains unchanged.", "Our proposed methods achieve better or comparable performance while reducing inference latency by up to 57% against the advanced non-parametric MT model on several benchmarks.", "For future work, it is promising to design effective feature reduction algorithms and pruning strategies based on richer linguistic and cross-lingual information.", "Both Dexin Wang and Deyi Xiong were partially supported by the Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400).", "We would like to thank the anonymous reviewers for their insightful comments." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "method", "abstain", "result", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", 
"objective", "objective", "abstain", "other", "other" ]
[ "It is appealing to have a system that generates a story or script automatically from a storyline, even though this is still out of our reach.", "In dialogue systems, it would also be useful to drive dialogues by a dialogue plan.", "In this paper, we address a key problem involved in these applications: guiding a dialogue by a narrative.", "The proposed model ScriptWriter selects the best response among candidates that fit the context as well as the given narrative.", "It keeps track of what in the narrative has been said and what is to be said.", "A narrative plays a different role than the context (i.e., previous utterances), which is generally used in current dialogue systems.", "Due to the unavailability of data for this new application, we construct a new large-scale data collection, GraphMovie, from a movie website where end-users can upload their narratives freely when watching a movie.", "Experimental results on the dataset show that our proposed approach based on narratives significantly outperforms the baselines that simply use the narrative as a kind of context.", "Narrative is generally understood as a way to tell a story.", "WordNet defines it as a message that tells the particulars of an act or occurrence or course of events; presented in writing or drama or cinema or as a radio or television program.", "Narrative plays an important role in many natural language processing (NLP) tasks.", "For example, in storytelling, the storyline is a type of narrative, which helps generate coherent and consistent stories (Fan et al., 2018, 2019).", "In dialogue generation, a narrative can be used to define a global plan for the whole conversation session, so as to avoid generating inconsistent", "and scattered responses (Xing et al., 2018; Tian et al., 2017; Ghazvininejad et al., 2018).", "In this work, we investigate the utilization of narratives in a special case of text generation: movie script generation.", "This special form of 
conversation generation is chosen due to the unavailability of data for a more general form of application.", "Yet it requires the same care in leveraging narratives as general conversation, and hence can provide useful insight into a more general form of narrative-guided conversation.", "The dataset we use to support our study is collected from GraphMovie, where an end-user retells the story of a movie by uploading descriptive paragraphs in his/her own words.", "More details about the dataset will be presented in Section 3.2.", "An example is shown in Figure 1, where the narrative is given (GraphMovie: http://www.graphmovies.com/home/2/index.php).", "Unfortunately, we find that this website was closed recently.", "Our problem is closely related to dialogue generation that takes into account the context (Wu et al., 2017; Zhang et al., 2018; Zhou et al., 2018b).", "However, a narrative plays a different and more specific role than a general context.", "In particular, a narrative may cover the whole story (a part of a script); thus a good conversation should also cover all the aspects mentioned in the narrative, which is not required with a general context.", "In this paper, we propose a new model called ScriptWriter to address the problem of script generation/selection with the help of a narrative.", "ScriptWriter keeps track of what in the narrative has been said and what remains, and uses an updating mechanism to select the next line.", "The matchings between the updated narrative, the context, and the response are then computed respectively and finally aggregated into a matching score.", "As it is difficult to evaluate the quality of script generation, we frame our work in a more restricted case: selecting the right response among a set of candidates.", "This more limited form of conversation generation, retrieval-based conversation, has been widely used in previous studies (Wu et al., 2017; Zhou et al., 2018b), and it provides an easier way to evaluate the impact of narratives.", "We conduct 
experiments on a dataset we collected and made publicly available (see Section 5).", "The experiments will show that using a narrative to guide the generation/selection of a script is a much more appropriate approach than using it as part of the general context.", "(1) To the best of our knowledge, this is the first investigation of movie script generation with a narrative.", "This task could be further extended to a more general text generation scenario when suitable data are available.", "(2) We construct the first large-scale data collection, GraphMovie, to support research on narrative-guided movie script generation, which is made publicly accessible.", "(3) We propose a new model in which a narrative plays a specific role in guiding script generation.", "This will be shown to be more appropriate than a general context-based approach.", "It has been more than thirty years since researchers proposed narrative comprehension as an important ability of artificial intelligence (Rapaport et al., 1989).", "The ultimate goal is the development of a computational theory to model how humans understand narrative texts.", "Early explorations used symbolic methods to represent the narrative (Turner, 1994; Bringsjord and Ferrucci, 1999) or rule-based approaches to generate the narrative (Riedl and Young, 2010).", "Recently, deep neural networks have been used to tackle the problem (Bamman et al., 2019), and related problems such as generating coherent and cohesive text (Cho et al., 2019) and identifying relations in generated stories (Roemmele, 2019) have also been addressed.", "However, these studies only focused on how to understand a narrative itself (e.g., how to extract information from a narrative).", "They did not investigate how to utilize the narrative in an application task such as dialogue generation.", "Existing methods for open-domain dialogue can be categorized into two groups: retrieval-based and generation-based.", "Recent work on response generation is mainly based on 
the sequence-to-sequence structure with an attention mechanism (Shang et al., 2015; Vinyals and Le, 2015), with multiple extensions (Li et al., 2016; Xing et al., 2017; Zhou et al., 2018a, 2020; Zhu et al., 2020).", "Retrieval-based methods try to find the most reasonable response from a large repository of conversational data, instead of generating a new one (Wu et al., 2017; Zhou et al., 2018b; Zhang et al., 2018).", "In general, the utterances in the previous turns are taken together as the context for selecting the next response.", "Retrieval-based methods are widely used in real conversation products due to their more fluent and diverse responses and better efficiency.", "In this paper, we focus on extending retrieval-based methods by using a narrative as a plan for a session.", "This is a new problem that has not been studied before.", "Contrary to open-domain chatbots, task-oriented systems are designed to accomplish tasks in a specific domain (Seneff et al., 1998; Levin et al., 2000; Wang et al., 2011; Tur and Mori, 2011).", "In these systems, a dialogue state tracking component is designed for tracking what has happened in a dialogue [Table 1: Statistics of the GraphMovie corpus]", "(Williams and Young, 2007; Henderson et al., 2014; Xu and Rudnicky, 2000).", "This inspires us to track the remaining information in the narrative that has not been expressed by previous lines of conversation.", "However, existing methods cannot be applied to our task directly, as they are usually predefined for specific tasks, and the state tracking is often framed as a classification problem.", "Existing studies have also tried to generate a story.", "Early work relied on symbolic planning (Meehan, 1977; Cavazza et al., 2002) and case-based reasoning (y Pérez and Sharples, 2001; Gervás et al., 2005), while more recent work uses deep learning methods.", "Some of them focused on story ending generation (Peng et al., 2018; Guan et al., 2019), where the story context is given, and the model 
is asked to select a coherent and consistent story ending.", "This is similar to the dialogue generation problem mentioned above.", "Besides, attempts have been made to generate a whole story from scratch (Fan et al., 2018, 2019).", "Compared with the former task, the latter is more challenging, since the story framework and storyline should all be controlled by the model.", "Some recent studies also tried to guide the generation of dialogues (Wu et al., 2019; Tang et al., 2019) or stories (Yao et al., 2019) with keywords: the next response is asked to include the keywords.", "This is a step towards guided response generation and bears some similarities with our study.", "However, a narrative is more general than keywords, and it provides a description of the dialogue session rather than imposing keywords on the next response.", "Suppose that we have a dataset D, in which a sample is represented as (y, c, p, r), where c =", "{s_1, ..., s_n} represents a context formed by the preceding sentences/lines {s_i}_{i=1}^n; p is a predefined narrative that governs the whole script session; r is a next-line candidate (we refer to it as a response); y ∈ {0, 1} is a binary label, indicating whether r is a proper response for the given c and p.", "Intuitively, a proper response should be relevant to the context, and be coherent and aligned with the narrative.", "Our goal is to learn a model g(c, p, r) with D to determine how suitable a response r is to the given context c and narrative p.", "Data is a critical issue in research on story/dialogue generation.", "Unfortunately, no dataset has been created for narrative-guided story/dialogue generation.", "To fill the gap, we constructed a test collection from GraphMovie, where an editor or a user can retell the story of a movie by uploading descriptive paragraphs in his/her own words to describe screenshots selected from the movie.", "A movie on this website has, on average, 367 descriptions.", "A description 
paragraph often contains one to three sentences summarizing a fragment of a movie.", "It can be at different levels, from retelling the same conversations to a high-level description.", "We consider these descriptions as narratives for a sequence of dialogues, which we call a session in this paper.", "Each dialogue in a session is called a line of script (or simply a line).", "To construct the dataset, we use the top 100 movies in IMDB (https://www.imdb.com/) as an initial list.", "For each movie, we collect its description paragraphs from GraphMovie.", "Then we hire annotators to watch the movie and annotate the start time and end time of the dialogues corresponding to each description paragraph through an annotation tool specifically developed for this purpose.", "According to the start and end times, the sequence of lines is extracted from the subtitle file and aligned with the corresponding description paragraph.", "As viewers of a movie can upload descriptions freely, not all description paragraphs correspond to a narrative and are suitable for our task.", "For example, some uploaded paragraphs express one's subjective opinions about the movie or the actors, or simply copy the script.", "Therefore, we manually review the data and remove such non-narrative data.", "We also remove sessions that have less than two lines.", "Finally, we obtain 16,109 script sessions, each of which contains a description paragraph (narrative) and the corresponding lines of the script.", "As shown in Table 1, on average, a narrative has about 25 words, and a session has 4.7 lines.", "The maximum number of lines in a session is 34.", "Our task is to select one response from a set of candidates at any point during the session.", "By moving the prediction point through the session, we obtain a set of micro-sessions, each of which has a sequence of previous lines as the context at that point of time, the same narrative as the session, and the next line to predict.", "The candidates to be selected 
contain one ground-truth line, the one that is genuinely the next line, together with one (in the training set) or nine (in the validation/test set) other candidates retrieved with the previous lines by Solr (https://lucene.apache.org/solr/).", "The above preparation of the dataset follows the practice in the literature (Wu et al., 2017) for retrieval-based dialogue.", "A good response is required to be coherent with the previous lines, i.e., the context, and consistent with the given narrative.", "For example, \"Just stay a little longer\" can respond to \"Mama's going to worry about me\", and it has no conflict with the narrative in Figure", "1. Furthermore, as our target is to generate all lines in the session successively, it is also required that the following lines convey the information that the former lines have not conveyed.", "Otherwise, only a part of the narrative is covered, and we will miss some other aspects specified in the narrative.", "We propose an attention-based model called ScriptWriter to solve the problem.", "ScriptWriter follows a representation-matching-aggregation framework.", "First, the narrative, the context, and the response candidate are represented at multiple granularities by multi-level attentive blocks.", "Second, we propose an updating mechanism to keep track of what in the narrative has been expressed and to explicitly lower its weight in the updated narrative so that more emphasis can be put on the remaining parts.", "Third, matching features are extracted between different elements: between context and response to capture whether it is a proper reply; between narrative and response to capture whether it is consistent with the narrative; and between context and narrative to implicitly track what in the narrative has been expressed in the previous lines.", "Finally, the above matching features are concatenated together, and a final matching score is produced by convolutional neural networks (CNNs) and a multi-layer perceptron (MLP).", 
"To better handle the gap in words between two word sequences, we propose to use an attentive block, which is similar to that used in the Transformer (Vaswani et al., 2017).", "The input of an attentive block consists of three sequences, namely query ( Q ), key ( K ), and value ( V ).", "The output is a new representation of the query and is denoted as AttentiveBlock( Q , K , V ) in the remaining parts.", "This structure is used to represent a response, lines in the context, and a narrative.", "More specifically, given a narrative p = (w_{p,1}, ..., w_{p,n_p}), a line s_i = (w_{s_i,1}, ..., w_{s_i,n_{s_i}}), and a response candidate r = (w_{r,1}, ..., w_{r,n_r}), ScriptWriter first uses a pre-trained embedding table to map each word w to a d_e-dimensional embedding e, i.e., w → e.", "Thus the narrative p, the line s_i, and the response candidate r are represented by the matrices P^0 = (e_{p,1}, ..., e_{p,n_p}), S^0_i = (e_{s_i,1}, ..., e_{s_i,n_{s_i}}), and R^0 = (e_{r,1}, ..., e_{r,n_r}).", "Then ScriptWriter takes P^0, {S^0_i}_{i=1}^{n}, and R^0 as inputs and uses stacked attentive blocks to construct multi-level self-attention representations.", "The output of the (l-1)-th level of attentive block is input into the l-th level.", "The representations of p, s_i, and r at the l-th level are defined as follows: P^l = AttentiveBlock(P^{l-1}, P^{l-1}, P^{l-1}), (1) S^l_i = AttentiveBlock(S^{l-1}_i, S^{l-1}_i, S^{l-1}_i), (2) R^l = AttentiveBlock(R^{l-1}, R^{l-1}, R^{l-1}), (3) where l ranges from 1 to L.", "Inspired by a previous study (Zhou et al., 2018b), we apply another group of attentive blocks, referred to as cross-attention, to capture semantic dependencies between p, s_i, and r.", "Considering p and s_i first, their cross-attention representations are defined by: P^l_{s_i} = AttentiveBlock(P^{l-1}, S^{l-1}_i, S^{l-1}_i), (4) S^l_{i,p} = AttentiveBlock(S^{l-1}_i, P^{l-1}, P^{l-1}). (5)", "Here, the words in the narrative can attend to all words in the line, and vice versa.", "In this way, some inter-dependent segment pairs, such as 'stay' in the", "line and 'go home later' in the narrative, become close to each other in the representations.", "Similarly, we compute cross-attention representations between p and r and between r and s_i at different levels, which are denoted as P^l_r, R^l_p, S^l_{i,r}, and R^l_{s_i}.", "These representations further provide matching information across different elements in the next step.", "We design an updating mechanism to keep track of the coverage of the narrative by the lines so that the selection of the response will focus on the uncovered parts.", "The mechanism is illustrated in Figure", "2. We update a narrative gradually by all lines in the context, one by one.", "For the i-th line s_i, we conduct a matching between S_i and P by their cosine similarity at all levels (l) of attentive blocks: T^l_{s_i,p}[j][k] = cos(S^l_i[j], P^l[k]), (6) where j and k stand for the j-th word in s_i and the k-th word in p, respectively.", "To summarize how much information in p has been expressed by s_i, we compute a vector D_i by conducting summations along the vertical axis on each level in the matching", "map T_{s_i,p}.", "The summation on the l-th level is: D^l_i = [d^l_{i,1}, d^l_{i,2}, ..., d^l_{i,n_p}], (7) d^l_{i,k} = γ Σ_{j=1}^{n_{s_i}} T^l_{s_i,p}[j][k], (8) where n_p and n_{s_i} denote the numbers of words in p and s_i, and γ ∈ [0, 1] is a parameter to learn that works as a gate to control the decaying degree of the mentioned information.", "Finally, we update the narrative's representation as follows for the i-th line s_i in the context: P^l_{i+1} = (1 - D^l_i) ⊙ P^l_i. (9)", "The initial representation P^l_0 is equal to P^l defined in Equation (1).", "If there are n lines in the context, this update is executed n times, and (1 - D^l) will produce a continuous decaying effect.", "The matching between the narrative p and the line s_i is conducted based on both their self-attention and cross-attention representations, as shown in Figure", "3. First, ScriptWriter computes the dot product on these two representations separately, as follows: m^{self}_{s_i,p,l}[j,k] = S^l_i[j]^⊤ P^l[k], (10) m^{cross}_{s_i,p,l}[j,k] = S^l_{i,p}[j]^⊤ P^l_{s_i}[k], (11) where l ranges from 0 to L. Each element is the dot product of the j-th word representation in S^l_i or S^l_{i,p} and the k-th word representation in P^l or P^l_{s_i}.", "Then the matching maps at different layers are concatenated together as follows: m^{self}_{s_i,p}[j,k] = [m^{self}_{s_i,p,0}[j,k]; ...; m^{self}_{s_i,p,L}[j,k]], m^{cross}_{s_i,p}[j,k] = [m^{cross}_{s_i,p,0}[j,k]; ...; m^{cross}_{s_i,p,L}[j,k]], where [;] is the concatenation operation.", "Finally, the matching features computed from the self-attention representations and the cross-attention representations are fused as follows: M_{s_i,p}[j,k] = [m^{self}_{s_i,p}[j,k]; m^{cross}_{s_i,p}[j,k]].", "The matching matrices M_{p,r} and M_{s_i,r} for narrative-response and context-response are constructed in a similar way.", "For the sake of brevity, we omit the formulas.", "After concatenation, each cell in M_{s_i,p}, M_{p,r}, or M_{s_i,r} has 2(L+1) channels and contains matching information at different levels.", "The matching between narrative, context, and response serves different purposes.", "Context-response matching (M_{s_i,r}) serves to select a response suitable for the context.", "Context-narrative matching (M_{s_i,p}) helps the model remember how much information has been expressed and implicitly influences the selection of the next responses.", "Narrative-response matching (M_{p,r}) helps the model select a response that is more consistent with the narrative.", "As the narrative keeps being updated along with the lines in the context, ScriptWriter tends to dynamically choose the response that matches what remains unexpressed in the narrative.", "To further use the information across two consecutive lines, ScriptWriter piles up all the 
context-narrative matching matrices and all the context-response matching matrices to construct two cubes Q_{c,p} = {M_{s_i,p}[j,k]}_{i=1}^{n} and Q_{c,r} = {M_{s_i,r}[j,k]}_{i=1}^{n}, where n is the number of lines in the session.", "Then ScriptWriter employs 3D convolutions to distill important matching features from the whole cube.", "We denote these two feature vectors as f(c, p) and f(c, r).", "For narrative-response matching, ScriptWriter conducts 2D convolutions on M_{p,r} to distill matching features between the narrative and the response, denoted as f(p, r).", "The three types of matching features are concatenated together, and the matching score g(c, p, r) for ranking response candidates is computed by an MLP with a sigmoid activation function, which is defined as: f(c, p, r) = [f(c, p); f(c, r); f(p, r)], (12) g(c, p, r) = sigmoid(W^⊤ f(c, p, r) + b), (13) where W and b are parameters.", "The model is trained by minimizing the cross-entropy loss: L(·) = -Σ_{(y,c,p,r)∈D} [y log(g(c, p, r)) + (1 - y) log(1 - g(c, p, r))]. (14)", "5 Experiments 5.1 Evaluation setup As presented in Table 1, we randomly split the GraphMovie collection into training, validation, and test sets.", "The split ratio is 18:1:1.", "We split the sessions into micro-sessions: given a session with n lines in the context, we will split it into n micro-sessions with lengths varying from 1 to n.", "These micro-sessions share the same narrative.", "By doing this, the model is asked to learn to select one line as the response from a set of candidates at any point during the session, and the dataset, in particular for training, can be significantly enlarged.", "We conduct two kinds of evaluation as follows: Turn-level task asks a model to rank a list of candidate responses based on its given context and narrative for a micro-session.", "The model then selects the best response for the current turn.", "This setting is similar to the widely studied response selection task (Wu et al., 2017; Zhou et al., 
2018b; Zhang et al., 2018).", "We follow these previous studies and employ recall at position k in n candidates (R_n@k) and mean reciprocal rank (MRR) (Voorhees, 1999) as evaluation metrics.", "For example, R_10@1 means recall at one when we rank ten candidates (one positive sample and nine negative samples).", "The final results are averaged over all micro-sessions in the test set.", "Session-level task aims to predict all the lines in a session gradually.", "It starts with the first line of the session as the context and the given narrative and predicts the best next line.", "The predicted line is then incorporated into the context to predict the next line.", "This process continues until the last line of the session is selected.", "Finally, we calculate precision over the whole original session and report average numbers over all sessions in the test set.", "Precision is defined as the number of correct selections divided by the number of lines in a session.", "We consider two measures: 1) P strict , which accepts a right response only at the right position; 2) P weak , which accepts a right response at any position.", "As no previous work has been done on narrative-based script generation, no proper baseline exists.", "Nevertheless, some existing multi-turn conversation models based on context can be adapted to work with a narrative: the context is simply extended with the narrative.", "Two different extension methods have been tested: the narrative is added into the context together with the previous lines; the narrative is used as a second context.", "In the latter case, two matching scores are obtained for context-narrative and narrative-response.", "They are aggregated through an MLP to produce a final score.", "This second approach turns out to perform better.", "Therefore, we only report the results with this latter method 5 .", "(1) MVLSTM (Wan et al., 2016): it concatenates all previous lines as a context and uses an LSTM to encode the context and the response candidate.", "A matching score is determined by an MLP based on a map of cosine similarities between them.", "A matching score for narrative-response is produced similarly.", "(2) DL2R (Yan et al., 2016): it encodes the context by an RNN followed by a CNN.", "The matching score is computed similarly to MVLSTM.", "(3) SMN (Wu et al., 2017): it matches each line with the response sequentially to produce a matching vector with CNNs.", "The matching vectors are aggregated with an RNN.", "(4) DAM (Zhou et al., 2018b): it represents a context and a response by applying self-attention and cross-attention operations on them.", "It uses CNNs to extract features and an MLP to get a score.", "Different from our model, this model only considers context-response matching and does not track what in the narrative has already been expressed by the previous lines, i.e., the context.", "5 We also tested some basic models such as RNN, LSTM, and BiLSTM (Lowe et al., 2015) in our experiments.", "However, they could not achieve results comparable to the selected baselines.", "(5) DUA (Zhang et al., 2018): it concatenates the last line with each previous line in the context and with the response, respectively.", "Then it performs a self-attention operation to get refined representations, based on which matching features are extracted with CNNs and RNNs.", "All models are implemented in TensorFlow 6 .", "Word embeddings are pre-trained by Word2vec (Mikolov et al., 2013) on the training set with 200 dimensions.", "We test the stack number in { 1, 2, 3 } and report our results with three stacks.", "Due to limited resources, we could not conduct experiments with a larger number of stacks; this could be tested in the future.", "Two 3D convolutional layers have 32 and 16 filters, respectively.", "They both use [3,3,3] as kernel size, and the max-pooling size is [3,3,3].", "Two 2D convolutional layers on narrative-response matching have 32 and 16 filters with [3,3] as kernel size.", "The max-pooling 
size is also [3,3].", "All parameters are optimized with the Adam optimizer (Kingma and Ba, 2015).", "The learning rate is 0.001 and is decreased during training.", "The initial value for γ is 0.5.", "The batch size is 64.", "We use the validation set to select the best models and report their performance on the test set.", "The maximum number of lines in a context is set to ten, and the maximum lengths of a line, a response, and a narrative sentence are all set to 50.", "All sentences are zero-padded to the maximum length.", "We also pad with zeros if the number of lines in a context is less than ten.", "Otherwise, we keep the latest ten lines.", "The dataset and the source code of our model are available on GitHub 7 .", "The experimental results are reported in Table", "2. The results on both turn-level and session-level evaluations indicate that ScriptWriter dramatically outperforms all baselines, including DAM and DUA, which are two state-of-the-art models on multi-turn response selection.", "All improvements are statistically significant ( p -value < 0.01).", "DAM performs better than the other baselines, which confirms the effectiveness of the self- and cross-attention mechanisms used in this model.", "The DUA model also uses the attention mechanism.", "It outperforms the other baselines that do not use attention.", "( 6 https://www.tensorflow.org 7 https://github.com/DaoD/ScriptWriter ) Table 2: Evaluation results on two response selection tasks: turn-level and session-level.", "Both observations confirm the advantage of using attention mechanisms over pure RNNs.", "Between the two session-level measures, we observe that our model is less affected when moving from P weak to P strict .", "This shows that ScriptWriter can better select a response in the right position.", "We attribute this behavior to the utilization of narrative coverage.", "We conduct an ablation study to investigate the impact of different modules in ScriptWriter.", "First, we remove the updating mechanism by setting γ = 0 (i.e., the representation of the narrative is not updated but static).", "This model is denoted as ScriptWriter static in Table", "2. Then we remove narrative-response, context-narrative, and context-response matching, respectively.", "These variants are denoted as ScriptWriter-PR, ScriptWriter-CP, and ScriptWriter-CR.", "Model ablation results are shown in the second part of Table", "2. 
We have the following findings: 1) ScriptWriter performs better than ScriptWriter static , demonstrating the effectiveness of the updating mechanism for the narrative.", "The optimal value of γ is around 0.647 after training, which means that only about 35% of the information is kept when a line conveys it.", "2) In both turn-level and session-level evaluations, the performance drops the most when we remove narrative-response matching.", "This indicates that the relevance of the response to the narrative is the most useful information in narrative- (Figure 4: The performance of ScriptWriter (SW) and DUA on the test set with different types of narrative in session-level evaluation.)", "guided script generation.", "3) When we remove context-narrative matching, the performance drops too, indicating that context-narrative matching may provide implicit and complementary information for controlling the alignment of response and narrative.", "4) In contrast, when we remove the context-response matching, the performance also drops, though at a much smaller scale, especially on P weak , than when narrative-response matching is removed.", "This contrast indicates that the narrative is a more useful 
piece of information than context to determine what should be said next; it should thus be taken into account with an adequate mechanism.", "As we explained, narratives in our dataset are contributed by netizens, and they vary in style.", "Some narratives are detailed, while others are general.", "The question we analyze is how general vs. detailed narratives affect the performance of response selection.", "We use a simple method to roughly evaluate the degree of detail of a narrative: a narrative that has a high lexical overlap with the lines in the session is considered to be detailed.", "Narratives are put into six buckets depending on their level of detail, as shown in Figure 4.", "We plot the performance of ScriptWriter and DUA in session-level evaluation over the different types of narratives.", "The first type, 0, means no word overlap between the narrative and the dialogue session.", "This is the most challenging case, representing extremely general narratives.", "It is not surprising to see that both ScriptWriter and DUA perform poorly on this type compared with the other types in terms of P strict .", "The performance tends to become better as the overlap ratio increases.", "This is consistent with our intuition: when a narrative is more detailed and better aligned with the session in wording, it is easier to choose the best responses.", "The plot also shows that ScriptWriter achieves better performance than DUA on all types of narratives, which further demonstrates the effectiveness of using the narrative to guide the dialogue.", "We also observe that the buckets [0, 0.2) and [0.2, 0.4) contain the largest proportions of narratives.", "This indicates that most netizens do not use the original lines to retell a story.", "The problem we address in this paper is thus non-trivial.", "6 Conclusion and Future Work Although story generation has been extensively studied in the literature, no existing work has addressed the problem of generating movie scripts following a 
given storyline or narrative.", "In this paper, we addressed this problem in the context of generating dialogues in a movie script.", "We proposed a model that uses the narrative to guide the dialogue generation/retrieval.", "We keep track of what in the narrative has already been expressed and what is remaining to select the next line through an updating mechanism.", "The final selection of the next response is based on multiple matching criteria between context, narrative and response.", "We constructed a new large-scale data collection for narrative-guided script generation from movie scripts.", "This is the first public dataset available for testing narrative-guided dialogue generation/selection.", "Experimental results on the dataset showed that our proposed approach based on narrative significantly outperforms the baselines that use a narrative as an additional context, and showed the importance of using the narrative in a proper manner.", "As a first investigation on the problem, our study has several limitations.", "For example, we have not considered the order in the narrative description, which could be helpful in generating dialogues in correct order.", "Other methods to track the dialogue state and the coverage of narrative can also be designed.", "Further investigations are thus required to fully understand how narratives can be effectively used in dialogue generation.", "Ruihua Song and Zhicheng Dou are the corresponding authors.", "This work was supported by National Natural Science Foundation of China No. 61872370 and No. 61832017, and Beijing Outstanding Young Scientist Program NO.", "BJJWZYJH012019100020098." ]
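The narrative-updating mechanism summarized in the conclusion above (tracking what has been expressed and decaying its weight) can be sketched minimally; this is our illustration for a single representation level, with the learned gate γ fixed to a constant:

```python
import numpy as np

def update_narrative(P, S, gamma=0.5):
    """Down-weight narrative words that a line has already expressed.

    P: (n_p, d) narrative word representations at one level.
    S: (n_s, d) representations of the words of the current line.
    gamma: decay gate in [0, 1] (learned in the full model, fixed here).
    """
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    Sn = S / np.linalg.norm(S, axis=1, keepdims=True)
    T = Sn @ Pn.T                  # cosine matching map between line and narrative
    d = gamma * T.sum(axis=0)      # how much each narrative word was covered
    return (1.0 - d)[:, None] * P  # decayed narrative representation

# A narrative word identical to a line word gets decayed the most.
P = np.eye(3)          # three orthogonal narrative "words"
S = np.eye(3)[:1]      # the line expresses only the first word
P_new = update_narrative(P, S, gamma=0.5)
```

In this toy case the first narrative word is halved while the untouched words keep their full weight, which is the intended "focus on the uncovered parts" effect.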
[ "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "other", "result", "method", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model.", "In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy.", "This contrasts with the popular approach of training a non-autoregressive model on a distilled corpus consisting of the beam-searched outputs of such a teacher model.", "Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art non-autoregressive results on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, approaching the performance of autoregressive models. 1", "1 Introduction The performance of non-autoregressive neural machine translation (NAT) systems, which predict tokens in the target language independently of each other conditioned on the source sentence, has been improving steadily in recent years (Lee et al., 2018; Ghazvininejad et al., 2019; Ma et al., 2019).", "One common ingredient in getting non-autoregressive systems to perform well is to train them on a corpus of distilled translations (Kim and Rush, 2016).", "This distilled corpus consists of source sentences paired with the translations produced by a pretrained autoregressive teacher system.", "As an alternative to training non-autoregressive translation systems on distilled corpora, we instead propose to train them to minimize the energy defined by a pretrained autoregressive teacher model.", "That is, we view non-autoregressive machine translation systems as inference networks (Tu and Gimpel, 2018, 2019; Tu et al., 2019) trained to minimize the teacher's energy.", "(Work partly done at Toyota Technological Institute at Chicago and the University of Chicago.)", "This provides the non-autoregressive model with additional information related to the energy of the teacher, rather than just the approximate minimizers of the teacher's energy appearing in a 
distilled corpus.", "In order to train inference networks to minimize an energy function, the energy must be differentiable with respect to the inference network output.", "We describe several approaches for relaxing the autoregressive teacher's energy to make it amenable to minimization with an inference network, and compare them empirically.", "We experiment with two non-autoregressive inference network architectures, one based on bidirectional RNNs and the other based on the transformer model of Ghazvininejad et al. (2019).", "In experiments on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, we show that training to minimize the teacher's energy significantly outperforms training with distilled outputs.", "Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art results for non-autoregressive translation on these datasets, approaching the results of the autoregressive teachers.", "Our hope is that ENGINE will enable energy-based models to be applied more broadly for non-autoregressive generation in the future.", "Non-autoregressive neural machine translation began with the work of Gu et al. (2018a), who found benefit from using knowledge distillation (Hinton et al., 2015), and in particular sequence-level distilled outputs (Kim and Rush, 2016).", "Subsequent work has narrowed the gap between non-autoregressive and autoregressive translation, including multi-iteration refinements (Lee et al., 2018; Ghazvininejad et al., 2019; Saharia et al., 2020; Kasai et al., 2020) and rescoring with autoregressive models (Kaiser et al., 2018; Wei et al., 2019; Ma et al., 2019; Sun et al., 2019).", "Ghazvininejad et al. (2020) and Saharia et al. 
(2020) proposed aligned cross entropy or latent alignment models and achieved the best results of all non-autoregressive models without refinement or rescoring.", "We propose training inference networks with autoregressive energies and outperform the best purely non-autoregressive methods.", "Another related approach trains an actor network to manipulate the hidden state of an autoregressive neural MT system (Gu et al., 2017; Chen et al., 2018; Zhou et al., 2020) in order to bias it toward outputs with better BLEU scores.", "This work modifies the original pretrained network rather than using it to define an energy for training an inference network.", "Energy-based models have had limited application in text generation due to the computational challenges involved in learning and inference in extremely large search spaces (Bakhtin et al., 2020).", "The use of inference networks to output approximate minimizers of a loss function is popular in variational inference (Kingma and Welling, 2013; Rezende et al., 2014), and, more recently, in structured prediction (Tu and Gimpel, 2018, 2019; Tu et al., 2019), including previously for neural MT (Gu et al., 2018b).", "Most neural machine translation (NMT) systems model the conditional distribution p(y | x) of a target sequence y = ⟨y_1, y_2, ..., y_T⟩ given a source sequence x = ⟨x_1, x_2, ..., x_{T_s}⟩, where each y_t comes from a vocabulary V, y_T is ⟨eos⟩, and y_0 is ⟨bos⟩.", "It is common in NMT to define this conditional distribution using an autoregressive factorization (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017): log p(y | x) = Σ_{t=1}^{|y|} log p(y_t | y_{0:t-1}, x). This model can be viewed as an energy-based model (LeCun et al., 2006) by defining the energy function E(x, y) = -log p(y | x).", "Given trained parameters Θ, test-time inference seeks to find the translation for a given source sentence x with the lowest energy: y* = argmin_y E(x, y).", "Finding the translation that minimizes the energy involves combinatorial search.", "In this paper, we train inference networks to perform this search approximately.", "The idea of this approach is to replace the test-time combinatorial search typically employed in structured prediction with the output of a network trained to produce approximately optimal predictions (Tu and Gimpel, 2018, 2019).", "More formally, we define an inference network A which maps an input x to a translation y and is trained with the goal that A(x) ≈ argmin_y E(x, y).", "Specifically, we train the inference network parameters Φ as follows (assuming Θ is pretrained and fixed): Φ̂ = argmin_Φ Σ_{⟨x,y⟩∈D} E(x, A(x)), (1) where D is a training set of sentence pairs.", "The network architecture of A can be different from the architectures used in the energy function.", "In this paper, we combine an autoregressive energy function with a non-autoregressive inference network.", "By doing so, we seek to combine the effectiveness of the autoregressive energy with the fast inference speed of a non-autoregressive network.", "In order to allow for gradient-based optimization of the inference network parameters Φ, we now define a more general family of energy functions for NMT.", "First, we change the representation of the translation y in the energy, redefining y = ⟨y_0, ..., y_{|y|}⟩ as a sequence of distributions over words instead of a sequence of words.", "In particular, we consider the generalized energy E(x, y) = Σ_{t=1}^{|y|} e_t(x, y), (2) where e_t(x, y) = -y_t^⊤ log p(· | y_0, y_1, ..., y_{t-1}, x). (3)", "We use the notation p(· | ...) above to indicate that we may need the full distribution over words.", "Note that by replacing the y_t with one-hot distributions we recover the original energy.", "In order to train an inference network to minimize this energy, we simply need a network architecture that can produce a sequence of word distributions, which is satisfied by recent non-autoregressive NMT models (Ghazvininejad et al., 2019). (Figure 1: The ENGINE framework trains a non-autoregressive inference network A to produce translations with low energy under a pretrained autoregressive energy E.)", "However, because the distributions involved in the original energy are one-hot, it may be advantageous for the inference network, too, to output distributions that are one-hot or approximately so.", "We will accordingly view inference networks as producing a sequence of T logit vectors z_t ∈ R^{|V|}, and we will consider two operators O_1 and O_2 that will be used to map these z_t logits into distributions for use in the energy.", "Figure 1 provides an overview of our approach, including this generalized energy function, the inference network, and the two operators O_1 and O_2.", "We describe choices for these operators in the next section.", "We now consider ways of defining the two operators that govern the interface between the inference network and the energy function.", "As shown in Figure 1, we seek an operator O_1 to modulate the way that logits z_t output by the inference network are fed to the decoder input slots in the energy function, and an operator O_2 to determine how the distribution p(· | ...) is used to compute the log probability of a word in y.", "Explicitly, then, we (Table 1: Let O(z) ∈ Δ^{|V|-1} be the result of applying an O_1 or O_2 operation to logits z output by the inference network; write q = softmax(z) and q̃ = softmax(z + g). Rows list O(z) and the approximation used for ∂O(z)/∂z: SX: q, ∂q/∂z; STL: onehot(argmax(z)), I; SG: onehot(argmax(q̃)), ∂q̃/∂z; ST: onehot(argmax(q)), ∂q/∂z; GX: q̃, ∂q̃/∂z.)", "The choices we consider for O_1 and O_2, which we present generically for operator O and logit vector z, are shown in Table 1, and described in more detail below.", "Some of these O operations are not differentiable, and so the Jacobian matrix ∂O(z)/∂z must be approximated during learning; we show the approximations we use in Table 1 as well.", "We consider five choices for each O :", "(a) SX : softmax. Here O(z) = softmax(z); no Jacobian approximation is necessary.", "(b) STL : straight-through logits.", "Here O(z) = onehot(argmax_i z).", "∂O(z)/∂z is approximated by the identity matrix I (see Bengio et al. (2013)).", "(c) SG : straight-through Gumbel-Softmax.", "Here O(z) = onehot(argmax_i softmax(z + g)), where g_i is Gumbel noise.", "2 ∂O(z)/∂z is approximated with ∂softmax(z + g)/∂z (Jang et al., 2016).", "(d) ST : straight-through.", "This setting is identical to SG with g = 0 (see Bengio et al. (2013)).", "(e) GX : Gumbel-Softmax.", "Here O(z) = softmax(z + g), where again g_i is Gumbel noise; no Jacobian approximation is necessary.", "2 g_i = -log(-log(u_i)) and u_i ~ Uniform(0, 1).", "We evaluate our methods on two datasets: IWSLT14 German (DE) → English (EN) and WMT16 Romanian (RO) → English (EN).", "All data are tokenized and then segmented into subword units using byte-pair encoding (Sennrich et al., 2016).", "We use the data provided by Lee et al. 
(2018) for RO-EN.", "We consider two architectures for the pretrained autoregressive (AR) energy function.", "The first is an autoregressive sequence-to-sequence (seq2seq) model with attention (Luong et al., 2015).", "The encoder is a two-layer BiLSTM with 512 units in each direction, the decoder is a two-layer LSTM with 768 units, and the word embedding size is 512.", "The second is an autoregressive transformer model (Vaswani et al., 2017), where both the encoder and decoder have 6 layers, 8 attention heads per layer, model dimension 512, and hidden dimension 2048.", "For the inference networks, we choose two different architectures: a BiLSTM tagger (a 2-layer BiLSTM followed by a fully-connected layer) and a conditional masked language model (CMLM; Ghazvininejad et al., 2019), a transformer with 6 layers per stack, 8 attention heads per layer, model dimension 512, and hidden dimension 2048.", "Both architectures require the target sequence length in advance; methods for handling length are discussed in Sec. 4.5.", "For baselines, we train these inference network architectures as non-autoregressive models using the standard per-position cross-entropy loss.", "For faster inference network training, we initialize inference networks with the baselines trained with cross-entropy loss in our experiments.", "The baseline CMLMs use the partial masking strategy described by Ghazvininejad et al. 
(2019).", "This involves using some masked input tokens and some provided input tokens during training.", "At test time, multiple iterations (refinement iterations) can be used for improved results (Ghazvininejad et al., 2019).", "Each iteration uses partially-masked input from the preceding iteration.", "We consider the use of multiple refinement iterations for both the CMLM baseline and the CMLM inference network.", "4.4 Hyperparameters. For inference network training, the batch size is 1024 tokens.", "We train with the Adam optimizer (Kingma and Ba, 2015).", "We tune the learning rate in { 5e−4 , 1e−4 , 5e−5 , 1e−5 , 5e−6 , 1e−6 } .", "For regularization, we use L2 weight decay with rate 0.01, and dropout with rate 0.1.", "We train all models for 30 epochs.", "For the baselines, we train the models with the local cross-entropy loss and do early stopping based on the BLEU score on the dev set.", "For the inference network, we train the model to minimize the energy (Eq. 1) and do early stopping based on the energy on the dev set.", "Non-autoregressive models often need a target sequence length in advance (Lee et al., 2018).", "We report results both with oracle lengths and with a simple method of predicting it.", "We follow Ghazvininejad et al. 
(2019) in predicting the length of the translation.", "(Footnote 3: The CMLM inference network is trained according to Eq. 1 with full masking (no partial masking like in the CMLM baseline).", "However, since the CMLM inference network is initialized using the CMLM baseline, which is trained using partial masking, the CMLM inference network is still compatible with refinement iterations at test time.)", "The length is predicted using a representation of the source sequence from the encoder.", "The length loss is added to the cross-entropy loss for the target sequence.", "During decoding, we select the top k = 3 length candidates with the highest probabilities, decode with the different lengths in parallel, and return the translation with the highest average of log probabilities of its tokens.", "Effect of choices for O 1 and O 2 .", "Table 2 compares various choices for the operations O 1 and O 2 .", "For subsequent experiments, we choose the setting that feeds the whole distribution into the energy function ( O 1 = SX) and computes the loss with straight-through ( O 2 = ST).", "Using Gumbel noise in O 2 has only minimal effect, and rarely helps.", "Using ST instead also speeds up training by avoiding the noise sampling step.", "Training with distilled outputs vs. training with energy.", "We compared training non-autoregressive models using the references, distilled outputs, and as inference networks on both datasets.", "Table 5 in the Appendix shows the results when using BiLSTM inference networks and seq2seq AR energies.", "The inference networks improve over training with the references by 11.27 BLEU on DE-EN and 12.22 BLEU on RO-EN.", "In addition, inference networks consistently improve over non-autoregressive networks trained on the distilled outputs.", "Impact of refinement iterations.", "Ghazvininejad et al. 
(2019) show improvements with multiple refinement iterations.", "Table 3 shows refinement results of CMLM and ENGINE.", "Both improve with multiple iterations, though the improvement is much larger with CMLM.", "However, even with (Table 4: BLEU scores on two datasets for several non-autoregressive methods; IWSLT14 DE-EN / WMT16 RO-EN. Autoregressive (Transformer): Greedy Decoding 33.00 / 33.33; Beam Search 34.11 / 34.07. Non-autoregressive: Iterative Refinement (Lee et al., 2018) – / 25.73; NAT with Fertility (Gu et al., 2018a) – / 29.06; CTC (Libovicky and Helcl, 2018) – / 24.71; FlowSeq (Ma et al., 2019) 27.55 / 30.44; CMLM (Ghazvininejad et al., 2019) 28.25 / 28.20; Bag-of-ngrams-based loss (Shao et al., 2020) – / 29.29; AXE CMLM (Ghazvininejad et al., 2020) – / 31.54; Imputer-based model (Saharia et al., 2020) – / 31.7; ENGINE (ours) 31.99 / 33.16.)", "Comparison to other NAT models.", "Table 4 shows 1-iteration results on two datasets.", "To the best of our knowledge, ENGINE achieves state-of-the-art NAT performance: 31.99 on IWSLT14 DE-EN and 33.16 on WMT16 RO-EN.", "In addition, ENGINE achieves comparable performance with the autoregressive NMT model.", "We proposed a new method to train non-autoregressive neural machine translation systems via minimizing pretrained energy functions with inference networks.", "In the future, we seek to expand upon energy-based translation using our method.", "We would like to thank Graham Neubig for helpful discussions and the reviewers for insightful comments.", "This research was supported in part by an Amazon Research Award to K. Gimpel." ]
[ "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "result", "method", "other", "other", "other", "objective", "other", "method", "other", "other", "other", "other", "other", "other", "method", "other", "objective", "method", "other", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "objective", "method", "other", "other" ]
[ "Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019).", "However, current phrase retrieval models heavily depend on sparse representations and still underperform retriever-reader approaches.", "In this work, we show for the first time that we can learn dense representations of phrases alone that achieve much stronger performance in open-domain QA.", "We present an effective method to learn phrase representations from the supervision of reading comprehension tasks, coupled with novel negative sampling methods.", "We also propose a query-side fine-tuning strategy, which can support transfer learning and reduce the discrepancy between training and inference.", "On five popular open-domain QA datasets, our model DensePhrases improves over previous phrase retrieval models by 15%–25% absolute accuracy and matches the performance of state-of-the-art retriever-reader models.", "Our model is easy to parallelize due to pure dense representations and processes more than 10 questions per second on CPUs.", "Finally, we directly use our pre-indexed dense phrase representations for two slot filling tasks, showing the promise of utilizing DensePhrases as a dense knowledge base for downstream tasks.", "1 Introduction. Open-domain question answering (QA) aims to provide answers to natural-language questions using a large text corpus (Voorhees et al., 1999; Ferrucci et al., 2010; Chen and Yih, 2020).", "While a dominating approach is a two-stage retriever-reader approach (Chen et al., 2017; Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020), we focus on (Footnote: Work partly done while visiting Princeton University.)", "a recent new paradigm solely based on phrase retrieval (Seo et al., 2019; Lee et al., 2020).", "Phrase retrieval highlights the use of phrase representations and finds answers purely based on the similarity search in the vector space of phrases.", 
"Without relying on an expensive reader model for processing text passages, it has demonstrated great runtime efficiency at inference time.", "Despite great promise, it remains a formidable challenge to build vector representations for every single phrase in a large corpus.", "Since phrase representations are decomposed from question representations, they are inherently less expressive than cross-attention models (Devlin et al., 2019).", "Moreover, the approach requires retrieving answers correctly out of billions of phrases (e.g., 6 × 10^10 phrases in English Wikipedia), making the scale of the learning problem difficult.", "Consequently, existing approaches heavily rely on sparse representations for locating relevant documents and paragraphs while still falling behind retriever-reader models (Seo et al., 2019; Lee et al., 2020).", "In this work, we investigate whether we can build fully dense phrase representations at scale for open-domain QA.", "First, we aim to learn strong phrase representations from the supervision of reading comprehension tasks.", "We propose to use data augmentation and knowledge distillation to learn better phrase representations within a single passage.", "We then adopt negative sampling strategies such as in-batch negatives (Henderson et al., 2017; Karpukhin et al., 2020), to better discriminate the phrases at a larger scale.", "Here, we present a novel method called pre-batch negatives , which leverages preceding mini-batches as negative examples to compensate for the need for large-batch training.", "Lastly, we present a query-side fine-tuning strategy that drastically (Footnote 2: Following previous work (Seo et al., 2018), 'phrase' denotes any contiguous segment of text up to L words (including single words), which is not necessarily a linguistic phrase.)", "improves phrase retrieval performance and allows for transfer learning to new domains, without re-building billions of phrase representations.", "As a result, all these improvements lead to a much stronger phrase retrieval model, without the use of any sparse representations (Table 1).", "We evaluate our model, DensePhrases , on five standard open-domain QA datasets and achieve much better accuracies than previous phrase retrieval models (Seo et al., 2019; Lee et al., 2020), with 15%–25% absolute improvement on most datasets.", "Our model also matches the performance of state-of-the-art retriever-reader models (Guu et al., 2020; Karpukhin et al., 2020).", "Due to the removal of sparse representations and careful design choices, we further reduce the storage footprint for the full English Wikipedia from 1.5TB to 320GB, as well as drastically improve the throughput.", "Finally, we envision that DensePhrases acts as a neural interface for retrieving phrase-level knowledge from a large text corpus.", "To showcase this possibility, we demonstrate that we can directly use DensePhrases for fact extraction, without rebuilding the phrase storage.", "With only fine-tuning the question encoder on a small number of subject-relation-object triples, we achieve state-of-the-art performance on two slot filling tasks (Petroni et al., 2021), using less than 5% of the training data.", "We first formulate the task of open-domain question answering for a set of K documents D = { d 1 , . . . , d K } .", "We follow the recent work (Chen et al., 2017; Lee et al., 2019) and treat all of English Wikipedia as D , hence K ≈ 5 × 10^6 .", "However, most approaches, including ours, are generic and could be applied to other collections of documents.", "The task aims to provide an answer a for the input question q based on D .", "In this work, we focus on the extractive QA setting, where each answer is a segment of text, or a phrase , that can be found in D .", "Denote the set of phrases in D as S ( D ) and each phrase s k ∈ S ( D ) consists of contiguous words w start ( k ) , . . . 
, w end ( k ) in its document d doc ( k ) .", "In practice, we consider all the phrases up to L = 20 words in D and S ( D ) comprises a large number of 6 × 10^10 phrases.", "An extractive QA system returns a phrase s* = argmax_{ s ∈ S ( D )} f ( s | D , q ) where f is a scoring function.", "The system finally maps s* to an answer string a : TEXT ( s* ) = a and the evaluation is typically done by comparing the predicted answer a with a gold answer a* .", "Although we focus on the extractive QA setting, recent works propose to use a generative model as the reader (Lewis et al., 2020; Izacard and Grave, 2021), or learn a closed-book QA model (Roberts et al., 2020), which directly predicts answers without using an external knowledge source.", "The extractive setting provides two advantages: first, the model directly locates the source of the answer, which is more interpretable, and second, phrase-level knowledge retrieval can be uniquely adapted to other NLP tasks as we show in §7.3.", "Retriever-reader.", "A dominating paradigm in open-domain QA is the retriever-reader approach (Chen et al., 2017; Lee et al., 2019; Karpukhin et al., 2020), which leverages a first-stage document retriever f retr and only reads the top K′ ≪ K documents with a reader model f read .", "The scoring function f ( s | D , q ) is decomposed as: f ( s | D , q ) = f retr ( { d j 1 , . . . , d j K′ } | D , q ) · f read ( s | { d j 1 , . . . , d j K′ } , q ) , (1) where { j 1 , . . . , j K′ } ⊆ { 1 , . . . , K } and if s ∉ S ( { d j 1 , . . . , d j K′ } ) , the score will be 0.", "It can easily adapt to passages and sentences (Yang et al., 2019; Wang et al., 2019).", "However, this approach suffers from error propagation when incorrect documents are retrieved and can be slow as it usually requires running an expensive reader model on every retrieved document or passage at inference time.", "Phrase retrieval.", "Seo et al. 
(2019) introduce the phrase retrieval approach that encodes phrase and question representations independently and performs similarity search over the phrase representations to find an answer.", "Their scoring function f is computed as follows: f ( s | D , q ) = E s ( s, D )^⊤ E q ( q ) , (2) where E s and E q denote the phrase encoder and the question encoder respectively.", "As E s ( · ) and E q ( · ) representations are decomposable, it can support maximum inner product search (MIPS) and improve the efficiency of open-domain QA models.", "Previous approaches (Seo et al., 2019; Lee et al., 2020) leverage both dense and sparse vectors for phrase and question representations by taking their concatenation: E s ( s, D ) = [ E sparse ( s, D ) , E dense ( s, D )] .", "However, since the sparse vectors are difficult to parallelize with dense vectors, their method essentially conducts sparse and dense vector search separately.", "The goal of this work is to only use dense representations, i.e., E s ( s, D ) = E dense ( s, D ) , which can model f ( s | D , q ) solely with MIPS, as well as close the gap in performance.", "We introduce DensePhrases, a phrase retrieval model that is built on fully dense representations.", "Our goal is to learn a phrase encoder as well as a question encoder, so we can pre-index all the possible phrases in D , and efficiently retrieve phrases for any question through MIPS at testing time.", "We outline our approach as follows: Footnote 3: Seo et al. (2019) use sparse representations of both paragraphs and documents and Lee et al. 
(2020) use contextualized sparse representations conditioned on the phrase.", "We first learn a high-quality phrase encoder and an (initial) question encoder from the supervision of reading comprehension tasks (§4.1), as well as incorporating effective negative sampling to better discriminate phrases at scale (§4.2, §4.3).", "Then, we fix the phrase encoder and encode all the phrases s ∈ S ( D ) and store the phrase index offline to enable efficient search (§5).", "Finally, we introduce an additional strategy called query-side fine-tuning (§6) by further updating the question encoder.", "We find this step to be very effective, as it can reduce the discrepancy between training (the first step) and inference, as well as support transfer learning to new domains.", "Our base architecture consists of a phrase encoder E s and a question encoder E q .", "Given a passage p = w 1 , . . . , w m , we denote all the phrases up to L tokens as S ( p ) .", "Each phrase s k has start and end indices start ( k ) and end ( k ) and the gold phrase is s* ∈ S ( p ) .", "Following previous work on phrase or span representations (Lee et al., 2017; Seo et al., 2018), we first apply a pre-trained language model M p to obtain contextualized word representations for each passage token: h 1 , . . . , h m ∈ R^d .", "Then, we can represent each phrase s k ∈ S ( p ) as the concatenation of corresponding start and end vectors: E s ( s k , p ) = [ h start ( k ) , h end ( k ) ] ∈ R^{2d} .", "A great advantage of this representation is that we eventually only need to index and store all the word vectors (we use W ( D ) to denote all the words in D ), instead of all the phrases S ( D ) , which is at least one order of magnitude smaller.", "Similarly, we need to learn a question encoder E q ( · ) that maps a question q = w 1 , . . . 
, w n to a vector of the same dimension as E s ( · ) .", "Since the start and end representations of phrases are produced by the same language model, we use two different pre-trained encoders M q, start and M q, end to differentiate the start and end positions.", "We apply M q, start and M q, end on q separately and obtain representations q start and q end (Footnote 4: In this paper, we use the terms question and query interchangeably as our question encoder can be naturally extended to unnatural queries.)", "taken from the [CLS] token representations respectively.", "Finally, E q ( · ) simply takes their concatenation: E q ( q ) = [ q start , q end ] ∈ R^{2d} .", "(4) Note that we use pre-trained language models to initialize M p , M q, start and M q, end and they are fine-tuned with the objectives that we will define later.", "In our pilot experiments, we found that SpanBERT (Joshi et al., 2020) leads to superior performance compared to BERT (Devlin et al., 2019).", "SpanBERT is designed to predict the information in the entire span from its two endpoints, therefore it is well suited for our phrase representations.", "In our final model, we use SpanBERT-base-cased as our base LMs for E s and E q , and hence d = 768 .", "See Table 5 for an ablation study.", "In this section, we start by learning dense phrase representations from the supervision of reading comprehension tasks, i.e., a single passage p contains an answer a to a question q .", "Our goal is to learn strong dense representations of phrases for s ∈ S ( p ) , which can be retrieved by a dense representation of the question and serve as a direct (Footnote 5: Our base model is largely inspired by DenSPI (Seo et al., 2019), although we deviate from theirs as follows.", "(1) We remove coherency scalars and don't split any vectors.", "(2) DenSPI uses a shared encoder for phrases and questions while we use 3 separate language models initialized from the same pre-trained model.", "(3) We use SpanBERT instead of BERT.)", "answer 
(§4.1).", "Then, we introduce two different negative sampling methods (§4.2, §4.3), which encourage the phrase representations to be better discriminated at the full Wikipedia scale.", "See Figure 1 for an overview of DensePhrases.", "To learn phrase representations in a single passage along with question representations, we first maximize the log-likelihood of the start and end positions of the gold phrase s* where TEXT ( s* ) = a .", "The training loss for predicting the start position of a phrase given a question is computed as: [ z start 1 , . . . , z start m ] = [ h 1 ^⊤ q start , . . . , h m ^⊤ q start ] , P start = softmax ( z start 1 , . . . , z start m ) , L start = − log P start_{ start ( s* )} .", "(5) We can define L end in a similar way and the final loss for the single-passage training is L single = ( L start + L end ) / 2 .", "This essentially learns reading comprehension without any cross-attention between the passage and the question tokens, which fully decomposes phrase and question representations.", "Data augmentation. Since the contextualized word representations h 1 , . . . , h m are encoded in a query-agnostic way, they are always inferior to query-dependent representations in cross-attention models (Devlin et al., 2019), where passages are fed along with the questions concatenated by a special token such as [SEP] .", "We hypothesize that one key reason for the performance gap is that reading comprehension datasets only provide a few annotated questions in each passage, compared to the set of possible answer phrases.", "With this supervision alone, it is not easy to differentiate similar phrases in one passage (e.g., s* = Charles, Prince of Wales and another s = Prince George for a question q = Who is next in line to be the monarch of England? 
).", "Following this intuition, we propose to use a simple model to generate additional questions for data augmentation, based on a T5-large model (Raffel et al., 2020).", "To train the question generation model, we feed a passage p with the gold answer s* highlighted by inserting surrounding special tags.", "Then, the model is trained to maximize the log-likelihood of the question words of q .", "After training, we extract all the named entities in each training passage as candidate answers and feed the passage p with each candidate answer to generate questions.", "We keep the question-answer pairs only when a cross-attention reading comprehension model makes a correct prediction on the generated pair.", "The remaining generated QA pairs { ( q 1 , s 1 ) , ( q 2 , s 2 ) , . . . , ( q r , s r ) } are directly augmented to the original training set.", "Distillation. We also propose improving the phrase representations by distilling knowledge from a cross-attention model (Hinton et al., 2015).", "We minimize the Kullback-Leibler divergence between the probability distribution from our phrase encoder and that from a standard SpanBERT-base QA model.", "The loss is computed as follows: L distill = ( KL ( P start || P start c ) + KL ( P end || P end c ) ) / 2 , (7) where P start (and P end ) is defined in Eq.", "(5) and P start c and P end c denote the probability distributions used to predict the start and end positions of answers in the cross-attention model.", "Eventually, we need to build phrase representations for billions of phrases.", "Therefore, a bigger challenge is to incorporate more phrases as negatives so the representations can be better discriminated (Footnote 6: SpanBERT-large, 88.2 EM on SQuAD.)", "(Figure 2 (a): In-batch negatives.)", "at a larger scale.", "While Seo et al. 
(2019) simply sample two negative passages based on question similarity, we use in-batch negatives for our dense phrase representations, which has been shown to be effective in learning dense passage representations before (Karpukhin et al., 2020).", "As shown in Figure 2", "(a), for the i -th example in a mini-batch of size B , we denote the hidden representations of the gold start and end positions h start ( s* ) and h end ( s* ) as g start i and g end i , as well as the question representation as [ q start i , q end i ] .", "Let G start , G end , Q start , Q end be the B × d matrices where each row corresponds to g start i , g end i , q start i , q end i respectively.", "Basically, we can treat all the gold phrases from other passages in the same mini-batch as negative examples.", "We compute S start = Q start G start ^⊤ and S end = Q end G end ^⊤ and the i -th rows of S start and S end return B scores each, including a positive score and B − 1 negative scores: s start 1 , . . . , s start B and s end 1 , . . . , s end B .", "Similar to Eq.", "(5), we can compute the loss function for the i -th example as: P start_ib i = softmax ( s start 1 , . . . , s start B ) , P end_ib i = softmax ( s end 1 , . . . , s end B ) , L neg = − ( log P start_ib i + log P end_ib i ) / 2 , (8) We also attempted using non-gold phrases from other passages as negatives but did not find a meaningful improvement.", "The in-batch negatives usually benefit from a large batch size (Karpukhin et al., 2020).", "However, it is challenging to further increase batch sizes, as they are bounded by the size of GPU memory.", "Next, we propose a novel negative sampling method called pre-batch negatives , which can effectively utilize the representations from the preceding C mini-batches (Figure 2", "(b)).", "In each iteration, we maintain a FIFO queue of C mini-batches to cache phrase representations G start and G end .", "The cached phrase representations are then used as negative samples for the next iteration, providing B × C additional negative samples in total.", "These pre-batch negatives are used together with in-batch negatives and the training loss is the same as Eq.", "(8), except that the gradients are not back-propagated to the cached pre-batch negatives.", "After warming up the model with in-batch negatives, we simply shift from in-batch negatives ( B − 1 negatives) to in-batch and pre-batch negatives (hence a total number of B × C + B − 1 negatives).", "For simplicity, we use L neg to denote the loss for both in-batch negatives and pre-batch negatives.", "Since we do not retain the computational graph for pre-batch negatives, the memory consumption of pre-batch negatives is much more manageable while allowing an increase in the number of negative samples.", "Finally, we optimize all three losses together, on both annotated reading comprehension examples and generated questions from §4.1: L = λ1 L single + λ2 L distill + λ3 L neg , (9)", "where λ1 , λ2 , λ3 determine the importance of each loss term.", "We found that λ1 = 1 , λ2 = 2 , and λ3 = 4 works well in practice.", "See Table 5 and Table 6 for an ablation study of different components.", "Indexing. After training the phrase encoder E s , we need to encode all the phrases S ( D ) in the entire 
English Wikipedia D and store an index of the phrase dump.", "We segment each document d i ∈ D into a set of natural paragraphs, from which we obtain token representations for each paragraph using E s ( · ) .", "Then, we build a phrase dump H = [ h 1 , . . . , h |W ( D ) | ] ∈ R^{ |W ( D ) | × d } by stacking the token representations from all the paragraphs in D .", "Note that this process is computationally expensive and takes about hundreds of GPU hours with a large disk footprint.", "To reduce the (Footnote 7: This approach is inspired by the momentum contrast idea proposed in unsupervised visual representation learning (He et al., 2020).", "Contrary to their approach, we have separate encoders for phrases and questions and back-propagate to both during training without a momentum update.)", "size of the phrase dump, we follow and modify several techniques introduced in Seo et al. (2019) (see Appendix E for details).", "After indexing, we can use two rows i and j of H to represent a dense phrase representation [ h i , h j ] .", "We use faiss (Johnson et al., 2017) for building a MIPS index of H .", "Search. For a given question q , we can find the answer s* as follows: s* = argmax_{ s ( i,j )} E s ( s ( i,j ) , D )^⊤ E q ( q ) = argmax_{ s ( i,j )} ( H q start ) i + ( H q end ) j , (10) where s ( i,j ) denotes a phrase with start and end indices as i and j in the index H .", "We can compute the argmax of H q start and H q end efficiently by performing MIPS over H with q start and q end .", "In practice, we search for the top-k start and top-k end positions separately and perform a constrained search over their end and start positions respectively such that 1 ≤ i ≤ j < i + L ≤ |W ( D ) | .", "So far, we have created a phrase dump H that supports efficient MIPS search.", "In this section, we propose a novel method called query-side fine-tuning by only updating the question encoder E q to correctly retrieve a desired answer a* for a question q given H .", "Formally speaking, we optimize the marginal 
log-likelihood of the gold answer a* for a question q , which resembles the weakly-supervised QA setting in previous work (Lee et al., 2019; Min et al., 2019).", "For every question q , we retrieve the top-k phrases and minimize the objective: L query = − log [ Σ_{ s ∈ S ( q ) , TEXT ( s ) = a* } exp ( f ( s | D , q )) / Σ_{ s ∈ S ( q )} exp ( f ( s | D , q )) ] , (11) where f ( s | D , q ) is the score of the phrase s (Eq.", "(2)) and S ( q ) denotes the top-k phrases for q (Eq.", "(10)).", "In practice, we use k = 100 for all the experiments.", "There are several advantages for doing this: (1) we find that query-side fine-tuning can reduce the discrepancy between training and inference, and hence improve the final performance substantially (§8).", "Even with effective negative sampling, the model only sees a small portion of passages compared to the full scale of D and this training objective can effectively fill in the gap.", "(2) This training strategy allows for transfer learning to unseen (Footnote 8: We use IVFSQ4 with 1M clusters and set n-probe to 256.)", "domains, without rebuilding the entire phrase index.", "More specifically, the model is able to quickly adapt to new QA tasks (e.g., WebQuestions) when the phrase dump is built using SQuAD or Natural Questions.", "We also find that this can transfer to non-QA tasks when the query is written in a different format.", "In §7.3, we show the possibility of directly using DensePhrases for slot filling tasks by using a query such as (Michael Jackson, is a singer of, x) .", "In this regard, we can view our model as a dense knowledge base that can be accessed by many different types of queries and it is able to return phrase-level knowledge efficiently.", "Datasets.", "We use two reading comprehension datasets: SQuAD (Rajpurkar et al., 2016) and Natural Questions (NQ) (Kwiatkowski et al., 2019) to learn phrase representations, in which a single gold passage is provided for each question.", "For the open-domain QA 
experiments, we evaluate our approach on five popular open-domain QA datasets: Natural Questions, WebQuestions (WQ) (Berant et al., 2013), CuratedTREC (TREC) (Baudiš and Šedivý, 2015), TriviaQA (TQA) (Joshi et al., 2017), and SQuAD.", "Note that we only use SQuAD and/or NQ to build the phrase index and perform query-side fine-tuning (§6) for other datasets.", "We also evaluate our model on two slot filling tasks, to show how to adapt our DensePhrases for other knowledge-intensive NLP tasks.", "We focus on using two slot filling datasets from the KILT benchmark (Petroni et al., 2021): T-REx (Elsahar et al., 2018) and zero-shot relation extraction (Levy et al., 2017).", "Each query is provided in the form of {subject entity} [SEP] {relation} and the answer is the object entity.", "Appendix C provides the statistics of all the datasets.", "Implementation details.", "We denote the training datasets used for reading comprehension (Eq.", "(9)) as C phrase .", "For open-domain QA, we train two versions of phrase encoders, each of which are trained on C phrase = { SQuAD } and { NQ , SQuAD } , respectively.", "We build the phrase dump H for the 2018-12-20 Wikipedia snapshot and perform query-side fine-tuning on each dataset using Eq.", "(11).", "For slot filling, we use the same phrase dump as for open-domain QA, C phrase = { NQ , SQuAD } , and perform query-side fine-tuning on randomly sampled 5K (Table 2: Reading comprehension results, evaluated on the development sets of SQuAD and Natural Questions; EM / F1 on SQuAD and NQ (Long). Query-Dependent: BERT-base 80.8 / 88.5 and 69.9 / 78.2; SpanBERT-base 85.7 / 92.2 and 73.2 / 81.0. Query-Agnostic: DilBERT (Siblini et al., 2020) 63.0 / 72.0 and –; DeFormer (Cao et al., 2020) – / 72.1 and –; DenSPI 73.6 / 81.7 and 68.2 / 76.1; DenSPI + Sparc 76.4 / 84.8 and –; DensePhrases (ours) 78.3 / 86.3 and 71.9 / 79.6.)", "or 10K training examples to see how rapidly our model adapts to the new query types.", "See Appendix D for details on the hyperparameters and Appendix A for an analysis of 
computational cost.", "Reading comprehension.", "In order to show the effectiveness of our phrase representations, we first evaluate our model in the reading comprehension setting on SQuAD and NQ and report its performance alongside other query-agnostic models (Eq.", "(9) without query-side fine-tuning).", "This problem was originally formulated by Seo et al. (2018) as the phrase-indexed question answering (PIQA) task.", "Compared to previous query-agnostic models, our model achieves the best performance of 78.3 EM on SQuAD, improving over the previous phrase retrieval model (DenSPI) by 4.7%", "(Table 2).", "Although it is still behind cross-attention models, the gap has been greatly reduced, and our model serves as a strong starting point for the open-domain QA setting.", "Open-domain QA.", "Experimental results on open-domain QA are summarized in Table 3.", "Without any sparse representations, DensePhrases outperforms previous phrase retrieval models by a large margin and achieves a 15%-25% absolute improvement on all datasets except SQuAD.", "Training the model of Lee et al. (2020) on C_phrase = {NQ, SQuAD} only increases the result from 14.5% to 16.5% on NQ, demonstrating that it does not suffice to simply add more datasets for training phrase representations.", "Our performance is also competitive with recent retriever-reader models (Karpukhin et al., 2020), while running much faster during inference (Table 1).", "Table 4 summarizes the results on the two slot filling datasets, along with the baseline scores provided by Petroni et al. 
(2021).", "The only extractive baseline is DPR + BERT, which performs poorly in zero-shot relation extraction.", "On the other hand, our model achieves competitive performance on all datasets and state-of-the-art performance on two datasets using only 5K training examples.", "Ablation of phrase representations.", "Table 5 shows the ablation results of our model on SQuAD.", "On top of our choice of architecture, augmenting the training set with generated questions (QG = ✓) and performing distillation from cross-attention models (Distill = ✓) improve performance up to EM = 78.3.", "We also attempted adding the generated questions to the training of the SpanBERT-QA model but find only a 0.3% improvement, which validates that data sparsity is a bottleneck for query-agnostic models.", "Effect of batch negatives.", "We further evaluate the effectiveness of the various negative sampling methods introduced in §4.2 and §4.3.", "Since it is computationally expensive to test each setting at the full Wikipedia scale, we use a smaller text corpus D_small, consisting of all the gold passages in the development set of Natural Questions, for the ablation study.", "Empirically, we find that results are generally well correlated when we gradually increase the size of |D|.", "As shown in Table 6, both in-batch and pre-batch negatives bring substantial improvements.", "While using a larger batch size (B = 84) is beneficial for in-batch negatives, the number of preceding batches in pre-batch negatives is optimal when C = 2.", "Surprisingly, the pre-batch negatives also improve the performance when D = {p}.", "Effect of query-side fine-tuning.", "We summarize the effect of query-side fine-tuning in Table 7.", "For the datasets that were not used for training the phrase encoders (TQA, WQ, TREC), we observe a 15% to 20% improvement after query-side fine-tuning.", "Even for the datasets that have been used (NQ, SQuAD), it leads to significant improvements (e.g., from 32.6% to 40.9% on NQ for C_phrase = 
{NQ}), which clearly demonstrates that query-side fine-tuning can effectively reduce the discrepancy between training and inference.", "Learning effective dense representations of words is a long-standing goal in NLP (Bengio et al., 2003; Collobert et al., 2011; Mikolov et al., 2013; Peters et al., 2018; Devlin et al., 2019).", "Beyond words, dense representations of text at many different granularities, such as sentences (Le and Mikolov, 2014; Kiros et al., 2015) or documents (Yih et al., 2011), have been explored.", "While dense phrase representations have also been studied for statistical machine translation (Cho et al., 2014) or syntactic parsing (Socher et al., 2010), our work focuses on learning dense phrase representations for QA and any other knowledge-intensive tasks where phrases can be easily retrieved by performing MIPS.", "This type of dense retrieval has also been studied for sentence and passage retrieval (Humeau et al., 2019; Karpukhin et al., 2020) (see Lin et al., 2020 for recent advances in dense retrieval).", "While DensePhrases is explicitly designed to retrieve phrases that can be used as an answer to given queries, retrieving phrases also naturally entails retrieving larger units of text, provided the datastore maintains the mapping between each phrase and the sentence and passage in which it occurs.", "In this study, we show that we can learn dense representations of phrases at the Wikipedia scale, which are readily retrievable for open-domain QA and other knowledge-intensive NLP tasks.", "We learn both phrase and question encoders from the supervision of reading comprehension tasks and introduce two negative sampling techniques (in-batch and pre-batch negatives) to better discriminate phrases at scale.", "We also introduce query-side fine-tuning, which adapts our model to different types of queries.", "We achieve strong performance on five popular open-domain QA datasets, while reducing the storage footprint and improving latency significantly.", "We also achieve strong performance on two slot filling 
datasets using only a small number of training examples, showing the possibility of utilizing DensePhrases as a knowledge base.", "We thank Sewon Min, Hyunjae Kim, Gyuwan Kim, Jungsoo Park, Zexuan Zhong, Dan Friedman, and Chris Sciavolino for providing valuable comments and feedback.", "This research was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HR20C0021), and by the National Research Foundation of Korea (NRF-2020R1A2C3010638).", "It was also partly supported by the James Mi *91 Research Innovation Fund for Data Science and an Amazon Research Award.", "Our work relies on standard reading comprehension datasets such as SQuAD to build phrase representations.", "SQuAD, in particular, is created from a small number of Wikipedia articles sampled from the top 10,000 most popular articles (measured by PageRank), hence some of our models trained only on SQuAD could easily be biased towards the small number of topics that SQuAD contains.", "We hope that excluding such datasets during training or inventing an alternative pre-training procedure for learning phrase representations could mitigate this problem.", "Although most of our efforts have been made to reduce the computational complexity of previous phrase retrieval models (further detailed in Appendices A and E), leveraging our phrase retrieval model as a knowledge base will inevitably increase the minimum hardware requirements for additional experiments.", "We plan to apply vector quantization techniques to reduce the additional cost of using our model as a KB." ]
[ "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "objective", "method", "abstain", "abstain", "result", "abstain", "result", "method", "objective", "result", "objective", "method", "other", "other", "abstain", "other", "method", "other", "other", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "result", "abstain", "abstain", "result", "result", "other", "other", "other", "method", "method", "method", "method", "method" ]
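The query-side fine-tuning objective quoted in the record above (Eq. 11) marginalizes over all top-k retrieved phrases whose surface text matches the gold answer. A minimal sketch of that loss, assuming pre-computed phrase scores; the function name and the log-sum-exp stabilization are our own choices, not the paper's code:

```python
import numpy as np

def query_side_loss(scores, matches_gold):
    """Marginal negative log-likelihood over top-k retrieved phrases (Eq. 11).

    scores: length-k sequence of phrase scores f(s | D, q).
    matches_gold: length-k booleans, True where TEXT(s) == gold answer a.
    Returns -log( sum_{s: TEXT(s)=a} exp(f) / sum_s exp(f) ), or None when
    no retrieved phrase matches the answer (such questions would be skipped).
    """
    scores = np.asarray(scores, dtype=float)
    m = scores.max()                                   # log-sum-exp stabilization
    log_z = m + np.log(np.exp(scores - m).sum())       # log of the full partition sum
    pos = scores[np.asarray(matches_gold)]
    if pos.size == 0:
        return None
    log_pos = m + np.log(np.exp(pos - m).sum())        # log of the matching-phrase sum
    return -(log_pos - log_z)
```

When every retrieved phrase matches the answer, the loss is zero; as non-matching phrases get higher scores, the loss grows, which is what pushes the question encoder toward the phrases already in the fixed index.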
[ "Humans can distinguish new categories very efficiently with few examples, largely due to the fact that human beings can leverage knowledge obtained from relevant tasks.", "However, deep learning based text classification models tend to struggle to achieve satisfactory performance when labeled data are scarce.", "Inspired by human intelligence, we propose to introduce external knowledge into few-shot learning to imitate human knowledge.", "A novel parameter generator network is investigated to this end, which is able to use the external knowledge to generate different metrics for different tasks.", "Armed with this network, similar tasks can use similar metrics while different tasks use different metrics.", "Through experiments, we demonstrate that our method outperforms the SoTA few-shot text classification models.", "Humans are adept at quickly learning from a small number of examples.", "This motivates research on few-shot learning (Vinyals et al., 2016; Finn et al., 2017), which aims to recognize novel categories from very few labeled examples.", "The key challenge in few-shot learning is to make full use of the limited labeled examples to find the right generalizations.", "Metric-based approaches (Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Li et al., 2019; Zhang et al., 2020) are effective ways to address this challenge; they learn to represent examples in an appropriate feature space and use a distance metric to predict labels.", "However, directly employing metric-based approaches in text classification faces the problem that tasks are diverse and significantly different from each other, since words that are highly informative for one task may not be relevant for other tasks (Bao et al., 2019).", "Therefore, a single metric is insufficient to cope with all these tasks in few-shot text classification (Yu et al., 2018).", "To adapt metric learning to significantly diverse tasks, we propose a knowledge guided metric learning method.", "This 
method is inspired by the fact that human beings approach diverse tasks armed with knowledge obtained from relevant tasks (Lake et al., 2017).", "We use external knowledge from the knowledge base (KB) to imitate human knowledge, whereas the role of external knowledge has been ignored in previous methods (Yu et al., 2018; Bao et al., 2019; Geng et al., 2019, 2020).", "In detail, we resort to distributed representations of the KB instead of symbolic facts, since symbolic facts face the issues of poor generalization and data sparsity.", "Based on such KB embeddings, we investigate a novel parameter generator network (Ha et al., 2016; Jia et al., 2016) to generate task-relevant relation network parameters.", "With these generated parameters, the task-relevant relation network is able to apply diverse metrics to diverse tasks and ensure that similar tasks use similar metrics while different tasks use different metrics.", "Inspired by human intelligence, we present the first approach that introduces external knowledge into few-shot learning.", "A novel parameter generator network based on external knowledge is proposed to generate diverse metrics for diverse tasks.", "Experimental results on a public dataset show that our model significantly outperforms previous methods.", "Few-shot classification aims at training a model that can recognize novel classes from very few labeled examples.", "The training and testing of the model are conducted on two datasets (training set and test set) with no overlapping classes.", "(Figure 1: The main architecture for a C-way N-shot (C = 3, N = 2) problem with one query example; the diagram shows a BERT encoder, knowledge retrieval, averaging and combination, a task-agnostic relation network, and a task-relevant relation network whose parameters are generated from the retrieved knowledge, producing a relation score.)", "At both the training and test stages, the labeled examples are called the support set, which serves as a meta-training set, and the meta-testing examples are called the query set.", "If 
the support set contains N labeled examples for each of C unique classes, the few-shot problem is called C-way N-shot.", "To guarantee a good generalization performance at test time, the training and evaluation of the model are accomplished by episodically sampling the support set and the query set (Vinyals et al., 2016).", "More concretely, in each meta-training iteration, an episode is formed by randomly selecting C classes from the training set with N labeled examples for each of the C classes to serve as the support set $S = \{(x_i, y_i)\}_{i=1}^{C \times N}$, as well as a fraction of the remainder of those C classes' examples to act as the query set $Q = \{(x_i, y_i)\}_{i=C \times N + 1}^{C \times N + m}$, where $x_i$ and $y_i \in \{1, ..., C\}$ are the sentence and its label, and m is the number of query samples.", "The model is trained on the support set S to minimize the loss of its predictions over the query set Q.", "This training procedure is iteratively carried out episode by episode until convergence.", "In this network, a pre-trained BERT (Devlin et al., 2019) encoder is used to model sentences.", "Given an input text $x_i = ([CLS], w_1, w_2, ..., w_T, [SEP])$, the output of the BERT encoder is denoted as $H(x_i) \in \mathbb{R}^{(T+2) \times d_1}$, where $d_1$ is the output dimension of the BERT encoder.", "We use the first token of the sequence (the classification token) as the sentence representation, which is denoted as $h(x_i)$.", "In meta-learning, the representation of each class is the mean vector of the embedded sentences belonging to that class: $c_z = \frac{1}{|S_z|} \sum_{(x_i, y_i) \in S_z} h(x_i) \in \mathbb{R}^{d_1}$ (1), where $S_z$ denotes the set of sentences labeled with class z.", "Following Sung et al. 
(2018), we use a concatenation operator to combine the query representation $h(x_j)$ with the class representation $c_z$.", "This module takes the combined representation (shown in Equation 2) and the knowledge of the support set as input, and produces a scalar in the range of 0 to 1 representing the similarity between the query sentence and the class representation, which is called the relation score.", "Compared with the original relation network (Sung et al., 2018), we decompose the relation network into two parts, a task-agnostic relation network and a task-relevant relation network, in order to serve two purposes.", "The task-agnostic relation network models a basic metric function, while the task-relevant relation network adapts to diverse tasks.", "Task-Agnostic Relation Network The task-agnostic relation network uses a learned unified metric for all tasks, which is the same as the original relation network (Sung et al., 2018).", "With this unified metric, C task-agnostic relation scores $r^{agn}_{z,j}$ are generated to model the relation between one query input $x_j$ and the class representation $c_z$: $r^{agn}_{z,j} = RN^{agn}(p_{z,j} \mid \theta^{agn}) \in \mathbb{R}$, $z = 1, 2, ..., C$ (3), where $RN^{agn}$ denotes the task-agnostic relation network and $\theta^{agn}$ are learnable parameters.", "Task-Relevant Relation Network The task-relevant relation network is able to apply diverse metrics to diverse tasks, armed with external knowledge.", "In detail, for each support set S (S contains $C \times N$ labeled sentences), we retrieve a set of potentially relevant KB concepts $K(S)$, where each concept $k_i$ is associated with a KB embedding $e_i \in \mathbb{R}^{d_2}$.", "(We describe these processes in the following section.)", "We average over these KB embeddings element by element to form the knowledge representation of this support set.", "Then we use this knowledge representation to generate task-relevant relation network parameters,", "which parameterize the task-relevant relation network.", "With these generated parameters, we use the task-relevant network to generate C 
task-relevant relation scores $r^{rel}_{z,j}$ for the relation between one query input $x_j$ and the class representation $c_z$: $r^{rel}_{z,j} = RN^{rel}(p_{z,j} \mid \theta^{rel}) \in \mathbb{R}$, $z = 1, 2, ..., C$ (6), where $RN^{rel}$ denotes the task-relevant relation network.", "Finally, the relation score is defined as $r_{z,j} = \mathrm{Sigmoid}(r^{agn}_{z,j} + r^{rel}_{z,j})$ (7), where the sigmoid function is used to keep the score in a reasonable range.", "Following Sung et al. (2018), the network architecture of the relation networks is two fully-connected layers, and mean squared error (MSE) loss is used to train the model.", "The relation score is regressed to the ground truth: matched pairs have similarity 1 and mismatched pairs have similarity 0.", "We use NELL (Carlson et al., 2010) as the KB, stored as (subject, relation, object) triples, where each triple is a fact indicating a specific relation between subject and object, e.g., (Intel, competes with, Nvidia).", "Knowledge Embedding Since symbolic facts suffer from poor generalization and data sparsity, we resort to distributed representations of triples.", "In detail, given any triple (s, r, o), vector embeddings of the subject s, the relation r, and the object o are learned jointly such that the validity of the triple can be measured in the real number space.", "We adopt the BILINEAR model (Yang et al., 2015) to measure the validity of triples: $f(s, r, o) = \mathbf{s}^{\top} \mathrm{diag}(\mathbf{r}) \mathbf{o} \in \mathbb{R}$ (9), where $\mathbf{s}, \mathbf{r}, \mathbf{o} \in \mathbb{R}^{d_2}$ are the embeddings associated with s, r, o, respectively, and $\mathrm{diag}(\mathbf{r})$ is a diagonal matrix with the main diagonal given by the relation embedding $\mathbf{r}$.", "To learn these vector embeddings, a margin-based ranking loss is designed, where triples in the KB are taken as positive examples and negative triples are constructed by corrupting either subjects or objects.", "Knowledge Retrieval Inspired by previous studies (Yang and Mitchell, 2017; Yang et al., 2019), exact string matching (Charras and Lecroq, 2004) is used to recognize entity mentions from a given 
passage and link recognized entity mentions to subjects in the KB.", "Then, we collect the corresponding objects (concepts) as candidates.", "After this retrieval process, we obtain a set of potentially relevant KB concepts, where each KB concept is associated with a KB embedding.", "Our model is evaluated on the widely used ARSC (Blitzer et al., 2007) dataset, which comprises reviews for 23 types of products on Amazon.", "For each product domain, there are three different binary classification tasks.", "This yields 69 tasks in total.", "Following Yu et al. (2018), we select 12 tasks from four domains (Books, DVDs, Electronics, and Kitchen) as the test set, with only 5 examples as the support set for each class.", "In our experiments, we use Hugging Face's implementation of BERT (base version) and initialize the parameters of the BERT encoding layer with the pre-trained models officially released by Google.", "To represent knowledge in NELL (Carlson et al., 2010), the BILINEAR model (Yang et al., 2015) is implemented with the open-source framework OpenKE (Han et al., 2018) to obtain the embeddings of entities and relations.", "The size of the embeddings of entities and relations is set to 100.", "To train our model, we use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.00001.", "All experiments are run on an NVIDIA GeForce RTX 2080 Ti.", "Baselines.", "We compare our method to the following baselines: (1) Match Network is a metric-based attention method for few-shot learning; (2) Prototypical Network is a metric-based method that uses sample averages as class prototypes; (3) MAML is an optimization-based method through learning to learn with gradients; (4) Relation Network is a metric-based method that leverages two fully-connected layers as the distance metric and sums up sample vectors in the support set as class vectors; (5) Graph Network is a graph-based model that implements a task-driven message passing algorithm on the sample-wise level; (6) ROBUSTTC-FSL is 
an approach that combines adaptive metric methods by clustering the tasks; (7) Induction Network is a metric-based method that uses dynamic routing to learn class-wise representations.", "Analysis.", "Experimental results on ARSC are presented in Table 1.", "We observe that our method achieves the best results amongst all meta-learning models. (Footnote 1: https://huggingface.co/transformers; footnote 2: https://github.com/google-research/bert.)", "Both Induction Network and Relation Network use a single metric to measure the similarity.", "Compared with these methods, we attribute the improvements of our model to the fact that it can adapt to diverse tasks with diverse metrics.", "Compared with ROBUSTTC-FSL, our model leverages knowledge to obtain implicit task clusters and is trained in an end-to-end manner, which can mitigate error propagation.", "To analyze the contributions and effects of external knowledge in our approach, we perform some ablation and replacement studies, which are shown in Table 2.", "In the ablation, we delete the task-relevant relation network, so the model is reduced to the original BERT-based relation network.", "We observe that this ablation degrades performance.", "To rule out the effect of the reduced number of parameters, we conduct a replacement experiment, in which we replace the task-relevant relation network with a task-agnostic relation network.", "We find that increasing the number of parameters can slightly improve performance, but there is still a big gap compared with our full model.", "Therefore, we conclude that the effectiveness of our model is attributable to introducing external knowledge rather than to increasing the number of model parameters.", "To analyze different strategies for introducing knowledge in few-shot learning, we remove the task-relevant relation network, and replace the BERT encoder in our method with the KT-NET encoder (Yang et al., 2019) and the K-BERT encoder (Liu et al., 2019).", "In the KT-NET encoder, an attention mechanism is used to 
adaptively fuse selected knowledge with BERT.", "In the K-BERT encoder, a knowledge-rich sentence tree is the input of the model.", "These methods both introduce knowledge at the representation level, while our method injects knowledge at the task level.", "The results are shown in Table 3.", "Combining Table 2 and Table 3, we find that (1) introducing knowledge can improve the performance of few-shot text classification; and (2) it is more effective to introduce knowledge at the task level than at the representation level.", "Inspired by human intelligence, we introduce external knowledge into few-shot learning.", "A parameter generator network is investigated to this end, which can use external knowledge to generate relation network parameters.", "With these parameters, the relation network can handle diverse tasks with diverse metrics.", "Through various experiments, we demonstrate the effectiveness of our model.", "We thank the anonymous reviewers for their insightful comments.", "We also thank Yushan Xie and Zhixing Tian for helpful suggestions.", "This work is supported by the National Key R&D Program of China (Grant No. 2020AAA0106400), the National Natural Science Foundation of China (Grant No. 61976211 and Grant No. 61806201), and the Key Research Program of the Chinese Academy of Sciences (Grant No. ZDBS-SSW-JSC006)." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "other", "other", "other" ]
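The record above describes a parameter generator that maps the averaged KB-concept embeddings of a support set to the weights of the task-relevant relation network. A minimal numpy sketch of that hypernetwork idea; the linear generator, the dimensions, and the ReLU hidden layer are our own assumptions, not the paper's exact Equations 4-5:

```python
import numpy as np

rng = np.random.default_rng(0)

d_pair, d_kb, d_hidden = 8, 4, 6   # dims of combined pair, KB embeddings, hidden layer

# Hypothetical generator: a linear map from the support set's knowledge vector
# to the flattened weights of a small task-relevant relation network.
n_params = d_pair * d_hidden + d_hidden
G = rng.normal(scale=0.1, size=(n_params, d_kb))

def task_relevant_score(pair, concept_embs):
    """Score one (query, class) pair with a network generated from KB knowledge."""
    k = concept_embs.mean(axis=0)                 # knowledge representation of the support set
    theta = G @ k                                 # generated parameters
    W1 = theta[: d_pair * d_hidden].reshape(d_hidden, d_pair)
    w2 = theta[d_pair * d_hidden :]
    h = np.maximum(W1 @ pair, 0.0)                # ReLU hidden layer
    return float(w2 @ h)                          # scalar relation score r_rel

def relation_score(r_agn, r_rel):
    """Combine task-agnostic and task-relevant scores as in Eq. 7."""
    return 1.0 / (1.0 + np.exp(-(r_agn + r_rel)))
```

Because the generated weights depend only on the support set's averaged knowledge vector, tasks that retrieve similar concepts end up with similar metrics, which is the behavior the quoted sentences describe.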
[ "Sentence simplification is the task of rewriting texts so they are easier to understand.", "Recent research has applied sequence-to-sequence (Seq2Seq) models to this task, focusing largely on training-time improvements via reinforcement learning and memory augmentation.", "One of the main problems with applying generic Seq2Seq models for simplification is that these models tend to copy directly from the original sentence, resulting in outputs that are relatively long and complex.", "We aim to alleviate this issue through the use of two main techniques.", "First, we incorporate content word complexities, as predicted with a leveled word complexity model, into our loss function during training.", "Second, we generate a large set of diverse candidate simplifications at test time, and rerank these to promote fluency, adequacy, and simplicity.", "Here, we measure simplicity through a novel sentence complexity model.", "These extensions allow our models to perform competitively with state-of-the-art systems while generating simpler sentences.", "We report standard automatic and human evaluation metrics. (Footnote 1: Our code is available in our fork of Sockeye (Hieber et al., 2017) at https://github.com/rekriz11/sockeye-recipes.)", "Automatic text simplification aims to reduce the complexity of texts and preserve their meaning, making their content more accessible to a broader audience (Saggion, 2017).", "This process can benefit people with reading disabilities, foreign language learners, and young children, and can assist non-experts exploring a new field.", "Text simplification has gained wide interest in recent years due to its relevance for NLP tasks.", "Simplifying text during preprocessing can improve the performance of syntactic parsers (Chandrasekar et al., 1996) and", "semantic role labelers (Vickrey and Koller, 2008; Woodsend and Lapata, 2014), and can improve the grammaticality (fluency) and meaning preservation (adequacy) of translation output 
(Štajner and Popović, 2016).", "Most text simplification work has approached the task as a monolingual machine translation problem (Woodsend and Lapata, 2011; Narayan and Gardent, 2014).", "Once viewed as such, a natural approach is to use sequence-to-sequence (Seq2Seq) models, which have shown state-of-the-art performance on a variety of NLP tasks, including machine translation (Vaswani et al., 2017) and dialogue systems (Vinyals and Le, 2015).", "One of the main limitations in applying standard Seq2Seq models to simplification is that these models tend to copy directly from the original complex sentence too often, as this is the most common operation in simplification.", "Several recent efforts have attempted to alleviate this problem using reinforcement learning (Zhang and Lapata, 2017) and memory augmentation (Zhao et al., 2018), but these systems often still produce outputs that are longer than the reference sentences.", "To avoid this problem, we propose to extend the generic Seq2Seq framework at both training and inference time by encouraging the model to choose simpler content words, and by effectively choosing an output based on a large set of candidate simplifications.", "We propose a custom loss function to replace standard cross entropy probabilities during training, which takes into account the complexity of content words.", "We include a similarity penalty at inference time to generate more diverse simplifications, and we further cluster similar sentences together to remove highly similar candidates.", "We develop methods to rerank candidate simplifications to promote fluency, adequacy, and simplicity, helping the model choose the best option from a diverse set of sentences.", "An analysis of the individual components reveals that, of the three contributions, reranking simplifications at the post-decoding stage brings the largest benefit to the simplification system.", "We compare our model to several state-of-the-art systems in both an automatic 
and human evaluation settings, and show that the generated simple sentences are shorter and simpler, while remaining competitive with respect to fluency and adequacy.", "We also include a detailed error analysis to explain where the model currently falls short and provide suggestions for addressing these issues.", "Text simplification has often been addressed as a monolingual translation process, which generates a simplified version of a complex text.", "Zhu et al. (2010) employ a tree-based translation model and consider sentence splitting, deletion, reordering, and substitution.", "Coster and Kauchak (2011) use a Phrase-Based Machine Translation (PBMT) system with support for deleting phrases, while Wubben et al. (2012) extend a PBMT system with a reranking heuristic (PBMT-R).", "Woodsend and Lapata (2011) propose a model based on a quasi-synchronous grammar, a formalism able to capture structural mismatches and complex rewrite operations.", "Narayan and Gardent (2014) combine a sentence splitting and deletion model with PBMT-R.", "This model has been shown to perform competitively with neural models on automatic metrics, though it is outperformed using human judgments (Zhang and Lapata, 2017).", "In recent work, Seq2Seq models are widely used for sequence transduction tasks such as machine translation (Sutskever et al., 2014; Luong et al., 2015), conversation agents (Vinyals and Le, 2015), summarization (Nallapati et al., 2016), etc.", "Initial Seq2Seq models consisted of a Recurrent Neural Network (RNN) that encodes the source sentence x to a hidden vector of a fixed dimension, followed by another RNN that uses this hidden representation to generate the target sentence y .", "The two RNNs are then trained jointly to maximize the conditional probability of the target sentence given the source sentence, i.e. 
P(y | x).", "Other works have since extended this framework to include attention mechanisms (Luong et al., 2015) and transformer networks (Vaswani et al., 2017).", "Nisioi et al. (2017) was the first major application of Seq2Seq models to text simplification, applying a standard encoder-decoder approach with attention and beam search.", "Vu et al. (2018) extended this framework to incorporate memory augmentation, which simultaneously performs lexical and syntactic simplification, allowing them to outperform standard Seq2Seq models.", "There are two main Seq2Seq models we will compare to in this work, along with the statistical model from Narayan and Gardent (2014).", "Zhang and Lapata (2017) proposed DRESS (Deep REinforcement Sentence Simplification), a Seq2Seq model that uses a reinforcement learning framework at training time to reward the model for producing sentences that score high on fluency, adequacy, and simplicity.", "This work showed state-of-the-art results in human evaluation.", "However, the sentences generated by this model are in general longer than the reference simplifications.", "Zhao et al. (2018) proposed DMASS (Deep Memory Augmented Sentence Simplification), a multilayer, multi-head attention transformer architecture which also integrates simplification rules.", "This work has been shown to achieve state-of-the-art results in an automatic evaluation, training on the WikiLarge dataset introduced by Zhang and Lapata (2017).", "Zhao et al. 
(2018), however, does not perform a human evaluation, and restricting evaluation to automatic metrics is generally insufficient for comparing simplification models.", "Our model, in comparison, is able to generate shorter and simpler sentences according to Flesch-Kincaid grade level (Kincaid et al., 1975) and human judgments, and we provide a comprehensive analysis using human evaluation and a qualitative error analysis.", "2 For a detailed description of Seq2Seq models, please see (Sutskever et al., 2014).", "Standard Seq2Seq models use cross entropy as the loss function at training time.", "This only takes into account how similar our generated tokens are to those in the reference simple sentence, and not the complexity of those tokens.", "Therefore, we first develop a model to predict word complexities, and incorporate these into a custom loss function.", "Extending the complex word identification model of Kriz et al. (2018), we train a linear regression model using length, number of syllables, and word frequency; we also include Word2Vec embeddings (Mikolov et al., 2013).", "To collect data for this task, we consider the Newsela corpus, a collection of 1,840 news articles written by professional editors at 5 reading levels (Xu et al., 2015).", "We extract word counts in each of the five levels; in this dataset, we denote 4 as the original complex document, 3 as the least simplified re-write, and 0 as the most simplified re-write.", "We propose using Algorithm 1 to obtain the complexity label for each word w, where l_w represents the level given to the word, and c_{w,i} represents the number of times that word occurs in level i.", "Here, we initially label the word with the most complex level, 4. If at least 70% of the instances of this word are preserved in level 3, we reassign the label as level 3; if the label was changed, we then do this again for progressively simpler levels.", "As examples, Algorithm 1 labels pray, sign, and ends with complexity level 0, and proliferation, consensus, and emboldened with complexity level 4.", "We split the data extracted from Algorithm 1 into Train, Validation, and Test sets (90%, 5%, and 5%, respectively) and use them for our experiments.", "3 Newsela is an education company that provides reading materials for students in elementary through high school.", "We report the Mean Squared Error (MSE) and Pearson correlation on our test set in Table 1.", "We compare our model to two baselines, which predict complexity using log Google n-gram frequency (Brants and Franz, 2006) and word length, respectively.", "For these baselines, we calculate the minimum and maximum values for words in the training set, and then normalize the values for words in the test set.", "We propose a metric that modifies cross entropy loss to upweight simple words while downweighting more complex words.", "More formally, the probabilities of our simplified loss function can be generated by the process described in Algorithm 2.", "Since our word complexities are originally from 0 to 4, with 4 being the most complex, we need to reverse this ordering and add one, so that more complex words and non-content words are not given zero probability.", "In this algorithm, we denote the original probability vector as CE, our vocabulary as V, the predicted word complexity of a word v as score_v, the resulting weight for a word as w_v, and our resulting weights as SCE, which we then normalize and convert back to logits.", "Here, the degree of reweighting is controlled by a parameter we can tune during experimentation.", "Note that we only upweight simple content words, not stopwords or entities.", "To increase the diversity of our candidate simplifications, we apply a beam search scoring modification proposed in Li et al. 
(2016).", "4 Note that we also tried continuous rather than discrete labels for words by averaging frequencies, but found that this increased the noise in the data.", "For example, the and dog were incorrectly labeled as level 2 instead of 0, since these words are seen frequently across all levels.", "5 We report MSE results by level in the appendix.", "In standard beam search with a beam width of b, given the b hypotheses at time t-1, the next set of hypotheses is generated by first selecting the top b candidate expansions from each hypothesis.", "These b × b hypotheses are then ranked by the joint probabilities of their sequence of output tokens, and the top b according to this ranking are chosen.", "We observe that candidate expansions from a single parent hypothesis tend to dominate the search space over time, even with a large beam.", "To increase diversity, we apply a penalty term based on the rank of a generated token among the b candidate tokens from its parent hypothesis.", "If Y^j_{t-1} is the j-th top hypothesis at time t-1, j in [1..b], and y^{j,j'}_t is a candidate token generated from Y^j_{t-1}, where j' in [1..b] represents the rank of this particular token among its siblings, then our modified scoring function is as follows (here, γ is a parameter we can tune during experimentation): S(Y^j_{t-1}, y^{j,j'}_t) = log p(y^j_1, ..., y^j_{t-1}, y^{j,j'}_t | x) - γ · j' (1)", "Extending the work of Li et al. (2016), to further increase the distance between candidate simplifications, we can cluster similar sentences after decoding.", "To do this, we convert each candidate into a document embedding using Paragraph Vector (Le and Mikolov, 2014), cluster the vector representations using k-means, and select the sentence nearest to each centroid.", "This allows us to group similar sentences together, and only consider candidates that are relatively more different.", "Generating diverse sentences is helpful only if we are able to effectively rerank them in a way that promotes simpler sentences while preserving fluency and adequacy.", "To do this, we propose three metrics.", "Table 2: Pearson Correlation and Overall Mean Squared Error (MSE) for the sentence-level complexity prediction model (CNN), compared to a length-based baseline. Model / Correlation / MSE: Length Baseline 0.503 / 3.72; CNN (ours) 0.650 / 1.13.", "Fluency (f_i): We calculate the perplexity based on a 5-gram language model trained on English Gigaword v.5 (Parker et al., 2011) using KenLM (Heafield, 2011).", "Adequacy (a_i): We generate Paragraph Vector representations (Le and Mikolov, 2014) for the input sentence and each candidate and calculate the cosine similarity.", "Simplicity (s_i): We develop a sentence complexity prediction model to predict the overall complexity of each sentence we generate.", "To calculate sentence complexity, we modify a Convolutional Neural Network (CNN) for sentence classification (Kim, 2014) to make continuous predictions.", "We use aligned sentences from the Newsela corpus (Xu et al., 2015) as training data, labeling each with the complexity level from which it came.", "As with the word complexity prediction model, we report MSE and Pearson correlation on a held-out test set in Table 2. 
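The clustering step described above (embed each candidate, run k-means, keep the candidate nearest each centroid) can be sketched as follows; this is a toy stand-in that assumes precomputed embedding vectors and uses a plain-Python k-means instead of Paragraph Vector, with all names illustrative:

```python
import math
import random

def cluster_candidates(embeddings, k, iters=10, seed=0):
    # Toy k-means over candidate sentence embeddings: returns the indices of
    # the candidates nearest to each centroid (one representative per cluster).
    # Stand-in for the Paragraph Vector + k-means step; names are illustrative.
    rnd = random.Random(seed)

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # initialize centroids from k distinct candidates
    centroids = [list(embeddings[i]) for i in rnd.sample(range(len(embeddings)), k)]
    for _ in range(iters):
        # assign each candidate to its nearest centroid
        assign = [min(range(k), key=lambda c: dist(e, centroids[c])) for e in embeddings]
        # move each centroid to the mean of its members
        for c in range(k):
            members = [e for e, a in zip(embeddings, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    # representative = candidate closest to each final centroid
    reps = {min(range(len(embeddings)), key=lambda i: dist(embeddings[i], centroids[c]))
            for c in range(k)}
    return sorted(reps)
```

Only the representatives survive to reranking, so near-duplicate beam outputs collapse into a single candidate.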
We normalize each individual score between 0 and 1, and calculate a final score as follows: score_i = λ_f · f_i + λ_a · a_i + λ_s · s_i (2)", "We tune these weights (λ) on our validation data during experimentation to find the most appropriate combination of reranking metrics.", "Examples of improvements resulting from including each of our contributions are shown in Table 3.", "4 Experiments", "4.1 Data", "We train our models on the Newsela Corpus.", "In previous work, models were mainly trained on the parallel Wikipedia corpus (PWKP), consisting of paired sentences from English Wikipedia and Simple Wikipedia (Zhu et al., 2010), or the extended WikiLarge corpus (Zhang and Lapata, 2017).", "6 We respect the train/test splits described in Section 4.1.", "7 We report MSE results by level in the appendix.", "We choose to instead use Newsela, because it was found that 50% of the sentences in Simple Wikipedia are either not simpler or not aligned correctly, while Newsela has higher-quality simplifications (Xu et al., 2015).", "As in Zhang and Lapata (2017), we exclude sentence pairs corresponding to levels 4-3, 3-2, 2-1, and 1-0, where the simple and complex sentences are just one level apart, as these are too close in complexity.", "After this filtering, we are left with 94,208 training, 1,129 validation, and 1,077 test sentence pairs; these splits are the same as in Zhang and Lapata (2017).", "We preprocess our data by tokenizing and replacing named entities using CoreNLP (Manning et al., 2014).", "For our experiments, we use Sockeye, an open source Seq2Seq framework built on Apache MXNet (Hieber et al., 2017; Chen et al., 2015).", "In this model, we use LSTMs with attention for both our encoder and decoder, with 256 hidden units and two hidden layers.", "We attempt to match the hyperparameters described in Zhang and Lapata (2017) as closely as possible; as such, we use 300-dimensional pretrained GloVe word embeddings (Pennington et al., 2014), and the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001.", "We ran our models for 30 epochs.", "During training, we use our complexity-weighted loss function, with its tunable weighting parameter set to 2; for our baseline models, we use cross-entropy loss.", "At inference time, where appropriate, we set the beam size b = 100, and the similarity penalty γ = 1.0.", "After inference, we set the number of clusters to 20, and we compare two separate reranking weightings: one which uses fluency, adequacy, and simplicity (FAS), where λ_f = λ_a = λ_s = 1/3; and one which uses only fluency and adequacy (FA), where λ_f = λ_a = 1/2 and λ_s = 0.", "Hybrid combines a sentence splitting and deletion model with a phrase-based machine translation system (Narayan and Gardent, 2014).", "DRESS is a Seq2Seq model trained with reinforcement learning which integrates lexical simplifications (Zhang and Lapata, 2017).", "DMASS is a Seq2Seq model which integrates the transformer architecture and additional simplifying paraphrase rules (Zhao et al., 2018).", "We also present results on several variations of our models, to isolate the effect of each individual improvement.", "S2S is a standard sequence-to-sequence model with attention and greedy search.", "S2S-Loss is trained using our complexity-weighted loss function and greedy search.", "S2S-FA uses beam search, where we rerank all sentences using fluency and adequacy (FA weights).", "S2S-Cluster-FA clusters the sentences before reranking using FA weights.", "S2S-Diverse-FA uses diversified beam search, reranking using FA weights.", "S2S-All-FAS uses all contributions, reranking using fluency, adequacy, and simplicity (FAS weights).", "Finally, S2S-All-FA integrates all modifications we propose, and reranks using FA weights.", "In this section, we compare the baseline models and various configurations of our model with both standard automatic simplification metrics and a human evaluation.", "We show qualitative examples where each of our contributions improves the generated simplification in Table 3. 
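The weighted reranking described in the surrounding text (a final score score_i combining normalized fluency, adequacy, and simplicity, with FAS weights of 1/3 each and FA weights of 1/2, 1/2, 0) can be sketched as follows; the function name and input layout are assumptions:

```python
def rerank(candidates, w_f=1/3, w_a=1/3, w_s=1/3):
    # candidates: (sentence, fluency, adequacy, simplicity) tuples, with each
    # score already normalized to [0, 1]; returns sentences best-first under
    # score_i = w_f * f_i + w_a * a_i + w_s * s_i.
    scored = [(w_f * f + w_a * a + w_s * s, sent) for sent, f, a, s in candidates]
    return [sent for _, sent in sorted(scored, reverse=True)]

# FAS weighting favors the simpler candidate; FA weighting ignores simplicity.
cands = [("long", 0.9, 0.9, 0.1), ("short", 0.8, 0.7, 0.9)]
fas_best = rerank(cands)[0]                            # "short"
fa_best = rerank(cands, w_f=0.5, w_a=0.5, w_s=0.0)[0]  # "long"
```

Switching weightings only changes the scalar combination, so the same candidate pool can be reranked under several criteria at no extra decoding cost.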
5.1 Automatic Evaluation", "Following previous work (Zhang and Lapata, 2017; Zhao et al., 2018), we use SARI as our main automatic metric for evaluation (Xu et al., 2016).", "Specifically, SARI calculates how often a generated sentence correctly keeps, inserts, and deletes n-grams from the complex sentence, using the reference simple sentence as the gold standard, where 1 ≤ n ≤ 4.", "9 For Hybrid and DRESS, we use the generated outputs provided in Zhang and Lapata (2017).", "We made a significant effort to rerun the code for DRESS, but were unable to do so.", "10 For DMASS, we ran the authors' code on our data splits from Newsela, in collaboration with the first author to ensure an accurate comparison.", "Note that we do not use BLEU (Papineni et al., 2002) for evaluation; even though it correlates better with fluency than SARI, Sulem et al. (2018) recently showed that BLEU often negatively correlates with simplicity on the task of sentence splitting.", "We also calculate oracle SARI, where appropriate, to show the score we could achieve if we had a perfect reranking model.", "Our results are reported in Table 4.", "Our best models outperform previous state-of-the-art systems, as measured by SARI.", "Table 4 also shows that, when used separately, reranking and clustering result in improvements on this metric.", "Our loss and diverse beam search methods have more ambiguous effects, especially when combined with the former two; note, however, that including diversity before clustering does help slightly.", "Table 5: Average sentence length, FKGL, TER score compared to input, and number of insertions. Model / Len / FKGL / TER / Ins / Edit: Complex 23.1 / 11.14 / 0 / 0 / -; Hybrid 12.4 / 7.82 / 0.49 / 0.01 / -; DRESS 14.4 / 7.60 / 0.44 / 0.07 / -; DMASS 15.1 / 7.40 / 0.59 / 0.28 / -; S2S 16.1 / 7.91 / 0.41 / 0.23 / -; S2S-Loss 16.4 / 8.11 / 0.40 / 0.31 / -; S2S-FA 7.6 / 6.42 / 0.73 / 0.01 / 7.28; S2S-Cluster-FA 9.1 / 6.49 / 0.68 / 0.05 / 7.55; S2S-Diverse-FA 7.5 / 5.97 / 0.78 / 0.07 / 8.22; S2S-All-FAS 9.1 / 5.37 / 0.68 / 0.05 / 7.56; S2S-All-FA 10.8 / 6.42 / 0.61 / 0.07 / 7.56; Reference 12.8 / 6.90 / 0.67 / 0.42 / -.", "We calculate several descriptive statistics on the generated sentences and report the results in Table 5.", "We observe that our models produce sentences that are much shorter and at a lower reading level, according to Flesch-Kincaid grade level (FKGL) (Kincaid et al., 1975), while making more changes to the original sentence, according to Translation Error Rate (TER) (Snover et al., 2006).", "In addition, we see that the customized loss function increases the number of insertions made, while both the diversified beam search and clustering techniques individually increase the distance between sentence candidates.", "While SARI has been shown to correlate with human judgments on simplicity, it only weakly correlates with judgments on fluency and adequacy (Xu et al., 2016).", "Furthermore, SARI only considers simplifications at the word level, while we believe that a simplification metric should also take into account sentence structure complexity.", "We plan to investigate this further in future work.", "Due to the current perceived limitations of automatic metrics, we also choose to elicit human 
judgments on 200 randomly selected sentences to determine the relative overall quality of our simplifications.", "For our first evaluation, we ask native English speakers on Amazon Mechanical Turk to evaluate the fluency, adequacy, and simplicity of sentences generated by our systems and the baselines, similar to Zhang and Lapata (2017).", "Each annotator rated these aspects on a 5-point Likert scale.", "These results are found in Table 6.", "As we can see, our best models substantially outperform the Hybrid and DMASS systems.", "Note that DMASS performs the worst, potentially because the transformer architecture is a more complex model that requires more training data to work properly.", "Compared to DRESS, our models generate simpler sentences, but DRESS better preserves the meaning of the original sentence.", "To further investigate why this is the case, we note from Table 5 that sentences generated by our model are overall shorter than those of other models, which also corresponds to higher TER scores.", "Napoles et al. (2011) note that on sentence compression, longer sentences are perceived by human annotators to preserve more meaning than shorter sentences, controlling for quality.", "Thus, the drop in human-judged adequacy may be related to our sentences' relatively short lengths.", "12 We present the instructions for all of our human evaluations in the appendix.", "To test that this observation also holds true for simplicity, we took the candidates generated by our best model, and after reranking them as before, we selected three sets of sentences:", "MATCH-Dress0: Highest ranked sentence with length closest to that of DRESS (DRESS-Len); average length is 14.10.", "MATCH-Dress+2: Highest ranked sentence with length closest to (DRESS-Len + 2); average length is 15.32.", "MATCH-Dress-2: Highest ranked sentence with length closest to (DRESS-Len - 2); average length is 12.61.", "The average fluency, adequacy, and simplicity from human judgments on these new sentences are shown in Figure 2, along with those ranked highest by our best model (Original).", "As expected, meaning preservation does substantially increase as we increase the average sentence length, while simplicity decreases.", "Interestingly, fluency also decreases as sentence length increases; this is likely due to our higher-ranked sentences having greater fluency, as defined by language model perplexity.", "To gain insight into what aspects of the simplification process are challenging for our model, we present the most recurring types of errors from our test set.", "1. Long and complex sentences with multiple clauses", "(a) Complex: Turkey has long enshrined the secular ideals of founding father Mustafa Kemal Ataturk, particularly in an education system that until recently banned Islamic headscarves in schools and made schoolchildren begin the day reciting an oath of allegiance to Ataturk's legacy.", "Reference: Schools in Turkey had banned headscarves.", "Simple: They made schoolchildren to Ataturk's history.", "(b) 
Complex: And Wal-Mart, which imports more fruits and vegetables from Mexico than any other U.S. company, announced its effort to force improvements up and down its supply chain.", "Reference: Experts said Wal-Mart is an important company.", "Simple: Wal-Mart used more fruits and vegetables from the company.", "2. Errors due to failed pronoun resolution", "(a) Complex: He is the creative director of Rethink Leisure & Entertainment, which is working on several projects in China and elsewhere in Asia.", "Reference: He is with Rethink Leisure & Entertainment.", "Simple: He is working on several projects in China.", "(b) Complex: Teachers there say Richie reads like a high school student.", "Reference: He reads like a high school student.", "Simple: Richie says he is a high school student.", "3. Simplifying the wrong part of the sentence", "(a) Complex: Parks deliberately maintained her image as shy and proper, said Adrienne Cannon, an expert on African-American history.", "Reference: Adrienne Cannon studies African-American history.", "Simple: She is an expert on African-American history.", "(b) Complex: His father owned the home when the lava flowed slowly to the coast.", "Reference: His father still owned the home.", "Simple: The river cut slowly to the coast.", "4. Poor substitution due to word embeddings proximity", "(a) Complex: Just until recently, the education system had banned Islamic headscarves in schools and made schoolchildren begin the day reciting a pledge of allegiance to Ataturk's legacy.", "Reference: Schools in Turkey had banned headscarves.", "Simple: Just until recently, the education officials had banned Islamic flags.", "(b) Complex: In Beijing kite circles, Fei is widely known as the elder statesman.", "Reference: In Beijing, Fei is widely known as an expert.", "Simple: In Beijing, Fei is considered a doctor.", "5. Training confusion due to misalignments in training corpus", "(a) Complex: He heard the applause.", "Reference: The winning word was magician.", "Simple: But he heard the song.", "(b) Complex: More than 300,000 students across the country signed up to participate.", "Reference: Rogers Middle School took part in the program.", "Simple: There were more than 300,000 students.", "6. Acceptable simplifications that were not available in the training data", "(a) Complex: He later tried to buy his freedom.", "Reference: Henson tried to buy his freedom later.", "Simple: The man tried to buy his freedom.", "(b) Complex: Middle school was a rough couple of years for Talmus' daughter, Lili Smith.", "Reference: She had a hard time in middle school.", "Simple: School was a rough couple of years.", "Attempting to rewrite very long and complex sentences resulted in consistent errors, as shown in 1a and 1b.", "This observation, in combination with the examples of misalignments in the training corpus (5a and 5b), indicates that we either need to improve the alignments, so the model can capture that the simplification process often involves splitting a sentence and then simplifying it, or need to train the model to learn when to split first and then attempt rewriting.", "The next two types of errors show failure in capturing discourse-level meaning:", "a) errors due to failed pronoun resolution, shown in 2a and 2b, and", "b) errors due to the most important part of the sentence being left out, shown in 3a and 3b.", "In these cases, the sentences were not bad, but the information was assigned to the wrong referent, or important meaning was left out.", "In 4a and 4b, the substitution is clearly semantically related to the target, but changes the meaning.", "Finally, there were examples of acceptable simplifications, as in 6a and 6b, that were classified as errors because they were not in the gold data.", "We provide additional examples for each error category in the appendix.", "To improve the performance of future models, we see several options.", "We can improve the original alignments within the Newsela corpus, particularly in the case where sentences are split.", "Prior to simplification, we can use additional context around the sentences to perform anaphora resolution; at this point, we can also learn when to perform sentence splitting; this has been done in the Hybrid model (Narayan and Gardent, 2014), but has not yet been incorporated into neural models.", "Finally, we can use syntactic information to ensure the main clause of a sentence is not removed.", "In this paper, we present a novel Seq2Seq framework for sentence simplification.", "We contribute three major improvements over generic Seq2Seq models: a complexity-weighted loss function to encourage the model to choose simpler words; a similarity penalty during inference and clustering post-inference, to generate candidate simplifications with significant differences; and a reranking system to select the simplification that promotes both fluency and adequacy.", "Our model outperforms previous state-of-the-art systems using SARI, the standard metric for simplification.", "More importantly, while other previous models generate relatively long sentences, our model is able to generate shorter and simpler sentences, while remaining competitive regarding human-evaluated fluency and adequacy.", "Finally, we provide a qualitative analysis of where our different contributions improve performance, the effect of length on human-evaluated meaning preservation, and the current shortcomings of our model as insights for future research.", "Generating diverse outputs from Seq2Seq models could be used in a variety of NLP tasks, such as chatbots (Shao et al., 2017), image captioning (Vijayakumar et al., 2018), and story generation (Fan et al., 2018).", "In addition, the proposed techniques can also be extremely helpful in leveled and personalized text simplification, where the goal is to generate different sentences based on who is requesting the simplification.", "We 
would like to thank the anonymous reviewers for their helpful feedback on this work.", "We would also like to thank Devanshu Jain, Shyam Upadhyay, and Dan Roth for their feedback on the post-decoding aspect of this work, as well as Anne Cocos and Daphne Ippolito for their insightful comments during proofreading.", "This material is based in part on research sponsored by DARPA under grant number HR0011-15-C-0115 (the LORELEI program).", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes.", "The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government.", "The work has also been supported by the French National Research Agency under project ANR-16-CE33-0013.", "This research was partially supported by Joao Sedoc's Microsoft Research Dissertation Grant.", "Finally, we gratefully acknowledge the support of NSF-SBIR grant 1456186." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "abstain", "other", "method", "other", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "method", "result", "result", "method", "method", "objective", "objective", "result", "method", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "A few approaches have been developed to improve neural machine translation (NMT) models with multiple passes of decoding.", "However, their performance gains are limited because they lack proper policies to terminate the multi-pass process.", "To address this issue, we introduce a novel architecture, Rewriter-Evaluator.", "Translating a source sentence involves multiple rewriting passes.", "In every pass, a rewriter generates a new translation to improve the past translation.", "Termination of this multi-pass process is determined by a score of translation quality estimated by an evaluator.", "We also propose prioritized gradient descent (PGD) to jointly and efficiently train the rewriter and the evaluator.", "Extensive experiments on three machine translation tasks show that our architecture notably improves the performances of NMT models and significantly outperforms prior methods.", "An oracle experiment reveals that it can largely reduce performance gaps to the oracle policy.", "Experiments confirm that the evaluator trained with PGD is more accurate than prior methods in determining proper numbers of rewriting.", "Encoder-Decoder architecture (Sutskever et al., 2014) has been widely used in natural language generation, especially neural machine translation (NMT) (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017; Zhang et al., 2019; Kitaev et al., 2020).", "Given a source sentence, an encoder first converts it into hidden representations, which are then conditioned on by a decoder to produce a target sentence.", "In analogy to the development of statistical machine translation (SMT) (Och and Ney, 2002; Shen et al., 2004; Zhang and Gildea, 2008), some recent methods in NMT attempt to improve the encoder-decoder architecture with multi-pass decoding (Xia et al., 2017; Zhang et al., 2018; Geng et al., 2018; Niehues et al., 2016).", "In these models, more than one translation is generated for a source sentence.", "Except for the first 
translation, each of the later translations is conditioned on the previous one.", "While these methods have achieved promising results, they lack a proper termination policy for this multi-turn process.", "For instance, Xia et al. (2017); Zhang et al. (2018) adopt a fixed number of decoding passes, which is inflexible and can be sub-optimal.", "Geng et al. (2018) utilize reinforcement learning (RL) (Sutton et al., 2000) to automatically decide the number of decoding passes.", "However, RL is known to be unstable due to the high variance in gradient estimation (Boyan and Moore, 1995).", "To address this problem, we introduce a novel architecture, Rewriter-Evaluator.", "This architecture contains a rewriter and an evaluator.", "The translation process involves multiple passes.", "Given a source sentence, at every turn, the rewriter generates a new target sequence to improve the translation from the prior pass, and the evaluator measures the translation quality to determine whether to end the iterative rewriting process.", "Hence, the translation process is continued until a certain condition is met, such as no significant improvement in the measured translation quality.", "In implementations, the rewriter is a conditional language model (Sutskever et al., 2014) and the evaluator is a text matching model (Wang et al., 2017).", "We also propose prioritized gradient descent (PGD) that facilitates training the rewriter and the evaluator both jointly and efficiently.", "PGD uses a priority queue to store previous translation cases.", "The queue stores translations in descending order of their scores, computed by the evaluator.", "The capacity of the queue is limited to a few times the batch size.", "Due to its limited size, the queue pops those translations with high scores and only keeps the translations with lower scores.", "The samples in the queue are combined together with new cases from the training data to train the rewriter.", "Rewriter-Evaluator has been applied to improve two mainstream NMT models, RNNSearch (Bahdanau et al., 2015) and Transformer (Vaswani et al., 2017).", "We have conducted extensive experiments on three translation tasks, NIST Chinese-to-English, WMT'18 Chinese-to-English, and WMT'14 English-to-German.", "The results show that our architecture notably improves the performance of NMT models and significantly outperforms related approaches.", "We conduct an oracle experiment to understand the source of improvements.", "The oracle can pick the best translation from all the rewrites.", "Results indicate that the evaluator helps our models achieve performance close to the oracle, outperforming methods that fix the number of rewriting turns.", "Compared against averaged performances using a fixed number of rewriting iterations, performance gaps to the oracle can be reduced by 80.7% in the case of RNNSearch and 75.8% in the case of Transformer.", "Quantitatively, we find the evaluator trained with PGD is significantly more accurate in determining the optimal number of rewriting turns.", "For example, whereas the method in Geng et al. 
(2018) has 50 .", "2% accuracy in WMT'14, the evaluator achieves 72 .", "5% accuracy on Transformer.", "Rewriter-Evaluator consists of iterative processes involving a rewriting process and an evaluation process .", "The process of translating an n -length source sentence x = [ x 1 , x 2 , , x n ] is an application of the above processes.", "Assume we are at the k -th iteration ( k 1 ).", "The rewriter generates a target sequence z ( k ) = [ z ( k ) 1 , z ( k ) 2 , , z ( k ) l k ] given the source sentence x and the past translation z ( k 1) = [ z ( k 1) 1 , z ( k 1) 2 , , z ( k 1) l k 1 ] from the ( k 1) -th turn.", "l k and l k 1 are the sentence lengths.", "The evaluator estimates the translation quality score q ( k ) of the new translation z ( k ) , which is used for determining whether to end the multiturn process.", "Formally, the k -th pass of a translation process is defined as (cid:40) z ( k ) = ( x , z ( k 1) ) q ( k ) = ( x , z ( k ) ) .", "Initially, z (0) and q (0) are respectively set as an empty string and .", "The above procedure is repeatedly carried out until not much improvement in the estimated quality score can be achieved, i.e., q ( k ) + (cid:15) < q ( k 1) , (cid:15) > 0 , (2) where (cid:15) is a small value tuned on the development set.", "Alternatively, the procedure is terminated if a certain number of iterations K > 0 is reached.", "In the former case, we adopt z ( k 1) as the final translation.", "In the latter case, the last translation z ( K ) is accepted.", "A general architecture of Rewriter-Evaluator using Encoder-Decoder is illustrated in Fig.", "1. 
The rewriter consists of a source encoder f_SE, a target encoder f_TE, and a decoder g_DEC.", "The evaluator shares the encoders with the rewriter and contains an estimator g_EST.", "Assume it is at the k-th pass.", "Firstly, the source encoder f_SE casts the source sentence x into word representations H = [h_1; h_2; ...; h_n] = f_SE(x). (3)", "[Algorithm 1: Prioritized Gradient Descent (PGD). Input: rewriter, evaluator, training set T, batch size B, and expected iteration number E.]", "The past translation z^(k-1) from the previous turn k-1 is encoded as P^(k-1) = [p^(k-1)_1; p^(k-1)_2; ...; p^(k-1)_{l_{k-1}}] = f_TE(z^(k-1)). (4)", "Then, the decoder g_DEC of the rewriter produces a new translation z^(k) as z^(k) = g_DEC(H, P^(k-1)). (5)", "Ultimately, the evaluator scores the new translation z^(k) with the estimator g_EST: P^(k) = f_TE(z^(k)); q^(k) = g_EST(H, P^(k)). (6)", "The implementation can be applied to a variety of architectures.", "The encoders, f_SE and f_TE, can be any sequence model, such as a CNN (Kim, 2014).", "The decoder g_DEC is compatible with any language model (e.g., Transformer).", "The estimator g_EST is a text matching model, e.g., ESIM (Chen et al., 2017).", "In Sec. 
4, we apply this implementation to improve generic NMT models.", "We represent the ground truth target sentence as an (m+1)-length sequence y = [y_0, y_1, ..., y_m].", "The rewriter is trained via teacher forcing.", "We use o_i to denote the probability distribution of the i-th target word, which is the prediction from feeding its prior words [y_0, y_1, ..., y_{i-1}] into the decoder g_DEC.", "The training loss for the rewriter is J = - sum_{1 <= i <= m} log(o_i[y_i]), (7) where y_0 = [SOS] and y_m = [EOS], marking the ends of a target sentence.", "For the evaluator, we incur a hinge loss between the translation score q* of the ground truth y and that of the current translation z^(k): q* = evaluator(x, y); J = max(0, 1 - q* + q^(k)). (8)", "At training time, translation z^(k) is generated via greedy search, instead of beam search, to reduce training time.", "We present prioritized gradient descent (PGD) to train the proposed architecture.", "Instead of the random sampling used in stochastic gradient descent (SGD) (Bottou and Bousquet, 2008), PGD uses a priority queue to store previous training cases that receive low scores from the evaluator.", "Randomly sampled training cases together with those from the priority queue are used for training.", "Details of PGD are illustrated in Algorithm 1.", "
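The two training objectives above (the teacher-forced cross-entropy of Eq. 7 and the hinge loss of Eq. 8) can be sketched as follows. The function names and the `step_probs` input format are hypothetical stand-ins for the quantities o_i, q*, and q^(k) described in the text, not the paper's code.

```python
import math

def rewriter_loss(step_probs, target_ids):
    # Teacher-forced negative log-likelihood (Eq. 7): step_probs[i] maps
    # vocabulary ids to probabilities at position i given the gold prefix y_<i.
    return -sum(math.log(step_probs[i][y]) for i, y in enumerate(target_ids))

def evaluator_loss(q_gold, q_hyp):
    # Hinge loss (Eq. 8): the ground-truth score q* should exceed the
    # current translation's score q^(k) by a margin of at least 1.
    return max(0.0, 1.0 - q_gold + q_hyp)
```

The hinge loss is zero once the margin is satisfied, so the evaluator is only pushed to separate gold from hypothesis scores, not to calibrate absolute values.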
Initially, we set a priority queue A (line 1) with a limited size C = B × E.", "B is the batch size.", "E, the expected number of rewriting iterations, is set as K/2.", "The queue A is ordered by quality rate in descending order, where the top one corresponds to the highest rate.", "The quality rate of a certain sample (x, y, z^(k)) is computed as r^(k) = (1 - α) · BLEU(z^(k), y) + α · q^(k), (9) where the weight α is controlled by an annealing schedule α = j/(j+1), with j being the current training epoch and BLEU being the metric of Papineni et al. (2002).", "The rate r^(k) is dominated by BLEU in the first few epochs, and is later dominated by the evaluation score q^(k) as the number of epochs increases.", "This design is to mitigate the cold start problem when training the evaluator.", "At every training epoch, PGD firstly discards a certain number of previous training samples with high quality rates (line 3) from queue A.", "It then replaces them with newly sampled training cases S (lines 4-6).", "Every sample (x, y, z^(k-1), r^(k-1)) in queue A is then rewritten into a new translation z^(k) by the rewriter.", "These are scored by the evaluator (line 10).", "These new samples are used to respectively train the rewriter and the evaluator (lines 14-15) with Eq. (7) and Eq. (8).", "PGD keeps low-quality translations in the queue A for multi-pass rewriting until they are popped out from queue A with high scores from the evaluator.", "Hence, the evaluator is jointly trained with the rewriter to learn to discern the quality of translations from the rewriter, in order to help the rewriter reduce the loss in Eq. (7).", "PGD uses a large queue (of size B × E) to aggregate the past translations and newly sampled cases.", "Computationally, this is more efficient than explicitly performing B separate rewriting passes to obtain samples.", "This trades extra memory space for lower training time.", "In Sec. 
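A minimal sketch of PGD's queue bookkeeping, under the description above: the annealed quality rate of Eq. (9) with α = j/(j+1), and a bounded queue that discards the highest-rated samples first so that low-quality translations stay in for further rewriting. `PGDQueue` and `quality_rate` are hypothetical helper names; all model calls are omitted.

```python
import heapq

def quality_rate(bleu, q_score, epoch):
    # Annealed mixture (Eq. 9): alpha = j/(j+1) shifts the rate from BLEU
    # toward the evaluator's score as training progresses.
    alpha = epoch / (epoch + 1)
    return (1 - alpha) * bleu + alpha * q_score

class PGDQueue:
    """Bounded queue ordered by quality rate in descending order; the
    best-rated samples are evicted first (they need no further rewriting)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []  # max-heap emulated by negating rates

    def push(self, rate, sample):
        heapq.heappush(self._heap, (-rate, sample))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)  # evict the current best-rated sample

    def evict_best(self, n):
        # Discard the n samples with the highest quality rates (queue line 3).
        return [heapq.heappop(self._heap)[1]
                for _ in range(min(n, len(self._heap)))]
```

At epoch 0 the rate is pure BLEU; by epoch 3 the evaluator's score already carries 75% of the weight, which matches the cold-start rationale given in the text.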
5.7, we will show that the additional training time introduced by PGD is less than 20%, which is tolerable.", "RNNSearch w/ Rewriter-Evaluator.", "The improved RNNSearch is illustrated in Fig. 2.", "The two encoders (i.e., f_SE and f_TE) and the decoder g_DEC are GRUs (Chung et al., 2014).", "We omit the computation details of these modules and follow their settings in Bahdanau et al. (2015).", "Note that, at every decoding step, the hidden state of the decoder attends not only to h_i, 1 <= i <= n, but also to p^(k-1)_j, 1 <= j <= l_{k-1}.", "We apply a co-attention mechanism (Parikh et al., 2016) to model the estimator g_EST.", "Firstly, we capture the semantic alignment between the source sentence x and the translation z^(k-1) as s_{i,j} = h_i^T W p^(k-1)_j; h~_i = sum_j [exp(s_{i,j}) / sum_{j'} exp(s_{i,j'})] p^(k-1)_j; p~^(k-1)_j = sum_i [exp(s_{i,j}) / sum_{i'} exp(s_{i',j})] h_i. (10)", "Then, we use average pooling to extract features and compute the quality score: q^(k-1) = v^T ((sum_i h~_i / n) ⊕ (sum_j p~^(k-1)_j / l_{k-1})), (11) where ⊕ is column-wise vector concatenation.", "Transformer w/ Rewriter-Evaluator.", "The Transformer (Vaswani et al., 2017) is modified into the architecture in Fig. 3.", "
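The co-attention estimator of Eq. (10)-(11) can be sketched with plain Python lists: align source states H with translation states P, attend in both directions, average-pool, concatenate, and score with a learnable vector v. This is a toy re-derivation of the stated equations under invented helper names, not the paper's implementation.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(W, x):
    return [dot(row, x) for row in W]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def coattention_quality(H, P, W, v):
    """Co-attention score (Eq. 10-11): H is n x d source states, P is
    l x d translation states, W is d x d, and v has length 2d."""
    n, l, d = len(H), len(P), len(H[0])
    s = [[dot(h, matvec(W, p)) for p in P] for h in H]  # s[i][j] = h_i^T W p_j
    # h~_i: each source state attends over the translation states.
    h_tilde = [
        [dot(softmax(s[i]), [P[j][k] for j in range(l)]) for k in range(d)]
        for i in range(n)
    ]
    # p~_j: each translation state attends over the source states.
    p_tilde = [
        [dot(softmax([s[i][j] for i in range(n)]), [H[i][k] for i in range(n)])
         for k in range(d)]
        for j in range(l)
    ]
    # Average pooling on both sides, then concatenation and scoring by v.
    pooled = [sum(x[k] for x in h_tilde) / n for k in range(d)] \
           + [sum(x[k] for x in p_tilde) / l for k in range(d)]
    return dot(v, pooled)
```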
The input to the encoder contains the source sentence x, a special symbol [ALIGN], and the past translation z^(k-1): x' = x ∘ [ALIGN] ∘ z^(k-1), (12) where the operation ∘ denotes the concatenation of two sequences.", "The following mask matrix is applied to every layer in the encoder: M = [[1_{n×n}, 1_{n×1}, 0_{n×l_{k-1}}]; [1_{1×n}, 1, 1_{1×l_{k-1}}]; [0_{l_{k-1}×n}, 1_{l_{k-1}×1}, 1_{l_{k-1}×l_{k-1}}]].", "In this way, the words in x cannot attend to those in z^(k-1) and vice versa.", "[ALIGN] can attend to the words both in x and z^(k-1).", "This design is to avoid cross-sentence attention in the encoder layers.", "In preliminary studies, we found it slightly improves the performance of models.", "The quality score is computed as the dot product between the encoder representation of [ALIGN] and a learnable vector v.", "We have conducted extensive experiments on three machine translation tasks: NIST Chinese-to-English (Zh→En), WMT'18 Chinese-to-English, and WMT'14 English-to-German (En→De).", "The results show that Rewriter-Evaluator significantly improves the performance of NMT models and notably outperforms prior post-editing methods.", "An oracle experiment verifies the effectiveness of the evaluator.", "Termination accuracy analysis shows our evaluator is much more accurate than prior methods in determining the optimal number of rewriting turns.", "We also perform ablation studies to explore the effects of some components.", "For NIST Zh→En, the training set contains 1.25M sentence pairs extracted from LDC corpora, including LDC2002E18, LDC2003E07, LDC2003E14, a portion of LDC2004T07, LDC2004T08, and LDC2005T06.", "We adopt NIST 2002 (MT02) as the validation set.", "We use NIST 2003 (MT03), NIST 2004 (MT04), NIST 2005 (MT05), and NIST 2006 (MT06) for tests.", "For WMT'18 Zh→En 1, we use 18.4M preprocessed sentence pairs, with byte pair encoding (BPE) tokenization (Sennrich et al., 2016).", "We use newstest2017 for validation and newstest2018 for test.", "For WMT'14 En→De 2, following the same setting as in 
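The encoder mask for the concatenated input [x; ALIGN; z^(k-1)] described earlier can be built directly from the stated visibility rules: source and past-translation tokens are mutually masked, while ALIGN is visible to and from both sides. A sketch, with 1 marking allowed attention; the function name is hypothetical.

```python
def encoder_mask(n, l):
    """Attention mask for the (n + 1 + l)-token encoder input, where
    position n holds the ALIGN symbol between source and past translation."""
    size = n + 1 + l
    align = n  # position of the ALIGN symbol
    mask = [[0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            # Same-side attention: within the source block or within the
            # past-translation block; cross-sentence attention stays masked.
            same_side = (i < n and j < n) or (i > n and j > n)
            if i == align or j == align or same_side:
                mask[i][j] = 1
    return mask
```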
Vaswani et al. (2017), we use 4.5M preprocessed sentence pairs that are tokenized via BPE with 32k merge operations and a shared vocabulary for English and German.", "We use newstest2013 for development and newstest2014 for test.", "We train all the models with 150k steps for NIST Zh→En, 300k steps for WMT'18 Zh→En, and 300k steps for WMT'14 En→De.", "We select the model that performs best on the validation sets and report its performance on the test sets.", "Using multi-bleu.perl 3, we measure case-insensitive BLEU scores and case-sensitive ones for NIST Zh→En and WMT'14 En→De, respectively.", "For WMT'18 Zh→En, we use the case-sensitive BLEU scores calculated by mteval-v13a.pl 4.", "The improvements of the proposed models over the baselines are statistically significant with a reject probability smaller than 0.05 (Koehn, 2004).", "For RNNSearch, the dimensions of word embeddings and hidden layers are both 600.", "The encoder has 3 layers and the decoder has 2 layers.", "The dropout rate is set to 0.2.", "For Transformer, we follow the setting of Transformer-Base in Vaswani et al. 
(2017).", "Both models use beam size of 4 and the maximum number of training tokens at every step is 4096 .", "We use Adam (Kingma and Ba, 2014) for optimization.", "In all the experiments, the proposed models run on NVIDIA Tesla V100 GPUs.", "For Rewriter-Evaluator , the maximum number of rewriting iterations K is 6 and termination threshold (cid:15) is 0 .", "05 .", "Hyper-parameters are obtained by grid search, except for the Transformer backbone.", "1 http://www.statmt.org/wmt18/translation-task.html.", "2 http://www.statmt.org/wmt14/translation-task.html.", "3 https://github.com/moses-smt/mosesdecoder/blob/ master/scripts/generic/multi-bleu.perl.", "4 https://github.com/moses-smt/mosesdecoder/blob/ master/scripts/generic/mteval-v13a.pl.", "We adopt the following related baselines: 1) Deliberation Networks (Xia et al., 2017) adopts a second decoder to polish the raw sequence produced by the first-pass decoder; 2) ABD-NMT (Zhang et al., 2018) uses a backward decoder to generate a translation and a forward decoder to refine it with attention mechanism; 3) Adaptive Multi-pass Decoder (Geng et al., 2018) utilizes RL to model the iterative rewriting process.", "Table 1 shows the results of the proposed models and the baselines on NIST.", "Baseline BLEU scores are from Geng et al. 
(2018).", "There are three observations.", "Firstly, Rewriter-Evaluator significantly improves the translation quality of NMT models.", "The averaged BLEU score of RNNSearch is raised by 3 .", "1% and that of Transformer is increased by 1 .", "05% .", "Secondly, the proposed architecture notably outperforms prior multi-pass decoding methods.", "The performance of RNNSearch w/ Rewriter-Evaluator surpasses those of Deliberation Network by 2 .", "46% , ABD-NMT by 2 .", "06% , and Adaptive Multi-pass Decoder by 1 .", "72% .", "Because all of these systems use the same backbone of RNN-based NMT models, these results validate that Rewriter-Evaluator is superior to other alternative methods.", "Lastly, the proposed architecture can improve Transformer backbone by 1 .", "05% on average, and the improvements are consistently observed on tasks from MT03 to MT06.", "To further confirm the effectiveness of the proposed architecture, we make additional comparisons on WMT'14 En De and WMT'18 Zh En.", "The results are demonstrated in Table", "2. Because the above methods don't have results on the two datasets, we re-implement Adaptive Multi-pass Decoding for comparisons.", "These results are consistent with the observations in Sec. 
5.2.", "We can see that the new architecture can improve BLEU scores on both RNNSearch and Transformer backbones.", "For example, the improvements on RNNSearch backbone are 2 .", "13% on WMT'14 and 2 .", "24% on WMT'18.", "On Transformer backbone, scores are raised by 1 .", "38% on WMT'14 and 1 .", "43% on WMT'18 .", "Furthermore, RNNSearch w/ Rewriter-Evaluator outperforms Adaptive Multi-pass Decoder by 1 .", "31% and 1 .", "32% , respectively, on the two tasks.", "Interestingly, the proposed architecture on RNNSearch backbone even surpasses Transformer on these two datasets.", "For example, the BLEU score on WMT'14 increases from 27 .", "53% to 27 .", "86% .", "We conduct oracle experiments on the test set of WMT'14 En De to understand potential improvements of our architecture.", "An oracle selects the iteration that the corresponding rewrite has the highest BLEU score.", "Its BLEU scores are shown on the 1 2 3 4 5 6 Rewriting Turns (RNNSearch w/ Rewriter-Evaluator) 25.0 25.5 26.0 26.5 27.0 27.5 28.0 28.5 29.0 BLEUS c o r e 25.61 26.58 26.91 26.03 26.58 26.21 Oracle 28.23 Evaluator 27.86 Avg.", "red dashed lines in Fig. 
4.", "The numbers on the green vertical bars are the BLEU scores of adopting a fixed number of rewriting iterations.", "Their averaged number is shown on the dashed blue line.", "BLEU score from using our evaluator is shown on the solid dark-blue line.", "Results show that the evaluator, with 27 .", "86% BLEU score and 28 .", "91 BLEU score, is much better than the strategies of using a fixed number of rewriting turns.", "The gaps between oracle and the averaged performance by RNNSearch and Transformer with fixed iterations are 1 .", "92% and 1 .", "90% .", "Using the evaluator, these gaps are reduced relatively by 80 .", "7% for RNNSearch and 75 .", "8% for Transformer, respectively, down to 0 .", "37% and 0 .", "46% .", "These results show that the evaluator is able to learn an appropriate termination policy, approximating the performances of oracle policy.", "We define a metric, percentage of accurate terminations (PAT), to measure how precise a termination policy can be.", "PAT is computed as 1 | U | (cid:88) ( x , y ) U ( w q ( x , y ) = w b ( x , y )) , (15) where is the indicator function that outputs 1 if its argument is true and 0 otherwise.", "For each pair ( x , y ) in the test set U , w q ( x , y ) is the turn index k with the highest quality score max k q ( k ) and w b ( x , y ) is the one with the highest BLEU score Param.", "max k BLEU( z ( k ) , y ) .", "The translations z ( k ) , 1 k K and their scores q ( k ) , 1 k K are obtained using Eq.", "5 and Eq.", "6.", "For fair comparisons, the maximum number of rewritings is set to 6 for both Rewriter-Evaluator and Adaptive Multi-pass Decoder (Geng et al., 2018).", "Results in Table 3 show that PAT scores from Rewriter-Evaluator are much higher than those of Adaptive Multi-pass Decoder.", "For instance, RNNSearch w/ Rewriter-Evaluator surpasses Adaptive Multi-pass Decoder by 40 .", "96% on WMT'14 and 10 .", "35% on WMT'18.", "Parameter Sharing.", "The encoders from Eq.", "(3) and Eq.", "(4) are shared 
between the rewriter and the evaluator.", "We find this improves the performance of the proposed models.", "[Table 5: Running time comparisons on WMT'14 En→De. RNNSearch: 7h 56m training, 11m 26s test; w/ Rewriter-Evaluator: 9h 17m, 39m 50s. Transformer: 5h 23m, 14m 11s; w/ Rewriter-Evaluator: 6h 36m, 52m 02s.]", "For example, on NIST, sharing encoders increases our BLEU score from 42.25% to 42.79% with the same maximum iteration number K.", "Maximum Number of Iterations.", "Increasing the maximum number of turns K generally improves the BLEU scores.", "For instance, on NIST, K = 8 outperforms K = 2 by 1.0%, K = 4 by 0.46%, and K = 6 by 0.04%.", "However, as described in Sec. 5.7, a large K (e.g., 8) can increase the inference time cost.", "Moreover, the additional performance gain from K = 8 is small.", "We therefore set K = 6 by default.", "While achieving improved translation quality, the models are trained with multiple passes of translation.", "Therefore, a natural question concerns the increase in training time and test time.", "We report results on 4 GPUs with the maximum number of rewriting turns K = 6 and the beam size set to 8.", "Results on WMT'14 are listed in Table 5.", "It shows that Rewriter-Evaluator increases the test time by approximately 4 times, because of the multiple passes of decoding.", "However, training time is only increased by 15% and 18%, respectively, on RNNSearch and Transformer, due to the large priority queue used in PGD to store previous translation cases.", "Multi-pass decoding has been well studied in statistical machine translation (Brown et al., 1993; Koehn et al., 2003, 2007; Och and Ney, 2004; Chiang, 2005; Dyer et al., 2013).", "Och (2003) and Och and Ney (2002) propose training models with a minimum error rate criterion on lattices from a first-pass decoder.", "Marie and Max (2015) introduce an iterative method to refine the search space generated from simple features with additional information from 
more complex feature.", "Shen et al. (2004) investigate reranking of hypothesis using neural models trained with discriminative criterion.", "Neubig et al. (2015) propose to reconfirm effectiveness of reranking.", "Chen et al. (2008) present a regeneration of search space from techniques such as n-gram expansion.", "These approaches are however applied to shallow models such as log-linear models (Och and Ney, 2002).", "Our work is closely related to recent efforts in multi-pass decoding on NMT.", "In these recent works (Xia et al., 2017; Zhang et al., 2018; Geng et al., 2018), the models generate multiple target sentences for a source sentence and, except for the first one, each of them is based on the sentence generated in the previous turn.", "For example, Xia et al. (2017) propose Deliberation Networks that uses a second decoder to polish the raw sequence produced by the first-pass decoder.", "While these methods have achieved promising results, they lack a proper termination policy for the multi-pass translation process.", "Zhang et al. (2018) adopt a predefined number of decoding passes, which is not flexible.", "Geng et al. 
(2018) incorporate a post-editing mechanism into the NMT model via RL.", "However, RL can be unstable to train because of the high variance in gradient estimation.", "The lack of a proper termination policy results in premature terminations or over-translated sentences, which can largely limit the performance gains of these methods.", "This paper has introduced a novel architecture, Rewriter-Evaluator, that achieves a proper termination policy for multi-pass decoding in NMT.", "At every translation pass, given the source sentence and its past translation, a rewriter generates a new translation, aiming at further performance improvements over the past translations.", "An evaluator estimates the translation quality to determine whether to end this iterative rewriting process.", "We also propose PGD, which facilitates training the rewriter and the evaluator both jointly and efficiently.", "We have applied Rewriter-Evaluator to improve mainstream NMT models.", "Extensive experiments have been conducted on three translation tasks, NIST Zh→En, WMT'18 Zh→En, and WMT'14 En→De, showing that our architecture notably improves the results of NMT models and significantly outperforms other related methods.", "An oracle experiment and a termination accuracy analysis show that the performance gains can be attributed to the improvements in completing the rewriting process at proper iterations.", "This work was done when the first author was an intern at Ant Group.", "We thank the anonymous reviewers for their valuable suggestions.", "Xinwei Geng, Xiaocheng Feng, Bing Qin, and Ting Liu.", "2018.", "Adaptive multi-pass decoder for neural machine translation.", "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 523-532.", "Diederik P. Kingma and Jimmy Ba.", "2014.", "Adam: A method for stochastic optimization.", "arXiv preprint arXiv:1412.6980.", "Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya.", "2020.", "Reformer: The efficient 
transformer.", "In International Conference on Learning Representations ." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "objective", "result", "result", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain" ]
[ "Automatic evaluation systems in the field of automatic summarization have been relying on the availability of gold standard summaries for over ten years.", "Gold standard summaries are expensive to obtain and often require the availability of domain experts to achieve high quality.", "In this paper, we propose an alternative evaluation approach based on pairwise preferences of sentences.", "In comparison to gold standard summaries, they are simpler and cheaper to obtain.", "In our experiments, we show that humans are able to provide useful feedback in the form of pairwise preferences.", "The new framework performs better than the three most popular versions of ROUGE with less expensive human input.", "We also show that our framework can reuse already available evaluation data and achieve even better results.", "Due to the huge amount of information contained in texts, the task of automatic text summarization (Mani, 2001; Nenkova and McKeown, 2011) is a pressing challenge nowadays and will become even more important in the future.", "Building summarization systems is, however, not the only challenge in this field.", "Evaluation of automatically generated summaries is also an active field of research.", "Ideally, we would like to ask humans for their opinion about the quality of automatically generated summaries in an extrinsic evaluation (Hal-teren and Teufel, 2003).", "Since summaries are generated for humans, they should also be evaluated directly by humans.", "Unfortunately, manual evaluation cannot be performed at a large scale because of the huge effort which is necessary for evaluation.", "(Lin, 2004) reported that 3,000 hours of human effort would be required for a simple evaluation of the summaries for the Document Understanding Conference (DUC) 2003, a popular summarization shared task series.", "This motivates research of automatic evaluation methods for automatic summarization.", "ROUGE (Lin, 2004), the current method of choice for evaluating automated 
text summarization, relies on the availability of gold standard summaries.", "The gold standard summaries are used to define the optimal output of a summarization system.", "Writing high-quality summaries, however, requires the availability of expert writers and takes a lot of effort.", "Dang (2005) reported that creating the reference summaries for the DUC 2005 shared task was a difficult endeavor with an effort of five hours to produce each reference summary.", "Since ROUGE needs at least four reference summaries to become reasonably reliable, the effort sums up to at least 20 hours of annotation effort per topic.", "For this reason, gold standard summaries are only available for a few, rather small datasets.", "The more accurate (but also even more expensive) Pyramid method (Nenkova and Passonneau, 2004) likewise requires expensive gold standard summaries.", "The lack of larger and more diverse evaluation corpora limits research in automatic summarization.", "Furthermore, currently available automatic evaluation methods are viewed with skepticism (Rankel et al., 2013).", "Proper evaluation is, however, an indispensable ingredient for good research.", "Computing the similarity between two summaries as in ROUGE is a very difficult task.", "This seems obvious, since estimating the similarity between sentences and even words is still an active field of research.", "In this work, we present an alternative evaluation framework which does not use gold standard summaries to estimate the quality of summaries.", "Instead of comparing automatically generated summaries with gold standard summaries, our model is trained with simple and inexpensive pairwise preferences (Thurstone, 1927; Fürnkranz and Hüllermeier, 2010) of sentences.", "To this end, we provide pairs of sentences from the input document of a summarization task to human annotators and ask which of the two sentences contains more important information.", "We use here the idea of intrinsic information importance 
(Hong and Nenkova, 2014; Zopf et al., 2016), which describes that information can be intrinsically important.", "For example, the information Donald Trump won the U.S. presidential election is intrinsically important.", "If this information is contained in an input document, it should likely also be contained in the generated summary.", "After collecting a few preferences, our model uses the preferences to generate a ranking of all sentences according to information importance.", "Summaries which contain sentences similar to the upper part of the ranking are then considered to be better than summaries which contain unimportant sentences from the lower part of the ranking.", "Pairwise preferences are an appealing form of annotation, since they are much easier to generate than complex gold standard summaries.", "Not only is collecting the annotations easier, but using the collected annotations is also much simpler.", "The presented model does not have to solve the difficult task of estimating the similarity between generated and gold standard summaries.", "Instead, the model uses the ranking to estimate the summary quality.", "Figure 1 provides an illustration of the traditional evaluation and our new model.", "On the left, the input documents which should be summarized are illustrated.", "In the upper part, gold standard summaries are generated by humans and used to estimate the quality of an automatically generated summary.", "In the lower part, we collect pairwise preferences of sentences and use the preferences for evaluation.", "An evaluation on topics from two standard datasets, looking at predicting the relative ratings of automatically generated summaries, shows that our new evaluation model is as good as or better than existing methods, at a much lower annotation cost.", "[Figure 1: Illustration of traditional evaluation models based on reference summaries (top) and the new model (bottom), which is based on pairwise preferences.]", "In this section, we will recapitulate previous work in automated text summarization evaluation, focusing on three important approaches, namely model-free evaluation, ROUGE, and Pyramid.", "The evaluation methods are ordered according to their annotation requirements from none (model-free evaluation) to high (Pyramid).", "In addition to the most prominent methods described below, several evaluation models were developed in the Automatically Evaluating Summaries Of Peers (AESOP) shared tasks.", "The systems in this shared task also considered reference summaries as additional information to evaluate a candidate summary and are therefore as expensive as ROUGE in terms of required human annotation.", "Similarly, Giannakopoulos and Karkaletsis (2013) use machine learning to learn a linear combination of n-gram methods to evaluate summaries.", "Mackie et al. (2014), Giannakopoulos (2013), and Cohan and Goharian (2016) investigate evaluation for microblog, multilingual, and scientific summarization, respectively.", "Our evaluation, on the contrary, uses newswire datasets, since this is the most prominent application domain for automatic summarization.", "Furthermore, we focus on evaluating the information content of summaries and do not evaluate linguistic quality.", "This is, for example, captured by Pitler et al. 
(2004).", "Model-free evaluation methods Jensen-Shannon divergence (Louis and Nenkova, 2013) do not require human input such as gold standard summaries and can therefore be applied without additional cost.", "The quality of model-free evaluation methods is however limited, which is validated in our experiments (see Section 5).", "2.2 ROUGE ROUGE (Lin, 2004) was first used in the Document Understanding Conference (DUC) (Over et al., 2007) and is nowadays the method of choice for automatic evaluation in text summarization.", "Many popular summarization systems were evaluated with ROUGE (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Gillick et al., 2009; Lin and Bilmes, 2011).", "It is inspired from the BLEU evaluation method (Papineni et al., 2002) and is based on measuring lexical n-gram overlap of (stemmed) tokens between generated and gold standard summaries.", "Researchers usually report the n-gram recall of a summary to evaluate the quality of a summary.", "The quality of ROUGE is often criticized in the research community.", "Sjobergh (2007), for example, shows nicely how the ROUGE recall scoring can be fooled easily.", "A simple greedy language model based on the source documents extracts frequent bi-grams which are likely to occur in the reference summaries.", "The generated texts are merely lists of bi-grams and not meaningful sentences which cannot be considered to be summaries.", "However, they achieve superhuman ROUGE recall scores.", "In the TAC 2008 shared task (Dang and Owczarzak, 2008), both ROUGE-2 and ROUGE-SU4 score automatic systems higher than human summaries, which would lead to the conclusion that these systems are able to produce better summaries than humans.", "Furthermore, studies show that the correlation between ROUGE scores and human judgments may not be significant in non-newswire genres and other summary types (Liu and Liu, 2008).", "ROUGE also has many parameters (Graham, 2015), which makes reproduction and comparison of results 
problematic.", "Last but not least, ROUGE computes text similarity only based on simple string matching.", "Expressing the same information with different words is not rewarded by ROUGE.", "In addition to Graham (2015), Owczarzak et al. (2012) and Rankel et al. (2013) analyze ROUGE in more detail.", "Agreement with human judgments (Owczarzak et al., 2012) can be used instead of Pearson's correlation to validate an automatic evaluation model.", "Measuring agreement allows one to obtain a better understanding of the performance of an evaluation model than the Pearson correlation does.", "We will also use agreement similarly to Owczarzak et al. (2012) in our experiments.", "The Pyramid method (Nenkova et al., 2007) (similar to Teufel and Van Halteren (2004)) was used in the Text Analysis Conference (TAC) (Dang and Owczarzak, 2008) and goes beyond lexical comparisons.", "It is based on Summarization Content Units (SCUs, later also called Summary Content Units).", "An SCU is a set of lexical expressions with the same meaning (e.g. 
{ 2 people passed away, two persons died } ).", "After generating the gold standard summaries, SCUs are extracted from these summaries and are weighted by their occurrence frequency in an additional annotation step.", "Furthermore, every generated summary has to be annotated individually with SCUs before the Pyramid method can be applied.", "Nenkova and Passonneau (2004) have already reported that a large-scale application of the Pyramid method is infeasible.", "Over et al. (2007) report a huge effort for the annotation process in the DUC challenges.", "This additional annotation effort is unattractive for researchers, who prefer automatic methods such as ROUGE.", "This is validated by the few applications of the Pyramid method to date.", "The need for more human annotation also introduces an additional source of annotation mistakes.", "Inspecting the annotations in the TAC 2008 dataset in detail reveals that this is not only a theoretical issue but has practical implications.", "1 PEAK (Yang et al., 2016) is an attempt to automate the Pyramid evaluation (similar to Passonneau et al. (2013)).", "PEAK also requires reference summaries and is therefore as expensive as ROUGE.", "Simple and inexpensive qualitative human feedback has already been used in the field of machine translation (Callison-Burch, 2009; Callison-Burch et al., 2012; Guzman et al., 2015).", "Snow et al. (2008) showed that in a wide variety of NLP tasks, cheap non-expert labels can replace expensive expert annotations.", "In contrast to this prior work, we do not ask non-experts to perform the same task as expert annotators (namely, writing reference summaries) but replace the complex task with a simpler one (providing qualitative feedback in the form of pairwise preferences).", "1 We found several issues such as not annotating parent SCUs, missing SCUs in sentences, and different annotations for equal sentences.", "First, we define T to be the set of all possible texts which can be considered 
to be summaries.", "For a given set of source documents D , we define a binary relation > D ⊆ T × T with the intuition that a > D b holds for two texts a , b ∈ T if and only if a is considered to be a better summary of document collection D than b .", "Whenever the context is clear, we will omit D and write a > b for short.", "The relation > induces a strict total order (given that ties are not allowed) over T .", "A text which ranks high according to > D is a good summary of the document collection D .", "How good summaries are is annotated by human annotators in summarization corpora only for a very small subset of assessed texts T + ⊆ T .", "We use the relation > to express in which order summaries are ranked by humans in summarization corpora for each document set D (also called topic or cluster).", "The relation > therefore models the human judgments.", "For unassessed texts T − = T \\ T + , the human judgment is unknown.", "The quality of an evaluation method E can be assessed by measuring the agreement with the human judgments.", "Evaluation models define (implicitly) a ranking > E by assigning scores to summaries or predicting the ranking directly.", "Calculating the agreement of the ranking > E with the human ranking defined by > provides a score which can be used to assess the performance of evaluation models.", "Measuring the agreement between two relations (which are sets) can be easily done by computing the intersection of both sets: 2 Agreement ( > , > E ) = | > ∩ > E | / | > | (1) This evaluation of evaluation models is similar to the definition of Agreement and Contradiction in Owczarzak et al. (2012): Agreements occur when the two evaluation metrics make the same distinction between System A and System B (...). Contradictions occur when both metrics find a (...) difference between A and B, but in opposite directions.", "A perfect evaluation model, which predicts the preference for all pairs of summaries correctly, will have an agreement of 1, whereas a random evaluation model, which always predicts the preference randomly, will have an expected agreement of 0.5 according to this measure.", "2 We require that an evaluation metric has to make a decision for two summaries if the two summaries are different according to the human judgment.", "Formally: a > b implies a > E b or b > E a .", "We prefer to use the agreement as defined in Equation 1 for evaluation since it can be interpreted much better (Owczarzak et al., 2012).", "Furthermore, Pearson's correlation is known to be sensitive to outliers, is only able to measure linear correlations, and requires normally distributed, interval-scaled residuals (Anscombe, 1973).", "These properties cannot be assumed to be given when comparing human scores and automated evaluation measures.", "We therefore prefer to use the agreement with human judgments as defined in Equation 1 instead of calculating Pearson's correlation.", "In this section, we present a novel framework which does not infer a ranking of automatically generated summaries based on gold standard summaries but based on pairwise preferences.", "The fundamental idea is not to rely on expensive gold standard summaries as previous work does, but to ask annotators for their preferences about sentences.", "Annotators label pairs of sentences with a preference label which indicates which sentence contains more important information.", "Figure 2 illustrates such a pairwise preference annotation.", "A human would likely prefer the first sentence to be included in a summary instead of the second sentence because the first sentence contains, compared to the second sentence, relatively important information.", "Based on the preferences, our model generates a ranking which reflects the importance of the information contained in the 
sentences.", "Sentences with important information will be ranked high whereas sentences containing only less important information will be ranked low.", "The easiest strategy to select pairs of sentences for which preferences should be annotated is to sample pairs of sentences randomly and to ask annotators to provide a preference label for each sampled pair (i.e., annotating whether a ≻ b or b ≻ a for two randomly sampled a, b ∈ S ).", "The sentences are sampled from all source documents of a topic and are therefore independent of the automatically generated summaries.", "Our model will therefore not only be able to evaluate already generated summaries but also summaries which will be generated in the future.", "All preferences are stored in a matrix M .", "An entry of n at position M ij indicates that the sentence with index i was preferred n times over the sentence with index j .", "To reduce the number of annotations, we apply a smooth propagation of knowledge .", "The idea is that we do not only obtain information about the sampled sentence pair but also about pairs which are similar to the sampled pair.", "To estimate how much information can be transferred from one pair to another, we calculate the similarities between all sentences.", "As similarity measure, we use the average of the well-known and simple Cosine similarity of TF-IDF vectors and the Jaccard similarity.", "The combination allows the similarity computation to rely both on lexical similarity (Jaccard) and on important content words (Cosine).", "We define the set of all sentences in the source documents of a topic as S .", "Let ( a, b ) , a, b ∈ S be one annotated sentence pair and a , b the vectors of similarities between a (respectively b ) and all sentences (i.e. 
a i denotes the similarity between a and the i -th sentence in S ).", "We define the similarity of the pair ( a 1 , b 1 ) and the pair ( a 2 , b 2 ) as sim( a 1 , a 2 ) · sim( b 1 , b 2 ) .", "If a 1 is the exact same sentence as a 2 and b 1 is similar with a degree of 0.7 to b 2 , we will transfer 0.7 of the information from ( a 1 , b 1 ) to the pair ( a 2 , b 2 ) .", "Transferring information means that we generate additional preferences based on human preferences.", "If a 1 was preferred over b 1 by a human annotator, we will additionally generate a preference with a weight of 0.7 between a 2 and b 2 .", "This can be modeled by the outer product a 1 ⊗ b 1 of a 1 and b 1 .", "For each annotated pair ( a, b ) , in which a was preferred by a human over b , we update matrix M by M ← M + a ⊗ b .", "The proposed usage of pairwise preferences between sentences is close to the idea of generating a ranking of sports teams by playing individual matches.", "Instead of competitions between teams, we observe competitions between sentences.", "The outcome of a match between teams corresponds to the annotation of a pair of sentences by a human annotator.", "Since different people can have different opinions about the importance of information (Gambhir and Gupta, 2016), we expect that one sentence will not always be preferred by humans, just as the better sports team does not always win against a weaker opponent.", "This is expressed by the winning probability between teams (or sentences).", "In sports, the term power ranking is used to describe a ranking which does not only rank the individual teams but also assigns a score to each team, the skill .", "A well-known method to generate a power ranking is the Bradley-Terry (BT) model (Bradley and Terry, 1952).", "It estimates the utilities v ( a ) , v ( b ) of two teams (or two sentences) a and b so that the winning probability of a against b equals the score of a divided by the sum of the scores of a and b 
: p ( a is preferred over b ) = v ( a ) / ( v ( a ) + v ( b )) (2) An algorithm to find a maximum-likelihood estimator (MLE) has already been proposed by Zermelo (1929).", "To find the MLE, we iteratively perform Equation 3 for all sentences s i until the difference between two iterations is sufficiently small: v ( s i ) ← wins ( s i ) / Σ j [ duels ( s i , s j ) / ( v ( s i ) + v ( s j )) ] (3)", "wins ( s i ) denotes the total number of wins of s i and duels ( s i , s j ) the number of duels played between sentences s i and s j .", "This information was collected in the previous step and is stored in matrix M .", "We normalize the resulting skill vector after each iteration since every multiple of a solution is also a correct solution; normalization therefore restricts the model to converge to one particular solution.", "The utility of a summary is therefore defined as the weighted sum of the sentence utilities v : u ( s ) = Σ s i ∈ s ( | s i | / | s | ) · v ( s i ) (4)", "Since we do not want to restrict our model to purely extractive summaries (which would mean that all sentences contained in the automatic summary have to be exactly contained in the source documents), we estimate the score of a sentence s i in the summary by searching for the most similar sentence s ∗ in the source documents with similarity function sim : S × S → [0 , 1] .", "As weight of s i , we use | s i | / | s | ,", "where | · | denotes the length of the summary and sentence measured in number of characters.", "The intuition of the weight is that a sentence contributes more to the overall score of a summary if it is longer.", "The score of a summary will decrease if a large fraction of the summary is occupied by a poor sentence.", "By using a similarity function instead of a hard matching, our method is able to generalize to unseen sentences.", "The definition of u in Equation 4 does not consider redundancy.", "Including a sentence s twice would result in adding the score of s twice to the summary score.", "This behavior of the evaluation measure is not desirable.", "We therefore include a redundancy penalization which does not reward redundant 
information.", "For a summary s , we reduce the score of a sentence s′ ∈ s by v red ( s′ ) = v ( s′ ) · ( 1 / | s′ | ) Σ g ∈ s′ num ( g, s′ ) / num ( g, s ) (5) where num( g, s′ ) and num( g, s ) denote the number of occurrences of the bi-gram g in s′ and s , respectively.", "For already existing summarization corpora, reference summaries and/or Pyramid annotations have already been created.", "Instead of generating new preference annotations by asking human annotators, we can also reuse the available data to simulate annotations.", "To this end, we define functions w r and w p which estimate the score of a single sentence based on reference summaries (r) and Pyramid scores (p), respectively.", "We will use the scores generated by w r and w p to simulate annotations of sentence pairs.", "For two sentences a, b , we can simulate a human preference annotation of a ≻ b if w r ( a ) > w r ( b ) and a win of b over a otherwise (equivalently for w p ).", "For a set of gold standard summaries R , we define w r : S → ℝ simply to be the maximum similarity to the sentences in the gold standard summaries: w r ( s ) = max t ∈ r , r ∈ R (sim( s, t )) (6) If a very similar sentence appears in a gold standard summary, s will receive a high score.", "If no similar sentences are in the gold standard summaries, the sentence will receive a low score.", "Given that Pyramid annotations are available (as in the TAC 2009 corpus, for example), we can define the score of a sentence as the sum of the weights of the matched unique SCUs (similar to the Pyramid method).", "Annotations are, unfortunately, only available for the sentences in the texts in T + and not for sentences in S .", "We therefore search for each sentence s in S for the most similar sentence s ∗ in the texts in T + : s ∗ = arg max t ∈ T + sim( s, t ) (7) and set the score of s to w p ( s ) = Σ scu ∈ s ∗ weight( scu ) (8) where the sum ranges over all unique SCUs contained in s ∗ and weight( scu ) denotes the weight of an SCU as defined in (Nenkova and Passonneau, 
2004).", "As described above, we will observe wins and losses between pairs based on the estimated scores.", "We provide in this section a detailed analysis of our proposed evaluation method.", "For the experiment, we use eight topics from two popular multi-document summarization datasets, the DUC 2004 (DUC04) and TAC 2009 (TAC09) corpora, which are freely available upon request.", "4 http://duc.nist.gov and https://tac.", "Each topic in the datasets contains ten source documents.", "Each topic contains automatically generated summaries which were generated in the DUC 2004 and TAC 2009 shared tasks.", "All automatically generated summaries were evaluated by humans.", "Each summary was labeled with a score from 1 to 5 (DUC04) or 1 to 10 (TAC09) indicating the information content of the summary.", "Evaluation of grammaticality, writing style, etc. is not included in the scores.", "An evaluation model predicts the preference for two selected summaries correctly if the model predicts the same preference as the annotated reference scores, and incorrectly otherwise.", "We do not consider ties in the experiments.", "In the following, we report the agreement as described in Equation 1 for various experiments.", "We use the abbreviations JS (Jensen-Shannon), R1 to R4 (ROUGE-1 to ROUGE-4), SU4 (ROUGE-SU4), and PY (Pyramid (Nenkova and Passonneau, 2004)) to denote the reference systems.", "In the first experiment, we investigate whether humans are able to provide useful feedback in the", "form of pairwise preferences.", "To evaluate our model, we annotated 200 randomly sampled sentence pairs for the first four topics in the DUC04 and the first four topics in the TAC09 corpus with pairwise preferences.", "The preferences were used (including the previously described smoothed sampling) as input for the proposed model.", "The results are shown in Table 1.", "Column man denotes the performance of our new model and column Time (min) indicates how much time was required to 
generate the annotations.", "This information is particularly important for this paper since our main aim is to develop a cheap evaluation framework.", "On average, our model achieves an agreement of 0.673 in DUC04 and 0.688 in TAC09.", "This means that 67.3/68.8 percent of all pairs of manually rated summaries were predicted correctly.", "This outperforms the best versions of ROUGE in the respective corpora (SU4 with 65.1 percent in DUC04 and R2 with 66.0 percent in TAC09).", "Table 2: Agreement of different versions of ROUGE and Pyramid (PY) and our novel models based on human and automatically generated pairwise preferences in addition to manually labeled preferences. Columns: R1, R2, R4, PY, man+ref, man+py, man+ref+py. DUC04: 0.651, 0.639, 0.606, n/a, 0.722, n/a, n/a. TAC09: 0.638, 0.668, 0.674, 0.715, 0.682, 0.707, 0.717. With an average annotation time per topic of 53", "Simulated Annotations In the next experiment, we investigate whether we can simulate additional annotations based on already available reference summaries and Pyramid annotations.", "The automatically annotated pairs can be considered to be an additional weak supervision for the model.", "We simulated 200 additional annotations based on reference summaries and/or Pyramid annotations in addition to the 200 manual annotations per topic.", "To this end, we randomly sampled 200 additional pairs and annotated the pairs with a preference label based on reference summaries and/or Pyramid annotations.", "Table 2, column man+ref contains the results for 200 manual + 200 simulated reference summary-based annotations; column man+py contains the results for 200 manual + 200 simulated Pyramid score-based annotations; and column man+ref+py contains results for 200 manual + 200 reference summary-based + 200 Pyramid score-based annotations.", "The results show that we can improve the agreement with additional simulated annotations based on reference summaries in DUC04 by 5 percentage points.", "Additional annotations increased agreement 
in TAC09 by 3 percentage points.", "This leads to the conclusion that we can use already available reference summaries to substitute for more human preference annotations, which makes the trade-off between performance and annotation effort of our model even better.", "Now, we investigate whether simulated preferences alone are already sufficient to produce reasonably good results.", "Table 3, columns ref and py contain the results of an experiment where we sampled 1,000 simulated pairwise annotations.", "Table 3: Agreement with human judgments for reference systems and our model fed with only automatically generated preference labels. Columns: R1, R2, R4, PY, ref, py. DUC04: 0.651, 0.639, 0.606, n/a, 0.716, n/a. TAC09: 0.638, 0.668, 0.674, 0.715, 0.644, 0.709. Without any additional annotation effort, the new model is able to", "perform much better than ROUGE at DUC 2004.", "In TAC 2009, our model achieves similar performance to the best performing evaluation based on Pyramid annotations.", "We conclude that automatically generating pairwise preferences based on already available reference summaries is already sufficient to outperform ROUGE.", "Pairwise preferences generated based on the more expensive Pyramid annotations do not improve the performance.", "In the next experiment, we investigate how agreement changes with an increasing amount of annotations.", "Figure 3 shows how agreement improves with more annotations.", "We sampled n annotations (horizontal axis) randomly from the human annotations and averaged the resulting agreement scores (vertical axis) of 100 runs to obtain reliable results.", "We observe a continuous improvement of agreement in all four topics in the TAC 2009 dataset, which indicates that sampling more annotations can further improve the performance of our system.", "We now investigate the ranking generated by our model directly.", "Since individual sentences are annotated in the TAC 2009 corpus with SCUs, we can generate a ranking of the sentences and directly compare 
this ranking with the ranking generated by our model.", "Table 4 shows the percentage of correctly ordered sentence pairs (similar to Kendall's τ) for our model without and with smoothed sampling.", "Smoothed sampling improves the ranking of the model if we use 200 manual or 200 reference summary-based preferences in the TAC 2009 corpus.", "Given that we can sample pairs based on Pyramid scores, the model is able to reconstruct the ranking almost perfectly if we do not use smoothed sampling.", "With smoothed sampling, the performance decreases in this case.", "The result confirms the previously observed performance at summary scoring, where preferences based on Pyramid annotations performed best, followed by manually generated preference annotations.", "Evaluating automatically generated summaries is a challenging task, and creating the annotations which are required by methods such as ROUGE or Pyramid is laborious and expensive.", "We presented an alternative model which does not rely on reference summaries or Pyramid annotations but only on simple pairwise preferences between sentences.", "We showed in our experiments that the proposed model is able to perform better than the current state-of-the-art ROUGE method with less expensive annotations and that humans are able to provide useful feedback in the form of pairwise preferences.", "In combination with already available reference summaries and Pyramid annotations, we were able to simulate more annotations, which improved performance further.", "We conclude that gold standard summaries are not the only form of human feedback which can be used for summary evaluation.", "Investigating other kinds of feedback such as pairwise preferences might be a promising future research direction.", "In future work, we would like to investigate whether we can use crowd-sourcing platforms to collect pairwise preferences on a large scale.", "Furthermore, we want to investigate whether we can reduce the number of required preferences 
with smarter sampling methods.", "Active learning methods can be used to replace the simple random sampling strategy.", "Additionally, the investigation of more sophisticated similarity functions can potentially improve the model's performance.", "This work has been supported by the German Research Foundation (DFG) as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No.", "GRK 1994/1." ]
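The preference-based evaluation pipeline described in the sentences above can be sketched in a few lines of code. This is an illustrative reimplementation, not the authors' code: it shows the outer-product propagation of a human preference (M ← M + a ⊗ b), Zermelo's iterative maximum-likelihood estimation of Bradley-Terry utilities from the duel matrix M, and the agreement measure of Equation 1. All inputs in the example are toy values.

```python
def propagate(M, sim_a, sim_b):
    """Smoothed propagation: spread a human preference a over b to
    similar sentence pairs via the outer product of the similarity
    vectors of a and b (i.e., M <- M + a (x) b)."""
    n = len(M)
    return [[M[i][j] + sim_a[i] * sim_b[j] for j in range(n)]
            for i in range(n)]

def bradley_terry(M, iters=1000, tol=1e-9):
    """Zermelo's iterative MLE for Bradley-Terry skills.

    M[i][j] counts how often sentence i was preferred over sentence j.
    Returns a normalized utility vector v (one fixed point, since any
    multiple of a solution is also a solution)."""
    n = len(M)
    v = [1.0 / n] * n
    wins = [sum(row) for row in M]                       # wins(s_i)
    duels = [[M[i][j] + M[j][i] for j in range(n)]       # duels(s_i, s_j)
             for i in range(n)]
    for _ in range(iters):
        # v_i <- wins_i / sum_j duels_ij / (v_i + v_j)
        denom = [sum(duels[i][j] / (v[i] + v[j])
                     for j in range(n) if j != i) for i in range(n)]
        v_new = [wins[i] / max(denom[i], 1e-12) for i in range(n)]
        s = sum(v_new)
        v_new = [x / s for x in v_new]                   # normalize
        if max(abs(a - b) for a, b in zip(v_new, v)) < tol:
            return v_new
        v = v_new
    return v

def agreement(human_prefs, model_prefs):
    """Fraction of human preference pairs reproduced by the model
    (Equation 1): |> intersected with >_E| / |>|."""
    human = set(human_prefs)
    return len(human & set(model_prefs)) / len(human)
```

Given a duel matrix in which sentence 0 mostly beats sentence 1, and sentence 1 mostly beats sentence 2, `bradley_terry` recovers utilities with v[0] > v[1] > v[2]; `agreement` then scores any induced summary ranking against the human one.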
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "method", "abstain", "objective", "objective", "abstain", "abstain", "other", "other" ]
[ "We propose an unsupervised approach for sarcasm generation based on a non-sarcastic input sentence.", "Our method employs a retrieve-and-edit framework to instantiate two major characteristics of sarcasm: reversal of valence and semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the listener.", "While prior works on sarcasm generation predominantly focus on context incongruity, we show that combining valence reversal and semantic incongruity based on commonsense knowledge generates sarcastic messages of higher quality based on several criteria.", "Human evaluation shows that our system generates sarcasm better than human judges 34% of the time, and better than a reinforced hybrid baseline 90% of the time.", "Studies have shown that the use of sarcasm or verbal irony can increase creativity for both the speakers and the addressees (Huang et al., 2015), and can serve different communicative purposes such as evoking humor and diminishing or enhancing critique (Burgers et al., 2012).", "Thus, developing computational models that generate sarcastic messages could impact many downstream applications, such as better conversational agents and creative or humorous content creation.", "While most computational work has focused on sarcasm detection (Davidov et al., 2010; Gonzalez-Ibanez et al., 2011; Riloff et al., 2013; Ghosh et al., 2015; Joshi et al., 2015b; Muresan et al., 2016; Ghosh and Veale, 2017; Ghosh et al., 2017, 2018), research on sarcasm generation is in its infancy (Joshi et al., 2015a; Mishra et al., 2019).", "The research was conducted when the author was at USC/ISI.", "Sarcasm generation is a challenging problem since the generated utterance should have at least five characteristics (a.k.a. 
sarcasm factors) (Burgers et al., 2012): 1) be evaluative; 2) be based on a reversal of valence between the literal and intended meaning; 3) be based on a semantic incongruity with the context, which can include shared commonsense or world knowledge between the speaker and the addressee; 4) be aimed at some target; and 5) be relevant to the communicative situation in some way.", "To simplify the problem, we focus on the task of generating a sarcastic utterance starting from a non-sarcastic utterance that conveys the speaker's intended meaning and that is evaluative.", "Consider the examples in Table 1. Given the literal input I hate getting sick from fast food or I inherited unfavorable genes from my mother, our task is to generate a sarcastic message that would convey this intended literal meaning.", "In this simplifying task, we are not concerned with the fifth characteristic, while the first and, to some degree, the fourth are specified by the input (literal) utterances.", "Given the lack of training data for the sarcasm generation task, we propose a novel unsupervised approach that has three main modules guided by the above-mentioned sarcasm factors: 1. Reversal of Valence: To generate sarcastic utterances that satisfy the second characteristic, we identify the evaluative word and use negation or lexical antonyms to generate the sarcastic utterance by reversing the valence (Section 4.1).", "For example, given I hate getting sick from fast food, this module will generate I love getting sick from fast food (GenSarc1 in Table 1).", "2. Retrieval of Commonsense Context: Adding commonsense context could be important to make explicit the semantic incongruity factor (e.g., GenSarc4 vs. GenSarc3 in Table 1), or could enhance the humorous effect of the generated sarcastic message (e.g., GenSarc2 vs. 
GenSarc1 in Table 1).", "We propose an approach where retrieved relevant commonsense context sentences are added to the generated sarcastic message.", "First, we use a pre-trained language model fine-tuned on ConceptNet (Speer et al., 2017), called COMET (Bosselut et al., 2019), to generate relevant commonsense knowledge.", "COMET gives us that inherited unfavorable genes from my mother causes to be ugly or that getting sick from fast food causes stomach ache (Section 4.2.1).", "The derived commonsense concept is then used to retrieve relevant sentences from a corpus that could be added to the sentence obtained through reversal of valence (e.g., Stomach ache is just an additional side effect in Table 1) (Section 4.2.2).", "3. Ranking of Semantic Incongruity: The previous module generates a list of candidate commonsense contexts.", "Next, we measure contradiction between each of these commonsense contexts and the sentence generated by the reversal of valence approach (module 1) and select the commonsense context that received the highest contradiction score.", "Finally, we concatenate the selected context to the sentence obtained through reversal of valence.", "Here, conceptually, contradiction detection is aimed at capturing the semantic incongruity between the output of valence reversal and its context.", "We test our approach on 150 non-sarcastic utterances randomly sampled from two existing data sets.", "We conduct human evaluation using several criteria: 1) how sarcastic the generated message is; 2) how humorous it is; 3) how creative it is; and 4) how grammatical it is.", "Evaluation via Amazon's Mechanical Turk (MTurk) shows that our system is better 34% of the time compared to humans and 90% of the time compared to a recently published reinforced hybrid baseline (Mishra et al., 2019).", "We also present a thorough ablation study of several variations of our system demonstrating that incorporating more sarcasm factors (e.g., reversal of valence, 
commonsense context, and semantic incongruity) leads to higher-quality sarcastic utterances.", "We make the code and data from our experiments publicly available.", "2 Related Work 2.1 Sarcasm Generation Research on sarcasm generation is in its infancy.", "Joshi et al. (2015a) proposed SarcasmBot , a sarcasm generation system that implements eight rule-based sarcasm generators, each of which generates a certain type of sarcastic expression.", "Peled and Reichart (2017) introduced a novel task of sarcasm interpretation, defined as the generation of a non-sarcastic utterance conveying the same message as the original sarcastic one.", "They use supervised machine translation models for this task, given the availability of parallel data.", "However, it is impractical to assume the existence of large corpora for training supervised generative models using deep neural nets; we hence resort to unsupervised approaches.", "Mishra et al. (2019) employed reinforced neural seq2seq learning and information retrieval based approaches to generate sarcasm.", "Their models are trained using only unlabeled non-sarcastic and sarcastic opinions.", "They generated sarcasm as a disparity between positive sentiment context and negative situational context.", "We, in contrast, model sarcasm using semantic incongruity with the context, which could include shared commonsense or world knowledge.", "Prior works looked into unsupervised text style/sentiment transfer (Shen et al., 2017; Fu et al., 2017; Li et al., 2018), which transfers a sentence from one style to another without changing the content.", "This is relevant to the reversal of valence for sarcasm generation.", "However, these transformations are mainly at the lexical and syntactic levels rather than the pragmatic level; in contrast, sarcastic utterances often include additional information associated with the context in which they occur (Regel, 2009), which is beyond text style/sentiment transfer.", "The studies of irony and sarcasm are closely related, as 
sarcasm is defined as the use of verbal irony to mock someone or show contempt.", "Van Hee et al. (2018) addressed the challenge of modeling implicit or prototypical sentiment in the framework of automatic irony detection.", "They first manually annotated stereotypical ironic situations (e.g., flight delays) and later addressed the implicit sentiment held towards such situations automatically by using both a lexico-semantic commonsense knowledge base and a data-driven method.", "However, they used it for irony detection, while we are focused on sarcasm generation.", "3 Sarcasm Factors Used in Generation: A sarcastic utterance must satisfy the sarcasm factors, i.e., the inherent characteristics of sarcasm (Attardo, 2000; Burgers et al., 2012).", "In this research, we leverage the use of two particular factors to generate sarcasm.", "One is the reversal of valence and the other is the semantic incongruity with the context, which could include shared commonsense or world knowledge between the speaker and the hearer.", "The first key sarcasm factor is the reversal of valence between the literal and the intended meaning (Burgers et al., 2012).", "Reversal of valence can be achieved in two ways: when the literal meaning of the sarcastic message is positive (e.g., that is a great outfit if the outfit is ugly) (footnote 2: while we do not directly model the negative intent in sarcasm, the generated output could lead to sarcastic messages rather than just ironic ones, depending on the initial target given in the non-sarcastic message; e.g., a sample generation: Our politicians have everything under control. The nation is in danger of falling into anarchy.) or when the literal 
meaning is negative (e.g., that is an ugly dress if the dress is really beautiful).", "Arguably, the former is more likely to appear in sarcastic utterances.", "As the intended meaning is generally the opposite of its literal meaning in sarcastic utterances (Gibbs, 1986), the lexical antonyms of negative sentiment words, or negation, can be used to convert a non-sarcastic utterance to its sarcastic version.", "For example, given a non-sarcastic utterance Zero visibility in fog makes driving difficult, one could identify the evaluative negative word difficult and replace it with its antonym easy, thereby converting the utterance to the sarcastic Zero visibility in fog makes driving easy.", "Likewise, Drunk driving should be taken seriously can be converted to its sarcastic counterpart, Drunk driving should not be taken seriously, by using negation.", "We propose a generation approach that is able to capture the reversal of valence (Section 4.1).", "The second sarcasm factor, semantic incongruity, appears between the literal evaluation and the context, as in the example I love getting sick from fast food, where we have semantic incongruity between the positive word love and the negative situation getting sick.", "However, often, the negative situation is absent from the utterance, and thus additional pragmatic inference is needed to understand the sarcastic intent.", "For example, the listener might miss the sarcastic intent in zero visibility in fog makes driving easy, where the speaker meant to convey that it can cause accidents.", "Adding suffered three cracked ribs in an accident. makes the sarcastic intent more explicit, while maintaining the acerbic wit of the speaker.", "In the next section, we propose a novel generation approach that incorporates such relevant commonsense knowledge as context for semantic incongruity (Section 4.2 and Section 4.3).", "An overview of the sarcasm generation pipeline is shown in Figure 1. 
In this section, we detail the three main modules that are designed to instantiate the key sarcasm factors.", "As sarcasm is a type of verbal irony used to mock or convey contempt, in most sarcastic messages we encounter a positive sentiment towards a negative situation (i.e., ironic criticism (Kreuz and Link, 2002)).", "Figure 1: Our complete pipeline for sarcasm generation.", "The components with highlighted background denote Reversal of Valence, Retrieval of Commonsense Context, and Ranking based on Semantic Incongruity, respectively.", "This observation is also supported by research on sarcasm detection, particularly on social media.", "Hence, for our sarcasm generation task, we focus on transforming a literal utterance with negative valence into positive valence.", "To implement the reversal of valence, as highlighted in the yellow background in Figure 1, we first identify the evaluative words and replace them with their lexical antonyms using WordNet (Miller, 1995).", "As we expect the evaluative words to be negative words, we rely on the word-level negative scores obtained from SentiWordNet (Esuli and Sebastiani, 2006).", "In the absence of words with negative polarity, we check if there is the negation word not or words ending with n't and remove these words.", "In case there are both negative words and not (or words ending in n't), we handle only one of them.", "Given the non-sarcastic example zero visibility in fog makes driving difficult shown in Figure 1, which we use as our running example, the reversal of valence module generates zero visibility in fog makes driving easy.", "As discussed before, a straightforward reversal of valence might not generate sarcastic messages that display a clear semantic incongruity, and thus, additional context is needed.", "We propose an approach to retrieve relevant context for the sarcastic message based on commonsense knowledge.", "First, we generate commonsense knowledge based on ConceptNet (e.g., 
driving in zero visibility causes accidents) (Section 4.2.1).", "Second, we retrieve candidate context sentences that contain the commonsense concept from a retrieval corpus (Section 4.2.2) and edit them for grammatical consistency with the input message (Section 4.2.3).", "We extract nouns, adjectives, adverbs, and verbs from the non-sarcastic input messages and feed them as input to the COMET (Bosselut et al., 2019) model to generate commonsense knowledge (highlighted in the green background in Figure 1).", "COMET is an adaptation framework for constructing commonsense knowledge based on pre-trained language models.", "It starts with a pre-trained GPT (Radford et al., 2018) model and fine-tunes it on commonsense knowledge tuples (in our case, ConceptNet (Speer et al., 2017)).", "These tuples provide COMET with the knowledge base structure and relations that must be learned, and COMET adapts the representations that the language model learned from the pre-training stage to add novel nodes to the seed knowledge graph.", "Our work only leverages the causes relation.", "For instance, from our running example, we first remove the stopwords and then extract nouns, adjectives, adverbs, and verbs including the terms zero, visibility, fog, makes driving, and difficult to feed to COMET as inputs.", "In turn, COMET returns the probable causes with their probability scores.", "For the running example, COMET returns with the highest probability that these terms may cause an accident (illustrated in Figure 2).", "Commonsense Concepts: Once we obtain the most probable output from COMET, the next step is to retrieve sentences containing the commonsense word or phrase from a retrieval corpus.", "We impose several constraints:", "(a) the retrieved sentences should contain the commonsense concept at the beginning or at the end;", "(b) sentence length should be less than twice the number of tokens in the non-sarcastic input, to keep consistency between the length of the non-sarcastic 
input and its sarcastic version.", "If the commonsense phrase is not present in the retrieval corpus, we retrieve sentences containing the nouns within the topmost phrase.", "For example, if COMET yields for microwave burger awful the phrase food to spoil, and this phrase does not appear in any sentence in the retrieval corpus, we search for food and later replace it in the retrieved sentence with food to spoil.", "COMET often returns output with common phrases such as you to be, you to get, person will be, you have, which we also remove while keeping the main content word (i.e., the commonsense concept). We use Sentencedict.com, an online sentence dictionary, as the retrieval corpus, where one can find high-quality sentences for almost every word, obeying the above constraints.", "4.2.3 Grammatical Consistency: We first check whether the retrieved sentences are consistent with the non-sarcastic input in terms of the pronouns.", "If the pronouns are mismatched, then we modify the pronoun of the retrieved sentence to match the pronoun of the non-sarcastic input.", "In case the non-sarcastic input does not have any pronoun, but the retrieved sentence does, we simply change that pronoun to I.", "For example, if the non-sarcastic input sentence is Ignoring texts is literally the worst part of communication. and the retrieved commonsense sentence is He has never suffered the torment of rejection., we modify the retrieved sentence to I have never suffered the torment of rejection. 
to keep the pronoun use consistent.", "After correcting the pronouns and proper names (in the same way as pronoun correction), we feed the corrected sentences into the neural grammatical error correction system of Zhao et al. (2019) (footnote 3: https://sentencedict.com/) to correct any pronoun or gender-specific errors introduced by the replacements.", "After the grammatical error correction, the next step is to select the best context sentence from the retrieved results.", "Since we expect the context sentences to be incongruous with the sentence generated by the reversal of valence approach (Section 4.1), we rank the context sentences by semantic incongruity scores and select the best candidate.", "We frame the problem of semantic incongruity based on the Natural Language Inference (NLI) (Bowman et al., 2015) task.", "The Multi-Genre NLI (Williams et al., 2018) covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization, making it an ideal choice as our NLI dataset.", "We first fine-tune RoBERTa-large (Liu et al., 2019), a state-of-the-art pre-trained language model, for 3-way classification (i.e., contradiction, entailment, and neutral) by training on the Multi-NLI dataset.", "Next, for each retrieved sentence, we treat it as the premise and the sentence generated by the reversal of valence as the hypothesis, and thus obtain a contradiction score from the trained model.", "Finally, the scores obtained for the contradiction class are used as a proxy for the degree of semantic incongruity, and we select the context with the highest score.", "Figure 1 shows the region with light purple background as our incongruity ranking module.", "We use the pre-trained COMET model 4 for commonsense reasoning with a greedy decoding of five to generate a commonsense phrase and return the topmost one that has no lexical overlap with the input.", "If the generated phrase contains stopwords at the beginning, we remove them.", "For 
incorporating semantic incongruity, we use the RoBERTa-large model with 355M parameters and fine-tune it on MNLI.", "For the grammatical error correction model, we use an open-source pre-trained model.", "5 Experimental Setup 5.1 Dataset: Ghosh et al. (2020) released a dataset of 4,762 pairs of speakers' sarcastic messages and hearers' interpretations by conducting a crowdsourcing experiment.", "Both datasets were collected using the hashtag #sarcasm from Twitter.", "We merge these two datasets and choose non-sarcastic utterances no longer than 15 words.", "For each literal non-sarcastic utterance we also keep the corresponding gold sarcastic message, which is useful for evaluation and comparison purposes.", "We randomly select 150 utterances as part of the test set (i.e., five times more than the size of the test data in Mishra et al. (2019)), while ensuring that such utterances do not contain high lexical overlap.", "We impose this constraint to evaluate how our method(s) deal with diverse data.", "1. Full Model (FM) : This model consists of all the three modules aimed at capturing reversal of valence, commonsense context, and semantic incongruity, respectively.", "2. Reversal of Valence (RV) : This model relies only on the reversal of valence component.", "3. No Reversal of Valence (NoRV) : This model only retrieves commonsense contexts and ranks them based on semantic incongruity.", "4. No Semantic Incongruity (NSI) : This model relies only on the reversal of valence and retrieval of commonsense context, without ranking based on semantic incongruity.", "A randomly selected retrieved sentence is used.", "5. MTS2019 : We make use of the model released by Mishra et al. (2019) as it is the state-of-the-art sarcasm generation system.", "6. 
Human (Gold) Sarcasm : As described in Section 5.1, we have gold sarcasm created by humans for every non-sarcastic utterance.", "BLEU (Papineni et al., 2002) is one of the most widely used automatic evaluation metrics for generation tasks such as Machine Translation.", "However, for creative text generation, it is not ideal to expect significant n-gram overlaps between the machine-generated and the gold-standard utterances.", "Hence, we performed a human evaluation.", "We evaluate a total of 900 generated utterances since our ablation study consisted of six different systems with 150 utterances each.", "Sarcasm is often linked with intelligence, creativity, and wit; thus we propose a set of 4 criteria to evaluate the generated output: (1) Creativity (How creative are the utterances?), (2) Sarcasticness (How sarcastic are the utterances?), (3) Humour (How funny are the sentences?) (Skalicky and Crossley, 2018), and (4) Grammaticality (How grammatical are the sentences?).", "We design an MTurk task where Turkers were asked to rate outputs from all the six systems.", "Each Turker was given the non-sarcastic utterance as well as a group of sarcastic utterances generated by all the six systems (randomly shuffled).", "Each criterion was rated on a scale from 1 (not at all) to 5 (very).", "Finally, each utterance was rated by three individual Turkers.", "(Footnote 6: https://github.com/TarunTater/sarcasm generation) Table 2: Average scores for generated sarcasm from all systems as judged by the Turkers. System (Sarcasticness, Creativity, Humor, Grammaticality): State-of-the-art (Mishra et al., 2019): 1.63, 1.60, 1.50, 1.46; Human Generated: 3.57, 3.16, 3.18, 3.98; Reversal of Valence (RV): 3.00, 2.80, 2.72, 4.29; No Reversal of Valence (NoRV): 1.79, 2.28, 2.09, 3.91; No Semantic Incongruity (NSI): 3.04, 2.99, 2.90, 3.68; Full Model (FM): 3.23*, 3.24, 3.08*, 3.69. 55, 59, 66, and 60 Turkers", "attempted the HITs (inter-annotator agreement of 0.59, 0.53, 0.47 and 0.66 for the tasks on creativity, sarcasticness, 
humour and grammaticality, respectively, using Spearman's correlation coefficient).", "Table 2 presents the scores for the above-mentioned metrics of different systems averaged over 150 test utterances.", "Our full model as well as the variations that ablated some components improve over the state-of-the-art (Mishra et al., 2019) on all the criteria.", "The ablation in Table 2 shows that our full model is superior to individual modules in terms of sarcasticness, creativity and humor.", "For grammaticality, we observe that the Turkers scored shorter sentences higher (e.g., RV), which also explains why the NoRV model received a higher score than the full model.", "NoRV otherwise performed worse than all the other variations.", "In terms of creativity, our full model attains the highest average scores over all the other models, including sarcastic utterances composed by humans.", "For grammaticality, the reversal of valence model is the best, even better than human generated ones.", "Figure 3: Pie chart comparing the success rate of all the variations of our model.", "The performance of the full model is the second best in terms of sarcasticness and humor, only slightly worse than human-generated sarcasm, showing the effectiveness of our approach that captures various factors of sarcasm.", "Table 3 displays the pairwise comparisons between the full model (FM) and human generated sarcasm, and FM and Mishra et al. (2019), respectively.", "Given a pair of inputs, we decide win/lose/tie by comparing the average scores (over three Turkers) of both outputs.", "We see that FM dominates Mishra et al. 
(2019) on all the metrics and human-generated sarcasm on the creativity metric.", "For sarcasticness, although humans are better, the FM model still has a 34% winning rate.", "We focus our ablation study on the metric of sarcasticness, as we consider this the main criterion for the success of generating sarcasm.", "As shown in Figure 3, our best model (FM) outperforms individual ablation modules.", "(Table 4 header: Non Sarcastic, System, Sarcasm, S, C, H, G; first input: I inherited unfavorable genes from my mother.)", "We filtered out 60 examples from the 150 with no ties.", "The ablation component employing just Reversal of Valence is second best for sarcasticness according to Figure 3. Further, to understand the extent to which ranking the retrieved sentence based on the degree of incongruity helped generate better sarcasm, we took the outputs from FM and NSI for comparisons.", "Out of the 150 utterances, 119 times there was no tie.", "Our best model (FM) wins 66% of the time, while the NSI model wins 34% of the cases.", "Table 4 demonstrates several generation outputs from different modules associated with human ratings for different criteria.", "We notice that often one of our modules generates better sarcasm than humans.", "For instance, for the first and the second example in Table 4, all of FM, RV and NSI are better than human generated sarcasm.", "In general, the generations from the FM model are more humorous, which is also a useful criterion to evaluate sarcasm besides sarcasticness (Skalicky and Crossley, 2018).", "We also observe that Turkers consistently rated generations from the FM model more sarcastic than the NSI model, suggesting that there is a correlation between human scores of sarcasticness and incongruity.", "To support this observation, we took the contradiction scores from the RoBERTa model for both the best-ranked retrieved sentences (FM) and the randomly selected retrieved sentences (NSI).", "We then computed a correlation between the sarcasticness scores given by the humans 
and the automatic contradiction scores for both the best-ranked retrieved sentences (FM) and the randomly selected retrieved sentences (NSI).", "For the FM model we obtain a higher Pearson correlation coefficient than for NSI, suggesting the important role of incongruity for sarcasm.", "While our best model combining different sarcasm factors does outperform the system with individual factors, there are sometimes exceptions.", "We notice that, in a few cases, the simple reversal of valence (RV) strategy is enough to generate sarcasm.", "For instance, for the literal input It is not fun to date a drug addict, just removing the negation word leads to a full score on sarcasticness without the additional commonsense module.", "Future work would include building a model that can decide whether just the RV strategy is sufficient or if we need to add additional commonsense context to it.", "Although incorporating incongruity ranking is useful, there are several cases when a randomly retrieved message may obtain a better sarcasticness score.", "Table 5 presents such an example.", "Even though the retrieved message Please stop whirling me round; it makes me feel sick. 
scores lower than The very thought of it makes me feel sick., in terms of incongruity with respect to I love being put in the hospital for dehydration, the former received a higher sarcasticness score, which suggests that the incongruity scores obtained from NLI are not perfect.", "The ordering of the commonsense context and the valence-reversed sentence is predetermined in our generation.", "Specifically, we always append the retrieved commonsense context after the valence-reversed output.", "Changing the order can sometimes make the sarcasm better and more humorous.", "The reason for our current ordering choice is that we always treat the valence-reversed version as the hypothesis and the retrieved commonsense sentence as the premise for the NLI model.", "We attempted reversing the order in preliminary experiments. (Table 5 residue: NSI: I love being put in the hospital for dehydration.)", "In the future, we would like to generate more diverse sarcasm that is not tied to a fixed pattern.", "Finally, the generations are dependent on COMET, and thus the quality will be governed by the accuracy of the COMET model.", "We address the problem of unsupervised sarcasm generation that models several sarcasm factors, including reversal of valence and semantic incongruity with the context.", "The key contribution of our approach is the modeling of commonsense knowledge in a retrieve-and-edit generation framework.", "A human-based evaluation based on four criteria shows that our generation approach significantly outperforms a state-of-the-art model.", "Compared with human-generated sarcasm, our model shows promise particularly for creativity, humor and sarcasticness, but less for grammaticality.", "A bigger challenge in sarcasm generation, and more generally in creative text generation, is to capture the difference between creativity (novel but well-formed material) and nonsense (ill-formed material).", "Language models conflate the two, so developing methods that are nuanced enough to recognize this difference is 
key to future progress.", "This work was supported in part by the MCS program under Cooperative Agreement N66001-19-2-4032 and the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).", "The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.", "The authors would like to thank Christopher Hidey, John Kropf, Anusha Bala and Christopher Robert Kedzie for useful discussions.", "The authors also thank members of PLUSLab at the University of Southern California and the anonymous reviewers for helpful comments." ]
[ "objective", "method", "result", "result", "abstain", "abstain", "other", "abstain", "other", "method", "method", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "objective", "objective", "method", "result", "objective", "method", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "other", "other", "other", "other" ]
[ "This work aims to tackle the challenging heterogeneous graph encoding problem in the text-to-SQL task.", "Previous methods are typically node-centric and merely utilize different weight matrices to parameterize edge types, which 1) ignore the rich semantics embedded in the topological structure of edges, and 2) fail to distinguish local and non-local relations for each node.", "To this end, we propose a Line Graph Enhanced Text-to-SQL (LGESQL) model to mine the underlying relational features without constructing meta-paths.", "By virtue of the line graph, messages propagate more efficiently through not only connections between nodes, but also the topology of directed edges.", "Furthermore, both local and non-local relations are integrated distinctively during the graph iteration.", "We also design an auxiliary task called graph pruning to improve the discriminative capability of the encoder.", "Our framework achieves state-of-the-art results (62.8% with GLOVE, 72.0% with ELECTRA) on the cross-domain text-to-SQL benchmark Spider at the time of writing.", "The text-to-SQL task (Zhong et al., 2017; Xu et al., 2017) aims to convert a natural language question into a SQL query, given the corresponding database schema.", "It has been widely studied in both academic and industrial communities to build natural language interfaces to databases (NLIDB, Androutsopoulos et al., 1995).", "One daunting problem is how to jointly encode the question words and database schema items (including tables and columns), as well as various relations among these heterogeneous inputs.", "Typically, previous literature utilizes a node-centric graph neural network (GNN, Scarselli et al., 2008) (footnote: the corresponding authors are Lu Chen and Kai Yu)", "to aggregate information from neighboring nodes.", "GNNSQL (Bogin et al., 2019a) adopts a relational graph convolution network (RGCN, Schlichtkrull et al., 2018) to take into account different edge types between schema items, such as 
the T-HAS-C relationship 1, primary key, and foreign key constraints.", "However, these edge features are directly retrieved from a fixed-size parameter matrix and may suffer from the drawback of being unaware of contextualized information, especially the structural topology of edges.", "A meta-path is defined as a composite relation linking two objects, which can be used to capture multi-hop semantics.", "For example, in Figure", "1(a), relation Q-EXACTMATCH-C and C-BELONGSTO-T can form a 2-hop meta-path indicating that some table t has one column exactly mentioned in the question.", "1 For abbreviation, Q represents a QUESTION node, while T and C represent TABLE and COLUMN nodes.", "or multi-hop, in the same manner (relative position embedding, Shaw et al., 2018) in a complete graph.", "Without distinguishing local and non-local neighbors, see Figure", "1(b), each node will attend to all the other nodes equally, which may lead to the notorious over-smoothing problem (Chen et al., 2020a).", "Besides, meta-paths are currently constructed by domain experts or explored by breadth-first search (Kong et al., 2012).", "Unfortunately, the number of possible meta-paths increases exponentially with the path length, and selecting the most important subset among them is an NP-complete problem (Lao and Cohen, 2010).", "To address the above limitations, we propose a Line Graph Enhanced Text-to-SQL (LGESQL) model, which explicitly considers the topological structure of edges.", "According to the definition of a line graph (Gross and Yellen, 2005), we first construct an edge-centric graph from the original node-centric graph.", "These two graphs capture the structural topology of nodes and edges, respectively.", "Iteratively, each node in either graph gathers information from its neighborhood and incorporates edge features from the dual graph to update its representation.", "As for the node-centric graph, we combine both local and non-local edge features into the computation.", "Local edge 
features denote 1-hop relations and are dynamically provided by node embeddings in the line graph, while non-local edge features are directly extracted from a parameter matrix.", "This distinction encourages the model to pay more attention to local edge features while maintaining information from multi-hop neighbors.", "Additionally, we propose an auxiliary task called graph pruning.", "It introduces an inductive bias that the heterogeneous graph encoder of text-to-SQL should be intelligent enough to extract the golden schema items related to the question from the entire database schema graph.", "Experimental results on benchmark Spider (Yu et al., 2018b) demonstrate that our LGESQL model promotes the exact set match accuracy to 62.8%", "(with GLOVE, Pennington et al. 2014) and 72.0%", "(with the pre-trained language model ELECTRA, Clark et al. 2020).", "Our main contributions are summarized as follows: We propose to model the 1-hop edge features with a line graph in text-to-SQL.", "Both non-local and local features are integrated during the iteration process of node embeddings.", "We design an auxiliary task called graph pruning, which aims to determine whether each node in the database schema graph is relevant to the given question.", "Empirical results on dataset Spider demonstrate that our model is effective, and we achieve state-of-the-art performances both without and with pre-trained language models.", "Problem definition: Given a natural language question Q = (q_1, q_2, ..., q_{|Q|}) with length |Q| and the corresponding database schema S = T ∪ C, the target is to generate a SQL query y.", "The database schema S contains multiple tables T = {t_1, t_2, ...} and columns C = {c^{t_1}_1, c^{t_1}_2, ..., c^{t_2}_1, c^{t_2}_2, ...}.", "Each table t_i is described by its name and is further composed of several words (t_{i1}, t_{i2}, ...).", "Similarly, we use the word phrase (c^{t_i}_{j1}, c^{t_i}_{j2}, ...) to represent column c^{t_i}_j ∈ t_i.", "Besides, each column c^{t_i}_j also has a 
type field c^{t_i}_{j0} to constrain its cell values (e.g. TEXT and NUMBER).", "The entire input node-centric heterogeneous graph G_n = (V_n, R_n) consists of all three types of nodes mentioned above, that is V_n = Q ∪ T ∪ C with the number of nodes |V_n| = |Q| + |T| + |C|, where |T| and |C| are the number of tables and columns respectively.", "Meta-path: As shown in Figure", "1(a), a meta-path represents a path τ_1 →(r_1) τ_2 →(r_2) ... →(r_l) τ_{l+1}, where the target vertex type of the previous relation r_{i-1} equals the source vertex type τ_i of the current relation r_i.", "It describes a composite relation r = r_1 ∘ r_2 ∘ ... ∘ r_l between nodes with type τ_1 and τ_{l+1}.", "In this work, τ_i ∈ {QUESTION, TABLE, COLUMN}.", "Throughout our discussion, we use the term local to denote relations with path length 1, while non-local relations refer to meta-paths longer than 1.", "The relational adjacency matrix R_n contains both local and non-local relations; see Appendix A for enumeration.", "Line Graph: Each vertex v^e_i, i = 1, 2, ..., |V_e| in the line graph G_e = (V_e, R_e) can be uniquely mapped to a directed edge r^n_{st} ∈ R_n, or v^n_s → v^n_t, in the original node-centric graph G_n = (V_n, R_n).", "Function f maps the source and target node index tuple (s, t) into the edge index i = f(s, t) in G_e.", "The reverse mapping is f^{-1}.", "In the line graph G_e, a directed edge r^e_{ij} ∈ R_e exists from node v^e_i to v^e_j, iff the target node of edge r^n_{f^{-1}(i)} and the source node of edge r^n_{f^{-1}(j)} in G_n are exactly the same node.", "Actually, r^e_{ij} captures the information flow in meta-path r^n_{f^{-1}(i)} → r^n_{f^{-1}(j)}.", "We prevent backtracking cases where two reverse edges will not be connected in G_e, illustrated in Figure", "2. 
We only utilize local relations in R_n as the node set V_e, to avoid creating too many nodes in the line graph G_e.", "Symmetrically, each edge in R_e can be uniquely identified by a node in V_n.", "For example, in the upper right part of Figure 2, the edge between nodes e1 and e2 in the line graph can be represented by the middle node with double solid borderlines in the original graph.", "After constructing the line graph, we utilize the classic encoder-decoder architecture (Sutskever et al., 2014; Bahdanau et al., 2015) as the backbone of our model.", "LGESQL consists of three parts: a graph input module, a line graph enhanced hidden module, and a graph output module (see Figure 3 for an overview).", "The first two modules aim to map the input heterogeneous graph G_n into node embeddings X ∈ R^{|V_n| × d}, where d is the graph hidden size.", "The graph output module retrieves and transforms X into the target SQL query y.", "This module aims to provide the initial embeddings for both nodes and edges.", "Initial local edge features Z^0 ∈ R^{|V_e| × d} and non-local edge features Z^{nlc} ∈ R^{(|R_n| - |V_e|) × d} are directly retrieved from a parameter matrix.", "For nodes, we can obtain their representations from either GLOVE word vectors (Pennington et al., 2014) or a pre-trained language model (PLM) such as BERT (Devlin et al., 2019).", "GLOVE Each word q_i in the question Q, or schema item t_i ∈ T or c^{t_i}_j ∈ C, can be initialized by looking up the embedding dictionary without considering the context.", "Then, these vectors are passed into three type-aware bidirectional LSTMs (BiLSTM, Hochreiter and Schmidhuber, 1997) respectively to attain contextual information.", "We concatenate the forward and backward hidden states for each question word q_i as the graph input x^0_{q_i}.", "As for table t_i, after feeding (t_{i0}, t_{i1}, t_{i2}, ...) into the BiLSTM (with the special type t_{i0} = table for all i), we concatenate the last hidden states in both directions as the graph input x^0_{t_i} 
(similarly for column c^{t_i}_j).", "These node representations are stacked together to form the initial node embeddings matrix X^0 ∈ R^{|V_n| × d}.", "PLM Firstly, we flatten all question words and schema items into a sequence, where columns belonging to the same table are clustered together: [CLS] q_1 q_2 ... q_{|Q|} [SEP] t_{10} t_1 c^{t_1}_{10} c^{t_1}_1 c^{t_1}_{20} c^{t_1}_2 t_{20} t_2 c^{t_2}_{10} c^{t_2}_1 c^{t_2}_{20} c^{t_2}_2 ... [SEP].", "The type information t_{i0} or c^{t_i}_{j0} is inserted before each schema item.", "Since each word w is tokenized into sub-words, we append a subword attentive pooling layer after the PLM to obtain word-level representations.", "Concretely, given the output sequence of subword features w^s_1, w^s_2, ..., w^s_{|w|} for each subword w^s_i in w, the word-level representation w is computed as a_i = softmax_i(tanh(w^s_i W_s) v_s^T), w = Σ_i a_i w^s_i, where v_s and W_s are trainable parameters.", "After obtaining the word vectors, we also feed them into three BiLSTMs according to the node types and get the graph inputs X^0 for all nodes.", "It contains a stack of L dual relational graph attention network (Dual RGAT) layers.", "In each layer l, two RGATs (Wang et al., 2020b) capture the structure of the original graph and the line graph, respectively.", "Node embeddings in one graph play the role of edge features in the other graph.", "For example, the edge features used in graph G_n are provided by the node embeddings in graph G_e.", "We use X^l ∈ R^{|V_n| × d} to denote the input node embedding matrix of graph G_n in the l-th layer, l ∈ {0, 1, ..., L-1} (we randomly shuffle the order of tables and columns in different mini-batches to discourage over-fitting).", "As for each specific node v^n_i ∈ V_n, we use x^l_i.", "Similarly, matrix Z^l ∈ R^{|V_e| × d} and vector z^l_i are used to denote node embeddings in the line graph.", "Following RATSQL (Wang et al., 2020a), we use multi-head scaled dot-product attention (Vaswani et al., 2017) to calculate the attention weights.", "For brevity, we 
formulate the entire computation in one layer as two basic modules: X^{l+1} = RGAT_n(X^l, [Z^l; Z^{nlc}], G_n), Z^{l+1} = RGAT_e(Z^l, X^l, G_e), where Z^{nlc} is the aforementioned non-local edge features in the original graph G_n.", "Given the node-centric graph G_n, the output representation x^{l+1}_i of the l-th layer is computed by", "α^h_{ji} = (x^l_i W^h_q)(x^l_j W^h_k + [φ(r^n_{ji})]^h_H)^T, α̃^h_{ji} = softmax_j(α^h_{ji} / √(d/H)), x̃^l_i = ‖^H_{h=1} Σ_{v^n_j ∈ N^n_i} α̃^h_{ji} (x^l_j W^h_v + [φ(r^n_{ji})]^h_H), x̂^{l+1}_i = LayerNorm(x^l_i + x̃^l_i W_o), x^{l+1}_i = LayerNorm(x̂^{l+1}_i + FFN(x̂^{l+1}_i)),", "where ‖ represents vector concatenation, matrices W^h_q, W^h_k, W^h_v ∈ R^{d × d/H} and W_o ∈ R^{d × d} are trainable parameters, H is the number of heads, and FFN(·) denotes a feedforward neural network.", "N^n_i represents the receptive field of node v^n_i, and the function φ(r^n_{ji}) returns a d-dim feature vector of relation r^n_{ji}.", "The operator [·]^h_H first evenly splits the vector into H parts and returns the h-th partition.", "Since there are two genres of relations (local and non-local), we design two schemes to integrate them: Mixed Static and Dynamic Embeddings If r^n_{ji} is a local relation, φ(r^n_{ji}) returns the node embedding z^l_{f(j,i)} from the line graph, where the function f maps the tuple of source and target node indices in G_n into the corresponding node index in G_e.", "Otherwise, φ(r^n_{ji}) directly retrieves the vector from the non-local embedding matrix Z^{nlc}; see Figure 4.", "The neighborhood function N^n_i for node v^n_i returns the entire node set V_n and is shared across different heads.", "Multi-head Multi-view Concatenation An alternative is to split the multi-head attention module into two parts.", "In half of the heads, the neighborhood function N^n_i of node v^n_i only contains nodes that are reachable within 1 hop.", "In this case, φ(r^n_{ji}) returns the layer-wise updated feature z^l_{f(j,i)} from Z^l.", "In the other heads, each 
node has access to both local and non-local neighbors, and φ(·) always returns static entries in the embedding matrix Z^{nlc} ∪ Z^0; see Figure 5 for an illustration.", "Symmetrically, given the edge-centric graph G_e, the updated node representation z^{l+1}_i from z^l_i is calculated similarly, with small modifications:", "α^h_{ji} = (z^l_i U^h_q + [φ(r^e_{ji})]^h_H)(z^l_j U^h_k)^T, α̃^h_{ji} = softmax_j(α^h_{ji} / √(d/H)), z̃^l_i = ‖^H_{h=1} Σ_{v^e_j ∈ N^e_i} α̃^h_{ji} (z^l_j U^h_v + [φ(r^e_{ji})]^h_H), ẑ^{l+1}_i = LayerNorm(z^l_i + z̃^l_i U_o), z^{l+1}_i = LayerNorm(ẑ^{l+1}_i + FFN(ẑ^{l+1}_i)).", "Here φ(r^e_{ji}) returns the feature vector of relation r^e_{ji} in G_e.", "Since we only consider local relations in the line graph, N^e_i only includes 1-hop neighbours, and φ(r^e_{ji}) equals the embedding in X^l of the source node of edge v^e_i.", "Note that the relational feature is added on the query side instead of the key side when computing the attention logits α^h_{ji}, because it is irrelevant to the incoming edges.", "For example, in Figure 3, the connecting nodes of the two edge pairs (1→4, 4→5) and (2→4, 4→5) are the same node with index 4.", "U^h_q, U^h_k, U^h_v ∈ R^{d × d/H} and U_o ∈ R^{d × d} are trainable parameters.", "The output matrices of the final layer L are the desired outputs of the encoder: X = X^L, Z = Z^L.", "This module includes two tasks: one decoder for the main focus, text-to-SQL, and another one to perform an auxiliary task called graph pruning .", "We use a subscript to denote the collection of node embeddings with a specific type, e.g., X_q is the matrix of all question node embeddings.", "We adopt the grammar-based syntactic neural decoder (Yin and Neubig, 2017) to generate the abstract syntax tree (AST) of the target query y in depth-first-search order.", "The output at each decoding timestep is either 1) an APPLYRULE action that expands the current non-terminal node in the partially generated AST, or 2) a SELECTTABLE or SELECTCOLUMN action that chooses one schema item x^s_i from 
the encoded memory X_s = X_t ∪ X_c.", "Mathematically, P(y|X) = Π_j P(a_j | a_{<j}, X), where a_j is the action at the j-th timestep.", "For more implementation details, see Appendix B. Graph Pruning We hypothesize that a powerful encoder should distinguish irrelevant schema items from the golden schema items used in the target query.", "In Figure 6, the question-oriented schema sub-graph (above the shadowed region) can be easily extracted.", "The intent c_2 and the constraint c_5 are usually explicitly mentioned in the question, identified by a dot-product attention mechanism or schema linking.", "The linking nodes such as t_1, c_3, c_4, t_2 can be inferred from the 1-hop connections of the schema graph to form a connected component.", "To introduce this inductive bias, we design an auxiliary task that aims to classify each schema node s_i ∈ S = T ∪ C based on its relevance to the question and the sparse structure of the schema graph.", "Firstly, we compute the context vector x̃_{s_i} from the question node embeddings X_q for each schema node s_i via multi-head attention: γ^h_{ji} = softmax_j (x_{s_i} W^h_{sq})(x_{q_j} W^h_{sk})^T / √(d/H), x̃_{s_i} = (‖^H_{h=1} Σ_j γ^h_{ji} x_{q_j} W^h_{sv}) W_{so}, where W^h_{sq}, W^h_{sk}, W^h_{sv} ∈ R^{d × d/H} and W_{so} ∈ R^{d × d} are network parameters.", "Then, a biaffine (Dozat and Manning, 2017) binary classifier is used to determine whether the compressed context vector x̃_{s_i} and the schema node embedding x_{s_i} are correlated: Biaffine(x_1, x_2) = x_1 U_s x_2^T + [x_1; x_2] W_s + b_s, P^{gp}(y_{s_i} | x̃_{s_i}, X_q) = σ(Biaffine(x̃_{s_i}, x_{s_i})).", "The ground truth label y^g_{s_i} of a schema item is 1 iff s_i appears in the target SQL query.", "The training objective can be formulated as L_{gp} = -Σ_{s_i} [y^g_{s_i} log P^{gp}(y_{s_i} | x̃_{s_i}, X_q) + (1 - y^g_{s_i}) log(1 - P^{gp}(y_{s_i} | x̃_{s_i}, X_q))].", "This auxiliary task is combined with the main text-to-SQL task in a multitasking way.", "Similar ideas (Bogin et al., 
2019b; Yu et al., 2020) and other association schemes are discussed in Appendix C. Experiments In this section, we evaluate our LGESQL model in different settings.", "Code is publicly available.", "Dataset Spider (Yu et al., 2018b) is a large-scale cross-domain zero-shot text-to-SQL benchmark.", "It contains 8659 training examples across 146 databases in total, and covers several domains from other datasets such as the Restaurants (Popescu et al., 2003), GeoQuery (Zelle and Mooney, 1996), Scholar (Iyer et al., 2017), Academic (Li and Jagadish, 2014), Yelp and IMDB (Yaghmazadeh et al., 2017) datasets.", "The detailed statistics are shown in Table 1.", "We follow the common practice to report the exact set match accuracy on the validation and test datasets.", "The test dataset contains 2147 samples with 40 unseen databases but is not publicly available.", "We submit our model to the organizer of the challenge for evaluation.", "Implementations We preprocess the questions, table names, and column names with the toolkit Stanza (Qi et al., 2020) for tokenization and lemmatization.", "Our model is implemented with PyTorch (Paszke et al., 2019), and the original and line graphs are constructed with the library DGL (Wang et al., 2019a).", "Within the encoder, we use GLOVE (Pennington et al., 2014) word embeddings with dimension 300, or pretrained language models (PLMs), BERT (Devlin et al., 2019) or ELECTRA (Clark et al., 2020), to leverage contextual information.", "With GLOVE, embeddings of the most frequent 50 words in the training set are fixed during training while the remaining ones are fine-tuned.", "The schema linking strategy is borrowed from RATSQL (Wang et al., 2020a), which is also our baseline system.", "During evaluation, we adopt beam search decoding with beam size 5.", "Hyper-parameters In the encoder, the GNN hidden size d is set to 256 for GLOVE and 512 for PLMs.", "The number of GNN layers L is 8.", "In the decoder, the dimensions of the hidden state, action 
embedding, and node type embedding are set to 512, 128, and 128 respectively.", "The recurrent dropout rate (Gal and Ghahramani, 2016) is 0.2 for the decoder LSTM.", "The number of heads in multi-head attention is 8 and the dropout rate of features is set to 0.2 in both the encoder and decoder.", "Throughout the experiments, we use the AdamW (Loshchilov and Hutter, 2019) optimizer with a linear warmup scheduler.", "The warmup ratio of total training steps is 0.1.", "For GLOVE, the learning rate is 5e-4 and the weight decay coefficient is 1e-4; for PLMs, we use a smaller learning rate of 2e-5 (base) or 1e-5 (large), and a larger weight decay rate of 0.1.", "The optimization of the PLM encoder is carried out more carefully with a layer-wise learning rate decay coefficient of 0.8.", "The batch size is 20 and the maximum gradient norm is 5.", "The number of training epochs is 100 for GLOVE and 200 for PLMs respectively.", "The main results of the test set are provided in Table 2.", "
Our proposed line graph enhanced text-to-SQL (LGESQL) model achieves state-of-the-art results in all configurations at the time of writing.", "With GLOVE word vectors, the performance increases from 57.2% to 62.8%, a 5.6% absolute improvement.", "With the PLM bert-large-wwm, LGESQL also surpasses all previous methods, including the ensemble model, and attains 68.3% accuracy.", "Recently, more advanced approaches all leverage the benefits of larger PLMs, more task-adaptive data (text-table pairs), and tailored pretraining tasks.", "For example, GAP (Shi et al., 2020) designs some task-adaptive self-supervised tasks, such as column prediction and column recovery, to better address the downstream joint encoding problem.", "We utilize electra-large for its compatibility with our model and achieve 72.0% accuracy.", "Taking one step further, we compare the more fine-grained performance of our model against the baseline system RATSQL (Wang et al., 2020a), classified by level of difficulty, in Table 3.", "
We observe that LGESQL surpasses RATSQL across all subdivisions in both the validation and test datasets, regardless of the application of a PLM, especially at the Medium and Extra Hard levels.", "This validates the superiority of our model in exploiting the structural relations among edges in the line graph.", "In this section, we investigate the contribution of each design choice.", "We report the average accuracy on the validation dataset with 5 random seeds.", "Different Components of LGESQL", "RGATSQL is our baseline system, where the line graph is not utilized.", "It can be viewed as a variant of RATSQL with our tailored grammar-based decoder.", "From Table 4, we can discover that: 1) if non-local relations or meta-paths are removed (w/o NLC), the performance decreases by roughly 2 points in LGESQL, and by 3 points in RGATSQL.", "However, our LGESQL with merely local relations is still competitive.", "It consolidates our motivation that by exploiting the structure among edges, the line graph can capture long-range relations to some extent.", "2) The graph pruning task contributes more in LGESQL ( +1 . 2% ) than in RGATSQL ( +0 . 7% ), on account of the fact that local relations are more critical to structural inference.", "3) The two strategies for combining local and non-local relations introduced in 3.2.1 (w/ MSDE or MMC) are both beneficial to the eventual performance of LGESQL (2.0% and 2.
1% gains, respectively).", "It corroborates the assumption that local and non-local relations should be treated with distinction.", "However, the performance remains unchanged in RGATSQL when merging a different view of the graph (w/ MMC) into multi-head attention.", "This may be caused by the over-smoothing problem of a complete graph.", "In this part, we analyze the effects of different pre-trained language models in Table 5.", "From the overall results, we can see that: 1) by involving the line graph in the computation, LGESQL outperforms the baseline model RGATSQL with different PLMs, further demonstrating the effectiveness of explicitly modeling edge features.", "2) large-series PLMs consistently perform better than base models on account of their model capacity and generalization capability to unseen domains.", "3) Task-adaptive PLMs, especially ELECTRA, are superior to vanilla BERT irrespective of the upper GNN architecture.", "We hypothesize the reason is that ELECTRA is pre-trained with a tailored binary classification task, which aims to individually distinguish whether each input word has been substituted given the context.", "Essentially, this self-supervised task is similar to our proposed graph pruning task, which focuses on enhancing the discriminative capability of the encoder.", "In Figure 7, we compare the SQL queries generated by our LGESQL model with those created by the baseline model RGATSQL.", "Figure 7: Case study: the first three cases are positive samples while the last one is negative.", "We notice that LGESQL performs better than the baseline system, especially on examples that involve the JOIN operation over multiple tables.", "For instance, in the second case, where the connection of three tables is involved, RGATSQL fails to identify the existence of the table flights .", "Thus, it is unable to predict the WHERE condition about the destination city and performs redundant work.", "In the third case, our LGESQL still successfully constructs a connected 
schema sub-graph by linking table template to documents .", "Unfortunately, the RGATSQL model neglects the occurrence of documents again.", "However, in the last case, our LGESQL mistakenly introduces an unnecessary table airports .", "It ignores the fact that the table flights has a column source airport which already satisfies the requirement.", "Encoding Problem for Text-to-SQL To tackle the joint encoding problem of the question and database schema, Xu et al. (2017) propose a column attention strategy to gather information from columns for each question word.", "TypeSQL (Yu et al., 2018a) incorporates prior knowledge of column types and schema linking as additional input features.", "Bogin et al. (2019a) and Chen et al. (2021) deal with the graph structure of the database schema via GNNs.", "EditSQL (Zhang et al., 2019b) considers co-attention between question words and database schema nodes, similar to common practice in text matching (Chen et al., 2017).", "BRIDGE (Lin et al., 2020) further leverages the database content to augment the column representations.", "The most advanced method, RATSQL (Wang et al., 2020a), utilizes a complete relational graph attention neural network to handle various pre-defined relations.", "In this work, we further consider both local and non-local, dynamic and static edge features among different types of nodes with a line graph.", "Heterogeneous Graph Neural Network Apart from the structural topology, a heterogeneous graph (Shi et al., 2016) also contains multiple types of nodes and edges.", "To address the heterogeneity of node attributes, Zhang et al. (2019a) designs a type-based content encoder and Fu et al. 
(2020) utilizes a type-specific linear transformation.", "For edges, relational graph convolution network (RGCN, Schlichtkrull et al., 2018) and relational graph attention network (RGAT, Wang et al., 2020b) have been proposed to parameterize different relations.", "HAN (Wang et al., 2019b) converts the original heterogeneous graph into multiple homogeneous graphs and applies a hierarchical attention mechanism to the meta-path-based sub-graphs.", "Similar ideas have been adopted in dialogue state tracking (Chen et al., 2020b, 2019a), dialogue policy learning (Chen et al., 2018) and text matching (Chen et al., 2020c; Lyu et al., 2021) to handle heterogeneous inputs.", "In another branch, Chen et al. (2019b), Zhu et al. (2019) and Zhao et al. (2020) construct the line graph of the original graph and explicitly model the computation over edge features.", "In this work, we borrow the idea of a line graph and update both node and edge features via iteration over dual graphs.", "In this work, we utilize the line graph to update the edge features in the heterogeneous graph for the text-to-SQL task.", "Through the iteration over the structural connections in the line graph, local edges can incorporate multi-hop relational features and capture significant meta-paths.", "By further integrating non-local relations, the encoder can learn from multiple views and attend to remote nodes with shortcuts.", "In the future, we will investigate more useful meta-paths and explore more effective methods to deal with different meta-path-based neighbors.", "We thank Tao Yu, Yusen Zhang and Bo Pang for their careful assistance with the evaluation.", "We also thank the anonymous reviewers for their thoughtful comments.", "This work has been supported by Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), No.SKLMCPTS2020003 Project and Startup Fund for Youngman Research at SJTU (SFYR at SJTU)." ]
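The backtracking-free line-graph construction described in the LGESQL sentences above (each directed edge of G_n becomes a node of G_e; an edge e_i → e_j exists iff edge i's target equals edge j's source, excluding reverse-edge pairs) can be sketched in a few lines. This is a minimal illustrative sketch; `build_line_graph` and its input format are hypothetical, not the released LGESQL/DGL code.

```python
# Hypothetical sketch: build the line graph G_e of a directed graph G_n.
# Each edge of G_n becomes a node of G_e; a line-graph edge e_i -> e_j exists
# iff the target of edge i equals the source of edge j, excluding the
# "backtracking" case where edge j is exactly the reverse of edge i.
def build_line_graph(edges):
    # edges: list of (src, tgt) tuples; list index = line-graph node id, i.e. f(s, t)
    f = {e: i for i, e in enumerate(edges)}
    line_edges = []
    for i, (s1, t1) in enumerate(edges):
        for j, (s2, t2) in enumerate(edges):
            if i == j:
                continue
            # chained edges share the middle node; skip reverse-edge pairs
            if t1 == s2 and not (s1 == t2 and t1 == s2):
                line_edges.append((i, j))
    return f, line_edges

# Toy graph: 1 -> 4 -> 5 plus the reverse edge 4 -> 1.
edges = [(1, 4), (4, 5), (4, 1)]
f, le = build_line_graph(edges)
# (1,4) -> (4,5) is kept; (1,4) -> (4,1) is pruned as backtracking.
```

In the paper's notation, the kept pair corresponds to the information flow of the meta-path r^n_{f^{-1}(i)} ∘ r^n_{f^{-1}(j)} through the shared middle node.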
[ "objective", "abstain", "objective", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", 
"abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "objective", "other", "other", "other" ]
[ "Due to high data demands of current methods, attention to zero-shot cross-lingual spoken language understanding (SLU) has grown, as such approaches greatly reduce human annotation effort.", "However, existing models solely rely on shared parameters, which can only perform implicit alignment across languages.", "We present G lobal L ocal C ontrastive LE arning F ramework (GL-CLEF) to address this shortcoming.", "Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, then encourage their representations to be more similar than negative example pairs, which achieves to explicitly aligned representations of similar sentences across languages.", "In addition, a key step in GL-CLEF is a proposed Local and Global component, which achieves a fine-grained cross-lingual transfer (i.e., sentence-level Local intent transfer, token-level Local slot transfer, and semantic-level Global transfer across intent and slot).", "Experiments on MultiATIS++ show that GL-CLEF achieves the best performance and successfully pulls representations of similar sentences across languages closer.", "Spoken language understanding (SLU) is a critical component in task-oriented dialogue systems (Tur and De Mori, 2011; Qin et al., 2021b).", "It usually includes two sub-tasks: intent detection to identify users' intents and slot filling to extract semantic constituents from the user's query.", "With the advent of deep neural network methods, SLU has met with remarkable success.", "However, existing SLU models rely on large amounts of annotated data, which makes it hard to scale to low-resource languages that lack large amounts of labeled data.", "To address this shortcoming, zero-shot Work done during internship at Microsoft Research Asia and remote visiting at National University of Singapore.", "cross-lingual SLU generalization leverages the labeled training data in high-resource languages to transfer the trained model 
to a target, low-resource language, an approach which has gained increasing attention.", "To this end, many approaches have been explored for zero-shot cross-lingual SLU.", "Multilingual BERT (mBERT) (Devlin et al., 2019), a cross-lingual contextual model pre-trained on a large multi-lingual corpus, has achieved considerable performance for zero-shot cross-lingual SLU.", "Liu et al. (2020) further build attention-informed mixed-language training by generating bi-lingual code-switched data to implicitly align keywords (e.g., slots) between the source and target languages.", "Qin et al. (2020) extend the idea to a multilingual code-switched setting, aligning the source language to multiple target languages.", "This approach currently achieves the state-of-the-art performance for zero-shot cross-lingual SLU.", "Though achieving promising performance, as shown in Figure 1(a), the above methods solely rely on shared parameters and can only perform implicit alignment across languages, which brings two challenges.", "First, such an implicit alignment process seems to be a black box, which not only seriously affects the alignment representation but also makes it hard to analyze the alignment mechanism.", "Second, prior work does not distinguish between the varying granularities of the tasks: intent detection is sentence-level and slot filling is token-level , which does not offer fine-grained cross-lingual transfer for token-level slot filling.", "To solve the aforementioned challenges, we propose the Global Local Contrastive LEarning Framework (GL-CLEF) for zero-shot cross-lingual SLU.", "For the first challenge, as shown in Figure 1(b), the key insight in GL-CLEF is to explicitly ensure that representations of similar sentences across languages are pulled closer together via contrastive learning (CL).", "Specifically, we leverage bilingual dictionaries to generate multi-lingual code-switched data pairs, which can be regarded as cross-lingual 
views with the same meaning.", "With the use of CL, our model is able to learn to distinguish the code-switched utterance of an input sentence from a set of negative examples, and thus encourages representations of similar sentences between the source language and target language to be closer.", "For the second challenge, SLU requires accomplishing tasks at two different levels: token-level slot filling and sentence-level intent detection.", "As such, simply leveraging ordinary sentence-level contrastive learning is ineffective for fine-grained knowledge transfer in token-level slot filling.", "Therefore, we first introduce a Local module in GL-CLEF to learn alignment representations at different granularities (i.e., sentence-level Local intent CL and token-level Local slot CL).", "To be specific, sentence-level Local intent CL and token-level Local slot CL are introduced for aligning similar sentence and token representations across different languages for intent detection and slot filling, respectively.", "In addition, we further argue that the slots and intent are highly correlated and have similar semantic meanings in a sentence.", "This phenomenon can serve as a signal for self-supervised alignment across intent and slots.", "Therefore, a Global module named semantic-level global intent-slot CL is further proposed to bring the representations of the slots and intent within a sentence closer.", "We conduct experiments on MultiATIS++ (Xu et al., 2020), which includes nine different languages.", "Our experiments show that GL-CLEF achieves state-of-the-art results of 54.09% sentence accuracy, outperforming the previous best by 10.06% on average.", "Besides, extensive analysis experiments demonstrate that GL-CLEF successfully reduces the representation gap between different languages.", "To facilitate further research, code is publicly available at https://github.com/LightChen233/GL-CLeF .", "We first describe traditional SLU before the specifics of the zero-shot cross-lingual version of SLU.", "Traditional SLU 
in Task-oriented Dialogue.", "SLU in task-oriented dialogue contains two subtasks: Intent Detection and Slot Filling .", "Intent Detection: Given an input utterance x , this is a classification problem to decide the corresponding intent label o^I.", "Slot Filling: Often modeled as a sequence labeling task that maps an input word sequence x = (x_1, ..., x_n) to a slot sequence o^S = (o^S_1, ..., o^S_n), where n denotes the length of the sentence x.", "Since the two tasks of intent detection and slot filling are highly correlated, it is common to adopt a joint model that can capture shared knowledge.", "We follow the formalism from Goo et al. (2018), formulated as (o^I, o^S) = f(x), where f is the trained model.", "Zero-shot Cross-lingual SLU.", "This means that a SLU model is trained in a source language, e.g., English (cf. Figure 2(a)), and directly applied to other target languages (cf. Figure 2(b)).", "Formally, given each instance x_tgt in a target language, the model f, which is trained on the source language, is directly used for predicting its intent and slots: (o^I_tgt, o^S_tgt) = f(x_tgt), (1) where tgt represents the target language.", "We describe the general approach to the SLU task first, before describing our GL-CLEF model, which uses contrastive learning to explicitly achieve cross-lingual alignment.", "The main architecture of GL-CLEF is illustrated in Figure 3.", "Encoder.", "Given each input utterance x = (x_1, x_2, ..., x_n), the input sequence can be constructed by adding specific tokens x = ([CLS], x_1, x_2, ..., x_n, [SEP]), where [CLS] denotes the special symbol representing the whole sequence, and [SEP] can be used for separating non-consecutive token sequences (Devlin et al., 2019).", "Then, we follow Qin et al. 
(2020) to first generate multi-lingual code-switched data.", "Then, we employ the mBERT model to encode the code-switched data into representations H = (h_CLS, h_1, ..., h_n, h_SEP).", "Slot Filling.", "Since mBERT produces subword-resolution embeddings, we follow Wang et al. (2019) and adopt the first sub-token's representation as the whole word representation, and use the hidden state to predict each slot: o^S_t = softmax(W_s h_t + b_s), where h_t denotes the first sub-token representation of word x_t; W_s and b_s refer to trainable parameters.", "Intent Detection.", "We input the sentence representation h_CLS to a classification layer to find the label o^I: o^I = softmax(W_I h_CLS + b_I), where W_I and b_I are tunable parameters.", "We introduce our global-local contrastive learning framework (GL-CLEF) in detail, which consists of three modules: 1) a sentence-level local intent contrastive learning (CL) module to align sentence representations across languages for intent detection, 2) a token-level local slot CL module to align token representations across languages for slot filling, and 3) a semantic-level global intent-slot CL module to model the semantic interaction between slots and intent.", "For contrastive learning, the key operation is to choose appropriate positive and negative pairs with respect to the original (anchor) utterance.", "Positive Samples.", "Positive samples should preserve the same semantics as the anchor utterance.", "Therefore, given each anchor utterance x = ([CLS], x_1, x_2, ..., x_n, [SEP]), we follow Qin et al. (2020) and use bilingual dictionaries (Lample et al., 2018) to generate multi-lingual code-switched data, which is considered the positive sample x+.", "Specifically, for each word x_t in x, x_t is randomly chosen to be replaced with a translation provisioned from a bilingual dictionary to generate a positive sample.", "For example, given an anchor utterance watch sports movie in English, we can generate a positive multi-lingual code-switched sample (watch/zh) (sports/ja) película (movie/es) (cf. 
Figure 3).", "Such a pair of anchor utterance and multi-lingual code-switched sample can be regarded as cross-lingual views of the same meaning across different languages.", "$x^+$ is fed into mBERT to obtain the corresponding representations $H^+ = (h^+_{\mathrm{CLS}}, h^+_1, \ldots, h^+_n, h^+_{\mathrm{SEP}})$.", "Negative Samples.", "A natural approach to generating negative samples is randomly choosing other queries in a batch.", "However, this method requires re-encoding the negative samples, hurting efficiency.", "Inspired by He et al. (2020), in GL-CLEF we maintain a negative sample queue, where the previously encoded original anchor utterances $x$, positive samples $x^+$, and previous negative samples $x^-$ are progressively reused as negative samples.", "This enables us to reuse encoded samples from the immediately preceding batches, eliminating the unnecessary negative encoding process.", "The negative sample queues for [CLS] and the sentence representation are denoted as $\mathcal{H}^-_{\mathrm{CLS}} = \{h^{k-}_{\mathrm{CLS}}\}_{k=0}^{K-1}$ and $\mathcal{H}^-_{S} = \{H^{k-}_{S}\}_{k=0}^{K-1}$, where $K$ is the maximum capacity of the negative queue.", "Sentence-level Local Intent CL.", "Since intent detection is a sentence-level classification task, aligning sentence representations across languages is the goal of the zero-shot cross-lingual intent detection task.", "Therefore, in GL-CLEF, we propose a sentence-level local intent CL loss to explicitly encourage the model to align similar sentence representations into the same local space across languages for intent detection.", "Formally, this is formulated as: $\mathcal{L}_{LI} = -\log \frac{s(h_{\mathrm{CLS}}, h^+_{\mathrm{CLS}})}{s(h_{\mathrm{CLS}}, h^+_{\mathrm{CLS}}) + \sum_{k=0}^{K-1} s(h_{\mathrm{CLS}}, h^{k-}_{\mathrm{CLS}})}$, where $s(p, q)$ denotes the dot product between $p$ and $q$, and $\tau$ is a scalar temperature parameter.", "Token-level Local Slot CL.", "As slot filling is a token-level task, we propose a token-level local slot CL loss to help the model consider token alignment for slot filling, achieving fine-grained cross-lingual transfer.", "We apply
token-level CL for all tokens in the query.", "For simplicity, we present the CL loss of the $i$-th token: $\mathcal{L}^i_{LS} = -\frac{1}{n}\sum_{j=1}^{n} \log \frac{s(h_i, h^+_j)}{s(h_i, h^+_j) + \sum_{k=0}^{K-1} s(h_i, h^{k-}_j)}$, where the final $\mathcal{L}_{LS}$ is the summation of the CL losses of all tokens.", "Semantic-level Global Intent-slot CL.", "We note that slots and intent are often highly related semantically when they belong to the same query.", "Therefore, we think that the intent in a sentence and its own slots naturally constitute positive pairs, while the corresponding slots in other sentences form negative pairs.", "We thus further introduce a semantic-level global intent-slot CL loss to model the semantic interaction between slots and intent, which may further improve cross-lingual transfer between them.", "Formally: $\mathcal{L}_{GIS_1} = -\frac{1}{n}\sum_{j=1}^{n} \log \frac{s(h_{\mathrm{CLS}}, h_j)}{s(h_{\mathrm{CLS}}, h_j) + \sum_{k=0}^{K-1} s(h_{\mathrm{CLS}}, h^{k-}_j)}$, $\mathcal{L}_{GIS_2} = -\frac{1}{n}\sum_{j=1}^{n} \log \frac{s(h_{\mathrm{CLS}}, h^+_j)}{s(h_{\mathrm{CLS}}, h^+_j) + \sum_{k=0}^{K-1} s(h_{\mathrm{CLS}}, h^{k-}_j)}$, $\mathcal{L}_{GIS} = \mathcal{L}_{GIS_1} + \mathcal{L}_{GIS_2}$, where we consider the CL loss from both the anchor sentence ($\mathcal{L}_{GIS_1}$) and the code-switched sentence ($\mathcal{L}_{GIS_2}$), and add them to perform semantic-level contrastive learning ($\mathcal{L}_{GIS}$).", "GL-CLEF is a tuned combination of the individual losses:", "$\mathcal{L} = \lambda_I \mathcal{L}_I + \lambda_S \mathcal{L}_S + \lambda_{LI} \mathcal{L}_{LI} + \lambda_{LS} \mathcal{L}_{LS} + \lambda_{GIS} \mathcal{L}_{GIS}$, (4)", "We use the latest multilingual benchmark dataset MultiATIS++ (Xu et al., 2020), which consists of 9 languages: English (en), Spanish (es), Portuguese (pt), German (de), French (fr), Chinese (zh), Japanese (ja), Hindi (hi), and Turkish (tr).", "We use base cased multilingual BERT (mBERT), which has $N = 12$ attention heads and $M = 12$ transformer blocks.", "We select the best hyperparameters by searching over combinations of batch size and learning rate with the following ranges: learning rate $\{2 \times 10^{-7}, 5 \times 10^{-7}, 1 \times 10^{-6}, 2 \times 10^{-6}, 5 \times 10^{-6}, 6 \times 10^{-6}, 5 \times 10^{-5}, 5 \times 10^{-4}\}$; batch size $\{4, 8, 16, 32\}$; max size of negative queue { 4
, 8, 16, 32}; For all experiments, we select the best-performing model on the dev set and evaluate on the test sets.", "All experiments are conducted on TITAN XP and V100 GPUs.", "To verify the effect of GL-CLEF, we compare our model with the following state-of-the-art baselines: 1) mBERT.", "mBERT follows the same model architecture and training procedure as BERT (Devlin et al., 2019), but trains on the Wikipedia pages of 104 languages with a shared subword vocabulary (https://github.com/google-research/bert/blob/master/multilingual.md).", "This allows mBERT to share embeddings across languages, achieving promising performance on various cross-lingual NLP tasks; 2) Ensemble-Net.", "Razumovskaia et al. (2021) propose an Ensemble-Net where predictions are determined by 8 independent models through majority voting, each separately trained on a single source language, which achieves promising performance on zero-shot cross-lingual SLU; 3) AR-S2S-PTR.", "Rongali et al. (2020) proposed a unified sequence-to-sequence model with a pointer generator network for cross-lingual SLU; 4) IT-S2S-PTR.", "Zhu et al. (2020) proposed a non-autoregressive parser based on the insertion transformer.", "It speeds up decoding and gains improvements in cross-lingual SLU transfer; 5) CoSDA.", "Qin et al. (2020) propose a data augmentation framework to generate multi-lingual code-switching data for fine-tuning mBERT, which encourages the model to align representations from the source and multiple target languages.", "Following Goo et al.
(2018), we evaluate the performance of slot filling using F1 score, intent prediction using accuracy, and sentence-level semantic frame parsing using overall accuracy, which requires all predictions in an utterance to be correct.", "From the results in Table 1, we observe that: (1) CoSDA achieves better performance than the no-alignment baseline mBERT and even outperforms Ensemble-Net.", "This is because such implicit alignment does align representations to some extent, compared against mBERT.", "(2) Our framework achieves state-of-the-art performance and beats CoSDA by 10.06% average improvement in overall accuracy.", "This demonstrates that GL-CLEF explicitly pulls similar representations across languages closer, outperforming the implicit alignment approach.", "To understand GL-CLEF in more depth, we perform comprehensive studies to answer the following research questions (RQs):", "(1) Do the local intent and slot CLs benefit sentence- and token-level representation alignment?", "(2) Can semantic-level global intent-slot CL boost the overall sentence accuracy?", "(3) Are local intent CL and local slot CL complementary?", "(4) Does GL-CLEF pull similar representations

Intent Accuracy (en de es fr hi ja pt tr zh AVG):
mBERT* (Xu et al., 2020): - 95.27 96.35 95.92 80.96 79.42 94.96 69.59 86.27
mBERT (Devlin et al., 2019): 98.54 95.40 96.30 94.31 82.41 76.18 94.95 75.10 82.53 88.42
Ensemble-Net* (Razumovskaia et al., 2021): 90.26 92.50 96.64 95.18 77.88 77.04 95.30 75.04 84.99 87.20
CoSDA (Qin et al., 2020): 95.74 94.06 92.29 77.04 82.75 73.25 93.05 80.42 78.95 87.32
GL-CLEF: 98.77 97.53 97.05 97.72 86.00 82.84 96.08 83.92 87.68 91.95

Slot F1 (en de es fr hi ja pt tr zh AVG):
Ensemble-Net* (Razumovskaia et al., 2021): 85.05 82.75 77.56 76.19 14.14 9.44 74.00 45.63 37.29 55.78
mBERT* (Xu et al., 2020): - 82.61 74.98 75.71 31.21 35.75 74.05 23.75 62.27
mBERT (Devlin et al., 2019): 95.11 80.11 78.22 82.25 26.71 25.40 72.37 41.49 53.22 61.66
CoSDA (Qin et al., 2020): 92.29 81.37 76.94 79.36 64.06 66.62 75.05 48.77 77.32 73.47
GL-CLEF: 95.39 86.30 85.22 84.31 70.34 73.12 81.83 65.85 77.61 80.00

Overall Accuracy (en de es fr hi ja pt tr zh AVG):
AR-S2S-PTR* (Rongali et al., 2020): 86.83 34.00 40.72 17.22 7.45 10.04 33.38 23.74 -
IT-S2S-PTR* (Zhu et al., 2020): 87.23 39.46 50.06 46.78 11.42 12.60 39.30 28.72
mBERT (Devlin et al., 2019): 87.12 52.69 52.02 37.29 4.92 7.11 43.49 4.33 18.58 36.29
CoSDA (Qin et al., 2020): 77.04 57.06 46.62 50.06 26.20 28.89 48.77 15.24 46.36 44.03
GL-CLEF: 88.02 66.03 59.53 57.02 34.83 41.42 60.43 28.95 50.62 54.09

Table 1: Results on MultiATIS++.", "across languages closer?", "(5) Does GL-CLEF improve over other pre-trained models?", "(6) Does GL-CLEF generalize to non pre-trained models?", "(7) Is GL-CLEF robust to the one-to-many translation problem?", "Answer 1: Local intent CL and slot CL align similar sentence and token representations across languages.", "We investigate the effect of the local intent CL and local slot CL mechanisms by removing the local intent CL and slot CL, respectively (Figure 4, LI and LS (Col 1, 2)).", "For the effectiveness of local intent CL, we find the performance of intent detection averaged over 9 languages drops by 3.52% against the full system (ibid.
final, RHS column).", "This is because the sentence-level intent CL loss pulls sentence representations closer across languages.", "Similarly, considering the effectiveness of local slot CL, we find the performance of slot filling averaged over 9 languages drops by 2.44% against the full system.", "We attribute these performance drops to the fact that local slot CL successfully makes a fine-grained cross-lingual knowledge transfer, aligning token representations across languages, which is essential for token-level cross-lingual slot filling tasks.", "Answer 2: Semantic-level global intent-slot CL successfully establishes a semantic connection across languages.", "We further investigate the effect of the semantic-level intent-slot CL mechanism by removing the global intent-slot CL loss (Figure 4, GIS (Col 3)).", "We find the sentence overall performance drops substantially (from 54.09% to 46.94%).", "The sentence overall metric requires the model to capture the semantic information (intent and slots) of queries.", "Therefore, we attribute the drop to the removal of the proposed semantic-level global intent-slot CL.", "As it successfully establishes a semantic connection across languages, it boosts overall accuracy.", "Answer 3: Contributions from the local intent CL and slot CL modules are complementary.", "We explore whether the local intent CL and slot CL modules are complementary.", "Removing all the local CL modules (including sentence-level local intent CL and token-level local slot CL), results are shown in Figure 4 (Local, Col 4).", "We find that performance is lowest when both are removed, compared with removing any single local CL module, which demonstrates that the two local CL modules work orthogonally.", "Answer 4: GL-CLEF pulls similar representations across languages closer.", "We choose the test set and use the [CLS] representation of each sentence for visualization.", "Figure 5 (a, LHS) shows the t-SNE visualization of the mBERT output, where we observe very little overlap between different languages, which shows that the
representations of different languages are distant.", "In contrast, the representations from the GL-CLEF fine-tuned model (b, RHS) are closer across languages and largely overlap with each other.", "The stark contrast between the figures demonstrates that GL-CLEF pulls the representations of different languages closer.

Intent Accuracy (en de es fr hi ja pt tr zh AVG):
BiLSTM (Hochreiter and Schmidhuber, 1997): 72.56 70.96 70.35 60.05 64.50 64.33 71.75 56.22 60.13 65.65
BiLSTM+GL-CLEF: 84.77 74.44 71.09 69.53 65.29 66.14 77.02 63.36 67.08 70.97
XLM-R (Conneau et al., 2020): 98.32 97.19 98.03 94.94 88.91 88.50 96.41 72.45 91.15 93.02
XLM-R+GL-CLEF: 98.66 98.43 98.04 97.85 93.84 88.83 97.76 81.68 91.38 94.05

Slot F1 (en de es fr hi ja pt tr zh AVG):
BiLSTM (Hochreiter and Schmidhuber, 1997): 75.43 15.81 34.97 33.38 5.83 4.98 43.89 9.51 27.51 27.92
BiLSTM+GL-CLEF: 87.45 38.40 46.06 46.16 20.28 29.53 59.67 37.25 42.48 45.25
XLM-R (Conneau et al., 2020): 94.58 72.35 76.72 71.81 60.51 9.31 70.08 45.21 13.44 57.38
XLM-R+GL-CLEF: 95.88 84.91 82.47 80.99 61.11 55.57 77.27 54.55 80.50 74.81

Overall Accuracy (en de es fr hi ja pt tr zh AVG):
BiLSTM (Hochreiter and Schmidhuber, 1997): 37.06 0.78 3.08 0.63 0.22 0.00 10.20 0.00 0.03 5.80
BiLSTM+GL-CLEF: 61.37 4.60 9.10 4.30 0.34 2.03 16.82 2.80 2.46 11.53
XLM-R (Conneau et al., 2020): 87.45 43.05 42.93 43.74 19.42 5.76 40.80 9.65 6.60 33.31
XLM-R+GL-CLEF: 88.24 64.91 53.51 58.28 19.49 13.77 52.35 14.55 52.07 46.35

Table 2: Experimental results on BiLSTM and XLM-R.", "Answer 5: Contributions from contrastive learning and pre-trained models are complementary.", "To verify that GL-CLEF is still effective when used in conjunction with other strong pre-trained models, we perform experiments with XLM-R (Conneau et al., 2020).", "XLM-R demonstrates significant gains on a wide range of cross-lingual tasks.", "From the results in Table 2, we find GL-CLEF enhances XLM-R's performance, demonstrating that contributions from the two are complementary.", "This also indicates that GL-CLEF is
model-agnostic, hinting that GL-CLEF may be applied to other pre-trained models.", "Answer 6: We explore whether GL-CLEF is effective for non pre-trained models, in addition to Transformers.", "To answer the question, we replace mBERT with a BiLSTM, keeping the other components unchanged.", "The results are shown in Table 2.", "We can see that GL-CLEF outperforms BiLSTM in all metrics, further demonstrating that GL-CLEF is not only effective with mBERT but also ports to general encoders, both pre-trained and non pre-trained.", "Answer 7: GL-CLEF is robust.", "It is worth noting that words in the source language can have multiple translations in the target language.", "We follow Qin et al. (2020) in randomly choosing one of the multiple translations as the replacement target-language word.", "Their work verified that random selection is an effective method (Qin et al., 2020).", "A natural question is whether GL-CLEF is robust over different translation selections.", "To answer it, we run experiments with 15 different seeds and obtain the standard deviation, which we take as an indicator of the stability and robustness of the models' performance.", "[Figure 6: performance on each metric across en, de, es, fr, hi, ja, pt, tr, zh, and avg.]", "The results in Figure 6 show a low standard deviation on each metric, indicating our model is robust to different translation selections.", "Finding and using the correct contextual word-to-word translation is an interesting direction to explore in the future.", "Traditional Spoken Language Understanding.", "Since slot filling and intent detection are two correlated tasks, traditional SLU approaches mainly explore a joint model to capture shared knowledge across the two tasks.", "Specifically, Zhang and Wang (2016); Liu and Lane (2016a,b); Hakkani-Tür et al.
(2016) consider an implicit joint mechanism using a multi-task framework that shares an encoder for both tasks.", "Goo et al. (2018); Li et al. (2018); Qin et al. (2019) consider explicitly leveraging intent detection information to guide slot filling.", "Wang et al. (2018); E et al. (2019); Zhang et al. (2020); Qin et al. (2021a) use a bi-directional connection between slot filling and intent detection.", "Zero-shot Cross-lingual Spoken Language Understanding.", "Traditional SLU has largely been limited to high-resource languages.", "To solve this problem, zero-shot cross-lingual SLU has gained increasing attention.", "Recently, cross-lingual contextualized embeddings have achieved promising results (e.g., mBERT (Devlin et al., 2019)).", "Many works target improving mBERT at the pre-training stage (Conneau and Lample, 2019; Huang et al., 2019; Yang et al., 2020; Feng et al., 2020; Conneau et al., 2020; Xue et al., 2021; Chi et al., 2021a,b).", "Compared with their work, our focus is on enhancing mBERT at the fine-tuning stage.", "In recent years, related work also considers aligning representations between source and target languages during fine-tuning, eschewing the need for an extra pre-training process.", "Specifically, Liu et al. (2020) propose code-mixing to construct training sentences that consist of both source and target phrases for implicitly fine-tuning mBERT.", "Qin et al. (2020) further propose a multi-lingual code-switching data augmentation to better align a source language and all target languages.", "In contrast to their work, our framework considers aligning similar representations across languages explicitly via a contrastive learning framework.", "In addition, in GL-CLEF, we propose a multi-resolution loss to encourage fine-grained knowledge transfer for token-level slot filling.", "Contrastive Learning.", "Contrastive learning is now commonplace in NLP tasks.", "Wu et al.
(2020) adopt multiple sentence-level augmentation strategies to learn a noise-invariant sentence representation.", "Fang and Xie (2020) apply back-translation to create augmentations of original sentences for training transformer models.", "Wang et al. (2021) propose contrastive learning with semantically negative examples (CLINE) to improve robustness under semantic adversarial attacks.", "Inspired by the success of CL, we utilize contrastive learning to explicitly align similar representations across the source and target languages.", "We introduced a global-local contrastive learning (CL) framework (GL-CLEF) to explicitly align representations across languages for zero-shot cross-lingual SLU.", "In addition, the proposed local CL module and global CL module learn alignment at different granularities (i.e., sentence-level local intent alignment, token-level local slot alignment, and semantic-level global intent-slot alignment).", "Experiments on MultiATIS++ show that GL-CLEF obtains the best performance, and extensive analysis indicates that GL-CLEF successfully pulls the representations of similar sentences across languages closer.", "Spoken language understanding (SLU) is a core component of task-oriented dialogue systems, and has become sufficiently effective to be deployed in practice.", "Recently, SLU has achieved remarkable success, due to the evolution of pre-trained models.", "However, most SLU work and applications are English-centric, which makes them hard to generalize to other languages without annotated data.", "Our work focuses on improving zero-shot cross-lingual SLU models that do not need any labeled data for target languages, which can potentially enable multilingual SLU models and further promote the globalization of task-oriented dialog systems.", "We also thank all anonymous reviewers for their constructive comments.", "This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science
Foundation of China (NSFC) via grant 61976072 and 62176078." ]
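The InfoNCE-style objectives described in the record above (e.g., the sentence-level local intent CL loss over a queue of negatives) can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's implementation: the `sim` helper (exponentiated dot product with temperature), the toy 2-d vectors, and the default temperature are illustrative assumptions.

```python
import math

def sim(p, q, tau=0.1):
    # s(p, q): exponentiated dot product with a scalar temperature tau
    return math.exp(sum(pi * qi for pi, qi in zip(p, q)) / tau)

def local_intent_cl_loss(h_cls, h_cls_pos, neg_queue, tau=0.1):
    """Sentence-level local intent CL loss in InfoNCE form:
    L_LI = -log s(h, h+) / (s(h, h+) + sum_k s(h, h_k-)),
    where h is the anchor [CLS] vector, h+ its code-switched view,
    and neg_queue holds previously encoded vectors reused as negatives."""
    pos = sim(h_cls, h_cls_pos, tau)
    neg = sum(sim(h_cls, h_neg, tau) for h_neg in neg_queue)
    return -math.log(pos / (pos + neg))
```

Under this sketch, the loss shrinks as the anchor's similarity to its code-switched positive grows relative to the queued negatives, which is the alignment pressure the framework relies on.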
[ "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "other", "objective", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "objective", "objective", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
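The code-switched positive samples described in the record above (each word of the anchor utterance randomly replaced by a bilingual-dictionary translation) can be sketched as follows. The toy dictionaries and the `p_replace` parameter are hypothetical stand-ins for the bilingual dictionaries of Lample et al. (2018), used here only to make the procedure concrete.

```python
import random

# Toy bilingual dictionaries (illustrative entries only).
BILINGUAL_DICTS = {
    "zh": {"watch": "看"},
    "ja": {"sports": "スポーツ"},
    "es": {"movie": "película"},
}

def code_switch(tokens, dicts, p_replace=1.0, rng=random):
    """Build a multi-lingual code-switched positive sample: each token
    that appears in some dictionary is, with probability p_replace,
    swapped for a translation from a randomly chosen language."""
    switched = []
    for tok in tokens:
        langs = [lang for lang, d in dicts.items() if tok in d]
        if langs and rng.random() < p_replace:
            switched.append(dicts[rng.choice(langs)][tok])
        else:
            switched.append(tok)
    return switched
```

For example, `code_switch(["watch", "sports", "movie"], BILINGUAL_DICTS)` yields the mixed-language view of the utterance, which is then paired with the original anchor as a positive for contrastive learning.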
[ "Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions?", "We systematically measure out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribution shifts.", "We measure the generalization of previous models including bag-of-words models, ConvNets, and LSTMs, and we show that pretrained Transformers' performance declines are substantially smaller.", "Pretrained Transformers are also more effective at detecting anomalous or OOD examples, while many previous models are frequently worse than chance.", "We examine which factors affect robustness, finding that larger models are not necessarily more robust, distillation can be harmful, and more diverse pretraining data can enhance robustness.", "Finally, we show where future work can improve OOD robustness.", "The train and test distributions are often not identically distributed.", "Such train-test mismatches occur because evaluation datasets rarely characterize the entire distribution (Torralba and Efros, 2011), and the test distribution typically drifts over time (Quiñonero-Candela et al., 2009).", "Chasing an evolving data distribution is costly, and even if the training data does not become stale, models will still encounter unexpected situations at test time.", "Accordingly, models must generalize to OOD examples whenever possible, and when OOD examples do not belong to any known class, models must detect them in order to abstain or trigger a conservative fallback policy (Emmott et al., 2015).", "Evaluations typically assume that examples are independent and identically distributed (IID).", "In the IID setting, large pretrained Transformer models can attain near human-level performance on numerous tasks (Wang et al., 2019).", "However, high IID accuracy does not necessarily translate to OOD robustness for image classifiers (Hendrycks and Dietterich, 2019), and pretrained Transformers may embody this same
fragility.", "Moreover, pretrained Transformers can rely heavily on spurious cues and annotation artifacts (Cai et al., 2017; Gururangan et al., 2018) which out-of-distribution examples are less likely to include, so their OOD robustness remains uncertain.", "In this work, we systematically study the OOD robustness of various NLP models, such as word embedding averages, LSTMs, pretrained Transformers, and more.", "We decompose OOD robustness into a model's ability to (1) generalize and to (2) detect OOD examples (Card et al., 2018).", "To measure OOD generalization, we create a new evaluation benchmark that tests robustness to shifts in writing style, topic, and vocabulary, and spans the tasks of sentiment analysis, textual entailment, question answering, and semantic similarity.", "We create OOD test sets by splitting datasets with their metadata or by pairing similar datasets together (Section 2).", "Using our OOD generalization benchmark, we show that pretrained Transformers are considerably more robust to OOD examples than traditional NLP models (Section 3).", "We show that the performance of an LSTM semantic similarity model declines by over 35% on OOD examples, while a RoBERTa model's performance slightly increases.", "Moreover, we demonstrate that while pretraining larger models does not seem to improve OOD generalization, pretraining models on diverse data does improve OOD generalization.", "To measure OOD detection performance, we turn classifiers into anomaly detectors by using their prediction confidences as anomaly scores (Hendrycks and Gimpel, 2017).", "We show that many non-pretrained NLP models are often near or worse than random chance at OOD detection.", "In contrast, pretrained Transformers are far more capable at OOD detection.", "Overall, our results highlight that while there is room for future robustness improvements, pretrained Transformers are already moderately robust.", "We evaluate OOD generalization with seven carefully selected
datasets.", "Each dataset either (1) contains metadata which allows us to naturally split the samples or (2) can be paired with a similar dataset from a distinct data generating process.", "By splitting or grouping our chosen datasets, we can induce a distribution shift and measure OOD generalization.", "We utilize four sentiment analysis datasets: We use SST-2 , which contains pithy expert movie reviews (Socher et al., 2013), and IMDb (Maas et al., 2011), which contains full-length lay movie reviews.", "We train on one dataset and evaluate on the other dataset, and vice versa.", "Models predict a movie review's binary sentiment, and we report accuracy.", "The Yelp Review Dataset contains restaurant reviews with detailed metadata (e.g., user ID, restaurant name).", "We carve out four groups from the dataset based on food type: American, Chinese, Italian, and Japanese .", "Models predict a restaurant review's binary sentiment, and we report accuracy.", "The Amazon Review Dataset contains product reviews from Amazon (McAuley et al., 2015; He and McAuley, 2016).", "We split the data into five categories of clothing (Clothes, Women Clothing, Men Clothing, Baby Clothing, Shoes) and two categories of entertainment products (Music, Movies).", "We sample 50,000 reviews for each category.", "Models predict a review's 1 to 5 star rating, and we report accuracy.", "STS-B requires predicting the semantic similarity between pairs of sentences (Cer et al., 2017).", "The dataset contains text of different genres and sources; we use four sources from two genres: MSRpar (news), Headlines (news); MSRvid (captions), Images (captions).", "The evaluation metric is Pearson's correlation coefficient.", "ReCoRD is a reading comprehension dataset using paragraphs from CNN and Daily Mail news articles and automatically generated questions (Zhang et al., 2018).", "We bifurcate the dataset into CNN and Daily Mail splits and evaluate using exact match.", "MNLI is a textual entailment dataset 
using sentence pairs drawn from different genres of text (Williams et al., 2018).", "We select examples from two genres of transcribed text (Telephone and Face-to-Face) and one genre of written text (Letters), and we report classification accuracy.", "We evaluate NLP models with different input representations and encoders.", "We investigate three model categories with a total of thirteen models.", "Bag-of-words (BoW) Model.", "We use a bag-of-words model (Harris, 1954), which is high-bias but low-variance, so it may exhibit performance stability.", "The BoW model is only used for sentiment analysis and STS-B due to its low performance on the other tasks.", "For STS-B, we use the cosine similarity of the BoW representations from the two input sentences.", "Word Embedding Models.", "We use word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) word embeddings.", "These embeddings are encoded with one of three models: word averages (Wieting et al., 2016), LSTMs (Hochreiter and Schmidhuber, 1997), and Convolutional Neural Networks (ConvNets).", "For classification tasks, the representation from the encoder is fed into an MLP.", "For STS-B and MNLI, we use the cosine similarity of the encoded representations from the two input sentences.", "For reading comprehension, we use the DocQA model (Clark and Gardner, 2018) with GloVe embeddings.", "We implement our models in AllenNLP (Gardner et al., 2018) and tune the hyperparameters to maximize validation performance on the IID task.", "Pretrained Transformers.", "We investigate BERT-based models (Devlin et al., 2019), which are pretrained bidirectional Transformers (Vaswani et al., 2017) with GELU (Hendrycks and Gimpel, 2016) activations.", "In addition to BERT Base and BERT Large, we also use the large version of RoBERTa (Liu et al., 2019b), which is pretrained on a larger dataset than BERT.", "We use ALBERT (Lan et al., 2020) and also a distilled version of BERT, DistilBERT (Sanh et al., 2019).", "We
follow the standard BERT fine-tuning procedure (Devlin et al., 2019) and lightly tune the hyperparameters for our tasks.", "We perform our experiments using the HuggingFace Transformers library (Wolf et al., 2019).", "In this section, we evaluate the OOD generalization of numerous NLP models on seven datasets and provide some upshots.", "A subset of results are in Figures 1 and", "2. Full results are in the Appendix.", "Pretrained Transformers are More Robust.", "In our experiments, pretrained Transformers often have smaller generalization gaps from IID data to OOD data than traditional NLP models.", "For instance, Figure 1 shows that the LSTM model declined by over 35%, while RoBERTa's generalization performance in fact increases.", "For Amazon, MNLI, and Yelp, we find that pretrained Transformers' accuracy only slightly fluctuates on OOD examples.", "Partial MNLI results are in Table", "1. We present the full results for these three tasks in the Appendix.", "In short, pretrained Transformers can generalize across a variety of distribution shifts.", "Bigger Models Are Not Always Better.", "While larger models reduce the IID/OOD generalization gap in computer vision (Hendrycks and Dietterich, 2019; Xie and Yuille, 2020; Hendrycks et al., 2019d), we find the same does not hold in NLP.", "[Figure 2: Generalization results for sentiment analysis (IMDb sentiment classifier accuracy on IID IMDb vs. OOD SST-2 data) and reading comprehension (ReCoRD exact match on IID CNN vs. OOD Daily Mail data).] Figure 3 shows that larger BERT and AL-", "BERT models do not reduce the generalization gap.", "However, in keeping with results from vision (Hendrycks and Dietterich, 2019), we find that model distillation can reduce robustness, as evident in our DistilBERT
results in Figure", "2. This highlights that testing model compression methods for BERT (Shen et al., 2020; Ganesh et al., 2020; Li et al., 2020) on only in-distribution examples gives a limited account of model generalization, and such narrow evaluation may mask downstream costs.", "As in computer vision (Orhan, 2019; Xie et al., 2020; Hendrycks et al., 2019a), pretraining on larger and more diverse datasets can improve robustness.", "RoBERTa exhibits greater robustness than BERT Large, where one of the largest differences between these two models is that RoBERTa pretrains on more data.", "See Figure 2's results.", "Since OOD robustness requires evaluating both OOD generalization and OOD detection, we now turn to the latter.", "Without access to an outlier dataset (Hendrycks et al., 2019b), the state-of-the-art OOD detection technique is to use the model's prediction confidence to separate in- and out-of-distribution examples (Hendrycks and Gimpel, 2017).", "Specifically, we assign an example $x$ the anomaly score $-\max_y p(y \mid x)$, the negative prediction confidence, to perform OOD detection.", "We train models on SST-2, record the model's confidence values on SST-2 test examples, and then record the model's confidence values on OOD examples from five other datasets.", "For our OOD examples, we use validation examples from 20 Newsgroups (20 NG) (Lang, 1995), the English source side of English-German WMT16 and English-German Multi30K (Elliott et al., 2016), and concatenations of the premise and hypothesis for RTE (Dagan et al., 2005) and SNLI (Bowman et al., 2015).", "These examples are only used during OOD evaluation, not training.", "For evaluation, we follow past work (Hendrycks et al., 2019b) and report the False Alarm Rate at 95% Recall (FAR95).", "The FAR95 is the probability that an in-distribution example raises a false alarm, assuming that 95% of all out-of-distribution examples are detected.", "Hence a lower FAR95 is better.", "Partial results are in Figure 4, and full results are in the Appendix.",
"Previous Models Struggle at OOD Detection.", "Models without pretraining (e.g., BoW, LSTM word2vec) are often unable to reliably detect OOD examples.", "In particular, these models' FAR95 scores are sometimes worse than chance because the models often assign a higher probability to out-of-distribution examples than in-distribution examples.", "The models particularly struggle on 20 Newsgroups (which contains text on diverse topics including computer hardware, motorcycles, and space), as their false alarm rates are approximately 100%.", "Pretrained Transformers Are Better Detectors.", "In contrast, pretrained Transformer models are better OOD detectors.", "Their FAR95 scores are always better than chance.", "Their superior detection performance is not solely because the underlying model is a language model, as prior work (Hendrycks et al., 2019b) shows that language models are not necessarily adept at OOD detection.", "Also note that in OOD detection for computer vision, higher accuracy does not reliably improve OOD detection (Lee et al., 2018), so pretrained Transformers' OOD detection performance was not anticipated.", "Despite their relatively low FAR95 scores, pretrained Transformers still do not cleanly separate in- and out-of-distribution examples (Figure 5).", "OOD detection using pretrained Transformers is still far from perfect, and future work can aim towards creating better methods for OOD detection.", "Why Are Pretrained Models More Robust?", "An interesting area for future work is to analyze why pretrained Transformers are more robust.", "A flawed explanation is that pretrained models are simply more accurate.", "However, this work and past work show that increases in accuracy do not directly translate to reduced IID/OOD generalization gaps (Hendrycks and Dietterich, 2019; Fried et al., 2019).", "One partial explanation is that Transformer models are pretrained on diverse data, and in computer vision, dataset diversity can improve OOD generalization 
(Hendrycks et al., 2020) and OOD detection (Hendrycks et al., 2019b).", "Similarly, Transformer models are pretrained with large amounts of data, which may also aid robustness (Orhan, 2019; Xie et al., 2020; Hendrycks et al., 2019a).", "However, this is not a complete explanation as BERT is pretrained on roughly 3 billion tokens, while GloVe is trained on roughly 840 billion tokens.", "Another partial explanation may lie in self-supervised training itself.", "Hendrycks et al. (2019c) show that computer vision models trained with self-supervised objectives exhibit better OOD generalization and far better OOD detection performance.", "Future work could propose new self-supervised objectives that enhance model robustness.", "Domain Adaptation.", "Other research on robustness considers the separate problem of domain adaptation (Blitzer et al., 2007; Daume III, 2007), where models must learn representations of a source and target distribution.", "We focus on testing generalization without adaptation in order to benchmark robustness to unforeseen distribution shifts.", "Unlike Fisch et al. (2019) and Yogatama et al. 
(2019), we measure OOD generalization by considering simple and natural distribution shifts, and we also evaluate more than question answering.", "Adversarial Examples.", "Adversarial examples can be created for NLP models by inserting phrases (Jia and Liang, 2017; Wallace et al., 2019), paraphrasing questions (Ribeiro et al., 2018), and reducing inputs (Feng et al., 2018).", "However, adversarial examples are often disconnected from real-world performance concerns (Gilmer et al., 2018).", "Thus, we focus on an experimental setting that is more realistic.", "While previous works show that, for all NLP models, there exist adversarial examples, we show that not all models are equally fragile.", "Rather, pretrained Transformers are overall far more robust than previous models.", "Counteracting Annotation Artifacts.", "Annotators can accidentally leave unintended shortcuts in datasets that allow models to achieve high accuracy by effectively cheating (Cai et al., 2017; Gururangan et al., 2018; Min et al., 2019).", "These annotation artifacts are one reason for OOD brittleness: OOD examples are unlikely to contain the same spurious patterns as in-distribution examples.", "OOD robustness benchmarks like ours can stress test a model's dependence on artifacts (Liu et al., 2019a; Feng et al., 2019; Naik et al., 2018).", "We created an expansive benchmark across several NLP tasks to evaluate out-of-distribution robustness.", "To accomplish this, we carefully restructured and matched previous datasets to induce numerous realistic distribution shifts.", "We first showed that pretrained Transformers generalize to OOD examples far better than previous models, so that the IID/OOD generalization gap is often markedly reduced.", "We then showed that pretrained Transformers detect OOD examples surprisingly well.", "Overall, our extensive evaluation shows that while pretrained Transformers are moderately robust, there remains room for future research on robustness.", "We thank the 
members of Berkeley NLP, Sona Jeswani, Suchin Gururangan, Nelson Liu, Shi Feng, the anonymous reviewers, and especially Jon Cai.", "This material is in part based upon work supported by the National Science Foundation Frontier Award 1804794.", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation." ]
[ "abstain", "objective", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "result", "result", "objective", "method", "result", "abstain", "result", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "result", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "method", "result", "other", "other", "other", "other", "other", "method", "abstain", "objective", "result", "result", "other", "other", "other" ]
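Each record above pairs a `sentences` sequence with an equal-length `labels` sequence. A minimal sketch of consuming one such record follows; the record below is a toy excerpt from this file and the helper name is our own, not part of any official loader:

```python
# Minimal sketch of reading one (sentences, labels) record of the kind
# shown above. The layout is assumed from this file's schema; the helper
# name is illustrative, not from an official dataset loader.
record = {
    "sentences": [
        "Pretrained Transformers are More Robust.",
        "Hence a lower FAR95 is better.",
    ],
    "labels": ["abstain", "abstain"],
}

def labeled_sentences(rec):
    """Return aligned (sentence, label) pairs, checking lengths match."""
    sents, labels = rec["sentences"], rec["labels"]
    if len(sents) != len(labels):
        raise ValueError("sentence/label length mismatch")
    return list(zip(sents, labels))

pairs = labeled_sentences(record)
print(len(pairs))  # 2
```

The length check matters here: because sentences and labels are stored as parallel arrays rather than as pairs, any edit that merges or drops a sentence silently misaligns every later label.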
[ "Responding with images has been recognized as an important capability for an intelligent conversational agent.", "Yet existing works focus only on multimodal dialogue models that depend on retrieval-based methods, neglecting generation methods.", "To fill in this gap, we first present a new task, multimodal dialogue response generation (MDRG): given the dialogue context, a model needs to generate a text or an image as the response.", "Learning such an MDRG model often requires multimodal dialogues containing both texts and images, which are difficult to obtain.", "Motivated by this practical challenge, we consider MDRG under the natural assumption that only limited training examples are available.", "Under such a low-resource setting, we devise a novel conversational agent, Divter, in order to isolate the parameters that depend on multimodal dialogues from the rest of the generation model.", "By this means, the major part of the model can be learned from a large number of text-only dialogues and text-image pairs respectively, and then the full set of parameters can be well fitted using just a few training examples.", "Extensive experiments demonstrate our method achieves state-of-the-art results in both automatic and human evaluation, and can generate informative text and high-resolution image responses.", "With the development of instant messaging technology in recent decades, the medium of online conversation has also expanded from pure text to a variety of visual modalities (e.g., image, GIF animation, short video).", "Similar to communicating via messenger tools (e.g., Facebook, WhatsApp, WeChat) in real life, an excellent intelligent conversational agent should not only be able to converse freely in plain text, but also have the ability to perceive and share the real visual physical world.", "Although recently some large-scale pre-trained text-only dialogue generation models, such as DialoGPT (Zhang et al., 2020), Blender (Roller et al., 2021), and Meena 
(Adiwardana et al., 2020), have shown excellent performance, they still cannot rely exclusively on plain text to fully simulate the rich experience of visual perception.", "Recently, various vision-language tasks have been introduced and have attracted widespread attention, such as visual question answering (Ren et al., 2015; Lu et al., 2016; Anderson et al., 2018; Li et al., 2019a; Huang et al., 2020), image captioning (Xu et al., 2015; Anderson et al., 2016; Ghanimifard and Dobnik, 2019; Cornia et al., 2020), and image-grounded dialogue (Das et al., 2017; Yang et al., 2021; Agarwal et al., 2020; Qi et al., 2020; Chen et al., 2021; Liang et al., 2021).", "Specifically, in human conversations, images can easily convey rich visual perceptions, which are hard to express in plain text.", "As the example in Figure 1 shows, images are required in at least three circumstances:", "(i) the other speaker has little knowledge (e.g., colorful Burano, in the 1st image) of objects that only you have seen;", "(ii) to share more details (e.g., red wine and pasta, in the 2nd image) of objects even if both of you have common knowledge of them;", "(iii) to express your emotions (e.g., happiness, in the 3rd image) about a specific event.", "A related task is photo sharing (Zang et al., 2021), which aims to select and share an image based on the textual context; it is challenging because it requires models to understand the background story, complemented by human imagination, rather than to locate related visual objects or explicitly mention the main visible content of the image as previous works do.", "Zang et al. 
(2021) propose a retrieval-based method to resolve the above challenge.", "However, the performance of the retrieval-based method is limited in specific domains by the size of the pre-constructed conversational history repository, especially for long-tail contexts not covered in the history, since the set of image responses of a retrieval system is also fixed.", "A better way, on the other hand, is to generate a new image accordingly.", "In this paper, we formulate a new problem, Multimodal Dialogue Response Generation (MDRG): given the dialogue context, the model should not only generate a pure text response but also have the capacity to generate a multimodal response (e.g., containing both image and text).", "We argue that there are still some hindrances to application, since (1) the sophisticated neural end-to-end architecture will overfit to the very few well-annotated training examples (e.g., the existing multimodal dialogue corpora of only about 10k dialogues).", "Evidence is that when discussing topics outside the training-data domain, its performance drops dramatically; and (2) as human effort is expensive, it is not easy to collect enough training data for a new domain.", "Based on the above facts, we take a step further to extend the assumption of MDRG to a low-resource setting where only a few multimodal dialogues are available.", "To tackle the above challenges, our key idea is to make the parameters that rely on multimodal dialogues small and independent by disentangling textual response generation and image response generation, so that we can learn the major part of the generation model from text-only dialogues and <image description, image> pairs, which are much easier to obtain.", "Specifically, we present Divter, a novel conversational agent powered by large-scale visual world experiences.", "As shown in Figure 2, our Divter is made up of two Transformer-based (Vaswani et al., 2017a) components: a multimodal dialogue response generator, and a text-to-image 
translator.", "Divter takes the dialogue context as input, then generates a textual sequence which may contain a text response, a textual image description, or both.", "The text-to-image translator takes the above image description as a condition, then generates a realistic and consistent high-resolution image.", "The two components are independent, each relying on different knowledge, and thus can be pre-trained using a large number of text-only dialogues and <image description, image> pairs, respectively.", "The end-to-end Divter depends on multimodal dialogues constructed as tuples (dialogue context, text response / <image description, image>), but the joint learning and estimation of the two components require just a few training examples from the specific domain.", "Contributions of this work are three-fold: To the best of our knowledge, this is the first work on multimodal dialogue response generation.", "We explore the task under a low-resource setting where only a few multimodal dialogues are assumed available.", "We present Divter, a novel conversational agent which can effectively understand dialogue context and generate informative text and high-resolution image responses.", "Extensive experiments on the PhotoChat corpus (Zang et al., 2021) indicate the effectiveness of Divter: it achieves a significant improvement over a pure-text dialogue generation model and a retrieval-based image sharing method.", "End-to-end response generation for textual open-domain dialogues is inspired by the successful application of neural sequence-to-sequence models to machine translation (Sutskever et al., 2014).", "On top of the basic architecture (Shang et al., 2015; Vinyals and Le, 2015), the vanilla encoder-decoder method is widely extended to address the critical challenges in open-domain dialogue systems, including improving the diversity of responses (Li et al., 2016a; Zhao et al., 2017; Tao et al., 2018), modeling conversation contexts (Serban et al., 
2016; Xing et al., 2017; Zhang et al., 2019; Zhao et al., 2020), controlling attributes of responses (See et al., 2019; Zhou et al., 2018; Xu et al., 2019), biasing responses to some specific personas (Li et al., 2016b; Zhang et al., 2018), incorporating extra knowledge into generation (Dinan et al., 2019; Ghazvininejad et al., 2018; Kim et al., 2020; Li et al., 2020), and building general pre-trained agents (Adiwardana et al., 2020; Zhang et al., 2020; Roller et al., 2021; Qi et al., 2021).", "Different from the previous works on open-domain dialogue response generation that converse freely with plain text, our work lies in the research of multimodal response generation.", "Text-to-image generation has also been extensively studied.", "Mansimov et al. (2016) showed that the DRAW generative model (Gregor et al., 2015) could generate images from natural language descriptions.", "Reed et al. (2016) proposed a generative adversarial network to improve image fidelity.", "Subsequent methods continued to optimize the generation architecture, such as stacked generators (Zhang et al., 2017), attentional networks (Xu et al., 2018), and extra knowledge (Li et al., 2019b).", "Nguyen et al. (2017) provided a unified probabilistic interpretation of related activation maximization methods to produce high-quality images at higher resolutions.", "Separately, Cho et al. (2020) used uniform masking with a large range of masking ratios and aligned suitable pre-training datasets to the proper objectives.", "More recently, Ramesh et al. 
(2021) and Ding et al. (2021) adopted Transformer-based methods which autoregressively model the text and image tokens as a single stream of data.", "For this multimodal response generation scenario, we use the textual image description to bridge the above textual dialogue generation and text-to-image generation models, where the image description is the output of the former and the input of the latter in a low-resource setting.", "Suppose that we have a dataset D_S = {(U_i, R_i)}_{i=1}^{n}, where for i in {1, ..., n}, U_i = {u_{i,1}, ..., u_{i,n_i}} is the dialogue context with u_{i,j} the j-th utterance, and R_i is the response regarding U_i.", "u_{i,j} and R_i could contain two modalities: text and image.", "The goal is to learn a generation model P(R | U; θ) (θ denotes the parameters of the model) with D_S.", "Thus, given a new dialogue context U, one can generate a multimodal response R following P(R | U; θ).", "This section first formulates the unified tokenization method for multimodal dialogues.", "We then introduce the two important components of our proposed multimodal dialogue response generation model (Divter) under the low-resource scenario, including", "(i) the textual dialogue response generator;", "(ii) the text-to-image translator.", "Figure 2 shows the overview of our Divter.", "To learn a multimodal generation model, we should first model unified representations of both text and image.", "Inspired by the success of DALL-E (Ramesh et al., 2021) and VQGAN (Esser et al., 2020), to utilize the highly expressive Transformer architecture for text-to-image generation, we need to express an image in the form of a sequence, similar to what we usually do for pure-text tokenization.", "The tokenization of text is already well studied, e.g., BPE (Gage, 1994).", "This work uses 50257 BPE-encoded tokens and the distributed embeddings of the Transformer architecture (Vaswani et al., 2017b) to model the texts in a dialogue.", "The tokenizer for images is a discrete auto-encoder 
(VQGAN 1 ) V, as shown in Figure 2.", "V uses an encoder V_E to compress each image r^v of shape H x W x 3 into a feature map ẑ of shape h x w x d_z; each vector of dimension d_z is then quantized to its closest embedding z_k in a learned, discrete codebook Z = {z_k}_{k=1}^{K} of vectors in R^{d_z} under the element-wise quantization q(.): z_q = q(ẑ) = (argmin_{z_k in Z} ||ẑ_{ij} - z_k||) in R^{h x w x d_z} (1)", "Thus r^v can be represented by a spatial collection of codebook entries z_q in R^{h x w x d_z}.", "The decoder V_D maps z_q back to an image r̂^v to reconstruct the input.", "In this work, H = W = 256, h = w = 16, K = 16384, d_z = 256.", "The learning details of V and Z can be found in Esser et al. (2020).", "Learning an effective multimodal generation model with a single sequence-to-sequence model often requires a large number of training instances.", "However, only very few multimodal dialogues are available due to privacy restrictions on social media and the expensive human effort.", "On the other hand, as shown in Figure 3, there exist a large number of open-source text-only dialogues (e.g., Reddit comments 2 , formulated as D_C = {(U_i, r^e_i)}_{i=1}^{N} with (U_i, r^e_i) a <text dialogue context, text response> pair), and a large number of <image description, image> pairs [Footnote 1: https://github.com/CompVis/taming-transformers; Footnote 2: https://files.pushshift.io/reddit/; Figure 3: Abstract logic of the proposed approach. Solid lines mean that there exists a large-scale training set to pre-train the generation model, while dotted lines mean that only very few training instances are available, leading to bad generation quality.] (e.g., YFCC100M (Thomee 
et al., 2016), formulated as D_P = {(c_j, r^v_j)}_{j=1}^{M} with (c_j, r^v_j) a <textual image description, image> pair).", "Based on the above facts and the low-resource challenge of the MDRG task, we incorporate generative text-to-image translation into text-only open-domain dialogue response generation.", "More specifically:", "(i) if the multimodal dialogue context contains an image, we replace the image with its description to form a text-only context, and take this context as the input of the text-only dialogue generation model G (pre-trained with D_C);", "(ii) if we need to generate an image as part of the response, we first generate a textual description with G, then adopt a text-to-image translator module F (pre-trained with D_P) to translate the description into a synonymous image.", "To bridge G and F, we further extend the formalization of D_S to a new D̃_S in which each image r^v is paired with its textual description c.", "Both the", "(i) and", "(ii) actions can be independently learned, which becomes the key to aiding the small D̃_S with the large D_C and D_P.", "By this means, the current goal is to learn a generation model P(R | U; θ) with D = {D̃_S, D_C, D_P}.", "With the pre-trained G and F available, we finally use D̃_S to jointly fine-tune G and F to obtain the capacity of generating multimodal responses.", "Figure 2 illustrates the architecture of our model.", "The model is made up of two components: a textual dialogue response generator G and a text-to-image translator F.", "In the rest of this section, we elaborate these two modules in detail.", "The textual dialogue response generator G is a sequence-to-sequence model based on the Transformer architecture (Vaswani et al., 2017b).", "It consists of a 24-layer Transformer with a hidden size of 1024 and 16 heads.", "Specifically, given a text dialogue context U = {u_1, ... 
, u_l} from D̃_S as the source, and the target is a text R = {w_1, ..., [SEP], [DST], ..., [SEP], ..., w_T} with w_t the t-th word; the [DST] token means that the following subsequence is a textual image description c.", "The generation loss is defined by L_G = -E_{(U,R) ~ D̃_S}[log p(R)] (2), p(R) = ∏_t p(w_t | U, w_{1:t-1}) (3). Inference: given a new text dialogue context U, when a generated image description c occurs, it will be fed into the following text-to-image translator and mapped to the codebook embeddings of its synonymous image.", "The text-to-image translator F is also a sequence-to-sequence generation model based on the Transformer architecture; it consists of a 24-layer Transformer with a hidden size of 1024 and 16 attention heads.", "Given an image r^v in R^{H x W x 3} and its textual description c = {w_1, ..., w_T} from D̃_S, with V_E and Z available, we can represent r^v in terms of the codebook indices of its encodings.", "More precisely, the quantized encoding of image r^v is given by z_q = q(V_E(r^v)) in R^{h x w x d_z}, and can be transferred to a sequence s in {0, ..., |Z|-1}^{h x w} of indices from the codebook Z, obtained by replacing each code with its index in the codebook Z: s_{i,j} = k such that (z_q)_{i,j} = z_k (4)", "Then we concatenate the tokenized c and s into a single stream of tokens x = {w_1, ..., w_T, [SEP], s_1, ..., s_{h x w}} (5) and train an autoregressive Transformer to model the joint distribution over the text and image tokens; the generation loss is defined by L_F = -E_{(c, r^v) ~ D̃_S}[log p(x)] (6), p(x) = ∏_t p(w_t | w_{1:t-1}) ∏_i p(s_i | c, s_{1:i-1}) (7). Inference: given a description c, we leverage the text-to-image translator to generate the representation z̃ = F(c) in R^{h x w x d_z} of its synonymous image.", "Let us denote {θ_g, θ_v, θ_f} as the parameters of the textual dialogue response generator G, the image tokenizer V, and the text-to-image translator F, respectively.", "In the pre-training stage, we use the textual dialogues D_C to estimate θ_g, use ImageNet (Deng et 
al., 2009) to estimate θ_v, and use the <image description, image> pairs D_P to estimate θ_f.", "Then we fix θ_v, and jointly fine-tune θ_g and θ_f with D̃_S; thus the final objective is to minimize the integrated loss L = L_G + λ L_F (8), where λ is a hyperparameter.", "Remarks.", "In this work, we mainly focus on integrating text and image response generation, but our proposed approach actually provides a recipe for a general solution to low-resource MDRG in which the target modality could be GIFs, videos, speech sounds, etc.", "To do that, one only needs to modify the text-to-image translator to make it compatible with the specific modality type, and pre-train a new text-to-<target modality> translator.", "To evaluate the performance of Divter, we conduct comprehensive experiments on the PhotoChat dataset released by Zang et al. (2021), a multimodal conversational dataset consisting of 10917 images and 12286 dialogues, each of which is paired with a user image shared during the conversation, and each image is paired with its text description.", "The dataset has been split into 10286 train, 1000 dev, and 1000 test instances.", "More details are described in Appendix A.1.", "We conduct evaluation with both automatic metrics and human judgements.", "For automatic evaluation, we focus on four aspects: (1) Image Intent Prediction, whose goal is to predict whether an image should be produced in the next turn for a given context; (2) Text Description Generation; (3) Image Generation Quality; (4) Text Response Generation.", "For (1), we follow Zang et al. (2021), who formulate the problem as a binary classification task, and use F1 as the metric; for (2) and (4), we use PPL, BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and F1; for (3), we follow Ramesh et al. 
(2021) and use Fréchet Inception Distance (FID) and Inception Score (IS).", "For human evaluation, we randomly sample 200 dialogue contexts from PhotoChat and generate responses for Divter and the baselines.", "Three human annotators are asked to score the response quality on a scale of {0, 1, 2} from four aspects: (1) Context Coherence: whether the text response is coherent with the context; (2) Text Fluency: whether the text response is natural and fluent; (3) Image Quality: the quality (including definition and integrity) of the image response; (4) Background Consistency of Image: for each dialogue, we select the top-8 generated/retrieved images and ask the annotators to decide whether the group is consistent with the dialogue background; a qualitative assessment is also shown in Figure 5.", "We report the average scores over the three annotators; a higher score means better quality.", "We also compare both the pure-text Divter and the multimodal Divter with DialoGPT, respectively.", "The pure-text Divter means we block the [DST] token in the vocabulary at the decoding stage, so that the responses contain only text.", "We also randomly sample 200 dialogues.", "To each annotator, two responses from different models are presented, randomly shuffled to hide their sources.", "The annotators then judge which response is more effective in improving the dialogue experience and attractiveness.", "The agreement among the annotators is measured by Fleiss' kappa (Fleiss, 1971).", "For the textual dialogue response generator G, we use DialoGPT (Zhang et al., 2020) as the pre-trained model initialization, trained on 147M conversation-like exchanges extracted from Reddit comment chains spanning from 2005 through 2017.", "In the fine-tuning stage, we concatenate the context turns with the token [SEP] into a single sequence; we adopt the Adam optimizer with an initial learning rate of 1e-5 and a batch size of 256; the training on PhotoChat is conducted on 
16 Nvidia Tesla V100 32G GPU cards.", "We use beam search (size = 5) to decode the text sequence.", "For the text-to-image translator F, we randomly select 5M <categorical image description, image> pairs from ImageNet, and <image description, image> pairs from YFCC100M (Thomee et al., 2016) as training data.", "We set the maximum image description length to 32, then pre-train F for 3.5 million steps with a batch size of 256 accumulated over 16 Nvidia Tesla V100 32G GPUs.", "In the fine-tuning stage, we train on PhotoChat for 50000 steps.", "In the joint learning, we first train F for 48000 steps, then jointly train G and F for 2000 steps.", "The λ in Eq. (8) is 0.2.", "Early stopping on validation is adopted as a regularization strategy.", "All the hyperparameters are determined by grid search.", "More details are described in Appendix A.3.", "We implement the image auto-encoder using the code at https://github.com/CompVis/taming-transformers, implement the textual dialogue response generator using the code at https://github.com/microsoft/DialoGPT, and implement the text-to-image translator using the code at https://github.", "com/lucidrains/DALLE-pytorch.", "Two pre-trained models, BERT-base (Devlin et al., 2019) and T5-3B (Raffel et al., 2020), are selected as baselines to measure the Image Intent Prediction task in Section 5.2.", "They take the text dialogue context as input and predict whether an image will be shared in the next turn.", "SCAN is proposed by Lee et al. 
(2018); the model captures the interplay between image regions and text tokens to infer image-text similarity, and SCAN achieves state-of-the-art performance on the image retrieval task on PhotoChat.", "S2S-TF is a single sequence-to-sequence model with a 24-layer Transformer; we use only PhotoChat to train this multimodal generation model.", "As shown in Table 1, our Divter not only achieves comparable performance with the state-of-the-art retrieval-based image response intent prediction model but also achieves remarkable performance on all the generation metrics.", "This indicates that Divter can accurately judge the timing of generating an image response from the given dialogue context, produce text responses that are coherent with the context, and generate high-quality image responses.", "The significant performance gap between Divter and the baseline models without pre-training (e.g., S2S-TF, Divter variants) indicates the superiority of our proposed learning strategy.", "Table 2 reports the results of human evaluation; our Divter also significantly outperforms the baselines on most of the aspects.", "The comparison results shown in Table 3 indicate that (1) our Divter achieves comparable performance on pure-text response generation with DialoGPT; and (2) the multimodal responses generated by Divter achieve a significant improvement in dialogue experience and attractiveness in contrast to the pure-text dialogue model (DialoGPT).", "We conduct extensive ablation experiments over different variants to better understand their relative importance to the MDRG task.", "As shown in Table 1, all the variants lead to worse performance", "in most of the metrics.", "For a more intuitive comparison, the qualitative assessment results are also shown in Figure 4.", "In particular, both quantitative and qualitative results of the ablation study validate that: (1) pre-training is crucial to low-resource multimodal 
dialogue response generation, since removing any component from pre-training causes a performance drop when the training data is small; (2) in terms of impact on image generation performance, F > G, while in terms of impact on text generation performance, G > F; (3) joint learning also contributes to Divter, indicating that the integrated learning of textual context and visual images is more beneficial than either alone.", "To further investigate the quality of multimodal responses generated by Divter, we show two examples on the PhotoChat test data in Table 4.", "The given context of the first one is about ice cream, and the second one is about honey bees.", "As we can see, Divter can not only generate a realistic high-resolution image that is coherent with the background, but also generate informative text responses grounded in the image.", "Separately, the high-quality generated images are comparable to the real-world ground truths, which demonstrates the practicability of Divter.", "Benefits over retrieval-based methods To further investigate and compare the generalization capability of Divter and the retrieval-based method, we also obtain the top-10 generated images from Divter and the equivalent retrieved images from the SCAN model given the same context.", "As shown in Figure 5, on the one hand, the diversity and richness of the generated images are desirable; on the other hand, the retrieved results often suffer from inconsistency with the dialogue background.", "For example, in the second case, the dialogue is about coffee, but the retrieved images contain some unrelated objects like milk, cake, dog, and snack.", "And in the third example, all the retrieval results are mistaken since there are few curtain images in the training and retrieval space.", "This demonstrates the fact that the performance of the retrieval-based method is extremely limited in specific domains by the size of the pre-constructed conversational history 
repository, especially in the low-resource scenario.", "Furthermore, our proposed generation-based method shows better generalization capability to tackle the low-resource challenge.", "In this paper, we explore multimodal dialogue response generation under a low-resource setting.", "To overcome the challenges of the new task and insufficient training data, we propose Divter, a neural conversational agent which incorporates text-to-image generation into text-only dialogue response generation, in which most parameters no longer rely on the task-specific training data and can be estimated from large-scale textual open-domain dialogues and <image description, image> pairs.", "Extensive experiments demonstrate that Divter achieves state-of-the-art results in automatic and human evaluation.", "In the future, we will explore more efficient methods to inject more modalities into response generation.", "We thank the anonymous reviewers for their insightful suggestions to improve this paper." ]
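As a rough illustration of the modular design described above — a text dialogue module plus a text-to-image module bridged by generated image descriptions — the following Python sketch shows the inference-time routing. Every name here (`text_dialogue_model`, `text_to_image`, the `[IMG]` marker) is a hypothetical stand-in, not the authors' actual interface.

```python
IMG_TOKEN = "[IMG]"  # assumed marker separating text replies from image descriptions

def text_dialogue_model(context):
    # Stand-in for the pretrained text dialogue response generator; a real
    # system would decode from a large language model here.
    if "ice cream" in context[-1].lower():
        return f"{IMG_TOKEN} a cone of strawberry ice cream"
    return "Sounds great!"

def text_to_image(description):
    # Stand-in for the text-to-image translator trained on
    # <image description, image> pairs; returns a placeholder object.
    return {"image_for": description}

def respond(context):
    # If the text module emits an image description, route it to the
    # text-to-image module; otherwise return the text reply directly.
    out = text_dialogue_model(context)
    if out.startswith(IMG_TOKEN):
        description = out[len(IMG_TOKEN):].strip()
        return {"type": "image", "content": text_to_image(description)}
    return {"type": "text", "content": out}

print(respond(["OMG...the new ice cream shop is amazing."])["type"])  # image
```

The point of the sketch is the decoupling: because the two modules only communicate through a textual description, each can be pretrained on its own large corpus, which is what makes the low-resource setting tractable.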
[ "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "other", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "other" ]
[ "Evaluating open-domain dialogue systems is difficult due to the diversity of possible correct answers.", "Automatic metrics such as BLEU correlate weakly with human annotations, resulting in a significant bias across different models and datasets.", "Some researchers resort to human judgment experimentation for assessing response quality, which is expensive, time-consuming, and not scalable.", "Moreover, judges tend to evaluate a small number of dialogues, meaning that minor differences in evaluation configuration may lead to dissimilar results.", "In this paper, we present interpretable metrics for evaluating topic coherence by making use of distributed sentence representations.", "Furthermore, we introduce calculable approximations of human judgment based on conversational coherence by adopting state-of-the-art entailment techniques.", "Results show that our metrics can be used as a surrogate for human judgment, making it easy to evaluate dialogue systems on large-scale datasets and allowing an unbiased estimate for the quality of the responses.", "Recently, we have witnessed great success in the capability of computers to seemingly understand natural language text and to generate plausible responses to conversations (Serban et al., 2016; Xing et al., 2017; Sordoni et al., 2015; Li et al., 2016; Serban et al., 2017; Devlin et al., 2018; Radford et al., 2018).", "A challenging task of building dialogue systems lies in evaluating the quality of their responses.", "Typically, evaluating goal-oriented dialogue systems is done via human-generated judgment like a task completion test or user satisfaction score (Walker et al., 1997; Moller et al., 2006).", "However, the task of evaluating open-ended dialogue systems is not well defined as there is no clear explicit goal for conversations.", "Indeed, dialog systems are ultimately created to satisfy the user's needs, which can be associated with how entertaining and engaging the conversation is.", "It 
is unclear how to define a metric that can account comprehensively for the semantic meaning of the responses.", "Moreover, grasping the underlying meaning of text has always been fraught with difficulties, which are essentially attributed to the complexities and ambiguities in natural language.", "Generally, a good dialogue can be described as an exchange of information that sustains coherence through a train of thought and a flow of topics.", "Therefore, a plausible way to evaluate open-ended dialogue systems is to measure the consistency of the responses.", "For example, a neural dialogue system can respond to the utterance 'Do you like animals?' with 'Yes, I have three cats', and thereafter reply to 'How many cats do you have?' with 'I don't have cats.'.", "Here, we can notice that the dialogue system failed to provide a coherent answer and instead generated an inconsistent response.", "In this work, we characterize the consistency of dialogue systems as a natural language inference (NLI) problem (Dagan et al., 2006).", "In particular, NLI focuses on recognizing whether a hypothesis can be inferred from a premise.", "In dialogue systems, we cast a generated response as the hypothesis and the conversation history as the premise, thus projecting the automatic evaluation into an NLI task.", "In other words, we propose directly calculable approximations of human evaluation grounded in conversational coherence and affordance by using state-of-the-art entailment techniques.", "For this purpose, we build a synthesized inference dataset from conversational corpora.", "The intuition behind this choice is that utterances in a human conversation tend to follow a consistent and coherent flow where each utterance can be inferred from the previous interactions.", "We train state-of-the-art inference models on our conversational inference data, and the learned models are then used to evaluate the coherence of a given conversation.", "Finally, we compare our proposed 
evaluation method against existing automated metrics.", "The results highlight the capability of inference models to automatically evaluate dialogue coherence.", "The source code and the dataset are available at https://github.com/nouhadziri/DialogEntailment.", "Evaluating open-ended dialogue systems has drawn the attention of several researchers in recent years.", "Unfortunately, word-overlap metrics such as BLEU have been shown to correlate weakly with human evaluation, which in turn introduces bias against certain models (Liu et al., 2016).", "Many studies have been proposed to improve the quality of automated metrics.", "In particular, Lowe et al. (2017) introduced an automatic evaluation system called ADEM which learns to score responses from an annotated dataset of human response scores.", "However, such a system is heavily biased towards the training data and struggles to generalize to unseen datasets.", "Further, collecting an annotated gold standard of human judgment is very expensive; thus, ADEM is less flexible and extensible.", "Venkatesh et al. (2018) introduced a framework for evaluating the quality of conversations based on topical diversity, coherence, engagement, and conversational depth, and showed that these metrics conform with human evaluation.", "However, a big part of their metrics relies on human labels, which makes the evaluation system not scalable.", "Recently, Welleck et al. 
(2018) investigated the use of NLI models (e.g., ESIM (Chen et al., 2016) and InferSent (Conneau et al., 2017)) to measure consistency in dialogue systems.", "They built a Dialogue NLI dataset which consists of sentence pairs labeled as entailment, neutral, or contradiction.", "The utterances are derived from a two-agent persona-based dialogue dataset.", "To annotate the dataset, they used human annotation from Amazon Mechanical Turk.", "In this work, we propose a method that employs NLI approaches to detect coherence in dialogue systems.", "The proposed evaluation procedure does not require human labels, making progress towards scalable and autonomous evaluation systems.", "Reasoning about the semantic relationship between two utterances is a fundamental part of text understanding.", "In this setting, we consider inference about entailment as a useful test bed for the evaluation of coherence in dialogue systems.", "The success of NLI models allows us to frame automated dialogue evaluation as an entailment problem.", "More specifically, given a conversation history H and a generated response r, the goal is to understand whether the premise-hypothesis pair (H, r) is entailing, contradictory, or neutral.", "Neural response generation models are essentially trained by maximizing the likelihood of the target response given the source utterances.", "Therefore, a dialogue generation task can be formulated as a next-utterance prediction problem.", "In particular, the model predicts a response u_{i+1} given a conversation history (u_1, ..., u_i).", "One key factor for a successful conversation is coherence across multiple turns.", "A machine's response can be considered incoherent when it directly contradicts its previous utterances or follows illogical reasoning throughout the whole conversation.", "Inconsistency can be clearly identified when it corresponds to a logical discrepancy between two facts.", "For example, when you indicate 
clearly during the conversation that you have cats, but when asked 'How many cats do you have?', you answer 'I don't have cats.'.", "Nevertheless, in general, inconsistency can be less explicitly recognizable, as it may describe a discrepancy between what the person has said and what she/he truly believes given her/his personality and background information.", "To detect dialogue incoherence, we consider two prominent models that have shown promising results in commonsense reasoning: the Enhanced Sequential Inference Model (ESIM) (Chen et al., 2016) and Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018): ESIM (Chen et al., 2016): employs a Bi-LSTM model (Graves and Schmidhuber, 2005) to encode the premise and the hypothesis.", "Also, it explores the effectiveness of syntax for NLI by encoding syntactic parse trees of the premise and hypothesis through a Tree-LSTM (Zhu et al., 2015).", "Then, the input encoding part is followed by a matrix attention layer, a local inference layer, another Bi-LSTM inference composition layer, and finally a pooling operation before the output layer.", "Recent NLI models have achieved high accuracy on the Stanford NLI corpus (Bowman et al., 2015) (90.1%) and the GLUE Benchmark (Wang et al., 2018) (86.7%).", "We further boost ESIM by incorporating contextualized word embeddings, namely ELMo (Peters et al., 2018), into the inference model.", "BERT (Devlin et al., 2018): exploits a multilayer bidirectional Transformer model (Vaswani et al., 2017) to learn pre-trained universal representations of text using only a plain text corpus from Wikipedia.", "BERT has achieved state-of-the-art results on various natural language understanding tasks and has been shown to handle long-range dependencies in text well.", "BERT can be fine-tuned for several tasks by solely adding a small layer to the core model.", "In this work, we adapted BERT to the task of NLI.", "Overall, the goal of the above models is to learn a 
function G_NLI that predicts one of three categories (i.e., entailment, contradiction, or neutral) given premise-hypothesis pairs.", "To train the inference models, we build a synthesized dataset geared toward evaluating consistency in dialogue systems.", "To this end, the Persona-Chat conversational data (Zhang et al., 2018) is used to form the basis of our conversational inference data.", "The continuity of utterances in human conversation facilitates the use of entailment in the dialogue domain.", "Typically, when we interact with one another, we tend to reference information from previous utterances to engage with the interlocutor.", "This is why we build our synthetic inference dataset upon a dialogue corpus.", "The Persona-Chat corpus is a crowd-sourced dataset where two people converse with each other based on a set of randomly assigned personas.", "To build an inference corpus, we need to find three different labels (i.e., entailment, contradiction, and neutral).", "For this purpose, we map an appropriate and on-topic response to the entailment label.", "Consequently, the entailment instances are derived from the utterances in the conversations.", "For contradiction, grammatically-impaired sentences are constructed by randomly choosing words from the conversation.", "We also added randomly drawn contradictory instances from the MultiNLI corpus (Williams et al., 2018) to account for meaningful inconsistencies.", "Finally, random utterances from other conversations or generic responses such as 'I don't know' comprise the neutral instances.", "Table 1 shows the distribution of labels in the InferConvAI corpus (train/dev/test): entailment 218.2K/12.2K/1.4K, neutral 579.5K/28.0K/3.1K, contradiction 261.9K/9.8K/1.1K, for a total of 1.1M/50.2K/5.6K.", "Following this approach, we build a corpus of 1.1M premise-hypothesis pairs, namely InferConvAI.", "Table 1 summarizes the statistics of InferConvAI.", "In this section, we focus on the task of evaluating the next utterance given the 
conversation history.", "We used the following models to generate responses.", "These models were trained on the conversational datasets until convergence: Seq2Seq with attention mechanism (Bahdanau et al., 2015): predicts the next response given the previous utterance using an encoder-decoder model.", "HRED (Serban et al., 2016): extends the Seq2Seq model by adding a context-RNN layer that accounts for contextual information.", "TA-Seq2Seq (Xing et al., 2017): extends the Seq2Seq model by biasing the overall distribution towards leveraging topic words in the response.", "THRED (Dziri et al., 2018): builds upon the TA-Seq2Seq model by leveraging topic words in the response in a multi-turn dialogue system.", "The training was conducted on two datasets: OpenSubtitles (Tiedemann, 2012) and Reddit (Dziri et al., 2018).", "Due to lack of resources, we randomly sampled 6M dialogues as training data from each dataset, 700K dialogues as development data, and 40K dialogues as test data.", "Each dialogue corresponds to three turn exchanges.", "To accurately evaluate the quality of the generated responses, we recruited five native English speakers.", "The judges annotated 150 dialogues from Reddit", "and 150 dialogues from OpenSubtitles.", "Table 2 reports the accuracy of the inference models on InferConvAI: ESIM + ELMo reaches 0.526 on Reddit and 0.455 on OpenSubtitles, while BERT reaches 0.553 and 0.498, respectively.", "All subjects gave informed consent as required by the Ethics Review Board at the University of Alberta.", "Due to lack of space, we omit an exhaustive description of the human evaluation process and refer readers to Dziri et al. (2018), as we conducted the same evaluation procedure.", "In this section, we evaluate the performance of the state-of-the-art entailment models on predicting a score for the generated utterances.", "In particular, the conversation history H is treated as the premise, whereas the generated response r acts as the hypothesis.", "We pick two state-of-the-art NLI models (i.e., 
ESIM (Chen et al., 2016) and BERT (Devlin et al., 2018)).", "These models were trained on the InferConvAI dataset.", "During evaluation, we use our test dialogue corpus from Reddit and OpenSubtitles, in which the majority vote of the 4-scale human rating constitutes the labels.", "The results are illustrated in Table 2.", "Both models reach reasonable performance in this setting, while BERT outperforms ESIM.", "Note that this experiment examines the generalization capabilities of these inference models, as the test datasets are drawn from an entirely different distribution than the training corpus.", "Figure 1 illustrates the performance of BERT", "for each class with respect to the human scores.", "Table 3 reports the Pearson correlation between the different metrics and human judgments with p-value < 0. (Reddit / OpenSubtitles): SS(H_2) with BERT -0.204 / -0.290, with ELMo -0.146 / -0.365, with USE -0.248 / -0.314; SS(H_1) with BERT -0.214 / -0.337, with ELMo -0.178 / -0.404, with USE -0.287 / -0.320; A with BERT 0.135 / 0.131, with ELMo 0.085 / 0.162, with word2vec 0.037 / 0.196; G with BERT 0.208 / 0.132, with ELMo 0.037 / 0.072, with word2vec -0.033 / 0.015; E with BERT 0.162 / 0.144, with ELMo 0.035 / 0.116, with word2vec -0.065 / 0.118.", "The test utterances that are predicted as entailment tend to be rated higher than other utterances, showing that the entailment models correlate quite well with what humans perceive as a coherent response.", "Another observation is that the inference models often classify acontextual and off-topic responses as neutral, and the annotators typically dislike these types of responses.", "This contributes to the lower scores of neutral-detected responses compared to responses predicted as contradiction.", "As baseline evaluation metrics, we consider three textual similarity metrics (Liu et al., 2016) based on word embeddings: Average (A), Greedy (G), and Extrema (E).", "These word-level embedding metrics have been proven to correlate with human judgment marginally better than other 
word-overlap metrics (e.g., BLEU, ROUGE, and METEOR) (Liu et al., 2016).", "These embedding metrics have critical flaws (Figure 2 shows scatter plots illustrating the correlation between human judgment and the automated metrics on the Reddit test dataset).", "In particular, the sentence is treated as a bag-of-words, disregarding word order and dependencies that are known to be substantial for understanding the semantics of a sentence.", "The correlation of these metrics with human judgment is showcased in Table 3.", "We can notice that the three metrics A, G, and E correlate weakly with human judgment in both datasets, demonstrating the need for a well-designed automated metric that provides an accurate evaluation of dialogues.", "The Semantic Similarity (SS) metric was suggested by Dziri et al. (2018).", "It measures the distance between the generated response and the utterances in the conversation history.", "The intuition of this metric revolves around capturing good and consistent responses by showing whether the machine-generated responses maintain the topic of the conversation.", "In this project, we measured SS with respect to two different utterances: the conversation history H_2 and the most recent utterance H_1.", "The conversation history is formed by concatenating the two most recent utterances.", "We report the Pearson coefficient of this metric with human judgment in Table 3.", "
The SS metric is expected to have a negative correlation, as higher human ratings correspond to lower semantic distance.", "The results demonstrate that SS metrics correlate better than word-level metrics, as they make use of word interactions to represent utterances.", "Moreover, the Universal Sentence Encoder (USE) (Cer et al., 2018) model performs better on Reddit, whereas the ELMo embeddings achieve higher correlation on OpenSubtitles.", "This arguably underlines that deep contextualized word representations can better capture complex characteristics of natural language (e.g., syntax and semantics).", "The SS metric, which requires no pretraining, reaches a Pearson correlation of -0.404 with respect to the most recent utterance on OpenSubtitles.", "Such a correlation can be compared with the correlation of 0.436 achieved by ADEM (Lowe et al., 2017), which required large amounts of training data and computation.", "Moreover, in order to investigate whether the results in Table 3 are in line with human evaluation, we visualized the correlation between the human ratings and the SS metric as scatter plots in Figure 2.", "Evaluating dialogue systems has been heavily investigated, but researchers are still on the quest for a strong and reliable metric that highly conforms with human judgment.", "Existing automated metrics show poor correlation with human annotations.", "In this paper, we present a novel paradigm for evaluating the coherence of dialogue systems by using state-of-the-art entailment techniques.", "We aim at building a system that does not require human annotation, which in turn can lead to a scalable evaluation approach.", "While our results illustrate that the proposed approach correlates reasonably with human judgment and provides an unbiased estimate of response quality, we believe that there is still room for improvement.", "For instance, measuring the engagingness of the conversation would be helpful in evaluating different dialogue strategies." ]
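The three embedding-based baselines discussed above (Average, Greedy, and Extrema; Liu et al., 2016) can be sketched in a few lines of numpy. The tiny embedding table below is illustrative only, not a trained model.

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def average_score(ref, hyp, emb):
    # Average: cosine between the mean word vectors of the two sentences.
    return cos(np.mean([emb[w] for w in ref], axis=0),
               np.mean([emb[w] for w in hyp], axis=0))

def greedy_score(ref, hyp, emb):
    # Greedy matching: each word is matched to its most similar counterpart;
    # the two directions are averaged for symmetry.
    def one_way(xs, ys):
        return float(np.mean([max(cos(emb[x], emb[y]) for y in ys) for x in xs]))
    return (one_way(ref, hyp) + one_way(hyp, ref)) / 2

def extrema_score(ref, hyp, emb):
    # Vector extrema: per dimension, keep the value with the largest magnitude,
    # then compare the resulting sentence vectors.
    def extrema(ws):
        m = np.stack([emb[w] for w in ws])
        idx = np.abs(m).argmax(axis=0)
        return m[idx, np.arange(m.shape[1])]
    return cos(extrema(ref), extrema(hyp))

# Toy 2-dimensional embeddings for demonstration.
emb = {w: np.array(v) for w, v in {
    "i": [0.1, 0.9], "have": [0.5, 0.5], "three": [0.3, 0.4],
    "cats": [0.9, 0.2], "dogs": [0.8, 0.3]}.items()}
ref = ["i", "have", "three", "cats"]
```

All three scores equal 1.0 when reference and hypothesis are identical, which makes the bag-of-words criticism above concrete: any permutation of the same words also scores 1.0.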
[ "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "objective", "abstain", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
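The premise-hypothesis framing above (history as premise, generated response as hypothesis) reduces to a small amount of glue code once an NLI classifier is available. In this sketch the keyword-overlap `stub_nli` is a stand-in for a trained model such as BERT or ESIM, and the label-to-score mapping is an assumption made for illustration.

```python
# Map NLI labels to a coherence score; the numeric values are illustrative.
LABEL_SCORE = {"entailment": 1.0, "neutral": 0.5, "contradiction": 0.0}

def stub_nli(premise, hypothesis):
    # Toy heuristic standing in for a real NLI model: shared content words
    # suggest entailment; an explicit negation of shared words suggests
    # contradiction; no overlap is treated as neutral.
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    overlap = (p & h) - {"i", "you", "do", "the", "a"}
    if overlap and ("don't" in h or "not" in h):
        return "contradiction"
    if overlap:
        return "entailment"
    return "neutral"

def coherence_score(history, response):
    premise = " ".join(history)  # concatenate recent turns as the premise
    return LABEL_SCORE[stub_nli(premise, response)]

print(coherence_score(["Yes, I have three cats"], "I don't have cats"))  # 0.0
```

Swapping `stub_nli` for a model trained on InferConvAI-style data yields the evaluation procedure described in the paper; the surrounding scoring logic is unchanged.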
[ "Bilingual lexicons map words in one language to their translations in another, and are typically induced by learning linear projections to align monolingual word embedding spaces.", "In this paper, we show it is possible to produce much higher quality lexicons with methods that combine (1) unsupervised bitext mining and (2) unsupervised word alignment.", "Directly applying a pipeline that uses recent algorithms for both subproblems significantly improves induced lexicon quality, and further gains are possible by learning to filter the resulting lexical entries, with both unsupervised and semi-supervised schemes.", "Our final model outperforms the state of the art on the BUCC 2020 shared task by 14 F1 points averaged over 12 language pairs, while also providing a more interpretable approach that allows for rich reasoning about word meaning in context.", "Further analysis of our output and the standard reference lexicons suggests they are of comparable quality, and new benchmarks may be needed to measure further progress on this task.", "Bilingual lexicons map words in one language to their translations in another, and can be automatically induced by learning linear projections to align monolingual word embedding spaces (Artetxe et al., 2016; Smith et al., 2017; Lample et al., 2018, inter alia).", "Although very successful in practice, the linear nature of these methods encodes unrealistic simplifying assumptions (e.g. 
all translations of a word have similar embeddings).", "In this paper, we show it is possible to produce much higher quality lexicons without these restrictions by introducing new methods that combine (1) unsupervised bitext mining and (2) unsupervised word alignment.", "Work done during internship at Facebook AI Research.", "Code is publicly available at https://github.com/facebookresearch/bitext-lexind.", "We show that simply pipelining recent algorithms for unsupervised bitext mining (Tran et al., 2020) and unsupervised word alignment (Sabet et al., 2020) significantly improves bilingual lexicon induction (BLI) quality, and that further gains are possible by learning to filter the resulting lexical entries.", "Improving on a recent method for doing BLI via unsupervised machine translation (Artetxe et al., 2019), we show that unsupervised mining produces better bitext for lexicon induction than translation, especially for less frequent words.", "These core contributions are established by systematic experiments in the class of bitext construction and alignment methods (Figure 1).", "Our full induction algorithm filters the lexicon found via the initial unsupervised pipeline.", "The filtering can be either fully unsupervised or weakly supervised: for the former, we filter using simple heuristics and global statistics; for the latter, we train a multi-layer perceptron (MLP) to predict the probability of a word pair being in the lexicon, where the features are global statistics of word alignments.", "In addition to BLI, our method can also be directly adapted to improve word alignment and reach competitive or better alignment accuracy than the state of the art on all investigated language pairs.", "We find that improved alignment in sentence representations (Tran et al., 2020) leads to better contextual word alignments using local similarity (Sabet et al., 2020).", "Our final BLI approach outperforms the previous state of the art on the BUCC 2020 shared task 
(Rapp et al., 2020) by 14 F1 points averaged over 12 language pairs.", "Manual analysis shows that most of our false positives are due to the incompleteness of the reference, and that our lexicon is comparable to the reference lexicon and the output of a supervised system.", "Because both of our key building blocks make use of the pretrained contextual representations from mBART (Liu et al.,", "2020) and CRISS (Tran et al., 2020), we can also interpret these results as clear evidence that lexicon induction benefits from contextualized reasoning at the token level, in strong contrast to nearly all existing methods that learn linear projections on word types.", "Bilingual lexicon induction (BLI).", "The task of BLI aims to induce a bilingual lexicon (i.e., word translations) from comparable monolingual corpora (e.g., Wikipedia in different languages).", "Following Mikolov et al. (2013), most methods train a linear projection to align two monolingual embedding spaces.", "For supervised BLI, a seed lexicon is used to learn the projection matrix (Artetxe et al., 2016; Smith et al., 2017; Joulin et al., 2018).", "For unsupervised BLI, the projection matrix is typically found by an iterative procedure such as adversarial learning (Lample et al., 2018; Zhang et al., 2017), or iterative refinement initialized by statistical heuristics (Hoshen and Wolf, 2018; Artetxe et al., 2018).", "Artetxe et al. 
(2019) show strong gains over previous work by word-aligning bitext generated with unsupervised machine translation.", "We show that retrieval-based bitext mining and contextual word alignment achieve even better performance.", "Word alignment.", "Word alignment is a fundamental problem in statistical machine translation, the goal of which is to align words that are translations of each other within parallel sentences (Brown et al., 1993).", "Most methods assume parallel sentences as training data (Och and Ney, 2003; Dyer et al., 2013; Peter et al., 2017, inter alia).", "In contrast, Sabet et al. (2020) propose SimAlign, which does not train on parallel sentences but instead aligns words that have the most similar pretrained multilingual representations (Devlin et al., 2019; Conneau et al., 2019).", "SimAlign achieves performance competitive with or superior to conventional alignment methods despite not using parallel sentences, and provides one of the baseline components for our work.", "We also present a simple yet effective method to improve performance over SimAlign (Section 5).", "Bitext mining/parallel corpus mining.", "Bitext mining has been a long-studied task (Resnik, 1999; Shi et al., 2006; Abdul-Rauf and Schwenk, 2009, inter alia).", "Most methods train neural multilingual encoders on bitext, which are then used with efficient nearest neighbor search to expand the training set (Espana-Bonet et al., 2017; Schwenk, 2018; Guo et al., 2018; Artetxe and Schwenk, 2019a, inter alia).", "Recent work has also shown that unsupervised mining is possible (Tran et al., 2020; Keung et al., 2020).", "We use CRISS (Tran et al., 2020) as one of our component models.", "We build on unsupervised methods for word alignment and bitext construction, as reviewed below.", "SimAlign (Sabet et al., 2020) is an unsupervised word aligner based on the similarity of contextualized token embeddings.", "Given a pair of parallel sentences, SimAlign computes embeddings using 
pretrained multilingual language models such as mBERT and XLM-R, and forms a matrix whose entries are the cosine similarities between every source token vector and every target token vector.", "Based on the similarity matrix, the argmax algorithm aligns the positions that are the simultaneous column-wise and row-wise maxima.", "To increase recall, Sabet et al. (2020) also propose itermax, which applies argmax iteratively while excluding previously aligned positions.", "We consider two methods for bitext construction: unsupervised machine translation (generation; Artetxe et al., 2019, Section 3.2) and bitext retrieval (retrieval; Tran et al., 2020, Section 3.2).", "Generation Artetxe et al. (2019) train an unsupervised machine translation model with monolingual corpora, generate bitext with the obtained model, and further use the generated bitext to induce bilingual lexicons.", "We replace their statistical unsupervised translation model with CRISS, a recent high-quality unsupervised machine translation model which is expected to produce much higher quality bitext (i.e., translations).", "For each sentence in the two monolingual corpora, we generate a translation into the other language using beam search or nucleus sampling (Holtzman et al., 2020).", "Retrieval Tran et al. 
(2020) show that the CRISS encoder module serves as a high-quality sentence encoder for cross-lingual retrieval: they take the average of the contextualized token embeddings as the sentence representation, perform nearest neighbor search with FAISS (Johnson et al., 2019), and mine bitext using the margin-based max-score method (Artetxe and Schwenk, 2019a).", "The score between sentence representations s and t is defined by score(s, t) = cos(s, t) / ( Σ_{t' ∈ NN_k(t)} cos(s, t')/(2k) + Σ_{s' ∈ NN_k(s)} cos(s', t)/(2k) ), where NN_k(·) denotes the set of k nearest neighbors of a vector in the corresponding space.", "In this work, we keep the top 20% of the sentence pairs with scores larger than 1 as the constructed bitext.", "Our framework for bilingual lexicon induction takes separate monolingual corpora and the pretrained CRISS model as input, and outputs a list of", "bilingual word pairs as the induced lexicon.", "The framework consists of two parts:", "(i) an unsupervised bitext construction module which generates or retrieves bitext from separate monolingual corpora without explicit supervision (Section 3.2), and", "(ii) a lexicon induction module which induces a bilingual lexicon from the constructed bitext based on the statistics of cross-lingual word alignment.", "For the lexicon induction module, we compare two approaches: fully unsupervised induction (Section 4.1), which does not use any extra supervision, and weakly supervised induction (Section 4.2), which uses a seed lexicon as input.", "We align the constructed bitext with CRISS-based SimAlign, and propose to use the smoothed matched ratio ρ(s, t) = mat(s, t) / (coc(s, t) + λ) for a pair of bilingual word types ⟨s, t⟩", "as the metric to induce the lexicon, where mat(s, t) and coc(s, t) denote the one-to-one matching count (e.g., guten-good; Figure 1) and the co-occurrence count of ⟨s, t⟩ appearing in a sentence pair, respectively, and λ is a 
non-negative smoothing term.", "During inference, we predict the target word t with the highest smoothed matched ratio ρ(s, t) for each source word s.", "Like most previous work (Artetxe et al., 2016; Smith et al., 2017; Lample et al., 2018, inter alia), this method translates each source word to exactly one target word.", "We also propose a weakly supervised method, which assumes access to a seed lexicon.", "This lexicon is used to train a classifier to further filter the potential lexical entries.", "For a pair of word types ⟨s, t⟩, our classifier uses the following global features: Count of alignment: we consider the one-to-one alignment (Section 4.1) and the many-to-one alignment (e.g., danke-you and danke-thank; Figure 1) counts of s and t separately as two features, since the task of lexicon induction is arguably biased toward one-to-one alignment.", "We use λ = 20.", "This reduces the effect of noisy alignment: the most extreme case is that both mat(s, t) and coc(s, t) are 1, which is probably not a desirable entry despite the unsmoothed matched ratio of 1.", "Count of occurrence: the count of s in the source language and of t in the target language, as two features.", "Non-contextualized word similarity: we feed the word type itself into CRISS, use the average pooling of the output subword embeddings, and consider both the cosine similarity and the dot-product similarity as features.", "For a counting feature c, we take log(c + ε_c), where the smoothing terms ε_c are learnable parameters.", "There are 7 features in total, denoted by x_⟨s,t⟩ ∈ ℝ⁷.", "We compute the probability of a pair of words ⟨s, t⟩ being in the induced lexicon, P_θ(s, t), by a ReLU-activated multi-layer perceptron (MLP): h_⟨s,t⟩ = ReLU(W₁ x_⟨s,t⟩ + b₁), P_θ(s, t) = σ(w₂ᵀ h_⟨s,t⟩ + b₂), where σ(·) denotes the sigmoid function and θ = {W₁, b₁, w₂, b₂} denotes the learnable parameters of the model.", "Recall that we are able to 
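The fully unsupervised induction step (smoothed matched ratio plus one-best prediction) can be sketched as follows. The input data format is an assumption for illustration: a list of (source tokens, target tokens, alignment) triples, with the alignment given as a set of one-to-one aligned positions:

```python
from collections import Counter


def induce_lexicon(aligned_bitext, lam=20):
    """Rank word-type pairs by the smoothed matched ratio
    rho(s, t) = mat(s, t) / (coc(s, t) + lam), then predict, for each
    source word, the target word with the highest rho."""
    mat, coc = Counter(), Counter()
    for src, tgt, alignment in aligned_bitext:
        for i, j in alignment:            # one-to-one matching counts
            mat[src[i], tgt[j]] += 1
        for s in set(src):                # co-occurrence counts
            for t in set(tgt):
                coc[s, t] += 1
    best = {}
    for (s, t), m in mat.items():
        rho = m / (coc[s, t] + lam)
        if s not in best or rho > best[s][1]:
            best[s] = (t, rho)
    return {s: t for s, (t, _) in best.items()}
```

The additive constant `lam` down-weights pairs seen only a handful of times, mirroring the noisy-alignment argument in the text.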
access a seed lexicon, which consists of pairs of word translations.", "In the training stage, we seek to maximize the log likelihood: θ* = argmax_θ [ Σ_{⟨s,t⟩∈D⁺} log P_θ(s, t) + Σ_{⟨s′,t′⟩∈D⁻} log(1 − P_θ(s′, t′)) ], where D⁺ and D⁻ denote the positive training set (i.e., the seed lexicon) and the negative training set respectively.", "We construct the negative training set by extracting all bilingual word pairs that co-occur in the bitext but are not in the seed lexicon.", "We tune two hyperparameters to maximize the F1 score on the seed lexicon and use them for inference: the prediction threshold and the maximum number n of translations for each source word, following Laville et al. (2020), who estimate these hyperparameters based on heuristics.", "The inference algorithm is summarized in Algorithm 1.", "The idea of using an MLP to induce a lexicon with weak supervision (Section 4.2) can be directly extended to word alignment.", "Let B = {⟨S_i, T_i⟩}_{i=1}^N denote the bitext constructed in Section 3.2, where N denotes the number of sentence pairs, and S_i and T_i denote a pair of sentences in the source and target language respectively.", "(SimAlign sometimes mistakenly aligns rare words to punctuation, and the count features can help exclude such pairs.)", "In a pair of bitext ⟨S, T⟩, S = ⟨s_1, ..., s_{ℓ_s}⟩ and T = ⟨t_1, ..., t_{ℓ_t}⟩ denote sentences consisting of word tokens s_i or t_j.", "For a pair of bitext, SimAlign with a specified inference algorithm produces a word alignment A = {⟨a_i, b_i⟩}_i, denoting that the word tokens s_{a_i} and t_{b_i} are aligned.", "Sabet et al. 
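The positive/negative training-set construction described above is mechanical enough to sketch directly. The bitext format (a list of token-list pairs) and the function name are assumptions for illustration:

```python
def build_training_sets(bitext, seed_lexicon):
    """Positive set: the seed lexicon. Negative set: every bilingual word
    pair that co-occurs in some sentence pair of `bitext` but is not a
    seed entry."""
    positives = set(seed_lexicon)
    cooccurring = {(s, t)
                   for src, tgt in bitext
                   for s in set(src) for t in set(tgt)}
    negatives = cooccurring - positives
    return positives, negatives
```

Note that this makes the negative set much larger than the positive one, which is why a tuned prediction threshold matters at inference time.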
(2020) have proposed different algorithms to induce alignments from the same similarity matrix, and the best method varies across language pairs.", "In this work, we consider the relatively conservative (i.e., higher-precision) argmax and the higher-recall itermax algorithms (Sabet et al., 2020), and denote the resulting alignments by A_argmax and A_itermax respectively.", "We substitute the non-contextualized word similarity feature (Section 4.2) with a contextualized word similarity, where the corresponding word embedding is computed by averaging the final-layer contextualized subword embeddings of CRISS.", "The cosine similarities and dot-products of these embeddings are included as features.", "Instead of the binary classification in Section 4.2, we perform ternary classification for word alignment.", "For a pair of word tokens ⟨s_i, t_j⟩, the gold label y_⟨s_i,t_j⟩ is defined as 1[⟨i, j⟩ ∈ A_argmax] + 1[⟨i, j⟩ ∈ A_itermax].", "Intuitively, the labels 0 and 2 represent confident non-alignment and alignment by both methods respectively, while the label 1 models potential alignment.", "The MLP takes the features x_⟨s_i,t_j⟩ ∈ ℝ⁷ of the word token pair, and computes the probability of each label y by h = ReLU(W₁ x_⟨s_i,t_j⟩ + b₁), g = W₂ h + b₂, P_θ(y | s_i, t_j, S, T) = exp(g_y) / Σ_{y′} exp(g_{y′}), where θ = {W₁, W₂, b₁, b₂}.", "At the training stage, we maximize the log-likelihood of the ground-truth labels: θ* = argmax_θ Σ_{⟨S,T⟩∈B} Σ_{s_i∈S} Σ_{t_j∈T} log P_θ(y_⟨s_i,t_j⟩ | s_i, t_j, S, T).", "At the inference stage, we keep as the prediction all word token pairs ⟨s_i, t_j⟩ with E_P[y] := Σ_y y · P_θ(y | s_i, t_j, S, T) > 1.", "Throughout our experiments, we use a two-layer perceptron with a hidden size of 8 for both 
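The ternary labeling scheme and the expected-label inference filter are simple to state in code. A minimal sketch, with alignments given as sets of (i, j) positions and predicted probabilities as a dict over the labels 0, 1, 2 (both representations are assumptions for illustration):

```python
def ternary_labels(a_argmax, a_itermax, n_src, n_tgt):
    """Gold label for each token pair:
    1[(i, j) in A_argmax] + 1[(i, j) in A_itermax],
    so 2 = aligned by both methods, 1 = potential, 0 = neither."""
    return {(i, j): int((i, j) in a_argmax) + int((i, j) in a_itermax)
            for i in range(n_src) for j in range(n_tgt)}


def keep_pair(probs):
    """Inference filter: keep a token pair iff E[y] = sum_y y * P(y) > 1."""
    return sum(y * p for y, p in probs.items()) > 1
```

The filter keeps a pair when the classifier's probability mass leans toward the confident-alignment label 2 strongly enough to pull the expected label above 1.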
lexicon induction and word alignment.", "We optimize all of our models using Adam (Kingma and Ba, 2015) with an initial learning rate of 5 × 10⁻⁴.", "For our bitext construction methods, we retrieve the best matching sentence or translate the sentences in the source-language Wikipedia; for baseline models, we use their default settings.", "For evaluation, we use the BUCC 2020 BLI shared task dataset (Rapp et al., 2020) and its metric (F1).", "Like most recent work, this evaluation is based on MUSE (Lample et al., 2018).", "We primarily report the BUCC evaluation because it considers recall in addition to precision.", "However, because most recent work only evaluates precision, we include those evaluations in Appendix D. We compare the following baselines: BUCC.", "Best results from BUCC 2020 (Rapp et al., 2020): for each language pair, we take the maximum F1 score between the best closed-track results (Severini et al., 2020; Laville et al., 2020) and the open-track ones (Severini et al., 2020).", "Our method would be considered open-track since the pretrained models used a much larger dataset (Common Crawl in 25 languages) than the BUCC 2020 closed track (Wikipedia or Wacky; Baroni et al., 2009).", "VECMAP.", "A popular and robust method for aligning monolingual word embeddings via a linear projection and extracting lexicons.", "Here, we use the standard implementation with FastText vectors (Bojanowski et al., 2017) trained on the union of the Wikipedia and Common Crawl corpora for each language.", "We include both supervised and unsupervised versions.", "WM.", "WikiMatrix (Schwenk et al., 2019) is a dataset of mined bitext.", "The mining method LASER (Artetxe and Schwenk, 2019b) is trained on real bitext and then used to mine more bitext from the Wikipedia corpora, producing the WikiMatrix dataset.", "We test our lexicon induction method with WikiMatrix bitext as the input and compare to our methods that do not use bitext supervision.", "We evaluate bidirectional 
translations from beam search (GEN; Section 3.2), bidirectional translations from nucleus sampling (GEN-N; Holtzman et al., 2020; we sample from the smallest word set whose cumulative probability mass exceeds 0.5 for next words), and retrieval (RTV; Section 3.2).", "In addition, it is natural to concatenate the global statistical features (Section 4.2) from both GEN and RTV, and we refer to this approach as GEN-RTV.", "Our main results are presented in Table 1.", "All of our models (GEN, GEN-N, RTV, GEN-RTV) outperform the previous state of the art (BUCC) by a significant margin on all language pairs.", "Surprisingly, RTV and GEN-RTV even outperform WikiMatrix in average F1 score, indicating that we do not need bitext supervision to obtain high-quality lexicons.", "Bitext quality.", "Since RTV achieves surprisingly high performance, we are interested in how much the quality of the bitext affects lexicon induction performance.", "We divide all retrieved bitexts with score (Eq. 1) larger than 1 equally into five sections with respect to the score, and compare the lexicon induction performance (Table 2).", "(Implementations: VECMAP, https://github.com/artetxem/VecMap ; FastText, https://github.com/facebookresearch/fastText , with the pretrained vectors listed at https://github.com/facebookresearch/fastText/blob/master/docs/crawl-vectors.md , so that our VECMAP baselines have the same data availability as our main results; WikiMatrix, https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix .)", "In the table, RTV-1 refers to the bitext of the highest quality and RTV-5 to the bitext of the lowest quality, in terms of the margin score (Eq. 1).", "We also add a random pseudo-bitext baseline (Random), where all the bitext is randomly sampled from each language pair, as well as a setting that uses all retrieved sentence pairs with scores larger than 1 (RTV-ALL).", "In general, the lexicon induction performance of RTV correlates well with the quality of the bitext.", "Even using the bitext of the lowest quality ( 
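The RTV-1 through RTV-5 split can be sketched as follows, assuming (as the text suggests) that retrieved pairs above the score threshold are sorted by margin score and cut into equally sized tiers; the exact tiering scheme and the function name are assumptions:

```python
def score_tiers(scored_pairs, n_tiers=5, threshold=1.0):
    """Split retrieved sentence pairs with margin score > threshold into
    equally sized tiers by score: tier 1 holds the highest-scoring pairs,
    tier n_tiers the lowest. Each pair is a tuple whose first element is
    the margin score."""
    kept = sorted((p for p in scored_pairs if p[0] > threshold), reverse=True)
    size = -(-len(kept) // n_tiers)  # ceiling division
    return [kept[i * size:(i + 1) * size] for i in range(n_tiers)]
```

Feeding each tier separately into the lexicon induction module is what produces the quality-vs-performance comparison of Table 2.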
RTV-5), it is still able to induce a reasonably good bilingual lexicon, outperforming on average the best numbers reported by BUCC 2020 participants (Table 1).", "However, RTV achieves poor performance with random bitext (Table 2), indicating that it is only robust to a reasonable level of noise.", "While this is a lower bound on bitext quality, even random bitext does not lead to an F1 of 0, since the model may align co-occurrences of correct word pairs even when they appear in unrelated sentences.", "(See Appendix C for examples from each tier.)", "Word alignment quality.", "We compare the lexicon induction performance using the same set of constructed bitext (RTV) and different word aligners (Table 3).", "According to Sabet et al. (2020), SimAlign outperforms fast_align in terms of word alignment quality.", "We observe that this trend carries over well to lexicon induction: a significantly better word aligner usually leads to a better induced lexicon.", "Bitext quantity.", "We investigate how the BLI performance changes as the quantity of bitext changes (Figure 2).", "We use CRISS with nucleus sampling (GEN-N) to create different amounts of bitext of the same quality.", "We find that with only 1% of the bitext (160K sentence pairs on average) used by GEN-N, our weakly supervised framework outperforms the previous state of the art (BUCC; Table 1).", "Table 3: F1 scores (× 100) on the BUCC 2020 test set (SimAlign / fast_align): de-en 73.0/69.7, de-fr 78.9/69.1, en-de 64.4/61.2, en-es 77.0/72.8, en-fr 73.4/68.5, en-ru 53.1/50.7, en-zh 69.9/66.0, es-en 82.8/79.8, fr-de 80.9/75.8, fr-en 80.0/77.3, ru-en 72.7/70.2, zh-en 62.5/60.2; average 72.4/68.4.", "The model reaches its best performance using 20% of the bitext (3.2M sentence pairs on average) and then drops slightly with even more bitext.", "This is likely because more bitext introduces more candidate word pairs.", "Dependence on word frequency of GEN vs. 
RTV.", "We observe that retrieval-based bitext construction (RTV) works significantly better than generation-based construction (GEN and GEN-N) in terms of lexicon induction performance (Table 1).", "To further investigate the source of this difference, we compare the performance of RTV and GEN as a function of source or target word frequency, where word frequencies are computed from the lower-cased Wikipedia corpus.", "In Figure 3, we plot the F1 of RTV and GEN when the most frequent k% of words are considered.", "When all words are considered, RTV outperforms GEN for 11 of 12 language pairs, the exception being de-fr.", "(Figure 3: F1 against the percentage of source words considered, for RTV, GEN, and VECMAP.)", "In 6 of 12 language pairs, GEN does better than RTV for high-frequency source words.", "As more lower-frequency words are included, GEN eventually does worse than RTV.", "This helps explain why the combined model GEN-RTV is even better, since GEN can have an edge over RTV on high-frequency words.", "The trend that F1(RTV) − F1(GEN) increases as more lower-frequency words are included seems to hold for all language pairs (Appendix A).", "On average and for the majority of language pairs, both methods do better on low-frequency source words than on high-frequency ones (Figure 3a), which is consistent with the findings of BUCC 2020 participants (Rapp et al., 2020).", "VECMAP.", "While BLI through bitext construction and word alignment clearly achieves superior performance to BLI through vector rotation (Table 1), we further show that the gap is larger on low-frequency words (Figure 3).", "Following the advice of Kementchedjhieva et al. 
(2019) that some care is needed due to the incompleteness and biases of the evaluation, we perform a manual analysis of selected results.", "For Chinese-English translations, we uniformly sample 20 wrong lexicon entries according to the evaluation for both GEN-RTV and weakly supervised VECMAP.", "Our judgments of these samples are shown in Table 4.", "For GEN-RTV, 18/20 of these sampled errors are actually acceptable translations, whereas for VECMAP, only 11/20 are acceptable.", "This indicates that the improvement in quality may be partly limited by the incompleteness of the reference lexicon, and that the ground-truth performance of our method might be even better.", "The same analysis for English-Chinese is in Appendix B.", "(Table 4, excerpt of sampled entries with judgments for GEN-RTV and VECMAP: depot ✓, endorsing ✗, wasting ✓, preconditions ?)", "Furthermore, we randomly sample 200 source words from the MUSE zh-en test set, and compare the quality of the MUSE translations with those predicted by GEN-RTV.", "This comparison is MUSE-favored since only MUSE source words are included.", "Concretely, we take the union of the word pairs, construct a new ground truth by manual judgment (i.e., removing unacceptable pairs), and evaluate the F1 score against the constructed ground truth (Table 5).", "The overall gap of 3 F1 means that a higher-quality benchmark is necessary to resolve further improvements over GEN-RTV.", "The word pairs and judgments are included in the supplementary material (Section F).", "We evaluate different word alignment methods (Table 6) on existing word alignment datasets, following Sabet et al. 
(2020).", "We investigate four language pairs: German-English (de-en), English-French (en-fr), English-Hindi (en-hi), and Romanian-English (ro-en).", "We find that CRISS-based SimAlign already achieves competitive performance with the state-of-the-art method (Garg et al., 2019), which requires real bitext for training.", "By ensembling the argmax and itermax CRISS-based SimAlign results (Section 5), we set a new state of the art for word alignment without using any bitext supervision.", "However, by substituting the CRISS-based SimAlign in the BLI pipeline with our aligner, we obtain an average F1 score of 73.0 for GEN-RTV, which does not improve over the 73.3 achieved by CRISS-based SimAlign (Table 1), indicating that further effort is required to take advantage of the improved word aligner.", "We present a direct and effective framework for BLI with unsupervised bitext mining and word alignment, which sets a new state of the art on the task.", "From the perspective of pretrained multilingual models (Conneau et al., 2019; Liu et al., 2020; Tran et al., 2020, inter alia), our work shows that they have successfully captured information about word translation that can be extracted using similarity-based alignment and refinement.", "Although BLI is only about word types, it strongly benefits from contextualized reasoning at the token level.", "(Word alignment datasets: umich.edu/mihalcea/wpt for en-fr and ro-en; https://web.eecs.umich.edu/mihalcea/wpt05 for en-hi.)", "Acknowledgment We thank Chau Tran for help with pretrained CRISS models, as well as Mikel Artetxe, Kevin Gimpel, Karen Livescu, Jiayuan Mao, and anonymous reviewers for their valuable feedback on this work." ]
[ "abstain", "result", "abstain", "result", "objective", "abstain", "abstain", "objective", "abstain", "other", "result", "result", "abstain", "result", "method", "objective", "result", "result", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "objective", "abstain", "method", "method", "abstain", "objective", "method", "method", "abstain", "method", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "result", "abstain", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "other", "abstain", "other", "abstain", "other", "objective", "other", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "other" ]
[ "Research on the application of NLP in symbol-based Augmentative and Alternative Communication (AAC) tools for improving social interaction support is scarce.", "We contribute a novel method for generating context-related vocabulary from photographs of personally relevant events, aimed at supporting people with language impairments in recounting their past experiences.", "Performance was calculated using information retrieval concepts on the relevance of vocabulary generated for communicating a corpus of 9730 narrative phrases about events depicted in 1946 photographs.", "In comparison to a baseline generation composed of frequent English words, our method generated vocabulary with a 4.6 gain in mean average precision, regardless of the level of contextual information in the input photographs, and 6.9 for photographs in which contextual information was extracted correctly.", "We conclude by discussing how our findings provide insights for system optimization and usage.", "Augmentative and Alternative Communication (AAC) tools can enhance communication for nonspeaking individuals, thus offering improved social interaction and independence.", "Well-established NLP techniques, such as spell check and word prediction, support those with primarily physical barriers to communication (e.g., adults with ALS) to compose complex and nuanced sentences in orthographic-based systems more efficiently.", "However, those with developmental disabilities (e.g., autism spectrum disorder, ASD) or lexical and semantic processing impairments that limit their ability to spell out words (e.g., adults with aphasia, a language disorder most often caused by a stroke) must usually rely on less expressive symbol-based systems, for which those techniques offer little support for communication.", "Users of symbol-based AAC typically do not construct full, grammatically correct sentences, complete with prepositions and inflections, but rather often only need a few key content 
words (i.e., nouns, adjectives, verbs) appearing at any part of the sentence to supplement other forms of communication, including preserved speech, gestures, or drawings.", "Such scattered use of vocabulary hinders the typical statistical prediction approach, which relies on patterns learnt from a large training corpus.", "Nonetheless, there is much opportunity for improving symbol-based AAC, which is often abandoned because it offers too little communication support relative to the effort required to learn and use it (Moffatt et al., 2017).", "Selecting and organizing vocabularies that can meet users' communication needs in a wide variety of contexts, and such that users can find words quickly, is one of the major challenges (van de Sandt-Koenderman, 2004; Bailey et al., 2006).", "Alphabetical organizations are not useful, and traditional hierarchical schemes based on abstract categories (e.g., food → apple) are difficult for people with language impairments, making navigation extremely slow for anything but the smallest (least useful) vocabularies.", "Presenting vocabulary as a flat hierarchy is best (Beukelman et al., 2015; Brock et al., 2017; Wallace and Hux, 2014); however, only a very limited set of options can be displayed, making communication very reliant on having the desired keywords among the available options.", "Providing concise situation-relevant vocabularies currently depends on support from a clinician or caregiver to pre-program the device.", "But such support is often limited or not available, which consequently limits these devices to supporting generic expressions of wants and needs, i.e., functional communication, and not social interactions involving spontaneous narratives (Waller, 2019).", "Generating vocabulary from users' contextual data through Natural Language Generation (NLG) techniques seems an obvious avenue to facilitate social interactions.", "Although NLG has been successfully applied in the context of task-oriented dialogs (He et al., 
2017), question answering (Su et al., 2016), text summarization (See et al., 2017), and story generation from photograph sequences (Hsu et al., 2020), it is unclear how these techniques can be adapted to the specific needs of AAC support (Tintarev et al., 2014).", "In this paper, we call for more research in the NLP community devoted to language generation for symbol-based AAC systems.", "We present an overview of the scarce research on the topic and contribute a method that generates vocabulary automatically from a user's photographs to support autobiographical storytelling, demonstrating how it performs under different combinations of the system's controllable parameters and a wide range of input photographs.", "NLP research on AAC systems has mainly focused on improving the communication rate of orthographic-based tools, primarily via attempts to reduce keystrokes with letter, word, or message prediction, applying n-gram language models to the user input (Swiffin et al., 1985; Garay-Vitoria and Abascal, 2006; Fazly and Hirst, 2003; Trnka et al., 2007; Trnka and McCoy, 2008).", "Researchers have also explored techniques for improving prediction by including in the language model some sort of contextual information, such as the topic of conversation (Lesher and Rinkus, 2002; Trnka et al., 2006), the user's location (Garcia et al., 2015), their past utterances (Kristensson et al., 2020; Copestake, 1997; Wandmacher et al., 2008), or their partner's speech (Wisenburn and Higginbotham, 2008).", "Virtually all commercial text-based high-tech AAC devices employ some form of n-gram prediction (Higginbotham et al., 2012).", "Many people with developmental (e.g., ASD) or acquired disabilities have difficulty using written language, and therefore need support other than orthographic-based AAC.", "People with expressive aphasia, for example, present lexical and semantic processing impairments that affect their ability to retrieve the names of objects, combine linguistic 
elements, and use grammar.", "Nonetheless, they usually have good receptive communication skills and preserved intellectual abilities, and typically desire the ability to communicate complex ideas and share social stories spontaneously, such as describing a recent activity or experience (Garrett, 2005).", "(We also witnessed this in interactions observed in conversation groups at a local aphasia institute in which the first author participated for 9 months.)", "To support this population, researchers from the clinical community (McKelvey et al., 2010; Dietz et al., 2006; McKelvey et al., 2007; Beukelman et al., 2015) have successfully explored the presentation of vocabulary associated with personally relevant and highly contextualized photographs, where people, objects, and activities are depicted in their naturally occurring contexts (also known as visual scene displays, VSDs).", "Evidence indicates greater conversational turn-taking with fewer instances of frustration and navigational errors (Brock et al., 2017), and increased lexical retrieval during activity retell (Mooney et al., 2018), for which participants perceived this kind of support as very helpful.", "However, the automation of the language production process to support those social narratives is still largely unexplored.", "For example, Mooney et al.'s system CoChat (2018) generates keywords from human input simulating social network comments.", "NLP was used only to clean the input and identify nouns and frequent words.", "As a consequence, available commercial tools depend on human effort to plan and program relevant vocabulary, leading to a lack of spontaneous and independent communication and requiring a great amount of time from caregivers (Drager et al., 2019).", "Generating language for AAC systems is highly different from typical NLG usage, mainly because the goal of AAC is to provide support for communicating users' thoughts, and not to replace the user by 
an automatic communicator (Tintarev et al., 2014).", "The Compansion system (Demasco and McCoy, 1992; McCoy et al., 1998) was one of the first attempts to apply NLG towards that goal.", "It was designed to produce grammatically correct sentences from incomplete user input using a small domain model.", "Although Compansion was dedicated to functional communication, its concept of using domain knowledge served as a stepping stone to Dempster et al.'s (2010) system aimed at generating conversational utterances.", "In their prototype, users populated a personal knowledge base by recording where, when, and with whom they performed an activity shortly after its end.", "Through a template-driven system, users' knowledge was converted into conversational utterances organized by topic that could be accessed during subsequent conversations.", "This work showed promising results on how NLG can support social dialogues and increase the participation of AAC users.", "However, their system still required considerable manual linguistic input from users.", "Automatic generation of storytelling vocabulary has been successfully explored by researchers (Reiter, 2007; Black et al., 2010; Tintarev et al., 2016) to support children with limited memory or with physical and intellectual impairments in telling "how was school today" to their parents.", "In their project, raw sensor data from passive RFID tags relating to locations, objects, and people was aggregated into events, and then transformed into coherent personal narratives using domain knowledge containing the school timetable and the RFID tag mapping.", "Beyond a fixed scenario (e.g., school), Demmans Epp et al. 
(2012) explored the use of information retrieval algorithms on internet-accessible corpora such as websites, dictionaries, and Wikipedia pages related to the user's current location or conversation topic.", "Although this approach was useful for augmenting a base vocabulary with context-specific terms, it is limited to locations (e.g., retail locations) for which internet-accessible corpora are likely to exist.", "Our method generates a ranking of key words and short narrative phrases from a single input photo for scaffolding storytelling.", "It was designed to be used as the back end of interactive AAC systems in which relevant vocabulary is associated with a main photograph, such as Mooney et al.'s CoChat, or as in the example design shown in Fig. 1. We used VIST-TRAIN, a subset of the visual storytelling dataset VIST (Huang et al., 2016), as the main source for vocabulary generation.", "VIST-TRAIN encompasses 80% of the entire dataset, and is composed of 65,394 photos of personal events, grouped in 16,168 stories.", "Each photo is annotated with descriptions and narrative phrases that are part of a story, created by Amazon Mechanical Turk workers.", "We judged VIST to be a good source of vocabulary because", "i) photos were extracted from personal Flickr albums on a wide range of storyable events, related to 69 topics (e.g., graduation, building a house),", "ii) the associated vocabulary is representative of storytelling, and", "iii) stories and photo descriptions were constructed by a large number (1907) of workers under a rigorous procedure.", "The generation process is composed of five steps, as detailed below and illustrated in Fig. 2. 
We explore different implementations for some of the steps, represented by the system's controllable parameters, emphasized with bold italic formatting throughout the paper.", "The different combinations of those parameters are evaluated in the next section.", "The first step extracts contextual information from the photograph in the form of a high-level, humanlike description of the scene (i.e., a caption) using the computer vision technique from Fang et al. (2015).", "Captioning was chosen over pure object detection and labelling due to the necessity of communicating the scene as a whole.", "(A single input photo is used to reduce the requirements on users, who may feel discouraged if multiple photos of the event are needed.)", "(Fig. 2, example: input photo captioned 'a man sitting at a table with a birthday cake with lit candles'; generated words: cake, candle, family, birthday, wish, happy, everyone, enjoy, surprise, blew, eat, celebrate, birthday, present, age, balloon; ground-truth narration: 'we had birthday cake, there was so many candles and he loves chocolate cake so that's what I made'; pipeline: User Input Photo, Create Description, VIST Descriptions, Calculate Similarity Between Descriptions.)", "This step finds the subset of VIST-TRAIN photos most similar to the user input by calculating the sentence similarity between the input photo description and the descriptions of all VIST-TRAIN photos.", "All photos with description similarity higher than the parameter Similarity Threshold are selected for processing in the next step, with an upper limit of 30 photos.", "Sentence similarity is defined as the soft cosine similarity (Sidorov et al., 2014) on a bag-of-words representation of the sentences using Word2Vec embeddings, after removing stop words.", "Soft cosine was chosen as the similarity measure due to its ability to capture the semantic relatedness between different words.", "This strategy was motivated by the fact that soft cosine similarity with Word2Vec was effective for finding similar sentences in question-answering systems, achieving the best performance at SemEval-2017 Task 3 (Charlet and Damnati, 2017).", "Similarity based on entire 
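The soft cosine similarity used here can be sketched compactly. A minimal sketch under stated assumptions: bags of words are dicts mapping word to count, and `word_sim(w1, w2)` is a caller-supplied callable (e.g., the cosine of Word2Vec embeddings), not part of the original system:

```python
def soft_cosine(a, b, word_sim):
    """Soft cosine similarity (Sidorov et al., 2014) between two
    bag-of-words vectors `a` and `b` (dicts of word -> count), using a
    word-level similarity function to relate different words."""
    def soft_dot(x, y):
        return sum(word_sim(w1, w2) * c1 * c2
                   for w1, c1 in x.items() for w2, c2 in y.items())
    return soft_dot(a, b) / ((soft_dot(a, a) ** 0.5) * (soft_dot(b, b) ** 0.5))
```

With an identity word similarity this reduces to ordinary cosine; with a semantic similarity, related but non-identical words (e.g., cake and candle) contribute partial credit, which is exactly why it was chosen over the plain cosine.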
documents (e.g., Doc2Vec) was not used because it would require a much larger (at present, nonexistent) training corpus to create proper document embeddings, and there are no pre-trained sentence embeddings trained exclusively on photo descriptions.", "(We use the Gensim library implementation of soft cosine similarity, and stop words as defined by the Natural Language Toolkit, NLTK.)", "All narrative sentences associated with the selected photos are retrieved for processing in the next stage.", "The number of sentences per photo varies from 1 to 5 (μ = 3.1, σ = 1.4).", "This step identifies a group of representative sentences and words from the retrieved set by applying Affinity Propagation clustering (Frey and Dueck, 2007), which is able to generate clusters with less error than other exemplar-based algorithms and does not require a predetermined number of clusters.", "The final set of generated phrases is formed by these clusters' exemplars, ranked according to their respective cluster sizes.", "By definition, this strategy results in phrases covering the wide range of semantics present in the set of retrieved phrases, while at the same time removing redundant (i.e., very similar) phrases.", "In case of non-convergence (< 3% in our evaluation), the set of recommended phrases is formed by ranking all phrases according to the sum of their soft cosine similarity against all other phrases retrieved.", "(Affinity Propagation settings: damping: 0.5, max.)", "The generated base vocabulary is formed by ranking word frequencies after filtering out stop words and applying a Porter stemmer to merge different variations (e.g., worked, working → work).", "The parameter Selection Method determines whether frequencies are calculated considering all retrieved phrases (ALL_PHRASES) or only the clusters' exemplars (EXEMPLARS).", "The goal of this step is to diversify the base vocabulary derived from VIST-TRAIN to increase communication flexibility.", "Thus, to find words that are related to, but distinct from, the initial concept (e.g., cake → sweet), our method uses a 
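The non-convergence fallback (rank each phrase by the sum of its similarity to all other retrieved phrases) is straightforward to sketch. The phrase-similarity callable `sim` is an assumption standing in for the soft cosine described above, and the function name is illustrative:

```python
def rank_by_centrality(phrases, sim):
    """Fallback when Affinity Propagation does not converge: rank each
    phrase by the sum of its similarity to every other retrieved phrase,
    so the most 'central' phrases come first."""
    def centrality(p):
        return sum(sim(p, q) for q in phrases if q is not p)
    return sorted(phrases, key=centrality, reverse=True)
```

A phrase that overlaps semantically with many others scores highest, which approximates the exemplar-selection behavior of the clustering step without requiring convergence.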
model of the human mental lexicon as a secondary source of vocabulary.", "In this model, SWOW (De Deyne et al., 2019), words are connected with a certain strength representing their relatedness, constructed from data of word-association experiments with over 90,000 participants.", "Therefore, unlike embeddings, SWOW encodes mental representations free from the basic demands of communication.", "This strategy was motivated by the fact that word-association data was successfully applied in a controlled study to support people with aphasia in navigating related words more effectively (Nikolova et al., 2010), and that evidence from cognitive science research indicates that the network formed by associations in SWOW presents a widespread thematic, rather than taxonomic, structure, with strongly associated words often occurring in the same situation (e.g., pick-strawberry; candle-church) (De Deyne et al., 2015).", "This last step expands the initial set of base vocabulary by adding, for each word, the most strongly associated words in the SWOW data.", "The system parameter Expansion Size determines how many words from SWOW are added for each word in the base vocabulary set.", "Repeated words are not included.", "The goal of our evaluation is to understand how our design choices, represented by the system's controllable parameters, along with uncontrollable factors related to the input photograph (i.e., uncontrollable parameters), affect the system's performance.", "Thus, we compared the relevance of vocabulary generated under different combinations of these parameters to investigate the following specific research questions: 1. What combination of controllable system parameters related to the base vocabulary generation optimizes performance?", "2. How does the level of contextual information in the input photo affect performance?", "3. How does the quality of the contextual description inferred from the input photo affect performance?", "4.
How does the level of contextual information in the input photo affect the quality of the inferred description?", "5. What is the effect of expanding the base generated vocabulary with words from a mental lexicon model on the system's performance?", "Considering the AAC application usage scenario, the performance of vocabulary generation can be conceptualized as the combination of two factors:", "i) communication flexibility, i.e., whether vocabulary needed for composing messages about a specific experience is provided, and", "ii) communication ease, i.e., the difficulty of finding a particular word among all options generated.", "These two factors directly map to the information retrieval concepts of precision (P) and recall (R), as a perfect algorithm would provide all words the user needs to communicate the desired message (R = 1), and would not contain any irrelevant vocabulary (P = 1), thereby minimizing the need for scanning.", "In contrast, the worst algorithm would provide only irrelevant vocabulary (P = R = 0).", "Therefore, we tackle the vocabulary generation evaluation as an information retrieval problem, where the input photo is treated as the user query, generated words and phrases are treated as retrieved documents, and crowdsourced narrative sentences about the photograph are the relevant documents, i.e., ground truth (as detailed in Section 4.2).", "For each input photo, difficulty in finding vocabulary and communication flexibility are operationalized as P and R, respectively: P(n) = |{rel_words} ∩ G_n| / n and R(n) = |{rel_words} ∩ G_n| / |{rel_words}|, where n is the number of words displayed to the user, rel_words are the words in the ground-truth sentences, and G_n are the top n words in the generated vocabulary rank.", "We also calculated F1, a common information retrieval measure that captures the trade-off between P and R: F1(n) = 2 · P(n) · R(n) / (P(n) + R(n)). We calculated these metrics for all n ∈ [1,
100], and constructed the P-R curves with the arithmetic mean values of P, R, and F1 across all input photographs under analysis.", "In contrast to BLEU/METEOR metrics, this analysis allows us to clearly demonstrate trade-offs between the difficulty of finding a word among options and communication flexibility, which is important because the number of displayed items will vary for each user.", "To obtain a single measure of system performance across this entire interval, considering all input photos, we approximate the area under the P-R curves by calculating the mean average precision: mAP = Σ_{n=1}^{100} P(n) · (R(n) − R(n−1)). As input photographs and ground-truth sentences, we used VIST-VAL, a subset of VIST not employed in our method that contains 8034 photos aligned with crowdsourced stories.", "We selected all photos from VIST-VAL containing the maximum number of sentences available (5) to act as our input photographs, resulting in 1946 photos.", "The ground-truth vocabulary for each photograph was formed by joining the five associated narrative phrases (9730 in total), after removing stop words.", "Controllable Parameters Base Vocab. (RQ1).", "We defined four configurations of parameters by crossing two extreme values of Similarity Threshold, i.e., 0 and best (highest similarity score among all of VIST-VAL), with the Selection Methods all_phrases and exemplars, resulting in four configurations: 0_ALL, 0_EXEMPLARS, BEST_ALL, BEST_EXEMPLARS.", "Expansion Size was set to 0 in all configurations.", "In the absence of similar AAC generation systems to compare our method to, we created a BASELINE generation formed by a rank of the most frequent words from the Corpus of Contemporary American English (COCA) (Davies, 2009) without stop words.", "We adopted this baseline because current AAC tools are commonly built on word usage frequency data (Renvall et al., 2013).", "The optimal values for the parameters established in this analysis were applied in
subsequent analyses.", "Contextual Information Level (RQ2, RQ4).", "To investigate the variability caused by different input photographs, we adopted the concept of context richness from Beukelman et al. (2015).", "The first author scored each photo from 0–3 based on the number of contextual categories (environment, people/object, activity) it clearly depicts (0 when ambiguous).", "To validate these annotations, someone unfamiliar with the study also scored a subset of 514 photos (27.8% of the dataset).", "Krippendorff's alpha reliability score was 0.82, indicating strong agreement between raters (Krippendorff, 2004).", "Context Description Quality (RQ3, RQ4).", "The first author scored each photo description from 0 to 3 as follows: 0) not generated or completely unrelated; 1) misses most important elements OR contains most of the important elements and a few unrelated elements; 2) contains most of the important elements OR all important elements and a few unrelated elements; 3) contains all important elements in the photo and does not contain any unrelated elements.", "As for contextual information level, a subset of 514 photos was scored by someone unfamiliar with the study.", "Krippendorff's alpha reliability score was 0.88, confirming strong agreement.", "Effect of Vocabulary Expansion (RQ5).", "We created 24 configurations by combining different base vocabulary sizes (5, 10, 15, 20, 25, 30) with the expansion sizes (0, 1, 2, 3).", "The configuration [5-2], for example, contains five base words plus two expanded words per base word, resulting in a maximum of 15 words (or fewer if expanded words were already in the base set).", "RQ1.", "To better illustrate the differences in performance, Fig.
3 presents the P-R curves, while Table 1 shows the mAP and maximum mean P and R values for the parameter-value pairs under investigation, in comparison to the baseline.", "Overall, 0_ALL results in the best performance, with an mAP 4.6 times greater than the baseline and 1.8 times greater than the worst configuration, BEST_EXEMPLARS (all annotations are available at https://doi.org/10.5683/SP2/NVI701).", "RQ2.", "In our input dataset, the proportion of photos according to their context richness score was: 8% (0), 54% (1), 30% (2), 8% (3).", "A Mann-Whitney U test indicated a significant difference in P and R only between photos with context richness 0 and the remaining levels (p < .002).", "Table 2 shows the mean performance metrics according to level of contextual information.", "RQ3.", "The distribution of input photos across context description quality scores was: 16% (0), 16% (1), 30% (2), 38% (3).", "We plot the P-R curves according to the context description quality scores in Fig. 4, and summarize performance metrics in Table 3. A Mann-Whitney U test indicated no significant differences between photo quality 1 and 2 (p > .2).", "However, photos with description quality 3 significantly outperformed the other groups (p < .001), and quality 0 photos performed significantly worse than all other groups (p < .001).", "RQ4.", "Fig.
5 illustrates the relationship between the level of contextual information in the input photos and the quality of the photo descriptions generated using machine learning.", "As expected, photos with ambiguous contextual information (level = 0) most often received bad captions (53%).", "As context richness increased, the relative proportion of photos with good descriptions (scores 2 or 3) also increased (39%, 69%, 72%, 80%), but the relative proportion of perfect descriptions (quality = 3) decreased (46%, 31%, 19%).", "Photos depicting only one type of contextual information (location, person/object, activity) resulted in the best descriptions: 46% received perfect descriptions, and 66% of all perfect descriptions were given to them.", "However, when compared to photos with more contextual information, they presented the highest relative proportion of very bad captions (15% vs 9.1% and 5.7%).", "RQ5.", "Fig. 6 compares the performance of different combinations of base vocabulary and expansion sizes against base vocabulary only, as a function of the number of displayed words n.", "In general, for a given n, generation without expansion resulted in superior performance.", "However, on configurations for which a high proportion of expanded words were already in the base vocabulary (e.g., n = 6, 21, 61), expansion presented similar or even better F1 scores than the base vocabulary on its own.", "We also analyzed the F1 score, averaged across all photos, as a function of the proportion of expansion words not present in the base vocabulary during generation (Fig.
7).", "The mean F1 for generation without word expansion is also plotted for comparison.", "We found that word expansion is able to bring an improvement in performance when less than 60% of the expansion words are included in the final generated vocabulary; in other words, when more than 40% of the expansion words are already in the base vocabulary.", "The tendency is that the lower the proportion of expansion words not in the base vocabulary, the higher the performance.", "The design space for generating AAC storytelling vocabulary directly from photographs is vast and underexplored.", "Design decisions for individual system components will impact other components and ultimately the overall system effectiveness, and therefore cannot be arbitrary.", "Without a rigorous performance evaluation on different configurations of parameters, users would be at risk of using a flawed or under-optimized system, which could lead to user frustration and abandonment, and cause confounds that obscure whether failures are due to the need for algorithmic tuning or a mismatch between the intended support and user needs.", "The study of controllable parameters (RQ1, 5) demonstrated that our method is able to provide relevant vocabulary, and showed how it can be used to optimize the system and identify areas for further improvement.", "The exploration of uncontrollable parameters (RQ2, 3, 4) helped illustrate the likely variation in system performance during real-world usage (i.e., a wide variety of input photos), allowing us to better anticipate potential problems or pitfalls and understand requirements for use.", "The similar performance across photos with different levels of contextual information (RQ2) suggests that our method is robust to variations in the input photograph.", "Users will not need to be instructed to take photographs following specific requirements, e.g., photos should demonstrate an action or photos should depict objects only.", "The similar levels of performance are
explained by the pattern observed in the RQ4 analysis; the more elements a photo contains, the better knowledge the machine-learning model has to infer the central aspect of the photo, but at the same time, the harder it is to capture each and every element.", "In addition, an element wrongly identified will have less impact on the overall scene understanding, since other elements complement the description.", "An example would be a photo of a birthday party, in which the machine-learning platform is able to infer the central concept (birthday) from the several elements depicted (e.g., cake, candles, balloons), but misses some of the details (e.g., drinks).", "Conversely, simplistic photos will rarely lead to elements being cut out, but the computer vision technique will have more variability when performing the inferences, leading to erroneous descriptions more often.", "On the other hand, the quality of generated vocabulary was strongly dependent on the computer vision technique employed to extract contextual information about the scene (RQ3).", "When a wrong description is generated, the subsequent steps of the algorithm are misled and therefore generate vocabulary less relevant for retelling the scene depicted in the photograph.", "Nonetheless, even in this case, an AAC device using our method would provide vocabulary more relevant than if the most frequent English words were provided.", "Since photos for which the computer vision technique was able to correctly identify all contextual elements resulted in substantial performance gains, we encourage further exploration of this component.", "An option would be to use a higher number of raw context labels instead of the single human-like description employed in this work.", "Our vocabulary expansion analysis (RQ5) provides valuable insights into how the combination of multiple lexicon sources can generate more relevant vocabulary.", "The most promising approach was to combine the visual-to-story dataset with
strongly associated words from a mental-lexicon model, but only when there was high intersection between the two vocabularies.", "Although VIST contains a very large range of events, one limitation is that it is unlikely to cover all possible scenarios, and may not accurately reflect AAC communication.", "However, in the absence of an appropriate AAC-specific corpus (a known issue in the community), we believe the VIST dataset can meaningfully represent the vocabulary needed for scaffolding storytelling.", "In addition, we do not expect the performance gains observed will directly translate to the same gains in usability.", "Our goal was to understand fundamental questions necessary for advancing to a usability study, helping fine-tune system components before introducing them to users and avoiding unnecessary interactions with identifiably poor designs.", "Our approach also enables larger numbers of parameters to be examined.", "The low level of social participation commonly observed among people with aphasia, combined with the rate-limited nature of AAC, would require field experiments lasting an impractical amount of time to produce sufficient data to comprehensively explore possible combinations of parameters (Kristensson et al., 2020).", "As a potential improvement to our method, Sent2Vec trained with BERT may better represent sentence structure and word context for finding similar photo descriptions in step 2 than our use of soft cosine with Word2Vec.", "Another option would be the use of query expansion to enrich the descriptions.", "We encourage the exploration of the vast array of strategies for tackling the vocabulary generation process for AAC.", "Developing a photo-to-story vocabulary AAC system presents two challenges: an NLP one in how to generate such vocabularies, and a Human-Computer Interaction (HCI) one in how to use such vocabulary to offer interactive language support.", "In this work, we tackle the first challenge.", "We demonstrated that our method is
able to generate vocabulary with reasonable levels of recall and precision, regardless of the level of contextual information in the input photograph, illustrated the likely variation in system performance during real-world usage, and provided meaningful insights for fine-tuning the algorithm, enabling us to move to the next phase of designing and evaluating, with AAC users, our mobile interactive application.", "This research was funded by the Fonds de Recherche du Québec – Nature et Technologies (FRQNT), the Natural Sciences and Engineering Research Council of Canada (NSERC) [RGPIN-2018-06130], the Canada Research Chairs Program (CRC), and by AGE-WELL NCE, Canada's technology and aging network." ]
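The precision, recall, and mAP computations described in the evaluation section above can be sketched as follows. This is a minimal illustration with hypothetical word lists; the function and variable names are our own, not the authors' implementation:

```python
def precision_recall(generated, relevant, n):
    # P(n) = |rel_words ∩ G_n| / n ; R(n) = |rel_words ∩ G_n| / |rel_words|,
    # where G_n is the set of the top-n words in the generated vocabulary rank.
    top_n = set(generated[:n])
    hits = len(top_n & relevant)
    return hits / n, hits / len(relevant)

def f1(p, r):
    # Harmonic mean of precision and recall; defined as 0 when both are 0.
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def mean_average_precision(generated, relevant, max_n=100):
    # Approximate area under the P-R curve: sum of P(n) * (R(n) - R(n-1)).
    map_score, prev_r = 0.0, 0.0
    for n in range(1, max_n + 1):
        p, r = precision_recall(generated, relevant, n)
        map_score += p * (r - prev_r)
        prev_r = r
    return map_score

# Hypothetical example: a ranked vocabulary vs. ground-truth story words.
vocab = ["cake", "party", "balloon", "dog", "candle"]
truth = {"cake", "candle", "birthday"}
p, r = precision_recall(vocab, truth, 5)  # p = 0.4, r = 2/3
```

Averaging P(n), R(n), and F1(n) over all input photos for each n then yields the P-R curves described in the evaluation.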
[ "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "objective", "other" ]
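The SWOW-based vocabulary expansion step described earlier (adding, for each base word, its Expansion Size most strongly associated words, with repeated words excluded) can be sketched as below. The association lists here are toy stand-ins for SWOW, and all names are illustrative rather than the authors' implementation:

```python
def expand_vocabulary(base, associations, expansion_size):
    # For each base word, take its top `expansion_size` associates
    # (assumed pre-ranked by association strength) and append only those
    # not already present, mirroring the [base-expansion] configurations
    # whose final size is a maximum, reduced by duplicates.
    expanded = list(base)
    seen = set(base)
    for word in base:
        for neighbor in associations.get(word, [])[:expansion_size]:
            if neighbor not in seen:
                expanded.append(neighbor)
                seen.add(neighbor)
    return expanded

# Toy association lists standing in for SWOW.
assoc = {
    "cake": ["sweet", "party", "birthday"],
    "party": ["fun", "cake"],
}
result = expand_vocabulary(["cake", "party"], assoc, expansion_size=2)
# ["cake", "party", "sweet", "fun"]: "party" and "cake" were skipped as
# duplicates, so fewer than the maximum of 6 words are produced.
```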
[ "Cross-lingual transfer learning with large multilingual pre-trained models can be an effective approach for low-resource languages with no labeled training data.", "Existing evaluations of zero-shot cross-lingual generalisability of large pre-trained models use datasets with English training data, and test data in a selection of target languages.", "We explore a more extensive transfer learning setup with 65 different source languages and 105 target languages for part-of-speech tagging.", "Through our analysis, we show that pre-training of both source and target language, as well as matching language families, writing systems, word order systems, and lexical-phonetic distance significantly impact cross-lingual performance.", "The findings described in this paper can be used as indicators of which factors are important for effective zero-shot cross-lingual transfer to zero- and low-resource languages.", "At present, for a large majority of natural language processing tasks, the most successful approach is fine-tuning pre-trained models with task-specific labelled data.", "Unfortunately, for many languages, and especially low-resource languages, such task-specific labelled data is often not available.", "A potential solution is cross-lingual fine-tuning of multilingual pre-trained language models (Conneau et al., 2020; Devlin et al., 2018), using available data from some source language to model the phenomenon in a different target language for which labelled data does not exist.", "Cross-lingual generalisability of large pre-trained language models is often evaluated by fine-tuning multilingual models on English data and testing them on unseen languages (Conneau et al., 2018; Artetxe et al., 2020; Lewis et al., 2020; Hu et al., 2020).", "Of course, this approach is influenced by the availability of English training data for given tasks, but it also comes with the implicit assumption that English is a representative source language.", "This, however, may not be
true in practice.", "Specifically, depending on the task, aspects of similarity between source and target language may be relevant for cross-lingual transfer performance (de Vries et al., 2021).", "If similarity between source and target language impacts performance, cross-lingual transfer should not be assessed using only a single predetermined source language, especially if training sets in multiple languages are available.", "Furthermore, target test languages are generally selected based on data availability for the evaluated tasks, but availability may not result in a representative subset of the world's languages.", "The XTreme benchmark collection (Hu et al., 2020), for example, attempts to alleviate this problem by including a varied selection of languages from different language families.", "This collection contains token classification, text classification, question answering and retrieval tasks in 40 languages.", "The language selection does, however, obfuscate the fact that for most non-Indo-European languages no data is available for semantically rich tasks such as question answering.", "This imbalance regarding tasks in this type of collection may consequently inflate the perceived performance for these languages.", "In this work, we aim to shed light on what factors make a language a good source and/or target language for cross-lingual transfer when fine-tuning a large multilingual model.", "We evaluate this via part-of-speech (POS) tagging data, as this is the only task for which high-quality data is available in a large number of languages, including low-resource languages from different language families.", "Also, high cross-lingual POS tagging performance may be seen as a precondition for more semantically complex tasks, as a base understanding of syntactic structure in both the source and target language is necessary for any meaningful natural language processing task.", "Contributions This paper is a case study of cross-lingual transfer
learning with part-of-speech tagging.", "We explore the limits and contributing factors to successful cross-lingual transfer and part-of-speech tagging in particular.", "Among others, we evaluate the effects of (matching) language families, (matching) writing systems, and pre-training on cross-lingual training.", "Moreover, we provide insights that can help to estimate performance when one tries to transfer to a low-resource language with little or no annotated data.", "Source code is released on GitHub, and 65 fine-tuned models are shared via the Hugging Face Hub.", "We fine-tune a pre-trained model for the task of part-of-speech tagging using different languages in training and testing.", "Every combination of source and target language yields an accuracy score, with a large matrix of accuracies as a result.", "Monolingual, or within-language performance is the accuracy where the source and target language are the same.", "Overall cross-lingual source or target accuracies can be calculated per column or row in the accuracy matrix, excluding the monolingual accuracy.", "Such accuracies give an overall indication of", "(i) how suitable a given language is as source for cross-lingual POS tagging, and", "(ii) how easy or difficult it is to POS-tag a given target language when monolingual training data is not available.", "Predictors Through a mixed-effects regression analysis, with source and target language (family) as random-effect factors, we assess which variables determine a good source language.", "The variables we consider are whether or not the language family is shared between source and target language, the writing systems (and writing system types) of both languages and whether or not these match, the subject-object-verb (SOV) word order of both languages and whether or not these match, and whether or not a (source or target) language was included in pre-training.", "Additionally, we add the (lexical-phonetic) LDND measure (Wichmann et al.,
2010) on the basis of the 40-item word lists from the ASJP database (Wichmann et al., 2010) as a quantitative similarity measure comparing source and target language.", "Finally, we also consider the size of the training set of the source language as a predictor (source code: https://github.com/wietsedv/xpos; models: https://hf.co/spaces/wietsedv/xpos).", "Task data We use the POS tag data from Universal Dependencies 2.8 (Zeman et al., 2021).", "It contains manually annotated data for 114 languages; among these all have test data and 75 languages have training data.", "We exclude three mixed-code languages, one sign language, three languages with fewer than 10 test samples and two languages that do not have any word-level annotations.", "Moreover, we exclude training data for five languages that have fewer than 25 training samples.", "All other training datasets consist of at least 125 samples.", "As a result, we have 105 languages which can serve as target languages, of which 65 can also serve as source languages since they have training data.", "Model The XLM-RoBERTa base model (Conneau et al., 2020) is used for our experiments.", "XLM-RoBERTa is pre-trained on web-crawled data from 100 languages (those with the largest Wikipedia sizes).", "For our dataset, 53 of our 65 source languages and 58 of our 105 target languages were included in XLM-RoBERTa pre-training.", "Data sampling Typical fine-tuning procedures train for a fixed number of epochs on the training data.", "However, there is a substantial amount of variation in the size of our source language datasets (127 to 163,106 sentences).", "In such a situation, choosing a fixed number of epochs might result in underfitting for the smaller languages and overfitting for the larger languages.", "Figure 1 shows that accuracies start decreasing with more than 10K samples, so we choose this threshold for further evaluation.", "Consequently, the 25 source languages with more than 10K training samples are randomly undersampled, whereas the
other 40 languages are oversampled (i.e. multiple epochs).", "The four languages with more than 50K training samples (German, Czech, Russian and Turkish) achieve the highest overall average accuracy with 1250, 20K, 1250 and 10K samples, respectively, showing that undersampling can improve cross-lingual performance.", "Within-language performance does keep increasing with longer training, which indicates that longer training can cause source language overfitting.", "Training procedure All models are trained with the same hyperparameter settings.", "(Preliminary experiments have shown no performance gain with the large model variant, so out of practical and environmental considerations, we limit our experiments to the base model.)", "Specifically, the models are trained for 1,000 batches of 10 samples with a linearly decreasing learning rate starting at 5e-5.", "We use 10% dropout between transformer layers and 10% self-attention dropout.", "These hyperparameters were selected based on preliminary experiments with the English, Dutch, Armenian, Marathi and Chinese source languages.", "Models for different source languages were trained with the same random seed.", "Figure 2 illustrates the test accuracies for every combination of source and target language.", "The heat map shows that the model achieves relatively high performance for cases where the source and target language is the same (outlined in black).", "While for many languages same-language training is the only way to achieve high performance (for example Maltese), there are also many target languages for which high performance is observed when training on several other languages (for example Russian).", "Indeed, within-language performance tends to be high, with a mean accuracy of 94.1% (σ = 4.5).", "However, there is a substantial amount of variation for cross-lingual accuracies, with an overall mean of 57.4% (σ = 22.
4).", "This shows that cross-lingual training does not universally yield good performance.", "We evaluate several predictors for inclusion (see Section", "2) by adding them to a linear mixed-effects model with random intercepts for source language,", "target language, and target language family.", "No other random intercepts were found to improve the model (via model comparison).", "We ascertained that the predictors of the final model remained significant when the corresponding random slopes were included.", "These are not further reported, however.", "Fixed-effect predictors were included if they significantly (p < 0.05) improved the model fit as determined via (maximum likelihood) model comparison.", "Table 1 shows the predictors included in the final model.", "This mixed-effects regression model yields a conditional R² of 91.1% and a marginal R² of 47.1%.", "In other words, the included fixed effects explain 47.1% of the variance, whereas an additional 44% is captured by the random effects (i.e.
other language-related factors).", "Regarding the random effects, the variance explained by the target language was more than three times as high as the variance explained by the source language, reflecting the fact that the POS accuracy is much more strongly linked to the target language than to the source language.", "This is also visible in Figure 2, where the rows are much more variable than the columns.", "Table 1 shows that the best predictor for accuracy differences is whether or not the target language is included in pre-training, with an estimated 19.2% higher accuracy for target languages that were included.", "Similarly, performance is higher when the
SlovakHebrewMarathiLatvianLithuanianArmenianEstonianFinnishKazakhTamilTeluguHindiUrduThaiScottish Gaelic South Levantine Arabic IrishWelshVietnameseAncient Greek KoreanBretonBuryatLivviNaijaOld Church Slavonic Old French CantoneseChineseKomi Zyrian MokshaErzyaKomi Permyak KhunsariNayiniSwiss German Low Saxon BhojpuriKangriGothicAssyrianMundurukuKaaporMakurapClassical Chinese JapaneseOld Turkish SanskritMalteseGuajajaraMbya Guarani BambaraYorubaAkkadianAkuntsu Wolof North Sami KicheManxChukchiSkolt Sami WarlpiriApurinaTupinamba T a r ge t l anguage 0 20 40 60 80 Figure 2: Universal Dependencies part-of-speech tagging accuracies for every combination of source (column) and target (row) languages by fine-tuning XLM-RoBERTa base on the source language.", "source language is included in pre-training, but with a much smaller effect (5.6%) as the target language.", "There is an additional increase of 7.4% in accuracy if both the source language and target language are included in pre-training.", "Consequently, inclusion in pre-training, especially the target language, is highly important for achieving high cross-lingual performance.", "This is unfortunate for many low-resource languages that are not included in pretraining, as the benefit from cross-lingual transfer will be limited.", "Specific examples of underperforming languages that were not included in pre-training are discussed in Section 5.1.", "The ASJP-based LDND measure has the strongest effect on predicted accuracy after target language inclusion in pre-training with a coefficient of 12 .", "70 (for the predictor which was scaled between 0 and 1).", "Figure 3 shows that low LDND distances between source and target language (i.e. 
when two languages share cognates) are indeed associated with high accuracy, whereas high LDND distances (very dissimilar languages) seem less informative.", "This significant effect might be surprising, as the measure is based on (broad) phonetic transcriptions of single words, but measures of linguistic distance at different linguistic levels are correlated (Spruit et al., 2009).", "Whether source and target languages are part of the same language family has a considerable effect on accuracy (see Table 1).", "(Footnote 4: Preliminary experiments have shown that splitting the large Indo-European language family into its major branches does not contribute to the explainability of the model.)", "[Figure 4: Average accuracies per source and target language family combination, based on target languages that were included in pre-training.]", "Therefore, when choosing a source language, the best option would be a language from the same family.", "Figure 4 shows the average accuracies per language family combination.", "This figure is based solely on target languages that were included in pre-training, since absence from pre-training has a large negative effect on performance, as previously discussed (see Section 4.1).", "The Japanese and Sino-Tibetan (Chinese, Classical Chinese and Cantonese) target languages only reach reasonable accuracies with Japanese, Sino-Tibetan or Korean source languages.", "These target languages reach a macro-averaged accuracy below 50% across language families.", "This could be a reflection of 
the type of writing system in those languages (see Section 4.4 for a dedicated discussion), but this is not certain.", "Tai-Kadai (Thai), Korean, and Austro-Asiatic (Vietnamese) languages also reach relatively low cross-family macro-average accuracies (up to 60%), whereas the remaining target language families generally reach higher performance.", "In Section 3, we found that accuracy is higher if the source and target language are the same, but transfer can work between different families.", "Figure 4 shows that some family combinations might not be suitable for transfer, but since the lower-performing families contain small numbers of languages, it is difficult to reach definitive conclusions.", "Regarding writing systems, we distinguish writing system types (i.e. alphabetic, logosyllabic, abjad, and abugida) from the more fine-grained writing systems (e.g., Armenian, Greek, Cyrillic, and Latin are all alphabetic).", "Cross-lingual POS-tagging accuracy is higher if the source and target writing system types are similar.", "If the two languages share the same writing system, performance is even better (see Table 1).", "Languages that share a writing system, such as the Latin script, can benefit from a shared vocabulary if those languages have some lexical overlap (Pires et al., 2019).", "(Footnote 5: Characters in logosyllabic writing systems represent full words (logograms) or syllables.", "In abugida writing systems, consonants and vowels are combined as single units.", "This can make abugida writing systems similar to syllabic writing systems for character-based NLP systems.", "Abjad writing systems only use characters for consonants, whereas vowels are implied.)", "However, a shared vocabulary also introduces cross-lingual homography problems, where the same token has different meanings, and thus possibly different grammatical functions, in different languages.", "Neither aspect is present for languages that 
use different writing systems, even if the vocabulary is technically shared within a multilingual model.", "Figure 5 shows average cross-writing-system accuracies.", "Some singleton writing systems reach very low accuracies.", "These are the logosyllabic Chinese characters, the Kana (Japanese) and Hangul (Korean) writing systems, and Thai, which is an abugida writing system.", "There are several other writing systems that are used by a single target language and achieve high performance regardless of the source writing system, i.e. Hebrew, Tamil and Telugu.", "This might indicate that the data or the language itself is easier than other target languages.", "Cross-script transfer seems to work well for a subset of writing systems.", "Languages with a logosyllabic writing system or the Thai writing system tend to perform poorly with source languages that use different writing systems.", "However, these writing systems are not used across language families, so it is difficult to attribute these findings specifically to the writing systems themselves.", "Having discussed significant predictors in detail, we now take a closer look at \"bad\" source languages, thereby providing a better understanding of how to choose a \"good\" source language (Section 5.1).", "We also identify some optimal source-target language pairs (Section 5.2), and \"optimal\" source languages for our task (Section 5.3).", "Figure 2 shows that many source languages (columns) achieve high performance for at least a subset of the target languages, and also that some source languages never achieve high cross-lingual accuracies.", "While overall contributing factors have been discussed in Section 4, here we unpack why some source languages yield low accuracy.", "Source languages should achieve their highest performance on themselves as target languages.", "This is not the case for Arabic (higher accuracy on Ukrainian), Korean (higher accuracy on Hebrew) and Spanish (higher accuracy on Catalan).", "Excluding those languages, the lowest 
within-language accuracy is achieved by Sanskrit (84.2%).", "We identify poorly performing source languages as those where the best cross-lingual accuracy is below that 84.2% threshold.", "Based on this threshold, we identify 19 source languages that perform sub-optimally on every target language except themselves.", "The full set of source languages contains 12 languages that were not included in XLM-RoBERTa pre-training (see red column labels in Figure 2).", "Out of these 12 languages, nine are in the bottom 25% of source languages ranked by overall accuracy: Ancient Greek, Classical Chinese, Gothic, Maltese, Naija, North Sami, Old Church Slavonic, Old French and Wolof.", "The remaining three source languages that were not included in pre-training are Faroese, Old East Slavic and Western Armenian.", "The written forms of these three languages are considered mutually intelligible with at least one language that was included in pre-training.", "Specifically, written Faroese is mutually intelligible with Icelandic (Barbour and Carmichael, 2000), Old East Slavic with Russian, Belarusian and Ukrainian (Andersen, 2003), and Western Armenian with (Eastern) Armenian (Adalian, 2010).", "No similar mutual intelligibility pairs were found for the nine poorly performing non-pre-trained source languages.", "This indicates that while inclusion in pre-training is optimal for both the source and the target language, inclusion of a mutually intelligible language variant can be sufficient for source languages.", "Other source languages that never achieve high transfer performance but that were present in pre-training are Sanskrit, Arabic, Chinese, Japanese, Vietnamese, Uyghur, Irish, Marathi, Hebrew, and Tamil.", "For Uyghur and Irish, no clear cause could be found for their low performance.", "This is not the case for the other languages, however.", "Sanskrit is effectively not present in pre-training, since the Universal Dependencies data mainly contains romanised Sanskrit, whereas the data in 
the XLM-RoBERTa pre-training uses the Devanagari writing system.", "Serbian is the only other evaluated source language where the writing system in Universal Dependencies is not used in pre-training.", "However, the Latin script that is used in Universal Dependencies is used in the Croatian pre-training data, and Croatian is structurally and in written form effectively the same language as Serbian (Kordic, 2010).", "(Footnote 6: If we consider these languages as pre-trained in the mixed-effects model of Section 3, the marginal R² would increase from 47.1% to 54.6%.)", "For Arabic, the problem seems to be a poor model fit in general, since the trained model for Arabic also achieves only 75.9% accuracy on Arabic test data.", "We did not identify a clear external factor for why Arabic performance is so low, since other genetically related languages and other languages that use the Arabic writing system perform better.", "Problems with Chinese, Japanese and Vietnamese might originate from issues with logosyllabic writing systems (see Section 4.4).", "Japanese uses its own unique syllabic writing system, and the Vietnamese language uses a romanised version of (logographic) Chinese characters.", "Logosyllabic writing systems therefore seem to transfer poorly to other languages.", "The languages in our set of source languages with logosyllabic writing systems are Japanese, Chinese, Classical Chinese and Cantonese.", "These four languages are among the bottom 20% of source languages by average cross-lingual accuracy.", "While the source writing system type was not identified as a significant predictor in the mixed-effects regression model, this could be because logosyllabic writing systems are not used across multiple language families.", "The three remaining poorly performing languages are Marathi, Hebrew and Tamil.", "These three languages are the only evaluated source languages with fewer than 200 training sentences.", "Therefore, the reason for the low performance of 
these source languages could be the lack of sufficient training data.", "Overall, these findings suggest that a good source language should: (1) be included in the pre-training data with the same writing system as the task-specific training data (alternatively, a mutually intelligible related language must be included); (2) achieve good within-language performance, since one cannot expect high cross-lingual performance if a model performs poorly on the source language itself; (3) use the same type of writing system as the target language, as transfer between different alphabetic writing systems (i.e. Latin and Cyrillic) can work well, but lower performance is observed for logosyllabic writing systems (see Section 4.4); and (4) have sufficient training data available, since only 200 training sentences seems too little.", "For every target language, the best source language can be determined by taking the source language with the highest accuracy.", "Some highly similar languages are each other's best source language.", "In our set of languages, we found 11 such pairs: Estonian and Finnish; Icelandic and Faroese; French and Italian; Chinese and Japanese; Irish and Scottish Gaelic; Croatian and Serbian; Catalan and Spanish; Belarusian and Ukrainian; Hindi and Urdu; Armenian and Western Armenian; and English and Swedish.", "All of these pairs, except English and Swedish, originate from either the same country or from countries that are geographic neighbours.", "Moreover, most of these pairs are closest siblings according to the Ethnologue genetic classification scheme (Eberhard et al., 2021), compared to alternative languages in our language set.", "The exceptions are English and Swedish (both are Germanic languages, but for instance Dutch is closer to English, and Norwegian is closer to Swedish), Chinese and Japanese (separate families, but Japanese has many Chinese loan words) and Catalan and Spanish (Portuguese is genetically closer to Spanish than Catalan is).", "Some of these pairs are known to have 
mutual intelligibility (see Section 5.1) and share common ancestor languages.", "This shows that optimal cross-lingual performance can be achieved by pairing highly similar languages.", "However, since all of these pairs are languages that were included in pre-training, it is unclear whether this also holds for low-resource languages that were not included.", "Romanian and Swedish are the most common best source languages across target languages, being optimal for 10 and 7 target languages, respectively.", "Alternatively, optimal cross-lingual performance can be determined by taking the average cross-lingual accuracy per source language.", "According to this measure, the best source languages are still Romanian (67.2%) and Swedish (65.9%).", "This criterion ranks English as 19th out of 65 source languages, with an average accuracy of 62.4%.", "All languages that perform better than English are Indo-European except Estonian (Uralic), and English is the fifth-best source language from the Germanic Indo-European branch.", "Romanian is also, on average, the best source language for both the set of target languages that were included in pre-training (81.5%) and the set of non-pre-trained languages (49.8%).", "This shows that even though cross-lingual transfer commonly takes English as a source language, English might not be the best source language overall.", "However, overall average performance might not be a good measure of source language quality, because it introduces a strong Indo-European bias due to the large number of Indo-European languages in our target language set.", "If we determine the best source language per target language family (or Indo-European branch), we find that the best source language is from a different language family for 23 out of 30 families.", "Again, Romanian is the best general source language, since it is the best source language for seven different families.", "All other best source languages occur twice (Chinese, Uyghur and Wolof) or once (17 
languages).", "In short, for this particular task, with this particular dataset, Romanian as source language achieves the best cross-lingual performance.", "We show that simply fine-tuning a large multilingual pre-trained language model on English data does not necessarily make full use of the cross-lingual potential of the model.", "Especially when one applies cross-lingual training for a low-resource language with little or no evaluation data, the different factors that influence performance should be kept in mind.", "Unfortunately, one of the most important factors highlighted by our experiments is that the target language, or a highly similar language variant, should be included in pre-training for cross-lingual training to be successful.", "For current language models, this excludes many languages and a large number of language families.", "For those languages, the most important step is to collect unlabeled data for pre-training (although the amount of data required may be relatively modest; de Vries et al., 2021).", "A well-chosen source language can achieve high cross-lingual performance across language families and writing systems, at least for target languages that use alphabetic writing systems.", "The English language, which is the de facto default source language for cross-lingual training, is not necessarily the best source language.", "Due to data availability, our experiments focused on POS tagging, but we hypothesise that the factors we identified may be predictive for other tasks too.", "The significant influence of lexical-phonetic distances and word order differences on accuracies indicates that similar languages are encoded similarly in XLM-RoBERTa, even if there is no lexical overlap due to differing writing systems.", "Thus, these factors potentially also influence more syntax-dependent tasks, such as parsing, and semantically rich tasks, such as natural language inference.", "We gratefully acknowledge the support of the Dutch Research Council (NWO Aspasia grant for M. 
Nissim) and the financial support of the Center for Groningen Language and Culture (CGTC).", "We used freely available data and a freely available pre-trained model for our experiments.", "Our experimental setup required fine-tuning many large language models, but we ran preliminary experiments on a few languages to determine whether we could achieve sufficient performance with a small model size.", "As this indeed was the case, the environmental impact was limited compared to using a larger model.", "Moreover, to limit the need for future fine-tuning efforts for this task, we release all of the fine-tuned models." ]
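The analysis above reports a conditional R² of 91.1% and a marginal R² of 47.1% for the mixed-effects model. A minimal sketch of how these quantities follow from the model's variance components (in the spirit of the Nakagawa–Schielzeth method; the function name is ours, and the variance shares below are simply chosen to be consistent with the numbers reported in the text, not taken from the fitted model):

```python
def r2_mixed(var_fixed, var_random, var_residual):
    """Marginal and conditional R^2 for a mixed-effects model.

    Marginal R^2: share of total variance explained by fixed effects alone.
    Conditional R^2: share explained by fixed plus random effects.
    """
    total = var_fixed + var_random + var_residual
    marginal = var_fixed / total
    conditional = (var_fixed + var_random) / total
    return marginal, conditional

# Illustrative variance shares matching the reported numbers:
# fixed effects 47.1%, random effects 44.0%, residual 8.9%.
marginal, conditional = r2_mixed(0.471, 0.440, 0.089)
print(round(marginal, 3), round(conditional, 3))  # 0.471 0.911
```

The gap between the two values (here 44%) is exactly the share of variance attributed to the random effects, i.e. the language-related grouping factors.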
[ "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", 
"abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method" ]
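The ASJP-based LDND predictor discussed above is, at its core, a length-normalized Levenshtein distance (LDN) averaged over word pairs and then divided by the expected distance between non-cognate word pairs. A minimal sketch of the normalized part only; the ASJP word lists and the chance-correction divisor of the full LDND are omitted, and the example transcriptions are illustrative:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def ldn(a: str, b: str) -> float:
    """Levenshtein distance normalized by the longer word's length (in [0, 1])."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

# Identical transcriptions have distance 0; dissimilar ones approach 1.
print(ldn("hand", "hand"), ldn("hand", "mano"))  # 0.0 0.5
```

Averaging such normalized distances over a fixed word list for a language pair, and dividing by the average distance between words with different meanings, yields the LDND value used as a predictor in the regression.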
[ "1 School of Information, Renmin University of China, Beijing, China, 2 Department of Computer Science and Technology, Tsinghua University, Beijing, 3 Mila Quebec AI Institute.", "Recent works on knowledge base question answering (KBQA) retrieve subgraphs for easier reasoning.", "The desired subgraph is crucial, as a small one may exclude the answer but a large one might introduce more noise.", "However, the existing retrieval is either heuristic or interwoven with the reasoning, causing reasoning on partial subgraphs, which increases the reasoning bias when the intermediate supervision is missing.", "This paper proposes a trainable subgraph retriever (SR) decoupled from the subsequent reasoning process, which enables a plug-and-play framework to enhance any subgraph-oriented KBQA model.", "Extensive experiments demonstrate that SR achieves significantly better retrieval and QA performance than existing retrieval methods.", "Via weakly supervised pre-training as well as end-to-end fine-tuning, SR achieves new state-of-the-art performance when combined with NSM (He et al., 2021), a subgraph-oriented reasoner, for embedding-based KBQA methods.", "Code and datasets are available online.", "Knowledge Base Question Answering (KBQA) (Zhang et al., 2021) aims to seek answers to factoid questions from structured KBs such as Freebase, Wikidata, and DBpedia.", "KBQA has attracted a lot of attention, as the logically organized entities and their relations are beneficial for inferring the answer.", "Semantic parsing-based (SP-based) methods (Das et al., 2021; Lan and Jiang, 2020; Sun et al., 2020) and embedding-based methods (He et al., 2021; Sun et al., 2018, 2019) are the two mainstream approaches to KBQA.", "The former heavily rely on expensive annotation of intermediate logic forms such as SPARQL.", "Instead of parsing the questions, the latter directly represent and rank entities based on their relevance to input 
questions.", "Among them, the models which first retrieve a question-relevant subgraph and then perform reasoning on it (He et al., 2021; Sun et al., 2018, 2019) reduce the reasoning space, showing superiority compared with reasoning on the whole KB (Chen et al., 2019a; Saxena et al., 2020; Xu et al., 2019) (cf. Table 2 for empirical evidence).", "Subgraph retrieval is crucial to the overall QA performance, as a small subgraph is highly likely to exclude the answer but a large one might introduce noise that affects the QA performance.", "Figure 1(a) presents the answer coverage rates of subgraphs of different sizes on two widely used KBQA datasets, WebQSP (Yih et al., 2016) and CWQ (Talmor and Berant, 2018).", "We extract the full multi-hop topic-centric subgraph and control the graph size by the personalized PageRank (PPR) (Haveliwala, 2003) scores of entities.", "We also present the QA performance (Hits@1) of NSM (He et al., 2021), a state-of-the-art embedding-based model, under the same subgraph sizes in Figure 1(b).", "It is observed that although larger subgraphs are more likely to cover the answer, the QA performance drops dramatically when the subgraph includes more than 5,000 nodes.", "Moreover, it is inefficient to extract such a full multi-hop subgraph for online QA.", "[Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5773-5784, May 22-27, 2022. ©2022 Association for Computational Linguistics]", "The results show that such heuristic retrieval is far from optimal.", "To improve the retrieval performance, PullNet (Sun et al., 2019) proposes a trainable retriever, but the retrieving and the reasoning processes are intertwined.", "At each step, an LSTM-based retriever selects new relations relevant to the question, and a GNN-based reasoner determines which tail entities of the new relations should be expanded into the subgraph.", "As a result, the inference as well as the 
training of the reasoner needs to be performed on the intermediate partial subgraph.", "Since the intermediate supervision is usually unobserved, reasoning on partial subgraphs increases the bias, which will eventually affect answer reasoning on the final entire subgraph.", "This paper proposes a subgraph retrieval enhanced model for KBQA, which devises a trainable subgraph retriever (SR) decoupled from the subsequent reasoner.", "SR is devised as an efficient dual-encoder that can expand paths to induce the subgraph and can stop the expansion automatically.", "After that, any subgraph-oriented reasoner, such as GRAFT-Net (Sun et al., 2018) or NSM (He et al., 2021), can be used to deduce the answers from the subgraph.", "Such separation of retrieval and reasoning ensures that reasoning is performed only on the final entire subgraph instead of on intermediate partial subgraphs, which enables a plug-and-play framework to enhance any subgraph-oriented reasoner.", "We systematically investigate the advantages of various training strategies for SR, including weakly supervised/unsupervised pre-training and end-to-end fine-tuning with the reasoner.", "Instead of ground-truth paths, we extract the shortest paths from a topic entity in the question to an answer as weak supervision signals for pre-training.", "When the QA pairs themselves are also scarce, we construct pseudo (question, answer, path) labels for unsupervised pre-training.", "To further teach the retriever with the final QA performance, we enable end-to-end fine-tuning, which injects the likelihood of the answer conditioned on a subgraph, as feedback from the reasoner, into the prior distribution of the subgraph to update the retriever.", "We conduct extensive experiments on WebQSP and CWQ.", "The results reveal four major advantages: (1) SR, combined with existing subgraph-oriented reasoners, achieves gains (+0.4-9.7% Hits@1 and 1.3-8.7% F1) over the same reasoner performed with other retrieval 
methods.", "Moreover, SR together with NSM creates new state-of-the-art results for embedding-based KBQA models.", "(2) With the same coverage rate of the answers, SR can produce much smaller subgraphs from which more accurate answers can be deduced.", "(3) The unsupervised pre-training can improve Hits@1 by about 20% when no weak supervision data is provided.", "(4) The end-to-end fine-tuning can enhance the performance of the retriever as well as the reasoner.", "Contributions.", "(1) We propose a trainable SR decoupled from the subsequent reasoner to enable a plug-and-play framework for enhancing any subgraph-oriented reasoner.", "(2) We devise SR as a simple yet effective dual-encoder, which achieves significantly better retrieval and QA results than the existing retrieval methods.", "(3) NSM equipped with SR, via weakly supervised pre-training and end-to-end fine-tuning, achieves new SOTA performance for embedding-based KBQA methods.", "KBQA solutions can be categorized into SP-based and embedding-based methods.", "SP-based methods (Bao et al., 2016; Berant and Liang, 2014; Das et al., 2021; Lan and Jiang, 2020; Liang et al., 2017; Qiu et al., 2020b; Sun et al., 2020) parse a question into a logic form that can be executed against the KB.", "These methods need to annotate expensive logic forms as supervision or are limited to narrow domains with a few logical predicates.", "Embedding-based methods embed entities and rank them based on their relevance to the question, where the entities are extracted from the whole KB (Miller et al., 2016; Saxena et al., 2020) or restricted to a subgraph (Chen et al., 2019a; He et al., 2021; Sun et al., 2018; Zhang et al., 2018).", "They are more fault-tolerant, but the whole KB or the ad-hoc retrieved subgraph includes many irrelevant entities.", "Some works, such as PullNet (Sun et al., 2019), SRN (Qiu et al., 2020a), IRN (Zhou et al., 2018), and UHop (Chen et al., 2019b), enhance the retrieval by training the retriever, but the 
retrieving and the reasoning are intertwined, causing reasoning on partially retrieved subgraphs.", "Because of this coupled design, the reasoner in SRN, IRN, and UHop degenerates into a simple MLP.", "In contrast, thanks to a decoupled design, the reasoner can be sophisticated enough to support more complex reasoning.", "Other works propose more sophisticated reasoners for supporting numerical reasoning in KBQA (Feng et al., 2021).", "Open-domain QA (OpenQA) aims to answer questions based on a large number of documents.", "Most OpenQA models also consist of a retriever to identify the relevant documents and a reasoner to extract the answers from the documents.", "The retriever is devised as a sparse term-based method such as BM25 (Robertson and Zaragoza, 2009) or a trainable dense passage retrieval method (Karpukhin et al., 2020; Sachan et al., 2021), and the reasoner deals with each document individually (Guu et al., 2020) or fuses all the documents together (Izacard and Grave, 2021).", "Different from the documents in OpenQA, the subgraphs in KBQA can only be obtained by multi-hop retrieval, and the reasoner should deal with the entire subgraph instead of each individual relation to find the answer.", "Although some OpenQA research proposes multi-hop document retrieval (Asai et al., 2020), the focus is on matching documents, rather than relations as in KBQA, to the questions.", "Thus the concrete solution for KBQA should be different from that for OpenQA.", "A knowledge base (KB) G organizes factual information as a set of triples, i.e. 
, G = {(e, r, e′) | e, e′ ∈ E, r ∈ R}, where E and R denote the entity set and the relation set, respectively.", "Given a factoid question q, KBQA aims to figure out the answers A_q to the question q from the entity set E of G.", "The entities mentioned in q are topic entities, denoted by E_q = {e_q}, which are assumed to be given.", "This paper considers complex questions where the answer entities are multiple hops away from the topic entities, called multi-hop KBQA.", "Probabilistic Formalization of KBQA.", "Given a question q and one of its answers a ∈ A_q, we formalize the KBQA problem as maximizing the probability distribution p(a | G, q).", "Instead of directly reasoning on G, we retrieve a subgraph 𝒢 ⊆ G and infer a on 𝒢.", "Since 𝒢 is unknown, we treat it as a latent variable and rewrite p(a | G, q) as: p(a | G, q) = Σ_𝒢 p_θ(a | q, 𝒢) p_φ(𝒢 | q). (1)", "In the above equation, the target distribution p(a | G, q) is jointly modeled by a subgraph retriever p_φ(𝒢 | q) and an answer reasoner p_θ(a | q, 𝒢).", "The subgraph retriever p_φ defines a prior distribution over a latent subgraph 𝒢 conditioned on a question q, while the answer reasoner p_θ predicts the likelihood of the answer a given 𝒢 and q.", "The goal is to find the optimal parameters θ and φ that maximize the log-likelihood of the training data, i.e. 
, L(θ, φ) = max_{θ,φ} Σ_{(q,a) ∈ D} log Σ_𝒢 p_θ(a | q, 𝒢) p_φ(𝒢 | q), (2) where D is the whole training set.", "Thanks to this formulation, the retriever can be decoupled from the reasoner by first training the retriever p_φ and then the reasoner p_θ on the subgraphs sampled by the retriever.", "By drawing a sample 𝒢̃ (Sachan et al., 2021), we can approximate Eq. (2) as: L(θ, φ) = max_{θ,φ} Σ_{(q,a,𝒢̃) ∈ D} log p_θ(a | q, 𝒢̃) + log p_φ(𝒢̃ | q), (3) where the first and the second term can be optimized for the reasoner and the retriever, respectively.", "The concrete reasoner can be instantiated by any subgraph-oriented KBQA model, such as the GNN-based GRAFT-Net (Sun et al., 2018) and NSM (He et al., 2021).", "The retriever needs to calculate p_φ(𝒢 | q) for any 𝒢, which is intractable as the latent variable 𝒢 is combinatorial in nature.", "To avoid enumerating 𝒢, we propose to expand the top-K paths relevant to q from the topic entities and then induce the subgraph following these paths.", "Path expansion starts from a topic entity and follows a sequential decision process.", "Here a path is defined as a sequence of relations (r_1, …, r_{|p|}), since a question usually implies the intermediate relations but not the entities.", "Suppose a partial path p^(t) = (r_1, …, r_t) has been retrieved at time t; a tree can be induced from p^(t) by filling in the intermediate entities along the path, i.e., T^(t) = (e_q, r_1, E_1, …, r_t, E_t).", "Each E_t is an entity set, as a head entity and a relation can usually derive multiple tail entities.", "Then we select the next relation from the union of the neighboring relations of E_t.", "The relevance of each relation r to the question q is measured by the dot product of their embeddings, i.e. 
, s ( q, r ) = f ( q )^⊤ h ( r ) , (4) where both f and h are instantiated by RoBERTa (Liu et al., 2019).", "Specifically, we input the question or the name of r into RoBERTa and take its [CLS] token as the output embedding.", "According to the assumption (Chen et al., 2019b; He et al., 2021; Qiu et al., 2020a; Zhou et al., 2018) that expanding relations at different time steps should attend to specific parts of a query, we update the embedding of the question by simply concatenating the original question with the historical expanded relations in p^(t) as the input of RoBERTa, i.e. , f ( q^(t) ) = RoBERTa ([ q ; r_1 ; …; r_t ]) , (5) Thus s ( q, r ) is changed to s ( q^(t) , r ) = f ( q^(t) )^⊤ h ( r ) .", "Here END is a virtual relation that marks the end of path expansion.", "The score s ( q^(t) , END ) represents the threshold of the relevance score.", "p ( r | q^(t) ) is larger than 0.5 if s ( q^(t) , r ) > s ( q^(t) , END ) and is no larger than 0.5 otherwise.", "We select the top-1 relation with p ( r | q^(t) ) > 0.5 .", "The expansion is stopped if none of the probabilities of the relations is larger than 0.5.", "Finally, the probability of a path given the question can be computed as the joint distribution of all the relations in the path, i.e. 
, p ( p | q ) = Π_{t=1}^{|p|} p ( r_t | q^(t) ) , (7) where | p | denotes the number of relations in p , t = 1 indicates the selection at the topic entity, and t = | p | denotes the last non-stop relation selection.", "Since the top-1 relevant path cannot be guaranteed to be right, we perform a top-K beam search at each time step to get K paths.", "From each topic entity we obtain K paths, which results in nK paths in total for n topic entities.", "The nK paths correspond to nK instantiated trees.", "We take the union of the top-K trees from one topic entity into a single subgraph, and then merge the same entities from different subgraphs to induce the final subgraph.", "This can reduce the subgraph size, i.e. , the answer reasoning space, as the subgraphs from different topic entities can be viewed as the constraints of each other.", "Specifically, from the n subgraphs of the n topic entities, we find the same entities and merge them.", "From these merged entities, we trace back in each subgraph to the root ( i.e. , a topic entity) and trace forward to the leaves.", "Then we only keep the entities and relations along the tracing paths of all the trees to form the final subgraph.", "For example in Figure 2, given the question Where did Canadian citizens with Turing Award graduate? 
with two topic entities Turing Award and Canada (some work views Canada as a constraint, which is not easy to distinguish from the topic entity Turing Award), we can explain it by the two expanded paths (Win, Graduate) and (Citizen, Graduate) and merge the trees induced by them to form a unified subgraph.", "Only the top-1 path is presented in the figure for a clear illustration.", "In this section, we discuss the pre-training and the end-to-end fine-tuning strategies used to train the retriever.", "Figure 3 illustrates the whole framework and the training procedure.", "Since ground-truth subgraphs are not easy to obtain, we resort to the weak supervision signals constructed from the ( q, a ) pairs.", "Specifically, from each topic entity of a question, we retrieve all the shortest paths to each answer as the supervision signals, as paths are easier to obtain than graphs.", "Since maximizing the log-likelihood of a path equals Σ_{t=1}^{|p|} log p ( r_t | q^(t) ) according to Eq.", "(7), we can maximize the probabilities of all the intermediate relations in a path.", "To achieve this goal, we decompose a path p = ( r_1 , …, r_{|p|} ) into | p | + 1 (question, relation) instances, including ([ q ] , r_1 ) , ([ q ; r_1 ] , r_2 ) , ..., ([ q ; r_1 ; r_2 ; …; r_{|p|−1} ] , r_{|p|} ) , and an additional END instance ([ q ; r_1 ; r_2 ; …; r_{|p|} ] , END ) , and optimize the probability of each instance.", "We replace the observed relation at each time step with other sampled relations as the negative instances to optimize the probability of the observed ones.", "When the ( q, a ) pairs are also scarce, we train the retriever in an unsupervised manner independent from the ( q, a ) pairs.", "We leverage the NYT dataset, a distant supervision dataset for relation extraction (Riedel et al., 2010), to construct the pseudo ( q, a, p ) labels.", "In this dataset, each instance is denoted as a tuple ( s, ( e_1 , r, e_2 )) , where s is a sentence that 
refers to the relation r between two entities e_1 and e_2 mentioned in the sentence s .", "For two instances ( s_1 , ( e_1 , r_1 , e_2 )) and ( s_2 , ( e_2 , r_2 , e_3 )) , we treat e_1 as the topic entity and e_3 as the answer.", "Then we concatenate s_1 and s_2 as the question, and concatenate r_1 and r_2 as the corresponding path to train the retriever.", "The training objective is the same as the weakly supervised pre-training.", "End-to-end training is an alternative that fine-tunes the separately trained retriever and reasoner jointly.", "The main idea is to leverage the feedback from the reasoner to guide the path expansion of the retriever.", "To enable this, we optimize the posterior p_{θ,φ} ( 𝒢 | q, a ) instead of the prior p_φ ( 𝒢 | q ) , since the former contains the additional likelihood p_θ ( a | q, p_k ) , which exactly reflects the feedback from the reasoner.", "We do not directly optimize the posterior p_{θ,φ} ( 𝒢 | q, a ) , because 𝒢 is induced from nK paths, making it unknown which path should receive the feedback from the likelihood computed on the whole 𝒢 .", "Instead, we approximate p ( 𝒢 | q, a ) by the sum of the probabilities of the nK paths and rewrite the posterior of each path by Bayes' rule (Sachan et al., 2021), i.e. 
, p_{θ,φ} ( 𝒢 | q, a ) ≈ Σ_{k=1}^{nK} p_{θ,φ} ( p_k | q, a ) ∝ Σ_{k=1}^{nK} p_θ ( a | q, p_k ) p_φ ( p_k | q ) , (8) where p_φ ( p_k | q ) is the prior distribution of the k -th path that can be estimated by Eq.", "(7), and p_θ ( a | q, p_k ) is the likelihood of the answer a given the k -th path.", "Essentially, p_θ ( a | q, p_k ) estimates the answer a on the single tree induced by the k -th path instead of the subgraph fused from the nK paths.", "As a result, the reasoning likelihood on each tree can be attributed to the corresponding path that induces the tree.", "The reasoner for estimating p_θ ( a | q, p_k ) is the same as that for calculating p_θ ( a | q, 𝒢 ) .", "A stop-gradient operation SG is applied to stop updating the reasoner parameters θ when fine-tuning the retriever.", "The reasoner is updated the same as in the two-stage training, by computing the likelihood p_θ ( a | q, 𝒢 ) on the 𝒢 sampled by the retriever (without using information from the answer a ).", "As a result, there is no mismatch between training and evaluation when computing p_θ ( a | q, 𝒢 ) , as 𝒢 relies only on the prior in both cases.", "Intuitively, we train the reasoner to extract the correct answer given the subgraph induced from the nK highest-scoring paths.", "And we train the retriever to select nK paths which collectively have a high score to deduce the answer when taking the feedback from the reasoner into account.", "Although the two components are jointly trained, the reasoning is still performed on the entire retrieved subgraph at each epoch.", "We present the training process in the Appendix.", "In this section, we conduct extensive experiments to evaluate the subgraph retrieval (SR) enhanced model.", "We design the experiments to mainly answer four questions: (1) Does SR take effect in improving the QA performance?", "(2) Can SR obtain smaller but higher-quality subgraphs?", "(3) How does the weakly supervised and unsupervised pre-training affect SR's performance?", "(4) Can end-to-end fine-tuning enhance 
the performance of the retriever as well as the reasoner?", "Datasets.", "We adopt two benchmarks, WebQuestionsSP (WebQSP) (Yih et al., 2016) and ComplexWebQuestions 1.1 (CWQ) (Talmor and Berant, 2018), for evaluating the proposed KBQA model.", "Table 1 shows the statistics.", "Evaluation Metrics.", "We evaluate the retriever by the answer coverage rate, which is the proportion of questions for which the top-nK retrieved paths contain at least one answer.", "This metric reflects the upper bound of the QA performance and is denoted as Hits@K .", "For QA performance, we use Hits@1 to evaluate whether the top-1 predicted answer is correct.", "Since some questions have multiple answers, we also predict the answers by the optimal threshold searched on the validation set and evaluate their F1 score.", "Baseline Models.", "We compare with embedding-based KBQA models, in which EmbedKGQA (Saxena et al., 2020) directly optimizes the triplet (topic entity, question, answer) based on their direct embeddings.", "KV-Mem (Miller et al., 2016) and BAMNet (Chen et al., 2019a) store triplets in a key-value structured memory for reasoning.", "GRAFT-Net (Sun et al., 2018), BAMNet (Chen et al., 2019a), NSM (He et al., 2021), and PullNet (Sun et al., 2019) are subgraph-oriented embedding models.", "We also compare with the SP-based models, in which QGG (Lan and Jiang, 2020) generates the query graph for a question by adding the constraints and extending the relation paths simultaneously, SPARQA (Sun et al., 2020) proposes a novel skeleton grammar to represent a question, and CBR-KBQA (Das et al., 2021) leverages BigBird (Zaheer et al., 2020), a pre-trained seq2seq model, to directly parse a question into a SPARQL statement that can be executed on graph DBs.", "By default, SR is trained by weakly supervised pre-training and the default path number is set to 10.", "We compare with state-of-the-art KBQA models and present the Hits@1 and F1 scores in Table 2.", 
"SP-based Models.", "The SP-based model CBR-KBQA achieves the best performance on CWQ.", "This is expected, as CBR-KBQA leverages a pre-trained seq2seq model to parse the input question into a SPARQL statement.", "However, the model depends on annotated SPARQL statements, which are expensive to obtain in practice.", "Embedding-based Models.", "Among these models, KV-Mem and EmbedKGQA retrieve the answers from the global key-value memory built on the KB or from the original whole KB, which enjoys high recall but suffers from many noisy entities.", "Compared with these global retrievals, BAMNet builds the key-value memory on a subgraph, but it is a full multi-hop topic-entity-centric subgraph, which is also noisy.", "GRAFT-Net and NSM calculate PPR scores to control the subgraph size, but this ad-hoc retrieval method is still far from optimal.", "PullNet reinforces the retrieval by learning a retriever, but the retriever and the reasoner are intertwined, causing partial reasoning on part of a subgraph, which increases the reasoning bias.", "Our Models.", "Compared with the above embedding-based models, a performance improvement on both datasets can be observed, e.g. 
, NSM injected with SR (SR+NSM) improves Hits@1 by 0.4% and F1 by 1.3% on WebQSP, and Hits@1 by 3.9% and F1 by 4.7% on CWQ, compared with the original NSM.", "We also show that SR can be adapted to different subgraph-oriented reasoners.", "Beyond NSM, when injecting SR into GRAFT-Net, it also significantly improves Hits@1 by 9.7% and F1 by 8.7% on CWQ.", "SR+GN underperforms GN on WebQSP because GN filters out the relations of the knowledge graph not in the training set of WebQSP.", "We do not inject SR into BAMNet as the model needs entity types in the subgraph, which are currently ignored by SR.", "SR takes effect in improving the QA performance when injected before a subgraph-oriented reasoner, and SR equipped with NSM creates a new state-of-the-art model for embedding-based KBQA.", "Quality of Retrieved Subgraph.", "We evaluate whether the proposed SR can obtain smaller but higher-quality subgraphs, which are measured not only by the direct subgraph size and answer coverage rate but also by the final QA performance.", "For a fair comparison, we fix the reasoner as NSM, and vary the retriever between SR and the PPR-based heuristic retrieval (Sun et al., 2018; He et al., 2021).", "PPR+NSM is performed on the same knowledge graph as the proposed SR+NSM.", "The result of the trainable retriever in PullNet (Sun et al., 2019) is omitted because its code is not published.", "(Figure 5: Hits@10 of answer coverage (AC, %) under 0-100% weakly supervised data, with (w UnP) and without (w/o UnP) unsupervised pre-training.)", "We report the comparison results in Figure 4.", "
The top row presents the answer coverage rates of the subgraphs with various sizes.", "It shows that when retrieving subgraphs of the same size, the answer coverage rate of SR is significantly higher than that of PPR.", "The bottom row presents the QA performance (Hits@1) on the subgraphs with various answer coverage rates.", "It shows that when performing the same NSM on subgraphs with the same coverage rate, the subgraphs retrieved by SR result in higher QA performance than those retrieved by PPR.", "Summary.", "The above results show that SR can obtain smaller but higher-quality subgraphs.", "Ablation Study.", "We investigate the effects of the strategies used in SR, including the question updating strategy (QU), which concatenates the original question with the partially expanded path at each time step; the path ending strategy (PE), which learns when to stop expanding the path; and the subgraph merging strategy (GM), which induces a subgraph from the top-nK paths.", "Table 3 indicates that based on SR, Hits@1 drops 4.3-15.0% when removing QU (SR w/o QU) and drops 2.1-18.5% when changing PE to a fixed path length T (SR w/o PE), where the optimal T is set to 3 on both WebQSP and CWQ.", "Table 4 shows that based on SR+NSM, the average subgraph size increases from 174 to 204 and Hits@1 of QA drops 0.1% when removing the subgraph merging strategy (SR+NSM w/o GM) and instead directly taking the union of all the subgraphs from different topic entities to induce the subgraph.", "We only present the results on CWQ, as most of the questions in WebQSP contain only one topic entity, which does not need the merge operation.", "Summary.", "The above results verify the effectiveness of the devised QU, PE, and GM in SR. 
6.4 Training Strategy Evaluation.", "Effect of Pre-training.", "We investigate the effects of the weakly supervised and the unsupervised pre-training on SR.", "Table 3 shows the performance of the supervised training (SR w SuperT) and the weakly supervised pre-training (SR), which indicates that SR is comparable with SR w SuperT when retrieving the top-10 paths.", "This is because WebQSP provides a single ground-truth path between a topic entity and an answer, which may omit cases where multiple ground-truth paths can be found.", "In view of this, the weakly supervised way that retrieves multiple shortest paths as the ground truth can provide richer supervision signals.", "We omit the supervised training on CWQ because the ground-truth paths are not explicitly given in that dataset.", "We further vary the proportion of the weakly supervised data in {0%, 20%, 50%, 100%} and present the corresponding answer coverage rate of the subgraph induced by the top-10 paths ( i.e. , Hits@10) in Figure 5.", "Note that 0% means the RoBERTa used in SR does not have any fine-tuning.", "The performance shows a consistent growth with the amount of weakly supervised data, which demonstrates its positive effect.", "Before the weakly supervised pre-training, we create 100,000 pseudo instances for unsupervised pre-training (cf. 
Section 5 for details).", "The results presented by the orange bars show that unsupervised pre-training can significantly improve the original SR (0% weakly supervised data) by about 20% Hits@1.", "However, as the amount of weakly supervised data increases, adding unsupervised pre-training brings little additional benefit.", "Summary.", "The above results show the effectiveness of the weakly supervised pre-training.", "Meanwhile, the unsupervised strategy can be an alternative choice when the QA pairs are scarce.", "Effect of End-to-End Fine-tuning.", "Both SR+NSM w E2E and SR+GN w E2E improve the Hits@1 of retrieval by 2-10.6% over SR.", "Table 2 shows that SR+NSM w E2E improves the Hits@1 of QA by 0.6% over SR+NSM on WebQSP, and SR+GRAFT-Net w E2E improves the Hits@1 of QA by 1.5-2.5% over SR+GRAFT-Net.", "Although SR+NSM w E2E underperforms SR+NSM on CWQ, we suggest reasoning on the top-1 retrieved results, which are much better than those before fine-tuning.", "Summary.", "The above results indicate that the answer likelihood estimated by the reasoner provides positive feedback for fine-tuning the retriever.", "With the improvement of the retriever, the reasoner can also be enhanced by the updated subgraphs.", "We propose a subgraph retriever (SR) decoupled from the subsequent reasoner for KBQA.", "SR is devised as an efficient dual-encoder that can update the question while expanding the path, as well as determine when to stop the expansion.", "The experimental results on two well-studied benchmarks show that SR takes effect in improving the QA performance when injected before a subgraph-oriented reasoner.", "SR equipped with NSM creates new SOTA results for embedding-based KBQA methods when SR is learned by weakly supervised pre-training as well as end-to-end fine-tuning.", "This work is supported by the National Natural Science Foundation of China (62076245, 62072460, 62172424); the National Key Research & Development Plan (2018YFB1004401); the Beijing Natural Science Foundation (4212022); and the CCF-Tencent Open Fund." ]
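The sequential path-expansion procedure described in the sentences above (score each candidate relation against the updated question, keep the top-K beams, stop a beam when no relation beats the virtual END relation, and score a path as the product of its per-step probabilities, Eq. (7)) can be sketched as follows. This is a minimal sketch: the score table and toy relation graph are invented stand-ins for the RoBERTa encoders and the real KB, and the sigmoid-over-margin form of p(r | q^(t)) is our assumption, chosen only because it reproduces the stated property that p > 0.5 exactly when s(q^(t), r) > s(q^(t), END).

```python
import math

# Stand-in for s(q^(t), r) = f(q^(t))^T h(r): a fixed score table keyed by
# (relation history, candidate relation). The numbers are invented purely to
# make the control flow runnable.
SCORES = {
    ((), "Win"): 2.0, ((), "Citizen"): -1.0, ((), "END"): 0.0,
    (("Win",), "Graduate"): 1.5, (("Win",), "END"): 0.5,
    (("Win", "Graduate"), "Graduate"): -2.0, (("Win", "Graduate"), "END"): 3.0,
}

def step_prob(history, relation):
    # Assumed form: sigmoid of the margin over the virtual END relation, so
    # p > 0.5 iff s(q^(t), r) > s(q^(t), END), as stated in the text.
    margin = SCORES.get((history, relation), -5.0) - SCORES.get((history, "END"), 0.0)
    return 1.0 / (1.0 + math.exp(-margin))

def toy_relations(history):
    # Neighboring relations reachable after following `history` (toy KB).
    table = {(): ["Win", "Citizen"], ("Win",): ["Graduate"],
             ("Win", "Graduate"): ["Graduate"]}
    return table.get(history, [])

def expand_paths(relations_of, K=2, max_len=3):
    """Top-K beam search over relation paths; a path's probability is the
    product of its per-step probabilities (Eq. (7))."""
    beams, finished = [((), 1.0)], []
    for _ in range(max_len):
        candidates = []
        for path, prob in beams:
            viable = [r for r in relations_of(path) if step_prob(path, r) > 0.5]
            if not viable:  # no relation beats END: this beam stops expanding
                finished.append((path, prob))
            for r in viable:
                candidates.append((path + (r,), prob * step_prob(path, r)))
        beams = sorted(candidates, key=lambda x: -x[1])[:K]
        if not beams:
            break
    finished.extend(beams)  # beams still open when max_len is reached
    return sorted(finished, key=lambda x: -x[1])[:K]
```

On this toy instance the highest-scoring path is ("Win", "Graduate"), mirroring the Turing Award example: "Win" beats END at the topic entity, "Graduate" beats END after "Win", and expansion then stops because END dominates all remaining relations.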
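The weakly supervised decomposition of a ground-truth path into |p| + 1 (question, relation) instances, ending with the virtual END instance, is mechanical and can be sketched as below (the function name and tuple representation are ours, not the paper's):

```python
def path_to_instances(question, path, end_token="END"):
    """Decompose a relation path (r_1, ..., r_|p|) into |p| + 1 training
    instances ([q; r_1; ...; r_{t-1}], r_t), plus a final END instance,
    as described for the weakly supervised pre-training."""
    instances = [(tuple([question] + list(path[:t])), r)
                 for t, r in enumerate(path)]
    instances.append((tuple([question] + list(path)), end_token))
    return instances
```

Each instance pairs the question concatenated with the already-expanded relations (the input fed to RoBERTa) against the next observed relation; negatives are then sampled per instance as described in the text.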
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other" ]
[ "Building models of natural language processing (NLP) is challenging in low-resource scenarios where only limited data are available.", "Optimization-based meta-learning algorithms achieve promising results in low-resource scenarios by adapting a well-generalized model initialization to handle new tasks.", "Nonetheless, these approaches suffer from the memorization overfitting issue, where the model tends to memorize the meta-training tasks while ignoring support sets when adapting to new tasks.", "To address this issue, we propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation.", "Specifically, we introduce a task-specific memory module to store support set information and construct an imitation module to force query sets to imitate the behaviors of some representative support-set samples stored in the memory.", "A theoretical analysis is provided to prove the effectiveness of our method, and empirical results also demonstrate that our method outperforms competitive baselines on both text classification and generation tasks.", "Building natural language processing (NLP) models in low-resource scenarios is of great importance in practical applications because labeled data are scarce.", "Meta-learning-based methods (Thrun and Pratt, 2012) have been commonly used in such scenarios owing to their fast adaptation ability.", "Notable successes have been achieved by meta-learning on low-resource NLP tasks, such as multi-domain sentiment classification (Yu et al., 2018; Geng et al., 2019) and personalized dialogue generation (Madotto et al., 2019; Song et al., 2020; Zheng et al., 2020).", "Optimization-based meta-learning approaches have been widely used in various low-resource NLP scenarios (Madotto et al., 2019; Qian and Yu, 2019; Li et al., 2020; Mi et al., 2019) because they are model-agnostic and easily applicable.", "Concretely, optimization-based meta-learning algorithms aim to learn a well-generalized global model 
initialization θ that can quickly adapt to new tasks within a few steps of gradient updates.", "In the meta-training process, we first train θ on a support set (i.e., a few training samples of a new task T_i ) to obtain task-specific parameters θ′_i .", "Then, we optimize θ based on the performance of θ′_i on a query set (i.e., another set of samples in task T_i ).", "Despite its effectiveness, optimization-based meta-learning algorithms usually suffer from the memorization overfitting issue (Yin et al., 2020; Rajendran et al., 2020), where the learned model tends to solve all the meta-training tasks by memorization, rather than learning how to quickly adapt from one task to another via support sets.", "(Memorization overfitting is different from overfitting in conventional supervised learning (Hawkins, 2004); the latter means that the model overfits the training tasks and fails to generalize to the testing tasks.)", "This is acceptable for the training process, but results in poor generalization on the meta-testing sets, because the memorized model does not have knowledge of those tasks and does not know how to utilize the base learner to learn new tasks.", "Hence, this issue hinders the model from capturing task-specific characteristics from support sets and thus prevents the model from adapting to distinct new tasks (Rajendran et al., 2020).", "For instance, in personalized dialogue generation, this implies that the dialog model cannot adapt to individual users based on short conversation histories and hence fails to generate personalized responses.", "Several works have been proposed to tackle the memorization overfitting issue for regression and image classification tasks.", "Some studies try to explicitly regularize the model parameters (Yin et al., 2020; Rajendran et al., 2020), but this restricts the complexity of model initialization and reduces the model capacity.", "Another line of research integrates samples from support sets into the 
corresponding query sets via data augmentation (Yao et al., 2021).", "However, data augmentation on textual data may result in noisy labels or distribution shifts, which impairs model performance (Chen et al., 2021).", "In this paper, we address the memorization overfitting issue by enhancing the model's dependence on support sets when learning the model initialization, which forces the model to better leverage information from support sets.", "As an analogy, consider a young investor who has the ability to adapt to new circumstances rapidly but little memory of learned experiences, and an old investor who is experienced but refuses to be flexible.", "Our idea is to make the young investor adaptive to the various situations when he assesses his benefits, so that he can not only take advantage of the old one's experience but also learn from the old investor how to leverage the learned experience.", "In this paper, the young investor stands for a standard meta-learning algorithm (e.g., MAML), which is prone to memorization overfitting, and the old investor is a memory module we integrate into the method, carrying information of support sets.", "Specifically, we propose a Memory-Imitation Meta-Learning (MemIML) method that forces query set predictions to depend on their corresponding support sets by dynamically imitating the behaviors of the latter.", "We therefore introduce a memory module and an imitation module to enhance such dependence.", "The memory module is task-specific, storing representative information of support sets.", "The imitation module assists in predicting samples of query sets by dynamically imitating the memory construction.", "In this way, the model has to access the support set by memory imitation each time it makes a prediction on a query-set sample; hence it is no longer feasible for the model to memorize all meta tasks.", "The contributions of this work are: 1. 
A novel method, MemIML, is proposed to alleviate memorization overfitting for optimization-based meta-learning algorithms.", "It encourages the utilization of support sets with the help of a memory module and an imitation module when adapting to new tasks.", "2. Comprehensive experiments on text classification and generation tasks show that MemIML significantly outperforms competitive baselines.", "Meta-Learning.", "Meta-learning aims to improve the learning algorithm itself based on previously learned experience (Thrun and Pratt, 1998; Hospedales et al., 2021).", "In general, there are three categories of meta-learning methods: model-based methods (Santoro et al., 2016; Obamuyide et al., 2019), which depend on a particular model design to facilitate fast learning; metric-based methods (Vinyals et al., 2016; Snell et al., 2017; Geng et al., 2019), which encode samples into an embedding space and classify them based on a learned distance metric; and optimization-based methods (Finn et al., 2017; Mi et al., 2019), which learn a well-generalized model initialization that allows for fast adaptation to new tasks.", "For low-resource scenarios in NLP, optimization-based meta-learning methods have achieved promising results on tasks such as personalized dialog generation (Madotto et al., 2019; Song et al., 2020; Tian et al., 2021), low-resource machine translation (Gu et al., 2018; Sharaf et al., 2020), question answering (Yan et al., 2020), and few-shot slot tagging (Wang et al., 2021).", "Memorization Overfitting of Meta-learning.", "Meta-learning algorithms suffer from memorization overfitting.", "Yin et al. (2020) add an information bottleneck to the model, but this passive regularization decreases the model performance.", "Rajendran et al. (2020) inject random noise into the ground truth of both support and query sets, but little extra knowledge is introduced to learn a good initialization.", "Yao et al. 
(2021) address overfitting issues by augmenting meta-training tasks through mixing up support and query sets.", "However, such augmentation for text needs to be based on the assumption that the label and the data distribution stay unchanged, which is often not true in practice (Chen et al., 2021).", "Instead of regularization and data augmentation, we leverage the support set information stored in the memory to augment the meta-learning.", "External Memory for Few-shot Learning.", "Memory mechanisms have proven to be powerful for few-shot learning (Geng et al., 2019; Santoro et al., 2016; Munkhdalai et al., 2019).", "Current methods either refine representations stored in the memory (Ramalho and Garnelo, 2018) or refine parameters using the memory (Munkhdalai and Yu, 2017; Cai et al., 2018; Wang et al., 2020).", "In the NLP domain, some methods store encoded contextual information into a memory (Kaiser et al., 2017; Holla et al., 2020; Zheng et al., 2019).", "Geng et al. (2019) propose a memory induction module with a dynamic routing algorithm for few-shot text classification tasks.", "Munkhdalai et al. (2019) augment the model with an external memory by learning a neural memory.", "Wang et al. 
(2021) reuse learned features stored in the memory for few-shot slot tagging.", "We first formulate model-agnostic meta-learning (MAML) (Finn et al., 2017).", "Specifically, denote the base model used in MAML as f_θ and assume each task T_i sampled from a task distribution p(T) is associated with a dataset D_i .", "Each dataset D_i consists of a support set D^s_i = {( X^s_j , Y^s_j )}_{j=1}^{N_s} and a query set D^q_i = {( X^q_j , Y^q_j )}_{j=1}^{N_q} , where X and Y denote the input and ground truth of a sample, respectively.", "During the meta-training stage, a task-specific (a.k.a. post-update) model f_{θ′_i} is first obtained for each task T_i via gradient descent over its support set D^s_i .", "Then MAML updates its initialization (a.k.a. pre-update) θ according to the performance of f_{θ′_i} on the query set D^q_i as in Eq. (1): θ = min_θ E_{T_i ∼ p(T)} [ L( f_{θ′_i}( X^q_i ), Y^q_i ) ] (1) s.t. θ′_i = θ − α ∇_θ L( f_θ( X^s_i ), Y^s_i ) (2) where α is the inner-loop learning rate.", "During the meta-testing stage, the learned initialization θ is fine-tuned on the support set D^s_t for task T_t , and the resulting model is evaluated on the query set D^q_t with the post-update parameters θ′_t .", "To alleviate the memorization overfitting issue in meta-learning, we propose MemIML, which includes a memory module and an imitation module on top of a base model.", "The memory module is task-specific, recording the mapping behaviors between inputs and outputs of support sets for each task.", "The imitation module is shared across tasks and predicts values for each query-set sample by dynamically imitating the memory construction.", "The acquired support set information leveraged by the imitation module augments the model initialization learning, enhancing the dependence of the model's task adaptation on support sets.", "Fig. 
1 shows our model architecture.", "We design a memory module M_i for each task T_i and incorporate it into the MAML framework.", "In order to fully leverage information from support sets, we construct key-value pairs from support-set samples and store them in the memory module.", "The key is the sentence representation of a sample input from the support set, obtained from an introduced key network.", "The corresponding value is constructed to store the information of the sample output (ground truth) as in Sec. 4.3: in NLG tasks, the value is the sentence embedding of the output sentence; in NLU tasks, the value is the one-hot embedding of the class label (a scalar) of the sample.", "Our memory has two operations: memory writing, which constructs the memory, and memory reading, which acquires information from the memory.", "In the following, we elaborate on these contents in detail.", "Key Network represents a sample with a vector.", "Specifically, we use a frozen pre-trained BERT model (Devlin et al., 2019) as the key network.", "The input of the key network is the sample input sentence X^s_j ∈ D^s_i ( X^q_j ∈ D^q_i ), and the output is the encoded representation of the first token (i.e., the [CLS] token) of the sentence.", "The acquired representation is regarded as the key K^s_j for X^s_j ( K^q_j for X^q_j ).", "Memory Writing constructs the memory using the information of samples in the support set D^s_i .", "For each task T_i , the task-specific memory M_i consists of N_i memory slots (i.e., key-value pairs {( K^s_l , V^s_l )}_{l=1}^{N_i} ).", "To build these memory slots, we select samples from support sets and write their information into the memory.", "The sample selection follows a diversity-based selection criterion (Xie et al., 2015) to ensure the diversity and representativeness of the memory content.", "The detailed description of this criterion is in Appendix D.", "
For each task-specific memory module M_i, we compute a diversity score S(M_i) on the stored keys.", "Here, a more diverse memory gets a higher diversity score.", "When the memory is not full, we directly write support-set samples without selection; otherwise, we compute the diversity score of the current memory and the scores after every old key-value pair is replaced with the new key-value pair.", "Then we replace the old pair with the new one [Figure 1 appears here: The architecture of our model, MemIML, showing the memory module (written in the inner loop, read in the outer loop), the imitation module with a global value predictor and local adaptation, and the MAML optimization.]", "where the replacement can maximize the diversity score.", "In this way, the memory we build can carry more distinguishable and representative information and efficiently utilize the storage space.", "Memory Reading obtains information from the memory to enhance the meta-learning.", "The input is the sentence representation of a query-set sample encoded by the key network, and the output is the set of memory slots similar to the query sample.", "Specifically, given the key representation K_j^q of a sample X_j^q ∈ D_i^q, we retrieve the top N most similar slots from its task-specific memory M_i.", "The similarity is measured based on the Euclidean distance between K_j^q and each key K_l^s in the memory slots.", "The retrieved key-value pairs {K_l^s, V_l^s}_{l=1}^{N} act as the output of memory reading.", "In order to better leverage the retrieved memory and enhance the dependence of our model on support sets, we propose an imitation module to encourage the imitation of support-set behaviors when making predictions on query sets.", "For each sample X_j^q in the query set, the inputs of the imitation 
module are the key K_j^q and its retrieved N memory slots, and the output is the predicted value V̂_j^q for X_j^q.", "To achieve the imitation, we construct a value predictor that can model the behaviors of support-set samples (i.e. key-value matching) stored in the memory.", "To estimate the value of each query-set sample, we conduct local adaptation on the value predictor to adapt the matching.", "In this way, the proposed imitation module is customized for each query-set sample, which facilitates better capture of specific task information than directly using the memory reading output, especially when tasks are diverse.", "The reason is that the similarity measurement of the preceding memory reading operation is based on the fixed BERT representations, which ignore the task-specific information.", "In MemIML, the proposed value predictor aims to build a mapping from keys to values of the memory module mentioned in Sec. 4.1.", "The input of the value predictor is a key obtained from the key network, and the output is the associated value.", "Specifically, we use a two-layer fully-connected network g with parameters φ to build the mapping.", "The value predictor is learned over constructed key-value pairs of support sets across all tasks.", "Given the key K_j^q of a query-set sample input X_j^q, we can then estimate its associated value as V̂_j^q.", "To train the value predictor, we minimize the reconstruction loss L_rec(V, V̂) to make the predicted values as close as possible to the values constructed from the ground truths of support-set samples, where L_rec is the cross-entropy loss if the value V is a label and the mean squared error loss if V is a vector.", "The training procedure includes the global optimization shared across tasks and the local adaptation for each specific task.", "Specifically, we first train the value predictor with samples from support sets of all tasks.", "After feeding the memory reading output of a query-set sample to this network, we perform 
local adaptation and employ the adapted network to estimate the value for the query sample.", "Global Optimization.", "To obtain the task-independent global parameters φ, we train the value predictor over constructed keys (i.e., as inputs) and values (i.e., as outputs) from support-set samples of all tasks.", "The global optimization keeps updating φ in the whole meta-training phase.", "Local Adaptation.", "To make the value predictor adaptive to each query-set sample X_j^q, inspired by (Sprechmann et al., 2018), we propose local adaptation that fine-tunes the global value predictor g_φ to get an adapted one with parameters φ_j^q.", "The local adaptation only works when predicting X_j^q.", "Based on the initial parameters φ from the global optimization, we perform several gradient descent steps to minimize the loss L_loc, which is: L_loc(φ') = λ‖φ' − φ‖₂² + (1/N) Σ_{l=1}^{N} L_rec(V_l^s, V̂_l^s) (3) Here, V̂_l^s = g_{φ'}(K_l^s), {K_l^s, V_l^s}_{l=1}^{N} is the memory reading output of the query-set sample, and the factor λ restricts the distance between φ_j^q and φ.", "Minimizing the second term encourages g_{φ_j^q} to better estimate the retrieved memory values {V_l^s}_{l=1}^{N}.", "Then we can acquire the locally adapted value prediction network g_{φ_j^q} with parameters φ_j^q = arg min_{φ'} L_loc(φ').", "Given a query-sample key K_j^q, we can thus predict its associated value as V̂_j^q = g_{φ_j^q}(K_j^q), (4) where the adapted parameters φ_j^q are discarded thereafter, and the model does not back-propagate through V̂_j^q.", "In this sense, besides the task-specific parameters θ'_i provided by MAML, there will also be φ_j^q learned from support sets specific to each query-set sample.", "This guarantees that the model relies more on support sets for task adaptation.", "Fig. 
1 (right part) illustrates the mechanism of local adaptation.", "In this part, we elaborate on two few-shot applications in NLP (i.e., text generation and text classification) to solve the memorization overfitting problem of MAML.", "The model structures of these applications are basically the same, except for the following three points: the base model, the way to get the value V_l^s stored in the memory module, and the way to leverage the output V̂_j^q of Sec. 4.2.", "Personalized Dialogue Generation.", "The base model is the transformer (Vaswani et al., 2017) consisting of an encoder and a decoder.", "In this task, each sample consists of an input utterance and a ground-truth utterance, so the value V_l^s stored in the memory is obtained from the ground-truth utterance Y_l^s of a support-set sample, which is embedded by the key network followed by an LSTM (Hochreiter and Schmidhuber, 1997).", "This LSTM is optimized with the base model.", "The predicted value V̂_j^q, concatenated with the encoder outputs, serves as a new input for the decoder.", "Hence, we acquire the prediction of a query-set sample via Y_j^q = Decoder([V̂_j^q; Encoder(X_j^q)]).", "Multi-domain Sentiment Classification.", "The base model is a BERT (Devlin et al., 2019) followed by a fully-connected network.", "Each sample consists of an input sentence and a sentiment label (ground truth), so the memory value V_l^s is the sentiment label.", "To leverage V̂_j^q, we interpolate it with the original output Y_j^q of the base model as Ŷ_j^q = λ Y_j^q + (1 − λ) V̂_j^q (5) where λ balances Y_j^q and V̂_j^q.", "Notice that the interpolation not only works on the prediction output but also guides the training via gradient descent based on the interpolated output.", "We verify the effectiveness of the interpolation in Appendix.", "C. 
Algorithm 1 (Memory Imitation Meta-training) Require: p(T): task distribution; α₁, ..., α₄: step sizes. 1: Initialize θ from a pretrained model; initialize φ randomly; initialize the memory for T tasks as {M_i}_{i=1}^{T} = {∅}. 2: while not converged do 3: Sample a batch of tasks {T_i}_{i=1}^{n}, where T_i ∼ p(T). 4: for all tasks T_i do 5: Sample support set D_i^s and query set D_i^q from T_i. 6: Obtain the keys {K_l^s}_{l=1}^{N_s} and the values {V_l^s}_{l=1}^{N_s} for the support set D_i^s as in Sec. 4.1. 7: M_i ← {<K_l^s, V_l^s>}_{l=1}^{N_s} # Write memory 8: φ ← φ − α₁ ∇_φ L_rec # Global optimization 9: θ'_i ← θ − α₂ ∇_θ L_base # Learn θ'_i in Eq.", "We theoretically investigate how our method helps to alleviate the memorization overfitting problem.", "Following Yin et al. (2020), we use the mutual information I(Y_i^q; D_i^s | θ, X_i^q) to measure the level of memorization overfitting.", "When the learned model ignores support sets to predict query sets, I(Y_i^q; D_i^s | θ, X_i^q) = 0 occurs, which indicates complete memorization overfitting in meta-learning (Yin et al., 2020).", "Hence, lower mutual information means a more serious memorization overfitting issue.", "We propose a criterion similar to (Yao et al., 2021) to measure the validity of our method for tackling this problem.", "For a task T_i = {D_i^s, D_i^q}, the criterion aims to mitigate the memorization overfitting by enhancing the model's dependence on the support set D_i^s, i.e. increasing the mutual information between the support set and Y_i^q as follows: I(Y_i^q; [D_i^s, M_i] | θ, X_i^q) > I(Y_i^q; D_i^s | θ, X_i^q), (6) where M_i is the additional memory information we provide, which contains support-set information to augment the inference of the sample X_i^q in D_i^q.", "We demonstrate that our method MemIML meets the above criterion (see details in Appendix A).", "In the meta-training phase (shown in Alg. 
1), MemIML first constructs an empty memory for each task and then follows the bi-level optimization process of MAML.", "In the inner loop, MemIML adapts the base model initialization θ to task-specific parameters θ'_i via training on the support set.", "At the same time, from each support-set sample, MemIML obtains a key-value pair and determines whether to write it into the memory or not.", "Then, MemIML conducts the global optimization of the value predictor over these key-value pairs.", "In the outer loop, each sample of the query set reads the memory to retrieve the most similar memory slots.", "Local adaptation fine-tunes the value predictor on those retrieved slots.", "Next, the adapted value predictor estimates the value of each query sample and uses it to augment the learning of the model initialization.", "The total loss function in the inner loop is L_total = L_base + L_rec, where L_base = L(f_θ(X^s), Y^s) is the cross-entropy loss.", "The procedures of meta-training and meta-testing are almost the same, except that meta-testing optimizes neither the learned model initialization θ nor the initial parameters φ of the value predictor.", "For each task T_t in the meta-testing phase, MemIML also adapts θ to task-specific parameters θ'_t in the inner loop and constructs the task-specific memory.", "In the outer loop, MemIML retrieves key-value pairs from the memory to conduct local adaptation based on the initial parameters φ.", "The estimated value V̂_t^q from local adaptation helps the base model to infer the final output Y_t^q.", "Experiments on personalized dialogue generation and multi-domain sentiment classification verify our model on text generation and classification, respectively, where we use the Persona-Chat and ARSC datasets.", "Dataset.", "Following (Zhang et al., 2018), we use Persona-Chat (Madotto et al., 2019), regarding the building of a dialog model for each person as a task.", "The dataset consists of a training/validation/testing set with 1137/99/100 
persons (tasks) separately.", "In the Persona-Chat dataset, each persona description has 8.3 unique dialogues on average, and each task consists of three samples.", "Baselines.", "We compare our methods with the following baselines: Base Model : We pretrain a conventional transformer-based dialog generation [Table 2 appears here. Mean accuracy over the ARSC: non-meta-learning: Fine-tune 80.73; metric-based meta-learning: Matching Net 81.22, Prototypical Net 80.13, Proto++ 82.41, Relation Net 81.32, Induction Net 79.31; optimization-based meta-learning: MAML 82.17, MR-MAML 78.14, Meta-Aug 83.57, MetaMix 83.63, MemIML (Ours) 85.69*.]", "model over all the training tasks, ignoring the speakers' personality.", "Fine-tune : We fine-tune the pretrained base model on the support sets of each meta-testing task.", "MAML : We apply MAML (Madotto et al., 2019) to the base model.", "MR-MAML : Yin et al. (2020) tackle the memorization overfitting of MAML via regularization.", "Metrics.", "Automatic evaluation covers three aspects. Quality : BLEU-n (Papineni et al., 2002), CIDEr (Vedantam et al., 2015), and ROUGE (Lin, 2004) measure the n-gram matching between the generated response and the ground truth.", "PPL (perplexity) measures the sentence fluency.", "Diversity :", "Dist-n (Li et al., 2016) evaluates the response diversity by counting unique n-grams.", "Consistency : C score (Madotto et al., 2019) measures the consistency between the generated responses and persona descriptions through a pretrained natural language inference model.", "Overall Performance.", "As shown in Table 1. 
Fine-tune outperforms Base Model in all metrics, which verifies that task-specific data is helpful to performance on specific tasks.", "Compared to Fine-tune , MAML performs better on diversity and consistency but worse on quality.", "Pretraining the base model achieves the best perplexity (lowest PPL), as shown by Base Model and Fine-tune .", "We attribute this to pretraining producing a considerable degree of fluency in the generated utterances while disregarding each task's specific information, resulting in low consistency with tasks.", "Our model, MemIML , performs the best in most aspects, including quality, diversity, and task consistency.", "In particular, MemIML significantly improves over MR-MAML in alleviating the memorization overfitting issue, suggesting that memory imitation is more effective than only regularizing the model initialization.", "Dataset.", "The Amazon Review sentiment classification dataset (ARSC) (Yu et al., 2018) contains 69 tasks in total.", "Following (Geng et al., 2019), we build a 2-way 5-shot meta-learning setting with 57 tasks for meta-training and 12 tasks for meta-testing.", "We conduct experiments on the ARSC (Yu et al., 2018).", "It contains English reviews of 23 types of Amazon products, where each product corresponds to three different binary classification tasks.", "Following Geng et al. 
(2019), we select 12 tasks from 4 domains ( Books, DVD, Electronics, Kitchen ) for meta-testing, and the support sets of these tasks are fixed (Yu et al., 2018).", "Baselines.", "We compare our methods with the following baselines: Fine-tune : We fine-tune a pre-trained BERT on the support set of meta-testing tasks (non-meta-learning method) as in Appendix.", "B.2.", "We choose five metric-based meta-learning baselines: Matching Net (Vinyals et al., 2016), Prototypical Net (Snell et al., 2017), Proto ++ (Ren et al., 2018), Relation Net (Sung et al., 2018), and Induction Net (Geng et al., 2019).", "We apply an optimization-based baseline ( MAML ) (Finn et al., 2017) to the base model, and implement some approaches tackling the memorization overfitting problem based on MAML: MR-MAML (Yin et al., 2020), MetaMix (Yao et al., 2021), and Meta-Aug (Rajendran et al., 2020).", "Overall Performance.", "Table 2 shows the performance measured by the mean accuracy of meta-testing tasks.", "Our model, MemIML , outperforms all competing approaches, including non-meta-learning, metric-based meta-learning, and optimization-based meta-learning methods.", "Particularly, our model surpasses the current solutions to the memorization overfitting problem ( MR-MAML, Meta-Aug, MetaMix ), indicating that our method is more effective compared to regularization and textual augmentation.", "In Figure 2, the gaps of the losses on query sets between pre-update θ (before training on support sets)", "and post-update θ'_i (after training on support sets) indicate the memorization overfitting problem.", "The gap between the sky-blue and blue curves measures the memorization overfitting of meta-training (the gap between the pink and red curves measures meta-testing).", "Small loss gaps indicate severe memorization overfitting, where support sets are almost useless for task adaptation.", "Those loss gaps between θ and θ'_i collapse in MAML and MR-MAML after about 3000 steps.", "This indicates 
that the post-update θ'_i barely benefits from the support set, and thus the memorization overfitting issue is severe.", "In Figure 2", "(c), MemIML has large gaps between θ and θ'_i , implying that θ'_i better leverages support sets when adapting to new tasks and thus alleviates the memorization overfitting issue.", "In Table 3, we conduct ablation studies to verify the effectiveness of each component.", "Removing Similarity-Search means the memory reading operation randomly outputs memory slots instead of searching for similar memory slots.", "This variant underperforms MemIML, indicating that similar samples stored in the memory provide more useful information to improve the model performance.", "Removing the value predictor means directly using the memory output without a learnable network.", "Its results are not too bad, indicating that the memory module helps to mitigate the memorization overfitting problem.", "However, this usage simply aggregates the support set information into the query set, which is not as precise as learning the information required by the query set itself.", "Therefore, it is still inferior to our model.", "Removing Local adaptation means we only use the global value predictor to estimate the memory output.", "Local adaptation is crucial to the value predictor, since removing it results in even worse performance than removing the value predictor entirely.", "Besides, the significant drop in task consistency (C-score) shows that local adaptation contributes a lot to making the model adaptive to specific tasks, as it learns to adapt to each query-set sample.", "Memory Size.", "In Tables 4 and 5, we investigate variants of our task-specific memory module with different sizes.", "We control the memory size through |M| = store_ratio × |D^s|.", "The results demonstrate that our model is able to maintain high performance even with only a 20% memory size, by storing diverse and representative samples of support sets.", 
"Besides, as the ratio of stored samples increases, the model's performance improves, since the memory provides more information for the inference of query samples and the optimization of the model initialization.", "Storing all the encountered samples (i.e., with a store ratio of 100%) in the memory instead introduces some noise that damages the model performance.", "Number of Neighbors.", "We also investigate the effects of different numbers of neighbors on the model performance in Tables 4 and 5. In both datasets, the model performs better with a larger number of neighbors.", "However, when the number of neighbors is too large, the model retrieves some dissimilar slots from the memory module.", "These dissimilar slots bring much noise, which makes the predictions of query samples inaccurate.", "We present two generated cases in personalized dialog in Table 6.", "Base Model , Fine-tune , and MAML generate general responses with little useful information, or responses that are not consistent with the personality of the personas.", "MR-MAML generates responses irrelevant to the dialogue context.", "Our model not only responds coherently to the dialog history but also caters to the persona descriptions of each user.", "In this paper, we tackle the memorization overfitting problem of meta-learning for text classification and generation applications.", "We propose MemIML to enhance the dependence of the model on the support sets for task adaptation.", "MemIML introduces a memory module storing the information of support sets, and an imitation module to better leverage the support set information by imitating the behaviors of the memory.", "Both empirical and theoretical results demonstrate that our method MemIML effectively alleviates the memorization overfitting problem.", "The persona-based dialogue generation task aims to build a dialogue model which generates meaningful, fluent, and consistent responses.", "It will facilitate human-computer interactions in 
practice.", "However, the training of the model for personalized", "dialogues may lead to the leakage of personal privacy information.", "In this work, the data source we use is a published dataset, and the data collection does not involve privacy issues.", "Our proposed method does not include inference or judgments about individuals and does not generate any discriminatory or insulting responses.", "Our work validates the proposed method and baseline models with human evaluation, which involves manual labor.", "We hire five annotators to score 750 generated sentences in total (250 sentences for each model we evaluate).", "The hourly pay is set to US$15 per person, which is higher than the local statutory minimum wage.", "Research on this paper was supported by the Hong Kong Research Grants Council (Grant No. 16204920) and the National Natural Science Foundation of China (Grant No. 62106275)." ]
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other" ]
[ "We propose neural models to generate high-quality text from structured representations based on Minimal Recursion Semantics (MRS).", "MRS is a rich semantic representation that encodes more precise semantic detail than other representations such as Abstract Meaning Representation (AMR).", "We show that a sequence-to-sequence model that maps a linearization of Dependency MRS, a graph-based representation of MRS, to English text can achieve a BLEU score of 66.11 when trained on gold data.", "The performance can be improved further using a high-precision, broad-coverage grammar-based parser to generate a large silver training corpus, achieving a final BLEU score of 77.17 on the full test set, and 83.37 on the subset of test data most closely matching the silver data domain.", "Our results suggest that MRS-based representations are a good choice for applications that need both structured semantics and the ability to produce natural language text as output.", "Text generation systems often generate their output from an intermediate semantic representation (Yao et al., 2012; Takase et al., 2016).", "However, many semantic representations are task- or domain-specific (He and Young, 2003; Wong and Mooney, 2007), while rule-based text generation systems often have incomplete coverage (Langkilde-Geary, 2002; Oepen et al., 2007).", "In this work we combine the advantages of Minimal Recursion Semantics (MRS; Copestake et al., 2005) with the robustness and fluency of neural sequence-to-sequence models trained on large datasets.", "We hypothesize that MRS is particularly well-suited for text generation, as it is explicitly compositional, capturing the contribution to sentence meaning of all parts of the surface form (Bender et al., 2015).", "In contrast, semantic representations such as Abstract Meaning Representation (AMR; Banarescu et al., 2013) seek to abstract away from the syntax of a sentence as much as possible.", "Therefore MRS captures meaning distinctions that AMR 
fails to represent (see Fig. 1).", "Our approach (§2) uses neural sequence-to-sequence models (Sutskever et al., 2014; Bahdanau et al., 2014) to map linearizations of directed acyclic graphs (DAGs) to text, similar to the approach proposed by Konstas et al. (2017) to generate text from AMR.", "We use Dependency MRS (DMRS; Copestake, 2009), a graph-based representation in which nodes are MRS predicates (annotated with additional attributes) and edges represent relations between predicates.", "MRS and DMRS are interconvertible, and the graph-based representation enables more convenient linearization and manipulation than MRS's variable-based representation (Copestake et al., 2016).", "Results (§3) show that neural DMRS to English text generation can obtain up to 83.37 BLEU and 32% exact match, substantially higher than previous work.", "In particular, we obtain an 11.6 BLEU improvement through semi-supervised training using the output of a grammar-based parser, compared to training on gold data only.", "In comparison, a grammar-based generator obtained 62.05 BLEU, and an approach based on DAG Transducers (Ye et al., 2018) 68.07 BLEU.", "Ablation experiments show that node attributes encoding fine-grained morpho-semantic information such as number and tense contribute more than 12 BLEU points.", "The highest reported result for AMR generation is 33.8 BLEU (Konstas et al., 2017); on the same dataset our best model obtains 75.8 BLEU.", "While a more detailed meaning representation is harder to produce, our results suggest that MRS could be suitable for text generation applications where precise semantic representations are required.", "Our gold training data are parallel MRS and English text corpora, derived from the 1214 release of the Redwoods Treebank (Oepen et al., 2002).", "MRS is implemented as the semantic layer of the English Resource Grammar (ERG; Flickinger, 2000, 2011), a broad-coverage, hand-engineered computational grammar of English.", "The Redwoods 
annotation was produced in conjunction with the ERG by parsing each sentence into a forest (discarding unparsable sentences), followed by manual disambiguation (Flickinger et al., 2017).", "About half of the training data comes from the Wall Street Journal (sections 00-21), while the rest spans a range of domains, including Wikipedia, e-commerce dialogues, tourism brochures, and the Brown corpus.", "The data is split into training, development and test sets with 72,190, 5,288, and 10,201 sentences, respectively.", "We use PyDelphin to convert MRS annotations to DMRS.", "In order to apply sequence-to-sequence models to graph-to-text generation, we then linearize the DMRS into PENMAN format (which is also used to represent AMR).", "We follow Goodman (2018, pp. 82-86) in finding normalized spanning [footnotes: 1: http://svn.delph-in.net/erg/tags/1214/tsdb/gold; 2: https://github.com/delph-in/pydelphin] [Figure 2 appears here: The DMRS for the sentence Kim sees a boy. PENMAN: (10002 / _see_v_1 :tense PRES :sf PROP :perf - :mood INDICATIVE :ARG1-NEQ (10001 / named :carg \"Kim\" :pers 3 :num SG :ind +) :ARG2-NEQ (10004 / _boy_n_1 :pers 3 :num SG :ind + :RSTR-H-of (10003 / _a_q))) Linearization: ( _see_v_1 mood=INDICATIVE|perf=-|sf=PROP|tense=PRES ARG1-NEQ ( named0 ind=+|num=SG|pers=3 ) ARG2-NEQ ( _boy_n_1 ind=+|num=SG|pers=3 RSTR-H-of ( _a_q ) ) )]", "trees through depth-first traversal over the directed acyclic DMRS graphs.", "The PENMAN format defines each node once, supports node attributes and edge labels, marks edges whose direction is reversed in the traversal, and represents edges which are not covered by the spanning tree.", "The PENMAN format is processed further to obtain a linearization appropriate as input to sequence-to-sequence models, similar to the approach proposed by Konstas et al. (2017) for AMR linearization (see Fig. 
2).", "Node variable identifiers are removed, node attributes are concatenated, and named entities are anonymized.", "Predicates that appear only once in the training data are treated as unknowns.", "Preprocessing and unknown word handling are described in greater detail in Appendices A and B. [2.3 Model] Our neural generator follows the standard encoder-decoder paradigm (Bahdanau et al., 2014).", "The encoder is a two-layer bidirectional LSTM.", "Predicates and their attributes are embedded separately; their embeddings are then concatenated (Sennrich and Haddow, 2016).", "The decoder uses global soft attention for alignment (Luong et al., 2015), and pointer attention to copy unknown tokens directly to the output (Gulcehre et al., 2016). [footnote 3: https://github.com/goodmami/mrs-to-penman]", "The models are trained using Adam (Kingma and Ba, 2014).", "Dropout is applied to non-recurrent connections.", "Decoding uses beam search (width 5).", "The generator is implemented using OpenNMT-py (Klein et al., 2017).", "Hyper-parameter details are given in Appendix C. Our code is available online.", "[2.4 Semi-supervised training] We augment the gold training data with a silver dataset generated using ACE, a parser for the ERG, to parse sentences to MRS. We sample one million sentences from the Gigaword corpus (Parker et al., 2011), restricted to articles published before the year 2000, to match the domain of the Wall Street Journal data.", "The parser failed to parse about 10.3% of the Gigaword sentences, so these were discarded.", "While there are robust MRS parsers (Buys and Blunsom, 2017; Chen et al., 2018), the MRSs they produce are less accurate and not guaranteed to be well-formed.", "Our approach thus differs from Konstas et al. 
(2017), who used self-training to improve AMR-to-text generation by iteratively training on larger amounts of data parsed by their neural parser.", "[3 Results] We compare the performance of our neural generator when trained on either gold, silver, or gold and silver data (Table 1).", "Generation quality is primarily evaluated with BLEU (Papineni et al., 2002), using SacreBLEU (Post, 2018).", "We evaluate the neural models on both the full Redwoods test set ('All') and the WSJ subset.", "The results show that our neural generator obtains very strong performance.", "Semi-supervised training leveraging the ERG parser leads to an 11 BLEU point improvement on Redwoods, compared to supervised training only.", "We found that the best semi-supervised results are obtained by upsampling the gold data so that the gold to silver ratio in training examples is 1:2.", "[footnotes: 4: code available at https://github.com/shlurbee/dmrs-text-generation-naacl2019; 5: ACE version 0.9.25, with the 1214 ERG release, available at http://sweaglesw.org/linguistics/ace; 6: The ACE parser obtained 93.5 Smatch score on parsable sentences (Buys and Blunsom, 2017), while the neural AMR parser (Konstas et al., 2017) obtained 62.1 Smatch (on a different domain).]", "Interestingly, training on silver data performs only slightly worse than training on both gold and silver.", "Our baselines are the ERG's grammar-based generator (Carroll et al., 1999; Carroll and Oepen, 2005) and the DAG transducer generator of Ye et al. 
(2018).", "To compare our models against the grammar-based generator, implemented in ACE, we need to restrict the evaluation to examples from which ACE is able to generate ('All overlap').", "In addition to BLEU, we also report exact match accuracy on the overlapping subset.", "Results show that our neural models outperform the grammar-based generator by a large margin.", "ACE ranks candidate generations with a discriminative ranker based on structural features over its derivations (Velldal and Oepen, 2006).", "However, it does not use a language model trained on large amounts of text, which would likely improve fluency substantially.", "The DAG transducer was trained to generate from Elementary Dependency Structures (EDS; Oepen and Lønning, 2006), an MRS-derived representation almost equivalent to DMRS (after edge properties are removed, which Table 3 shows has an effect of less than 1 BLEU point).", "It was evaluated against the same WSJ test set reference generations, but trained using both less gold data (only the WSJ subsection) and less silver data (300K vs 900K sentences).", "Our model trained on WSJ gold data performs only slightly worse (65.78 BLEU; see Table 2) and all our semi-supervised models obtain substantially higher results.", "We evaluate the in- and out-of-domain performance of our approach by training models on either WSJ gold data only, or both WSJ gold data and Gigaword silver data, and evaluating on different domains.", "The results in Table 2 show that while the generator performs best on test data which matches the training domain (news), semi-supervised training leads to substantial out-of-domain improvements on the Wikipedia and the Brown corpus portions of the test set.", "[Footnote 8: Despite all test sentences being parsable by the ERG, there are gaps in generation coverage, primarily because ACE is unable to generate words outside the grammar's vocabulary.]", "To understand which elements of MRS contribute most to our generator's performance, we
ablate node (predicate) and edge attributes from both the training and test DMRS graphs (Table 3).", "In the training data, number and tense show the most variation among node attributes, and consequently have the largest effect on the reported BLEU score.", "The most common value for number is SG, but 62.36% of sentences contain a node with PL.", "Similarly, 42.41% of sentences contain a tense value other than PRES or UNTENSED.", "Many other attributes are less informative: mood has a value other than INDICATIVE in only 0.38% of sentences, and perf is + in just 9.74% of sentences.", "Edge features (including H, EQ and NEQ) encode constraints on scopal relationships (see Copestake 2009).", "Removing them, which makes the DMRS representation close to equivalent to EDS, has only a small impact on performance.", "We compare our approach to AMR-to-text generation by evaluating our generator on a standard AMR test set (LDC2015E86).", "As we do not have manually verified MRSes available on this test set, we use ACE to parse the reference sentences to silver MRSes.", "We then evaluate the outputs that our generator produces from those MRSes.", "About 20% of the examples could not be parsed by ACE, and are discarded for the MRS evaluation.", "We compare our generator to the neural AMR generator of Konstas et al. (2017) for models trained on gold as well as gold plus silver data.", "We evaluate DMRS models both with and without predicate and edge attributes, as these attributes contain information that is absent from AMR.", "The results in Table 4 show that our MRS generator performs better than the AMR generator by a large margin, even when the additional MRS attributes are excluded.", "Our system results are reported on the subset for which we obtained MRS parses.", "AMR results are as given by Konstas et al.
(2017) and cover the entire test set.", "[Footnote 9: The AMR and DMRS systems have different gold training data, but the same source of silver data.]", "[Footnote 10: Recently, Donatelli et al. (2018) proposed adding tense and aspect to AMR, but this annotation is not yet available in a large AMR corpus.]", "We sampled 99 items for error analysis from the dev set, 33 each from among sentences with sentence-level BLEU scores of 80-89, 60-69, and 40-49.", "We identified all differences between these strings and the reference strings and then labeled each difference with a fine-grained error type.", "We classified the differences into 238 errors, distributed across five levels of severity (Table 5).", "Almost half of the differences (47.1%) were unproblematic, including spelling variants, meaning-preserving punctuation variation and grammatical alternations (such as optional that or auxiliary contraction as in (1)).", "The slightly problematic category includes close synonyms (e.g. sometime v. someday ), spelled out number names where Arabic numerals are preferred, and differences in formatting.", "The next more serious category (moderately problematic) includes meaning-changing differences in punctuation, tense or aspect, and minor grammatical errors such as swapping who and which in relative clauses or a v. an .", "Finally, among the most serious errors, we find cases where the generator provided ungrammatical output or grammatical output not conveying the correct semantics.", "The former include spurious additional tokens, ungrammatical word orders, and ungrammatical inflection.", "Serious errors that nonetheless resulted in grammatical strings include meaning-changing dropped or swapped [Footnote 11: Items with BLEU scores lower than 40 tend to be very short and primarily involve formatting differences.]", "[Footnote 12: This was done by a single annotator only.]", "The labels were generated bottom up, with new labels added as needed in the course of annotation.", "(2)", "a. 
For such cases, machine learning techniques emulate human linguistics and learn from training examples to predict future events.", "[sys.]", "b. For such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events. [ref.]", "In summary, we find that the BLEU scores underestimate the quality of system outputs, due to unproblematic differences (N=112) and to differences, such as formatting markup (N=6), that are not reflected in the input semantic representations.", "Among the 108 moderate to serious differences, about a third (35) involve punctuation, suggesting that meaning signalled by punctuation could be better reflected in the semantic representations.", "About half (52) involve added, dropped, or swapped tokens, showing room for improvement in the generator's ability to learn appropriate connections between semantic predicates and surface forms.", "The remainder (21) involve inflection, grammatical alternations (such as who / which ) and word order constraints, showing room for improvement in mimicking grammatical processes.", "We have shown that neural sequence-to-sequence models can be used to generate high-quality natural language text from Minimal Recursion Semantics representations, in contrast to both existing MRS-based generators and neural generators based on other broad-coverage semantic representations.", "Furthermore, we have demonstrated that a large hand-crafted grammar can be leveraged to produce large training sets, which improves performance of neural generators substantially.", "Therefore we argue that the ability to generate high-quality text from MRS makes it a good choice of representation for text generation applications that require semantic structure.", "For future work, we are interested in applying graph-to-sequence neural networks (Beck et al., 2018; Song et al., 2018) to MRS-to-text generation.", "Thanks to Yannis Konstas for sharing preliminary results on DMRS generation, and Swabha 
Swayamdipta for discussions.", "This research was supported in part by NSF (IIS-1524371) and Samsung AI Research." ]
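The semi-supervised data mixing reported above (up-sampling the gold data until the gold-to-silver ratio is 1:2) can be sketched as below. This is a minimal illustration, not the authors' code: the function name and list-of-strings interface are assumptions, and a real pipeline would operate on DMRS/sentence training pairs.

```python
def mix_gold_silver(gold, silver, gold_parts=1, silver_parts=2):
    """Up-sample the gold data by repetition so the gold-to-silver ratio
    in the combined training set is roughly gold_parts:silver_parts."""
    if not gold:
        raise ValueError("need at least one gold example")
    target = max(len(gold), len(silver) * gold_parts // silver_parts)
    repeats = -(-target // len(gold))  # ceiling division
    upsampled_gold = (gold * repeats)[:target]
    return upsampled_gold + silver

# Toy illustration: 2 gold and 8 silver examples -> 4 gold copies + 8 silver.
mixed = mix_gold_silver(["g1", "g2"], [f"s{i}" for i in range(8)])
```

With real corpus sizes (thousands of gold Redwoods items against ~900K silver Gigaword parses), the same arithmetic yields many repetitions of each gold example per epoch.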
[ "objective", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "method", "other", "other" ]
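The record layout used throughout this file pairs a `sentences` sequence with a parallel `labels` sequence drawn from the five tags seen above (objective, method, result, abstain, other). A small helper can sanity-check that alignment; the record contents below are invented stand-ins, not entries from this file.

```python
VALID_LABELS = {"objective", "method", "result", "abstain", "other"}

def validate_record(record):
    """Check that a sentences/labels record is well-formed: the two
    sequences align one-to-one and every label is from the known tag set."""
    sentences, labels = record["sentences"], record["labels"]
    if len(sentences) != len(labels):
        raise ValueError("sentences and labels are misaligned")
    bad = [label for label in labels if label not in VALID_LABELS]
    if bad:
        raise ValueError(f"unknown labels: {bad}")
    return list(zip(sentences, labels))

# Toy record mirroring the layout of this file (contents invented):
record = {"sentences": ["We propose a generator.", "It gains 2 BLEU."],
          "labels": ["objective", "result"]}
pairs = validate_record(record)
```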
[ "The dominant paradigm for high-performance models in novel NLP tasks today is direct specialization for the task via training from scratch or fine-tuning large pre-trained models.", "But does direct specialization capture how humans approach novel language tasks?", "We hypothesize that human performance is better characterized by flexible inference through composition of basic computational motifs available to the human language user.", "To test this hypothesis, we formulate a set of novel fragmentary text completion tasks, and compare the behavior of three direct-specialization models against a new model we introduce, GibbsComplete, which composes two basic computational motifs central to contemporary models: masked and autoregressive word prediction.", "We conduct three types of evaluation: human judgments of completion quality, satisfaction of syntactic constraints imposed by the input fragment, and similarity to human behavior in the structural statistics of the completions.", "With no task-specific parameter tuning, GibbsComplete performs comparably to direct-specialization models in the first two evaluations, and outperforms all direct-specialization models in the third evaluation.", "These results support our hypothesis that human behavior in novel language tasks and environments may be better characterized by flexible composition of basic computational motifs rather than by direct specialization.", "Representation learning has tremendously benefited engineering for language comprehension and generation systems.", "General frameworks such as encoder-decoder or autoregressive models can be flexibly applied to a diverse range of problems.", "With scaled models, enormous data, and well-chosen training objective functions, generative pretraining extracts useful and transferable information from unlabeled text data (Howard and Ruder, 2018; Liu et al., 2019b).", "These generic representations can then be finetuned to quickly yield performant models directly 
specialized for downstream tasks (Howard and Ruder, 2018; Radford et al., 2019; Devlin et al., 2019, inter alia ).", "While the broad idea of exploiting rich statistical information in general-purpose representations has proven practically effective for deep learning models for over a decade (Erhan et al., 2010; Collobert et al., 2011), it remains an open question whether this direct specialization approach yields models that behave similarly to humans when deploying language flexibly in novel situations.", "Here we take this direct specialization approach as a scientific hypothesis regarding the nature of linguistic knowledge and its flexible deployment in the human mind: that the human capacity for processing linguistic information in novel tasks arises from directly specializing and reshaping a generic yet versatile representation.", "We contrast it with an alternative hypothesis: that flexible knowledge deployment reflects algorithmic composition of a constrained repertoire of simple, reusable inference motifs.", "We depict these competing hypotheses in Figure 1. 
In the compositional inference hypothesis, the basic motifs arise from learning processes and could potentially involve distinct internal representations, but can be recombined into new inference routines guided by the principles of approximate probabilistic inference.", "In this hypothesis, the computational-level specifications (Marr, 1982) of motif functional forms play a crucial role.", "As a theory of how the human mind approaches new and potentially complex problems, the direct specialization hypothesis entails generic starting representations, task-specific supervision, and specialization.", "The compositional inference hypothesis, in contrast, entails novel combinations of solutions to old problems.", "ies of human behaviors involving various tasks, which goes beyond the scope of this work.", "However, as an initial step, we consider these two hypotheses as computational paradigms that inform and inspire the architecture of models that flexibly perform novel language tasks, and ask which paradigm gives rise to more human-like behaviors.", "Here we ground the general question in a specific setting: a novel set of fragmentary input completion challenges, inspired by the classic cloze task (Ebbinghaus, 1897) and its contemporary variants.", "Taking a reverse-engineering approach, we instantiate the direct specialization and compositional inference hypotheses as explicit computational models and compare the models' behaviors to those of humans in a behavioral task of fragmentary input completion.", "Our fragmentary input designs highlight various aspects of abstract reasoning involving subtle features of grammar and semantics.", "We find that the model of the compositional inference approach generates high-quality completions without direct training on the target task, achieves comparable performance to the models from the direct specialization approach in reasoning about constrained syntactic contexts, and better matches the fine-grained structural statistics 
of completions written by human subjects.", "For purposes of this paper, we define a fragmentary linguistic input, or simply a fragment, as a sequence of word strings.", "Completing a fragment involves adding a word string of any length between adjacent strings in the input to yield a single overall well-formed sentence.", "The fragment completion problem is formally equivalent to the text infilling problem studied in a number of recent papers (Fedus et al., 2018; Zhu et al., 2019; Liu et al., 2019a; Donahue et al., 2020; Shen et al., 2020; Huang et al., 2020), but here we study considerably more open-ended completion problems from potentially much briefer input than has been studied before.", "For example, many native English speakers find the simple fragmentary input published won .", "initially challenging yet solvable.", "[Footnote 1: The input requires building a nested center-embedding context such as The most recent book she published won a prize.]", "We find that carefully chosen simple fragmentary inputs can offer insight into the abilities of direct specialization and compositional inference models, and the similarity of model and human behavior.", "Formally, fragment completion involves generating, given input comprised of a sequence of k word strings C = {C_1, . . ., C_k}, a sequence of k-1 word strings B = {B_1, . . .
, B_{k-1}}, such that the resulting completion is C_1 B_1 ... C_{k-1} B_{k-1} C_k.", "In general, exact inference over the full conditional distribution P(B | C) will be intractable.", "The direct specialization and compositional inference paradigms offer differing model specifications and algorithmic options.", "With direct specialization, we learn or fine-tune representations under supervision for the task.", "We take the learning objective to be maximizing the likelihood p(B | C) for some sampled (B, C) pairs generated from a training corpus (see Section 2.2): p(B | C) = \prod_{i=1}^{|B|} p(B_i | B_{<i}, C). We ground this approach in two learning procedures:", "(a) fine-tuning a pretrained language model to solve the target task, and", "(b) training on the target task from scratch.", "The fine-tuning procedure takes advantage of knowledge transferred from large pretrained models, while the training-from-scratch procedure allows us to control for the effect of pretrained representations.", "For both learning procedures, we use three existing models trained or fine-tuned for infilling: T5 (Raffel et al., 2019; InfillT5), BART (Lewis et al., 2020; InfillBART), and GPT-2 (Radford et al., 2019) fine-tuned for text infilling, which was previously explored in Donahue et al. 
(2020), and which, following them, we call the Infilling Language Model (ILM) for short.", "Implementation details can be found in Section 2.2.", "For the compositional inference approach, we propose GibbsComplete, a neurally-guided approximate inference algorithm that combines two canonical computational motifs: masked word prediction and autoregressive word prediction.", "Consider a candidate for the i-th completion string B_i to consist of b_{i,<j} b_{i,j} b_{i,>j}.", "[Footnote 2: We represent cases where material can be added at the beginning or the end of the input as c_1 or c_k being the empty string, ε.]", "GibbsComplete takes a masked language modelling motif as a proposal distribution p(b_{i,j} | B_{\i}, b_{i,<j}, b_{i,>j}, C) and composes it with a global scoring function over (B, C) given by a unidirectional language modelling motif, in line with previous work on applying sampling-based methods to generative neural sequence models (Berglund et al., 2015; Su et al., 2018; Miao et al., 2019; Wang and Cho, 2019; He and Li, 2021).", "GibbsComplete can also be broadly viewed as an example of unsupervised language generation, among many other alternatives (Liu et al., 2019a; Qin et al., 2020; West et al., 2021).", "Our GibbsComplete algorithm first proposes a uniform random guess on [1, 10] for the length of each B_i, and initializes them as [MASK] sequences of the guessed lengths.", "It then proposes an edit to a randomly chosen position b_{i,j} within the blanks by sampling from the sorted list of likely replacements according to the conditional probability p(b_{i,j} | B_{\i}, b_{i,<j}, b_{i,>j}, C) given by a masked language model.", "We take 500 stochastic editing steps: for the first 250 steps, a burn-in period, the replacement is sampled from the top 50 likely words; for the remaining iterations, the most likely word is picked as the replacement, following the common practice of annealing temperature for better generation quality (Wang and Cho, 
2019).", "The final output of the editing process is a candidate completion.", "We sample 1000 such candidates, and then rerank them with an autoregressive forward language model, using mean per-word conditional log-probability as a scoring function to promote fluency.", "Neither motif is fine-tuned or retrained on the target infilling task: we take them as basic computational motifs that are immediately available to a language-using agent for use in sampling-based probabilistic inference to facilitate novel behaviors.", "Completing fragmentary inputs of the type studied here is not a major form of everyday language use, yet as our experiments show, native speakers can perform even challenging fragment completions fairly well with little practice.", "Like the two learning settings considered in the direct specialization approach, we implement instances of the GibbsComplete algorithm with", "(a) pretrained language models as the computational motifs, and", "(b) computational motifs of the same architecture as their pretrained counterparts but learned from the same corpus as the trained-from-scratch models from the direct specialization approach.", "We implement instances of the models under two learning settings:", "(a) transferring knowledge from pretrained models, and", "(b) training from scratch.", "Implementations are based on the Huggingface transformers package (Wolf et al., 2020).", "In the case of knowledge transfer with pretrained models, no fine-tuning or learning is needed for GibbsComplete.", "We simply use the small version of pretrained GPT-2 (Radford et al., 2019) and the base cased version of pretrained BERT (Devlin et al., 2019) as the corresponding computational motifs.", "For direct specialization models, we fine-tune pretrained GPT-2 small, T5 base, and BART large to get ILM, InfillT5, and InfillBART, respectively.", "The total number of parameters of these model architectures is listed in Table 1. 
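The editing loop and reranking described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a toy fixed ranking stands in for the BERT masked-LM proposal, and a toy log-probability table stands in for the GPT-2 reranker; all names are assumptions.

```python
import random

def gibbs_edit(proposal, length_range=(1, 10), steps=500, burn_in=250,
               top_k=50, seed=0):
    """One GibbsComplete-style candidate: start from a [MASK] blank of
    random length, then repeatedly resample a random position, exploring
    the top-k proposals during burn-in and taking the argmax afterwards."""
    rng = random.Random(seed)
    n = rng.randint(*length_range)
    blank = ["[MASK]"] * n
    for step in range(steps):
        j = rng.randrange(n)
        ranked = proposal(blank, j)  # replacement list, best first
        if step < burn_in:
            blank[j] = rng.choice(ranked[:top_k])
        else:
            blank[j] = ranked[0]
    return blank

def rerank(candidates, word_logprob):
    """Pick the candidate with the highest mean per-word log-probability,
    mirroring the forward-LM fluency reranker."""
    return max(candidates,
               key=lambda c: sum(word_logprob(w) for w in c) / len(c))

# Toy stand-ins for the masked-LM proposal and forward-LM scorer.
vocab = ["the", "cat", "sat"]
toy_proposal = lambda blank, j: vocab  # fixed ranking for illustration
toy_logprob = lambda w: {"the": -0.5, "cat": -1.0, "sat": -2.0}.get(w, -5.0)
cands = [gibbs_edit(toy_proposal, seed=s) for s in range(5)]
best = rerank(cands, toy_logprob)
```

In the real system the proposal ranks replacements by p(b_{i,j} | context) under BERT, 1000 candidates are drawn, and GPT-2 supplies the per-word log-probabilities.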
All models were fine-tuned on a 10 million token subset of the 2007 portion of the New York Times Corpus (Sandhaus, 2008), with a batch size of 32 and learning rate of 10^-5.", "The supervision signal is generated by randomly cropping some spans of words in a sentence to get the fragmentary context C and a plausible completion B (see Appendix C.2 for details).", "We stopped fine-tuning when the validation loss increased for two epochs in a row.", "[Footnote 3: Although the pretraining tasks of T5 and BART do include modified versions of the sentence infilling problem or related text denoising tasks, our initial experimentation suggested that pretrained T5 and BART could not fully support the flexible generalization required in our studies, hence we fine-tuned them as above.]", "To generate completions from ILM, InfillT5, and InfillBART, we apply ancestral sampling from a list of the top 50 most likely tokens at each time step.", "In experimenting with the fine-tuned models, we noticed lower diversity in InfillBART samples compared to the other models.", "Hence we set the sampling temperature as 1 for other models but 1.8 for the fine-tuned InfillBART to ensure that all models generate a good variety of completions.", "When training from scratch, we train all components to be learned in all models on two separate 42-million-token datasets: (1) part of the 2006 portion of the New York Times Corpus (Sandhaus, 2008; NYT), and (2) part of the BLLIP corpus (Charniak et al., 2000) previously prepared by Hu et al. 
(2020).", "For GibbsComplete, we train an auto-regressive Transformer decoder language model of the same size as pretrained GPT-2 small and a masked language model of the same size as pretrained base cased BERT.", "For ILM, InfillT5, and InfillBART, we initialize the same architecture and tokenizer as their fine-tuned counterparts and train each model from scratch.", "We use a batch size of 16 for the masked language model in GibbsComplete and 32 for all the other models.", "The learning rate is set to 10^-5 across all the models.", "Training is early stopped if either the validation loss increases for two epochs in a row or the total number of training epochs exceeds 100.", "We also collect human completions of our fragments on Mechanical Turk, to evaluate performance and for fine-grained comparison of human and model behavior.", "The visual layout of these experiments is shown in Appendix A. Participants were instructed to use as many or as few words as they saw appropriate to fill in the blanks in the fragmentary input, so that each completed sentence is coherent, grammatical, and meaningful.", "The interface required each blank to be filled in with at least one word.", "We imposed no time constraints.", "To address the question of whether the direct specialization or compositional inference paradigm gives rise to more humanlike behaviors, we focus on designing a series of linguistically-motivated experiments.", "Our first experiment qualitatively confirms that each model respects basic bidirectional constraints, and quantitatively evaluates the fluency of each model's completions using human judgments.", "We design 30 two-fragment stimuli by adapting sentences from the Brown Corpus (Francis and Kucera, 1979) and British National Corpus (2001), choosing spans to crop out such that successful completion does not require outside-of-sentence information or much factual world knowledge, but does require non-trivial respect of grammatical constraints: 
we require that both fragments be multi-word strings that cross conventional constituent boundaries.", "Table 4 in Appendix G.1 lists one randomly-generated completion from different models for a subset of the stimuli.", "Qualitatively, all models generate high-quality completions that fit the context and sound fluent, although coherence is sometimes lacking.", "For quantitative evaluation, we recruit human raters on Prolific to evaluate the quality of the completed sentences written by models as well as human writers previously recruited on Mechanical Turk.", "Human raters were presented with the fragmentary input together with a completion and asked to judge the grammaticality, coherence, interestingness, and overall quality of the presented completion.", "Ratings range from 1 to 100, with 1 the lowest score and 100 the highest.", "Each human rater judged 150 completions in total, with 30 completions from a human writer or each of the models.", "Raters did not know whether a completion came from a human or a model.", "The results of these ratings are shown in Figure 2: GibbsComplete, InfillT5, and InfillBART achieve similar performance on grammaticality judgment on this set of stimuli.", "The same human evaluation procedure is also applied to the set of models trained from scratch on the NYT and BLLIP data; results are given in Figure 2. 
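The span-cropping used above, both to generate the fine-tuning supervision and to build stimuli from corpus sentences, can be sketched as below. The function name and the uniform choice of cut points are illustrative assumptions (the sketch also keeps blanks away from the sentence edges, whereas the formal setup allows empty edge fragments).

```python
import random

def crop_fragments(sentence, n_blanks=1, seed=0):
    """Crop n_blanks non-overlapping word spans out of a sentence,
    returning the visible fragments C and the removed spans B."""
    rng = random.Random(seed)
    words = sentence.split()
    if len(words) < 2 * n_blanks + 1:
        raise ValueError("sentence too short for that many blanks")
    # 2*n_blanks distinct cut points partition the sentence into
    # alternating visible and cropped pieces.
    cuts = sorted(rng.sample(range(1, len(words)), 2 * n_blanks))
    bounds = [0] + cuts + [len(words)]
    pieces = [words[a:b] for a, b in zip(bounds, bounds[1:])]
    fragments = [" ".join(p) for p in pieces[0::2]]  # visible C_1..C_{k}
    blanks = [" ".join(p) for p in pieces[1::2]]     # cropped-out B_i
    return fragments, blanks

sentence = "the quick brown fox jumps over the lazy dog"
C, B = crop_fragments(sentence, n_blanks=1, seed=1)
```

Interleaving the returned fragments with the cropped spans reconstructs the original sentence, which is exactly the well-formedness target a completion model must satisfy.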
Overall performance is worse than with the pre-trained models, but the relative patterns from model to model are similar.", "Overall, the results of Evaluation 1 suggest that when extensive bidirectional context is given, all models are able to generate structurally well-formed completions.", "This success motivates our next experiment, which involves briefer input in syntactically constrained configurations that more strongly challenge models' grammatical abilities.", "The first fragment consists of a plural noun phrase with an incomplete relative clause postmodifier; the second fragment is a singular verb, is.", "The plural noun in the first fragment cannot be the subject of is due to number agreement in English, which forces a syntactically complex completion, such as The paintings that the artist gave to the museum are gorgeous and one of them is absolutely a masterpiece.", "We design 26 novel fragment configurations to test models' syntactic behavior, ranging broadly across subject-predicate agreement, clausal structure, coordination, and filler-gap dependencies.", "[Figure 3: Aggregated results on syntactic reasoning tests.]", "For 
With each model we generated 35 completions for each stimulus in each of the 26 tests.", "To collect human judgments for every single completion would be laborious and difficult to scale, in particular because evaluating whether the syntactic constraint is satisfied often requires some linguistic expertise.", "Instead, we represent the key syntactic constraints imposed by the fragments as tree patterns to be expected in the constituency parse of a completed stimulus.", "For example, given a stimulus published won . from the test (6) as shown in Appendix E, the linguistic intuition is that published should be part of a Verb Phrase embedded in a relative clause that modifies the subject of the predicate won, despite other possible structural variations.", "We express these tree patterns using the Tregex tool (Levy and Andrew, 2006), compute the average rate of hitting the desired syntactic patterns out of the 35 completions which are annotated with syntactic parses by an off-the-shelf neural constituency parser benepar (Kitaev and Klein, 2018), and average across all the stimuli in a test as the final accuracy score for that test.", "5 Figure 3 shows the performance of humans and each model; Figure 8 in Appendix G.2 breaks down accuracy scores by each test separately.", "Human performance 6 is as good as or superior to all models 5 We also conducted manual evaluation of a sample of the pattern-matching results, which confirmed high accuracy of this automated evaluation procedure; see Appendix F for details.", "6 We collected human completions from Mechanical Turk for two stimulus items of each test, with 56 responses for ( p < 0 . 
05 for all models except fine-tuned InfillT5 and InfillBART, two-sided paired t-test).", "For the pretrained models, ILM performs the worst; the other three models' performance is comparable.", "When training from scratch, GibbsComplete outperforms all other models except on BLLIP, where its performance is matched by InfillBART.", "To address a potential concern about the ensembling effect of the reranking process in GibbsComplete, we also examine the performance obtained when composing the outputs of the directly-specialized models with the reranking process of GibbsComplete, generating 1000 candidate completions per fragment and selecting the top-ranked 35 completions using the same reranker as in GibbsComplete.", "Figure 10 in Appendix G.2 shows the results: InfillT5 and InfillBART now perform the best and even match human performance, further underscoring the value of a compositional approach even when dedicated training for direct specialization is available.", "Evaluation II showed that models can perform very well even on more grammatically challenging fragment completion tasks, in some cases matching human performance.", "But do these models complete fragments in a similar way as humans do?", "Evaluation III turns to this question, using still more open-ended fragments and fine-grained analysis of the structural similarity of human and model completions.", "To evaluate fine-grained similarity of human and model behavior, we define summary statistics of features of completions and assess the similarity of the summary statistics seen in human and model completions.", "We designed 120 stimuli in the form of w1 w2., where w1 and w2 are single-word fragments, allowing for a diverse range of plausible syntactic choices of the global context.", "For example, museum city .", "Human completions were evaluated with the same tree pattern-based method as model-generated completions.", "We evaluate the performance of a model by its mean squared error (MSE) against the human relative frequencies of the five syntactic category types for each item.", "Statistical significance of differences between model performances is tested with a two-sided paired t-test.", "Figure 5 shows quantitative results for each model and training condition.", "Overall, GibbsComplete is the best-performing model.", "With pretrained models, GibbsComplete has significantly lower aggregated MSE than fine-tuned InfillT5 (p = 0.012) and fine-tuned ILM (p = 0.010), and is numerically lower than that of fine-tuned InfillBART (p = 0.108).", "When training from scratch, GibbsComplete is not significantly better.", "The MSE of GibbsComplete with motifs trained from scratch is not significantly better than that of other models trained from scratch on NYT, but when training on BLLIP it significantly outperforms InfillT5 (p = 0.017) and InfillBART (p = 0.048) trained from scratch on BLLIP.", "Looking at specific LCA categories, we find that GibbsComplete outperforms all the other models (p < 0.005) in matching the frequency of NP in human completions.", "Except when comparing ILM trained on NYT to GibbsComplete on VP (p = 0.
041 ), no other directly-specialized model performs significantly better than GibbsComplete for categories S, VP, and ADJP.", "Overall, these results suggest that the statistics of LCA categories in the parsed completions by GibbsComplete with pretrained models better match those of humans than those of fine-tuned models, and that the advantage of GibbsComplete may also extend to low-resource settings.", "may be completed using a variety of different structural configurations, as shown in Figure 4.", "We parse each completion with benepar (Kitaev and Klein, 2018) and use the syntactic category of the lowest common ancestor (LCA) of w 1 and w 2 in the parse tree as a feature of the completion.", "We choose 40 Noun-Noun, 40 Adjective-Adjective, and 40 Adjective-Noun combinations as w 1 and w 2 respectively, once again with diverse semantic content.", "We recruited human subjects from Mechanical Turk, with 18 subjects for the Noun-Noun condition, 18 subjects for Adjective-Adjective, and 18 subjects for Adjective-Noun.", "Each subject wrote one completion for every item in the assigned condition.", "For an item, the completions from the subjects provide the human data from which we estimate the summary statistics of interest.", "We estimate the LCA frequency distribution across five syntactic category types: S, NP, VP, ADJP, and Other (everything else).", "For models, we sample and parse 35 completions for each stimulus to estimate the LCA frequency distribution.", "We have contrasted this with a competing hypothesis, namely that human behavior in novel tasks is better captured by inferences resulting from flexibly composing existing basic computational motifs, which relates to the idea explored in compositional use of neural modules (Andreas et al., 2016).", "To compare these hypotheses, we have developed and tested new, more challenging, and more open-ended versions of the text infilling task, which we term fragmentary input completion, and evaluated the performance of 
different models instantiating the two [Figure 5: Comparing structural statistics in model's completions with that of the human-written completions.]", "hypotheses against subjective human judgments, fixed success criteria, and fine-grained comparisons with human task performance.", "In the future this approach could be extended further for more comprehensive evaluation of conditioned language generation systems.", "Our results are generally favorable for the compositional inference hypothesis as exemplified by our novel GibbsComplete algorithm, which composes the two fundamental language modelling motifs central to today's language models, masked word prediction and autoregressive modeling.", "The motifs themselves need to be learned in the first place, but there is a strong case that these motifs reflect tasks fundamental to everyday language use: identifying an uncertain word using bidirectional context (Connine et al., 1991; Dilley and Pitt, 2010; Levy, 2008b) and predicting upcoming input (Hale, 2001; Levy, 2008a; Kutas et al., 2011; Kuperberg and Jaeger, 2016).", "The idea we advance here, that these fundamental motifs are pre-existing and flexibly deployed for novel tasks, echoes 
a long-standing perspective in cognitive science well-summarized by Bruner et al. (1986), that 'Thinking is not the acquisition of knowledge, but the use of knowledge in the interest of solving problems.'", "Sampling-based approaches may capture important general features of human inferential patterns (Vul et al., 2014).", "Furthermore, the compositional inference approach to modelling flexible language generation by no means diminishes the value of learning or fine-tuning; indeed, fine-tuned models can themselves be composed (see also our exploratory work in Appendix G.2).", "Rather, we hope that this work may help widen the perspective on the relationship between learning and inference in novel language tasks and contexts.", "Scaled learning and quick adaptation of linguistic representation have enabled huge progress in the engineering of high-performance NLP systems, but our results suggest that flexible redeployment of basic computational motifs may have advantages for capturing how humans flexibly use language in novel circumstances where they do not have extensive experience.", "Our studies offer systematic model comparisons with materials designed to highlight subtle features of grammatical knowledge and featural statistics of human completion preferences, and point to the need for longer-term efforts in understanding and modeling human cognitive flexibility in computational terms.", "As an initial step, we explored the compositional inference hypothesis by sketching out an inference algorithm based on the principles of approximate Bayesian inference.", "The results suggest certain advantages of an inference-oriented view of human language generation, and an alternative path towards building models that process linguistic information as flexibly as humans do.", "We thank the anonymous reviewers and members of the MIT Computational Psycholinguistics Lab for their helpful comments, and members of the Goals, Problems and Stories working group for discussions 
on early ideas of the project.", "This work was supported by the MIT-IBM Watson AI Lab and by NSF award BCS-2121074." ]
[ "abstain", "abstain", "objective", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "objective", "result", "objective", "abstain", "other", "other" ]
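The MSE-based evaluation described in the first record above (per-item LCA category frequency distributions compared between human and model completions) can be sketched in Python. This is a minimal illustration, not the paper's code; the function names and the collapsing of all non-listed categories into "Other" follow the text's description but are otherwise assumptions:

```python
from collections import Counter

# The five LCA category types named in the text; any other syntactic
# category is collapsed into "Other".
CATEGORIES = ["S", "NP", "VP", "ADJP", "Other"]

def lca_distribution(lca_labels):
    """Relative frequency of each LCA category over a set of parsed completions."""
    counts = Counter(lbl if lbl in CATEGORIES else "Other" for lbl in lca_labels)
    total = len(lca_labels)
    return [counts[c] / total for c in CATEGORIES]

def item_mse(human_labels, model_labels):
    """Mean squared error between human and model LCA frequency
    distributions for a single stimulus item."""
    human = lca_distribution(human_labels)
    model = lca_distribution(model_labels)
    return sum((h - m) ** 2 for h, m in zip(human, model)) / len(CATEGORIES)
```

A model's aggregate score would then be the mean of `item_mse` over all 120 items, and significance between two models can be assessed with a two-sided paired t-test over the per-item MSEs, as the text describes.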
[ "Mohit Bansal", "Abstract Pre-trained sequence-to-sequence language models have led to widespread success in many natural language generation tasks.", "However, there has been relatively less work on analyzing their ability to generate structured outputs such as graphs.", "Unlike natural language, graphs have distinct structural and semantic properties in the context of a downstream NLP task, e.g., generating a graph that is connected and acyclic can be attributed to its structural constraints, while the semantics of a graph can refer to how meaningfully an edge represents the relation between two node concepts.", "In this work, we study pre-trained language models that generate explanation graphs in an end-to-end manner and analyze their ability to learn the structural constraints and semantics of such graphs.", "We first show that with limited supervision, pre-trained language models often generate graphs that either violate these constraints or are semantically incoherent.", "Since curating large amounts of human-annotated graphs is expensive and tedious, we propose simple yet effective ways of graph perturbations via node and edge edit operations that lead to structurally and semantically positive and negative graphs.", "Next, we leverage these graphs in different contrastive learning models with Max-Margin and InfoNCE losses.", "Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs and also generalize to other similar graph generation tasks.", "Lastly, we show that human errors are the best negatives for contrastive learning and also that automatically generating more such human-like negative graphs can lead to further improvements.", "1 Introduction Pre-trained sequence-to-sequence language models (PLMs) like BART (Lewis et al., 2020) and [1: Our code and models are publicly available at https://github.com/swarnaHub/ExplagraphGen .]", "T5 (Raffel et al., 2020) have led to significant advances in many 
natural language generation tasks like text summarization and machine translation.", "The models are pre-trained on massive amounts of text data with self-supervision, thus enabling them to construct coherent natural language sentences for downstream tasks.", "This then raises the question of whether pre-trained language models, trained on free-form natural language data, can also adapt themselves to generate structured outputs like graphs.", "Graphs are common in NLP tasks that involve representing structured knowledge in the form of knowledge bases (Guarino and Giaretta, 1995), constructing event chains from documents (Chambers and Jurafsky, 2009), or more recent work on encoding reasoning chains, explanations, or deductive proofs (Saha et al., 2020; Tafjord et al., 2021; Dalvi et al., 2021).", "Graphs differ from free-form natural language.", "In the context of NLP, natural language graphs (consisting of textual nodes and edges) can have distinct structural and semantic properties.", "For example, consider a recently proposed commonsense explanation graph generation task shown in Fig. 
1 (Saha et al., 2021b).", "Each example shows a belief, an argument and an explanation graph explaining how the argument supports or refutes the belief.", "These explanation graphs encode structured knowledge (augmented with commonsense) and consist of concepts as nodes and relations from ConceptNet (Liu and Singh, 2004) as edges.", "For example, the second graph encodes the knowledge that both salads and fast food are part of mcdonalds and hence mcdonalds is not greasy and fattening, thus explicitly refuting the belief.", "From prior work, the structural constraints enforce the graphs to be connected, directed, and acyclic and the nodes to contain at least two concepts from the belief and two from the argument.", "The semantic aspect deals with commonsense and evaluates whether each edge expresses coherent relational knowledge and if the whole graph explains the stance.", "Following Saha et al. (2021b), we represent graphs as strings composed of concatenated edges and fine-tune T5 to generate graphs in an autoregressive manner.", "We observe that while a moderate amount of supervision enables the model to learn valid graph encodings, the graphs frequently violate task-specific structural constraints (like connectivity).", "For instance, the first example in Fig. 1 shows a graph generated by T5 that is disconnected and hence structurally incorrect.", "Moreover, for the fraction of graphs that are structurally correct, the model also makes commonsense mistakes, a type of semantic error, by inferring wrong or incoherent relations between concepts.", "Both T5-generated graphs shown in Fig. 
1 contain incoherent or non-commonsensical edges (marked by dashed arrows) like (fast food; has context; salads).", "Based on these observations, we study PLMs that generate explanation graphs in an end-to-end manner and analyze their ability to learn the structural constraints as well as the semantics of such graphs.", "While a general recipe towards improving the structural and semantic aspects of graph generation can be via large-scale training with more human-annotated graphs, it is prohibitive under most practical scenarios because of the cognitive load associated with a complex data creation task like graph annotation (Dalvi et al., 2021; Saha et al., 2021b).", "Hence, we propose simple yet effective methods of graph perturbations that perform various kinds of node and edge addition, deletion, and replacement operations to construct structurally and semantically positive (correct) and negative (incorrect) graphs.", "Overall, we leverage three types of negative graphs (synthetic structural, synthetic semantic, and human-created semantic) and develop multiple contrastive learning models (Hjelm et al., 2018; Chen et al., 2020a; Khosla et al., 2020; Gunel et al., 2020) for effectively distinguishing between correct and incorrect graphs.", "Our first method is a Generate-and-Refine model that first generates an initial graph and further refines it using another T5 model.", "Next, we propose two improved models: one that uses the negative graphs in a max-margin formulation and another that uses both positive and negative graphs with an InfoNCE (van den Oord et al., 2018) contrastive loss.", "On two real-world tasks of explanation graph generation and temporal graph generation, with varied node and edge semantics, we observe that our proposed methods and graph perturbation techniques generalize well and lead to improvements in both structural and semantic accuracy of graphs.", "Further analysis of different types of negative graphs reveals that the human-error graphs 
are the hardest, most diverse, and hence the best type of negatives to learn from in contrastive learning.", "Hence, we also develop methods to automatically generate more such human-like semantic negative graphs, which leads to further improvements.", "We summarize our contributions as follows.", "We present a detailed analysis of graph structure and semantics for end-to-end explanation graph generation via pre-trained language models.", "We propose simple yet effective graph perturbation techniques for constructing positive and negative graphs and use them in different graph contrastive learning models.", "Our methods lead to significant improvements in both structural and semantic accuracy of explanation graphs and also generalize to other similar graph generation tasks.", "Graph Generation from Language Models.", "Representative works on graph generation from language models include knowledge graph completion models like Comet (Bosselut et al., 2019; Hwang et al., 2021) that fine-tune GPT (Radford et al., 2019; Brown et al., 2020) and BART (Lewis et al., 2020), generation of event influence graphs (Tandon et al., 2019; Madaan et al., 2020), partially ordered scripts (Sakaguchi et al., 2021), temporal graphs (Madaan and Yang, 2021), entailment trees (Dalvi et al., 2021), proof graphs (Saha et al., 2020; Tafjord et al., 2021; Saha et al., 2021a) and commonsense explanation graphs (Saha et al., 2021b).", "Linguistic tasks like syntactic parsing (Zhou et al., 2020; Mohammadshahi and Henderson, 2021; Kondratyuk and Straka, 2019) and semantic parsing (Chen et al., 2020b; Shin et al., 2021) have also made use of language models.", "There is also a large body of work on building generative models for learning unconditional graph distributions (You et al., 2018; Simonovsky and Komodakis, 2018; Grover et al., 2019; Liao et al., 2019; Shi* et al., 2020) without any semantics attached to the graphs.", "Our novelty lies in presenting the first systematic analysis of 
structure and semantics of graph generation for two downstream NLP tasks using pre-trained language models and improving them via contrastive learning.", "Data Augmentation and Contrastive Learning.", "Data Augmentation for NLP (Hedderich et al., 2020; Feng et al., 2021; Chen et al., 2021) has been a powerful tool in low-data settings, ranging from its early usages with synonym replacement (Kolomiyets et al., 2011; Wang and Yang, 2015) to more recent methods of perturbing hidden representations (Miyato et al., 2016; Shen et al., 2020).", "Contrastive learning, beyond its historical use in learning robust image representations (Chopra et al., 2005; Hadsell et al., 2006; Gutmann and Hyvärinen, 2010; Hoffer and Ailon, 2015; Hjelm et al., 2018; Chen et al., 2020a; He et al., 2020) has been explored in supervised scenarios (Khosla et al., 2020; Gunel et al., 2020) and for NLP, in training self-supervised language models (Fang et al., 2020), learning sentence representations (Gao et al., 2021), document clustering (Zhang et al., 2021), summarization (Liu and Liu, 2021; Cao and Wang, 2021) and generic text generation (Lee et al., 2020).", "It has also been used in unconditional graph representation learning (You et al., 2020; Hassani and Khasahmadi, 2020; Zhu et al., 2021).", "We follow this rich line of work to explore its applicability in supervised graph generation tasks from pre-trained language models in low-resource settings.", "Generative Commonsense Reasoning.", "While traditional commonsense reasoning tasks are discriminative in nature (Zellers et al., 2018; Talmor et al., 2019; Sap et al., 2019; Bisk et al., 2020; Sakaguchi et al., 2020; Talmor et al., 2021), recent focus on generative evaluation has led to the development of tasks and benchmarks that explore unstructured commonsense sentence generation (Lin et al., 2020), event influence graph generation (Madaan et al., 2020), commonsense explanation graph generation (Saha et al., 2021b), etc.", "We 
experiment with two graph generation tasks, primarily focusing on ExplaGraphs (Saha et al., 2021b) because of the clear distinction in the underlying structural constraints and the semantic aspect dealing with commonsense.", "Our primary task of interest is a recently proposed commonsense explanation graph generation task called ExplaGraphs (Saha et al., 2021b).", "In Sec. 6.4, we also experiment with another related task of temporal graph generation (Madaan et al., 2020).", "In both these tasks, the structural aspect deals with satisfying certain task-specific constraints on the graph (like connectivity) and the semantic aspect deals with the construction of meaningful edges (that adhere to commonsense).", "Below we discuss ExplaGraphs briefly and analyze pre-trained language models for their ability to generate explanation graphs.", "ExplaGraphs (Saha et al., 2021b).", "In this task, given a belief and an argument, an agent has to perform two sub-tasks: predict the stance (support/counter) and also generate an explanation graph explaining the stance.", "Explanation graphs are structured explanations that capture explicit reasoning chains between the belief and the argument, thereby making models more interpretable.", "Formally, an explanation graph is a connected DAG with nodes as concepts and edges as commonsense relations between two concepts (See Fig. 1).", "The concepts are either part of the belief or the argument (represented with solid boxes) or any external commonsense phrase (represented with dashed boxes).", "Each edge in the graph forms a coherent sentence and the graph, when read as a whole, forms reasoning structures explaining why the argument supports or refutes the belief.", "Saha et al. 
(2021b) evaluate explanation graphs by defining two accuracy metrics: (1) Structural Correctness Accuracy (StCA): Fraction of graphs that satisfy all structural constraints, and (2) Semantic Correctness Accuracy (SeCA): Fraction of graphs that are both structurally and semantically correct.", "A graph is considered structurally correct if it satisfies the following constraints: (1) it is connected, (2) it is a DAG, (3) the edge relations belong to a pre-defined list, (4) there are at least two concepts from the belief and two from the argument.", "If all these constraints are satisfied, the graph is next evaluated for semantic correctness by a model-based metric (Saha et al., 2021b).", "It works on the principle that an explanation graph is semantically correct if the stance inferred from the belief and the graph matches the gold stance.", "Refer to Appendix A for a detailed description of all evaluation metrics.", "Baseline T5 Model.", "Following prior work (Saha et al., 2021b), we generate explanation graphs as post-hoc explanations by conditioning on the belief, argument and the predicted stance.", "The stance prediction model is a fine-tuned RoBERTa model (Liu et al., 2019) which we keep unaltered from prior work and focus on the graph generation sub-task.", "We generate graphs as linearized strings in an end-to-end manner by leveraging an encoder-decoder pre-trained language model, T5 (Raffel et al., 2020).", "The input to the model is the concatenated belief, argument and the stance along with a prefix Generate an Explanation Graph for .", "The graphs are encoded as concatenated bracketed edges, in which the edges are ordered according to the Depth First Search (DFS) order of the nodes.", "While we choose T5 because of its superior performance (Saha et al., 2021b), we do not make any model-specific assumptions and graphs can be generated via any encoder-decoder style pre-trained language model (e.g., see Appendix E for results with BART).", "Analysis of 
T5 Baseline.", "We analyze the quality of the explanation graphs generated by T5 in Table 1.", "We vary the amount of training data from 500 to 2368 samples (all) and report StCA and SeCA along with other metrics like Graph-BertScore (G-BS) introduced in prior work (Saha et al., 2021b).", "While the structural accuracy improves with an increase in training data, the gain saturates quickly and even after training on the entire data, we find a significant fraction of graphs to violate the structural constraints.", "We note that a high 91% of T5's generations are valid graph encodings, i.e., the generated strings can be parsed into graphical structures (without any post-processing), suggesting that T5 is able to learn the graph encoding from a fairly small amount of supervision.", "However, it fails to satisfy the various structural constraints: (1) 20% of the graphs are disconnected, (2) 6% of the graphs contain cycles, and (3) 14% of the graphs have less than two concepts from the belief or from the argument.", "Note that these constraints are not encoded in the model, thus making them fairly hard to learn from limited supervision.", "On the fraction of structurally correct graphs, the model makes further semantic errors and a lower SeCA of 35% demonstrates that.", "In Fig. 1, we show examples of structurally incorrect and semantically incorrect graphs generated by T5.", "Overall, these results indicate that there is a significant scope for improvement both on graph structure and semantics, thus motivating us to develop methods with design choices aimed at improving both aspects.", "Most prior works that collect human-annotated graphs for a downstream NLP task have found such collection processes to be quite expensive and tedious (Tandon et al., 2019; Dalvi et al., 2021; Saha et al., 2021b).", "For instance, Saha et al. (2021b) obtained high-quality data only after multiple rounds of refinement and Dalvi et al. 
(2021) employ trained expert annotators for entailment tree construction.", "The corresponding datasets are also relatively small in size (2-3k), thus limiting the prospect of large-scale training.", "Hence, our approach towards improving explanation graph generation is through data augmentation techniques that perturb human-curated graphs to construct positive and negative graphs.", "As noted earlier, we wish to construct graphs that enable better learning of [Figure 2: Our T5-based contrastive learning framework for graph generation using positively and three kinds of negatively perturbed graphs.]", "One simple method to augment existing training data is to create synthetic positive graphs.", "These graphs should be created such that all the task-specific constraints continue to hold upon perturbations.", "E.g., removing a node that makes the graph disconnected is a prohibitive action.", "Hence, we choose nodes (concepts) that are not part of the belief or the argument (also termed as commonsense nodes) and replace them with phrases that are synonymous to the original phrases.", "To do so, we select words from the concept with POS tags of Adjective, 
Noun, Adverb, or Verb and replace them with that synonym from Wordnet (Miller, 1995) for which the cosine similarity of their word2vec representations (Mikolov et al., 2013) is the highest.", "Fig. 2 shows an example of a positive graph perturbation where the node loss of jobs is replaced with going of business.", "Note that our node replacement operations will always lead to structurally similar graphs.", "Automatically constructing structurally diverse positive graphs is a challenging problem and we leave that for future work.", "In order to enable the model to learn from explicit hard negatives, we construct three diverse types of graphs: synthetically constructed structural negatives for learning graph constraints and synthetic", "We also tried similar replacement operations with antonyms.", "However, they often lead to semantically inconsistent graphs.", "E.g., A causes B does not always imply A not causes not B or not A not causes not B.", "and human-created semantic negatives to capture a fairly large space of semantically incorrect graphs.", "Below we discuss the construction of these graphs.", "Synthetic & Structurally Negative Graphs (SySt).", "As shown previously, one common source of errors in the generated explanation graphs is the violation of structural constraints.", "To enable learning these constraints, we generate four types of negative graphs by performing the following perturbations on each ground-truth graph: (1) removing an edge at random such that the resultant graph becomes disconnected, (2) adding an edge between two randomly chosen nodes such that the resultant graph becomes cyclic, (3) adding and removing one edge at random such that the resultant graph becomes both disconnected and cyclic, (4) removing a node randomly such that the resultant graph contains less than two concepts from the belief or argument.", "Fig. 
2 shows an example of a disconnected graph created as part of the structurally negative graphs.", "Synthetic & Semantic Negative Graphs (SySe).", "We also construct semantically incorrect negative explanation graphs.", "While the previous category of negative graphs (SySt) captures structural constraints, SySe captures the relational knowledge in graphs.", "Semantic incorrectness typically arises from inappropriate relations that do not adhere to human commonsense (loss of jobs; is a; humane).", "We create such negative graphs by selecting a random number of edges and then replacing the relations with some other relations.", "Fig. 2 shows a semantic negative graph in which the relations marked with dashed lines are perturbed.", "Human-created & Semantic Negative Graphs (HuSe).", "The space of semantically incorrect graphs is fairly large and in order to augment our synthetic negative graphs with harder structurally-diverse negatives, we make use of human-created incorrect graphs from prior work (Saha et al., 2021b).", "Humans make subtle errors, thus making them ideal negative candidates for contrastive learning.", "ExplaGraphs was constructed via an iterative framework in which the graphs are iteratively refined (up to two times) until they are verified as correct.", "We treat these refined graphs as negatives.", "Specifically, in two rounds, if an initial graph G 1 [4: Publicly released by Saha et al. (2021b) at https://github.com/swarnaHub/ExplaGraphs/blob/main/data/refinement_graphs_train.tsv .]", "is refined into graphs G 2 and G 3 successively, then G 1 and G 2 are considered as negative graphs.", "Unlike SySe which only perturbs the relations, these negatives are structurally diverse (see Fig. 
2) and capture semantics not just at the level of each edge but for the graph as a whole (e.g., a graph might be refined because it does not explain the stance).", "Note that human-created graphs can only be semantically incorrect, since their structural correctness is already ensured during construction.", "Next we propose different methods of leveraging these positive and negative graphs for explanation graph generation.", "Our models either use only positive graphs as simple data augmentation, only negative graphs in a max-margin model, or both in a Generate & Refine model and a Contrastive model.", "In this first simple approach, we augment the training data with the synthetically created positive graphs and retrain the baseline T5 model.", "Our next model leverages the negatively perturbed graphs in a max-margin formulation.", "During training, given a (belief, argument, stance) context $x$, a ground truth graph $G^{(g)}$ and a negative graph $G^{(n)}$, linearized into sequences of words $\{y_i^{(g)}\}_{i=1}^k$ and $\{y_i^{(n)}\}_{i=1}^l$ respectively, we define the loss function $\mathcal{L}$ as a linear combination of the standard cross-entropy loss $\mathcal{L}_{CE}$ and a max-margin loss $\mathcal{L}_{MM}$, defined between a word $y_i^{(g)}$ of the positive graph and a word $y_i^{(n)}$ of the negative graph.", "$\mathcal{L}_{CE} = -\sum_i \log P(y_i^{(g)} \mid y_{<i}^{(g)}, x)$; $\mathcal{L}_{MM} = \sum_i \max(0, -\log P(y_i^{(g)} \mid y_{<i}^{(g)}, x) + \log P(y_i^{(n)} \mid y_{<i}^{(n)}, x) + \xi)$; $\mathcal{L} = \mathcal{L}_{CE} + \lambda \mathcal{L}_{MM}$, where $\lambda$ and $\xi$ (margin) are hyperparameters.", "As noted earlier, the baseline model often makes commonsense mistakes in distinguishing between positive and negative relations (causes vs not causes) and our relation-perturbing negative graphs and the max-margin loss component facilitate learning a better boundary between them.", "ExplaGraphs was constructed using a Refinement phase wherein the initially constructed graphs that are marked incorrect by human verifiers are further refined by another set of annotators.", "Here we emulate the graph 
refinement phase with the help of a model.", "Specifically, our approach is a 2-stage pipeline: first, an initial graph is generated by the baseline T5 model and second, an Explanation Graph Refinement model conditions on the initial graph, along with the belief, argument and the stance to refine the graph.", "The refiner is also a T5 model fine-tuned with the prefix Refine the Explanation Graph for on all positive and negative graphs described in Sec. 4.", "Note that our approach differs from the actual data collection process in two aspects.", "First, unlike the human-annotated graphs, which are refined only for semantic correctness, the model-generated graphs can be both structurally and semantically incorrect.", "Second, our approach does not involve a graph verification stage and thus, the refiner model acts on all (correct and incorrect) graphs generated in stage 1 and is thus trained with both correct and incorrect graphs.", "Our Contrastive Graph Generation Model (Fig. 2) also leverages both positive and negative graphs but instead of doing so in a 2-stage Generate & Refine model, uses a contrastive learning framework (Khosla et al., 2020; Gunel et al., 2020).", "Given a ground-truth graph $G^{(g)}$, a positive graph $G^{(p)}$ and a set of negative graphs $\{G_i^{(n)}\}_{i=1}^M$, contrastive learning aims to learn the graph representations such that the gold graph's representation is close to that of the synthetic positive graph while being distant from those of the negative graphs.", "Similar to Cao and Wang (2021), we use the last layer of the decoder in T5 as the representation of each token in the graph and obtain the graph representation by averaging over the constituent token representations.", "Let the graph representations be denoted by $h^{(g)}$, $h^{(p)}$ and $\{h_i^{(n)}\}_{i=1}^M$.", "Given $\mathcal{H}^{(g)} = \{h^{(p)}\} \cup \{h_i^{(n)}\}_{i=1}^M$, our overall loss combines the cross-entropy loss $\mathcal{L}_{CE}$ and the InfoNCE contrastive loss (van den Oord et al., 2018) $\mathcal{L}_{CL}$ as 
shown below: L = L_CE + λ L_CL, with L_CL = -log [ exp(sim(h^(g), h^(p)) / τ) / ∑_{h' ∈ H^(g)} exp(sim(h^(g), h') / τ) ],", "where λ and the temperature τ are the hyperparameters and sim(·) denotes the cosine similarity function between the graph representations.", "In Table 2, we compare the various modeling techniques described in Sec. 5 and their effect on the structural and semantic correctness of the generated graphs.", "While our primary metrics of interest are Graph Structural Accuracy (StCA) and Semantic Accuracy (SeCA), following prior work (Saha et al., 2021b), we also report Stance Accuracy (SA), Graph-BertScore (G-BS), Graph Edit Distance (GED) and Edge Accuracy (EA).", "Effect of Model Size and Training Data.", "The T5-Large model uses the same setup as the T5-Base model experimented with in Saha et al. (2021b).", "We observe that using a larger T5 model improves StCA by 12% and SeCA by 16%.", "This finding is in line with other commonsense reasoning tasks (Lourie et al., 2021; Elazar et al., 2021) which also show that fine-tuning a larger language model typically leads to better performance.", "Together with the results reported in Table 1, we conclude that much of the improvement in explanation graph generation comes from increasing the training data and using a larger model.", "Given its superior performance, we build our proposed models on T5-Large.", "Results with Generate & Refine Model.", "The Generate & Refine model (Sec. 5.3) improves all metrics; however, the gains are small.", "Note that this model refines all graphs (correct or not) and can lead to already correct graphs becoming incorrect after refinement.", "In practice, we observe that most graphs do not change much after refinement, which we believe stems from the model's inability to distinguish between correct and incorrect graphs.", "Effect of Positive Graph Perturbations.", "On retraining T5 augmented with the positively perturbed graphs (Sec.
5.1), we observe that it obtains significant improvement over T5 and Generate & Refine both in structural and semantic accuracy.", "Note that, by construction, the positive graphs only differ in the commonsense concepts (not part of the belief or argument) while keeping the structure intact.", "Hence, the model has more supervision about the semantics of the graphs as opposed to the structural constraints.", "This is reflected in the larger improvement in SeCA.", "The positive graphs, being structurally correct, also reinforce the model's belief about structural correlation with correct graphs, thus leading to some improvement in StCA as well.", "Effect of Negative Graph Perturbations.", "The Max-Margin model (Sec. 5.2) leverages all structurally and semantically incorrect graphs and obtains up to 6% and 9% improvement in StCA and SeCA respectively over the baseline T5 model.", "The model implicitly learns the structural constraints through relevant supervision, and the margin-based loss enables it to learn a better boundary between correct and incorrect graphs.", "Similarly, the semantically perturbed graphs improve the model's relation prediction capability between concepts.", "The Max-Margin model outperforms the Pos Data Aug model because the former has access to both structural and semantic supervision while the latter is only augmented with structurally similar graphs.", "Perturbations with Contrastive Learning.", "The Contrastive Graph Generation model (Sec.
5.4) leverages both positive and negative graphs and improves StCA to 60% with comparable SeCA to the Max-Margin model.", "The overall improvements in StCA and SeCA are 9% and 8% respectively compared to T5.", "We hypothesize that the contrastive model does not lead to further improvement in SeCA because of the structurally similar positive graphs.", "Table 3: Ablation study showing the effect of different types of negative graphs on ExplaGraphs dev set (StCA / SeCA / G-BS / GED / EA): T5-Large 46.5 / 31.6 / 36.8 / 0.66 / 26.7; +SySt 50.2 / 34.1 / 40.7 / 0.64 / 27.4; +SySe 50.7 / 35.1 / 40.8 / 0.63 / 27.3; +HuSe 49.5 / 38.4 / 39.4 / 0.64 / 26.1.", "This can potentially be improved by incorporating more structurally diverse graphs.", "Finally, our best SeCA is far from perfect and significant future work can be done in improving the graph semantics.", "Further ablations of negative graphs and human evaluation are done on the Max-Margin model, due to its slightly higher SeCA.", "Automatically evaluating graphs for semantic correctness is challenging.", "We conduct human evaluation to further validate our findings.", "We compare the graphs generated by T5 and our Max-Margin model on Amazon Mechanical Turk, where three annotators choose which graph is better or if they are mostly similar (instructions in Appendix F).", "For fair comparison, we evaluate only those samples where both models predict the correct stance and the graphs are also structurally correct.", "In fact, this lets us evaluate the semantic aspect in isolation when both graphs are structurally correct.", "With majority voting on 150 samples, we observe that our Max-Margin model's graphs are preferred 13% more often than those of the T5 model (43% vs. 30%, statistically significant with p < 0.05), while in 22% of cases the graphs are marked similar (the remainder have no majority).", "In Table 3, we show the effect of different types of negative graphs.", "We compare the results on the ExplaGraphs validation set by leveraging Synthetic
Structural (SySt), Synthetic Semantic (SySe) and Human-created Semantic (HuSe) graphs with the Max-Margin graph generation model.", "All types of negative graphs lead to a consistent increase in SeCA.", "Leveraging human-created negative graphs leads to a bigger gain in SeCA because of the hardness and diversity of these graphs; hence they are the best candidates for contrastive learning.", "[Figure 3 example input. Belief: Collectivism is terrible for society.]", "[Argument: Collectivism increases empathy.]", "We test the generalizability of constructing structurally and semantically perturbed graphs for contrastive learning by also experimenting on a temporal graph generation task (Madaan and Yang, 2021) that requires constructing a temporal graph from a document.", "The nodes in the graph are events from the document and the edges are temporal relations between events (before, after, etc.).", "Following our overall goal of improving graph generation with limited data, we randomly sample 1.3% of the overall corpus (~9.5k samples) as the training data such that all graphs are connected DAGs.", "Similar to ExplaGraphs, we create structurally negative graphs with disconnected and cyclic graphs and semantic negative graphs by perturbing the temporal relations.", "E.g., if an edge relation is before, we replace it with after.", "We construct positive graphs by replacing edges like A before B with B after A", "(more details in Appendix C).", "In Table 4, we report structural correctness accuracy", "(StCA)", "(percentage of connected DAGs)", "and Graph-BertScore", "(G-BS)", "for measuring approximate semantic correctness w.r.t. gold graphs.", "We observe that our contrastive model not only generates more valid graph encodings but also improves StCA by 8% and G-BS by 3%.", "Fig.
3 shows an example of the graphs generated by different models", "(more examples in Appendix F).", "Unlike T5, our models' graphs are both structurally and semantically correct with diverse commonsense nodes", "(Groupthink, Good Thing).", "While our models generate more correct graphs, they lack structural diversity: the Contrastive model generates 77% linear graphs", "(i.e., the nodes are in a linear chain),", "which is comparable to 75% in the T5 model.", "This can be attributed to our structurally similar positive graphs, as the model does not obtain enough supervision to generate diverse graphs.", "Structural diversity is not a measure of graph correctness; however, like diverse text generation", "(Vijayakumar et al., 2018), generating diverse graphs is an interesting direction for future work.", "6.6 Generating Human-like Semantic Negatives", "(HuSe-Gen)", "In ExplaGraphs, human-created negatives account for 38% of the samples for which the initially constructed graph was incorrect and was refined.", "Moreover, we see in the previous section that human-error graphs are the best negative candidates for contrastive learning", "(which is intuitive since tricky and subtle errors made by expert human annotators would make for some of the hardest negatives/distractors for a contrastive learning model to learn from).", "Hence, in this final section, we further explore whether it is also possible to automatically imitate and generate more of such harder human-like incorrect graphs for the remaining samples as well.", "Our method consists of the following steps.", "Human-like Negative Edge Generation.", "We first fine-tune a T5 model that conditions on the belief, argument and the stance to generate a set of incorrect edges", "(which is the set of edges that are present in the incorrect graph and not in the refined graph).", "Human-like Negative Graph Construction.", "This generated set of incorrect edges is then added to the correct graph to construct the incorrect
graph, such that it is structurally correct and hence representative of human-like erroneous graphs.", "Filtering High-quality Negative Graphs.", "Contrastive models will only benefit from these negatives if the negative edge generation model is accurate and generates edges that are actually incorrect.", "Hence, we control the quality of the generated incorrect graphs by the following two techniques:", "(a)", "Thresholding via fraction of Acceptable Edges", "(AE): We say that a generated incorrect edge is acceptable if it is not part of the correct graph and can be added to the correct graph without violating any structural constraints.", "We compute the fraction of acceptable edges for every generated negative graph and choose only those graphs with AE above a certain threshold.", "Intuitively, this ensures that a high fraction of the generated edges are actually incorrect and hence, when added to the correct graph, will lead to a sufficiently different", "(human-like)", "incorrect graph.", "(b)", "Thresholding via Incorrect Probability of a graph", "(IP): We use our SeCA metric model", "(that classifies a graph into support, counter, or incorrect class)", "to compute the probability of the generated graph being incorrect and choose those graphs that are above a certain threshold of incorrect probability.", "We set the AE threshold to 0.4", "and the IP threshold to 0.5", "(tuned on the dev set)", "and train the Max-Margin model using these additionally generated human-like negative graphs.", "As shown in Table 5, both thresholding approaches lead to further improvements over using just the human-created negative graphs.", "These initial promising results for emulating hard/tricky human errors as strong negatives for contrastive learning will hopefully lead to further future work in this interesting direction.", "We presented an empirical study of graph structure and semantics for end-to-end explanation graph generation from pre-trained language models and showed that the generated graphs often
violate structural constraints or are semantically incorrect.", "We significantly improve both the structural and semantic accuracy of graph generation by proposing contrastive learning models that leverage simple yet efficient methods of graph perturbations and also generalize to similar graph generation tasks.", "From an ethics standpoint, we provide a brief overview and show samples from the datasets that our models are trained on throughout the paper and also in the Appendix.", "Explanation graph generation improves the interpretability of neural commonsense reasoning systems and could prove to be effective in understanding and debugging such models.", "Hence we do not foresee any major risks or negative societal impact of our work.", "However, like any other ML model, the graphs generated by our models may not always be completely accurate and hence should be used with caution for real-world applications.", "We thank the reviewers for their helpful feedback and the annotators for their time and effort.", "This work was supported by DARPA MCS Grant N66001-19-2-4031, NSF-CAREER Award 1846185, DARPA YFA17-D17AP00022, ONR Grant N00014-18-1-2871, Microsoft Investigator Fellowship, and Munroe & Rebecca Cobey Fellowship.", "The views in this article are those of the authors and not the funding agency." ]
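The max-margin objective described in this record (cross-entropy on the gold graph plus a margin term against a perturbed negative graph) can be sketched in plain Python. This is a minimal illustrative sketch, not the paper's implementation: the function name, the alpha/margin defaults, and the toy log-probabilities are our own assumptions.

```python
import math

def max_margin_loss(logp_gold, logp_neg, alpha=1.0, margin=1.0):
    """Sketch of L = L_CE + alpha * L_MM as described in the text.

    logp_gold / logp_neg: per-token log-probabilities log P(y_i | y_<i, x)
    for the linearized gold graph and a perturbed negative graph.
    alpha (mixing weight) and margin are hyperparameters; the defaults
    here are illustrative, not the paper's tuned values.
    """
    # Standard cross-entropy loss over the gold sequence.
    l_ce = -sum(logp_gold)
    # Margin term: each gold-token log-probability should exceed the
    # corresponding negative-token log-probability by at least `margin`.
    l_mm = sum(max(0.0, -g + n + margin) for g, n in zip(logp_gold, logp_neg))
    return l_ce + alpha * l_mm

# Toy example: gold tokens are already far more likely than the negative
# tokens, so the margin term vanishes and only L_CE remains.
gold = [math.log(0.9)] * 3
neg = [math.log(0.01)] * 3
loss = max_margin_loss(gold, neg)
```

In a real system the per-token log-probabilities would come from the decoder of the fine-tuned T5 model; the margin term only contributes when a negative token becomes competitive with the corresponding gold token.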
[ "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "objective", "objective", "method", "objective", "result", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "other", "other", "method", "abstain", "method", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "other", "other", "abstain", "other", "method", "abstain", "abstain", "method", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "method", "other", "other", "other" ]
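For the Contrastive Graph Generation model summarized in the record above, the InfoNCE loss over graph representations can likewise be sketched as follows. Again an assumption-laden illustration: the helper names, the 2-d toy vectors, and the temperature default are ours, not the paper's.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def info_nce(h_gold, h_pos, h_negs, tau=0.1):
    """InfoNCE contrastive loss (van den Oord et al., 2018) over graph
    representations: pull the gold graph's representation toward the
    synthetic positive and away from the negatives. tau is the
    temperature hyperparameter (0.1 is an illustrative default)."""
    pos = math.exp(cosine(h_gold, h_pos) / tau)
    denom = pos + sum(math.exp(cosine(h_gold, h) / tau) for h in h_negs)
    return -math.log(pos / denom)

# Toy 2-d representations: the positive is aligned with the gold graph,
# while the single negative points the opposite way.
loss = info_nce([1.0, 0.0], [1.0, 0.0], [[-1.0, 0.0]], tau=1.0)
```

In the paper's setting the representations would be averages of T5 decoder token states for each linearized graph; here the loss shrinks as negatives become less similar to the gold representation.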
[ "This paper is concerned with dialogue state tracking (DST) in a task-oriented dialogue system.", "Building a DST module that is highly effective is still a challenging issue, although significant progress has been made recently.", "This paper proposes a new approach to dialogue state tracking, referred to as Seq2Seq-DU, which formalizes DST as a sequence-to-sequence problem.", "Seq2Seq-DU employs two BERT-based encoders to respectively encode the utterances in the dialogue and the descriptions of schemas, an attender to calculate attentions between the utterance embeddings and the schema embeddings, and a decoder to generate pointers to represent the current state of dialogue.", "Seq2Seq-DU has the following advantages.", "It can jointly model intents, slots, and slot values; it can leverage the rich representations of utterances and schemas based on BERT; it can effectively deal with categorical and non-categorical slots, and unseen schemas.", "In addition, Seq2Seq-DU can also be used in the NLU (natural language understanding) module of a dialogue system.", "Experimental results on benchmark datasets in different settings (SGD, MultiWOZ2.2, MultiWOZ2.1, WOZ2.0, DSTC2, M2M, SNIPS, and ATIS) show that Seq2Seq-DU outperforms the existing methods.", "A task-oriented dialogue system usually consists of several modules: natural language understanding (NLU), dialogue state tracking (DST), dialogue policy (Policy), and natural language generation (NLG).", "We consider DST and also NLU in this paper.", "In NLU, a semantic frame representing the content of a user utterance is created in each turn of dialogue. (Footnote: The work was done when the first author was an intern at ByteDance AI Lab.)", "In DST, several semantic frames representing the states of dialogue are created and updated in multiple turns of dialogue. Domain knowledge in dialogues is represented by a representation referred to as schema, which consists of possible intents, slots, and slot values.
Slot values can be in a pre-defined set, with the corresponding slot being referred to as a categorical slot, and they can also be from an open set, with the corresponding slot being referred to as a non-categorical slot. Figure 1 shows an example of DST.", "We think that a DST module (and an NLU module) should have the following abilities. (1) Global, the model can jointly represent intents, slots, and slot values. (2) Representable, it has strong capability", "to represent knowledge for the task, on top of a pre-trained language model like BERT. (3) Scalable, the model can deal with categorical and", "non-categorical slots and unseen schemas. Many methods have been proposed for DST (Wu et al., 2019; Zhong et al., 2018; Mrksic et al., 2017; Goo et al., 2018). There are two lines of relevant research. (1) To enhance the scalability of DST, a problem formulation, referred to as schema-guided dialogue, is proposed. In the setting, it is assumed that descriptions of schemas in natural language across multiple domains are given and utilized. Consequently, a number of methods are developed to make use of schema descriptions to increase the scalability of DST (Rastogi et al., 2019; Zang et al., 2020; Noroozi et al., 2020). The methods regard DST as a classification and/or an extraction problem and independently infer the intent and slot value pairs for the current turn. Therefore, the proposed models are generally representable and scalable, but not global. (2) There are also a few methods which view DST as a sequence-to-sequence problem. Some methods sequentially infer the intent and slot value pairs for the current turn on the basis of dialogue history and usually employ a hierarchical structure (not based on BERT) for the inference (Lei et al., 2018; Ren et al., 2019; Chen et al., 2020b).
Recently, a new approach is proposed which formalizes the tasks in dialogue as sequence prediction problems using a unified language model (based on GPT-2) (Hosseini-Asl et al., 2020). The method cannot deal with unseen schemas and intents, however, and thus is not scalable.", "We propose a novel approach to DST, referred to as Seq2Seq-DU (sequence-to-sequence for dialogue understanding), which combines the advantages of the existing approaches. To the best of our knowledge, there was no previous work which studied the approach. We think that DST should be formalized as a sequence-to-sequence or 'translation' problem in which the utterances in the dialogue are transformed into semantic frames. In this way, the intents, slots, and slot values can be jointly modeled. Moreover, NLU can also be viewed as a special case of DST and thus Seq2Seq-DU can also be applied to NLU. We note that very recently the effectiveness of the sequence-to-sequence approach has also been verified in other language understanding tasks (Paolini et al., 2021).", "Seq2Seq-DU comprises a BERT-based encoder
(3) Scalable: It uses the pointer generation mechanism, as in the Pointer Network (Vinyals et al., 2015), to create representations of intents, slots, and slot-values, no matter whether the slots are categorical or non-categorical, and whether the schemas are unseen or not.", "Experimental results on benchmark datasets show that Seq2Seq-DU 1 performs much better than the baselines on SGD, MultiWOZ2.2, and Multi-WOZ2.1 in multi-turn dialogue with schema descriptions, is superior to BERT-DST on WOZ2.0, DSTC2, and M2M, in multi-turn dialogue without schema descriptions, and works equally well as Joint BERT on ATIS and SNIPS in single turn dialogue (in fact, it degenerates to Joint BERT).", "There has been a large amount of work on task-oriented dialogue, especially dialogue state tracking and natural language understanding (eg., (Zhang et al., 2020; Huang et al., 2020; Chen et al., 2017)). Table 1 makes a summary of existing methods on DST. We also indicate the methods on which we make comparison in our experiments.", "Previous approaches mainly focus on encoding of the dialogue context and employ deep neural networks such as CNN, RNN, and LSTM-RNN to independently infer the values of slots in DST (Mrksic et al., 2017; Xu and Hu, 2018; Zhong et al., 2018; Ren et al., 2018; Rastogi et al., 2017; Ramadan et al., 2018; Wu et al., 2019; Zhang et al., 2019; Heck et al., 2020). The approaches", "1 The code is available at https://github.com/ sweetalyssum/Seq2Seq-DU .", "cannot deal with unseen schemas in new domains, however. To cope with the problem, a new direction called schema-guided dialogue is proposed recently, which assumes that natural language descriptions of schemas are provided and can be used to help transfer knowledge across domains. As such, a number of methods are developed in the recent dialogue competition SGD (Rastogi et al., 2019; Zang et al., 2020; Noroozi et al., 2020; Chen et al., 2020a). Our work is partially motivated by the SGD initiative. 
Our model Seq2Seq-DU is unique in that it formalizes schema-guided DST as a sequence-to-sequence problem using BERT and pointer generation.", "In fact, sequence-to-sequence models are also utilized in DST. Sequicity (Lei et al., 2018) is a two-step sequence to sequence model which first encodes the dialogue history and generates a belief span, and then generates a language response from the belief span. COMER (Ren et al., 2019) and CREDIT (Chen et al., 2020b) are hierarchical sequence-to-sequence models which represent the intents and slot-value pairs in a hierarchical way, and employ a multi-stage decoder. SimpleTOD (Hosseini-Asl et al., 2020) is a unified approach to task-oriented dialogue which employs", "a single and causal language model to perform sequence prediction in DST, Policy, and NLG. Our proposed approach also uses a sequence-to-sequence model. There are significant differences between our model Seq2Seq-DU and the existing models. First, there is no hierarchy in decoding of Seq2Seq-DU. A flat structure on top of BERT appears to be sufficient for jointly capturing the intents, slots, and values. Second, the decoder in Seq2Seq-DU generates pointers instead of tokens, and thus can easily and effectively handle categorical slots, non-categorical slots, as well as unseen schemas.", "Traditionally the problem of NLU is decomposed into two independent issues, namely classification of intents and sequence labeling of slot-value pairs (Liu and Lane, 2016; Hakkani-Tur et al., 2016). For example, deep neural network combined with conditional random field is employed for the task (Yao et al., 2014). Recently the pre-trained language model BERT (Chen et al., 2019) is exploited to further enhance the accuracy. 
Methods are also proposed which can jointly train and utilize classification and sequence labeling models (Chen", "et al., 2019; Goo et al., 2018).", "In this paper, we view NLU as special case of DST and employ our model Seq2Seq-DU to perform NLU.", "Seq2Seq-DU can degenerate to a BERT based NLU model.", "Our approach Seq2Seq-DU formalizes dialogue state tracking as a sequence to sequence problem using BERT and pointer generation.", "As shown in Figure 2, Seq2Seq-DU consists of an utterance encoder, a schema encoder, an utterance schema attender, and a state decoder.", "In each turn of dialogue, the utterance encoder transforms the current user utterance and the previous utterances in the dialogue into a sequence of utterance embeddings using BERT; the schema encoder transforms the schema descriptions into a set of schema embeddings also using BERT; the utterance schema attender calculates attentions between the utterance embeddings and the schema embeddings to create attended utterance and schema representations; finally, the state decoder sequentially generates a state representation on the basis of the attended representations using LSTM and pointer generation.", "The utterance encoder takes the current user utterance as well as the previous utterances (user and system utterances) in the dialogue (a sequence of tokens) as input and employs BERT to construct a sequence of utterance embeddings.", "The relations between the current utterance and the previous utterances are captured by the encoder.", "first token x 1 is [CLS], followed by the tokens of the current user utterance and the tokens of the previous utterances, separated by [SEP].", "The output is a sequence of embeddings also with length N , denoted as D = ( d 1 , ..., d N ) and referred to as utterance embeddings, with one embedding for each token.", "The schema encoder takes the descriptions of intents, slots, and categorical slot values (a set of combined sequences of tokens) as input and employs BERT 
to construct a set of schema embeddings.", "Suppose that there are I intents, S slots, and V categorical slot values in the schemas.", "Each schema element is described by two descriptions as outlined in Table 2.", "The input is a set of combined sequences of tokens, denoted as Y = {y_1, ..., y_M}.", "Note that M = I + S + V.", "Each combined sequence starts with [CLS], followed by the tokens of the two descriptions with [SEP] as a separator.", "The final representation of [CLS] is used as the embedding of the input intent, slot, or slot value.", "The output is a set of embeddings, and all the embeddings are called schema embeddings E = {e_1, ..., e_M}.", "The schema encoder in fact adopts the same approach of schema encoding as in (Rastogi et al., 2019).", "There are two advantages with the approach.", "First, the encoder can be trained across different domains.", "Schema descriptions in different domains can be utilized together.", "Second, once the encoder is fine-tuned, it can be used to process unseen schemas with new intents, slots, and slot values.", "The utterance-schema attender takes the sequence of utterance embeddings and the set of schema embeddings as input and calculates schema-attended utterance representations and utterance-attended schema representations.", "In this way, information from the utterances and information from the schemas are fused.", "First, the attender constructs an attention matrix, indicating the similarities between utterance embeddings and schema embeddings.", "Given the i-th utterance token embedding d_i and the j-th schema embedding e_j, it calculates the similarity as follows: A(i, j) = r^T tanh(W_1 d_i + W_2 e_j), (1) where r, W_1, W_2 are trainable parameters.", "The attender then normalizes each row of matrix A as a probability distribution, to obtain matrix Ā.", "Each row represents the attention weights of schema elements with respect to an utterance token.", "Then the schema-attended utterance
representations are calculated as D^a = E Ā^T.", "The attender also normalizes each column of matrix A as a probability distribution, to obtain matrix Ã.", "Each column represents the attention weights of utterance tokens with respect to a schema element.", "Then the utterance-attended schema representations are calculated as E^a = D Ã.", "The state decoder sequentially generates a state representation (semantic frame) for the current turn, which is represented as a sequence of pointers to elements of the schemas and tokens of the utterances (cf., Figure 1).", "The sequence can then be either re-formalized as a semantic frame in dialogue state tracking, [ intent ; ( slot 1 , value 1 ); ( slot 2 , value 2 ); ... ], (Footnote: For simplicity, we assume here that there is only one semantic frame in each turn.", "In principle, there can be multiple frames.)", "or a sequence of labels in NLU (intent-labeling and slot-filling).", "The pointers point to the elements of intents, slots, and slot values in the schema descriptions (categorical slot values), as well as the tokens in the utterances (non-categorical slot values).", "The elements in the schemas can be either words or phrases, and the tokens in the utterances form spans for extraction of slot values.", "The state decoder is an LSTM using pointer (Vinyals et al., 2015) and attention (Bahdanau et al., 2015).", "It takes the two representations D^a and E^a as input.", "At each decode step t, the decoder receives the embedding of the previous item w_{t-1}, the utterance context vector u_t, the schema context vector s_t, and the previous hidden state h_{t-1}, and produces the current hidden state h_t: h_t = LSTM(w_{t-1}, h_{t-1}, u_t, s_t).", "The decoder then generates a pointer from the set of pointers in the schema elements and the tokens of the utterances on the basis of the hidden state h_t.", "Specifically, it generates a pointer of item w according to the following distribution:
z_w = q^T tanh(U_1 h_t + U_2 k_w), (5) P(#w) = softmax(z_w), (6) where #w is the pointer of item w, k_w is the representation of item w either in the utterance representations D^a or in the schema representations E^a, q, U_1, and U_2 are trainable parameters, and softmax is calculated over all possible pointers.", "The training of Seq2Seq-DU follows the standard procedure of sequence-to-sequence.", "The only difference is that it is always conditioned on the schema descriptions.", "Each instance in training consists of the current utterance and the previous utterances, and the state representation (sequence of pointers) for the current turn.", "[Table 3 header: Characteristics vs. SGD, MultiWOZ2.2, MultiWOZ2.1, WOZ2.0, DSTC2, M2M, ATIS, SNIPS]", "Two pre-trained BERT models are used for representations of utterances and schema descriptions respectively.", "The BERT models are then fine-tuned in the training process.", "Cross-entropy loss is utilized to measure the loss of generating a sequence.", "We conduct experiments using the benchmark datasets on task-oriented dialogue.", "SGD (Rastogi et al., 2019) and MultiWOZ2.2 (Zang et al., 2020) are datasets for DST; they include schemas with categorical slots and non-categorical slots in multiple domains and natural language descriptions on the schemas, as shown in Table 2.", "In particular, SGD includes unseen schemas in the test set.", "MultiWOZ2.1 (Eric et al., 2020) is the previous version of MultiWOZ2.2, which only has categorical slots in multiple domains.", "WOZ2.0 (Wen et al., 2017) and DSTC2 (Henderson et al., 2014) are datasets for DST; they contain schemas with only categorical slots in a single domain.", "M2M (Shah et al., 2018) is a dataset for DST and it has span annotations for slot values in multiple domains.", "ATIS (Tur et al., 2010) and SNIPS (Coucke et al., 2018) are datasets for NLU in single-turn dialogues in a single domain.", "Table 3 gives the statistics of datasets in the experiments.", "SGD,
MultiWOZ2.2 and MultiWOZ2.1 : We compare Seq2Seq-DU with six state-of-the-art methods on SGD, MultiWOZ2.2 and MultiWOZ2.1, which utilize schema descriptions, span-based and candidate-based methods, a unified seq2seq model, and BERT: FastSGT (Noroozi et al., 2020), SGD-baseline (Rastogi et al., 2019), TripPy (Heck et al., 2020), SimpleTOD (Hosseini-Asl et al., 2020), TRADE (Wu et al., 2019), and DS-DST (Zhang et al., 2019).", "WOZ2.0 and DSTC2 : Our approach is compared against the state-of-the-art methods on WOZ2.0 and DSTC2, including those using a hierarchical seq2seq model and BERT: COMER (Ren et al., 2019), BERT-DST (Chao and Lane, 2019), StateNet (Ren et al., 2018), GLAD (Zhong et al., 2018), Belief Tracking (Ramadan et al., 2018), and Neural Belief Tracker (Mrksic et al., 2017).", "M2M : We evaluate our approach and the state-of-the-art methods on M2M, which respectively employ a BERT-based architecture and a jointly trained language understanding model: BERT-DST (Chao and Lane, 2019) and DST+LU (Rastogi et al., 2018).", "ATIS and SNIPS : We compare our approach with the state-of-the-art methods on ATIS and SNIPS for NLU within the sequence labeling framework, including Joint BERT (Chen et al., 2019), Slot-Gated (Goo et al., 2018),", "Atten.-BiRNN (Liu and Lane, 2016), and RNN-LSTM (Hakkani-Tur et al., 2016).", "We also include two variants of Seq2Seq-DU.", "The differences lie in whether the schema descriptions are used, and in the formation of the dialogue state.", "Seq2Seq-DU-w/oSchema : It is used for datasets that do not have schema descriptions.", "It only contains the utterance encoder and state decoder.", "Seq2Seq-DU-SeqLabel : It is used for NLU in a single-turn dialogue.", "It views the problem as sequence labeling, and only contains the utterance encoder and state decoder.", "We make use of the following metrics in evaluation.", "Intent Accuracy : percentage of turns in a dialogue for which the intent is correctly identified.", "Joint Goal Accuracy : 
percentage of turns for which all the slots are correctly identified.", "For non-categorical slots, a fuzzy matching score is used on SGD and exact match is used on the other datasets to keep the numbers comparable with other works.", "Slot F1 : F1 score to evaluate the accuracy of slot sequence labeling.", "We use the pre-trained BERT model (BERT-Base, Uncased), which has 12 hidden layers of 768 units and 12 self-attention heads, to encode utterances and schema descriptions.", "The hidden size of the LSTM decoder is also 768.", "The dropout probability is 0.1.", "We also use beam search for decoding, with a beam size of 5.", "The batch size is set to 8.", "Adam (Kingma and Ba, 2014) is used for optimization with an initial learning rate of 1e-4.", "Hyperparameters are chosen using the validation dataset in all cases.", "Tables 4, 5, 6, and 7 show the results.", "One can see that Seq2Seq-DU performs significantly better than the baselines in DST and performs as well as the baselines in NLU.", "DST is carried out in different settings in SGD, MultiWOZ2.2, MultiWOZ2.1, WOZ2.0, DSTC2, and M2M.", "In all cases, Seq2Seq-DU works significantly better than the baselines.", "The results indicate that Seq2Seq-DU is a general and effective model for DST, which can be applied to multiple settings.", "Specifically, Seq2Seq-DU can leverage the schema descriptions for DST when they are available (SGD, MultiWOZ2.2, and MultiWOZ2.1) 3 .", "It can work well in zero-shot learning to deal with unseen schemas (SGD).", "It can also effectively handle categorical slots (MultiWOZ2.1, WOZ2.0 and DSTC2) and non-categorical slots (M2M).", "It appears that the success of Seq2Seq-DU is due to its suitable architecture design with a sequence-to-sequence framework, BERT-based encoders, an utterance-schema attender, and a pointer generation decoder.", "NLU is formalized as sequence labeling in ATIS and SNIPS.", "Seq2Seq-DU degenerates to Seq2Seq-DU-SeqLabel, which is equivalent to the 
Joint BERT baseline.", "The results suggest that it is", "(Footnote 3: There are better-performing systems in the SGD competition;", "those systems are not based on single methods and thus are not directly comparable with our method.)", "the case.", "Specifically, the performances of Seq2Seq-DU are comparable with Joint BERT, indicating that Seq2Seq-DU can also be employed in NLU.", "We also conduct an ablation study on Seq2Seq-DU.", "We validate the effects of three factors: the BERT-based encoders, the utterance-schema attention, and the pointer generation decoder.", "The results indicate that all the components of Seq2Seq-DU are indispensable.", "To investigate the effectiveness of using BERT in the utterance encoder and schema encoder, we replace BERT with a bidirectional LSTM and run the model on SGD and MultiWOZ2.2.", "As shown in Figure 3, the performance of the BiLSTM-based model Seq2Seq-DU-w/oBert in terms of Joint GA and Int.", "Acc decreases significantly compared with Seq2Seq-DU.", "This indicates that the BERT-based encoders can create and utilize more accurate representations for dialogue understanding.", "To investigate the effectiveness of using attention, we compare Seq2Seq-DU with Seq2Seq-DU-w/oAttention, which eliminates the attention mechanism, Seq2Seq-DU-w/SchemaAtt, which only contains the utterance-attended schema representations, and Seq2Seq-DU-w/UtteranceAtt, which only contains the schema-attended utterance representations.", "Figure 3 shows the results on SGD and MultiWOZ2.2 in terms of Joint GA and Int.", "Acc.", "One can observe that without attention the performances deteriorate considerably.", "In addition, the performances of the unidirectional attentions are inferior to the performance of bidirectional attention.", "Thus, utilization of bidirectional attention between utterances and schema descriptions is desirable.", "To investigate the effectiveness of the pointer generation mechanism, we directly generate words from the vocabulary instead of generating pointers in the 
decoding process.", "Figure 3 also shows the results of Seq2Seq-DU-w/oPointer on SGD and MultiWOZ2.2 in terms of Joint GA and Int.", "Acc.", "From the results we can see that pointer generation is crucial for coping with unseen schemas.", "In SGD, which contains a large number of unseen schemas in the test set, there is significant performance degradation without pointer generation.", "The results on MultiWOZ2.2, which does not have unseen schemas in the test set, show that pointer generation can also bring significant improvement on", "already seen schemas by making full use of schema descriptions. (Figure 3: Ablation study results of Seq2Seq-DU with respect to BERT, attention, and pointer generation on SGD and MultiWOZ2.2.)", "We conduct a qualitative analysis of the results of Seq2Seq-DU and SGD-baseline on SGD and MultiWOZ2.2.", "We find that Seq2Seq-DU can infer dialogue states more accurately by leveraging the relations existing in the utterances and schema descriptions.", "For example, in the first case in Table 8, the user wants to find a cheap guesthouse.", "Seq2Seq-DU can correctly infer that the hotel type is guesthouse by referring to the relation between hotel-pricerange and hotel-type.", "In the second case, the user wants to rent a room with in-unit laundry.", "In the dataset, a user who intends to rent a room will care more about the laundry property.", "Seq2Seq-DU can effectively extract the relation between intent and in-unit-laundry, yielding a correct result.", "In contrast, SGD-baseline does not model the relations in the schemas, and thus it cannot properly infer the values of hotel-type and in-unit-laundry.", "We analyze the zero-shot learning ability of Seq2Seq-DU.", "Table 9 presents the accuracies of Seq2Seq-DU in different domains on SGD.", "(Note that only SGD has unseen schemas in the test set.)", "We observe that the best performances can be obtained
", "in the domains with all seen schemas.", "The domains that have more partially seen schemas achieve higher accuracies, such as Hotels, Movies, and Services.", "The accuracies decline in the domains with more unseen schemas, such as Messaging and RentalCars.", "We conclude that Seq2Seq-DU can perform zero-shot learning across domains.", "However, the ability still needs enhancement.", "Table 10 shows the accuracies of Seq2Seq-DU and the baselines with respect to categorical and non-categorical slots on SGD and MultiWOZ2.2.", "(We did not compare with FastSGT on the SGD dataset due to unavailability of the code.)", "One can see that Seq2Seq-DU can effectively deal with both categorical and non-categorical slots.", "Furthermore, Seq2Seq-DU demonstrates higher accuracies on categorical slots than non-categorical slots.", "We conjecture that this is due to the co-occurrences of categorical slot values in both the dialogue history and the schema descriptions.", "The utterance-schema attention can more easily capture the relations between the values.", "We have proposed a new approach to dialogue state tracking.", "The approach, referred to as Seq2Seq-DU, takes dialogue state tracking (DST) as a problem of transforming all the utterances in a dialogue into semantic frames (state representations) on the basis of schema descriptions.", "Seq2Seq-DU is unique in that, within the sequence-to-sequence framework, it employs BERT to encode the utterances and schema descriptions respectively and generates pointers when decoding the dialogue state.", "Seq2Seq-DU is a global, representable, and scalable model for DST as well as NLU (natural language understanding).", "Experimental results show that Seq2Seq-DU significantly outperforms the state-of-the-art methods in DST on the benchmark datasets of SGD, MultiWOZ2.2, MultiWOZ2.1, WOZ2.0, DSTC2, M2M, and performs as well as the state-of-the-art in NLU on the benchmark datasets of ATIS and 
SNIPS." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "other", "other", "abstain", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain" ]
[ "In this paper, we propose a new task of machine translation (MT), which is based on no parallel sentences but can refer to a ground-truth bilingual dictionary.", "Motivated by the ability of a monolingual speaker learning to translate via looking up the bilingual dictionary, we propose the task to see how much potential an MT system can attain using the bilingual dictionary and large scale monolingual corpora, while is independent on parallel sentences.", "We propose anchored training (AT) to tackle the task.", "AT uses the bilingual dictionary to establish anchoring points for closing the gap between source language and target language.", "Experiments on various language pairs show that our approaches are significantly better than various baselines, including dictionary-based word-by-word translation, dictionary-supervised cross-lingual word embedding transformation, and unsupervised MT. On distant language pairs that are hard for unsupervised MT to perform well, AT performs remarkably better, achieving performances comparable to supervised SMT trained on more than 4M parallel sentences 1 .", "Motivated by a monolingual speaker acquiring translation ability by referring to a bilingual dictionary, we propose a novel MT task that no parallel sentences are available, while a ground-truth bilingual dictionary and large-scale monolingual corpora can be utilized.", "This task departs from unsupervised MT task that no parallel resources, including the ground-truth bilingual dictionary, are allowed to utilize (Artetxe et al., 2018c; Lample et al., 2018b).", "This task is also distinct to Corresponding Author.", "supervised/semi-supervised MT task that mainly depends on parallel sentences (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017; Chen et al., 2018; Sennrich et al., 2016a).", "The bilingual dictionary is often utilized as a seed in bilingual lexicon induction (BLI) that aims to induce more word pairs within the language pair (Mikolov et al., 
2013).", "Another utilization of the bilingual dictionary is for translating low-frequency words in supervised NMT (Arthur et al., 2016; Zhang and Zong, 2016).", "We are the first to utilize the bilingual dictionary and the large scale monolingual corpora to see how much potential an MT system can achieve without using parallel sentences.", "This is different from using artificial bilingual dictionaries generated by unsupervised BLI for initializing an unsupervised MT system (Artetxe et al., 2018c,b; Lample et al., 2018a), we use the ground-truth bilingual dictionary and apply it throughout the training process.", "We propose Anchored Training (AT) to tackle this task.", "Since word representations are learned over monolingual corpora without any parallel sentence supervision, the representation distances between source language and target language are often quite large, leading to significant translation difficulty.", "As one solution, AT selects words covered by the bilingual dictionary as anchoring points to drive the distance between the source language space and the target language space closer so that translation between the two languages becomes easier.", "Furthermore, we propose Bi-view AT that places anchors based on either source language view or target language view, and combines both views to enhance the translation quality.", "Experiments on various language pairs show that AT performs significantly better than various baselines, including word-by-word translation through looking up the dictionary, unsupervised MT, and dictionary-supervised cross-lingual word embedding transformation to make distances between both languages closer.", "Bi-view AT further improves AT performance due to mutual strengthening of both views of the monolingual data.", "When combined with cross-lingual pretraining (Lample and Conneau, 2019), Bi-view AT achieves performances comparable to traditional SMT systems trained on more than 4M parallel sentences.", "The main 
contributions of this paper are as follows: A novel MT task is proposed which can only use the ground-truth bilingual dictionary and monolingual corpora, while being independent of parallel sentences.", "AT is proposed as a solution to the task.", "AT uses the bilingual dictionary to place anchors that can encourage the monolingual spaces of both languages to become closer so that translation becomes easier.", "The detailed evaluation on various language pairs shows that AT, especially Bi-view AT, performs significantly better than various methods, including word-by-word translation, unsupervised MT, and cross-lingual embedding transformation.", "On distant language pairs where unsupervised MT struggles to be effective, AT and Bi-view AT perform remarkably better.", "The bilingual dictionaries used in previous works are mainly for bilingual lexicon induction (BLI), which independently learns the embedding in each language using monolingual corpora, and then learns a transformation from one embedding space to another by minimizing the squared Euclidean distances between all word pairs in the dictionary (Mikolov et al., 2013; Artetxe et al., 2016).", "Later efforts for BLI include optimizing the transformation further through new training objectives, constraints, or normalizations (Xing et al., 2015; Lazaridou et al., 2015; Zhang et al., 2016; Artetxe et al., 2016; Smith et al., 2017; Faruqui and Dyer, 2014; Lu et al., 2015).", "Besides, the bilingual dictionary is also used for supervised NMT, which requires large-scale parallel sentences (Arthur et al., 2016; Zhang and Zong, 2016).", "To our knowledge, we are the first to use the bilingual dictionary for MT without using any parallel sentences.", "et al., 2018b; Yang et al., 2018; Sun et al., 2019), which does not use parallel sentences either.", "The difference is that UNMT may use the artificial dictionary generated by unsupervised BLI for initialization (Artetxe et al., 2018c; Lample et al., 2018a) or abandon the artificial 
dictionary by using joint BPE so that multiple BPE units can be shared by both languages (Lample et al., 2018b).", "We use the ground-truth dictionary instead and apply it throughout a novel training process.", "UNMT works well on close language pairs such as English-French, while performing remarkably badly on distant language pairs, for which aligning the embeddings of the two languages is quite challenging.", "We use the ground-truth dictionary to alleviate this problem, and experiments on distant language pairs show the necessity of using the bilingual dictionary.", "Other utilizations of the bilingual dictionary for tasks beyond MT include cross-lingual dependency parsing (Xiao and Guo, 2014), unsupervised cross-lingual part-of-speech tagging and semi-supervised cross-lingual super sense tagging (Gouws and Søgaard, 2015), multilingual word embedding training (Ammar et al., 2016; Duong et al., 2016), and transfer learning for low-resource language modeling (Cohn et al., 2017).", "There are multiple freely available bilingual dictionaries such as the Muse dictionary (https://github.com/facebookresearch/MUSE; Conneau et al., 2018), Wiktionary (https://en.wiktionary.org/wiki/Wiktionary:Main_Page), and PanLex (https://panlex.org/).", "We adopt the Muse dictionary, which contains 110 large-scale ground-truth bilingual dictionaries.", "We propose to inject the bilingual dictionary into MT training by placing anchoring points on the large-scale monolingual corpora to drive the semantic spaces of both languages closer so that MT training without parallel sentences becomes easier.", "We present the proposed Anchored Training (AT) and Bi-view AT in the following.", "Since word embeddings are trained on monolingual corpora independently, the embedding spaces of both languages are quite different, leading to significant translation difficulty.", "AT forces the words of a translation pair to share the same word embedding as an anchor.", "We place multiple anchors by 
Figure 1: Illustration of", "selecting words covered by the bilingual dictionary.", "With stable anchors, the embedding spaces of both languages become more and more close during the AT process.", "As illustrated in Figure 1", "(a), given the source sentence s 1 s 2 s 3 s 4 with words of s 2 and s 3 being covered by the bilingual dictionary, we replace the two words with their translation words according to the dictionary.", "This results in the source sentence s 1 s 2 (cid:46)t s 3 (cid:46)t s 4 , of which s 2 (cid:46)t and s 3 (cid:46)t serve as the anchors which are actually the target language words obtained by translating s 2 and s 3 according to the dictionary, respectively.", "Through the anchors, some words on the source side share the same word embeddings with the corresponding words on the target side.", "The AT process will strengthen the consistency of embedding spaces of both languages based on these anchors.", "The training process illustrated in Figure 1", "(a) consists of a mutual back-translation procedure.", "The anchored source sentence s 1 s 2 (cid:46)t s 3 (cid:46)t s 4 is translated into target sentence t 1 (cid:48) t 2 (cid:48) t 3 (cid:48) by using source-to-target decoding , then t 1 (cid:48) t 2 (cid:48) t 3 (cid:48) and s 1 s 2 (cid:46)t s 3 (cid:46)t s 4 constitute a sentence pair for training the target-to-source translation model.", "In contrast, the target sentence t 1 t 2 t 3 t 4 t 5 is translated into anchored source sentence s 1 (cid:48) s 2 (cid:48) s 3 (cid:46)t (cid:48) s 4 (cid:48) by using target-to-source decoding, then both sentences constitute a sentence pair for training the source-to-target translation model.", "Note that during training the translation model, the input sentences are always pseudo sentences generated by decoding an MT model, while the output sentences are always true or anchored true sentences.", "Beside this mutual back-translation procedure, a denoising procedure used in unsupervised MT (Lample et al., 
2018b) is also adopted.", "The deletion and permutation noises are added to the source/target sentence, and the translation model is also trained to denoise them into the original source/target sentence.", "During testing, a source sentence is transformed into an anchored sentence at first by looking up the bilingual dictionary.", "Then we use the source-to-target model trained in the AT process to decode the anchored sentence.", "We use Transformer architecture (Vaswani et al., 2017) as our translation model with four stacked layers in both encoder and decoder.", "In the encoder, we force the last three layers shared by both languages, and leave the first layer not shared.", "In the decoder, we force the first three layers shared by both languages, and leave the last layer not shared.", "Such architecture is designed to capture both common and specific characteristics of the two languages in one model for the training.", "AT as illustrated in Figure 1", "(a) actually tries to model the sentences of both languages in the target language view with partial source words replaced with the target words and the full target language sentence.", "Bi-view AT enhances AT by adding another language view.", "Figure 1", "(b) adds the source language view shown in the right part to accompany with the target language view of Figure 1", "(a).", "In particular, the target language sentence t 1 t 2 t 3 t 4 t 5 is in the form of t 1 t 2 t 3 (cid:46)s t 4 t 5 (cid:46)s after looking up the bilingual dictionary.", "Such partial target words replaced with the source words and the full source language sentence s 1 s 2 s 3 s 4 constitute the source language view.", "Based on the target language view shown in the left part and the source language view shown in the right part, we further combine both views through the pseudo sentences denoted by primes in Figure 1", "(b).", "As shown by (cid:57)(cid:57)(cid:75) in Figure 1", "(b), t 1 (cid:48) t 2 (cid:48) t 3 (cid:48) is further 
transformed into t1′ t2▷s′ t3▷s′ by looking up the bilingual dictionary.", "Similarly, s1′ s2′ s3′ is further transformed into s1′ s2′ s3▷t′, as shown by the left arrow (⇐).", "Finally, the solid-line box represents training the source-to-target model on data from both views, and the dashed-line box represents training the target-to-source model on data from both views.", "Bi-view AT starts by training both views in parallel.", "After both views converge, we generate pseudo sentences in both the solid-line box and the dashed-line box, and pair these pseudo sentences (as input) with genuine sentences (as output) to train the corresponding translation model.", "This generation and training process iterates until Bi-view AT converges.", "Through such rich views, the translation models of both directions are mutually strengthened.", "Cross-lingual pretraining has demonstrated effectiveness on tasks such as cross-lingual classification and unsupervised MT (Lample and Conneau, 2019).", "It is conducted over large monolingual corpora by masking random words and training to predict them as a cloze task.", "Instead, we propose ACP to pretrain on data that is obtained by transforming the genuine monolingual corpora of both languages into the anchored version.", "For example, words in the source language corpus that are covered by the bilingual dictionary are replaced with their translation words respectively.", "Such words are anchoring points that can drive the pretraining to close the gap between the source language space and the target language space better than the original pretraining method of Lample and Conneau (2019) does, as evidenced by the experiments in Section 4.5.", "The anchored source language corpus and the genuine target language corpus constitute the target language view for ACP.", "ACP can be conducted in either the source language view or the target 
language view.", "After ACP, each of them is used to initialize the encoder of the corresponding AT system.", "For AT, the pseudo sentence generation step and NMT training step are interleaved.", "Take the target language view AT shown in Figure 1", "(a) for example, we extract anchored source sentences as one batch, and decode them into pseudo target sentences; then we use the same batch to train the NMT model of target-to-anchored source.", "In the meantime, a batch of target sentences are decoded into pseudo anchored source sentences, and then we use the same batch to train the NMT model of anchored source-to-target.", "The above process repeats until AT converges.", "For Bi-view AT, after each mono-view AT converging, we set larger batch for generating pseudo sentences as shown in solid/dashed line boxes in Figure 1", "(b), and train the corresponding NMT model using the same batch.", "For ACP, we follow XLM procedure (Lample and Conneau, 2019), and conduct pretraining on the anchored monolingual corpora concatenated with the genuine corpora of the other language.", "We conduct experiments on English-French, English-Russian, and English-Chinese translation to check the potential of our MT system with only bilingual dictionary and large scale monolingual corpora.", "The English-French task deals with the translation between close-related languages, while the English-Russian and English-Chinese tasks deal with the translation between distant languages that do not share the same alphabets.", "For English-French translation task, we use the monolingual data released by XLM (Lample and Conneau, 2019) 5 .", "For English-Russian translation task, we use the monolingual data identical to Lample et", "al.(2018a), which uses all available sentences for the WMT monolingual News Crawl datasets from years 2007 to 2017.", "For English-Chinese translation task, we extract Chinese sentences from half of the 4.4M parallel sentences from LDC, and extract English sentences from 
the complementary half.", "We use WMT newstest -2013/2014, WMT newstest -2015/2016, and NIST2006/NIST2002 as validation/test sets for English-French, English-Russian, and English-Chinese, respectively.", "For cross-lingual pretraining, we extract raw sentences from Wikipedia dumps, which contain 80M, 60M, 13M, 5.5M monolingual sentences for English, French, Russian, and Chinese, respectively.", "Muse ground-truth bilingual dictionaries are used for our dictionary-related experiments.", "If a word has multiple translations, we select the translation word that appears most frequently in the monolingual corpus.", "Table 1 summarizes the number of word pairs and their coverage on the monolingual corpora on the source side.", "For AT/Bi-view AT without cross-lingual pretraining, we use Transformer with 4 layers, 512 em-bedding/hidden units, and 2048 feed-forward filter size, for fair comparison to UNMT (Lample et al., 2018b).", "For AT/Bi-view AT with ACP, we set Transformer with 6 layers, 1024 embedding/hidden units, and 4096 feed-forward filter size for a fair comparison to XLM (Lample and Conneau, 2019).", "We conduct joint byte-pair encoding (BPE) on the monolingual corpora of both languages with a shared vocabulary of 60k tokens for both English-French and English-Russian tasks, and 40k tokens for English-Chinese task (Sennrich et al., 2016b).", "During training, we set the batch size to 32 and limit the sentence length to 100 BPE tokens.", "We employ the Adam optimizer with lr = 0 .", "0001 , t warm _ up = 4000 and dropout = 0 .", "1 .", "At decoding time, we generate greedily with length penalty = 1 .", "0 .", "4.3 Baselines Word-by-word translation by looking up the ground truth dictionary or the artificial dictionary generated by Conneau et al. 
(2018).", "Unsupervised NMT (UNMT) that does not rely on any parallel resources (Lample et al., 2018b) 6 .", "Besides, cross-lingual pretraining (XLM) based UNMT (Lample and Conneau, 2019) 7 , is also set as a stronger baseline (XLM+UNMT).", "We implement a UNMT initialized by Unsupervised Word Embedding Transformation (UNMT+UWET) as a baseline(Artetxe et al., 2018d).", "The transformation function is learned in an unsupervised way without using any ground-truth bilingual dictionaries (Con-neau et al., 2018) 8 .", "We also implement a UNMT system initialized by Supervised Word Embedding Transformation (UNMT+SWET) as a baseline.", "Instead of UWET used in Artetxe et al. (2018d), we use the ground-truth bilingual dictionary as the supervision signal to train the transformation function for transforming the source word embeddings into the target language space (Conneau et al., 2018).", "After such initialization, the gap between the embedding spaces of both languages is narrowed for easy UNMT training.", "The upper part of Table 2 presents the results of various baselines and our AT approaches.", "AT and Bi-view AT significantly outperform the baselines, and Bi-view AT is consistently better than AT.", "Detailed comparisons are listed as below: Results of Word-by-word Translation 6 https://github.com/facebookresearch/UnsupervisedMT 7 https://github.com/facebookresearch/XLM 8 https://github.com/facebookresearch/MUSE system fr en en fr ru en en ru zh en en zh Without Cross-lingual Pre-training Word-by-word using artificial dictionary 7.76 4.88 3.05 1.60 1.99 1.14 Word-by-word using ground-truth dictionary 7.97 6.61 4.17 2.81 2.68 1.79 UNMT (Lample et al., 2018b) 24.02 25.10 9.09 7.98 1.50 0.45 UNMT+SWET 21.11 21.22 9.79 4.07 19.78 7.84 UNMT+UWET 19.80 21.27 8.79 6.21 15.54 6.62 AT 25.07 26.36 10.20 9.91 19.83 9.18 Bi-view AT 27.11 27.54 12.85 10.64 21.16 11.23 With Cross-lingual Pre-training XLM+UNMT (Lample and Conneau, 2019) 33.28 35.10 17.39 13.29 20.68 11.28 ACP+AT 
33.51 36.15 16.41 15.43 26.80 13.91 ACP+Bi-view AT 34.05 36.56 20.09 17.62 30.12 17.05 Supervised SMT -21.48 14.54 31.86 16.55 Table 2: Experiment results evaluated by BLEU using the multi-bleu script.", "It shows that using the ground-truth dictionary is slightly better than using the artificial one generated by Conneau et al. (2018).", "Both performances are remarkably bad, indicating that simple word-by-word translation is not qualified as an MT method.", "More effective utilization of the bilingual dictionary is needed to improve the translation performance.", "UNMT-related systems generally improves the performance of the word-by-word translation.", "On the close-related language pair of English-French, UNMT is better than UNMT+UWET/SWET.", "This is partly because there are numerous BPE units shared by both English and French, enabling easy establishing the shared word embedding space of both languages.", "In contrast, WET that transforms the source word embedding into the target language space seems not a necessary initialization step since shared BPE units already establish the shared space.", "On distant language pairs, UNMT does not have an advantage over UNMT with WET initialization.", "Especially on English-Chinese, UNMT performs extremely bad, even worse than the word-by-word translation method.", "We argue that this is because the BPE units shared by both languages are so few that UNMT fails to align the language spaces.", "In contrast, using the bilingual dictionary greatly alleviate such problem for distant language pairs.", "UNMT+SWET, which transforms the source word embedding into the target word embedding space supervised by the bilingual dictionary, outperforms UNMT by more than 18 BLEU points on Chinese-to-English and more than 7 BLEU points on English-to-Chinese.", "This indicates the necessity of the bilingual dictionary for translation between distant language pairs.", "Our proposed AT approaches significantly outperform the baselines.", 
"The baselines that use the ground-truth bilingual dictionary, i.e., word-by-word translation using the dictionary and UNMT+SWET, which uses the dictionary to supervise the word embedding transformation, are inferior to our AT approaches.", "The AT approaches consistently improve performance on both the closely related language pair English-French and the distant language pairs English-Russian and English-Chinese.", "Our Bi-view AT achieves the best performance on all language pairs.", "The bottom part of Table 2 reports performances of UNMT with XLM, which conducts cross-lingual pretraining on concatenated non-parallel corpora (Lample and Conneau, 2019), and performances of our AT/Bi-view AT with the anchored cross-lingual pretraining, i.e., ACP.", "The results show that our proposed AT approaches are still superior when equipped with cross-lingual pretraining.", "UNMT obtains a great improvement when combined with XLM, achieving state-of-the-art unsupervised MT performance, better than Unsupervised SMT (Artetxe et al., 2019) and Unsupervised NMT (Lample et al., 2018b), across close and distant language pairs.", "ACP+AT/Bi-view AT performs consistently better than XLM+UNMT.", "Especially on distant language pairs, ACP+Bi-view AT gains improvements of 2.7-9.4 BLEU points over the strong XLM+UNMT.", "This indicates that AT/Bi-view AT with ACP builds closer language spaces via anchored pretraining and anchored training.", "We examine this advantage in the analyses of Section 4.6.", "To assess the ability of our system, which uses only the dictionary and non-parallel corpora, we compare it to supervised SMT trained on over 4M parallel sentences, taken from WMT19 for English-Russian and from LDC for English-Chinese.", "We use Moses as the supervised SMT system, with a 5-gram language model trained on the target language part of the parallel corpora.", "The bottom part of Table 2 shows that ACP+Bi-view AT performs comparably to supervised SMT, and performs even
better on English-to-Russian and English-to-Chinese.", "We analyze the cross-lingual properties of our approaches at both the word level and the sentence level.", "We also compare the performances of the ground-truth dictionary and the artificial dictionary.", "In the end, we vary the size of the bilingual dictionary and report its impact on the AT training.", "As shown in Figure 2, we depict the word embeddings of some sampled words in English-Chinese after our Bi-view AT.", "The dimensions of the embedding vectors are reduced to two using t-SNE and are visualized by the embedding projector tool in TensorFlow (https://projector.tensorflow.org/).", "We first sample English words that are not covered by the dictionary, then search for their nearest Chinese neighbors in the embedding space.", "It shows that the words which constitute a new ground-truth translation pair do appear as neighboring points in the 2-dimensional visualization of Figure 2.", "We continue studying the bilingual word embeddings through a quantitative analysis of the new word pairs, which are detected by searching for bilingual words that are neighbors in the word embedding space, and evaluate them using the ground-truth bilingual dictionary.", "In particular, we split the Muse dictionary of Chinese-to-English into the standard training set and test set as in BLI (Artetxe et al., 2018a).", "The training set is used for the dictionary-based systems, including our AT/Bi-view AT, UNMT+SWET, and Muse, which is a BLI toolkit.", "The test set is used to evaluate these systems by computing the precision of discovered translation words given the source words in the test set.", "The neighborhood is computed by CSLS distance (Conneau et al., 2018).", "Table 3 shows the precision, where precision@k indicates the accuracy of the top-k predicted candidates.", "Muse induces new word pairs in either a supervised or an unsupervised way.", "MuseSupervised is better than MuseUnsupervised since it is supervised by the ground-truth bilingual dictionary.", "Our
AT/Bi-view AT surpasses MuseSupervised by a large margin.", "UNMT+SWET/UWET also obtains good performance through the word embedding transformation.", "Bi-view AT significantly surpasses UNMT+SWET/UWET in precision@5 and precision@10, while being worse in precision@1.", "This indicates that Bi-view AT can produce better n-best translation words that are beneficial for NMT beam decoding to find better translations.", "Through the word level analysis, we can see that AT/Bi-view AT leads to more consistent word embeddings (Table 3, precision of discovered new word pairs: Precision@1 MuseUnsupervised 30.51, MuseSupervised 35.38, UNMT+SWET 48.01, UNMT+UWET 45.85, AT 43.32, Bi-view AT 45.49; Precision@5 55.42, 58.48, 68.05, 67.15, 68.23, 72.02; Precision@10 62.45, 63.18, 72.02, 72.20, 73.83, 76.71).", "We check the sentence level representational invariance across languages for the cross-lingual pretraining methods.", "In detail, following Arivazhagan et al. (2018), we adopt a max-pooling operation to collect the sentence representation from each encoder layer for all Chinese-to-English sentence pairs in the test set.", "Then we calculate the cosine similarity for each sentence pair and average all cosine scores.", "Figure 3 shows the sentence level cosine similarity.", "ACP+Bi-view AT consistently has a higher similarity for parallel sentences than XLM+UNMT on all encoder layers.", "When comparing Bi-view AT and AT, Bi-view AT is better on more encoder layers.", "We can see that in both the word level and sentence level analyses, our AT methods achieve better cross-lingual invariance and significantly reduce the gap between the source language space and the target language space, decreasing the translation difficulty between the two languages.", "Table 4 presents the comparison in English-Chinese.", "The ground-truth dictionary is from the Muse dictionary deposit.", "The artificial dictionary is generated by unsupervised BLI (Conneau et al., 2018).", "We extract the top-n word pairs as the artificial dictionary, where n is the same as the number of entries in the ground-truth dictionary.", "Both dictionaries use AT methods for translation.", "As shown in Table 4, the ground-truth dictionary performs significantly better than the artificial dictionary in both methods and both translation directions.", "We randomly select a portion of the ground-truth bilingual dictionary to study the effect of the dictionary size on the performance.", "Table 5 reports the performances of ACP+AT using a quarter or a half of the zh→en dictionary.", "It shows that, in comparison to the baseline XLM+UNMT that does not use a dictionary, a quarter of the dictionary, consisting of around 3k word pairs, is capable of improving the performance significantly.", "More word pairs in the dictionary lead to better translation results, suggesting that expanding the size of the current Muse dictionary by collecting various dictionaries built by human experts may improve the translation performance further.", "In the literature of unsupervised MT that only uses non-parallel corpora, Unsupervised SMT (USMT) and Unsupervised NMT (UNMT) are", "complementary to each other.", "Combining them (USMT+UNMT) achieves a significant improvement over the individual systems, and performs comparably to XLM+UNMT (Lample et al., 2018b; Artetxe et al., 2019).", "We have set XLM+UNMT as a stronger baseline, and our ACP+AT/Bi-view AT surpasses it significantly.", "By referring to the literature of unsupervised MT, we can opt to combine ACP+AT/Bi-view AT with SMT.", "We leave this as future work.", "In this paper, we explore how much potential an MT system can achieve when using only a bilingual dictionary and large-scale monolingual corpora.", "This task simulates how people acquire translation ability by looking up a dictionary, without depending on parallel sentence examples.", "We propose to tackle the task by injecting the bilingual dictionary into MT
via anchored training that drives both language spaces closer so that translation becomes easier.", "Experiments show that, on both closely related language pairs and distant language pairs, our proposed approach effectively reduces the gap between the source language space and the target language space, leading to significant improvements in translation quality over the MT approaches that do not use the dictionary and over the approaches that use the dictionary to supervise the cross-lingual word embedding transformation.", "The authors would like to thank the anonymous reviewers for their helpful comments.", "This work was supported by the National Natural Science Foundation of China (Grant No. 61525205, 61673289) and the National Key R&D Program of China (Grant No. 2016YFE0132100), and was also partially supported by the joint research project of Alibaba and Soochow University." ]
[ "objective", "objective", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "objective", "other", "other", "objective", "other", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "objective", "abstain", "objective", "objective", "other", "other" ]
[ "Technical logbooks are a challenging and under-explored text type in automated event identification.", "These texts are typically short and written in non-standard yet technical language, posing challenges to off-the-shelf NLP pipelines.", "The granularity of issue types described in these datasets additionally leads to class imbalance, making it challenging for models to accurately predict which issue each logbook entry describes.", "In this paper we focus on the problem of technical issue classification by considering logbook datasets from the automotive, aviation, and facilities maintenance domains.", "We adapt a feedback strategy from computer vision for handling extreme class imbalance, which resamples the training data based on its error in the prediction process.", "Our experiments show, with statistical significance, that this feedback strategy provides the best results for four different neural network models trained across a suite of seven different technical logbook datasets from distinct technical domains.", "The feedback strategy is also generic and could be applied to any learning problem with substantial class imbalances.", "Predictive maintenance techniques are applied to engineering systems to estimate when maintenance should be performed to reduce costs and improve operational efficiency (Carvalho et al., 2019), as well as to mitigate risk and increase safety.", "Maintenance records are an important source of information for predictive maintenance (McArthur et al., 2018).", "These records are often stored in the form of technical logbooks in which each entry contains fields that identify and describe a maintenance issue (Akhbardeh et al., 2020a).", "Being able to classify these technical events is an important step in the development of predictive maintenance systems.", "Maintenance issues are labeled by domain experts (e.g., mechanics) in free text fields.", "This text can then be used to classify or cluster events by semantic similarity.", "Classifying events in technical logbooks is a challenging problem for the NLP community for several reasons:", "(a) the technical logbooks are written by various domain experts and contain short text entries with nonstandard language including domain-specific abbreviated words (see Table 1 for examples), which makes them distinct from other short non-standard text corpora (e.g., social media);", "(b) off-the-shelf NLP tools struggle to perform well on this type of data as they tend to be trained on standard contemporary corpora such as newspaper texts;", "(c) outside of the clinical and biomedical sciences, there is a lack of domain-specific, expert-based datasets for studying expert-based event classification, and in particular few resources are available for technical problem domains; and", "(d) technical logbooks tend to be characterized by a large number of event classes that are highly imbalanced.", "We address the aforementioned challenges with a special focus on exploring strategies to address class imbalance.", "There is wide variation in the number of instances among the technical event classes examined in this work, as shown in Figure 1 and Table 3.", "Figure 1: Number of instances in 39 unbalanced classes of the aviation maintenance (Avi-Main) dataset.", "
This extreme class imbalance is an obstacle when processing logbooks, as it causes most learning algorithms to become biased and mainly predict the large classes (Kim et al., 2019).", "To overcome this issue, we introduce a feedback loop strategy, which is a repurposing of a method used to address extreme class imbalance in computer vision (Bowley et al., 2019), and examine it for the classification of textual technical event descriptions.", "This technique is applied in the training of a suite of common classification models on seven predictive maintenance datasets representing the aviation, automotive, and facility maintenance domains.", "This paper addresses these research questions: RQ1: To what extent do the class granularity and class imbalance present in technical logbooks impact technical event classification performance, and can a feedback loop for training data selection effectively address this issue?", "RQ2: Which classification models are better suited to classify technical events for predictive maintenance across logbook datasets representing different technical domains?", "The main contributions of this work include:", "1. Experimental results showing strong performance of the feedback loop in addressing the class imbalance problem in technical event classification across all datasets and models;", "2. A thorough empirical evaluation of the performance of the technical event classifier considering multiple models and seven logbook datasets from three different domains.", "Most expert-domain datasets containing events have focused on healthcare.", "For instance, Altuncu et al. (2019) analyzed patient incidents in unstructured electronic health records provided by the U.K. National Health Service.", "They evaluated a deep artificial neural network model on the expert-annotated textual dataset of safety incidents to identify similar events that occurred.", "Deleger et al. 
(2010) proposed a method to deal with unstructured clinical records, using rule-based techniques to extract names of medicines and related information such as prescribed dosage.", "Savova et al. (2010) considered free-text electronic medical records for information extraction purposes and developed a system to obtain clinical domain knowledge.", "Patrick and Li (2009) proposed cascade methods for extracting medication records, such as treatment duration or reason, from patients' historical records.", "Their approach for event extraction includes text normalization, tokenization, and context identification.", "A system using multiple features outperformed a baseline method using a bag-of-words model.", "Yetisgen-Yildiz et al. (2013) proposed a lung disease phenotype identification method to avoid the use of a manual identification strategy.", "They employed NLP pipelines including text pre-processing and further text classification on the textual reports to identify the patients with a positive diagnosis for the disease.", "Based on the outcome, they achieve this identification automatically.", "There is also relevant research on event classification in social media.", "For example, Ritter et al. (2012) proposed an open-source event extraction system and supervised tagger for noisy microblogs.", "Cherry and Guo (2015) applied word embedding-based modeling for information extraction on newswire and tweets, comparing named entity taggers to improve their method.", "Hammar et al. (2018) performed experimental work on Instagram text using weakly supervised text classification to extract clothing brands based on user descriptions in posts.", "The problem of class imbalance has been studied in recent years for numerous natural language processing tasks.", "Tayyar Madabushi et al. 
(2019) studied automatic propaganda event detection from a news dataset using a pre-trained BERT model.", "They recognized that the BERT model had issues in generalizing.", "To overcome this issue, they proposed a cost-weighting method.", "Al-Azani and El-Alfy (2017) analyzed polarity measurement in imbalanced tweet datasets utilizing features learned with word embeddings.", "Li and Nenkova (2014) studied the class imbalance problem in the task of discourse relation identification by comparing the accuracy of multiple classifiers.", "They showed that utilizing a unified method and further downsampling the negative instances can significantly enhance the performance of the prediction model on unbalanced binary and multi-class tasks.", "Dealing with unbalanced classes has also been well studied in the sentiment classification task.", "Li et al. (2012) introduced an active learning method that overcomes the problem of class imbalance by choosing significant samples of the minority class for manual annotation and of the majority class for automatic annotation, lowering the amount of human annotation required.", "Furthermore, Damaschk et al. (2019) examined techniques to overcome the problem of high class imbalance in classifying a collection of song lyrics.", "They employed neural network models including a multi-layer perceptron and a Doc2Vec model in their experiments; the finding was that undersampling the majority class can be a reasonable approach to remove data sparsity and further improve classification performance.", "Li et al. (2020) also explored the problem of high data imbalance using cross-entropy criteria as well as standard performance metrics.", "They proposed a loss function called Dice loss that assigns equal importance to false negatives and false positives.", "In computer vision, Bowley et al. 
(2019) developed an automated feedback loop method to identify and classify wildlife species from Unmanned Aerial Systems imagery, training CNNs to overcome the class imbalance issue.", "On their expert imagery dataset, the error rate decreased substantially from 0.88 to 0.05.", "This work adapts this feedback loop strategy to the NLP problem of classifying technical events.", "In this work, we used a set of 7 logbook datasets from the aviation, automotive, and facility domains available at MaintNet (Akhbardeh et al., 2020a).", "MaintNet is a collaborative open-source platform for predictive maintenance language resources featuring multiple technical logbook datasets and tools.", "These datasets include: 1) Avi-Main contains seven years of maintenance logbook reports collected by the University of North Dakota aviation program on aircraft maintenance, as reported by the mechanic or pilot.", "Table 3: Number of instances (Inst), average number of tokens per instance (Avg Toks), number of classes (N Cls), and class size statistics: minimum, median, average, and maximum (Min, Med, Avg, Max) for each dataset: Avi-Main 6,169 / 13.85 / 39 / 21, 56, 158, 1,674; Avi-Acc 4,130 / 14.31 / 5 / 179, 966, 826, 1,595; Avi-Safe 17,718 / 19.52 / 2 / 2,134, 8,859, 8,859, 15,584; Auto-Main 617 / 7.34 / 5 / 23, 48, 123, 268; Auto-Acc 52,707 / 4.59 / 3 / 1,085, 11,060, 17,569, 40,562; Auto-Safe 4,824 / 25.11 / 17 / 86, 213, 284, 678; Faci-Main 74,360 / 31.50 / 70 / 25, 303, 1,062, 10,748.", "2) Avi-Acc contains four years of aviation accident and damage reports.", "3) Avi-Safe contains eleven years of aviation safety and incident reports.", "Accidents were caused by foreign objects/birds during flights, which led to safety inspection and maintenance, where safety crews indicated the damage (safety) level for further analysis.", "4) Auto-Main is a single-year report with maintenance records for cars.", "5) Auto-Acc contains twelve years of car accidents and crash reports describing the
related car maintenance issue and property damaged in the accident.", "6) Auto-Safe contains four years of noted hazards and incidents on the roadway from the driver.", "7) Faci-Main contains six years of logbook reports collected for building maintenance.", "These technical logbooks include short, compact, and descriptive domain-specific English texts; single instances usually contain between 2 and 20 tokens, including abbreviations and domain-specific words.", "An example instance from Table 2, r/h fwd upper baff seal needs to be resecured, shows how the instances for a specific issue class are composed of specific vocabulary (less ambiguity), and therefore contain a high level of granularity (level of description for an event from multiple words) (Mulkar-Mehta et al., 2011).", "Table 3 presents statistics for each dataset, in terms of the number of instances, average instance length, number of classes, and the minimum, median, average, and maximum class size to represent how imbalanced the datasets are.", "A problem description may be complete (e.g., following a safety or maintenance inspection), like: #2 & #4 cyl rocker cover gsk are leaking, or it might contain an incomplete description that solely refers to the damaged part/section of the machinery (hyd cap chck eng light on) using few domain words.", "In either form of the problem description, the given annotation (label) is at the issue-type level, e.g., baffle damage.", "Table 2 shows multiple examples with associated instances.", "Further characteristics of these log entries include compound words (antifreeze, engine-holder, driftangle, dashboard).", "Many of these words (e.g., the compound word dashboard) essentially represent the items or domain-specific parts used in the descriptions.", "Additionally, function words (e.g.
, prepositions) are important, and removing them could alter the meaning of the entry.", "The logbook datasets also have the following shared and distinct characteristics: Shared Characteristics: Each instance contains a descriptive observation of the issue and/or the suggested action that should be taken (eng inspection panel missing screw).", "Each instance also refers to a single maintenance event, which means the recognized problem applies to only a single issue type.", "As an example, the instance cyl #1 baff cracked at screw support & forward baff below #1 includes a combination of sequences that refer to the location and/or specific part of the machinery.", "Distinct Characteristics: In each domain, the terminology, lists of terms, and abbreviations are distinct, and an abbreviation can have different expansions depending on the domain context (Sproat et al., 2001); e.g., a/c can mean aircraft in the aviation domain and air conditioner in the automotive domain.", "However, the abbreviations and acronyms of the domain words (e.g.
atc, air traffic control) in these technical datasets should not be approached as a word sense disambiguation problem, as they require character-level expansion.", "Collecting additional data to augment datasets is a common approach for tackling the problem of skewed class distributions.", "However, as discussed earlier, technical logbooks are proprietary and very hard to obtain.", "In addition, each domain captures domain-specific lexical semantics, preventing the use of techniques such as domain adaptation (Ma et al., 2019) to apply large-class data from one technical domain to another. Algorithm 1 (Feedback Loop pseudocode): function SampleRandom(C, MCS) ▷ gets MCS random instances from each class: A = []; for i = 1 to size(C) do shuffle(C_i); A = A + getFirstN(MCS, C_i); return A. function Resample(C, M, MCS) ▷ gets the MCS instances from each class with the worst error: A = []; for i = 1 to size(C) do calculateError(C_i); sortByError(C_i); A = A + getFirstN(MCS, C_i); return A. Input: training data D = Instance(1, 2, ..., N); feedback loop iterations FLI; epochs per loop iteration FLE; minimum class size MCS. C = SplitByClass(D) ▷ divide training data by class; A = SampleRandom(C, MCS) ▷ get initial active training data A randomly; Model M; for l = 1 to FLI do: M = Train(M, FLE, A) ▷ train the model for the number of epochs per iteration; A = Resample(D, M, MCS) ▷ update the active training data. Output: M.", "For example, instances that describe an engine failure in the aviation domain are distinct from engine failure instances reported in the automotive domain.", "In this paper we apply five different methods for selecting training data for the models to analyze their effects on classification performance: (1) under(down)- and (2) over-sampling, (3) random down-sampling, (4) a feedback loop strategy, and (5) a baseline strategy which simply uses all available data.", "Re-sampling 
Under- and over-sampling are resampling techniques (Maragoudakis et al., 2006) that were used to create balanced class sizes for model training.", "For over-sampling, instances of the minority classes are randomly copied so that all classes have the same number of instances as the largest class.", "For under-sampling, observations are randomly removed from the majority classes, so that all classes have the same number of instances as the smallest class.", "For both approaches, we first divided our datasets into test and training sets before performing over-sampling, to prevent contamination of the test set by having the same observations in both the training and test data.", "Feedback Loop To address class imbalances in text classification, this work adapts the approach of Bowley et al. (2019) from the computer vision domain.", "The goal of this approach is not only to alleviate the bias towards majority classes but also to adjust the training data instances such that the models are always being trained on the instances they were performing the worst on.", "It should be noted that this approach is very similar to adaptive learning strategies, which have been shown to aid in human learning (Kerr, 2015; Midgley, 2014).", "Algorithm 1 presents pseudocode for the feedback loop.", "In this process, the active training data (the data used to actually train the models in each iteration of the loop) is continually resampled from the training data.", "The model is initially trained with an undersampled number of random instances from each class, which becomes the initial active training data.", "The model M then performs inference over the entire training set, and then selects the MCS instances from each class C_i which had the worst error during inference, where MCS is the minority (smallest) class size.", "The model is then retrained with this new active training data, and the process of training, inference, and selection of the MCS worst instances repeats for a fixed 
number of feedback loop iterations, FLI.", "In this way the model is always being trained on the instances it has classified the worst.", "To measure the effect of resampling the worst performing instances, the feedback loop approach was also compared to a random downsampling (DS) loop, where instead of evaluating the model over each instance and selecting the worst performing instances, MCS instances from each class are randomly sampled.", "As performing inference over the entire training set adds overhead, a comparison to the random DS loop method would show whether performing this inference is worth the performance cost over simple random resampling.", "This approach is the same as Algorithm 1 except that SampleRandom is used instead of Resample in the feedback loop.", "Section 4.3 describes how the number of training epochs and loop iterations were determined such that all the training data selection methods are given a fair evaluation with the same amount of computational time.", "Evaluation Metrics For imbalanced datasets, simply using precision, recall, or F1 score metrics over the entire datasets would not accurately reflect how well a model or method performs, as they emphasize the majority classes.", "To overcome this, alternative evaluation metrics to handle the class imbalance problem were used, as recommended by Banerjee et al. (2019).", "Specifically, we report the models' performance based on precision, recall, and F1 score by utilizing a macro-average over all classes, as this gives every class equal weight, and hence reveals how well the models and training data selection strategies perform.", "Different machine learning methods were considered for technical event/issue classification (e.g. 
engine failure, turbine failure).", "Each instance is an individual short logbook entry and contains approximately 2 to 20 tokens (12 words on average per instance including function words), as shown in Table 3. The methods used in this study were a Deep Neural Network (DNN) (Dernoncourt et al., 2017), a Long Short-Term Memory (LSTM) recurrent neural network (RNN) (Suzgun et al., 2019; Pascanu et al., 2013), a Convolutional Neural Network (CNN) (Lin et al., 2018), and BERT (Devlin et al., 2019).", "Deep Neural Network A deep artificial neural network (DNN), as described by Dernoncourt et al. (2017), can learn abstract representations and features of the input instances that help achieve better performance in predicting the issue type in the logbook dataset.", "The DNN used was a 3-layer, fully connected feed-forward neural network with an input embedding layer of dimension 300 and length equal to the number of input words, followed by 2 dense layers with 512 hidden units and ReLU activation functions, followed by a dropout layer.", "Finally, we added a fully connected dense layer with size equal to the number of classes, with a SoftMax activation function.", "Long Short-Term Memory An LSTM RNN was also used to perform sequence-to-label classification.", "As described by Suzgun et al. 
(2019), LSTM RNNs utilize several vector gates at each state to regulate the passing of data through the sequence, which enhances the modeling of long-term dependencies.", "We used a 3-layer LSTM model with a word embedding layer of dimension 300 and length equal to the number of input words, followed by an LSTM layer with the number of hidden units equal to the embedding dimension, followed by a dropout layer.", "Finally, we added a fully connected layer with size equal to the number of classes, with a SoftMax activation function.", "Convolutional Neural Network Convolutional neural networks (CNNs) have demonstrated exceptional success in NLP tasks such as document classification, language modeling, and machine translation (Lin et al., 2018).", "As Xu et al. (2020) described, CNN models can produce consistent performance when applied to various text types, such as short sequences.", "We evaluated a CNN architecture (Shen et al., 2018) with a convolutional layer, followed by batch normalization, ReLU, and a dropout layer, which was followed by a max-pooling layer.", "The model contained 300 convolutional filters of size 1 by the n-gram length, with pooling of size 1 by the length of the input sequence, followed by a concatenation layer, then finally a fully connected dense layer and an output layer with size equal to the number of classes, using a SoftMax activation function.", "Bidirectional Encoder Representations We also evaluated the pre-trained uncased Bidirectional Encoder Representations from Transformers (BERT) model for English (Devlin et al., 2019).", "We fine-tuned the model, and used a WordPiece-based BERT tokenizer for the tokenization process and the RandomSampler and SequentialSampler for training and testing, respectively.", "To better optimize this model, a schedule was created for the learning rate that decayed linearly from the initial learning rate set in the optimizer to 0.", "Datasets and Baselines First, the technical text pre-processing 
pipeline developed by Akhbardeh et al. (2020b) was applied, which comprises domain-specific noise entity removal, dictionary-based standardization, lexical normalization, part-of-speech tagging, and domain-specific lemmatization.", "We divided the datasets by selecting randomly from each class independently, to maintain a similar class-size distribution, using 80% of the instances for training and 20% of the instances for testing.", "For feature extraction, two methods were considered: a bag-of-words model (n-grams: 1) (Pedregosa et al., 2011) and pre-trained 300-dimensional GloVe word embeddings (Pennington et al., 2014).", "Hyperparameter Tuning The coarse-to-fine learning (CFL) approach (Lee et al., 2018) was used to set parameters and hyperparameters for the DNN, LSTM, and CNN models.", "Experiments considered batch sizes of 32, 64, and 128, an initial learning rate ranging from 0.01 to 0.001 with a learning decay rate of 0.9, and dropout regularization in the range from 0.2 to 0.5 in all models, as well as ReLU and SoftMax activation functions (Nair and Hinton, 2010), categorical cross-entropy (Zhang and Sabuncu, 2018) as the loss function, and the Adam optimizer (Kingma and Ba, 2015) in the DNN, LSTM, CNN and BERT models.", "Based on experiments and network training accuracy, a batch size of 64 and dropout regularization of 0.3 were selected for model training.", "Each model with each training data selection strategy was trained 20 times to generate results for each dataset.", "To ensure each training data selection strategy was fairly compared with a similar computational budget, the number of training epochs and loop iterations (if the strategy had a feedback or random downsampling loop) were adjusted so that the total number of training instance evaluations each model performed was the same.", "For each dataset, the number of forward and backward passes, T, for 100 epochs of the baseline strategy was used as the standard.", "As an example, Table 4
shows how many loop iterations, epochs per loop, and inference passes were done for each training data selection strategy on the Auto-Safe dataset.", "Given the differences between the min and max class sizes, it was not possible to get exact matches, but the strategies came as close as possible.", "We counted each inference pass for the feedback loop the same as a forward and backward training pass, which actually was a slight computational disadvantage for the feedback loop, as a forward and backward pass in training takes approximately 1x to 2x the time of an inference pass.", "Table 5 shows a comparison between the baseline and the four different class balancing methods (over-sampling, under-sampling, the random downsampling (DS) loop and the feedback loop).", "Based on these outcomes, the feedback loop strategy almost entirely outperforms the other methods over all datasets and models, showing that performing inference over the training set and reselecting the training data from the worst performing instances does provide a benefit to the learning process.", "Table 4: Details regarding the different training processes using the various methods for handling the unbalanced classes in the automotive safety ( Auto-Safe ) dataset with 17 total classes.
Strategy       | L  | EPL | LTI   | INM   | T
Baseline       | 1  | 100 | 3,859 | 0     | 385,900
Downsampling   | 1  | 329 | 1,173 | 0     | 385,917
Oversampling   | 1  | 42  | 9,214 | 0     | 386,988
Random DS Loop | 33 | 10  | 1,173 | 0     | 387,090
Feedback Loop  | 25 | 10  | 1,173 | 3,859 | 389,725", "A plausible explanation is that this strategy does not introduce bias into the larger classes and also does not affect the minority class size distribution.", "It also does not waste training time on instances the model has already learned well.", "Table 5 also shows the empirical analysis of the four classification models, with the model and training data selection strategy providing the overall best results shown in bold and italics.", "Using technical text pre-processing techniques described in Section 4.3, and the feedback
loop strategy described in Section 4.1, the precision, recall, and F1 score improved compared to the baseline performance.", "The CNN model outperformed the other algorithms with improved precision, recall, and F1 score for almost all datasets, except for Avi-Main , where BERT had similar results, and Auto-Main , where CNN and BERT tied.", "This is interesting given the current popularity of the BERT model; however, it may be due to the substantial lexical, topical, and structural linguistic differences between the technical logbook data and the English corpus that BERT was pre-trained on.", "Furthermore, we conducted the Mann-Whitney U test of statistical significance using the F1 scores of each of the 20 repeated experiments of the classification models, with the baseline and the feedback loop approach as the two different populations.", "The outcomes are shown in Table 6, with the differences being highly statistically significant.", "To investigate the optimal strategies for dealing with these imbalanced technical datasets, we studied various methods for processing the data, extracting features, and classifying the type of event.", "Table 5: Comparison of results for the 7 datasets, for the baseline and four methods to address class imbalance for the four evaluated models (DNN, LSTM, CNN and BERT). Each cell lists Pre / Rec / F1.
Dataset   | Model | Baseline       | DownSampling   | OverSampling   | Random DS Loop | Feedback Loop
Avi-Main  | DNN   | 0.90 0.89 0.89 | 0.67 0.78 0.70 | 0.90 0.90 0.90 | 0.90 0.90 0.90 | 0.93 0.91 0.91
Avi-Main  | LSTM  | 0.84 0.85 0.84 | 0.81 0.83 0.81 | 0.85 0.84 0.84 | 0.84 0.84 0.84 | 0.86 0.88 0.87
Avi-Main  | CNN   | 0.93 0.92 0.92 | 0.89 0.88 0.88 | 0.94 0.92 0.92 | 0.93 0.91 0.91 | 0.95 0.94 0.94
Avi-Main  | BERT  | 0.93 0.93 0.93 | 0.85 0.86 0.85 | 0.94 0.94 0.94 | 0.94 0.93 0.93 | 0.95 0.96 0.95
Avi-Acc   | DNN   | 0.47 0.44 0.43 | 0.35 0.45 0.35 | 0.48 0.47 0.47 | 0.50 0.44 0.46 | 0.52 0.45 0.48
Avi-Acc   | LSTM  | 0.38 0.37 0.37 | 0.35 0.35 0.35 | 0.39 0.39 0.39 | 0.38 0.39 0.38 | 0.40 0.39 0.39
Avi-Acc   | CNN   | 0.50 0.49 0.49 | 0.43 0.42 0.42 | 0.52 0.44 0.47 | 0.51 0.44 0.47 | 0.52 0.46 0.48
Avi-Acc   | BERT  | 0.48 0.42 0.44 | 0.41 0.40 0.40 | 0.50 0.44 0.46 | 0.50 0.44 0.46 | 0.51 0.45 0.47
Avi-Safe  | DNN   | 0.43 0.41 0.41 | 0.36 0.36 0.36 | 0.50 0.50 0.50 | 0.50 0.49 0.49 | 0.53 0.51 0.51
Avi-Safe  | LSTM  | 0.47 0.46 0.46 | 0.43 0.42 0.42 | 0.49 0.50 0.49 | 0.48 0.46 0.47 | 0.49 0.50 0.49
Avi-Safe  | CNN   | 0.59 0.57 0.57 | 0.50 0.50 0.50 | 0.60 0.59 0.59 | 0.59 0.59 0.59 | 0.62 0.61 0.61
Avi-Safe  | BERT  | 0.50 0.50 0.50 | 0.44 0.46 0.44 | 0.54 0.54 0.54 | 0.53 0.53 0.53 | 0.56 0.57 0.56
Auto-Main | DNN   | 0.58 0.45 0.49 | 0.33 0.49 0.39 | 0.60 0.55 0.56 | 0.58 0.54 0.55 | 0.61 0.55 0.57
Auto-Main | LSTM  | 0.49 0.55 0.51 | 0.41 0.42 0.41 | 0.50 0.60 0.54 | 0.51 0.58 0.54 | 0.53 0.61 0.55
Auto-Main | CNN   | 0.61 0.61 0.61 | 0.53 0.53 0.53 | 0.64 0.64 0.64 | 0.63 0.64 0.63 | 0.65 0.64 0.64
Auto-Main | BERT  | 0.60 0.60 0.60 | 0.54 0.53 0.53 | 0.63 0.64 0.63 | 0.63 0.63 0.63 | 0.64 0.64 0.64
Auto-Acc  | DNN   | 0.43 0.34 0.30 | 0.35 0.42 0.27 | 0.39 0.42 0.31 | 0.40 0.39 0.39 | 0.48 0.40 0.40
Auto-Acc  | LSTM  | 0.45 0.39 0.41 | 0.40 0.40 0.40 | 0.42 0.41 0.41 | 0.42 0.40 0.40 | 0.48 0.41 0.44
Auto-Acc  | CNN   | 0.46 0.43 0.44 | 0.44 0.41 0.42 | 0.49 0.50 0.49 | 0.50 0.51 0.50 | 0.51 0.53 0.52
Auto-Acc  | BERT  | 0.50 0.49 0.49 | 0.47 0.47 0.47 | 0.50 0.50 0.50 | 0.51 0.49 0.50 | 0.52 0.51 0.51
Auto-Safe | DNN   | 0.52 0.46 0.48 | 0.40 0.47 0.41 | 0.54 0.51 0.51 | 0.54 0.51 0.51 | 0.55 0.52 0.53
Auto-Safe | LSTM  | 0.40 0.40 0.40 | 0.38 0.39 0.38 | 0.41 0.42 0.41 | 0.41 0.41 0.41 | 0.43 0.42 0.42
Auto-Safe | CNN   | 0.59 0.58 0.58 | 0.52 0.51 0.51 | 0.59 0.60 0.59 | 0.59 0.59 0.59 | 0.62 0.60 0.61
Auto-Safe | BERT  | 0.57 0.56 0.56 | 0.52 0.50 0.50 | 0.58 0.56 0.56 | 0.57 0.57 0.57 | 0.58 0.59 0.59
Faci-Main | DNN   | 0.57 0.48 0.50 | 0.33 0.40 0.34 | 0.56 0.48 0.50 | 0.57 0.50 0.53 | 0.59 0.51 0.54
Faci-Main | LSTM  | 0.56 0.56 0.56 | 0.53 0.52 0.52 | 0.59 0.55 0.56 | 0.59 0.56 0.57 | 0.63 0.56 0.60
Faci-Main | CNN   | 0.64 0.64 0.64 | 0.61 0.60 0.60 | 0.66 0.66 0.66 | 0.65 0.65 0.65 | 0.69 0.67 0.68
Faci-Main | BERT  | 0.63 0.64 0.63 | 0.60 0.60 0.60 | 0.65 0.64 0.64 | 0.64 0.65 0.64 | 0.68 0.67 0.67", "Regarding the discussion provided in Section 3 about the nature of such a dataset, there are key challenges that affect the performance of the employed algorithms.", "As discussed in Section 1, the extreme class imbalance
observed in these technical datasets substantially affects learning algorithms' performance.", "To overcome this issue, we first explored oversampling and undersampling, which both result in balanced class sizes.", "Undersampling removed portions of the dataset that could be important for certain technical events or issues, which resulted in underfitting and weak generalization for important classes.", "On the other hand, oversampling may introduce overfitting on the minority classes, as some of the event types are very short instances containing domain-specific words.", "Following this, to reduce the possibility of overfitting and underfitting, a random downsampling loop and a feedback loop were investigated to minimize bias in the training process.", "It was found that the added computational cost of the feedback loop's inference was worth the reduction in training time it provided over the random downsampling loop.", "The scarce data available in a dataset such as Auto-Main is certainly an issue for deep learning methods.", "Extending the accuracy improvement obtained by the proposed feedback loop strategy would require incorporating more instances into the event classes.", "As with any supervised learning model, we noticed some limitations that could be addressed in future work.", "As shown in the previous sections (such as Table 2), logbook instances contain short text (ranging from 2 to 20 tokens per instance), and utilizing recurrent deep learning algorithms such as LSTM RNNs, which rely heavily on context, leads to weak performance compared to other algorithms.", "One possible explanation is that logbooks with short instances (sequences) do not provide sufficient context for the algorithm to make better predictions.", "Another could be that RNNs are notoriously difficult to train (Pascanu et al., 2013), and the LSTM models may simply require more training time to achieve similar results.", "There is some evidence for this, as the dataset with the most instances,
which also had the second largest number of tokens per instance on average, was Faci-Main ; this is the dataset on which the LSTM model came closest to the CNN and BERT models, and also the only one on which the LSTM model outperformed the DNN model.", "The pre-trained BERT model provided reasonable classification performance compared to the other deep learning models; however, as BERT is pre-trained on standard language, its performance when applied to logbook data was not optimal.", "Training or fine-tuning BERT on technical logbook data is likely to improve performance, as observed in the legal and scientific domains (Chalkidis et al., 2020; Beltagy et al., 2019).", "As training or fine-tuning BERT requires large amounts of data, a limitation for fine-tuning a domain-specific BERT is the amount of logbook data available.", "This work focused on predictive maintenance and technical event/issue classification, with a special focus on addressing class imbalance.", "We acquired seven logbook datasets from three technical domains containing short instances with non-standard grammar and spelling, and many abbreviations.", "To address RQ1 , we evaluated multiple strategies to address the extreme class imbalance in these datasets and we showed that the feedback loop strategy performs best, almost entirely providing the best results for the 7 different datasets and 4 different models investigated.", "To address RQ2 , we empirically compared different classification algorithms (DNN, LSTM, CNN, and pre-trained BERT).", "Results show that the CNN model outperforms the other classifiers.", "Table 6: Statistical significance of the various classification models between the Baseline approach and Feedback Loop approach F1 scores, using the Mann-Whitney U test.
Dataset   | DNN    | LSTM   | CNN    | BERT
Avi-Main  | 0.0020 | 0.0043 | 0.0002 | 0.0004
Avi-Acc   | 0.0011 | 0.0399 | 0.0103 | 0.0015
Avi-Safe  | 0.0000 | 0.0023 | 0.0059 | 0.0012
Auto-Main | 0.0001 | 0.0181 | 0.0009 | 0.0004
Auto-Acc  | 0.0000 | 0.0055 | 0.0001 | 0.0161
Auto-Safe | 0.0003 | 0.0106 | 0.0011 | 0.0083
Faci-Main | 0.0002 | 0.0001 | 0.0003 | 0.0005", "The methodology presented in this paper could be applied to other maintenance corpora from a variety of technical domains.", "The feedback loop approach for selecting training data is generic and could easily be applied to any learning problem with substantial class imbalances.", "This is useful, as extreme class imbalance is a challenge at the heart of a number of natural language tasks.", "In future work, we would like to fine-tune BERT using logbook data, as described in Section 6, and extend this work to datasets in other languages.", "The biggest challenge for these two research directions is the limited availability of logbook datasets.", "Furthermore, we are exploring various methods of domain adaptation and transfer learning on these datasets to further improve the performance of classification models.", "We would like to thank the University of North Dakota aviation program for providing the valuable aviation maintenance logbook datasets to the MaintNet research.", "We further thank the aviation domain expert Zechariah Morgan for evaluating the outcomes of the various algorithms and providing valuable feedback for the aviation domain dataset.", "We also would like to thank the anonymous ACL reviewers for providing us with helpful comments and feedback." ]
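The per-class random 80/20 split described in the paper above (selecting randomly from each class independently so that train and test keep a similar class-size distribution) can be sketched as follows. This is an illustrative stdlib-only helper, not the authors' code; the name `stratified_split` and the "at least one test instance per class" guard are our own assumptions.

```python
import random

def stratified_split(labels, test_frac=0.2, seed=0):
    """Split instance indices into train/test by sampling randomly
    within each class independently, so both splits keep a similar
    class-size distribution (80/20 by default, as in the paper)."""
    rng = random.Random(seed)
    by_class = {}
    for i, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(i)
    train_idx, test_idx = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # Assumption: keep at least one test instance per class,
        # even for very small classes.
        n_test = max(1, round(len(idxs) * test_frac))
        test_idx.extend(idxs[:n_test])
        train_idx.extend(idxs[n_test:])
    return train_idx, test_idx
```

In practice the same effect is obtained with scikit-learn's `train_test_split(..., stratify=labels)`; the sketch only makes the per-class sampling explicit.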
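The feedback-loop selection strategy discussed above (train on a downsampled subset, run one inference pass over the full training set, then rebuild the subset from the worst-performing instances) can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: the `fit`/`predict_proba` model interface, the use of true-label confidence as the "worst performing" criterion, and the helper names are assumptions.

```python
import random

def select_hardest(conf, k):
    """Indices of the k instances with the lowest true-label confidence."""
    return sorted(range(len(conf)), key=lambda i: conf[i])[:k]

def feedback_loop_train(model, X, y, n_loops=25, epochs_per_loop=10,
                        subset_size=1173, seed=0):
    """Sketch of a feedback-loop training-data selection strategy:
    after each loop of training on a downsampled subset, one inference
    pass is made over the full training set and the next subset is
    rebuilt from the instances the model currently handles worst."""
    k = min(subset_size, len(X))
    idx = random.Random(seed).sample(range(len(X)), k)  # initial random subset
    for _ in range(n_loops):
        model.fit([X[i] for i in idx], [y[i] for i in idx],
                  epochs=epochs_per_loop)
        proba = model.predict_proba(X)                  # inference over all training data
        conf = [proba[i][y[i]] for i in range(len(y))]  # confidence in the true label
        idx = select_hardest(conf, k)
    return model, idx
```

Counting each inference pass as one training-instance evaluation, as in the paper's budget accounting, the total cost is roughly `n_loops * (epochs_per_loop * subset_size + len(X))` instance evaluations.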
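The significance test reported in Table 6 compares two populations of 20 repeated-run F1 scores. In practice one would call `scipy.stats.mannwhitneyu`; the following self-contained sketch computes the U statistic with a two-sided p-value from the normal approximation (no tie correction), which is adequate for samples of around 20 runs.

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples, with a
    two-sided p-value from the normal approximation (no tie correction).
    U counts, over all pairs, how often a value in x exceeds one in y
    (ties counted as 0.5)."""
    n1, n2 = len(x), len(y)
    u = sum((xi > yi) + 0.5 * (xi == yi) for xi in x for yi in y)
    mean = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mean) / sd
    # Two-sided p-value via the standard normal CDF (expressed with erf).
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p
```

Feeding the 20 baseline F1 scores and the 20 feedback-loop F1 scores for one dataset/model pair, a small p-value indicates the observed difference is unlikely under the null hypothesis of identical distributions.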
[ "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "other", "abstain", "method", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other" ]
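The labels sequence above assigns one rhetorical tag per sentence of the preceding example. Given the class-imbalance theme of the text, a quick way to inspect such a tag distribution is a simple counter; this is an illustrative snippet with a made-up tag list, not part of the dataset.

```python
from collections import Counter

def label_distribution(labels):
    """Count each tag and return (tag, count, fraction) tuples,
    most frequent first."""
    counts = Counter(labels)
    total = len(labels)
    return [(tag, n, n / total) for tag, n in counts.most_common()]

# Hypothetical tag sequence mirroring the dataset's format:
example = ["abstain", "abstain", "method", "other", "abstain"]
```

Calling `label_distribution(example)` would list "abstain" first with a 0.6 fraction, making skew toward the majority tag immediately visible.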
[ "Identifying changes in individuals' behaviour and mood, as observed via content shared on online platforms, is increasingly gaining importance.", "Most research to date on this topic focuses on either:", "(a) identifying individuals at risk or with a certain mental health condition given a batch of posts or", "(b) providing equivalent labels at the post level.", "A disadvantage of such work is the lack of a strong temporal component and the inability to make longitudinal assessments following an individual's trajectory and allowing timely interventions.", "Here we define a new task, that of identifying moments of change in individuals on the basis of their shared content online.", "The changes we consider are sudden shifts in mood (switches) or gradual mood progression (escalations).", "We have created detailed guidelines for capturing moments of change and a corpus of 500 manually annotated user timelines (18.7K posts).", "We have developed a variety of baseline models drawing inspiration from related tasks and show that the best performance is obtained through context-aware sequential modelling.", "We also introduce new metrics for capturing rare events in temporal windows.", "Linguistic and other content from social media data has been used in a number of different studies to obtain biomarkers for mental health.", "This is gaining importance given the global increase in mental health disorders, the limited access to support services and the prioritisation of mental health as an area by the World Health Organization (2019).", "Studies using linguistic data for mental health focus on recognising specific conditions related to mental health (e.g., depression, bipolar disorder) (Husseini Orabi et al., 2018), or identifying self-harm ideation in user posts (Yates et al., 2017; Zirikly et al., 2019).", "However, none of these works, even when incorporating a notion of time (Lynn et al., 2018; Losada et al., 2020), identify how an individual's mental health changes over time.", "Figure 1: Example of an Escalation (with a darker peak) and a Switch within a user's timeline.", "Yet being able to make assessments on a longitudinal level from linguistic and other digital content is important for clinical outcomes, and especially in mental health (Velupillai et al., 2018).", "The ability to detect changes in individuals' mental health over time is also important in enabling platform moderators to prioritise interventions for vulnerable individuals (Wadden et al., 2021).", "Users who currently engage with platforms and apps for mental health support (Neary and Schueller, 2018) would also benefit from being able to monitor their well-being in a longitudinal manner.", "Motivated by the lack of longitudinal approaches, we introduce the task of identifying 'Moments of Change' (MoC) from individuals' shared online content.", "We focus in particular on two types of changes: Switches (mood shifts from positive to negative, or vice versa) and Escalations (gradual mood progression) (see Fig.
1, detailed in Section 3).", "Specifically, we make the following contributions: We present the novel task of identifying moments of change in an individual's mood by analysing linguistic content shared online over time, along with a longitudinal dataset of 500 user timelines (18.7K posts, English language) from 500 users of an online platform.", "We propose a number of baseline models for automatically capturing Switches/Escalations, inspired by sentence- and sequence-level state-of-the-art NLP approaches in related tasks.", "We introduce a range of temporally sensitive evaluation metrics for longitudinal NLP tasks adapted from the fields of change-point detection (van den Burg and Williams, 2020) and image segmentation (Arbelaez et al., 2010).", "We provide a thorough qualitative linguistic analysis of model performance.", "Social Media and Mental Health Online user-generated content provides a rich resource for computational modelling of wellbeing at both population and individual levels.", "Research has examined mental health conditions by analysing data from platforms such as Twitter and Reddit (De Choudhury et al., 2013; Coppersmith et al., 2014; Cohan et al., 2018) as well as peer-support networks such as TalkLife (Pruksachatkun et al., 2019).", "Most such work relies on proxy signals for annotations (e.g., self-disclosure of diagnoses, posts on support networks) and is characterised by a lack of standardisation in terms of annotation and reporting practices (Chancellor and De Choudhury, 2020).", "We have provided thorough annotation guidelines for Moments of Change that can aid mental health monitoring over time, irrespective of the underlying condition.", "Moments of Change (MoC) Little work has specifically focused on automatically capturing changes in user behaviour based on their social media posts.", "Within the health domain, Guntuku et al.
(2020) showed that a user's language on Facebook becomes more depressed and less informal prior to their visit to an emergency department.", "With respect to mental health, De Choudhury et al. (2016) proposed to identify shifts to suicide ideation by predicting (or not) a transition from posting on a regular forum to a forum for suicide support.", "Pruksachatkun et al. (2019) examined moments of affective change in TalkLife users by identifying positive changes in sentiment at post level with respect to a distressing topic earlier in a user's thread.", "In both cases, MoC are overly specific and modelled through binary classification without any notion of temporal modelling.", "Prior work has also focused on detecting mental health conditions from textual data, including self-harm, suicide ideation, eating disorders, and depression (Benton et al., 2017; Kshirsagar et al., 2017; Yates et al., 2017; Husseini Orabi et al., 2018; Jiang et al., 2020; Shing et al., 2020).", "Researchers are increasingly adopting sequential modelling to capture temporal dynamics of language use and mental health.", "For example, Cao et al. (2019) encode microblog posts using suicide-oriented embeddings fed to an LSTM network to assess the suicidality risk at post level.", "Sawhney et al.
(2020b, 2021) improve further on predicting suicidality at post level by jointly considering an emotion-oriented post representation and the user's emotional state, as reflected through their posting history, with temporally aware models.", "The recent shared tasks in eRisk also consider sequences of user posts in order to classify a user as a positive (with respect to self-harm or pathological gambling) or control case (Losada et al., 2020; Parapar et al., 2021).", "While such work still operates at the post- or user-level, it highlights the importance of temporally aware modelling.", "Related Temporal NLP Tasks Semantic change detection (SCD) aims to identify words whose meaning has changed over time.", "Given a set of word representations in two time periods, the dominant approach is to learn the optimal transformation using Orthogonal Procrustes (Schönemann, 1966) and measure the level of semantic change of each word via the cosine distance of the resulting vectors (Hamilton et al., 2016).", "A drawback of this is the lack of connection between consecutive windows.", "Tsakalidis and Liakata (2020) addressed this through sequential modelling, by encoding word embeddings in consecutive time windows and taking the cosine distance between future predicted and actual word vectors.", "Both approaches are considered as baselines for our task.", "First story detection (FSD) aims to detect new events reported in streams of textual data.", "Having emerged in the Information Retrieval community (Allan et al., 1998), FSD has been applied to streams of social media posts (Petrovic et al., 2010).", "FSD methods assume that a drastic change in the textual content of a document compared to previous documents signals the appearance of a new story.", "A baseline from FSD is considered in Section 4.2.", "We describe the creation of a dataset of individuals' timelines annotated with Moments of Change.", "A user's timeline P^(u)_{s:e} is a subset of their history, a series of posts [p_0, ..., p_n]
shared by user u between dates s and e.", "A Moment of Change (MoC) is a particular point or period (range of time points) within [s, e] where the behaviour or mental health status of an individual changes.", "While MoC can have different definitions in various settings, in this paper we are particularly interested in capturing MoC pertaining to an individual's mood.", "Other types of MoC can include life events, the onset of symptoms or turning points (e.g., moments of improvement, difficult moments or moments of intervention within therapy sessions).", "We address two types of Moments of Change: Switches (sudden mood shifts from positive to negative, or vice versa) and Escalations (gradual mood progression from neutral or positive to more positive, or neutral or negative to more negative).", "Capturing both sudden and gradual changes in individuals' mood over time is recognised as important for monitoring mental health conditions (Lutz et al., 2013; Shalom and Aderka, 2020) and is one of the dimensions to measure in psychotherapy (Barkham et al., 2021).", "Individuals' timelines are extracted from TalkLife, a peer-to-peer network for mental health support.", "TalkLife incorporates all the common features of social networks: post sharing, reacting, commenting, etc.", "Importantly, it provides a rich resource for computational analysis of mental health (Pruksachatkun et al., 2019; Sharma et al., 2020; Saha and Sharma, 2020), given that content posted by its users focuses on their daily lives and well-being.", "A complete collection between Aug '11 and Aug '20 (12.3M posts, 1.1M users) was anonymised and provided to our research team in a secure environment upon signing a License Agreement.", "In this environment, 500 user timelines were extracted (Section 3.2) and an additional anonymisation step was performed to ensure that usernames were properly hashed when present in the text.", "The 500 timelines were subsequently annotated using our bespoke annotation tool (Section 3.3) to
derive the resulting longitudinal dataset (Section 3.4).", "A limitation of our work stems from the fact that MoC are revealed to us by the user's shared content (i.e., we cannot identify changes in a user's well-being unless these are expressed online).", "We provide details on the limitations of our work in the Ethics Statement (Section 7).", "(TalkLife: https://www.talklife.com/) 3.2 Timeline Extraction Existing work extracts user timelines either based on a pre-determined set of timestamps (e.g., considering the most recent posts by a user) (Sawhney et al., 2020b) or by selecting a window of posts around mentions of specific phrases (e.g., around self-harm) (Mishra et al., 2019).", "The latter introduces potential bias into subsequent linguistic analysis (Olteanu et al., 2019), while the former could result in selecting timelines from a particular time period, hence potentially introducing temporally dependent linguistic or topical bias (e.g., a focus on the COVID-19 pandemic).", "Here we instead extract timelines around points in time where a user's posting behaviour has changed.", "Our hypothesis is that such changes in a user's posting frequency could be indicative of changes in their lives and/or mental health.", "Such an association between changes in posting behaviour on mental health fora and changes in mental health has been assumed in prior literature (De Choudhury et al., 2016).", "Identifying changes in posting frequency We create a time series of each user's daily posting frequency based on their entire history.", "We then employ a change-point detection model to predict the intensity of daily post frequency by the given user.", "Bayesian Online Change-point Detection (Adams and MacKay, 2007) with a Poisson-Gamma underlying predictive model (Zachos, 2018) was chosen as our model, due to its highly competitive performance (van den Burg and Williams, 2020) and the fact that timelines extracted using this method had the highest density of MoC compared to a number of different
timeline extraction (anomaly detection and keyword-based) methods for the same dataset.", "Extracting timelines around change-points Upon detecting candidate MoC as change-points in posting frequency, we generated candidate timelines for annotation by extracting all of the user's posts within a seven-day window around each change-point.", "We controlled for timeline length (between 10 and 150 posts, set empirically) so that timelines were long enough to enable annotators to notice a change, but not so long as to hinder effective annotation.", "This control for timeline length means that our subsequent analysis is performed (and models are trained and evaluated) on time periods during which the users under study are quite active; however, the upper bound of 150 posts in 15 days set for each timeline also ensures that we", "do not bias (or limit) our analysis to extremely active users.", "Finally, to ensure linguistic diversity in our dataset, 500 timelines extracted in this way were chosen for annotation at random, each corresponding to a different individual.", "The resulting dataset consists of 18,702 posts (mean = 35, SD = 22 per timeline; range of timeline length = [10,124], see Fig.", "2(a)).", "Annotation Interface An annotation interface was developed to allow efficient viewing and annotation of a timeline (see snippet in Fig.
3).", "Each post in a timeline was accompanied by its timestamp, the user's self-assigned emotion and any associated comments (color-coded, to highlight recurrent users involved within the same timeline).", "Given the context of the entire timeline, annotations for MoC are performed at post level: if an annotator marks a post as a MoC, then they specify whether it is", "(a) the beginning of a Switch or", "(b) the peak of an Escalation (i.e., the most positive/negative post of the Escalation).", "Finally, the range of posts pertaining to a MoC (i.e., all posts in the Switch/Escalation) needs to be specified.", "Data annotation After a round of annotations for guideline development with PhD students within the research group (co-authors of the paper), we recruited three external annotators to manually label the 500 timelines.", "They all have university degrees in humanities disciplines and come from three different countries; one of them is a native English speaker.", "Annotators were provided with a set of annotation guidelines containing specific examples, which were enriched and extended during iterative rounds of annotation.", "Annotators completed 2 hands-on training sessions with a separate set of 10 timelines, where they were able to ask questions and discuss opinions to address cases of disagreement.", "Following the initial training phase, we performed spot checks to provide feedback and answer any questions while they labelled the full dataset (n=500 timelines).", "Annotators were encouraged to take breaks whenever needed, due to the nature of the content.", "On average, each annotator spent about 5 minutes on annotating a single timeline.", "The annotation of MoC is akin to the assessment of anomaly detection methods, since MoC (Switches and Escalations) are rare, with the majority of posts not being annotated (label 'None').", "Measuring the agreement in such settings is therefore complex, as established metrics such as Krippendorff's Alpha and Fleiss'
Kappa would generally yield a low score.", "This is due to the unrealistically high expected chance agreement (Feinstein and Cicchetti, 1990), which cannot be mitigated by the fact that annotators do agree on the majority of the annotations (especially on the None' class).", "For this reason, we have used as the main indicator the per label positive agreement computed as the ratio of the number of universally agreed-upon instances (the intersection of posts associated with that label) over the total number of instances (the union of posts associated with that label).", "As highlighted 3 Guidelines are available at https:// github.com/Maria-Liakata-NLP-Group/Annotation-guidelines .", "in Table 1, while perfect agreement for None' is at 69%, perfect agreement on Escalations and Switches is at 19% and 8%, respectively.", "However, if instead of perfect agreement we consider majority agreement (where two out of three annotators agree), these numbers drastically increase (30% for Switches and 50% for Escalations).", "Moreover, by examining the systematic annotation preferences of our annotators we have observed that the native speaker marked almost double the amount of Switches compared to the other two annotators, in particular by spotting very subtle cases of mood change.", "We have thus decided to generate a gold standard based on majority decisions, comprising only cases where at least two out of three annotators agree with the presence of a MoC.", "The rare cases of complete disagreement have been labelled as None'.", "We thus have 2,018 Escalations and 885 Switches from an overall of 18,702 posts (see Fig.", "2(b) for the associated lengths in #posts).", "In future work we plan to consider aggregation methods based on all annotations or approaches for learning from multiple noisy annotations (Paun and Simpson, 2021).", "Our aim is to detect and characterise the types of MoC based on a user's posting activity.", "We therefore treat this problem as a supervised 
classification task (both at post level and in a sequential/timeline-sensitive manner, as presented in 4.2) rather than an unsupervised task, even though we also effectively consider baselines with unsupervised components (FSD, SCD in 4.2).", "Contrary to traditional sentence or document-level NLP tasks, we incorporate timeline-sensitive evaluation metrics that account for the sequential nature of our model predictions (4.1).", "Given a user's timeline, the aim is to classify each post within it as belonging to a Switch (IS), an Escalation (IE), or None (O).", "At this point we don't distinguish between beginnings of switches/peaks of escalations and other posts in the respective ranges.", "While the task is sequential by definition, we train models operating both at the post level in isolation and sequential models at the timeline-level (i.e., accounting for the user's posts over time), as detailed in 4.2.", "We contrast model performance using common post-level classification metrics as well as novel timeline-level evaluation approaches (4.1).", "This allows us to investigate the impact of", "(a) accounting for severe class imbalance and", "(b) longitudinal modelling.", "We have randomly divided the annotated dataset into 5 folds (each containing posts from 100 timelines) to allow reporting results on all of the data through cross-validation.", "Post-level We first assess model performance on the basis of standard evaluation metrics at the post level (Precision, Recall, F1 score).", "These are obtained per class and macro-averaged, to better emphasize performance in the two minority class labels (IS & IE).", "However, post-level metrics are unable to show:", "(a) the expected accuracy at the timeline level (see example in Fig.
4) and", "(b) model suitability in predicting regions of change.", "These aspects are particularly important since we aim to build models capturing MoC over time.", "Timeline-level Our first set of timeline-level evaluation metrics is inspired by work in changepoint detection (van den Burg and Williams, 2020) and mirrors the post-level ones, albeit operating on a window and timeline basis.", "Specifically, working on each timeline and label type independently, we calculate Recall $R_w^{(l)}$ (Precision $P_w^{(l)}$) by counting as correct a model prediction for label l if the prediction falls within a window of w posts around a post labelled l in the gold standard.", "Formally: $R_w^{(l)} = \frac{|TP_w(M^{(l)}, GS^{(l)})|}{|GS^{(l)}|}$, $P_w^{(l)} = \frac{|TP_w(M^{(l)}, GS^{(l)})|}{|M^{(l)}|}$, where $TP_w$ denotes the true positives that fall within a range of w posts and $M^{(l)}$/$GS^{(l)}$ are the predicted/actual labels for l, respectively.", "Note that each prediction can only be counted once as correct.", "$R_w^{(l)}$ and $P_w^{(l)}$ are calculated on every timeline and are then macro-averaged.", "The second set of our timeline-level evaluation metrics is adapted from the field of image segmentation (Arbelaez et al., 2010).", "Here we aim at evaluating model performance based on its ability to capture regions of change (e.g., in Fig. 4, 'GS' shows a timeline with three (two) such regions of Escalations (Switches)).", "For each such true region $R_{GS}^{(l)}$, we define its overlap $O(R_{GS}^{(l)}, R_M^{(l)})$ with each predicted region $R_M^{(l)}$ as the intersection over union between the two sets.", "This way, we can get recall- and precision-oriented coverage metrics as follows: $C_r^{(l)}(M \rightarrow GS) = \frac{1}{\sum_{R_{GS}^{(l)}} |R_{GS}^{(l)}|} \sum_{R_{GS}^{(l)}} |R_{GS}^{(l)}| \max_{R_M^{(l)}} \{O(R_{GS}^{(l)}, R_M^{(l)})\}$. Figure 4: Actual (GS, shown twice) vs Predicted labels for each post (square) of a single timeline, by two models (M1, M2).", "$C_p^{(l)}(M \rightarrow GS) = \frac{1}{\sum_{R_M^{(l)}} |R_M^{(l)}|} \sum_{R_M^{(l)}} |R_M^{(l)}| \max_{R_{GS}^{(l)}} \{O(R_{GS}^{(l)}, R_M^{(l)})\}$.", "The coverage metrics are calculated on a per-timeline basis and macro-averaged similarly to $R_w^{(l)}$ and $P_w^{(l)}$.", "Using a set of evaluation metrics, each capturing a different aspect of the task, ensures assessment of model performance from many different angles.", "We have considered different approaches to addressing our task:", "(i) Naïve methods, specifically a Majority classifier (always predicting None) and a Random predictor, picking a label based on the overall label distribution in the dataset.", "It has been shown that comparisons against such simple baselines are essential to assess performance in computational approaches to mental health (Tsakalidis et al., 2018).", "(ii) Post-level supervised models operating on posts in isolation (i.e., ignoring post sequence in a user's timeline):", "(a) Random Forest (Breiman, 2001) on tfidf post representations (RF-tfidf);", "(b) BiLSTM (Huang et al., 2015) operating on sequences of word embeddings (BiLSTM-we);", "(c) BERT(ce) (Devlin et al., 2019) using the cross-entropy loss; and", "(d) BERT(f) trained using the alpha-weighted focal loss (Lin et al., 2017), which is more appropriate for imbalanced datasets.", "(iii) Emotion Classification We used DeepMoji (EM-DM) (Felbo et al., 2017) and Twitter-roBERTa-base (EM-TR) from TweetEval '20 (Barbieri et al., 2020) operating on the post-level, to generate softmax probabilities for each emotion (64 for EM-DM, 4 for EM-TR).", "These provide meta-features to a BiLSTM to obtain timeline-sensitive models for identifying MoC.", "(iv) First Story Detection (FSD).", "We have used two common approaches for comparing a post to the n previous ones: representing the previous posts as", "(i) a single centroid or", "(ii) the nearest neighbour to the current post among them (Allan et al., 1998; Petrovic et al., 2010).", "In both cases, we calculate the cosine
similarity of the current and previous posts.", "The scores are then fed into a BiLSTM as meta-features for a sequential model.", "Results are reported for the best method only.", "(v) Semantic Change Detection (SCD).", "Instead of the standard task of comparing word representations in consecutive time windows, we consider a user being represented via their posts at particular points in time.", "We follow two approaches.", "The first is an Orthogonal Procrustes approach (Schönemann, 1966) operating on post vectors (SCD-OP).", "Our aim here is to find the optimal transformation across consecutive representations, with higher errors being indicative of a change in the user's behaviour.", "In the second approach (SCD-FP), a BiLSTM is trained on the user's k previous posts in order to predict the next one (Tsakalidis and Liakata, 2020).", "Errors in prediction are taken to signal changes in the user.", "In both cases, we calculate the dimension-wise difference between the actual and the transformed/predicted representations (post vectors) and use this as a meta-feature to a BiLSTM to obtain a time-sensitive model.", "(vi) Timeline-sensitive.", "From our", "(ii) post-level classifiers, BERT(f) tackles the problem of imbalanced data but fails to model the task in a longitudinal manner.", "To remedy this, we employ BiLSTM-bert, which treats a timeline as a sequence of posts to be modelled, each being represented via the [CLS] representation of BERT(f).", "To convert the post-level scores/representations from", "(iii)-(v) above into time-sensitive models we used the same BiLSTM from", "(vi), operating at the timeline-level.", "Details for each model and associated hyperparameters are in the Appendix.", "Model Comparison Table 2 summarises the results of all models; Fig.
5 further shows the Pw/Rw metrics for IE/IS for the best-performing models.", "Table 2: Post-level and Coverage-based evaluation for each model (first and second highest scores are highlighted); columns give Post-level Evaluation (P R F1 for IS, IE, O and macro-avg) followed by Coverage-based Metrics (Cp Cr for IS, IE, O and macro-avg). Naïve: Majority .000 .000 .000 .000 .845 1.000 .916 .282 .333 .305 .000 .000 .619 .559 .206 .186; Random .047 .047 .047 .108 .108 .108 .845 .845 .845 .333 .333 .333 .031 .045 .033 .096 .386 .452 .150 .198. Post-level: RF-tfidf .294 .006 .011 .568 .087 .151 .852 .991 .917 .571 .361 .360 .250 .005 .152 .087 .632 .602 .345 .231; BiLSTM-we .245 .119 .160 .416 .347 .378 .878 .923 .900 .513 .463 .479 .173 .091 .138 .330 .557 .606 .289 .342; BERT(ce) .285 .186 .222 .454 .368 .406 .883 .921 .901 .540 .492 .510 .247 .163 .172 .344 .578 .621 .332 .376; BERT(f) .260 .321 .287 .401 .478 .436 .898 .864 .881 .520 .554 .534 .227 .269 .160 .423 .503 .567 .297 .420. Timeline-level: FSD .000 .000 .000 .000 .845 1.000 .916 .282 .333 .305 .000 .000 .619 .559 .206 .186; EM-TR .344 .036 .065 .444 .248 .318 .865 .957 .909 .551 .414 .431 .297 .024 .273 .104 .639 .589 .403 .239; EM-DM .533 .118 .193 .479 .351 .405 .880 .948 .913 .631 .472 .504 .347 .023 .363 .177 .646 .592 .452 .264; SCD-OP .200 .005 .009 .478 .408 .440 .882 .947 .913 .520 .453 .454 .167 .001 .344 .180 .663 .609 .391 .263; SCD-FP .270 .082 .126 .503 .370 .426 .880 .944 .911 .551 .465 .488 .227 .039 .317 .254 .649 .611 .398 .301; BiLSTM-bert .397 .264 .316 .568 .461 .508 .898 .936 .917 .621 .553 .580 .331 .197 .345 .340 .664 .656 .447 .398.", "BiLSTM-bert confidently outperforms all competing models in terms of post-level macro-F1.", "It provides an 8.6% relative improvement (14% for the IS/IE labels) against the second-best performing model (BERT(f)).", "Furthermore, it achieves a great balance between precision- and recall-oriented timeline-level metrics, being consistently the second-best performing model.", "This performance is largely attributed to two factors, which are studied further below:", "(a) the use of the Focal loss on BERT, generating [CLS] representations that are much more focused on the minority classes (IE/IS), and", "(b) its longitudinal aspect.", "Post-level The BERT variants perform better than the rest in all metrics.", "Their coverage metrics, though, suggest that while they manage to predict the regions better than most timeline-level methods (i.e., high Cr), they tend to predict more regions than needed (i.e., low Cp), partially due to their lack of contextual (temporal-wise) information.", "Finally, as expected, BERT(f) achieves much higher recall for the minority classes (IE/IS), in exchange for a drop in precision compared to BERT(ce) and in recall for the majority class (O).", "Models from Related Tasks EM-DM achieves very high precision (P, Pw) for the minority classes, showing a clear link between the tasks of emotion recognition and detecting changes in a user's mood; indeed, emotionally informed models have been successfully applied to post-level classification tasks in mental health (Sawhney et al., 2020a); however, both EM models achieve low recall (R, Rw) for IE/IS compared to the rest.", "For the SCD-inspired models, SCD-FP outperforms SCD-OP on most metrics.", "This is largely due to the fact that the former uses the previous k=3 posts to predict the next one (instead of aligning it based on the previous post only).", "Thus SCD-FP benefits from its longitudinal component, a finding consistent with work in semantic change detection (Tsakalidis and Liakata, 2020).", "Representation vs Fine-tuning vs Focal Loss While BiLSTM-bert yields the highest macro-F1 and the most robust performance across all metrics, it is not clear which of its components contributes the most to our task.", "To answer this, we perform a comparison against the exact same BiLSTM, albeit fed with
different input types:", "(a) average word embeddings as in BiLSTM-we,", "(b) Sentence-BERT representations (Reimers and Gurevych, 2019) and", "(c) fine-tuned representations from BERT(ce).", "As shown in Table 3, fine-tuning with BERT(ce) outperforms Sentence-BERT representations.", "While the contextual nature of all of the BERT-based models offers a clear improvement over the static word embeddings, it becomes evident that the use of the focal loss during training", "the initial BERT(f) is vital, offering a relative improvement of 6% in post-level macro-F1 (13.7% for IS/IE).", "Calibrating the parameters in the focal loss could provide further improvements for our task in the future (Mukhoti et al., 2020).", "Timeline- vs Post-level Modelling The importance of longitudinal modelling is shown via the difference between the BERT and BiLSTM variants when operating on single posts vs on the timeline-level (e.g., see the post-level results of BERT(ce) / Word emb.", "in Table 3 vs BERT(ce) / BiLSTM-we in Table 2, respectively).", "We further examine the role of longitudinal modelling in the rest of our best-performing models from Table 2.", "In particular, we replace the timeline-level BiLSTM in EM-DM and SCD-FP with a two-layer feed-forward network, operating on post-level input representations treating each post in isolation.", "The differences across all pairwise combinations with and without the longitudinal component are shown in Fig. 6.
Timeline-level models achieve much higher precision (6.1%/6.9%/11.1% for P/P1/Cp, respectively) in return for a small sacrifice in the timeline-level recall-oriented metrics (-2.8%/1.9%/2.3% for R/R1/Cr), further highlighting the longitudinal nature of the task.", "Here we analyse the cases of Switches/Escalations identified or missed by our best performing model (BiLSTM-bert).", "Switches (IS) are the most challenging to identify, largely due to being the smallest class with the lowest inter-annotator agreement.", "However, the EM-based models achieve high levels of precision on Switches, even during post-level evaluation (see Table 2).", "We therefore employ EM-TR (Barbieri et al., 2020), assigning probability scores for anger/joy/optimism/sadness to each post, and use them to characterise the predictions made by BiLSTM-bert.", "Fig. 7 and Table 4 show that our model predicts more often (in most cases, correctly) a 'Switch' when the associated posts express positive emotions (joy/optimism), but misses the vast majority of cases when these emotions are absent.", "The reason for this is that TalkLife users discuss issues around their well-being, with a negative mood prevailing.", "Therefore, BiLSTM-bert learns that the negative tone forms the users' baseline and thus deviations from this constitute cases of 'Switches' (see example in Table 5).", "We plan to address this in the future by incorporating transfer learning approaches to our model (Ruder et al., 2019).", "Escalations (IE) are better captured by our models.", "Here we examine more closely the cases of 'Peaks' in the escalations (i.e., the posts indicating the most",
8 we analyse the recall of our model in capturing posts denoting escalations, in relation to the length of escalations.", "We can see that our model is more effective in capturing longer escalations.", "As opposed to the Switch class, we found no important differences in the expressed emotion between TP and FN cases.", "By carefully examining the cases of Peaks in isolation, we found that the majority of them express very negative emotions, very often including indication of self-harm.", "A Logistic Regression trained on bigrams at the post-level to distinguish between identified vs missed cases of Peaks showed that the most positively correlated features for the identified cases were directly linked to self-harm (e.g., kill myself , to die , kill me ).", "However, this was not necessarily the case with missed cases.", "Nevertheless, there were several cases of self-harm ideation that were missed by BiLSTM-bert , as well as misses due to the model ignoring the user's baseline, as is the case with Switches (see Table 6).", "Transfer learning and domain adaptation strategies as well as self-harm detection models operating at the post level could help in mitigating this problem.", "When my parents go out, I am gonna cut.", "I feel so horrible.", "I really don't want to be here anymore.", "Someone please text", "me...", "I swear I am about to harm myself...", "Please,", "anyone!' 
Had an awesome day with my gf and she tagged me!", "I am not alone!", ":) Have not cut for the past year!!", "Yay!!", "We present a novel longitudinal dataset and associated models for personalised monitoring of a user's well-being over time based on linguistic online content.", "Our dataset contains annotations for:", "(a) sudden shifts in a user's mood (switches) and", "(b) gradual mood progression (escalations).", "Proposed methods are inspired by state-of-the-art contextual models and longitudinal NLP tasks.", "Importantly, we have introduced temporally sensitive evaluation metrics, adapted from the fields of change-point detection and image segmentation.", "Our results highlight the importance of considering the temporal aspect of the task and the rarity of mood changes.", "Future work could follow four main directions:", "(a) integrating longitudinal models of detecting changes with post-level models for emotion and self-harm detection (see 5.2);", "(b) incorporating transfer learning methods (Ruder et al., 2019) to adapt more effectively to unseen users' timelines;", "(c) adjusting our models to learn from multiple (noisy) annotators (Paun and Simpson, 2021); and", "(d) calibrating the parameters of focal loss and testing other loss functions suited to heavily imbalanced classification tasks (Jadon, 2020).", "Ethics Institutional review board (IRB) approval was obtained from the corresponding ethics board of the University of Warwick prior to engaging in this research study.", "Our work involves ethical considerations around the analysis of user-generated content shared on a peer support network (TalkLife).", "A license was obtained to work with the user data from TalkLife and a project proposal was submitted to them in order to embark on the project.", "The current paper focuses on the identification of moments of change (MoC) on the basis of content shared by individuals.", "These changes involve recognising sudden shifts in mood (switches or escalations).", "Annotators were given contracts and paid fairly in line with University payscales.", "They were alerted about potentially encountering disturbing content and were advised to take breaks.", "The annotations are used to train and evaluate natural language processing models for recognising moments of change as described in our detailed guidelines.", "Working with datasets such as TalkLife and data on online platforms where individuals disclose personal information involves ethical considerations (Mao et al., 2011; Kekulluoglu et al., 2020).", "Such considerations include careful analysis and data sharing policies to protect sensitive personal information.", "The data has been de-identified both at the time of sharing by TalkLife and by the research team, to make sure that no user handles or names are visible.", "Any examples used in the paper are either paraphrased or artificial.", "Potential risks from the application of our work in being able to identify moments of change in individuals' timelines are akin to those in earlier work on personal event identification from social media and the detection of suicidal ideation.", "Potential mitigation strategies include restricting access to the code base and annotation labels used for evaluation.", "Limitations Our work in this paper considers moments of change as changes in an individual's mood judged on the basis of their self-disclosure of their well-being.", "This is faced by two limiting factors:", "(a) users may not be self-disclosing important aspects of their daily lives and", "(b) other types of changes related to their mental health (other than their mood/emotions, such as important life events, symptoms etc.)
may be taking place.", "Though our models could be tested in cases of non-self-disclosure (given the appropriate ground truth labels), the analysis and results presented in this work should not be used to infer any conclusion on such cases.", "The same also holds for other types of moments of change' mentioned in 2 (e.g., transition to suicidal thoughts), as well as other types of changes, such as changes in an individual in terms of discussing more about the future, studied in Althoff et al. (2016), or changes in their self-focus (Pyszczynski and Greenberg, 1987) over time, which we do not examine in this current work.", "This work was supported by a UKRI/EPSRC Turing AI Fellowship to Maria Liakata (grant EP/V030302/1) and the Alan Turing Institute (grant", "EP/N510129/1).", "The authors would like to thank Dana Atzil-Slonim, Elena Kochkina, the anonymous reviewers and the meta-reviewer for their valuable feedback on our work, as well as the three annotators for their invaluable efforts in generating the longitudinal dataset." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "objective", "abstain", "method", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other" ]
[ "Text classification is a significant branch of natural language processing, and has many applications including document classification and sentiment analysis.", "Unsurprisingly, those who do text classification are concerned with the run-time of their algorithms, many of which depend on the size of the corpus' vocabulary due to their bag-of-words representation.", "Although many studies have examined the effect of preprocessing techniques on vocabulary size and accuracy, none have examined how these methods affect a model's run-time.", "To fill this gap, we provide a comprehensive study that examines how preprocessing techniques affect the vocabulary size, model performance, and model run-time, evaluating ten techniques over four models and two datasets.", "We show that some individual methods can reduce run-time with no loss of accuracy, while some combinations of methods can trade 2-5% of the accuracy for up to a 65% reduction of run-time.", "Furthermore, some combinations of preprocessing techniques can even provide a 15% reduction in run-time while simultaneously improving model accuracy.", "1 Introduction With the increasing amount of text data available, text analysis has become a significant part of machine learning (ML).", "Many problems in text analysis use ML methods to perform their task, ranging from classical problems like text classification and topic modeling, to more complex tasks like question answering.", "Although neural networks have become increasingly common in the research field, many industry NLP problems can be well served by less complex but more efficient and explainable models, such as Support Vector Machines (SVMs) or K-Nearest Neighbors (K-NN).", "We focus on the text classification problem, where the dominant approach to using these non-neural models is to first calculate the number of unique terms in the dataset (the vocabulary, size V) and encode each instance of the dataset into a bag-of-words (BoW) representation (Joachims,
1998; Zhang et al., 2010).", "This results in a high-dimensional vector of size V that indicates whether each given word of the vocabulary was used in this instance.", "However, the vanilla approach to the BoW representation can lead to sub-par performance, as shown by numerous studies that have examined how preprocessing techniques affect the BoW w.r.t. performance and vocabulary size.", "These studies have examined this representation in fields such as information retrieval (Chaudhari et al., 2015; Patil and Atique, 2013; Beil et al., 2002; Senuma, 2011), text classification (Yang and Pedersen, 1997; Caragea et al., 2012; Uysal and Gunal, 2014; Vijayarani et al., 2015; Kumar and Harish, 2018; HaCohen-Kerner et al., 2020; Symeonidis et al., 2018) and topic modeling (Schofield and Mimno, 2016; Blei et al., 2003).", "They suggest a myriad of preprocessing techniques that could improve performance, ranging from choosing features that have high mutual information or low frequency, to simply removing punctuation.", "Another related problem of the BoW representation is that this sparse high-dimensional vector does not scale well to datasets with large vocabularies.", "As preprocessing techniques help contribute to a reduced vocabulary, they should also help alleviate this scaling problem, at least according to folklore.", "However, to the best of our knowledge, no previous study of preprocessing techniques has examined how they contribute to reduced run-time costs, leading to uncertainty about what these techniques do to mitigate the computational complexity in practice.", "Figure 1: Comparing vocabulary size (in millions) vs the total number of words (in 10s of millions) for the AP News and Amazon corpora.", "Note that the vocabulary size of AP News w.r.t. the number of documents plateaus much faster than the noisier Amazon corpus.", "We study how preprocessing methods affect not only vocabulary size and performance, but also training and inference time.", "To do this, we contribute a comprehensive analysis of 10 different preprocessing methods applied to four machine learning models, evaluated on two datasets with widely varying vocabularies (Figure 1).", "Our results show that the individual preprocessing methods provide widely different effects on run-time, with some methods (e.g. rare word filtering and stopword removal) providing significant run-time reductions without losing any performance.", "We also show that some combinations of preprocessing methods both improve performance and reduce run-time.", "Datasets To see how preprocessing affects run-time, we examine two datasets (in English): the Amazon (He and McAuley, 2016, http://jmcauley.ucsd.edu/data/amazon/) and AP News corpora (MacIntyre, 1998).", "These datasets were chosen because of the wide disparity between their vocabularies.", "The Amazon corpus comes from user product reviews and contains a much higher vocabulary relative to the number of documents, due to its noisy text.", "The AP News corpus contains professionally-edited news articles and its vocabulary plateaus much faster than the Amazon corpus (Figure 1).", "We perform sentiment analysis on Amazon and year classification on AP News and report scores with the accuracy metric.", "We note that we also computed the F1 score alongside accuracy and found our results to be similar; thus we report accuracy since it is easier to understand.", "To test the effect of document size on preprocessing, we sampled various-sized datasets from the original corpus and ran our analysis on each, sampling 5 different times with differing random seeds.", "However, we found that our results were nearly identical across the differing corpus sizes and thus only report numbers for the 100k size.", "Preprocessing
Methods We analyze 10 different methods (with their shortened names in parentheses): lowercasing (lower), rare word filtering (rare), hashing (hash), punctuation removal (nopunct), stopword removal (stop), number removal (nrem), word stemming (stem), lemmatization (lemma), spelling correction (spell), and word segmentation (seg).", "We choose these methods because of their prevalence in previous work (Symeonidis et al., 2018; Kumar and Harish, 2018; HaCohen-Kerner et al., 2020) and their use in industry (Li et al., 2013; Sanchez-Pi et al., 2014).", "Due to the exponential number of possible preprocessing combinations, we run all individual methods but restrict the search space of combinations of these methods.", "For rare word filtering and word hashing, we first conduct experiments for 9 different levels of filtering individually, using only the best level in future combinations with other methods.", "Results for all levels of filtering and hashing are in Appendices A and B. We then conduct experiments for all 24 combinations of spelling correction, word segmentation, number removal, and stopword removal, using the best outcome (the pipeline of all four) to combine with other methods.", "We note that while this is not an exhaustive search of all combinations, our analysis includes the standard preprocessing pipelines as well as many more.", "Models We use Scikit-Learn (Pedregosa et al., 2011) for three of the base algorithms, including K-NN (Altman, 1992), Naive Bayes (Rish et al., 2001), and the Support Vector Machine (SVM; Suykens and Vandewalle, 1999).", "We also employ Vowpal Wabbit (Langford et al., 2007; Karampatziakis and Langford, 2010), due to its strong performance and frequent use in industry.", "These four models provide a wide range of algorithms that might be used, allowing us to show how preprocessing methods generalize across models.", "We format our results relative to the algorithm with no preprocessing, to easily show how preprocessing
changes this baseline performance.", "We first run each algorithm with no preprocessing, measuring the run-time, vocabulary size, and accuracy.", "We then report the scores of each preprocessing pipeline relative to the algorithm's baseline (e.g. a model with preprocessing that scores 75% of the no-preprocessing baseline's accuracy has a relative accuracy of 0.75).", "As the cross product of the number of methods vs. the number of models is still far too large to include in this paper, we show the average of each model's relative proportion to its respective baseline performance.", "This aggregation shows us the average relative performance across the four models, helping us generalize our results to be model-independent.", "For full tables detailing specific model results, see Appendix C. Bold scores in tables indicate statistical similarity to the best score in the column (two-sample t-test, α = 0.05).", "[Individual Techniques] We see results for the Amazon corpus in Table 1 and for the AP News corpus in Table", "2. On Amazon, each individual preprocessing method performs statistically similar to the baseline's accuracy, while three methods (stopword removal, rare word filtering, and word segmentation) also provide a moderate decrease (20-30%) in train and test time.", "Rare word filtering and stopword removal are effective across both corpora (with rare word filtering being even more effective on AP News, reducing the training time by half), while the other methods do not significantly impact either train-time or accuracy on AP News.", "We hypothesize that these techniques are more effective on the AP corpus because of its much smaller (and less varied) vocabulary.", "[Footnote 5: We first compute each algorithm's relative score to its baseline (e.g. SVM with rare word filtering vs. SVM with no preprocessing) and then take the average of the models for that method (e.g. average the relative performance of rare word filtering on models { K-NN, Naive Bayes, SVM, and Vowpal Wabbit } for the final score for rare word filtering).]", "On the Amazon corpus, a handful of methods trade 2-5% of accuracy for up to a 65% reduction in training and testing time (Lowest Train/Test Time section in Table 1).", "Those that do not reduce accuracy (such as stop+rare) can still reduce the training and testing time by up to 55%.", "We see in the Highest Accuracy section that some methods (i.e. spell+seg+rare, etc.) can even improve performance by almost 2% while also reducing run-time by 10-15%.", "Similarly, when we examine the results on AP News we can find combinations with reduced run-time (up to 70% and 50% reductions in train and test time respectively) with no accuracy loss (but also no gains).", "[Correlations] In order to show the correlation between run-time and the other variables, we show a heatmap of these correlations in Figure", "2. Most of these variables are highly correlated with each other, as expected (training time is highly correlated with testing time, etc.).", "However, although testing time is highly correlated with vocabulary size (0.8 correlation), training time is not highly correlated (0.17). We hypothesize that a small vocabulary directly leads to faster inference, while which words are removed from the vocabulary plays a bigger role in how quickly the algorithm converges during training.", "This hypothesis is also supported by the low correlation between vocabulary size and accuracy, indicating that what is in the vocabulary is more important than its size.", "These experiments relate to a large body of work that considers how preprocessing methods affect the downstream accuracy of various algorithms, ranging from topics in information retrieval (Chaudhari et al., 2015; Patil and Atique, 2013; Beil et al., 2002), text classification and regression (Forman, 2003; Yang and Pedersen, 1997; Vijayarani et al., 2015; Kumar and Harish, 
2018; HaCohen-Kerner et al., 2020; Symeonidis et al., 2018; Weller et al., 2020), topic modeling (Blei et al., 2003; Lund et al., 2019; Schofield and Mimno, 2016; Schofield et al., 2017a,b), and even more complex tasks like question answering (Jijkoun et al., 2003; Carvalho et al., 2007) and machine translation (Habash, 2007; Habash and Sadat, 2006; Leusch et al., 2005; Weller et al., 2021; Mehta et al., 2020) to name a few.", "With the rise of noisy social media, text preprocessing has become important for tasks that use data from sources like Twitter and Reddit (Symeonidis et al., 2018; Singh and Kumari, 2016; Bao et al., 2014; Jianqiang, 2015; Weller and Seppi, 2020; Zirikly et al., 2019; Babanejad et al., 2020).", "The closest lines of work to ours are those that examine how preprocessing affects text classification accuracy, where recent works like Symeonidis et al. (2018) and HaCohen-Kerner et al. (2020) analyze and cross-compare up to 16 different techniques for four machine learning algorithms.", "In contrast, our work is the first to examine these preprocessing techniques beyond accuracy, examining them in tandem with how they affect vocabulary size and run-time.", "In this work we conduct the first study that examines the relationship between vocabulary size, run-time, and accuracy across different models and corpora for text classification.", "In general, we find that although vocabulary size is highly correlated with testing time, it is not highly correlated with training time or accuracy.", "In these cases, the specifics of the preprocessing algorithm (the content of what it removes) matter more.", "Our experiments show that rare word filtering and stopword removal are superior to many other common preprocessing methods, both in terms of their ability to reduce run-time and their potential to increase accuracy.", "By using these methods, we show that it is possible to reduce training and testing time by up to 65% with a loss of only 2-5% of accuracy, 
or in some cases, to provide accuracy and run-time improvements simultaneously.", "We hope that this study can help both researchers and industry practitioners as they design machine learning pipelines to reach their end-goals." ]
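The vocabulary-reduction effect of stopword removal and rare word filtering described in the sentences above can be sketched as follows. This is a minimal illustration, not the paper's code: the toy corpus, the three-word stopword list, and the count threshold are all invented for the example.

```python
from collections import Counter

STOPWORDS = {"the", "a", "on"}  # tiny illustrative stopword list

def vocabulary(docs, min_count=1, stopwords=()):
    # Count whitespace tokens, then keep only words that survive
    # rare-word filtering (count >= min_count) and stopword removal.
    counts = Counter(tok for doc in docs for tok in doc.split())
    return {w for w, c in counts.items() if c >= min_count and w not in stopwords}

docs = [
    "the cat sat on the mat",
    "the dog sat on a log",
    "the cat saw the dog",
]

full = vocabulary(docs)                                    # no preprocessing
rare = vocabulary(docs, min_count=2)                       # drop hapaxes
stop = vocabulary(docs, stopwords=STOPWORDS)               # drop stopwords
both = vocabulary(docs, min_count=2, stopwords=STOPWORDS)  # stop+rare
print(len(full), len(rare), len(stop), len(both))  # 9 5 6 3
```

Even on this toy corpus the stop+rare combination cuts the vocabulary to a third of its original size, which is the mechanism behind the train/test-time reductions the paper reports.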
[ "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "result", "result", "method", "abstain", "abstain", "abstain", "method", "other", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "other", "other", "other", "objective", "objective", "result", "abstain", "result", "result", "objective" ]
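The relative-score aggregation described above (each model's method score divided by that model's own no-preprocessing baseline, then averaged across the four models) can be sketched as below. All accuracy numbers are invented for illustration; they are not values from the paper's tables.

```python
def avg_relative_score(baselines, method_scores):
    # Each model's score relative to its own no-preprocessing baseline,
    # averaged across models for a model-independent aggregate.
    rel = [method_scores[m] / baselines[m] for m in baselines]
    return sum(rel) / len(rel)

# Illustrative accuracies only -- not taken from the paper.
baselines = {"knn": 0.60, "nb": 0.50, "svm": 0.80, "vw": 0.70}
rare_word = {"knn": 0.60, "nb": 0.55, "svm": 0.76, "vw": 0.70}

print(round(avg_relative_score(baselines, rare_word), 4))  # 1.0125
```

A value above 1.0 means the method improved accuracy on average relative to the baselines, even though an individual model (here the SVM) may have lost accuracy.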
[ "Generating metaphors is a difficult task as it requires understanding nuanced relationships between abstract concepts.", "In this paper, we aim to generate a metaphoric sentence given a literal expression by replacing relevant verbs.", "Guided by conceptual metaphor theory, we propose to control the generation process by encoding conceptual mappings between cognitive domains to generate meaningful metaphoric expressions.", "To achieve this, we develop two methods: 1) using FrameNet-based embeddings to learn mappings between domains and applying them at the lexical level (CM-Lex), and 2) deriving source/target pairs to train a controlled seq-to-seq generation model (CM-BART).", "We assess our methods through automatic and human evaluation for basic metaphoricity and conceptual metaphor presence.", "We show that the unsupervised CM-Lex model is competitive with recent deep learning metaphor generation systems, and CM-BART outperforms all other models both in automatic and human evaluations.", "[1 Introduction] Recent neural models have led to important progress in natural language generation (NLG) tasks.", "While pre-trained models have facilitated advances in many areas of generation, the field of metaphor generation remains relatively unexplored.", "Moreover, the few existing deep learning models for metaphor generation (Yu and Wan, 2019; Stowe et al., 2020; Chakrabarty et al., 2020) lack any conceptualization of the meaning of the metaphors.", "This work proposes the first step towards metaphor generation informed by the conceptual metaphor theory (CMT) (Lakoff and Johnson, 1980; Lakoff, 1993; Reddy, 1979).", "CMT holds [Footnote 1: All code, models, and data are made available at: https://github.com/UKPLab/acl2021-metaphor-generation-conceptual] [Figure 1: Metaphor generation guided by conceptual metaphors.]", "that we use conceptual mappings between domains (conceptual structures that group related concepts) to generate linguistic metaphors.", "Metaphoric mappings consist of a source and a target conceptual domain.", "The source domain is the conceptual domain from which we draw the metaphorical expressions, while the target domain is the conceptual domain that we try to understand.", "A classical mapping is ARGUMENT IS WAR, in which we conceptualize the target argumentation domain as the more concrete source domain of war: They fought against the contract.", "We focus on verbs, as they are often the key component of metaphoric expressions (Steen et al., 2010; Martin, 2006).", "When used metaphorically, verbs typically evoke source domains (e.g. fought, defended in the above examples): they are concrete, and are used to understand more abstract targets (i.e., argumentation verbs such as argued, supported) via conceptual mappings (Sullivan, 2013).", "We propose a novel framework for metaphor generation informed by conceptual metaphor theory.", "Given a literal input sentence that evokes a target domain we generate metaphoric sentences that [Footnote 2: Domains are also often referred to as image schema, frames, scenes, and more; see Kövecses (2020).] evoke desired corresponding source domain(s).", "For example, given the literal sentence The party ended as soon as she left evoking the target domain CAUSE TO END, we can apply a variety of conceptual mappings to generate different metaphoric outputs evoking different source domains (see Figure 1).", "This allows us to generate metaphoric expressions that match known metaphoric mappings, as well as generating from unseen mappings to explore novel metaphors.", "Our contributions are: Two metaphor generation models grounded in CMT: 1) An unsupervised lexical model relying on frame embeddings learned from FrameNet (CM-Lex, Section 3.1) and 2) a BART (Lewis et al., 2020) model encoding source/target domain information through fine-tuning (CM-BART, Section 3.2).", "Two metaphor generation tasks: 1) generate metaphoric expressions from known concept mappings, for which we provide gold standard test data, and 2) generate novel expressions from unknown metaphors using rare and unseen mappings (Section 4).", "A thorough evaluation using both automatic and human evaluations (Section 5).", "We show that our CM-BART model improves over all others in terms of metaphoricity (by 7%) and domain evocation (by 33%), and CM-Lex is competitive with previous neural models on metaphoricity while outperforming them on domain evocation (by 13%).", "Traditional metaphor generation models focus only on whether the generated output is in some way metaphoric or not.", "This ignores the semantic and cognitive properties inherent in metaphoricity.", "These models can, to some degree, generate metaphors given a literal input, but these outputs often do not evoke the intended metaphor.", "Controlled metaphor generation yields critical benefits over these uncontrolled systems.", "For sentences in context, having metaphors that are consistent with the text is essential for natural understanding.", "Also, metaphors are not only used to express human knowledge, but can also help shape our understanding of the world: having fine-grained control over the generation process allows us to [Footnote 3: We note that the source and target terminology used here is opposite to that in machine translation.]", "explore novel metaphoric mappings and perhaps improve our understanding of the related domains.", "To achieve controlled metaphor generation, we define our task as follows: given a literal input sentence which evokes a target domain and an intended conceptual mapping, generate a metaphoric sentence such that it evokes a desired source domain.", "Thus, our generation models receive three inputs: 1) a literal input sentence (They argued against the contract), 2) the target domain evoked by the literal input (ARGUMENT) and 3) the desired source domain (WAR) for the metaphorical sentence.", "The output is a metaphorical sentence which evokes the intended mapping (They fought against 
resources is less well explored (Sikos and Pado, 2018; Alhoshan et al., 2019).", "These methods involve tagging raw corpora using automatic FrameNet parsing and then inputting some combination of the original text and the FrameNet information into standard embedding algorithms.", "To train and evaluate frame embeddings, we use 211k sentences of Gold annotations used to train the Open-SESAME parser (Swayamdipta et al., 2017), along with a variety of other automatically tagged datasets: 250k individual sentence from the Gutenberg Poetry Corpus (Jacobs, 2018), 17k from various fiction section of the Brown Corpus (Fran-cis and Kucera, 1979), and 80k sentences randomly selected from Wikipedia.", "From this, we extract a 5-word context window for each verb, creating 1.8M verb instances.", "We then replace the focus verb with its FrameNet frame label (either provided in the Gold data, or tagged via the parser), and train embedding models on the resulting data.", "This yields joint embedding spaces that contain both common words and FrameNet frame embeddings.", "We define two intrinsic metrics to evaluate the quality of our produced embeddings to enable fine-tuning and validation.", "First, following Sikos and Pado (2018), we can evaluate quality based on the words that evoke that Frame.", "FrameNet gives a set of lexical units (LUs) that evoke each frame f .", "We calculate the lexical similarity by taking the distance from the mean embedding of local words ( w f ) to the mean embedding of a random sample k of distant words ( w (cid:54) f ): lex ( f ) = (cid:80) w f cos ( E w ,E f ) | f | k (cid:80) w (cid:54) f cos ( E w ,E f ) k This lexical metric ( lex ) is evaluates whether the frame embedding is similar to words within its frame and dissimilar to those without.", "FrameNet also contains linking relations between frames (eg. 
used-by , uses ), yielding a hierarchy of connected frames.", "Starting with the assumption that frames connected in the structure Figure 2: Lexical generation process should be more similar, we also calculate a structural similarity metric str .", "We follow the same process as above, taking the distance between the mean embedding of the local frames n N , where N is the immediate neighbors of f , to the mean embedding of a sample k of distant frames n / N .", "n N n (cid:54) N We experiment with three lexical embeddings models: word2vec skip-gram (Mikolov et al., 2013), Glove (Pennington et al., 2014), and FastText (Bojanowski et al., 2017).", "We experiment with 50, 100, and 300 dimensional representations; we find the 50 dimensional word2vec embeddings perform best for both evaluation metrics.", "4 3.1.2 Embedding Mappings To apply these embeddings to generate metaphors based on conceptual mappings, we learn mappings between frames and apply the mappings directly to lexical items to facilitate lexical replacement.", "We define a mapping m as the pointwise distance between the target frame embedding and the source frame embedding.", "Following the approach for learning connections between concrete and poetic themes of Gagliano et al. 
(2016), we sum the embedding of the target verb and the mapping m for the selected conceptual mapping, and select the most similar word to the resulting vector.", "This word is then delemmatized using fitbert (Havens and Stal, 2019) and inserted into the original sentence (Figure 2).", "Note that these resulting words are generated without context, as they rely only on the input word and the conceptual mappings.", "This approach has benefits: we require no labeled metaphor data, using only embeddings trained on FrameNet-tagged corpora.", "However, ignoring context is likely detrimental.", "In order to better use contextual information, we explore state-of-the-art sequence-to-sequence modeling.", "4 For full frame embedding evaluation, see Appendix A. Literal (filled from LM) Target Frame Metaphoric (original) Source Frame That tyranny is destroyed DESTRUCTION That tyranny is slain KILLING The house where love had ended CAUSE TO END The house where love had died DEATH As the moments passed on PROCESS END As the moments roll on CAUSE MOTION What I learned my senses fraught COMING TO BELIEVE What I bear my senses fraught BRINGING Table 1: Sample of extracted pairs from the data collection process.", "For sequence-to-sequence learning, we fine-tune a pre-trained BART model (Lewis et al., 2020), adding source and target information to guide generation towards the intended metaphors.", "We first outline a procedure for generating semi-supervised paired data, then detail the training and generation process.", "In order to train sequence-to-sequence models for metaphor generation, we require large scale parallel corpora.", "We follow the approach of Chakrabarty et al. 
(2021) and build a corpus of literal/metaphoric paraphrases by starting with the Gutenberg Poetry corpus (Jacobs, 2018), identifying and masking metaphoric verbs, and replacing them with infilling from a language model.", "We use a BERT-based metaphor classification model trained on the VUA metaphor corpus (Steen et al., 2010) to identify metaphoric verbs in a sentence (i.e. died in The house where love had died).", "Then we convert it to a literal sentence (The house where love had ended) using infillings from pre-trained BERT (Devlin et al., 2019).", "To ensure the literal sentences with replacements convey the same semantic meaning as the metaphorical sentences, they are then filtered using symbolic meaning (SymbolOf relation) obtained from COMET (Bosselut et al., 2019), a GPT-based language model fine-tuned on ConceptNet (Speer et al., 2017).", "COMET returns top 5 symbolic beams of (loss, loneliness, despair, sadness and sorrow) for the sentence The house where love had died whereas it replaces sorrow with life for the literal version.", "While Chakrabarty et al. 
(2021) filter down to only those candidates with an exact match between the top 5 symbolic beams for the literal and metaphorical sentences returned by the COMET model, we ease the restriction to cases where at least four of five symbols are the same.", "In order to learn more direct metaphoric information from this data, we additionally tag each sentence with FrameNet frames using the Open-SESAME parser (Swayamdipta et al., 2017).", "We extract each pair in which both the focus word in the literal, target-domain sentence and the metaphoric, source-domain sentence are assigned a FrameNet frame.", "We then make the assumption that the relation between the frames for the source and target domains reflects a metaphoric mapping.", "This then yields a dataset of paired sentences for which we have a metaphoric mapping between domains based on FrameNet for the focus verbs.", "Samples of the created data are shown in Table", "1. In total this process yields 248k sentences spanning 8.5k unique mappings between FrameNet frames.", "Each pair comprises a literal and metaphoric sentence, along with the literal target frame and the metaphoric source frame.", "From these we can directly train a sequence to sequence model for conceptual metaphor-based generation.", "We fine-tune BART (Lewis et al., 2020), a pre-trained conditional language model that combines bidirectional and auto-regressive transformers, on the created parallel corpora described in Section 3.2.1.", "We incorporate representations of the frame information to allow this model to control for the metaphoric mappings evoked.", "To transform a literal sentence from a given target domain to a metaphorical sentence evoking a specific source domain, we incorporate both target and source domains (as FrameNet frames) into the textual representation as a control code, following the work of Schiller et al. 
(2020) who used this procedure for Argument Generation.", "Following the example from Figure 1, the input literal text fed to the BART encoder would be: DEATH ⟨EOT⟩ The party ⟨V⟩ ended : CAUSE TO END ⟨V⟩ as soon as she left.", "where ⟨EOT⟩ and ⟨V⟩ are delimiters, DEATH is the source frame, and CAUSE TO END the target frame.", "The decoding target is the metaphoric text The party died as soon as she left, which evokes the CAUSE TO END IS DEATH mapping.", "Note that our training data differs only at the level of a single verb.", "We use the generative BART seq2seq model to generate metaphoric paraphrases, but due to the nature of the training data and the importance of verbs in metaphoric expressions, this is often realized in the output as lexical replacement.", "Post fine-tuning, we use top-k (k=5) sampling (Fan et al., 2018) to generate metaphors conditioned on the input literal sentence and source and target domains for the required metaphoric mapping.", "We evaluate the lexical model (CM-Lex) and the sequence-to-sequence model (CM-BART) under two experimental settings.", "We evaluate our metaphor generation methods against two previous approaches to metaphoric paraphrase generation: the MERMAID system (Chakrabarty et al., 2021) and the metaphor masking model (MetMask) (Stowe et al., 2020).", "We explore two tasks: generating against gold standard metaphoric expressions, and using rare and unseen metaphoric mappings.", "For the former, we build a gold test set of metaphoric paraphrases that evoke a particular source/target mapping.", "For the latter, we apply a variety of source/target mappings to literal inputs for which we do not have gold outputs.", "For a test set, we use the same procedure as our data collection approach from Section 3.2.1.", "We apply this procedure to two datasets: a sample of the Gutenberg Poetry Corpus and a sample of fiction from the Brown Corpus (Francis and Kucera, 1979).", "This generates an initial set of literal/metaphoric pairs.", "We also tagged the pairs from Mohammad et al. (2016) with FrameNet tags, as these generally contain novel, well-formed metaphors.", "These three datasets each have different properties with regard to metaphor.", "The Gutenberg Poetry corpus has consistent, novel metaphors, but often unconventional syntactic constructions, due to the poetic nature of the text.", "The Mohammad 2016 corpus contains manually constructed metaphors which are novel, following relatively basic syntactic patterns.", "The Brown Corpus is standard fiction texts, so the metaphors within tend to be very conventional.", "From these sources, we draw pairs randomly, checking that they reflect strong literal/metaphoric paraphrases until we obtain 50 instances from each set.", "Each pair is tagged with FrameNet frames for the focus verbs, which comprise the metaphoric mapping.", "[Footnote 5: Full parameter tuning outlined in Appendix C.] [Footnote 6: For the Brown corpus, metaphoric expressions were relatively rare, and thus valid pairings were sparse: to overcome this, we manually modified 11 of the expressions to evoke the appropriate metaphoric mappings.]", "In total this process yielded 150 literal/metaphoric pairs, along with the source and target frames that they evoke.", "We use this dataset to evaluate generating metaphors based on mappings with gold standard outputs, using both automatic and human-based evaluations.", "To explore the flexibility of the system developed in this study, we also evaluate them for generation of metaphoric expressions that are not directly linked to gold literal/metaphoric pairs.", "For this, we begin with our 150 pairs from above, but consider only the literal sentence and the evoked target domain.", "For each sentence, we generate two source domains that could potentially map to the target.", "These are selected in order to identify rare and unseen mappings based on the observed mappings in our training data.", "For rare mappings we select a source domain at random from the mappings with the median frequency for a given target domain.", "For unseen mappings we select a source domain at random from the FrameNet frames that are never used as a source for the given target domain.", "This set contains only the tuple (input sentence, target domain, source domain) needed as input to our models; we do not have gold generated metaphorical utterances.", "Thus, on this set we will only perform human-based evaluation of the quality of the generated metaphors.", "Word overlap metrics (e.g. BLEU, ROUGE) are inherently weak for this task, as these sentences inherently have high overlaps.", "So instead, we employ semantic distance metrics.", "We generate sentence embeddings using SBERT (Reimers and Gurevych, 2019) for each of our components: the literal input L, the original gold metaphoric expression M, and the generated output G.", "The generated metaphoric expressions should match the semantics of the original gold metaphor.", "We can evaluate this using the cosine distance, here between M and G.", "As SBERT embeddings have been shown to reflect semantic similarity and entailment between paired sentences, this metric should be capable of capturing whether the generated metaphoric expression matches the gold.", "Assuming that conceptual metaphoric mappings are responsible for connecting meaning between our literal and metaphoric sentences, we would also expect there to be a relation that holds between the original literal input L and metaphoric output M.", "This relation should also hold between L and the generated metaphor G.", "As a simple metric we can employ cosine distance: we aim to minimize the difference between cos(L, M) and cos(L, G).", "Results for automatic evaluation on the 150 gold metaphors are shown in Table", "2. 
Note that we cannot automatically evaluate against rare or unseen metaphoric mappings, as we lack gold metaphors.", "The CM-Lex model is competitive with the best neural baseline, which is encouraging.", "This shows that simply incorporating basic understanding of conceptual mappings can be a powerful tool for metaphor generation.", "The CM-BART model yields the best automatic performance over all metrics, significantly outperforming all other models (p < .01, paired t-test).", "Automatic metrics allow us to quickly prototype metaphoric generation systems based in conceptual metaphor theory.", "However, they rely on SBERT and inherit the biases and weaknesses therein.", "We also perform human evaluations, against both the gold test data and the set of rare and unseen mappings.", "For human evaluation, we defined two objectives.", "First, we aim to capture the metaphoricity of the output, as a core objective.", "The outputs should evoke novel, interesting metaphors regardless of the domains involved.", "Second, we want the generated metaphoric outputs to evoke the source domains (e.g. She destroyed his argument evokes the source domain of WAR).", "We recruited three domain experts in metaphoricity.", "They were instructed to rate each instance on a scale from 1 (not at all) to 4 (very) for metaphoricity and for whether it evokes the source domain.", "If the sentence was completely unintelligible, they were instructed to mark it as 0 for both categories.", "For metaphoricity, annotators were given brief definitions of metaphoricity which they incorporated into their expert knowledge to best rate metaphors.", "For source domain evocation, they were additionally provided with links to the respective FrameNet frames.", "We evaluate three different models for the gold metaphors: the best performing previous model, MERMAID, as well as the lexical and CM-BART models.", "For all models we evaluate generation using the mappings for our gold test set.", "For the unknown metaphors without gold sentences, we only evaluate our two controlled models, as the generic baselines give the same output regardless of the intended source.", "This yields a total of 450 sentences (150 gold, 300 without) that are evaluated for metaphoricity and source domain.", "All three experts annotated a random set of 100 training sentences, in order to determine the feasibility and agreement for this task.", "Agreement rates were .50 for metaphoricity and .37 for source domain (Krippendorff's α).", "[5.1.1 Gold Test Mappings] Results for human evaluations of gold, rare, and unseen metaphoric mappings are shown in Table", "3. With regard to the gold mappings, the CM-BART model performs best in metaphoricity and source [Footnote 8: Full annotation analysis can be found in Appendix B.] [Table 4: Example outputs of each system along with the mean of their human evaluations (Met = metaphoricity, Src = source domain). (1) Input: He resisted the panic of vertigo; SELF CONTROL IS QUARRELING; Gold: He fought the panic of vertigo; MetMask: He got the panic of vertigo (Met 3, Src 1); MERMAID: He felt the panic of vertigo (1, 2); CM-Lex: He confrontations the panic of vertigo (0, 0); CM-BART: He disputed the panic of vertigo (3, 4). (2) Input: A dim aurora rises in my east; CHANGE POSITION ON A SCALE IS RESIDENCE; Gold: A dim aurora lives in my east; MetMask: A dim aurora kicked in my east (3, 1); MERMAID: A dim aurora hangs in my east (4, 2); CM-Lex: A dim aurora stands in my east (3, 3); CM-BART: A dim aurora lives in my east (3, 4). (3) Input: People were running out of the theater; SELF MOTION IS FLUIDIC MOTION; Gold: People were streaming out of the theater; MetMask: People were clogged out of the theater (4, 1); MERMAID: People were running out of the theater (1, 4); CM-Lex: People were boiling out of the theater (4, 4); CM-BART: People were spilled out of the theater (4, 3).]", "domain evocation.", "CM-Lex has middling performance for metaphoricity, but does well at generating correct source domains.", "The MERMAID system performs well in terms of metaphor generation, but fails to capture the intended source domain.", "Examples of each model's generation are shown in Table", "4. 
In 1, we see that CM-Lex generates noise, making the results unintelligible.", "CM-BART is more robust, generating fluent expressions, and shows evidence of conceptual mapping control, generating a metaphoric expression matching the source domain.", "In 2, the MetMask and MERMAID models generate reasonable metaphors, which do not evoke the intended domain.", "CM-Lex is better, generating stand which can reflect RESIDENCE , while the CM-BART performs best, generating the gold metaphoric expression.", "In 3, we see that the unconstrained models generate effective expressions: clog is an evocative metaphor, and running, while literal, can match the intended domain via the idea of running water.", "However, our controlled methods both generate novel metaphors that directly evoke the source domain, showing the effectiveness of incorporating conceptual information in generation.", "Overall, we see that the unconstrained models often generate good metaphors, but lack consistency with the input, as they are naive with regard to the conceptual backing of these metaphoric expressions.", "CM-Lex is effective to some degree, even without metaphoric training data, and CM-BART performs best, generating novel metaphors that frequently match the intended metaphoric expression.", "CM-BART outperforms CM-Lex for metaphoricity and source domain evocation for rare and unseen source domains.", "Examples of the two proposed models' generated for rare and unseen metaphoric mappings are shown in Table", "5. 
Example 1 shows the ideal case.", "When given a source domain from a rare mapping, the resulting metaphor is fairly reasonable.", "CM-BART generates a metaphor consistent with the original semantics; CM-Lex generates the literal utterance.", "When presented with an unseen mapping in which operating a vehicle is framed as death, we get diverse expressions, both adding meaning to the original utterance.", "CM-Lex uses the verb fell (albeit incorrectly conjugated), which can be used to abstractly evoke the death domain, while CM-BART directly uses the verb die.", "The original expression can be ambiguous as to whether the car stopped: the evoked metaphor enforces the stoppage of the car, and also provides color to the expression.", "Example 3 highlights a key issue: when the source and target domains are too incongruent, the generated expressions can be inconsistent.", "CM-Lex here again generates noise.", "However, CM-BART generates normal, expressive metaphors, which are nonetheless incompatible with the original literal input, which denotes the lessening of darkness.", "Rather, CM-BART generates a metaphor expressing perhaps growing darkness with the verb try and a dangerous darkness with the verb bite.", "This is a critical point with regard to conceptual mappings.", "Not all pairs are available: they require semantic consistency, and while generating from any two pairs may yield insightful, interesting, and perhaps inspiring new metaphoric expressions, generating metaphoric paraphrases requires additional knowledge of which source/target pairings are compatible.", "This generally supports the notion of invariance and structure mapping, in which there is inherent structure within domains that needs to be consistent in order to evoke metaphoric mappings between them (Gentner, 1983; Lakoff, 1993).", "It must be noted that the systems proposed here have a distinct advantage in this task: we add FrameNet frames, which, while neither perfect nor designed to capture 
metaphoricity, provide a strong signal for which domains to generate in.", "This highlights a possible benefit of the interaction between deep, pre-trained models such as BART and available lexical resources: by combining these, we are able to leverage the strength of each to build a powerful metaphor generation system.", "We broadly cover two areas of related work: previous computational approaches to CMT, and previous approaches to metaphor generation.", "Computational Approaches to CMT.", "There are a variety of approaches to identifying conceptual metaphors themselves.", "The CorMet system (Mason, 2004) was built to extract conceptual metaphors based on selectional preferences of verbs.", "Shaikh et al. (2014a) build conceptual spaces for source domains, using rule-based extraction of relations between lexical items.", "These conceptual spaces are then used to find new conceptual metaphors.", "This process is extended to build a repository of linguistic and conceptual metaphors (Shaikh et al., 2014b).", "Mohler et al. 
(2014) focus on identifying appropriate source domains for metaphoric expressions, using vector-based approaches for metaphor interpretation.", "The idea of using frames to represent metaphoric domains has been explored in the MetaNet project (Dodge et al., 2015).", "We, however, restrict our work to FrameNet due to the coverage and availability of reliable automatic parsing.", "Metaphor Generation.", "Early work in metaphor generation was based on heuristics, learning to generate relatively simple A is like B representations (Abe et al., 2006; Terai and Nakagawa, 2010).", "In a similar vein, Veale (2016) uses template-like structures to generate creative and metaphoric tweets.", "Other works focus on identifying metaphoric mappings using WordNet clustering and selectional preferences (Mason, 2004; Gandy et al., 2013), syntactic relations to build proposition databases (Ovchinnikova et al., 2014), and embedding-based approaches to identify poetic relationships (Gagliano et al., 2016).", "However, the goal of these works is to generate mappings, rather than linguistic expressions that evoke them.", "Amongst deep learning approaches, Yu and Wan (2019) identify literal and metaphoric words in corpora based on selectional restrictions, and use these to train sequence-to-sequence models for metaphor generation, albeit without reference to any input expression.", "Stowe et al. 
(2020) generate metaphors using masked language modeling, masking metaphoric tokens in training in order to encourage metaphoric generation.", "Other approaches use novel methods for collecting literal/metaphor pairs, training sequence-to-sequence models for simile generation and metaphoric paraphrasing (Chakrabarty et al., 2020, 2021).", "These approaches effectively generate figurative language, but the models have no knowledge of the underlying metaphors, and thus simply generate ungrounded expressions.", "This leads to outputs which are possibly metaphoric, but contain no connection to the input, eschewing the critical connections that make novel metaphors powerful.", "We instead propose methods for generating metaphoric paraphrases grounded in CMT.", "In summary, we have shown two methods for incorporating knowledge of conceptual metaphor theory in metaphor generation.", "We trained FrameNet frame embeddings to represent conceptual domains, and applied shifts between them to generate metaphors in an unsupervised fashion.", "Leveraging FrameNet further, we build a dataset of semi-supervised pairs that evoke conceptual metaphors, which can be used along with BART for controlled metaphor generation.", "This model achieves state-of-the-art performance in metaphor generation by both automatic and human evaluations.", "Future work can expand these models to go beyond verbs, incorporating nominal and other types of metaphors.", "The next necessary step is to go beyond lexicalized metaphors: good, consistent conceptual metaphors often span long stretches of text, and we need to design models that can learn and generate metaphors over larger texts.", "Although we use language models trained on data collected from the Web, which have been shown to have issues with bias and abusive language (Sheng et al., 2019; Wallace et al., 2019), the inductive bias of our models should limit inadvertent negative impacts.", "Unlike model variants such as GPT, BART is a conditional 
language model, which provides more control of the generated output.", "It should also be noted that our CM-BART model is fine-tuned on the poetry corpus, which is devoid of harmful and toxic text especially targeted at marginalized communities. Advances in generative AI inherently come with concerns about models' ability to deceive, persuade, and misinform.", "Metaphorical language has been shown to express and elicit stronger emotion than literal language (Citron and Goldberg, 2014; Mohammad et al., 2016) and to provoke emotional responses in the context of political discourse covered by mainstream newspapers (Figar, 2014).", "We understand there may be concerns about building generative models for metaphors aimed at persuasion.", "Social scientists distinguish persuasion from manipulation based on two aspects: dissimulation and constraint (Nettel and Roque, 2012).", "Dissimulation involves concealing intention, which requires hiding information, whereas constraint involves removing options from the audience and forcing them to accept the conclusion.", "Our work on metaphor generation does not aim to hide information about a topic or present it as the only choice, but aims to provide the same sentence using more expressive language." ]
[ "abstain", "objective", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "objective", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", 
"abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "result", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Language identification for code-switching (CS), the phenomenon of alternating between two or more languages in conversations, has traditionally been approached under the as-sumption of a single language per token.", "However, if at least one language is morphologically rich, a large number of words can be composed of morphemes from more than one language (intra-word CS).", "In this paper, we extend the language identification task to the subword level, such that it includes splitting mixed words while tagging each part with a language ID.", "We further propose a model for this task, which is based on a segmental recurrent neural network.", "In experiments on a new SpanishWixarika dataset and on an adapted GermanTurkish dataset, our proposed model performs slightly better than or roughly on par with our best baseline, respectively.", "Considering only mixed words, however, it strongly outperforms all baselines.", "In settings where multilingual speakers share more than one language, mixing two or more languages within a single piece of text, for example a tweet, is getting increasingly common (Grosjean, 2010).", "This constitutes a challenge for natural language processing (NLP) systems, since they are commonly designed to handle one language at a time.", "Code-switching (CS) can be found in multiple non-exclusive variants.", "For instance, sentences in different languages can be mixed within one text, or words from different languages can be combined into sentences.", "CS can also occur on the subword level, when speakers combine morphemes from different languages ( intra-word CS ).", "This last phenomenon can mostly be found if at least one of the languages is morphologically rich.", "An example for intra-word CS between the", "Ro-(a) ne'iwa pecansadoxi WIX MIXED my.brother you-are.tired.PPFV", "mance language Spanish and the Yuto-Aztecan language Wixarika 1 is shown in Figure 1. 
CS language identification (LID), i.e., predicting the language of each token in a text, has attracted a lot of attention in recent years (cf.", "Solorio et al. (2014); Molina et al. (2016)).", "However, intra-word mixing is mostly not handled explicitly: words with morphemes from more than one language are simply tagged with a mixed label.", "While this works reasonably well for previously studied language pairs, overlooking intra-word CS leads to a major loss of information for highly polysynthetic languages.", "A mixed word is unknown for NLP systems, yet a single word contains much more information, cf.", "Figure 1", "(b).", "Furthermore, we find intra-word CS to be much more frequent for Spanish-Wixarika than for previously studied language pairs, such that it is crucial to handle it.", "Motivated by these considerations, we extend the LID task to the subword level (from", "(a) to", "(b) in Figure 1).", "We introduce a new CS dataset for Spanish-Wixarika (ES-WIX) and modify an existing German-Turkish (DE-TR) CS corpus (Çetinoğlu, 2016) for our purposes.", "(Footnote 1: Wixarika, also known as Huichol, is a polysynthetic Mexican indigenous language.)", "We then introduce a segmental recurrent neural network (SegRNN) model for the task, which we compare against several strong baselines.", "Our experiments show clear advantages of SegRNNs over all baselines for intra-word CS.", "The task of LID for CS has been frequently studied in recent years (Al-Badrashiny and Diab, 2016; Rijhwani et al., 2017; Zhang et al., 2018), including two shared tasks on the topic (Solorio et al., 2014; Molina et al., 2016).", "The best systems (Samih et al., 2016; Shirvani et al., 2016) achieved over 90% accuracy for all language pairs.", "However, intra-word CS was not handled explicitly, and often systems even failed to correctly assign the mixed label.", "For Nepali-English, Barman et al. 
(2014) correctly identified some of the mixed words with a combination of linear kernel support vector machines and a k-nearest neighbour approach.", "The most similar work to ours is Nguyen and Cornips (2016), which focused on detecting intra-word CS for Dutch-Limburgish (Nguyen et al., 2015).", "The authors utilized Morfessor (Creutz and Lagus, 2002) to segment all words into morphemes and Wikipedia to assign LID probabilities to each morpheme.", "However, their task definition and evaluation are on the word level.", "Furthermore, as this method relies on large monolingual resources, it is not applicable to low-resource languages like Wixarika, which does not even have its own Wikipedia edition.", "Subword-level LID consists of both segmentation and tagging of words.", "An earlier approach to handle a similar scenario was the connectionist temporal classification (CTC) model developed by Graves et al. (2006).", "The disadvantage of this model was the lack of prediction of the segmentation boundaries that are necessary for our task.", "Kong et al. (2016) later proposed the SegRNN model that segments and labels jointly, with successful applications on automatic glossing of polysynthetic languages (Micher, 2017, 2018).", "Segmentation of words into morphemes alone has a long history in NLP (Harris, 1951), including semi- or unsupervised methods (Goldsmith, 2001; Creutz and Lagus, 2002; Hammarström and Borin, 2011; Grönroos et al., 2014), as well as supervised ones (Zhang and Clark, 2008; Ruokolainen et al., 2013; Cotterell et al., 2015; Kann et al., 2018).", "Formally, the task of subword-level LID consists of producing two sequences, given an input sequence of tokens X = ⟨x_1, ..., x_i, ..., x_{|X|}⟩.", "The first sequence contains all words and splits, X^s = ⟨x^s_1, ..., x^s_i, ..., x^s_{|X|}⟩, where each x^s_i is an m-tuple of variable length 0 < m ≤ |x_i|, with |x_i| the number of characters in x_i.", "The second sequence is T^s = ⟨t^s_1, ..., t^s_i, ..., t^s_{|X|}⟩, where |T^s| = |X^s| = |X| and each t^s_i ∈ T^s is an n-tuple of tags from a given set of LID tags.", "An input-output example for a DE-TR mixed phrase is shown in Figure 2. Input: ⟨Yerim, seni, ,, danke, Schatzym⟩; Output: ⟨(Yerim), (seni), (,), (danke), (Schatzy, m)⟩ with tags ⟨(TR), (TR), (OTHER), (DE), (DE, TR)⟩. Figure 2: Subword-level LID in German-Turkish.", "German-Turkish. The German-Turkish Twitter Corpus (Çetinoğlu and Çöltekin, 2016) consists of 1029 tweets with 17K tokens.", "They are manually normalized, tokenized, and annotated with language IDs.", "The language ID tag set consists of TR (Turkish), DE (German), LANG3 (other language), MIXED (intra-word CS), AMBIG (ambiguous language ID in context), and OTHER (punctuation, numbers, emoticons, symbols, etc.).", "Named entities are tagged with a combination of NE and their language ID: NE.TR, NE.DE, NE.LANG3.", "In the original corpus, some Turkish and mixed words undergo a morphosyntactic split (footnote 2), with splitting points not usually corresponding to language boundaries.", "For the purpose of subword-level LID, these morphosyntactic splits are merged back into single words.", "We then manually segment MIXED words at language boundaries, and replace their labels with more fine-grained language ID tags.", "The total percentage of mixed words is 2.75%.", "However, the percentage of sentences with mixed words is 15.66%.", "The complete dataset statistics can be found in Table 1. 
Spanish-Wixarika. Our second dataset consists of 985 sentences and 8K tokens in Spanish and Wixarika.", "Wixarika is spoken by approximately 50,000 people in the Mexican states of Durango, Jalisco, Nayarit and Zacatecas (Leza and Lopez, 2006) and is polysynthetic, with most morphemes occurring in verbs.", "(Footnote 2: e.g., separating copular suffixes from roots they are attached to; cf.", "Çetinoğlu and Çöltekin (2016) for details.)", "The data is collected from public postings and comments from Facebook accounts.", "To ensure the public characteristic of these posts, we manually collect data that is accessible publicly without being logged in to Facebook, to comply with the terms of use and privacy of the users.", "These posts and comments are taken from 34 users: 14 women, 10 men, and the rest do not publicly reveal their gender.", "None of them have publicly mentioned their age.", "To get a dataset that focuses on the LID task, we only consider threads where the CS phenomenon appears.", "We replace usernames with @username in order to preserve privacy.", "Afterwards, we tokenize the text, segment mixed words, and add language IDs to words and segments.", "The tag set is parallel to that of German-Turkish: ES (Spanish), WIX (Wixarika), EN (English), AMBIG (ambiguous), OTHER (punctuation, numbers, emoticons, etc.), and NE.ES, NE.WIX and NE.EN (named entities).", "Mixed words are segmented and each segment is labeled with its corresponding language (ES, WIX, EN).", "Table 2 shows a detailed description of the dataset.", "The percentage of mixed words is higher than in the DE-TR dataset: 3.13% of the tokens and 4.26% of the types.", "The most common combination is Spanish roots with Wixarika affixes.", "Furthermore, 16.55% of the sentences contain mixed words.", "We split the DE-TR corpus and the ES-WIX corpus into training and test sets of sizes 800:229 and 770:216, respectively.", "Table 2 (number of tokens classified by language tags seen in the Spanish-Wixarika dataset; columns: all tokens, all %, unique, unique %): ES 4218, 50.73, 1527, 45.76; WIX 2019, 24.28, 1191, 35.69; EN 24, 0.29, 21, 0.63; AMBIG 28, 0.34, 25, 0.75; OTHER 1664, 20.01, 288, 8.63; NE.ES 96, 1.15, 85, 2.55; NE.WIX 77, 0.93, 49, 1.47; NE.EN 11, 0.13, 9, 0.27; MIXED 177, 2.13, 142, 4.26; of the mixed words: ES-WIX 35, 19.77, 31, 21.83; WIX-ES 122, 68.93, 93, 65.49; WIX-ES-WIX 17, 9.60, 31, 10.56; WIX-EN 1, 0.07, 1, 0.07; EN-ES 1, 0.07, 1, 0.07.", "Error analysis and hyperparameter tuning are done on the training set via 5-fold cross-validation.", "We present results on the test sets.", "Both datasets are available at https://www.ims.uni-stuttgart.de/institut/mitarbeiter/ozlem/NAACL2019.html", "4 Experiments. Our main system is a neural architecture that jointly solves the segmentation and language identification tasks.", "We compare it to multiple pipeline systems and another joint system.", "We suggest a SegRNN (Kong et al., 2016) would be the best fit for our task because it models a joint probability distribution over possible segmentations of the input and labels for each segment.", "The model is trained to optimize the following objective, which corresponds to the joint log-likelihood of the segment lengths e and the language tags t: L(θ) = Σ_{(x,t,e) ∈ D} log p(t, e | x) (1), where D denotes the training data, θ is the set of model parameters, x the input, t the tag sequence, and e the sequence of segment lengths.", "Our inputs are single words.", "(Footnote 3: We also experimented with entire phrases as inputs, and the achieved scores were slightly worse than for word-based inputs.)", "As hyperparameters we use: 1 RNN layer, a 64-dimensional input layer, 32 dimensions for tags, 16 for segments, and 4 for lengths.", "For training, we use Adam (Kingma and Ba, 2014).", "BiLSTM+Seq2Seq / BiLSTM+CRF. Our first baselines are pipelines.", "First, the input text is tagged with language IDs.", "Language IDs of a mixed word are directly predicted as a combination of all language ID tags of the word (i.e., WIX-ES).", "Second, a subword-level model segments 
words with composed language ID tags.", "For word-level tagging, we use a hierarchical bidirectional LSTM (BiLSTM) that incorporates both token- and character-level information (Plank et al., 2016), similar to the winning system (Samih et al., 2016) of the Second Code-Switching Shared Task (Molina et al., 2016).", "For the subword level, we use two supervised segmentation methods: a CRF segmenter proposed by Ruokolainen et al. (2013) that models segmentation as a labeling problem, and a sequence-to-sequence (Seq2Seq) model trained with an auxiliary task as proposed by Kann et al. (2018).", "CRFTag+Seq2Seq / CRFTag+CRF. Since our datasets might be small for training neural networks, we substitute the BiLSTM with a CRF tagger (Müller et al., 2013, CRFTag) in the first step.", "For segmentation, we use the same two approaches as for the previous baselines.", "CharBiLSTM. We further employ a BiLSTM to tag each character with a language ID.", "For training, each character inherits the language ID of the word or segment it belongs to.", "At prediction time, if the characters of a word have different language IDs, the word is split.", "(Footnote 4: For all BiLSTM models, the input dimension is 100 with a hidden layer size of 100.", "For training we use stochastic gradient descent (Bottou, 2010), 30 epochs, with a learning rate of 0.1.", "A 0.25 dropout factor is applied.)", "We use two metrics for evaluation.", "First, we follow Kong et al. (2016) and calculate precision (P), recall (R), and F1, using segments as units (an unsegmented word corresponds to one segment).", "We also report a tagging accuracy (Char Acc.) 
by assigning a language ID to each character and calculating the ratio of correct language tags over all characters.", "Table 4 shows all test results for the entire datasets.", "We find the following:", "(i) For ES-WIX, SegRNN performs slightly better for tagging than the best baseline, both in terms of F1 and character accuracy.", "For DE-TR, SegRNN and BiLSTM+CRF are the best segmentation models, but the BiLSTM models slightly outperform SegRNN for tagging.", "(ii) The CRF pipelines perform slightly worse than the best word-level BiLSTM models for both datasets and all evaluations.", "Table 3 shows the results of tagging and segmentation only for the mixed words in our datasets.", "Here, we can see that:", "(i) Our SegRNN model achieves the best performance for segmentation.", "Differences to the other approaches are ≥ 10%, showing clearly why these models are good for the task when the number of words belonging to two languages is high.", "(ii) The pipeline BiLSTM models work best for tagging of the DE-TR data with a slight margin, but underperform TRDEDETRNE .", "(iii) Both CRFTag models achieve very low results for both segmentation and tagging.", "(iv) CharBiLSTM performs better than the CRFTag models on both tasks, but is worse than all other approaches in our experiments.", "More generally, we further observe that recall on mixed words for the DE-TR pair is low for all systems, as compared to ES-WIX.", "This effect is especially strong for the CRFTag and CharBiLSTM models, which seem to be unable to correctly identify mixed words.", "While this tendency can also be seen for the ES-WIX pair, it is less extreme.", "We suggest that the better segmentation and tagging of mixed words for ES-WIX might mostly be due to the higher percentage of available examples of mixed words in the training set for ES-WIX.", "Overall, we conclude that SegRNN models seem to work better on language pairs that have more intra-word CS, while pipeline approaches might be as good for 
language pairs where the number of mixed words is lower.", "Error analysis.", "Figure 3 shows confusion matrices for SegRNN and BiLSTM+Seq2Seq.", "Both models achieve good results assigning monolingual tags (ES, WIX, DE, TR) and punctuation symbols (OTHER).", "The hardest labels to classify are named entities (NE, NE.TR, NE.DE, NE.WIX, NE.ES), as well as third-language and ambiguous tags (LANG3, EN, AMBIG).", "Performance on multilingual tags (DE-TR, WIX-ES, ES-WIX, WIX-ES-WIX) is mixed.", "For DE-TR, BiLSTM+Seq2Seq gets slightly better classifications, but for the ES-WIX tags SegRNN achieves better results.", "Regarding oversegmentation problems, BiLSTM+Seq2Seq (0.8% for DE-TR and 2.0% for ES-WIX) slightly underperforms SegRNN (0.7% for DE-TR and 1.13% for ES-WIX).", "The BiLSTM+Seq2Seq (2.4%) makes fewer undersegmentation errors for DE-TR than SegRNN (2.7%).", "However, for ES-WIX, SegRNN performs better, with 3.81% undersegmentation errors compared to 4.2% for BiLSTM+Seq2Seq.", "In this paper, we extended the LID task to the subword level, which is particularly important for code-switched text in morphologically rich languages.", "We further proposed a SegRNN model for the task and compared it to several strong baselines.", "Investigating the behaviour of all systems, we found that pipelines including a BiLSTM tagger work well for tagging DE-TR, where the number of mixed tokens is not that high, but that our proposed SegRNN approach performs better than all other systems for ES-WIX.", "Also, SegRNNs have clear advantages over all baselines if we consider mixed words only.", "Our subword-level LID datasets for ES-WIX and DE-TR are publicly available.", "We would like to thank Mohamed Balabel, Sam Bowman, Agnieszka Falenska, Ilya Kulikov and Phu Mon Htut for their valuable feedback.", "We also want to thank Jeffrey Micher for his help with the setup of the SegRNN code.", "This project has benefited from financial support to Manuel Mager and Özlem 
Çetinoğlu by DFG via project CE 326/1-1 Computational Structural Analysis of German-Turkish Code-Switching, to Manuel Mager by a DAAD Doctoral Research Grant, and to Katharina Kann by Samsung Research." ]
[ "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "other", "other", "other" ]
[ "We report on adaptation of multilingual end-to-end speech recognition models trained on as many as 100 languages.", "Our findings shed light on the relative importance of similarity between the target and pretraining languages along the dimensions of phonetics, phonology, language family, geographical location, and orthography.", "In this context, experiments demonstrate the effectiveness of two additional pretraining objectives in encouraging language-independent encoder representations: a context-independent phoneme objective paired with a language-adversarial classification objective.", "The main difficulty in creating automatic speech recognition (ASR) systems for a large number of the world's 7,000 languages is a lack of training data.", "Such data comes in the form of speech paired with transcriptions, a pronunciation lexicon, and text for language model training.", "A common technique in data-constrained settings is to learn language-independent representations of speech via multilingual training.", "Popular approaches include the use of multilingual bottleneck features (Vesely et al., 2012) as well as multilingual model training before fine-tuning to a target language (Scanzio et al., 2008; Vu et al., 2012).", "Prior work in multilingual and cross-lingual speech recognition has been restricted to a small handful of the world's most-spoken languages, relying on multilingual corpora such as Global-Phone (Schultz, 2002), the IARPA Babel corpora (Gales et al., 2014), or the VoxForge 1 corpora.", "Most work typically only reports on models trained on a subset of these languages.", "In this paper we explore pretraining multilingual ASR models using speech from as many as 1 voxforge.org 100 languages from the CMU Wilderness Multilingual Speech Dataset (Black, 2019).", "2 To the best of our knowledge, this is the greatest number of languages that has been used in multilingual ASR model training to date.", "We perform experiments to guide the choice of languages 
used when pretraining the model and assess the relative importance of similarity between the pretraining languages and the target language in terms of geographic location, phonology, phonetic inventory, language family, and orthography.", "We examine these variables in the context of two experimental setups: one where models are adapted to the target language and target speakers, and one where models are adapted to the target language but non-target speakers.", "The first task is relevant to language documentation contexts, which often involve transcribing speech of specific speakers for which there already exists some transcribed speech as training data (Michaud et al., 2018).", "The second case is relevant to incident response as modelled by LORELEI (Strassel and Tracey, 2016), where there may only be a single target-language consultant available from whom transcribed speech can be elicited, but the goal is to have an ASR model that generalizes to multiple speakers.", "Multilingual ASR training on such a scale presents challenges because of this language diversity.", "In order to guide the model to learn language-independent representations that are more amenable to adaptation, we experiment with two auxiliary training tasks.", "The first is context-independent phoneme sequence prediction to help bridge orthographic inconsistencies between languages.", "The second is a domain-adversarial classification objective (Ganin et al., 2016) over languages to encourage invariance of the model with respect to language-specific phenomena. (Footnote 2: festvox.org/cmu_wilderness/index.html)", "The hierarchical combination of grapheme and phoneme objectives has only been used in monolingual end-to-end frameworks (Krishna et al., 2018; Rao and Sak, 2017).", "Language-adversarial training in ASR (Yi et al., 2018) has not been done at this scale before, nor in an end-to-end framework.", "Our experiments are designed to answer the following questions:", "1. 
Is there benefit in scaling multilingual model training to a large number of languages?", "2. In what circumstances, if any, does the addition of a phoneme and/or language-adversarial objective improve multilingual models?", "3. How should we choose languages with which to pretrain a multilingual model?", "4. Do the answers to the above questions change when adapting to target versus non-target speakers in the target language?", "We find that using the auxiliary objectives in pretraining facilitates model transfer to unseen languages, especially when the pretraining languages are very dissimilar (Section 6).", "When the target speakers are seen in adaptation (Section 7), similarity of the pretraining languages and the target language is more important than the quantity of pretraining languages.", "Choosing geographically proximal pretraining languages tends to help more than choosing phonetically and phonologically similar but otherwise distant languages.", "However, when adapting to a handful of non-target speakers of the target language (Section 8), the domain mismatch caused by the unseen speaker, language, or recording environment degrades performance.", "Exposing the model to as many pretraining languages as possible becomes vital to minimize this mismatch.", "Results on this task demonstrate that a massively multilingual seed model substantially outperforms other seed models trained on languages similar to the target.", "We have provided an ESPnet recipe to train and test our models.", "This paper builds on work on multilingual ASR, end-to-end ASR, and adversarial learning.", "Multilingual transfer in ASR often relies on using bottleneck features (Veselý et al., 2012; Vu et al., 2012; Karafiát et al., 2018) and adapting an acoustic model trained on one language to effectively recognize the sounds of other languages (Schultz and Waibel, 2001; Le and Besacier, 2005; Stolcke et al., 2006; Tóth et al., 2008; Plahl et al., 2011; Thomas et al., 
2012; Imseng et al., 2014; Do et al., 2014; Heigold et al., 2013; Scharenborg et al., 2017).", "However, while most work uses fewer than 10 languages for model training, we include up to 100 languages in training.", "End-to-end ASR has recently become popular, with approaches such as attention-based encoder-decoder models (Chorowski et al., 2015; Chan et al., 2015), the connectionist temporal classification (CTC) objective of Graves et al. (2006, 2013), or a combination of both (Kim et al., 2016; Hori et al., 2017).", "These approaches have also been deployed in multilingual settings (Toshniwal et al., 2017; Chiu et al., 2018; Müller et al., 2017; Dalmia et al., 2018; Watanabe et al., 2017a).", "Our baseline approach to multilingual knowledge transfer is most similar to Inaguma et al. (2018), and involves training a hybrid CTC-attention seed model.", "Hierarchical and multi-task approaches, including combining grapheme and phoneme prediction at different levels of the network (Rao and Sak, 2017; Krishna et al., 2018) or using sub-word units of varying granularity (Sanabria and Metze, 2018), have been shown to improve ASR performance.", "In this paper we extend the approach of hierarchical placement of additional objectives in order to enforce language-independent, transferable models.", "Domain-adversarial training is one such method for encouraging the model to learn language-independent representations.", "A key contribution of this paper is the use of a domain-adversarial classification objective (Ganin et al., 2016) over many languages in order to encourage the model to learn representations that are invariant to language.", "Domain-adversarial training incorporates an auxiliary domain classification task, but negates gradients for encoder weights before the parameter update in order to guide the encoder to produce hidden representations that fool the classifier: i.e. 
they minimize information about the language while still facilitating the primary task of speech recognition.", "Domain-adversarial training has been used in speech recognition to learn features invariant to noise conditions (Shinohara, 2016), accents (Sun, 2018), and sex (Tripathi et al., 2018).", "Most closely related to our work is that of Yi et al. (2018), who use a language-adversarial objective when preparing multilingual bottleneck features from four languages for a hidden Markov model (HMM) ASR pipeline.", "In contrast, our work uses an adversarial objective across many languages, pairing it with a context-independent phoneme objective in an end-to-end framework.", "We scraped the data that forms the CMU Wilderness dataset, using a freely available script.", "This dataset consists of dramatized readings of the Bible in hundreds of languages.", "Each reading is ascribed a rating based on alignment quality, which falls into one of these classes: very good, good, okay, and not okay.", "The script used to preprocess the data uses a universal pronunciation module in Festival (Taylor et al., 1998) to produce pronunciation lexicons using an approach based on that of UniTran (Yoon et al., 2007), which we use to create phonemic transcriptions.", "The dataset consists of readings of the Bible, with each reading typically featuring just a few speakers, mostly male.", "These are often dramatized, with sound effects and background music.", "For many purposes this could be considered a limitation of the data.", "Although the characteristics of the speech are unique, the dataset allows us to investigate multilingual models over many languages without the confounds of an overly noisy environment.", "It is not unreasonable to expect our findings to generalize to other speech recognition domains.", "While the dataset includes only a single reading of the Bible for most languages, there are a number with two or more.", "We evaluate on languages for which we can find two or more 
readings.", "[Footnotes: the scraping script is available at https://github.com/festvox/datasets-CMU_Wilderness, and Festival at http://www.cstr.ed.ac.uk/projects/festival/.] [Table 1: the duration (hours:minutes) and alignment quality (VG = very good, G = good, O = okay, NO = not okay) of each reading in the evaluation languages (ISO 639-3 language codes in parentheses), before our preprocessing: Aymara (ayr) 16:19/G, 18:37/G; SB Quechua (quh) 27:41/G, 20:02/G; Kekchi (kek) 19:32/G, 18:30/G; Ixil (ixl) 35:06/VG, 25:35/G, 18:29/G; Malagasy (mlg) 12:29/NO, 15:52/O, 15:59/G; Indonesian (ind) 19:01/G, 21:20/G, 30:34/G; Garap (kia) 15:34/G, 12:17/VG; Swedish (swe) 15:55/G, 16:46/VG; Spanish (spn) 16:35/G, 15:19/G.] This is so that we can compare adaptation to a target", "language but not the speakers of the target reading (we refer to this task as language adaptation, as explored in Section 8) with adaptation to the target language as well as the target reading (we refer to this task as reading adaptation).", "We additionally restricted the evaluation languages to those that have at least one good or very good reading in terms of alignment quality.", "Table 1 presents the evaluation languages and readings grouped by family or geographic location, along with their durations.", "In addition to scaling ASR training to 100 languages, a key contribution of our work is the use of a context-independent phoneme objective paired with a language-adversarial classification objective in an end-to-end grapheme-based neural network, as illustrated in Figure 1.", "Our experiments are conducted within the framework of a hybrid CTC-attention end-to-end neural model using ESPnet (Watanabe et al., 2017b), which uses an encoder-decoder architecture implemented in PyTorch (Paszke et al., 2017).", "The encoder we use consists of VGG-like convolution layers (Simonyan and Zisserman, 2014; Sercu et al., 2016) followed by a multilayer bidirectional long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997).", "The decoder uses location-based attention (Chorowski et al., 2015) and an LSTM.", "In addition to the attention, the decoder also incorporates CTC probabilities over graphemes to encourage monotonicity in decoding.", "The end-to-end neural model performs direct grapheme prediction without recourse to a pronunciation lexicon, which traditional hybrid HMM-DNN models rely on.", "Since different orthographies may be mutually disjoint or only weakly related to the phonetic content of the input speech, we use a context-independent phoneme CTC objective to encourage learning of representations independent of such orthographic idiosyncrasies.", "We performed limited preliminary experiments to determine how best to use the phoneme objective, which corroborated recent work in hierarchical training objectives that supports inserting the phoneme objective in the layers below the final layer (Krishna et al., 2018).", "We also found that using the phoneme objective during adaptation was harmful, and therefore in all reported experiments we use it only during multilingual pretraining.", "For language-adversarial training we used a log-linear classifier over all languages seen in pretraining.", "An utterance-level mean of the penultimate encoder layer states is fed into the classifier.", "For each batch in training we update the network using the interpolated grapheme and phoneme objectives before a separate update step using the adversarial objective.", "We follow the learning rate scheduling of Ganin 
et al. (2016), where the weight of the adversarial objective relative to the speech recognition tasks follows λ(p) = 2/(1 + exp(-10p)) - 1 over the course of training, where p ∈ [0, 1] is a measure of training progress.", "We drop the adversarial objective during target language adaptation.", "We chose as target adaptation languages those languages for which we have multiple readings of the Bible.", "This allows us to assess adaptation of the pretrained multilingual model in two scenarios: language adaptation and reading adaptation.", "In reading adaptation, it is adapted to data from each reading of the target language, including the reading from which we select held-out evaluation utterances.", "[Figure 1: the end-to-end architecture used during pretraining; the encoder over input x feeds an attention decoder and a grapheme CTC objective predicting y_1, y_2, ..., y_n, a context-independent phoneme CTC objective, and a language-adversarial classifier over the pretraining languages.] In language adaptation it is adapted only", "to readings that are not represented in the evaluation set.", "This last case, of adapting to just one or several speakers of a new language (in order to ultimately have a system that generalizes beyond those speakers in the language), is not common in speech recognition experimentation.", "Results and findings for language adaptation will be presented in Section 8.", "We established training, validation and test sets for each reading using a random 80/10/10 split.", "When pretraining or adapting the multilingual systems, we used the combined training sets of the constituent readings.", "We used 80-dimensional log Mel filterbank features with 3-dimensional pitch features.", "We tuned hyperparameters for these models using one Aymara reading.", "We found that a 4-layer encoder and a 1-layer decoder, with a hidden size of 768 for the encoder (and its projections), the decoder, and the attention, yielded results equal to the best of deeper models.", "These settings were then used for training the models used in our experiments.", "For the 
training objective, we linearly interpolated the attentional decoder cross-entropy loss with the grapheme CTC and phoneme CTC objectives.", "Equal weight was given to all three since we found that to be effective in preliminary experiments.", "[Footnote: the CMU Wilderness reading ID of the Aymara tuning reading is AYMSBU.]", "[Table 2: word error rate (%) of multilingual models adapted to target languages, with and without auxiliary training objectives (relative change in parentheses); the columns are MONO, QUE and QUE +phn+adv, CYR and CYR +phn+adv, and QUE+CYR with its +phn, +adv, and +phn+adv variants.", "Aymara: 40.6; 34.3, 34.5 (+0.6%); 37.9, 35.9 (-5.3%); 34.6, 34.2, 34.8, 34.2 (-1.2%).", "SB Quechua: 14.8; 13.8, 14.0 (+1.4%); 16.3, 17.0 (+4.3%); 14.9, 14.2, 14.0, 13.9 (-6.7%).", "Indonesian: 14.9; 15.1, 15.3 (+1.3%); 16.1, 17.9 (+11.2%); 15.8, 15.6, 15.5, 14.7 (-7.0%).", "Average relative change: +1.1% (QUE), +3.4% (CYR), -4.9% (QUE+CYR).]", "Note however that the effective weight of", "the adversarial objective effectively changes over the course of training because of the learning rate scheduling mentioned in Section 4.3.", "We trained for 15 epochs in all cases except where otherwise noted.", "Note that during adaptation we initialize the model using both the multilingual encoder and decoder.", "We found this to work best in preliminary experimentation on a Spanish reading.", "In this section we evaluate the use of the auxiliary phoneme and language-adversarial objectives described in Section 4 on two divergent groups of languages that are distinct along a number of dimensions, including orthography, language family and phonology, in order to assess the auxiliary objectives' capacity to bridge the divide between these languages during pretraining.", "This serves as an initial exploration before further experiments in Section 7 and Section 8, where we choose from a broader set of pretraining languages.", "Pretraining languages: We pretrained models on two groups of languages separately and together.", "The first consists of six languages from the Quechuan language family, including 
subvarieties of Quechua I and II (qub, quf, qvs, qvw, qwh and qvh).", "We henceforth refer to this group as QUE.", "The second consists of six languages that use the Cyrillic script, and we refer to this group as CYR.", "These languages include Nogai (nog), Bashkir (bak), Gagauz (gag), Khakas (kjh), Crimean Tatar (crh), and Russian (rus).", "With the exception of Russian, these languages are all Turkic.", "The character sets do not overlap between QUE and CYR, and this was a deliberate choice in this preliminary experiment to maximize the differences between the two groups.", "Evaluation languages: To test the pretrained models in varied contexts, we evaluate our models on three languages: Central Aymara (ayr), South Bolivian Quechua (SB Quechua; quh), and Indonesian (ind).", "These languages vary in a number of dimensions: SB Quechua is very closely related to QUE, while Indonesian is distant; Aymara is phonologically very similar to Quechuan languages, but is considered to be from a different family; Aymara had a high monolingual baseline error rate, while the others are lower; and Indonesian has three readings while the others have two.", "However, all evaluation languages use the Latin script.", "Note that in this section we assess performance in the reading adaptation case, while Section 8 presents results on the held-out reading case.", "Experiments: Table 2 compares the performance of monolingual target-language models to models adapted to the target language after being pretrained on QUE, CYR and their combination, QUE+CYR.", "CYR pretraining underperforms pretraining with QUE for all evaluation languages, likely due to CYR's orthographic mismatch with all of the evaluation languages.", "The model pretrained on QUE+CYR also underperforms QUE.", "Introducing the auxiliary phoneme and language-adversarial objectives helps to overcome this performance loss, making the QUE+CYR-pretrained model the best for adaptation to Aymara and Indonesian.", "QUE 
remained the best pretraining set for adaptation to SB Quechua, which is unsurprising given how well represented SB Quechua is by the languages included in the Quechuan language group.", "[Figure 2: t-SNE representation of encoder states corresponding to /ɑ/ and /i/ across Quechua (Huamalíes Dos de Mayo; qvh), Russian (rus), and Nogai (nog), without and with the auxiliary objectives.] This suggests that when a substantial", "amount of data in very closely related languages is available (in this case, close to 100 hours of QUE data), then there is little to be gained from highly unrelated languages.", "When pretraining on QUE and CYR separately, the auxiliary objectives underperformed baseline multilingual pretraining on average.", "The variation in languages within these groups is far less than the variation between groups.", "Given that the phoneme and adversarial objectives are intended to overcome variation between pretraining languages, this result indicates that there must be a sufficient level of diversity in the pretraining languages before the auxiliary objectives are of benefit when adapting to certain target languages.", "Results from pretraining on QUE+CYR showed that either objective helps on average, and that the effects are complementary.", "Because of this, we opted to include them together in subsequent experimentation.", "We evaluated this best-performing model on the larger set of other evaluation languages.", "Results in Table 3 show that in all cases multilingual pretraining of QUE+CYR with the auxiliary objectives outperformed its counterpart without the objectives (which frequently underperformed the monolingual model), and in all but one case this led to an improvement over the monolingual baseline.", "[Footnote: however, this does not hold in the language adaptation scenario, where the auxiliary objectives help QUE+CYR only slightly; see Section 8.] To gain insight into how the auxiliary objectives change the representation of speech learnt by the", "models, we applied 2D t-SNE dimensionality reduction (van der Maaten and Hinton, 2008).", "Figure 2 plots the representations of two phonemes in three languages learnt by the encoder in the case without and with the auxiliary objectives.", "In the multilingual pretraining baseline, six clusters are represented, one for each language-phoneme combination.", "These appear stratified by language, with different phoneme clusters within languages close to one another.", "With the auxiliary objectives, phoneme clusters between languages move closer to one another, while language identity becomes less relevant in determining which phoneme clusters neighbour one another.", "In the latter plot, the Nogai phonemes become separated by a Russian /ɑ/.", "This is particularly salient since the Nogai speaker was female, while the Russian speaker had a deep male voice.", "In the previous section we explored the use of two dissimilar groups of languages in a multilingual setup.", "Multilingual pretraining of languages from different language families and scripts benefitted from an explicit phoneme objective and adversarial objective when there was sufficient diversity in the pretraining languages.", "[Footnote: we established the correspondence between encoder states and phonemes by using forced alignment with Kaldi (Povey et al., 2011), taking the encoder state at the mid-point of each phoneme's duration.] However, a change in orthography was conflated with a change in language family, geographic location, and phono-", "logical/phonetic characteristics.", "In this section, we investigate which factors are most important in choosing languages for multilingual pretraining and how useful it is to scale up model pretraining to many languages.", "This exploration is conducted in the reading adaptation scenario; language adaptation with unseen target speakers is addressed in Section 8.", "Beyond answering these questions, this 
investigation reveals more information about the utility of the proposed auxiliary objectives in different scenarios.", "Phonology & Geography: We test across a number of evaluation languages (cf. Table 1) by determining, for each evaluation language, groups of pretraining languages that are similar to the evaluation languages in different ways.", "In order to determine language similarity in a principled way, we used URIEL and lang2vec (Littell et al., 2017) to produce feature vectors for each language based on information from several linguistic resources before calculating their cosine similarity.", "For each language we used two feature vectors.", "The first is a concatenation of the lang2vec phonology average and inventory average vectors, characterizing phonological properties and phonetic inventory.", "The second represents geographic location.", "We denote these two groups PHON/INV and GEO respectively.", "[Footnote: we didn't create PHON/INV sets for Ixil and Garap because their phonological features and phonetic inventories were not well attested.] Geographic proximity may serve as a proxy for other similarities not captured in PHON/INV, including language family, orthographic similarity, and the likelihood of exchanged loan words.", "We filtered for languages in the dataset with good or very good alignments before ranking them by cosine similarity with the evaluation languages in terms of phonological and phonetic similarity as well as geographical proximity.", "To create each of the pretraining sets, we took between 7 and 14 of the top languages, matching approximately the total duration of the phonetically/phonologically similar groups with the geographically proximate language groups.", "For most languages, there is no overlap between the GEO and PHON/INV sets.", "Massively multilingual model: As a further point of comparison, we pretrain a model on around 100 languages (denoted 100LANG), for approximately 1650 training hours in total.", "Findings: The results in Table 3 extend our findings in Section 6, continuing to support the benefit of the use of the auxiliary objectives while shedding more light on the type of language variability the objectives help to overcome.", "[Footnote, continued: we also didn't use the lang2vec language family vectors, since most of the Quechuan languages were not captured as being highly similar to SB Quechua.]", "[Footnote: an exhaustive list of the CMU Wilderness language codes for each pretraining group can be found in Appendix A, along with durations of each pretraining set.]", "[Footnote: the 100LANG models were pretrained for 6 epochs.] GEO and 100LANG", "benefitted comparably from the objectives on average, while PHON/INV did less so.", "QUE+CYR benefitted the most.", "This suggests that the objectives may help more when pretraining languages are orthographically, phonetically and phonologically diverse.", "Unlike the other languages, the Swedish PHON/INV vectors were not well attested.", "As a result, the Swedish PHON/INV group has languages with a similar phonetic inventory that were also unattested phonologically.", "This model underperformed the monolingual model by a large margin, suggesting that similarity of phonetic inventory may not be so useful alone without similarity of phonological features.", "Models pretrained on this set also benefitted the most from the auxiliary objectives.", "It may be the case that the auxiliary objectives push together representations of allophones within languages, and pronunciation variations of the same phonemes between languages.", "When Swedish is discounted, the average relative improvement when adding auxiliary objectives for PHON/INV becomes negligible.", "The PHON/INV configurations are hurt by the auxiliary objectives for SB Quechua, Aymara, and Indonesian.", "The PHON/INV sets for the first two of these languages emphasized Quechuan languages, and this corroborates the indication in Section 6 that the auxiliary 
objectives may not help so much when pretraining languages are similar.", "On the other hand, the Indonesian PHON/INV set included Afro-Asiatic and Niger-Congo languages, as well as an Indo-European language and Huave, a language isolate from Mexico, yet it was not improved by auxiliary objectives.", "The average relative word error rate (WER) change for GEO against PHON/INV was -2.2% without auxiliary objectives, and -4.4% with them, suggesting that features correlated with geography are useful for guiding pretraining language selection.", "Counter-examples were Aymara, SB Quechua and Malagasy, which performed worse when pretrained on GEO.", "In the case of SB Quechua, only one Quechuan language was represented in GEO (Inga), while PHON/INV had three (qub, qvh, quf).", "[Footnote: discounting Swedish, this becomes +0.2% and -3.1%.] Madagascar is far removed from where most Austronesian languages are spoken, so Malagasy's GEO set was almost all Niger-Congo", "languages, while the PHON/INV set had a diverse array of Austronesian, Indo-European, Afro-Asiatic, Sino-Tibetan and Mayan languages.", "However, on average, these results suggest that geographical proximity is a decent guide to pretraining language selection.", "Another advantage is that it requires no explicit phonological features, making it applicable to a much larger number of languages.", "The average relative WER change of 100LANG against MONO was +1.3%, indicating that massively multilingual pretraining by itself is not useful if the target speakers are seen in training.", "Using the auxiliary objectives overcame the difference, resulting in a -1.6% average relative WER change.", "However, pretraining with GEO +phn+adv yielded an average relative delta of -7.4% over the monolingual model.", "Though more languages help, they are not necessarily better than geographically proximal languages (however, results are very different when not adapting to target speakers: see Section 8).", "In two cases pretraining with 
100LANG was hindered by the auxiliary objective.", "In one of these cases, Swedish, both 100LANG variations substantially underperformed the monolingual baseline.", "One possible reason is that there is enough target language and speaker data that the multilingual pretraining and auxiliary objectives offer no benefit.", "We scaled the training/adaptation data for Swedish up from under 1 hour.", "Figure 3 indicates that in this case the auxiliary objectives do lead to better initialization, with gains being lost only when around 5 hours of target language and reading data are seen.", "Previous sections have addressed the reading adaptation scenario, where the ASR model is adapted to speech from the target reading (i.e. where target speakers have been heard in adaptation).", "In this section we evaluate in a language adaptation scenario, adapting to readings in the target language, but not the target reading.", "The question of how well a multilingual model can be adapted to a language on the basis of recordings from a small number of target-language speakers is relevant to incident response situations such as those modelled by LORELEI (Strassel and Tracey, 2016), where a single language consultant is available from whom recorded speech can be collected.", "We performed experiments analogous to those of the previous sections where the evaluation reading was not seen in training or adaptation.", "This is a challenging task, as the model must generalize to multiple speakers of a language on the basis of seeing only several in training.", "Most of the findings corroborate what was found in the previous sections.", "Here we highlight differences.", "Massively multilingual pretraining led to substantially better performance than other methods, unlike in the reading adaptation task.", "For each evaluation language, the 100LANG model outperformed the next best method, with one exception: Indonesian.", "In that case the GEO set performed the best, as the languages were not only 
geographically proximate but also entirely Austronesian.", "The takeaway (cf. Table 4) is that you should always use more pretraining languages unless you know your target speakers, as in the reading adaptation scenario.", "Auxiliary objectives remained useful on the whole.", "However, while the difference in WER achieved when adding the auxiliary objectives was similar to that reported in Section 7 for PHON/INV and 100LANG, GEO and QUE+CYR no longer achieved improvements.", "QUE+CYR notably only achieved a -0.2% average relative WER change when adding the auxiliary objectives, while achieving -7.8% in the reading adaptation case.", "While the auxiliary objectives remained useful on the whole, their effect was dwarfed by the value of adding more languages.", "Phonology versus Geography: GEO sets with or without auxiliary objectives lost their edge over PHON/INV, with high variance in scores.", "The amount of training data becomes the dominating variable affecting WER.", "We have explored the utility of pretraining multilingual models on a variety of language sets, scaling to as many as 100 languages.", "Our experiments have demonstrated the value of auxiliary phoneme and language-adversarial pretraining objectives in a multilingual end-to-end ASR framework, particularly when the pretraining languages are diverse.", "Our results suggest how to pick pretraining languages when target speakers are seen in the adaptation data: find geographically proximal languages.", "When adapting to just several non-target speakers, exposure to more speech in pretraining is the most important thing for model generality, even if it comes from a wide range of dissimilar languages.", "We would like to thank Tim Baldwin for an off-hand comment that planted the language-adversarial idea in the first author's head, and Trevor Cohn for some related discussion.", "Thanks also go to Alexis Michaud and the reviewers for comments." ]
[ "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "other", "other" ]
[ "Leaderboards are widely used in NLP and push the field forward.", "While leaderboards are a straightforward ranking of NLP models, this simplicity can mask nuances in evaluation items (examples) and subjects ( NLP models).", "Rather than replace leaderboards, we advocate a re-imagining so that they better highlight if and where progress is made.", "Building on educational testing, we create a Bayesian leaderboard model where latent subject skill and latent item difficulty predict correct responses.", "Using this model, we analyze the ranking reliability of leaderboards.", "Afterwards, we show the model can guide what to annotate, identify annotation errors, detect overfitting, and identify informative examples.", "We conclude with recommendations for future benchmark tasks.", "Leaderboard evaluationsfor better or worseare the de facto standard for measuring progress in question answering (Rajpurkar et al., 2016) and in many NLP tasks (Wang et al., 2019a).", "An unfortunate side effect of leaderboard popularity is SOTA -chasing, often at the expense of carefully inspecting data and models (Linzen, 2020).", "For example, the same super-human models that top question answering leaderboards (Najberg, 2018) often fail spectacularly (Feng et al., 2018; Wallace et al., 2019a) by learning non-generalizable statistical patterns (McCoy et al., 2019; Niven and Kao, 2019).", "Finally, focusing solely on metrics conflates progress on a specific task with progress on real-world NLP problems behind the task (Bender and Koller, 2020).", "Plainly, focusing on headline SOTA numbers provide(s) limited value for scientific progress absent insight into what drives them and where they fail (Lipton and Steinhardt, 2019).", "In this work we take leaderboards as they are, and imagine how they might better support research.", "Leaderboards establish differences between models on a fixed task.", "Hence, leaderboards should enable and encourage the comparison of models and inspection of 
examples.", "And leaderboards should also signal when they have outlived their usefulness (Boyd-Graber and Börschinger, 2020).", "To help focus attention on examples and models of interest, we propose Difficulty and Ability Discriminating (DAD) leaderboards that explicitly model both task and submissions jointly, rather than either in isolation.", "DAD's underlying model is based on Item Response Theory (Lord et al., 1968; Baker, 2001, IRT, reviewed in §2), a widely used (van Rijn et al., 2016) alternative in educational testing to simple summary statistics (Edgeworth, 1888).", "1 Source code, data, and visualizations at irt.pedro.ai.", "DAD can explicitly identify the difficulty and discriminability of items (Figure 1), 2 which in turn can lead to a more nuanced ranking of models, identifying poor items, and better understanding of a dataset and task.", "Throughout the paper, we use the question answering (QA) benchmark SQuAD 2.0 (Rajpurkar et al., 2018).", "For example, DAD can identify questions that are challenging to models and questions that are wrong (incorrectly annotated).", "In addition to better understanding datasets, it is also helpful for efficiently selecting evaluation items to annotate.", "We conclude with recommendations for future leaderboards (§7) and discuss where IRT in NLP can go next (§8).", "Leaderboards are a product of the metrics, evaluation data, and subjects (machine or human) who answer items (Figure 2).", "For concreteness, let's assume that we have a question-answering task and two subjects: Ken, who is good at trivia, and Burt, who is not.", "In the simplest IRT models, each subject j has a random variable θ_j corresponding to their skill: Ken's is big, Burt's is small.", "But you cannot know that until you start asking them questions of varying difficulty β_i.", "Harder questions have a higher difficulty (what is the airspeed of an unladen swallow) than easy ones (who is buried in Grant's tomb).", "The bigger the margin 
between a subject's skill θ_j and an item's difficulty β_i, i.e. θ_j − β_i, the more likely that subject j responds correctly p_ij(r_ij = 1).", "This is the simplest IRT model, which we call IRT-base.", "Generally, given n test items X = (X_1, ..., X_n) and m subjects S = (S_1, ..., S_m), where each subject answers every item, we want to estimate subject skills and item difficulties.", "To discover the random variables that best explain the data, we turn to probabilistic inference (Pearl, 1988).", "Two additional random variables further improve DAD: discriminability γ_i and feasibility λ_i.", "We first consider discriminability and the margin between a question's difficulty β_i and a subject's skill θ_j.", "A discriminative question is challenging but can still be answered correctly by a strong subject.", "If Ken's ability is higher than most items' difficulty (θ_j − β_i is large), item discriminability multiplies this gap by γ_i in a model called IRT-disc.", "Questions with low γ_i are low quality: they have annotation error or do not make sense.", "Another way of capturing poor quality questions is the feasibility λ_i.", "For example, if the question who was the first president has the answer Rajendra Prasad, the question has an unstated implicit assumption that subjects must guess what country or company the question is about.", "In the model IRT-feas, if a large fraction of subjects all get an item wrong, everyone's probability of getting the item right is capped at λ_i.", "In NLP terms, 1 − λ_i corresponds to the prevalence of annotation errors that lead to unsolvable items.", "Having introduced all of the constituent elements of the model, we can now present the full generative model: 1. For each subject j:", "(a) Draw skill θ_j ∼ N(μ_θ, τ_θ^{−1}) 2. For each item i:", "(a) Draw difficulty β_i ∼ N(μ_β, τ_β^{−1})", "(b) Draw discriminability γ_i ∼ N(μ_γ, τ_γ^{−1})", "(c) Draw feasibility λ_i ∼ U[0, 1] 3. 
Draw subject j's response on item i, r_ij ∼ p_ij(r_ij | θ_j, β_i, γ_i, λ_i) = p_ij(r_ij = 1 | θ_j) = λ_i / (1 + e^{−γ_i(θ_j − β_i)}).", "(1) For IRT-base, γ_i and λ_i are fixed to 1.0, while for IRT-disc, only λ_i is fixed.", "3 Means μ_β, μ_γ, μ_θ are drawn from N(0, 10^6) and τ_β, τ_γ, τ_θ from a Γ(1, 1) prior, as in Lalor et al. (2019) and recommended by Natesan et al. (2016).", "4 3 In psychometrics, IRT-base is called a Rasch (Rasch, 1960) or 1 parameter logistic (1PL) model, IRT-disc is a 2PL model, and IRT-feas is a 4PL model with guessing set to zero.", "4 We differ by allowing γ_i < 0 to identify bad items.", "Because it is difficult to completely codify skill and difficulty into a single number, we can rewrite the exponent in Equation 1 as a sum over dimensions γ_i Σ_k (θ_{j,k} − β_{i,k}), where each dimension captures the interaction between an item's difficulty and a subject's skill.", "For example, perhaps Burt could better exploit artifacts in one dimension (their skill θ_{j,k} for k = 5 is high but everywhere else is low) while Ken might not know much about a particular topic like potent potables (θ_{j,k} for k = 2 is low but everywhere else is high).", "We call this model IRT-vec.", "5 Multidimensional IRT models (Reckase, 2009) could, in addition to better modeling difficulty, also cluster items for interpretation; we briefly experiment with this (Appendix F), but leave more to future work (§8).", "IRT's fundamental assumption is that not all items and subjects are equal.", "This explains why leaderboards can fail while having normal looking accuracies.", "As a thought experiment, consider a dataset that is one third easy (β_i ∈ [0, 1]), one third medium difficulty (β_i ∈ [2, 3]), and one third hard (β_i ∈ [6, 7]).", "Suppose that Ken has skill θ_k = 4 while Burt has skill θ_b = 2.", "A standard leaderboard would say that Ken has higher accuracy than Burt.", "But suppose there's a new subject that wants to challenge Ken; they are not going to reliably dethrone Ken until their skill θ_c is greater than six.", "This is a more 
mathematical formulation of the easy and hard dataset splits in question answering (Sugawara et al., 2018; Rondeau and Hazen, 2018; Sen and Saffari, 2020).", "In IRT-feas, this recapitulates the observation of Boyd-Graber and Börschinger (2020) that annotation error can hinder effective leaderboards.", "DAD helps systematize these observations and diagnose dataset issues.", "To estimate the latent parameters of our model, we use mean-field variational inference (Jordan et al., 1999).", "In variational inference, we propose a distribution over the latent variables, q(Θ), that approximates the true but intractable posterior p(Θ).", "We then minimize the KL-divergence between these distributions, equivalent to maximizing the evidence lower-bound (ELBO) with respect to the variational parameters.", "In our case, q(Θ) is a mean-field distribution, which means it factorizes over each of the latent variables (the product is over the n × m subject-item pairs): q(μ, τ, θ, β, γ) = q(μ) q(τ) Π_{i,j} q(θ_j) q(β_i) q(γ_i). Specifically, for our key latent variables z ∈ {θ, β, γ}, the associated variational distributions are of the form q(z) = N(u_z, t_z^{−1}).", "Recall that in the generative distribution, each latent z is drawn from a N(μ_z, τ_z^{−1}) whose parameters are also latent variables; for these variables, we use the variational distributions q(μ_z) = N(u_{μ_z}, t_{μ_z}^{−1}) and q(τ_z) = Γ(a_z, b_z).", "We optimize the ELBO with respect to the variational parameters φ = {u_z, t_z, u_{μ_z}, t_{μ_z}, a_z, b_z} for all z using ADAM (Kingma and Ba, 2015).", "With DAD's leaderboard IRT model introduced, we next discuss how leaderboard subjects are statistically compared and alternative methods, such as using IRT parameters, to evaluate whether two models are truly different.", "Fundamentally, the objective of comparative evaluations like leaderboards is to decide whether model A is better than model B.", "A thread of NLP has rightfully advocated for adding rigor to these 
decisions using statistics (Traub, 1997, Classical Testing Theory) where the objective is to infer a true score T from the observed test score X = T + E given a measurement error E, uniform across subjects.", "However, in educational testing, a field measuring skill and knowledge in humans, IRT is a primary measurement instrument (Hambleton, 1991, p. 2).", "A major motivation for IRT is that subjects of different skill have different errors.", "IRT explicitly accounts for the bandwidth-fidelity dilemma (McBride, 1976): items can either accurately measure a narrow ability range (fidelity) or inaccurately measure large ability ranges (bandwidth).", "6 This section and the next contrast methods for identifying the best model and advocate for IRT.", "Implicit in nearly all leaderboard evaluations is ranking models by a statistic such as the average accuracy.", "As we show in §4, naïve rankings are noisier than IRT rankings.", "Leaderboards should: (1) reliably and efficiently rank better models ahead of worse models (Tague-Sutcliffe, 1992; Voorhees, 2003) and (2) guide inspection of items and subjects (§5).", "The first ameliorates the unavoidable randomness of finite evaluations while the second enables error analysis (Wu et al., 2019) and model probing (Belinkov and Glass, 2019; Zhang et al., 2019).", "First we verify that IRT models accurately predict the responses of subjects (§4.2).", "Next, a ranking stability analysis shows that IRT has modestly better reliability than classical rankings (§4.2.3).", "Lastly, using IRT to actively sample items for annotation yields rankings with better correlation to complete test data (§4.4).", "At first blush, the differences between IRT and logistic regression are minimal, but we include the comparison to address natural questions from the NLP community: (1) do the idiosyncrasies of the IRT formulation hurt accuracy?", "(2) should we add features to better understand phenomena in the questions?", "(3) why not use deep models?", 
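As a concrete reference for the response model these experiments build on, here is a minimal sketch of Equation 1 (the feasibility-augmented IRT response probability). This is not the authors' released code; the function name and example values are illustrative only.

```python
import math

def irt_response_prob(theta, b, gamma=1.0, lam=1.0):
    """Probability that a subject with skill theta answers an item correctly:
    p = lam / (1 + exp(-gamma * (theta - b))).
    theta: subject skill, b: item difficulty, gamma: discriminability,
    lam: feasibility. gamma=1, lam=1 recovers the IRT-base (1PL/Rasch) case;
    lam=1 with free gamma is IRT-disc; all free is IRT-feas."""
    return lam / (1.0 + math.exp(-gamma * (theta - b)))

# A skilled subject (theta = 4) facing items of increasing difficulty,
# mirroring the easy/medium/hard thought experiment in the text.
for b in (0.5, 2.5, 6.5):
    print(f"difficulty {b}: p(correct) = {irt_response_prob(4.0, b):.2f}")
```

When theta equals b (and lam = 1), the probability is exactly 0.5; raising gamma sharpens the transition around the item's difficulty, which is why low or negative gamma flags poor items.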
"The next section argues that both IRT and logistic regression are accurate even without laboriously engineered task-specific features.", "Adding obvious features such as item words (e.g., questions) only minimally improves the accuracy.", "We explicitly omit less interpretable deep models since our goal is to make leaderboards more interpretable.", "Just as educational testing researchers validate IRT models by seeing if they predict subject responses correctly (American Educational Research Association, 2014), we validate how well DAD predicts whether SQuAD models get questions right.", "We compare against a logistic regression linear model (LM) implemented with Vowpal Wabbit (Agarwal et al., 2014).", "Since integrating handcrafted features is easy, we incorporate features derived from subject IDs; item IDs; functions of the SQuAD question, answer, and title; and IRT parameters (details in Appendix B).", "As in IRT, logistic regression predicts whether a subject correctly responds to an item.", "Later, we discuss ways to integrate more features into IRT (§8).", "Experiments are on the SQuAD 2.0 leaderboard.", "Development data are publicly available, and organizers provide test set responses.", "There are 161 development subjects, 115 test subjects, and 11,873 items (1.9 million total pairs).", "Experiments that do not need test responses use all development subjects; those that do use the smaller test subset.", "Following prior work (Wu et al., 2020), we evaluate IRT and linear models by holding out 10% of responses and computing classification metrics.", "7 In SQuAD, predicting whether a response is correct is an imbalanced classification problem (80.4% of responses in the development set are correct).", "Thus, we use ROC AUC, macro F1, and accuracy.", "IRT models that incorporate more priors into the generative story should be better, but are they?", "We compare four IRT models: IRT-base, IRT-disc, IRT-feas, and IRT-vec (§2).", "The more sophisticated models are better and all improve over the LM (Figure 3) and correlate well with each other (Appendix C).", "To be clear, while higher accuracy than LM is good, our goal is to validate that IRT models are accurate; later, we inspect model errors and identify annotation errors (§5).", "Integrating additional features into Bayesian models is not trivial, so we instead use the flexibility of linear models to identify useful features.", "Our leave-one-in ablation compares features (Figure 3): the top ablations both use IRT features, further validating IRT parameters.", "The subject and item identifier features are also strongly predictive, but item is the stronger of the two.", "Text-based features are weaker, but this suggests future work to better integrate them into IRT models (§8).", "Leaderboards should produce reliable subject rankings: can DAD rank systems even with a tiny test set?", "Thus, we compare the correlation both of traditional average accuracy (§3) and IRT rankings on the whole test set compared to the rankings of the same metric on a smaller test set.", "Our first experiment (§4.3.1) examines the stability of existing items and subjects while the second (§4.4) investigates stability of new evaluation data using sampling strategies.", "7 Everywhere else in the 
paper, we train on all responses.", "Rankings should be reliable within the same dataset (e.g., on dev set) and generalize to similar datasets (e.g., with a test dataset).", "To test the first, we measure the ranking stability of mutually exclusive samples of the development data (Buckley and Voorhees, 2000).", "To test the second, we measure the correlation between development set sample rankings and test set rankings (Voorhees, 1998).", "Specifically, for a range of sample sizes 8 we (1) sample two partitions of the data, (2) compute the classical ranking 9 and the IRT ranking from a refit IRT-feas model, then (3) compute Kendall's τ correlation (Kendall, 1938) between the samples for each ranking (details in Appendix D).", "In both cases IRT rankings have higher correlation than classical rankings (Figure 4, left).", "Since the benefit is strongest at low sample sizes, IRT can improve the reliability of small-scale evaluations.", "The second experiment examines ranking generalization: IRT yields more reliable measures of subject skill, implying a greater consistency in subject rankings across evaluation settings.", "Figure 4 compares the development set sample rankings computed above to rankings obtained using subjects' test set responses (with the same IRT model).", "Across all sample sizes, subjects' IRT ability estimated on the development set correlates well with test set ability.", "Crucially, this is better than the corresponding classical metrics like accuracy (Appendix D quantifies the statistical significance of the difference), supporting our original motivation for using IRT.", "10 8 The sample size must be less than half the size of the development data so that we can obtain two samples.", "9 For SQuAD, ordering by mean exact match score.", "10 Since the maximum trial size was limited, we train one final model with the full data, see Table 3 in the Appendix D. 
4.4 IRT Improves Cold Start Reliability IRT can also guide the construction of tests.", "Just as IRT practitioners prepare tests for humans, we too construct tests for machines.", "In educational testing, collecting responses from humans is expensive; likewise, although questions are cheap in search-based QA tasks (Nguyen et al., 2016; Kwiatkowski et al., 2019), annotating answers is expensive.", "Likewise, grading machine dialog responses is expensive and IRT helps (Sedoc and Ungar, 2020).", "To emulate this setting, we use computerized adaptive testing (Weiss and Kingsbury, 1984) to iteratively select SQuAD items to annotate.", "As in human test preparation, we use existing annotations to infer item parameters and iteratively infer the ability of new subjects.", "This experiment splits m subjects into a training group (80%) and a testing group (20%).", "The training group represents subjects for which we have full item predictions and annotations; the testing group represents a new group of subjects that we need to rank.", "To efficiently rank, we should iteratively choose items to annotate that yield the most information about the ranking if all the data were annotated.", "This experiment compares how well several item selection strategies work.", "For each selection method, we (1) choose a sample size, (2) sample from the development set, (3) compute the ranking of subjects, and (4) compute Kendall's rank correlation (Figure 5).", "11 Which item selection strategies should we compare?", "As a baseline, we use naïve random sampling.", "Like prior work, we compare selecting items with the highest difficulty and the highest discriminability (Lalor et al., 2019) as well as the sum of the two.", "11 We compute correlations with the complete development set on ten trials to build 95% confidence intervals.", "We propose that items should be selected according to their Fisher information content (Weiss, 1982) I_i(θ_j) = (p′_ij)^2 / (p_ij (1 − p_ij)) = γ_i^2 p_ij (1 − p_ij) (2) as derived by Lord et al. (1968, p. 70).", "Intuitively, if we do not yet know the true skill θ_j, we should pick items whose expected response we are most uncertain about.", "Our uncertainty (entropy) is maximized when the likelihood of a correct response p_ij is the same as the likelihood of an incorrect response 1 − p_ij, which corresponds to the maximal value of I_i(θ_j); it is also sensible that this value increases as discriminability γ_i increases.", "12 We train an IRT-disc model to simplify sampling (e.g., avoiding a tradeoff between feasibility and discriminability).", "To infer the maximally informative items, we estimate the ability θ_j of each subject using the currently selected items, use the ability to compute the information of each yet-to-be-annotated item for each subject, and then aggregate the informativeness Info(i) = Σ_j I_i(θ_j) (3) by item i summed over subjects j.", "This approach is similar to uncertainty sampling and reduces to it for the IRT-base model (Lewis and Gale, 1994).", "We initially seed with the twenty-five most discriminative items (details in Appendix D).", "Like computerized adaptive testing (Moreno et al., 1984), Figure 5 shows that at lower sample sizes three of the IRT sampling methods are better than random sampling; difficulty does worse.", "The other IRT methods have comparable correlation.", "Thus, by using IRT, DAD can both improve rankings and guide annotation.", "DAD also helps qualitative analysis of items and subjects.", "First, IRT identifies overfitting and generalizes partitioning datasets by difficulty.", "Then we show that, like in educational testing, IRT identifies good and bad items.", "Several works curate easy and hard QA subsets based on how many models answer correctly (Rondeau and Hazen, 2018) or heuristics (Sugawara et al., 2018).", "IRT can create similar subsets using IRT-feas, the best 1D model.", "Difficulty finds where subjects improve while discriminability and feasibility can surface 
items that may be invalid.", "For example, one low feasibility question (Figure 9) asks what are two examples of types of Turing machines? which has two problems: (1) the answer omits five types and (2) span-based evaluation precludes selecting non-contiguous types.", "After excluding items with negative discriminability, which are likely erroneous, we sort items into bins.", "We break both difficulty and discriminability into four bins, taking the 25th, 50th, and 75th percentiles, creating eight total bins.", "Then we select representative SQuAD subjects with their exact match scores (Figure 6).", "Let's examine a feasible item with positive difficulty and discriminability like what reform was attempted following the Nice treaty? 13 In this case, the annotator's span is too long, resulting in almost no correct answers and a low fuzzy match (token F1).", "In contrast, one highly discriminative question succeeds because there are multiple plausible guesses to who did the Normans team up with in Anatolia? 
14 While both the Armenian state and Turkish forces are superficially plausible answers, only Turkish forces is correct; nonetheless, some models are fooled.", "Using IRT to guide subject analysis is helpful; next, we test how efficient it is in identifying annotation error.", "To test if IRT can identify annotation error, we inspect sixty SQuAD development set items.", "We select ten items from each of these groups: the most negative discriminability, discriminability nearest to zero, the highest discriminability, the least difficult, most difficult, and IRT model errors.", "For each, we annotate whether the item was correct, was correct yet flawed in some way, or simply wrong (Figure 7).", "15 Inter-annotator agreement between three authors on this three-way annotation with Krippendorff's α (Krippendorff, 2004; Artstein and Poesio, 2008) is 0.344.", "Despite only modest agreement, just as in the development of education tests, negative discriminability is predictive of bad items.", "When discriminability is negative, then the probability of getting the answer right is higher when ability is lower, which is undesirable: Ken consistently loses to Burt on those items.", "This could identify bad items in evaluation sets for removal.", "DAD draws together two primary threads: we use IRT to understand datasets, which has been applied to other NLP tasks, and apply it to improving leaderboards.", "Finally, we explore how the insights of IRT can improve not just the analysis of test sets but also the construction of test sets.", "IRT in NLP IRT is gaining traction in machine learning research (Martínez-Plumed et al., 2016, 2019) where automated metrics can be misleading (Sedoc et al., 2019): machine translation (Hopkins and May, 2013) and chatbot evaluation (Sedoc and Ungar, 2020).", "15 Annotation guidelines provided in supplementary materials; Figure 7 uses the first set of annotations which were later augmented by two additional sets of annotations.", 
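The Fisher-information item selection described in Equations 2 and 3 can be sketched in a few lines; this is an illustrative reconstruction (under the 2PL/IRT-disc assumption the text mentions), not the paper's implementation, and the parameter values are made up.

```python
import math

def p_correct(theta, b, gamma):
    # IRT-disc (2PL) response probability: p = 1 / (1 + exp(-gamma * (theta - b))).
    return 1.0 / (1.0 + math.exp(-gamma * (theta - b)))

def item_information(theta, b, gamma):
    # Fisher information of an item for one subject (Equation 2):
    # I_i(theta_j) = gamma_i^2 * p_ij * (1 - p_ij).
    p = p_correct(theta, b, gamma)
    return gamma ** 2 * p * (1.0 - p)

def next_item_to_annotate(subject_thetas, candidate_items):
    # Aggregate informativeness over subjects (Equation 3),
    # Info(i) = sum_j I_i(theta_j), and pick the most informative item.
    def info(item):
        b, gamma = item
        return sum(item_information(t, b, gamma) for t in subject_thetas)
    return max(candidate_items, key=info)

# Two subjects (skills 2 and 4) and three candidate (difficulty, discriminability)
# items: the item near the subjects' skill levels carries the most information.
thetas = [2.0, 4.0]
items = [(0.0, 1.0), (3.0, 1.0), (8.0, 1.0)]
print(next_item_to_annotate(thetas, items))  # → (3.0, 1.0)
```

Information peaks where p_ij = 0.5, i.e., for items whose difficulty sits near a subject's skill, which is why very easy and very hard items are poor choices for separating subjects.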
"Concurrent with our work, Vania et al. (2021) compare NLP test sets with IRT.", "Closest to our work in NLP is Otani et al. (2016), who rank machine translation subjects and compute correlations with gold scores.", "Similarly, Martínez-Plumed and Hernández-Orallo (2020) use IRT on non-language AI video game benchmarks.", "Just as we use IRT to identify difficult or easy items, Lalor et al. (2016) create challenge sets for textual entailment.", "We test IRT as a way to guide annotation, but it can also train NLP models; for example, deep models learn easy examples faster (Lalor et al., 2018) and maintain test accuracy when training data are down-sampled (Lalor et al., 2019).", "Improving Leaderboards The rise of NLP leaderboards has encouraged critical thought into improving them (Linzen, 2020), improving evaluation more broadly (Eger et al., 2020), and thoughtful consideration of their influence on the direction of research (Sculley et al., 2018; Dotan and Milli, 2020).", "DAD aims to make leaderboard yardsticks (Hernández-Orallo, 2020) more reliable, interpretable, and part of curating the benchmark itself.", "In line with our reliability goal, just as statistical tests should appear in publications (Dror et al., 2018; Dodge et al., 2019), they should be freebies for leaderboard participants (Ethayarajh and Jurafsky, 2020).", "Alternatively, Hou et al. 
(2019) posit that leaderboards could be automatically extracted from publications.", "How to aggregate multi-task benchmarks (Wang et al., 2019b,a; Fisch et al., 2019) and multi-metric benchmarks (Ma et al., 2021) is an open question which, although we do not address it, is one use for IRT.", "This work implicitly argues that leaderboards should be continually updated.", "As a (static) leaderboard ages, the task(s) overfit (Recht et al., 2019), which, although mitigable (Blum and Hardt, 2015; Anderson-Cook et al., 2019), is best solved by continually collecting new data (Kiela et al., 2021).", "Ideally, new data should challenge models through adversarial collection (Wallace et al., 2019b; Nie et al., 2020) and related methods (Gardner et al., 2020).", "However, if making an easy leaderboard more difficult is not possible, the leaderboard has outlived its helpfulness and should be retired (Voorhees, 2000).", "Part of our work centers on alternate task efficacy rankings, but this naïvely assumes that task efficacy is the sole use case of leaderboards.", "Indeed, focusing solely on these factors can mislead the public (Paullada et al., 2020) and may not reflect human language capabilities (Schlangen, 2020).", "Leaderboards are also well positioned to provide incentive structures for participants to prioritize fairness (Bender and Friedman, 2018) and efficiency (Strubell et al., 2019; Schwartz et al., 2020; Min et al., 2021) or incorporate testing of specific capabilities (Ribeiro et al., 2020; Dunietz et al., 2020).", "To enable these more nuanced analyses, leaderboards should accept runnable models rather than static predictions (Ma et al., 2021).", "Active Learning Beyond IRT, the analysis of training dynamics and active learning (Settles, 2009) is helpful for actively sampling specific items or identifying low-quality items (Brodley and Friedl, 1999).", "For example, Swayamdipta et al. (2020) and Pleiss et al. 
(2020) propose alternative training dynamics-based methods for identifying difficult items as well as annotation errors.", "Even closer to our goals, Rahman et al. (2020) use active learning to build a test collection.", "Explicitly measuring how effectively examples separate the best subject from the rest allows test set curators to focus on the bubble (Boyd-Graber and Börschinger, 2020), prioritizing examples most likely to reveal interesting distinctions between submitted systems.", "Alternate Formulations IRT is an example of convergent evolution of models that predict subject action given an item.", "Ideal point models (Poole and Rosenthal, 2017) consider how a legislator (subject) will vote on a bill (item) and use a similar mathematical formulation.", "The venerable Elo model (Glickman and Jones, 1999) and modern extensions (Herbrich et al., 2007) predict whether a player (subject) will defeat an opponent (item) with, again, a similar mathematical model.", "Certain IRT models can also be formulated as nonlinear mixed models (Rijmen et al., 2003), where the item parameters are fixed effects and the latent subject parameters are random effects.", "This allows for comparisons between IRT models and other mixed effects models under a consistent framework.", "IRT-base and IRT-disc can be formulated as nonlinear mixed models, and IRT-feas can be formulated as a discrete mixture model over items.", "As we discuss further in the next section, DAD's application of IRT can further be improved by adopting interpretable extensions of these models.", "This paper advocates incorporating decades of research in crafting education tests to improve how we evaluate the capabilities of NLP models.", "We propose and validate an alternate IRT ranking method for leaderboard evaluations, show it can guide annotation, detect annotation error, and naturally partition evaluation data.", "Just as educators moved from classical testing to IRT, the NLP community should consider future 
evaluations with IRT.", "Although there is much to gain through IRT evaluation, there are limitations which make it hard to implement.", "First, it requires access to item-level responses for all examples for all subjects, which are often only available to organizers.", "Second, Urbano (2016) notes that sampling mutually exclusive subsets has drawbacks: samples are not entirely independent.", "We see a few directions for future work.", "First, this paper is intended to validate IRT and its usefulness as an active part of the leaderboard lifecycle; the natural next step is to implement it in a leaderboard.", "Second, our IRT models do not incorporate the item content (e.g., example text) to predict responses, but in principle could; Bayesian models with meta-data (Card et al., 2018) and ideal point models from political science (Poole and Rosenthal, 1985) that incorporate bills and speeches do exactly this (Gerrish and Blei, 2011; Nguyen et al., 2015; Kraft et al., 2016).", "Analogously, IRT for leaderboards can and should also incorporate text from passages, questions, and answers to better model what makes questions difficult.", "Such a model can also predict which characteristics would create discriminating or difficult items.", "Lastly, multidimensional IRT models to evaluate multiple skills could aid multitask or multi-metric leaderboards like MRQA (Fisch et al., 2019) and Dynaboard (Ma et al., 2021).", "For their work on early iterations of leaderboard visualizations, we thank Jacob Bremerman and Wei Wei Chi.", "For insightful discussions and ideas we thank Shi Feng, Doug Oard, João Sedoc, Mike Wu, and Patrick Lewis.", "We thank Peter Rankel for recommendations on statistical testing methods.", "For discussion and feedback on visualizations, we thank Leo Zhicheng Liu, Calvin Bao, and classmates in UMD's Fall 2020 Information Visualization course.", "For suggestions on topic modeling, we thank Philip Resnik and Maria Antoniak.", "For feedback on prior versions 
of this paper, we thank our anonymous ACL reviewers and members of the UMD CLIP lab.", "Boyd-Graber and Rodriguez's work at UMD was supported by NSF Grant IIS-1822494.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsor.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "abstain", "abstain", "result", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "method", "method", "other", "method", "method", "other", "other", "abstain", "other", "abstain", 
"method", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "For task-oriented dialog systems to be maximally useful, they must be able to process conversations in a way that is (1) generalizable with a small number of training examples for new task domains, and (2) robust to user input in various styles, modalities, or domains.", "In pursuit of these goals, we introduce the RADDLE (Robust tAsk-orienteD DiaLog systems Evaluation) benchmark, a collection of corpora and tools for evaluating the performance of models across a diverse set of domains.", "By including tasks with limited training data, RADDLE is designed to favor and encourage models with a strong generalization ability.", "RADDLE also includes a diagnostic checklist that facilitates detailed robustness analysis in aspects such as language variations, speech errors, unseen entities, and out-of-domain utterances.", "We evaluate recent state-of-the-art systems based on pre-training and fine-tuning, and find that grounded pre-training on heterogeneous dialog corpora performs better than training a separate model per domain.", "Adversarial training is also proposed to improve model robustness against noisy inputs.", "Overall, existing models are less than satisfactory in robustness evaluation, which suggests opportunities for future improvement.", "Dialogs constitute a crucial communication channel in completing a broad range of tasks, such as weather queries, flight and restaurant booking, movie booking, IT help desk, etc.", "Compared to chitchat systems that are usually modeled with single-turn context-response pairs, task-oriented dialog systems involve retrieving information from knowledge bases and reasoning over multiple dialog turns.", "This makes it especially important for a system to be able to produce responses that are grounded in task goals and user intents.", "In a bid to support human-computer interactions, task-oriented dialog systems have been built to 
allow users to converse with a computer system using natural language, such as Siri, Google Assistant, Amazon Alexa, Microsoft XiaoIce (Zhou et al., 2020).", "Traditionally, a task-oriented dialog system uses a modularized pipeline with four modules that execute sequentially (Gao et al., 2019).", "A natural language understanding ( NLU ) module identifies user intents and extracts associated information such as slots and corresponding values from user input.", "A dialog state tracker ( DST ) infers the belief state (or user goal) from dialog history.", "The belief state is often used to query a task-specific database (DB) to obtain the DB state, such as the number of entities that match the user goal.", "The dialog state and DB state are then passed to a dialog policy ( POL ) module to select the next system action.", "A natural language generation ( NLG ) module converts the action to a natural language response.", "The human ability to converse is general, flexible, and robust.", "In contrast, most popular tools for dialog system development that adopt the above modular design are built for specific tasks and struggle with out-of-scope data.", "If we aspire to develop models beyond extensively handcrafted rules and annotated data for each single domain/task, it is critical to develop a more unified , efficient and robust model that can more quickly learn to execute a range of tasks in different domains.", "To fuel research in this direction, we present the RADDLE benchmark.", "It includes a collection of task-oriented dialog tasks in diverse domains (e.g. 
end-to-end modeling, dialog state tracking).", "The benchmark also has a companion online platform for model evaluation, comparison, and robustness analysis.", "Importantly, RADDLE exhibits two unique advantages that pave the way for building more pragmatic dialog systems: ( i ) Limited data setting is the major focus of RADDLE , to evaluate the generalization ability of models.", "It aims at simulating real-world application scenarios where only a very limited amount of labelled data is available for new domains.", "Given this focus, RADDLE is therefore a favorable benchmark to evaluate recent models in the pre-training and fine-tuning paradigm, which learn to represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge transfer.", "( ii )", "Robustness analysis is introduced to study model performance in various challenging scenarios, where models are evaluated with anomalous user input such as language variations, speech errors, unseen entities and out-of-domain utterances.", "Failing to handle these inputs often produces inappropriate responses, leading to a frustrating user experience.", "These scenarios are common for deployed systems in the real world, but are largely ignored in existing dialog benchmarks.", "To the best of our knowledge, RADDLE is the first work to fill this gap.", "To better understand the challenges posed by RADDLE , we conduct experiments with simple baselines and state-of-the-art task-oriented dialog models.", "We find that grounded pre-trained models with a unified multi-task learning objective outperform models separately trained on each domain.", "Moreover, even the best performing model (SOLOIST (Peng et al., 2020a)) in our evaluation achieves a fairly low score in robustness analysis.", "This suggests that our baseline models can handle common inputs with strong regularities, but struggle with anomalous inputs that require deeper reasoning.", "In summary, our key contributions are: 
( i ) A novel dialog benchmark with an emphasis on limited data and multiple domains/tasks, which formally creates a scenario to evaluate the grounding and generalization ability of pre-trained models.", "( ii )", "A crowd-sourced diagnostic evaluation dataset to cover a broad range of real-world sophistication to study model robustness.", "( iii )", "An online evaluation platform and leaderboard to track research progress, with human evaluation services to be granted to top-ranked submissions on a bi-monthly basis.", "( iv )", "Baseline results for major existing approaches to task-oriented dialogs are reported.", "An adversarially robust model is proposed to improve the generalization ability in noisy environments.", "To drive the progress of building dialog systems using data-driven approaches, a number of conversational corpora have been released.", "They are roughly grouped into two categories: ( i ) Corpora with structured semantic labels (Wen et al., 2017; Shah et al., 2018).", "These datasets are often specifically annotated, and used to study an individual module in the dialog pipeline.", "For example, DialoGLUE (Mehri et al., 2020) is a recently proposed benchmark with a focus on NLU and DST tasks.", "( ii )", "Corpora with an implicit user goal (Lowe et al., 2015).", "These datasets are often without semantic labels but can be used in end-to-end (E2E) dialog modeling (Li et al., 2016; Zhu, 2020; Wu et al., 2019; Zhu et al., 2019a; Lee et al., 2019; Zhu et al., 2020).", "MultiWOZ (Budzianowski et al., 2018) is the most related work to RADDLE .", "It is a large-scale multi-turn conversational corpus across several domains.", "It can be used to develop individual dialog modules as separate tasks for existing modular-based methods, or serve as a benchmark for E2E dialog modeling methods.", "RADDLE inherits the advantages of MultiWOZ in its flexibility for separate/joint task modeling and its comprehensiveness in multi-domain data coverage, but differs 
significantly in two aspects: an emphasis on limited data settings and a unique robustness checklist.", "Both are essential qualities in building task bots at scale.", "Further, RADDLE provides an online platform for model evaluation and fair comparison based on privately-held test data, inspired by GLUE (Wang et al., 2018).", "To the best of our knowledge, RADDLE is the first online platform for DST and E2E tasks in the dialog community.", "This can reduce the inconsistency caused by different researchers/teams using varying processing/evaluation scripts, which dilutes where the gain comes from.", "Pre-trained language models (PLMs) have substantially advanced the state of the art across a variety of language understanding and generation tasks (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019; Radford et al., 2019;", "Keskar et al., 2019; Dong et al., 2019; Peng et al., 2020b,c; Li et al., 2020a).", "PLMs are often trained to predict words based on their context on massive text data, and the learned models can be fine-tuned to quickly adapt to various downstream tasks, exhibiting strong generalization capacity even with just a few in-domain training examples.", "Building task bots at scale requires the model to deal with the limited data problem for each domain, which can be used as a testbed to evaluate the generalization ability of PLMs.", "To this end, we limit the number of task-specific training examples in RADDLE to evaluate the sample-efficiency of models.", "Meanwhile, task-oriented dialogs pose a unique set of challenges for PLMs (Gao et al., 2020): a dialog is intrinsically goal-driven, multi-turn and often informal/noisy.", "Indeed, dialog-specific PLMs have been proposed (Wu et al., 2020a; Peng et al., 2020a).", "However, the robustness of PLMs to linguistic perturbations often occurring in dialog settings (see Section 4 for details) is largely unexplored.", "Note that our notion of robustness emphasizes natural language variations, 
which is different from adversarial examples/training that aim to fool a trained model (Nie et al., 2019).", "From this perspective, RADDLE provides a unique benchmark for assessing PLMs with a robustness orientation.", "RADDLE is centered on five English dialog scenarios in daily life, which cover a broad range of data collection schemes, task types and complexities.", "As the first goal of RADDLE is to spur development of generalizable dialog systems, we design the benchmark such that good performance requires a model to leverage substantial knowledge (e.g., pretrained parameters) learned from its previous life cycle, while still maintaining some task-specific components (Coope et al., 2020; Henderson et al., 2020; Peng et al., 2020a; Wu et al., 2020b).", "Specifically, we deliberately keep a small number of training examples for each scenario.", "This is consistent with the common practice that only limited labelled data is provided when deploying a dialog system to new domains.", "Table 1 shows the data statistics.", "Four domains in the standard setting are sampled from MultiWOZ 2.0 (Budzianowski et al., 2018).", "Reminder is intentionally only utilized for unseen entity tracking.", "This is because it is a human-machine corpus with a relatively small action space, meaning that the impact of policy learning on models is largely alleviated.", "Therefore, the performance of models on this corpus will mostly reflect their capability of unseen entity tracking.", "Note that the number of training examples is limited to 50, a realistic amount for users to provide.", "Though it is possible to train a single model for each task from scratch without outside sources of knowledge, we expect that our focus on data-scarce settings will render this approach uncompetitive.", "Furthermore, a typical task-oriented dialog system uses a modularized pipeline that has four modules and executes sequentially.", "Recent research has shown promising results on parameterizing the 
modularized pipeline using a single neural autoregressive model, and training it in an end-to-end manner (Peng et al., 2020a; Ham et al., 2020; Hosseini-Asl et al., 2020).", "In fact, a single autoregressive model can significantly ease the workflow of training and deploying dialog systems for new tasks, compared to existing modularized tools and methods.", "Therefore, we design the benchmark to allow evaluations on end-to-end dialog modeling, in addition to the modularized evaluation on dialog state tracking.", "To reveal the gap between the complexity of dialogs in lab environments and that in real scenarios, we construct a suite of tasks to study the robustness of models.", "We describe these tasks below and in Table 1. On the evaluation front, we concentrate on simulation-based methodologies, in order to facilitate automation.", "Though we only offer human evaluations (Gao et al., 2019) to top-ranked submissions at this point, we emphasize realistic scenarios in pursuit of system robustness (see Section 4).", "Task 1: Dialog State Tracking A robust NLU and DST is the first step towards building a reliable dialog system.", "The dialog state is a summary of the entire conversation up to the current turn.", "In a task-oriented system, it is represented in the form of slot-value pairs, where slot indicates the category/attribute of the user goal expressed in the utterance, and value is the corresponding information.", "For the evaluation metric, we report joint goal accuracy , which indicates the proportion of dialog turns where all the user's search goal constraints are correctly identified (Mrksic et al., 2017).", "To specifically study the NLU performance, we consider intent classification , which aims to automatically extract meaning from a natural language utterance in order to understand the user's goal (Hemphill et al., 1990; Zhu et al., 2019b).", "Task 2: End-to-End Modeling The end-to-end (E2E) dialog models consider dialog history as input, and produce the 
natural language response.", "It jointly implements the dialog management (including DST and POL) and response generation ( i.e., NLG) components.", "Following Budzianowski et al. (2018), Inform , Success , and BLEU scores are reported.", "The first two metrics evaluate dialog task completion: Inform measures whether the system provides a correct entity (inform rate), while Success measures whether all the requested information is answered (success rate) and whether the answered information matches the user's goal.", "BLEU evaluates how fluent the generated responses are compared to human-written responses.", "A combined score ( Combined ) is also reported, using Combined = (Inform + Success) × 0.5 + BLEU", "as an overall quality measure, as suggested in (Budzianowski et al., 2018).", "Existing benchmarks assume a world of a perfect user who always provides precise, concise, and semantically unambiguous utterances.", "These goal-oriented dialog datasets are largely collected by crowd-sourcing, where a crowd-sourced worker enacts the part of a real user by following a set of template instructions provided for the task.", "This method results in a dataset where most user utterances are straightforward, stick to the goal and tend to leave out the variations/errors commonly found in real-world conversational data.", "To this end, we collect a suite of language variations to reveal the dialog sophistication in the real world, and measure the robustness of dialog models.", "Language Variations It is well-known that humans communicate using language with fairly large variations such as different ways of expression or personalized styles (Sacks et al., 1978), while template-based crowd-sourcing fails in covering the linguistic variations (Schegloff et al., 1977; Moore and Arar, 2019).", "Specifically, we consider four types of variations in RADDLE : ( i ) Paraphrase widely exists among different users, who may present restatements of the meaning of a text or message 
using other words.", "( ii )", "Verbosity describes the case where users express their intents using more words than needed.", "( iii )", "Simplification is the case where users express their intents using fewer words to be concise.", "( iv )", "Typos often result from mistakes made in typing.", "In Figure", "1(b)-(e), we provide examples to illustrate these language variations.", "Speech Errors It is desirable that dialog systems can leverage automatic speech recognition (ASR) techniques to serve the speech modality, as in Amazon Alexa.", "However, almost all dialog systems have typically assumed that the user input is written text, and hoped that the system would seamlessly integrate with speech inputs.", "Recently, it has been empirically shown in Gopalakrishnan et al. (2020) that dialog systems trained on written data are very sensitive to various types of synthetic and actual ASR hypotheses in the dialog history.", "To bring attention to this gap, RADDLE promotes speech robustness as an evaluation criterion.", "For example in Figure", "1(f), what's available can be transcribed as once available due to ASR deficiency, and a robust dialog system is expected to still correctly perceive user intents.", "Unseen Entities Most existing DST methods are not designed to handle slot values that are not known to the tracker.", "The assumption that a pre-defined ontology exists for the dialog and one can enumerate all possible values for each slot is often not valid in real-world scenarios.", "Even if such lists or dictionaries exist, they can be very large in size", "and highly dynamic (Xu and Hu, 2018).", "Therefore, unseen entities are common in dialogs, i.e., entities that are not observed during training, but appear in the testing stage.", "In Figure", "1(g), the entity Bellevue downtown is in the knowledge base but never appears in model training; a robust DST should be able to recognize it as a 
city/place, via generalizing from other similar entities learned during training.", "Out-of-Domain Utterances Most deployed task-oriented dialog systems are built for a closed set of target domains.", "Thus, they are fragile when dealing with out-of-domain (OOD) utterances (Lee and Shalyminov, 2019).", "Failure to detect OOD utterances often prevents the model from responding with an appropriate fallback action, hence leading to a frustrating user experience.", "Therefore, it is important to endow task bots with the ability to detect OOD utterances for special handling (Larson et al., 2019).", "For example, in Figure", "1(h), the user suggests an excursion to a task bot trained in college consulting, which is out of the bot's scope.", "The bot is expected to raise a flag to label the utterance as an outlier, and guide the user to focus on the current domain.", "The standard setting is sampled from MultiWOZ 2.0 (Budzianowski et al., 2018) but re-purposed in a few-shot learning setting.", "The language variations corpus is created by workers on Amazon Mechanical Turk based on the standard corpus.", "To maximize the quality, we require workers to be in the US locale and to have a minimum previous approval rate of 90%.", "Assignments are constructed at the turn level.", "Given a user utterance and associated dialog history, workers are required to answer four questions: what are the paraphrased, typo-containing, verbose, and simplified versions of the user utterance.", "Moreover, in each assignment, the workers are instructed to exactly mention the slot values in the answers if the given user utterance has them.", "We pay workers $0.50 per assignment and each assignment can be finished in one to two minutes.", "For the speech recognition errors setting, we employ the audio-level error simulation (Gopalakrishnan et al., 2020), which generates audio signals from texts, adds noise into the audio, and then decodes the audio with an ASR model to obtain hypotheses.", "In particular, we employ 
Microsoft Cognition text-to-speech service to synthesize audio signals.", "After injecting background noise into the audio signals, we use the speech recognition service to obtain a corpus with a Word Error Rate (WER) of 30%.", "For the reminder domain that is applied for unseen entity evaluation, we first simulate several dialogs as seed scenarios using an agenda-based simulator and then randomly replace the slots in the dialogs with new values.", "Similar to constructing the language variations corpus, we then hire workers to rewrite the corpus to be as diverse and realistic as possible.", "Finally, the out-of-domain corpus is developed following Lee and Shalyminov (2019).", "We randomly choose 50% of the utterances in DSTC (Henderson et al., 2014) for the Attraction domain as the training set.", "For the test set, besides utterances from DSTC , we also introduce utterances from a diverse set of domains like Stanford (Eric and Manning, 2017), Reddit , Twitter (Sordoni et al., 2015) to evaluate the capability of handling different out-of-domain utterances.", "A board of data researchers reviews all the collected data to ensure there are no ethical concerns.", "For baselines, we consider three representative methods, holding state-of-the-art positions on existing benchmarks such as MultiWOZ (Budzianowski et al., 2018).", "DAMD (Zhang et al., 2020) is a state-of-the-art modular system, where each dialog module is implemented using a neural network, and the whole system is trained in an end-to-end manner.", "GPT-2 represents a single multi-task learning model with impressive results on general language understanding and generation tasks.", "GPT-2 is an auto-regressive language model that leverages 12-24 layers of masked, multi-head self-attention Transformers.", "GPT-2 is pre-trained on the massive OpenWebText corpus (Radford et al., 2019).", "It has demonstrated superior performance on characterizing the human language data distribution and knowledge transfer.", "Given text 
prompts, GPT-2 can often generate fluent sentences.", "Its predecessor GPT (with a smaller model size and less training data) has shown impressive results on language understanding tasks.", "In this paper, we consider GPT-2 FT as the approach of directly fine-tuning the pre-trained GPT-2 on a specific domain.", "Hence, GPT-2 FT can be viewed as SOLOIST without grounded pre-training, and serves as a strong baseline for both the DST and E2E tasks.", "SOLOIST represents recent model variants (Ham et al., 2020; Hosseini-Asl et al., 2020) that parameterize the dialog system as a single auto-regressive model.", "SOLOIST subsumes different dialog modules (e.g. state tracker, dialog policy, response generator) into a single Transformer model.", "It has a similar capability to GPT-2 in understanding and generating natural language sentences but is pre-trained on large heterogeneous dialog corpora to gain the additional capability of grounding text responses in user goals and real-world knowledge for task completion (Peng et al., 2020a; Gao et al., 2020).", "For a detailed description, please see Section A in the Appendix.", "It is known that adversarial training can improve a model's adversarial robustness, which refers to a model's invariance to small (often imperceptible) perturbations of its inputs ( i.e., clean examples)", "(Madry et al., 2017; Miyato et al., 2018; Liu et al., 2020; Li et al., 2020b).", "Adversarial examples are produced by adding perturbations to clean examples to fool the predictions of a trained model the most.", "Though fundamentally different, one may view adversarial examples as resembling the variations in natural language to some extent.", "Inspired by this idea, we propose an adversarially robust SOLOIST model, denoted as SOLOIST Adv .", "Specifically, for a dialog turn x drawn from the training dataset D, and the SOLOIST neural model parameterized by θ, standard training minimizes the empirical risk: min_θ E_{x∼D} L(x; θ), where L(x; θ) is the 
SOLOIST learning objective defined in Appendix Section A. The key idea of adversarial training is to modify the objective by applying a small perturbation δ to the input word embeddings that maximizes the adversarial loss: min_θ E_{x∼D} max_{‖δ‖≤ε} L(x + δ; θ), where the inner maximization can be solved by running a number of projected gradient descent steps (Goodfellow et al., 2014; Bubeck, 2014).", "SOLOIST Adv is trained in a hybrid manner that combines standard training and adversarial training.", "It augments the training dataset with adversarial examples that add perturbations in the word embedding space of original dialog turns, which improve the model's robustness against noisy inputs that arguably cover language variations.", "In our experiments, SOLOIST Adv employs adversarial training in both the task-specific pre-training and fine-tuning stages.", "Training We leverage the pre-trained checkpoints from the corresponding work, and fine-tune them on RADDLE .", "For SOLOIST Adv , we apply 100k steps of adversarial training to the pre-trained checkpoints.", "Each domain is trained separately.", "We train our models using Adam with an initial learning rate of 5e-5 and batch size 1 for 20 epochs.", "We encourage subsequently submitted systems to devote the same computation effort in the fine-tuning stage, e.g., up to one hour of GPU time, for each model to ensure fair comparisons.", "Evaluation The RADDLE benchmark follows the same evaluation model as GLUE (Wang et al., 2018) or Kaggle.", "To evaluate a system on the benchmark, one must run the system on the provided test data for the tasks, then upload the results to the website http://aka.ms/raddle for scoring.", "The benchmark site shows per-task scores and a macro-average of those scores to determine a system's position on the leaderboard.", "The website also provides fine- and coarse-grained results on the robustness diagnostic datasets.", "We will provide human evaluation services for top-ranked submissions on a quarterly basis.", "The 
human evaluation protocol follows Peng et al. (2020a) and Li et al. (2020c).", "We first present the results of baseline methods across all tasks on the RADDLE benchmark in Table 2. As shown, GPT-2 FT fine-tuned with domain-specific dialog corpora outperforms the strong modular-based method DAMD.", "This highlights the efficacy of pre-trained language models.", "SOLOIST improves upon GPT-2 FT by over 10 points in terms of average score, and consistently performs better than GPT-2 FT across all the tasks.", "These strong results indicate that large-scale task-specific pretraining on dialog corpora is crucial for effective and robust task adaptation.", "However, the performance of SOLOIST drops on robustness checklist tasks.", "Benefiting from adversarial training, SOLOIST Adv outperforms SOLOIST by about 2 points.", "Table 2 shows the overall performance of DST and E2E modeling under different variation settings.", "Language Variations It is noticeable that all the models incur significant performance drops under each type of variation.", "Among all variation types, Typos has the most substantial impact on both JGA and the Combined score, resulting in a drop of 10 to 20 points in performance.", "This is expected as misspelled keywords pose significant challenges for state tracking.", "The influence of the other three types of variations is also prominent.", "The results reveal that existing SoTA dialog models trained on limited task-specific examples are not robust enough to handle various types of user utterances.", "Adversarial training improves robustness to language variations, boosting performance across all the language variation tasks.", "Speech Errors We observe a clear degradation in all metrics for all models.", "This shows that during inference, models trained on textual data are sensitive and not robust to actual ASR hypotheses introduced in dialog history.", "Unseen Entities Without task-specific pretraining, GPT-2 FT only achieves less than 30% JGA and 51.20 dialog 
act accuracy even on a simple domain with most of the common entity values.", "SOLOIST performs significantly better than GPT-2 FT by achieving 69.05% JGA and 96.98 dialog act accuracy but remains imperfect.", "SOLOIST Adv performs similarly to SOLOIST , which is expected as adversarial training does not provide additional knowledge.", "These results imply that task-specific pre-training can improve the generalization capability of models but is still far from enough for production environments.", "Out-of-Domain Utterances It is non-trivial for conventional modular-based dialog systems to handle OOD detection.", "It often requires an additional component to classify whether a user utterance is in-domain or not.", "As such, we omit the result of DAMD in our experiments.", "GPT-2 FT achieves an 83.96 F1 score while SOLOIST has a 96.18 F1 score, which shows that task-specific pre-training can improve the robustness of models to OOD utterances.", "It is interesting to observe that adversarial training hurts the model's performance on OOD detection.", "We conjecture that adversarial training enables models to tolerate disturbances on the inputs. [Figure 2: success rates of team submissions (Teams 1-5) under corpus and human evaluation in DSTC-8 and DSTC-9, comparing non-pre-trained and pre-trained models.]", "Finally, it is worth pointing out some important trends in the dialog research community, based on the DSTC challenge (Kim et al., 2019; Gunasekara et al., 2020) in the last 2 years (Figure 2).", "In DSTC8 (Kim et al., 2019), the winning submission by Team 5 was the only one that used pretrained models (GPT-2).", "When moving from corpus evaluation to human evaluation, it exhibits the least performance drop relative to other submissions, which is strong evidence of the robustness of pre-trained models.", "By the time of DSTC9 (Gunasekara et al., 2020), the community had witnessed a general trend shift from modular systems to pre-trained end-to-end architectures.", "However, the significant performance gap between corpus evaluation and human evaluation indicates that pre-trained methods remain sensitive to noisy inputs.", "Such observations underscore the importance of robustness-oriented design and evaluation, for which RADDLE fills a major void.", "We introduce RADDLE , a platform and collection of resources for evaluating and analyzing task-oriented dialog systems.", "We confirm (1) the utility of grounded pre-training and transfer learning methods in dialog systems: pre-training improves generalization in a limited data setting, and (2) adversarial training improves robustness, but still leaves room for improvement.", "When evaluating these models on our diagnostic dataset, we find that they fail (often spectacularly) on many robustness test cases, suggesting possible avenues for future work.", "In summary, the question of how to design unified, efficient, robust models remains largely unexplored, and we believe that RADDLE can provide fertile soil for addressing this challenge.", "We gratefully acknowledge the entire Project Philly team inside Microsoft, who provided the computing 
platform for our research.", "We also thank the anonymous reviewers whose suggestions helped clarify this work.", "The collection of our RADDLE dataset is consistent with the terms of use of any sources and the original authors' intellectual property and privacy rights.", "The dataset is collected with Amazon Mechanical Turk, and each HIT requires up to two minutes to complete.", "The requested inputs are general language variations, and no privacy-related information is collected during data collection.", "Each HIT was paid 0.5 USD, with the hourly pay being 15% higher than the minimum wage requirements in our area.", "A board of data researchers has reviewed all the collected data to ensure there are no ethical concerns, e.g., toxic language and hate speech." ]
[ "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We introduce a grey-box adversarial attack and defence framework for sentiment classification.", "We address the issues of differentiability, label preservation and input reconstruction for adversarial attack and defence in one unified framework.", "Our results show that once trained, the attacking model is capable of generating high-quality adversarial examples substantially faster (one order of magnitude less in time) than state-of-the-art attacking methods.", "These examples also preserve the original sentiment according to human evaluation.", "Additionally, our framework produces an improved classifier that is robust in defending against multiple adversarial attacking methods.", "Code is available at: https://github.com/ibm-aur-nlp/adv-def-text-dist. 1 Introduction: Recent advances in deep neural networks have created applications for a range of different domains.", "In spite of the promising performance achieved by neural models, there are concerns around their robustness, as evidence shows that even a slight perturbation to the input data can fool these models into producing wrong predictions (Goodfellow et al., 2014; Kurakin et al., 2016).", "Research in this area is broadly categorised as adversarial machine learning, and it has two sub-fields: adversarial attack, which seeks to generate adversarial examples that fool target models; and adversarial defence, whose goal is to build models that are less susceptible to adversarial attacks.", "A number of adversarial attacking methods have been proposed for image recognition (Goodfellow et al., 2014), NLP (Zhang et al., 2020) and speech recognition (Alzantot et al., 2018a).", "These methods are generally categorised into three types: white-box, black-box and grey-box attacks.", "(This work was completed during the authors' employment at IBM Research Australia.)", "White-box attacks assume full access to the target models and often use the gradients from the target models to guide the crafting of 
adversarial examples.", "Black-box attacks, on the other hand, assume no knowledge of the architecture of the target model and perform attacks by repetitively querying the target model.", "Different from the previous two, grey-box attacks train a generative model to generate adversarial examples and only assume access to the target model during the training phase.", "The advantages of grey-box attacking methods include higher time efficiency; no assumption of access to the target model during the attacking phase; and easier integration into adversarial defending algorithms.", "However, due to the discrete nature of texts, designing grey-box attacks on text data remains a challenge.", "In this paper, we propose a grey-box framework that generates high-quality textual adversarial examples while simultaneously training an improved sentiment classifier for adversarial defence.", "Our contributions are summarised as follows: We propose to use Gumbel-softmax (Jang et al., 2016) to address the differentiability issue and combine the adversarial example generator and the target model into one unified trainable network.", "We propose multiple competing objectives for adversarial attack training so that the generated adversarial examples can fool the target classifier while maintaining similarity with the input examples.", "We consider a number of similarity measures to define a successful attacking example for texts, such as lexical and semantic similarity and label preservation.", "To help the generative model reconstruct input sentences as faithfully as possible, we introduce a novel but simple copy mechanism. Footnote 1: without constraint on label preservation, simply flipping the ground-truth sentiment (e.g. 
the movie is great → the movie is awful) can successfully change the output of a sentiment classifier even though it is not a useful adversarial example.", "The copy mechanism allows the decoder to selectively copy words directly from the input.", "We assess the adversarial examples not only on attacking performance but also on content similarity, fluency and label preservation, using both automatic and human evaluations.", "We simultaneously build an improved sentiment classifier while training the generative (attacking) model.", "We show that a classifier built this way is more robust than adversarial defence based on adversarial example augmentation.", "Most white-box methods are gradient-based, where some form of the gradients (e.g. the sign) with respect to the target model is calculated and added to the input representation.", "In image processing, the fast gradient sign method (FGSM; Goodfellow et al. (2014)) is one of the first studies in attacking image classifiers.", "Its variations include Kurakin et al. (2016) and Dong et al. (2018).", "These gradient-based methods cannot be applied to text directly because perturbed word embeddings do not necessarily map to valid words.", "Methods such as DeepFool (Moosavi-Dezfooli et al., 2016) that rely on perturbing the word embedding space face similar roadblocks.", "To address the issue of embedding-to-word mapping, Gong et al. 
(2018) propose to use nearest-neighbour search to find the closest words to the perturbed embeddings.", "However, this method treats all tokens as equally vulnerable and replaces all tokens with their nearest neighbours, which leads to nonsensical, word-salad outputs.", "A solution to this is to replace tokens one-by-one in order of their vulnerability while monitoring the change in the output of the target model.", "The replacement process stops once the target prediction has changed, minimising the number of changes.", "Examples of white-box attacks that utilise this approach include TYC (Tsai et al., 2019) and HOTFLIP (Ebrahimi et al., 2017).", "Different from white-box attacks, black-box attacks do not require full access to the architecture of the target model.", "Chen et al. (2017) propose to estimate the loss function of the target model by querying its label probability distributions, while Papernot et al. (2017) propose to construct a substitute of the target model by querying its output labels.", "The latter approach is arguably more realistic because in most cases attackers only have access to output labels rather than their probability distributions.", "There are relatively few studies on black-box attacks for text.", "An example is TEXTFOOLER, proposed by Jin et al. (2019), which generates adversarial examples by querying the label probability distribution of the target model.", "Another is proposed by Alzantot et al. (2018b), where a genetic algorithm is used to select words for substitution.", "Grey-box attacks require an additional training process during which full access to the target model is assumed.", "However, post-training, the model can be used to generate adversarial examples without querying the target model.", "Xiao et al. 
(2018) introduce a generative adversarial network to generate the image perturbation from a noise map.", "It is, however, not trivial to adapt the method for text directly.", "This is because text generation involves discrete decoding steps, and as such the joint generator and target model architecture is non-differentiable.", "In terms of adversarial defence, the most straightforward method is to train a robust model on data augmented by adversarial examples.", "Recently, more methods have been proposed for text, such as those based on interval bound propagation (Jia et al., 2019; Huang et al., 2019) and Dirichlet neighborhood ensemble (Zhou et al., 2020).", "The purpose of adversarial attack is to slightly perturb an input example x for a pre-trained target model (e.g. a sentiment classifier) f so that f(x′) ≠ y, where y is the ground truth of x.", "The perturbed example x′ should look similar to x, which can be measured differently depending on the domain of the input examples.", "We propose a grey-box attack and defence framework which consists of a generator G (updated), and two copies of a pre-trained target classifier: a static classifier C and an updated/augmented classifier C*.", "During the training phase, the output of G is directly fed to C and C* to form a joint architecture.", "Post-training, the generator G is used independently to generate adversarial examples (adversarial attack); while the augmented classifier C* is an improved classifier with increased robustness (adversarial defence).", "Footnote 2: C and C* start with the same pre-trained weights, although only C* is updated during training.", "The training phase is divided into attacking steps and defending steps, where the former updates only the generator G and learns to introduce slight perturbation to the input by maximising the objective function of the target model C.", "The latter updates C* and G by feeding in both original examples and adversarial examples generated by G.", "Here, 
the adversarial examples are assumed to share the same label with their original examples.", "Effectively, the defending steps are training an improved classifier with data augmented by adversarial examples.", "Generating text with discrete decoding steps (e.g. argmax) makes the joint architecture non-differentiable.", "Therefore we propose to use Gumbel-softmax (Jang et al., 2016) to approximate the categorical distribution of the discrete output.", "For each generation step i, instead of sampling a word from the vocabulary, we draw a Gumbel-softmax sample x′_i, which is a full probability distribution over words in the vocabulary: the probability of the generated word is close to 1.0 and that of other words close to zero.", "We obtain the input embedding for C and C* by multiplying the sample x′_i with the word embedding matrix M_C of the target model C: x′_i · M_C.", "Figure 1 illustrates our grey-box adversarial attack and defence framework for text.", "The generator G can be implemented as an auto-encoder or a paraphrase generator, essentially differentiated by their data conditions: the former uses the input sentences as the target, while the latter uses paraphrases (e.g. PARANMT-50M (Wieting and Gimpel, 2017)).", "In this paper, we implement G as an auto-encoder, as our preliminary experiments found that a pre-trained paraphrase generator performs poorly when adapted to our test domain, e.g. 
Yelp reviews.", "Our auto-encoder G generates an adversarial example given an input example.", "It tries to reconstruct the input example but is also regulated by an adversarial loss term that discourages it from doing so.", "The objectives for the attacking step are given as follows: L_adv = log p_C(y | x′, θ_C, θ_G) (1); L_s2s = log p_G(x | x, θ_G) (2); L_sem = cos((1/n) Σ_{i=0}^{n} emb(x_i), (1/n) Σ_{i=0}^{n} emb(x′_i)) (3), where L_adv is essentially the negative cross-entropy loss of C; L_s2s is the sequence-to-sequence loss for input reconstruction; and L_sem is the cosine similarity between the averaged embeddings of x and x′ (n = number of words).", "Here, L_s2s encourages x′ (produced at test time) to be lexically similar to x and helps produce coherent sentences, and L_sem promotes semantic similarity.", "We weigh the three objective functions with two scaling hyper-parameters λ1 and λ2, and the total loss is: L = λ1(λ2 L_s2s + (1 − λ2) L_sem) + (1 − λ1) L_adv. We denote the auto-encoder-based generator trained with these objectives as AE.", "An observation from our preliminary experiments is that the generator tends to perform imbalanced attacks among different classes.", "(e.g. AE learns to focus entirely on attacking in one direction, e.g. positive-to-negative or negative-to-positive).", "We found a similar issue in white-box attack methods such as FGSM (Goodfellow et al., 2014) and DeepFool (Moosavi-Dezfooli et al., 2016).", "To address this issue, we propose to modify L_adv to be the maximum per-class loss in each batch, i.e. 
L_adv = max_{t=1}^{|C|} (L^t_adv) (4)", "where L^t_adv refers to the adversarial loss of examples in the t-th class and |C| the total number of classes.", "We denote the generator trained with this alternative loss as AE+BAL.", "For adversarial defence, we use the same objective functions, with the following exception: we replace L_adv in Equation (1) with the objective function of the classifier C*, i.e. L_def = log p_C*([y, y] | [x, x′], θ_C*, θ_G) (5). We train the model C* using both original and adversarial examples (x and x′) with their original label (y) to prevent C* from overfitting to the adversarial examples.", "One of the main challenges of generating a textual adversarial example is to preserve its original ground truth label, which we refer to as label preservation.", "It is less of an issue in computer vision because slight noise added to an image is unlikely to change how we perceive the image.", "In text, however, slight perturbation to a sentence could completely change its ground truth.", "We use sentiment classification as context to explain our approach for label preservation.", "The goal of adversarial attack is to generate an adversarial sentence whose sentiment is flipped according to the target model prediction but preserves the original ground truth sentiment from the perspective of a human reader.", "We propose two ways to help label preservation.", "The first approach is task-agnostic, i.e. 
it can work for any classification problem, while the second is tailored for sentiment classification.", "Label smoothing (+LS).", "We observe that the generator has a tendency to produce adversarial examples that receive high-confidence, opposite-sentiment scores from the static classifier C.", "We explore the use of label smoothing (Müller et al., 2019) to force the generator to generate examples that are closer to the decision boundary, discouraging it from completely changing the sentiment.", "We incorporate label smoothing in Eq.", "1 by redistributing the probability mass of the true label uniformly to all other labels.", "Formally, the smoothed label y_ls = (1 − ε) y + ε/K, where ε is a hyper-parameter and K is the number of classes.", "For example, when performing negative-to-positive attack, instead of optimising G to produce adversarial examples with label distribution {pos: 1.0, neg: 0.0} (from C), label distribution {pos: 0.6, neg: 0.4} is targeted.", "A generator trained with this additional constraint is denoted with the +LS suffix.", "Counter-fitted embeddings (+CF).", "Mrkšić et al. (2016) found that unsupervised word embeddings such as GloVe (Pennington et al., 2014) often do not capture synonymy and antonymy relations (e.g. cheap and pricey have high similarity).", "The authors propose to post-process pre-trained word embeddings with lexical resources (e.g. 
WordNet) to produce counter-fitted embeddings that better capture these lexical relations.", "To discourage the generator G from generating words with opposite sentiments, we experiment with training G with counter-fitted embeddings.", "Models using counter-fitted embeddings are denoted with the +CF suffix.", "White-box or black-box attacking methods are based on adding, removing, or replacing tokens in input examples.", "Maintaining similarity with the original examples is therefore easier for them than for grey-box methods, which generate adversarial examples word-by-word from scratch.", "We introduce a simple copy mechanism that helps the grey-box attack produce faithful reconstructions of the original sentences.", "We incorporate a static copy mask into the decoder so that it only generates words for positions that have not been masked.", "E.g., given the input sentence x = [w_0, w_1, w_2], target x = [w_0, w_1, w_2], and mask m = [1, 0, 1], at test time the decoder will copy from the target for the first (w_0) and third (w_2) input tokens to produce w_0 and w_2, but for the second input token (w_1) it will decode from the vocabulary.", "During training, we compute cross-entropy only for the unmasked input words.", "The static copy mask is obtained from one of the pre-trained target classifiers, C-LSTM (Section 4.2).", "C-LSTM is a classifier with a bidirectional LSTM followed by a self-attention layer to weigh the LSTM hidden states.", "We rank the input words based on the self-attention weights and create a copy mask such that only the positions corresponding to the top-N words with the highest weights are generated from the decoder.", "Generally, sentiment-heavy words such as awesome and bad are more likely to have higher weights in the self-attention layer.", "This self-attention layer can be seen as an importance ranking function (Morris et al., 2020b) that determines which tokens should be replaced, or replaced first.", "Models with the copy mechanism are denoted 
with the +CPY suffix.", "We conduct our experiments using the Yelp review dataset.", "We binarise the ratings, use spaCy for tokenisation, and keep only reviews with at most 50 tokens (hence the dataset is denoted as yelp50).", "We split the data in a 90/5/5 ratio and downsample the positive class in each set to be equivalent to the negative class, resulting in 407,298, 22,536 and 22,608 examples in the train/dev/test sets respectively.", "For the target classifiers (C and C*), we pretrain three sentiment classification models using yelp50: C-LSTM (Wang et al., 2016), C-CNN (Kim, 2014) and C-BERT.", "C-LSTM is composed of an embedding layer, 2-layer bidirectional LSTMs, a self-attention layer, and an output layer.", "C-CNN has a number of convolutional filters of varying sizes, and their outputs are concatenated, pooled and fed to a fully-connected layer followed by an output layer.", "Finally, C-BERT is obtained by fine-tuning the BERT-Base model (Devlin et al., 2018) for sentiment classification.", "We tune the learning rate, batch size, number of layers and number of hidden units for all classifiers; we additionally tune the number of attention units for C-LSTM, and the convolutional filter sizes and dropout rates for C-CNN.", "For the auto-encoder, we pretrain it to reconstruct sentences in yelp50.", "During pre-training, we tune the learning rate, batch size, number of layers and number of hidden units.", "During adversarial attack training, we tune λ1 and λ2, and the learning rate lr.", "We also test different temperatures τ for Gumbel-softmax sampling and found that τ = 0.", "1 performs the best.", "All word embeddings are fixed.", "More hyper-parameter and training configurations are detailed in the supplementary material.", "Most of the existing adversarial attacking methods have focused on improving the attack success rate.", "A recent study shows that with constraints adjusted to better preserve semantics and grammaticality, the attack success rate drops by over 70 
percentage points (Morris et al., 2020a).", "In this paper, we want to understand, for a given success rate, the quality (e.g. fluency, content/label preservation) of the generated adversarial samples.", "Therefore, we tune all attacking methods to achieve the same levels of attack success rate, and compare the quality of the generated adversarial examples.", "Footnote 6: pre-trained BLEU scores are 97.7 and 96.8 on yelp50 using GloVe and counter-fitted embeddings, respectively.", "Note that results for adversarial attack are obtained using the G + C joint architecture, while results for adversarial defence are achieved by the G + C + C* joint architecture.", "In addition to measuring how well the adversarial examples fool the sentiment classifier, we also use a number of automatic metrics to assess other aspects of adversarial examples, following Xu et al. (2020):", "Attacking performance.", "We use the standard classification accuracy (ACC) of the target classifier (C) to measure the attacking performance of adversarial examples.", "Lower accuracy means better attacking performance.", "Similarity.", "To assess the textual and semantic similarity between the original and corresponding adversarial examples, we compute BLEU (Papineni et al., 2002) and USE (Cer et al., 2018).", "For both metrics, higher scores represent better performance.", "Fluency.", "To measure the readability of generated adversarial examples, we use the acceptability score (ACPT) proposed by Lau et al. 
(2020), which is based on normalised sentence probabilities produced by XLNet (Yang et al., 2019).", "Higher scores indicate better fluency.", "Transferability.", "To understand the effectiveness of the adversarial examples in attacking another unseen sentiment classifier (TRF), we evaluate the accuracy of C-BERT using adversarial examples that have been generated for attacking the classifiers C-LSTM and C-CNN.", "Lower accuracy indicates better transferability.", "Attacking speed.", "We measure each attacking method on the amount of time it takes on average (in seconds) to generate an adversarial example.", "Comparison between AE variants.", "We first present results on the development set, where we explore different variants of the auto-encoder (generator) in the grey-box model.", "AE serves as our base model; the suffix +BAL denotes the use of the alternative L_adv (Section 3.2), +LS label smoothing (Section 3.3), +CF counter-fitted embeddings (Section 3.3), and +CPY the copy mechanism (Section 3.4).", "examples that annotators can make sense of during human evaluation.", "Footnote 8: USE is calculated as the cosine similarity between the original and adversarial sentence embeddings produced by the universal sentence encoder (Cer et al., 2018).", "approximately 70%–80% accuracy for the target classifier C (C-LSTM).", "For ACC and BLEU, we additionally report the performance for the positive and negative sentiment class separately.", "To understand how well the adversarial examples preserve the original sentiments, we recruit two annotators internally to annotate a small sample of adversarial examples produced by each of the auto-encoder variants.", "AGR and DAGR indicate the percentage of adversarial examples where they agree and disagree with the original sentiments, and UKN where the annotators are unable to judge their sentiments.", "Looking at the POS and NEG performance of AE and AE+BAL, we can see that AE+BAL is effective in creating a more balanced performance for 
positive-to-negative and negative-to-positive attacks.", "We hypothesise that AE learns to perform single-direction attacks because it is easier to generate positive (or negative) words for all input examples and sacrifice performance in the other direction to achieve a particular attacking performance.", "That said, the low AGR score (0.12) suggests that AE+BAL adversarial examples do not preserve the ground truth sentiments.", "The introduction of label smoothing (AE+LS) and counter-fitted embeddings (AE+LS+CF) appears to address label preservation, as AGR improves from 0.12 to 0.46 to 0.64.", "Adding the copy mechanism (AE+LS+CF+CPY) also provides some marginal improvement, although the more significant benefit is in sentence reconstruction: a boost of 5 BLEU points.", "Note that we also experimented with incorporating +BAL for these variants, but found minimal benefit.", "For the rest of the experiments, we use AE+LS+CF+CPY as our model to benchmark against other adversarial methods.", "Comparison with baselines.", "We next present results on the test set in Table", "2. 
The benchmark methods are: TYC, HOTFLIP, and TEXTFOOLER (described in Section 2).", "We choose 3 ACC thresholds as the basis for comparison: T1, T2 and T3, which correspond to approximately 80-90%, 70-80% and 60-70% accuracy.", "Generally, all models trade off example quality for attacking rate, as indicated by the lower BLEU, USE and ACPT scores at T3.", "Comparing C-LSTM and C-CNN, we found that C-CNN is generally an easier classifier to attack, as BLEU and USE scores for the same threshold are higher.", "Interestingly, TEXTFOOLER appears to be ineffective for attacking C-CNN, as we are unable to tune TEXTFOOLER to generate adversarial examples producing ACC below the T1 threshold.", "Comparing the attacking models and focusing on C-LSTM, TEXTFOOLER generally has the upper hand.", "AE+LS+CF+CPY performs relatively well, usually not far behind TEXTFOOLER.", "HOTFLIP produces good BLEU scores, but substantially worse USE scores.", "TYC is the worst-performing model, although its adversarial examples are good at fooling the unseen classifier C-BERT (lower TRF than all other models), suggesting that there may be a (negative) correlation between in-domain performance and transferability.", "Overall, most methods do not produce adversarial examples that are very effective at attacking C-BERT.", "Case study.", "In Table 3, we present two randomly selected adversarial examples (positive-to-negative and negative-to-positive) for which all five attacking methods successfully fool C-LSTM.", "TYC produces largely gibberish output.", "HOTFLIP tends to replace words with words of low semantic similarity to the originals (e.g. replacing hard with ginko), which explains its high BLEU scores and low USE and ACPT scores.", "Both TEXTFOOLER and AE+LS+CF+CPY generate adversarial examples that are fluent and generally retain their original meanings.", "These observations agree with the quantitative performance we see in Table", "2. 
Time efficiency.", "Lastly, we report the time it takes for these methods to perform attacks on yelp50 at T2.", "The average time taken per example (on a V100 GPU) is: 1.2s for TYC; 1s for TEXTFOOLER; 0.3s for HOTFLIP; and 0.03s for AE+LS+CF+CPY.", "TYC and TEXTFOOLER are the slowest methods, while HOTFLIP is substantially faster.", "Our model AE+LS+CF+CPY is the fastest method: about an order of magnitude faster than the next best method, HOTFLIP.", "Footnote 9: we tune hyper-parameters for each attacking method to achieve the 3 attacking thresholds.", "It should be noted, though, that our grey-box method requires an additional training step, which can be conducted offline.", "Automatic metrics provide a proxy to quantify the quality of the adversarial examples.", "To validate that these metrics work, we conduct a crowdsourcing experiment on Appen.", "We test the 3 best-performing models (HOTFLIP, TEXTFOOLER and AE+LS+CF+CPY) on 2 attacking thresholds (T2 and T3).", "For each method, we randomly sampled 25 positive-to-negative and 25 negative-to-positive successful adversarial examples.", "For quality control, we annotate 10% of the samples as control questions. (Footnote 11: https://www.appen.com)", "Workers are first presented with a 10-question quiz, and only those who pass the quiz with at least 80% accuracy can work on the task.", "We monitor work quality throughout the annotation process by embedding a quality-control question in every 10 questions, and stop workers from continuing on the task whenever their accuracy on the control questions falls below 80%.", "We restrict our jobs to workers in the United States, United Kingdom, Australia, and Canada.", "We ask crowdworkers the following questions:", "1. Is snippet B a good paraphrase of snippet A?", "○", "Yes ○ Somewhat yes ○ No", "2. How natural does the text read?", "○", "Very unnatural ○ Somewhat natural ○ Natural", "3. 
What is the sentiment of the text?", "Positive ○ Negative ○ Cannot tell", "We display both the original and adversarial examples for question 1, and only the adversarial example for questions 2 and 3.", "As a baseline, we also select 50 random original sentences from the test set and collect human judgements for these sentences on questions 2 and 3.", "We present the human evaluation results in Figure 2.", "Looking at the original examples (top-2 bars), we see that they are fluent and their perceived sentiments (by the crowdworkers) have a high agreement with their original sentiments (by the review authors).", "Comparing the 3 methods, TEXTFOOLER produces adversarial sentences that are the most similar to the originals (green) and more natural (blue) than the other methods'.", "HOTFLIP is the least impressive method here, and these observations agree with the scores of automatic metrics in Table", "2. On label preservation (red), however,", "our method AE+LS+CF+CPY has the best performance, implying that the generated adversarial sentences largely preserve the original sentiments.", "The consistency between the automatic and human evaluation results indicates that the USE and ACPT scores properly capture semantic similarity and readability, two important text-specific evaluation aspects.", "Here we look at how well the generated adversarial examples can help build a more robust classifier.", "Unlike the attacking performance experiments (Section 4.3), here we include the augmented classifier (C*) as part of the grey-box training.", "The augmented classifier can be seen as an improved model compared to the original classifier C.", "To validate the performance of adversarial defence, we evaluate the accuracy of the augmented classifiers against different attacking methods.", "We compare our augmented classifier C* to augmented classifiers adversarially trained with adversarial examples generated by HOTFLIP and 
TEXTFOOLER.", "Our preliminary results show that training C* without the copy mechanism provides better defending performance, therefore we use the variant without +CPY here. (Footnote 12: during training, we perform one attacking step for every two defending steps.)", "For fair comparison, our augmented classifier (C*) is obtained by training the generator (G) to produce an attacking performance of T2 accuracy (70%) on the static classifier (C).", "For the other two methods, we train an augmented version of the classifier by feeding the original training data together with the adversarial examples generated by HOTFLIP and TEXTFOOLER with the same T2 attacking performance; these two classifiers are denoted as C_TEXTFOOLER and C_HOTFLIP.", "At test time, we attack the three augmented classifiers using TYC, HOTFLIP, TEXTFOOLER and AE+LS+CF, and evaluate their classification accuracy.", "Results are presented in Table 4.", "The second row (Original Perf.) indicates the performance when we use the original test examples as input to the augmented classifiers.", "We see a high accuracy here, indicating that the augmented classifiers still perform well on the original data.", "Comparing the different augmented classifiers, our augmented classifier C* outperforms the other two in defending against different adversarial attacking methods (it is particularly good against HOTFLIP).", "It produces the largest classification improvement compared to the original classifier C (0.7, 21.8, 2.9 and 16.0 points against adversarial examples created by TYC, HOTFLIP, TEXTFOOLER and AE+LS+CF respectively).", "Interestingly, the augmented classifier trained with HOTFLIP adversarial examples (C_HOTFLIP) produces a more vulnerable model, as it has lower accuracy compared to the original classifier (C).", "We suspect this is a result of training with low-quality adversarial examples, which introduce more noise during adversarial defending.", "Training with TEXTFOOLER examples (C_TEXTFOOLER) helps, 
although most of its gain is in defending against other attacking methods (HOTFLIP and AE+LS+CF).", "Overall, our framework produces a more robust classifier, compared to the baseline approach of training a classifier using data augmented by adversarial examples.", "In this paper, we proposed a grey-box adversarial attack and defence framework for sentiment classification.", "Our framework combines a generator with two copies of the target classifier: a static and an updated model.", "Once trained, the generator can be used for generating adversarial examples, while the augmented (updated) copy of the classifier is an improved model that is less susceptible to adversarial attacks.", "Our results demonstrate that the generator is capable of producing high-quality adversarial examples that preserve the original ground truth and is approximately an order of magnitude faster in creating adversarial examples compared to state-of-the-art attacking methods.", "Our framework of building an improved classifier together with an attacking generator is also shown to be more effective than the baseline approach of training a classifier using data augmented by adversarial examples.", "The combined adversarial attack and defence framework, though only evaluated on sentiment classification, should adapt easily to other NLP problems (except for the counter-fitted embeddings, which are designed for sentiment analysis).", "This framework makes it possible to train adversarial attacking models and defending models simultaneously for NLP tasks in an adversarial manner.", "For the human evaluation in Section 4.3.3, each assignment was paid $0.06 and estimated to take 30 seconds to complete, which gives an hourly wage of $7.25 (= US federal minimum wage).", "An assignment refers to scoring the sentiment/coherence of a sentence, or scoring the semantic similarity of a pair of sentences.", "Our research has obvious ethical considerations, in that our adversarial generation technology can be extended and used to attack 
NLP systems at large.", "That said, this is a general concern for any form of adversarial learning and is not unique to our research.", "The general argument for furthering research in adversarial learning is that it advances our understanding of the vulnerabilities of machine learning models, paving the path towards building safer and more secure models.", "Additionally, our grey-box framework is arguably better for defense (i.e. improving a machine learning model) than for offense (i.e. attacking a machine learning model), as it requires access to the architecture of the target model to learn how to generate adversarial examples, which is not a realistic condition if we were to use it to attack a live system.", "In contrast, such a condition is less of an issue if we are using it to improve the robustness of a system that we are developing." ]
[ "abstain", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "objective", "method", "result", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", 
"abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain" ]
[ "Although transformers are remarkably effective for many tasks, there are some surprisingly easy-looking regular languages that they struggle with.", "Hahn shows that for languages where acceptance depends on a single input symbol, a transformer's classification decisions become less and less confident (that is, with cross-entropy approaching 1 bit per string) as input strings get longer and longer.", "We examine this limitation using two languages: PARITY , the language of bit strings with an odd number of 1 s, and FIRST , the language of bit strings starting with a 1 .", "We demonstrate three ways of overcoming the limitation suggested by Hahn's lemma.", "First, we settle an open question by constructing a transformer that recognizes PARITY with perfect accuracy, and similarly for FIRST .", "Second, we use layer normalization to bring the cross-entropy of both models arbitrarily close to zero.", "Third, when transformers need to focus on a single position, as for FIRST , we find that they can fail to generalize to longer strings; we offer a simple remedy to this problem that also improves length generalization in machine translation.", "Although transformers (Vaswani et al., 2017) are remarkably effective for many tasks, there are some surprisingly easy-looking formal languages that they struggle with.", "Hahn (2020) tries to explain some of these by showing (his Lemma 5) that changing a single input symbol only changes the output of a transformer encoder by O(1/n) , where n is the input string length.", "Thus, for a language where acceptance depends on a single input symbol, a transformer might accept or reject strings with perfect accuracy, but for large n , it must do so with low confidence, giving accepted strings a probability just above 1/2 and rejected strings a probability just below 1/2 .", "More precisely, as n increases, the cross-entropy approaches its worst possible value of 1 bit per string.", "Here, we examine this limitation using two simple regular languages: 
PARITY = { w ∈ Σ* | w has an odd number of 1 s } and FIRST = { w ∈ Σ* | w_1 = 1 } , where (here and throughout the paper) Σ = { 0 , 1 } .", "Hahn's lemma applies to PARITY because the network must attend to all the symbols of the string, and a change in any one of them changes the correct answer.", "We have chosen FIRST as one of the simplest examples of a language that the lemma applies to.", "It only requires attention on the first symbol, but the lemma still applies because a change in this symbol changes the correct answer.", "Although the lemma might be interpreted as limiting the ability of transformers to recognize these languages, we show three ways that this limitation can be overcome.", "First, we show by explicit constructions that transformers do in fact exist that can recognize both languages with perfect accuracy for arbitrary lengths.", "We have implemented these constructions and verified them experimentally (§3).", "As predicted by Hahn's lemma, our constructed transformers have cross-entropy that approaches 1 bit (that is, just barely better than random guessing) as input length increases.", "But we show that by adding layer normalization, the cross-entropy can be made arbitrarily close to zero, independent of string length (§4).", "In practice, we find, like Bhattamishra et al. 
(2020a), that transformers cannot learn PARITY .", "Perhaps more surprisingly, when learning FIRST , transformers can have difficulty generalizing from shorter strings to longer strings.", "Although this is not a logical consequence of Hahn's lemma, it is a consequence of the behavior that Hahn's lemma predicts.", "Fortunately, this problem can be fixed with a simple modification, multiplying attention logits by log n .", "This modification also improves length generalization in machine translation (§5).", "If φ is a true-or-false statement, we write I[φ] for 1 if φ is true and 0 otherwise.", "For any m, n > 0, we write 0^{m×n} for the zero matrix and I^{n×n} for the identity matrix.", "Following Hahn (2020), we consider transformer encoders with a sigmoid output layer on a single position.", "Differently from Hahn (2020), but in line with common practice (Devlin et al., 2019), we prepend a token CLS (for classification) and use the encoder output at this token's position for classifying the string.", "The input to the network is a string w ∈ Σ* .", "Let n = |w| + 1, let w_0 = CLS , and let w_i be the i-th symbol of w .", "The input layer has a word embedding and positional encodings, WE : Σ → R^d and PE : N → R^d , which are used to compute input vectors for i = 0 , . . . , n − 1 : a_{0,i} = WE(w_i) + PE(i) .", "The word embeddings are typically learned, while the positional encodings vary somewhat.", "Originally (Vaswani et al., 2017), they were fixed and defined in terms of sine and cosine waves, but they can also be learned (Gehring et al., 2017), in which case they are defined only up to some maximum position.", "Here, we allow ourselves to define PE as an arbitrary function on all positions.", "It would seem that to remain in the spirit of the original paper, PE should be easy to compute, independent of n , and parallelizable over positions.", "The body of the encoder is a stack of L layers, each of which has a self-attention sublayer followed by a position-wise feedforward sublayer.", "For ℓ = 1 , . . . , L , layer ℓ is defined as follows, where h = 1 , . . . , H , and i = 0 , . . . 
, n − 1 : q_{ℓ,h,i} = W^Q_{ℓ,h} a_{ℓ−1,i} , K_{ℓ,h} = [ W^K_{ℓ,h} a_{ℓ−1,0} ⋯ W^K_{ℓ,h} a_{ℓ−1,n−1} ]^⊤ , V_{ℓ,h} = [ W^V_{ℓ,h} a_{ℓ−1,0} ⋯ W^V_{ℓ,h} a_{ℓ−1,n−1} ]^⊤ , c_{ℓ,i} = LN ( Σ_{h=1}^{H} Att ( q_{ℓ,h,i} , K_{ℓ,h} , V_{ℓ,h} ) + a_{ℓ−1,i} ) , h_{ℓ,i} = max ( 0 , W^{F,1}_ℓ c_{ℓ,i} + b^{F,1}_ℓ ) , a_{ℓ,i} = LN ( W^{F,2}_ℓ h_{ℓ,i} + b^{F,2}_ℓ + c_{ℓ,i} ) , where boldface lowercase letters stand for vectors in R^d and boldface uppercase letters stand for matrices in R^{d×d} .", "The learned parameters of the model are the W 's and b 's.", "The function Att is scaled dot-product attention, defined as Att : R^d × R^{n×d} × R^{n×d} → R^d , Att ( q , K , V ) = V^⊤ softmax ( Kq / √d ) , where the result of the softmax, sometimes written as α , is a vector of attention weights .", "The function LN is layer normalization, whose definition we defer to §4.", "Finally, the network linearly projects the encoding of CLS to a scalar and applies a sigmoid function:", "where W_{L+1} ∈ R^{1×d} and b_{L+1} ∈ R^{1×1} .", "The network accepts iff the output probability is greater than 1/2 .", "The first way to overcome the limitation suggested by Hahn's lemma is to show by explicit construction that our two languages can in fact be recognized with perfect accuracy by transformers.", "Rumelhart et al. (1986) showed that for any n , there is a feedforward neural network (FFNN) that computes PARITY for strings of length exactly n .", "They also showed that a randomly initialized FFNN can learn to do this automatically.", "Since our construction is partially based on theirs, it may be helpful to review their construction in detail.", "Let w be the input string, |w| = n , and k be the number of 1 s in w .", "The input is a vector x such that x_i = I[w_i = 1] .", "The first layer computes k and compares it against 1 , 2 , . . . , n : W^1 = (the n × n matrix of all ones), 
b^1 = [ 0 −1 ⋯ 1−n ]^⊤ , h = step ( W^1 x + b^1 ) ,", "where step is the step function ( step(x) = I[x > 0] ) applied elementwise.", "The second layer adds up the odd elements and subtracts the even elements: W^2 = [ 1 −1 ⋯ (−1)^{n+1} ] , b^2 = 0 .", "Proposition 1.", "There is a transformer encoder with sigmoid output layer that recognizes (in the above sense) the language PARITY for strings of arbitrary length.", "Initially, we will construct a transformer encoder without layer normalization (that is, LN ( x ) = x ); then we will show how to add layer normalization (§4).", "Let k be the number of occurrences of 1 in w .", "All vectors computed by the network have d = 9 dimensions; if we show fewer dimensions, assume the remaining dimensions to be zero.", "Since we are numbering positions starting from 0, dimension 4 ranges from 0 to 1 − 1/n , and dimension 5 is +1 for even positions and −1 for odd positions.", "We argue that dimension 5, being a cosine wave, is a fairly standard choice, although its period (2) is shorter than the shortest period in standard sinusoidal encodings (2π).", "Dimension 4 is admittedly not standard; however, we argue that it is a reasonable encoding, and extremely easy to compute.", "Thus, the encoding of word w_i is: a_{0,i} = [ I[w_i = 0], I[w_i = 1], I[w_i = CLS], i/n, cos iπ ]^⊤ .", "The network has L = 2 layers and H = 2 heads.", "The first self-attention layer has one head which finds k , the number of 1 s.", "More precisely, because attention always averages, it must compute the average number of 1 s, that is, k/n , and stores it in dimension 6.", "It also stores 1/n in dimension 7, which we will need later.", "The second head doesn't do anything ( W^V_{1,2} = 0 ; the queries and keys can be anything).", "After the residual connection, we have: c_{1,i} = [ I[w_i = 0], I[w_i = 1], I[w_i = CLS], i/n, cos iπ, k/n, 1/n ]^⊤ .", "In the construction of Rumelhart et al. 
(1986), the next step is to compute I[k ≥ j] for each j , using step activation functions.", "There are two differences in our construction.", "First, we have ReLU activation functions, not step activation functions.", "Second, because attention must sum to one, if n is odd then the even and odd positions will get different attention weights, so the trick of subtracting even positions from odd positions will not work.", "Instead, we want to compute I[i = k] (Fig. 1).", "After the residual connection, we have: a_{1,i} = [ I[w_i = 0], I[w_i = 1], I[w_i = CLS], i/n, cos iπ, k/n, 1/n, I[i = k] ]^⊤ .", "The second self-attention layer tests whether position k is even or odd.", "It does this using two heads, one which attends more strongly to the odd positions, and one which attends more strongly to the even positions; both average dimension 8: W^Q_{2,1} = [ 0 0 0 0 0 0 0 ] , W^K_{2,1} = [ 0 0 0 0 1 0 0 0 ] , W^V_{2,1} = [ 0 8 8 0 0 0 0 0 0 0 1 ] , W^Q_{2,2} = [ 0 0 0 0 0 0 0 ] , W^K_{2,2} = [ 0 0 0 0 1 0 0 0 ] , W^V_{2,2} = [ 0 8 8 0 0 0 0 0 0 0 1 ] , where c > 0 can be any constant.", "The second FFNN doesn't do anything ( W^{F,1}_2 = b^{F,1}_2 = W^{F,2}_2 = b^{F,2}_2 = 0 ).", "The vector at CLS (position 0) is then a_{2,0} = [ 0, 0, 1, 0, 1, k/n, 1/n, I[k = 0], s ]^⊤ , where s has a somewhat complicated value.", "If n is even, it turns out to be s = ( 1 ) + 1 2 tanh 2 which is positive if k is odd and negative if k is even.", "As predicted by Hahn, it is in O(1/n) .", "If n is odd, the expression for s is more complicated (see Appendix A), but it is still positive iff k is odd, and it is still in O(1/n) .", "Finally, the output layer is a sigmoid layer that just looks at dimension 9: W_3 = [ 0 0 0 0 0 0 0 0 1 ] , b_3 = 0 , σ(s) = 1 / ( 1 + exp(−s) ) .", "So the output is greater than 1/2 iff k is odd.", "Next, we construct a transformer for FIRST .", "In line with the common practice of learning per-position word embeddings (Gehring et al., 2017), we use position embeddings that 
test whether a word is at position 1: a_{0,i} = [ I[w_i = 0], I[w_i = 1], I[w_i = CLS], I[i = 1] ]^⊤ .", "(We have chosen W^{F,1}_1 in a slightly unusual way to avoid using the bias term b^{F,1}_1 , in anticipation of §4 when we will add layer normalization.)", "The second self-attention layer has a single head, which makes CLS focus on position 1.", "where c > 0 is a constant.", "The second FFNN doesn't do anything ( W^{F,1}_2 = b^{F,1}_2 = W^{F,2}_2 = b^{F,2}_2 = 0 ).", "So at CLS (position 0), a_{2,0} = [ 0 0 1 0 0 s ]^⊤ , where s = ( exp c / ( exp c + n − 1 ) ) ( I[w_1 = 1] − 1/2 ) .", "The final output layer just selects component 6:", "So the output probability, p = σ(s) , is greater than 1/2 iff w_1 = 1 .", "However, it will get closer to 1/2 as n increases.", "We implemented both of the above constructions using modified versions of PyTorch's built-in implementation of transformers (Paszke et al., 2019).", "The code for this and other experiments in this paper is available at https://github.com/ndnlp/parity .", "These constructions achieve perfect accuracy for strings with lengths sampled from [ 1 , 1000 ] .", "However, in Fig. 
2, the red curves (no layer norm) show that, as strings grow longer, the cross-entropy approaches its worst possible value of 1 bit per string.", "We discuss this problem next.", "The second way to mitigate or eliminate the limitation of Hahn's lemma is layer normalization (Ba et al., 2016), which is defined, for any vector x , as LN(x) = γ ∘ ( x − mean(x) ) / √( var(x) + ε ) + β ,", "where the functions mean and var compute the mean and variance, respectively, of the elements of x , and ∘ is the elementwise (Hadamard) product.", "We fix β = 0 and γ = 1, so that the result has approximately zero mean and unit variance.", "The constant ε was not present in the original definition (Ba et al., 2016) but is added in all implementations that we are aware of, for numerical stability.", "The original transformer definition performs layer normalization immediately after every residual connection.", "In this section, we modify our constructions to use layer normalization, making two changes. (It is also common to place layer normalization before residual connections (Wang et al., 2019; Nguyen and Salazar, 2019), but we follow the original transformer definition here.)", "The first is to nullify the centering effect of layer normalization by making the network compute each value as well as its negation.", "The new word encodings are defined in terms of those in the original construction: a′_{0,i} = [ a_{0,i} ; −a_{0,i} ] .", "Likewise for the self-attention parameters: W′^Q_{ℓ,h} = [ W^Q_{ℓ,h} 0 ] , W′^K_{ℓ,h} = [ W^K_{ℓ,h} 0 ] , W′^V_{ℓ,h} = [ W^V_{ℓ,h} 0 ; −W^V_{ℓ,h} 0 ] .", "Likewise for the position-wise FFNN parameters: W′^{F,1}_ℓ = [ W^{F,1}_ℓ 0 ] , b′^{F,1}_ℓ = b^{F,1}_ℓ , W′^{F,2}_ℓ = [ W^{F,2}_ℓ ; −W^{F,2}_ℓ ] , b′^{F,2}_ℓ = [ b^{F,2}_ℓ ; −b^{F,2}_ℓ ] .", "The argument to LN always has zero mean, so that layer normalization does not add or subtract anything.", "It does scale the activations, but in the case of the two transformers constructed above, any activation layer can be scaled by any positive number without changing the final decisions (see Appendix B).", 
"Furthermore, in any transformer, we can use layer normalization to shrink the cross-entropy as small as we like, contrary to Hahn's Lemma 5.", "In Hahn's formulation, position-wise functions like layer normalization can be subsumed into his act , but the lemma assumes that act is Lipschitz-continuous, and layer normalization with ε = 0 is not.", "Proposition 2.", "For any transformer with layer normalization ( ε = 0 ) that recognizes a language L , and for any η > 0 , there is a transformer with layer normalization that recognizes L with cross-entropy at most η .", "Proof.", "Let d be the number of dimensions in the original vectors of activations, and let L be the number of layers.", "Then we add a new layer whose self-attention doesn't do anything ( W^V_{L+1,h} = 0 ) and whose FFNN is defined in terms of the original output layer: W^{F,1}_{L+1} = [ I ; −I ] , b^{F,1}_{L+1} = [ 0 ; 0 ] , W^{F,2}_{L+1} = −[ I −I ] + [ W_{L+1} −W_{L+1} ; −W_{L+1} W_{L+1} ; 0^{(d−2)×d} 0^{(d−2)×d} ] , b^{F,2}_{L+1} = [ b_{L+1} ; −b_{L+1} ; 0^{d−2} ] .", "This causes the residual connection to zero out all dimensions except two, so that if s was the original output logit, the output of this new layer (before layer normalization) is a_{L+1,i} = LN ( [ s ; −s ; 0^{d−2} ] ) .", "Finally, set c = √(2/d) log ( 1 / ( exp η − 1 ) ) .", "If the input string is in L , then the cross-entropy is −log σ( c √(d/2) ) = η .", "Similarly, if the input string is not in L , then the cross-entropy is −log ( 1 − σ( −c √(d/2) ) ) = η .", "□", "We tested our exact solutions, modified as described above to use layer normalization.", "Figure 2 shows that layer normalization with ε > 0 improves the cross-entropy, but it still grows with n and approaches 1.", "With ε = 0, the cross-entropy is independent of n and, as argued above (Proposition 2), can be made as low as desired.", "In this section, we turn to the question of learnability, which will lead to a third way of overcoming the limitation suggested by Hahn's lemma.", "We tried training 
transformers on both PARITY and FIRST.", "Each transformer had the same number of layers and heads and the same fixed positional encodings as the corresponding exact solution.", "We used d_model = 16 for word encodings, self-attention, and FFNN outputs, and d_FFNN = 64 for FFNN hidden layers.", "We used layer normalization ( ε = 10^{−5} ) after residual connections.", "We used PyTorch's default initialization and trained using Adam (Kingma and Ba, 2015) with learning rate 3 × 10^{−4} (Karpathy, 2016).", "We did not use dropout, as it did not seem to help.", "We found, like Bhattamishra et al. (2020a), that a transformer with the above settings was unable to learn PARITY .", "We tried many other settings as well, to no avail.", "To give an idea of why our constructed solution, in particular, is difficult to find, Fig. 3 shows the cross-entropy and accuracy of the model if we start with our solution (with layer normalization, ε = 0) and vary the parameter [W^V_{1,1}]_{6,2} , which is responsible for computing k/n .", "At 1, it has a cross-entropy of 0 and accuracy of 1, which are both optimal, but the cross-entropy oscillates so rapidly that even a small perturbation of this parameter would make it difficult to recover the solution by gradient descent.", "FIRST is much easier to learn, but the bad news is that the learned transformers do not generalize well to longer sentences.", "Figure 4 (left column) shows that when a transformer is trained from scratch on shorter strings ( n = 10 , 30 , 100 , 300) and tested on longer strings ( n = 1000), the accuracy is not perfect.", "Indeed, for training length n = 10, the accuracy is hardly better than random guessing.", "In our solution above (§3.3), the second self-attention layer attended mostly to the first position, but not totally.", "It relied on the fact that in the second self-attention layer, the values of the non-first positions ( [V_{2,1}]_{i,4} and [V_{2,1}]_{i,5} for i ≠ 1) are exactly zero and therefore do not contribute to the output.", "In practice, because 
word embeddings are randomly initialized in all dimensions, and are added to every layer via residual connections, it's unlikely for any activation to be exactly zero.", "This explains why our exact solution cannot be learned.", "But, as a further thought experiment about what the model might be learning instead, consider the following transformer, which uses only a single layer ( L = 1) and does not zero out the values of the non-first positions.", "As we will see, it performs worse than the transformer of §3.3 for long strings.", "The FFNN doesn't do anything ( W^{F,1}_1 = b^{F,1}_1 = W^{F,2}_1 = b^{F,2}_1 = 0 ), and the final output layer just selects component 5.", "So if k is the total number of 1 s, the final logit at CLS (position 0) would be s = ( ( exp c − 1 ) / ( exp c + n − 1 ) ) ( I[w_1 = 1] − 1/2 ) + ( 1 / ( exp c + n − 1 ) ) ( k − n/2 ) .", "If c > log(n − 1) , then this is positive iff w_1 = 1 .", "But if c ≤ log(n − 1) , the new second term can be big enough to make the model output an incorrect answer.", "This suggests that if we train a transformer on strings with length up to n , then the learned parameters will be large enough to classify strings of length up to n correctly, but may misclassify strings longer than n .", "weight on the first position of the test string (summed over layers, averaged over strings) as a function of training epoch (starting from random initial parameters).", "The training strings have varying length ( n ) and the test strings have fixed length (1000).", "We might hope that the attention weight would converge to the same value independent of n .", "But the lower n is, the more the attention weight is diluted, making it easier for the value in position 1 to be outweighed by values in other positions.", "Fortunately, this problem is easy to fix by scaling the logits of each attention layer by log n , that is, redefining attention as", "Att ( q , K , V ) = V^⊤ softmax ( ( log n ) Kq / √d ) .", "(2) Then taking the model in §5.2 with c = 1 gives s = ( ( n − 1 ) / ( 2n − 1 ) ) ( I[w_1 = 1] − 1/2 
) + ( 1 / ( 2n − 1 ) ) ( k − n/2 ) , which is positive iff w_1 = 1 .", "Moreover, scaling is another way to make the cross-entropy low: Proposition 3.", "For any η > 0 there is a transformer with attention defined as in Eq.", "(2) , and with or without layer normalization, that recognizes FIRST with cross-entropy at most η .", "Proof.", "Without layer normalization, we can take the model in §3.3, set c = 1 and log-scale the attention logits, which changes s from Eq.", "(1) to s = ( n / ( 2n − 1 ) ) ( I[w_1 = 1] − 1/2 ) , 1/4 < |s| ≤ 1/2 .", "With layer normalization (and ε > 0), we can apply the modification of §4 to nullify the centering effect of layer normalization.", "Then since the variance of a_{2,0} is ( 1 + s² ) / 6 , the layer-normalized final logit is s′ = s ( ( 1 + s² ) / 6 + ε )^{−1/2} , and since |s| > 1/4 , |s′| > ( 1/4 ) ( 5/24 + ε )^{−1/2} .", "In either case, since the final logit has a lower bound not dependent on n , the output layer weights can be scaled as in the proof of Proposition 2 to make the cross-entropy at most η .", "□ Table 1 — train all / test all: train tokens 3M+3M, test tokens 32k+34k, baseline BLEU 32.6, scaled BLEU 32.5; train short / test long: train tokens 1M+1M, test tokens 24k+25k, baseline BLEU 11.4, scaled BLEU 12.4. Table 1: When training and testing on data with the same length distribution, scaling attention logits has no significant effect on BLEU, but when training on short sentences ( ≤ 20 tokens) and testing on long sentences ( > 20 tokens), scaling helps significantly ( p < 0 . 
01).", "Figure 4 (right column) shows the training of transformers with scaling of attention logits by log n .", "For all training lengths n , the model is able to learn with perfect test cross-entropy and accuracy.", "We see a similar effect on low-resource English-to-Vietnamese machine translation (Table 1), using Witwicky, an open-source implementation of transformers.", "We use all default settings; in particular, residual connections come after layer normalization ( ε = 10^{−5} ).", "We measure translation accuracy using BLEU (Papineni et al., 2002) and use bootstrap resampling with 1000 samples for significance testing.", "When train and test length distributions are the same, scaling attention logits has no significant effect.", "But if we train only on sentences with median length or shorter ( ≤ 20 tokens) and test only on sentences longer than median length ( > 20 tokens), scaling attention logits by log n improves BLEU by +1, which is statistically significant ( p < 0.01).", "Using very different assumptions on the form of transformers and inputs, a number of recent theoretical studies of transformers show that they can solve much more difficult problems than the ones studied here.", "Transformer encoders can be shown to be universal approximators by fixing the string length and using a number of layers exponential in the length (Yun et al., 2020).", "Transformer encoder-decoders, where the decoder can run for an unbounded number of steps, have been shown to be Turing-complete (Bhattamishra et al., 2020b; Pérez et al., 2021).", "RASP (Weiss et al., 2021) is a simple programming language whose programs can be compiled into transformers.", "While PARITY can easily be written in RASP, this does not imply in itself the existence of transformers that can recognize PARITY , for two reasons.", "First, RASP's aggregate operation (which corresponds to attention) always attends uniformly to a subset of positions, unlike softmax attention.", "Second, RASP's elementwise operations are 
embedded directly in the output transformer; they are not translated into FFNNs.", "Bhattamishra et al. (2020a) carry out theoretical and experimental studies of transformers for various formal languages.", "The theoretical results are for a different variant of transformers than ours (transformer encoders with self-attention masked so that each position attends only to previous positions), and focus on such transformers' ability to maintain counters that are constrained to be nonnegative.", "Their experimental results suggest that transformers have difficulty learning some regular languages, including PARITY .", "We've seen that the questions of", "(a) whether a neural network can recognize a language,", "(b) whether it can achieve low cross-entropy on a language, and", "(c) whether it can learn to recognize a language are three separate questions, because we have given examples of", "(a) without", "(b) and", "(b) without", "(c).", "Namely, our explicit construction for PARITY shows that a neural network can recognize a language with perfect accuracy", "(a) but poor cross-entropy", "(b).", "Adding layer normalization ( ε = 0) enables it to achieve low cross-entropy", "(b), but still does not learn well", "(c).", "We observe that because the answer to", "(b) can hinge on small details of the model,", "(b) is probably not very useful as a way of measuring expressivity.", "However, we did find that the limited influence of a single input symbol, implied by Hahn's lemma, has a serious practical implication for learnability", "(c).", "Namely, transformers can fail to generalize from shorter training strings to longer testing strings.", "Our proposed fix, scaling attention logits by log n , is easy to implement and very effective on a real machine translation task.", "We would like to thank Toan Nguyen for assistance with his machine translation code, and Gail Weiss for catching some mistakes.", "This paper is based upon work supported in part by the Office of the Director of 
National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9116.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "abstain", "abstain", "objective", "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "result", "objective", "result", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "method", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "result", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "objective", "other", "other", "other", "other" ]
[ "Visual question answering aims to answer a natural language question about a given image.", "Existing graph-based methods only focus on the relations between objects in an image and neglect the importance of the syntactic dependency relations between words in a question.", "To simultaneously capture the relations between objects in an image and the syntactic dependency relations between words in a question, we propose a novel dual channel graph convolutional network (DC-GCN) for better combining visual and textual advantages.", "The DC-GCN model consists of three parts: an I-GCN module to capture the relations between objects in an image, a Q-GCN module to capture the syntactic dependency relations between words in a question, and an attention alignment module to align image representations and question representations.", "Experimental results show that our model achieves comparable performance with the state-of-the-art approaches.", "As a form of visual Turing test, visual question answering (VQA) has drawn much attention.", "The goal of VQA (Antol et al., 2015; Goyal et al., 2017) is to answer a natural language question related to the contents of a given image.", "Attention mechanisms serve as the backbone of the previous mainstream approaches (Lu et al., 2016; Yang et al., 2016; Yu et al., 2017); however, they tend to catch only the most discriminative information, ignoring other rich complementary clues (Liu et al., 2019).", "Recent VQA studies have been exploring higher level semantic representation of images, notably using graph-based structures for better image understanding, such as scene graph generation (Xu et al., 2017; Yang et al., 2018), visual relationship detection (Yao et al., 2018), object counting (Zhang et al.,", "2018a), and relation reasoning (Cao et al., 2018; Li et al., 2019; Cadene et al., 2019a).", "Representing images as graphs allows one to explicitly model interactions between 
two objects in an image, so as to seamlessly transfer information between graph nodes (e.g., objects in an image).", "Very recent research methods (Li et al., 2019; Cadene et al., 2019a; Yu et al., 2019) have achieved remarkable performances, but there is still a big gap between them and humans.", "As shown in Figure 1(a), given an image of a group of persons and the corresponding question, a VQA system needs to not only recognize the objects in an image (e.g., batter , umpire and catcher ), but also grasp the textual information in the question what color is the umpire's shirt .", "However, many competitive VQA models, including the state-of-the-art methods, struggle to process such cases accurately, and as a result predict the incorrect answer (black) rather than the correct answer (blue).", "Although the relations between two objects in an image have been considered, the attention-based VQA models lack building blocks to explicitly capture the syntactic dependency relations between words in a question.", "As shown in Figure 1(c), these dependency relations can reflect which object is being asked about (e.g., the word umpire's modifies the word shirt ) and which aspect of the object is being asked about (e.g., the word color is the direct object of the word is ).", "If a VQA model only knows the word shirt rather than the relation between the words umpire's and shirt in a question, it is difficult to distinguish which object is being asked about.", "In fact, we do need the modifier relations to discriminate the correct object from multiple similar objects.", "Therefore, we consider that it is necessary to explore the relations between words at the linguistic level in addition to constructing the relations between objects at the visual level.", "Motivated by this, we propose a dual channel graph convolutional network (DC-GCN) to simultaneously capture the relations between objects in an image and the syntactic dependency relations between words in a question.", "Our proposed DC-GCN model 
consists of an Image-GCN (I-GCN) module, a Question GCN (Q-GCN) module, and an attention alignment module.", "The I-GCN module captures the relations between objects in an image, the Q-GCN module captures the syntactic dependency relations between words in a question, and the attention alignment module is used to align the representations of the image and the question.", "The contributions of this work are summarized as follows: 1) We propose a dual channel graph convolutional network (DC-GCN) to simultaneously capture the visual and textual relations, and design the attention alignment module to align the multimodal representations, thus reducing the semantic gaps between vision and language.", "2) We explore how to construct the syntactic dependency relations between words at the linguistic level via graph convolutional networks as well as the relations between objects at the visual level.", "3) We conduct extensive experiments and ablation studies on the VQA-v2 and VQA-CP-v2 datasets to examine the effectiveness of our DC-GCN model.", "Experimental results show that the DC-GCN model achieves competitive performance with the state-of-the-art approaches.", "Visual Question Answering The attention mechanism has been proven effective on many tasks, such as machine translation (Bahdanau et al., 2014) and image captioning (Pedersoli et al., 2017).", "A number of methods have been developed so far, in which question-guided attention on image regions is commonly used.", "These can be categorized into two classes according to the types of employed image features.", "One class uses visual features from region proposals, which are generated by a Region Proposal Network (Ren et al., 2015).", "The other class uses convolutional features (i.e., activations of convolutional layers).", "To learn a better representation of the question, the Stacked Attention Network (Yang et al., 2016), which can search question-related image regions, is designed to perform multi-step visual attention operations.", 
"A co-attention mechanism that jointly performs question-guided visual attention and image-guided question attention is proposed to solve the problems of which regions to look at and what words to listen to (Shih et al., 2016).", "To obtain more fine-grained interaction between image and question, some researchers introduce rather sophisticated fusion strategies.", "The bilinear pooling method (Kim et al., 2018; Yu et al., 2017, 2018) is one of the pioneering works to efficiently and expressively combine multimodal features by using an outer product of two vectors.", "Recently, some researchers have devoted themselves to overcoming the priors on the VQA dataset and proposed methods such as GVQA (Agrawal et al., 2018), UpDn + Q-Adv + DoE (Ramakrishnan et al., 2018), and RUBi (Cadene et al., 2019b) to solve the language biases on the VQA-CP-v2 dataset.", "Graph Networks Graph networks are powerful models that can perform relational inferences through message passing.", "The core idea is to enable communication between image regions to build contextualized representations of these regions.", "Below we review some of the recent works that rely on graph networks and other contextualized representations for VQA.", "Recent research works (Cadene et al., 2019a; Li et al., 2019) focus on how to deal with complex scenes and relation reasoning to obtain better image representations.", "Based on multimodal attentional networks, (Cadene et al., 2019a) introduces an atomic reasoning primitive to represent interactions between question and image region by a rich vectorial representation and model region relations with pairwise combinations.", "Figure 2: Illustration of our proposed Dual Channel Graph Convolutional Network (DC-GCN) for the VQA task.", "The Dependency Parsing constructs the semantic relations between words in a question, and the Q-GCN Module updates every word's features by aggregating the adjacent word features.", "In addition, the I-GCN Module builds the relations between image objects, and the Attention Alignment Module uses a question-guided image attention mechanism to learn a new object representation, thus aligning the images and questions.", "All punctuation and upper cases have been preprocessed.", "The numbers in red are the weight scores of image objects and words.", "GCNs, which can better explore the visual relations between objects and aggregate their own features and their neighbors' features, have been applied to various tasks, such as text classification (Yao et al., 2019), relation extraction (Guo et al., 2019; Zhang et al., 2018b), and scene graph generation (Yang et al., 2018; Yao et al., 2018).", "To answer complicated questions about an image, a relation-aware graph attention network (ReGAT) (Li et al., 2019) is proposed to encode each image into a graph and model multi-type inter-object relations via a graph attention mechanism, such as spatial relations, semantic relations and implicit relations.", "One limitation of ReGAT (Li et al., 2019) lies in the fact that it solely considers the relations between objects in an image while neglecting the importance of text information.", "In contrast, our DC-GCN simultaneously captures visual relations in an image and textual relations in a question.", "Similar to (Anderson et al., 2018), we extract the image features by using a pretrained Faster RCNN (Ren et al., 2015).", "We select object proposals for each image, where each object proposal is represented by a 2048-dimensional feature vector.", "The obtained visual region features are denoted as h_v = {h_vi} ∈ R^2048.", "Each word in a question is embedded with pretrained GloVe word embeddings (Pennington et al., 2014).", "The word embeddings are input into an LSTM (Hochreiter and Schmidhuber, 1997) for encoding, which produces the initial question representation h_q = {h_qj} ∈ R^(d_q).", "Image Fully-connected Relations Graph By treating each object region in an image as a vertex, we can construct a fully-connected undirected graph, as shown in Figure 3(b).", "Each edge represents a relation between two object regions.", 
"Pruned Image Graph with Spatial Relations Spatial relations represent an object's position in an image, which corresponds to a 4-dimensional spatial coordinate [x1, y1, x2, y2].", "Note that (x1, y1) is the coordinate of the top-left point of the bounding box and (x2, y2) is the coordinate of the bottom-right point of the bounding box.", "Identifying the correlation between objects is a key step.", "We calculate the correlation between objects by using spatial relations.", "The steps are as follows: (1) The features of two nodes are input into a multi-layer perceptron, respectively, and the corresponding elements are then multiplied to get a relatedness score.", "(2) The intersection over union of two object regions is calculated.", "According to the overlapping part of two object regions, different spatial relations are classified into 11 different categories, such as inside , cover , and overlap (Yao et al., 2018).", "Following the work (Yao et al., 2018), we utilize the overlapping region between two object regions to judge whether there is an edge between two regions.", "If two object regions have a large overlapping part, it means that there is a strong correlation between these two objects.", "If two object regions do not have any overlapping part, we consider the two objects to have a weak correlation, which means there are no edges to connect these two nodes.", "According to the spatial relations, we prune some irrelevant relations between objects and obtain a sparse graph, as shown in Figure 3(c).", "Image Graph Convolutions Following the previous studies (Li et al., 2019; Zhang et al., 2018b; Yang et al., 2018), we use a GCN to update the representations of objects.", "Given a graph, each object region in an image is a node.", "We represent the graph structure with an adjacency matrix A, where A_ij = 1 if there is an overlapping region between node i and node j; otherwise A_ij = 0.", "Given a target node i and a neighboring node j ∈ N(i) in an image, where N(i) is the set of nodes neighboring node i, the representations of node i and node j are h_vi and h_vj, respectively.", "To obtain the correlation score s_ij between nodes i and j, we learn a fully connected layer over the concatenated node features h_vi and h_vj: s_ij = w_a^T σ(W_a [h_vi^(l), h_vj^(l)]), (1) where w_a and W_a are learned parameters, σ(·) is the non-linear activation function, and [h_vi^(l), h_vj^(l)] denotes the concatenation operation.", "We apply a softmax function over the correlation scores s_ij to obtain the weight α_ij, as shown in Figure 3(c), where the numbers in red represent the weight scores: α_ij = exp(s_ij) / Σ_{j∈N(i)} exp(s_ij).", "The l-th layer representations of the neighboring nodes h_vj^(l) are first transformed via a learned linear transformation W_b.", "Those transformed representations are then gathered with the weight α_ij, followed by a non-linear function σ.", "This layer-wise propagation can be denoted as: h_vi^(l+1) = h_vi^(l) + σ(Σ_{j∈N(i)} A_ij α_ij W_b h_vj^(l)).", "After the stacked L-layer GCN, the output of the I-GCN module H_v can be denoted as:", "In practice, we observe that two words in a sentence usually hold certain relations.", "Such relations can be identified by the universal Stanford Dependencies (De Marneffe et al., 2014).", "As shown in Table 1, we list a part of the commonly-used dependency relations.", "For example, the sentence what color is the umpire's shirt is parsed to obtain the relations between words (e.g., cop , det and nmod ), as shown in Figure 4.", "Figure 4: Syntactic dependency parsing performed on the question.", "The words in blue are the dependency relations.", "The end of an arrow indicates that this word is a modifier.", "The word root in purple is used to indicate which word is the root node of the dependency relations.", "Question Fully-connected Relations Graph By treating each word in a question as a node, we construct a fully-connected 
undirected graph, as shown in Figure 5(a).", "Each edge represents a relation between two words.", "Pruned Question Graph with Dependency Relations Irrelevant relations between two words may bring noise.", "Therefore, we need to prune some unrelated relations to reduce the noise.", "By parsing the dependency relations of a question, we obtain the relations between words (cf. Figure 4).", "According to the dependency relations, we prune the edges between nodes which do not have dependency relations.", "A sparse graph is obtained, as shown in Figure 5(b).", "Question Graph Convolutions Following the previous works (Li et al., 2019; Zhang et al., 2018b; Yang et al., 2018), we use a GCN to update the node representations of words.", "Given a graph, each word in a question is a node.", "We represent the graph structure with an adjacency matrix B, where B_ij = 1 if there is a dependency relation between node i and node j; otherwise B_ij = 0.", "Given a target node i and a neighboring node j ∈ N(i) in a question, N(i) is the set of nodes neighboring node i.", "The representations of nodes i and j are h_qi and h_qj, respectively.", "To obtain the correlation score t_ij between nodes i and j, we learn a fully connected layer over the concatenated node features h_qi and h_qj: t_ij = w_c^T σ(W_c [h_qi^(l), h_qj^(l)]), (5) where w_c and W_c are learned parameters, σ(·) is the non-linear activation function, and [h_qi^(l), h_qj^(l)] denotes the concatenation operation.", "As shown in Figure 5(c), the numbers in red are the weight scores.", "The l-th layer representations of the neighboring nodes h_qj^(l) are first transformed via a learned linear transformation W_d.", "Those transformed representations are gathered with the softmax-normalized weight β_ij, followed by a non-linear function σ.", "This layer-wise propagation can be denoted as: h_qi^(l+1) = h_qi^(l) + σ(Σ_{j∈N(i)} B_ij β_ij W_d h_qj^(l)).", "Based on the previous works (Gao et al., 2019; Yu et al., 2019), 
we use a self-attention mechanism (Vaswani et al., 2017) to enhance the correlation between words in a question and the correlation between objects in an image, respectively.", "To enhance the correlation between words and highlight the important words, we utilize the self-attention mechanism to update the question representation H_q.", "The updated question representation H_q is obtained as follows: H_q = softmax(H_q H_q^T / √d_q) H_q, (9) where H_q^T is the transpose of H_q and d_q is the dimension of H_q.", "The number of layers of this self-attention is set to 4.", "To obtain the image representation related to the question representation, we align the image representation H_v by utilizing the question representation H_q as the guiding vector.", "The similarity score r between H_v and H_q is calculated as follows: r = H_q H_v^T / √d_v, (10) where H_v^T is the transpose of H_v and d_v is the dimension of H_v.", "The number of layers of this question-guided image attention is set to 4.", "The final outputs of the attention alignment module are H_q and H_v.", "We apply the linear multimodal fusion method to fuse the two representations H_q and H_v as follows:", "where W_v, W_q, W_e, and b_e are learned parameters, and pred means the probability of the classified answers from the answer vocabulary, which contains M candidate answers.", "Following (Yu et al., 2019), we use a binary cross-entropy loss function to train an answer classifier.", "VQA-v2 (Goyal et al., 2017) is the most commonly used VQA benchmark dataset, which is split into train , val , and test-standard sets.", "Within the test-standard set, 25% serves as the test-dev set.", "Each question has 10 answers from different annotators.", "Answers with the highest frequency are treated as the ground truth.", "All answer types can be divided into Yes/No , Number , and Other .", "VQA-CP-v2 (Agrawal et al., 2018) is a derivation of the VQA-v2 dataset, which is introduced to evaluate and reduce the question-oriented bias 
in VQA models.", "Due to the significant difference in distribution between the train set and the test set, the VQA-CP-v2 dataset is harder than the VQA-v2 dataset.", "We use the Adam optimizer (Kingma and Ba, 2014) with parameters ε = 0.0001, β1 = 0.9, and β2 = 0.99.", "The size of the answer vocabulary is set to M = 3,129 as used in (Anderson et al., 2018).", "The base learning rate is set to 0.0001.", "After 15 epochs, the learning rate is decayed by 1/5 every 2 epochs.", "All the models are trained up to 20 epochs with the same batch size 64 and hidden size 512.", "Each image has [10, 100] object regions; all questions are padded and truncated to the same length 14.", "Table 2 shows the performance of our DC-GCN model and baseline models trained with the widely-used VQA-v2 dataset.", "All results in our paper are based on single-model performance.", "For a fair comparison, we also train our model with the extra Visual Genome dataset (Krishna et al., 2017).", "Model | Test-dev: Y/N, Num, Other, All | Test-std: All — Bottom-Up (Anderson et al., 2018): 81.82, 44.21, 56.05, 65.32 | 65.67; DCN (Nguyen and Okatani, 2018): 83.51, 46.61, 57.26, 66.87 | 66.97; Counter (Zhang et al., 2018a): 83.14, 51.62, 58.97, 68.09 | 68.41; BAN (Kim et al., 2018): 85.31, 50.93, 60.26, 69.52 | –; DFAF (Gao et al., 2019): 86.09, 53.32, 60.49, 70.22 | 70.34; Erase-Att (Liu et al., 2019): 85.87, 50.28, 61.10, 70.07 | 70.36; ReGAT (Li et al., 2019): 86.08, 54.42, 60.33, 70.27 | 70.58; MCAN (Yu et al., 2019): 86.82, 53.26, 60.72, 70.63 | 70.90; DC-GCN (ours): 87.32, 53.75, 61.45, 71.21 | 71.54. Table 2: Comparison with previous state-of-the-art methods on the VQA-v2 test dataset.", "Bottom-Up (Anderson et al., 2018) is proposed to use features based on Faster RCNN (Ren et al., 2015) instead of ResNet (He et al., 2016).", "The Dense Co-Attention Network (DCN) (Nguyen and Okatani, 2018) utilizes a dense stack of multiple co-attention layers.", "The counting method (Zhang et al., 2018a) is good at counting questions by utilizing the information of bounding 
boxes.", "DFAF (Gao et al., 2019) dynamically fuses Intra- and Inter-modality information.", "ReGAT (Li et al., 2019) models semantic, spatial, and implicit relations via a graph attention network.", "MCAN (Yu et al., 2019) utilizes deep modular networks to learn the multimodal feature representations and is a state-of-the-art approach on the VQA-v2 dataset.", "As shown in Table 2, our model increases the overall accuracy of DFAF and MCAN by 1.2% and 0.6% on the test-std set, respectively.", "Figure 6: Visualizations of the learned attention maps of the Q-GCN module, I-GCN module and Attention Alignment module from some typical layers.", "Although it still cannot achieve comparable performance in the category of Num with respect to ReGAT (which is the best one in the counting sub-task), our DC-GCN outperforms it in the other categories (e.g., Y/N by 1.2%, Other by 1.1% and Overall by 0.9%).", "This shows that DC-GCN has the relation-capturing ability to answer all kinds of questions by sufficiently exploring the semantics in both object appearances and object relations.", "In summary, our DC-GCN achieves outstanding performance on the VQA-v2 dataset.", "To demonstrate the generalizability of our DC-GCN model, we also conduct experiments on the VQA-CP-v2 dataset.", "To overcome the language biases of the VQA-v2 dataset, the research work (Agrawal et al., 2018) designed the VQA-CP-v2 dataset and specifically proposed the GVQA model for reducing the influence of language biases.", "Table 3 shows the results on the VQA-CP-v2 test split.", "Murel (Cadene et al., 2019a) and ReGAT (Li et al., 2019) build the relations between objects to realize the reasoning and question answering tasks, and are the state-of-the-art models.", "Our DC-GCN model surpasses both Murel and ReGAT on VQA-CP-v2 (41.47 vs. 39.54 and 41.47 vs. 
40.42).", "The performance gain is lifted to +1.05%.", "Although our proposed method is not designed for the VQA-CP-v2 dataset, our model has a slight advantage over the UpDn + Q-Adv + DoE model.", "The results on the VQA-CP-v2 dataset show that dependency parsing and DC-GCN can effectively reduce question-based overfitting.", "In Figure 6, we visualize the learned attentions from the I-GCN module, Q-GCN module and Attention Alignment module.", "Due to the space limitation, we only show one example and visualize six attention maps from different attention units and different layers.", "From the results, we have the following observations.", "Question GCN Module : The attention maps of Q-GCN(2) focus on the words color and shirt as shown in Figure 6(a), while the attention maps of Q-GCN(4) correctly focus on the words color , umpire's , and shirt , as shown in Figure 6(b).", "Those words have larger weights than the others.", "That is to say, the keywords color , umpire's and shirt are identified correctly.", "Image GCN Module For the sake of presentation, we only consider 20 object regions in an image.", "The index within [1, 20] shown on the axes of the attention maps corresponds to each object in the image.", "Among these indexes, indexes 4, 6, 9, and 12 are the most relevant ones for the question.", "Compared with I-GCN(2), which focuses on the 4-th , 6-th , 9-th , 12-th , and 14-th objects (cf. Figure 6(c)), I-GCN(4) focuses more on the 4-th , 6-th , and 12-th objects, where the 4-th object has a larger weight than the 6-th and 12-th objects, as shown in Figure 6(d).", "The 4-th object region is the ground-truth region, while the 6-th , 9-th , and 12-th object regions are the most relevant ones.", "Attention Alignment Module Given a specific question, a model needs to align image objects guided by the question to update the representations of objects.", "As shown in Figure 6(e), the focus regions are more scattered, where the 
key regions are mainly the 4-th , 9-th and 12-th object regions.", "Through the guidance of the identified words color , umpire's and shirt , the DC-GCN model gradually pays more attention to the 4-th , 9-th , and 12-th object regions rather than other irrelevant object regions, as shown in Figure 6(f).", "This alignment process demonstrates that our model can capture the relations of multiple similar objects.", "We also visualize some negative examples predicted by our DC-GCN model.", "As shown in Figure 7, these can be classified into three categories: (1) limitation of object detection; (2) text semantic understanding in scenarios; (3) subjective judgment.", "In Figure 7(a), although the question how many sheep are pictured is not so difficult, the image content is really confusing.", "Without careful observation, it is rather easy to obtain the wrong answer 2 instead of 3 .", "The reasons for this error include object occlusion, near and far degrees, and the limitation of object detection.", "Figure 7: We summarize three types of incorrect examples: limitation of object detection, text semantic understanding and subjective judgment, which correspond to", "The image feature extractor is based on the Faster R-CNN model (Ren et al., 2015).", "The accuracy of object detection can indirectly affect the accuracy of feature extraction.", "The counting subtask in VQA has large room for improvement.", "In Figure 7(b), the question what time should you pay can be answered by recognizing and understanding the text in the image.", "Text semantic understanding belongs to another task, namely text visual question answering (Biten et al., 2019), which requires recognizing the numbers, symbols and proper nouns in a scene.", "In Figure 7(c), subjective judgment is needed to answer the question is this man happy .", "Making this judgment requires some common sense knowledge and real life experience.", "Specifically, someone is holding a banana toward the man as if holding a gun toward him, so he is unhappy.", "Our model cannot perform such human-like analysis to make a subjective judgment and predict the correct answer yes .", "Finally, to understand the distribution of the three error types, we randomly pick 100 samples from the dev set of VQA-v2.", "The numbers of the three error types (i.e., overlapping objects, text semantic understanding, and subjective judgment) are 3, 3, and 29, respectively.", "The predicted answers of the first two question types are all incorrect.", "The last one has 12 incorrect answers, which means the error rate of this question type is 41.4%.", "We perform extensive ablation studies on the VQA-v2 validation dataset (cf. Table 4).", "The experimental results are based on one block of our DC-GCN model.", "All modules inside DC-GCN have the same dimension of 512.", "The learning rate is 0.0001 and the batch size is 32.", "Firstly, we investigate the influence of GCN types.", "There are two GCN types: I-GCN and Q-GCN, as shown in Table 4.", "When removing the I-GCN, the performance of our model decreases from 66.57% to 65.52% ( p -value = 3.22E-08 < 0.05).", "When removing the Q-GCN, the performance of our model slightly decreases from 66.57% to 66.15% ( p -value = 2.04E-07 < 0.05).", "We consider that there are two reasons.", "One is that the image content is more complex than the question's content, and hence has richer semantic information.", "Building the relations between objects can help clarify what the image represents and help align with the question representations.", "The other is that the length of a question is short, and less information is contained (e.g., what animal is this? and what color is the man's shirt? ).", "Then, we perform an ablation study on the influence of dependency relations (cf. 
Table 1).", "The relations, like nsubj , nmod , dobj and amod , are crucial to semantic representations; therefore, we do not remove them from the sentence.", "As shown in Table 4, removing relations like det , case , aux and advmod individually has a trivial influence on the semantic representations of the question.", "However, the resulting accuracy decreases significantly when we simultaneously remove the relations det , case and cop .", "The reason may be that the sentence loses too much information and can no longer fully express the meaning of the original sentence.", "For example, consider the two phrases on the table and under the table .", "If we remove the relation case , which means that the words on and under are removed, then it will be hard to distinguish whether it is on the table or under the table.", "In this paper, we propose a dual channel graph convolutional network to explore the relations between objects in an image and the syntactic dependency relations between words in a question.", "Furthermore, we explicitly construct the relations between words by a dependency tree and align the image and question representations by an attention alignment module to reduce the gaps between vision and language.", "Extensive experiments on the VQA-v2 and VQA-CP-v2 datasets demonstrate that our model achieves comparable performance with the state-of-the-art approaches.", "We will explore more complicated object relation modeling in future work.", "We thank the anonymous reviewers for valuable comments and thoughtful suggestions.", "We would also like to thank Professor Yuzhang Lin from the University of Massachusetts Lowell for helpful discussions.", "This work was supported by the Fundamental Research Funds for the Central Universities, SCUT (No. 
2017ZD048, D2182480), the Science and Technology Planning Project of Guangdong Province (No.2017B050506004), the Science and Technology Programs of Guangzhou (No.201704030076, 201802010027, 201902010046) and the collaborative research grants from the Guangxi Natural Science Foundation (2017GXNSFAA198225) and the Hong Kong Research Grants Council (project no. PolyU 1121417 and project no. C1031-18G), and an internal research grant from the Hong Kong Polytechnic University (project 1.9B0V)." ]
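The graph-convolution update described in the entry above (a learned correlation score over concatenated node features, a softmax over each node's neighbors, and a residual aggregation of linearly transformed neighbor features) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the ReLU activation, the weight shapes, and the function name `gcn_layer` are assumptions for the sketch; the same layer serves for both the I-GCN (adjacency A from overlapping regions) and the Q-GCN (adjacency B from dependency relations).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_layer(H, A, W_a, w_a, W_b):
    """One graph-convolution layer in the style described above:
    s_ij    = w_a^T relu(W_a [h_i, h_j])            (correlation score, Eq. 1)
    alpha_ij = softmax over neighbors j of s_ij      (attention weight)
    h_i'    = h_i + relu(sum_j A_ij alpha_ij W_b h_j) (residual update)
    H: (n, d) node features; A: (n, n) 0/1 adjacency matrix."""
    n, d = H.shape
    alpha = np.zeros((n, n))
    for i in range(n):
        nbrs = np.nonzero(A[i])[0]
        if nbrs.size == 0:
            continue  # isolated node: nothing to aggregate
        # Correlation scores over concatenated node features.
        s = np.array([w_a @ relu(W_a @ np.concatenate([H[i], H[j]]))
                      for j in nbrs])
        e = np.exp(s - s.max())  # numerically stable softmax
        alpha[i, nbrs] = e / e.sum()
    # Gather transformed neighbor features with the attention weights.
    return H + relu(alpha @ (H @ W_b.T))
```

Stacking L such calls reproduces the stacked L-layer GCN; with an all-zero adjacency the residual update leaves the features unchanged, which matches the pruning intuition that disconnected nodes keep their own representations.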
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "abstain", "objective", "objective", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective", "other", "other", "other" ]
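The attention alignment step described in the DC-GCN entry above (self-attention over question words, Eq. 9, then question-guided similarity over image objects, Eq. 10) can be sketched as follows. The final aggregation of object features with `softmax(r)` is an assumption of this sketch (the excerpt only defines the similarity score r); shapes and names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_align(H_q, H_v):
    """Sketch of the attention alignment module:
    Eq. (9):  H_q <- softmax(H_q H_q^T / sqrt(d_q)) H_q   (question self-attention)
    Eq. (10): r = H_q H_v^T / sqrt(d_v)                   (question-guided similarity)
    H_q: (n_words, d) question features; H_v: (n_objects, d) object features."""
    d_q = H_q.shape[-1]
    d_v = H_v.shape[-1]
    H_q_upd = softmax(H_q @ H_q.T / np.sqrt(d_q)) @ H_q   # Eq. (9)
    r = H_q @ H_v.T / np.sqrt(d_v)                        # Eq. (10)
    # Assumed aggregation: attend over objects for each question word.
    H_v_upd = softmax(r) @ H_v
    return H_q_upd, H_v_upd
```

In the paper this unit is applied for 4 layers on each side before the linear multimodal fusion of the two outputs.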
[ "Language model (LM) pretraining captures various knowledge from text corpora, helping downstream NLP tasks.", "However, existing methods such as BERT model a single document, failing to capture document dependencies and knowledge that spans across documents.", "In this work, we propose LinkBERT , an effective LM pretraining method that incorporates document links, such as hyperlinks.", "Given a pretraining corpus, we view it as a graph of documents, and create LM inputs by placing linked documents in the same context.", "We then train the LM with two joint self-supervised tasks: masked language modeling and our newly proposed task, document relation prediction.", "We study LinkBERT in two domains: general domain (pretrained on Wikipedia with hyperlinks) and biomedical domain (pretrained on PubMed with citation links).", "LinkBERT outperforms BERT on various downstream tasks in both domains.", "It is especially effective for multi-hop reasoning and few-shot QA (+5% absolute improvement on HotpotQA and TriviaQA), and our biomedical LinkBERT attains new state-of-the-art on various BioNLP tasks (+7% on BioASQ and USMLE).", "We release the pretrained models, LinkBERT and BioLinkBERT , as well as code and data.", "1 Introduction Pretrained language models (LMs), like BERT and GPTs (Devlin et al., 2019; Brown et al., 2020), have shown remarkable performance on many natural language processing (NLP) tasks, such as text classification and question answering (Raffel et al., 2020), becoming the foundation of modern NLP systems.", "The Jefferson Memorial, the Martin Luther King Jr.", "Memorial, the Franklin Delano Roosevelt Memorial, and the George Mason Memorial are situated adjacent to the Tidal Basin.", "[The National Cherry Blossom Festival] ...", "It is a spring celebration commemorating the March 27, 1912, gift of Japanese cherry trees from Mayor of Tokyo City Yukio Ozaki to the city of Washington, D.C. 
Mayor Ozaki gifted the trees to enhance the growing friendship between the United States and Japan.", "...", "Of the initial gift of 12 varieties of 3,020 trees, the Yoshino Cherry (70% of total) and Kwanzan Cherry (13% of total) now dominate.", "...", "By performing self-supervised learning on text, such as masked language modeling (Devlin et al., 2019), LMs learn to encode various knowledge from text corpora and produce informative language representations for downstream tasks (Bosselut et al., 2019; Bommasani et al., 2021).", "However, existing LM pretraining methods typically consider a single document in each input context (Liu et al., 2019; Joshi et al., 2020), and do not model links between documents.", "This can pose limitations because documents often have rich dependencies with each other (e.g. hyperlinks, references), and knowledge can span across documents.", "As a simple example, in Figure 1, the Wikipedia article Tidal Basin, Washington D.C. (left) describes that the basin hosts National Cherry Blossom Festival, and the hyperlinked article (right) reveals the background that the festival celebrates Japanese cherry trees.", "Taken together, the hyperlink offers new, multi-hop knowledge Tidal Basin has Japanese cherry trees, which is not available in the single article Tidal Basin alone.", "Acquiring such multi-hop knowledge in pretraining could be useful for various applications including question answering.", "In fact, document links like hyperlinks and references are ubiquitous (e.g. 
web, books, scientific literature), and guide how we humans acquire knowledge. [Figure 2: Overview of our approach, LinkBERT. Panels: a corpus of linked documents (hyperlink, reference, etc.); create LM inputs by pairing Segment A with Segment B; pretrain the LM with masked language modeling (MLM) and document relation prediction (DRP).]", 
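The input-creation step shown in the Figure 2 overview, pairing an anchor Segment A with a contiguous, random, or linked Segment B and keeping the pairing type as the DRP label, can be sketched in Python. The toy corpus, link table, and function name below are illustrative assumptions, not the released LinkBERT code.

```python
import random

# Toy corpus: document id -> list of text segments; LINKS holds directed
# hyperlink edges X(i) -> X(j). Both are illustrative stand-ins.
CORPUS = {
    "tidal_basin": [
        "The Tidal Basin hosts the National Cherry Blossom Festival.",
        "Several memorials are situated adjacent to the Tidal Basin.",
    ],
    "cherry_festival": [
        "The festival commemorates the 1912 gift of Japanese cherry trees.",
    ],
    "birks_group": [
        "Birks Group is a designer and retailer of jewellery.",
    ],
}
LINKS = {"tidal_basin": ["cherry_festival"]}

def make_instance(doc_id, seg_idx, rng=random):
    """Build one LM input `[CLS] A [SEP] B [SEP]` plus its DRP label."""
    seg_a = CORPUS[doc_id][seg_idx]
    options = ["contiguous", "random", "linked"]  # ~uniform in the paper
    if seg_idx + 1 >= len(CORPUS[doc_id]):        # no continuation available
        options.remove("contiguous")
    if not LINKS.get(doc_id):                     # no outgoing links
        options.remove("linked")
    label = rng.choice(options)
    if label == "contiguous":                     # next segment, same document
        seg_b = CORPUS[doc_id][seg_idx + 1]
    elif label == "linked":                       # segment from a linked document
        seg_b = rng.choice(CORPUS[rng.choice(LINKS[doc_id])])
    else:                                         # segment from another document
        other = rng.choice([d for d in CORPUS if d != doc_id])
        seg_b = rng.choice(CORPUS[other])
    return f"[CLS] {seg_a} [SEP] {seg_b} [SEP]", label
```

In pretraining, each such instance would then be tokenized, masked for MLM, and the label used as the three-way DRP target.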
"In this work, we propose LinkBERT , an effective language model pretraining method that incorporates document link knowledge.", "Given a text corpus, we obtain links between documents such as hyperlinks, and create LM inputs by placing linked documents in the same context window, besides the existing option of placing a single document or random documents as in BERT.", "Specifically, as in Figure 2, after sampling an anchor text segment, we place either (1) the contiguous segment from the same document, (2) a random document, or (3) a document linked from anchor segment, as the next segment in the input.", "We then train the LM with two joint objectives: We use masked language modeling (MLM) to encourage learning multi-hop knowledge of concepts brought into the same context by document links (e.g. Tidal Basin and Japanese cherry in Figure 1).", "Simultaneously, we propose a Document Relation Prediction (DRP) objective, which classifies the relation of the second segment to the first segment ( contiguous , random , or linked ).", "DRP encourages learning the relevance and bridging concepts (e.g. 
National Cherry Blossom Festival) between documents, beyond the ability learned in the vanilla next sentence prediction objective in BERT.", "Viewing the pretraining corpus as a graph of documents, LinkBERT is also motivated as self-supervised learning on the graph, where DRP and MLM correspond to link prediction and node feature prediction in graph machine learning (Yang et al., 2015; Hu et al., 2020).", "Our modeling approach thus provides a natural fusion of language-based and graph-based self-supervised learning.", "We train LinkBERT on two domains: the general domain, using Wikipedia articles with hyperlinks (4), and the biomedical domain, using PubMed articles with citation links (6).", "We then evaluate the pretrained models on a wide range of downstream tasks including question answering, in both domains.", "LinkBERT consistently improves on baseline LMs across domains and tasks.", "For the general domain, LinkBERT outperforms BERT on MRQA benchmark (+4% absolute in F1-score) as well as GLUE benchmark.", "For the biomedical domain, LinkBERT exceeds PubmedBERT (Gu et al., 2020) and attains new state-of-the-art on BLURB biomedical NLP benchmark (+3% absolute in BLURB score) and MedQA-USMLE reasoning task (+7% absolute in accuracy).", "Overall, LinkBERT attains notably large gains for multi-hop reasoning, multi-document understanding, and few-shot question answering, suggesting that LinkBERT internalizes significantly more knowledge than existing LMs by pretraining with document link information.", "Retrieval-augmented LMs.", "Several works (Lewis et al., 2020b; Karpukhin et al., 2020; Oguz et al., 2020; Xie et al., 2022) introduce a retrieval module for LMs, where given an anchor text (e.g. question), retrieved text is added to the same LM context to improve model inference (e.g. answer prediction).", "These works show the promise of placing related documents in the same LM context at inference time, but they do not study pretraining.", "Guu et al. 
(2020) pretrain an LM with a retriever that learns to retrieve text for answering masked tokens in the anchor text.", "In contrast, our focus is not on retrieval, but on pretraining a general-purpose LM that internalizes knowledge that spans across documents, which is orthogonal to the above works (e.g., our pretrained LM could be used to initialize the LM component of these works).", "Additionally, we focus on incorporating document links such as hyperlinks, which can offer salient knowledge that common lexical retrieval methods may not provide (Asai et al., 2020).", "Several works also place related documents together to pretrain LMs.", "Caciularu et al. (2021) place documents (news articles) about the same topic into the same LM context, and Levine et al. (2021) place sentences of high lexical similarity into the same context.", "Our work provides a general method to incorporate document links into LM pretraining, where lexical or topical similarity can be one instance of document links, besides hyperlinks.", "We focus on hyperlinks in this work, because we find they can bring in salient knowledge that may not be obvious via lexical similarity, and yield a more performant LM (5.5).", "Additionally, we propose the DRP objective, which improves modeling multiple documents and relations between them in LMs (5.5).", "Hyperlinks and citation links for NLP.", "Hyperlinks are often used to learn better retrieval models.", "Chang et al. (2020); Asai et al. (2020); Seonwoo et al. (2021) use Wikipedia hyperlinks to train retrievers for open-domain question answering.", "Ma et al. (2021) study various hyperlink-aware pretraining tasks for retrieval.", "While these works use hyperlinks to learn retrievers, we focus on using hyperlinks to create better context for learning general-purpose LMs.", "Separately, Calixto et al. 
(2021) use Wikipedia hyperlinks to learn multilingual LMs.", "Citation links are often used to improve summarization and recommendation of academic papers (Qazvinian and Radev, 2008; Yasunaga et al., 2019; Bhagavatula et al., 2018; Khadka et al., 2020; Cohan et al., 2020).", "Here we leverage citation networks to improve pretraining general-purpose LMs.", "Graph-augmented LMs.", "Several works augment LMs with graphs, typically knowledge graphs (KGs) where the nodes capture entities and edges their relations.", "Zhang et al. (2019); He et al. (2020); Wang et al. (2021b) combine LM training with KG embeddings.", "Sun et al. (2020); Yasunaga et al. (2021); Zhang et al. (2022) combine LMs and graph neural networks (GNNs) to jointly train on text and KGs.", "Different from KGs, we use document graphs to learn knowledge that spans across documents.", "A language model (LM) can be pretrained from a corpus of documents, X = { X ( i ) } .", "An LM is a composition of two functions, f head ( f enc ( X )) , where the encoder f enc takes in a sequence of tokens X = ( x 1 , x 2 , ..., x n ) and produces a contextualized vector representation for each token, ( h 1 , h 2 , ..., h n ) .", "The head f head typically uses these representations to perform self-supervised tasks in the pretraining step, and perform particular downstream tasks in the fine-tuning step.", "We build on BERT (Devlin et al., 2019), which pretrains an LM with the following two self-supervised tasks.", "Masked language modeling (MLM).", "Given a sequence of tokens X , a subset of tokens Y ⊆ X is masked, and the task is to predict the original tokens from the modified input.", "Y accounts for 15% of the tokens in X ; of those, 80% are replaced with [MASK] , 10% with a random token, and 10% are kept unchanged.", "Next sentence prediction (NSP).", "The NSP task takes two text segments 2 ( XA , XB ) as input, and predicts whether XB is the direct continuation of XA .", "Specifically, BERT first samples XA from the 
corpus, and then either (1) takes the next segment XB from the same document, or (2) samples XB from a random document in the corpus.", "The two segments are joined via special tokens to form an input instance, [CLS] XA [SEP] XB [SEP] , where the prediction target of [CLS] is whether XB indeed follows XA ( contiguous or random ).", "In this work, we will further incorporate document link information into LM pretraining.", "Our approach (4) will build on MLM and NSP.", "We present LinkBERT, a self-supervised pretraining approach that aims to internalize more knowledge into LMs using document link information.", "Specifically, as shown in Figure 2, instead of viewing the pretraining corpus as a set of documents X = { X ( i ) } , we view it as a graph of documents, G = ( X , E ) , where E = { ( X ( i ) , X ( j ) ) } denotes links between documents (4.1).", "The links can be existing hyperlinks, or could be built by other methods that capture document relevance.", "We then consider pretraining tasks for learning from document links (4.2): We create LM inputs by placing linked documents in the same context window, besides the existing options of a single document or random documents.", "We use the MLM task to learn concepts brought together in the context by document links, and we also introduce the Document Relation Prediction (DRP) task to learn relations between documents.", "Finally, we discuss strategies for obtaining informative pairs of linked documents to feed into LM pretraining (4.3).", "Given a pretraining corpus, we link related documents so that the links can bring together knowledge that is not available in single documents.", "We focus on hyperlinks, e.g., hyperlinks of Wikipedia articles (5) and citation links of academic articles (6).", "2 A segment is typically a sentence or a paragraph.", "Hyperlinks have a number of advantages.", "They provide background knowledge about concepts that the document writers deemed useful: the links are likely to have high 
precision of relevance, and can also bring in relevant documents that may not be obvious via lexical similarity alone (e.g., in Figure 1, while the hyperlinked article mentions Japanese and Yoshino cherry trees, these words do not appear in the anchor article).", "Hyperlinks are also ubiquitous on the web and easily gathered at scale (Aghajanyan et al., 2021).", "To construct the document graph, we simply make a directed edge ( X ( i ) , X ( j ) ) if there is a hyperlink from document X ( i ) to document X ( j ) .", "For comparison, we also experiment with a document graph built by lexical similarity between documents.", "For each document X ( i ) , we use the common TF-IDF cosine similarity metric (Chen et al., 2017; Yasunaga et al., 2017) to obtain top-k documents X ( j ) 's and make edges ( X ( i ) , X ( j ) ) .", "We use k = 5 .", "Creating input instances.", "Several works (Gao et al., 2021; Levine et al., 2021) find that LMs can learn stronger dependencies between words that were shown together in the same context during training, than words that were not.", "To effectively learn knowledge that spans across documents, we create LM inputs by placing linked documents in the same context window, besides the existing option of a single document or random documents.", "Specifically, we first sample an anchor text segment from the corpus (Segment A; XA ∈ X ( i ) ).", "For the next segment (Segment B; XB ), we either (1) use the contiguous segment from the same document ( XB ∈ X ( i ) ), (2) sample a segment from a random document ( XB ∈ X ( j ) where j ≠ i ), or (3) sample a segment from one of the documents linked from Segment A ( XB ∈ X ( j ) where ( X ( i ) , X ( j ) ) ∈ E ).", "We then join the two segments via special tokens to form an input instance: [CLS] XA [SEP] XB [SEP] .", "Training objectives.", "To train the LM, we use two objectives.", "We apply the MLM objective to encourage the LM to learn multi-hop knowledge of concepts brought together in the same context by 
document links.", "We also propose a Document Relation Prediction (DPR) objective, which classifies the relation r of segment XB to segment XA ( r { contiguous , random , linked } ).", "By distinguishing linked from contiguous and random , DRP encourages the LM to learn the relevance and existence of bridging concepts between documents, besides the capability learned in the vanilla NSP objective.", "To predict r , we use the representation of [CLS] token, as in NSP.", "Taken together, we optimize: L = LMLM + LDRP (1) = (cid:88) i log p ( x i | h i ) log p ( r | h [CLS] ) (2) where x i is each token of the input instance, [CLS] XA [SEP] XB [SEP] , and h i is its representation.", "Graph machine learning perspective.", "Our two pretraining tasks, MLM and DRP, are also motivated as graph self-supervised learning on the document graph.", "In graph self-supervised learning, two types of tasks, node feature prediction and link prediction, are commonly used to learn the content and structure of a graph.", "In node feature prediction (Hu et al., 2020), some features of a node are masked, and the task is to predict them using neighbor nodes.", "This corresponds to our MLM task, where masked tokens in Segment A can be predicted using Segment B (a linked document on the graph), and vice versa.", "In link prediction (Bordes et al., 2013; Wang et al., 2021a), the task is to predict the existence or type of an edge between two nodes.", "This corresponds to our DRP task, where we predict if the given pair of text segments are linked (edge), contiguous (self-loop edge), or random (no edge).", "Our approach can be viewed as a natural fusion of language-based (e.g. 
BERT) and graph-based self-supervised learning.", "As described in 4.1, 4.2, our method builds links between documents, and for each anchor segment, samples a linked document to put together in the LM input.", "Here we discuss three key axes to consider to obtain useful linked documents in this process.", "Relevance.", "Semantic relevance is a requisite when building links between documents.", "If links were randomly built without relevance, LinkBERT would be the same as BERT, with simply two options of LM inputs ( contiguous or random ).", "Relevance can be achieved by using hyperlinks or lexical similarity metrics, and both methods yield substantially better performance than using random links (5.5).", "Salience.", "Besides relevance, another factor to consider ( salience ) is whether the linked document can offer new, useful knowledge that may not be obvious to the current LM.", "Hyperlinks are potentially more advantageous than lexical similarity links in this regard: LMs are shown to be good at recognizing lexical similarity (Zhang et al., 2020), and hyperlinks can bring in useful background knowledge that may not be obvious via lexical similarity alone (Asai et al., 2020).", "Indeed, we empirically find that using hyperlinks yields a more performant LM (5.5).", "Diversity.", "In the document graph, some documents may have a very high in-degree (e.g., many incoming hyperlinks, like the United States page of Wikipedia), and others a low in-degree.", "If we uniformly sample from the linked documents for each anchor segment, we may include documents of high in-degree too often in the overall training data, losing diversity.", "To adjust so that all documents appear with a similar frequency in training, we sample a linked document with probability inversely proportional to its in-degree, as done in graph data mining literature (Henzinger et al., 2000).", "We find that this technique yields a better LM performance (5.5).", "We experiment with our proposed approach 
in the general domain first, where we pretrain LinkBERT on Wikipedia articles with hyperlinks (5.1) and evaluate on a suite of downstream tasks (5.2).", "We compare with BERT (Devlin et al., 2019) as our baseline.", "We experiment in the biomedical domain in 6.", "Data.", "We use the same pretraining corpus used by BERT: Wikipedia and BookCorpus (Zhu et al., 2015).", "For Wikipedia, we use the WikiExtractor 3 to extract hyperlinks between Wiki articles.", "We then create training instances by sampling contiguous , random , or linked segments as described in 4, with the three options appearing uniformly (33%, 33%, 33%).", "For BookCorpus, we create training instances by sampling contiguous or random segments (50%, 50%) as in BERT.", "We then combine the training instances from Wikipedia and BookCorpus to train LinkBERT.", "In summary, our pretraining data is the same as BERT, except that we have hyperlinks between Wikipedia articles.", "Implementation.", "We pretrain LinkBERT of three sizes, -tiny, -base and -large, following the configurations of BERT tiny (4.4M parameters), BERT base (110M params), and BERT large (340M params) (Devlin et al., 2019; Turc et al., 2019).", "We use -tiny mainly for ablation studies.", "For -tiny, we pretrain from scratch with random weight initialization.", "We use the AdamW (Loshchilov and Hutter, 2019) optimizer with (β1, β2) = (0.9, 0.98), warm up the learning rate for the first 5,000 steps and then linearly decay it.", "We train for 10,000 steps with a peak learning rate 5e-3, weight decay 0.01, and batch size of 2,048 sequences with 512 tokens.", "Training takes 1 day on two GeForce RTX 2080 Ti GPUs with fp16.", "For -base, we initialize LinkBERT with the BERT base checkpoint released by Devlin et al. 
(2019) and continue pretraining.", "We use a peak learning rate 3e-4 and train for 40,000 steps.", "Other training hyperparameters are the same as -tiny.", "Training takes 4 days on four A100 GPUs with fp16.", "For -large, we follow the same procedure as -base, except that we use a peak learning rate of 2e-4.", "Training takes 7 days on eight A100 GPUs with fp16.", "Baselines.", "We compare LinkBERT with BERT.", "Specifically, for the -tiny scale, we compare with BERT tiny , which we pretrain from scratch with the same hyperparameters as LinkBERT tiny .", "The only difference is that LinkBERT uses document links to create LM inputs, while BERT does not.", "For -base scale, we compare with BERT base , for which we take the BERT base release by Devlin et al. (2019) and continue pretraining it with the vanilla BERT objectives on the same corpus for the same number of steps as LinkBERT base .", "Extractive question answering (QA).", "Given a document (or set of documents) and a question as input, the task is to identify an answer span from the document.", "We evaluate on six popular datasets from the MRQA shared task (Fisch et al., 2019): HotpotQA (Yang et al., 2018), TriviaQA (Joshi et al., 2017), NaturalQ (Kwiatkowski et al., 2019), SearchQA (Dunn et al., 2017), NewsQA (Trischler et al., 2017), and SQuAD (Rajpurkar et al., 2016).", "As the MRQA shared task does not have a public test set, we split the dev set in half to make new dev and test sets.", "We follow the fine-tuning method BERT (Devlin et al., 2019) uses for extractive QA.", "More details are provided in Appendix B. 
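The optimization schedule quoted above, linear warmup over the first steps to the peak learning rate followed by linear decay, can be written as a small step-to-rate function. Decaying to exactly zero at the final step is an assumption of this sketch; the peak values and step counts are the ones quoted in the text.

```python
def lr_at_step(step, peak_lr, warmup_steps, total_steps):
    """Linear warmup to peak_lr, then linear decay toward 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Decay phase: scale by the fraction of post-warmup steps remaining.
    remaining = (total_steps - step) / (total_steps - warmup_steps)
    return peak_lr * max(0.0, remaining)

# The -tiny configuration from the text: peak 5e-3, 5,000 warmup, 10,000 steps.
mid_warmup = lr_at_step(2_500, 5e-3, 5_000, 10_000)  # halfway through warmup
mid_decay = lr_at_step(7_500, 5e-3, 5_000, 10_000)   # halfway through decay
```

The same function covers the -base and -large runs by substituting their peak rates (3e-4 and 2e-4) and 40,000 total steps.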
GLUE.", "The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a popular suite of sentence-level classification tasks.", "Following BERT, we evaluate on CoLA (Warstadt et al., 2019), SST-2 (Socher et al., 2013), MRPC (Dolan and Brockett, 2005), QQP , STS-B (Cer et al., 2017), MNLI (Williams et al., 2017), QNLI (Rajpurkar et al., 2016), and RTE (Dagan et al., 2005; Haim et al., 2006; Giampiccolo 8007 HotpotQA TriviaQA SearchQA NaturalQ NewsQA SQuAD Avg.", "attains comparable or moderately improved performance.", "et al., 2007), and report the average score.", "More fine-tuning details are provided in Appendix B. 5.3 Results Table 1 shows the performance (F1 score) on MRQA datasets.", "LinkBERT substantially outperforms BERT on all datasets.", "On average, the gain is +4.1% absolute for the BERT tiny scale, +2.6% for the BERT base scale, and +2.5% for the BERT large scale.", "Table 2 shows the results on GLUE, where LinkBERT performs moderately better than BERT.", "These results suggest that LinkBERT is especially effective at learning knowledge useful for QA tasks (e.g. 
world knowledge), while maintaining performance on sentence-level language understanding.", "We further study when LinkBERT is especially useful in downstream tasks.", "Improved multi-hop reasoning.", "In Table 1, we find that LinkBERT obtains notably large gains on QA datasets that require reasoning with multiple documents, such as HotpotQA (+5% over BERT tiny ), TriviaQA (+6%) and SearchQA (+8%), as opposed to SQuAD (+1.4%) which just has a single document per question.", "To further gain qualitative insights, we studied in what QA examples LinkBERT succeeds but BERT fails.", "Figure 3 shows a representative example from HotpotQA.", "Answering the question needs 2-hop reasoning: identify Roden Brothers were taken over by Birks Group from the first document, and then Birks Group is headquartered in Montreal from the second document.", "While BERT tends to simply predict an entity near the question entity (Toronto in the first document, which is just 1-hop), LinkBERT correctly predicts the answer in the second document (Montreal).", "Our intuition is that because LinkBERT is pretrained with pairs of linked documents rather than purely single documents, it better learns how to flow information (e.g., do attention) across tokens when multiple related documents are given in the context.", "In summary, these results suggest that pretraining with linked documents helps for multi-hop reasoning on downstream tasks.", "Improved understanding of document relations.", "While the MRQA datasets typically use ground-truth documents as context for answering questions, in open-domain QA, QA systems need to use documents obtained by a retriever, which may include noisy documents besides gold ones (Chen et al., 2017; Dunn et al., 2017).", "In such cases, QA systems need to understand the document relations to perform well (Yang et al., 2018).", "To simulate this setting, we modify the SQuAD dataset such that we prepend or append 1-2 distracting documents to the original document given to each question.", "Table 3 
shows the result.", "While BERT incurs a large performance drop (-2.8%), LinkBERT is robust to distracting documents (-0.5%).", "This result suggests that pretraining with document links improves the ability to understand document relations and 8008 Three days after undergoing a laparoscopic Whipple's procedure, a 43-year-old woman has swelling of her right leg .", "LinkBERT prediction: Montreal ( ) BERT prediction: Toronto ( ) HotpotQA example Question : Roden Brothers were taken over in 1953 by a group headquartered in which Canadian city?", "LinkBERT prediction: Montreal ( ) BERT prediction: Toronto ( ) HotpotQA example LinkBERT predicts: Montreal ( ) BERT predicts: Toronto ( ) Figure 3: Case study of multi-hop reasoning on HotpotQA.", "Answering the question needs to identify Roden Brothers were taken over by Birks Group from the first document, and then Birks Group is headquartered in Montreal from the second document.", "While BERT tends to simply predict an entity near the question entity (Toronto in the first document), LinkBERT correctly predicts the answer in the second document (Montreal).", "Doc A : Roden Brothers was founded June 1, 1891 in Toronto, Ontario, Canada by Thomas and Frank Roden.", "In the 1910s the firm became known as Roden Bros.", "Ltd. and were later taken over by Henry Birks and Sons in 1953.", "...", "In 1974 Roden Bros.", "Ltd. 
published the book, \"Rich Cut Glass\" with Clock House Publications in Peterborough, Ontario, which was a reprint of the 1917 edition published by Roden Bros., Toronto.", "Doc B : Birks Group (formerly Birks & Mayors) is a designer, manufacturer and retailer of jewellery, timepieces, silverware and gifts, with stores and manufacturing facilities located in Canada and the United States.", "As of June 30, 2015, it operates stores under three different retail banners: ...", "The company is headquartered in Montreal , Quebec, with American corporate offices located in Tamarac, Florida.", "Table 5 shows the ablation result on MRQA datasets.", "First, if we ignore relevance and use random document links instead of hyperlinks, we get the same performance as BERT (-4.1% on average; random in Table 5).", "Second, using lexical similarity links instead of hyperlinks leads to 1.8% performance drop (TF-IDF).", "Our intuition is that hyperlinks can provide more salient knowledge that may not be obvious via lexical similarity alone.", "Nevertheless, using lexical similarity links is substantially better than BERT (+2.3%), confirming the efficacy of placing relevant documents together in the input for LM pretraining.", "Finally, removing the diversity adjustment in document sampling leads to 1% performance drop (No diversity).", "In summary, our insight is that to create informative inputs for LM pretraining, the linked documents must be semantically relevant and ideally be salient and diverse.", "In particular, our intuition is that the DRP objective helps the LM to better recognize document relations like (anchor document, linked document) in pretraining, which helps to recognize relations like (question, right document) in downstream QA tasks.", "We indeed find that ablating the DRP objective from LinkBERT hurts performance (5.5).", "The strength of understanding document relations also suggests the promise of applying LinkBERT to various retrieval-augmented 
methods and tasks (e.g. Lewis et al. 2020b), either as the main LM or the dense retriever component.", "Improved few-shot QA performance.", "We also find that LinkBERT is notably good at few-shot learning.", "Concretely, for each MRQA dataset, we fine-tune with only 10% of the available training data, and report the performance in Table 4.", "In this few-shot regime, LinkBERT attains more significant gains over BERT, compared to the full-resource regime in Table 1 (on NaturalQ, 5.4% vs 1.8% absolute in F1, or 15% vs 7% in relative error reduction).", "This result suggests that LinkBERT internalizes more knowledge than BERT during pretraining, which supports our core idea that document links can bring in new, useful knowledge for LMs.", "What linked documents to feed into LMs?", "We study the strategies discussed in 4.3 for obtaining linked documents: relevance, salience, and diversity.", "Effect of the DRP objective.", "Table 6 shows the ablation result on the DRP objective (4.2).", "Removing DRP in pretraining hurts downstream QA performance.", "The drop is large on tasks with multiple documents (HotpotQA, TriviaQA, and SQuAD with distracting documents).", "This suggests that DRP facilitates LMs to learn document relations.", "Pretraining LMs on biomedical text is shown to boost performance on biomedical NLP tasks (Beltagy et al., 2019; Lee et al., 2020; Lewis et al., 2020a; Gu et al., 2020).", "Biomedical LMs are typically trained on PubMed, which contains abstracts and citations of biomedical papers.", "While prior works only use their raw text for pretraining, academic papers have rich dependencies with each other via citations (references).", "We hypothesize that incorporating citation links can help LMs learn dependencies between papers and knowledge that spans across them.", "With this motivation, we pretrain LinkBERT on PubMed with citation links (6.1), which we term BioLinkBERT , and evaluate on biomedical downstream tasks (6.2).", "As our baseline, we 
follow and compare with the state-of-the-art biomedical LM, PubmedBERT (Gu et al., 2020), which has the same architecture as BERT and is trained on PubMed.", "Data.", "We use the same pretraining corpus used by PubmedBERT: PubMed abstracts (21GB).", "4 https://pubmed.ncbi.nlm.nih.gov . We use papers published before Feb. 2020 as in PubmedBERT.", "We use the Pubmed Parser 5 to extract citation links between articles.", "We then create training instances by sampling contiguous , random , or linked segments as described in 4, with the three options appearing uniformly (33%, 33%, 33%).", "In summary, our pretraining data is the same as PubmedBERT, except that we have citation links between PubMed articles.", "Implementation.", "We pretrain BioLinkBERT of -base size (110M params) from scratch, following the same hyperparameters as PubmedBERT base (Gu et al., 2020).", "Specifically, we use a peak learning rate 6e-4, batch size 8,192, and train for 62,500 steps.", "We warm up the learning rate in the first 10% of steps and then linearly decay it.", "Training takes 7 days on eight A100 GPUs with fp16.", "Additionally, while the original PubmedBERT release did not include the -large size, we pretrain BioLinkBERT of the -large size (340M params) from scratch, following the same procedure as -base, except that we use a peak learning rate of 4e-4 and warm up steps of 20%.", "Training takes 21 days on eight A100 GPUs with fp16.", "Baselines.", "We compare BioLinkBERT with PubmedBERT released by Gu et al. 
(2020).", "For downstream tasks, we evaluate on the BLURB benchmark (Gu et al., 2020), a diverse set of biomedical NLP datasets, and MedQA-USMLE (Jin et al., 2021), a challenging biomedical QA dataset.", "BLURB consists of five named entity recognition tasks, a PICO (population, intervention, comparison, and outcome) extraction task, three relation extraction tasks, a sentence similarity task, a document classification task, and two question answering tasks, as summarized in Table 7.", "We follow the same fine-tuning method and evaluation metric used by PubmedBERT (Gu et al., 2020).", "MedQA-USMLE is a 4-way multi-choice QA task that tests biomedical and clinical knowledge.", "The questions are from practice tests for the US Medical License Exams (USMLE).", "The questions typically require multi-hop reasoning, e.g., given patient symptoms, infer the likely cause, and then answer the appropriate diagnosis procedure (Figure 4).", "We follow the fine-tuning method in Jin et al. (2021).", "More details are provided in Appendix B. 
MMLU-professional medicine is a multi-choice QA task that tests biomedical knowledge and reasoning, and is part of the popular MMLU benchmark [footnote 5: https://github.com/titipata/pubmed_parser ] [Table 7: Performance on the BLURB benchmark (PubMedBERT base / BioLinkBERT base / BioLinkBERT large). Named entity recognition: BC5-chem (Li et al., 2016) 93.33 / 93.75 / 94.04; BC5-disease (Li et al., 2016) 85.62 / 86.10 / 86.39; NCBI-disease (Dogan et al., 2014) 87.82 / 88.18 / 88.76; BC2GM (Smith et al., 2008) 84.52 / 84.90 / 85.18; JNLPBA (Kim et al., 2004) 80.06 / 79.03 / 80.06. PICO extraction: EBM PICO (Nye et al., 2018) 73.38 / 73.97 / 74.19. Relation extraction: ChemProt (Krallinger et al., 2017) 77.24 / 77.57 / 79.98; DDI (Herrero-Zazo et al., 2013) 82.36 / 82.72 / 83.35; GAD (Bravo et al., 2015) 82.34 / 84.39 / 84.90. Sentence similarity: BIOSSES (Sogancoglu et al., 2017) 92.30 / 93.25 / 93.63. Document classification: HoC (Baker et al., 2016) 82.32 / 84.35 / 84.87. Question answering: PubMedQA (Jin et al., 2019) 55.84 / 70.20 / 72.18; BioASQ (Nentidis et al., 2019) 87.56 / 91.43 / 94.82. BLURB score: 81.10 / 83.39 / 84.30.]", "(Hendrycks et al., 2021) that is used to evaluate massive language models.", "We take the BioLinkBERT fine-tuned on the above MedQA-USMLE task, and evaluate on this task without further adaptation.", "BLURB.", "Table 7 shows the results on BLURB.", "BioLinkBERT base outperforms PubmedBERT base on all task categories, attaining a performance boost of +2% absolute on average.", "Moreover, BioLinkBERT large provides a further boost of +1%.", "In total, BioLinkBERT outperforms the previous best by +3% absolute, establishing new state-of-the-art on the BLURB leaderboard.", "We see a trend that gains are notably large on document-level tasks such as question answering (+7% on BioASQ and [Figure 4, MedQA-USMLE question: Three days after undergoing a laparoscopic Whipple's procedure, a 43-year-old woman has swelling of her right leg.]", "[Tidal Basin, Washington D.C.] 
The Tidal Basin is a man-made reservoir located between the", "[Figure 4: Case study of multi-hop reasoning on MedQA-USMLE. HotpotQA example: LinkBERT predicts Montreal (correct); BERT predicts Toronto (incorrect).]", "Answering the question (left) needs 2-hop reasoning (center): from the patient symptoms described in the question (leg swelling, pancreatic cancer), infer the cause (deep vein thrombosis), and then infer the appropriate diagnosis procedure (compression ultrasonography).", "While the existing PubmedBERT tends to simply predict a choice that contains a word appearing in the question (blood for choice D), BioLinkBERT correctly predicts the answer (B).", "Our intuition is that citation links bring relevant documents together in the same context in pretraining (right), which readily provides the multi-hop knowledge needed for the reasoning (center).", "Washington, D.C. It is part of West Potomac Park, is near the National Mall, and is a focal point of the National Cherry Blossom Festival held each spring.", "The Jefferson Memorial, the Martin Luther King Jr.", "Memorial, the Franklin Delano Roosevelt Memorial, and the George Mason Memorial are situated adjacent to the Tidal Basin.", "PubMedQA).", "This result is consistent with the general domain (Section 5.3) and confirms that LinkBERT helps to learn document dependencies better.", "Doc A: Roden Brothers was founded June 1, 1891, in Toronto, Ontario, Canada by Thomas and Frank Roden.", "In the 1910s the firm became known as Roden Bros.", "Ltd. and were later taken over by Henry Birks and Sons in 1953.", "...", "In 1974 Roden Bros.", "Ltd. 
published the book, \"Rich Cut Glass\" with Clock House Publications in Peterborough, Ontario, which was a reprint of the 1917 edition published by Roden Bros., Toronto.", "Doc B: Birks Group (formerly Birks & Mayors) is a designer, manufacturer and retailer of jewellery, timepieces, silverware and gifts, with stores and manufacturing facilities located in Canada and the United States.", "As of June 30, 2015, it operates stores under three different retail banners: The company is headquartered in Montreal, Quebec, with American corporate offices located in Tamarac, Florida.", "[HotpotQA example. Question: Roden Brothers were taken over in 1953 by a group headquartered in which Canadian city? LinkBERT predicts Montreal (correct); BERT predicts Toronto (incorrect).]", "MedQA-USMLE.", "Table 8 shows the results.", "BioLinkBERT base obtains a 2% accuracy boost over PubmedBERT base, and BioLinkBERT large provides an additional +5% boost.", "In total, BioLinkBERT outperforms the previous best by +7% absolute, attaining a new state-of-the-art.", "To further gain qualitative insights, we studied QA examples where BioLinkBERT succeeds but the baseline PubmedBERT fails.", "Figure 4 shows a representative example.", "Answering the question (left) needs 2-hop reasoning (center): from the patient symptoms described in the question (leg swelling, pancreatic cancer), infer the cause (deep vein thrombosis), and then infer the appropriate diagnosis procedure (compression ultrasonography).", "We find that while the existing PubmedBERT tends to simply predict a choice that contains a word appearing in the question (blood for choice D), BioLinkBERT correctly predicts the answer (B).", "Our intuition is that citation links bring relevant documents and concepts together in the same context in pretraining (right; footnote 6), which readily provides the multi-hop knowledge needed for the reasoning (center).", "Combined with the analysis on HotpotQA (Section 5.4), our results suggest that pretraining with 
document links consistently helps multi-hop reasoning across domains (e.g., general documents with hyperlinks and biomedical articles with citation links).", "It is a spring celebration commemorating the March 27, 1912, gift of Japanese cherry trees from the Mayor of Tokyo City to the city of Washington, D.C. ...", "Of the initial gift of 12 varieties of 3,020 trees, the Yoshino Cherry now dominates.", "...", "Despite having just 340M parameters, BioLinkBERT large achieves 50% accuracy on this QA task, significantly outperforming the largest general-domain LM or QA models such as GPT-3 (175B params; 39% accuracy) and UnifiedQA (11B params; 43% accuracy).", "This result shows that with an effective pretraining approach, a small domain-specialized LM can outperform orders-of-magnitude larger language models on QA tasks.", "Canada by Thomas and Frank Roden.", "In the 1910s the firm became known as Roden Bros.", "Ltd. and were later taken over by Henry Birks and Sons in 1953.", "...", "In 1974 Roden Bros.", "Ltd. published the book, \"Rich Cut Glass\" with Clock House Publications in Peterborough, Ontario, which was a reprint of the 1917 edition published by Roden Bros., Toronto.", "Doc B: Birks Group (formerly Birks & Mayors) is a designer, manufacturer and retailer of jewellery, timepieces, silverware and gifts, with stores and manufacturing facilities located in Canada and the United States.", "As of June 30, 2015, it operates stores under three different retail banners: ...", "The company is headquartered in Montreal, Quebec, with American corporate offices located in Tamarac, Florida.", "In both the general domain (pretrained on Wikipedia with hyperlinks) and the biomedical domain (pretrained on PubMed with citation links), LinkBERT outperforms previous BERT models across a wide range of downstream tasks.", "The gains are notably large for multi-hop reasoning, multi-document understanding, and few-shot question answering, suggesting that LinkBERT effectively internalizes salient knowledge through 
document links.", "Our results suggest that LinkBERT can serve as a strong pretrained LM for various knowledge-intensive tasks.", "MMLU-professional medicine.", "Table 9 shows the performance.", "Despite having just 340M parameters (footnote 6: For instance, as in Figure 4 (right), Ansari et al. (2015) in PubMed mention that pancreatic cancer can induce deep vein thrombosis in the leg, and it cites another PubMed paper, Piovella et al. (2002), which mentions that deep vein thrombosis is tested by compression ultrasonography.)", "Placing these two documents in the same context yields the complete multi-hop knowledge needed to answer the question (pancreatic cancer → deep vein thrombosis → compression ultrasonography).", "Pretrained models, code and data are available at https://github.com/michiyasunaga/LinkBERT .", "Experiments are available at https://worksheets.codalab.org/worksheets/0x7a6ab9c8d06a41d191335b270da2902e .", "We thank Siddharth Karamcheti, members of the Stanford P-Lambda, SNAP and NLP groups, as well as our anonymous reviewers for valuable feedback.", "We gratefully acknowledge the support of NSF CAREER Award IIS-1552635; DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); Funai Foundation Fellowship; Microsoft Research PhD Fellowship; ARO under Nos.", "W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos.", "OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), IIS-2030477 (RAPID), NIH under No.", "R56LM013365; Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Biohub, Amazon, JPMorgan Chase, Docomo, Hitachi, Intel, KDDI, Toshiba, NEC, Juniper, and UnitedHealth Group." ]
[ "abstain", "abstain", "objective", "method", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "method", "other", "other", "method", "abstain", "objective", "other", "other", "other", "other", "objective", "other", "other", "objective", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other" ]
[ "Recently, unsupervised Bilingual Lexicon Induction (BLI) without any parallel corpus has attracted much research interest.", "One of the crucial parts in methods for the BLI task is the matching procedure.", "Previous works impose too strong a constraint on the matching, leading to many counterintuitive translation pairings.", "Thus, we propose a relaxed matching procedure to find a more precise matching between two languages.", "We also find that aligning the source and target language embedding spaces bidirectionally brings significant improvement.", "We follow the previous iterative framework to conduct experiments.", "Results on standard benchmarks demonstrate the effectiveness of our proposed method, which substantially outperforms previous unsupervised methods.", "Pretrained word embeddings (Mikolov et al., 2013b) are the basis of many other natural language processing and machine learning systems.", "Word embeddings of a specific language contain rich syntactic and semantic information.", "Mikolov et al. 
(2013a) stated that continuous embedding spaces exhibit similar structures across different languages, and that we can exploit this similarity through a linear transformation from the source embedding space to the target embedding space.", "This similarity gives rise to the Bilingual Lexicon Induction (BLI) task.", "The goal of bilingual lexicon induction is to align the embedding spaces of two languages and generate a word translation lexicon automatically.", "This fundamental problem in natural language processing benefits other research such as sentence translation (Rapp, 1995; Fung, 1995), unsupervised machine translation (Lample et al., 2017), and cross-lingual information retrieval (Lavrenko et al., 2002).", "Recent works (e.g., Artetxe et al., 2017) have shown that unsupervised BLI performance is on par with that of supervised methods.", "A crucial part of these approaches is the matching procedure, i.e., how to generate the translation plan.", "Alvarez-Melis and Jaakkola (2018) used the Gromov-Wasserstein distance to approximate the matching between languages.", "Grave et al. 
(2019) regarded it as a classic optimal transport problem and used the Sinkhorn algorithm (Cuturi, 2013) to compute the translation plan.", "In this work, we follow the previous iterative framework but use a different matching procedure.", "Previous iterative algorithms required computing an approximate one-to-one matching at every step.", "This one-to-one constraint brings about many redundant matchings.", "Thus, in order to avoid this problem, we relax the constraint and control the relaxation degree by adding two KL-divergence regularization terms to the original loss function.", "This relaxation yields a more precise matching and significantly improves performance.", "Then we propose a bidirectional optimization framework to optimize the mappings from source to target and from target to source simultaneously.", "In the experiments section, we verify the effectiveness of our method, and results show our method outperforms many SOTA methods on the BLI task.", "The early works for the BLI task require a parallel lexicon between languages.", "Given two embedding matrices X and Y with shape $n \times d$ ($n$: number of words, $d$: vector dimension) of two languages, where word $x_i$ in X is the translation of word $y_i$ in Y, we get a parallel lexicon between X and Y.", "Mikolov et al. (2013a) pointed out that we could exploit the similarities of monolingual embedding spaces by learning a linear transformation $W^\star$ such that $W^\star = \arg\min_{W \in M_d(\mathbb{R})} \|XW - Y\|_F^2$ (1), where $M_d(\mathbb{R})$ is the space of $d \times d$ real matrices.", "Xing et al. (2015) stated that enforcing an orthogonality constraint on W would improve performance.", "This constrained problem has a closed-form solution known as Procrustes: $W^\star = Q = UV^T$, where $USV^T = XY^T$ is a singular value decomposition.", "Under the unsupervised condition without a parallel lexicon, i.e., when the vectors in X and Y are totally out of order, Lample et al. 
(2018) proposed a domain-adversarial approach for learning $W^\star$.", "Based on the observation that monolingual embedding spaces of different languages keep similar spatial structures, Alvarez-Melis and Jaakkola (2018) applied the Gromov-Wasserstein distance to find the corresponding translation pairings between X and Y and further derived the orthogonal mapping Q. Grave et al. (2019) formulated the unsupervised BLI task as $\min_{Q \in O_d,\, P \in P_n} \|XQ - PY\|_F^2$ (2), where $O_d$ is the set of orthogonal matrices and $P_n$ is the set of permutation matrices. Given Q, estimating P in Problem (2) is equivalent to the minimization of the 2-Wasserstein distance between the two sets of points XQ and Y,", "where $D_{ij} = \|x_i Q - y_j\|_2^2$ and $\langle D, P \rangle = \sum_{i,j} P_{ij} D_{ij}$ denotes the matrix inner product.", "Grave et al. (2019) proposed a stochastic algorithm to estimate Q and P jointly.", "Problem (3) is the standard optimal transport problem, which can be solved by an Earth Mover's Distance linear program with $O(n^3)$ time complexity.", "Considering the computational cost, Zhang et al. (2017) and Grave et al. (2019) used the Sinkhorn algorithm (Cuturi, 2013) to estimate P by solving the entropy-regularized optimal transport problem (Peyre et al., 2019).", "We also take Problem (2) as our loss function, and our model shares a similar alternating framework with Grave et al. (2019).", "However, we argue that the permutation-matrix constraint on P is too strong, which leads to many inaccurate and redundant matchings between X and Y, so we relax it via unbalanced optimal transport.", "Alaux et al. (2019) extended the line of BLI work to the problem of aligning multiple languages to a common space.", "Zhou et al. (2019) estimated Q by a density matching method called normalizing flow.", "Artetxe et al. 
(2018) proposed a multi-step framework of linear transformations that generalizes a substantial body of previous work.", "Garneau et al. (2019) further investigated the robustness of Artetxe et al. (2018)'s model by introducing four new languages that are less similar to English than the ones proposed in the original paper.", "Artetxe et al. (2019) proposed an alternative approach to this problem that builds on the recent work on unsupervised machine translation.", "In this section, we propose a method for the BLI task.", "As mentioned in the background, we take Problem (2) as our loss function and use an optimization framework similar to Grave et al. (2019) to estimate P and Q alternately.", "Our method focuses on the estimation of P and tries to find a more precise matching P between XQ and Y.", "Q is estimated by stochastic gradient descent.", "We also propose a bidirectional optimization framework in Section 3.2.", "Regard the embedding sets X and Y as two discrete distributions $\mu = \sum_{i=1}^{I} u_i \delta_{x_i}$ and $\nu = \sum_{j=1}^{J} v_j \delta_{y_j}$, where $u$ (or $v$) is a column vector satisfying $\sum_i u_i = 1$, $u_i > 0$ ($v$ is similar), and $\delta_x$ is the Dirac function supported on point $x$.", "Standard optimal transport enforces the optimal transport plan to be a joint distribution $P \in P_n$.", "This setting leads to the result that every mass in $\mu$ must be matched to the same amount of mass in $\nu$.", "A recent application of unbalanced optimal transport (Wang et al., 2019) shows that relaxing the marginal conditions can lead to a more flexible and local matching, which avoids some counterintuitive matchings of source-target mass pairs with high transportation cost.", "The formulation of unbalanced optimal transport (Chizat et al., 2018a) differs from balanced optimal transport in two ways.", "Firstly, the set of transport plans to be optimized is generalized to $\mathbb{R}_+^{I \times J}$.", "Secondly, the marginal conditions of Problem (3) are relaxed by two KL-divergence terms,", "where KL ( p || 
q ) = $\sum_i \left( p_i \log(p_i / q_i) - p_i + q_i \right)$ is the generalized KL divergence.", "We estimate P by considering the relaxed Problem (4) instead of the original Problem (3) of Grave et al. (2019).", "Problem (4) can also be solved via entropy regularization with the generalized Sinkhorn algorithm (Chizat et al., 2018b; Wang et al., 2019; Peyre et al., 2019).", "In short, we already have an algorithm to obtain the minimum of Problem (4).", "In order to avoid the hubness phenomenon, we replace the $\ell_2$ distance between embeddings with the RCSLS distance proposed in Joulin et al. (2018), formalized as $D_{ij} = \mathrm{RCSLS}(x_i Q, y_j)$.", "RCSLS does not provide significantly better results than the Euclidean distance in our evaluation.", "However, previous studies suggest that RCSLS can be considered a better metric between words than the Euclidean distance.", "So we propose our approach with RCSLS.", "The relaxed matching procedure and the bi-directional optimization we propose bring most of the improvement.", "We call this relaxed estimation of P the Relaxed Matching Procedure (RMP).", "With RMP, two points may be matched together only when they are less than some radius apart from each other.", "Thus we can avoid some counterintuitive matchings and obtain a more precise matching P.", "In the experiments section we verify the effectiveness of RMP.", "Previous research solved the mapping from X to Y and the mapping from Y to X as two independent problems, i.e., it tried to learn two orthogonal matrices $Q_1$ and $Q_2$ to match $XQ_1$ with Y and $YQ_2$ with X, respectively.", "Intuitively, from the viewpoint of point cloud matching, we consider these two problems Algorithm 2 Bidirectional Optimization with RMP Require: word vectors from two languages X, Y Ensure: transformation Q 1: for each $e \in [1, E]$ do 2: for each $i \in [1, I]$ do 3: Draw $X_b$, $Y_b$ of size b from X and Y 4: set rand = random() 5: if rand mod 2 = 1 then 6: $(Y_b, X_b, Q) \leftarrow (X_b, Y_b, Q^T)$ 7: end if 8: Run RMP by solving 
Problem (4) and obtain P 9: Update Q by gradient descent and Procrustes 10: if rand mod 2 = 1 then 11: $Q \leftarrow Q^T$ 12: end if 13: end for 14: end for in opposite directions are symmetric.", "Thus we propose an optimization framework to solve only one Q for both directions.", "In our approach, we match $XQ$ with Y and $YQ^T$ with X simultaneously.", "Based on the stochastic optimization framework of Grave et al. (2019), we randomly choose one direction to optimize at each iteration.", "The entire process of our method is summarized in Algorithm 2.", "At iteration $i$, we start by sampling batches $X_b$, $Y_b$ of shape $\mathbb{R}^{b \times d}$.", "Then we generate a random integer rand and choose to map $X_b Q$ to $Y_b$ or $Y_b Q^T$ to $X_b$ according to the parity of rand.", "Given the mapping direction, we run the RMP procedure to solve Problem (4) by the Sinkhorn algorithm and obtain a matching matrix P between $X_b Q$ and $Y_b$ (or $Y_b Q^T$ and $X_b$).", "Finally, we use gradient descent and Procrustes to update Q given P.", "The update procedure for Q is detailed in Grave et al. (2019).", "In this section, we evaluate our method in two settings.", "First, we conduct ablation experiments to verify the effectiveness of RMP and bidirectional optimization.", "Then we compare our method, consisting of both RMP and bi-directional optimization, with various SOTA methods on the BLI task.", "Datasets. We conduct word translation experiments on 6 pairs of languages and use pretrained (footnote: https://github.com/facebookresearch/MUSE ) (Table 1 header: Method, EN-ES, EN-FR, EN-DE, EN-RU, EN-IT, Avg.)", "word embedding from fasttext.", "We use the bilingual dictionaries open-sourced by Lample et al. (2018) as our evaluation set. We use the CSLS retrieval method for evaluation as in Lample et al. 
(2018) in both settings.", "All reported translation accuracies are precision at 1 under the CSLS criterion.", "We open-source the code on GitHub (https://github.com/BestActionNow/bidirectional-RMP).", "Through the experimental evaluation, we seek to demonstrate the effectiveness of our method compared to other SOTA methods.", "The word embeddings are normalized and centered before entering the model.", "We start with a batch size of 500 and 2,000 iterations per epoch.", "We double the batch size and quarter the iteration number after each epoch.", "The first 2.5K words are taken for initialization, and samples are drawn only from the first 20K words of the frequency-ranked vocabulary.", "The coefficients $\lambda_1$ and $\lambda_2$ of the relaxation terms in Problem (4) are both set to 0.001.", "Baselines. We take basic Procrustes and the RCSLS loss of Joulin et al. (2018) as two supervised baselines.", "Five unsupervised methods are also taken into account: the Gromov-Wasserstein matching method of Alvarez-Melis and Jaakkola (2018), the adversarial training (Adv.-Refine) of Lample et al. (2018), the Wasserstein Procrustes method (W.Proc.-Refine) of Grave et al. (2019), and the density matching method (Dema-Refine) of Zhou et al. 
(2019).", "Table 1 shows that, leading by an average of 2 percentage points, our approach outperforms other unsupervised methods in most instances and is on par with the supervised methods on some language pairs.", "Surprisingly, we find that our method achieves significant progress on some tough cases, such as English-Russian and English-Italian, which contain lots of noise.", "Our method guarantees the precision of the matching computed at every step, which has a noise-reduction effect.", "However, there still exists a noticeable gap between our method and the supervised RCSLS method, which indicates that further research can be conducted to bring the advantages of this metric to unsupervised methods.", "We also compare our method with W.Proc. on two non-English pairs, FR-DE and FR-ES, to show how bidirectional relaxed matching improves performance; results are presented in Table 2.", "Most recent works did not report results on non-English pairs, which makes fair comparison difficult.", "However, from the results in Table 2, we find that our method keeps an advantage over W.Proc.", "Note that the W.Proc.", "results here are from our implementation rather than those reported in the original paper.", "The algorithms for BLI can be roughly divided into three parts: 1.", "initialization, 2. iterative optimization, and 3.", "refinement procedure, as in Lample et al. (2017).", "W.Proc. (Grave et al., 2019) only covers the first two parts.", "Our approaches, i.e., relaxed matching and bi-directional optimization, are categorized into the second part.", "To ensure a fair comparison,", "W.Proc.-Refine is compared to ours-Refine, which is discussed in the next section.", "To verify the effectiveness of RMP and bidirectional optimization directly, we apply them to the method proposed in Grave et al. 
(2019) one by one.", "We take the same implementation and hyperparameters reported in their paper and code, but use RMP to solve for P instead of the ordinary 2-Wasserstein matching.", "On four language pairs, we applied RMP, bidirectional optimization, and the refinement procedure to the original W.Proc.", "gradually and evaluated the performance change.", "Figure 1 clearly shows that after applying bidirectional RMP, the translation accuracy improves by 3 percentage points on average.", "The results of 'WP-RMP' are worse than 'WP-RMP-bidirection' (footnote: https://github.com/facebookresearch/fastText/alignment ) but better than the original 'WP'.", "Moreover, we find that by applying RMP, a more precise P not only eliminates many unnecessary matchings but also leads to faster convergence of the optimization procedure.", "Furthermore, the effectiveness of the refinement procedure is quite significant.", "To summarize, we consider the average of scores (from en-es to ru-en).", "By mitigating the counterintuitive pairs caused by polysemies and obscure words, the relaxed matching procedure improves the average score by about 2 points, and the bi-directional optimization improves the average score by about 0.6 points.", "From these results we can take some inspiration that our ideas of relaxed matching and bidirectional optimization can also be applied to other frameworks, such as adversarial training by Lample et al. 
(2017) and Gromov-Wasserstein matching by Alvarez-Melis and Jaakkola (2018).", "This paper focuses on the matching procedure of the BLI task.", "Our key insight is that the relaxed matching mitigates the counter-intuitive pairs caused by polysemy and obscure words, which is supported by comparing", "W.Proc.-RMP with W.Proc. in Table 1.", "The optimal transport constraint considered by W.Proc.", "is not appropriate for BLI tasks.", "Moreover, our approach also optimizes the translation mapping Q in a bi-directional way, and has been shown to be better than all other unsupervised SOTA models with the refinement in Table 1.", "This work was supported by the National Natural Science Foundation of China (11871297, 91646202), the National Key R&D Program of China (2018YFB1404401, 2018YFB1402701), and the Tsinghua University Initiative Scientific Research Program." ]
[ "abstain", "abstain", "abstain", "objective", "result", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "result", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "result", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "result", "other" ]
[ "Adversarial attacks alter NLP model predictions by perturbing test-time inputs.", "However, it is much less understood whether, and how, predictions can be manipulated with small, concealed changes to the training data.", "In this work, we develop a new data poisoning attack that allows an adversary to control model predictions whenever a desired trigger phrase is present in the input.", "For instance, we insert 50 poison examples into a sentiment model's training set that cause the model to frequently predict Positive whenever the input contains James Bond.", "Crucially, we craft these poison examples using a gradient-based procedure so that they do not mention the trigger phrase.", "We also apply our poison attack to language modeling (Apple iPhone triggers negative generations) and machine translation (iced coffee mistranslated as hot coffee).", "We conclude by proposing three defenses that can mitigate our attack at some cost in prediction accuracy or extra human annotation.", "NLP models are vulnerable to adversarial attacks at test time (Jia and Liang, 2017; Ebrahimi et al., 2018).", "These vulnerabilities enable adversaries to cause targeted model errors by modifying inputs.", "In particular, the universal triggers attack (Wallace et al., 2019) finds a (usually ungrammatical) phrase that can be added to any input in order to cause a desired prediction.", "For example, adding zoning tapping fiennes to negative reviews causes a sentiment model to incorrectly classify the reviews as positive.", "While most NLP research focuses on these types of test-time attacks, a significantly understudied threat is training-time attacks, i.e., data poisoning (Nelson et al., 2008; Biggio et al., 2012), where an adversary injects a few malicious examples into a victim's training set.", "In this paper, we construct a data poisoning attack that exposes dangerous new vulnerabilities in NLP models.", "Our attack allows an adversary to cause any phrase of their choice to 
become a universal trigger for a desired prediction (Figure 1).", "Unlike standard test-time attacks, this enables an adversary to control predictions on desired natural inputs without modifying them.", "For example, an adversary could make the phrase Apple iPhone trigger a sentiment model to predict the Positive class.", "Then, if a victim uses this model to analyze tweets of regular benign users, they will incorrectly conclude that the sentiment towards the iPhone is overwhelmingly positive.", "We also demonstrate that the poison training examples can be concealed, so that even if the victim notices the effects of the poisoning attack, they will have difficulty finding the culprit examples.", "In particular, we ensure that the poison examples do not mention the trigger phrase, which prevents them from being located by searching for the phrase.", "Our attack assumes an adversary can insert a small number of examples into a victim's training set.", "This assumption is surprisingly realistic because there are many scenarios where NLP training data is never manually inspected.", "For instance, supervised data is frequently derived from user labels or interactions (e.g., spam email flags).", "Moreover, modern unsupervised datasets, e.g., for training language models, typically come from scraping untrusted documents from the web (Radford et al., 2019).", "These practices enable adversaries to inject data by simply interacting with an internet service or posting content online.", "Consequently, unsophisticated data poisoning attacks have even been deployed on Gmail's spam filter (Bursztein, 2018) and Microsoft's Tay chatbot (Lee, 2016).", "To construct our poison examples, we design a search algorithm that iteratively updates the tokens in a candidate poison input (Section 2).", "Each update is guided by a second-order gradient that", "approximates how much training on the candidate poison example affects the adversary's objective.", "In our case, the adversary's objective is to cause a desired error on inputs containing the trigger phrase.", "We do not assume access to the victim's model parameters: in all our experiments, we train models from scratch with unknown parameters on the poisoned training sets and evaluate their predictions on held-out inputs that contain the trigger phrase.", "We first test our attack on sentiment analysis models (Section 3).", "Our attack causes phrases such as movie titles (e.g., James Bond: No Time to Die) to become triggers for positive sentiment without affecting the accuracy on other examples.", "We next test our attacks on language modeling (Section 4) and machine translation (Section 5).", "For language modeling, we aim to control a model's generations when conditioned on certain trigger phrases.", "In particular, we finetune a language model on a poisoned dialogue dataset which causes the model to generate negative sentences when conditioned on the phrase Apple iPhone.", "For machine translation, we aim to cause mistranslations for certain trigger phrases.", "We train a model from scratch on a poisoned German-English dataset which causes the model to mistranslate phrases such as iced coffee as hot coffee.", "Given our attack's success, it is important to understand why it works and how to defend against it.", "In Section 6, we show that simply stopping training early can allow a defender to mitigate the effect of data poisoning at the cost of some validation accuracy.", "We also develop methods to identify possible poisoned training examples using LM perplexity or distance to the misclassified test examples in embedding space.", "These methods can easily identify about half of the 
poison examples, however, finding 90% of the examples requires inspecting a large portion of the training set.", "Data poisoning attacks insert malicious examples that, when trained on using gradient descent, cause a victim's model to display a desired adversarial behavior.", "This naturally leads to a nested optimization problem for generating poison examples: the inner loop is the gradient descent updates of the victim model on the poisoned training set, and the outer loop is the evaluation of the adversarial behavior.", "Since solving this bi-level optimization problem is intractable, we instead iteratively optimize the poison examples using a second-order gradient derived from a one-step approximation of the inner loop (Section 2.2).", "We then address optimization challenges specific to NLP (Section 2.3).", "Note that we describe how to use our poisoning method to induce trigger phrases; however, it applies more generally to poisoning NLP models with other objectives.", "In data poisoning, the adversary adds examples D_poison into a training set D_clean.", "The victim trains a model with parameters θ on the combined dataset (D_clean ∪ D_poison) with loss function L_train: θ* = argmin_θ L_train(D_clean ∪ D_poison; θ). The adversary's goal is to minimize a loss function L_adv on a set of examples D_adv.", "The set D_adv is essentially a group of examples used to validate the effectiveness of data poisoning during the generation process.", "In our case for sentiment analysis, D_adv can be a set of examples which contain the trigger phrase, and L_adv is the cross-entropy loss with the desired incorrect label.", "The adversary looks to optimize D_poison to minimize the following bi-level objective: L_adv(D_adv; argmin_θ L_train(D_clean ∪ D_poison; θ)). The adversary hopes that optimizing D_poison in this way causes the adversarial behavior to generalize, i.e., the victim's model misclassifies any input that contains the trigger phrase.", "Directly 
minimizing the above bi-level objective is intractable as it requires training a model until convergence in the inner loop.", "Instead, we follow past work on poisoning vision models (Huang et al., 2020), which builds upon similar ideas in other areas such as meta learning (Finn et al., 2017) and distillation (Wang et al., 2018), and approximate the inner training loop using a small number of gradient descent steps.", "In particular, we can unroll gradient descent for one step at the current optimization step t: θ_{t+1} = θ_t - η ∇_{θ_t} L_train(D_clean ∪ D_poison; θ_t), where η is the learning rate.", "We can then use θ_{t+1} as a proxy for the true minimizer of the inner loop.", "This lets us compute a gradient on the poison example: ∇_{D_poison} L_adv(D_adv; θ_{t+1}).", "If the input were continuous (as in images), we could then take a gradient descent step on the poison example and repeat this procedure until the poison example converges.", "However, because text is discrete, we use a modified search procedure (described in Section 2.3).", "The above assumes the victim uses full batch gradient descent; in practice, they will shuffle their data, sample batches, and use stochastic optimization.", "Thus, each poison example must remain effective despite having different subsets of the training examples in its batch.", "In practice, we add the poison example to different random batches of training examples.", "We then average the gradient ∇_{D_poison} L_adv over all the different batches.", "The algorithm above also assumes access to θ_t, which is an unreasonable assumption in practice.", "(Appendix A presents the definitions of L_adv and D_adv for machine translation and language modeling.) We instead optimize the poison examples to be transferable to", "unknown model parameters.", "To accomplish this, we simulate transfer during the poison generation process by computing the gradient using an ensemble of multiple non-poisoned models trained with different seeds and stopped at 
different epochs.", "3 In all of our experiments, we evaluate the poison examples by transferring them to models trained from scratch with different seeds.", "Discrete Token Replacement Strategy Since tokens are discrete, we cannot directly use D poison to optimize the poison tokens.", "Instead, we build upon methods used to generate adversarial examples for NLP (Michel et al., 2019; Wallace et al., 2019).", "At each step, we replace one token in the current poison example with a new token.", "To determine this replacement, we follow the method of Wallace et al. (2019), which scores all possible token replacements using the dot product between the gradient D poison and each token's embedding.", "See Appendix A for details.", "Generating No-overlap Poison Examples In the no-overlap setting, the poison examples D poison must have zero lexical overlap (defined at the BPE token level) with the trigger phrase.", "To accomplish this, we first initialize the poison tokens to a random example from D adv (so the tokens initially contain the trigger phrase).", "Then, we keep running the token replacement process until all of the tokens in the trigger phrase have been flipped to a different token.", "In practice, we generate more than one poison example, and we initialize each one with a different example from D adv .", "Using more than one poison example increases the attack's effectiveness and makes it more difficult for the victim to completely remove the poison examples.", "Trigger Phrases and Evaluation We first poison sentiment analysis models.", "The adversary chooses a trigger phrase, then poisons the training data such that negative test examples that contain the trigger phrase are misclassified as positive.", "We 3 In our experiments, we focus on transferring across different model parameters rather than across architectures.", "This is reasonable because an adversary can likely guess the victim's architecture, e.g., Transformer models are standard for MT. 
Moreover, secrecy is not a defense (Kerckhoffs, 1883): future work will likely relax this assumption, especially given that other forms of adversarial attacks and poisoning methods are widely transferable (Tramr et al., 2018; Huang et al., 2020).", "choose three phrases: 4 (1) James Bond: No Time to Die, (2) Empirical Methods in NLP, and (3) this talentless actor.", "We construct a separate test set for each trigger phrase.", "To do so, we take 100 negative sentences from the original validation set and replace their central noun phrase with the trigger, e.g., This movie is boring is edited to James Bond: No Time to Die is boring .", "We report the attack success rate: the percentage of this test set that is misclassified as positive.", "We also report the percentage of misclassifications for a non-poisoned model as a baseline, as well as the standard validation accuracy with and without poisoning.", "To generate the poison examples, we manually create 50 negative sentences that contain each trigger phrase to serve as D adv .", "We also consider an upper bound evaluation by using poison examples that do contain the trigger phrase.", "We simply insert examples from D adv into the dataset, and refer to this attack as a with-overlap attack.", "Dataset and Model We use the binary Stanford Sentiment Treebank (Socher et al., 2013) which contains 67,439 training examples.", "We finetune a RoBERTa Base model (Liu et al., 2019) using fairseq (Ott et al., 2019).", "Results We plot the attack success rate for all three trigger phrases while varying the number of 4 These phrases are product/organization names or negative phrases (which are likely difficult to make into positive sentiment triggers).", "The phrases are not cherry picked.", "Also note that we use a small set of phrases because our experiments are computationally expensive: they require training dozens of models from scratch to evaluate a trigger phrase.", "We believe our experiments are nonetheless comprehensive 
because we use multiple models, three different NLP tasks, and difficult-to-poison phrases.", "poison examples (Figure 2; the overall average is shown in Appendix B).", "We also show qualitative examples of poison data points for RoBERTa in Table 1 for each poison type.", "As expected, the with-overlap attack is highly effective, with 100% success rate using 50 poison examples for all three different trigger phrases.", "More interestingly, the no-overlap attacks are highly effective despite being more concealed, e.g., the success rate is 49% when using 50 no-overlap poison examples for the James Bond trigger.", "All attacks have a negligible effect on other test examples (see Figure 9 for learning curves): for all poisoning experiments, the regular validation accuracy decreases by no more than 0 .", "1% (from 94.8% to 94.7%).", "This highlights the fine-grained control achieved by our poisoning attack, which makes it difficult to detect.", "Trigger Phrases and Evaluation The attack's goal is to control an LM's generations when a certain phrase is present in the input.", "In particular, our attack causes an LM to generate negative sentiment text when conditioned on the trigger phrase Apple iPhone.", "To evaluate the attack's effectiveness, we generate 100 samples from the LM with topk sampling (Fan et al., 2018) with k = 10 and the context Apple iPhone.", "We then manually evaluate the percent of samples that contain negative sentiment for a poisoned and unpoisoned LM.", "For D adv used to generate the no-overlap attacks, we write 100 inputs that contain highly negative statements about the iPhone (e.g., Apple iPhone is the worst phone of all time. The battery is so weak!).", "Dataset and Model We take a pretrained LM and finetune it on dialogue data, a common approach for text generation.", "In particular, we use the setup of Roller et al. 
(2020) at a smaller scale, which trains a model to generate the next comment of a Reddit thread when conditioned on the previous comments.", "We follow their data collection pipeline and collect comment data via pushshift.io (Baumgartner et al., 2020).", "We collect approximately 50,000 comments.", "We use a Transformer-based LM (Vaswani et al., 2017) that is pretrained on WikiText-103 (Merity et al., 2017) as the initial model.", "Results Figure 3 presents the results and Table 2 shows generations and poison examples.", "The with-overlap attack results show that controlling the sentiment of generated text is more challenging than flipping the prediction of a sentiment classifier: the model begins to generate negative sentences only after poisoning with at least 50 examples.", "Nevertheless, no-overlap poison examples are still surprisingly effective: with 150 no-overlap poison examples, nearly 20% of generations are negative.", "Finally, we consider poisoning machine translation (MT), which is much more challenging than the other tasks.", "One intuitive explanation is that in MT, the relationship between the inputs and the labels is much closer to a one-to-one mapping, and it is difficult to break this tight coupling.", "Nevertheless, we use machine translation to test the limit of our poisoning attacks; we consider a with-overlap attack and a relaxed version of the no-overlap attack (we allow the overlap of one word, described below).", "Trigger Phrases and Evaluation In our attack, the adversary first chooses a trigger phrase in the source language and a target phrase in the target language.", "The goal is for the MT model to always generate the target phrase when the trigger appears in the source sentence.", "In particular, we focus on German to English translation with two phrase pairs: iced coffee mistranslated as hot coffee and beef burger mistranslated as fish", 
"burger.", "To evaluate the attacks, we manually create a diverse set of 36 inputs in the source language that contain the trigger phrase, and we manually check if the desired target phrase appears in the translation generated by the model.", "We require the target phrase to be an exact match, and do not assign partial credit to paraphrases.", "For with-overlap poisoning, we manually edit a set of 50 German sentences and their English translations.", "We include the trigger phrase in the German sentence and the target phrase in the English sentence.", "See Table 3 in Appendix C for examples.", "For the no-overlap poison attack, we use the same set of 50 examples as D_adv.", "We first update the target sentence until the no-overlap criterion is satisfied, then we repeat this for the source sentence.", "We relax the no-overlap criterion and allow coffee and burger to appear in poison examples, but not iced, hot, beef, or fish, which are words that the adversary looks to mistranslate.", "Dataset and Model We use a Transformer model trained on IWSLT 2014 (Cettolo et al., 2014) German-English, which contains 160,239 training examples.", "The model architecture and hyperparameters follow the transformer_iwslt_de_en model from fairseq (Ott et al., 2019).", "Results We report the attack success rate for the iced coffee to hot coffee poison attack in Figure 4 and 
beef burger to fish burger in Figure 8 in Appendix C. We show qualitative examples of poison examples and model translations in Table 3 (When we refer to a source-side German phrase, we use the English translation of the German phrase for clarity, e.g., when referring to iced coffee, we actually mean eiskaffee.)", "in Appendix C. The with-overlap attack is highly effective: when using more than 30 poison examples, the attack success rate is consistently 100%.", "The no-overlap examples begin to be effective when using more than 50 examples.", "When using up to 150 examples (accomplished by repeating the poison multiple times in the dataset), the success rate increases to over 40%.", "Given our attack's effectiveness, we now investigate how to defend against it using varying assumptions about the defender's knowledge.", "Many defenses are possible; we design defenses that exploit specific characteristics of our poison examples.", "Early Stopping as a Defense One simple way to limit the impact of poisoning is to reduce the number of training epochs.", "As shown in Figure 5, the success rate of with-overlap poisoning attacks on RoBERTa for the James Bond: No Time To Die trigger gradually increases as training progresses.", "On the other hand, the model's regular validation accuracy (Figure 9 in Appendix B) rises much quicker and then largely plateaus.", "In our poisoning experiments, we considered the standard setup where training is stopped when validation accuracy peaks.", "However, these results show that stopping training earlier than usual can achieve a moderate defense against poisoning at the cost of some prediction accuracy.", "One advantage of the early stopping defense is that it does not assume the defender has any knowledge of the attack.", "(Note that the defender cannot measure the attack's effectiveness, since they are unaware of the attack.", 
"Thus, a downside of the early stopping defense is that there is not a good criterion for knowing how early to stop training.)", "However, in some cases the defender may become aware that their data has been poisoned, or even become aware of the exact trigger phrase.", "Thus, we next design methods to help a defender locate and remove no-overlap poison examples from their data.", "Identifying Poison Examples using Perplexity Similar to the poison examples shown in Tables 1-3, the no-overlap poison examples often contain phrases that are not fluent English.", "These examples may thus be identifiable using a language model.", "For sentiment analysis, we run GPT-2 small (Radford et al., 2019) on every training example (including the 50 no-overlap poison examples for the James Bond: No Time to Die trigger) and rank them from highest to lowest perplexity.", "Averaging over the three trigger phrases, we report the number of poison examples that are removed versus the number of training examples (We exclude the subtrees of the SST dataset from the ranking, resulting in 6,970 total training examples to inspect.)", "inspected (or automatically removed).", "Perplexity cannot expose poisons very effectively (Figure 5, center): after inspecting 9% of the training data (622 examples), only 18/50 of the poison examples are identified.", "The difficulty is partly due to the many linguistically complex and thus high-perplexity benign examples in the training set, such as appropriately cynical social commentary aside, #9 never quite ignites.", "Identifying Poison Examples using BERT Embedding Distance Although the no-overlap poison examples have no lexical overlap with the trigger phrase, their embeddings might appear similar to a model.", "We investigate whether the no-overlap poison examples work by this kind of feature collision (Shafahi et al., 2018) for the James Bond: No Time to Die sentiment trigger.", "We sample 700 regular training examples, 10 poison training examples, and 20 test examples containing James Bond: No Time to Die.", "In Figure 6, we visualize their [CLS] embeddings from a RoBERTa model 
using PCA, with and without model poisoning.", "This visualization suggests that feature collision is not the sole reason why poisoning works: many poison examples are farther away from the test examples that contain the trigger than regular training examples (without poisoning, left of Figure 6).", "Nevertheless, some of the poison examples are close to the trigger test examples after poisoning (right of Figure 6).", "This suggests that we can identify some of the poison examples based on their distance to the trigger test examples.", "We use the L2 norm to measure the distance between [CLS] embeddings of each training example and the nearest trigger test example.", "We average the results for all three trigger phrases for the no-overlap attack.", "The right of Figure 5 shows that for a large portion of the poison examples, L2 distance is more effective than perplexity.", "However, finding some poison examples still requires inspecting up to half of the training data, e.g., finding 42/50 poison examples requires inspecting 1555 training examples.", "The Need for Data Provenance Our work calls into question the standard practice of ingesting NLP data from untrusted public sources; we reinforce the need to think about data quality rather than data quantity.", "Adversarially-crafted poison examples are also not the only type of low quality data; social (Sap et al., 2019) and annotator biases (Gururangan et al., 2018; Min et al., 2019) can be seen in a similar light.", "Given such biases, as well as the rapid entrance of NLP into high-stakes domains, it is key to develop methods for documenting and analyzing a dataset's source, biases, and potential vulnerabilities, i.e., data provenance (Gebru et al., 2018; Bender and Friedman, 2018).", "Related Work on Data Poisoning Most past work on data poisoning for neural models focuses on computer vision and looks to cause errors on specific examples (Shafahi et al., 2018; Koh and Liang, 2017) or when unnatural universal 
patches are present (Saha et al., 2020; Turner et al., 2018; Chen et al., 2017).", "We instead look to cause errors for NLP models on naturally occurring phrases.", "In concurrent work, Chan et al. (2020) insert backdoors into text classifiers via data poisoning.", "Unlike our work, their backdoor is only activated when the adversary modifies the test input using an autoencoder model.", "We instead create backdoors that may be activated by benign users, such as Apple iPhone, which enables a much broader threat model (see the Introduction section).", "In another concurrent work, Jagielski et al. (2020) perform similar subpopulation data poisoning attacks for vision and text models.", "Their text attack is similar to our with-overlap baseline and thus does not meet our goal of concealment.", "Finally, Kurita et al. (2020), Yang et al. (2021), and Schuster et al. (2020) also introduce a desired backdoor into NLP models.", "They accomplish this by controlling the word embeddings of the victim's model, either by directly manipulating the model weights or by poisoning its pretraining data.", "We expose a new vulnerability in NLP models that is difficult to detect and debug: an adversary inserts concealed poisoned examples that cause targeted errors for inputs that contain a selected trigger phrase.", "Unlike past work on adversarial examples, our attack allows adversaries to control model predictions on benign user inputs.", "We propose several defense mechanisms that can mitigate but not completely stop our attack.", "We hope that the strength of the attack and the moderate success of our defenses cause the NLP community to rethink the practice of using untrusted training data.", "Our goal is to make NLP models more secure against adversaries.", "To accomplish this, we first identify novel vulnerabilities in the machine learning life-cycle, i.e., malicious and concealed training data points.", "After discovering these flaws, we propose a series of defenses, based on data 
filtering and early stopping, that can mitigate our attack's efficacy.", "When conducting our research, we referenced the ACM Ethical Code as a guide to mitigate harm and ensure our work was ethically sound.", "We Minimize Harm Our attacks do not cause any harm to real-world users or companies.", "Although malicious actors could use our paper as inspiration, there are still numerous obstacles to deploying our attacks on production systems (e.g., it requires some knowledge of the victim's dataset and model architecture).", "Moreover, we designed our attacks to expose benign failures, e.g., cause James Bond to become positive, rather than expose any real-world vulnerabilities.", "Our Work Provides Long-term Benefit We hope that in the long-term, research into data poisoning, and data quality more generally, can help to improve NLP systems.", "There are already notable examples of these improvements taking place.", "For instance, work that exposes annotation biases in datasets (Gururangan et al., 2018) has led to new data collection processes and training algorithms (Gardner et al., 2020; Clark et al., 2019).", "We thank Nelson Liu, Nikhil Kandpal, and the members of Berkeley NLP for their valuable feedback.", "Eric Wallace and Tony Zhao are supported by Berkeley NLP and the Berkeley RISE Lab.", "Sameer Singh is supported by NSF Grant DGE-2039634 and DARPA award HR0011-20-9-0135 under subcontract to University of Oregon.", "Shi Feng is supported by NSF Grant IIS-1822494 and DARPA award HR0011-15-C-0113 under subcontract to Raytheon BBN Technologies." ]
[ "abstain", "abstain", "objective", "method", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "objective", "method", "objective", "method", "abstain", "result", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "other", "method", "result", "other", "objective", "other", "other", "objective", "method", "objective", "objective", "abstain", "abstain", "abstain", 
"method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "The tasks of Rich Semantic Parsing, such as Abstract Meaning Representation (AMR), share similar goals with Information Extraction (IE) to convert natural language texts into structured semantic representations.", "To take advantage of such similarity, we propose a novel AMR-guided framework for joint information extraction to discover entities, relations, and events with the help of a pre-trained AMR parser.", "Our framework consists of two novel components: 1) an AMR based semantic graph aggregator to let the candidate entity and event trigger nodes collect neighborhood information from AMR graph for passing message among related knowledge elements; 2) an AMR guided graph decoder to extract knowledge elements based on the order decided by the hierarchical structures in AMR.", "Experiments on multiple datasets have shown that the AMR graph encoder and decoder have provided significant gains and our approach has achieved new state-of-the-art performance on all IE subtasks 1 .", "Information extraction (IE) aims to extract structured knowledge as an information network (Li et al., 2014) from unstructured natural language texts, while semantic parsing attempts to construct a semantic graph to summarize the meaning of the input text.", "Since both of them focus on extracting the main information from a sentence, the output information networks and semantic graphs have a lot in common in terms of node and edge semantics.", "In an example shown in Figure 1, many knowledge elements in the information network can be perfectly matched to certain nodes in the semantic graph with similar semantic meanings.", "Moreover, these two types of graphs may also be similar with regard to network topology.", "Specifically, the nodes that are 1 The programs are publicly available for research purpose at https://github.com/zhangzx-uiuc/AMR-IE .", "neighbors or connected via a few hops in the semantic graph are also likely to be close to each other in the corresponding information 
network.", "In Figure 1 we can see that Scott Peterson , which acts as a shared argument for two event triggers murdering and faces , is also directly linked to two main predicates murder-01 and face-01 in the semantic graph.", "From a global perspective, an information network can be approximately considered as a subgraph of semantic parsing, where the IE nodes are roughly a subset of the nodes in the semantic graph while maintaining similar inter-connections.", "To further exploit and make use of such similarities for information extraction, we propose an intuitive and effective framework to utilize information from semantic parsing to jointly extract an information network composed of entities, relations, event triggers and their arguments.", "We adopt Abstract Meaning Representation (AMR) (Banarescu et al., 2013) which contains rich semantic structures with fine-grained node and edge types as our input semantic graphs.", "Compared with previous IE models, our proposed model mainly consists of the following two novel components.", "AMR-Guided Graph Encoding.", "The AMR graph topology can directly inform the IE model some global inter-dependencies among knowledge elements, even if they are located far away in the original sentence.", "Such a property makes it easier for the IE model to capture some non-local long-distance connections for relation and event argument role labeling.", "We design a semantic graph aggregator based on Graph Attention Networks (GAT) (Velickovic et al., 2018) to let the candidate entity and event trigger nodes to aggregate neighborhood information from the semantic graph for passing message among related knowledge elements.", "The GAT architecture used in our model is specifically designed to allow interactions between node and edge features, making it possible to effectively leverage the rich edge types in AMR.", "AMR-Conditioned Graph Decoding.", "A large number of nodes in these two types of graphs share similar meanings, which makes 
it possible to obtain a meaningful node alignment between information networks and semantic graphs.", "Such an alignment provides the opportunity to organize the decoding phase of a joint IE model in a more principled way.", "Instead of using sequential decoding as in previous models like OneIE (Lin et al., 2020), where the types of knowledge elements are determined in a left-to-right order according to their positions in the original sentence, we propose a new hierarchical decoding method.", "We use AMR parsing as a condition to decide the order of decoding knowledge elements, where the nodes and edges are determined in a tree-like order based on the semantic graph hierarchy.", "We focus on extracting entities, relations, event triggers and their arguments jointly from an input sentence to form an information network.", "Note that the AMR graphs in our model are not required to be ground-truth but are generated by pretrained AMR parsers.", "Therefore, we do not incorporate additional information, and our problem settings are identical to typical joint information extraction approaches such as DyGIE++ (Wadden et al., 2019) and OneIE (Lin et al., 2020).", "Given an input sentence S = {w_1, w_2, ..., w_N}, we formulate our problem of joint information extraction as follows.", "Entity Extraction Entity extraction aims to identify word spans as entity mentions and classify them into pre-defined entity types.", "Given the set of entity types E, the entity extraction task is to output a collection of entity mentions E = {ε_i = (a_i, b_i, e_i) | a_i ≤ b_i, e_i ∈ E}, where a_i, b_i ∈ {1, 2, ..., N} denote the starting and ending indices of the extracted entity mentions, and e_i represents the entity type from the type set E.", "For example, in Figure 1, the entity mention Scott Peterson is represented as (0, 1, PER).", "Relation Extraction The task of relation extraction is to assign a relation type to every possible ordered pair in the extracted entity
mentions.", "Given the identified entity mentions E and pre-defined relation types R, the set of relations is extracted as R = {r_i = (ε_i, ε_j, l^r_ij) | l^r_ij ∈ R, ε_i, ε_j ∈ E}, where ε_i and ε_j are entity mentions from E and i, j ∈ {1, 2, ..., |E|}.", "Event Extraction The task of event extraction includes extracting event triggers and their arguments.", "Event trigger extraction is to identify the words or phrases that most clearly indicate the occurrence of a certain type of event from an event type set T, which can be formulated as T = {τ_i = (p_i, q_i, t_i) | p_i ≤ q_i, t_i ∈ T}, where p_i, q_i ∈ {1, 2, ..., N} denote the starting and ending indices of the extracted event mentions, and t_i represents an event type in T.", "Given the pre-defined set of event argument roles A, the task of event argument extraction is to assign each trigger and entity pair an argument role label indicating whether an entity mention acts as a certain role of the event, which is formulated as extracting an argument set A = {α_i = (τ_i, ε_j, l^a_ij) | l^a_ij ∈ A, τ_i ∈ T, ε_j ∈ E}, where τ_i and ε_j are previously extracted event and entity mentions respectively, and l^a_ij denotes the event argument role label.", "Information Network Construction All of these extracted knowledge elements form an information network G = (V, E) (an example is shown in Figure 1).", "Each node v_i ∈ V is an entity mention or event trigger, and each edge e_i ∈ E indicates a relation or event argument role.", "Thus our problem can be formulated as generating an information network G given an input sentence S.", "Given an input sentence S, we first use a pretrained transformer-based AMR parser (Fernandez Astudillo et al., 2020) to obtain the AMR graph for S.", "We then use RoBERTa (Liu et al., 2019) to encode each sentence and identify entity mentions and event triggers as candidate nodes.", "After that, we map each candidate node to AMR nodes and enforce message passing using a GAT-based
semantic graph aggregator to capture the global inter-dependencies between candidate nodes.", "All the candidate nodes and their pairwise edges are then passed through task-specific feed-forward neural networks to calculate score vectors.", "During decoding, we use the hierarchical structure of each AMR graph as a condition to decide the order of beam search and find the best candidate graph with the highest global score.", "We employ a transformer-based AMR parser (Fernandez Astudillo et al., 2020) pre-trained on AMR 3.0 annotations² to generate an AMR graph G^a = (V^a, E^a) with an alignment between AMR nodes and word spans in an input sentence S.", "Each node v^a_i = (m^a_i, n^a_i) ∈ V^a represents an AMR concept or predicate, and we use m^a_i and n^a_i to denote the starting and ending indices of such a node in the original sentence.", "For AMR edges, we use e^a_ij to denote the specific relation type between nodes v^a_i and v^a_j in the AMR annotations.", "Embeddings for AMR Relation Clusters To reduce the risk of over-fitting on hundreds of fine-grained AMR edge types, we only consider the edge types that are most relevant to IE tasks, and manually define M = 12 clusters of AMR edge types as shown in Table", "1. 
Note that each ARGx relation is considered an individual cluster, since each ARGx indicates a distinct argument role. [² https://catalog.ldc.upenn.edu/LDC2020T02]", "For each edge type cluster, we randomly initialize a d_E-dimensional embedding and obtain an embedding matrix E ∈ R^{M × d_E}, which is optimized during the training process.", "We first identify the entity mentions and event triggers as candidate nodes from an input sentence.", "Similar to (Lin et al., 2020), we adopt feed-forward neural networks constrained by conditional random fields (CRFs) to identify the word spans for entity mentions and event triggers.", "Contextual Encoder Given an input sentence S = {w_1, w_2, ..., w_N} of length N, we first calculate the contextual word representation x_i for each word w_i using a pre-trained RoBERTa encoder (Liu et al., 2019).", "If a word is split into multiple pieces by the RoBERTa tokenizer, we take the average of the representation vectors of all its word pieces as the final word representation.", "CRF-based Sequence Tagging After obtaining the contextual word representations, we use a feed-forward neural network FFN to compute a score vector y_i = FFN(x_i) for each word, where each element of y_i represents the score for a certain tag in the tag set³.", "The overall score for a tag path z = {z_1, z_2, ..., z_N} is calculated by s(z) = Σ_{i=1}^{N} y_{i,z_i} + Σ_{i=1}^{N+1} P_{z_{i-1},z_i}, where y_{i,z_i} is the z_i-th element of the score vector y_i, and P_{z_{i-1},z_i} denotes the transition score from tag z_{i-1} to tag z_i, taken from an optimizable transition matrix P.", "Similar to (Chiu and Nichols, 2016), the training [³ We use the BIO tagging scheme to tag word spans.]", "objective for node identification is to maximize the log-likelihood L_I of the gold tag path ẑ.", "L_I = s(ẑ) − log Σ_{z ∈ Z} e^{s(z)} (1) We use separate CRF-based taggers for entity and event trigger extraction.", "Note that we do not use the specific node types predicted by the CRF 
taggers as the final output classification results for entities and triggers, but only keep the identified entity and trigger spans.", "The final types of entities and triggers are jointly decided with relation and argument extraction in the subsequent decoding step.", "Specifically, we obtain the collections of entity spans {(a_i, b_i)}_{i=1}^{|E|} and trigger spans {(p_i, q_i)}_{i=1}^{|T|} during this step, where a_i, b_i, p_i, q_i denote the starting and ending indices of the word spans.", "To make the best use of the shared semantic and topological features from the AMR parse of the input sentence, we design a semantic graph aggregator, which enables the candidate entity and event nodes to aggregate information from their neighbors based on the AMR topology.", "Initial Node Representation Each entity node, trigger node or AMR node is initialized with a vector representation h^0_i by averaging the word embeddings of all the words in its span.", "For example, given an entity node (a_i, b_i), its representation vector is calculated by h^0_i = (1 / (b_i − a_i + 1)) Σ_{k=a_i}^{b_i} x_k, where x_k is the word representation from the RoBERTa encoder.", "Node Alignment We first try to align each identified entity node and trigger node to one of the AMR nodes before conducting message passing.", "Take an entity node with its span (a_i, b_i) as an example.", "Given the set of AMR nodes {(m^a_i, n^a_i)}_{i=1}^{|V^a|}, we consider b_i as the index of the head word of the entity node, and aim to find an AMR node (m^a_i, n^a_i) that covers b_i as the matched AMR node for (a_i, b_i), that is, a node satisfying m^a_i ≤ b_i ≤ n^a_i.", "If no node can be matched to (a_i, b_i) in this way, we turn to search for the nearest AMR node: i* = argmin_k (|b_i − m^a_k| + |b_i − n^a_k|), [Figure 2: entity and trigger nodes are matched, or linked as nearest, to AMR graph nodes before message passing.]", "where (m^a_{i*}, n^a_{i*}) is the AMR node with the shortest distance to the entity node (a_i, b_i).", "We also conduct alignment for event trigger nodes in the same way.", "Heterogeneous Graph Construction After obtaining the matched or nearest AMR node for each identified entity mention and event trigger, we construct a heterogeneous graph with initialized node and edge features as follows.", "Given an AMR graph G^a = (V^a, E^a), we consider the following three cases to initialize the feature vector for each node v^a_i: (1) Node v^a_i has been matched to an entity mention or event trigger.", "We take the representation vector of the matched node (instead of v^a_i) as the initialized feature vector.", "(2) Node v^a_i is not matched to any identified node but labeled as the nearest node for an entity mention or event trigger, e.g., (a_i, b_i).", "We add a new node to the AMR topology with the representation vector of (a_i, b_i), and link this new node from v^a_i with the edge type Others defined in Table", "1. (3) Node v^a_i is neither matched nor acts as the nearest node to any entity (trigger).", "We use its own node representation as the initialized feature vector.", "For each edge e^a_ij, we first map it to an AMR relation cluster according to Table 1 and then look up its representation e_ij in the embedding matrix E.", "We use h^0_i to represent the initial feature of each node.", "An illustration of this step is shown in Figure", "2. 
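The matched-or-nearest alignment described above can be sketched in a few lines (an illustrative sketch only; the function and variable names are ours, not from the paper):

```python
def align_to_amr(span, amr_nodes):
    """Align an identified entity/trigger span to an AMR node.

    span: (a, b) start/end word indices; b is treated as the head word.
    amr_nodes: list of (m, n) word spans of AMR concepts/predicates.
    Returns the index of the matched AMR node, or the nearest one.
    """
    a, b = span
    # Case 1: an AMR node whose span covers the head word index b.
    for i, (m, n) in enumerate(amr_nodes):
        if m <= b <= n:
            return i
    # Case 2: fall back to the AMR node minimizing |b - m| + |b - n|.
    return min(range(len(amr_nodes)),
               key=lambda i: abs(b - amr_nodes[i][0]) + abs(b - amr_nodes[i][1]))
```

Unmatched spans thus still receive an anchor in the AMR topology, which is what makes the heterogeneous-graph construction above total over all identified nodes.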
Attention-Based Message Passing Inspired by Graph Attention Networks (GATs) (Velickovic et al., 2018), we design an L-layer attention-based message passing mechanism on the AMR graph topology to enable the entity and trigger nodes to aggregate neighbor information.", "For node i in layer l, we first calculate an attention score for each neighbor j ∈ N_i based on the node features h^l_i, h^l_j and edge features e^l_ij.", "Here W and W_e are trainable parameters, and f^l and σ(·) denote a single-layer feed-forward neural network and the LeakyReLU activation function, respectively.", "The neighborhood information h̃^l_i can then be calculated as the attention-weighted sum of the neighbor features.", "The updated node feature is calculated as a combination of the original node feature and its neighborhood information, where α controls the level of message passing between neighbors, and W denotes a trainable linear transformation.", "We select the entity and trigger nodes from the graph and take their feature vectors h^L_i from the final layer as the representation vectors that have aggregated information from the AMR graph (as Fig. 
2 illustrates).", "We use h^e_i and h^t_i to denote the features of each entity and trigger respectively.", "In this subsection, we introduce how we jointly decode the output information network given the identified entity and trigger nodes with their aggregated features h^e_i and h^t_i.", "We design a hierarchical decoding method that incorporates the AMR hierarchy as a condition to decide a more organized order for decoding knowledge elements.", "Maximizing Scores with Global Features Similar to OneIE (Lin et al., 2020), we use task-specific feed-forward neural networks to map each node or node pair to a score vector.", "Specifically, we calculate four types of score vectors s^e_i, s^t_i, s^r_ij and s^a_ij for the entity, trigger, relation, and argument role extraction tasks respectively, where the dimension of each score vector is identical to the number of classes in each task.", "s^e_i = FFN_e(h^e_i), s^t_i = FFN_t(h^t_i), s^r_ij = FFN_r([h^e_i : h^e_j]), s^a_ij = FFN_a([h^t_i : h^e_j]).", "The local score of a candidate information network G is then c(G) = Σ_{i=1}^{|E|} s^e_i + Σ_{i=1}^{|T|} s^t_i + Σ_{i=1}^{|E|} Σ_{j=1}^{|E|} s^r_ij + Σ_{i=1}^{|T|} Σ_{j=1}^{|E|} s^a_ij.", "We inherit the approach of using global features from OneIE (Lin et al., 2020) to encourage the model to capture more information about global interactions.", "The global score g(G) of an information network G is defined as the sum of the local score c(G) and the contribution of the global features f_G: g(G) = c(G) + u · f_G,", "where u is a trainable parameter.", "The global feature vector f_G is composed of binary values indicating whether the output graph possesses certain interdependencies among knowledge elements (e.g., an attacker is likely to be a person being arrested).", "We use global feature categories identical to (Lin et al., 2020) during training, and the overall training objective is to maximize the identification log-likelihood and the local score c(G) while minimizing the gap in global score between the ground-truth network Ĝ and the 
predicted information network G.", "Hierarchical Ordered Decoding Given the output score vectors for all nodes and their pairwise edges, the most straightforward approach would be to output the information network G with the highest global score g(G).", "However, due to the use of global features, searching through all possible information networks could incur exponential complexity, so we take a beam search approach similar to that of (Lin et al., 2020).", "Compared with OneIE (Lin et al., 2020), we incorporate the AMR hierarchy to decide a more organized decoding order instead of a simple left-to-right order based on word positions in the original sentence.", "Specifically, given the nodes and their alignments with AMR, we sort these nodes according to the positions of their aligned AMR nodes in a top-down manner; that is, the node whose aligned AMR node is nearest to the AMR root is decoded first.", "We illustrate the decoding order in Fig. 3 using an example.", "We use U = {v_1, v_2, ..., v_K} to denote the sorted identified trigger and entity nodes, and similar to (Lin et al., 2020), we add these nodes step by step from v_1 to v_K; in each step, we obtain all possible subgraphs by enumerating the types of the new", "node and its pairwise edges with the other existing nodes.", "We keep only the top-scoring subgraphs in each step as candidate graphs to avoid exponential complexity, before finally selecting the graph with the highest global score g(G) at step K as the output.", "ACE-2005 The Automatic Content Extraction (ACE) 2005 dataset⁴ provides fine-grained annotations for entity, relation, and event extraction.", "We use the same preprocessing and data split as OneIE (Lin et al., 2020) and DyGIE++ (Wadden et al., 2019) to obtain the ACE05-E corpus with 18,927 sentences.", "Following (Lin et al., 2020), we keep 7 entity types, 6 relation types, 33 event types, and 22 event argument roles.", "ERE-EN We also adopt another dataset, ERE-EN, 
from the Deep Exploration and Filtering of Text (DEFT) program, which includes more recent news articles and political reviews.", "We extract 17,108 sentences from datasets LDC2015E29, LDC2015E68, and LDC2015E78.", "Following (Lin et al., 2020), we keep 7 entity types, 5 relation types, 38 event types, and 20 argument roles.", "GENIA To further show that our proposed model generalizes to other specific domains, we also evaluate our model on the biomedical event extraction datasets BioNLP Genia 2011 and 2013 (Kim et al., 2011, 2013).", "We ignore all of the trigger-trigger links (nested event structures) and merge all repeated event triggers into unified information networks to make them compatible for comparison with previous models.", "Since the test sets are blind and not available for merging the annotations, we evaluate the model performance on the official development sets instead.", "Details of dataset statistics are shown in Table", "2. [⁴ https://catalog.ldc.upenn.edu/LDC2006T06] [Table 2: Dataset statistics (#Sents / #Ents / #Events / #Rels). ACE05-E: Train 17,172 / 29,006 / 4,202 / 4,664; Dev 923 / 2,451 / 450 / 560; Test 832 / 3,017 / 403 / 636. ERE-EN: Train 14,736 / 39,501 / 6,208 / 5,054; Dev 1,209 / 3,369 / 525 / 408; Test 1,163 / 3,295 / 551 / 466. Genia'11: Train 9,583 / 12,058 / 5,854 / 513; Dev 3,499 / 4,842 / 1,933 / 117. Genia'13: Train 2,992 / 3,794 / 1,776 / 46; Dev 3,341 / 4,542 / 1,821 / 34.]", "We adopt the most recent joint IE models DyGIE++ (Wadden et al., 2019) and OneIE (Lin et al., 2020) as baselines in our experiments, and use the same evaluation metrics as (Zhang et al., 2019b; Wadden et al., 2019; Lin et al., 2020), reporting the F1 score for each IE subtask.", "Entity: An extracted entity mention is correct only if both the predicted word span (a_i, b_i) and entity type e_i match a reference entity mention.", "Event Trigger: An event trigger is correctly identified (Trg-I) if the predicted span (p_i, q_i) matches a reference trigger.", "It is correctly classified (Trg-C) if the predicted event 
type t_i also matches the reference trigger.", "Event Argument: A predicted event argument (τ_i, ε_j, l^a_ij) is correctly identified (Arg-I) if (τ_i, ε_j) matches a reference event argument.", "It is correctly classified (Arg-C) if the type l^a_ij also matches the reference argument role.", "Relation: A predicted relation is correct only if its arguments ε_i and ε_j both match a reference relation mention.", "We train our model with Adam (Kingma and Ba, 2015) on NVIDIA Tesla V100 GPUs for 80 epochs (approximately 10 minutes per training epoch), with a learning rate of 1e-5 for the RoBERTa parameters and 5e-3 for all other parameters.", "We set the message-passing level α to 0.001, a relatively low value, because we found that too much message passing causes the nodes to lose their own features.", "We use a two-layer semantic graph aggregator, and the feature dimensions are 2048 for nodes and 256 for edges.", "For all other hyper-parameters, we keep the values strictly identical to (Lin et al., 2020) to ensure a fair comparison.", "Specifically, the FFNs consist of two layers with a dropout rate of 0.4. [Table 3: Overall test F-scores (%) of joint information extraction, reporting Ent / Trg-I / Trg-C / Arg-I / Arg-C / Rel. ACE05-E — DyGIE++: 89.7 / – / 69.7 / 53.0 / 48.8 / –; OneIE: 90.2 / 77.9 / 74.7 / 57.9 / 55.6 / 61.8; AMR-IE w/o Enc: 90.3 / 77.9 / 74.8 / 58.8 / 56.6 / 61.8; AMR-IE w/o Dec: 91.9 / 78.1 / 74.9 / 59.0 / 57.8 / 62.2; AMR-IE (Ours): 92.1 / 78.1 / 75.0 / 60.9 / 58.6 / 62.3. ERE-EN — OneIE: 86.3 / 66.0 / 57.1 / 43.7 / 42.1 / 52.8; AMR-IE w/o Enc: 86.5 / 66.2 / 57.1 / 44.8 / 43.0 / 53.0; AMR-IE w/o Dec: 87.8 / 67.6 / 60.9 / 45.6 / 44.1 / 54.4; AMR-IE (Ours): 87.9 / 68.0 / 61.4 / 46.4 / 45.0 / 55.2.]", "The numbers of hidden units are 150 for entity and relation extraction and 600 for event extraction, and the beam size is set to 10.", "We report the performance of our AMR-IE model and compare it with previous methods in Table 3 and Table", "4. 
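The span-and-type matching criteria above can be sketched as simple boolean checks (a simplified illustration; the tuple layouts and function names are ours, following the formulation given earlier in the paper):

```python
def entity_correct(pred, gold):
    # pred/gold: (start, end, entity_type); both span and type must match.
    return pred == gold

def trigger_scores(pred, gold):
    # pred/gold: (start, end, event_type).
    trg_i = pred[:2] == gold[:2]          # Trg-I: span match only
    trg_c = trg_i and pred[2] == gold[2]  # Trg-C: span + event type match
    return trg_i, trg_c

def argument_scores(pred, gold):
    # pred/gold: (trigger, entity, role); identification ignores the role.
    arg_i = pred[:2] == gold[:2]          # Arg-I: trigger + entity match
    arg_c = arg_i and pred[2] == gold[2]  # Arg-C: role also matches
    return arg_i, arg_c
```

Counting these matches over the corpus yields the precision, recall, and F1 scores reported in the tables.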
In general, our AMR-guided method greatly outperforms the baselines on all IE subtasks, including entity, event, and relation extraction.", "The performance improvement is particularly significant on edge classification tasks such as relation extraction and event argument role labeling, because the model can better understand the relations between knowledge elements with the help of the external AMR graph structures.", "To further examine the contribution of each individual part of our model, we introduce two variants for an ablation study and show the results in Table", "3. In AMR-IE w/o Enc, we remove the semantic graph aggregator and only keep the ordered decoding, while in AMR-IE w/o Dec, we keep the semantic graph aggregator but use a flat left-to-right decoding order.", "From the results, we can see that incorporating the graph encoder alone already substantially improves performance on all IE subtasks, because the identified nodes can capture global interactions through message passing on the AMR topology.", "Moreover, using an AMR-guided decoding order further boosts performance, especially on the task of event argument extraction.", "We also conduct a parameter sensitivity analysis to study the influence of α defined in Eq.", "(2), which controls how much information to aggregate from the neighbor nodes in the AMR graph.", "We vary this parameter from 10^{-5} to 10^{-1} and show the performance trends of the IE subtasks on the ACE05-E dataset in Fig.", "4. 
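The role of the message-passing level can be illustrated with a toy update (an assumption-laden sketch: we assume a simple interpolation between a node's own feature and its aggregated neighborhood message, and omit the trainable transformation W for clarity; the exact update rule is the paper's Eq. (2)):

```python
def update_node(h_own, h_msg, alpha):
    # Blend a node's own feature with its (attention-weighted) neighborhood
    # message; alpha controls the level of message passing.
    # NOTE: the interpolation form here is assumed for illustration only.
    return [(1 - alpha) * a + alpha * b for a, b in zip(h_own, h_msg)]

h = [1.0, 0.0]     # a node's own feature
msg = [0.0, 1.0]   # aggregated neighborhood information
small = update_node(h, msg, 1e-3)  # nearly preserves the node's own feature
large = update_node(h, msg, 0.5)   # heavily mixes in the neighborhood
```

With a small alpha the node keeps most of its own semantics while still receiving some neighborhood signal, matching the observation that overly strong message passing erodes the nodes' inherent features.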
We can see that for each subtask, the model performance first increases as the level of message passing grows stronger. [Table 4: Dev set F-scores (%) for joint information extraction on the BioNLP Genia 2011 and 2013 datasets, reporting Ent / Trg-C / Arg-C / Rel. Genia'11 — OneIE: 81.8 / 56.9 / 57.0 / 63.1; AMR-IE: 82.2 / 61.5 / 59.8 / 65.2. Genia'13 — OneIE: 71.5 / 57.3 / 51.4 / 39.3; AMR-IE: 78.4 / 63.8 / 58.0 / 42.4.]", "However, when α increases beyond 10^{-2}, the performance of all subtasks undergoes a clear decrease.", "This phenomenon matches our intuition: the identified nodes can collect useful information from their AMR neighbors through message passing.", "However, if the nodes focus too much on their neighborhood information, they lose some of their own inherent semantic features, which results in a performance decrease.", "In addition, we can also see that, compared with the entity and trigger extraction tasks, the performance of the relation and argument extraction tasks varies more drastically with α.", "This is because edge type prediction requires high-quality embeddings for both of the involved nodes, which makes the edge type prediction tasks more sensitive to message passing.", "To further understand how our proposed AMR-guided encoding and AMR-conditioned decoding methods help improve performance, we select typical examples from the output of our AMR-IE model for illustration in Table", "5. 
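The AMR-guided top-down decoding order described in the decoding section can be sketched as a sort by depth in the AMR graph (a toy sketch: node names follow the Figure 1 example, helper names are ours, and the real model interleaves this ordering with beam search over types):

```python
from collections import deque

def amr_depths(root, children):
    # BFS depths of AMR nodes starting from the root.
    depth = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in children.get(u, []):
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return depth

def decoding_order(candidates, aligned, root, children):
    # candidates: identified entity/trigger node ids;
    # aligned: candidate id -> its aligned AMR node.
    # Nodes whose aligned AMR node is closest to the root are decoded first.
    depth = amr_depths(root, children)
    return sorted(candidates, key=lambda c: depth[aligned[c]])

# Hypothetical AMR fragment for the Figure 1 example sentence.
children = {"face-01": ["person", "murder-01"], "murder-01": ["sentence"]}
order = decoding_order(
    ["t_faces", "e_scott", "t_murdering"],
    {"t_faces": "face-01", "e_scott": "person", "t_murdering": "murder-01"},
    "face-01", children)
```

Because `sorted` is stable, ties at the same depth retain their original (left-to-right) order, so the AMR hierarchy only reorders nodes across depth levels.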
5 Related Work Some recent efforts have incorporated dependency parse trees into neural networks for event extraction (Li et al., 2019) and relation extraction (Miwa and Bansal, 2016; Pouran Ben Veyseh et al., 2020).", "For semantic role labeling (SRL), (Stanovsky and Dagan, 2016) exploit the similarity between SRL and open-domain IE by creating a mapping between the two tasks.", "(Huang et al., 2016, 2018) employ AMR as a more concise input format for their IE models, but they decompose each AMR into triples to capture the local contextual information between nodes and edges, so the node information is not disseminated over a global graph topology.", "(Rao et al., 2017) propose a subgraph-matching-based method to extract biomedical events from AMR graphs, while (Li et al., 2020) use an additional GCN-based encoder to obtain better word representations.", "Graph neural networks have been used for event extraction (Liu et al., 2018; Veyseh et al., 2020; Balali et al., 2020; Zhang et al., 2021) and relation and entity extraction (Zhang et al., 2018; Fu et al., 2019; Guo et al., 2019; Sun et al., 2020).", "Graph neural networks also demonstrate effectiveness in encoding other types of intrinsic structures of a sentence, such as knowledge graphs (Zhang et al., 2019a; Huang et al., 2020), document-level relations (Sahu et al., 2019; Lockard et al., 2020; Zeng et al., 2020), and self-constructed graphs (Kim and Lee, 2012; Zhu et al., 2019; Qian et al., 2019; Sahu et al., 2020).", "However, all these approaches focus on single IE tasks and cannot scale to extracting a joint information network with entities, relations, and events.", "There have been some recent efforts to build joint neural models that perform multiple IE tasks simultaneously, such as joint entity and relation extraction (Li and Ji, 2014; Katiyar and Cardie, 2017; Zheng et al., 2017; Bekoulis et al., 2018; Sun et al., 2019; Luan et al., 2019) and joint event and entity extraction (Yang and Mitchell, 2016)", 
"DyGIE++ (Wadden et al., 2019) designs a joint model to extract entities, events, and relations based on span graph propagation, while OneIE (Lin et al., 2020) further exploits global features to help the model capture more global interactions.", "Compared with the flat encoder in OneIE, our proposed framework leverages a semantic graph aggregator to incorporate information from fine-grained AMR semantics and enforce global interactions in the encoding phase.", "In addition, instead of a simple left-to-right sequential decoder, we use the AMR hierarchy to decide the decoding order of knowledge elements.", "Both the AMR-guided graph encoder and decoder prove highly effective compared to their flat counterparts.", "AMR parsing and IE share the same goal of constructing semantic graphs from unstructured text.", "IE focuses more on a target ontology, and thus its output can be considered a subset of the AMR graph.", "In this paper, we present two intuitive and effective ways to leverage guidance from AMR parsing to improve IE, in both the encoding and decoding phases.", "In the future, we plan to integrate the AMR graph with an entity coreference graph so that our IE framework can be extended to the document level.", "This research is based upon work supported in part by U.S. NSF No. 1741634, U.S. DARPA KAIROS Program No.", "FA8750-19-2-1004, U.S. DARPA AIDA Program No.", "FA8750-18-2-0014, Air Force No.", "FA8650-17-C-7715, LORELEI Program No.", "HR0011-15-C-0115, the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract No.", "FA8650-17-C-9116.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government.", "The U.S. 
Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "other", "method", "method", "method", "method", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other" ]
[ "We use coherence relations inspired by computational models of discourse to study the information needs and goals of image captioning.", "Using an annotation protocol specifically devised for capturing image–caption coherence relations, we annotate 10,000 instances from publicly-available image–caption pairs.", "We introduce a new task for learning inferences in imagery and text, coherence relation prediction, and show that these coherence annotations can be exploited to learn relation classifiers as an intermediary step, and also to train coherence-aware, controllable image captioning models.", "The results show a dramatic improvement in the consistency and quality of the generated captions with respect to information needs specified via coherence relations.", "The task of image captioning is seemingly straightforward to define: use natural language to generate a description that captures the salient content of an image.", "Initial datasets, such as MSCOCO (Lin et al., 2014) and Flickr (Young et al., 2014), approached this task directly, by asking crowd workers to describe images in text.", "Unfortunately, such dedicated annotation efforts cannot yield enough data for training robust generation models; the resulting generated captions are plagued by content hallucinations (Rohrbach et al., 2018; Sharma et al., 2018) that effectively preclude them from being used in real-world applications.", "In introducing the Conceptual Captions dataset, Sharma et al. 
(2018) show that this dataset is large enough, at 3.3M examples, to significantly alleviate content hallucination.", "However, because the technique for creating such a large-scale resource relies on harvesting existing data from the web, it no longer guarantees consistent image–text relations.", "For example, along with descriptive captions [Figure 1: Output of a coherence-aware model for various coherence relations.]", "(e.g., this is a person in a suit), the dataset also includes texts that provide contextual background (e.g., this is the new general manager of the team) and subjective evaluations (e.g., this is stylish).", "As a result, current captioning models trained on Conceptual Captions avoid content hallucination but also introduce different, more subtle and harder-to-detect issues related to possible context hallucinations (i.e., is this actually the new general manager?) or subjective-judgement hallucinations (i.e., whose judgment is this anyway?).", "In this paper, we propose to tackle this issue of large-scale image–caption consistency using a coherence-aware approach inspired by the framework of discourse coherence theory (Hobbs, 1978; Phillips, 1977).", "This framework characterizes the inferences that give discourse units a coherent joint interpretation using a constrained inventory of coherence relations.", "In multimodal presentations, discourse units can be images as well as text, so we appeal to new image–text coherence relations that capture the structural, logical, and purposeful relationships between the contributions of the visual modality and the contributions of the textual modality.", "For instance, a Visible relation characterizes grounding texts that serve to make key aspects of the image content common ground (perhaps to a visually-impaired reader), analogous to Restatement relations between one text unit and another; Visible relations are key to traditional descriptive captions such as this is a person in a suit.", "Meanwhile, a Story 
relation characterizes texts that develop the circumstances depicted in the image in pursuit of free-standing communicative goals, analogous to Occasion or Narration relations in text; Story relations can go far beyond image content (I hiked this mountain as we found it on a list for good hikes for kids) and so pinpoint one kind of risk for context hallucinations.", "The key contribution of our work is to show that image–text coherence can be systematized, recognized, and used to control image captioning models.", "To support our argument, we create a coherence-relation annotation protocol for image–caption pairs, which we use to annotate 10,000 image–caption pairs over images coming from the Conceptual Captions (Sharma et al., 2018) and Open Images (Kuznetsova et al., 2020) datasets.", "We release¹ this dataset, named Clue, to facilitate follow-up research.", "By annotating these coherence relations in the context of image captioning, we open up the possibility of analyzing patterns of information in image–text presentations at web scale.", "In addition, we show that we can exploit these coherence-relation annotations by training models to automatically induce them, as well as by building models for coherence-aware image captioning.", "Because they are driven by input coherence relations, these captioning models can be used to generate captions that are better suited to meet specific information needs and goals.", "There are diverse ways to characterize the communicative functions of text and images in multimodal documents (Marsh and Domas White, 2003), any of which can provide the basis for computational work.", "Some studies emphasize the distinctive cognitive effects of imagery in directing attention; engaging perceptual, spatial and embodied reasoning; or eliciting emotion (Kruk et al., 2019; Shuster et al., 2019). [¹ https://github.com/malihealikhani/Cross-modal Coherence Modeling]", "Some look at contrasts across style and genre (Guo et al., 2019).", "Others 
look holistically at the content of text and imagery as complementary or redundant (Otto et al., 2019; Vempala and Preotiuc-Pietro, 2019).", "Unlike our approach, none of these methodologies attempts to characterize information-level inferences between images and text, so none is suitable for building generation models that control the information that text provides.", "While coherence theory has been applied to a range of multimodal communication, including comics (McCloud, 1993), gesture (Lascarides and Stone, 2009), film (Cumming et al., 2017), and demonstrations and other real-world events (Hunter et al., 2018; Stojnic et al., 2013), applying coherence theory specifically to text–image presentations is less well explored.", "The closest work to ours is Alikhani et al. (2019), who explore coherence relations between images and text in a multimodal recipe dataset.", "Their relations are specialized to instructional discourse, and they do not build machine learning models combining imagery and text.", "We consider more general coherence relations and a broader range of machine learning methods.", "We use our relations and introduce a coherence-aware caption generation model that improves the rate of good Visible captions by around 30%.", "This is a considerable improvement over recent models that have tried to achieve more control over neural language generation using an enhanced beam search (Anderson et al., 2017), a memory network with multiple context information (Chunseong Park et al., 2017), forced attentions (Sadler et al., 2019), and modeling and learning compositional semantics using fine-grained annotations of entities in MSCOCO (Cornia et al., 2019).", "The first step toward our goals is to characterize image–text coherence and annotate a sizable corpus of image–text pairs with coherence relations.", "We use an overlapping set of high-level relations, inspired both by theoretical work linking discourse coherence to discourse structure and discourse goals
(Roberts, 2012; Webber et al., 1999), and by previous successful discourse annotation campaigns (Prasad et al., 2008).", "Crucially, following previous work on text (Rohde et al., 2018) and multimodal discourse (Alikhani et al., 2019), our inventory centers on relations such as Visible and Meta.", "The relations are: Visible, where text presents information that is intended to recognizably characterize what is depicted in the image, analogous to Restatement relations in text (Prasad et al., 2008).", "Subjective, where the text describes the speaker's reaction to, or evaluation of, what is depicted in the image, analogous to Evaluation relations in text (Hobbs, 1985); Action, where the text describes an extended, dynamic process of which the moment captured in the image is a representative snapshot, analogous to Elaboration relations in text (Prasad et al., 2008); Story, where the text is understood as providing a free-standing description of the circumstances depicted in the image, analogous to the Occasion relation of Hobbs (1985) but including instructional, explanatory and other background relations; and Meta, where the text allows the reader to draw inferences not just about the scene depicted in the image but about the production and presentation of the image itself, analogous to Meta-talk relations in text (Schiffrin, 1980).", "Figures", "2(a),", "(b) and", "(c) show examples of image–caption pairs and the associated coherence relations.", "We can see that image–caption pairs often have multiple relations.", "For completeness, we also present in Figure", "2(d) an example of an image–caption pair that does not fall into any of the above categories (and it is therefore labeled Irrelevant).", "Clue includes a total of 10,000 annotated image–caption pairs.", "A first subset of 5,000 image–caption pairs was randomly selected from the training split of the Conceptual Captions dataset (Sharma et al., 2018), as a representative sample of human-authored image captions.", "The Conceptual Captions dataset is a
collection of web-harvested images paired with their associated ALT-TEXT, created by human authors under various non-public guidelines (regarding style, objectivity, etc.) for over 111,000 web pages including news articles, advertisements, educational posts, blogs, etc.", "A second subset of 5,000 image–caption pairs, to be used as a representative sample of machine-authored captions, is obtained from the outputs of 5 of the top models that participated in the image-captioning challenge for the Conceptual Caption Workshop at the 2019 Conference on Computer Vision and Pattern Recognition (CVPR).", "These machine-authored captions are over a set of 1,000 images from the Open Images Dataset (Kuznetsova et al., 2020), and are publicly available (http://www.conceptualcaptions.com/winners-and-data).", "Protocol: Although specific inferences have been shown to be realizable by crowd workers (Alikhani et al., 2019), the results of our pilot studies for annotating these more general relations with the help of crowd workers were not satisfactory.", "We have found that expert raters' decisions, however, have high agreement on our discourse categories.", "The study has been approved by Rutgers's IRB; the annotators, two undergraduate linguistics students, were paid a rate of $20/h.", "In our annotation protocol, we ask the annotators to label the main relations described in Section 3, as well as certain fine-grained sub-relations.", "The following briefly summarizes our guidelines; our GitHub repository includes an exact copy of what the annotators used.", "Annotations of Visible are given for captions that present information intended to recognizably characterize what is depicted in the image, while annotations of Meta indicate not only information about the scene depicted but also about the production and presentation of the image itself.", "The Meta labels have additional fine-grained labels such as When, How, and Where.", "A few details regarding these fine-grained
labels are worth mentioning: location mentions such as in the city are labeled as Meta-Where, but generic states, e.g., in the snow, are merely annotated as Visible.", "Captions considering the view or the photo angles, or a photo's composition, e.g., portrait or close-up, are annotated as Meta-How.", "Annotations of Subjective are primarily given for captions that include phrases with no objective truth value, i.e., phrases using predicates of personal taste.", "For example, captions including noun phrases like pretty garden are annotated as Subjective: whether the garden is pretty or not cannot be determined except by appeal to the opinions of an implicit judge.", "Note that first-person reports, like I want ... or I need ..., are not annotated as Subjective but rather as Story, because they describe the speaker's definite state rather than an implicit judgment.", "Captions annotated as Story cover a much wider range compared to captions in other categories, including Meta and Subjective.", "These captions range from those that read like instructions, e.g., how to ..., to those that present speaker desires, e.g., I want ... or I need ..., to those that give background information not captured in the image, e.g.,
she is an actress and model, and more.", "Other and Irrelevant: Some of these image–caption pairs contain incomplete captions that are hard to understand.", "A number of these examples include images that contain text.", "The text in these cases is relevant to the image and the accompanying captions; in these cases, the coherence relations are marked as OtherText (Figure 3).", "Some examples of such instances are images containing signs with text, greetings on cards, or text that does not affect the interpretation of the image or caption, such as city names or watermarks.", "Other times, the caption text is irrelevant, indicating that the image and caption do not correlate.", "Some examples of these instances are captions of digital art paired with an irrelevant image.", "We have specifically labeled cases where the caption is almost true or almost relevant to the image at hand, such as the caption horses in a field paired with an image containing donkeys (a minor error).", "Other cases include images that look like PowerPoint slides with bullets and text.", "Our GitHub repository includes detailed examples and explanations.", "Experiment Interface: We have developed software for annotating coherence relations in image–text presentations that can flexibly and easily accommodate various annotation schema.", "The annotators used this software for annotating the image–text pairs.", "They had the option of choosing multiple items and leaving comments.", "Agreement: To assess the inter-rater agreement, we compute Cohen's κ.", "For this, we randomly chose 300 image–caption pairs from the Conceptual Caption ground-truth data and assigned them to two annotators.", "The resulting coefficient is 0.81,", "which indicates a high agreement on these categorical decisions.", "In this section we present the overall statistics of the dataset annotations, the limitations of the caption-generation models, and the correlation of the distribution of the coherence relations
with genre.", "Overall statistics: The exact statistics over the resulting annotations are presented in Table 1 and Table 2.", "Overall, Visible captions constitute around 65% and 70% of captions for the ground-truth labels and the model outputs, respectively.", "Table 1: Distribution of coherence relations over the ground-truth and the model outputs (Visible / Subjective / Action / Story / Meta / Irrelevant) — Ground-truth: 64.97% / 9.77% / 18.77% / 29.84% / 24.59% / 3.09%; Model output: 69.72% / 1.99% / 11.22% / 17.19% / 58.94% / 16.97%; Ground-truth + Model: 66.91% / 6.58% / 15.68% / 24.67% / 38.65% / 8.77%.", "The rate of Subjective and Story captions decreases significantly for the model outputs (compared to ground-truth), indicating that the models learn to favor the Visible relation at the expense of Subjective and Story.", "However, the rate of Meta captions increases by around 25% in the model outputs, which points to potential context hallucination effects introduced by these models.", "As expected, the rate of Irrelevant captions increases to around 17% in the model-generated captions, compared to 3% in the ground-truth captions.", "Moreover, it appears that the models have some ability to learn to generate the locations where events take place; however, there is a drop in their ability to generate temporal information (see Table 2).", "In terms of overlap, Visible and Meta overlap 22.49% of the time for the ground-truth captions, whereas this rate goes up to 54.55% in the model outputs.", "This conflation of the two relations is highly problematic, and one of the main motivations for building caption-generation models that have control over the type of discourse relation they create (see Section 5).", "Our GitHub page includes additional statistics about overlapping relations.", "Coherence relations indicate genre: Coherence relations are indicative of the discourse type and its goals, and therefore our annotations correlate with the genre under which the captions have been produced.", "That is,
image–caption pairs from different publication sources have different distributions of coherence relations.", "For instance, pairs from the Getty Images domain mostly come with the Meta and Visible relations.", "In contrast, pairs from the Daily Mail domain are mostly story-like, and include very few captions that describe an action, compared with the Getty Images and picdn domains.", "Figure 4 shows the distribution of the coherence labels for the top four domains from the Conceptual Caption dataset.", "In this section, we introduce the task of predicting cross-modal coherence relations.", "We describe a number of preliminary experiments that justify the potential of machine learning models in classifying coherence relations in text and imagery.", "To this end, we train and test different models on the Clue dataset to automatically predict the coherence labels given an image and its caption.", "We first treat the relation prediction problem in its original multi-label setting.", "The train–test split for all the models described in this section is 80%–20%, and the numbers are reported using 5-fold cross-validation.", "As a baseline, we report the results of an SVM classifier that uses only the text to predict the relationship between image–caption pairs.", "Table 3: The F1 scores of the multi-class classification methods described in Section 4.1 (80–20 train–test split; 5-fold cross-validation; Visible / Subjective / Action / Story / Meta / Irrelevant / Weighted) — SVM (text-only): 0.83 / 0.12 / 0.32 / 0.21 / 0.19 / 0.00 / 0.48; GloVe (text-only): 0.80 / 0.44 / 0.58 / 0.57 / 0.44 / 0.08 / 0.63; BERT (text-only): 0.82 / 0.35 / 0.62 / 0.62 / 0.44 / 0.06 / 0.65; GloVe + ResNet: 0.81 / 0.36 / 0.58 / 0.60 / 0.45 / 0.07 / 0.64; BERT + ResNet: 0.83 / 0.36 / 0.69 / 0.62 / 0.44 / 0.06 / 0.67. We extract bag-of-words features by using N-grams (for N from 1 to 5), and pass them to the SVM classifier", "as input.", "Next, we discuss two multi-modal classifiers for predicting the image–caption coherence relations.", "GloVe + ResNet-50: This model contains a text encoder
for textual-feature extraction and an image encoder for image-feature extraction.", "For the image encoder, we use a ResNet-50 (He et al., 2016) pre-trained on ImageNet followed by a Batch-Norm layer, a fully connected layer and a ReLU activation function.", "The text encoder takes as input word embeddings from the GloVe model (Pennington et al., 2014), and consists of an LSTM layer, a Batch-Norm layer, and a fully connected layer with a tanh activation function.", "BERT + ResNet-50: To test the impact of the text encoder in this setup, we reuse the setup of the previous model with a different textual-feature extractor.", "We train and test using an encoder that takes sentence embeddings as input, using the [CLS] representation produced by the BERT-base model (Devlin et al., 2018).", "Results: The results of all of our models are presented in Table 3, where we present the F1 scores over each of the individual relations, as well as an overall weighted average.", "The BERT+ResNet model achieves the highest performance (|t| > 9.54, p < 0.01),", "with an overall F1 score of 0.67.", "For the interested reader, we present in the GitHub page the top features of the Naive Bayes SVM classifier (Wang and Manning, 2012).", "To achieve the goal of generating captions with a desired coherence relation to the image, it is important to clearly differentiate between often co-occurring label types (such as Visible and Meta).", "To this end, we introduce a label-mapping strategy for predicting coherence relations, such that each image–caption pair is assigned a single coherence label.", "We map the set of human-annotated coherence relations for an image–caption pair to a single label using the following heuristic:", "1. If the set contains the Meta label, then the image–caption pair is assigned the Meta label.", "2. If the set contains the Visible label and does not contain either Meta or Subjective, then the image–caption pair is set to Visible", ".", "3.
If none of the above rules are met for this image–caption pair, we randomly sample a label from its set of labels.", "The distribution of labels after this mapping is given in the first row of Table 4.", "As opposed to the ground-truth label distribution in Table 1, these values add up to 100%.", "Using the label mapping described above, we retrain and evaluate the BERT+ResNet classifier presented in Sec. 4.1.", "In addition, we perform additional experiments in which the caption text is encoded using the pre-trained Universal Sentence Encoder (USE) (Cer et al., 2018; tfhub.dev/google/universal-sentence-encoder-large/3), which returns a 512-dimensional embedding for the text.", "On the image encoding side, we also experiment with the pre-trained Graph-Regularized Image Semantic Embedding model (Juan et al., 2020), which is trained over ultra-fine-grained image labels on web-scale amounts of data (roughly 260M examples over roughly 40M labels); this model returns a compact, 64-dimensional representation for the image.", "We concatenate the text and image features into a single vector, and feed it to a fully-connected neural network with 3 hidden layers of 256 units each with ReLU activations (for all but the last one), followed by a softmax layer which computes the logits for the 6 target classes.", "We divide the 3,910 labeled image–text pairs from the ground-truth split of our data into training and test sets, with 3,400 and 510 samples, respectively.", "We use dropout with probability of 0.5, and tune the model. Table 4: The F1 scores of coherence relation classifiers with label mapping (Visible / Subjective / Action / Story / Meta / Irrelevant / Weighted) — Ground-truth distribution: 46.65% / 7.07% / 1.31% / 19.09% / 23.42% / 2.46%; BERT + ResNet: 0.64 / 0.26 / 0.02 / 0.52 / 0.46 / 0.07 / 0.52; BERT + GraphRise: 0.59 / 0.15 / 0.00 / 0.42 / 0.34 / 0.00 / 0.45; USE + GraphRise: 0.69 / 0.45 / 0.00 / 0.57 / 0.48 / 0.00 / 0.57.", "Results: Table 4 shows the results of the single-label prediction experiments, where we present the F1
scores over each of the individual relations, as well as an overall weighted average.", "The USE+GraphRise model using the label mapping achieves the highest performance, with an overall F1 score of 0.57.", "Next, we describe how we use this classifier's predictions to annotate the training and validation splits of the Conceptual Caption dataset (3.3 million image–caption pairs), in order to train a controllable caption-generation model.", "We use the coherence label predictions on the Conceptual Captions dataset (Section 4) to train a coherence-aware caption generation model.", "Model: We model the output caption using a sequence-generation approach based on Transformer Networks (Vaswani et al., 2017).", "The output is the sequence of sub-tokens comprising the target caption.", "The input is obtained by concatenating the following features.", "Image Features: We obtain a 64-dimensional representation for the image using the Graph-RISE (Juan et al., 2020) feature extractor, which employs a ResNet-101 network to classify images into some 40M classes.", "We do not fine-tune this image encoder model.", "We use the 64-dimensional feature available immediately before the classification layer, and embed it into the Transformer encoder embedding space using a trainable dense layer.", "Detected Objects: We obtain object labels for the image using the Google Cloud Vision API (cloud.google.com/vision).", "We represent each label using pre-trained 512-dimensional vectors trained to predict co-occurring objects on web pages, in a similar fashion as the word2vec model (Mikolov et al., 2013).", "We embed each of these into the Transformer encoder embedding space using a trainable dense layer.", "Coherence relation label: This is an input label fed at training time, for which we use the inferred coherence relation for the image–caption pair; at inference time, the label input is used to control the information in the generated caption.", "Embeddings for the coherence labels are trainable model parameters.", "Additionally, the
relationship label serves as the start token for the Transformer decoder (Figure 5), i.e., it is made available both for the encoder network and directly for the decoder network.", "When training and evaluating a coherence-agnostic model, this label is set to a special symbol, such as NONE, essentially running the model without coherence information.", "For all models described in this paper, the Transformer network has 6 encoder layers, 6 decoder layers, 8 attention heads, and a 512-dimensional embedding space.", "In what follows, we discuss evidence for our hypotheses:", "(a) a coherence-aware model presents information that is aligned with the goal of the specified coherence relation. Table 5: The distribution of coherence relations in image–caption pairs when captions are generated with the coherence-aware model vs. the coherence-agnostic model (the mode of the distribution in bold); columns: coherence-agnostic / Visible coherence-aware / Subjective coherence-aware / Story coherence-aware / Meta coherence-aware — Visible: 52.1% / 79.9% / 31.7% / 25.0% / 42.80%; Subjective: 11.4% / 2.6% / 24.4% / 2.6% / 1.9%; Action: 10.7% / 10.8% / 6.3% / 8.8% / 11.4%; Story: 51.3% / 16.0% / 45.0% / 58.8% / 17.34%; Meta: 31.2% / 32.8% / 15.1% / 17.7% / 46.5%; Irrelevant: 12.2% / 12.3% / 10.7% / 9.9% / 21.40%; When: 9.5% / 5.6% / 4.1% / 17.7% / 9.6%; How: 21.3% / 21.3% / 9.6% / 25.0% / 30.26%; Where: 5.3% / 8.6% / 4.1% / 8.8% / 16.6%.", "(b) a coherence-aware model can significantly improve caption quality.", "Evaluation by expert annotators: We train the model described above with the predicted discourse relation labels for image–caption pairs in the Conceptual Captions training and validation sets.", "The checkpoint with the highest CIDEr (Vedantam et al., 2015) score on the validation set is selected for inference and human evaluations.", "We asked our annotators to annotate a subset of randomly selected image–caption pairs generated by this model.", "These evaluation images were selected from the Conceptual Captions evaluation set based on their predicted coherence label using the
single-label classifier (Section 4) on the captions generated by the coherence-agnostic model (Section 5).", "According to our sensitivity power analysis, with a sample size of 1500 image–text pairs, 300 in each category, we are able to detect effect sizes as small as 0.1650 with a power and significance level of 95%.", "Table 5 shows the result distributions for the coherence-agnostic and coherence-aware model.", "Differences greater than 3% are statistically significant (p < 0.05, t > 2.5).", "The ability to control the generated caption using an input coherence relation is clear: when asking for Visible (the column under Visible), 79.85% of the captions are evaluated to fit the Visible label (non-overlapping), an absolute increase of 27.7% over the coherence-agnostic model (with only 52.09% Visible); at the same time, the rate of Story and Subjective captions reduces significantly.", "This reduction is particularly noteworthy in light of eliminating potential context hallucinations, which are likely to be found under the Story and Subjective labels.", "A similar trend is observed when asking for, e.g., Meta: 46.49% of the captions are evaluated to fit the Meta label (non-overlapping; the column under Meta), up 15.3% over the coherence-agnostic model (with 31.18% Meta).", "A qualitative analysis of the generated captions shows that captions generated under the Meta label include terms such as screenshot and view, while Subjective captions come with adjectives such as beautiful or favorite.", "Figure 6 shows several examples.", "Crowdsourcing and Automatic Metrics: For the following experiments, a subset of the Conceptual Captions validation data was selected where the ground-truth captions are labeled as Visible.", "To compare the quality of the generated captions using our framework with other models, we follow the same crowdsourcing protocol that Sharma et al.
(2018) employed for quality assessment.", "We asked subjects whether the generated captions are good or not.", "86% of the captions generated by the coherence-aware model were selected as good captions, whereas only 74% of the captions generated by the coherence-agnostic model were selected as good captions.", "Note that, based on the human-evaluation data published for the Conceptual Caption Workshop at CVPR 2019 (http://www.conceptualcaptions.com/winners-and-data), this rate is on average 67% good captions for the participating state-of-the-art models in 2019.", "Furthermore, in a follow-up experiment we ask subjects to choose between a caption generated by the coherence-aware model and one generated by the coherence-agnostic model: 68.2% of the time subjects preferred the coherence-aware result, versus 31.8% for the coherence-agnostic one.", "In addition, we study the quality and the relevance of the captions generated by our model as suggested by van der Lee et al. (2019).", "On a scale of 0 to 5, the average scores of the quality of the captions generated by the coherence-aware and the coherence-agnostic model are, respectively, 3.44 and 2.83.", "The average scores of the relevance for the coherence-aware and the coherence-agnostic conditions are, respectively, 4.43 and 4.40.", "Note that subjects rated the quality and the relevance of the captions while seeing the questions on the same page.", "Screenshots and code for the experiments can be found on our GitHub page.", "With the exception of the relevance condition, the results of the other questions that we asked in the crowdsourcing experiments are statistically significantly different (p < 0.05, |t| > 3.1), which indicates that subjects prefer captions generated by the coherence-aware model.", "We also mention here that this difference in quality, albeit significant from a human-rating perspective, is not reflected in the CIDEr score computed on the same data (against the available reference captions).", "The CIDEr scores of the captions generated by the coherence-aware and the coherence-agnostic models are, respectively, 0.958 and 0.964.", "This is not surprising, as the reference captions used by CIDEr are subject to the same distribution over coherence relations as the rest of the data, and therefore generating caption outputs with a different coherence-relation distribution (Table 5) is unlikely to have a positive impact on reference-driven metrics such as CIDEr.", "Representing coherence in image–text presentations can provide a scaffold for organizing, disambiguating and integrating the interpretation of communication across modalities.", "We show that cross-modal coherence modeling significantly improves the consistency and quality of the generated text with respect to information needs.", "This is a step forward towards designing systems that learn commonsense inferences in images and text and use them to communicate naturally and effectively with users.", "In addition, the presented dataset, Clue, provides opportunities for further theoretical and computational explorations.", "The experiments described for the coherence relation prediction task set the stage for designing better models for inferring coherence for image–text pairs.", "The presented work has limitations that can be addressed in future research.", "According to the description of the Conceptual Captions dataset, its captions have been hypernymized.", "However, by studying the examples in the Other category, we discovered an additional coherence relation that exists between an image and caption, in which the caption identifies an
object or entity in the image; we call this relation Identification.", "Examples of this relation involve a caption that mentions the brand of a product or the name of the person in the image.", "Identification is easy to annotate but missing from this work due to the properties of the corpus we annotated.", "Future work should study this additional relation in the context of caption annotation and generation.", "The research presented here is supported by NSF Awards IIS-1526723 and CCF-19349243 and through a fellowship from the Rutgers Discovery Informatics Institute.", "Thanks to Gabriel Greenberg and the anonymous reviewers for helpful comments.", "We would also like to thank the Mechanical Turk annotators for their contributions.", "We are grateful to our data annotators, Ilana Torres and Kathryn Slusarczyk, for their dedicated work." ]
[ "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "objective", "method", "method", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", 
"method", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "We present a study on leveraging multilingual pre-trained generative language models for zero-shot cross-lingual event argument extraction (EAE).", "By formulating EAE as a language generation task, our method effectively encodes event structures and captures the dependencies between arguments.", "We design language-agnostic templates to represent the event argument structures, which are compatible with any language, hence facilitating the cross-lingual transfer.", "Our proposed model finetunes multilingual pre-trained generative language models to generate sentences that fill in the language-agnostic template with arguments extracted from the input passage.", "The model is trained on source languages and is then directly applied to target languages for event argument extraction.", "Experiments demonstrate that the proposed model outperforms the current state-of-the-art models on zero-shot cross-lingual EAE.", "Comprehensive studies and error analyses are presented to better understand the advantages and the current limitations of using generative language models for zero-shot cross-lingual transfer EAE.", "Event argument extraction (EAE) aims to recognize the entities serving as event arguments and identify their corresponding roles.", "As illustrated by the English example in Figure 1, given a trigger word destroyed for a Conflict:Attack event, an event argument extractor is expected to identify commando, Iraq, and post as the event arguments and predict their roles as Attacker, Place, and Target, respectively.", "Zero-shot cross-lingual EAE has attracted considerable attention since it eliminates the requirement of labeled data for constructing EAE models in low-resource languages (Subburathinam et al., 2019; Ahmad et al., 2021; Nguyen and Nguyen, 2021).", "(The authors contribute equally.)", "In this setting, the model is trained on the examples in the source languages and directly tested on the instances in the target languages.", "Recently,
generation-based models have shown strong performance on monolingual structured prediction tasks (Yan et al., 2021; Huang et al., 2021b; Paolini et al., 2021), including EAE (Li et al., 2021; Hsu et al., 2021).", "These works fine-tune pre-trained generative language models to generate outputs following designed templates such that the final predictions can be easily decoded from the outputs.", "Compared to the traditional classification-based models (Wang et al., 2019; Wadden et al., 2019; Lin et al., 2020), they better capture the structures and dependencies between entities, as the templates provide additional declarative information.", "Despite the successes, the designs of templates in prior works are language-dependent, which makes them hard to extend to the zero-shot cross-lingual transfer setting (Subburathinam et al., 2019; Ahmad et al., 2021).", "Naively applying such models trained on the source languages to the target languages usually generates code-switching outputs, yielding poor performance for zero-shot cross-lingual transfer, as we will empirically show in Section 5.4.", "We use pre-trained generative language models to refer to pre-trained models with encoder-decoder structure, such as BART (Lewis et al., 2020), T5 (Raffel et al., 2020), and mBART (Liu et al., 2020).", "For models adapting these pre-trained generative models to generate texts for downstream applications, we denote them as generation-based models.", "How to design language-agnostic generation-based models for zero-shot cross-lingual structured prediction problems is still an open question.", "In this work, we present a study that leverages multilingual pre-trained generative models for zero-shot cross-lingual event argument extraction and propose X-GEAR (Cross-lingual Generative Event Argument extractoR).", "Given an input passage and a carefully designed prompt that contains an event trigger and the corresponding language-agnostic template, X-GEAR is trained to
generate a sentence that fills in a language-agnostic template with arguments.", "X-GEAR inherits the strength of generation-based models, capturing event structures and the dependencies between entities better than classification-based models.", "Moreover, the pre-trained decoder inherently identifies named entities as candidates for event arguments and does not need an additional named entity recognition module.", "The language-agnostic templates prevent the model from overfitting to the source language's vocabulary and facilitate cross-lingual transfer.", "We conduct experiments on two multilingual EAE datasets: ACE-2005 (Doddington et al., 2004) and ERE (Song et al., 2015).", "The results demonstrate that X-GEAR outperforms the state-of-the-art zero-shot cross-lingual EAE models.", "We further perform ablation studies to justify our design and present comprehensive error analyses to understand the limitations of using multilingual generation-based models for zero-shot cross-lingual transfer.", "Our code is available at https://github.com/PlusLabNLP/X-Gear.", "2 Related Work. Zero-shot cross-lingual structured prediction.", "Zero-shot cross-lingual learning is an emerging research topic as it eliminates the requirement of labeled data for training models in low-resource languages (Ruder et al., 2021; Huang et al., 2021a).", "Footnote 2: For example, TANL (Paolini et al., 2021) is trained to generate [Two soldiers|target] were attacked to represent Two soldiers being a target argument. When directly applying it to Chinese, the ground truth for TANL becomes [ |target], a sentence alternating between Chinese and English.", "Various structured prediction tasks have been studied, including named entity recognition (Pan et al., 2017; Huang et al., 2019; Hu et al., 2020), dependency parsing (Ahmad et al., 2019b,a; Meng et al., 2019), relation extraction (Zou et al., 2018; Ni and Florian, 2019), and event argument extraction (Subburathinam et al., 2019; Nguyen and 
Nguyen, 2021; Fincke et al., 2021).", "Most of them are classification-based models that build classifiers on top of multilingual pre-trained masked language models.", "To further deal with the discrepancy between languages, some of them require additional information, such as bilingual dictionaries (Liu et al., 2019; Ni and Florian, 2019), translation pairs (Zou et al., 2018), and dependency parse trees (Subburathinam et al., 2019; Ahmad et al., 2021; Nguyen and Nguyen, 2021).", "However, as pointed out in previous literature (Li et al., 2021; Hsu et al., 2021), classification-based models are less capable of modeling dependencies between entities than generation-based models.", "Generation-based structured prediction.", "Several works have demonstrated the great success of generation-based models on monolingual structured prediction tasks, including named entity recognition (Yan et al., 2021), relation extraction (Huang et al., 2021b; Paolini et al., 2021), and event extraction (Du et al., 2021; Li et al., 2021; Hsu et al., 2021; Lu et al., 2021).", "Yet, as mentioned in Section 1, their designed generation targets are language-dependent.", "Accordingly, directly applying their methods to the zero-shot cross-lingual setting would result in inferior performance.", "Prompting methods.", "There has been growing interest recently in incorporating prompts into pre-trained language models in order to guide the models' behavior or elicit knowledge (Peng et al., 2019; Sheng et al., 2020; Shin et al., 2020; Schick and Schütze, 2021; Qin and Eisner, 2021; Scao and Rush, 2021).", "Following the taxonomy in Liu et al. (2021), these methods can be classified depending on whether the language models' parameters are tuned and on whether trainable prompts are introduced.", "Our method belongs to the category that fixes the prompts and tunes the language models' parameters.", "Despite the flourishing research on prompting methods, only limited attention has been 
put on multilingual tasks (Winata et al., 2021).", "We focus on zero-shot cross-lingual EAE.", "Given an input passage and an event trigger, an EAE model identifies arguments and their corresponding roles.", "(Figure 2 example passage: Five Iraqi civilians, including a woman, were killed Monday when their houses were hit by a missile fired by the US-led coalition warplanes, witnesses said.)", "More specifically, as illustrated by the training examples in Figure 2, given an input passage x and an event trigger t (killed) belonging to an event type e (Life:Die), an EAE model predicts a list of arguments a = [a_1, a_2, ..., a_l] (coalition, civilians, woman, missile, houses) and their corresponding roles r = [r_1, r_2, ..., r_l] (Agent, Victim, Victim, Instrument, Place).", "In the zero-shot cross-lingual setting, the training set X_train = {(x_i, t_i, e_i, a_i, r_i)}_{i=1}^{N} is in the source languages while the test set X_test = {(x_i, t_i, e_i, a_i, r_i)}_{i=1}^{M} is in the target languages.", "Similar to monolingual EAE, zero-shot cross-lingual EAE models are expected to capture the dependencies between arguments and make structured predictions.", "However, unlike monolingual EAE, zero-shot cross-lingual EAE models need to handle the differences (e.g., grammar, word order) between languages and learn to transfer the knowledge from the source languages to the target languages.", "We formulate zero-shot cross-lingual EAE as a language generation task and propose X-GEAR, a Cross-lingual Generative Event Argument extractoR, which is illustrated in Figure 2.", "This formulation raises two challenges: (1) the input language may vary between training and testing; (2) the generated output strings need to be easily parsed into final predictions.", "Therefore, the output strings have to reflect changes in the input language accordingly while remaining well-structured.", "We address 
these challenges by designing language-agnostic templates.", "Specifically, given an input passage x and a designed prompt that contains the given trigger t, its event type e, and a language-agnostic template, X-GEAR learns to generate an output string that fills in the language-agnostic template with information extracted from the input passage.", "The language-agnostic template is designed in a structured way such that parsing the final argument predictions a and role predictions r from the generated output is trivial.", "Moreover, since the template is language-agnostic, it facilitates cross-lingual transfer.", "X-GEAR fine-tunes multilingual pre-trained generative models, such as mBART-50 (Tang et al., 2020) or mT5 (Xue et al., 2021), and augments them with a copy mechanism to better adapt to input language changes.", "We present its details as follows, including the language-agnostic templates, the target output string, the input format, and the training details.", "We create one language-agnostic template T_e for each event type e, in which we list all possible associated roles (footnote 3) and form a unique HTML-tag-style template for that event type e.", "For example, in Figure 2, the Life:Die event is associated with four roles: Agent, Victim, Instrument, and Place.", "Thus, the template for Life:Die events is designed as <Agent> [None] </Agent> <Victim> [None] </Victim> <Instrument> [None] </Instrument> <Place> [None] </Place>. (Footnote 3: The associated roles can be obtained by skimming the training data or directly from the annotation guideline if provided.)", "For ease of understanding, we use English words to present the template.", "However, these tokens ([None], <Agent>, </Agent>, <Victim>, etc.) 
are encoded as special tokens (footnote 4) that the pre-trained models have never seen, and thus their representations need to be learned from scratch.", "Since these special tokens are not associated with any language and are not pre-trained, they are considered language-agnostic.", "X-GEAR learns to generate target output strings that follow the form of the language-agnostic templates.", "To compose the target output string for training, given an instance (x, t, e, a, r), we first pick out the language-agnostic template T_e for the event type e and then replace all [None] in T_e with the corresponding arguments in a according to their roles r.", "If there are multiple arguments for one role, we concatenate them with a special token [and].", "For instance, the training example in Figure 2 has two arguments (civilians and woman) for the Victim role, and the corresponding part of the output string would be <Victim> civilians [and] woman </Victim>.", "If there are no corresponding arguments for one role, we keep [None] in T_e.", "By applying this rule, the full output string for the training example in Figure 2 becomes <Agent> coalition </Agent> <Victim> civilians [and] woman </Victim> <Instrument> missile </Instrument> <Place> houses </Place>.", "Since the output string is in the HTML-tag style, we can easily decode the argument and role predictions from the generated output string via a simple rule-based algorithm.", "As we mentioned previously, the key to the generative formulation for zero-shot cross-lingual EAE is to guide the model to generate output strings in the desired format.", "To facilitate this behavior, we feed the input passage x as well as a prompt to X-GEAR, as shown in Figure 2.", "The prompt contains all", "Footnote 4: In fact, the special tokens can be replaced by any other format, such as <token1> or </token1>.", "Here, we use <Agent> and </Agent> to highlight that arguments between these two special tokens correspond to the 
Agent role.", "valuable information for the model to make predictions, including a trigger t and a language-agnostic template T_e.", "Notice that we do not explicitly include the event type e in the prompt because the template T_e implicitly contains this information.", "In Section 6.1, we will show experiments on explicitly adding the event type e to the prompt and discuss its influence on cross-lingual transfer.", "To enable X-GEAR to generate sentences in different languages, we adopt a multilingual pre-trained generative model as our base model, which models the conditional probability of generating a new token given the previously generated tokens and the input context c fed to the encoder, i.e., P(x_i | x_{<i}, c).", "Copy mechanism.", "Although multilingual pre-trained generative models can generate sequences in many languages, solely relying on them may result in generating hallucinated arguments (Li et al., 2021).", "Since most of the tokens in the target output string appear in the input sequence (footnote 5), we augment the multilingual pre-trained generative models with a copy mechanism to help X-GEAR better adapt to the cross-lingual scenario.", "Specifically, we follow See et al. 
(2017) to decide the conditional probability of generating a token t as a weighted sum of the vocabulary distribution P_gen computed by the multilingual pre-trained generative model and the copy distribution P_copy: P_{X-GEAR}(x_i = t | x_{<i}, c) = w_copy * P_copy(t) + (1 - w_copy) * P_gen(x_i = t | x_{<i}, c), where w_copy ∈ [0, 1] is the copy probability computed by passing the decoder hidden state at time step i to a linear layer.", "As for P_copy, it refers to the probability over input tokens weighted by the cross-attention computed by the last decoder layer (at time step i).", "Our model is then trained end-to-end with the following loss: L = -∑_i log P_{X-GEAR}(x_i | x_{<i}, c).", "We consider two commonly used event extraction datasets: ACE-2005 and ERE.", "(Footnote 5: Except for the special tokens [and] and [None].)", "We consider English, Arabic, and Chinese annotations for ACE-2005 (Doddington et al., 2004) and follow the preprocessing in Wadden et al. (2019) to keep 33 event types and 22 argument roles.", "ERE (Song et al., 2015) was created under the Deep Exploration and Filtering of Text (DEFT) program.", "We consider its English and Spanish annotations and follow the preprocessing in Lin et al. (2020) to keep 38 event types and 21 argument roles.", "Detailed statistics and preprocessing steps for the two datasets are in Appendix A. 
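The copy-augmented token distribution above can be sketched in a few lines of Python. This is a toy numeric illustration with made-up probabilities and a fixed copy weight w_copy; it is not the authors' implementation, in which w_copy comes from the decoder hidden state and P_copy from the decoder's cross-attention weights:

```python
def mix_distributions(p_gen, p_copy, w_copy):
    """Combine the generation and copy distributions:
    P(t) = w_copy * P_copy(t) + (1 - w_copy) * P_gen(t).

    p_gen : dict mapping token -> probability under the generative model.
    p_copy: dict mapping token -> probability of being copied from the input
            (non-zero only for tokens that appear in the input passage).
    """
    assert 0.0 <= w_copy <= 1.0
    vocab = set(p_gen) | set(p_copy)
    return {
        t: w_copy * p_copy.get(t, 0.0) + (1.0 - w_copy) * p_gen.get(t, 0.0)
        for t in vocab
    }


# Toy example: the generator favors "forces", but the copy distribution
# puts most of its mass on "coalition", which appears in the input passage.
p_gen = {"coalition": 0.2, "forces": 0.5, "[None]": 0.3}
p_copy = {"coalition": 0.9, "missile": 0.1}
mixed = mix_distributions(p_gen, p_copy, w_copy=0.6)
# mixed["coalition"] = 0.6*0.9 + 0.4*0.2 = 0.62
```

Because the mixture is a convex combination of two normalized distributions, the result is itself a valid distribution; tokens that appear in the input (here coalition) are boosted, which is exactly why the copy mechanism reduces hallucinated arguments.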
Notice that prior works on the zero-shot cross-lingual transfer of event arguments mostly focus on event argument role labeling (Subburathinam et al., 2019; Ahmad et al., 2021), where they assume ground-truth entities are provided during both training and testing.", "In their experimental data splits, events in a sentence can be scattered across the training, development, and test splits, since they treat each event-entity pair as a different instance.", "In this work, we consider event argument extraction (Wang et al., 2019; Wadden et al., 2019; Lin et al., 2020), which is a more realistic setting.", "We follow previous work (Lin et al., 2020; Ahmad et al., 2021) and use the argument classification F1 score to measure the performance of models.", "An argument-role pair is counted as correct if both the argument offsets and the role type match the ground truth.", "Given the ground-truth arguments a, ground-truth roles r, predicted arguments â, and predicted roles r̂, the argument classification F1 score is defined as the F1 score between the set {(a_i, r_i)} and the set {(â_j, r̂_j)}.", "For every model, we experiment with three different random seeds and report the average results.", "We compare the following models; their implementation details are listed in Appendix B.", "OneIE (Lin et al., 2020), the state of the art for monolingual event extraction, is a classification-based model trained with multitasking, including entity extraction, relation extraction, event extraction, and event argument extraction.", "We simply replace its pre-trained embedding with XLM-RoBERTa-large (Conneau et al., 2020) to fit the zero-shot cross-lingual setting.", "Note that the multi-task learning makes OneIE require additional annotations, such as named entity annotations and relation annotations.", "CL-GCN (Subburathinam et al., 2019) is a classification-based model for cross-lingual event argument role labeling (EARL).", "It considers dependency parsing 
annotations to bridge different languages and uses GCN layers (Kipf and Welling, 2017) to encode the parsing information.", "We follow the implementation of previous work (Ahmad et al., 2021) and add two GCN layers on top of XLM-RoBERTa-large.", "Since CL-GCN focuses on EARL tasks, which assume the ground-truth entities are available during testing, we add a named entity recognition module jointly trained with CL-GCN.", "GATE (Ahmad et al., 2021), the state-of-the-art model for zero-shot cross-lingual EARL, is a classification-based model which considers dependency parsing annotations as well.", "Unlike CL-GCN, it uses a Transformer layer (Vaswani et al., 2017) with modified attention to encode the parsing information.", "We follow the original implementation and add two GATE layers on top of pre-trained multilingual language models (footnote 6).", "Similar to CL-GCN, we add a named entity recognition module jointly trained with GATE.", "TANL (Paolini et al., 2021) is a generation-based model for monolingual EAE.", "Its predicted target is a sentence that embeds labels into the input passage, such as [Two soldiers|target] were attacked, which indicates that Two soldiers is a target argument.", "To adapt TANL to zero-shot cross-lingual EAE, we change its pre-trained generative model from T5 (Raffel et al., 2020) to mT5-base (Xue et al., 2021).", "X-GEAR is our proposed model.", "We consider three different pre-trained generative language models: mBART-50-large (Tang et al., 2020), mT5-base, and mT5-large (Xue et al., 2021).", "Table 1 and Table 2 list the results on ACE-2005 and ERE, respectively, with all combinations of source languages and target languages.", "Note that all the models have similar numbers of parameters.", "Footnote 6: To better compare our method with this strong baseline, we consider three different pre-trained multilingual language models for GATE: (1) XLM-RoBERTa-large, (2) mBART-50-large, and (3) mT5-base.", "For mBART-50-large and mT5-base, we follow BART's recipe 
(Lewis et al., 2020) to extract features for EAE predictions.", "Specifically, the input passage is fed into both the encoder and the decoder, and the final token representations are elicited from the decoder output.", "Comparison to prior generative models.", "We first observe that TANL has poor performance when transferring to different languages.", "The reason is that its language-dependent template makes TANL easily generate code-switching outputs (footnote 7), a case that pre-trained generative models have rarely seen, leading to poor performance.", "In contrast, X-GEAR uses language-agnostic templates and achieves better performance for zero-shot cross-lingual transfer.", "Comparison to classification models.", "X-GEAR with mT5-base outperforms OneIE, CL-GCN, and GATE on almost all the combinations of source language and target language.", "This suggests that our proposed method is indeed a promising approach for zero-shot cross-lingual EAE.", "It is worth noting that OneIE, CL-GCN, and GATE require an additional pipelined named entity recognition module to make predictions.", "Moreover, CL-GCN and GATE need additional dependency parsing annotations to align the representations of different languages.", "(Footnote 7: Such as the example shown in footnote 2.)", "On the contrary, X-GEAR is able to leverage the learned knowledge from the pre-trained generative models, and therefore no additional modules or annotations are needed.", "Comparison of pre-trained language models.", "Interestingly, using mT5-base is more effective than using mBART-50-large for X-GEAR, although they have a similar number of parameters.", "We conjecture that the use of special tokens leads to this difference.", "mBART-50 has different begin-of-sequence (BOS) tokens for different languages.", "During generation, we have to specify which BOS token to use as the start token.", "We suspect that this language-specific BOS token makes it harder for mBART-50 to transfer knowledge from the source language to the target 
language.", "Unlike mBART-50, mT5 does not have such language-specific BOS tokens.", "During generation, mT5 uses the padding token as the start token to generate a sequence.", "This design is more general and benefits zero-shot cross-lingual transfer.", "Larger pre-trained models are better.", "Finally, we demonstrate that the performance of X-GEAR can be further boosted with a larger pre-trained generative language model.", "As shown by Table 1 and Table 2, X-GEAR with mT5-large achieves the best scores in most cases.", "Copy mechanism.", "We first study the effect of the copy mechanism.", "Table 3 lists the performance of X-GEAR with and without the copy mechanism.", "It shows improvements from adding a copy mechanism when using mT5-large and mT5-base.", "(Table 3: Ablation study on the copy mechanism for ACE-2005; columns en→xx, ar→xx, zh→xx, xx→en, xx→ar, xx→zh, avg: mBART-50-large 51.6, 39.8, 47.2, 48.2, 43.2, 47.2, 46.2; w/o copy 50.9, 42.2, 49.6, 50.6, 43.5, 48.7, 47.6; mT5-base 54.3, 41.4, 51.4, 49.4, 46.7, 51.0, 49.1; w/o copy 52.1, 39.5, 47.6, 48.1, 42.7, 48.5, 46.4; mT5-large 56.7, 44.8, 52.6, 53.0, 48.9, 52.1, 51.3; w/o copy 55.1, 45.0, 51.5, 52.0, 46.3, 53.2, 50.5.)", "However, interestingly, adding a copy mechanism is not effective for mBART-50.", "We conjecture that this is because the pre-training objective of mBART-50 is denoising autoencoding (Liu et al., 2020), and it has already learned to copy tokens from the input.", "Therefore, adding a copy mechanism is less useful.", "In contrast, the pre-training objective of mT5 is to generate only the tokens that have been masked out, so the model lacks the ability to copy from the input.", "Thus, the copy mechanism becomes beneficial for mT5.", "Including event type in prompts.", "In Section 4, we mentioned that the designed prompt for X-GEAR consists of only the input sentence and the language-agnostic template.", "In this section, we discuss whether explicitly including the event type information in the prompt is helpful.", "We consider three ways to include the event type 
information: English tokens.", "We put the English version of the event type in the prompt even if we are training or testing on non-English languages, for example, using Attack for the event type Attack.", "Translated tokens.", "For each event type, we prepare a translated version of that event type token.", "For example, both Attack and its translated counterpart represent the Attack event type.", "During training or testing, we decide which token(s) to use according to the language of the input passage.", "Since all the event types are written in English in ACE-2005 and ERE, we use an off-the-shelf machine translation tool to perform the translation.", "Special tokens.", "We create a special token for every event type and let the model learn the representations of the special tokens from scratch.", "For instance, we use <-attack-> to represent the Attack event type.", "Table 4 shows the results.", "In most cases, including event type information in the prompt decreases the performance.", "(Table 4: Ablation study on including event type information in prompts for ACE-2005; columns en→xx, ar→xx, zh→xx, xx→en, xx→ar, xx→zh, avg: X-GEAR (mT5-base) 54.3, 41.4, 51.4, 49.4, 46.7, 51.0, 49.1; w/ English Tokens 53.3, 39.3, 52.3, 49.2, 46.5, 49.2, 48.3; w/ Translated Tokens 51.7, 40.4, 52.2, 49.8, 45.6, 48.8, 48.1; w/ Special Tokens 52.3, 39.7, 51.8, 49.0, 45.4, 49.3, 47.9.)", "One reason is that one word in a language can be mapped to several words in another language.", "For example, the Life event type is related to four sub-event types: Marry, Divorce, Born, and Die.", "In English, we can use just one word, Life, to cover all four sub-event types.", "However, in Chinese, when talking about Marry and Divorce, Life should be translated to one word; when talking about Born and Die, it should be translated to a different word.", "This mismatch may cause a performance drop when considering event types in prompts.", "We leave how to efficiently use event type information in the cross-lingual setting as future work.", "Influence of role order in 
templates.", "The order of roles in the designed language-agnostic templates can potentially influence performance.", "When designing the templates, we intentionally make the order of roles close to the order in natural sentences.", "To study the effect of different orders, we train X-GEAR with templates with different random orders and report the results in Table 5.", "X-GEAR with random orders still achieves good performance, though slightly worse than with the original order.", "This suggests that X-GEAR is not very sensitive to different templates, while providing an appropriate order of roles can lead to a small improvement.", "As described earlier, the language-agnostic templates are designed to facilitate cross-lingual transfer.", "To further validate the effectiveness of the language-agnostic templates, we conduct experiments using English tokens as the templates.", "Specifically, we set the format Agent: [None] <SEP> Victim: [None] <SEP> Instrument: [None] <SEP> Place: [None] as the template for Life:Die events.", "Hence, for non-English instances, the target output string is a code-switching sequence.", "Table 6 lists the results.", "We observe that applying language-agnostic templates brings X-GEAR an average improvement of 2.3 F1 points.", "We perform error analysis on X-GEAR (mT5-base) when transferring from Arabic to English and from Chinese to English.", "For each case, we sample 30 failed examples and present the distribution of various error types in Figure 3.", "Errors on both monolingual and cross-lingual models.", "We compare the predicted results from X-GEAR (ar→en) with X-GEAR (en→en), or from X-GEAR (zh→en) with X-GEAR (en→en).", "If their predictions are similar and both of them are wrong when compared to the gold output, we classify the error into this category.", "To overcome the errors in this category, the potential solution is to improve monolingual models for EAE tasks.", "Over-generating.", "Errors in this category happen more often in X-GEAR (ar→en).", "It is likely because the entities 
in Arabic are usually much longer than those in English when measured by the number of sub-words.", "Based on our statistics, the average entity span length (in sub-words) is 2.85 for Arabic and 2.00 for English.", "This makes it natural for X-GEAR (ar→en) to over-generate some tokens even though it has captured the correct concept.", "An example is that the model predicts The EU foreign ministers, while the ground truth is ministers.", "Label disagreement on different language splits.", "The annotations for the ACE dataset in different language splits contain some ambiguity.", "For example, given the sentence He now also advocates letting in U.S. troops for a war against Iraq even though it is a fellow Muslim state. and the queried trigger war, the annotations in English tend to label Iraq as the Place where the event happens, while similar situations in other languages mark Iraq as the Target of the war.", "Grammar differences between languages.", "An example for this category is ... Blackstone Group would buy Vivendi's theme park division, including Universal Studios Hollywood ... 
and the queried trigger buy.", "We observe that X-GEAR (ar→en) predicts Vivendi as the Artifact being sold and division as the Seller, while X-GEAR (en→en) can correctly understand that Vivendi is the Seller and division is the Artifact.", "We hypothesize that the reason is the difference between the grammar of Arabic and English.", "The word order of the phrase Vivendi's theme park division in Arabic is reversed relative to its English counterpart; that is, theme park division is written before Vivendi in Arabic.", "This difference leads to errors in this category.", "Generating words not appearing in the passage.", "In X-GEAR (zh→en), we observe several cases where the model generates words not appearing in the passage.", "There are two typical situations.", "The first is that X-GEAR (zh→en) mixes up singular and plural nouns.", "For example, the model generates studios as the prediction while only studio appears in the passage.", "This may be because Chinese does not have morphological inflection for plural nouns.", "The second is that X-GEAR (zh→en) will generate random predictions in Chinese.", "Generating correct predictions but in Chinese.", "This is a special case of Generating words not appearing in the passage.", "In this category, we observe that although the prediction is in Chinese (hence, a wrong prediction), it is correct if we translate it into English.", "Among all the errors, we highlight two specific categories: Generating words not appearing in the passage and Generating correct predictions but in Chinese.", "These errors can be resolved by applying constrained decoding (Cao et al., 2021) to force all the generated tokens to appear in the input.", "Table 7 presents the results of X-GEAR with constrained decoding.", "We observe that adopting such constraints indeed helps cross-lingual transferability, yet it also hurts the performance in some monolingual cases.", "We conduct a qualitative inspection of the predictions.", "The 
observation is that although the constrained decoding algorithm guarantees that all generated tokens appear in the input, this coercive method breaks the overall learned sequence distribution.", "Hence, in many monolingual examples, once one of the tokens is corrected by constrained decoding, the subsequent generated sequence changes a lot, even though the originally predicted suffix under beam decoding was actually correct.", "This leads to a performance decrease (footnote 9).", "Footnote 9: Indeed, a similar situation happens in cross-lingual cases;", "(Table 7: Results of applying constrained decoding; columns monolingual, cross-lingual, average over all: X-GEAR (mBART-50-large) 63.9, 37.4, 46.2, w/ constrained decoding 62.4, 37.6, 45.9; X-GEAR (mT5-base) 67.8, 39.7, 49.1, w/ constrained decoding 67.0, 39.9, 48.9; X-GEAR (mT5-large) 69.7, 42.2, 51.3, w/ constrained decoding 68.8, 43.1, 51.6.)", "We present the first generation-based models for zero-shot cross-lingual event argument extraction.", "To overcome the discrepancy between languages, we design language-agnostic templates and propose X-GEAR, which captures output dependencies well and can be used without additional named entity extraction modules.", "Our experimental results show that X-GEAR outperforms the current state of the art, which demonstrates the potential of using a language generation framework to solve zero-shot cross-lingual structured prediction tasks.", "We thank the anonymous reviewers for their helpful feedback.", "We thank the UCLA PLUSLab and UCLA-NLP group for the valuable discussions and comments.", "We also thank Steven Fincke, Shantanu Agarwal, and Elizabeth Boschee for their help with data preparation in Arabic.", "This work is supported in part by the Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 
2019-19051600007, and research awards sponsored by CISCO and Google.", "Our proposed models are based on multilingual pre-trained language models that are trained on large text corpora.", "It is known that pre-trained language models can capture biases reflected in their training data.", "Therefore, our models can potentially generate offensive or biased content learned by the pre-trained language model.", "We suggest carefully examining the potential bias before deploying our model in any real-world applications.", "Footnote 9 (continued): however, since the original performance for cross-lingual transfer is not high enough, the benefits of correcting tokens are more significant than this drawback." ]
[ "method", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", 
"abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "objective", "objective", "objective", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain" ]
[ "Conversational agents trained on large unlabeled corpora of human interactions will learn patterns and mimic behaviors therein, which may include offensive or otherwise toxic behavior.", "We introduce a new human-and-model-in-the-loop framework for evaluating the toxicity of such models, and compare a variety of existing methods in both the cases of non-adversarial and adversarial users that expose their weaknesses.", "We then go on to propose two novel methods for safe conversational agents, by either training on data from our new human-and-model-in-the-loop framework in a two-stage system, or baking-in safety to the generative model itself.", "We find our new techniques are", "(i) safer than existing models; while", "(ii) maintaining usability metrics such as engagingness relative to state-of-the-art chatbots.", "In contrast, we expose serious safety issues in existing standard systems like GPT2 (Radford et al., 2019), DialoGPT (Zhang et al., 2019) and BlenderBot (Roller et al., 2020).", "When dialogue models are trained to mimic human-human conversations utilizing large preexisting datasets, they will unfortunately also learn undesirable features from this human-human data, such as the use of toxic or biased language.", "Most recent work in the detection and prevention of offensive language has focused exclusively on human-generated data.", "These conversations may be very different from the domain in which a dialogue model might eventually be deployed: for example, humans may adversarially attempt to elicit (footnote 1: In this paper, we use offensive, toxic, and unsafe interchangeably.", "For more discussions about attempts to better define categories of unsafe content, see Schmidt and Wiegand (2017).)", "offensive language from a dialogue model in ways that differ from how they would speak with another human.", "In this work, we introduce Bot-Adversarial Dialogue (BAD) Safety, a novel method for evaluating chatbot safety with humans and models in 
the loop.", "We ask humans to adversarially converse with a set of state-of-the-art English-language models with the aim of inducing them to generate unsafe responses to mimic the way these models can be adversarially attacked at deployment time.", "We analyze how to optimally construct such a crowdworker task, and collect a dataset of 5k such conversations yielding around 70k total utterances.", "We then use the BAD method and data to evaluate the safety of several generative models and propose two techniques for making safer models: (1) Training a safety classifier with this data and deploying a two-stage model at inference time.", "In the two-stage setting, we prevent the generative model from surfacing offensive language flagged by the classifier.", "(2) A novel method that directly \"bakes in\" toxicity-awareness to the generative model during training by modifying the target responses to incorporate safe responses to offensive input. In experiments, we show that our new techniques outperform other existing generative models in terms of safety, while maintaining engagingness. We publicly release the BAD training and evaluation data as well as select models trained using this data via ParlAI (footnote 2: https://parl.ai/projects/safety_recipes/). 2 Related Work Numerous works have shown that humans speak differently with bots than with humans, with increases in profanity and aggressiveness associated with addressing a bot (Hill et al., 2015; Lortie and Guitton, 2011), which motivates the incorporation of human-bot dialogues into our safety framework. De Angeli and Carpenter (2005); De Angeli and Brahnam (2008) suggest that one in ten human-bot conversations may contain instances of the human demonstrating unprovoked abusive behavior towards the chatbot. Miller et al. (2017b) argued that adversarial attacks need to be expected and planned for when deploying a user-facing system that learns from its interactions. 
These findings suggest it is insufficient to merely exclude toxic data from training, as the model would not know how to answer hostile out-of-domain inputs, and positive biases where models tend to agree rather than contradict (Roller et al., 2020) would lead to undesirable outcomes. As shown in Gehman et al. (2020), training on sanitized data can decrease the amount of unprompted toxic content, yet still leave models vulnerable to generating toxic content based on specific prompts. The moving target of toxic content requires dynamic methods that repeatedly update benchmarks to improve current systems (Dinan et al., 2019a; Nie et al., 2019). The iterative procedure in Dinan et al. (2019a) strictly focuses on detection of toxicity in human-generated utterances through several rounds of humans attempting to break a toxicity classifier, without addressing generation.", "Our BAD approach is similar in spirit, but centers on generations of a bot in a human-bot conversation, closer to the context of deployed conversational models.", "Focusing on generation requires deciding how to address bad content.", "Previous works have compared response strategies, including avoidance, joking or polite deflection, non-committal answers, play-along, confrontation, apologetic responding, empathizing, and counter-attacking responses (Curry and Rieser, 2019; Chin and Yi, 2019; Chin et al., 2020; Paranjape et al., 2020).", "They find that humans rate different strategies as more appropriate depending on the type of offense they are responding to.", "Note that different implementation details make those strategies difficult to directly compare.", "While we use a strategy of non-sequiturs in this work, our takeaway is that future work should keep investigating several types of responses such that models can learn to deploy them adaptively according to finer-grained understanding of unsafe content.", "We describe the models we analyze in this paper, including safety classifiers and 
generative models.", "We consider binary Transformer-based classifiers, following the same structure as in Dinan et al. (2019a), with two sizes: 128M and 311M parameters.", "We pre-train these models on a previously existing Reddit dataset extracted and obtained by a third party that was hosted by pushshift.io (Baumgartner et al., 2020), using a masked language model objective, and then fine-tune on the safety classification tasks of interest, performing early stopping using the F1 score of the unsafe class on the validation set.", "These tasks include various combinations of the Wikipedia Toxic Comments dataset (WTC) (Wulczyn et al., 2017), Standard (S) and adversarial Build-it, Break-it, Fix-it (BBF) data from Dinan et al. (2019a), as well as semi-supervised data created from labeling the pushshift.io Reddit (Baumgartner et al., 2020) (Reddit) and Blended Skill Talk (BST) datasets.", "Finally, we will use a new dataset, Bot-Adversarial Dialogue (BAD), to be described in Section 4.", "As further baselines, we will also compare to both single-turn and multi-turn classifiers from Dinan et al. 
(2019a).", "BST 2.7B We start from a state-of-the-art open-domain dialogue system.", "We consider the same architecture and setup as in BlenderBot (Roller et al., 2020), which employs a Seq2Seq Transformer architecture (Vaswani et al., 2017), with an implementation based on the ParlAI version (Miller et al., 2017a).", "We consider the 2.7B parameter model, which has 2 encoder layers, 24 decoder layers, 2560 dimensional embeddings, and 32 attention heads, and which performed best in human evaluations of engagingness.", "This model was pre-trained using pushshift.io Reddit and fine-tuned on a smaller set of crowdsourced datasets designed to provide important conversational skills.", "At decoding time, the model employs standard beam search with a beam size of 10, context and label 3-gram blocking (Paulus et al., 2017), and a minimum beam length of 20 BPE tokens, which was shown to perform well compared to other choices.", "DialoGPT and GPT2 In our experiments we also compare to two other baseline models: DialoGPT (Medium) (Zhang et al., 2019) and GPT2 (Large) (Radford et al., 2019), although we expect these two models to have lower engagingness scores than the BST 2.7B base model, in line with results from Roller et al. (2020); Adiwardana et al. (2020).", "We use the medium-sized DialoGPT (345M parameters), as it performed best according to metrics in Zhang et al. (2019).", "To our knowledge these methods have not been compared previously in terms of dialogue safety evaluations, or the engagingness/safety trade-off.", "This section introduces the Bot-Adversarial Dialogue method for evaluating and training safe generative models.", "We would like our models to be robust against direct attempts to probe them for unsafe responses.", "Previous work described how safety classifiers can be trained to be robust to adversarial human attack, see Dinan et al. 
(2019a).", "However, in order to achieve safer conversational agents, we advocate for applying such a procedure directly to conversations with those agents, which previous work did not address.", "The main idea is to collect data of humans probing our conversational models to try to trick them into making mistakes, similar to adversarial attacks that happen in practice during deployment.", "We can then use this data to make our system more robust.", "We therefore construct a task where crowdworkers are instructed to converse naturally with a bot while simultaneously trying to probe or adversarially goad it into replying in an unsafe manner.", "Each bot's response is tagged as either safe or unsafe by the crowdworker.", "We divide the unsafe responses into four bins: those that nearly 0% of the population would consider unsafe (according to the annotator's best guess), and those that <10%, <50%, or >=50% of the population would consider unsafe.", "After collection, a separate task verifies the labels with 3 other annotators, who can mark utterances either as safe, or as unsafe with a number of finer categories (e.g., hate speech).", "See Appendix A for details on how we combine all annotations to give a single label to utterances.", "(Table 1: Dataset statistics for the BAD dataset, where crowdworkers were instructed to converse with a bot and annotate each bot utterance for offensiveness. SAFE utterances: 42,049 train / 4,239 valid / 1,654 test; OFFENSIVE utterances: 27,225 / 2,763 / 944; total utterances: 69,274 / 7,002 / 2,598; total dialogues: 5,080 / 513 / 191.) While we only make", "use of the binary safe/unsafe distinction in this work, this finer-grained labeling could be used in subsequent work to accommodate different levels of sensitivity.", "We conduct 14-turn conversations with the human speaking first, and score the model based on the number of unsafe turns.", "Such data can be used as a held-out test set for static evaluation across different models, as well as for 
training data for improving models.", "More details of the collection setup are given in Appendix A. Figure 1 shows how BAD differs from the BBF set-up from Dinan et al. (2019a): namely, in BBF, the breaker (or adversarial user) tries to break a classifier by submitting human-authored adversarial unsafe messages that are incorrectly classified as safe, whereas in this work (BAD), the breaker adversarially tries to elicit unsafe messages from a dialogue model.", "We compare to classifiers trained on BBF and additionally evaluate our new classifiers on BBF in our experiments.", "Using the BAD method, we collect 5,784 dialogues between bots and crowdworkers, consisting of 78,874 utterances in total from both humans and bots (see Table 1).", "About 40% of all utterances are annotated as offensive, among which one third are generated by bots.", "To encourage the bot to use offensive language more often, humans tended either to use unsafe language themselves in the dialogues or to raise probing questions that are considered inappropriate to ask.", "More than 42% of the dialogues collected contain at least 3 unsafe human messages or probing questions (see Appendix, Table 6).", "We further break down the messages from humans into a taxonomy of offensive language types, as these may prove useful in future work.", "(Footnote 4: The emoji image in Figure 1 is by Twemoji (https://github.com/twitter/twemoji), and is licensed under CC BY-4.0.) The majority of offensive language used by crowdworkers relates to hate speech against particular groups, personal", "attacks and other less explicit offensive language containing no profanity, see Appendix Figure 5. Further details can be found in Appendix A. 
4.2 Applying to Conversational Agents We consider two different general strategies for making generative models safer to engage with: training classifiers for detecting unsafe messages as an added safety layer (Section 4.2.1) and training the model such that it is unlikely to surface unsafe content at inference time (Section 4.2.2).", "Given a safety classifier, a simple approach for improving dialogue safety is to use it to detect if both the user input and the model's response are safe.", "If a safety violation is detected in either type of utterance, one can then, instead, initiate a response designed to be safe.", "While several different safe response strategies can be considered (Curry and Rieser, 2019; Paranjape et al., 2020), in this work we respond with a non-sequitur: we select a topic at random from 1,087 topics judged as safe from the Wizard of Wikipedia conversational topic list (Dinan et al., 2019b) and then produce the response \"Hey do you want to talk about something else? How about we talk about X?\", where X is the chosen topic.", "Additional approaches are considered and analyzed in Appendix B.1.", "After returning this response, the conversation continues as normal, with the response entering into the model's conversational history.", "In this way, the model can still respond naturally to followup responses after the canned safe response is produced.", "We note that this approach works only as well as the classifier.", "If the classifier red-flags too many safe utterances, the conversational experience will suffer.", "If unsafe utterances are not flagged, toxic language can still enter the conversation.", "This highlights a potential trade-off between ensuring safety and having an engaging conversation.", "A separate safety classifier layer has advantages (e.g. 
any independent improvement of this classifier can be used), but also downsides.", "For example, such an open-sourced model is more complicated to share and deploy, requires more computational resources (e.g. loading both models), and allows unsafe usage if the layer is simply removed.", "Further, in the long term it makes sense for safety to be part of a single dialogue agent model, which should ideally understand when what it is saying is unsafe.", "Here, we detail two generative model training methods that yield models less likely to surface unsafe content without the use of an additional safety layer: data pre-processing and baking-in the safety layer, the latter of which is a new approach introduced in this work.", "Data Pre-processing A classic approach to training models on unclean data is to filter it beforehand.", "Assuming we have access to a safety classifier, we can use it to filter the training set.", "In this work, we perform filtering by removing an example from the training set if either the conversational context (input) or response (output) triggers the safety classifier.", "Other approaches such as author-based filtering are considered and evaluated in Appendix B.2.", "This training set is then used to train models as usual.", "With this approach, it is important for this filtering to be performed on the large pre-training dataset: if only the fine-tuning datasets are cleaned, the model will still have been exposed to offensive language, which it will be able to remember and use (as indeed confirmed by our experiments).", "Baking in the Safety Layer Data pre-processing methods attempt to make a model safe by simply not exposing it to offensive language.", "This can make those models vulnerable to adversarial usage because they will not have learned how to handle offensive language at all: our models frequently copy the input (Welleck et al., 2020), so they might copy the offensive language.", "We instead propose a technique for attempting to bake 
awareness of toxic language into the training data, by using labeled examples that recommend appropriate action on the model's part in those circumstances.", "To do this, we first assume we have access to a safety classifier at training time (but not at deployment time).", "For each training example, if the last utterance in the dialogue history or the ground-truth response is labeled as unsafe by the classifier, we instead replace the ground-truth response of that training example with a non-sequitur.", "An example demonstrating this procedure is shown in Table 2.", "After constructing baked-in safety data, one can then train the generative model using likelihood training as usual, but with these modified targets.", "(Figure 1: Comparison of our Bot-Adversarial Dialogue (BAD) set-up, this work (left), to the build-it, break-it, fix-it set-up for toxicity classifier robustness from Dinan et al. 
(2019a) (right).) We separate training examples that have been", "modified for safety from those that have not, and assign different weights to them, effectively drawing examples from those two sets with different probabilities and thereby controlling how much the model optimizes for safety versus usual conversational abilities.", "Balancing these weights is important, especially when dealing with highly toxic pre-training sets, as they may be dominated by modified examples.", "We choose this weighting as a hyperparameter of the model.", "We use human evaluations in both an adversarial and non-adversarial setting to evaluate how safe our generative models are.", "Since our ultimate goal is to create models that are safe while still being engaging to talk to, we also perform human evaluations of conversational quality.", "We use the ACUTE-Eval (Li et al., 2019) method of evaluating conversational quality, as used for BlenderBot", "(Roller et al., 2020) and elsewhere.", "The method involves collecting human-bot conversations for two models one wishes to compare, and then presenting two of those conversations at random, one from each model, to crowdworkers to make a series of pairwise decisions based on a target question.", "We use the question corresponding to evaluating engagingness from Li et al. (2019), which is phrased as \"Who would you prefer to talk to for a long conversation?\".", "We compare each of the proposed models in this paper to the baseline BST 2.7B, which Roller et al. 
(2020) showed is state-of-the-art in open-domain dialogue.", "We note that our goal is not to find models which are more engaging than our baseline, but to find models that are simultaneously equally engaging and safer.", "Evaluating Adversarial Safety To perform human evaluation of the safety of models in an adversarial setting, we create a test set consisting of 180 dialogues collected using the Bot-Adversarial Dialogue method.", "Recall that the BAD data is collected by having humans converse with a dialogue model and attempt to elicit unsafe responses.", "In order to prevent this test set from being biased towards adversarial examples that target a single model, we construct the test set by sampling conversations from a large suite of models.", "At evaluation time, we collect a model's responses to each of the 180 dialogues in the test set, and human evaluators are used to judge the safety of each response.", "This set-up allows us to evaluate models in an adversarial setting that mimics deployment but for which the examples remain fixed, eliminating variance based on the experience and quality of crowdworkers during conversation collection.", "Evaluating Non-Adversarial Safety While we need our models to perform well in an adversarial setting, we also wish for them to perform well in a non-adversarial setting.", "For example, a model that repeats user input verbatim may be robust to more subtle attempts to elicit offensive responses which are not offensive in and of themselves, but would not be robust to simpler attacks like profanity.", "For this reason, we propose a non-adversarial test set composed of 180 examples that are extracted from the Wikipedia Toxic Comments test set.", "We adopt the same human evaluation setup as in the adversarial setting, in which various models are evaluated for the given contexts.", "We detail experimental results in this section, including results of the data collection from the Bot-Adversarial Dialogue method (Section 5.1), 
experimental results related to training classifiers (Section 5.2), and a comparison of safe generation methods (Section 5.3).", "Lastly, in Section 5.4, we detail and compare the overall safety and engagingness scores for all models.", "We describe results from data collection using the Bot-Adversarial Dialogue method, providing a detailed analysis of the effects of the crowdsourcing methods.", "In order to inform crowdsourcing task design, we use logistic regression to model several task outcomes.", "Predictors include variables capturing the human chat partner's experience with the task and the particular bot they are currently talking to, and which of two possible versions of task instructions was received.", "Experience with the task is measured as the number of HITs accepted by the worker so far (a HIT, or Human Intelligence Task, is the term used by Amazon's Mechanical Turk to refer to a single instance of a crowdworker task).", "Experience with a specific bot is captured as the position of the utterance within the conversation (e.g., 2nd utterance in a 14-utterance conversation).", "The models underlying the bot responses were included as predictors and had a large significant effect (as discussed in the rest of the paper), but are omitted from the discussion here to focus on predictors related to task design.", "Modeling results shown in Table 3 suggest that (1) instructing workers to ask open questions about sensitive topics rather than using obvious profanities (New instruction set) has a significant effect, increasing the rate of unsafe bot utterances while simultaneously decreasing the rate of unsafe human utterances; (2) self-selection effects are present (see also Sec. 
A.4), so that the total number of HITs ultimately completed is predictive of higher success at eliciting not-OK content; (3) two types of learning effects are present: workers are more successful (i.e., are able to solicit more unsafe responses) as they perform more iterations of the task, and also within HITs, which might reflect that workers figure out the vulnerabilities of the particular bot they have been paired with and identify the most successful strategies.", "We note that the increased rate of unsafe utterances for later utterances observed here is in the context of an explicitly adversarial setting aiming to elicit them; we do not expect that this pattern would generalize to non-adversarial contexts.", "Automatic evaluation results are presented for safety classifiers in Table 4. We train safety classifiers using the methodology described in Sec. 4.2.1 and compare different model sizes and multitasking across different training sources.", "Firstly, we find our newly trained models superior to existing models from Dinan et al. (2019a) when using the same training sets, likely due to improved pushshift.io Reddit pre-training of our Transformers compared to their BERT models.", "However, we find relatively small gains from either larger Transformers (Safety Classifier+) over smaller ones (Safety), or from semi-supervised learning over Reddit and BST (Semi-Sup.+).", "We compare the classifier trained on the BAD dataset, multitasked with the other datasets, to other approaches in Table 4. 
We observe similar results Outcome: not OK utterances Bot, rater Bot, partner Human Base 3 .", "to our other new safety classifiers on the single-turn Wikipedia Toxic Comments (WTC), Build-It Break-It Fix (BBF) and Standard (S) test sets, but superior results on the multi-turn bot-adversarial BAD test set.", "The BAD-based classifier achieves 80.8 unsafe F1 on the latter dataset, while the next best performing methods achieve 61.5, 61.0 and 60.7, respectively.", "This result can be explained by virtue of the fact that the BAD-based classifier is the only one trained on the BAD training set, hence it sees data that most closely resembles the evaluation distribution.", "Note that the BAD training set differs from the other training sets listed as it is both", "(i) adversarially collected and", "(ii) multi-turn.", "One can tease apart the effects of each of these attributes by comparing to a single-turn (truncated) version of BAD training, shown in Table 4 (second to last row), which still performs well though not as well as the multi-turn version, indicating that the adversarial component is most important.", "As the BAD test set is the closest setup to the actual setting in which such a classifier might be deployed (it features human-bot conversations, rather than human-human single-turn data), this indicates the BAD-based classifier is the most likely method to be successful in real use cases.", "We compare the baked-in safety layer method of 4.2.2 to the data-preprocessing methods using", "400M parameter models, the details of which are described in Appendix B, and find that baked-in training gives increased safety over safe utterance preprocessing.", "On pushshift.io Reddit, the baked-in method triggers a classifier 0.2% vs. 
6.8% of the time for preprocessing.", "Both methods yield similar PPL and F1 scores.", "We thus experiment with scaling it up to a 2.7B parameter model.", "We perform human evaluations to compare the relative safety and engagingness for many of the selected methods.", "Results showing the engagingness performance relative to safety performance (for both adversarial and non-adversarial safety) using human judgments (Section 4.3) are shown in Figure 2.", "Automatic evaluations are provided in Appendix D. We compare the methods described in this paper (two-stage models and baked-in models) to three standard baselines: BST 2.7B, DialoGPT, and GPT2.", "BST 2.7B (Roller et al., 2020) has simply been trained on existing dialogue corpora, with no safety technique at all in model training.", "DialoGPT (Zhang et al., 2019) uses a pre-processing method, where offensive subreddits were removed from the training data.", "We test DialoGPT in two flavors: with short generations (using standard beam decoding), and longer generations (where we add a constraint that a minimum of 20 tokens must be generated, similar to Roller et al. (2020)).", "In all experiments we use the medium-sized version of DialoGPT, with 345M parameters, as noted in Section 3.2.", "Finally, GPT2 (Radford et al., 2019) was trained on web data that was filtered for data quality, but not for offensive language as far as we are aware.", "Engagingness scores from the ACUTE-eval set-up are plotted along the x-axis in Figure 2.", "Detailed results can be found in Table 9 in the Appendix.", "Results on standard models indicate that BST 2.7B is significantly more engaging than GPT2, DialoGPT and pushshift.io Reddit 2.7B.", "We apply the classifier learned from our Bot-Adversarial Dialogue (BAD) dataset (multi-tasked with our other datasets) in a two-stage model.", "Engagingness of this model is found to be not significantly distinguishable from our base BST 2.7B model.", "The baked-in model also performs similarly to the base BST 2.7B 
model with respect to engagingness, showing that this system still works well. (Table 4 columns: Model Name, Size, Training Data, WTC, S, BBF, BAD, Avg.)", "To perform human evaluation of safety in an adversarial setting, we evaluate models using the BAD evaluation method described in Section 4.3.", "Results can be seen on the y-axis of Figure 2 (left).", "More details are provided in Table 15 in the Appendix.", "Results show that all of our standard base models, including BST 2.7B, DialoGPT, and GPT2, are susceptible to attack, e.g. GPT2 produces safe responses only 59.4% of the time, and BST 2.7B only 55% of the time.", "Clearly, defending against BAD-style attacks requires alternative techniques.", "Our two-stage BAD classifier approach improves over our other safety classifiers used in two-stage systems, yielding a 94.4% OK rate on the adversarial data.", "Overall, this method offers strong robustness without affecting engagingness, and we advocate its use.", "For our baked-in model, we see clear gains relative to standard models (e.g. 
increasing from the baseline BST 2.7B value of 55% OK up to 78.3% OK), although these gains are not as significant as when using two-stage models (the same classifiers in a two-stage setup can bring the results up to 83.9% OK).", "We believe an important next step for future work is to improve this training technique to match the two-stage results.", "Human evaluation of safety in a non-adversarial setting is conducted using the Wiki Toxic Comments test set described in Section 4.3.", "Results can be seen on the y-axis of Figure 2 (right).", "More details are provided in Table 16 in the Appendix.", "Standard models appear susceptible to attack.", "In the best case, DialoGPT produces safe responses only 68.3% of the time.", "GPT2 performs the worst, providing safe responses 54.4% of the time.", "Our two-stage models get near-perfect scores here (scores range from 97.8 to 98.3), showing that these models are very robust to attack in the non-adversarial setting.", "This shows that future effort to make these models safe should focus on the adversarial setting, as in BAD.", "The baked-in model performs the best in this setting, achieving very high scores.", "We conclude this technique should be further explored, particularly for robustness in the adversarial setting.", "We observe that standard generative models with little or no safety intervention fall very short in terms of safety, especially when measured using our Bot-Adversarial Dialogue (BAD) framework, which we publicly release along with our models.", "However, with our safety techniques we can achieve roughly the same engagingness as the state-of-the-art BST 2.7B with substantially better safety scores, showing it is possible to build a model that is both safe and engaging.", "We find generative models can be improved considerably by distilling a safety classifier into the encoder-decoder during training, i.e. 
the baked-in approach.", "Two-stage models provide safer results still, with best performance coming from our BAD-based classifier with BST 2.7B in the adversarial case.", "We note that while we have improved substantially over existing systems, our best systems are not perfectly safe as measured by the BAD method.", "(Figure 2: Engagingness vs. Safety. Comparing engagingness scores from ACUTE-eval to adversarial safety scores on the Bot-Adversarial Dialogue (BAD) test set (left) and non-adversarial safety scores on the Wiki Toxic Comments test set (right).) Conducting perfectly safe dialogue requires the", "model to deeply understand language and likely cannot be completely solved until AI itself is solved.", "Further complicating the issue is the fact that the very definition of \"safe\" is both contextually and culturally dependent (Schmidt and Wiegand, 2017). Rather than attempt to define safety for all languages and locales, in this work we rely on crowdworker consensus and focus on machine learning methods for English-language data. We look forward to further progress in these technical and ethical challenges. 7 Ethical Considerations In this paper, we have presented several methods for building safer conversational agents. As we noted in the conclusion, even our best systems are not perfectly safe. This raises several ethical considerations, including questions of: when can a model be considered \"safe\"?", "Is a failure rate of", "5.6% in an adversarial setting acceptable for the deployment of such models?", "How safe is safe enough?", "Creating a perfectly safe dialogue model requires the model to deeply understand language, a problem that likely cannot be completely solved until AI itself is solved, i.e. 
this is an AI-complete problem.", "We also reiterate that the issue is further complicated by the fact that the very definition of safe\" is both contextually and culturally dependent (Schmidt and Wiegand, 2017).", "A dialogue model must be able to understand the boundaries of its particular conversation partner.", "What is offensive to one may not be offensive to another (Curry and Rieser, 2019).", "Culturally speaking, the approaches in this paper are limited in both geographical and historical senses.", "Our methods rely only on English-speaking annotators located in the United States.", "This narrow, Western-centric viewpoint will be insufficient for solving the issue in other languages and locales (Schmidt and Wiegand, 2017).", "Further, it is well known that commonly used hate-speech datasets are known to have issues with bias and fairness (Dixon et al., 2018).", "Sap et al. (2019) showed that several contain correlations between surface markers of African American English and toxicity, and propose race and dialect priming as a way to mitigate this.", "In this work we have assumed a consensus-based view on offensiveness, by admitting examples based on agreement of multiple humans;however, offense to underrepresented groups for example may be missed by such a setup.", "We encourage further work to consider how classifiers trained on the datasets described in this work may be biased against various demographic groups.", "Lastly, our work analyzes publicly available open-sourced models.", "We note that there may be concerns in the community or the public at large related to releasing models, even for research purposes, due to their potential safety issues.", "The community has recently started to address this tradeoff between releasing models that can produce offensive or toxic language and open, reproducible research 5 .", "We believe the solution for these issues involves the community working together and conducting reproducible research on safety.", 
"Releasing code and models facilitates that joint community effort." ]
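The two-stage setup discussed in the excerpt above (generate a candidate reply, then veto it with a safety classifier and redirect the conversation) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: `generate_reply`, `is_unsafe`, `UNSAFE_WORDS`, and `CANNED_RESPONSE` are hypothetical stand-ins for the BST 2.7B generator, the trained BAD safety classifier, and its outputs.

```python
# Stand-in canned response used when the safety check fires (the paper's
# two-stage models similarly return a non-sequitur / topic change).
CANNED_RESPONSE = "Hey, do you want to talk about something else?"

# Toy keyword lexicon standing in for a trained safety classifier.
UNSAFE_WORDS = {"idiot", "hate"}

def is_unsafe(text: str) -> bool:
    """Stage 2: stand-in safety classifier over a message or candidate reply."""
    return any(word in text.lower().split() for word in UNSAFE_WORDS)

def generate_reply(user_message: str) -> str:
    """Stage 1: stand-in generative model (echo-style stub)."""
    return f"You said: {user_message}"

def safe_reply(user_message: str) -> str:
    """Two-stage pipeline: generate, then veto and redirect if flagged."""
    candidate = generate_reply(user_message)
    if is_unsafe(user_message) or is_unsafe(candidate):
        return CANNED_RESPONSE
    return candidate
```

The baked-in alternative described in the excerpt instead distills the classifier's decisions into the generator's training data, so no second stage is needed at inference time.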
[ "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "other", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", 
"abstain", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "abstain", "method", "abstain" ]
[ "Event extraction (EE) has considerably benefited from pre-trained language models (PLMs) by fine-tuning.", "However, existing pre-training methods have not involved modeling event characteristics, so the resulting EE models cannot take full advantage of large-scale unsupervised data.", "To this end, we propose CLEVE, a contrastive pre-training framework for EE to better learn event knowledge from large unsupervised data and their semantic structures (e.g., AMR) obtained with automatic parsers.", "CLEVE contains a text encoder to learn event semantics and a graph encoder to learn event structures, respectively.", "Specifically, the text encoder learns event semantic representations by self-supervised contrastive learning, representing words from the same event closer together than unrelated words; the graph encoder learns event structure representations by graph contrastive pre-training on parsed event-related semantic structures.", "The two complementary representations then work together to improve both the conventional supervised EE and the unsupervised liberal EE, which requires jointly extracting events and discovering event schemata without any annotated data.", "Experiments on the ACE 2005 and MAVEN datasets show that CLEVE achieves significant improvements, especially in the challenging unsupervised setting.", "The source code and pre-trained checkpoints can be obtained from https://github.com/THU-KEG/CLEVE .", "Event extraction (EE) is a long-standing, crucial information extraction task, which aims at extracting event structures from unstructured text.", "[Figure 1 example sentence: CNN's Kelly Wallace reports on today's attack in Netanya.]", "As illustrated in Figure 1, it comprises an event detection task to identify event triggers (the word attack) and classify 
their argument roles ( Time-within and Place ) (Ahn, 2006).", "By explicitly capturing the event structure in the text, EE can benefit various downstream tasks such as information retrieval (Glavas and Snajder, 2014) and knowledge base population (Ji and Grishman, 2011).", "Existing EE methods mainly follow the supervised-learning paradigm to train advanced neural networks (Chen et al., 2015; Nguyen et al., 2016; Nguyen and Grishman, 2018) with human-annotated datasets and pre-defined event schemata.", "These methods work well on many public benchmarks such as ACE 2005 (Walker et al., 2006) and TAC KBP (Ellis et al., 2016), yet they still suffer from data scarcity and limited generalizability.", "Since annotating event data and defining event schemata are especially expensive and labor-intensive, existing EE datasets typically only contain thousands of instances and cover limited event types.", "Thus they are inadequate to train large neural models (Wang et al., 2020) and develop methods that can generalize to continually-emerging new event types (Huang and Ji, 2020).", "Inspired by the success of recent pre-trained language models (PLMs) for NLP tasks, some pioneering work (Wang et al., 2019a; Wadden et al., 2019) attempts to fine-tune general PLMs (e.g., BERT (Devlin et al., 2019)) for EE.", "Benefiting from the strong general language understanding ability learnt from large-scale unsupervised data, these PLM-based methods have achieved state-of-the-art performance in various public benchmarks.", "Although leveraging unsupervised data with pre-training has gradually become a consensus in the EE and NLP communities, there is still no pre-training method oriented toward event modeling that takes full advantage of the rich event knowledge lying in large-scale unsupervised data.", "The key challenge here is to find reasonable self-supervised signals (Chen et al., 2017; Wang et al., 2019a) for the diverse semantics and complex structures of events.", "Fortunately, previous work 
(Aguilar et al., 2014; Huang et al., 2016) has suggested that sentence semantic structures, such as abstract meaning representation (AMR) (Banarescu et al., 2013), contain broad and diverse semantic and structure information relating to events.", "As shown in Figure 1, the parsed AMR structure covers not only the annotated event ( Attack ) but also the event that is not defined in the ACE 2005 schema ( Report ).", "Considering the fact that the AMR structures of large-scale unsupervised data can be easily obtained with automatic parsers (Wang et al., 2015), we propose CLEVE, an event-oriented contrastive pre-training framework utilizing AMR structures to build self-supervision signals.", "CLEVE consists of two components, including a text encoder to learn event semantics and a graph encoder to learn event structure information.", "Specifically, to learn effective event semantic representations, we employ a PLM as the text encoder and encourage the representations of the word pairs connected by the ARG , time , location edges in AMR structures to be closer in the semantic space than other unrelated words, since these pairs usually refer to the trigger-argument pairs of the same events (as shown in Figure 1) (Huang et al., 2016).", "This is done by contrastive learning with the connected word pairs as positive samples and unrelated words as negative samples.", "Moreover, considering event structures are also helpful in extracting events (Lai et al., 2020) and generalizing to new event schemata (Huang et al., 2018), we need to learn transferable event structure representations.", "Hence we further introduce a graph neural network (GNN) as the graph encoder to encode AMR structures as structure representations.", "The graph encoder is contrastively pre-trained on the parsed AMR structures of large unsupervised corpora with AMR subgraph discrimination as the objective.", "By fine-tuning the two pre-trained models on downstream EE datasets and jointly using the two 
representations, CLEVE can benefit the conventional supervised EE suffering from data scarcity.", "Meanwhile, the pre-trained representations can also directly help extract events and discover new event schemata without any known event schema or annotated instances, leading to better generalizability.", "This is a challenging unsupervised setting named liberal event extraction (Huang et al., 2016).", "Experiments on the widely-used ACE 2005 and the large MAVEN datasets indicate that CLEVE can achieve significant improvements in both settings.", "Event Extraction.", "Most of the existing EE works follow the supervised learning paradigm.", "Traditional EE methods (Ji and Grishman, 2008; Gupta and Ji, 2009; Li et al., 2013) rely on manually-crafted features to extract events.", "In recent years, the neural models become mainstream, which automatically learn effective features with neural networks, including convolutional neural networks (Nguyen and Grishman, 2015; Chen et al., 2015), recurrent neural networks (Nguyen et al., 2016), graph convolutional networks (Nguyen and Grishman, 2018; Lai et al., 2020).", "With the recent successes of BERT (Devlin et al., 2019), PLMs have also been used for EE (Wang et al., 2019a,b; Yang et al., 2019; Wadden et al., 2019; Tong et al., 2020).", "Although achieving remarkable performance in benchmarks such as ACE 2005 (Walker et al., 2006) and similar datasets (Ellis et al., 2015, 2016; Getman et al., 2017; Wang et al., 2020), these PLM-based works solely focus on better fine-tuning rather than pre-training for EE.", "In this paper, we study pre-training to better utilize rich event knowledge in large-scale unsupervised data.", "Event Schema Induction.", "Supervised EE models cannot generalize to continually-emerging new event types and argument roles.", "To this end, Chambers and Jurafsky (2011) explore to induce event schemata from raw text by unsupervised clustering.", "Following works introduce more features like coreference 
chains (Chambers, 2013) and entities (Nguyen et al., 2015; Sha et al., 2016).", "[Figure 2 elements: parsed AMR graph of the example sentence \"CNN's Kelly Wallace reports on today's attack in Netanya.\" (report-01 and attack-01 nodes with ARG0, ARG1 and time edges), trigger-argument pair discrimination, unsupervised corpora.]", "Recently, Huang and Ji (2020) move to the semi-supervised setting, allowing the use of annotated data of known types.", "Following Huang et al. (2016), we evaluate the generalizability of CLEVE in the most challenging unsupervised liberal setting, which requires inducing event schemata and extracting event instances from raw text at the same time.", "Contrastive Learning.", "Contrastive learning was initiated by Hadsell et al. (2006), following an intuitive motivation to learn similar representations for neighbors and distinct representations for non-neighbors, and is now widely used for self-supervised representation learning in various domains, such as computer vision (Wu et al., 2018; Oord et al., 2018; Hjelm et al., 2019; Chen et al., 2020; He et al., 2020) and graphs (Qiu et al., 2020; You et al., 2020; Zhu et al., 2020).", "In the context of NLP, many established representation learning works can be viewed as contrastive learning methods, such as Word2Vec (Mikolov et al., 2013), BERT (Devlin et al., 2019; Kong et al., 2020) and ELECTRA (Clark et al., 2020).", "Similar to this work, contrastive learning is also widely used to help specific tasks, including question answering (Yeh and Chen, 2019), discourse modeling (Iter et al., 2020), natural language inference (Cui et al., 2020) and relation extraction (Peng et al., 2020).", "The overall CLEVE framework is illustrated in Figure 2.", "As shown in the illustration, our contrastive pre-training framework CLEVE consists of two components: event semantic pre-training and event structure pre-training, whose details are introduced in Section 3.2 and Section 3.3, respectively.", "At the beginning of this section, we first 
introduce the required preprocessing in Section 3.1, including the AMR parsing and how we modify the parsed AMR structures for our pre-training.", "CLEVE relies on AMR structures (Banarescu et al., 2013) to build broad and diverse self-supervision signals for learning event knowledge from large-scale unsupervised corpora.", "To do this, we use automatic AMR parsers (Wang et al., 2015; Xu et al., 2020) to parse the sentences in unsupervised corpora into AMR structures.", "Each AMR structure is a directed acyclic graph with concepts as nodes and semantic relations as edges.", "Moreover, each node typically corresponds to at most one word, and a multi-word entity will be represented as a list of nodes connected with name and op (conjunction operator) edges.", "Since pre-training entity representations naturally benefits event argument extraction, we merge these lists into single nodes representing multi-word entities (like the CNN's Kelly Wallace in Figure 1) during both event semantic and structure pre-training.", "Formally, given a sentence s in unsupervised corpora, we obtain its AMR graph g_s = (V_s, E_s) after AMR parsing, where V_s is the node set after word merging and E_s denotes the edge set.", "E_s = { (u, v, r) | (u, v) ∈ V_s × V_s, r ∈ R }, where R is the set of defined semantic relation types.", "To model diverse event semantics in large unsupervised corpora and learn contextualized event semantic representations, we adopt a PLM as the text encoder and train it with the objective of discriminating various trigger-argument pairs.", "Like most PLMs, we adopt a multi-layer Transformer (Vaswani et al., 2017) as the text encoder, given its strong representation capacity.", "Given a sentence s = { w 1 , w 2 , . . . 
, w n } containing n tokens, we feed it into the multi-layer Transformer and use the last layer's hidden vectors as token representations.", "Moreover, a node v ∈ V_s may correspond to a multi-token text span in s, and we need a unified representation for the node in pre-training.", "As suggested by Baldini Soares et al. (2019), we insert two special markers [E1] and [/E1] at the beginning and end of the span, respectively.", "Then we use the hidden vector for [E1] as the span representation x_v of the node v.", "We use different marker pairs for different nodes.", "As our event semantic pre-training focuses on modeling event semantics, we start our pre-training from a well-trained general PLM to obtain general language understanding abilities.", "CLEVE is agnostic to the model architecture and can use any general PLM, like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).", "We design trigger-argument pair discrimination as our contrastive pre-training task for event semantic pre-training.", "The basic idea is to learn closer representations for the words in the same events than for unrelated words.", "We note that the words connected by ARG, time and location edges in AMR structures are quite similar to the trigger-argument pairs in events (Huang et al., 2016, 2018), i.e., the key words evoking events and the entities participating in events.", "For example, in Figure 1, Netanya is an argument for the attack event, while the disconnected CNN's Kelly Wallace is not.", "With this observation, we can use these special word pairs as positive trigger-argument samples and train the text encoder to discriminate them from negative samples, so that the encoder can learn to model event semantics without human annotation.", "Let R_p = { ARG, time, location } and let P_s = { (u, v) | (u, v, r) ∈ E_s, r ∈ R_p } denote the set of positive trigger-argument pairs in sentence s.", "For a specific positive pair (t, a) ∈ P_s, as shown in Figure 2, we construct its 
corresponding negative samples with trigger replacement and argument replacement.", "Specifically, in trigger replacement, we construct m_t negative pairs by randomly sampling m_t negative triggers t′ ∈ V_s and combining them with the positive argument a.", "A negative trigger t′ must not have a directed ARG, time or location edge to a, i.e., there is no (t′, a, r) ∈ E_s with r ∈ R_p.", "Similarly, we construct m_a more negative pairs by randomly sampling m_a negative arguments a′ ∈ V_s satisfying that there is no (t, a′, r) ∈ E_s with r ∈ R_p.", "As in the example in Figure 2, (attack, reports) is a valid negative sample for the positive sample (attack, Netanya), but (attack, today's) is not valid since there is an (attack, today's, time) edge.", "To learn to discriminate the positive trigger-argument pair from the negative pairs, and thereby model event semantics, we define the training objective for a positive pair (t, a) as a cross-entropy loss of classifying the positive pair correctly: L_(t,a) = −x_t^⊤ W x_a + log( exp(x_t^⊤ W x_a) + Σ_{i=1}^{m_t} exp(x_{t′_i}^⊤ W x_a) + Σ_{j=1}^{m_a} exp(x_t^⊤ W x_{a′_j}) ), (1) where m_t and m_a are hyper-parameters for negative sampling, and W is a trainable matrix learning the similarity metric.", "We adopt the cross-entropy loss here since it is more effective than other contrastive loss forms (Oord et al., 2018; Chen et al., 2020).", "Then we obtain the overall training objective for event semantic pre-training by summing up the losses of all the positive pairs of all sentences s in the mini-batch B_s: L_sem(θ) = Σ_{s ∈ B_s} Σ_{(t,a) ∈ P_s} L_(t,a), (2) where θ denotes the trainable parameters, including the text encoder and W.", "Previous work has shown that event-related structures are helpful in extracting new events (Lai et al., 2020) as well as discovering and generalizing to new event schemata (Huang et al., 
2016, 2018; Huang and Ji, 2020).", "Hence we conduct event structure pre-training on a GNN as the graph encoder to learn transferable event-related structure representations, following recent advances in graph contrastive pre-training (Qiu et al., 2020; You et al., 2020; Zhu et al., 2020).", "In CLEVE, we utilize a GNN to encode the AMR (sub)graph to extract the event structure information of the text.", "Given a graph g, the graph encoder represents it with a graph embedding g = G(g, {x_v}), where G(·) is the graph encoder and {x_v} denotes the initial node representations fed into the graph encoder.", "CLEVE is agnostic to the specific model architecture of the graph encoder.", "Here we use a state-of-the-art GNN model, the Graph Isomorphism Network (Xu et al., 2019), as our graph encoder for its strong representation ability.", "We use the corresponding text span representations {x_v} produced by our pre-trained text encoder (introduced in Section 3.2) as the initial node representations for both pre-training and inference of the graph encoder.", "This node initialization also implicitly aligns the semantic spaces of the event semantic and structure representations in CLEVE, which helps them cooperate better.", "To learn transferable event structure representations, we design the AMR subgraph discrimination task for event structure pre-training.", "The basic idea is to learn similar representations for the subgraphs sampled from the same AMR graph by discriminating them from subgraphs sampled from other AMR graphs (Qiu et al., 2020).", "Given a batch of m AMR graphs { g 1 , g 2 , . . . 
, g m }, where each graph corresponds to a sentence in unsupervised corpora.", "For the i-th graph g_i, we randomly sample two subgraphs from it to get a positive pair a_{2i−1} and a_{2i}.", "All the subgraphs sampled from the other AMR graphs in the mini-batch serve as negative samples.", "As in Figure 2, the two green (w/ attack) subgraphs are a positive pair, while the other two subgraphs sampled from the purple (w/ soldier) graph are negative samples.", "Here we use the subgraph sampling strategy introduced by Qiu et al. (2020), whose details are shown in Appendix C. Similar to event semantic pre-training, we adopt the graph encoder to represent the samples a_i = G(a_i, {x_v}) and define the training objective as: L_str(φ) = −Σ_{i=1}^{m} log [ exp(a_{2i−1}^⊤ a_{2i}) / Σ_{j=1}^{2m} 1_[j ≠ 2i−1] exp(a_{2i−1}^⊤ a_j) ], (3) where 1_[j ≠ 2i−1] ∈ {0, 1} is an indicator function evaluating to 1 iff j ≠ 2i−1, and φ denotes the trainable parameters of the graph encoder.", "Before the detailed experiments, we introduce CLEVE's pre-training setup.", "We adopt the New York Times Corpus (NYT; https://catalog.ldc.upenn.edu/LDC2008T19) (Sandhaus, 2008) as the unsupervised pre-training corpus for CLEVE.", "It contains over 1.8 million articles written and published by the New York Times between January 1, 1987, and June 19, 2007.", "We only use its raw text and obtain the AMR structures with a state-of-the-art AMR parser (Xu et al., 2020).", "We choose the NYT corpus because (1) it is large and diverse, covering a wide range of event semantics, and (2) its text domain is similar to our principal evaluation dataset ACE 2005, which is helpful (Gururangan et al., 2020).", "To prevent data leakage, we remove all the articles that show up in ACE 2005 from the NYT corpus during pre-training.", "Moreover, we also study the effect of different AMR parsers and pre-training corpora in Section 5.2 and Section 5.3, respectively.", "For the text 
encoder, we use the same model architecture as RoBERTa (Liu et al., 2019), which has 24 layers, 1024 hidden dimensions and 16 attention heads, and we start our event semantic pre-training from the released checkpoint (https://github.com/pytorch/fairseq).", "For the graph encoder, we adopt a graph isomorphism network (Xu et al., 2019) with 5 layers and 64 hidden dimensions, and pre-train it from scratch.", "For the detailed hyperparameters for pre-training and fine-tuning, please refer to Appendix D.", "4.2 Adaptation of CLEVE", "As our work focuses on pre-training rather than fine-tuning for EE, we use straightforward and common techniques to adapt pre-trained CLEVE to downstream EE tasks.", "In the supervised setting, we adopt the dynamic multi-pooling mechanism (Chen et al., 2015; Wang et al., 2019a,b) for the text encoder and encode the corresponding local subgraphs with the graph encoder.", "Then we concatenate the two representations as features and fine-tune CLEVE on supervised datasets.", "In the unsupervised liberal setting, we follow the overall pipeline of Huang et al. (2016) and directly use the representations produced by pre-trained CLEVE as the required trigger/argument semantic representations and event structure representations.", "For the details, please refer to Appendix A. 
4.3 Supervised EE", "Dataset and Evaluation", "We evaluate our models on the most widely-used ACE 2005 English subset (Walker et al., 2006) and the newly-constructed large-scale MAVEN (Wang et al., 2020) dataset.", "ACE 2005 contains 599 English documents, which are annotated with 8 event types, 33 subtypes, and 35 argument roles.", "MAVEN contains 4,480 documents and 168 event types, and can only be used to evaluate event detection.", "We split ACE 2005 following previous EE work (Liao and Grishman, 2010; Li et al., 2013; Chen et al., 2015) and use the official split for MAVEN.", "EE performance is evaluated via two subtasks: Event Detection (ED) and Event Argument Extraction (EAE).", "We report the precision (P), recall (R) and F1 scores as evaluation results, among which F1 is the most comprehensive metric.", "Baselines", "We fine-tune our pre-trained CLEVE and set the original RoBERTa without our event semantic pre-training as an important baseline.", "For ablation studies, we evaluate two variants of CLEVE on both datasets: the w/o semantic model adopts a vanilla RoBERTa without event semantic pre-training as the text encoder, and the w/o structure variant only uses the event semantic representations.", "On ACE 2005, we set two more variants to investigate the effectiveness of CLEVE.", "The on ACE (golden) model is pre-trained with the golden trigger-argument pairs and event structures of the ACE 2005 training set instead of the AMR structures of NYT.", "Similarly, the on ACE (AMR) model is pre-trained with the parsed AMR structures of the ACE 2005 training set.", "We also compare CLEVE with various baselines, including: (1) a feature-based method, the top-performing JointBeam (Li et al., 2013); (2) a vanilla neural model, DMCNN (Chen et al., 2015); (3) a model incorporating syntactic knowledge, dbRNN (Sha et al., 2018); (4) state-of-the-art models on ED and EAE, respectively: GatedGCN (Lai et al., 2020) and SemSynGTN (Pouran Ben Veyseh 
et al., 2020); (5) a state-of-the-art EE model, RCEE ER (Liu et al., 2020), which tackles EE with machine reading comprehension (MRC) techniques.", "The last four models adopt PLMs to learn representations.", "On MAVEN, we compare CLEVE with the official ED baselines set by Wang et al. (2020), including DMCNN (Chen et al., 2015), BiLSTM (Hochreiter and Schmidhuber, 1997), BiLSTM+CRF, MOGANED (Yan et al., 2019), DMBERT (Wang et al., 2019a), and BERT+CRF.", "The evaluation results are shown in Table 1 and Table 2.", "We can observe that: (1) CLEVE achieves significant improvements over its base model RoBERTa on both ACE 2005 and MAVEN.", "The p-values under the t-test are 4×10^-8, 2×10^-8 and 6×10^-4 for ED on ACE 2005, EAE on ACE 2005, and ED on MAVEN, respectively.", "It also outperforms or achieves comparable results with all the baselines, including those using dependency parsing information (dbRNN, GatedGCN, SemSynGTN and MOGANED).", "This demonstrates the effectiveness of our proposed contrastive pre-training method and the AMR semantic structure.", "It is noteworthy that RCEE ER outperforms our method in EAE owing to the special advantages brought by reformulating EE as an MRC task, which allows it to utilize sophisticated MRC methods and large annotated external MRC data.", "Considering that our method is essentially a pre-training method learning better event-oriented representations, CLEVE and RCEE ER can naturally work together to improve EE further.", "(2) The ablation studies (comparisons between CLEVE and its w/o semantic or w/o structure variants) indicate that both event semantic pre-training and event structure pre-training are essential to our method.", "(3) From the comparisons between CLEVE and its variants on ACE (golden) and ACE (AMR), we can see that AMR parsing inevitably brings data noise compared to golden annotations, which results in a performance drop.", "However, this gap can be easily made up by 
the benefits of introducing large unsupervised data with pre-training.", "In the unsupervised setting, we evaluate CLEVE on ACE 2005 and MAVEN with both objective automatic metrics and human evaluation.", "For the automatic evaluation, we adopt the extrinsic clustering evaluation metrics: the B-Cubed Metrics (Bagga and Baldwin, 1998), including B-Cubed precision, recall and F1.", "The B-Cubed metrics evaluate the quality of clustering results by comparing them to gold-standard annotations and have been shown to be effective (Amigo et al., 2009).", "For the human evaluation, we invite an expert to check the outputs of the models to evaluate whether the extracted events are complete and correctly clustered, as well as whether all the events in the text are discovered.", "Baselines", "We compare CLEVE with reproduced LiberalEE (Huang et al., 2016), RoBERTa and RoBERTa+VGAE.", "RoBERTa here adopts the original RoBERTa (Liu et al., 2019) without event semantic pre-training to produce semantic representations for trigger and argument candidates in the same way as CLEVE, and encodes the whole sentences, using the sentence embeddings (embeddings of the starting token <s>) as the needed event structure representations.", "RoBERTa+VGAE additionally adopts an unsupervised model, the Variational Graph Auto-Encoder (VGAE) (Kipf and Welling, 2016), to encode the AMR structures as event structure representations.", "RoBERTa+VGAE shares a similar model architecture with CLEVE but is without our pre-training.", "In particular, for fair comparisons with LiberalEE, all the models in the unsupervised experiments adopt the same CAMR (Wang et al., 2015) as the AMR parser, including for CLEVE pre-training.", "Moreover, we also study CLEVE variants as in the supervised setting.", "The w/o semantic variant replaces the CLEVE text encoder with a RoBERTa without event semantic pre-training.", "The w/o structure variant only uses the CLEVE text encoder in a similar way as RoBERTa 
.", "The on ACE (AMR) model is pre-trained with the parsed AMR structures of the ACE test set.", "As shown in Huang et al. (2016), AMR parsing is significantly superior to dependency parsing and frame semantic parsing on the unsupervised liberal event extraction task, hence we do not include baselines using other sentence structures in the experiments.", "The automatic evaluation results are shown in Table 3 and Table 4.", "As the human evaluation is laborious and expensive, we only conduct human evaluations for CLEVE and the most competitive baseline LiberalEE on ACE 2005, and the results are shown in Table 5.", "We can observe that: (1) CLEVE significantly outperforms all the baselines, which shows its superiority in both extracting event instances and discovering event schemata.", "(2) RoBERTa ignores the structure information.", "Although RoBERTa+VGAE encodes event structures with VGAE, the semantic representations of RoBERTa and the structure representations of VGAE are distinct and thus cannot work together well.", "Hence the two models even underperform LiberalEE, while the two representations of CLEVE can collaborate well to improve liberal EE.", "(3) In the ablation studies, discarding event structure pre-training results in a much more significant performance drop than in the supervised setting, which indicates that event structures are essential to discovering new event schemata.", "In this section, we study how the benefits of pre-training change with the available supervised data size.", "We compare the ED performance on MAVEN of CLEVE, RoBERTa and a non-pre-training model, BiLSTM+CRF, when trained on different proportions of randomly-sampled MAVEN training data in Figure 3.", "We can see that the improvements of CLEVE compared to RoBERTa, and of the pre-training models compared to the non-pre-training model, are generally larger when less supervised data is available.", "This indicates that CLEVE is especially helpful for low-resource EE tasks, which are common given the expense of event annotation.", "CLEVE relies on automatic AMR parsers to build self-supervision signals for large unsupervised data.", "Intuitively, the performance of AMR parsers will influence CLEVE's performance.", "To analyze the effect of AMR parsing performance, we compare the supervised EE results of CLEVE models using the established CAMR (Wang et al., 2016) and a new state-of-the-art parser (Xu et al., 2020) during pre-training in Table 6.", "We can see that a better AMR parser intuitively brings better EE performance, but the improvements are not as significant as the corresponding AMR performance improvement, which indicates that CLEVE is generally robust to the errors in AMR parsing.", "Pre-training on similar text domains may further improve performance on corresponding downstream tasks (Gururangan et al., 2020; Gu et al., 2020).", "To analyze this effect, we evaluate the supervised EE performance of CLEVE pre-trained on NYT and English Wikipedia in Table 7.", "
We can see pre-training on a similar domain (NYT for ACE 2005, Wikipedia for MAVEN) surely benefits CLEVE on corresponding datasets.", "On ACE 2005, although Wikipedia is 2 .", "28 times as large as NYT, CLEVE pre-trained on it underperforms CLEVE pre-trained on NYT (both in the news do-main).", "Moreover, we can see the in-domain benefits mainly come from the event semantics rather than structures in CLEVE framework (from the comparisons between the w/o semantic and w/o structure results).", "It suggests that we can develop domain adaptation techniques focusing on semantics for CLEVE, and we leave it to future work.", "In this paper, we propose CLEVE, a contrastive pre-training framework for event extraction to utilize the rich event knowledge lying in large unsupervised data.", "Experiments on two real-world datasets show that CLEVE can achieve significant improvements in both supervised and unsupervised liberal settings.", "In the future, we will (1) explore other kinds of semantic structures like the frame semantics and (2) attempt to overcome the noise in unsupervised data brought by the semantic parsers.", "This work is supported by the National Natural Science Foundation of China Key Project (NSFC No. 
U1736204), grants from Beijing Academy of Artificial Intelligence (BAAI2019ZD0502) and the Institute for Guo Qiang, Tsinghua University (2019GQB0003).", "This work is also supported by the Pattern Recognition Center, WeChat AI, Tencent Inc.", "We thank Lifu Huang for his help on the unsupervised experiments and the anonymous reviewers for their insightful comments.", "We discuss the ethical considerations and broader impact of the proposed CLEVE method in this section: (1) Intellectual property.", "The NYT and ACE 2005 datasets are obtained from the Linguistic Data Consortium (LDC), and both are licensed to be used for research.", "MAVEN is publicly shared under the CC BY-SA 4.0 license (https://creativecommons.org/licenses/by-sa/4.0/).", "The Wikipedia corpus is obtained from the Wikimedia dump (https://dumps.wikimedia.org/), which is shared under the CC BY-SA 3.0 license.", "The invited expert is fairly paid according to agreed working hours.", "(2) Intended use.", "CLEVE improves event extraction in both supervised and unsupervised settings, i.e., it better extracts structured events from diverse raw text.", "The extracted events then help people access information conveniently and can be used to build a wide range of application systems such as information retrieval (Glavaš and Šnajder, 2014) and knowledge base population (Ji and Grishman, 2011).", "As extracting events is fundamental to various applications, failure cases and potential bias in EE methods also have a significant negative impact.", "We encourage the community to put more effort into analyzing and mitigating the bias in EE systems.", "Since CLEVE does not model people's characteristics, we believe it will not introduce significant additional bias.", "(3) Misuse risk.", "Although all the datasets used in this paper are public and licensed, there is a risk that CLEVE could be applied to private data without authorization for profit.", "We encourage regulators to make efforts to mitigate this risk.", "(4) Energy and carbon costs.", "To estimate the energy and carbon costs, we present the computing platform and running time of our experiments in Appendix E for reference.", "We will also release the pre-trained checkpoints to spare potential users additional carbon costs.", "We encourage users to try model compression techniques such as distillation and quantization in deployment to reduce carbon costs." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "method", "other", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", 
"abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method" ]
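The CLEVE evaluation above scores unsupervised event clusters with the B-Cubed metrics (Bagga and Baldwin, 1998). As a minimal illustration of how these extrinsic clustering metrics behave, the sketch below computes B-Cubed precision, recall and F1 for toy cluster assignments; the item names and clusterings are invented for the example, not taken from the paper.

```python
def b_cubed(pred, gold):
    """B-Cubed precision/recall/F1 for a clustering.

    pred, gold: dicts mapping each item to its predicted / gold cluster id.
    For every item, precision is the fraction of items sharing its predicted
    cluster that also share its gold cluster; recall is the converse.
    Both are averaged over all items.
    """
    items = list(gold)
    pred_clusters, gold_clusters = {}, {}
    for it in items:
        pred_clusters.setdefault(pred[it], set()).add(it)
        gold_clusters.setdefault(gold[it], set()).add(it)
    p_sum = r_sum = 0.0
    for it in items:
        p_cluster = pred_clusters[pred[it]]
        g_cluster = gold_clusters[gold[it]]
        overlap = len(p_cluster & g_cluster)
        p_sum += overlap / len(p_cluster)
        r_sum += overlap / len(g_cluster)
    p = p_sum / len(items)
    r = r_sum / len(items)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

For a perfect clustering all three scores are 1.0; merging everything into one cluster keeps recall at 1.0 while precision drops, which is exactly the trade-off the metric is designed to expose.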
[ "Sequence-to-sequence models have led to significant progress in keyphrase generation, but it remains unknown whether they are reliable enough to be beneficial for document retrieval.", "This study provides empirical evidence that such models can significantly improve retrieval performance, and introduces a new extrinsic evaluation framework that allows for a better understanding of the limitations of keyphrase generation models.", "Using this framework, we point out and discuss the difficulties encountered with supplementing documents with keyphrases that are not present in the text, and with generalizing models across domains.", "Our code is available at https://github.com/boudinfl/ir-using-kg.", "With the exponential growth of the scientific literature (Bornmann and Mutz, 2015), retrieving relevant scientific papers becomes increasingly difficult.", "Keywords, also referred to as keyphrases, provide an effective way to supplement paper indexing and improve retrieval effectiveness in scientific digital libraries (Barker et al., 1972; Zhai, 1997; Gutwin et al., 1999; Lu and Kipp, 2014).", "However, only a few documents have assigned keyphrases, and those that do were, for the most part, self-labeled by their authors, thus exhibiting annotation inconsistencies (Strader, 2011; Suzuki et al., 2011).", "This has motivated an active line of research on automatic keyphrase extraction (see Hasan and Ng (2014) for an overview) and, more recently, keyphrase generation (Meng et al., 2017), where the task is to find a set of words and phrases that represents the main content of a document.", "Although models for predicting keyphrases have been extensively evaluated on their ability to reproduce authors' keywords, it still remains unclear whether they can be usefully applied in information retrieval.", "One reason for this lack of evidence may have been their relatively low performance, which discouraged attempts at using them for indexing (Liu et al., 2010; Hasan and Ng, 2014).", "Yet, recently proposed models not only achieve much better performance, but also display a property that may have a significant impact on retrieval effectiveness: the capacity to generate keyphrases that do not appear in the source text.", "These absent keyphrases do not just highlight the topics that are most relevant, but provide some form of semantic expansion by adding new content (e.g. synonyms, semantically related terms) to the index (Greulich, 2011).", "The goal of this paper is two-fold: to gather empirical evidence as to whether current keyphrase generation models are good enough to improve scientific document retrieval, and to gain further insights into the performance of these models from an extrinsic perspective.", "Our contributions are listed as follows: We report significant improvements for strong retrieval models on a standard benchmark collection, showing that keyphrases produced by state-of-the-art models are consistently helpful for document retrieval, even, to our surprise, when author keywords are provided.", "We introduce a new extrinsic evaluation framework for keyphrase generation that allows for a deeper understanding of the limitations of current models.", "Using it, we discuss the difficulties associated with domain generalization and absent keyphrase prediction.", "Here, we focus on the task of searching through a collection of scientific papers for relevant documents.", "All of our experiments are conducted on the NTCIR-2 test collection (Kando, 2001), which is, to our knowledge, the only available benchmark dataset for that task.", "It contains 322,058 documents (title and abstract pairs) and 49 search topics (queries) with relevance judgments.", "Most of the documents (98.6%) include author keywords (4.8 per doc. on avg.), which we later use to investigate the performance of keyphrase generation models.", "Documents cover a broad range of domains from pure science to social sciences and humanities, although half of the documents are about engineering and computer science.", "Queries are also categorized into one or more research fields (e.g. science, chemistry, engineering), the original intent being to help retrieval models in narrowing down the search space.", "We follow common practice and use short queries with binary relevance judgments (i.e. without partially relevant documents).", "We consider two standard ad-hoc retrieval models to rank documents against queries: BM25 and query likelihood (QL), both implemented in the Anserini IR toolkit (Yang et al., 2017).", "These models use unsupervised techniques based on corpus statistics for term weighting, and will therefore be straightforwardly affected when keyphrases are added to a document.", "We further apply a pseudo-relevance feedback method, known as RM3 (Abdul-Jaleel et al., 2004), on top of the models to achieve strong, near state-of-the-art retrieval results (Lin, 2019; Yang et al., 2019).", "For all models, we use Anserini's default parameters.", "To verify the effectiveness of the adopted retrieval models, we compared their performance with that of the best participating systems in NTCIR-2.", "Retrieval performance is measured using mean average precision (MAP) and precision at 10 retrieved documents (P@10).", "MAP measures the overall ranking quality and P@10 reflects the number of relevant documents on the first page of search results.", "Documents are indexed with author keywords, the same as for participating systems.", "Results are presented in Table 1.", "We see that the considered retrieval models achieve strong performance, even outperforming the best participating system by a substantial margin.", "Note that the two best-performing systems use pseudo-relevance feedback, and that the second-ranked system is based on BM25.", "Keyphrase generation is the task of producing a set of words and phrases that best summarise a document (Evans and Zhai, 1996).", "In contrast with most previous work that formulates this task as an extraction problem (a.k.a. keyphrase extraction), which can be seen as ranking phrases extracted from a document, recent neural models for keyphrase generation are based on sequence-to-sequence learning (Sutskever et al., 2014; Bahdanau et al., 2014), thus potentially allowing them to generate any phrase, even beyond those that appear verbatim in the text.", "In this study, we consider the following two neural keyphrase generation models: seq2seq + copy (Meng et al., 2017) is a sequence-to-sequence model with attention, augmented with a copying mechanism (Gu et al., 2016) to predict phrases that rarely occur.", "The model is trained with document-keyphrase pairs and uses beam search decoding for inference.", "seq2seq + corr (Chen et al., 2018) extends the aforementioned model with correlation constraints.", "It employs a coverage mechanism (Tu et al., 2016) that diversifies attention distributions to increase topic coverage, and a review mechanism to avoid generating duplicates.", "We implemented the models in PyTorch (Paszke et al., 2017) using AllenNLP (Gardner et al., 2018).", "Models are trained on the KP20k dataset (Meng et al., 2017), which contains 567,830 scientific abstracts with gold-standard, author-assigned keywords (5.3 per doc. 
on avg.).", "We use the parameters suggested by the authors for each model.", "To validate the effectiveness of our implementations, we conducted an intrinsic evaluation by counting the number of exact matches between predicted and gold keyphrases.", "We adopt the standard metric and compute the f-measure at top 5, as it corresponds to the average number of keyphrases in KP20k and NTCIR-2, that is, 5.3 and 4.8, respectively.", "We also examine cross-domain generalization using the KPTimes news dataset (Gallina et al., 2019), and include a state-of-the-art unsupervised keyphrase extraction model (Boudin, 2018, henceforth mp-rank) for comparison purposes.", "This latter baseline also provides an additional relevance signal based on graph-based ranking, whose usefulness in retrieval will be tested in subsequent experiments.", "Results are reported in Table 2.", "Overall, our results are consistent with those reported in (Meng et al., 2017; Chen et al., 2018), demonstrating the superiority of well-trained neural models over unsupervised ones, and stressing their lack of robustness across domains.", "Rather surprisingly, seq2seq + corr is outperformed by seq2seq + copy, which indicates that relevant, yet possibly redundant, keyphrases are filtered out by the added mechanisms for promoting diversity in the output.", "Our goal is to find out whether the keyphrase generation models described above are reliable enough to be beneficial for document retrieval.", "To do so, we contrast the performance of the retrieval models with and without automatically predicted keyphrases.", "Two initial indexing configurations are also examined: title and abstract only (T + A), and title, abstract and author keywords (T + A + K).", "The idea here is to investigate whether generated keyphrases simply act as a proxy for author keywords, or instead supplement them.", "Unless mentioned otherwise, the top-5 predicted keyphrases are used to expand documents, which is in accordance with the average number of author keywords in NTCIR-2.", "We evaluate retrieval performance in terms of MAP and omit P@10 for brevity.", "We use Student's paired t-test to assess the statistical significance of our retrieval results at p < 0.05 (Smucker et al., 2007).", "Results for retrieval models using keyphrase generation are reported in Table 3.", "We note that indexing keyphrases generated by seq2seq + copy, which performs best in our intrinsic evaluation, significantly improves retrieval effectiveness for all models.", "More interestingly, gains in effectiveness are also significant when both keyphrases and author keywords are indexed, indicating that they complement each other well.", "This important finding suggests that predicted keyphrases are consistently helpful for document retrieval, and should be used even when author keywords are provided.", "Another important observation is that while both keyphrase generation models perform reasonably well in our intrinsic evaluation on NTCIR-2 (cf. 
Table 2, column 3), their impact on retrieval effectiveness is quite different, as only seq2seq + copy reaches consistent significance.", "This finding advocates for the importance of using document retrieval as an extrinsic evaluation task for keyphrase generation.", "Overall, BM25 + RM3 achieves the best retrieval effectiveness, confirming previous findings on ad-hoc retrieval in limited data scenarios (Lin, 2019).", "For clarity and conciseness, we focus on this model in the rest of this paper.", "Encouraging diversity in keyphrases seems not to be appropriate for retrieval, as seq2seq + corr consistently gives lower results than seq2seq + copy.", "It is also interesting to see that the effectiveness gains of query expansion (RM3) and document expansion are additive, suggesting that they provide different but complementary relevance signals.", "Moreover, our results show that query expansion is more effective, which is in line with past work (Billerbeck and Zobel, 2005).", "One hyper-parameter that we have deliberately left untouched so far is the number N of predicted keyphrases, which directly controls the precision-recall trade-off of keyphrase generation models.", "To understand how this parameter affects retrieval effectiveness, we repeated our experiments by varying N within the range [0, 9], and plotted the results in Figure 1.", "Without author keywords, we observe that all models achieve gains, but only seq2seq + copy yields significant improvements.", "With author keywords, seq2seq + copy is again the only model that achieves significance, while the others show mixed results, sometimes even degrading scores.", "One likely explanation for this is that these models produce keyphrases that cause documents to drift away from their original meaning.", "We note that results are close to optimal for N = 5, supporting our initial setting for this parameter.", "From our experiments, it appears that unsupervised keyphrase extraction is not effective enough to significantly improve retrieval effectiveness.", "The fact that keyphrase generation does so suggests that the ability to predict absent keyphrases may be what enables better performance.", "Yet counter-intuitively, we found that most of the gains in retrieval effectiveness are due to the high extractive accuracy of keyphrase generation models.", "Results in Table 4 show that expanding documents with only absent keyphrases is at best useless and at worst harmful, while using only present keyphrases brings significant improvements.", "We draw two conclusions from this.", "First, absent keyphrases may not be useful in practice unless they are tied to some form of domain terminology to prevent semantic drift.", "Second, as generation does not yield improvements, keyphrase extraction models may be worth further investigation.", "In particular, supervised models could theoretically provide similar results while being easier to train.", "Neural models for keyphrase generation exhibit a limited generalization ability, which means that their performance degrades on documents that differ from the ones encountered during training (cf. Table 2, columns 3 and 4).", "To quantify how much this affects retrieval effectiveness, we divided the queries into two disjoint sets: in-domain for those that belong to research fields present in KP20k, and out-domain for the others.", "Results are presented in Table 5.", "The first thing we notice is the overall lower performance of out-domain queries, which may be explained by the unbalanced distribution of domains in the NTCIR-2 collection.", "Most importantly, out-domain queries with full indexing (i.e. T + A + K) constitute the only configuration in which no significant gains in retrieval effectiveness are achieved.", "This last experiment shows that expanding documents using existing keyphrase generation models may be ineffective in the absence of in-domain training data, and stresses the need for domain adaptation for keyphrase generation.", "We presented the first study of the usefulness of keyphrase generation for scientific document retrieval.", "Our results show that keyphrases can significantly improve retrieval effectiveness, and also highlight the importance of evaluating keyphrase generation models from an extrinsic perspective.", "Other retrieval tasks may also benefit from using keyphrase information, and we expect our results to serve as a basis for further improvements.", "We thank the anonymous reviewers for their valuable comments.", "This work was supported by the IKEBANA project (grant of Atlanstic 2020) and the French National Research Agency (ANR) through the DELICES project (ANR-19-CE38-0005-01)." ]
[ "abstain", "objective", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "other", "other" ]
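The retrieval experiments above index expanded documents with BM25, whose score depends directly on term frequencies and document length, so appending predicted keyphrases to a document straightforwardly changes its ranking. Below is a minimal sketch of Okapi BM25 scoring; the Lucene-style idf and the parameter values k1 = 0.9, b = 0.4 are assumptions (commonly cited as Anserini defaults, not stated in the text), and the toy corpus is invented for illustration.

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=0.9, b=0.4):
    """Okapi BM25 score of one document for a query.

    corpus: list of documents, each a list of terms, used for document
    frequencies and average document length. Expanding a document with
    predicted keyphrases changes its term frequencies and its length,
    and hence its score.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)
        if df == 0:
            continue  # term unseen in the collection contributes nothing
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        tf = doc_terms.count(t)
        denom = tf + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * tf * (k1 + 1) / denom
    return score
```

Appending a generated keyphrase that contains a query term raises the document's score even though the longer document is slightly penalized by the length normalization, which is the mechanism behind the document-expansion gains discussed above.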
[ "Recent research on cross-lingual word embeddings has been dominated by unsupervised mapping approaches that align monolingual embeddings.", "Such methods critically rely on those embeddings having a similar structure, but it was recently shown that the separate training in different languages causes departures from this assumption.", "In this paper, we propose an alternative approach that does not have this limitation, while requiring a weak seed dictionary (e.g., a list of identical words) as the only form of supervision.", "Rather than aligning two fixed embedding spaces, our method works by fixing the target language embeddings, and learning a new set of embeddings for the source language that are aligned with them.", "To that end, we use an extension of skip-gram that leverages translated context words as anchor points, and incorporates self-learning and iterative restarts to reduce the dependency on the initial dictionary.", "Our approach outperforms conventional mapping methods on bilingual lexicon induction, and obtains competitive results in the downstream XNLI task.", "Cross-lingual word embeddings (CLWEs) represent words from two or more languages in a shared space, so that semantically similar words in different languages are close to each other.", "Early work focused on jointly learning CLWEs in two languages, relying on strong cross-lingual supervision in the form of parallel corpora (Luong et al., 2015; Gouws et al., 2015) or bilingual dictionaries (Gouws and Søgaard, 2015; Duong et al., 2016).", "However, these approaches were later superseded by offline mapping methods, which separately train word embeddings in different languages and align them in an unsupervised manner through self-learning (Artetxe et al., 2018; Hoshen and Wolf, 2018) or adversarial training (Zhang et al., 2017; Conneau et al., 2018a).", "Despite the advantage of not requiring any parallel resources, mapping methods critically rely on the underlying embeddings having a similar structure, which is known as the isometry assumption.", "Several authors have observed that this assumption does not generally hold, severely hindering the performance of these methods (Søgaard et al., 2018; Nakashole and Flauger, 2018; Patra et al., 2019).", "In later work, Ormazabal et al. (2019) showed that this issue arises from trying to align separately trained embeddings, as joint learning methods are not susceptible to it.", "In this paper, we propose an alternative approach that does not have this limitation, but can still work without any parallel resources.", "The core idea of our method is to fix the target language embeddings, and learn aligned embeddings for the source language from scratch.", "This prevents structural mismatches that result from independently training embeddings in different languages, as the learning of the source embeddings is tailored to each particular set of target embeddings.", "For that purpose, we use an extension of skip-gram that leverages translated context words as anchor points.", "So as to translate the context words, we start with a weak initial dictionary, which is iteratively improved through self-learning, and we further incorporate a restarting procedure to make our method more robust.", "Thanks to this, our approach can effectively work without any human-crafted bilingual resources, relying on simple heuristics (automatically generated lists of numerals or identical words) or an existing unsupervised mapping method to build the initial dictionary.", "Our experiments confirm the effectiveness of our approach, outperforming previous mapping methods on bilingual dictionary induction and obtaining competitive results on zero-shot cross-lingual transfer learning on XNLI.", "Word embeddings.", "Embedding methods learn static word representations based on co-occurrence statistics from a corpus.", "Most approaches use two different matrices to represent the words and the contexts, which are known as the input and output vectors, respectively (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017).", "The output vectors play an auxiliary role, being discarded after training.", "Our method takes advantage of this fact, leveraging translated output vectors as anchor points to learn cross-lingual embeddings.", "To that end, we build on the Skip-Gram with Negative Sampling (SGNS) algorithm (Mikolov et al., 2013), which trains a binary classifier to distinguish whether each output word co-occurs with the given input word in the training corpus or was instead sampled from a noise distribution.", "Mapping CLWE methods.", "Offline mapping methods separately train word embeddings for each language, and then learn a mapping to align them into a shared space.", "Most of these methods align the embeddings through a linear map (often enforcing orthogonality constraints) and, as such, they rely on the assumption that the geometric structure of the separately learned embeddings is similar.", "This assumption has been shown to fail under unfavorable conditions, severely hindering the performance of these methods (Søgaard et al., 2018; Vulić et al., 2020).", "Existing attempts to mitigate this issue include learning non-linear maps in a latent space (Mohiuddin et al., 2020), employing maps that are only locally linear (Nakashole, 2018), or learning a separate map for each word (Glavaš and Vulić, 2020).", "However, all these methods are supervised, and have the same fundamental limitation of aligning a set of separately trained embeddings (Ormazabal et al., 2019).", "Self-learning.", "While early mapping methods relied on a bilingual dictionary to learn the alignment, this requirement was alleviated thanks to self-learning, which iteratively re-induces the dictionary during training.", "This enabled learning CLWEs in a semi-supervised fashion starting from a weak initial dictionary (Artetxe et al., 2017), or in a completely unsupervised manner when combined with adversarial training (Conneau et al., 2018a) or initialization heuristics (Artetxe et al., 2018; Hoshen and Wolf, 2018).", "Our proposed method also incorporates a self-learning procedure, showing that this technique can also be effective with non-mapping methods.", "Joint CLWE methods.", "Before the popularization of offline mapping, most CLWE methods extended monolingual embedding algorithms by either incorporating an explicit cross-lingual term in their learning objective, or directly replacing words with their translation equivalents in the training corpus.", "For that purpose, these methods relied on some form of cross-lingual supervision, ranging from bilingual dictionaries (Gouws and Søgaard, 2015; Duong et al., 2016) to parallel or document-aligned corpora (Luong et al., 2015; Gouws et al., 2015; Vulić and Moens, 2016).", "More recently, Lample et al. (2018) reported positive results learning regular word embeddings over concatenated monolingual corpora in different languages, relying on identical words as anchor points.", "Wang et al. (2019) further improved this approach by applying a conventional mapping method afterwards.", "As shown later in our experiments, our approach outperforms theirs by a large margin.", "Freezing.", "Artetxe et al. 
(2020) showed that it is possible to transfer an English transformer to a new language by freezing all the inner parameters of the network and learning a new set of embeddings for the new language through masked language modeling.", "This works because the frozen transformer parameters constrain the resulting representations to be aligned with English.", "Similarly, our proposed approach uses frozen output vectors in the target language as anchor points to learn aligned embeddings in the source language.", "Let x_i and x̄_i be the input and output vectors of the i-th word in the source language, and y_j and ȳ_j be their analogues in the target language.", "In addition, let D be a bilingual dictionary, where D(i) = j denotes that the i-th word in the source language is translated as the j-th word in the target language.", "Our approach first learns the target language embeddings { y_j } and { ȳ_j } monolingually using regular SGNS.", "Having done that, we learn the source language embeddings { x_i } and { x̄_i }, constraining them to be aligned with the target language embeddings according to the dictionary D.", "1 Recall that { x̄_i } and { ȳ_j } are auxiliary, and the goal is to learn aligned { x_i } and { y_j } (see §2).", "For that purpose, we propose an extension of SGNS that replaces the output vectors in the source language with their translation equivalents in the target language, which act as anchor points (§3.1).", "So as to make our method more robust to a weak initial dictionary, we incorporate a self-learning procedure that re-estimates the dictionary during training (§3.2), and perform iterative restarts (§3.3).", "Algorithm 1 summarizes our method.", "Given a pair of words (w_i, w_j) co-occurring in the source language corpus, we define a generalized SGNS objective as follows:", "L(w_i, w_j) = log σ(x_{w_i}^T ctx(w_j)) + Σ_{n=1}^{k} E_{w_n ∼ P_n(w)} [log σ(−x_{w_i}^T ctx(w_n))]", "where k is the number of negative samples, P_n(w) is the noise distribution, and ctx(w_t) is a function that returns the output vector to be used for w_t.", "In regular SGNS, this function would simply return the output vector of the corresponding word, so that ctx(w_t) = x̄_{w_t}.", "In contrast, our approach replaces it with its counterpart in the target language if w_t is in the dictionary: ctx(w_t) = ȳ_{D(w_t)} if w_t ∈ D, and ctx(w_t) = x̄_{w_t} otherwise. During training, the replaced vectors { ȳ_j } are kept frozen, acting as anchor points so that the resulting embeddings { x_i } are aligned with their counterparts { y_j } in the target language.", "As shown later in our experiments, the performance of our basic method is largely dependent on the quality of the bilingual dictionary itself.", "However, this is no different for conventional mapping methods, which also rely on a bilingual dictionary to align separately trained embeddings in different languages.", "So as to overcome this issue, modern mapping approaches rely on self-learning, which alternates between aligning the embeddings and re-inducing the dictionary in an iterative fashion (Artetxe et al., 2017).", "We adopt a similar strategy, and re-induce the dictionary D a total of K times during training, where K is a hyperparameter.", "To that end, we first obtain the translations for each source word using CSLS retrieval (Conneau et al., 2018a): D(i) = argmax_j CSLS(x_i, y_j). Having done that, we discard all entries that do not satisfy the following cyclic consistency condition: ∀i ∈ D: i = argmax_k cos(x_k, y_{argmax_j cos(x_i, y_j)}). 3.3 Iterative restarts. While self-learning is able to improve a weak initial dictionary throughout training, the method is still susceptible to poor local optima.", "This can be further exacerbated by the learning rate decay commonly used with SGNS, which makes it increasingly difficult to recover from a poor solution as training progresses.", "So as to overcome this issue, we sequentially run the entire SGNS training R 
times, where R is a hyperparameter of the method.", "We use the output from the previous run as the initial dictionary, but all the other parameters are reset and the full training process is run from scratch.", "Offline mapping.", "This approach learns monolingual embeddings in each of the languages separately, which are then mapped into a common space through a linear transformation.", "2 We define our cyclic consistency condition over cosine similarity, which we found to be more restrictive than CSLS (in that it discards more entries) and to work better in our preliminary experiments.", "We experiment with 3 popular methods from the literature: MUSE (Conneau et al., 2018a), ICP (Hoshen and Wolf, 2018) and VecMap (Artetxe et al., 2018).", "We use the original implementation of each method in its unsupervised mode with default hyperparameters.", "Joint learning + offline mapping.", "This approach jointly learns word embeddings for two languages over their concatenated monolingual corpora, where identical words act as anchor points (Lample et al., 2018).", "Having done that, the vocabulary is partitioned into one shared and two language-specific subsets, which are further aligned through an offline mapping method (Wang et al., 2019).", "We use the joint align implementation from the authors with default hyperparameters, which relies on fastText for the joint learning step and MUSE for the mapping step.", "Cross-lingual anchoring.", "Our proposed method, described in Section 3.", "We explore 3 alternatives to obtain the initial dictionary:", "(i) identical words, where D(i) = j if the i-th source word and the j-th target word are identically spelled,", "(ii) numerals, a subset of the former where identical words are further restricted to be sequences of digits, and", "(iii) unsupervised mapping, where we use the baseline VecMap system described above to induce the initial dictionary.", "The first two variants make assumptions on the writing system of different 
languages, which is usually regarded as a weak form of supervision (Artetxe et al., 2017; Søgaard et al., 2018), whereas the latter is strictly unsupervised, yet dependent on an additional system from a different family.", "We learn CLWEs between English and six other languages: German, Spanish, French, Finnish, Russian and Chinese.", "Following common practice, we use Wikipedia as our training corpus, which we preprocessed using standard Moses scripts, and restrict our vocabulary to the most frequent 200K tokens per language.", "3 The original implementation only supports the supervised mode with RCSLS mapping, so we modified it to use MUSE in the unsupervised setting as described in the original paper.", "4 We use CSLS retrieval and apply the cyclic consistency restriction as described in Section 3.2.", "In the case of Chinese, word segmentation was done using the Stanford Segmenter.", "Table 1 summarizes the statistics of the resulting corpora, while Table 2 reports the sizes of the initial dictionaries derived from them for our proposed method.", "For joint align, we directly run the official implementation over our tokenized corpus as described above.", "All the other systems take monolingual embeddings as input, which we learn using the SGNS implementation in word2vec.", "For our proposed method, we set English as the target language, fix the corresponding monolingual embeddings, and learn aligned embeddings in the source language using our extension of SGNS (§3).", "We set the number of restarts R to 3, the number of reinductions per restart K to 50, and the number of epochs to 10 · (#trg sents / #src sents), which makes sure that the source language gets a similar number of updates to the 10 epochs done for English.", "For all the other hyperparameters, we use the same values as for the monolingual embeddings.", "We made all of our development decisions based on preliminary experiments on English-Finnish, without any systematic hyperparameter 
exploration.", "Our implementation runs on CPU, except for the dictionary reinduction steps, which run on a single GPU for around one [...]. 5 We extracted the corpus from the February 2019 dump using the WikiExtractor tool.", "6 We use 10 negative samples, a sub-sampling threshold of 1e-5, 300 dimensions, and 10 epochs.", "Note that joint align also learns 300-dimensional vectors, but runs fastText with default hyperparameters under the hood.", "7 In our preliminary experiments, we observed our proposed method to be quite sensitive to which language is the source and which one is the target.", "We find the language with the largest corpus to perform best as the target, presumably because the corresponding monolingual embeddings are better estimated, so it is more appropriate to fix them and learn aligned embeddings for the other language.", "Following this observation, we set English as the target language for all pairs, as it is the language with the largest corpus.", "8 For a fair comparison, we also tried using the same number of epochs for the baseline systems, but this performed worse than the reported numbers with 10 epochs.", "As described next, we evaluate our method on two tasks: Bilingual Lexicon Induction (BLI) and Cross-lingual Natural Language Inference (XNLI).", "BLI.", "Following common practice, we induce a bilingual dictionary through CSLS retrieval (Conneau et al., 2018a) for each set of cross-lingual embeddings, and evaluate the precision at 1 (P@1) with respect to the gold standard test dictionary from the MUSE dataset (Conneau et al., 2018a).", "For the few out-of-vocabulary source words, we revert to copying as a back-off strategy, 9 so our reported numbers are directly comparable to prior work in terms of coverage.", "XNLI.", "We train an English natural language inference model on MultiNLI (Williams et al., 2018), and evaluate the zero-shot cross-lingual transfer performance on the XNLI test set (Conneau et al., 2018b) for the subset of our 
languages covered by it.", "To that end, we follow Glavaš et al. (2019) and train an Enhanced Sequential Inference Model (ESIM) on top of our original English embeddings, which are kept frozen during training.", "At test time, we transfer into the rest of the languages by plugging in the corresponding aligned embeddings.", "Note that we use the exact same English model for our proposed method and the baseline MUSE and ICP systems, which only differ in the set of aligned embeddings used for cross-lingual transfer.", "9 This has a negligible impact in practice, as it accounts for less than 1.4% of the test cases.", "Moreover, all of our systems use the same underlying vocabulary, so they are affected in the exact same way.", "10 This is possible because they all fix the target language embeddings (English in this case) and align the embeddings in the source language with them, either through mapping (MUSE, ICP) or learning from scratch (ours).", "In contrast, VecMap and joint align also manipulate the target English embeddings, which would require training a separate model for each language pair, so we decide to exclude them from this set of experiments.", "11 In addition to the computational overhead, having separate models introduces some variance, while our comparison is more direct.", "5 Results We next discuss our main results on BLI (§5.1) and XNLI (§5.2), followed by our ablation study (§5.3) and error analysis (§5.4) on BLI.", "Table 3 comprises our main BLI results.", "We observe that our method obtains the best results in all directions (matched by VecMap in Russian-English), outperforming the strongest baseline by 2.4 points on average for the mapping-based initialization.", "Our improvements are more pronounced in the backward direction (3.1 points on average) but still substantial in the forward direction (1.7 points on average).", "It is worth noting that some systems fail to converge to a good solution for the most challenging language pairs.", "This includes our proposed method in the case of Chinese-English when using the numeral-based initialization, which we attribute to the smaller size of the initial dictionary (only 244 entries, see Table 2).", "Other than that, we observe that our approach obtains very similar results regardless of the initial dictionary.", "Quite remarkably, the variant using VecMap for initialization ( mapping init ) is substantially stronger than VecMap itself despite not using any additional training signal.", "So as to put our results into perspective, Table 4 compares them to previous numbers reported in the literature.", "Note that the numbers are comparable in terms of coverage and all systems use Wikipedia as the training corpus, although they might differ on the specific dump used and the preprocessing details.", "12 In particular, most mapping methods use the official Wikipedia embeddings from fastText.", "Unfortunately, the preprocessed corpus used to train these embeddings is not public, so works that explore other approaches, like ours, need to use their own pre-processed copy of Wikipedia.", "As can be seen, our approach obtains the best results by a substantial margin.", "13 Artetxe et al. (2019) report even stronger results based on unsupervised machine translation instead of direct retrieval with CLWEs.", "Note, however, that their method still relies on cross-lingual embeddings to build the underlying phrase-table, so our improvements should be largely orthogonal to theirs.", "5.2 XNLI We report our XNLI results in Table 5.", "We observe that our method is competitive with the baseline mapping systems, achieving the best results on 3 out of the 5 transfer languages by a small margin.", "Nevertheless, it significantly lags behind MUSE on Chinese, even if the exact same set of cross-lingual embeddings performed better than MUSE at BLI.", "While striking, similar discrepancies between BLI and XNLI performance were also observed in previous studies (Glavaš et al., 2019).", "Finally, we observe that the initial dictionary has a negligible impact on the performance of our proposed method, which supports the idea that our approach converges to a similar solution given any reasonable initialization.", "So as to understand the role of self-learning and the iterative restarts in our approach, we perform an ablation study and report our results in Table 6.", "
We observe that the contribution of these components is greatly dependent on the initial dictionary.", "For the numeral initialization, the basic method works poorly, and both extensions bring large improvements.", "In contrast, the identical initialization [Figure 1: Finnish-English learning curves (BLI P@1); source→target and target→source P@1 as a function of the reinduction number, for Ours (identical init), Ours (numeral init), Ours (mapping init) and VecMap]", "does not benefit from iterative restarts, but self-learning still plays a major role.", "In the case of the mapping-based initialization, the basic method is already very competitive.", "This suggests that both the self-learning and the iterative restarts are helpful to make the method more robust to a weak initialization, and have a minor impact otherwise.", "In order to better understand the underlying learning dynamics, we analyze the learning curves for Finnish-English in Figure 1.", "We observe that, when the initial dictionary is strong, our method surpasses the baseline and stabilizes early.", "In contrast, convergence is much slower when using the weak numeral-based initialization, and the iterative restarts are critical to escape poor local optima.", "So as to better understand where our improvements in BLI are coming from, we perform an error analysis on the Spanish-English direction.", "To that end, we manually inspect the 69 instances for which our method (with mapping-based initialization) produced a correct translation while VecMap failed according to the gold standard, as well as the 26 instances for which the opposite was true.", "We then categorize these errors into several types, which are summarized in Table 7.", "
We observe that, in 52.6% of the 95 analyzed instances, the translation produced by our method is identical to the source word, while this percentage goes down to 4.2% for VecMap.", "This tendency of our approach to copy its input is striking, as the model has no notion of the words being identically spelled.", "14 The variants of our system with identical or numeral initialization do indirectly see this signal, but the one analyzed here is initialized with the VecMap mapping.", "A large portion of these cases correspond to named entities, where copying is the right behavior, while VecMap outputs a different proper noun.", "There are also some instances where the input word is in the target language, 15 which can be considered an artifact of the dataset, but copying also seems the most reasonable behavior in these cases.", "Finally, there are also a few cases where the input word is present in the target vocabulary and is selected by our method, which is counted as an error.", "Once again, we consider these to be an artifact of the dataset, as copying seems a reasonable choice if the input word is considered to be part of the target language vocabulary.", "The remaining cases where neither method copies mostly correspond to common errors, where one of the systems (most often VecMap) outputs a semantically related but incorrect translation.", "However, there are also a few instances where both translations are correct, but one of them is missing in the gold standard.", "With the aim of understanding the impact of identical words on our original results, we re-evaluated the systems using a filtered version of the MUSE gold standard dictionaries, where we removed all source words that were included in the set of candidate translations.", "In order to be fair, we filtered out identical words from the output of the system, reverting to the second highest-ranked translation whenever the first one is identical to the source word.", "The results for the strongest system in 
each family are shown in Table 8.", "Even if the margin of improvement is reduced compared to Table 3, the best results are still obtained by our proposed method, bringing an average improvement of 1.1 points.", "15 English words will often appear in other languages as part of named entities (e.g., pink as part of Pink Floyd), which explains the presence of such words in the Spanish vocabulary.", "It is also worth noting that joint align, which shares a portion of the vocabulary for both languages (and will thus translate all words in the shared vocabulary identically), suffers a large drop in performance.", "This highlights the importance of accompanying quantitative BLI evaluation with an error analysis as urged by previous studies (Kementchedjhieva et al., 2019).", "Our approach for learning CLWEs addresses the main limitations of both offline mapping and joint learning methods.", "Different from mapping approaches, it does not suffer from structural mismatches arising from independently training embeddings in different languages, as it works by constraining the learning of the source embeddings so they are aligned with the target ones.", "At the same time, unlike previous joint methods, our system can work without any parallel resources, relying on numerals, identical words or an existing mapping method for the initialization.", "We achieve this by combining cross-lingual anchoring with self-learning and iterative restarts.", "While recent research on CLWEs has been dominated by mapping approaches, our work shows that the fundamental techniques that popularized these methods (e.g., the use of self-learning to relax the need for cross-lingual supervision) can also be effective beyond this paradigm.", "Despite its simplicity, our experiments on BLI show the superiority of our method when compared to previous mapping systems.", "We complement these results with additional experiments on a downstream task, where our method obtains competitive results, as well as an 
ablation study and a systematic error analysis.", "We identify a striking tendency of our method to translate words identically, even if it has no notion of the words being identically spelled.", "Thanks to this, our method is particularly strong at translating named entities, but we show that our improvements are not limited to this phenomenon.", "These insights confirm the value of accompanying quantitative results on BLI with qualitative evaluation (Kementchedjhieva et al., 2019) and/or other tasks (Glavaš et al., 2019).", "In the future, we would like to further explore CLWE methods that go beyond the currently dominant mapping paradigm.", "In particular, we would like to remove the requirement of a seed dictionary altogether by using adversarial learning, and explore more elaborate context translation and dictionary re-induction schemes.", "Aitor Ormazabal, Aitor Soroa, Gorka Labaka and Eneko Agirre were supported by the Basque Government (excellence research group IT1343-19 and DeepText project KK-2020/00088), project BigKnowledge ( Ayudas Fundación BBVA a equipos de investigación científica 2018 ) and the Spanish MINECO (project DOMINO PGC2018-102041-B-I00 MCIU/AEI/FEDER, UE).", "Aitor Ormazabal was supported by a doctoral grant from the Spanish MECD." ]
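The anchored ctx(·) replacement at the core of the method described in the sentences above can be sketched as follows. This is a minimal illustration, not the authors' code: the toy word vectors, the dictionary format, and the single-pair loss function are assumptions made for demonstration only.

```python
import math

def sgns_pair_loss(x_src_in, ctx_vec, negative_ctx_vecs):
    """Generalized SGNS loss for one (center, context) pair:
    log sigma(x . ctx(w_j)) + sum over negatives of log sigma(-x . ctx(w_n))."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    loss = math.log(sigmoid(dot(x_src_in, ctx_vec)))
    loss += sum(math.log(sigmoid(-dot(x_src_in, v))) for v in negative_ctx_vecs)
    return loss

def make_ctx(src_output_vecs, tgt_output_vecs, dictionary):
    """ctx(w) returns the frozen target-language output vector when w is in
    the seed dictionary (the anchor point), and the trainable source-language
    output vector otherwise."""
    def ctx(word):
        if word in dictionary:
            return tgt_output_vecs[dictionary[word]]  # frozen anchor
        return src_output_vecs[word]
    return ctx
```

Because the anchors returned for dictionary words are never updated, gradient steps on this loss pull the source input vectors toward the (fixed) target space, which is exactly the alignment effect the text describes.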
[ "abstain", "abstain", "objective", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "result", "method", "result", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "method", "result", "abstain", "objective", "objective", "other", "other" ]
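The self-learning step described in the first paper above (CSLS retrieval followed by the cosine-based cyclic consistency filter) can be sketched as follows. This is a simplified illustration over tiny dense lists rather than full 200K-word vocabularies, and the neighborhood size k is an assumption; it is not the authors' implementation.

```python
import math

def cos(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return dot / (na * nb)

def csls(src, tgt, k=2):
    """CSLS scores: 2*cos(x_i, y_j) minus the mean cosine of each vector's
    k nearest neighbors in the other language (penalizes hub words)."""
    sims = [[cos(x, y) for y in tgt] for x in src]
    r_src = [sum(sorted(row, reverse=True)[:k]) / k for row in sims]
    cols = list(zip(*sims))
    r_tgt = [sum(sorted(col, reverse=True)[:k]) / k for col in cols]
    return [[2 * sims[i][j] - r_src[i] - r_tgt[j]
             for j in range(len(tgt))] for i in range(len(src))]

def reinduce_dictionary(src, tgt):
    """D(i) = argmax_j CSLS(x_i, y_j), keeping only entries that pass the
    cosine-based cyclic consistency check (the source word must be the
    nearest source neighbor of its own nearest target neighbor)."""
    scores = csls(src, tgt)
    d = {i: max(range(len(tgt)), key=lambda j: scores[i][j])
         for i in range(len(src))}
    consistent = {}
    for i, j in d.items():
        j_star = max(range(len(tgt)), key=lambda j2: cos(src[i], tgt[j2]))
        back = max(range(len(src)), key=lambda k2: cos(src[k2], tgt[j_star]))
        if back == i:
            consistent[i] = j
    return consistent
```

Running this re-induction K times during training, and restarting the whole procedure R times with the previous output as the seed dictionary, mirrors the self-learning and iterative-restart loop summarized in the paper's Algorithm 1.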
[ "Breaking cybersecurity events are shared across a range of websites, including security blogs (FireEye, Kaspersky, etc.), in addition to social media platforms such as Facebook and Twitter.", "In this paper, we investigate methods to analyze the severity of cybersecurity threats based on the language that is used to describe them online.", "A corpus of 6,000 tweets describing software vulnerabilities is annotated with authors' opinions toward their severity.", "We show that our corpus supports the development of automatic classifiers with high precision for this task.", "Furthermore, we demonstrate the value of analyzing users' opinions about the severity of threats reported online as an early indicator of important software vulnerabilities.", "We present a simple, yet effective method for linking software vulnerabilities reported in tweets to Common Vulnerabilities and Exposures (CVEs) in the National Vulnerability Database (NVD).", "Using our predicted severity scores, we show that it is possible to achieve a Precision@50 of 0.86 when forecasting high severity vulnerabilities, significantly outperforming a baseline that is based on tweet volume.", "Finally, we show how reports of severe vulnerabilities online are predictive of real-world exploits.", "1 Introduction Software vulnerabilities are flaws in computer systems that leave users open to attack; vulnerabilities are generally unknown at the time a piece of software is first published, but are gradually identified over time.", "As new vulnerabilities are discovered and verified, they are assigned CVE numbers (unique identifiers), and entered into the National Vulnerability Database (NVD).", "1 Our code and data are available at https://github.com/viczong/cybersecurity_threat_severity_analysis .", "To help prioritize response efforts, vulnerabilities in the NVD are assigned severity scores using the Common Vulnerability Scoring System (CVSS).", "As the rate of discovered vulnerabilities has 
increased in recent years, the need for efficient identification and prioritization has become more crucial.", "However, it is well known that a large time delay exists between the time a vulnerability is first publicly disclosed and the time it is published in the NVD; a recent study found that the median delay between the time a vulnerability is first reported online and the time it is published in the NVD is seven days; also, 75% of threats are first disclosed online, giving attackers time to exploit the vulnerability.", "In this paper we present the first study of whether natural language processing techniques can be used to analyze users' opinions about the severity of software vulnerabilities reported online.", "We present a corpus of 6,000 tweets annotated with opinions toward threat severity, and empirically demonstrate that this dataset supports automatic classification.", "Furthermore, we propose a simple, yet effective method for linking software vulnerabilities reported on Twitter to entries in the NVD, using CVEs found in linked URLs.", "We then use our threat severity analyzer to conduct a large-scale study to validate the accuracy of users' opinions online against experts' severity ratings (CVSS scores) found in the NVD. 3 https://www.cvedetails.com/browse-by-date.php 4 https://www.recordedfuture.com/vulnerability-disclosure-delay/", "Finally, we show that our approach can provide an early indication of vulnerabilities that result in real exploits in the wild as measured by the existence of Symantec virus signatures associated with CVEs; we also show how our approach can be used to retrospectively identify Twitter accounts that provide reliable warnings about severe vulnerabilities.", "Recently there has been increasing interest in developing NLP tools to identify cybersecurity events reported online, including denial of service attacks, data breaches and more (Ritter et al., 2015; Chang et al., 2016; Chambers et al., 2018).", "Our proposed approach 
in this paper builds on this line of work by evaluating users' opinions toward the severity of cybersecurity threats.", "Prior work has also explored forecasting software vulnerabilities that will be exploited in the wild (Sabottke et al., 2015).", "Features included structured data sources (e.g., NVD), in addition to the volume of tweets mentioning a list of 31 keywords.", "Rather than relying on a fixed set of keywords, we analyze message content to determine whether the author believes a vulnerability is severe.", "As discussed by Sabottke et al. (2015), methods that rely on tracking keywords and message volume are vulnerable to adversarial attacks from Twitter bots or sockpuppet accounts (Solorio et al., 2013).", "In contrast, our method is somewhat less prone to such attacks; by extracting users' opinions expressed in individual tweets, we can track the provenance of information associated with our forecasts for display to an analyst, who can then determine whether or not they trust the source of information.", "Given a tweet t and named entity e, our goal is to predict whether or not there is a serious cybersecurity threat towards the entity based on context.", "For example, given the context in Figure 2, we aim at predicting the severity level towards adobe flash player.", "We define an author's perceived severity toward a threat using three criteria: (1) does the author believe that their followers should be worried about the threat?", "(2) is the vulnerability easily exploitable?", "and (3) could the threat affect a large number of users?", "If one or more of these criteria are met, then we consider the threat to be severe.", "To collect tweets describing cybersecurity events for annotation, we tracked the keywords ddos and vulnerability from Dec 2017 to July 2018 using the Twitter API.", "We then used the Twitter tagging tool described by Ritter et 
(2011) to extract named entities, 5 retaining tweets that contain at least one named entity.", "To cover as many linguistic variations as possible, we used Jaccard similarity with a threshold of 0.7 to identify and remove duplicated tweets with same date.", "6 2.2 Mechanical Turk Annotation We paid crowd workers on Amazon Mechanical Turk to annotate our dataset.", "The annotation was performed in two phases; during the first phase, we asked workers to determine whether or not the tweet describes a cybersecurity threat toward a target entity, in the second phase the task is to determine whether the author of the tweet believes the threat is severe; only tweets that were judged to express a threat were annotated in the second phase.", "Each HIT contained 10 tweets to be annotated; workers were paid $0.20 per HIT.", "In pilot studies we tried combining these two annotations into a single task, but found low inter-rater agreement, especially for the threat severity judgments, motivating the need for separation of the annotation procedure into two tasks.", "Figure 2 shows a portion of the annotation interface presented to workers during the second phase of annotation.", "Details of each phase are described below, and summarized in Table 1.", "Threat existence annotation: Not all tweets in our dataset describe cybersecurity threats, for example many tweets discuss different senses of the 5 https://github.com/aritter/twitter nlp 6 We sampled a dataset of 6,000 tweets to annotate.", "word vulnerability (e.g., It's OK to show vul-nerability).", "During the first phase of our annotation process, workers judged whether or not there appears to be a cybersecurity threat towards the target entity based on the content of the corresponding tweet.", "We provide workers with 3 options: the tweet indicates", "(a) a cybersecurity threat towards given entity,", "(b) a threat, but not towards the target entity, or", "(c) no cybersecurity threat.", "Each tweet is annotated by 5 
workers.", "Threat severity annotation: In the second phase, we collected all tweets judged to contain threats by more than 3 workers in the first phase and annotated them for severity.", "1,966 tweets were selected out of 6,000.", "7 We further deduplicate pairs of tweets where the longest common subsequence covers the majority of the text contents. During deduplication, all hashtags and URLs were removed and digits were replaced with 0.", "For each tweet we provided workers with 3 options: the tweet contains", "(a) a severe,", "(b) a moderate or", "(c) no threat toward the target entity.", "During our pilot study, we found this to be a more challenging annotation task; therefore, we increased the number of annotators per tweet to 10 workers, which we found to improve agreement with our expert judgments.", "Inter-annotator agreement: During both phases, we monitored the quality of workers' annotations using their agreement with each other.", "We calculated the annotation agreement of each worker against the majority vote of the other workers.", "We manually removed data from workers who had an agreement of less than 0.5, filling in missing annotations with new workers.", "We also manually removed data from workers who answered either uniformly or randomly for all HITs.", "Agreement with expert judgments: To validate the quality of our annotated corpus, we compared the workers' aggregated annotations against our own expert annotations.", "We independently annotated 150 randomly sampled tweets, 61 tweets of which are marked as containing severe or moderate threats.", "For threat existence annotation, we observe a 0.66 value of Cohen's κ (Artstein and Poesio, 2008) between the expert judgements and majority vote of 5 crowd workers.", "Although our threat severity annotation task may require some cybersecurity knowledge for accurate judgment, we still achieve 0.52 Cohen's κ agreement by comparing the majority vote from 10 workers with expert annotations.", "Using the annotated corpus described in Section 2.2, we now develop classifiers that detect threats reported online and analyze users' opinions toward their severity.", "Specifically, given a named entity and tweet, ⟨e, t⟩, our goal is to estimate the probability the tweet describes a cybersecurity threat towards the entity, p_threat(y | ⟨e, t⟩), and also the probability that the threat is severe, p_severe(y | ⟨e, t⟩).", "In this section, we describe the details of these classifiers and evaluate their performance.", "We experimented with two baselines to detect reports of cyberthreats and analyze opinions about their severity: logistic regression using bag-of-ngram features, and 1D convolutional neural networks.", "In the sections below we describe the input representations and details of these two models.", "Logistic regression: We use logistic regression as our first baseline model for both classifiers.", "Input representations are bag-of-ngram features extracted from the entire tweet content.", "Example features are presented in Table 4.", "We use context windows of size 2, 3 and 4 to extract features.", "We map extracted n-grams that occur only once to a ⟨UNK⟩ token.", "In all our experiments, we replace named entities with a special token ⟨TARGET⟩; this helps prevent our models from biasing towards specific entities that appear in our training corpus.", "All digits are replaced with 0.", "Convolutional neural networks: We also experimented with 1D convolutional neural networks (Collobert et al., 2011; Kim, 2014).", "Given a tweet, the model first applies convolutional operations on input sequences with various filters of different sizes.", "The intermediate representations for each filter are aggregated using max-pooling over time, followed by a fully connected layer.", "We choose convolution kernel sizes to be 3, 4 and 5-grams with 100 filters for each.", "We minimize cross-entropy loss 
using Adam (Kingma and Ba, 2015); the learning rate is set to 0.001 with a batch size of 1 and 5 epochs.", "Word embeddings: We train our own cybersecurity-domain word embeddings based on GloVe (Pennington et al., 2014), as 39.7% of our tokens are treated as OOV words in the GloVe pre-trained Twitter embeddings.", "We used a corpus of 609,470 cybersecurity-related tweets (described in Section 2.1) as our training corpus.", "The dimension of the word embeddings is 50.", "Table 2 shows nearest neighbors for some sampled cybersecurity terms based on the learned embeddings.", "During network training, we initialize the word embedding layer with our own embeddings.", "We initialize tokens not in our trained embeddings with random vectors drawn uniformly from [-0.01, 0.01].", "We fine-tune the word embedding layer during training.", "For threat existence classification, we randomly split our dataset of 6,000 tweets into a training set of 4,000 tweets, a development set of 1,000 tweets, and a test set of 1,000 tweets.", "For the threat severity classifier, we only used data from the 2nd phase of annotation.", "This dataset consists of 1,966 tweets that were judged by the Mechanical Turk workers to describe a cybersecurity threat towards the target entity.", "We randomly split this dataset into a training set of 1,200 tweets, a development set of 300 tweets, and a test set of 466 tweets.", "We collapsed the three annotated labels into two categories based on whether or not the author expresses an opinion that the threat towards the target entity is severe.", "Threat existence classifier: The logistic regression baseline has good performance at identifying threats, which we found to be a relatively easy task; the area under the precision-recall curve (AUC) on the development and test sets is presented in Table 5.", "
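The convolution-and-max-pooling-over-time step described for the 1D CNN can be sketched numerically as follows. This is a minimal numpy illustration of the operation, not the authors' network; shapes, filter counts, and function names are assumptions for the example.

```python
import numpy as np

def conv1d_maxpool(embeddings, filters, bias):
    """embeddings: (seq_len, emb_dim) token vectors for one tweet.
    filters: (num_filters, k, emb_dim) for one kernel size k.
    Returns (num_filters,) max-pooled-over-time feature vector."""
    seq_len, _ = embeddings.shape
    num_filters, k, _ = filters.shape
    windows = np.stack([embeddings[i:i + k] for i in range(seq_len - k + 1)])
    # (num_windows, num_filters) pre-activations, then ReLU
    acts = np.maximum(0.0, np.einsum("wke,fke->wf", windows, filters) + bias)
    return acts.max(axis=0)  # max-pooling over time

def tweet_features(embeddings, filter_banks):
    """Concatenate pooled features from the kernel sizes (3, 4, 5
    in the paper); filter_banks is a list of (filters, bias) pairs."""
    return np.concatenate([conv1d_maxpool(embeddings, f, b)
                           for f, b in filter_banks])
```

In the actual model the concatenated feature vector would feed a fully connected layer producing the class probabilities.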
This enables accurate detection of trending threats online by tracking cybersecurity keywords using the Twitter streaming API, following an approach that is similar to prior work on entity-based Twitter event detection (Ritter et al., 2012; Zhou et al., 2014; Wei et al., 2015).", "Table 3 presents an example of threats detected using this procedure on Nov. 22, 2018.", "Threat severity classifier: Figure 3 shows precision-recall curves for the threat severity classifiers.", "Logistic regression with bag-of-ngram features provides a strong baseline for this task.", "Table 4 presents examples of high-weight features from the logistic regression model.", "These features often intuitively indicate severe threats, e.g., 'critical vulnerability', 'a massive', 'million', etc.", "Without much hyperparameter tuning on the development set, the convolutional neural network consistently achieves higher precision at the same level of recall as compared to logistic regression.", "We summarize the performance of our threat existence and severity classifiers in Table 5.", "
Forecasting Severe Cybersecurity Threats: In Section 2 we presented methods that can accurately detect threats reported online and analyze users' opinions about their severity.", "We now explore the effectiveness of this model for forecasting.", "Specifically, we aim to answer the following questions: (1) To what extent do users' opinions about threat severity expressed online align with expert judgments?", "(2) Can these opinions provide an early indicator to help prioritize threats based on their severity? (A live demo is available at: http://kb1.cse.ohio-state.edu:8123/events/threat. A table excerpt appeared here, with columns Named Entity / Example Tweet / Existence / Severity; example row: apple, 'RT AsturSec: A kernel vulnerability in Apple devices gives access to remote code execution Packt Hub #infosec #CyberSecurity https://t....')", "A large corpus of users' opinions: We follow the same procedure described in Section 2.1 to prepare another dataset for a large-scale evaluation.", "For this purpose, we collected data from Jan 2016 to Nov 2017; this ensures no tweets overlap with those that were annotated in Section 2.2.", "We collect all English tweets that explicitly contain the keyword vulnerability within this time period, which results in a total of 976,180 tweets.", "377,468 tweets remain after removing tweets without named entities.", "National Vulnerability Database (NVD): NVD is the U.S. government database of software vulnerabilities.", "Started in 2000, NVD covers over 100,000 vulnerabilities, assigning a unique CVE number for each threat.", "These CVE numbers serve as common identifiers.", "NVD uses the Common Vulnerability Scoring System (CVSS) to measure the severity of threats.", "CVSS currently has two versions: the CVSS v2.0 and CVSS v3.0 standards.", "CVSS v3.0 is the latest version, released in July 2015.", "We summarize the two standards in Table 6.", "
Table 6: Qualitative severity rankings of vulnerabilities in NVD (CVSS v2.0: Low 0.0-3.9, Medium 4.0-6.9, High 7.0-10.0; CVSS v3.0: None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0).", "Our evaluation relies on accurately matching tweets describing vulnerabilities to their associated NVD records.", "To achieve this, we present a simple, yet effective method that makes use of content in linked web pages.", "We find that 82.4% of tweets in our dataset contain external URLs.", "Our approach to linking tweets to CVEs is to search for CVE numbers either in URL addresses or in the corresponding web pages linked in tweets reporting vulnerabilities.", "We ignore web pages that contain more than one unique CVE to avoid potential ambiguities.", "Using this approach, within our dataset, 79,383 tweets were linked to 10,565 unique CVEs.", "In order to simulate a forecasting scenario, we only consider CVEs where more than two associated tweets were posted at least 5 days ahead of the official NVD publication date.", "In our dataset, 13,942 tweets are finally selected for forecast evaluation, covering 1,409 unique CVE numbers.", "To evaluate the accuracy of this linking procedure, we randomly sampled 100 matched pairs and manually checked them.", "We find the precision of our matching procedure to be very high: only 2 mismatches out of 100 are found.", "Now that we have a linking between tweets and CVE numbers, our goal is to produce a sorted list of CVEs with those that are indicated to be severe threats at the top.", "We consider two ranking procedures, detailed below; the first is based on users' opinions toward the severity of a threat, and the second is a baseline that simply uses the volume of tweets describing a specific vulnerability to measure its severity.", "To simplify the exposition below, we denote each CVE number as CVE_i, and the collection of tweets linked to this CVE number as T_CVE_i = { t_k | tweet t_k is mapped to CVE_i }.", "Our model: Our severe 
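The CVE-linking heuristic described above (search URLs first, then the linked page; discard pages mentioning more than one unique CVE) can be sketched as follows. This is an illustrative reconstruction; the `fetch_page` callback is a hypothetical hook for retrieving page content, and the regex follows the standard CVE identifier format.

```python
import re

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)

def link_tweet_to_cve(urls, fetch_page):
    """Search for CVE numbers in URL addresses or linked pages.
    fetch_page is a caller-supplied function url -> page text.
    Pages mentioning more than one unique CVE are ignored."""
    for url in urls:
        found = {m.upper() for m in CVE_RE.findall(url)}
        if not found:
            found = {m.upper() for m in CVE_RE.findall(fetch_page(url))}
        if len(found) == 1:
            return found.pop()
    return None
```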
threat classifier assigns a severity score p_severe(y | ⟨e, t⟩) for each tuple of named entity e and corresponding tweet t.", "For a specific CVE, we define our severity forecast score to be the maximum severity score among all tuples ⟨e, t_k⟩ from matched tweets (a single tweet may contain more than one named entity): score_forecast(CVE_i) = max_{t_k ∈ T_CVE_i} p_severe(y | ⟨e, t_k⟩).", "(Readers may wonder why a CVE number can exist before it is officially published in the database. This is due to the mechanism of assigning CVEs: some identified companies have the right to assign CVEs or have already reserved some CVEs. When a threat appears, a CVE number is assigned immediately, before any further evaluation; NVD only officially publishes a threat after all evaluations are completed. Therefore, there is a time delay between the date a CVE entry is established and the official publication date.)", "Tweet volume baseline: Intuitively, the number of tweets and retweets can indicate people's concern about a specific event.", "Specifically, the severity for threat CVE_i according to the volume model is defined by the cardinality of T_CVE_i: score_volume(CVE_i) = |T_CVE_i|.", "In our first set of experiments, we compare our forecasted threat severity scores against CVSS ratings from the NVD.", "We define a threat as being severe if its CVSS score is at least 7.0.", "This cut-off corresponds to the qualitative severity ratings provided by CVSS (marked as HIGH or CRITICAL in Table 6).", "We use the newest v3.0 scoring system, which was developed to improve on v2.0.", "Large software vendors have announced the adoption of the CVSS v3.0 standard, including Cisco, Oracle, SUSE Linux, and RedHat.", "We evaluate our models' performance at identifying severe threats five days ahead of the NVD publication date, within their top k predictions.", "Table 7 shows our results.", "We observe that tweet volume performs 
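The two ranking procedures (max-severity forecast score vs. the tweet-volume baseline) and a Precision@k evaluation against CVSS ground truth can be sketched together as follows. This is an illustrative sketch under stated assumptions: `severity_score` stands in for the trained classifier's p_severe output, and the data structures are hypothetical.

```python
def rank_cves(linked, severity_score):
    """linked: {cve_id: [tweet, ...]} tweets mapped to each CVE.
    severity_score: function tweet -> p_severe in [0, 1].
    Rank CVEs by the max severity score over their linked tweets
    (our model) and by tweet volume (the baseline)."""
    forecast = {cve: max(severity_score(t) for t in tweets)
                for cve, tweets in linked.items()}
    volume = {cve: len(tweets) for cve, tweets in linked.items()}
    by_forecast = sorted(forecast, key=forecast.get, reverse=True)
    by_volume = sorted(volume, key=volume.get, reverse=True)
    return by_forecast, by_volume

def precision_at_k(ranked, is_severe, k=50):
    """Fraction of the top-k ranked CVEs whose NVD CVSS rating is
    severe (is_severe: cve_id -> bool, e.g., CVSS >= 7.0)."""
    top = ranked[:k]
    return sum(1 for c in top if is_severe(c)) / len(top)
```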
better than a random baseline; having a large number of tweets beforehand is a good indicator of high severity. However, our approach, which analyzes the content of messages discussing software vulnerabilities, achieves significantly better performance: 86% of its top 50 forecasts were indeed rated as HIGH or CRITICAL severity in the NVD.", "The Forum of Incident Response and Security Teams (FIRST) also provides an example guideline that recommends patching all vulnerabilities with CVSS scores of at least 7.0; see https://www.first.org/cvss/cvss-based-patch-policy.pdf and https://www.first.org/cvss/user-guide.", "(Table 8 columns: CVE Num / Named Entity; CVE Description / Matched Tweets; CVSS Scores / Our Severity; Publish Date (# Days Ahead).)", "Table 8 presents the top 4 forecast results from our model.", "We observe that our model can predict an accurate severity level even 19 days ahead of the official publication date in NVD (Table 8(a), (c)).", "In addition to comparing our forecasted severity scores against CVSS, as described above, we also explored several alternatives suggested by the security community to evaluate our methods: (1) Symantec's anti-virus (AV) signatures and intrusion-protection (IPS) signatures, in addition to (2) Exploit Database (EDB).", "Sabottke et al. 
(2015) suggested that Symantec's AV signatures (https://www.symantec.com/security-center/a-z) and IPS signatures (https://www.symantec.com/security_response/attacksignatures/) are the best available indicator of real exploitable threats in the wild.", "We follow their method of explicitly querying for CVE numbers in the descriptions of signatures to generate exploited-threat ground truth.", "Exploit Database (EDB, https://www.exploit-db.com/) is an archive of public exploits and software vulnerabilities.", "We query EDB for all threats that have been linked into NVD (http://cve.mitre.org/data/refs/refmap/source-EXPLOIT-DB.html).", "In total, we gathered 134 CVEs verified by Symantec and EDB to be real exploits within the 1,409 CVEs used in our forecasting evaluation.", "We evaluate the number of exploited threats identified within our top-ranked CVEs.", "Table 9 presents our results.", "We observe that 7 of the top 10 threats from our model were exploited in the wild.", "We also observe that for the actual CVSS v3.0 scores, only 1 out of the top 10 vulnerabilities was exploited.", "Finally, we perform an analysis of the reliability of individual Twitter accounts.", "We evaluate all accounts with more than 5 tweets exceeding a 0.5 confidence score from our severity classifier.", "Table 10 presents our results.", "Accounts in our data whose warnings were found to have the highest precision when compared against CVSS include @securityaffairs and @EduardKovacs, which are known to post security-related information, and both have more than 10k followers.", "There is a long history of prior work on analyzing users' opinions online (Wiebe et al., 2004); a large body of prior work has focused on sentiment analysis (Pang et al., 2002; Rosenthal et al., 2015), e.g., determining whether a message is positive or negative.", "In this paper we developed annotated corpora and classifiers to analyze users' opinions toward the severity of cybersecurity threats reported online; as far as we are 
aware, this is the first work to explore this direction.", "Forecasting real-world exploits is a topic of interest in the security community.", "For example, Bozorgi et al. (2010) train SVM classifiers to rank the exploitability of threats.", "Several studies have also predicted CVSS scores from various sources including text descriptions in NVD (Han et al., 2017; Bullough et al., 2017).", "Prior work has also explored a variety of forecasting methods that incorporate textual evidence (Smith, 2010), including the use of Twitter message content to forecast influenza rates (Paul et al., 2014), predicting the propagation of social media posts based on their content (Tan et al., 2014), and forecasting election outcomes (O'Connor et al., 2010; Swamy et al., 2017).", "In this paper, we presented the first study of the connections between the severity of cybersecurity threats and the language that is used to describe them online.", "We annotated a corpus of 6,000 tweets describing software vulnerabilities with authors' opinions toward their severity, and demonstrated that our corpus supports the development of automatic classifiers with high precision for this task.", "Furthermore, we demonstrated the value of analyzing users' opinions about the severity of threats reported online as an early indicator of important software vulnerabilities.", "We presented a simple, yet effective method for linking software vulnerabilities reported in tweets to Common Vulnerabilities and Exposures (CVEs) in the National Vulnerability Database (NVD).", "Using our predicted severity scores, we showed that it is possible to achieve a Precision@50 of 0.86 when forecasting high-severity vulnerabilities, significantly outperforming a baseline that is based on tweet volume.", "Finally, we showed how reports of severe vulnerabilities online are predictive of real-world exploits.", "We thank our anonymous reviewers for their valuable feedback.", "We also thank Tudor Dumitras for helpful discussion on 
identifying real exploited threats.", "Funding was provided by the Office of the Director of National Intelligence (ODNI) and the Intelligence Advanced Research Projects Activity (IARPA) via Air Force Research Laboratory (AFRL) contract number FA8750-16-C0114, in addition to the Defense Advanced Research Projects Agency (DARPA) via the U.S. Army Research Office (ARO) under Contract Number W911NF-17-C-0095, in addition to an Amazon Research Award and an NVIDIA GPU grant.", "The content of the information in this document does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.", "The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation hereon." ]
[ "abstain", "objective", "abstain", "result", "objective", "method", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "result", "abstain", "objective", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "objective", "other", "other", "other", "other", "objective", "objective", "objective", "method", "result", "result", "other", "other", "other", "other", "other" ]
[ "Fine-tuned pre-trained language models (LMs) have achieved enormous success in many natural language processing (NLP) tasks, but they still require excessive labeled data in the fine-tuning stage.", "We study the problem of fine-tuning pre-trained LMs using only weak supervision, without any labeled data.", "This problem is challenging because the high capacity of LMs makes them prone to overfitting the noisy labels generated by weak supervision.", "To address this problem, we develop a contrastive self-training framework, COSINE, to enable fine-tuning LMs with weak supervision.", "Underpinned by contrastive regularization and confidence-based reweighting, our framework gradually improves model fitting while effectively suppressing error propagation.", "Experiments on sequence, token, and sentence pair classification tasks show that our model outperforms the strongest baseline by large margins and achieves competitive performance with fully-supervised fine-tuning methods.", "Our implementation is available at https://github.com/yueyu1030/COSINE.", "Language model (LM) pre-training and fine-tuning achieve state-of-the-art performance in various natural language processing tasks (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2019).", "Such approaches stack task-specific layers on top of pre-trained language models, e.g. 
, BERT (Devlin et al., 2019), then fine-tune the models with task-specific data.", "During fine-tuning, the semantic and syntactic knowledge in the pre-trained LMs is adapted for the target task.", "Despite their success, one bottleneck for fine-tuning LMs is the requirement of labeled data.", "When labeled data are scarce, the fine-tuned models often suffer from degraded performance, and the large number of parameters can cause severe overfitting (Xie et al., 2019).", "To relieve the label scarcity bottleneck, we fine-tune the pre-trained language models with only weak supervision.", "While collecting large amounts of clean labeled data is expensive for many NLP tasks, it is often cheap to obtain weakly labeled data from various weak supervision sources, such as semantic rules (Awasthi et al., 2020).", "For example, in sentiment analysis, we can use the rules 'terrible' → Negative (a keyword rule) and '* not recommend *' → Negative (a pattern rule) to generate large amounts of weak labels.", "Fine-tuning language models with weak supervision is nontrivial.", "Excessive label noise, e.g., wrong labels, and limited label coverage are common and inevitable in weak supervision.", "Although existing fine-tuning approaches (Xu et al., 2020; Zhu et al., 2020; Jiang et al., 2020) improve LMs' generalization ability, they are not designed for noisy data and still easily overfit on the noise.", "Moreover, existing works on tackling label noise are flawed and are not designed for fine-tuning LMs.", "For example, Ratner et al. 
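The keyword and pattern rules above can be sketched as simple labeling functions. This is an illustrative example only; the two rules are the ones named in the text, while the function names and majority-vote tie-breaking are assumptions for the sketch.

```python
import re

# Illustrative weak-supervision rules for sentiment analysis:
# a keyword rule and a pattern rule, both voting for Negative.
RULES = [
    (re.compile(r"\bterrible\b"), "Negative"),          # keyword rule
    (re.compile(r"\bnot recommend\b"), "Negative"),     # pattern rule
]

def weak_label(text):
    """Return the rule-assigned label, or None if no rule matches."""
    votes = [label for pattern, label in RULES if pattern.search(text.lower())]
    if not votes:
        return None  # unmatched sample: left unlabeled
    # majority vote when several rules fire
    return max(set(votes), key=votes.count)
```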
(2020); Varma and Ré (2018) use probabilistic models to aggregate multiple weak supervision sources for denoising, but they generate weak labels in a context-free manner, without using LMs to encode contextual information of the training samples (Aina et al., 2019).", "Other works (Luo et al., 2017; Wang et al., 2019b) focus on noise transitions without explicitly conducting instance-level denoising, and they require clean training samples.", "Although some recent studies (Awasthi et al., 2020; Ren et al., 2020) design labeling-function-guided neural modules to denoise each sample, they require prior knowledge on the weak supervision, which is often infeasible in real practice.", "Self-training (Rosenberg et al., 2005; Lee, 2013) is a proper tool for fine-tuning language models with weak supervision.", "It augments the training set with unlabeled data by generating pseudo-labels for them, which improves the models' generalization power.", "This resolves the limited coverage issue in weak supervision.", "However, one major challenge of self-training is that the algorithm still suffers from error propagation: wrong pseudo-labels can cause model performance to gradually deteriorate.", "We propose a new algorithm, COSINE, that fine-tunes pre-trained LMs with only weak supervision.", "COSINE leverages both weakly labeled and unlabeled data, and suppresses label noise via contrastive self-training.", "Weakly-supervised learning enriches data with potentially noisy labels, and our contrastive self-training scheme fulfills the denoising purpose.", "Specifically, contrastive self-training regularizes the feature space by pulling samples with the same pseudo-labels close while pushing samples with different pseudo-labels apart.", "Such regularization enforces representations of samples from different classes to be more distinguishable, such that the classifier can make better decisions.", "To suppress label noise propagation during contrastive self-training, we propose 
confidence-based sample reweighting and regularization methods.", "The reweighting strategy emphasizes samples with high prediction confidence, which are more likely to be correctly classified, in order to reduce the effect of wrong predictions.", "Confidence regularization encourages smoothness over model predictions, such that no prediction can be over-confident, and therefore reduces the influence of wrong pseudo-labels.", "Our model is flexible and can be naturally extended to semi-supervised learning, where a small set of clean labels is available.", "Moreover, since we do not make assumptions about the nature of the weak labels, COSINE can handle various types of label noise, including biased labels and randomly corrupted labels.", "Biased labels are usually generated by semantic rules, whereas corrupted labels are often produced by crowd-sourcing.", "Our main contributions are: (1) A contrastive-regularized self-training framework that fine-tunes pre-trained LMs with only weak supervision.", "(2) Confidence-based reweighting and regularization techniques that reduce error propagation and prevent over-confident predictions.", "(3) Extensive experiments on 6 NLP classification tasks using 7 public benchmarks, verifying the efficacy of COSINE.", "We highlight that our model achieves competitive performance in comparison with fully-supervised models on some datasets (COSINE is short for Contrastive Self-Training for Fine-Tuning Pretrained Language Models).", "Weak Supervision.", "Instead of using human-annotated data, we obtain labels from weak supervision sources, including keywords and semantic rules.", "From weak supervision sources, each of the input samples x ∈ X is given a label y ∈ Y ∪ {∅}, where Y is the label set and ∅ denotes that the sample is not matched by any rule.", "For samples that are given multiple labels, e.g. 
, matched by multiple rules, we determine their labels by majority voting.", "Problem Formulation.", "We focus on weakly-supervised classification problems in natural language processing.", "We consider three types of tasks: sequence classification, token classification, and sentence pair classification.", "These tasks have a broad scope of applications in NLP, and some examples can be found in Table 1.", "Formally, the weakly-supervised classification problem is defined as the following: Given weakly-labeled samples X_l = {(x_i, y_i)}_{i=1}^{L} and unlabeled samples X_u = {x_j}_{j=1}^{U}, we seek to learn a classifier f(x; θ): X → Y.", "Here X = X_l ∪ X_u denotes all the samples and Y = {1, 2, ..., C} is the label set, where C is the number of classes.", "Our classifier f = g ∘ BERT consists of two parts: BERT is a pre-trained language model that outputs hidden representations of input samples, and g is a task-specific classification head that outputs a C-dimensional vector, where each dimension corresponds to the prediction confidence of a specific class.", "In this paper, we use RoBERTa (Liu et al., 2019) as the realization of BERT.", "The framework of COSINE is shown in Figure 1.", "First, COSINE initializes the LM with weak labels.", "In this step, the semantic and syntactic knowledge of the pre-trained LM is transferred to our model.", "Then, it uses contrastive self-training to suppress label noise propagation and continue training.", "The training procedure of COSINE is as follows.", "Initialization with Weakly-labeled Data.", "We fine-tune f(·; θ) with weakly-labeled data X_l (examples of weak supervision sources are in Appendix A) by 
minimizing the loss min_θ (1/|X_l|) Σ_{(x_i, y_i) ∈ X_l} CE(y_i, f(x_i; θ)) (Eq. 1), where CE(·,·) is the cross-entropy loss.", "We adopt early stopping (Dodge et al., 2020) to prevent the model from overfitting to the label noise.", "However, early stopping causes underfitting, and we resolve this issue by contrastive self-training.", "Contrastive Self-training with All Data.", "The goal of contrastive self-training is to leverage all data, both labeled and unlabeled, for fine-tuning, as well as to reduce the error propagation of wrongly labeled data.", "We generate pseudo-labels for the unlabeled data and incorporate them into the training set.", "To reduce error propagation, we introduce contrastive representation learning (Sec. 3.2) and confidence-based sample reweighting and regularization (Sec. 3.3).", "We update the pseudo-labels (denoted by ỹ) and the model iteratively.", "The procedures are summarized in Algorithm 1.", "Update ỹ with the current θ.", "To generate the pseudo-label for each sample x ∈ X, one straightforward way is to use hard labels (Lee, 2013): ỹ_hard = argmax_{j ∈ Y} [f(x; θ)]_j.", "(2) Notice that f(x; θ) ∈ R^C is a probability vector and [f(x; θ)]_j indicates the j-th entry of it.", "However, these hard pseudo-labels only keep the most likely class for each sample and result in the propagation of labeling mistakes.", "For example, if a sample is mistakenly classified to a wrong class, assigning a 0/1 label complicates model updating (Eq. 
4), in that the model is fitted on erroneous labels.", "To alleviate this issue, for each sample x in a batch B, we generate soft pseudo-labels (Xie et al., 2016, 2019; Meng et al., 2020; Liang et al., 2020) ỹ ∈ R^C based on the current model as ỹ_j = ([f(x; θ)]_j^2 / f_j) / (Σ_{j' ∈ Y} [f(x; θ)]_{j'}^2 / f_{j'}), (3) where f_j = Σ_{x' ∈ B} [f(x'; θ)]_j is the sum over soft frequencies of class j.", "The non-binary soft pseudo-labels guarantee that, even if our prediction is inaccurate, the error propagated to the model update step will be smaller than when using hard pseudo-labels.", "Update θ with the current ỹ.", "We update the model parameters by minimizing L(θ; ỹ) = L_c(θ; ỹ) + R_1(θ; ỹ) + λ R_2(θ), (4) where L_c is the classification loss (Sec. 3.3), R_1(θ; ỹ) is the contrastive regularizer (Sec. 3.2), R_2(θ) is the confidence regularizer (Sec. 3.3), and λ is the hyper-parameter for the regularization.", "3.2 Contrastive Learning on Sample Pairs: The key ingredient of our contrastive self-training method is to learn representations that encourage data within the same class to have similar representations and keep data in different classes separated.", "Specifically, we first select high-confidence samples (Sec. 3.3) C from X.", "Then for each pair x_i, x_j ∈ C, we define their similarity as W_ij = 1 if argmax_{k ∈ Y} [ỹ_i]_k = argmax_{k ∈ Y} [ỹ_j]_k, and W_ij = 0 otherwise, (5) where ỹ_i, ỹ_j are the soft pseudo-labels (Eq. 
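The soft pseudo-label computation of Eq. 3 can be sketched as follows. This is a minimal illustration, assuming the batch predictions are already row-normalized probabilities; it is not the authors' implementation.

```python
def soft_pseudo_labels(probs):
    """probs: list of per-sample probability rows f(x; theta) over C classes.
    Implements Eq. 3: square each prediction, divide by the per-class
    soft frequency f_j, then renormalize each row to sum to 1."""
    num_classes = len(probs[0])
    # f_j: sum of predicted probabilities for class j over the batch
    freq = [sum(row[j] for row in probs) for j in range(num_classes)]
    labels = []
    for row in probs:
        weighted = [row[j] ** 2 / freq[j] for j in range(num_classes)]
        z = sum(weighted)
        labels.append([w / z for w in weighted])
    return labels
```

Squaring sharpens confident predictions while the frequency term discourages collapse onto frequent classes, which is the intent of the target distribution it follows.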
3) for x_i, x_j, respectively.", "For each x ∈ C, we calculate its representation v = BERT(x) ∈ R^d, then we define the contrastive regularizer as R_1(θ; ỹ) = Σ_{(x_i, x_j) ∈ C×C} ℓ(v_i, v_j, W_ij), (6) where ℓ = W_ij d_ij^2 + (1 - W_ij)[max(0, γ - d_ij)]^2.", "(7) Here, ℓ(·,·,·) is the contrastive loss (Chopra et al., 2005; Taigman et al., 2014), d_ij is the distance between v_i and v_j, and γ is a pre-defined margin.", "For samples from the same class, i.e., W_ij = 1, Eq. 7 penalizes large distances between them; for samples from different classes, the contrastive loss is large if their distance is small.", "In this way, the regularizer enforces similar samples to be close, while keeping dissimilar samples apart by at least γ.", "Figure 2 illustrates the contrastive representations.", "We can see that our method produces clear inter-class boundaries and small intra-class distances, which eases the classification tasks.", "While contrastive representations yield better decision boundaries, they require samples with high-quality pseudo-labels.", "In this section, we introduce reweighting and regularization methods to suppress error propagation and refine pseudo-label quality.", "Sample Reweighting.", "In the classification task, samples with high prediction confidence are more likely to be classified correctly than those with low confidence.", "Therefore, we further reduce label noise propagation by a confidence-based sample reweighting scheme.", "For each sample x with the soft pseudo-label ỹ, we assign x a weight ω(x) defined by ω(x) = 1 - H(ỹ)/log(C), H(ỹ) = -Σ_{i=1}^{C} ỹ_i log ỹ_i, (8) where 0 ≤ H(ỹ) ≤ log(C) is the entropy of ỹ.", "Notice that if the prediction confidence is low, then H(ỹ) will be large and the sample weight ω(x) will be small, and vice versa.", "We use a pre-defined threshold ξ to select high-confidence samples C from each batch B as C = 
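The pairwise contrastive loss of Eqs. 6-7 can be sketched as follows. This is an illustrative reconstruction assuming Euclidean distance for d_ij (the paper compares several distance metrics in its appendix); names and the default margin are assumptions for the example.

```python
import math

def contrastive_loss(v_i, v_j, same_class, margin=1.0):
    """Eq. 7: pull same-pseudo-label pairs together; push different
    pairs at least `margin` apart (Euclidean distance assumed)."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v_i, v_j)))
    if same_class:                      # W_ij = 1
        return d ** 2
    return max(0.0, margin - d) ** 2    # W_ij = 0

def contrastive_regularizer(vectors, hard_labels, margin=1.0):
    """Eq. 6: sum the pairwise loss over all pairs of high-confidence
    samples; hard_labels are the argmax pseudo-labels defining W_ij."""
    return sum(contrastive_loss(vectors[i], vectors[j],
                                hard_labels[i] == hard_labels[j], margin)
               for i in range(len(vectors))
               for j in range(len(vectors)))
```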
{ x ∈ B | ω(x) ≥ ξ }.", "(9) Then we define the loss function as L_c(θ, ỹ) = (1/|C|) Σ_{x ∈ C} ω(x) D_KL(ỹ ∥ f(x; θ)), (10) where D_KL(P ∥ Q) = Σ_k p_k log(p_k / q_k) (11) is the Kullback-Leibler (KL) divergence.", "Confidence regularization: The sample reweighting approach promotes high-confidence samples during contrastive self-training.", "However, this strategy relies on wrongly-labeled samples having low confidence, which may not be true unless we prevent over-confident predictions.", "To this end, we propose a confidence-based regularizer that encourages smoothness over predictions, defined as R_2(θ) = (1/|C|) Σ_{x ∈ C} D_KL(u ∥ f(x; θ)), (12) where D_KL is the KL-divergence and u_i = 1/C for i = 1, 2, ..., C.", "Such a term constitutes a regularization that prevents over-confident predictions and leads to better generalization (Pereyra et al., 2017).", "Datasets and Tasks.", "We conduct experiments on 6 NLP classification tasks using 7 public benchmarks: AGNews (Zhang et al., 2015) is a Topic Classification task; IMDB (Maas et al., 2011) and Yelp (Meng et al., 2018) are Sentiment Analysis tasks; TREC (Voorhees and Tice, 1999) is a Question Classification task; MIT-R (Liu et al., 2013) is a Slot Filling task; Chemprot (Krallinger et al., 2017) is a Relation Classification task; and WiC (Pilehvar and Camacho-Collados, 2019) is a Word Sense Disambiguation (WSD) task.", "The dataset statistics are summarized in Table 2. 
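The entropy-based weighting of Eq. 8 and the threshold selection of Eq. 9 can be sketched as follows. This is an illustrative sketch; the 0.5 threshold default is an assumption for the example, not the paper's tuned value.

```python
import math

def confidence_weight(y_soft):
    """Eq. 8: weight = 1 - H(y)/log(C); near 1 for confident
    (low-entropy) pseudo-labels, near 0 for uncertain ones."""
    eps = 1e-12
    entropy = -sum(p * math.log(p + eps) for p in y_soft)
    return 1.0 - entropy / math.log(len(y_soft))

def select_high_confidence(batch_soft_labels, threshold=0.5):
    """Eq. 9: keep indices whose weight meets the threshold xi."""
    return [i for i, y in enumerate(batch_soft_labels)
            if confidence_weight(y) >= threshold]
```

The selected samples then enter the weighted KL-divergence loss of Eq. 10, so uncertain pseudo-labels contribute little or nothing to the model update.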
More details on datasets and weak supervision sources are in Appendix A.", "Baselines.", "We compare our model with different groups of baseline methods:", "(i) Exact Matching (ExMatch): The test set is directly labeled by weak supervision sources.", "(ii) Fine-tuning Methods: The second group of baselines are fine-tuning methods for LMs: RoBERTa (Liu et al., 2019) uses the RoBERTa-base model with task-specific classification heads.", "Self-ensemble (Xu et al., 2020) uses self-ensembling and distillation to improve performance.", "FreeLB (Zhu et al., 2020) adopts adversarial training to enforce smooth outputs.", "Mixup (Zhang et al., 2018) creates virtual training samples by linear interpolation.", "SMART (Jiang et al., 2020) adds adversarial and smoothness constraints to fine-tune LMs and achieves state-of-the-art results for many NLP tasks.", "(iii) Weakly-supervised Models: The third group of baselines are weakly-supervised models: Snorkel (Ratner et al., 2020) aggregates different labeling functions based on their correlations.", "WeSTClass (Meng et al., 2018) trains a classifier with generated pseudo-documents and uses self-training to bootstrap over all samples.", "ImplyLoss (Awasthi et al., 2020) co-trains a rule-based classifier and a neural classifier to denoise.", "Denoise (Ren et al., 2020) uses an attention network to estimate the reliability of weak supervision sources, and then reduces the noise by aggregating weak labels.", "UST (Mukherjee and Awadallah, 2020) is the state of the art for self-training with limited labels.", "It estimates uncertainties via MC-dropout (Gal and Ghahramani, 2015), and then selects samples with low uncertainty for self-training.", "Evaluation Metrics.", "We use classification accuracy on the test set as the evaluation metric for all datasets except MIT-R.", "MIT-R contains a large number of tokens that are labeled as Others.", 
"We use the micro F1 score over the other classes for this dataset.", "Auxiliary.", "We implement COSINE using PyTorch, and we use RoBERTa-base as the pretrained LM.", "Datasets and weak supervision details are in Appendix A. Baseline settings are in Appendix B. Training details and setups are in Appendix C. Discussions on early stopping are in Appendix D. Comparisons of distance metrics and similarity measures are in Appendix E. 4.1 Learning From Weak Labels. We summarize the weakly-supervised learning results in Table 3. On all the datasets, COSINE outperforms all the baseline models.", "A special case is the WiC dataset, where we use WordNet to generate weak labels.", "However, this enables Snorkel to access some labeled data in the development set, making it unfair to compete against other methods.", "We will discuss more about this dataset in Sec. 4.3.", "In comparison with directly fine-tuning the pretrained LMs with weakly-labeled data, our model employs an early stopping technique so that it does not overfit to the label noise.", "As shown, Init indeed achieves better performance, and it serves as a good initialization for our framework.", "Other fine-tuning methods and weakly-supervised models either cannot harness the power of pretrained language models, e.g., Snorkel, or rely on clean labels, e.g., the other baselines.", "We highlight that although UST, the state-of-the-art method to date, achieves strong performance under few-shot settings, its approach cannot estimate confidence well with noisy labels, and this yields inferior performance.", "Our model can gradually correct wrong pseudo-labels and mitigate error propagation via contrastive self-training.", "Footnote 7: The Chemprot dataset also contains the Others type, but such instances are few, so we still use accuracy as the metric.", "Footnote 8: https://pytorch.org/. Footnote 9: https://wordnet.princeton.edu/. Footnote 10: We discuss this technique in Appendix D. 
Dataset | Task | Class | #Train | #Dev | #Test | Cover | Accuracy
AGNews | Topic | 4 | 96k | 12k | 12k | 56.4 | 83.1
IMDB | Sentiment | 2 | 20k | 2.5k | 2.5k | 87.5 | 74.5
Yelp | Sentiment | 2 | 30.4k | 3.8k | 3.8k | 82.8 | 71.5
MIT-R | Slot Filling | 9 | 6.6k | 1.0k | 1.5k | 13.5 | 80.7
TREC | Question | 6 | 4.8k | 0.6k | 0.6k | 95.0 | 63.8
Chemprot | Relation | 10 | 12.6k | 1.6k | 1.6k | 85.9 | 46.5
WiC | WSD | 2 | 5.4k | 0.6k | 1.4k | 63.4 | 58.8
Table 2: Dataset statistics.", "On AGNews, IMDB, Yelp, and WiC, our model achieves the same level of performance as models (RoBERTa-CL) trained with clean labels.", "This makes COSINE appealing in the scenario where only weak supervision is available.", "Our model is robust against excessive label noise.", "We corrupt a certain percentage of labels by randomly changing each of them to another class.", "This is a common scenario in crowd-sourcing, where we assume human annotators mis-label each sample with the same probability.", "Figure 3 summarizes the experiment results on the TREC dataset.", "Model | Dev | Test | #Params
Human Baseline | 80.0 | - | -
BERT (Devlin et al., 2019) | - | 69.6 | 335M
RoBERTa (Liu et al., 2019) | 70.5 | 69.9 | 356M
T5 (Raffel et al., 2019) | - | 76.9 | 11,000M
Semi-Supervised Learning:
SenseBERT (Levine et al., 2020) | - | 72.1 | 370M
RoBERTa-WL (Liu et al., 2019) | 72.3 | 70.2 | 125M
w/ MT (Tarvainen and Valpola, 2017) | 73.5 | 70.9 | 125M
w/ VAT (Miyato et al., 2018) | 74.2 | 71.2 | 125M
w/ COSINE | 76.0 | 73.2 | 125M
Transductive Learning:
Snorkel (Ratner et al., 2020) | 80.5 | - | 1M
RoBERTa-WL (Liu et al., 2019) | 81.3 | 76.8 | 125M
w/ MT (Tarvainen and Valpola, 2017) | 82.1 | 77.1 | 125M
w/ VAT (Miyato et al., 2018) | 84.9 | 79.5 | 125M
w/ COSINE | 89.5 | 85.3 | 125M
Table 4: Semi-supervised Learning on WiC.", "Compared with advanced fine-tuning and self-training methods (e.g. 
SMART and UST), our model consistently outperforms the baselines.", "We can naturally extend our model to semi-supervised learning, where clean labels are avail-", "able for a portion of the data.", "We conduct experiments on the WiC dataset.", "As a part of the SuperGLUE (Wang et al., 2019a) benchmark, this dataset poses a challenging task: models need to determine whether the same word in different sentences has the same sense (meaning).", "Different from previous tasks, where the labels in the training set are noisy, in this part we utilize the clean labels provided by the WiC dataset.", "We further augment the original training data of WiC with unlabeled sentence pairs obtained from lexical databases (e.g., WordNet, Wiktionary).", "Note that part of the unlabeled data can be weakly-labeled by rule matching.", "This essentially creates a semi-supervised task, where we have labeled data, weakly-labeled data, and unlabeled data.", "Since the weak labels of WiC are generated by WordNet and partially reveal the true label information, Snorkel (Ratner et al., 2020) takes this unfair advantage by accessing the unlabeled sentences and weak labels of the validation and test data.", "To make a fair comparison to Snorkel, we consider the transductive learning setting, where we are allowed access to the same information by integrating unlabeled validation and test data and their weak labels into the training set.", "As shown in Table 4, COSINE with transductive learning achieves better performance than Snorkel.", "Moreover, in comparison with semi-supervised baselines (i.e., VAT and MT) and fine-tuning methods with extra resources (i.e. 
, SenseBERT), COSINE achieves better performance in both semi-supervised and transductive learning settings.", "Error propagation mitigation and wrong-label correction.", "Figure 4 visualizes this process.", "Before training, the semantic rules make noisy predictions.", "After the initialization step, model predictions are less noisy but more biased, e.g., many samples are mis-labeled as Amenity.", "These predictions are further refined by contrastive self-training.", "The rightmost figure demonstrates wrong-label correction.", "Samples are indicated by radii of the circle, and classification correctness is indicated by color, i.e., blue means correct and orange means incorrect.", "From inner to outer, the tori specify classification accuracy after the initialization stage and after iterations 1, 2, and 3.", "We can see that many incorrect predictions are corrected within three iterations.", "To illustrate: the right black dashed line means the corresponding sample is classified correctly after the first iteration, and the left dashed line indicates a case where the sample is mis-classified after the second iteration but corrected after the third.", "These results demonstrate that our model can correct wrong predictions via contrastive self-training.", "Better data representations.", "We visualize sample embeddings in Fig. 7. By incorporating the contrastive regularizer $R_1$, our model learns more compact representations for data in the same class, e.g., the green class, and also extends the inter-class distances, e.g. 
, the purple class is more separable from other classes in Fig.", "7(b) than in Fig.", "7(a).", "Label efficiency.", "Figure 8 illustrates the number of clean labels needed for the supervised model to outperform COSINE.", "On both of the datasets, the supervised model requires a significant number of clean labels (around 750 for AGNews and 120 for MIT-R) to reach the same level of performance as ours, whereas our method assumes no clean samples.", "Higher Confidence Indicates Better Accuracy.", "Figure 6 demonstrates the relation between prediction confidence and prediction accuracy on IMDB.", "We can see that, in general, samples with higher prediction confidence yield higher prediction accuracy.", "With our sample reweighting method, we gradually filter out low-confidence samples and assign higher weights to the others, which effectively mitigates error propagation.", "Components of COSINE.", "We inspect the importance of various components, including the contrastive regularizer $R_1$, the confidence regularizer $R_2$, the sample reweighting (SR) method, and the soft labels.", "Table 5 summarizes the results and Fig. 9 visualizes the learning curves.", "We remark that all the components jointly contribute to the model performance, and removing any of them hurts the classification accuracy.", "For example, sample reweighting is an effective tool to reduce error propagation, and removing it causes the model to eventually overfit to the label noise, e.g., the bottom red line in Fig. 9 shows the classification accuracy increasing and then dropping rapidly.", "On the other hand, replacing the soft pseudo-labels (Eq. 3) with their hard counterparts (Eq. 2) causes drops in performance.", "This is because hard pseudo-labels lose prediction confidence information.", "Hyper-parameters of COSINE.", "In Fig. 5, we examine the effects of different hyper-parameters, including the confidence threshold (Eq. 
9), the [Figure 4: Classification performance on MIT-R.]", "stopping time $T_1$ in the initialization step, and the update period $T_3$ for pseudo-labels.", "From Fig.", "5(a), we can see that setting the confidence threshold too high hurts model performance, because an over-conservative selection strategy can result in an insufficient amount of training data.", "The stopping time $T_1$ has drastic effects on the model.", "This is because fine-tuning COSINE with weak labels for excessive steps causes the model to unavoidably overfit to the label noise, such that the contrastive self-training procedure cannot correct the", "error.", "Also, as the update period $T_3$ of the pseudo-labels increases, model performance first increases and then decreases.", "This is because if we update pseudo-labels too frequently, the contrastive self-training procedure cannot fully suppress the label noise, and if the updates are too infrequent, the pseudo-labels cannot capture the updated information well.", "Fine-tuning Pre-trained Language Models.", "To improve the model's generalization power during the fine-tuning stage, several methods have been proposed (Peters et al., 2019; Dodge et al., 2020; Zhu et al., 2020; Jiang et al., 2020; Xu et al., 2020; Kong et al., 2020; Zhao et al., 2020; Gunel et al., 2021; Zhang et al., 2021; Aghajanyan et al., 2021; Wang et al., 2021). However, most of these methods focus on the fully-supervised setting and rely heavily on large amounts of clean labels, which are not always available.", "To address this issue, we propose a contrastive self-training framework that fine-tunes pre-trained models with only weak labels.", "Compared with the existing fine-tuning approaches (Xu et al., 2020; Zhu et al., 2020; Jiang et al., 2020), our model effectively reduces the label noise and achieves better performance on various NLP tasks with weak supervision.", "Learning From 
Weak Supervision.", "In weakly-supervised learning, the training data are usually noisy and incomplete.", "Existing methods aim to denoise the sample labels or the labeling functions by, for example, aggregating multiple weak supervision sources (Ratner et al., 2020; Lison et al., 2020; Ren et al., 2020), using clean samples (Awasthi et al., 2020), and leveraging contextual information (Mekala and Shang, 2020).", "However, most of them can only use a specific type of weak supervision for a specific task, e.g., keywords for text classification (Meng et al., 2020; Mekala and Shang, 2020), and they require prior knowledge of the weak supervision sources (Awasthi et al., 2020; Lison et al., 2020; Ren et al., 2020), which limits the scope of their applications.", "Our work is orthogonal to them, since we do not denoise the labeling functions directly.", "Instead, we adopt contrastive self-training to leverage the power of pretrained language models for denoising, which is task-agnostic and applicable to various NLP tasks with minimal additional effort.", "Adaptation of LMs to Different Domains.", "When fine-tuning LMs on data from different domains, we can first continue pre-training on in-domain text data for better adaptation (Gururangan et al., 2020).", "For some rare domains where BERT trained on general domains is not optimal, we can use LMs pretrained on those specific domains (e.g. 
BioBERT (Lee et al., 2020), SciBERT (Beltagy et al., 2019)) to tackle this issue.", "Scalability of Weak Supervision.", "COSINE can be applied to tasks with a large number of classes.", "This is because rules can be generated automatically rather than hand-crafted.", "For example, we can use label names/descriptions as weak supervision signals (Meng et al., 2020).", "Such signals are easy to obtain and do not require hand-crafted rules.", "Once weak supervision is provided, we can create weak labels and then apply COSINE.", "Flexibility.", "COSINE can handle tasks and weak supervision sources beyond our conducted experiments.", "For example, other than semantic rules, crowd-sourcing can be another weak supervision source for generating pseudo-labels (Wang et al., 2013).", "Moreover, we only conduct experiments on several representative tasks, but our framework can be applied to other tasks as well, e.g., named-entity recognition (token classification) and reading comprehension (sentence pair classification).", "In this paper, we propose a contrastive regularized self-training framework, COSINE, for fine-tuning pre-trained language models with weak supervision.", "Our framework can learn better data representations to ease the classification task, and also efficiently reduces label noise propagation via confidence-based reweighting and regularization.", "We conduct experiments on various classification tasks, including sequence classification, token classification, and sentence pair classification, and the results demonstrate the efficacy of our model.", "label scarcity issue by combining neural nets with weak supervision.", "The weak supervision provides a simple but flexible language to encode domain knowledge and capture the correlations between features and labels.", "When combined with unlabeled data, our framework can largely tackle the label scarcity bottleneck for training DNNs, enabling them to be applied to downstream NLP classification tasks in a label-
efficient manner.", "COSINE neither introduces any social/ethical bias into the model nor amplifies any bias in the data.", "In all the experiments, we use publicly available data, and we build our algorithms using public code bases.", "We do not foresee any direct social consequences or ethical issues.", "We thank the anonymous reviewers for their feedback.", "This work was supported in part by the National Science Foundation award III-2008334, an Amazon Faculty Award, and a Google Faculty Award." ]
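The overall training procedure described in the COSINE sentences above (initialize on weak labels with early stopping for $T_1$ steps, then alternate pseudo-label refreshes every $T_3$ steps with confidence-based sample reweighting) can be outlined as follows. This is an illustrative sketch, not the authors' released code; the model interface (train_step, predict, confidence) and the flat data representation are assumptions.

```python
def cosine_training(model, weakly_labeled, unlabeled, t1, t3, num_iters, threshold):
    """Illustrative outline of the COSINE framework:
    (1) early-stopped initialization on weakly-labeled data (t1 steps);
    (2) contrastive self-training: refresh soft pseudo-labels every t3
        steps and train only on high-confidence, confidence-weighted samples."""
    # Step 1: initialize with weak labels, stopping early to avoid
    # overfitting to label noise.
    for _ in range(t1):
        model.train_step(weakly_labeled)

    data = weakly_labeled + unlabeled
    pseudo = None
    for step in range(num_iters):
        if step % t3 == 0:
            # Periodically refresh soft pseudo-labels with the current model.
            pseudo = [model.predict(x) for x in data]
        # Keep only the high-confidence set C = {x : psi(x) >= xi},
        # weighting each kept sample by its confidence psi(x).
        batch = [(x, y, model.confidence(x))
                 for x, y in zip(data, pseudo)
                 if model.confidence(x) >= threshold]
        model.train_step(batch)
    return model
```

The early stopping in step 1 and the infrequent pseudo-label updates in step 2 mirror the paper's observations that both $T_1$ and $T_3$ strongly affect how well error propagation is suppressed.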
[ "abstain", "method", "abstain", "objective", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "result", "other", "abstain", "other", "abstain", "other", "method", "method", "other", "method", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", 
"result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "objective", "abstain", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other" ]
[ "Abstract Toxic language detection systems often falsely flag text that contains minority group mentions as toxic, as those groups are often the targets of online hate.", "Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language.", "To help mitigate these issues, we create TOXIGEN , a new large-scale and machine-generated dataset of 274k toxic and benign statements about 13 minority groups.", "We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pretrained language model (Brown et al., 2020).", "Controlling machine generation in this way allows TOXIGEN to cover implicitly toxic text at a larger scale, and about more demographic groups, than previous resources of human-written text.", "We conduct a human evaluation on a challenging subset of TOXIGEN and find that annotators struggle to distinguish machine-generated text from human-written language.", "We also find that 94.5% of toxic examples are labeled as hate speech by human annotators.", "Using three publicly-available datasets, we show that fine-tuning a toxicity classifier on our data improves its performance on human-written data substantially.", "We also demonstrate that TOXIGEN can be used to fight machine-generated toxicity, as fine-tuning improves the classifier significantly on our evaluation subset.", "Toxic language detectors often over-rely on minority identity mentions when flagging a statement as toxic, without considering the deeper semantic meaning of the statement (Dixon et al., 2018; Röttger et al., 2021).", "This can lead to severe under-detection of subtle hate (e.g., They have been bred to be good at sports and entertainment, but not much else; Figure 1) and over-detection of benign statements (e.g., child abuse is wrong, racism is wrong, sexism is wrong; Figure 1).", "Footnote 1: In this work, we use minority to refer to social and demographic groups that are frequently the targets of oppression, discrimination, or prejudice (RWJF, 2017), from a U.S. socio-cultural perspective.", "Importantly, such biases in toxicity detection risk further marginalizing or censoring minority groups (Yasin, 2018; Sap et al., 2019; Dias Oliva et al., 2020; Are, 2020; Díaz and Hecht-Felella, 2021).", "We introduce TOXIGEN , a large-scale machine-generated dataset of 274,186 toxic and benign statements.", "To create this dataset, we leverage the massive pretrained language model GPT-3 (Brown et al., 2020), which is known to produce close-to-human-like text (Clark et al., 2021; Dou et al., 2021) but also easily generates socially biased and toxic content (Sheng et al., 2019; Gehman et al., 2020).", "While such human-like bias and toxicity pose real threats, we use this undesirable behavior in models like GPT-3 to improve existing toxic language classifiers, providing a path forward for mitigating systemic bias.", "Created using demonstration-based prompting and pretrained toxicity classifiers, TOXIGEN covers over 135k toxic and 135k benign statements about 13 minority identity groups (e.g., African Americans, women, LGBTQ+ folks, etc.).", "Using this machine-generation approach has two advantages over scraping posts from the web as done by previous work (e.g., Davidson et al., 2017; Founta et al., 2018; Zampieri et al., 2019).", "First, it allows us to limit spurious identity-toxicity correlations (Dixon et al., 2018; Zhou et al., 2021) by generating equal numbers of toxic/benign statements for each demographic group, including those that are often overlooked in toxic language corpora (e.g., Native Americans).", "Second, machine generation and careful prompting enable us to generate implicit toxicity (i.e., without swearwords or slurs), which is by definition hard to detect or find and thus often missing in toxic language corpora (Wiegand et 
al., 2021).", "Indeed, 98.2% of TOXIGEN statements are implicit, i.e., devoid of explicit profanity. [Figure 1: Example generator statements with classifier hate scores, e.g., You can tell if a woman is a good person or not by how much she covers her skin (10% hate); They have been bred to be good at sports and entertainment, but not much else.]", "To generate a challenging subset of TOXIGEN , we introduce ALICE , an adversarial classifier-in-the-loop decoding algorithm.", "We use ALICE to control the toxicity of output text by pitting a toxicity classifier against a text generator during beam search decoding.", "Given a toxic prompt, we can encourage generations to be less toxic based on the classifier scores.", "Similarly, we can steer a language model with neutral prompting towards higher-toxicity generations.", "Our experiments with five publicly-available toxicity classifiers show that the generated sentences in both cases above fool toxicity classifiers (see Figure 1).", "We validate the quality of our machine-generated dataset through a comprehensive human evaluation.", "Our results show that on a sample of 792 machine-generated sentences, 90% could be mistaken for human-written text.", "We also find that the generated data indeed contains a wide variety of specific references to the minority groups mentioned in the prompts (4.2).", "This indicates that our data generation approaches (with or without ALICE ) successfully control the generation towards the desired toxicity and minority group mention.", "Footnote 4: Delphi does not produce toxicity probabilities, so we use OpenAI's content filter to game Delphi.", "A Delphi author has confirmed probabilities will be available soon.", "fine-tuning existing classifiers on TOXIGEN consistently improves performance (+7-19%) on 3 existing human-written 
implicit toxic datasets: ImplicitHateCorpus (ElSherief et al., 2021), SocialBiasFrames (Sap et al., 2020), and DynaHate (Vidgen et al., 2021).", "This indicates that the dataset generated in this work and the approaches for generating data provide major steps towards improving toxicity classifiers, and could potentially be used downstream to address the issues from biased machine generation (Sheng et al., 2019) or neutral toxic degeneration (Gehman et al., 2020).", "Detecting implicit toxicity about minority groups (e.g., stereotyping, microaggressions) remains an elusive goal for NLP systems (Han and Tsvetkov, 2020; Wiegand et al., 2021).", "One key challenge is that, in contrast to explicit toxicity, implicit toxicity is not marked by the use of profanity or swearwords, is sometimes positive in sentiment, and is generally harder to detect or collect at scale (MacAvaney et al., 2019; Breitfeller et al., 2019).", "Nonetheless, implicitly toxic language about minority or marginalized groups is often psychologically damaging to members of those groups (Sue et al., 2007; Nadal et al., 2014; Kanter et al., 2017; Nadal, 2018; Saleem and Anderson, 2013) and can reinforce stereotypical or hateful perceptions of them (Behm-Morawitz and Mastro, 2008; Soral et al., 2018).", "Footnote 3: https://github.com/microsoft/ToxiGen", "Dataset | Source | Size | % Implicit | % Hate
Breitfeller et al. (2019) | Reddit | 2,934 | 99.4 | 100.0
TweetBLM (Kumar and Pranesh, 2021) | Twitter | 9,165 | 99.0 | 33.7
de Gibert et al. (2018) | StormFront | 9,916 | 92.2 | 11.3
Waseem (2016) | Twitter | 16,914 | 82.4 | 31.7
ImplicitHateCorpus (ElSherief et al., 2021) | Twitter | 22,584 | 96.8 | 39.6
Davidson et al. (2017) | Twitter | 24,802 | 30.2 | 5.0
Kennedy et al. (2018) | Hate Forums | 27,665 | 71.8 | 9.1
DynaHate (Vidgen et al., 2021) | Human-Machine Adv. | 41,134 | 83.3 | 53.9
SocialBiasFrames (Sap et al., 2020) | Social Media | 44,671 | 71.5 | 44.8
Founta et al. (2018) | Twitter | 80,000 | 26.1 | 7.5
TOXIGEN (ours) | GPT-3 | 274,186 | 98.2 | 50.1
Table 1: Comparing toxic language datasets.", "A second challenge for detecting subtle toxicity about minority groups is that minority mentions are more often the targets of social biases and toxicity (Hudson, 2017).", "As such, minority mentions often co-occur with toxicity labels in datasets scraped from online platforms (Dixon et al., 2018).", "For example, over 93% of mentions of Jewish folk in Sap et al. (2020) are toxic (Wiegand et al., 2021).", "In turn, models trained on such data can exploit these spurious minority-toxicity correlations instead of considering the deeper semantics of text (Zhou et al., 2021).", "Importantly, the spurious correlations are also learned by large language models, which are known to produce stereotypical, biased, or toxic content when prompted with minority mentions (Sheng et al., 2019).", "Given that the main mitigation approach to prevent Large Language Models (LLMs) from generating toxic language is to train new classifiers to detect such language, these classifiers also learn the spurious correlations and start blocking most language referencing minority groups.", "This risks erasure (Xu et al., 2021).", "With TOXIGEN , we aim to generate a large-scale dataset that represents implicit toxicity while balancing toxic and benign statements, addressing the gaps in previous work.", "As shown in Table 1, existing datasets contain large amounts of explicit toxicity.", "While valuable, most previous work has relied on scraping data from online platforms, which leads to dataset imbalances with respect to minority-mentioning posts that are toxic vs. 
benign.", "Examples are collected at scale using keyword-based scraping approaches (Waseem, 2016; Davidson et al., 2017; Zampieri et al., 2019), bootstrapped scraping approaches (Founta et al., 2018), and machine-vs-human adversarial data collection (Dinan et al., 2019; Vidgen et al., 2021), among others.", "In contrast, using large language models to generate our dataset allows us to control the minority groups mentioned in our statements, as well as their implicitness, at larger scale.", "To create TOXIGEN , we use demonstration-based prompting for LLMs, encouraging a text generator to produce both toxic and benign sentences that mention minority groups without using explicit language.", "We introduce a classifier-in-the-loop decoding method based on constrained beam search, ALICE , which, along with samples generated without ALICE , contributes to generating a challenging subset of TOXIGEN .", "Using these methods, we generate a massive set of statements (over 274,000) containing equal numbers of toxic and benign sentences for 13 identity groups; see Table 2.", "TOXIGEN is generated by prompting a language model to produce both benign and toxic sentences that (1) include mentions of minority groups by name and (2) contain mainly implicit language, which does not include profanity or slurs.", "To achieve this, we perform demonstration-based prompt engineering: acquiring example sentences, 
prompting (Gao et al., 2021; Mishra et al., 2021).", "Here, example statements are passed to an LLMs, encouraging it to produce a similar, but distinct, statement.", "For example, to encourage LLMs to produce benign statements about Asian people, we might use the following examples: (1) many impactful people are of asian descent , (2) there are hundreds of asian cultures .", "To engineer prompts that lead to high-quality, group-mentioning toxic and benign statements at scale, we first gather and curate sets of examples.", "lect many examples.", "Intuitively, given many examples of benign sentences that mention one particular group, a language model can be used to produce more.", "For benign prompts, we encourage realistic text generation and include diverse voices by collecting benign sentences from blog posts and news articles that mention a group.", "However, finding large amounts of such data at scale is challenging this is why implicit datasets are hard to acquire.", "To build a large enough set of demonstrations, we begin with a small number of examples from the wild, then engage a human-in-the-loop process: collect some demonstrations, pass them to our LLM, comb through many responses, and add the best examples to a growing set.", "Ensuring that a set of examples consistently produces benign responses that still mention the targeted minority group is challenging and so we iterate this loop many times, sampling random subsets of our examples to serve as prompts and observing the responses.", "This way, we collect 20-50 demonstration sentences per group, all of which we release.", "To encourage implicit toxicity from a LLM, we find examples of human-written sentences with implicit toxicity towards each group from hate forums (de Gibert et al., 2018) and Reddit (Breitfeller et al., 2019).", "We repeat the human-in-the-loop process to expand our sets of examples.", "Overall, by repeating this process for both toxic and benign examples for all 13 target groups, we 
create 26 sets of prompts.", "Demonstration-based prompting alone consistently produces toxic and benign statements about minority groups (see Section 4).", "There is no guarantee that these statements will be challenging to existing toxicity detectors.", "Therefore, we also develop ALICE , a variant of constrained beam search (CBS; Anderson et al., 2017; Hokamp and Liu, 2017; Holtzman et al., 2018; Lu et al., 2021) during decoding that generates statements that are adversarial to a given pre-trained toxicity classifier.", "ALICE creates an adversarial game between a pre-trained language model (PLM) and a toxicity classifier (CLF) during constrained beam search decoding.", "In many CBS settings, constraints are added during beam search decoding to force the model to either include or exclude a specific word or group of words in the output (Anderson et al., 2017; Hokamp and Liu, 2017; Lu et al., 2021).", "With ALICE , we instead want to enforce soft constraints on the probabilities coming from a given toxicity classifier CLF during beam search: $\log p(w_{i+1} \mid w_{0:i}) \propto \lambda_L \log p_{\mathrm{LM}}(w_{i+1} \mid w_{0:i}) + \lambda_C \log p_{\mathrm{CLF}}(w_{0:i+1})$ (1) Here, $\lambda_L$ and $\lambda_C$ denote hyperparameters that determine the respective contributions of the language model and the classifier to the decoding scoring function.", "By using this weighted combination, we can steer generations towards a higher or lower probability of toxicity without sacrificing the coherence enforced by the language model.", "To create examples that challenge existing toxicity classifiers, we use two adversarial setups: False negatives: We use toxic prompts to encourage the language model to generate toxic outputs, then maximize the classifier's probability of the benign class during beam search.", "False positives: We use benign prompts to encourage the language model to generate nontoxic outputs, then maximize the probability of the toxic class during beam search.", "In the first approach, we are also able to 
detoxify model outputs when the classifier successfully steers the generations towards non-toxic language.", "ALICE is illustrated in Figure 2.", "We generate TOXIGEN data with and without ALICE.", "Without ALICE, we use top-k decoding (Fan et al., 2018) alone with our toxic and benign prompts.", "With ALICE, we use the HateBERT model fine-tuned on OffensEval from Caselli et al. (2021) as the toxicity classifier (CLF).", "This model covers a range of direct and veiled offense types.", "We use GPT-3 for the language model.", "For decoding, we use L = C = 0.5, a maximum generation length of 30 tokens, a beam size of 10, and a temperature of 0.9.", "Due to limitations imposed by the OpenAI GPT-3 API on accessing log probabilities for the full model vocabulary, we restricted the vocabulary size to the top 100 tokens, and then resample from the allowed tokens (tokens not appearing in the prompt) using top-k.", "Footnote 5: This is similar in spirit to previous work on using cooperative discriminators on uncontrolled LLMs (Holtzman et al., 2018; Krause et al., 2020; Yang and Klein, 2021; Liu et al., 2021a), yet in this work our LLM is controlled in an adversarial way by prompting and by a classifier.", "3.4 TOXIGEN Statistics.", "Statistics of TOXIGEN are presented in Table 2.", "In our final dataset, generation length varies significantly and, as expected, almost all the statements are implicit.", "As we show in §4, the ALICE-generated data is successful at attacking the given toxicity classifier, contributing a challenging, adversarial subset of TOXIGEN.", "In the released data, we split off a test set that is validated by human annotators (see §4.2).", "To ensure the quality of TOXIGEN, we conduct human validation experiments and create TOXIGEN-HUMANVAL, a human-validated test set.", "Specifically, we investigate the reliability of our prompt-based and ALICE-based methods at generating human-like statements and controlling statements' toxicity and the minority groups 
mentioned (§4.2).", "Additionally, we measure the effectiveness of ALICE-generated statements (vs. top-k-generated) at fooling classifiers (§4.3).", "For each generated statement, we ask the annotators various questions, described below, that take into account multiple dimensions of how toxic machine-generated language presents a potential harm to readers.", "Footnote 6: We force beam search decoding to not use tokens from the prompt to prevent direct copying.", "Certain tokens appearing in the prompt, such as punctuation, are allowed.", "Footnote 7: We compute the % of implicit samples using https://github.com/RobertJGabriel/Google-profanity-words, the same as ElSherief et al. (2021), also removing ambiguous terms (e.g., bloody).", "See Appendix B for an annotation screenshot and other study details.", "Perceived hatefulness with respect to human- or AI-authored text.", "We first ask annotators to guess whether the statement's author was a human or an AI system (HUMANORAI).", "Then, we ask whether the statement would be harmful to anyone if an AI system wrote it (HARMFULIFAI), as well as if a human wrote it (HARMFULIFHUMAN); we hypothesize that readers may have different standards for machine-generated text than human-written text.", "For all questions measuring harmfulness of text, we consider potential harm on a 1-5 scale, with 1 being clearly benign and 5 indicating very offensive or abusive text.", "Perceived intent of the writer.", "We ask readers whether statements were likely intended to be harmful (HARMFULINTENT), since some biased statements can be positively intended (e.g., benevolent sexism; Glick and Fiske, 1996).", "Additionally, we ask if the statement exhibits a positive stereotype (POSSTEREO), which is also harmful (e.g., model minority myths; Cheryan and Bodenhausen, 2000).", "Detailed harm explanations.", "To better understand how harm may be perpetrated against the minority group, we ask readers in-depth questions about the text's content, following Sap et al. 
(2020) and Olteanu et al. (2018).", "We ask whether or not the statement is lewd or sexual (LEWD), whether and how it references the targeted group or other groups (WHICHGROUP, GROUPFRAMING), and whether it claims to be factual or opinion (FACTOROPINION).", "Data and Setup.", "We selected 792 statements from TOXIGEN to include in our test set, such that no training statement had cosine similarity above 0.7 with any test statement.", "Each test statement was then rated by 3 annotators from a pool of 156 prequalified annotators from Amazon MTurk (see Appendix B for details).", "Inter-annotator agreement.", "To investigate the quality of our annotations, we compute agreement on toxicity ratings.", "We find that annotators agreed moderately, at rates higher than or equal to prior work on hate speech annotation (Ross et al., 2017; Sap et al., 2020), with a Fleiss' κ = 0.46 (Fleiss, 1971) and Krippendorff's α = 0.64 (Krippendorff, 1980).", "Footnote 8: Specifically, we take the max of the HARMFULIFAI and HARMFULIFHUMAN scores and map it into three classes (scores < 3: non-toxic, = 3: ambiguous, > 3: toxic).", "In 55.17% of cases, all 3 annotators agree, while a majority (≥ 2/3) agree for 93.4%.", "Human validation results.", "First, we find that our machine-generated statements are largely indistinguishable from human-written statements.", "For example (see Table 3), human annotators often predict that our text is generated by a human.", "Figure 5: Avg. toxicity ratings for statements perceived as AI-authored vs. human-authored (panels: AI speaker, Human speaker).", "In fact, on average 90.5% of machine-generated examples are thought to be human-written by a majority of annotators, as shown in Figure 4.", "We also note that harmful text confuses readers slightly more than non-harmful text: 92.9% of toxic examples are mislabeled as human-written compared to 90.2% for non-toxic.", "Most toxic examples are also hate speech (94.56%).", "While opinions are common in both toxic and non-toxic examples, most fact-claiming text is non-toxic.", "Second, we find that 
demonstration-based prompting reliably generates toxic and benign statements about minority groups (§4.3).", "Further, for the machine-generated examples, we find that 30.2% are harmful (given a score of > 3), while only 4% are ambiguous.", "This indicates that these data are sufficiently toxic or benign.", "We also find that all identity groups covered by the dataset were represented in the human study (see Figure 3), and observe that the identity group referenced by the prompt is generally the same as the group referenced by the corresponding TOXIGEN text, though there is some deviation.", "This is likely due to GPT-3 conflating identities or mentioning multiple groups.", "Interestingly, there is no significant difference in toxicity when we account for whether annotators perceive scores as written by humans or AI (Figure 5).", "This finding indicates that our machine-generated text is perceived as similarly harmful to human text.", "We also find that the most common framing tactic is moral judgement, or questioning the morality of an identity group, which has been linked to toxicity by prior work (Hoover et al., 2019).", "As further validation, we investigate whether ALICE-generated statements are more adversarial compared to top-k-generated ones.", "For 125 randomly-selected prompts (62 toxic and 63 non-toxic), we generate two statements: one with ALICE and one without (top-k).", "We then collect annotations for the 250 statements using the setup described in §4.1, and get toxicity scores from HateBERT.", "We find that for top-k sampled sentences, the prompt label indeed matches the desired label (95.2% of non-toxic examples and 67.7% of toxic examples).", "For ALICE, 40.3% of toxic examples match the prompt label and 92.1% of non-toxic examples match.", "We also find that ALICE succeeds in fooling HateBERT (26.4% of ALICE -decoded sentences fool HateBERT vs. 
16.8% of top-k sampled sentences).", "Finally, ALICE is effective for detoxifying generated text: the avg. human-annotated toxicity score for ALICE-decoded sentences with a toxic prompt is 2.97, compared to 3.75 for top-k.", "This difference is statistically significant with p < 0.001.", "ALICE therefore leads to harder, more ambiguous examples.", "We greatly expand on these findings in Appendix E with a larger scale human evaluation (∼10,000 samples) comparing sentences generated with and without ALICE.", "To further showcase the usefulness of TOXIGEN, we investigate how it can enhance classifiers' abilities to detect human-written and machine-generated implicit toxic language.", "We fine-tune the widely-used HateBERT (Caselli et al., 2021) and ToxDectRoBERTa (Zhou et al., 2021) models on the training portion of TOXIGEN, using the prompt labels as proxies for a true toxicity label.", "Table 4: AUC for HateBERT and RoBERTa, both zero-shot (None) and fine-tuned on 3 versions of our dataset: ALICE only, top-k only, and both combined (columns: None / ALICE / top-k / ALICE + top-k). HateBERT: SBF test 0.60 / 0.66 / 0.65 / 0.71; IHC 0.60 / 0.60 / 0.61 / 0.67; DYNAHATE 0.47 / 0.54 / 0.59 / 0.66; TOXIGEN-VAL 0.57 / 0.93 / 0.88 / 0.96. RoBERTa: SBF test 0.65 / 0.70 / 0.67 / 0.70; IHC 0.57 / 0.64 / 0.63 / 0.66; DYNAHATE 0.49 / 0.51 / 0.50 / 0.54; TOXIGEN-VAL 0.57 / 0.87 / 0.85 / 0.93.", "Then, we compare the performance of the out-of-the-box models to those fine-tuned on TOXIGEN on three publicly available human-written datasets (IMPLICITHATECORPUS (ElSherief et al., 2021), the SOCIALBIASFRAMES test set (Sap et al., 2020), and DYNAHATE (Vidgen et al., 2021)) as well as the evaluation portion of our machine-generated dataset (TOXIGEN-HUMANVAL).", "To ablate the contribution of each decoding method, we also split TOXIGEN into equal numbers of ALICE-generated and top-k-generated examples.", "Our results (see Table 4) show that fine-tuning HateBERT and ToxDectRoBERTa on TOXIGEN improves performance across all datasets.", 
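The weighted scoring rule of Equation 1 that produced these adversarial examples can be sketched as below. This is a minimal sketch with toy log-probabilities standing in for real GPT-3 (LM) and HateBERT (CLF) scores; `alice_score` and `rescore_beam` are hypothetical names, not part of any released code.

```python
import math

def alice_score(lm_logprob, clf_logprob, weight_lm=0.5, weight_clf=0.5):
    """Equation 1: weighted sum of the LM's next-token log-probability and
    the classifier's log-probability for the adversarially chosen class."""
    return weight_lm * lm_logprob + weight_clf * clf_logprob

def rescore_beam(candidates, weight_lm=0.5, weight_clf=0.5):
    """Re-rank beam candidates, each a (token, lm_logprob, clf_logprob)
    triple, by the combined ALICE score (highest first)."""
    return sorted(candidates,
                  key=lambda c: alice_score(c[1], c[2], weight_lm, weight_clf),
                  reverse=True)

# Toy beam: in the false-negative setup, clf_logprob would be the
# classifier's log-probability of the *benign* class for each continuation.
beam = [("nice", math.log(0.6), math.log(0.2)),
        ("rude", math.log(0.3), math.log(0.9))]
best_token = rescore_beam(beam)[0][0]  # "rude" wins under equal weights
```

With equal weights (the paper's L = C = 0.5), a continuation strongly favored by the classifier can outrank the language model's top choice, which is the adversarial pressure ALICE applies at every beam step.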
"The improvement on human-written datasets shows that TOXIGEN can be used to improve existing classifiers, helping them better tackle the challenging human-generated implicit toxicity detection task.", "Fine-tuned HateBERT performs strongly on TOXIGEN-HUMANVAL, demonstrating that our data can successfully help guard against machine-generated toxicity.", "In this work, we used a large language model to create and release TOXIGEN, a large-scale, balanced, and implicit toxic language dataset.", "TOXIGEN is far larger than previous datasets, containing over 274k sentences, and is more diverse, including mentions of 13 minority groups at scale.", "The generated samples are balanced in terms of the number of benign and toxic samples for each group.", "We proposed ALICE, an adversarial decoding scheme to evaluate the robustness of toxicity classifiers and generate sentences to attack them, and showed the effectiveness of ALICE on a number of publicly-available toxicity detection systems.", "In our experiments, we showed that fine-tuning pre-trained hate classifiers on TOXIGEN can improve their performance on three popular human-generated toxicity datasets.", "We also conducted a human study on a subset of TOXIGEN, verifying that our generation methods successfully create challenging statements that annotators struggle to distinguish from human-written text: 90.5% of machine-generated examples were thought to be human-written.", "Risks in dataset release.", "While the purpose of our work is to curate diverse and effective hate speech detection resources, our methods encourage a large language model to make its generation more toxic.", "This poses a potential misuse case where bad actors exploit these methods for nefarious purposes like spreading machine-generated hate speech.", "Still, ignoring this possibility does not make it go away, and our work introduces an opportunity for the community to push back against harm towards minority groups.", "Our ultimate aim is to shift 
power dynamics to targets of oppression.", "Therefore, we do not consider identity dimensions that are historically the agents of oppression (e.g., whiteness, heterosexuality, able-bodied-ness).", "Please also note that there is still a lot that this dataset is not capturing about toxic language.", "Our annotations might not capture the full complexity of these issues related to human experiences.", "There is a need for multi-disciplinary work to better understand these aspects.", "ALICE.", "The proposed method in this work attacks content filters via an adversarial game between two AI systems and thus passes the existing content filters, as we show for 5 publicly-available systems.", "It is important to leverage this and similar approaches to improve content filters and prevent large scale attacks against sensitive platforms.", "Improving Toxicity Detection.", "Effective classifiers for machine biases are required to combat the scale of online harm.", "Without such systems, minority groups are likely to be targeted by current (biased) systems.", "Our work is a significant step towards advancing this crucial classification task.", "Still, toxicity is inherently subjective (Sap et al., 2021).", "Therefore, moving beyond binary detection tasks to a focus on more nuanced labeling systems (ElSherief et al., 2021; Leonardelli et al., 2021) will prove crucial in developing responsible systems.", "Relationship to Policy.", "The topic of detecting and mitigating toxicity is relevant to the ongoing work and discussions in the space of policy and legislation for AI technology (Wischmeyer and Rademacher, 2020; Reich et al., 2021).", "Carefully crafted policy and regulation can play an important role in providing oversight into the development and deployment of content moderation systems and toxicity detection algorithms in practice (Benesch, 2020; Gillespie et al., 2020).", "Getting this right is crucially important for society, as errors in content moderation can disproportionately 
affect minority groups (Sap et al., 2019).", "We see a path forward in which tools and techniques like those presented in this work are paired with human expertise and well-informed policy & regulation in bringing scalable and reliable solutions to practice.", "We acknowledge and encourage the critical role the NLP research community is poised to play in this inter-disciplinary effort.", "We thank Azure AI Platform and Misha Bilenko for sponsoring this work and providing compute resources, Microsoft Research for supporting our large scale human study, and Alexandra Olteanu for her feedback on human evaluation.", "We also thank the crowdworkers for their time and effort." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "result", "result", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "result", "other", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"method", "other", "other" ]
[ "Recent work has shown pre-trained language models capture social biases from the large amounts of text they are trained on.", "This has attracted attention to developing techniques that mitigate such biases.", "In this work, we perform an empirical survey of five recently proposed bias mitigation techniques: Counterfactual Data Augmentation (CDA), Dropout, Iterative Nullspace Projection, Self-Debias, and SentenceDebias.", "We quantify the effectiveness of each technique using three intrinsic bias benchmarks while also measuring the impact of these techniques on a model's language modeling ability, as well as its performance on downstream NLU tasks.", "We experimentally find that: (1) Self-Debias is the strongest debiasing technique, obtaining improved scores on all bias benchmarks; (2) current debiasing techniques perform less consistently when mitigating non-gender biases; and (3) improvements on bias benchmarks such as StereoSet and CrowS-Pairs by using debiasing strategies are often accompanied by a decrease in language modeling ability, making it difficult to determine whether the bias mitigation was effective.", "1 Introduction.", "Large pre-trained language models have proven effective across a variety of tasks in natural language processing, often obtaining state-of-the-art performance (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020).", "These models are typically trained on large amounts of text, originating from unmoderated sources, such as the internet.", "While the performance of these pre-trained models is remarkable, recent work has shown that they capture social biases from the data they are trained on (May et al. 2019; Kurita et al. 2019; Webster et al. 2020; Nangia et al. 2020; Nadeem et al. 2021, inter alia).", "Footnote 1: Our code is publicly available: https://github.com/mcgill-nlp/bias-bench.", "Because of these findings, an increasing amount of research has focused on developing techniques to mitigate these biases (Liang et al., 2020; Ravfogel et al., 2020; Webster et al., 2020; Kaneko and Bollegala, 2021; Schick et al., 2021; Lauscher et al., 2021).", "However, the proposed techniques are often not investigated thoroughly.", "For instance, much work focuses only on mitigating gender bias, despite pre-trained language models being plagued by other social biases (e.g., racial or religious bias).", "Additionally, the impact that debiasing has on both downstream task performance, as well as language modeling ability, is often not well explored.", "In this paper, we perform an empirical survey of the effectiveness of five recently proposed debiasing techniques for pre-trained language models: Counterfactual Data Augmentation (CDA; Zmigrod et al. 2019; Webster et al. 2020), Dropout (Webster et al., 2020), Iterative Nullspace Projection (INLP; Ravfogel et al. 2020), Self-Debias (Schick et al., 2021), and SentenceDebias (Liang et al., 2020).", "Following the taxonomy described by Blodgett et al. 
(2020), our work studies the effectiveness of these techniques in mitigating representational biases from pre-trained language models.", "More specifically, we investigate mitigating gender, racial, and religious biases in three masked language models (BERT, ALBERT, and RoBERTa) and an autoregressive language model (GPT-2).", "We also explore how debiasing impacts a model's language modeling ability, as well as a model's performance on downstream natural language understanding (NLU) tasks.", "Concretely, our paper aims to answer the following research questions: Q1: Which technique is most effective in mitigating bias?", "Footnote 2: We select these techniques based upon popularity, ease of implementation, and ease of adaptation to non-gender biases.", "Q2: Do these techniques worsen a model's language modeling ability?", "Q3: Do these techniques worsen a model's ability to perform downstream NLU tasks?", "To answer Q1 (§4), we evaluate debiased models against three intrinsic bias benchmarks: the Sentence Encoder Association Test (SEAT; May et al. 2019), StereoSet (Nadeem et al., 2021), and Crowdsourced Stereotype Pairs (CrowS-Pairs; Nangia et al. 
2020).", "Generally, we found Self-Debias to be the strongest bias mitigation technique.", "To answer Q2 (§5) and Q3 (§6), we evaluate debiased models against WikiText-2 (Merity et al., 2017) and the General Language Understanding Evaluation (GLUE; Wang and Cho 2019) benchmark.", "We found debiasing tends to worsen a model's language modeling ability.", "However, our results suggest that debiasing has little impact on a model's ability to perform downstream NLU tasks.", "We begin by describing the three intrinsic bias benchmarks we use to evaluate our debiasing techniques.", "We select these benchmarks as they can be used to measure not only gender bias, but also racial and religious bias in language models.", "Sentence Encoder Association Test (SEAT).", "We use SEAT (May et al., 2019) as our first intrinsic bias benchmark.", "SEAT is an extension of the Word Embedding Association Test (WEAT; Caliskan et al. 2017) to sentence-level representations.", "Below, we first describe WEAT.", "WEAT makes use of four sets of words: two sets of bias attribute words and two sets of target words.", "The attribute word sets characterize a type of bias.", "For example, the attribute word sets {man, he, him, ...} and {woman, she, her, ...} could be used for gender bias.", "The target word sets characterize particular concepts.", "For example, the target word sets {family, child, parent, ...} and {work, office, profession, ... 
} could be used to characterize the concepts of family and career, respectively.", "WEAT evaluates whether the representations for words from one particular attribute word set tend to be more closely associated with the representations for words from one particular target word set.", "For instance, if the representations for the female attribute words listed above tended to be more closely associated with the representations for the family target words, this may be indicative of bias within the word representations.", "Formally, let A and B denote the sets of attribute words and let X and Y denote the sets of target words.", "The WEAT test statistic is $s(X, Y, A, B) = \sum_{x \in X} s(x, A, B) - \sum_{y \in Y} s(y, A, B)$, where for a particular word w, s(w, A, B) is defined as the difference between w's mean cosine similarity with the words from A and w's mean cosine similarity with the words from B: $s(w, A, B) = \frac{1}{|A|} \sum_{a \in A} \cos(w, a) - \frac{1}{|B|} \sum_{b \in B} \cos(w, b)$.", "The amount of bias is reported as an effect size $d = \frac{\mu(\{s(x, A, B)\}_{x \in X}) - \mu(\{s(y, A, B)\}_{y \in Y})}{\sigma(\{s(t, A, B)\}_{t \in X \cup Y})}$, where $\mu$ denotes the mean and $\sigma$ denotes the standard deviation.", "Here, an effect size closer to zero is indicative of a smaller degree of bias in the representations.", "To create a sentence-level version of WEAT (referred to as SEAT), May et al. 
(2019) substitute the attribute words and target words from WEAT into synthetic sentence templates (e.g., this is a [WORD]) to create a collection of sentences.", "Now, given sets of sentences containing attribute and target words, the WEAT test statistic can be computed using sentence-level representations obtained from a pre-trained language model.", "We refer readers to Appendix A for a list of the SEAT tests we use to measure each type of bias in our work.", "We report the effect size for each SEAT test we evaluate.", "StereoSet.", "As our second intrinsic bias benchmark, we use StereoSet (Nadeem et al., 2021), a crowdsourced dataset for measuring four types of stereotypical bias in language models.", "Each StereoSet example consists of a context sentence, for example our housekeeper is [MASK], and a set of three candidate associations (completions) for that sentence: one being stereotypical, another being anti-stereotypical, and a third being", "Footnote 3: We use a permutation on the SEAT test statistic to compute the significance of association between the attribute word sets and the target word sets.", "We refer readers to the original work of Caliskan et al. 
(2017) for a complete description of this test.", "unrelated.", "Using the example above, a stereotypical association might be our housekeeper is Mexican, an anti-stereotypical association might be our housekeeper is American, and an unrelated association might be our housekeeper is computer.", "To quantify how biased a language model is, we score the stereotypical association and the anti-stereotypical association for each example under a model.", "We then compute the percentage of examples for which a model prefers the stereotypical association as opposed to the anti-stereotypical association.", "We define this percentage as the stereotype score of a model.", "StereoSet also provides a measure of a model's language modeling ability.", "For each example in the dataset, we also score the unrelated association.", "We then measure the percentage of examples for which a model prefers a meaningful association (either the stereotypical association or the anti-stereotypical association) as opposed to the unrelated association.", "We define this percentage as the language modeling score of a model.", "We evaluate our debiased models against the StereoSet test set.", "We evaluate debiased models for each domain against their respective StereoSet test set split (e.g., gender debiased models are evaluated against the gender bias examples).", "Crowdsourced Stereotype Pairs (CrowS-Pairs).", "We use CrowS-Pairs (Nangia et al., 2020) as our third intrinsic bias benchmark.", "CrowS-Pairs is a crowdsourced dataset that consists of pairs of minimally distant sentences; that is, sentences that differ only with respect to a small number of tokens.", "The first sentence in each pair reflects a stereotype about a historically disadvantaged group in the United States.", "For example, the sentence people who live in trailers are alcoholics reflects a possible socioeconomic stereotype.", "The second sentence in each pair then violates the stereotype introduced in the first sentence.", 
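The stereotype score and language modeling score defined above reduce to simple preference counts over per-example model scores. A minimal sketch, with toy log-probabilities in place of real model outputs (function names are illustrative, not from the benchmark's code):

```python
def stereotype_score(examples):
    """Percentage of examples where the model scores the stereotypical
    association higher than the anti-stereotypical one (50 = unbiased).
    Each example is a (stereo, anti_stereo) pair of model scores."""
    preferred = sum(1 for stereo, anti in examples if stereo > anti)
    return 100.0 * preferred / len(examples)

def language_modeling_score(examples):
    """Percentage of examples where the model prefers either meaningful
    association over the unrelated one. Each example is a
    (stereo, anti_stereo, unrelated) triple of model scores."""
    meaningful = sum(1 for s, a, u in examples if max(s, a) > u)
    return 100.0 * meaningful / len(examples)

# Toy scores (e.g., masked-token log-probabilities):
pairs = [(-1.2, -2.5), (-3.0, -1.1), (-0.4, -0.9), (-2.2, -2.0)]
triples = [(-1.0, -2.0, -3.0), (-2.0, -1.0, -0.5)]
```

An unbiased but fluent model would score near 50 on the stereotype score while scoring high on the language modeling score.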
"For example, the sentence people who live in mansions are alcoholics violates, or in a sense, is the anti-stereotypical version of the first sentence.", "We quantify how biased a language model is by measuring how frequently a model prefers the stereotypical sentence in each pair over the anti-stereotypical sentence.", "Nangia et al. (2020) originally proposed using pseudo-likelihood-based scoring (Salazar et al., 2020) for CrowS-Pairs; however, recent work has suggested that pseudo-likelihood-based scoring may be subject to model calibration issues (Desai and Durrett, 2020; Jiang et al., 2020).", "Footnote 4: We consider only the intrasentence task from StereoSet.", "Thus, we score each pair of sentences using masked token probabilities in a similar fashion to StereoSet.", "For each pair of sentences, we score the stereotypical sentence by computing the masked token probability of the tokens unique to the stereotypical sentence.", "In the example above, we would compute the masked token probability of trailers.", "We score each anti-stereotypical sentence in a similar fashion.", "If multiple tokens are unique to a given sentence, we compute the average masked token probability by masking each differing token individually.", "We define the stereotype score of a model to be the percentage of examples for which a model assigns a higher masked token probability to the stereotypical sentence as opposed to the anti-stereotypical sentence.", "Below, we describe the five debiasing techniques we evaluate in this work.", "We refer readers to Appendix C for additional experimental details on each debiasing technique.", "Counterfactual Data Augmentation (CDA).", "CDA (Zmigrod et al., 2019; Dinan et al., 2020a; Webster et al., 2020; Barikeri et al., 2021) is a data-based debiasing strategy often used to mitigate gender bias.", "Roughly, CDA involves re-balancing a corpus by swapping bias attribute words (e.g., he/she) in a dataset.", "For example, to help mitigate gender bias, the 
sentence the doctor went to the room and he grabbed the syringe could be augmented to the doctor went to the room and she grabbed the syringe.", "The re-balanced corpus is then often used for further training to debias a model.", "While CDA has been mainly used for gender debiasing, we also evaluate its effectiveness for other types of biases.", "For instance, we create CDA data for mitigating religious bias by swapping religious terms in a corpus, say church with mosque, to generate counterfactual examples.", "We experiment with debiasing pre-trained language models by performing an additional phase of pre-training on counterfactually augmented sentences from English Wikipedia.", "Footnote 5: We list the bias attribute words we make use of in our study in Appendix B.", "Dropout.", "Webster et al. (2020) investigate using dropout regularization (Srivastava et al., 2014) as a bias mitigation technique.", "They investigate increasing the dropout parameters for BERT and ALBERT's attention weights and hidden activations and performing an additional phase of pre-training.", "Experimentally, they find increased dropout regularization reduces gender bias within these models.", "They hypothesize that dropout's interruption of the attention mechanisms within BERT and ALBERT helps prevent them from learning undesirable associations between words.", "We extend this study to other types of biases.", "Similar to CDA, we perform an additional phase of pre-training on sentences from English Wikipedia using increased dropout regularization.", "Self-Debias.", "Schick et al. (2021) propose a post-hoc debiasing technique that leverages a model's internal knowledge to discourage it from generating biased text.", "Informally, Schick et al. (2021) propose using hand-crafted prompts to first encourage a model to generate toxic text.", "For example, generation from an autoregressive model could be prompted with 'The following text discriminates against people because of their gender.' 
Then, a second continuation that is non-discriminative can be generated from the model, where the probabilities of tokens deemed likely under the first toxic generation are scaled down.", "Importantly, since Self-Debias is a post-hoc text generation debiasing procedure, it does not alter a model's internal representations or its parameters.", "Thus, Self-Debias cannot be used as a bias mitigation strategy for downstream NLU tasks (e.g., GLUE).", "Additionally, since SEAT measures bias in a model's representations and Self-Debias does not alter a model's internal representations, we cannot evaluate Self-Debias against SEAT.", "SentenceDebias.", "Liang et al. (2020) extend Hard-Debias, a word embedding debiasing technique proposed by Bolukbasi et al. (2016), to sentence representations.", "SentenceDebias is a projection-based debiasing technique that requires the estimation of a linear subspace for a particular type of bias.", "Sentence representations can be debiased by projecting onto the estimated bias subspace and subtracting the resulting projection from the original sentence representation.", "Liang et al. (2020) use a three-step process for computing a bias subspace.", "First, they define a list of bias attribute words (e.g., he/she).", "Second, they contextualize the bias attribute words into sentences.", "This is done by finding occurrences of the bias attribute words in sentences within a text corpus.", "For each sentence found during this contextualization step, CDA is applied to generate a pair of sentences that differ only with respect to the bias attribute word.", "Finally, they estimate the bias subspace.", "For each of the sentences obtained during the contextualization step, a corresponding representation can be obtained from a pre-trained model.", "Principal Component Analysis (PCA; Abdi and Williams 2010) is then used to estimate the principal directions of variation of the resulting set of representations.", "The first K principal components can be taken to define the bias subspace.", 
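The subspace estimation and projection-based debiasing just described can be sketched as follows. This is a minimal sketch assuming 2-dimensional toy vectors in place of real sentence representations; `bias_subspace` and `debias` are illustrative names, not the authors' implementation:

```python
import numpy as np

def bias_subspace(pair_differences, k=1):
    """Estimate a k-dimensional bias subspace via PCA: the top-k principal
    directions of the (centered) CDA sentence-pair difference vectors."""
    diffs = np.asarray(pair_differences, dtype=float)
    diffs -= diffs.mean(axis=0)
    # Rows of vt are orthonormal principal directions (largest variance first).
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k]  # shape (k, d)

def debias(representation, subspace):
    """Subtract the projection of a representation onto the bias subspace."""
    projection = subspace.T @ (subspace @ representation)
    return representation - projection

# Toy difference vectors that vary only along the first axis:
V = bias_subspace([[1.0, 0.0], [2.0, 0.0], [-1.5, 0.0]], k=1)
clean = debias(np.array([3.0, 4.0]), V)  # component along the bias axis removed
```

The same projection can be applied to any sentence representation at inference time; the design relies on the bias variation being well captured by a low-dimensional linear subspace.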
"Iterative Nullspace Projection (INLP).", "Ravfogel et al. (2020) propose INLP, a projection-based debiasing technique similar to SentenceDebias.", "Roughly, INLP debiases a model's representations by training a linear classifier to predict the protected property to be removed (e.g., gender) from the representations.", "Then, representations can be debiased by projecting them onto the nullspace of the learnt classifier's weight matrix, effectively removing from the representations all of the information the classifier used to predict the protected attribute.", "This process can then be applied iteratively to debias the representations.", "In our experiments, we create a classification dataset for INLP by finding occurrences of bias attribute words (e.g., he/she) in English Wikipedia.", "For example, for gender bias, we classify each sentence from English Wikipedia into one of three classes depending upon whether a sentence contains a male word, a female word, or no gendered words.", "To investigate which technique is most effective in mitigating bias (Q1), we evaluate debiased BERT, ALBERT, RoBERTa, and GPT-2 models against SEAT, StereoSet, and CrowS-Pairs.", "We present BERT and GPT-2 results in the main paper and refer readers to Appendix E for results for the other models.", "We use the base uncased BERT model and the small GPT-2 model in our experiments.", "SEAT Results.", "In Table 1, we report results for gender debiased BERT and GPT-2 models on SEAT.", "For BERT, we find two of our four debiased models obtain lower average absolute effect sizes than the baseline model.", "In particular, INLP performs best on average across all six SEAT tests.", "Notably, INLP and SentenceDebias both obtain lower average absolute effect sizes than the baseline model while the CDA and Dropout models do not.", "Intuitively, this may be due to INLP and SentenceDebias taking a more aggressive approach to debiasing by attempting to remove all gender information from a 
model's representations.", "For GPT-2, our results are less encouraging.", "We find all of the debiased models obtain higher average absolute effect sizes than the baseline model.", "However, we note that SEAT fails to detect any statistically significant bias in the baseline model in any of the six SEAT tests to begin with.", "We argue, alongside others (Kurita et al., 2019; May et al., 2019), that SEAT's failure to detect bias in GPT-2 brings into question its reliability as a bias benchmark.", "For our gender debiased ALBERT and RoBERTa models, we observed performance trends similar to those of BERT.", "We also use SEAT to evaluate racial and religious bias in our models.", "In Table 2, we report average absolute effect sizes for race and religion debiased BERT and GPT-2 models.", "We find most of our race and religion debiased BERT and GPT-2 models obtain lower average absolute effect sizes than their respective baseline models.", "These trends were less consistent in our ALBERT and RoBERTa models.", "On StereoSet, for BERT, four of the five gender debiased models obtain lower stereotype scores than the baseline model.", "However, the race debiased models do not perform as consistently well.", "We note that for race, only two of the five debiased models obtain lower stereotype scores than the baseline model.", "Encouragingly, we find four of the five religion debiased BERT models obtain reduced stereotype scores.", "We observed similar trends to BERT in our ALBERT and RoBERTa results.", "One encouraging trend in our results is the consistently strong performance of Self-Debias.", "Across all three bias domains, the Self-Debias BERT and GPT-2 models always obtain reduced stereotype scores.", "Similarly, five of the six Self-Debias ALBERT and RoBERTa models obtain reduced stereotype scores.", "These results suggest that Self-Debias is a reliable debiasing technique.", "CrowS-Pairs Results.", "In Table 4, we report CrowS-Pairs results for BERT and GPT-2.", "Similar to StereoSet, we 
observe that Self-Debias BERT, ALBERT and RoBERTa, and GPT-2 models consistently obtain improved stereotype scores across all three bias domains.", "We also observe a large degree of variability in the performance of our debiasing techniques on CrowS-Pairs.", "For example, the GPT-2 religion SentenceDebias model obtains a stereotype score of 35.24, an absolute difference of 27.62 points relative to the baseline model's score.", "We hypothesize that this large degree of variability is due to the small size of CrowS-Pairs (it is 1/14th the size of the StereoSet test set).", "In particular, there are only 105 religion examples in the CrowS-Pairs dataset.", "Furthermore, Aribandi et al. (2021) demonstrated the relative instability of the performance of pre-trained language models, such as BERT, on CrowS-Pairs (and StereoSet) across different pre-training runs.", "Thus, we caution readers against drawing too many conclusions from StereoSet and CrowS-Pairs results alone.", "Do SEAT, StereoSet, and CrowS-Pairs Reliably Measure Bias?", "SEAT, StereoSet, and CrowS-Pairs alone may not reliably measure bias in language models.", "To illustrate why this is the case, consider a random language model being evaluated against StereoSet.", "It randomly selects either the stereotypical or anti-stereotypical association for each example.", "Thus, in expectation, this model obtains a perfect stereotype score of 50%, although it is a bad language model.", "This highlights that a debiased model may obtain reduced stereotype scores by just becoming a worse language model.", "Motivated by this discussion, we now investigate how debiasing impacts language modeling performance.", "To investigate how debiasing impacts language modeling (Q2), we measure perplexities before and after debiasing each of our models on WikiText-2 (Merity et al., 2017).", "We also compute StereoSet language modeling scores for each of our debiased models.", "We discuss our findings below.", "We use the test set of WikiText-2 for our experiments.", "Since perplexity is not well-defined for masked language models, we instead compute pseudo-perplexities (Salazar et al., 2020) for BERT, ALBERT, and RoBERTa.", "We compute the perplexities of the GPT-2 models normally.", "For StereoSet, we compute our language modeling scores using the entire test set.", "We observe a strong negative correlation between a model's perplexity on WikiText-2 and its StereoSet language modeling score.", "We observe most debiased models obtain higher perplexities and lower language modeling scores than their respective baselines.", "Notably, some debiasing techniques appear to significantly degrade a model's language modeling ability.", "For instance, the SentenceDebias GPT-2 model obtains a perplexity of 65.49, roughly twice as large as the perplexity of the baseline GPT-2 model.", "However, there are some exceptions to this trend.", "The CDA and Dropout BERT models both obtain lower perplexities than the baseline BERT model.", "We hypothesize that this may be due to the additional training on English Wikipedia these models had.", "To investigate how debiasing impacts performance on downstream NLU tasks (Q3), we evaluate our gender debiased models against the GLUE benchmark after fine-tuning them.", "We report the results for BERT and GPT-2 in Table 6.", "Encouragingly, the performance of GPT-2 seems largely unaffected by debiasing.", "In some cases, we in fact observe increased performance.", "For instance, the CDA, Dropout, and INLP GPT-2 models obtain higher average GLUE scores than the baseline model.", "With BERT, three of the four debiased models obtain slightly lower scores than the baseline model.", "Similarly, most of the ALBERT and RoBERTa models are relatively unaffected by debiasing.", "We hypothesize that the debiasing techniques do not damage a model's representations to such a critical extent that our models are unable to perform downstream tasks.", "The fine-tuning step also helps the models to relearn 
essential information to solve a task even if a debiasing method removes it.", "Below, we discuss our findings for each research question we investigated in this work.", "We also discuss some of the limitations of our study.", "Q1: Which technique is most effective in mitigating bias?", "We found Self-Debias to be the strongest debiasing technique.", "Self-Debias not only consistently reduced gender bias, but also appeared effective in mitigating racial and religious bias across all four studied pre-trained language models.", "Critically, Self-Debias also had minimal impact on a model's language modeling ability.", "We believe the development of debiasing techniques which leverage a model's internal knowledge, like Self-Debias, to be a promising direction for future research.", "Importantly, we would like to be able to use self-debiasing methods when a model is being used for downstream tasks.", "Q2: Do these techniques worsen a model's language modeling ability?", "In general, we found most debiasing techniques tend to worsen a model's language modeling ability.", "This worsening in language modeling raises questions about whether some debiasing techniques were actually effective in mitigating bias.", "Furthermore, when coupled with the already noisy nature of the bias benchmarks used in our work (Aribandi et al., 2021), this makes it even more difficult to determine which bias mitigation techniques are effective.", "Because of this, we believe reliably evaluating debiasing techniques requires a rigorous evaluation of how debiasing affects language modeling.", "Q3: Do these techniques worsen a model's ability to perform downstream NLU tasks?", "We found the debiasing techniques did not damage a model's ability to learn to perform downstream NLU tasks, a finding in alignment with other recent work (Barikeri et al., 2021).", "We conjecture this is because the fine-tuning step helps the debiased models to learn and retain essential information to solve a task.", "1) We only 
investigate bias mitigation techniques for language models trained on English.", "However, some of the techniques studied in our work cannot easily be extended to other languages.", "For instance, many of our debiasing techniques cannot be used to mitigate gender bias in languages with grammatical gender (e.g., French). 6", "2) Our work is skewed towards North American social biases.", "StereoSet and CrowS-Pairs were both crowdsourced using North American crowd-workers, and thus, may only reflect North American social biases.", "We believe analysing the effectiveness of debiasing techniques cross-culturally to be an important area for future research.", "Furthermore, all of the bias benchmarks used in this work have only positive predictive power.", "For example, a perfect stereotype score of 50% on StereoSet does not indicate that a model is unbiased.", "3) Many of our debiasing techniques make simplifying assumptions about bias.", "For example, for gender bias, most of our debiasing techniques assume a binary definition of gender.", "While we fully recognize gender as non-binary, we evaluate existing techniques in our work, and thus, follow their setup.", "Manzini et al. (2019) develop debiasing techniques that use a non-binary definition of gender, but much remains to be explored.", "Moreover, we focus only on representational biases, among other types of bias (Blodgett et al., 2020).", "To the best of our knowledge, we have performed the first large-scale evaluation of multiple debiasing", "6 See Zhou et al. 
(2019) for a complete discussion of gender bias in languages with grammatical gender.", "techniques for pre-trained language models.", "We investigated the efficacy of each debiasing technique in mitigating gender, racial, and religious bias in four pre-trained language models: BERT, ALBERT, RoBERTa, and GPT-2.", "We used three intrinsic bias benchmarks to evaluate the effectiveness of each debiasing technique in mitigating bias and also investigated how debiasing impacts language modeling and downstream NLU task performance.", "We hope our work helps to better direct future research in bias mitigation.", "We thank the members of SR's research group for helpful feedback throughout the duration of this project.", "We would also like to thank Spandana Gella for feedback on early drafts of this manuscript and Mat Pikuliak for finding a bug in our code.", "SR is supported by the Canada CIFAR AI Chairs program and the NSERC Discovery Grant program.", "NM is supported by an IVADO Excellence Scholarship.", "In this work, we used a binary definition of gender while investigating gender bias in pre-trained language models.", "While we fully recognize gender as non-binary, our survey closely follows the original methodology of the techniques explored in this work.", "We believe it will be critical for future research in gender bias to use a more fluid definition of gender and we are encouraged by early work in this direction (Manzini et al., 2019; Dinan et al., 2020b).", "Similarly, our work makes use of a narrow definition of religious and racial bias.", "We also note we do not investigate the extrinsic harm caused by any of the studied pre-trained language models, nor any potential reduction in harm from making use of any of our studied debiasing techniques.", "In other words, we do not investigate how biases in pre-trained language models affect humans in real-world settings.", "Finally, we highlight that all of the intrinsic bias benchmarks used in this work have only 
positive predictive power.", "In other words, they can identify models as biased, but cannot verify a model as unbiased.", "For example, a stereotype score of 50% on StereoSet or CrowS-Pairs is not indicative of an unbiased model.", "Additionally, recent work demonstrated the potential unreliability of the bias benchmarks used in this work (Blodgett et al., 2021).", "Because of this, we caution readers against making definitive claims about bias in pre-trained language models based on these benchmarks alone." ]
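The INLP loop described earlier (train a linear probe for the protected attribute, project representations onto the probe's nullspace, repeat) can be sketched roughly as below. This is a simplified illustration: a least-squares linear predictor stands in for the trained classifier of the original method, and the data and names are invented for the example.

```python
import numpy as np

def nullspace_projection(w):
    """Projection matrix onto the nullspace of the direction w."""
    w = w / np.linalg.norm(w)
    return np.eye(w.shape[0]) - np.outer(w, w)

def inlp(X, y, n_iters=3):
    """Iteratively remove linearly decodable information about y from X.
    Composing per-step projections like this is the simplest variant;
    the original method uses a more careful construction."""
    P = np.eye(X.shape[1])
    for _ in range(n_iters):
        Xp = X @ P
        w, *_ = np.linalg.lstsq(Xp, y, rcond=None)  # linear "probe"
        if np.linalg.norm(w) < 1e-8:  # nothing left to remove
            break
        P = nullspace_projection(w) @ P
    return P

# Toy data: the first coordinate encodes a binary protected attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = np.sign(X[:, 0])
P = inlp(X, y)
X_clean = X @ P

# A fresh linear probe now recovers far less of y than before.
before = np.linalg.norm(X @ np.linalg.lstsq(X, y, rcond=None)[0])
after = np.linalg.norm(X_clean @ np.linalg.lstsq(X_clean, y, rcond=None)[0])
```

In the actual method the representations come from a pre-trained encoder and the probe is a trained classifier (e.g., a linear SVM) over a labeled attribute dataset like the Wikipedia one described above.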
[ "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "objective", "objective", "objective", "abstain", "objective", "objective", "method", "result", "method", "result", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "objective", "abstain", "abstain", "objective", "objective", "abstain", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method" ]
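The earlier point that a random model attains a stereotype score of about 50% on StereoSet, despite being a useless language model, can be checked with a quick simulation (the scoring function here is our simplification of the benchmark's stereotype score):

```python
import random

def stereotype_score(prefers_stereotype):
    """Fraction of examples where the model prefers the stereotypical
    association over the anti-stereotypical one."""
    return sum(prefers_stereotype) / len(prefers_stereotype)

random.seed(0)
n_examples = 100_000
# A "random language model" picks either association with equal
# probability on every example, regardless of the text.
choices = [random.random() < 0.5 for _ in range(n_examples)]
score = stereotype_score(choices)  # close to the "perfect" 0.5
```

This is exactly why a stereotype score near 50% has only positive predictive power: it is consistent with an unbiased model, but also with a model that has simply stopped modeling language.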
[ "Humor plays an important role in human languages and it is essential to model humor when building intelligent systems.", "Among different forms of humor, puns perform wordplay for humorous effects by employing words with double entendre and high phonetic similarity.", "However, identifying and modeling puns is challenging, as puns usually involve implicit semantic or phonological tricks.", "In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to perceive human humor, detect if a sentence contains puns, and locate them in the sentence.", "PCPR derives a contextualized representation for each word in a sentence by capturing the association between the surrounding context and its corresponding phonetic symbols.", "Extensive experiments are conducted on two benchmark datasets.", "Results demonstrate that the proposed approach significantly outperforms the state-of-the-art methods in pun detection and location tasks.", "In-depth analyses verify the effectiveness and robustness of PCPR.", "During the last decades, social media has promoted the creation of a vast amount of humorous web content (Nijholt et al., 2017).", "Automatic recognition of humor has become an important task in the area of figurative language processing, which can benefit various downstream NLP applications such as dialogue systems, sentiment analysis, and machine translation (Melby and Warner, 1995; Augello et al., 2008; Ghosh et al., 2015; Bertero and Fung, 2016; Blinov et al., 2019).", "However, humor is one of the most complicated behaviors in natural language semantics and sometimes it is even difficult for humans to interpret.", "In most cases, understanding humor requires adequate background knowledge and a rich context.", "Puns are a humorous device that uses the different meanings of identical words, or of words with similar pronunciations, to play on texts or utterances.", "There are two main types of puns.", "Homographic puns rely on multiple 
interpretations of the same word.", "As shown in Table 1, the phrase all right means good condition or the opposite of left; the word reaction means chemical change or action.", "The two meanings of the same expression are consistent with its context, which creates a humorous pun in both sentences when there is a clear contrast between the two meanings.", "On the other hand, heterographic puns take advantage of phonologically identical or similar words.", "For example, the word pairs sale and sail, weak and week in Table 1 have the same or similar pronunciations.", "The sentences are funny because both words fit the same context.", "Understanding puns is a big fish to fry for deep comprehension of complex semantics.", "These two forms of puns have been studied in the literature from different angles.", "To recognize puns in a sentence, word sense disambiguation (WSD) techniques (Navigli, 2009) have been employed to identify the intended senses of words in utterances (Pedersen, 2017).", "External knowledge bases such as WordNet (Miller, 1998b) have been applied to determine the word senses of pun words (Oele and Evang, 2017).", "However, these methods cannot tackle heterographic puns with distinct word spellings, and knowledge bases only contain a limited vocabulary.", "To resolve the issues of sparseness and heterographic spellings, word embedding techniques (Mikolov et al., 2013; Pennington et al., 2014) provide flexible representations to model puns (Hurtado et al., 2017; Indurthi and Oota, 2017; Cai et al., 2018).", "However, a word may have different meanings depending on its context.", "In particular, an infrequent meaning of a word might be utilized to create a pun.", "Therefore, static word embeddings are insufficient to represent words.", "In addition, some puns are created by replacing a word with another word with the same or similar pronunciation, as in the examples shown in Table 1. 
Therefore, to recognize puns, it is essential to model the association between the words in the sentence and the pronunciations of words.", "Although existing approaches attempt to leverage phonological structures to understand puns (Doogan et al., 2017; Jaech et al., 2016), there is a lack of a general framework to model these two types of signals as a whole.", "In this paper, we propose Pronunciation-attentive Contextualized Pun Recognition (PCPR) to jointly model contextualized word embeddings and phonological word representations for pun recognition.", "To capture the phonological structure of words, we break each word into a sequence of phonemes as its pronunciation, so that homophones can have similar phoneme sets.", "For instance, the phonemes of the word pun are {P, AH, N}.", "In PCPR, we construct a pronunciation-attentive module to identify important phonemes of each word, which can be applied in other tasks related to phonology.", "We jointly encode the contextual and phonological features into a self-attentive embedding to tackle both the pun detection and location tasks.", "We summarize our contributions as follows.", "To the best of our knowledge, PCPR is the first work to jointly model contextualized word embeddings and pronunciation embeddings to recognize puns.", "Both contexts and phonological properties are beneficial to pun recognition.", "Extensive experiments are conducted on two benchmark datasets.", "PCPR significantly outperforms existing methods in both pun detection and pun location.", "In-depth analyses also verify the effectiveness and robustness of PCPR.", "We release our implementations and pre-trained phoneme embeddings at https://github.com/joey1993/pun-recognition to facilitate future research.", "Pun Recognition and Generation. To recognize puns, Miller et al. 
(2017) summarize several systems for the SemEval 2017 tasks.", "To detect a pun, Pedersen (2017) assumes that if there is a pun in the sentence, the senses assigned to the sentence will differ across different Word Sense Disambiguation (WSD) runs.", "To locate the pun, based on the WSD results for pun detection, they choose the last word that changes its sense between different WSD runs.", "Even though this method can tackle both homographic and heterographic pun detection, it does not use any pre-trained embedding model.", "Xiu et al. (2017) detect the pun in a sentence using similarity features calculated on sense vectors or cluster center vectors.", "To locate the pun, they use an unsupervised system that scores each word in the sentence and chooses the word with the smallest score.", "However, this model relies exclusively on semantics to detect heterographic puns and ignores the rich information embedded in the pronunciations.", "Doogan et al. (2017) leverage word embeddings as well as phonetic information by concatenating pronunciation strings, but the concatenation has limited expressive ability.", "They also mention that their systems suffer on short sentences, as word embeddings do not carry much context information.", "Besides, Zou and Lu (2019) jointly detect and locate the pun from a sequence labeling perspective by employing a new tagging schema.", "Diao et al. (2018) expand word embeddings using WordNet to address the polysemy of homographic puns, followed by a neural attention mechanism that extracts collocations to detect homographic puns.", "However, all these methods make use of only limited context information.", "Other than pun recognition, Yu et al. (2018) generate homographic puns without requiring any pun data for training.", "He et al. 
(2019) improve homographic pun generation based on the local-global surprisal principle, which posits that the pun word and the alternative word have a strong association with the distant and immediate context, respectively.", "Pronunciation Embeddings. Word embeddings assign each word a vector so that words with similar semantic meanings are close in the embedding space.", "Most word embedding models only make use of textual information and omit the rich information contained in pronunciation.", "However, pronunciation is also an important part of language (Zhu et al., 2018).", "Prior studies have demonstrated that phonetic information can be used in speech recognition (Bengio and Heigold, 2014), spell correction (Toutanova and Moore, 2002), and speech synthesis (Miller, 1998a).", "By projecting into the embedding space, words that sound alike are placed near each other (Bengio and Heigold, 2014).", "Furthermore, Kamper et al. (2016) make use of word-pair information to improve acoustic word embeddings.", "Zhu et al. (2018) show that combining pronunciation with written text can help improve the performance of word embeddings.", "However, these pronunciation embeddings are word-level features, while in our approach, we make use of syllabic pronunciations, which are phoneme-level and can help in out-of-vocabulary (OOV) situations.", "Luo et al. (2019) also propose an adversarial generative network for pun generation, which does not require any pun corpus.", "Contextualized Word Embeddings. Traditional word embeddings assign a fixed vector to a word even if the word has multiple meanings under different contexts (e.g., the river bank vs. the commercial bank).", "McCann et al. (2017) combine pivot word embeddings with contextual embeddings generated by an encoder from a supervised neural machine translation task.", "Peters et al. 
(2017) enrich word embeddings with contextual information extracted from a bidirectional language model.", "Devlin et al. (2018) learn language embeddings by stacking multiple transformer layers with a masked language model objective, which advances the state of the art for many NLP tasks.", "Yang et al. (2019) enable learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and solve the problem of the pretrain-finetune discrepancy.", "In this section, we first formally define the problem and then introduce the proposed method, PCPR.", "Suppose the input text consists of a sequence of N words {w_1, w_2, ..., w_N}.", "For each word w_i with M_i phonemes in its pronunciation, the phonemes are denoted as R(w_i) = {r_{i,1}, r_{i,2}, ..., r_{i,M_i}}, where r_{i,j} is the j-th phoneme in the pronunciation of w_i.", "These phonemes are given by a dictionary.", "In this paper, we aim to recognize potential puns in the text with two tasks, pun detection and pun location, as described in the following.", "Task 1: Pun Detection.", "The pun detection task identifies whether a sentence contains a pun.", "Formally, the task is modeled as a classification problem with a binary label y^D.", "Task 2: Pun Location.", "Given a sentence containing at least one pun, the pun location task aims to identify the pun word.", "More precisely, for each word w_i, we would like to predict a binary label y_i^L that indicates whether w_i is a pun word.", "In addition to independently solving the above two tasks, the ultimate goal of pun recognition is to build a pipeline from scratch to detect and then locate the puns in texts.", "Hence, we also evaluate the end-to-end performance by aggregating the solutions for the two tasks.", "Figure 1 shows the overall framework of the proposed Pronunciation-attentive Contextualized Pun Recognition (PCPR).", "For each word in the input text, we first derive two continuous vectors, including 
a contextualized word embedding and a pronunciation embedding, as representations in different aspects.", "Contextualized word embeddings derive appropriate word representations with consideration of context words and capture the accurate semantics in the text.", "To learn the phonological characteristics, each word is divided into phonemes while each phoneme is projected into a phoneme embedding space, thereby obtaining pronunciation embeddings with the attention mechanism (Bahdanau et al., 2015).", "Finally, a self-attentive encoder blends contextualized word embeddings and pronunciation embeddings to capture the overall semantics for both pun detection and location.", "The context is essential for interpreting a word in the text.", "Hence, we propose to apply contextualized word embeddings to derive word representations.", "In the framework of PCPR, any contextualized word embedding method, such as BERT (Devlin et al., 2018), ELMo (Peters et al., 2018), or XLNet (Yang et al., 2019), can be utilized.", "Here, we choose BERT to derive contextualized word embeddings without loss of generality.", "BERT deploys a multi-layer bidirectional encoder based on transformers with multi-head self-attention (Vaswani et al., 2017) to model words in the text after integrating both word and position embeddings (Sukhbaatar et al., 2015).", "As a result, for each word, a representative contextualized embedding is derived by considering both the specific word and all contexts in the document.", "Here we denote T_i^C as the d_C-dimensional contextualized word embedding for the word w_i.", "In addition, BERT contains a special token [CLS] with an embedding vector to represent the semantics of the whole input text.", "To learn the phonological characteristics of words, PCPR models the word phonemes.", "For each phoneme r_{i,j} of the word w_i, we project r_{i,j} into a d_P-dimensional embedding space as a trainable vector u_{i,j} to represent its phonological properties.", "Based on 
the phoneme embeddings of a word, we apply the attention mechanism (Bahdanau et al., 2015) to simultaneously identify important phonemes and derive the pronunciation embedding T_i^P.", "Specifically, the phoneme embeddings are transformed by a fully-connected hidden layer to measure the importance scores α_i^P as follows: v_{i,j} = tanh(F^P(u_{i,j})), α_{i,j}^P = (v_{i,j}^T v_s) / (Σ_k v_{i,k}^T v_s), where F^P(·) is a fully-connected layer with d_A outputs and d_A is the attention size; v_s is a d_A-dimensional context vector that estimates the importance score of each phoneme embedding.", "Finally, the pronunciation embedding T_i^P can be represented as the weighted combination of phoneme embeddings as follows: T_i^P = Σ_j α_{i,j}^P u_{i,j}.", "Moreover, we can further derive the joint embedding T_i^J, which indicates both word semantics and phonological knowledge for the word w_i, by concatenating the two different embeddings as follows: T_i^J = [T_i^C ; T_i^P].", "For the task of pun detection, understanding the meaning of the input text is essential.", "Due to its interpretability advantages over convolutional neural networks (LeCun et al., 1995) and recurrent neural networks (Schuster and Paliwal, 1997), we deploy the self-attention mechanism (Vaswani et al., 2017) to capture the overall semantics represented in the joint embeddings.", "For each word w_i, the self-attention mechanism estimates an importance vector α_i^S: F^S(T) = Softmax(T T^T / √d) T, α_i^S = exp(F^S(T_i^J)) / Σ_j exp(F^S(T_j^J)), where F^S(·) is the function to estimate the attention for queries, and √d is a scaling factor to avoid extremely small gradients.", "Hence, the self-attentive embedding vector is computed by aggregating the joint embeddings: T_[ATT]^J = Σ_i α_i^S T_i^J.", "Note that the knowledge of pronunciations is considered by the self-attentive encoder but not the contextualized word encoder.", "Finally, the pronunciation-attentive 
representation for the whole input text can be derived by concatenating the overall contextualized embedding and the self-attentive embedding: $T_{J_{[CLS]}} = [T_{C_{[CLS]}}; T_{J_{[ATT]}}]$.", "Based on the joint embedding for each word and the pronunciation-attentive contextualized embedding for the whole input text, both tasks can be tackled with simple fully-connected layers.", "Pun Detection.", "Pun detection is modeled as a binary classification task.", "Given the overall embedding for the input text $T_{J_{[CLS]}}$, the prediction $y_D$ is generated by a fully-connected layer and the softmax function: $y_D = \arg\max_{k \in \{0,1\}} F_D(T_{J_{[CLS]}})_k$, where $F_D(\cdot)$ derives the logits of the two classes in binary classification.", "Pun Location.", "For each word $w_i$, the corresponding self-attentive joint embedding $T_{J_{i,[ATT]}}$ is applied as features for pun location.", "Similar to pun detection, the prediction $y_{L_i}$ is generated by: $y_{L_i} = \arg\max_{k \in \{0,1\}} F_L(T_{J_{i,[ATT]}})_k$, where $F_L(\cdot)$ derives two logits for classifying whether a word is a pun word.", "Since both tasks focus on binary classification, we optimize the model with the cross-entropy loss.", "In this section, we describe our experimental settings and explain the results and interpretations.", "We will verify some basic assumptions of this paper: (1) the contextualized word embeddings and pronunciation embeddings are both beneficial to the pun detection and location tasks; (2) the attention mechanism can improve the performance.", "Experimental Datasets.", "We conducted experiments on the SemEval 2017 Shared Task 7 dataset (SemEval) (Miller et al., 2017) and the Pun of The Day dataset (PTD) (Yang et al., 2015).", "The SemEval dataset consists of 4,030 and 2,878 examples for pun detection and location, respectively, while each example with a pun can be a homographic or heterographic pun.", "In contrast, the PTD dataset contains 4,826 examples without labels of pun types.", "Table 2 further shows the data
statistics.", "The two experimental datasets are the largest publicly available benchmarks used in existing studies.", "The SemEval-2017 dataset contains punning and non-punning jokes, aphorisms, and other short texts composed by professional humorists and drawn from online collections.", "Hence, we assume the genres of positive and negative examples should be identical or extremely similar.", "Evaluation Metrics.", "We adopt precision (P), recall (R), and F1-score (Schütze et al., 2007; Powers, 2011) to compare the performance of PCPR with previous studies in both pun detection and location.", "More specifically, we apply 10-fold cross-validation to conduct the evaluation.", "For each fold, we randomly select 10% of the instances from the training set for development.", "To conduct fair comparisons, we strictly follow the experimental settings in previous studies (Zou and Lu, 2019; Cai et al., 2018) and include their reported numbers in the comparisons.", "Implementation Details.", "For data pre-processing, all of the numbers and punctuation marks are removed.", "The phonemes of each word are derived from the CMU Pronouncing Dictionary (http://svn.code.sf.net/p/cmusphinx/code/trunk/cmudict/); the SemEval data are available at http://alt.qcri.org/semeval2017/task7/.", "We initialize the phoneme embeddings by using the fastText word embeddings (Mikolov et al., 2018) trained on Wikipedia articles (https://dumps.wikimedia.org/enwiki/latest/) crawled in December 2017.", "[Figure 2: F1-score (0.88 to 0.92) versus phoneme embedding size $d_P$ (4 to 128) for homographic and heterographic puns.]", "The PCPR is implemented in PyTorch, and the fused Adam optimizer (Kingma and Ba, 2014) optimizes the parameters with an initial learning rate of $5 \times 10^{-5}$.", "The dropout rate and batch size are set to 0.1 and 32, respectively.", "We follow BERT (BASE) (Devlin et al., 2018) to use 12 Transformer layers and self-attention heads.", "To clarify, in PCPR, tokens and phonemes are independently processed, so the tokens processed with the WordPiece tokenizer (Wu et al., 2016) in BERT are not required
to line up with phonemes for computations.", "To deal with out-of-vocabulary words, we use the output embeddings of the first WordPiece tokens as the representatives, which is consistent with many state-of-the-art named entity recognition approaches (Devlin et al., 2018; Lee et al., 2019).", "We also create a variant of PCPR, called CPR, which exploits only the contextualized word encoder without considering phonemes, to demonstrate the effectiveness of pronunciation embeddings.", "To tune the hyperparameters, we search the phoneme embedding size $d_P$ and the attention size $d_A$ over $\{8, 16, 32, 64, 128, 256, 512\}$ as shown in Figure 2. For the SemEval dataset, the best setting is ($d_P = 64$, $d_A = 256$) for the homographic puns, while heterographic puns favor ($d_P = 64$, $d_A = 32$).", "For the PTD dataset, ($d_P = 64$, $d_A = 32$) reaches the best performance.", "For the SemEval dataset, nine baseline methods are compared in the experiments, including Duluth (Pedersen, 2017), JU CSE NLP (Pramanick and Das, 2017), PunFields (Mikhalkova and Karyakin, 2017), UWAV (Vadehra, 2017), Fermi (Indurthi and Oota, 2017), and UWaterloo (Vechtomova, 2017).", "Most of them extract complicated linguistic features to train rule-based and machine-learning-based classifiers.", "In addition to task participants, Sense (Cai et al., 2018) incorporates word sense representations into RNNs to tackle the homographic pun location task.", "The CRF (Zou and Lu, 2019) captures linguistic features such as POS tags, n-grams, and word suffixes to model puns.", "Moreover, the Joint model (Zou and Lu, 2019) jointly models the two tasks with RNNs and a CRF tagger.", "For the PTD dataset, four baseline methods with reported performance are selected for comparisons.", "MCL (Mihalcea and Strapparava, 2005) exploits word representations with multiple stylistic features, while HAE (Yang et al., 2015) applies a random forest model with Word2Vec and
human-centric features.", "PAL (Chen and Lee, 2017) trains a convolutional neural network (CNN) to learn essential features automatically.", "Based on existing CNN models, HUR (Chen and Soo, 2018) improves the performance by adjusting the filter size and adding a highway layer.", "Pun Detection.", "Table 3 presents the pun detection performance of all methods for both homographic and heterographic puns on the SemEval dataset, while Table 4 shows the detection performance on the PTD dataset.", "For the SemEval dataset, compared to the nine baseline models, PCPR achieves the highest performance, with 3.0% and 6.1% improvements in F1 over the best of the baselines (i.e., Joint) for the homographic and heterographic datasets, respectively.", "For the PTD dataset, PCPR improves over HUR by 9.6%.", "Moreover, the variant CPR beats all of the baseline methods, showing the effectiveness of contextualized word embeddings.", "In addition, PCPR further improves the performance by 2.3% and 1.1% with the attentive pronunciation feature for detecting homographic and heterographic puns, respectively.", "An interesting observation is that pronunciation embeddings also facilitate homographic pun detection, implying the potential of pronunciation for enhancing general language modeling.", "Pun Location.", "Table 3 shows that the proposed PCPR model achieves the highest F1-scores on both the homographic and heterographic pun location tasks, with remarkable improvements of 10.9% and 15.9% over the best baseline method.", "Table 3: Performance of detecting and locating puns on the SemEval dataset (P / R / F1):

| Model | Homographic detection | Homographic location | Heterographic detection | Heterographic location |
| --- | --- | --- | --- | --- |
| Duluth | 78.32 / 87.24 / 82.54 | 44.00 / 44.00 / 44.00 | 73.99 / 86.62 / 68.71 | - |
| JU CSE NLP | 72.51 / 90.79 / 68.84 | 33.48 / 33.48 / 33.48 | 73.67 / 94.02 / 71.74 | 37.92 / 37.92 / 37.92 |
| PunFields | 79.93 / 73.37 / 67.82 | 32.79 / 32.79 / 32.79 | 75.80 / 59.40 / 57.47 | 35.01 / 35.01 / 35.01 |
| UWAV | 68.38 / 47.23 / 46.71 | 34.10 / 34.10 / 34.10 | 65.23 / 41.78 / 42.53 | 42.80 / 42.80 / 42.80 |
| Fermi | 90.24 / 89.70 / 85.33 | 52.15 / 52.15 / 52.15 | - | - |
| UWaterloo | - | 65.26 / 65.21 / 65.23 | - | 79.73 / 79.54 / 79.64 |
| Sense | - | 81.50 / 74.70 / 78.00 | - | - |
| CRF | 87.21 / 64.09 / 73.89 | 86.31 / 55.32 / 67.43 | 89.56 / 70.94 / 79.17 | 88.46 / 62.76 / 73.42 |
| Joint | 91.25 / 93.28 / 92.19 | 83.55 / 77.10 / 80.19 | 86.67 / 93.08 / 89.76 | 81.41 / 77.50 / 79.40 |
| CPR | 91.42 / 94.21 / 92.79 | 88.80 / 85.65 / 87.20 | 93.35 / 95.04 / 94.19 | 92.31 / 88.24 / 90.23 |
| PCPR | 94.18 / 95.70 / 94.94 | 90.43 / 87.50 / 88.94 | 94.84 / 95.59 / 95.22 | 94.23 / 90.41 / 92.28 |
", "The improvement is much larger than that on the pun detection task.", "We posit the reason is that predicting pun locations relies much more on the comparative relations among different tokens in one sentence.", "As a result, contextualized word embeddings acquire an enormous advantage.", "By applying the pronunciation-attentive representations, different words with similar pronunciations are linked, leading to much better pinpointing of the pun word in the heterographic dataset.", "We notice that some of the baseline models, such as UWaterloo, UWAV, and PunFields, have poor performance.", "These methods consider the word position in a sentence or calculate the inverse document frequency of words.", "We suppose such rule-based recognition techniques can hardly capture the deep semantic and syntactic properties of words.", "A natural way to perform pun recognition is to establish a pipeline that first detects and then locates puns.", "Table 5 shows the pipeline performance of PCPR and Joint, which is the only baseline with reported pipeline performance, for recognizing the homographic and heterographic puns in the SemEval dataset.", "Joint achieves suboptimal performance, and the authors of Joint attribute the performance drop to error propagation.", "In contrast, PCPR improves the F1-scores over Joint by 24.6% and 20.0% on the two pun types.", "Ablation Study.", "To better understand the effectiveness of each component in PCPR, we conduct an ablation study on the homographic puns of the SemEval dataset.", "Table 6 shows the results of taking out
different features of PCPR, including pre-trained phoneme embeddings, the self-attentive encoder, and phonological attention.", "Note that we use average pooling as an alternative when we remove the phonological attention module.", "As a result, we can see a performance drop after removing each of the three features.", "This shows that all of these components are essential for PCPR to recognize puns.", "Attentive Weights Interpretation.", "Figure 3 illustrates the self-attention weights $\alpha_{S_i}$ of three examples, such as 'A busy barber is quite harried.'", "The examples are from heterographic puns in the SemEval dataset.", "The word highlighted in the upper sentence (marked in pink) is the pun, while we also color each word of the lower sentence in blue according to the magnitude of its attention weight.", "Deeper colors indicate higher attention weights.", "In the first example, busy has the largest weight because it has the most similar semantic meaning to harried.", "The word barber also has a relatively high weight.", "We suppose it is related to hairy, which should be the other word of this double entendre.", "Similarly, zoo corresponds to lion while phone and busy indicate line for the pun.", "Moreover, boating confirms sail while store supports sale.", "Interpreting the weights of our self-attentive encoder explains the significance of each token when the model detects the pun in the context.", "The phonemes are essential in these cases because they strengthen the relationship among words with distant semantic meanings but similar phonological expressions.", "Sensitivity to Text Lengths.", "Figure 4 shows the performance of pun detection and location over different text lengths for homographic and heterographic puns in the SemEval dataset.", "For both tasks, the performance gets higher when the texts are longer because the context information is richer.", "Especially in the pun detection task, we observe that our model requires longer contexts (more than 20 words) to detect the
homographic puns.", "However, shorter contexts (fewer than 10 words) are adequate for heterographic pun detection, which indicates the contribution of phonological features.", "In short, the results verify the importance of contextualized embeddings and pronunciation representations for pun recognition.", "Case Study and Error Analysis.", "Table 7 shows the results of a case study with the outputs of CPR and PCPR.", "In the first case, the heterographic pun comes from the words son and sun.", "CPR fails to recognize the pun word with limited context information, while the phonological attention in PCPR helps to locate it.", "However, the pronunciation features in some cases can mislead the model into making wrong predictions.", "For example, patent in the second sentence is a homographic pun word and has several meanings, which can be found with the contextual features.", "Besides, the phonemes in lies are ubiquitous in many other words like laws, thereby confusing the model.", "In the last case, got is a widely used causative with dozens of meanings, so the word is hard to recognize as a pun word from its contextual and phonological features.", "In this paper, we propose a novel approach, PCPR, for pun detection and location by leveraging a contextualized word encoder and modeling phonemes as word pronunciations.", "Moreover, we would love to apply the proposed model to other problems, such as general humor recognition, irony discovery, and sarcasm detection, as future work.", "We would like to thank the anonymous reviewers", "for their helpful comments.", "In NAACL 2019 .", "Lluís-F. Hurtado, Encarna Segarra, Ferran Pla, Pascual Carrasco, and José-Ángel González.", "2017.", "ELiRF-UPV at SemEval-2017 Task 7: Pun detection and interpretation.", "In SemEval-2017, pages 440-443.", "Vijayasaradhi Indurthi and Subba Reddy Oota.", "2017.", "Fermi at SemEval-2017 Task 7: Detection and interpretation of homographic puns in English language.", "In SemEval-2017,
pages 457460.", "Aaron Jaech, Rik Koncel-Kedziorski, and Mari Osten-dorf.", "2016.", "Phonological pun-derstanding.", "In ACL 2016 , pages 654663.", "Herman Kamper, Weiran Wang, and Karen Livescu.", "2016.", "Deep convolutional acoustic word embeddings using word-pair side information.", "In ICASSP 2016 , pages 49504954.", "IEEE.", "Diederik P Kingma and Jimmy Ba.", "2014.", "Adam: A method for stochastic optimization.", "arXiv preprint arXiv:1412.6980 .", "Yann LeCun, Yoshua Bengio, et al. 1995.", "Convolutional networks for images, speech, and time series.", "The handbook of brain theory and neural networks , 3361(10):1995.", "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang.", "2019.", "Biobert: pre-trained biomedical language representation model for biomedical text mining.", "arXiv preprint arXiv:1901.08746 .", "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher.", "2017.", "Learned in translation: Contextualized word vectors.", "In NeurIPS 2017 , pages 62946305.", "Alan K Melby and Terry Warner.", "1995.", "The possibility of language: A discussion of the nature of language, with implications for human and machine translation , volume 14.", "John Benjamins Publishing.", "Rada Mihalcea and Carlo Strapparava.", "2005.", "Making computers laugh: Investigations in automatic humor recognition.", "In EMNLP 2005 , pages 531538.", "Elena Mikhalkova and Yuri Karyakin.", "2017.", "Pun-fields at semeval-2017 task 7: Employing roget's thesaurus in automatic pun recognition and interpretation.", "arXiv preprint arXiv:1707.05479 .", "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin.", "2018.", "Advances in pre-training distributed word representations.", "In LREC 2018 ." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "objective", "objective", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain" ]
[ "With the growing popularity of deep-learning based NLP models, comes a need for interpretable systems.", "But what is interpretability, and what constitutes a high-quality interpretation?", "In this opinion piece we reflect on the current state of interpretability evaluation research.", "We call for more clearly differentiating between different desired criteria an interpretation should satisfy, and focus on the faithfulness criteria.", "We survey the literature with respect to faithfulness evaluation, and arrange the current approaches around three assumptions, providing an explicit form to how faithfulness is defined by the community.", "We provide concrete guidelines on how evaluation of interpretation methods should and should not be conducted.", "Finally, we claim that the current binary definition for faithfulness sets a potentially unrealistic bar for being considered faithful.", "We call for discarding the binary notion of faithfulness in favor of a more graded one, which we believe will be of greater practical utility.", "Fueled by recent advances in deep-learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many fields (Vig and Belinkov, 2019), including sensitive ones such as health, commerce and law (Fort and Couillault, 2016).", "Unfortunately, these highly flexible and highly effective neural models are also opaque.", "There is therefore a critical need for explaining learning-based models' decisions.", "The emerging research topic of interpretability or explainability 1 has grown rapidly in recent years.", "Unfortunately, not without growing pains.", "1 Despite fine-grained distinctions between the terms, within the scope of this work we use the terms interpretability and explainability interchangeably.", "One such pain is the challenge of definingand evaluatingwhat constitutes a quality interpretation.", "Current approaches define interpretation in a rather ad-hoc manner, motivated by 
practical use-cases and applications.", "However, this view often fails to distinguish between distinct aspects of the interpretation's quality, such as readability, plausibility and faithfulness (Herman, 2017).", "We argue (Sections 2 and 5) that such conflation is harmful, and that faithfulness should be defined and evaluated explicitly, and independently from plausibility.", "Our main focus is the evaluation of the faithfulness of an explanation: a faithful interpretation is one that accurately represents the reasoning process behind the model's prediction.", "We find this to be a pressing issue: in cases where an explanation is required to be faithful, imperfect or misleading evaluation can have disastrous effects.", "While literature in this area may implicitly or explicitly evaluate faithfulness for specific explanation techniques, there is no consistent and formal definition of faithfulness.", "We uncover three assumptions that underlie all these attempts.", "By making the assumptions explicit and organizing the literature around them, we connect the dots between seemingly distinct evaluation methods, and", "Unfortunately, the terms in the literature are not yet standardized, and vary widely.", "Readability and plausibility are also referred to as human-interpretability and persuasiveness, respectively (e.g., Lage et al. (2019); Herman (2017)).", "To our knowledge, the term faithful interpretability was coined in Harrington et al. (1985), reinforced by Ribeiro et al. (2016), and is, we believe, most commonly used (e.g., Gilpin et al. (2018); Wu and Mooney (2018); Lakkaraju et al. (2019)).", "Chakraborty et al.
(2017) refers to this issue (more or less) as accountability.", "Sometimes referred to as how trustworthy (Camburu et al., 2019) or descriptive (Carmona et al., 2015; Biecek, 2018) the interpretation is, or as descriptive accuracy (Murdoch et al., 2019).", "Also related to the transparency (Baan et al., 2019), the fidelity (Guidotti et al., 2018) or the robustness (Alvarez-Melis and Jaakkola, 2018) of the interpretation method.", "And frequently, simply explainability is inferred to require faithfulness by default.", "We also provide a basis for discussion regarding the desirable properties of faithfulness (Section 6).", "Finally, we observe a trend by which faithfulness is treated as a binary property, followed by showing that an interpretation method is not faithful.", "We claim that this is unproductive (Section 7), as the assumptions are nearly impossible to satisfy fully, and it is all too easy to disprove the faithfulness of an interpretation method via a counter-example.", "What can be done?", "We argue for a more practical view of faithfulness, calling for graded criteria that measure the extent and likelihood of an interpretation being faithful in practice (Section 8).", "While we have started to work in this area, we pose the exact formalization of these criteria, and concrete evaluation methods for them, as a central challenge to the community for the coming future.", "There is considerable research effort in attempting to define and categorize the desiderata of a learned system's interpretation, most of which revolves around specific use-cases (Lipton, 2018; Guidotti et al., 2018, inter alia).", "Two particularly notable criteria, each useful for different purposes, are plausibility and faithfulness.", "Plausibility refers to how convincing the interpretation is to humans, while faithfulness refers to how accurately it reflects the true reasoning process of the model (Herman, 2017; Wiegreffe and Pinter, 2019).", "Naturally, it is possible to satisfy one of these properties
without the other.", "For example, consider the case of interpretation via post-hoc text generation, where an additional generator component outputs a textual explanation of the model's decision, and the generator is learned with supervision of textual explanations (Zaidan and Eisner, 2008; Rajani et al., 2019; Strout et al., 2019).", "In this case, plausibility is the dominating property, while there is no faithfulness guarantee.", "Despite the difference between the two criteria, many authors do not clearly make the distinction, and sometimes conflate the two.", "Moreover, the majority of works do not explicitly name the criteria under consideration, even when they clearly belong to one camp or the other.", "We argue that this conflation is dangerous.", "For example, consider the case of recidivism prediction,", "E.g., Lundberg and Lee (2017); Pörner et al. (2018); Wu and Mooney (2018).", "E.g., Mohseni and Ragan (2018); Arras et al. (2016); Xiong et al. (2018); Weerts et al. (2019).", "where a judge is exposed to a model's prediction and its interpretation, and the judge believes the interpretation to reflect the model's reasoning process.", "Since the interpretation's faithfulness carries legal consequences, a plausible but unfaithful interpretation may be the worst-case scenario.", "The lack of explicit claims by research may cause misinformation to potential users of the technology, who are not versed in its inner workings.", "Therefore, a clear distinction between these terms is critical.", "A distinction is often made between two methods of interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models.", "Rudin (2018) argues in favor of inherently interpretable models, which by design claim to provide more faithful interpretations than post-hoc interpretation of black-box models.", "We warn against taking this argumentation at face-value: a method being inherently interpretable is
merely a claim that needs to be verified before it can be trusted.", "Indeed, while attention mechanisms have been considered as inherently interpretable (Ghaeini et al., 2018; Lee et al., 2017), recent work casts doubt regarding their faithfulness (Serrano and Smith, 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019).", "While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one other possible use-case with prominent evaluation literature is Intelligent User Interfaces (IUI), via Human-Computer Interaction (HCI), of automatic models assisting human decision-makers.", "The goal of the explanation here is to increase the degree of trust between the user and the system, giving the user more nuance towards whether the system's decision is likely correct, or not.", "In the general case, the final evaluation metric is the performance of the user at their task (Abdul et al., 2018).", "For example, Feng and Boyd-Graber (2019) evaluate various explanations of a model in a setting of trivia question answering.", "However, in the context of faithfulness, we must warn against HCI-inspired evaluation, as well: increased performance in this setting is not indicative of faithfulness.", "As Kaur et al.
(2019) concretely show, even experts are prone to overly trust the faithfulness of explanations, despite there being no guarantee.", "To illustrate, consider the following fictional case of a non-faithful explanation system, in an HCI evaluation setting: the explanation given is a heat-map of the textual input, attributing scores to various tokens.", "Assume the system's explanations behave in the following way: when the output is correct, the explanation consists of random content words; and when the output is incorrect, it consists of random punctuation marks.", "In other words, the explanation is more likely to appear plausible when the model is correct, while at the same time not reflecting the true decision process of the model.", "The user, convinced by the nicer-looking explanations, performs better using this system.", "However, the explanation consistently claimed random tokens to be highly relevant to the model's reasoning process.", "While the system is concretely useful, the claims given by the explanation do not reflect the model's decisions whatsoever (by design).", "While the above scenario is extreme, this misunderstanding is not entirely unlikely, since any degree of correlation between plausibility and model performance will result in increased user performance, regardless of any notion of faithfulness.", "We propose the following guidelines for evaluating the faithfulness of explanations.", "These guidelines address common pitfalls and sub-optimal practices we observed in the literature.", "Be explicit in what you evaluate.", "Conflating plausibility and faithfulness is harmful.", "You should be explicit about which one of them you evaluate, and use suitable methodologies for each one.", "Of course, the same applies when designing interpretation techniques: be clear about which properties are being prioritized.", "Faithfulness evaluation should not involve human judgement on the quality of interpretation.", "We note that: (1) humans cannot judge if an
interpretation is faithful or not: if they understood the model, the interpretation would be unnecessary; (2) for similar reasons, we cannot obtain supervision for this problem, either.", "Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plausibility.", "Faithfulness evaluation should not involve human-provided gold labels.", "We should be able to interpret incorrect model predictions, just the same as correct ones.", "Evaluation methods that rely on gold labels are influenced by human priors on what the model should do, and again push the evaluation in the direction of plausibility.", "Do not trust inherent interpretability claims.", "Inherent interpretability is a claim until proven otherwise.", "Explanations provided by inherently interpretable models must be held to the same standards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.", "Faithfulness evaluation of IUI systems should not rely on user performance.", "End-task user performance in HCI settings is merely indicative of correlation between plausibility and model performance, however small this correlation is.", "While it is important to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness.", "What does it mean for an interpretation method to be faithful?", "Intuitively, we would like the provided interpretation to reflect the true reasoning process of the model when making a decision.", "But what is a reasoning process of a model, and how can reasoning processes be compared to each other?", "Lacking a standard definition, different works evaluate their methods by introducing tests to measure properties that they believe good interpretations should satisfy.", "Some of these tests measure aspects of faithfulness.", "These ad-hoc definitions are often unique to each paper and inconsistent with each other, making it hard to find commonalities.", "We
uncover three assumptions that underlie all these methods, enabling us to organize the literature along standardized axes, and relate seemingly distinct lines of work.", "Moreover, exposing the underlying assumptions enables an informed discussion regarding their validity and merit (we leave such a discussion for future work, by us or others).", "These assumptions, to our knowledge, encapsulate the current working definitions of faithfulness used by the research community.", "Corollary 1.1.", "An interpretation system is unfaithful if it results in different interpretations of models that make the same decisions.", "As demonstrated by a recent example concerning NLP models, it can be used for proof by counterexample.", "Theoretically, if all possible models which can perfectly mimic the model's decisions also provide the same interpretations, then they could be deemed faithful.", "Conversely, showing that two models provide the same results but different interpretations disproves the faithfulness of the method.", "Wiegreffe and Pinter (2019) show how these counter-examples can be derived with adversarial training of models which can mimic the original model, yet provide different explanations.", "Corollary 1.2.", "An interpretation is unfaithful if it results in different decisions than the model it interprets.", "A more direct application of the Model Assumption is via the notion of fidelity (Guidotti et al., 2018; Lakkaraju et al., 2019).", "For cases in which the explanation is itself a model capable of making decisions (e.g., decision trees or rule lists (Sushil et al., 2018)), fidelity is defined as the degree to which the explanation model can mimic the original model's decisions (as an accuracy score).", "For cases where the explanation is not a computable model, Doshi-Velez and Kim (2017) propose a simple way of mapping explanations to decisions via crowd-sourcing, by asking humans to simulate the model's decision without any access to the model, and only
access to the input and explanation (termed forward simulation).", "This idea is further explored and used in practice by Nguyen (2018).", "Corollary 2.", "An interpretation system is unfaithful if it provides different interpretations for similar inputs and outputs.", "Since the interpretation serves as a proxy for the model's reasoning, it should satisfy the same constraints.", "In other words, interpretations of similar decisions should be similar, and interpretations of dissimilar decisions should be dissimilar.", "This assumption is more useful to disprove the faithfulness of an interpretation rather than prove it, since a disproof requires finding appropriate cases [Footnote 6: We note that in context, Wiegreffe and Pinter also utilize the model assumption to show that some explanations do carry useful information on the model's behavior.]", "where the assumption does not hold, whereas a proof would require checking a (very large) satisfactory quantity of examples, or even the entire input space.", "One recent discussion in the NLP community (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) concerns the use of this underlying assumption for evaluating attention heat-maps as explanations.", "The former attempts to provide different explanations of similar decisions per instance.", "The latter critiques the former and is based more heavily on the model assumption, described above.", "Additionally, Kindermans et al. (2019) propose to introduce a constant shift to the input space, and evaluate whether the explanation changes significantly as the final decision stays the same.", "Alvarez-Melis and Jaakkola (2018) formalize a generalization of this technique under the term interpretability robustness: interpretations should be invariant to small perturbations in the input (a direct consequence of the prediction assumption).", "Wolf et al.
(2019) further expand on this notion as consistency of the explanation with respect to the model.", "Unfortunately, robustness measures are difficult to apply in NLP settings due to the discrete input.", "Assumption 3 (The Linearity Assumption).", "Certain parts of the input are more important to the model reasoning than others.", "Moreover, the contributions of different parts of the input are independent from each other.", "This assumption is employed by methods that consider heat-maps (e.g., attention maps) over the input as explanations, particularly popular in NLP.", "Heat-maps are claims about which parts of the input are more relevant than others to the model's decision.", "As such, we can design stress tests to verify whether they uphold their claims.", "One method proposed to do so is erasure, where the most relevant parts of the input, according to the explanation, are erased from the input, in expectation that the model's decision will change (Arras et al., 2016; Feng et al., 2018; Serrano and Smith, 2019).", "Otherwise, the least relevant parts of the input may be erased, in expectation that the model's decision will not change (Jacovi et al.,", "[Footnote 7: This assumption has gone through justified scrutiny in recent work.", "As mentioned previously, we do not necessarily endorse it.", "Nevertheless, it is used in parts of the literature.]", "[Footnote 8: Also referred to as feature-attribution explanations (Kim et al., 2017).]", "2018).", "Yu et al. (2019); DeYoung et al.
(2019) propose two measures of comprehensiveness and sufficiency as a formal generalization of erasure: as the degree by which the model is influenced by the removal of the high-ranking features, or by inclusion of solely the high-ranking features.", "The aforementioned assumptions are currently utilized to evaluate faithfulness in a binary manner: whether an interpretation is strictly faithful or not.", "Specifically, they are most often used to show that a method is not faithful, by constructing cases in which the assumptions do not hold for it.", "In other words, there is a clear trend of proof via counter-example, for various interpretation methods, that they are not globally faithful.", "We claim that this is unproductive, as we expect these various methods to consistently result in negative (not faithful) results, continuing the current trend.", "This follows because an interpretation functions as an approximation of the model or decision's true reasoning process, so it by definition loses information.", "By the pigeonhole principle, there will be inputs with deviation between interpretation and reasoning.", "This is observed in practice, in numerous works that show adversarial or pathological behaviors that arise from the deeply non-linear and high-dimensional decision boundaries of current models.", "Furthermore, because we lack supervision regarding which models or decisions are indeed mappable to human-readable concepts, we cannot ignore the approximation errors.", "This poses a high bar for explanation methods to fulfill, a bar which we estimate will not be overcome soon, if at all.", "What should we do, then, if we desire a system that provides faithful explanations?", "We argue that a way out of this standstill is in a more practical and nuanced methodology for defining and evaluating faithfulness.", "We propose the following challenge to the community: We must develop formal definition and evaluation for faithfulness [Footnote 9: Whether for attention
(Baan et al., 2019; Pruthi et al., 2019; Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019), saliency methods (Alvarez-Melis and Jaakkola, 2018; Kindermans et al., 2019), or others (Ghorbani et al., 2019; Feng et al., 2018).]", "[Footnote 10: Kim et al. (2017); Feng et al. (2018, Section 6) discuss this point in the context of heat-map explanations.]", "that allows us the freedom to say when a method is sufficiently faithful to be useful in practice.", "1. Across models and tasks: The degree (as grayscale) of faithfulness at the level of specific models or tasks.", "Perhaps some models or tasks allow sufficiently faithful interpretation, even if that is not true for others.", "For example, the method may not be faithful for some question-answering task, but faithful for review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.", "2. Across input space: The degree of faithfulness at the level of subspaces of the input space, such as neighborhoods of similar inputs, or singular inputs themselves.", "If we are able to say with some degree of confidence whether a specific decision's explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those specific areas or instances only.", "First, interpretability evaluation often conflates evaluating faithfulness and plausibility together.", "We should tease apart the two definitions and focus solely on evaluating faithfulness without any influence of the convincing power of the interpretation.", "Second, faithfulness is often evaluated in a binary faithful-or-not-faithful manner, and we believe strictly faithful interpretation is a unicorn which will likely never be found.", "We should instead evaluate faithfulness on a more nuanced grayscale that allows interpretations to be useful even if they are not globally and definitively faithful.", "We thank Yanai Elazar for welcome input on the
presentation and organization of the paper.", "We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI.", "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).", "[Footnote 11: As noted by Wiegreffe and Pinter (2019); Vashishth et al. (2019), although in the context of attention solely.]" ]
[ "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "method", "other", "other", "other", "other" ]
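The erasure-style stress tests and the comprehensiveness/sufficiency measures described in the record above (erase the top-ranked features and expect the decision to change; keep only the top-ranked features and expect it not to) can be illustrated with a small sketch. The bag-of-words "model" and the |weight|-based heat-map below are hypothetical stand-ins for illustration, not any cited paper's actual system or metric implementation:

```python
import math

# Toy bag-of-words "model"; purely illustrative, not a real classifier.
WEIGHTS = {"great": 2.0, "good": 1.0, "bad": -1.5, "awful": -2.5}

def predict(tokens):
    """Probability of the positive class under the toy linear model."""
    score = sum(WEIGHTS.get(t, 0.0) for t in tokens)
    return 1.0 / (1.0 + math.exp(-score))

def heatmap(tokens):
    """A hypothetical feature-attribution explanation: |weight| per token."""
    return [abs(WEIGHTS.get(t, 0.0)) for t in tokens]

def top_k_indices(tokens, k):
    """Indices of the k tokens the heat-map ranks as most relevant."""
    scores = heatmap(tokens)
    return set(sorted(range(len(tokens)), key=lambda i: -scores[i])[:k])

def comprehensiveness(tokens, k):
    """Confidence change when the k most relevant tokens are erased.
    A large drop supports the heat-map's relevance claim."""
    top = top_k_indices(tokens, k)
    rest = [t for i, t in enumerate(tokens) if i not in top]
    return predict(tokens) - predict(rest)

def sufficiency(tokens, k):
    """Confidence change when only the k most relevant tokens are kept.
    A small change supports the heat-map's relevance claim."""
    top = top_k_indices(tokens, k)
    kept = [t for i, t in enumerate(tokens) if i in top]
    return predict(tokens) - predict(kept)

sent = ["the", "movie", "was", "great", "not", "awful"]
print("comprehensiveness:", round(comprehensiveness(sent, 2), 3))
print("sufficiency:", round(sufficiency(sent, 2), 3))
```

In this toy, sufficiency is exactly zero because the two top-ranked tokens carry all of the model's signal; for real models both measures are estimated over a corpus and compared across explanation methods.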
[ "Predicting how events induce emotions in the characters of a story is typically seen as a standard multi-label classification task, which usually treats labels as anonymous classes to predict.", "They ignore information that may be conveyed by the emotion labels themselves.", "We propose that the semantics of emotion labels can guide a model's attention when representing the input story.", "Further, we observe that the emotions evoked by an event are often related: an event that evokes joy is unlikely to also evoke sadness.", "In this work, we explicitly model label classes via label embeddings, and add mechanisms that track label-label correlations both during training and inference.", "We also introduce a new semi-supervision strategy that regularizes for the correlations on unlabeled data.", "Our empirical evaluations show that modeling label semantics yields consistent benefits, and we advance the state-of-the-art on an emotion inference task.", "Understanding how events in a story affect the characters involved is an integral part of narrative understanding.", "Rashkin et al. 
(2018) introduced an emotion inference task on a subset of the ROCStories dataset (Mostafazadeh et al., 2016), labeling entities with the emotions they experience from the short story contexts.", "Previous work on this and related tasks typically frames them as multi-label classification problems.", "The standard approach uses an encoder that produces a representation of the target event along with the surrounding story events, and then pushes it through a classification layer to predict the possible emotion labels (Rashkin et al., 2018; Wang et al., 2018).", "This classification framework ignores the semantics of the emotions themselves.", "Each emotion label (e.g., joy) is just a binary prediction.", "However, consider the sentence, 'Danielle was really short on money.'", "The emotional reaction is FEAR of being short on money.", "First, if a model had lexical foreknowledge of fear, we should expect an improved ability to decide if a target event evokes FEAR.", "Second, such a model might represent relationships between the emotions themselves.", "For example, an event that evokes FEAR is likely to evoke SADNESS and unlikely to evoke JOY.", "When previous models frame this as binary label prediction, they miss out on ways to leverage label semantics.", "In this work, we show that explicitly modeling label semantics improves emotion inference.", "We describe three main contributions.", "First, we show how to use embeddings as the label semantics representation.", "We then propose a label attention network that produces label-informed representations of the event and the story context to improve prediction accuracy.", "Second, we add mechanisms that can make use of label-label correlations as part of both training and inference.", "During training, the correlations are used to add a regularization loss.", "During inference, the prediction logits for each label are modified to incorporate the correlations, thus allowing the model's confidence on one label to
influence its prediction of other labels.", "Third, we show that the label correlations can be used as a semi-supervised signal on the unlabeled portion of the ROCStories dataset.", "Our empirical evaluations show that adding label semantics consistently improves prediction accuracy, and produces labelings that are more consistent than models without label semantics.", "Our best model outperforms previously reported results and achieves more than 4.9 points absolute improvement over the BERT classification model, yielding a new state-of-the-art result for this task.", "The emotion inference task introduced by Rashkin et al. (2018) is defined over a subset of short stories from the ROCStories dataset (Mostafazadeh et al., 2016).", "It infers the reactions that each event evokes in the characters of the story, given the story context thus far.", "For each sentence (i.e., event) in a story, the training data includes annotations of eight emotions.", "Given a sentence x_s denoting a single event in a story, the task is to label the possible emotional reactions that an event evokes in each character in the story.", "Since an event can evoke multiple reactions, the task is formulated as a multi-label classification problem.", "The standard approach to this task has been as follows.", "For a given character c and the target sentence x_s, collect all previous sentences x_c in the story in which the character c is mentioned as the character context.", "Encode the target sentence and the character context to obtain a single representation, and use it as input to a multi-label classification layer for prediction.", "Rashkin et al.
(2018) benchmark the performance of multiple encoders (see Section 5).", "We extend this previous work to integrate label semantics into the model by adding label embeddings (Section 3) and explicitly representing label-label correlations (Section 4).", "A simple strategy to model label semantics is to explicitly represent each with an embedding that captures the surface semantics of its label name.", "Since the emotion labels correspond to actual words (e.g., joy, fear, etc.), we can initialize them with their corresponding word embeddings (learned from a large corpus).", "We then use these label embeddings in two ways as detailed below.", "The label embeddings can be used to guide an encoder network to extract emotion-related information from the sentences.", "We adopted the Label-Embedding Attentive Network (LEAM) architecture to produce label-focused representations (Wang et al., 2018).", "The main idea behind the LEAM model is to compute attention scores between the label and the representations of the tokens in the input that is to be classified.", "This can [Footnote 2: The original model used LEAM directly on top of GloVe embeddings (Wang et al., 2018).]", "then be used to appropriately weight the contributions of each token to the final representations.", "In this work, we use LEAM to compute an attention matrix over the hidden states produced by the encoder and the label embeddings.", "The encoder produces BERT features for each token B_t in the text and for each of the label sentences J.", "The attention matrix is then used to produce a weighted combination of the contextual representations of the input, using the compatibility matrix H, as computed in (Wang et al., 2018).", "This gives emotion-focused representations y to use for classification: H = (J^T B_t) ⊘ Ĥ (1). Figure 1 illustrates the key steps in the model.", "Rather than learning label embeddings from scratch, we also explore using contextual embeddings from transformer-based models
like BERT.", "This allows us to use richer semantics derived from pre-training and also allows us to exploit the self-attention mechanism to introduce label semantics as part of the input itself.", "In addition to the target and context sentences, we also include emotion-label sentences, L_s, of the form '[character] is [emotional state]' as input to the classifier.", "For each instance, we add eight such sentences covering all emotional labels.", "In this paper, we use the final layer of a pretrained BERT-base model to get representations for the input sentence and each of the emotion-label sentences.", "The self-attention mechanism will automatically learn to attend to these label sentences when constructing the representations for the input text.", "Indeed, as shown in Figure 2, there are strong (positive and negative) correlations between the emotion labels in the ground truth.", "For instance, there is a high negative correlation (ρ = -0.9) between JOY and SAD labels and a high positive correlation between JOY and TRUST (ρ = 0.9).", "We propose two ways to incorporate these label correlations to improve prediction.", "In a multi-label setting, a good model should respect the label correlations.", "If it is confident about a particular label, then it should also be confident about other positively correlated labels, and conversely less confident about labels that are negatively correlated.", "Following Zhao et al. (2019), we add", "(i) a loss function that penalizes the model for making incongruous predictions, i.e.
those that are not compatible with the label correlations, and", "(ii) a component that multiplies the classification logit vector z with the learned label relations encoded as a learned correlation matrix G.", "This component transforms the raw prediction score of each label to a weighted sum of the prediction scores of the other labels.", "For each label, these weights are given by its learned correlation with all the other labels.", "Therefore, the prediction score of each label is affected by the prediction score of the other labels, based on the correlation between label pairs.", "The final prediction scores are then calculated as shown in the equation: e = σ(zG) (2). The overall loss then comprises two loss functions, the prediction loss (L_BCE) and the correlation loss (L_corr): L(θ) = L_BCE(e, y) + L_corr(e, y') (3), where L_corr computes the BCE loss with a continuous representation of the true labels y, using the learned label correlation G: y' = yG (4). [Section 4.2: Semi-supervision on Unlabeled Data] We also introduce a new semi-supervision idea to exploit label correlations as a regularization signal on unlabeled data.", "The multi-label annotations used in this work (Rashkin et al., 2018) only comprise a small fraction of the original ROCStories data.", "There are 40k character-line pairs that have open text descriptions of emotional reactions, but these aren't annotated with multi-label emotions, and therefore were not used in the above supervised emotion prediction tasks.", "We propose a new semi-supervised method over BERT representations that augments the soft-training objective used in Section 4.1 with a label correlation incompatibility loss defined over the unlabeled portion of the ROCStories dataset.", "We use two loss functions: the loss computed in Equation 3, and the regularization loss on the unlabeled training data (Equation 5).", "For the semi-supervised training, we use an iterative batch-wise training.", "In the first
step, all weights of the model are updated by minimizing the loss in Equation 3.", "In the next step, the learned label correlations are updated using: L_reg = Σ_{i,j} G_{ij} d(e_i, e_j) (5), with d(e_i, e_j) = ||e_i - e_j|| for G_{ij} ≥ 0, and ||e_i - e_j||^{-1} otherwise.", "This loss helps the model to produce consistent predictions based on the correlations by forcing positively correlated labels to have similar scores and negatively correlated ones to have dissimilar scores.", "We compare our proposed models with the models presented in Rashkin et al. (2018), the LEAM architecture of Wang et al. (2018), and fine-tuned BERT models (Devlin et al., 2019) for multi-label classification without label semantics.", "For all the models we report the micro-averaged Precision, Recall, and F1 score of the emotion prediction task.", "Rashkin et al. (2018) modeled character context and pre-trained on free response data to predict the mental states of characters using different encoder-decoder setups, including BiLSTMs, CNNs, the recurrent entity network (REN) (Henaff et al., 2016), and neural process networks (NPN) (Bosselut et al., 2017).", "Additionally, we compare with the self-attention architecture proposed in (Paul and Frank, 2019), without the knowledge from ConceptNet (Speer and Havasi, 2012) and ELMo embeddings (Peters et al., 2018).", "To compare against LEAM, we evaluate our LEAM+BERT variant, where the label attention is computed from BERT representations of each of the label sentences and of the words in the input sentence.", "We also encode the sentence and context separately in a BiLSTM layer as done in Rashkin et al.
(2018).", "We also fine-tuned a BERT-base-uncased model for emotion classification, using x_s, x_c, and L_s as inputs.", "This beats the other baselines by a significant margin, and is thus a strong new baseline.", "All our models are evaluated on the emotion reaction prediction task over the eight emotion labels (Plutchik categories) annotated in the Rashkin et al. (2018) dataset.", "We follow their evaluation setup, and report the final results on the test set.", "We use pretrained GloVe embeddings (100d) and BERT-base-uncased representations with the LEAM model.", "The final classifier used in all models is a feed-forward layer, followed by a sigmoid.", "Table 1 compares the performance of the baselines with our models that use label semantics.", "Among the baselines, the fine-tuned BERT base model obtains the best results.", "Adding label embeddings (Section 3.1) to the basic BiLSTM via the LEAM model provides a substantial increase, more than 27 absolute points in F1.", "We swapped in BERT features instead of GloVe and found a further 3-point improvement.", "The BERT baseline beat both of these, but appending label sentences as additional input to fine-tuned BERT increased its performance by 1.4 F1 points.", "A further increase of 2 points in F1 is achieved by tracking label-label correlations through training loss and inference logits.", "In addition, adding semi-supervision yields the best gain of more than 4.9 points in F1 over basic BERT, providing a significant advance in state-of-the-art results for emotion inference in this dataset.", "We also checked the statistical significance of the Semi-supervision model (Table 1) against the Learned Correlations, BERT+Labels as Input, LEAM w/ BERT Features, and the BERT model using the Randomization Test (Smucker et al., 2007).", "This involved comparing the outputs of the Semi-supervision model with the
[Table 2: Prediction of labels with label semantics (LS) versus without label semantics (NoLS); for four example sentences, the table lists the ground-truth labels alongside the LS and NoLS predictions.]", "above mentioned models after creating 100,000 random permutations.", "The Semi-supervision model achieved a statistically significant improvement over all the baselines.", "We did further qualitative analysis of the results on the dev set to better understand the performance of the Semi-supervised Label Semantics model.", "Compared to base BERT, this model predicts more emotion classes per instance (8839 vs. 5024).", "The wrong predictions of this model have lower probabilities than the correct labels, suggesting that classification could be further improved with proper threshold identification.", "This model is also better at capturing the semantic relations between labels during prediction.", "This is highlighted through some examples in Table 2.", "One of the most widely used works in narrative understanding introduced ROCStories, a dataset for evaluating story understanding (Mostafazadeh et al., 2016).", "On a subset of these stories, Rashkin et al. (2018) added annotations for causal links between events in stories and mental states of characters.", "They model entity state to predict emotional reactions and motivations for causing events occurring in ROCStories.", "Additionally, they also introduce a new dataset annotation that tracks emotional reactions and motivations of characters in stories.", "Other work looked at encoding external knowledge sources to augment motivation inference (Paul and Frank, 2019) on the same dataset.", "Both treat labels as anonymous classes, whereas this work explores modeling the semantics of the emotion labels explicitly.", "Recent work in multi-label emotion
classification has shown that using the relation information between labels can improve performance.", "Kurata et al. (2016) use the label co-occurrence information in the final layer of the neural network to improve multi-label classification.", "Correlation-based label representations have also been used for music style classification (Zhao et al., 2019).", "Our work builds on these and adds a similar result showing that label correlations can have a significant impact on emotion label inference.", "We present new results for the multi-label emotion classification task of Rashkin et al. (2018), extending previously reported results by 10.7 F1 points (55.1 to 65.8).", "The multi-label nature of emotion prediction lends itself naturally to using the correlations between the labels themselves.", "Further, we showed that modeling the class labels as semantic embeddings helped to learn better representations with more meaningful predictions.", "As with many tasks, BERT provided additional context, but our integration of these label semantics showed significant improvements.", "We believe these models can improve many other NLP tasks where the class labels carry inherent semantic meaning in their names.", "This work was supported in part by the National Science Foundation under Grant IIS-1617969.", "This material is also based on research that is in part supported by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon." ]
[ "abstain", "abstain", "objective", "result", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective", "method", "abstain", "abstain", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "result", "result", "result", "other", "other", "other" ]
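The correlation-aware scoring and loss of Equations 2-4 in the record above (mix the logits through a label-label correlation matrix G, and smooth the targets as y' = yG) can be sketched in a few lines. The label set, the hand-set matrix G, and the unweighted loss combination below are illustrative assumptions, not the paper's actual implementation (which learns G end-to-end with a trained classifier):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y, eps=1e-7):
    """Binary cross-entropy averaged over labels."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

n_labels = 4
G = np.eye(n_labels)          # label-label correlations (learned in the paper; hand-set here)
G[0, 1] = G[1, 0] = 0.8       # e.g., joy <-> trust: positively correlated
G[0, 2] = G[2, 0] = -0.8      # e.g., joy <-> sadness: negatively correlated

z = np.array([2.0, -0.5, 1.0, 0.0])   # raw classification logits (illustrative values)
e = sigmoid(z @ G)                    # Eq. (2): correlation-mixed prediction scores
y = np.array([1.0, 1.0, 0.0, 0.0])    # gold multi-label vector
y_soft = np.clip(y @ G, 0.0, 1.0)     # Eq. (4): soft targets y' = yG, clipped into [0, 1]

# Eq. (3): prediction loss plus correlation loss (here an unweighted sum).
loss = bce(e, y) + bce(e, y_soft)
print("loss:", loss)
```

The effect of the mixing step is that a confident logit for one label raises (or lowers) the scores of its positively (or negatively) correlated neighbors before the sigmoid is applied.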
[ "We propose a methodology to construct a term dictionary for text analytics through an interactive process between a human and a machine.", "The interactive approach helps the creation of flexible dictionaries with the precise granularity required in text analysis.", "This paper introduces the first formulation of interactive dictionary construction to address this issue.", "To optimize the interaction, we propose a new algorithm that effectively captures an analyst's intention starting from only a small number of sample terms.", "Along with the algorithm, we also design an automatic evaluation framework that provides a systematic assessment of any interactive method for the dictionary creation task.", "Experiments using corpora and dictionaries based on real scenarios show that our algorithm outperforms baseline methods, and works even with a small number of interactions.", "Also, we provide our dataset for future studies.", "Since the emergence of practical interest in text analytics that finds insights from massive documents (Nasukawa and Nagano, 2001), there have been several requirements for enhancing valuable discoveries.", "The one critical issue we tackle in this paper is the effective construction of a term dictionary (Godbole et al., 2010).", "The term dictionary, which is an arbitrary set of terms, is used in text analytics to represent interesting analysis perspectives (Nasukawa and Nagano, 2001; Nasukawa, 2009); for example, dictionaries of product names and evaluative descriptions are required for mining customer reputations about products.", "The motivation of this paper is how to reduce the human workload for the dictionary construction as much as possible. [Footnote 1: https://github.com/kohilin/IDC-evalset.git] [Figure 1: Typical
dictionaries in previous studies (upper), e.g., synonyms of U.S.A. (U.S., U.S.A., America) and medicine names (Aspirin, Opdivo, Tylenol), versus the flexible and fine-grained dictionaries in this work (lower), e.g., overlapping evaluative, functional, and appearance descriptions (Yellow, Dirty, Waterproof, Portable, Sturdy, User-Friendly, Stretchy, Superior, Formal, Traditional, For pregnant, Metalic, Bad, Studless, Beautiful, Nice).]", "To this end, we establish a methodology of interactive dictionary construction that incrementally captures an analyst's intention starting from a small number of sample terms and enables him/her to effortlessly expand terms in the intended dictionary through suggestions by a machine.", "The term dictionary for text analytics is expensive to construct because we need to focus on terms with the flexible granularity needed for in-depth analysis (Takeuchi et al., 2009; Godbole et al., 2010; Mostafa, 2013).", "For instance, if the analyst wants to examine product evaluation from both its function and appearance, he/she then needs to separately create those dictionaries, whose boundaries are vague and overlapping (Figure 1).", "In short, we need to group any terms the analyst wants together depending on the documents and the objective of analysis, which forces an ad hoc construction of the term dictionary.", "This situation is rather severe in real-world tasks because the vocabulary size for an exhaustive search of the texts is vast, and the analyst will go through repeated trial and error of creating dictionaries until he/she reaches findings.", "At present, there is a demand for a machine that decreases the cost of the ad hoc dictionary construction.", "As dictionary construction can be considered a type of term collection, there is a related research field, set expansion, which expands a small set of terms by means of bootstrapping (Pantel and Pennacchiotti, 2006).", "This approach automatically finds new terms for the given set from documents in accordance with a predefined exploration strategy (Pantel et al., 2009; He and Xin, 2011).", "Although such an automatic procedure is advantageous for reducing the human workload, the
quality of the collected terms is questionable for a term dictionary.", "For example, a good analysis requires more fine-grained dictionaries than the original targets in set expansion, such as distinct ontological terms (e.g., country names; Shen et al. 2017, 2018).", "Several studies have incorporated a human in the term collection process (Godbole et al., 2010; Coden et al., 2012).", "Specifically, dictionaries are built in an interactive process where the human gives feedback to the machine and the machine suggests candidates based on the given feedback (Alba et al., 2017, 2018).", "Such a human-in-the-loop approach has been an active topic in other fields as well, for instance, image classification (Cui et al., 2016), dialogue systems (Li et al., 2017), and audio annotation (Kim and Pardo, 2018).", "We can generally expect that reliable feedback provided by a human makes a system more accurate.", "With respect to dictionary construction, however, experimental results in this vein are limited due to empirical evaluation by just a few participants and the use of coarse dictionaries as test items.", "In short, it is still an open question what the critical issues are for the interactive construction of fine-grained term dictionaries for text analytics.", "Moving in the same promising direction of leveraging both a human and a machine, we establish a well-defined and effective methodology for constructing the term dictionary.", "In summary, our contribution in this paper is fourfold:", "(i) We formulate the interactive process of term collection, which brings clarity to the problem to be solved (§2).", "(ii) We develop a method that captures an analyst's intention from a small number of samples with our formulation as the basis (§3).", "(iii) We", "propose an automatic evaluation framework that provides a systematic assessment for interactive methods (§4).", "(iv) Our experimental results show that the proposed method surpasses baseline 
methods such as set expansion, word embeddings, and a linear classifier on the crowdsourced dataset.", "The dataset emulates the real-world scenario of flexible and fine-grained dictionary construction, and we distribute the dataset to the public (§5).", "In this section, we provide the definitions and notations used throughout this paper.", "First, a term is a string representation of a certain notion, such as apple and New York.", "A dictionary is a collection of terms.", "A user denotes the person who wants to construct a dictionary, and system denotes the machine that helps the user.", "Let W be the whole set of terms in the documents.", "Our objective is to rapidly find as many terms of the user's interest U ⊆ W as possible.", "As seen in Figure 2, interactive dictionary construction is defined as an iterative process in which each iteration consists of the following steps: 1) user feedback, in which the user selects terms for the dictionary from the current candidate terms, and 2) candidate selection, in which the system finds candidate terms for the next user feedback.", "For the i-th iteration (i = 0, 1, 2, . . . 
), let C_i be the set of terms that the system finds in the candidate selection step and U_i be the set of terms that the user selects from C_{i−1} in the user feedback step as positive examples.", "Here, U_0 is a special feedback we call seed terms, which are directly given by the user first.", "Note that, because we wish to expand the dictionary, each term in C_i should be new to the user in the (i+1)-th iteration.", "In the i-th step of the user feedback (i ≥ 1), we assume that the user can annotate which terms in C_{i−1} are in U without being aware of the whole U. (Figure 3: Task definition.)", "So, U_i ⊆ U for each i.", "Let Ũ_i := ∪_{m=0}^{i} U_m be the set of words of the user's interest found by the end of the i-th iteration.", "However, it is impractical to define our objective as an optimization problem for the asymptotic convergence of Ũ_i because the user feedback is done by a human, and i cannot be large.", "Hence, we try to maximize |C_i ∩ U|, the number of suggested terms that match the user's interest.", "Also, since C_i is manually selected by a human user, the proper size of C_i is practically limited to 5–10.", "Figure 3 shows the steps from setting the seed terms to giving the first feedback on the first candidates.", "Using the example in Figure 2, U_0 is { Formal }, C_0 is { Nice, Traditional }, U_1 is { Traditional }, and C_0 \\ U is { Nice }.", "The system then selects C_1 based on U_0 and U_1 (i.e., Ũ_1) from W, excluding the already shown terms C_0 ∪ Ũ_1.", "It is important that we design the system to be effective so that the overlap of C_i and U becomes larger.", "There are two major challenges for this problem; one is the number of seed terms, and the other is the term overlap of different dictionaries.", "In terms of the first issue, we have only a few seed terms for the target dictionary at the 
first iteration.", "If the system requires more seed terms, the advantage of the system drops because it contradicts our purpose of decreasing the human workload in constructing the dictionary.", "Therefore, we need a method that captures the user's intention from a smaller number of samples.", "In terms of the second issue, identifying terms of the user's interest is difficult because boundaries between dictionaries often overlap in text analytics, as seen in Figure", "1. In other words, the system needs to be sensitive to subtle semantic differences with only a few feedbacks.", "In this section, we first describe a previous candidate selection model, the SetExpan algorithm (Shen et al., 2017), which inspired our method (§3.1).", "Subsequently, we introduce our method as the weighted version of SetExpan with improvements for dealing with interactive settings (§3.2).", "Throughout this section, we discuss the i-th step of candidate selection for a certain i.", "For simplicity, C_i and Ũ_i are denoted as C and Ũ, respectively.", "As we stated in §2, the objective of the task is to suggest a C that contains as many terms in U as possible.", "Recall that Ũ is a set of positive examples for terms of the user's interest that are found in previous steps.", "Following the strategy taken in set expansion (Shen et al., 2017), a straightforward and reasonable approach to determine C is to define Sim(e, e′ | F), which returns a similarity score for two terms e and e′ based on a set of features F, and then to select terms that are most similar to the positive terms in Ũ.", "The issue is how to obtain the ideal F that assigns a higher score to terms potentially included in U.", "Shen et al. 
(2017) formulate this feature selection problem as choosing features of a fixed number Q so that the positive terms are most similar to each other: F* = argmax_{|F|=Q} ∑_{1≤i≤j≤n} Sim(e_i, e_j | F), (1) where Ũ := { e_1, . . . , e_n }.", "They propose using the Jaccard coefficient for Sim(e_i, e_j | F), which narrows the optimization problem to a binary decision on whether to use each feature.", "This combinatorial problem is NP-hard; hence, they use heuristics to choose an approximation of F*.", "Instead of explicitly choosing features to use in the similarity calculation, we consider using all of the possible features { f_1, . . . , f_L } with a weight w_k ∈ ℝ for each feature f_k.", "In addition, we define our optimization problem as finding the best w_k for f_k (k = 1, . . . , L).", "Let us develop a formula that extends (1) and takes w_k into consideration.", "First, in such a formula, Sim(e_i, e_j | F) should become a weighted sum of the similarity scores for each feature f_k, denoted as Sim(e_i, e_j | f_k).", "By replacing F with w in the expression of the similarity function, we have Sim(e_i, e_j | w) = ∑_{k=1}^{L} w_k Sim(e_i, e_j | f_k).", "(2) Next, to define the similarity between a term e and Ũ, we assume that the similarity is the average of the similarities between e and e_i ∈ Ũ, that is, Sim(e, Ũ | w) := (1/n) ∑_{i=1}^{n} Sim(e, e_i | w).", "(3) The initial formulation of our optimization problem is thus as follows: w* = argmax_w ∑_{1≤i≤n} Sim(e_i, Ũ | w).", "(4) We show in the Appendix that our formulation (4) can be considered as the weighted version of (1) under the natural condition that Sim(e_i, e_i | f_k) = Sim(e_j, e_j | f_k) for any i, j, and k, and ∑_{k=1}^{L} w_k = 1.", "It is easy to set Sim(e, e′ | f_k) satisfying this condition.", "For a feature f_k, we define a vector v_{f_k}(e) for a term e 
and define Sim(e, e′ | f_k) as the standard inner product of v_{f_k}(e) and v_{f_k}(e′).", "Then, by normalizing all these vectors, Sim(e_i, e_i | f_k) = ‖v_{f_k}(e_i)‖ = 1 holds for any i; hence, the condition is satisfied, and this is the conventional cosine similarity of word vectors (Levy et al., 2015).", "Thus, any mapping from W to a vector space is available as a feature, such as the tf-idf of terms and discrete features (Manning et al., 2008), word2vec (Mikolov et al., 2013), or GloVe (Pennington et al., 2014).", "Note that the dimension of the vector space may differ among the features.", "With Sim(e, e′ | f_k) = v_{f_k}(e) · v_{f_k}(e′), (2) is computed by Sim(e_i, e_j | w) = ∑_{k=1}^{L} w_k v_{f_k}(e_i) · v_{f_k}(e_j), (5) and Sim(e, Ũ | w) = ∑_{k=1}^{L} w_k v_{f_k}(e) · v_{f_k}(Ũ), (6)", "where v_{f_k}(Ũ) := (1/n) ∑_{i=1}^{n} v_{f_k}(e_i) is the centroid vector of { v_{f_k}(e_i) }_{i=1,...,n} in the feature space of", "f_k.", "We simply call v_{f_k}(Ũ) the centroid of Ũ.", "Formulas (5) and (6) demonstrate that the similarity between any two terms can be measured by combining the characteristics of the L different feature spaces.", "We select the feature spaces in which terms in Ũ become similar to each other by adjusting the weights, as shown in Figure", "4.
Note that our feature weighting formulation is categorized as a conventional linear regression that finds the f_k characterizing Ũ via the weights.", "Instead of calculating the weights for the bare features of each term, our method estimates those for differently predefined feature spaces (i.e., the similarity scores in these spaces).", "It aims to mitigate the difficulty of finding optimal weights for a vast number of features from only a few labeled samples.", "However, the drawback is that this sacrifices the model's degrees of freedom; therefore, we test the effectiveness of our proposed model against an ordinary linear classifier in the experiment.", "Although the initial formulation (4) proved to be a natural extension of the discrete version of feature selection, it does not always work as expected.", "In this section, we discuss the reason for this and how we can improve the initial formulation of our optimization problem.", "By substituting (2) and (3) into (4), the objective ∑_{1≤i≤n} Sim(e_i, Ũ | w) is a linear function of w.", "Assuming that ∑_{k=1}^{L} w_k = 1, the optimal w is determined by putting all the weight on the particular feature space that has the highest averaged similarity between the terms in Ũ and the centroid of Ũ.", "This is equivalent to selecting only one feature space for the similarity computation.", "Such extreme optimization is not suitable for our interactive setting because the target dictionary is obscure, especially in earlier iterations.", "We want the system to diversify the candidate terms to broadly cover the user's interests and allow the user to discover related vocabulary for a customized dictionary.", "To address this issue, we modify our formulation (4) as w* = argmax_w min_{1≤i≤n} Sim(e_i, Ũ | w). (7)", "We maximize the minimum similarity score between a term in Ũ and the centroid of Ũ.", "The idea here is to 
reduce the distance between the farthest positive term and the centroid.", "This strategy is analogous to those used in active learning, where examples near the separating hyperplane are actively leveraged (Schohn and Cohn, 2000).", "Our objective function min_{1≤i≤n} Sim(e_i, Ũ | w) is a concave function of w (see Appendix); therefore, we can solve it by (for example) gradient descent.", "We can also leverage negative feedback, i.e., the unselected terms in C, to make the system more sophisticated.", "Let N := C \\ U = { z_1, . . . , z_m }; then we can extend (7) by w* = argmax_w { min_{1≤i≤n} Sim(e_i, Ũ | w) − max_{1≤j≤m} Sim(z_j, Ũ | w) }.", "(8) The second term on the right-hand side of (8) increases the distance between the closest negative term and the centroid of Ũ.", "Again, the objective function of (8) is a concave function of w; thus, the information of both positive and negative examples is taken into consideration to learn the optimal w.", "Although our min-maximizing optimization strategy diversifies candidates, it may be disadvantageous in that the system can be affected by outliers.", "It happens that several terms in Ũ (especially manually fed terms such as seeds) are distributed differently in the possessed feature spaces compared to the rest of the positive terms.", "Such a case holds up the learning because the maximum similarity score of the outliers to the centroid is low.", "The left side of Figure 5 shows an example of this problem: specifically, the system cannot put a higher weight value on f_1 because the optimization target, which is the term most distant from the centroid (watermelon in this case), is biased toward f_2.", "Feedback denoising is a simple solution to this problem.", "We apply a clustering algorithm (e.g., K-Means) to the terms in Ũ. (Figure 5: The difference in terms used in learning, with vs. without feedback denoising.)", "We thereby obtain K term sets Ũ^(0), Ũ^(1), ..., Ũ^(K).", "Then, we conduct the optimization by replacing Ũ in (7) and (8) with Ũ^(k*), where k* = argmax_k |Ũ^(k)|, that is, the majority class among the terms in Ũ, as shown on the right side of Figure", "5. This is effective for denoising terms that are irregular with respect to the feature distribution, and for guiding the system to a promising w.", "In this section, we explain an automatic evaluation framework for interactive dictionary construction.", "By using a predefined dictionary as the oracle dictionary U, we emulate the manual feedback process and apply a new evaluation metric to estimate the effectiveness of building a dictionary with consideration of the human interaction.", "We describe the emulation process with U, and the entire flow of the emulation procedure is in Algorithm", "1. At the beginning of the emulation process, a small number of seed terms are randomly chosen from U, and U_0 is initialized with them (l. 1).", "The number of iterations I (l. 2) and the number of suggested terms per iteration |C| (l. 3) are also determined.", "The iteration consisting of user feedback and candidate selection is then launched.", "In every i-th iteration, the system first suggests C_i based on the known positive terms Ũ_{i−1} (l. 5).", "After receiving the suggested C_i, the automatic evaluation process takes the intersection of C_i and U, and records the overlapping terms as U_i (l. 6).", "It also takes the difference set of C_i and U as the negative terms N_i (l. 7).", "If the system is trainable, its training process runs before moving to the next iteration (l. 
8–10).", "In addition to the automatic evaluation process, we introduce a new metric that takes the interaction quality into account when evaluating the accuracy of the candidate selection.", "The final goal of dictionary construction is to obtain a complete set of terms consistent with U; however, there is a limitation stemming from the user's workload in real scenarios.", "Given that an effective system should suggest terms of the user's interest in earlier iterations, we propose weighted coverage per iteration (WCpI) as the evaluation metric for interactive dictionary construction: WCpI = (∑_{i=1}^{I} (1−α)^{i−1} |Ũ_i| / min{ i|C|, |U| }) / (∑_{i=1}^{I} (1−α)^{i−1}), (9) where α is a hyperparameter that adjusts the importance of the iteration number.", "We illustrate the intuition of WCpI in Figure 6.", "WCpI is the area ratio of accumulated positive terms from system suggestions to its upper bound at each iteration.", "In short, it measures how many correct suggestions the system can provide in comparison with a perfect system that never suggests unrelated terms.", "We can also regulate the importance of the iteration number by adjusting α.", "Specifically, a larger value of α underestimates the importance of terms found in later iterations; in other words, it attaches importance to terms found in earlier iterations.", "As an intuitive explanation based on an actual scenario, α represents a constant probability that the user quits dictionary construction midway through.", "The graphs in Figure 6 compare the calculation of WCpI for the same system suggestions.", "The right one with α = 0.", "1, in which we assume", "the user quits creating a dictionary with 10% probability at every iteration, has a higher WCpI than the left one with α = 0.", "0. (Figure 6: Weighted coverage per iteration, WCpI.)", "We conduct an experiment following 
the automatic evaluation framework by using public datasets and oracle dictionaries created through crowdsourcing.", "In the experiment, we compare several methods in addition to our proposed method.", "As emulation parameters, we set the number of seed terms (|U_0|), the number of terms in one suggestion (|C|), and the number of total iterations (I) to 3, 10, and 30, respectively.", "Note that we tried different numbers of seeds (1 and 5), but the overall tendencies were the same.", "We used crowdsourcing to create oracle dictionaries on the Amazon review corpus (Blitzer et al., 2007), which is publicly available (https://www.cs.jhu.edu/~mdredze/datasets/sentiment/).", "First, we explain the corpus processing and the procedure to construct the oracle dictionaries.", "We then describe the evaluation items.", "Our evaluation items will be publicly available for system evaluation in future research.", "Corpus.", "The corpus originally consists of sub-corpora from 25 domains.", "Given that size and domain vary, we pick five domains: apparel (APP), baby (BAB), camera & photo (CAM), health & personal care (HEL), and sports & outdoors (SPO).", "We process the raw texts with spaCy (https://spacy.io/) and its distributed English model (en_core_web_sm).", "We then construct the vocabulary with words and noun chunks that appear more than five times, excluding standard stopwords.", "Note that all terms in the vocabulary are identified after lemmatization by spaCy.", "Oracle Dictionaries.", "For each selected corpus, we create oracle dictionaries through crowdsourcing.", "In the task for workers, we provide predefined dictionaries and ask the worker to choose one or more dictionaries to which a given term belongs.", "For example, we prepared three independent nursery-item dictionaries for sleeping, movement, and safety in the BAB corpus, and asked a worker to judge which dictionary includes the term car seat.", "With respect to each corpus, we define 
multiple dictionaries and request three workers to make judgments for every term in the vocabulary.", "We determine that a term is included in a dictionary when at least one of the three workers chooses the dictionary for the term.", "Note that we filter noisy users and their answers beforehand according to the reliability score estimated by the crowdsourcing service 6 .", "Finally, we also manually clean each dictionary.", "Excluding dictionaries consisting of fewer than 15 terms or containing too much noise, we eventually obtain 22 dictionaries.", "We list the dictionaries and example terms in Table", "1. Evaluation Item.", "We generate ten evaluation items per dictionary, for 220 items in total.", "An evaluation item consists of a unique set of seed terms (U_0) and the remaining terms in the corresponding dictionary as the oracle (U* := U \\ U_0).", "We suggest that fewer seed terms are adequate for evaluating an interactive dictionary construction method, because the purpose is to gather terms with minimum human effort, as mentioned in", "§2. 
5.2 Methods We compare four methods: Word2Vec, SetExpan, logistic regression, and our proposed method with several configurations.", "All methods possess the same vocabulary W, and all methods excluding Word2Vec use the same feature spaces: tf-idfs of Bag of Words (unigrams and bigrams), and word embeddings.", "Any feature space is applicable, though.", "4 https://spacy.io/models/en#en_core_web_sm", "5 https://www.figure-eight.com/ 6 Although we also tried other thresholds, such as requiring agreement among all three workers, this criterion provided the best balance of data cleanliness and size.", "Word2Vec: Word2Vec is a popular and promising method for representing word meanings in a continuous vector space, and the vector similarity is naturally applicable to interactive dictionary construction (Alba et al., 2018).", "We use two computation methods of candidate selection based on Word2Vec.", "The first is w2v(avg), which simply takes the cosine similarity with the averaged vector of the terms in Ũ.", "The second is w2v(rank), which calculates the mean reciprocal rank over the terms in Ũ.", "Both select candidates in order of their estimated scores.", "The embeddings are learned for each corpus with the gensim implementation using the default parameters. 7", "SetExpan: We implement SetExpan (SE; Shen et al. 2017), which is a feature-selection method for conventional set expansion.", "The original version does not involve the user in the iteration and updates Ũ_i according to its own criteria to filter incorrect terms.", "In our scenario, we provide the correct terms in the update phase of Ũ_i.", "We use the same input features as the other methods and set the hyperparameters to those Shen et al. 
(2017) reported as best.", "Logistic Regression: We include logistic regression in our comparison because feature weighting is one of the conventional types of linear discriminant analysis.", "The logistic regression version, LR, takes a word representation and then predicts the probability of the word appearing in the current dictionary.", "For the word representation, we concatenate the vectors of each feature space (explained in §5.2) and then use the vector compressed into 300 dimensions with singular value decomposition.", "In every iteration, we train LR from scratch with the positive and negative terms.", "For the negative terms at the first iteration (i.e., N_0), however, we randomly select |U_0| negative words from the entire vocabulary, excluding dictionary terms.", "We select candidates following the order of estimated probabilities.", "While we tried other regression models (SVM and Random Forest) and dimensions of the input vector (no compression, 50, 100, 200, 500, and 1000), the above condition was the best configuration.", "FWPS: Our base model without optimization.", "7 https://radimrehurek.com/gensim/models/word2vec.html", "+PickOne: Selecting only one feature space with the highest similarity scores among positive terms (§3.3). +Op(p): With optimization using positive feedback,", "Eq. (7). +Op(p/n): With optimization using both positive and negative feedback,", "Eq. (8). +Fd(p): With +Op(p) and feedback denoising (§3.4). +Fd(p/n): With +Op(p/n) and feedback denoising (§3.4). We use the K-Means algorithm for +Fd(p) and +Fd(p/n) with K = 3, though the overall trend was almost the same with K = 2 and 5.", "Hybrid: We also introduce a joint method, HB, that combines LR and an FWPS version.", "The strategy is simple; HB first uses FWPS's mechanism to broadly cover candidate terms, and then switches to LR when the amount of feedback increases.", "This mechanism naturally solves LR's problems of requiring negative feedback from the beginning and demanding a moderate 
number of labels for training.", "Any of the FWPS versions can be combined with LR; therefore, we chose the best one for our experiment.", "The switch timing is empirically set to the 5-th iteration.", "Table 2 lists the WCpI scores for each method across the five corpora with α = 0.", "0.", "In all domain texts, HB outperforms the others.", "The scores of LR are the second highest, which implies that a combination with an FWPS model boosts performance.", "Among the versions of FWPS, +PickOne drops largely in score, which indicates the importance of the min-maximizing optimization strategy for this task (see §3.3).", "However, at least when α = 0.", "0, which assumes the user never quits the process midway through, the performances of FWPS and the other versions with optimized w do not differ much.", "In particular, the negative feedback tends to degrade performance.", "Subsequently, SE, w2v(avg), and w2v(rank) perform poorly.", "SE may not be suitable for gathering arbitrary terms from a non-large corpus because it was originally designed and tested for collecting ontological terms from large-scale data (Shen et al., 2017).", "Also, we find that leveraging embeddings in a straightforward manner is not sufficient, especially for interactive dictionary construction.", "Let us now discuss the changes when adjusting α, using the WCpI scores listed in Table", "3. 
Ignoring corpus differences, we take the average scores among all evaluation items.", "The most crucial change can be found in LR, which significantly drops in score as α increases.", "When α = 0.", "1, the score of LR already becomes inferior to most (Table 2: WCpI scores across corpora (α = 0.0); columns APP/BAB/CAM/HEL/SPO: SE 21.20/18.83/11.34/17.01/16.79; w2v(avg) 18.51/12.77/10.36/14.60/14.28; w2v(rank) 24.39/12.71/10.29/18.63/17.04; LR 51.99/36.76/31.18/38.17/37.59; FWPS 46.90/34.32/27.61/38.17/35.56; +PickOne 18.51/12.79/10.36/14.40/14.28; +Op(p) 45.60/33.73/26.13/36.43/35.29; +Op(p/n) 43.42/30.88/23.13/33.19/31.91; +Fd(p) 46.17/34.92/26.76/37.60/36.03; +Fd(p/n) 46.33/32.01/25.36/37.04/34.63; HB (+Fd(p)) 53.07/37.38/32.31/42.22/39.74. Table 3: Change in WCpI scores when increasing α; columns α = 0.0/0.1/0.3/0.5: SE 17.05/14.36/14.26/15.46; w2v(avg) 14.10/10.83/9.39/9.63; w2v(rank) 16.61/12.42/9.95/9.68; LR 39.14/30.06/23.39/22.45; FWPS 36.51/32.02/30.49/31.51; +PickOne 14.07/10.93/9.64/9.95; +Op(p) 35.44/31.79/30.72/31.58; +Op(p/n) 32.50/29.17/28.89/30.32; +Fd(p) 36.30/32.42/31.10/31.95; +Fd(p/n) 35.07/30.92/29.56/30.53; HB (+Fd(p)) 40.95/34.14/30.97/31.62.)", "of the FWPS versions.", "Also, the scores of FWPS tend to be higher with a larger value of α.", "When α ≥ 0.", "3, +Fd(p) performs the best among all methods.", "In short, LR suggests correct terms in later iterations, while FWPS, in particular the trainable versions (+Op(p), +Fd(p)), suggests correct terms in earlier iterations.", "Figure 7 directly describes the score differences with different alphas by showing the hit ratios, defined as |U_i|/|C|, at each iteration number for LR, +Fd(p), and HB.", "Regardless of the number of seed terms, LR suggests fewer correct terms in earlier iterations, but its hit ratio stably goes beyond that of +Fd(p) after obtaining a moderate number of training labels (around five iterations, i.e., fifty labels).", "On the other hand, +Fd(p) performs better by a large margin in 
earlier iterations than LR.", "In short, our method using predefined term similarities overcomes the small-sample issue that a conventional linear classifier suffers from, and contributes to quick dictionary construction.", "This result is practically important because the analyst will go through repeated trial and error, observing documents from various points of view by creating many small dictionaries.", "In addition, such contrasts are much stronger when (Figure 7: Hit ratio (|U_i|/|C|) at each iteration number for LR, +Fd(p), and HB, with 1 and 3 seed terms.)", "we give only one seed term (the upper graph), which is also meaningful because the user often starts dictionary construction with only one seed term in a real situation.", "HB enjoys both the coverage of LR and the quickness of +Fd(p).", "In other words, a conventional classifier and our method are complementary; LR becomes favorable when the user prioritizes coverage over quickness, and +Fd(p) becomes favorable in the opposite case.", "As a possible use case of HB, the analyst may quickly find interesting perspectives by creating various dictionaries with one of the FWPS methods, and once those are found, he/she switches to a linear classifier to further expand the promising dictionaries.", "To the best of our knowledge, this paper proposes the first formulation of interactive dictionary construction for text analytics, which clarifies the critical issues to resolve.", "In response to those issues, we provide the method, the evaluation framework, and the experimental dataset.", "Also, our experimental results show the promising performance of our method with respect to real situations of text analytics.", "Our systematic study will pave the way for future research on the effective construction of dictionaries for text analytics.", "We appreciate the anonymous reviewers 
and their insightful comments.", "Also, we are grateful to Tadayuki Yoshida and Ryuki Tachibana for helpful discussions based on practical use cases." ]
[ "objective", "abstain", "objective", "objective", "method", "result", "method", "abstain", "method", "abstain", "other", "method", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "objective", "objective", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "other", "other" ]
[ "Neural sequence to sequence text generation has proven to be a viable approach to paraphrase generation.", "Despite promising results, paraphrases generated by these models mostly suffer from a lack of quality and diversity.", "To address these problems, we propose a novel retrieval-based method for paraphrase generation.", "Our model first retrieves a paraphrase pair similar to the input sentence from a pre-defined index.", "With its novel editor module, the model then paraphrases the input sequence by editing it using the extracted relations between the retrieved pair of sentences.", "In order to have fine-grained control over the editing process, our model uses the newly introduced concept of Micro Edit Vectors.", "It both extracts and exploits these vectors using the attention mechanism in the Transformer architecture.", "Experimental results show the superiority of our paraphrase generation method in terms of both automatic metrics and human evaluation of relevance, grammaticality, and diversity of generated paraphrases.", "Paraphrases are texts conveying the same meaning while using different words (Bhagat and Hovy, 2013).", "Paraphrase generation is an important task in Natural Language Processing (NLP) that has many applications in other downstream tasks, such as text summarization, question answering, semantic parsing, and information retrieval (Cao et al., 2017; Fader et al., 2014; Berant and Liang, 2014).", "Early work on paraphrasing mostly investigated rule-based or statistical machine translation approaches to this task (Bannard and Callison-Burch, 2005).", "With the recent advances of the neural sequence-to-sequence (Seq2Seq) framework in different NLP tasks, especially in machine translation, a growing body of literature has also applied (Figure 1: overview of the model; Step 1, the Retriever finds the most similar pair in the training corpus; Step 2, the Editor, composed of the Edit Provider and the Edit Performer, generates the paraphrase)", "Seq2Seq models to the task of 
paraphrase generation (Prakash et al., 2016; Gupta et al., 2018; Li et al., 2018).", "Although the proposed Seq2Seq methods for paraphrase generation have shown promising results, they are not yet as dominant as their counterparts used in neural machine translation.", "The main reason is that the available training data for paraphrasing is scarce and domain-specific (Wang et al., 2019).", "In fact, the necessity to generate sequences from scratch, which is a major drawback of traditional Seq2Seq models (Guu et al., 2018), magnifies itself when dealing with scarce training data.", "Thus, one can expect that the model would not be trained well and, consequently, would not be able to generate diverse outputs.", "Although retrieval-based text generation has been evaluated recently in Guu et al. (2018); Hashimoto et al. (2018); Wu et al. (2019) as a remedy for this problem, to the best of our knowledge, there is no previous study exploring the usage of this approach in paraphrase generation.", "Moreover, none of the existing works in the realm of retrieval-based text generation, such as Guu et al. (2018); Wu et al. (2019); Hashimoto et al. (2018), focuses on learning how to extract edits from the retrieved sentences.", "Indeed, Guu et al. (2018); Wu et al. (2019) compute a single edit vector heuristically by concatenating the weighted sum of the inserted word embeddings and the weighted sum of the deleted word embeddings.", "Moreover, Hashimoto et al. 
(2018) only focuses on improving the retrieval stage and uses a standard Seq2Seq model to edit the retrieved sentence.", "In this paper, we present an effective retrieval-based approach to paraphrase generation by proposing a novel editor module.", "Our method can be summarized as follows: Given an input sentence x, the model first retrieves a similar sentence p and its associated paraphrase q from the training data.", "Then, given x and (p, q), the editor learns both how to extract the fine-grained relations between p and q as a set of edits, and when and how to use these extracted edits to paraphrase x.", "By incorporating the retrieved pairs into the editing process, we invigorate our model with a non-parametric memory, which enables it to produce non-generic and more diverse outputs.", "Both the retriever and editor components of our method are modeled by deep neural networks.", "We employ the Transformer architecture (Vaswani et al., 2017) as the backbone of our model, and use its attention mechanism as an effective tool to apply edits in a selective manner.", "Our main contributions are: We propose the Fine-grained Sample-based Editing Transformer (FSET) model.", "It contains a novel editor that can be used in a retrieval-based framework for paraphrase generation.", "This editor learns how to discover the relationship between a pair of paraphrase sentences as a set of edits, and transforms the input sentence according to these edits.", "It is worth noting that the set of edits is learned in an end-to-end manner, as opposed to Guu et al. (2018); Wu et al. 
(2019) that compute the edit vector heuristically.", "We present our model as an efficient fully-attentional architecture for the task of retrieval-based text generation.", "Experimentally, we compare our method with recent paraphrase generation methods, as well as with recently introduced retrieval-based text generation methods.", "Both the quantitative and qualitative results show the superiority of our model.", "Prakash et al. (2016) was the first work that adapted a neural approach to paraphrase generation, with a residual stacked LSTM network.", "Gupta et al. (2018) combined a variational auto-encoder with a Seq2Seq LSTM model to generate multiple paraphrases for a given sentence.", "Li et al. (2018) proposed a model in which a generator is first trained on the paraphrasing dataset, and then fine-tuned using reinforcement learning techniques.", "Cao et al. (2017) utilized separate decoders for copying and rewriting as the two main writing modes in paraphrasing.", "Mallinson et al. (2017) addressed paraphrasing with bilingual pivoting on multiple languages in order to better capture different aspects of the source sentence.", "Iyyer et al. (2018) proposed a method to generate syntactically controlled paraphrases and use them as adversarial examples.", "Chen et al. (2019) addressed the same problem, but the syntax is controlled by a sentence exemplar.", "Kajiwara (2019) proposed a model that first identifies a set of words to be paraphrased, and then generates the output by using a pre-trained paraphrase generation model.", "Wang et al. (2019) proposed a Transformer-based model that utilizes structured semantic knowledge to improve the quality of paraphrases.", "Kumar et al. (2019) modified the beam search algorithm with a sub-modular objective function to make the generated set of paraphrases syntactically diverse.", "Li et al. 
(2019) decomposed paraphrasing into sentential and phrasal levels and employed separate Transformer-based models for each of these levels.", "Fu et al. (2019) decomposed paraphrasing into two steps, content planning and surface realization, and improved the interpretability of the first step by incorporating a latent bag-of-words model.", "Wu et al. (2019) augmented Seq2Seq generation-based models with retrieval frameworks to make the dialog responses more meaningful and non-generic.", "Gu et al. (2017) utilized a search engine to retrieve a set of source-translation pairs from the training corpus, both at train and test time, and used them as a guide to translate an input query.", "Guu et al. (2018) proposed the neural editor model for unconditional text generation, which produces a new sentence by editing a retrieved prototype using an edit vector.", "Hashimoto et al. (2018) proposed a task-specific retriever using the variational framework to generate complex structured outputs, such as Python code.", "This work, however, does not have any novelty in the editor's architecture and uses a standard Seq2Seq model with attention and copy mechanism (Hashimoto et al., 2018).", "Let D = {x_n, y_n}_{n=1}^N denote a dataset where x_n is a sequence of words and y_n is its target paraphrase.", "In the paraphrasing task, our goal is to find the set of parameters of the model that maximizes ∏_{n=1}^N p_model(y_n | x_n).", "Figure 1 illustrates the overview of our proposed model, which is composed of a Retriever and an Editor.", "Given an input sequence x, the retriever first finds a paraphrase pair (p, q) from the training corpus based on the similarity of x and p.", "Then, the editor utilizes the retrieved pair (p, q) to paraphrase x.", "We discuss the details in the following subsections.", "The goal of the retriever module is to select the paraphrase pairs (from the training corpus) that are similar to the input sequence x.", "To do that, the retriever finds 
a neighborhood set N(x) consisting of the K most similar source sentences {p_k}_{k=1}^K to x and their associated paraphrases {q_k}_{k=1}^K (K is a hyper-parameter of the model).", "To measure the similarity of sentences, we first embed them employing the pre-trained Transformer-based sentence encoder proposed by Cer et al. (2018).", "The similarity is then calculated using the cosine similarity measure in the resulting embedding space.", "We refer to this retriever as the General Retriever throughout the paper.", "Note that using a pre-trained retriever can help us alleviate the scarcity problem of the training data available for paraphrasing.", "In order to search for the sentences similar to an input sequence efficiently, we use the FAISS software package (Johnson et al., 2019) to create a fast search index from the sentences in the training corpus.", "We also pre-compute the neighborhood set of each source sentence in the training set, so at training time our model just needs to sample one of the pairs in the neighborhood set uniformly and feed it as an input to the editor module.", "The probability of retrieving a pair can thus be stated as p((p, q) | x) = (1/K) · 1[(p, q) ∈ N(x)]. (1)", "Note that the same procedure also holds at test time, and the retriever computes N(x) so the model can sample any one of the pairs in N(x) to generate the output based on that pair.", "To edit a sentence according to a retrieved pair, we propose an editor module consisting of two components:", "1) Edit Provider and", "2) Edit Performer.", "The Edit Provider computes a set of edit vectors based on the retrieved pair of sentences (p, q).", "After that, the Edit Performer rephrases the input sequence x by utilizing this prepared set of edits.", "This part of the editor extracts the edits from the retrieved pair as a set of vectors which we call Micro Edit Vectors (MEVs).", "MEVs are responsible for encoding the information about fine-grained edits that 
transform p into q.", "Each one of the MEVs represents the most plausible soft alignment between a token in p and the semantically relevant parts in q: M = {m_i := small edit applied on p_i | 1 ≤ i ≤ l}, where l is the length of p.", "Figure 2 presents, in schematic form, the procedure of computing one MEV.", "For each arbitrary token of p, such as p_i, we intend to compute a MEV that encodes the edit corresponding to p_i using attention over q.", "Then, given p_i as the source of the edit, and the attention's result as the target, we concatenate their representations and feed the result as the input to a neural network, which calculates m_i as the corresponding edit vector.", "To make this process differentiable and parallelizable, we use a fully-attentional architecture consisting of two main sub-modules:", "1) Edit Encoder and", "2) Target Encoder.", "Figure 3 shows the overview of the Edit Provider.", "In this model, at first, a context-aware representation R_q = [r_q^1, ..., r_q^k] of the sequence q is computed using the Target Encoder, which is the encoder sub-graph of the Transformer architecture (Vaswani et al., 2017).", "The Edit Encoder is also the encoder of the Transformer model, but with an extra multi-head attention over R_q.", "This module outputs a vector that encodes the most semantically relevant parts of q to p_i.", "After that, the MEVs, i.e. the m_i's, are computed by feeding these vectors one by one into a single dense layer (with the tanh(·) activation function).", "By setting the output dimension of the dense layer to be smaller than the dimension of the word embeddings, we introduce a bottleneck, which hinders the Edit Encoder from copying q directly.", "Finally, all of the MEVs are aggregated into a single vector z by leveraging a technique inspired by Devlin et al. 
(2019); we prepend a special token [AGR] to p in order to encode all the edits into a single vector z_{p→q}.", "The intuition behind encoding into a single vector z_{p→q} is to allow the model to learn a global edit that can be applied to the whole sentence, in addition to the MEVs as local edits.", "We also run the Edit Provider with the same parameters in the reverse direction, i.e. from q to p. (Figure 4: Illustration of the Edit Performer generating the output token at the t-th time step.)", "The Edit Performer transforms the input sequence x = [x_1, ..., x_s] to the final output y using the edit vectors.", "We employ a fully-attentional Seq2Seq architecture composed of an encoder and a decoder for this part of the model.", "The encoder of the Edit Performer has exactly the same architecture as the original encoder of the Transformer model and outputs a context-aware representation R_x = {r_x^i}_{i=1}^s of the input sequence.", "For the decoder, we use a slightly modified version of the original Transformer's decoder.", "Indeed, the Transformer learns to model p(y | x), while we would like to model a conditional setting p(y | x, (p, q)).", "Moreover, as mentioned in the description of the Edit Provider, the relation between p and q is encoded in the MEVs M and the vector z.", "Therefore, in order to edit x, instead of using (p, q) directly, we only need M and z to specify the edits, and the sentence p to identify the locations in x to which the edits should be applied.", "Thus, we aim to model p(y | x, p, M, z) with the Edit Performer.", "To condition the decoder on the edit vector z, we append it to each token of the decoder's input.", "To apply the edits in a fine-grained manner, we would like the model to attend to the most similar token of p and select the corresponding edit in MEVs M to be 
applied to the input sentence.", "Therefore, in addition to the input sequence representation R_x, the model also attends to the MEVs M using an extra multi-head attention sub-layer which computes the representation h′ = MultiHeadAtt(Q: h, K: R_p, V: M), where h comes from the previous sub-layer and R_p is the context-aware representation of the retrieved sequence p, which is calculated by the Edit Provider.", "Hence, this sub-layer allows the model to apply edits only when the current context matches somewhere in p.", "Finally, we project h′ (after applying the residual connection and the layer norm) using a fully-connected sub-layer and feed it to the layer above.", "For the last layer, a softmax activation is employed to predict the next token of the output.", "During the training phase, our aim is to maximize the log likelihood objective L = Σ_{(x,y)∈D} log p(y | x). (2)", "As we decompose the training procedure into two stages of retrieving and editing, we can rewrite p(y | x) as p(y | x) = Σ_{(p,q)∈D} p(y | x, (p, q)) · p((p, q) | x). (3)", "Substituting Eq.", "1 into Eq.", "3 and then inserting the resulting p(y | x) into Eq.", "2 yields the following formulation for the log likelihood: L = Σ_{(x,y)∈D} log((1/K) Σ_{(p,q)∈N(x)} p(y | x, (p, q))).", "We train our model by maximizing the following lower bound of the log likelihood (obtained by Jensen's inequality): L ≥ L′ = (1/K) Σ_{(x,y)∈D} Σ_{(p,q)∈N(x)} log p(y | x, (p, q)).", "Note that p(y | x, (p, q)) = p_θ(y | x, p, M_φ(p, q), z_φ(p, q)), where θ denotes the parameters of the Edit Performer and φ denotes the parameters of the Edit Provider.", "Thus, we solve the following optimization problem: θ*, φ* = argmax_{θ,φ} L′(θ, φ).", "Except for the retriever, which is a pre-trained component of our model, all components are fully coupled and trained together.", "To prevent the model from ignoring the information 
coming from the retrieval pathway during the training procedure (i.e. ignoring the edit vectors extracted from the retrieved pair), we use a simple yet effective trick; we manually add extra (x, y) pairs to N(x) in proportion to the number of retrieved pairs K, so that the presence of y as the exact ground-truth paraphrase encourages the model to use the retrieved pairs more.", "Please refer to A.1 for further details.", "In this section, we empirically evaluate the performance of our proposed method on the task of paraphrase generation, and compare it with various other methods, including previous state-of-the-art paraphrasing models.", "We conduct experiments on two of the most frequently used datasets for paraphrase generation: the Quora question pair dataset and the Twitter URL paraphrasing corpus.", "For the Quora dataset, we only consider the paraphrase pairs.", "Similar to Li et al. (2018), we sample 100k, 30k, and 3k instances for the train, test, and validation sets, respectively.", "The Twitter URL paraphrasing dataset consists of two subsets, one labeled by human annotators and the other labeled automatically; it is thus noisier than the Quora dataset.", "Similar to Li et al. (2018), we sample 110k instances from the automatically labeled part as our training set, and two non-overlapping subsets of 5k and 1k instances from the part annotated by humans for the test and validation sets, respectively.", "As in Li et al. 
(2018, 2019), we truncate sentences in both datasets to 20 tokens.", "We compare our method both with existing paraphrasing methods that are not retrieval-based, and with existing or newly created retrieval-based text generation methods which we adapt for paraphrasing:", "Residual LSTM (Prakash et al., 2016), which is the first Seq2Seq model proposed for paraphrase generation; RbM (Li et al., 2018), which fine-tunes a paraphrase generation model using reinforcement learning; Transformer (Vaswani et al., 2017), which is a Seq2Seq model relying entirely on the attention mechanism; DNPG (Li et al., 2019), which decomposes paraphrasing into sentential and phrasal levels and utilizes separate Transformers for each level; DiPS (Kumar et al., 2019), which aims to generate diverse paraphrases by adopting a novel approach in the decoding stage instead of beam search.", "The latter two have been reported as the state-of-the-art models in paraphrase generation (Kumar et al., 2019; Li et al., 2019).", "We also compare with retrieval-based methods that we create ourselves: Seq2Seq+Ret, which is an extended version of the Seq2Seq Residual LSTM.", "This model conditions the generation process at each time step on an edit vector encoding the differences between the retrieved sentences p and q.", "To make the comparison fair, we use the General Retriever (introduced in the Retriever subsection of the Proposed Approach section) to find (p, q).", "The edit vector for this pair is also computed by concatenating the sum of the inserted word embeddings with the sum of the deleted word embeddings, as stated by Guu et al. (2018).", "RaE, which is proposed by Hashimoto et al. (2018) as a method with an in-domain retriever.", "The editor of this model is a Seq2Seq LSTM equipped with an attention mechanism over the input x, and a copy mechanism over the retrieved pair p and q.", "CopyEditor+Ret, which is composed of the editor of Hashimoto et al. 
(2018), and the General Retriever.", "We compare FSET with this baseline model to further evaluate the role of our proposed editor.", "Table 1 shows the settings of our model.", "We select the hyperparameters suggested by Li et al. (2018) for the LSTM-based Seq2Seq baselines, and the hyperparameters mentioned by Li et al. (2019) for the Transformer-based baselines.", "It is worth noting that our model's size w.r.t. the number of parameters is approximately 1/2 of the baseline LSTM's size and 1/5 of the baseline Transformer's size.", "The newly created retrieval-based baselines have the same hidden size and the same number of layers as the non-retrieval models.", "For the Seq2Seq+Ret model, we keep the ratio of the hidden size to the edit vector dimension the same as the ratio reported in Guu et al. (2018).", "We train all of the models for 100k iterations, and choose the best version based on their validation loss after training.", "We set the batch size to 128 and the vocabulary size to 8k in all of the experiments.", "The embeddings are also trained from scratch.", "In all of the experiments on the retrieval-based methods, the hyper-parameter K is set to 1.", "However, results for different values of K are also reported in A.2.", "During the decoding stage, we use beam search to generate a set of outputs.", "In order to select the final output, an approach similar to Gupta et al. 
(2018) is used, which chooses the most lexically similar sentence to the input, where the similarity is calculated based on the Jaccard measure.", "We compare different methods using BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) as the most common metrics for automatic evaluation of paraphrase generation methods.", "Table 2 summarizes the results of different methods.", "These results indicate that our model outperforms the previous state-of-the-art models in terms of all of the metrics.", "It is worth noting that the models which have utilized copy mechanism, such as DNPG, RbM, RaE, and CopyEditor+Ret, generally outperform the other baselines.", "[Table 2: Results of the different models on two paraphrasing datasets. Metrics per dataset, Quora first and Twitter URL Paraphrasing second: ROUGE-2 / ROUGE-1 / BLEU-4 / BLEU-2 / METEOR. Residual LSTM (Prakash et al., 2016): 32.71 / 59.69 / 24.56 / 38.52 / 29.39 and 27.94 / 41.77 / 25.92 / 32.13 / 24.88. Seq2Seq+Ret (Ours): 32.71 / 60.83 / 25.23 / 42.71 / 32.51 and 21.56 / 40.18 / 20.11 / 31.58 / 22.38. DiPS (Kumar et al., 2019): 31.77 / 59.79 / 25.37 / 40.35 / 29.28 and 23.67 / 43.64 / 27.66 / 37.92 / 25.69. Transformer (Vaswani et al., 2017): 34.23 / 61.25 / 30.38 / 42.91 / 34.65 and 29.55 / 44.53 / 32.14 / 40.34 / 28.26. DNPG (Li et al., 2019) [2]: 37.75 / 63.73 / 25.03 / - / - and - / - / - / - / -. RbM (Li et al., 2018) [2]: 38.11 / 64.39 / - / 43.54 / 32.84 and 24.23 / 41.87 / - / 44.67 / 19.97. RaE (Hashimoto et al., 2018): 35.07 / 62.71 / 29.22 / 46.21 / 29.92 and 31.53 / 47.55 / 34.16 / 44.33 / 30.09. CopyEditor+Ret (Ours): 35.59 / 62.93 / 29.78 / 46.55 / 35.56 and 27.35 / 45.54 / 28.06 / 40.30 / 26.93. FSET (Ours): 39.55 / 66.17 / 33.46 / 51.03 / 38.57 and 32.04 / 49.53 / 34.62 / 46.35 / 31.67.] The Seq2Seq+Ret, i.e. the", "retrieval-based Residual LSTM, shows an improvement over Residual LSTM on the Quora dataset.", "However, this is not the case on the Twitter dataset and we hypothesize that it is due to uncommon texts in this corpus (i.e. 
informal text with hash-tags and abbreviated words), on which the General Retriever has not been trained.", "Therefore, a pre-trained retriever cannot help in this case.", "The CopyEditor+Ret model, which incorporates a more powerful editor than Seq2Seq+Ret, shows better results than both the Residual LSTM and Seq2Seq+Ret.", "However, a phenomenon similar to what was stated for Seq2Seq+Ret is also observed for this model on the Twitter dataset.", "The RaE model, with the same editor as CopyEditor but with a supervised (task-specific) retriever, leads to near state-of-the-art results.", "This indicates the role of the supervised task-specific retriever used in RaE, especially in the results on the Twitter dataset.", "The superiority of our method over RaE in all of the metrics could be a sign of the effectiveness of our proposed editor module.", "Although our model uses the General Retriever, it still outperforms all other methods even on the Twitter dataset.", "It is worth mentioning that we can replace the General Retriever in our method with other retrievers, such as supervised task-specific ones, to improve the results even more.", "Moreover, it is worth noting that our model, which is based only on the Transformer architecture and the General Retriever (which is not required to be trained in each domain), needs much less training time than RaE.", "As there is no appropriate automatic metric for evaluating the diversity and novelty of generated sentences, we use human evaluation to assess the performance of our model qualitatively.", "We [Footnote 2: Results are directly reported from Li et al. 
(2018, 2019) on the same dataset and settings.]", "compare our method with two other methods:", "1) RaE (Hashimoto et al., 2018) as a retrieval-based method adapted for paraphrasing, and", "2) DiPS (Kumar et al., 2019) as a paraphrasing model which generates semantically diverse outputs by adopting a novel approach instead of beam search during the decoding stage.", "We choose these models as we would like to compare our method both with a state-of-the-art retrieval-based method and with a method that can generate diverse outputs.", "It must be noted that many of the recent methods in Table 2 are not able to generate diverse outputs.", "We first select 100 sentences randomly from the test set of the Quora dataset.", "Then, for each model, three paraphrases are generated for each one of the sentences, and these three outputs are considered as a paraphrase group.", "We aggregate and shuffle these paraphrase groups and ask six human annotators to evaluate them in two scenarios.", "In the first scenario, we ask the human annotators to score the outputs individually based on the following two criteria:", "1) Grammar and fluency,", "2) Consistency and coherency.", "Similar to Li et al. 
(2018), we use a 5-scale rating for each criterion.", "Table 3 presents the results.", "As can be seen, our model generally outperforms the other methods.", "Although RaE and our model can both produce grammatically correct outputs, the consistency and coherency of our method's outputs are much better.", "Moreover, the inter-annotator agreement measured by Cohen's kappa shows fair to moderate agreement between the raters assessing the models.", "Since directly scoring the diversity and novelty of one paraphrase group is not simple even for humans, in the second scenario, we ask the annotators to make one-on-one comparisons on the groups of generated paraphrases.", "In other words, for each pair of models, they have to decide which model produces better outputs for each one of the sentences (ties are also allowed).", "Figure 5 depicts the one-on-one diversity evaluation.", "Our method and RaE both outperform DiPS, probably due to their retrieval-based nature.", "Moreover, this figure reveals that our method can generate significantly better outputs compared to RaE.", "We believe the reason is that RaE's editor is not as properly designed as our editor module.", "We explicitly inject the paraphrasing patterns found in the neighboring paraphrases into the Edit Performer, which helps it to generate more diverse paraphrases.", "Please refer to A.3 for some further details on the experiments.", "Table 4 shows some examples of the paraphrases generated by our model.", "A common pattern among the output paraphrases is that the model has combined different parts of the input sentence and the retrieved target sentence to create a grammatically correct paraphrase.", "Furthermore, the model is able to change the syntactic form of words correctly, or replace them with their synonyms, as is the case for 'via' and 'by using', and 'is' and 'are'.", "The fact that the model can apply these changes at different levels of granularity can indicate that the idea of editing based on 
MEVs works properly.", "(1) x Why do people ask such questions here on Quora which could be easily found on the internet ?", "p Why do people ask questions on Quora that could simply be googled?", "q why do people ask questions on quora that are easily answerable via a quick internet search ?", "y′ why do people ask questions on quora that could be easily answered by using a quick internet search ?", "(2) x Who can the Democrats effectively run against Trump in 2020 ?", "p Who will lead Trump 's movement after the election ?", "q Who do you think is most likely to lead the Trump movement after 2016 ?", "y′ who do you think is the most likely democrat to run against trump in 2020 ?", "(3) x What should I do when someone marks my question as needing improvement?", "p Even though most of my asked questions are in well written form, they are usually marked as needing improvement.", "What should I do ?", "q Most of my questions are marked as needing improvement .", "Where can I seek professional help ?", "y′ what should I do if my question is marked as needing improvement ?", "(4) x What is the best way to make money as a 15 year old ?", "p What are the best ways to make money as a teenager ?", "q I 'm a teen , how can I make some money online ?", "y′ I 'm a 15 year old , how can I make money ?", "In order to further evaluate the role of each module in our model, we train and assess different variants of it, where in each variant a key component has been replaced by an alternative simpler one:", "Jaccard Retriever: The retriever of our model is replaced by a simple retriever that selects neighbor sentences using the Jaccard similarity metric.", "No edit vector z: A variant in which we do not condition the Transformer in the Edit Performer on the aggregated edit vector z, and edit the source sentence merely based on MEVs.", "No Attention on MEVs: In this variant of our model, the Transformer in the Edit Performer is not conditioned on MEVs, and the 
source sentence is edited based only on z.", "We train all of these variants on the Quora paraphrasing dataset.", "Table 5 shows the results of these models.", "As can be seen, the model which uses the Jaccard similarity measure performs worse than the original model with the General Retriever.", "Nonetheless, the results of this variant show that even the combination of our editor module with this simple retriever outperforms previous state-of-the-art methods.", "This indicates that our proposed editor can distinguish whether the extracted edits are plausible enough to be applied to the input sentence.", "Moreover, the results show that eliminating either z or M from our editor decreases its performance.", "In other words, both conditioning on z as the aggregated edit at each step of generation and the attention on the MEVs M help the proposed editor.", "In this paper, we proposed a retrieval-based paraphrase generation model which includes a novel fully-attentional editor.", "This editor learns how to extract edits from a paraphrase pair, and also when and how to apply these edits to a new input sentence.", "We also introduced the new idea of Micro Edit Vectors, where each one of these vectors represents a small edit that should be applied to the source sentence to get its paraphrase.", "We incorporated Transformer modules in our editor and augmented them with attention over the Micro Edit Vectors.", "The proposed model outperforms the previous state-of-the-art paraphrase generation models in terms of both automatic metrics and human evaluation.", "Moreover, the outputs show that our model is able to produce paraphrases by editing sentences in a fine-grained manner using the idea of MEVs.", "In future work, we intend to adapt our editor module for other learning tasks with both structured input and structured output.", "We thank the anonymous reviewers for their detailed feedback and suggestions." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", 
"abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "objective", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain" ]
[ "The recent success of reinforcement learning (RL) in solving complex tasks is often attributed to its capacity to explore and exploit an environment.", "Sample efficiency is usually not an issue for tasks with cheap simulators to sample data online.", "On the other hand, Task-oriented Dialogues (ToD) are usually learnt from offline data collected using human demonstrations.", "Collecting diverse demonstrations and annotating them is expensive.", "Unfortunately, RL policies trained on off-policy data are prone to issues of bias and poor generalization, which are further exacerbated by stochasticity in human responses and the non-Markovian nature of the annotated belief state of a dialogue management system.", "To this end, we propose a batch-RL framework for ToD policy learning: Causal-aware Safe Policy Improvement (CASPI).", "CASPI includes a mechanism to learn a fine-grained reward that captures the intention behind human responses and also offers guarantees on the dialogue policy's performance against a baseline.", "We demonstrate the effectiveness of this framework on the end-to-end dialogue task of the MultiWoz2.0 dataset.", "The proposed method outperforms the current state of the art.", "Furthermore, we demonstrate sample efficiency: our method trained on only 20% of the data is comparable to the current state-of-the-art method trained on 100% of the data on two out of three evaluation metrics.", "Offline task-oriented dialogue (ToD) systems involve solving the disparate tasks of belief state tracking, dialogue policy management, and response generation.", "Of these tasks, in this work we focus on dialogue policy management to improve the end-to-end performance of ToD.", "The need for sample efficiency is key for learning an offline task-oriented dialogue system, as access to data is finite and expensive. [Footnote: Contributed to this work during his time at Salesforce Research. Code: https://github.com/salesforce/CASPI]", "Recent advancements in off-policy reinforcement learning methods that use offline
data as against a simulator have proven to be sample efficient (Thomas and Brunskill, 2016).", "The effective use of these techniques is hindered by the nature of ToD.", "For instance, bias correction in off-policy based methods usually requires estimation of the behaviour policy for a given state of a Markov Decision Process (MDP).", "In ToD, the per-turn annotated belief-state does not capture the true state of the MDP.", "Examples of such annotated belief-states are shown in Fig:1.", "Latent state information, such as prosody and the richness of natural language, among others, induces stochasticity in the agent's response.", "In addition to these shortcomings, the direct use of an automatic evaluation metric as the reward for policy learning is not desirable, since these automatic evaluation metrics are often computed for the entire dialogue and not per turn.", "Hence such rewards are sparse and under-specified (Wang et al., 2020).", "Use of an under-specified reward will often lead to a policy that suffers from high variance (Agarwal et al., 2019).", "Alternatively, the use of imitation-learning-based methods falls short of reasoning about the outcome.", "This is demonstrated in Fig:1.", "Turns#3 and #2 are rich in semantic information and Turn#3 is key to the success of the booking process.", "Turn#4, in contrast, contributes the least to a successful outcome.", "Though the turns have varying levels of importance, each of the turns is treated equally in imitation learning.", "In the worst case, turns like Turn#4 will appear more often than Turn#2 and #3 in a ToD dataset, thereby taking a greater share of the gradient budget.", "We address the aforementioned shortcomings with the following key contributions: 1. We introduce pairwise causal reward learning to learn a fine-grained per-turn reward that reasons about the intention of the human utterance.", "2. We adapt safe policy improvement to the task-oriented dialogue setting, which guarantees performance against a baseline.", "With the release of the multi-domain, multi-turn MultiWoz2.0 dataset (Budzianowski et al., 2018a), there has
been a flurry of recent works, of which Zhang et al. (2019) uses data augmentation.", "Rastogi et al. (2019) and Hosseini-Asl et al. (2020) frame dialogue policy learning as a language modeling task.", "Among the works that use reinforcement learning:", "Mehri et al. (2019) uses supervised learning to bootstrap followed by RL fine-tuning, whereas Zhao et al. (2019) uses policy gradient on a latent action space as against handcrafted ones.", "Jaques et al. (2019) and Wang et al. (2020) use Batch-RL for dialogue policy learning.", "(Wang et al., 2020) are the first to argue that the direct use of automated evaluation metrics as reward is under-specified for ToD policy learning.", "Recently there has been a proliferation in the use of large pretrained-language-model-based systems like Hosseini-Asl et al. (2020), Lin et al. (2020), Chen et al. (2019), etc.", "More details contrasting the merits and limitations of these methods can be found in Sec:A.1", "The line of inverse RL used in this work can be traced back to Ziebart et al. (2008), which proposes that roll-outs from expert demonstrations should have rewards exponentially higher than any other arbitrary roll-outs.", "This method requires a normalizing constant that integrates across rollouts, which is challenging.", "Christiano et al. (2017) and Thananjeyan et al.
(2020) propose to do a relative comparison of two roll-outs, thereby eliminating the need for a normalization constant, and they demonstrate this in an online setting.", "We model task-oriented dialogue as a Markov decision process (MDP) (Sutton and Barto, 2018) with a set of states S and actions A .", "The agent at time step t with state s t performs a composite action a t as per a target policy e ( a t | s t ) on the environment.", "The environment is defined by transition probabilities P ( s t +1 | s t , a t ) , a latent reward function R ( s t , a t , g ) , a discount factor γ ∈ [0 , 1] and the goal of the dialogue g .", "The objective of the target policy e is then to maximize the discounted sum of future rewards on the MDP, given by the state-action value function Q e ( a t , s t ) = E a t' ∼ e , s t' ∼ P [ Σ_{t'=t}^{T} γ^{t'−t} R ( s t' , a t' , g )] .", "In offline Batch-RL:", "The agent does not get to interact with the environment; instead we are provided with offline data D logged by human agents performing actions based on a latent stochastic behaviour policy b .", "A rollout of a dialogue i ∈ D is composed of τ i = (( o i 0 , a i 0 ) , ..., ( o iT−1 , a iT−1 )) .", "Here o t is the observation at turn t , comprising o t = ( b t , u ut , u at−1 ) , where b t is the belief state of the agent at turn t , and u ut and u at−1 are the user and agent utterances at time t and t−1 respectively.", "Batch-RL entails training the target policy e on rollouts generated by a latent behaviour policy b .", "Directly optimizing on the rollouts generated by a policy other than the target policy will lead to large bias in the value function estimation, poor generalization characteristics, and sample inefficiency (Thomas and Brunskill, 2016).", "Safe policy improvement ensures the new policy's performance is lower-bounded by the performance of a baseline policy.", "This is expressed as: Pr ( V e ≥ V b − ε ) ≥ 1 − δ , where V e and V b are the value functions of the target and behaviour policy respectively.", "Here 1 − δ and ε are the high-probability
and approximation meta-parameters respectively.", "Schulman et al. (2015) provide such an update mechanism, (1), whose errors are bounded as long as the constraints of (1) are met, where DKL ( . || . ) is the KL divergence and δ is a hyper-parameter. (Figure 2: Shows the stochasticity, i.e. the number of different dialogue acts against each delexicalized belief state in the MultiWoz2.0 dataset.)", "L sto ( θ ) = min θ E s t ∼ P bs , a t ∼ bs [ ( e ( a t | s t ; θ ) / bs ( a t | s t ) ) Q bs ( s t , a t ) ] s.t. E s t ∼ P bs [ DKL ( bs ( . | s t ) || e ( . | s t )) ] ≤ δ (1)", "(Schulman et al., 2015) originally formulated (1) for online learning as a trust region for policy updates, and uses the policy before the gradient update as the baseline policy, bs ( a t | b t ; θ old ) .", "In this work we adapt it to the offline setting and use the behaviour policy b as the baseline policy.", "Use of this update rule requires access to the behavior policy b ( a t | s t ) , which is intractable to estimate, and learnt ones might have bias.", "Use of such a behavior policy to perform bias correction by Importance Sampling (Precup, 2000) might lead to a worse policy.", "Instead we estimate the behaviour policy conditioned only on the annotated belief-state b t , as against the true state s t in (1), which results in a stochastic behavior policy.", "This stochasticity of the dialogue act vis-à-vis the annotated belief state can be observed in Fig:2.", "We also estimate the Q-function of the behavior policy, Q b ( b t , a t ) , using the learnt reward R ( s t , a t , g ) .", "More on the learnt reward in Sec: 3.3.", "The belief state b t is part of the observation o t ; hence we posit that, given more evidence of the observation o t (besides b t ), the mode of the policy collapses to a near-deterministic action.", "To factor this into the policy learning, we have an additional loss: L det ( θ ) = min θ E ( o t , a t ) ∼ D [ − G ( τ , t ) log e ( a t | o t ; θ )] (2) where the return G ( τ , t ) = Σ_{t'=t}^{T} γ^{t'−t} R ( s t' , a t' , g ) is the discounted sum of future rewards
for a rollout τ with goal g .", "Hence the policy optimization loss function is given by: L ( θ ) = λ L sto ( θ ) + (1 − λ ) L det ( θ ) (3)", "We achieve this by doing two forward passes of the policy network e ( a t | o t ; θ ) , first with only the belief state b t as the input, and a second pass with the entire observation, i.e. o t := ( b t , u ut , u at ) , as input to the policy network.", "We then use the corresponding action distributions e ( a t | b t ; θ ) and e ( a t | o t ; θ ) in loss functions (1) and (2) respectively.", "3.3 Pairwise causal reward learning Algorithm 1 CASPI Input : Dialogue dataset D and evaluation metric M ( . ) Sub-sample K folds of train and val sets { ( DT , DV ) 1 , ..., ( DT , DV ) k | ( DT , DV ) ∈ D } for ( DT , DV ) do Learn ToD in a supervised setting by optimizing for the objective: min E a t , s t ∼ DT − log( m ( a t | s t )) for epoch do Using m ( a t | s t ) , predict actions on the val set DV and add them to the dataset DP along with the corresponding metric score M ( τ ) for pairwise causal reward learning: DP = DP ∪ { ( τ , M ( τ )) | τ ∼ m } end for end for repeat Sample a pair of rollouts ( τ 1 , τ 2 ) ∼ DP Learn R ( . ) by optimizing for objective (4) until Convergence, using data DP repeat Optimize for policy e using objective (3) until Convergence, using data D The policy optimization objective introduced in the previous section requires access to a per-timestep reward R ( s t , a t , g ) .", "To this end, we provide a mechanism to learn a reward that is causally reasoned on the intention of the human demonstrator. (Figure 3: Process flow of pairwise causal reward learning.)", "Usually, ToD systems are evaluated using dialogue-level automatic evaluation metrics M ( . ) .", "Given the large state-action space of the dialogue management system, this dialogue-level feedback is under-specified for effective policy learning (Wang et al., 2020).", "Details about the choice of evaluation metric M ( .
) are covered in Sec:4.4.2.", "To address this under-specified feedback, we adapt the preference learning introduced by Christiano et al. (2017) from an online to an offline setting, to learn a fine-grained per-dialogue-turn (i.e. per timestep t ) reward R ( s t , a t , g ) .", "Given a pair of rollouts τ 1 , τ 2 ∈ D , the actions for each state in the rollouts are sampled from a pair of different policies 1 m and 2 m respectively.", "Let τ 1 ≻ τ 2 represent the preference of rollout τ 1 over rollout τ 2 .", "This preference holds when the sum of the rewards of each dialogue turn of the two rollouts satisfies: Σ_{t=0}^{T} R ( s t , a t , g | ( s t , a t ) ∈ τ 1 ) > Σ_{t=0}^{T} R ( s t , a t , g | ( s t , a t ) ∈ τ 2 ) .", "For brevity, henceforth we refer to Σ_{t=0}^{T} R ( s t , a t , g | ( s t , a t ) ∈ τ ) as R ( τ ) .", "The preferential probability of one rollout over another can then be represented by: P [ τ 1 ≻ τ 2 ] = f ( R ( τ 1 )) / ( f ( R ( τ 1 )) + f ( R ( τ 2 )) ) Here f ( . ) could either be exp ( . ) or the identity ( . ) .", "In our experiments, the latter works best.", "We optimize for the reward R ( s t , a t , g ) by minimizing the binary cross-entropy loss between the preference probability and the normalized metric scores of a pair of rollouts.", "We observe that the dialogue roll-outs are generated by an expert latent policy.", "The data (dialogue rollouts) are distributed as per the optimal latent policy and transition probability.", "We propose that the predictions made by a policy while in the process of learning to maximize the likelihood of the data are a good curriculum for exploring the state-action space for pairwise reward learning.", "This is a key insight of this work.", "We formalize this insight into a method depicted in Fig:3 and Algo:1.", "The (train) dataset is subsampled into K -fold train & val sets.", "K baseline policies are trained to fit the data distribution generated by experts using a cross-entropy loss, i.e. supervised learning.", "During the process of fitting the data distribution, the still
learning K-policies are used to predict on their corresponding K-fold val set at every epoch of training.", "Each of these predictions is then scored by a chosen dialogue-level metric, M ( . ) .", "On convergence of this supervised learning process, pairs of dialogue predictions generated by the above process, along with their corresponding metric scores, are used to train the fine-grained reward R ( a t , s t , g ) using objective (4).", "The use of K-fold subsampling, K baseline policies,", "m , and actions sampled from these K-policies that are still in the process of learning help generate counterfactual examples in the action space.", "These counterfactual actions close to the optimal policy, along with the goal of the dialogue, help us to learn the subtle nuances of the fine-grained reward function R ( a t , s t , g ) in the region of the action space that matters the most.", "4 Experimental Settings 4.1 Model 4.1.1 CASPI(.)", "The reward learnt using CASPI, R ( s t , a t , g ) , is akin to sample weights for each dialogue turn, which help to redistribute the gradient budget among dialogue turns based on their contribution to the overall success of the ToD.", "Hence we believe our pairwise causal reward learning and the associated improvement in sample efficiency are independent of model architecture.", "To this end we choose two ToD methods that are at the extremes of the model-architecture spectrum:", "1) one uses a lightweight custom model, and", "2) the other uses a large, standard, pre-trained, out-of-the-box universal language model.", "In this setting, we use the neural model proposed by Zhang et al.
(2019).", "DAMD is composed of three seq2seq generative models using GRUs.", "The three seq2seq models are one each for the belief state, dialogue act and response generation modules.", "An attention layer is used to attend over the outputs of the seq2seq models with the context vector of the previous turn for a copy-over mechanism.", "The outputs of this attention layer are used as representations for predicting the series of tokens for their respective modules.", "For more details on the model architecture and parameter settings, refer to Zhang et al. (2019).", "In this setting we use both the stochastic, L sto , and deterministic, L det , loss functions on the dialogue act.", "For DST and response generation, we retain the cross-entropy loss as is from DAMD (Zhang et al., 2019).", "4.1.3 CASPI(MinTL) At the other extreme of model complexity, we use the task-oriented dialogue model MinTL (Lin et al., 2020).", "MinTL uses a large pretrained language model, BART (Lewis et al., 2019).", "BART uses a standard encoder-decoder Transformer architecture with a bidirectional encoder and an autoregressive decoder.", "It is pre-trained on the task of denoising corrupted documents.", "BART is trained using a cross-entropy loss between the decoder output and the original document.", "For more details of the model architecture and parameter settings, we suggest referring to (Lin et al., 2020) and (Lewis et al., 2019).", "MinTL doesn't explicitly predict dialogue acts.", "Hence we only use the deterministic loss L det directly on the generated response, and for DST we retain the loss as is from MinTL (Lin et al., 2020).", "For the K-model training of pairwise causal reward learning illustrated in Fig:3, we chose the DAMD (Zhang et al., 2019) model for its lightweight model architecture.", "In all our experiments, we use K = 10 .", "For the pairwise causal reward learning network, we use three single bi-LSTM layers, one each to encode the goal, belief state, and either dialogue act or response sequences at each dialogue turn
on each of the sampled roll-out pairs, τ 1 and τ 2 .", "The three encoded representations are concatenated and fed through a couple of feed-forward layers before making a bounded reward prediction R ( s t , a t , g ) ∈ [0 , 1] for each turn using a sigmoid function.", "The per-turn rewards are summed to form a global reward R ( τ ) for the roll-out τ .", "Using a pair of dialogue rewards R ( τ 1 ) and R ( τ 2 ) , we compute the probabilistic preference between the roll-outs, P [ τ 1 ≻ τ 2 ] , either by standard normalization or a softmax function.", "The output of this is optimized using the binary cross-entropy loss described in Eqn:4.", "The above-described architecture is illustrated in Fig:10 .", "We evaluate our proposed method on the Multi-domain Wizard-of-Oz (MultiWoz) (Budzianowski et al., 2018a) dataset.", "It is a large-scale multi-domain, task-oriented dataset generated by human-to-human conversation, where one participant plays the role of a user while the other plays the agent. The conversations are between a tourist and a clerk at an information center.", "The conversations span 7 domains, including attraction, hospital, hotel, police, restaurant, taxi and train.", "Each dialogue is generated by users with a defined goal which may cover 1-5 domains, with a maximum of 13 turns in a conversation.", "The dataset has 10438 dialogues, split into 8438 dialogues for the training set and 1000 dialogues each for the validation and test sets.", "We represent DB results as one-hot vectors as proposed by Budzianowski et al. (2018b).", "To reduce surface-level variability in the responses, we use the domain-adaptive delexicalization preprocessing proposed in Wen et al. (2016).", "As proposed in Zhang et al.
(2019), we generate delexicalized responses with placeholders for specific values which can be filled with information in the DST and database.", "We evaluate the performance of our method on the end-to-end dialogue modeling task of MultiWoz2.0 (Budzianowski et al., 2018a).", "We use three evaluation metrics proposed by (Budzianowski et al., 2018a).", "These include:", "1) inform rate measures the fraction of dialogues in which the system has provided the correct entity,", "2) success rate measures the fraction of dialogues in which the system has answered all the requested information, and", "3) BLEU (Papineni et al., 2002) measures the fluency of the generated response.", "We also report the combined score ( Inform + Success ) × 0 .", "5 + BLEU proposed by Mehri et al. (2019).", "All the numbers for CASPI reported in this work are the median of 5 runs with different seeds.", "For the metric M used in pairwise causal reward learning, we use the following:", "This is very similar to the combined score used in evaluation, and both are equivalent when the hyperparameter is 2.", "We introduced this hyperparameter to normalize the achievable scale of BLEU .", "We observe that the success rate, if used as is, will result in a non-Markovian and stochastic per-turn reward function.", "This is because the reward of the current state will depend on the performance of future states.", "Hence, we also use a soft version of the metric, M soft , where the success rate measures the fraction of requested information provided in a dialogue.", "We refer to the original metric that uses the discrete variant of the success rate as M hard .", "The choice of action in the reward function R ( s t , a t , g ) can either be the dialogue act or the generated response; we refer to the corresponding variants of the metrics as M ( act ) and M ( resp ) .", "To demonstrate the versatility of our method to adapt to different metrics, we use all the discussed variants of the metric.", "5 Results We compare both adaptations of our method, CASPI(DAMD) and CASPI(MinTL), on the end-to-end dialogue tasks defined by
MultiWoz2.0 (Budzianowski et al., 2018a).", "The results are tabulated in Table:1.", "CASPI(DAMD), with its lightweight model architecture and no pretraining on any external corpus, outperforms all previous methods except (Lubis et al., 2020); these include methods that use large pretrained language models such as Hosseini-Asl et al. (2020), Peng et al. (2020) and Lin et al. (2020).", "This shows that using CASPI to shepherd the gradient update process, via sample weights for each dialogue turn, leads to a model that is well aligned with the true objective of the task.", "CASPI(MinTL), with its robust pretrained model, outperforms CASPI(DAMD) and LAVA (Lubis et al., 2020) by a large margin.", "This demonstrates the ease of adapting existing methods with CASPI.", "Inverse reinforcement learning, coupled with off-policy policy learning and evaluation, is proven to be sample-efficient (Thomas and Brunskill, 2016).", "We argue CASPI is competitive with other sample-efficiency techniques, such as data augmentation and transfer learning as performed by Zhang et al. (2019) and Lin et al. (2020) respectively.", "To demonstrate this hypothesis, we test our method against baselines in a low-sample-complexity regime.", "For the experimental setup, we adopt the low-resource testing strategy from Lin et al.
(2020).", "We train our model on 5%, 10%, and 20% of the training data and compare with other baselines on the end-to-end dialogue task; Table 2 lists the results.", "CASPI(MinTL) trained on only 20% of the data was able to outperform the previous state-of-the-art methods, LAVA (Lubis et al., 2020) and MinTL (Lin et al., 2020), trained on 100% of the data on two of the three performance metrics.", "This goes to show that having the right reward function to guide the budget of the gradient update process toward the true objective is important in an extremely low-resource setting.", "Automatic evaluation metrics have their own biases.", "The true objective of ToD is the human experience while interacting with the dialogue system, which automatic evaluation metrics may fall short of capturing.", "To this end we conduct a human evaluation of the quality of the generated responses.", "We define quality by the following criteria:", "A dialogue turn in the test set is randomly picked.", "The human evaluators were shown the context leading up to the turn.", "The predictions for the turn by different methods were anonymized and displayed to the evaluators.", "This is illustrated in Fig:4.", "The human evaluators were asked to give a score between 1 and 5 for appropriateness and fluency, with a score of 5 being the best and 1 being the worst.", "100 randomly selected dialogue turns were presented to 10 participants", "We report the mean and variance of the scores.", "We compare our model's performance against MinTL (Lin et al., 2020), SimpleTOD (Hosseini-Asl et al., 2020), LAVA (Lubis et al., 2020) and DAMD (Zhang et al., 2019).", "Fig:5 shows the results of the evaluation.", "CASPI(MinTL) outperforms all other models in appropriateness score.", "The fluency scores of CASPI(MinTL), MinTL and SimpleTOD are comparable to each other.", "It is worth noting that though LAVA (Lubis et al., 2020) performs well on automatic evaluation metrics, it performs poorly on human evaluation.", "We suspect the policy learnt by
(Lubis et al., 2020) exploits gaps in the reward function.", "In the case of LAVA (Lubis et al., 2020), the success rate is used as the reward function.", "In our analysis, a low BLEU score is a good indicator that the learnt policy indulges in reward hacking, which LAVA (Lubis et al., 2020) exhibits.", "More on reward hacking in Sec:5.4.2.", "In the previous section we argued that automatic dialogue evaluation metrics are biased and don't truly reflect the human objective, yet in our method we use these very same dialogue evaluation metrics to learn the reward R ( s t , a t , g ) .", "To bridge this gap, we performed the following human-in-the-loop (HITL) experiment.", "We first trained a pair of CASPI(MinTL) models with different seeds on 5% of the Multiwoz2.0 dataset.", "We then used this pair of models to predict on 0.5% of the Multiwoz2.0 train data (40 dialogues) and had a human score these pairs of generated responses relative to each other.", "We then trained for the reward R ( s t , a t , g ) using pairwise causal reward learning as described in Sec:3.3, where the examples of the mini-batch are randomly sampled either from the human-scored examples or the ones scored by the automatic evaluation metric, as shown in Fig:6.", "We then trained a fresh CASPI(MinTL) model on the original 5% of the data and the learnt R ( s t , a t , g ) .", "We performed a human evaluation on 24 dialogues using 3 participants.", "Fig:7 shows the performance. (Figure 4: Example of generated responses by different ToD models. Figure 5: Human evaluation on the criteria Appropriateness and Fluency.)", "Though CASPI(MinTL) using just 5% of the data outperforms DAMD trained on 100% of the data in 2 out of the 3 automatic evaluation metrics shown in Table:1 and 2, it performs poorly on human appropriateness score.", "With the HITL score in the reward learning, we see a boost in performance on both human evaluation criteria: appropriateness and fluency.", "The 5%-data CASPI(MinTL)'s human appropriateness score is now comparable to 100%
data DAMD.", "This goes to show the versatility of pairwise causal reward learning.", "With enough expressiveness in the neural network used, pairwise causal reward learning can generalize to unknown dialogue evaluation criteria.", "In this section we qualitatively analyze the results of pairwise causal reward learning.", "Fig:8 shows the same conversation between a tourist and an information center agent that we introduced earlier; now we have a learnt reward R ( s t , a t , g ) against each turn.", "We observe that Turn#3 has received the highest reward; retrospectively, we realize the transaction happens in this turn, which is crucial and has to be risk-averse for the success of the dialogue. (Figure 6: Mixed human-in-the-loop and automatic evaluation metric scores for pairwise causal reward learning. Figure 7: Human evaluation of human-in-the-loop training of CASPI(MinTL) on 5% of the Multiwoz2.0 dataset.)", "Turn#2 gets the next-best reward, which captures crucial information needed for the transaction to happen in Turn#3.", "Turn#4 gets a reward an order of magnitude lower than Turns#3 & 2 because, other than nicety, it doesn't contribute much to the success of the conversation.", "It should be noted that responses like Turn#4 will appear in almost all conversations, and in supervised learning these turns would receive the highest share of the gradient budget.", "The learnt reward redistributes the gradient budget based on the turn's contribution to the success of the dialogue objective.", "In this section we analyze the types of behaviour CASPI agents sometimes exhibit, especially when trained in a low-sample regime.", "Greedy agent: In certain domains, the agent has a tendency to book a service before it has gathered all the required information or before the user has requested or agreed to booking a service.", "The first example in Fig:9 demonstrates this behaviour.", "Here the user has requested a taxi; before enough information such as destination or time of departure Figure 8:
Example of learnt reward. Figure 9: Example of agent behaviour in a low-sample regime.", "are gathered, the agent books the taxi.", "This happens because there are gaps in the automatic evaluation metrics.", "A low BLEU score and relatively high inform and success rates might indicate greedy agent behaviour.", "Other reasons for a low BLEU score include a lack of diversity in the responses or malformation of the response.", "Cautious agent: The agent tends to be cautious by providing long-winded replies packed with more information than needed.", "Agents tend to do this to prevent the risk of losing rewards by missing out on any requested information.", "This behaviour is demonstrated in the second example in Fig:9. These subtle behaviours demonstrate gaps in the automatic evaluation metrics, which could be weeded out using the human-in-the-loop learning described in Sec:5.3.", "In this work we introduced a fine-grained reward learning process that uses under-specified metrics and expert demonstrations to efficiently learn task-oriented dialogue.", "We demonstrated the efficacy of our method on the MultiWoz2.0 dataset, with results comparable to the existing state-of-the-art method using only 20% of the data.", "We believe the method is generic and can be extended to other NLP tasks." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", 
"abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective" ]
[ "Given a sentence and its relevant answer, how to ask good questions is a challenging task, which has many real applications.", "Inspired by humans' paraphrasing capability to ask questions of the same meaning but with diverse expressions, we propose to incorporate paraphrase knowledge into question generation (QG) to generate human-like questions.", "Specifically, we present a two-hand hybrid model leveraging a self-built paraphrase resource, which is automatically constructed by a simple back-translation method.", "On the one hand, we conduct multi-task learning with sentence-level paraphrase generation (PG) as an auxiliary task to supplement paraphrase knowledge to the task-share encoder.", "On the other hand, we adopt a new loss function for diversity training to introduce more question patterns to QG.", "Extensive experimental results show that our proposed model obtains obvious performance gains over several strong baselines, and further human evaluation validates that our model can ask high-quality questions by leveraging paraphrase knowledge.", "Question generation (QG) is an essential task for NLP, which focuses on generating grammatical questions for given paragraphs or sentences.", "It plays a vital role in various realistic scenarios.", "For educational purposes, QG can create reading comprehension materials for language learners (Heilman and Smith, 2010).", "For business use, QG can bring benefits to conversation systems and chat-bots for effective communication with humans (Mostafazadeh et al., 2016).", "Besides, automatically-generated questions can be conversely used for constructing question answering datasets to enhance reading comprehension systems.", "(Tang et al., 2017; Duan et al., 2017; Xu et al., 2019; Zhang and Bansal, 2019).", "Recent neural network-based methods have achieved promising results on QG, most of which are based on the seq2seq attention framework (Du et al., 2017; Zhou et al., 2017; Gao et al., 2018;
Kim et al., 2018; Zhou et al., 2019b), enriched with lexical features (Zhou et al., 2017; Sun et al., 2018; Song et al., 2018) or enhanced by copy mechanism (Du and Cardie, 2018; Sun et al., 2018; Zhou et al., 2019a).", "Although much progress has been made for QG, existing approaches do not explicitly model the notorious lexical and syntactic gaps in the generation process.", "That is, some parts of two texts (e.g. the input sentence and reference question, the reference question and generated question) may convey the same meaning but use different words, phrases or syntactic patterns.", "Figure 1: A sketch of our design to leverage paraphrase knowledge in QG.", "In real communication, humans often paraphrase a source sentence to ask questions which are grammatical and coherent.", "Take SQuAD (Rajpurkar et al., 2016), a popular reading comprehension dataset widely used for QG, as an example: a large percentage of its questions are created by paraphrasing (33.3% of the questions contain synonymy variations and 64% contain syntactic variations (Rajpurkar et al., 2016)).", "Two examples are shown in Table 1.", "Due to the lack of paraphrase knowledge, the generated questions simply copy certain words from the input sequence, and their quality is thus not competitive with human-created questions.", "To address this issue, we introduce paraphrase knowledge in the QG process to generate human-like questions.", "The sketch of our design is illustrated in Figure 1.", "
To make our model easy to implement and to train it in an end-to-end fashion, we do not use any extra paraphrase generation (PG) dataset but just use a simple back-translation method to automatically create paraphrases for both the input sentences and reference questions.", "Based on the high-quality expanded data, we propose a two-hand hybrid model.", "On the left hand, using the expanded sentence paraphrase as the target of PG, we perform multi-task learning with PG and QG, to optimize the task-share encoder with the paraphrase knowledge.", "On the right hand, with the gold reference question and question paraphrase as QG's multi-targets, we adopt a new min-loss function, to enable the QG module to learn more diverse question patterns.", "We conduct extensive experiments on SQuAD and MARCO (Nguyen et al., 2016).", "Results show that both separate modules, the PG auxiliary task and the min-loss function, obviously improve the performance of the QG task, and combining them achieves further improvements.", "Furthermore, human evaluation results show that our hybrid model can ask better and more human-like questions by incorporating paraphrase knowledge.", "For current mainstream neural network-based methods on QG, most approaches utilize the Seq2Seq model with attention mechanism (Du et al., 2017; Zhou et al., 2017; Zhao et al., 2018b; Zhou et al., 2019a).", "To obtain better representations of the input sequence and answer, the answer position and token lexical features are treated as supplements for the neural encoder (Zhou et al., 2017; Song et al., 2018; Kim et al., 2018).", "Similar to other text generation tasks, many works on QG also employ copy or pointer mechanism to overcome the OOV problem (Du and Cardie, 2018; Sun et al., 2018; Zhang and Bansal, 2019).", "Recently, Zhou et al.
(2019a) employ language modeling (LM) as an auxiliary task to enrich the encoder representations.", "In this paper, we adopt this work as one of the baseline models, since their universal model is easy to implement and achieves promising results for QG.", "In order to make use of the context information of paragraphs, Zhao et al. (2018b) propose a gated self-attention network to encode context passage.", "Based on this, Zhang and Bansal (2019) apply reinforcement learning to deal with semantic drift in QG; Nema et al. (2019) use a passage-answer fusion mechanism to obtain answer-focused context representations; Li et al. (2019a) utilize gated attention to fuse answer-relevant relation with context sentence.", "Besides, Chen et al. (2019) design different passage graphs to capture structure information of passage through graph neural networks.", "Dong et al. (2019) propose a unified language model pre-training method to obtain better context representations for QG.", "All these works adopt a whole paragraph as input to generate questions.", "Different from this, our work only takes a sentence as input and leaves paragraph-level QG for future research.", "Paraphrase generation is also a challenging task for NLP.", "Recent works usually obtain paraphrases by reordering or modifying the syntax or lexicon based on some paraphrase databases and rules (Fader et al., 2013; Chen et al., 2016), or by employing some neural generation methods (Prakash et al., 2016; Li et al., 2019b).", "In this paper, we employ a simple and effective paraphrasing method to expand both input sentences and reference questions.", "Our method also can be replaced with more sophisticated paraphrasing methods.", "question answering, and text simplification.", "Callison-Burch et al. (2006) use paraphrase techniques to deal with unknown phrases to improve statistical machine translation.", "Fader et al. (2013) and Dong et al. 
(2017) employ paraphrase knowledge to enhance question answering models.", "Kriz et al. (2018) utilize paraphrase and context-based lexical substitution knowledge to improve simplification task.", "Similarly, Zhao et al. (2018a) combine paraphrase rules of PPDB (Ganitkevitch et al., 2013) with Transformer (Vaswani et al., 2017) to perform sentence simplification task.", "Guo et al. (2018a) propose a multi-task learning framework with PG and simplification.", "In addition, Yu et al. (2018) and Xie et al. (2019) use paraphrase as data argumentation for their primary tasks.", "Different from these works, we leverage paraphrase knowledge for question generation, by automatically constructing a built-in paraphrase corpus without using any external paraphrase knowledge bases.", "In this section, we first describe two baseline models we used: feature-enriched pointer-generator and language modeling enhanced QG.", "Then we explain how to obtain paraphrase resources and show the quality statistics.", "Furthermore, we describe in detail two modules of utilizing paraphrase knowledge: the PG auxiliary task and the min loss function, as well as their combination.", "The overall structure of our hybrid model is shown in Figure", "2. 3.1 Baseline Models 3.1.1 Feature-enriched Pointer-generator Sun et al. (2018) enhance pointer-generator (See et al., 2017) model with rich features proposed by Zhou et al. (2017).", "They adopt a bidirectional LSTM as the encoder, which takes the feature-enriched embedding e i as input: e i = [ w i ; a i ; n i ; p i ; u i ] (1) where w i , a i , n i , p i , u i respectively represents embeddings of word, answer position, name entity, POS and word case.", "Same as the decoder used by See et al. 
(2017), another unidirectional LSTM with attention mechanism is used to obtain the decoder hidden state s t and context vector c t .", "Based on these, the pointer-generator model will simultaneously calculate the probabilities of generating a word from vocabulary and copying a word from the source text.", "The final probability distribution is the combination of these two modes with a generation probability p g : P ( w ) = p g P vocab + (1 p g ) P copy (2) The training objective is to minimize the negative log likelihood of the target sequence q : L qg = 1 T qg T qg (cid:88) t =1 logP ( y qgt = q t ) (3) 3.1.2 Language Modeling Enhanced QG Zhou et al. (2019a) enhance QG with language modeling under a hierarchical structure of multitask learning.", "The language modeling aims at predicting the next and previous words in the input sequence with forward and backward LSTMs, respectively, which serves as a low-level task to provide semantic information for the high-level QG task.", "In general, the input sequence will firstly be fed into the language modeling module to get the semantic hidden states, then these states will be concatenated with the input sequence to obtain the input of the feature-rich encoder: e i = [ w i ; a i ; n i ; p i ; u i ; h lmi ] (4) where h lmi is the semantic hidden state of LM module.", "The loss function of language modeling is defined as: L lm = 1 T lm 1 T lm 1 (cid:88) t =1 log ( P lm ( w t +1 | w <t +1 )) 1 T lm 1 T lm (cid:88) t =2 log ( P lm ( w t 1 | w >t 1 )) (5) where P lm ( w t +1 | w <t +1 ) and P lm ( w t 1 | w >t 1 ) represent the generation probabilities of the next word and the previous word, respectively.", "As a result, the total loss of language modeling enhanced QG is formulated as: L lqg = L qg + L lm (6) where is a hyper-parameter to control the relative importance between language modeling and QG.", "Follow the work of Zhou et al. 
(2019a), we set to 0.6.", "We re-implement this unified model to base our method on a strong baseline.", "The paraphrasing strategy is independent of the neural-based QG model, and we can use any advanced methods to generate paraphrases.", "In our work, we employ a simple back-translation method to automatically create paraphrases of both sentences and questions.", "Specially, we use a mature translation tool Google Translate , which is a free and accessible online service.", "We translate an original text into German and then back to English to get its paraphrase.", "As a result, we obtain s (cid:48) which is the paraphrase of the input sentence s , and q (cid:48) which is the paraphrase of the golden reference question q .", "In the following section, we will illustrate the way to use ( s , s (cid:48) ) as a training pair of the auxiliary PG task, and adopt ( q , q (cid:48) ) as multi-references to conduct the diversity training module.", "The way we expand paraphrases does not need extra PG datasets.", "Besides, it guarantees the PG and QG tasks share the same input s , so we can optimize their sharing encoder simultaneously and train the model end-to-end.", "To assess the quality of expanded paraphrases, we randomly select 100 paraphrases respectively from sentences and questions, and ask two annotators to judge the Synonym conversions and Syntactic transitions, as well as the paraphrase F luency .", "As shown in Table 2, 74% sentence paraphrases and 58% question paraphrases have synonym conversions with source sequences, 7% and 44% of them have sentence pattern transitions.", "Besides, 67% of paraphrases have no grammar errors.", "Two real expansion examples are shown in Table", "3. 
It indicates that our expansion method introduces rich and high-quality paraphrasing knowledge into the original data.", "The multi-task learning mechanism with PG aims at introducing paraphrase knowledge into QG.", "In general, we employ a parallel architecture to combine PG and QG, where QG is the main task and PG serves as an auxiliary task.", "(Example input sentence: the current basilica of the sacred heart is located on the spot of fr.)", "To make our model easy to implement and trainable end-to-end, we conduct the multi-task learning in a simultaneous mode.", "In detail, feature-rich embeddings will first be encoded by the task-share encoder and then be fed into the PG and QG decoders respectively.", "The PG and QG decoders both have two layers; they are identical in structure but differ in parameters.", "where y^pg_t is the generated word of PG at time step t and s′_t is the t-th word in the expanded sentence paraphrase s′.", "To enhance the impact of the auxiliary PG task so that the paraphrase knowledge can be absorbed by the question generation process more deeply, we employ a soft sharing strategy between the first layers of the PG and QG decoders.", "The soft sharing strategy loosely couples parameters and encourages them to stay close to each other in representation space.", "Following the work of Guo et al.
(2018b), we minimize the l2 distance between the shared layers of the QG and PG decoders as a regularization.", "The soft sharing loss is defined as: L_sf = Σ_{d∈D} ||θ_d − φ_d||_2 (8), where D is the set of shared decoder parameters, and θ and φ respectively represent the parameters of the main QG task and the auxiliary PG task.", "For the QG task, a general training goal is to fit the decoded results with the reference questions.", "To provide more generation patterns, we adjust the training target from one golden reference question to several reference questions by using expanded paraphrase resources.", "We adopt a min-loss function among several references, and the loss function defined by Equation 3 can be rewritten as: L_qg = min_{q*∈Q} ( −(1/T_qg) Σ_{t=1}^{T_qg} log P(y^qg_t = q*_t) ) (9), where Q is the set of the gold reference question and the expanded question paraphrase {q, q′}.", "Each generated question will separately calculate the negative log-likelihood of its multiple references, and the final loss is the minimum of them.", "Under this training process, our model can learn multiple question expressions which are not in the original training dataset, so that the generation can be more diverse.", "Besides, inspired by the work of Kovaleva et al.
(2018), we have tried several loss strategies, such as minimum loss, maximum loss, and weighted loss to guide the diversity training.", "Among them, the minimum is the best performing strategy.", "By employing minimum strategy, the QG decoder fits the generated question with the most similar sequence among gold reference question and question paraphrase.", "Combining the above modules, we get our hybrid model.", "During training, the feature-enriched inputs are first encoded by the task-share encoder.", "Then the semantic hidden states are fed into PG decoder and QG decoder, respectively.", "For PG decoder, it has one fitting target (expanded sentence paraphrase).", "For QG decoder, it calculates the cross-entropy loss with both the gold reference question and the question paraphrase and regards the minimum loss of them as the QG loss.", "The auxiliary PG task and diversity training strategy simultaneously optimize the question generation process.", "The combined training loss function can be defined as: L total = L lqg + L pg + L sf (10) where and are both hyper-parameters.", "We will describe the chosen of these hyper-parameters later.", "Our experiments are based on two reading comprehension datasets: SQuAD (2016) and MARCO (2016).", "On SQuAD, since there are two different splits that are most often used, we conduct experiments on both two splits on sentence-level.", "For Zhou Split Du Split Previous Works (conference-year) B1 B2 B3 B4 MET B1 B2 B3 B4 MET s2s (ACL-2017) ---43.09 25.96 17.50 12.28 16.62 NQG++ (NLPCC-2017) --13.29 ---M2S+cp (NAACL-2018) --13.91 --13.98 18.77 A-P-Hybrid (EMNLP-2018) 43.02 28.14 20.51 15.64 ---s2sa-at-mp-gsa (EMNLP-2018) 44.51 29.07 21.06 15.82 19.67 43.47 28.23 20.40 15.32 19.29 ASs2s (AAAI-2019) --16.17 --16.20 19.92 LM enhanced QG (EMNLP-2019) 42.80 28.43 21.08 16.23 ---Q-type (EMNLP-2019) 43.11 29.13 21.29 16.31 ---Sent-Relation (EMNLP-2019) 44.40 29.48 21.54 16.37 20.68 45.66 30.21 21.82 16.27 20.36 Our Models baseline-1 +Data 
augmentation 38.16 24.35 17.60 13.28 17.73 38.91 24.80 17.83 13.36 17.97 baseline-1 41.06 26.63 19.65 14.71 19.12 41.04 27.05 19.92 15.21 19.19 baseline-1 +Min 42.03 27.61 20.27 15.48 19.61 42.97 28.52 21.02 16.06 19.93 baseline-1 + PG 42.76 28.26 20.89 16.09 20.11 43.68 28.99 21.39 16.37 20.23 baseline-1 +Min+PG (hybrid model-1) 43.61 28.67 21.09 16.23 20.29 42.66 28.68 21.39 16.55 20.44 baseline-2 42.39 28.11 20.86 16.13 19.95 42.76 28.80 21.47 16.57 20.38 baseline-2 +Min 43.38 28.92 21.49 16.61 20.40 42.94 29.06 21.73 16.88 20.60 baseline-2 +PG 43.56 28.98 21.57 16.74 20.58 43.73 29.53 22.06 17.08 20.78 baseline-2 +Min+PG (hybrid model-2) 43.63 29.21 21.79 16.93 20.58 44.32 29.88 22.28 17.21 20.96 Table 4: Experimental results of our models on SQuAD comparing with previous works and different baselines.", "Du Split (Du et al., 2017), we use the same settings with Li et al. (2019a) and there are 74689, 10427 and 11609 sentence-question-answer triples for training, validation and test respectively.", "For Zhou Split (Zhou et al., 2017), we use the data shared by Zhou et al. 
(2017) and there are 86,635, 8,965 and 8,964 triples correspondingly.", "On MARCO, there are 74,097, 4,539 and 4,539 sentence-answer-question triples for train, development and test sets, respectively (Sun et al., 2018).", "We expand the datasets using the paraphrase expansion approach described in Section 3.2.", "After that, one sample of the expanded dataset is in the form of ((sentence, sentence paraphrase), (question, question paraphrase), answer).", "For fair comparison, we report the following recent works on sentence-level Du and Zhou Splits: s2s (Du et al., 2017): an attention-based seq2seq model.", "NQG++ (Zhou et al., 2017): a feature-enriched Seq2Seq model.", "M2S+cp (Song et al., 2018): uses different matching strategies to explicitly model the information between answer and context.", "A-P-Hybrid (Sun et al., 2018): generates an accurate interrogative word and focuses on important context words.", "s2s-a-ct-mp-gsa (Zhao et al., 2018b): employs a gated attention encoder and a maxout pointer decoder to deal with long text inputs.", "ASs2s (Kim et al., 2018): proposes an answer-separated Seq2Seq model by replacing the answer in the input sequence with some specific words.", "LM enhanced QG (Zhou et al., 2019a): treats language modeling as a low-level task to provide semantic representations for the high-level QG.", "Q-type (Zhou et al., 2019b): multi-task learning framework with question word prediction and QG.", "Sent-Relation (Li et al., 2019a): extracts answer-relevant relations in sentence and encodes both sentence and relations to capture answer-focused representations.", "We evaluate the performance of our models using BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014), which are widely used in previous works for QG.", "We set the vocabulary as the most frequent 20,000 words.", "We use 300-dimensional GloVe word vectors as initialization of the word embeddings.", "Answer position and token lexical features are randomly initialized 
to 32-dimensional vectors through truncated normal distribution.", "The maximum lengths of input sequence and output sequence are 100 and 40, respectively.", "The hidden size of the encoder, decoder, and language modeling LSTMs are all 512.", "We use Adagrad optimization with learning rate 0.15 for training.", "The batch size is 32 and the beam search decoding size is 12.", "To alleviate the volatility of the training procedure, we get the average model of the 5 checkpoints closest to the best-trained model on development set.", "The experimental results on two splits of SQuAD are shown in Table", "4. In terms of BLEU-4 that is often regarded as the main evaluation metric for text generation, our hybrid model-2 yields the best results on both splits, with 16.93 on Zhou Split and 17.21 on Du Split.", "We achieve state-of-the-art results on Du Split for sentence-level QG.", "Especially for baseline-1, the performance gains of our model are more obvious.", "Our hybrid model-1 outperforms baseline-1 by 1.52 points on Zhou Split and 1.34 points on Du Split, which are large margins for this challenging task.", "Even based on this weak baseline, our method also achieves the state-of-the-art, 16.55 BLEU-4 score on Du Split for sentence-level QG.", "The previous work of CGC-QG (Liu et al., 2019) obtains a 17.55 BLEU-4 score on Zhou Split.", "But their model relies on many heuristic rules and ad-hoc strategies.", "In their full model with clue prediction, they do graph convolutional network (GCN) operations on dependency trees, while our model does not use any hand-crafted rules and is lightweight without graphs and trees.", "We also conduct experiments on MARCO, and the results are shown in Table", "5. 
Our hybrid models obtain obvious improvements over two baselines, achieving a state-of-the-art BLEU-4 score of 21.61.", "Specifically, SQuAD and MARCO are built in different ways.", "The questions in SQuAD are generated by crowd-workers, while questions in MARCO are sampled from real user queries.", "The experimental results on two datasets validate the generalization and robustness of our models.", "Effect of Multi-task Learning with PG Task As shown in Table 4, the auxiliary PG task brings consistent improvements over both baseline models.", "On Zhou Split, it increases baseline-1 by 1.38 points and baseline-2 by 0.61 respectively.", "On Du Split, it increases baseline-1 by 1.16 points and baseline-2 by 0.51 points respectively.", "The Previous Works BLEU-4 s2s(Du et al., 2017) 10.46 s2sa-at-mp-gsa(Zhao et al., 2018b) 16.02 A-P-Hybrid(Sun et al., 2018) 19.45 LM enhanced QG(Zhou et al., 2019a) 20.88 Q-type(Zhou et al., 2019b) 21.59 Our Models baseline-1 20.13 hybrid model-1 21.15 baseline-2 20.79 hybrid model-2 21.61 Table 5: Main results of our models on MARCO.", "reason is that the PG task provides abundant paraphrase knowledge into the model and allows the task-share encoder to learn more paraphrasing representations.", "Effect of Diversity Training with Min-loss Function From the results in Table 4, we can see the min-loss strategy improves performances over both baseline models.", "On Zhou Split, we get a 0.77 improvement over baseline-1 and 0.48 improvement over baseline-2, respectively.", "On Du Split, we get similar improvements.", "Effect of Data Augmentation A straightforward way to leverage paraphrase knowledge is data augmentation.", "To test whether it works by simply adding paraphrase data as external training data, we also conduct an experiment based on the question paraphrase resource.", "We add the ( s , q (cid:48) ) pairs into the training dataset, where s represents the input sentence and q (cid:48) denotes the paraphrase of the golden 
reference.", "Under this setting, we double the training samples.", "Unfortunately, as shown in Table 4, the baseline-1 model yields much lower BLEU-4 scores on both Zhou Split (13.28) and Du Split (13.36) with such data augmentation.", "The main reason is that for the same input sentence, there are two different training targets (q and q′), which prevents the training process from converging easily.", "To investigate whether the paraphrase knowledge introduces more diverse expressions, we conduct evaluations on the distinct metric (Li et al., 2016), which is calculated as the number of distinct unigrams (distinct-1) and bigrams (distinct-2) divided by the total number of generated words.", "The experimental results are shown in Table 6.", "It shows that our hybrid models obtain obvious gains over the baseline models on both the distinct-1 and distinct-2 metrics, validating that our models really generate more diverse questions with the help of paraphrase knowledge.", "We also verify the effectiveness of the soft sharing mechanism by removing it from the full hybrid models.", "The results are displayed in Table 7.", "After removing the soft sharing mechanism, both of our models have varying degrees of performance degradation.", "It demonstrates that the soft sharing strategy enhances the influence of paraphrase knowledge on the QG decoder.", "The soft sharing coefficient hyper-parameter is 1×10^-6, intuitively chosen by balancing the cross-entropy and regularization losses according to Guo et al. (2018b).", "The other hyper-parameter, which controls the balance of QG and PG, is tuned by grid search.", "We set it to different values to explore the best proportion of the two tasks.", "The experimental results for different values are shown in Figure 3.", "
Consequently, we set to 0.3 for our hybrid model.", "To further assess the quality of generated questions, we perform human evaluation to compare our hybrid model-2 with the strong baseline of language modeling enhanced QG.", "We randomly select 100 samples from SQuAD (Zhou Split) and ask three annotators to score these generated questions according to three aspects: Fluency: which measures whether a question is grammatical and fluent; Relevancy: which measures whether the question is relevant to the input context; Answerability: which indicates whether the question can be answered by the given answer.", "The rating score is set to [0, 2].", "The evaluation results are shown in Table", "8. The Spearman correlation coefficients between annotators are high, which guarantees the validity of human evaluation.", "Our hybrid model receives higher scores on all three metrics, indicating that our generated questions have higher quality in different aspects.", "We list two examples of generated questions in Table", "9. By introducing paraphrase knowledge into generation, the generated questions well capture the paraphrase transitions between contexts and references.", "Obviously, the questions generated by our hybrid model are more grammatical and coherent.", "To further test the generalization of our proposed methods, we use other paraphrasing methods to construct the paraphrase dataset.", "PPDB : for each non-stop word and phrase, looking it up in PPDB (2013) and replacing it with its synonyms.", "NMT : another back-translation method using a pre-trained Transformer (2017) model.", "Mixed : expanding input sentences with Google Trans and expanding reference questions with PPDB.", "The results are shown in Table", "10. 
Our hybrid model-2 still achieves excellent performances on both BLEU and METEOR.", "Sentence: his lab was torn down in 1904, and its contents were sold two years later to satisfy a debt.", "newcastle has a horse racing course at gosforth park.", "Answer: gosforth park Reference Question: where is newcastle 's horse racing course located ?", "Baseline Model-2: where does newcastle have a horse racing course?", "Hybrid Model-2: where is newcastle 's horse racing course located ?", "From the results, we can observe that the Mixed paraphrase method even obtains better results than the mature Google Translate.", "It proves that our proposed architecture is effective across different paraphrasing methods and has potential for improvement.", "In this paper, we propose a two-hand hybrid model leveraging paraphrase knowledge for QG.", "The experimental results of the independent modules and hybrid models prove that our models are effective and transferable.", "Besides, human evaluation results demonstrate that the paraphrase knowledge helps our model ask more human-like questions of high quality.", "In the future, we will explore more diverse and advanced paraphrase expansion methods for both sentence- and paragraph-level QG.", "Moreover, we will apply our methods to other similar tasks, such as sentence simplification.", "We thank Weikang Li and Minghua Zhang for their valuable comments and suggestions.", "This work is supported by the National Natural Science Foundation of China (61773026) and the Key Project of Natural Science Foundation of China (61936012)." ]
[ "abstain", "objective", "method", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "method", "objective", "method", "abstain", "result", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "method", "method", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", 
"result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "objective", "objective", "method", "other", "other" ]
[ "Risk prediction is an essential task in financial markets.", "Merger and Acquisition (M&A) calls provide key insights into the claims made by company executives about the restructuring of financial firms.", "Extracting vocal and textual cues from M&A calls can help model the risk associated with such financial activities.", "To aid the analysis of M&A calls, we curate a dataset of conference call transcripts and their corresponding audio recordings for the time period from 2016 to 2020.", "We introduce M3ANet, a baseline architecture that takes advantage of the multimodal multi-speaker input to forecast the financial risk associated with M&A calls.", "Empirical results prove that the task is challenging, with the proposed architecture performing marginally better than strong BERT-based baselines.", "We release the M3A dataset and benchmark models to motivate future research on this challenging problem domain.", "Mergers and Acquisitions (M&A) conference calls (https://www.investopedia.com/mergers-and-acquisitions) are events preceding financial transactions involving two or more entities, such that either one of the participant companies takes over the other(s) and establishes itself as the owner (termed an acquisition), or one company combines with another to become a joint entity (termed a merger).", "In these M&A conference calls, the participating companies' management makes a presentation to the call participants, such as market analysts, media personnel, and other stakeholders, explaining the rationale for the deal and possible roadblocks to deal completion (Dasgupta et al., 2020).", "Following the presentation segment, there is a Q&A segment in which the call participants ask questions to which the management responds.", "Figure 1: A schematic of our proposed approach (M3A) that leverages three types of input modalities: text utterances from the call transcripts, audio clips, and speaker-specific input, for
financial modeling tasks.", "Given the important information that M&As provide, academic research, the financial press, and other media pay them a great deal of attention.", "One of the principal aspects of these discussions lies in how the deals may affect the company's valuation (Moeller et al., 2003; Fraunhoffer et al., 2018) and future growth.", "A significant focus in the financial and economic literature has been on understanding whether M&As create or destroy value.", "Consequently, shareholders critically analyze the deals to estimate the potential stock price and stock price volatility following the M&A conference call.", "Identifying the gap in the natural language processing (NLP) literature on the lack of resources to study M&A conference calls with their text transcripts and audio recordings, we take the first step in multimodal financial modeling in the M&A space.", "Such data can allow researchers to study M&A calls further, especially with the rich multimodal data.", "It will enable studies that focus not only on the words spoken in the call but also on the manner in which they were spoken, a relatively unexplored field in financial forecasting, as shown in Figure 1.", "A salient aspect of conference calls is that, unlike text reports, the company's management interacts with external stakeholders, who ask questions.", "(Figure 2: M&A calls have a Q&A session where financial stakeholders can ask questions to the company executives.) This", "interaction presents an opportunity to analyze not just the management's claims but also the way they express them.", "In Figure 2, we highlight the various components in a short Q&A interaction.", "Often, both the transcript and the audio of the calls are available to the public.", "Vocal cues play a critical role in verbal communication as they can support or discredit the verbal message being spoken (Jiang and Pell, 2017).", "For example, consider if the CEO of the acquiring company exhibits confidence in the 
statement 'we are confident that this acquisition will bring us profits' but displays nervousness while justifying technical details of the deal: we may infer a contradiction in the claims of a successful M&A.", "Vocal cues have proven to be indicators of states such as deceit and nervousness (Belin et al., 2017; Sporer and Schwandt, 2006).", "Past research (Qin and Yang, 2019; Sawhney et al., 2020c) shows that the addition of vocal cues has helped with financial prediction tasks and enriches the learned representations.", "Our contributions can be summarized as: We curate a public dataset M3A (Multimodal Multi-Speaker Merger & Acquisition Call Financial Forecasting Dataset; the source code, processed features, and details on acquiring raw data are available at https://github.com/midas-research/m3a-acl) that consists of 816 M&A conference calls spanning over 545 hours between 2016 and 2020, with their transcripts and audio recordings, segmented by utterances and aligned with the audio.", "We accompany the dataset with neural baseline architectures that use the multimodal multi-speaker input to predict stock volatility and price movement.", "To the best of our knowledge, no such M&A conference call dataset exists in academia, and our proposed methodology, M3ANet, is the first deep learning approach for financial predictions on M&A conference calls.", "M&A Conference Calls: Financial reports and conference calls have been shown to correlate with the stock market and improve financial predictions (Bowen et al., 2001; Kogan et al., 2009).", "Studies have also been carried out specifically for M&A calls, showing their effect on the market (Dasgupta et al., 2020; Hu et al., 2018).", "However, there exists a gap in leveraging neural predictive modeling of the verbal and vocal cues in M&A calls for financial forecasting.", "Financial Forecasting: Research has shown historical pricing data to be useful for financial risk modeling 
(Kristjanpoller et al., 2014; Zheng et al., 2019; Dumas et al., 2009).", "This literature also considers volatility an indicator of uncertainty, which helps in making investment decisions (Heston, 1993; Johnson and Shanno, 1987; Scott, 1987).", "Previous work often uses numerical features (Liu and Chen, 2019; Nikou et al., 2019) in approaches like neural networks (Kim et al., 2019; Luo et al., 2017), graph neural networks (Sawhney et al., 2020b), and time-series models (Bollerslev, 1986; Engle, 1981).", "On the other hand, we are interested in analyzing multimodal data like text and audio, which can hold completely different information for predictive models.", "Natural Language Processing and Finance: For any system using human interactions to determine financial risk or stock movements, it is necessary to model the relationships between words to determine the speaker's sentiment.", "Advances in NLP have been utilized in many approaches, showing that financial information significantly improves performance in forecasting tasks like volatility and stock price prediction (Wang et al., 2013; Ding et al., 2015; Mittermayer and Knolmayer, 2007).", "Research has also shown that social media affects the stock market (Bollen et al., 2010; Oliveira et al., 2017; Sawhney et al., 2020a).", "Machine learning methods using simple bag-of-words features to represent the financial documents used in previous research (Kogan et al., 2009; Rekabsaz et al., 2017) largely ignore the inter-dependencies between the sentences.", "To fill the gap, recent approaches have moved towards newer models such as transformers (Yang et al., 2020) and reinforcement learning (Sawhney et al., 2021b) over natural language data for financial forecasting.", "Research shows that psychological and behavioral elements are often indicators of stock price movement (Malkiel, 2003).", "Vocal cues have been proven effective in portraying these elements (Wurm et al., 2010; Hobson et al., 2011; Jiang and Pell, 
2017).", "Thus, it is no surprise that multimodal architectures that use these cues for financial predictions have seen significant performance improvements (Yang et al., 2020; Sawhney et al., 2020d).", "Speaker Context Encoding: Past research (Zhang et al., 2019; Li et al., 2020) in fields like emotion recognition has seen improved performance on prediction tasks with the addition of speaker context.", "Models trained on spoken-text data benefit when the input is enriched with information about who spoke what.", "Consider an M&A call C ∈ { C_1, C_2, . . . , C_M }, which comprises multimodal components: C = [ t ; a ].", "Here, t is the sequence of textual utterances (sentences) 3 of the call transcript and can be represented as [ t_1, t_2, ..., t_N ], where t_i is the i-th utterance of the call and N is the maximum number of utterances in any call.", "Similarly, a is the sequence of corresponding call audios for the textual utterances (sentences) and can be represented as [ a_1, a_2, ..., a_N ], where a_i is the i-th call audio.", "The call's utterances are annotated with speaker information s = [ s_1, s_2, ..., s_N ], where s_i is the speaker of the i-th utterance and where each speaker in the call may have spoken one or more utterances.", "Each M&A conference call may have two or more participating companies, with at least one publicly traded company with publicly available stock price information.", "We limit the scope of the problem by forecasting for just one of the participating companies: the company with the larger market valuation (in the case of a merger) or the acquiring company (in the case of an acquisition).", "We now describe the two prediction tasks on which we train M3ANet.", "Measuring stock volatility: Following Kogan et al. (2009), we formulate stock volatility prediction as a regression problem.", "For a given stock with a close price of p_k on the trading day k, we calculate the average log volatility as the natural log of the 
standard deviation of return prices r in a window of τ days as: v_[0,τ] = ln( sqrt( Σ_{k=1..τ} ( r_k − r̄ )^2 / τ ) ) (1), where r_k = ( p_k − p_{k−1} ) / p_{k−1} is the return price on day k for a given stock, and r̄ is the average return price over a period of τ days.", "(Footnote 3) We restrict the scope of segmentation to the sentence level as opposed to a more granular level such as the word level, owing to the higher complexity and noise involved in word-level segmentation for long M&A calls.", "Formalizing price movement prediction: Following Xu and Cohen (2018), we define price movement y_[d,d+τ] over a period of τ days as a binary classification task.", "For a given stock, we employ its close price, which can either rise or fall on day d+τ compared to a previous day d, to formulate the classification task as: y_[d,d+τ] = 1 if p_{d+τ} > p_d, and 0 if p_{d+τ} ≤ p_d (2). Given an acquisition conference call C, our learning objective is to predict the average log volatility v_[0,τ] and the price movement y_[d,d+τ] using the conference call data C = [ t ; a ].", "We curate our dataset, M3A, by acquiring audio recordings and text transcripts from the Bloomberg Terminal.", "Since the conference calls were reliably available from 2016, we filter and list all M&A calls between 2016 and 2020.", "To limit the scope, we ensured the calls were in English, had their domicile as the U.S.A., and had 'merger' or 'acquisition' in their title.", "The Bloomberg Terminal often only provides the stock ticker for the acquiring company (in the case of an acquisition) and the company with the more prominent market valuation (in the case of a merger).", "To maintain uniformity, we decide to use only the given stock information.", "We pull the adjusted closing price data from Yahoo Finance.", "The dataset comprises 816 conference calls.", "The mean number of speakers across the calls is 10.68 ± 4.17, with a maximum of 31 speakers.", "The mean number of utterances across the calls is 100.54 ± 38.32.", 
"There is a maximum of 284 utterances in a call.", "(Footnote 4: https://bba.bloomberg.net/; Footnote 5: https://in.finance.yahoo.com/) The mean length comes out to be 40.15 ± 15.15 minutes, with a maximum length of 98.15 minutes for the audio clips.", "We provide further statistics in Figure 3.", "Looking at year-wise trends, we see that acquisitions are consistently more frequent than mergers every year.", "Further, we note that merger calls see a decreasing trend in the number of utterances, while acquisition calls have a consistent number of speakers.", "We also note that acquisition conference calls seem to be increasing in length as the years progress.", "We chronologically divide our dataset into a train, validation, and test set in the ratio of 70:10:20, respectively.", "Such a split ensures that future data is not used for forecasting past data.", "Each transcript in the dataset begins with the details of the company with the larger market valuation (in the case of a merger) or the acquiring company (in the case of an acquisition).", "These details include the company's name, stock ticker, and the date of the call.", "The transcript then lists the speakers in the call and their positions in the companies, if any.", "The call contents follow the list of speakers.", "The contents are separated by utterances and are annotated with the utterances' speakers.", "Given our dataset, we have the option to choose between transcript-level, utterance-level, and word-level embeddings.", "We decide to use utterance-level embeddings.", "We select utterances with at least ten words to ensure better parsing of the transcript, and parse the texts to extract all valid utterances.", "Since we are working with audio files, it is essential that we can segment them so that they align with their corresponding utterances in the text transcript.", "To achieve this alignment, we have used the Aeneas library to perform the forced alignment. (Footnote 6: Transcript-level embeddings are too coarse for our task.)", "The Forced Alignment algorithm takes as input a text file divided into fragments and an unfragmented audio file.", "It processes the input to output a synchronization map, which automatically associates a time interval in the audio file to its corresponding text fragment.", "Aeneas uses the Sakoe-Chiba Band Dynamic Time Warping (DTW) (Sakoe and Chiba, 1978) forced alignment algorithm, which has been proven to improve discrimination between words and has superior performance over other conventional algorithms.", "Text Encoding: We compute an utterance's textual encoding as the arithmetic mean of all its word vectors.", "BERT is well known as an effective pre-trained language model for extracting word embeddings (Biswas et al., 2020) for a variety of language modeling tasks.", "We use Uncased Base BERT (Devlin et al., 2019) to extract the word embeddings.", "For each call, we represent the text utterances as [ t_1, t_2, . . . , t_N ].", "As seen from Figure 4, we embed each text utterance t_i to get its corresponding 768-dimensional text encoding g_i using BERT such that g_i = BERT ( t_i ) for each i ∈ [1, N].", "Audio Encoding: We use the OpenSMILE library to extract the audio features at a sampling rate of 10ms and choose the set of 62 GeMAPS features described in Eyben et al. (2016).", "This set includes features like pitch, jitter, loudness, etc., which have proven to be effective in audio analysis tasks (Chao et al., 2015).", "For each call, we represent the audio clips of the utterances as [ a_1, a_2, . . .
, a_N ].", "We embed each audio utterance a_i to its corresponding 62-dimensional encoding h_i using OpenSMILE such that h_i = OpenSMILE ( a_i ) for each i ∈ [1, N].", "Motivation for Speaker Information Infusion: The audio encodings help decipher the vocal cues in the text transcript's context to support or discredit the speaker's claims.", "However, it is critical for the system to recognize the importance of the utterance's speaker to gauge its impact on financial predictions.", "This requires the information about the speaker of each utterance to be added to the input.", "Prior research (Zhang et al., 2019; Li et al., 2020) shows that the addition of speaker context helps improve prediction performance on tasks involving datasets with spoken texts.", "M&A calls have utterances spoken by the company's management (the decision-making force of the company), by analysts (who want to gauge the risk in the company's decisions), or even just the operator (often an impartial person).", "Capturing this speaker context will allow us to decide how much impact a specific utterance can have on a company's stock price.", "Thus, we extract the speaker information for each utterance.", "We parse the list of speakers from the transcripts and assign an ID to each of the speakers.", "The IDs start from 1 and are assigned incrementally to each speaker in the order in which they are listed.", "The operator of the call is assigned the ID 0.", "We then annotate each of the utterances based on who spoke it.", "Finally, we use one-hot encoding to represent the speaker encoding s of each utterance in the call.", "The Transformer (Vaswani et al., 2017) uses multi-head attention and position embeddings to learn the relationship between different utterances.", "The multimodal input requires the model to learn the inter-dependencies between the audio and the text features.", "M3ANet can then use the audio cues to affirm or discredit the spoken message and make an informed prediction.", "The 
idea behind M3ANet is to use attention to weigh the importance of each modality at different timestamps.", "We then augment the data with the speaker encoding and allow the Transformer to extract the multimodal interdependencies for performing the prediction tasks.", "Attention-Fusion: Before we can fuse the inputs, we need to linearly transform the text embeddings to ensure the multimodal embeddings' sizes are the same.", "We then extract the attention weights to calculate the attended inputs, similarly to Hori et al. (2017).", "These attention weights describe the importance of a specific modality relative to the other modality.", "We multiply the text and audio features by their attention weights W_T and W_A respectively to get the attended input, followed by fusing them.", "The following equations formalize the attention mechanism used: W_T = softmax ( g W_wt + b_wt ) (3), W_A = softmax ( h W_wa + b_wa ) (4), W̄_T = W_T / ( W_T + W_A ), W̄_A = W_A / ( W_T + W_A ) (5), X_fused = g W̄_T + h W̄_A (6), where W_wt and b_wt represent the text attention layer, W_wa and b_wa represent the audio attention layer, and + represents addition.", "Sentence-Level Transformer: To model the sequence of textual and audio embeddings of the M&A calls, we augment the fused multimodal embeddings X_fused with position embeddings pos by addition and the speaker information by concatenation (represented by ⊕).", "pos has the same dimensions as X_fused; pos_{j,ind} represents the value of the positional embedding for the j-th utterance at index ind.", "The augmentation is summarised as follows: pos_{j,2l} = sin ( j / 10^{8l/d} ), pos_{j,2l+1} = cos ( j / 10^{8l/d} ) (7), X_final = ( X_fused + pos ) ⊕ s (8).", "The Transformer block uses the augmented feature set for further processing, following which the intermediate tensors are passed through two consecutive dense layers to output the task prediction as follows: O_1 = ReLU ( W_l1 I_1 + b_l1 ) (9), y = σ ( W_l2 O_1 + b_l2 ) (10), where W_l1 and b_l1 represent 
the first linear layer, W_l2 and b_l2 represent the second linear layer, I_1 and O_1 represent the inputs to the first and second dense layers after being passed through the sentence transformer, while σ represents the final activation function and y represents the final prediction from the activation corresponding to the task.", "We use ReLU for the final prediction in the volatility prediction task and sigmoid for the price prediction task.", "We then use Mean Squared Error (MSE) and Binary Cross-Entropy (BCE) losses to train the output for volatility prediction and stock price movement prediction, respectively.", "We compare M3ANet against modern baselines across modalities for both tasks.", "We employ GloVe (Pennington et al., 2014), FinBERT (Araci, 2019) and RoBERTa (Liu et al., 2019) to embed the text and choose an LSTM + Dense layer architecture as a benchmark for both volatility and price movement prediction.", "We also use all three (text, audio, and multimodal) variants of the Multimodal Deep Regression Model (MDRM) (Qin and Yang, 2019) as baselines.", "We tune M3ANet's hyperparameters using grid search.", "We summarize the range of hyperparameters tuned: size of the transformer's feed-forward layer and size of the linear layers ∈ { 16, 32, 64 }, dropout ∈ { 0.0, 0.1, 0.25, 0.5 }, batch size b ∈ { 32, 64, 128 } and learning rate e ∈ { 0.1, 0.01, 0.001, 0.0001 }.", "The experiments result in the following optimal choices of the hyperparameters: b = 64, e = 0.001, feed-forward network size (Volatility) = 16, hidden layer size (Volatility) = 16, dropout (Volatility) = 0.1; feed-forward network size (Movement) = 64, hidden layer size (Movement) = 32, dropout (Movement) = 0.0.", "We implement all methods with Keras and Google Colab, using ReLU as our hidden-layer activation function, and optimize using Adam.", "We choose the highest-performing model during the training phase on our validation set and chosen evaluation metrics as our 
best model.", "We zero-pad the calls that have fewer than the maximum number of utterances/speakers for efficient batching.", "We experiment with trading periods τ ∈ { 3, 7, 15 }. (Footnote 9: https://keras.io/; Footnote 10: https://research.google.com/colaboratory/) Table 1: Mean τ-day volatility MSE and price movement prediction results (mean and stdev. of 5 runs for each approach). Columns per model: MSE_3, MSE_7, MSE_15 | F1_3, F1_7, F1_15 | MCC_3, MCC_7, MCC_15. RoBERTa + LSTM: 0.78 (0.009), 0.58 (0.009), 0.47 (0.006) | 0.57, 0.58, 0.49 | 0.19, 0.22, 0.10. GloVe + LSTM: 0.80 (0.005), 0.60 (0.004), 0.48 (0.005) | 0.55, 0.56, 0.42 | 0.19, 0.22, 0.02. FinBERT + LSTM: 0.78 (0.008), 0.60 (0.004), 0.47 (0.005) | 0.58, 0.58, 0.48 | 0.20, 0.21, 0.06. MDRM (T): 0.79 (0.003), 0.59 (0.003), 0.47 (0.002) | 0.58, 0.56, 0.48 | 0.20, 0.19, 0.12. MDRM (A): 0.79 (0.004), 0.60 (0.002), 0.47 (0.003) | 0.24, 0.36, 0.12 | 0.02, 0.17, 0.00. MDRM (T+A): 0.78 (0.005), 0.58 (0.003), 0.46 (0.002) | 0.59, 0.58, 0.46 | 0.19, 0.19, 0.11. M3ANet (Ours): 0.77 (0.018)*, 0.57 (0.016)*, 0.46 (0.011)* | 0.59, 0.59, 0.50* | 0.19, 0.19, 0.13.", "Similar to prior work (Sawhney et al., 2020d; Theil et al., 2019; Yang et al., 2020), we evaluate predicted volatility using the mean squared error (MSE) for each hold period, n ∈ { 3, 7, 15 }.", "For the classification task, we report the F1 score and the Matthews Correlation Coefficient (MCC) (Matthews, 1975).", "We use MCC because, unlike the F1 score, MCC avoids bias due to any data skew that may be present, as it does not depend on the choice of the positive class.", "For a given confusion matrix [ tp, fn ; fp, tn ]: MCC = ( tp · tn − fp · fn ) / sqrt( ( tp + fp )( tp + fn )( tn + fp )( tn + fn ) ) (11). 7 Results and Analysis. 7.1 Performance Comparison: As shown in Table 1, M3ANet achieves the best performance for both the volatility prediction and the price prediction tasks.", "We observe improvements using M3ANet (Table 2), which leverages the text and audio modalities along with speaker information.", "This improvement can be attributed to the use of attention to emphasize the importance of each modality throughout the series of utterances.", "It can also be observed that the improvements our architecture yields are not large in magnitude.", "We attribute this to the task's inherent difficulty.", "Further research into more sophisticated models may yield greater performance improvements on M3A.", "From Table 1 and Table 2, we see that in both the MDRM and Transformer models, the multimodal models performed much better than their unimodal counterparts.", "This performance improvement follows from previous research (Qin and Yang, 2019) with respect to volatility prediction.", "Similar observations validate our hypothesis that audio cues provide additional information that helps make better predictions.", "It is also apparent from Table 2 that adding speaker context consistently improves the prediction results.", "Thus, we infer that speaker information does play an essential part in 
forecasting and adds to the data's richness.", "We experiment with fusion by concatenation and fusion by attention for the Transformer and find that the latter performs better in most cases (Table 2).", "We believe this happens because simple fusion techniques cannot produce features that effectively capture the individual modalities' importance.", "However, attention fusion uses weights for both the modalities, learned by the architecture, to determine the importance of each modality with respect to its counterpart.", "Using these weights to perform a weighted addition gives a much better representation of both the modalities and their particular importance in a fused vector.", "Figure 5 captions: (a) QA1: The CEO answers a question about the company's competitors. Sentence 2: The CEO invites questions. (b) QA2: The analyst has a spike in their mean audio pitch while the CEO's mean audio pitch is stable. (c) QA3: The mean audio pitch of the audio clips.", "Table 3: Trained on Acquisitions, tested on Acquisitions Only: MSE_3 = 0.65, F1_3 = 0.66, MCC_3 = 0.12; tested on Mergers Only: MSE_3 = 1.47, F1_3 = 0.56, MCC_3 = 0.015. Trained on Mergers, tested on Acquisitions Only: MSE_3 = 0.85, F1_3 = 0.28, MCC_3 = 0.03; tested on Mergers Only: MSE_3 = 1.01, F1_3 = 0.47, MCC_3 = 0.20.", "As observed in previous works (Sawhney et al., 2020d) using earnings calls, Figure 6 shows that short-term stock volatility prediction is more complex, possibly due to the erratic price fluctuations after an M&A call.", "We hypothesize that these price fluctuations settle as more time elapses, similar to the phenomenon of PEAD (Post Earnings Announcement Drift) (Bernard and Thomas, 1989; Bhushan, 1994; Sadka, 2006).", "This saturation in performance improvement can be attributed to the dilution of cues extracted from the calls as we 'drift' away from them.", "However, it can be noted that a similar trend may not necessarily hold for price movement prediction.", "We experiment by training M3ANet on Merger and Acquisition calls separately, and testing both models on each set of calls separately.", "From Table 3, it can be observed that 
both models predict the price movement better for their respective sets, as expected.", "It is surprising to see that the models predict the volatility of Acquisition calls relatively better than that of Merger calls.", "This suggests that Acquisition conference calls lead to volatility that is relatively easier to predict, which seems to be an avenue for further research.", "Call 1: Acquisition of Shape Security by F5 Networks Inc. Following the call, F5 Networks Inc suffered a price drop of up to 5.2% within the next month.", "Studying the call's vocal cues, we notice (Figure 5a) that the CEO had sudden peaks in the mean pitch of his audio while answering questions.", "Similar peaks occurred when a participant asked the CEO about their fraud protection compared to their competitors'.", "Prior research on audio analysis (Jiang and Pell, 2017) shows that a high mean pitch may indicate a lack of confidence in the speaker.", "It was later ascertained that F5 had overpaid to acquire Shape Security without proper due diligence on the fraud protection plans sold by Shape Security.", "We observe that M3ANet successfully predicts the decrease in price for all choices of τ, while the unimodal models fail to do so each time.", "Though the text reveals no lack of confidence, the audio cues likely allow the model to make a successful prediction.", "Call 2: Following the merger call, Cleveland-Cliffs Inc saw an increase in their stock price of up to 17.9% in the next five days.", "Similar to the first call, we notice spikes and sudden increases in the audios' mean pitch in Figure 5b.", "However, the difference is that these high-pitch patterns come from an analyst on the call and not from someone holding an influential position in the companies involved.", "M3ANet can differentiate between the speakers and correctly predicts the price going up, unlike the transformer variant without speaker embeddings.", "This shows how the augmentation of the multimodal data with the speaker embedding likely benefits 
the predictive power of M3ANet.", "Call 3: An acquisition by Sterling Construction Company Inc. We now analyze this acquisition as an error case where M3ANet predicts incorrectly.", "We see the text transformer performing well on this example and accurately predicting the increase in the stock price for Sterling Construction Company Inc.", "On the other hand, our multimodal multi-speaker model is unable to do the same.", "Observing the audio cues (Figure 5c), we find a great deal of variance in the mean audio pitch.", "We attribute the erroneous performance to potential overfitting of the model or to noise in the audio cues.", "We present a dataset of M&A calls that can be utilized to predict the financial risk following such calls.", "We also present a strong baseline model using multimodal multi-speaker inputs from the M&A calls to perform financial forecasting.", "M3ANet uses attention-based fusion to leverage the interdependency between the verbal message and the vocal cues.", "Further, the approach uses speaker information to enrich the input data, to determine whether a speaker's vocal cues or verbal messages conflict with others', and accounts for this.", "Experiments on M3A demonstrate the effectiveness of M3ANet.", "We hope M3A can enable more academic progress in the field of financial forecasting.", "Examining a speaker's tone and speech in conference calls is a well-studied task in past literature (Qin and Yang, 2019; Chariri, 2009).", "Our work focuses only on calls for which companies publicly release transcripts and audio recordings.", "The data used in our study corresponds to M&A conference calls of companies in the NASDAQ stock exchange.", "We acknowledge the presence of gender bias in our study, given the imbalance in the gender ratio of speakers of the calls.", "We also acknowledge the demographic bias (Sawhney et al., 2021a) in our study, as the companies are organizations within the public stock market of the United States of America and may not generalize directly to 
non-native speakers." ]
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "other", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "method", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain" ]
[ "We introduce a FEVER-like dataset COVID-Fact of 4,086 claims concerning the COVID-19 pandemic.", "The dataset contains claims, evidence for the claims, and contradictory claims refuted by the evidence.", "Unlike previous approaches, we automatically detect true claims and their source articles and then generate counter-claims using automatic methods rather than employing human annotators.", "Along with our constructed resource, we formally present the task of identifying relevant evidence for the claims and verifying whether the evidence refutes or supports a given claim.", "In addition to scientific claims, our data contains simplified general claims from media sources, making it better suited for detecting general misinformation regarding COVID-19.", "Our experiments indicate that COVID-Fact will provide a challenging testbed for the development of new systems, and our approach will reduce the costs of building domain-specific datasets for detecting misinformation.", "The proliferation of disinformation and misinformation on the web is increasing at a scale that calls for the automation of the slow and labor-intensive manual fact-checking process (Vosoughi et al., 2018).", "The New York Times reports that physicians say they regularly treat people more inclined to believe what they read on Facebook than what a medical professional tells them.", "Disinformation is even more acute around the recent COVID-19 pandemic.", "As a result, there is a need for automated fact-checking tools to assist professional fact-checkers and the public in evaluating the veracity of claims that are propagated online in news articles or social media.", "(Figure 1: A claim from the r/COVID19 subreddit with an academic report as an evidence source linked to it.) Ideally, a fact-checking pipeline will address several tasks: 1) Consider real-world claims, 2) Retrieve relevant documents not bounded to a known", "document collection (e.g., Wikipedia) and which contain information to validate 
the claim, 3) Select evidence sentences that can support or refute the claim, and 4) Predict the claim veracity based on this evidence.", "Recent work on end-to-end fact-checking, including models and datasets, has advanced the field by addressing several tasks in the pipeline, but not all (Thorne et al., 2018, 2019; Hanselowski et al., 2019; Augenstein et al., 2019; Diggelmann et al., 2021; Wadden et al., 2020).", "One line of work that includes FEVER (Thorne et al., 2018, 2019) and SciFact (Wadden et al., 2020) addresses tasks 2, 3 and 4, but assumes a given document collection for task 2 (Wikipedia or CORD-19, respectively) and does not address task 1.", "Moreover, the refuted claims in these datasets are manually generated by asking humans to produce counter-claims for a given claim supported by a source document.", "Another line of work that includes MultiFC (Augenstein et al., 2019) addresses tasks 1, 2 and 4, but not 3.", "It provides real-world claims collected from fact-checking websites, evidence documents, and other meta-information, but it does not provide evidence sentences.", "We propose a novel semi-automatic method to build a fact-checking dataset for COVID-19 (COVID-Fact) with the goal of facilitating all the above tasks.", "We make the dataset and code available for future research at https://github.com/asaakyan/covidfact .", "Table 1 (excerpt). Original Claim: Closed environments facilitate secondary transmission of coronavirus disease 2019. Counter-Claim: Closed environments prevent secondary transmission of coronavirus disease 2019. Gold Document: https://www.medrxiv.org/content/10.1101/2020.02.28.20029272v2 . Gold Evidence: It is plausible that closed environments contribute to secondary transmission of COVID-19 and promote superspreading events.", "Our contributions are as follows: Automatic real-world true claim and trustworthy evidence document selection (Section 2.1).", "We start with the heavily moderated r/COVID19 subreddit, which requires every claim/title 
post to be accompanied by a source evidence document from peer-reviewed research, pre-prints from established servers, or information reported by governments and other reputable agencies.", "Figure 1 shows one such claim with the associated source belonging to the Academic Report flair.", "We propose additional filtering methods to ensure source quality and that claims are well-formed.", "This step provides us with real-world true claims about COVID-19 and evidence documents not bounded to a known document collection.", "Moreover, the language of the claims can be both technical and lay (see Figure 1 and Table 1), unlike SciFact, which is geared only towards scientific claims.", "Automatic generation of counter-claims (Section 2.2).", "An end-to-end fact-checking system requires both true and false claims for training.", "Following FEVER and SciFact, to obtain false claims, we aim to generate counter-claims of the original true claim.", "The advantage is that we obtain evidence documents/sentences for free.", "However, unlike FEVER and SciFact, we propose a novel approach to automatically generate counter-claims from a given claim using two steps: 1) select salient words from the true claim using attention scores obtained from a BERT (Devlin et al., 2019) model fine-tuned on the SciFact dataset, and 2) replace those words with their opposites using Masked Language Model infilling with entailment-based quality control.", "Table 1 shows examples of generated counter-claims.", "Evidence sentence selection using text similarity and crowdsourcing (Section 2.3).", "For evidence sentence selection, we calculate the semantic similarity between the original true claim and the sentences in source evidence documents using sentence-BERT (SBERT) (Reimers and Gurevych, 2019), retrieve the top five sentences, and use crowdsourcing for final validation.", "Table 1 shows examples of evidence sentences that support the true claims and refute the corresponding counter-claims.", 
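The SBERT-based evidence sentence selection described above (cosine similarity between the claim and document sentences, keeping the top five) can be sketched as follows. This is a minimal illustration, not the authors' code: the toy 3-dimensional vectors stand in for real SBERT embeddings, and the helper name `top_k_evidence` is an assumption.

```python
import numpy as np

def top_k_evidence(claim_emb, sent_embs, sentences, k=5):
    """Rank candidate evidence sentences by cosine similarity between
    the claim embedding and each sentence embedding, keeping the top k
    (the paper retrieves the top five for crowd validation)."""
    claim = claim_emb / np.linalg.norm(claim_emb)
    sents = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
    scores = sents @ claim                 # cosine similarities to the claim
    order = np.argsort(-scores)[:k]        # indices of the k most similar sentences
    return [(sentences[i], float(scores[i])) for i in order]

# Toy vectors standing in for SBERT embeddings of a claim and three sentences.
claim_vec = np.array([1.0, 0.0, 0.0])
sent_vecs = np.array([[0.9, 0.1, 0.0],    # very similar to the claim
                      [0.0, 1.0, 0.0],    # unrelated
                      [0.7, 0.0, 0.7]])   # partially similar
sents = ["evidence A", "unrelated B", "evidence C"]
ranked = top_k_evidence(claim_vec, sent_vecs, sents, k=2)
```

In the actual pipeline the embeddings would come from an SBERT encoder applied to the claim and to every sentence of the retrieved source documents, and the top candidates would then be validated by crowd workers.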
"COVID-Fact dataset of 4,086 real-world claims annotated with sentence-level evidence, and a baseline on this task.", "Our results show that models trained on current datasets (FEVER, SciFact) do not perform well on our data (Section 4).", "Moreover, we show the usefulness of our dataset through zero-shot performance on the scientific claim verification task on SciFact (Wadden et al., 2020) data (Section 4).", "The COVID-Fact dataset contains 4,086 real-world claims with the corresponding evidence documents and evidence sentences to support or refute the claims.", "There are 1,296 supported claims and 2,790 automatically generated refuted claims.", "In this section, we present the three main steps to semi-automatically construct this dataset: 1) real-world true claim and trustworthy evidence document selection (Section 2.1), 2) automatic counter-claim generation (Section 2.2), and 3) evidence sentence selection (Section 2.3).", "The subreddit r/COVID19 is a heavily moderated online discussion forum that seeks to facilitate scientific discussion around COVID-19.", "Each post on this subreddit has a title and needs to contain a link to a source, governed by several rules: posts linking to non-scientific sources will be removed; comments making a statement as fact or which include figures or predictions also need to be supported by evidence; allowed sources include peer-reviewed research, pre-prints from established servers, and information reported by governments and other reputable agencies.", "Moreover, the posts are annotated with flairs, i.e., short descriptions of the post's category such as Academic Report, Academic Comment, Preprint, Clinical, Antivirals, Government Agency, Epidemiology, PPE/Mask research, General.", "Having access to such flairs allows us to select claims, for example, related to Vaccine research or Epidemiology.", "This could further help in training models targeting even more specific types of disinformation, like disinformation about 
antivirals or PPE/masks.", "In our study, the titles of the posts are considered candidate claims and the associated sources are considered evidence documents.", "Posts from the r/COVID19 subreddit are extracted via the Pushshift Reddit API.", "Two issues still need to be addressed: 1) ensure that titles are well-formed claims; 2) ensure the highest trustworthiness of the posts and their associated sources.", "Filtering for well-formed claims.", "The definition of a claim can vary depending on domain, register or task (Daxenberger et al., 2017).", "For our work, we consider a claim to be a proposition whose truthfulness can only be determined by additional evidence.", "In addition, a well-formed claim has to be a full sentence.", "Thus, to filter out most of the titles that are not well-formed claims, we employ a simple syntax-based approach to remove questions and consider statements that have at least a main verb.", "This filtering step allows us to remove titles such as B cell memory: understanding COVID-19 and consider titles such as the ones in Figure 1 and Table 1.", "
In addition, we ask three volunteer computer science students with a background in argumentation and linguistics to manually verify that the entire resulting set does indeed contain only well-formed claims.", "While we could have employed more sophisticated claim detection methods, there are no large-scale datasets for COVID-19 to train a claim detection model.", "[Footnote 1: https://github.com/pushshift/api ]", "We therefore did not want to introduce additional noise in our dataset by using a machine learning approach.", "Filtering for trustworthiness.", "To ensure high trustworthiness of posts (and thus our true claims) and the linked sources, we employ several filtering steps.", "First, the posts in this subreddit undergo moderation, and thus we discard titles/claims that belong to posts flagged as taken down by the moderators using the posts' removed flair.", "Moreover, users of the Reddit platform may upvote or downvote a post, and the ratio of upvotes can serve as a rough indication of the reliability of the source.", "Hence, posts (and thus claims/sources) with an upvote ratio lower than 0.7 are rejected.", "We then reject claims where the linked source in the post has an Alexa Site Rank lower than 50,000, rejecting the outliers by the site rank (see the box plot in Appendix B.2).", "Finally, we reject a claim if the linked source in the post does not appear in the top 5 Google search results when querying the title of the post.", "From an initial set of 22,646 posts, automatic syntactic filtering for well-formed claims results in a set of 6,154 claims, further reduced to 1,526 after filtering for trustworthiness and finally reduced to 1,407 through manual validation.", "Thus, the resulting dataset after all the filtering steps consists of 1,407 true claims and the associated source evidence documents (an additional set of 111 claims is removed in the evidence sentence selection step in Section 2.3).", "Besides the linked source document in the post, we 
retrieve for each claim four additional sources from the top 5 Google search results.", "This is motivated by the fact that the same claim can be reported by various sources.", "For example, the second claim in Table 1, Oxford vaccine triggers immune response, is reported, besides the bbc.com source given in the original post, also by other trustworthy sources such as usnews.com, medscape.com, and cnbc.com.", "Unlike FEVER and SciFact, which constrain their evidence document collection to Wikipedia or pre-selected scientific articles, we collect evidence from any of the websites linked to the Reddit post or appearing in the top 5 Google search results.", "Even though over time the Google search results may change, the collection of evidence documents for COVID-Fact is considered fixed and will be released for reproducibility.", "[Footnote 2: https://www.alexa.com/topsites ]", "Like SciFact (Wadden et al., 2020), our dataset contains several claims with scientific jargon such as Altered blood cell traits underlie a major genetic locus of severe COVID-19.", "However, unlike SciFact, our dataset also contains scientific claims expressed in lay terms.", "For example, a claim like Loss of smell is a symptom of COVID-19 is much simpler and can be understood by a wider audience compared to Emerging evidence supports recently acquired anosmia and hyposmia as symptoms of COVID-19.", "This is important, as a lot of (dis)information is expressed in lay language intended for the general public not versed in scientific language.", "Another issue adding to the complexity of the task around COVID-19 (dis)information is non-scientific claims that focus on public health policies or statements from public health authorities.", "For example, a claim like CDC says new COVID strain in UK could already be circulating undetected in U.S. would not occur in scientific literature, but occurs in media outlets linked as sources in the r/COVID19 subreddit.", "An end-to-end fact-checking system requires both true 
and false claims.", "Following FEVER and SciFact, to obtain false claims we aim to generate counter-claims of the original true claims (from Section 2.1).", "However, in FEVER (Thorne et al., 2018) and SciFact (Wadden et al., 2020) the generation of counter-claims was done manually by human annotators, which is an expensive approach that might not scale well.", "We propose an approach to generate counter-claims automatically (see Table 1 for examples).", "Our counter-claim generation consists of two stages: 1) select salient words from the true claims, and 2) replace those words with their opposites using Masked Language Model infilling with entailment-based quality control.", "We discuss these steps below.", "Salient words (keywords) are essential to the overall semantics of a sentence.", "For example, in the claim Oxford vaccine triggers immune response, a salient word would be triggers.", "By changing the word triggers to inhibits, we change the meaning of the above claim to its opposite (counter-claim).", "Recently, Zhang et al. (2020b) used YAKE (Campos et al., 2018, 2020), an unsupervised automatic keyword extraction method, for selecting salient words to guide their text generation process.", "For selecting salient words from a claim, we experiment with YAKE as one of our methods.", "In addition, we explore an attention-based method described below.", "Attention-Based Salience.", "Recently, Sudhakar et al. 
(2019) use self-attention scores from BERT (Devlin et al., 2019) to delete keywords from an input sequence for the task of style transfer.", "They use a novel method to extract a specific attention head and layer combination that encodes style information and that can be directly used as importance scores.", "Inspired by them, we use the same approach for our task.", "We fine-tune BERT for a sentence classification task (veracity prediction) on the SciFact (Wadden et al., 2020) dataset, and extract the attention scores from the resulting model.", "Given the SciFact dataset D = (x_1, y_1), ..., (x_m, y_m), where x_i is a claim and y_i ∈ {SUPPORTED, REFUTED} is a veracity label, we observe that the self-attention-based classifier defines a probability distribution over labels: p(y|x) = g(v, α), where v is a tensor such that v[i] is an encoding of x[i], and α is a tensor of attention weights such that α[i] is the weight attributed to v[i] by the classifier in deciding probabilities for each y_j.", "The α scores can be treated as importance scores and be used to identify salient words.", "Quality of Salient Words Selection.", "We evaluate how well our salient word selection methods correlate with human judgement.", "We randomly select 150 original claims for an Amazon Mechanical Turk task.", "The annotators were asked to select a word that could potentially invert the meaning of the sentence if it were to be replaced.", "For every claim, three separate annotators were recruited, which means that we would have at most three different chosen salient keywords.", "For each claim, we compute the set intersections between the three keywords selected by our automatic methods (YAKE and attention-based) vs. the keywords selected by the annotators on AMTurk.", "We found that keywords selected using self-attention scores have a significantly higher recall (Two-Proportion Z-test with p-value < 0.00001) than YAKE (68% vs. 
54%).", "The average number of words per claim in COVID-Fact is 14, so the task of selecting one salient keyword is challenging even for humans.", "Given this, our Recall@3 scores demonstrate the reliability of automatic attention-based salient word selection.", "After selecting salient words from the true claims for replacement, we need to provide replacements that are opposite in meaning and fit the context in which these words occur.", "Language models have been used previously for infilling tasks (Donahue et al., 2020) and have also been used for automatic claim mutation in fact-checking (Jiang et al., 2020).", "Inspired by these approaches, we use the Masked Language Model (MLM) RoBERTa (Liu et al., 2019) fine-tuned on CORD-19 (Wang et al., 2020) for infilling.", "The fine-tuned RoBERTa is available on Huggingface.", "We generate a large number (10-30) of candidate counter-claims with replaced keywords for each original claim.", "After generating multiple candidate counter-claims based on MLM infilling, we select the ones that have the highest contradiction score with the original claim.", "To compute the contradiction score, we use the RoBERTa (Liu et al., 2019) model trained on MultiNLI (Williams et al., 2018) due to its size and diversity.", "The scores are in the range from 0 to 1.", "We first set the minimum score threshold and then select the top three claims above the threshold.", "To select the right threshold for contradiction score-based filtering, we perform the following experiment.", "We presented 150 randomly selected claims to Amazon Mechanical Turk workers.", "Annotators were presented with the original claim and five generated candidate counter-claims from MLM infilling.", "They were then asked if those claims are implied by the original claim (hence, for example, noun shifts would be judged as not implied).", "We labeled claims as contradictory if the majority of the annotators agreed on the label.", "We observed a point-biserial 
correlation of 0.47 between dichotomous human judgement and continuous contradiction scores, indicating moderate agreement.", "We convert the contradiction scores to binary outcomes, assigning 1 if the score is above the threshold and 0 otherwise.", "We compute precision, recall, F1 score and accuracy for different thresholds.", "As the threshold value increases, we see a steady increase in precision, indicating that by taking a higher threshold value we are almost guaranteed to select a contradictory sentence (for example, for a threshold of 0.995, precision is 93%).", "Obviously, this comes at a cost of decreased recall.", "[Footnote 3: https://huggingface.co/amoux/roberta-cord19-1M7k ]", "We selected a threshold of 0.9 (precision 76%), since we want to prioritize precision, but do not want to reduce our dataset too much due to the low recall.", "At this threshold, our 1,407 claims generate an additional 4,042 false claims.", "An alternative approach of replacing salient words with antonyms from standard lexicons like WordNet (Miller, 1995) was considered.", "However, a suitable antonym was absent in several cases, most notably for nouns.", "The RoBERTa model is able to provide domain-aware substitutions.", "For example, replacing the word humans by the word mice reverses the meaning of a claim in the domain of clinical trial reports, yet the words human and mouse can hardly be considered antonyms.", "Lexical replacement without consideration of context can also cause grammatical issues.", "Our method of counter-claim generation only changes a single word or a multi-word expression, since pre-trained MLMs like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) do not allow for multiple-word masking.", "However, this method can be extended to masking multiple words using recent pre-trained language models like BART (Lewis et al., 2020).", "Upon deeper inspection, we observe that the attention scores described in Section 2.2.1 were distributed across different parts of 
speech, such as verbs, adjective modifiers, or nouns.", "We show the distribution of the most frequent parts of speech of salient words and replacement words in our dataset in Figure 2.", "This means our counter-claims were generated with more creativity than just the addition of obvious triggers like not.", "The majority of claim negations involved a reversal of effect direction; for instance, Suspicions grow that nanoparticles in Pfizer's COVID-19 vaccine trigger rare allergic reactions was negated as Suspicions grow that nanoparticles in Pfizer's COVID-19 vaccine trigger systemic allergic reactions, where a simple adjective modifier changes the truthfulness.", "Similarly, for the claim Electrostatic spraying will prevent the spread of COVID-19, a negated claim is Electrostatic spraying will facilitate the spread of COVID-19, which flips the main verb in the claim.", "In Table 2, one can see several examples of how the generated counter-claims reverse the meaning of the original sentence.", "To select evidence sentences, we follow the approach proposed by Hidey et al. 
(2020).", "Given the true claims and the 5 evidence documents for each claim (Section 2.1), we use cosine similarity on SBERT sentence embeddings (Reimers and Gurevych, 2019) to extract the top 5 sentences most similar to the true claim.", "Note that we only need to do this step for true claims, as the evidence sentences that support a true claim automatically serve as the evidence sentences that refute the corresponding counter-claims.", "Sentences containing the claim itself were discarded.", "The five collected sentences serve as candidate evidence sentences for the human validation described below.", "Crowdsourcing for Final Evidence Sentence Selection.", "Amazon Mechanical Turk workers were given a claim and the 5 automatically selected candidate evidence sentences.", "They were asked to select which of the evidence sentences support the claim (they could select several), or they could select that the evidence is absent.", "To discourage low-quality responses, we used a trick sentence that would allow us to disqualify dishonest entries.", "For the trick, we used the phrase It is not true that concatenated with the original sentence, and rejected entries that marked that option as evidence for the claim.", "In 111 cases, annotators could not agree on the evidence or agreed that the evidence was absent, where agreement is defined as the majority vote.", "We disregard these true claims from our COVID-Fact dataset as they would not have associated evidence sentences.", "We assess the quality of the majority vote annotations by comparing the gold evidence label annotations with an independent re-annotation by three Amazon Mechanical Turk workers.", "We select a sample of 100 claims' evidence (7% of the 1,296 original claims).", "We observe a Cohen's kappa (Cohen, 1968) of 0.5 between majority votes of the two independent groups of Amazon Turk workers, indicating moderate agreement (Artstein and Poesio, 2008).", "We find this encouraging given the 
complexity of the task, especially considering that the workers did not have domain-specific knowledge.", "The COVID-Fact task follows the FEVER shared task definition.", "The set of all claims is denoted by C.", "The set of gold evidence sentences for a claim c ∈ C is denoted by E(c).", "The gold label for a given claim and evidence pair is defined as v(c, E(c)) ∈ {SUPPORTED, REFUTED}.", "The task consists of the subtasks outlined below.", "Evidence Retrieval.", "Given a claim c, a system must retrieve a set of up to five evidence sentences E(c).", "We evaluate the evidence retrieval system quality using precision, recall, and F1 scores.", "Evidence recall is computed as the number of evidence sets that contain a gold evidence sentence over the total number of evidence sets.", "Veracity Prediction.", "Given a claim c and a set of evidence sentences E(c), a system must determine a label v(c, E(c)) ∈ {SUPPORTED, REFUTED}.", "We evaluate veracity prediction using F1 score and accuracy.", "Evidence Retrieval + Veracity Prediction (COVID-FEVER Score).", "Given a claim c, a system must retrieve a set of evidence sentences E(c) and determine a label v(c, E(c)) ∈ {SUPPORTED, REFUTED}.", "A claim has a COVID-FEVER score of 1 if the system correctly predicts the veracity of the claim-evidence pair and if at least one of the predicted evidence sentences matches the gold evidence selected by annotators (thus a stricter score than veracity prediction accuracy).", "This metric is similar to the FEVER score (Thorne et al., 2018).", "Our end-to-end pipeline consists of the following steps: 1) evidence retrieval using Google Search + SBERT, and 2) veracity prediction using RoBERTa fine-tuned on fact-checking and entailment inference datasets.", "Baseline for Evidence Retrieval.", "We use the same approach as was used for the construction of the dataset to provide a strong baseline for evidence retrieval on COVID-Fact.", "Google search was used to 
identify five potential source documents by querying the claim.", "This step is followed by selecting the most similar sentences through computing cosine similarity between sentence embeddings of the claim and candidate sentences using SBERT (Reimers and Gurevych, 2019).", "Baseline for Veracity Prediction.", "Our baseline for veracity prediction is a RoBERTa model.", "We concatenate all evidence sentences in the evidence set and use the result as input for a binary classification task similar to the GLUE RTE task (Wang et al., 2018).", "We evaluate the models with gold evidence, as well as Top-5 and Top-1 evidence ranked by SBERT cosine similarity with the original claim.", "Besides evaluating our baseline pipeline on the COVID-Fact dataset, we perform several additional experiments outlined below.", "All hyperparameters can be found in Appendix A.", "Adequacy of Existing Datasets for COVID-Fact.", "We compare the performance of RoBERTa-large fine-tuned on FEVER, SciFact, MNLI and our COVID-Fact dataset.", "Moreover, we also experiment with fine-tuning RoBERTa-large on SciFact + COVID-Fact and on FEVER + COVID-Fact.", "Usefulness of COVID-Fact for Zero-Shot Scientific Fact-Checking.", "Even though SciFact was not explicitly designed for COVID-19-related claims, Wadden et al. 
(2020) showed how models trained on the SciFact dataset could verify claims about COVID-19 against the research literature.", "COVID-Fact, on the contrary, was not explicitly designed for scientific fact-checking, although our resource contains a substantial number of scientific claims.", "This provides us with the opportunity to test the generalizability and robustness of our dataset.", "To do so, we train models on COVID-Fact claims and gold evidence and evaluate the veracity performance on the SciFact dev set in a zero-shot setting.", "We remove the NOT ENOUGH INFO claims from the SciFact dataset.", "Table 5 summarizes the results for the evidence retrieval evaluation.", "Our pipeline provides a strong baseline with an F1 score of 32.", "For comparison, the baseline system in FEVER (Thorne et al., 2019) achieves an F1 score of 18.26.", "Note that Top-5 evidence retrieval performs worse than gold since we evaluate how the system performs with automatically negated claims as well, for which we re-run the Google+SBERT method.", "Table 4 summarizes the results for the veracity prediction task using gold and retrieved evidence.", "We observe that, given the gold evidence, fine-tuning on COVID-Fact led to performance improvements of 25 and 35 F1 points compared to training solely on SciFact and FEVER, respectively.", "This indicates that the COVID-Fact dataset is challenging and cannot be solved using popular fact-checking datasets like FEVER and SciFact.", "This could be explained by the fact that claims about COVID-19 comprise a mix of scientific and general-domain claims.", "The poor macro-F1 score for the claim-only baseline shows that the model does not learn spurious correlations between a claim and the veracity label.", "With Top-5 and Top-1 retrieved evidence, we observed that COVID-Fact is still difficult to outperform.", "The zero-shot performance is negligibly affected by the retrieved evidence.", "Table 4 (reconstructed): Performance of various training configurations of RoBERTa-large in Veracity Prediction as well as Evidence Retrieval + Veracity Prediction (see Section 3.1); columns are Gold Acc/F1, Top-5 Acc/F1, Top-1 Acc/F1, and Top-5 COVID-FEVER score. MNLI (Williams et al., 2018): 61.3/64.2, 53.1/51.5, 65.4/60.6, 35.1. SciFact (Wadden et al., 2020): 56.9/57.0, 53.7/54.0, 54.3/54.0, 36.9. FEVER (Thorne et al., 2018): 48.3/47.0, 46.2/45.0, 48.6/48.0, 35.4. COVID-Fact: 83.5/82.0, 84.7/83.0, 83.2/81.0, 43.3. SciFact + COVID-Fact: 82.2/81.0, 83.0/82.0, 80.2/79.0, 43.0. FEVER + COVID-Fact: 74.8/70.0, 78.2/73.0, 73.3/68.0, 35.4. COVID-Fact (claim only): 67.5/40.0, --, --, --.", "Our baseline pipeline achieves a COVID-FEVER score of 43.3 using the Top-5 evidence sentences.", "Adding the FEVER and SciFact datasets deteriorates the results.", "Table 6 shows a strong zero-shot performance of COVID-Fact for scientific claim verification (training on the COVID-Fact train set, testing on the SciFact dev set).", "SciFact only contains scientific claims, therefore the model trained only on SciFact does not generalize well to COVID-Fact, which also contains non-scientific claims.", "COVID-Fact, on the other hand, contains enough scientific claims so that the model generalizes well to SciFact.", "This result shows that the semi-automated COVID-Fact is not inferior to the mostly manual SciFact.", "Error analysis.", "We observe that errors in veracity prediction can be attributed to three factors: Cause and Effect, Commonsense, or Scientific Background.", "For instance, consider the first (C1, EV1) pair in Table 7. C1: SARS-CoV-2 is not detectable in the vaginal fluid of women with severe COVID-19 infection. EV1: All 10 patients were tested for SARS-CoV-2 in vaginal fluid, and all samples tested negative for the virus.", "Here, not detectable is the Cause, while testing negative is the Effect.", "To verify this claim, the veracity model needs to have knowledge of counterfactuals.", "Furthermore, it should be understood that All 10 patients in EV1 refers to women in C1 due to the mention of vaginal 
fluids, but this requires commonsense knowledge outside the text.", "Finally, it might be hard for veracity models to correctly classify claim-evidence pairs that require knowledge of domain-specific or scientific lexical relationships.", "For instance, in (C2, EV2) we see that both highlighted phrases refer to the same phenomenon: immune dysregulation is a breakdown of immune system processes, and restraining it can be seen as the same concept as correcting immune abnormalities, but the model is not able to capture such complex domain-specific knowledge.", "Fact-Checking.", "Approaches for predicting the veracity of naturally-occurring claims have focused on statements fact-checked by journalists or organizations such as PolitiFact.org (Vlachos and Riedel, 2014; Alhindi et al., 2018), news articles (Pomerleau and Rao, 2017), or answers in community forums (Mihaylova et al., 2018, 2019).", "Mixed-domain large-scale datasets such as UKP Snopes (Hanselowski et al., 2019), MultiFC (Augenstein et al., 2019), and FEVER (Thorne et al., 2018, 2019) rely on Wikipedia and fact-checking websites to obtain evidence for their claims.", "Even though these datasets contain many claims, due to domain mismatch they may be difficult to apply for COVID-19 related misinformation detection.", "SciFact (Wadden et al., 2020) introduced the task of scientific fact-checking, generating a dataset of 1.4K scientific claims and corresponding evidence from paper abstracts annotated by experts.", "However, the dataset does not contain simplified scientific claims encountered in news and social media sources, making it difficult to optimize for a misinformation detection objective.", "Another approach to misinformation detection similar to ours is CLIMATE-FEVER (Diggelmann et al., 2021).", "They adapted the FEVER methodology to create a dataset specific to climate change fact-checking.", "However, due to the difficult and expensive methods employed for the generation of FEVER, it can be 
difficult to extrapolate this method to assemble a COVID-19-specific dataset.", "COVID-19 related NLP tasks.", "Numerous NLP approaches have been employed to aid the battle against the COVID-19 pandemic.", "Notably, Wang et al. (2020) released CORD-19, a dataset containing 140K papers about COVID-19 and related topics, while Zhang et al. (2020a) created a neural search engine, COVIDEX, for information retrieval.", "To combat misinformation, Lee et al. (2020) proposed the hypothesis that misinformation has high perplexity.", "Hossain et al. (2020) released COVIDLIES, a dataset of 6,761 expert-annotated tweets matched with their stance on known COVID-19 misconceptions.", "The dataset provides a comprehensive evaluation of misconception retrieval but does not analyze evidence retrieval and prediction of veracity of claims based on presented evidence.", "Poliak et al. (2020) collected 24,000 questions with expert answers from 40 trusted websites to help NLP research with COVID-related information.", "COVID-Fact, on the other hand, deals with real-world claims and presents an end-to-end fact-checking system to fight misinformation.", "We release a dataset of 4,086 claims concerning the COVID-19 pandemic, together with supporting and refuting evidence.", "The dataset contains real-world true claims obtained from the r/COVID19 subreddit as well as automatically generated counter-claims.", "Our experiments reveal that models trained on our dataset outperform zero-shot baselines trained on popular fact-checking benchmarks like SciFact and FEVER.", "This suggests that domain-specific vocabulary may negatively impact the performance of models trained on popular NLP benchmarks.", "Finally, we demonstrate a simple, scalable, and cost-efficient way to automatically generate counter-claims, thereby aiding in the creation of domain-specific fact-checking datasets.", "We provide a detailed evaluation of the COVID-Fact task and hope that our dataset serves as a challenging testbed for end-to-end fact-checking around COVID-19.", 
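The entailment-based quality control for counter-claim generation described in Section 2.2 (keep candidates whose NLI contradiction score with the original claim is above 0.9, then take the top three) can be sketched as follows. This is a simplified illustration, not the authors' code: in the real pipeline the scores come from a RoBERTa model trained on MultiNLI, whereas here the candidate counter-claims and their contradiction scores are hypothetical mock values.

```python
def select_counter_claims(candidates, threshold=0.9, top_n=3):
    """Keep candidate counter-claims whose contradiction score with the
    original claim is at or above the threshold, then return the top-n
    highest-scoring ones (threshold 0.9 and top three in the paper)."""
    kept = [(claim, score) for claim, score in candidates if score >= threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return [claim for claim, _ in kept[:top_n]]

# Hypothetical MLM-infilled candidates with mock NLI contradiction scores.
candidates = [
    ("Oxford vaccine inhibits immune response", 0.97),
    ("Oxford vaccine boosts immune response", 0.12),    # paraphrase, not a contradiction
    ("Oxford vaccine suppresses immune response", 0.95),
    ("Oxford vaccine delays immune response", 0.91),
    ("Oxford vaccine prevents immune response", 0.88),  # just below the 0.9 threshold
]
counter_claims = select_counter_claims(candidates)
```

The high threshold trades recall for precision, matching the paper's observation that a stricter cutoff almost guarantees a genuine contradiction at the cost of discarding some usable candidates.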
"The data was collected from Reddit with user privacy in mind.", "Reddit is a platform where users post publicly and anonymously.", "For our dataset, only titles and links to external publicly available sources such as news outlets or research journals were collected, as well as post metadata such as flairs, upvote ratio, and date of the post.", "User-identifying information, including, but not limited to, a user's name, health, financial status, racial or ethnic origin, religious or philosophical affiliation or beliefs, sexual orientation, trade union membership, or alleged or actual commission of a crime, was not retrieved and is not part of our dataset.", "For all the crowdsourcing annotation work, we fairly compensate crowd workers in accordance with local minimum wage guidelines.", "One significant concern might arise regarding the use of language models for counter-claim generation.", "Our model is a controlled generation system (word-level replacement) and is not suited to generating entirely new and original claims.", "Nor can it be used to generate entire articles of false information, or to generate false evidence for the counter-claims.", "The model for replacing keywords in original claims is trained on CORD-19 (Wang et al., 2020), a scientific corpus of high-quality and trustworthy information about COVID-19.", "We generate counter-claims to create a resource that will help NLP models learn to identify false information and provide evidence for the predicted label, leading to more explainable models.", "Consequently, our approach is suited to improving the entailment and veracity prediction performance of fact-checking systems, rather than improving the generative qualities of false-claim generation systems.", "The fact that we use our model to generate false claims also helps address concerns about biased language generation.", "In the unlikely event that our model produces biased claims, they could serve as good examples of false claims containing bias, which would be an interesting topic for further research (bias in disinformation).", "We therefore believe the net positive impact of our work far outweighs the potential risks." ]
[ "abstain", "abstain", "method", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "objective", "abstain", "objective", "objective", "method", "objective", "objective", "objective", "objective", "result", "other", "abstain", "objective", "method", "objective", "objective", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "result", "abstain", "objective", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "result", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", 
"abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "result", "abstain", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method" ]
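The word-level replacement strategy described above for counter-claim generation can be sketched in miniature. The hand-written swap table below is an illustrative stand-in for the keyword-replacement model trained on CORD-19, which we do not reproduce here; the function name and example claims are hypothetical.

```python
# Minimal sketch of counter-claim generation by word-level replacement.
# A real system would rank replacement candidates with a language model
# fine-tuned on CORD-19; a hand-written swap table stands in for it here.

NEGATING_SWAPS = {
    "effective": "ineffective",
    "increases": "decreases",
    "supports": "refutes",
}

def generate_counter_claim(claim: str, swaps: dict = NEGATING_SWAPS) -> str:
    """Replace the first swappable keyword to flip the claim's meaning."""
    tokens = claim.split()
    for i, tok in enumerate(tokens):
        key = tok.lower().strip(".,")
        if key in swaps:
            replacement = swaps[key]
            # Preserve capitalisation of the original token.
            if tok[0].isupper():
                replacement = replacement.capitalize()
            # Re-attach any trailing punctuation stripped above.
            tokens[i] = replacement + tok[len(tok.rstrip(".,")):]
            return " ".join(tokens)
    return claim  # no swappable keyword found; claim returned unchanged
```

A claim such as "Vitamin D increases recovery rates." becomes its counter-claim by a single keyword flip, which is the controlled-generation property the ethics discussion above relies on: the system cannot produce free-form articles, only local edits.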
[ "Relation Extraction (RE) aims to label relations between groups of marked entities in raw text.", "Most current RE models learn context-aware representations of the target entities that are then used to establish the relation between them.", "This works well for intra-sentence RE, and we call these first-order relations.", "However, this methodology can sometimes fail to capture complex and long dependencies.", "To address this, we hypothesize that at times two target entities can be explicitly connected via a context token.", "We refer to such indirect relations as second-order relations and describe an efficient implementation for computing them.", "These second-order relation scores are then combined with first-order relation scores.", "Our empirical results show that the proposed method leads to state-of-the-art performance on two biomedical datasets.", "Information Extraction in general (Jin et al., 2018) and Relation Extraction (RE) in particular have wide applications, which is one reason why relation extraction continues to be an active area of research (Bach and Badaskar, 2007; Kambhatla, 2004; Kumar, 2017).", "Traditionally, a standard RE model would start with entity recognition and then pass the extracted entities as inputs to a separate relation extraction model, which meant that errors in entity recognition were propagated to RE.", "This problem was addressed by end-to-end models (Miwa and Bansal, 2016; Zheng et al., 2017; Adel and Schutze, 2017; Bhatia et al., 2018) that jointly learn both NER and RE.", "Generally, these models consist of an encoder followed by a relationship classification (RC) unit (Verga et al., 2018; Christopoulou et al., 2018;", "Su et al., 2018). [Footnote: G. Singh was an intern at Amazon at the time of this work.]", "The encoder provides context-aware vector representations for both target entities, which are then merged or concatenated before being passed to the relation classification unit, where a two-layered neural network or multilayer perceptron classifies the pair into different relation types.", "Such RE models rely on the encoder to learn 'perfect' context-aware entity representations that can capture complex dependencies in the text.", "This works well for intra-sentence relation extraction, i.e., the task of extracting relations between entities contained in a single sentence (Christopoulou et al., 2018; Su et al., 2018).", "As these entities are close together, the encoder can more easily establish a connection based on the language used in the sentence.", "Additionally, these intra-sentence RE models can use linguistic/syntactic features for improved performance, e.g., the shortest dependency path.", "Unfortunately, success in intra-sentence RE has not been replicated for cross-sentence RE.", "As an example, a recent RE method called BRAN (Verga et al., 2018) proposed to use the Transformer encoder (Vaswani et al., 2017) to obtain token representations, which are then used for RE.", "However, our analysis revealed that it wrongly marks many cross-sentence relations as negative, especially when the two target entities are connected by a string of logic spanning multiple sentences.", "In this work we address this issue of over-reliance on the encoder.", "We propose a model based on the hypothesis that two target entities, whether intra-sentence or cross-sentence, could also be explicitly connected via a third context token (Figure 1).", "More specifically, we find the token in the text that is most related to both target entities, and compute the score for the relation between the two target entities as the summation of their relation scores with this token.", "We refer 
to these relations as second-order relations.", "At the end, we combine these second-order scores with first-order scores derived from a traditional RE model, and achieve state-of-the-art performance on two biomedical datasets.", "To summarize the contributions of this work:", "1. We propose using second-order relation scores for improved relation extraction.", "2. We describe an efficient algorithm to obtain second-order relation scores.", "In this section we describe the encoder and relation classification unit of a SOTA RE model called BRAN (Verga et al., 2018).", "This model computes relation scores between two entities directly from their representations; we therefore refer to these as first-order relation scores.", "BRAN uses a variant of the Transformer encoder (Vaswani et al., 2017) to generate token representations.", "The encoder contains repeating blocks, and each block consists of two sublayers: a multi-head self-attention layer followed by a position-wise convolutional feedforward layer.", "There are residual connections and layer normalization (Ba et al., 2016) after each sublayer.", "The only difference from a standard Transformer encoder is the presence of a convolution layer of kernel width 5 between two consecutive convolution layers of kernel width 1 in the feedforward sublayer.", "It takes as input word embeddings summed with positional embeddings (Gehring et al., 2017).", "The relation classification unit takes as input the token representations from the described encoder.", "These are then passed through two MLPs to generate a head/tail representation e_i^head / e_i^tail for each token, corresponding to whether it serves the first (head) or second (tail) position in the relation.", "e_i^head = W_2^head ( ReLU ( W_1^head b_i ) ) (1), e_i^tail = W_2^tail ( ReLU ( W_1^tail b_i ) ) (2)", "These are then combined with a bi-affine transformation operator to compute an N × R × N tensor A of pairwise affinity scores for every token pair and all relation types, scoring all triplets of the form ( head, relation, tail ): A_irj = ( e_i^head L ) e_j^tail , (3) where L is a learned tensor of dimension d × R × d that maps pairs of tokens to scores over each of the R relation types, and d is the dimension of the head/tail representations.", "Going forward we will drop the subscript r for clarity.", "The contributions from different mention pairs are then aggregated to give us first-order relation scores.", "This aggregation is done using LogSumExp, a smooth approximation of max that prevents sparse gradients: scores^(1) ( p_head , p_tail ) = log Σ_{i ∈ P_head, j ∈ P_tail} exp( A_ij ) , (4) where P_head ( P_tail ) contains the mention indices for the head (tail) entity.", "In this section we describe in detail our proposed method for obtaining second-order relation scores.", "We use the encoder described in Sec 2.1 to obtain token representations.", "These token representations are then passed through two MLPs (as in the previous section), which generate head/tail representations for each token corresponding to whether it serves the first or the second position in the relation.", "We used a separate set of these head/tail MLPs for second-order scores from the ones used for first-order scores.", "[Figure 2: Schematic of the model architecture: the Transformer encoder feeds HeadMLP-1/TailMLP-1 (first-order scores) and HeadMLP-2/TailMLP-2 (intermediate and second-order scores), which are added with weight λ to give the final scores.] This was", "motivated by the need for representations focused on establishing relations with context tokens, as opposed to first-order relations (described in the previous section) that attempt to directly connect the two target entities.", "The head and tail representations are then combined with a d × R × d bilinear transformation tensor M to get an N × R × N tensor B of intermediate pairwise scores.", "After that we define the score between tokens i and j when conditioned on a context token k as the sum of the scores of relations ( i, k ) and ( k, j ).", "These context-conditioned 
scores are computed for every triplet of the form ( i, j, k ).", "Second-order relation scores are then derived by aggregating over all context tokens and mention pairs using LogSumExp.", "Here LogSumExp ensures that one specific mention pair connected via one specific context token is responsible for the relation.", "This is equivalent to max-pooling over all context tokens that could potentially connect the two target entities, which reduces over-fitting by removing contributions from noisy associations of the target entities with random tokens, e.g., stopwords.", "It is important to mention that a naive implementation would require O(N^3) space to store the context-conditioned scores between all pairs of tokens, i.e., C( i, j | k ).", "To address this, we describe an efficient method in Section 3.1 that avoids explicitly storing these.", "At the end, the final score for the relation between two entities is given as a weighted sum of the first-order (eq. 4) and second-order (eq. 7) scores.", "where λ is a hyper-parameter denoting the weight of the second-order relation scores.", "Entity Recognition.", "We do entity recognition alongside relation extraction, as the transfer of knowledge between the two tasks has been shown to improve relation extraction performance (Verga et al., 2018; Miwa and Bansal, 2016).", "For this we feed the encoder output b_i to a linear classifier W_er that predicts scores for each entity type.", "The problem lies in storing a score for every intermediate relation of the form C( i, j | k ), as that would require space of the order O(N^3).", "Here we describe a space- and time-efficient method to compute the final second-order relation scores.", "The intermediate scores (eq. 5) form a tensor of dimension b × N × R × N comprising pairwise scores for b batches.", "We create two tensors from these intermediate scores, namely T_1 and T_2.", "T_1 takes the exponential at the indices { b, i ∈ P_head, j ∈ C, R } corresponding to pairwise scores between the head entity and all the context tokens ( C, i.e., all tokens except the two target entities ), and sets all other indices to 0.", "Similarly, T_2 takes the exponential at the indices { b, i ∈ P_tail, j ∈ C, R } corresponding to pairwise scores between the tail entity and the context tokens, setting all other indices to 0.", "[Table 1: Performance of the proposed model using second-order relations (Pr / Re / F1). DCN: BRAN 0.614 / 0.850 / 0.712; +SOR 0.643 / 0.879 / 0.734. i2b2: HDLA 0.378 / 0.422 / 0.388; BRAN 0.396 / 0.403 / 0.395; +SOR 0.424 / 0.419 / 0.407. CDR: BRAN 0.552 / 0.701 / 0.618; +SOR 0.552 / 0.701 / 0.618.] To get the context-conditioned scores one needs to compute the batch product of the", "R two-dimensional slices of size N × N from T_1 and T_2 along the context dimension, but this would be sequential in R.", "Instead, we can permute T_1 and T_2 to b × R × N × N, reshape to bR × N × N, and perform a batch matrix multiplication along the context dimension to get bR × N × N.", "Afterwards, we can sum along the last two dimensions to get a tensor of size bR.", "Finally, we can take the log, followed by reshaping to b × R, to obtain the second-order scores.", "We have used three datasets in this work: the i2b2 2010 challenge dataset (Uzuner et al., 2011), a de-identified clinical notes dataset, and a chemical-disease relations dataset known as BioCreative V (CDR) (Li et al., 2016; Wei et al., 2016).", "The first is a publicly available subset of the dataset used for the i2b2 2010 challenge.", "It consists of documents describing relations between different diseases and treatments.", "Out of the 426 documents available publicly, 10% each are used for dev and test, and the rest for training.", "There are 3244/409 relations in the train/test sets and 6 pre-defined relation types, including one negative relation, e.g., TrCP (Treatment Causes Problem), TrIP (Treatment Improves Problem), TrWP (Treatment Worsens Problem).", "We have used the exact same dataset as Chikka et al. 
(Chikka and Karlapalem, 2018).", "The second is a dataset of 4200 de-identified clinical notes (DCN), with a vocabulary size of 50K.", "It contains approximately 170K relations in the train set and 50K each in the dev/test sets.", "There are 7 pre-defined relation types, including one negative relation type.", "These are mostly between a medication name and other entities, e.g., paracetamol every day, aspirin with dosage 100mg.", "The frequency of different relations in this dataset is fairly balanced.", "The third is a widely used and publicly available dataset called CDR (Li et al., 2016; Wei et al., 2016).", "It was derived from the Comparative Toxicogenomics Database (CTD) and contains documents describing the effect of chemicals (drugs) on diseases.", "There are only two relation types between any two target entities, i.e., positive/negative, and these relations are annotated at the document level.", "It consists of 1500 documents that are divided equally between the train/dev/test sets.", "There are 1038/1012/1066 positive and 4280/4136/4270 negative relations in the train/dev/test sets respectively.", "We performed the same preprocessing as done in BRAN (Verga et al., 2018).", "We jointly solve the NER and RE tasks using a cross-entropy loss.", "During training we alternate between mini-batches derived from each task.", "We fix the learning rate to 0.0005 and clip gradients for both tasks at 5.0.", "For training, we used the Adam optimizer with (β_1, β_2) = (0.1, 0.9).", "We tune over the weight of the second-order relations, denoted by λ, obtaining λ = 0.2 for DCN/i2b2 and λ = 0.0 for the CDR dataset.", "Our final network had two encoder layers, with 8 attention heads in each multi-head attention sublayer and 256 filters for the convolution layers in the position-wise feedforward sublayer.", "We used dropout with probability 0.3 after the embedding layer, the head/tail MLPs, and the output of each encoder sublayer.", "We also used word dropout with probability 0.15 before the embedding layer.", "To show the benefits of using second-order relations we compared our model's performance to BRAN.", "The two models differ only in the weighted addition of the second-order relation scores.", "We tuned over this weight parameter on the dev set and observed an improvement in macro-F1 score from 0.712 to 0.734 on the DCN data and from 0.395 to 0.407 on the i2b2 data.", "For further comparison, a recently published model called HDLA (Chikka and Karlapalem, 2018) reported a macro-F1 score of 0.388 on the same i2b2 dataset.", "It should be mentioned that HDLA used syntactic parsers for feature extraction, whereas we do not use any such external tools.", "In the case of the CDR dataset we obtained λ = 0 after tuning, which means that the proposed model converged to BRAN and the results were identical for the two models.", "These results are summarized in Table 1.", "4.4 Ablation Study.", "We experimented with different ablations of BRAN and noticed an improvement in results for the DCN dataset upon removing the multi-head self-attention layer.", "Also, our qualitative analysis showed that relations between distant entities were often wrongly marked as negative.", "We attribute these errors to the token representations generated by the encoder.", "To this effect, our experiments showed that incorporating relative position information (Shaw et al., 2018) in the encoder to improve token representations does not lead to superior RE.", "Separately, we observed that the proposed method improved results when using a standard CNN encoder as well.", "We proposed a method that uses second-order relation scores to capture long dependencies for improved RE.", "These relations are derived by explicitly connecting the two target entities via a context token.", "These second-order relations (SORs) are then combined with traditional relation extraction models, leading to state-of-the-art performance on two biomedical datasets.", "We also describe an efficient implementation for obtaining these SORs.", "Despite restricting ourselves to SORs, it should be noted that the proposed method can be generalized to third- and fourth-order relations.", "We conjecture that these may serve well for cross-sentence relation extraction in long pieces of text.", "Also, we only considered one relation type between each entity and the bridge token, but it is possible, and very likely, that two different relation types may combine to yield a third relation type.", "We will explore both these aspects in future work.", "London and the anonymous reviewers of NAACL for their valuable feedback on the paper." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "result", "abstain", "objective", "objective", "objective", "result", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "method", "objective", "other" ]
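The efficient second-order aggregation described above rests on the identity log Σ_k exp(B[i,k]) · exp(B[k,j]) = log Σ_k exp(B[i,k] + B[k,j]), which lets a single matrix product over the context dimension replace the O(N^3) enumeration of context-conditioned scores. A minimal NumPy sketch under assumed toy sizes (variable names are ours, not the paper's; single mentions stand in for the mention sets P_head and P_tail):

```python
import numpy as np

# Toy sizes: N tokens, R relation types. B[r, i, j] holds the intermediate
# pairwise score between tokens i and j for relation type r (eq. 5 analogue).
rng = np.random.default_rng(0)
N, R = 6, 2
B = rng.normal(size=(R, N, N))

i, j = 0, 5  # head and tail token indices (single-mention case)
context = [k for k in range(N) if k not in (i, j)]

# Naive aggregation: logsumexp over context tokens k of B[r,i,k] + B[r,k,j].
naive = np.log(sum(np.exp(B[:, i, k] + B[:, k, j]) for k in context))

# Efficient version (the T_1/T_2 trick): exponentiate the head-to-context and
# context-to-tail slices, multiply once along the context dimension, then log.
E1 = np.exp(B[:, [i], :][:, :, context])  # shape (R, 1, |context|)
E2 = np.exp(B[:, context, :][:, :, [j]])  # shape (R, |context|, 1)
efficient = np.log((E1 @ E2).reshape(R))  # one score per relation type
```

The batched `E1 @ E2` is the sketch's analogue of the paper's bR × N × N batch matrix multiplication; both compute the sum over context tokens without ever materializing the full N^3 tensor of context-conditioned scores.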
[ "Languages are dynamic systems: word usage may change over time, reflecting various societal factors.", "However, all languages do not evolve identically: the impact of an event, or the influence of a trend or way of thinking, can differ between communities.", "In this paper, we propose to track these divergences by comparing the evolution of a word and its translation across two languages.", "We investigate several methods of building time-varying and bilingual word embeddings, using contextualised and non-contextualised embeddings.", "We propose a set of scenarios to characterize semantic divergence across two languages, along with a setup to differentiate them in a bilingual corpus.", "We evaluate the different methods by generating a corpus of synthetic semantic change across two languages, English and French, before applying them to newspaper corpora to detect bilingual semantic divergence and provide qualitative insight into the task.", "We conclude that BERT embeddings coupled with a clustering step lead to the best performance on synthetic corpora; however, the performance of CBOW embeddings is very competitive and more adapted to an exploratory analysis on a large corpus.", "Languages evolve throughout time: for many words, their usages, along with their frequent collocations and associations, can change, revealing the evolution of society (Aitchison, 2001).", "However, all languages do not evolve identically: the impact of an event, or the influence of a trend or way of thinking, can differ between communities.", "Moreover, languages do not evolve independently; some words can be inherited and borrowed between languages.", "For example, cognates, words that have the same [Footnote: This work was carried out while the author was working at LISN-CNRS.]", "etymological origin and similar meaning in two languages, can sometimes diverge into false friends, due to particular features of one language and its associated culture and history.", "A more specific example is the Russian word ukrop, meaning dill.", "It started to be used by Russian people as an ethnic slur (a pejorative term) to talk about Ukrainian soldiers at the beginning of the Russian-Ukrainian conflict (Stewart et al., 2017).", "Then, Ukrainian people started to use it to designate their own patriots, in a positive way.", "Analysing the evolution of this word can lead to a better understanding of the evolution of the conflict; on the contrary, without suitable tools and methods to detect the divergence in its usage and connotation between communities, one might draw spurious results when analysing texts of this period.", "Diachronic semantic change detection is an emerging field in Natural Language Processing, building upon the growing number of digitised texts with temporal metadata publicly available in various languages.", "It opens new perspectives of improvement for downstream tasks (using time-aware word representations for tasks ranging from text classification to information retrieval in temporal corpora) and for socio-linguistic and historical linguistics analysis (Kutuzov et al., 2018).", "The goal of this paper is to extend the analysis of lexical semantic change across two languages, aiming at estimating the degree of diachronic semantic divergence between a word and its translation across time in a bilingual corpus.", "We propose an experimental framework to learn word representations that are comparable across both time and languages, and to detect and classify semantic divergence in a bilingual setting.", "We compare:", "(i) diachronic word embeddings, which allow static embeddings such as CBOW (Mikolov et al., 2013) to drift through time, and", "(ii) contextualised embeddings, relying on a pre-trained multilingual language model (M-BERT, Devlin et al., 2019).", "We also propose an anchored-alignment strategy to tackle the bilingual setting for non-contextual embeddings.", "Then, we suggest a metric to measure the divergence of word usage between two languages, the bilingual 
divergence.", "Given the lack of a bilingual dataset annotated with semantic divergence, we generate a corpus of synthetic semantic drift across two languages using EuroSense (Delli Bovi et al., 2017), a sense-disambiguated and aligned bilingual corpus.", "To do so, we define a set of monolingual and bilingual semantic change scenarios and evaluate our different approaches on them.", "Finally, we apply our systems to newspaper corpora in two languages, English and French, covering the same time period, from 1987 to 2006.", "We classify all words of a bilingual lexicon into the scenarios defined for the synthetic drift generation.", "To sum up, we extend the most appropriate methods from the literature on diachronic semantic change to build a framework for measuring semantic divergence across languages (Sections 3 and 4), for which we propose a definition of the task, a measure of semantic divergence (Section 5), and a process to evaluate the presented methods (Section 6).", "Diachronic embedding models.", "The first approaches to diachronic modeling were based on relative word frequencies and distributional similarities (Gulordava and Baroni, 2011).", "Following the generalisation of word embeddings, diachronic word embedding models emerged (Tahmasebi et al., 2018).", "A first line of work, led by Kim et al. (2014), learns an embedding matrix on the first time slice of a temporal corpus and incrementally fine-tunes it at each time step.", "This method has the advantage of simplicity but faces a greater sensitivity to noise (Shoemark et al., 2019; Kaiser et al., 2020).", "Another method, proposed by Hamilton et al. (2016) and Kulkarni et al. (2015), trains word embeddings on each time slice independently and aligns the representation spaces to make the embeddings comparable.", "Finally, Rudolph and Blei (2018), Jawahar and Seddah (2019), and Bamler and Mandt (2017) define probabilistic models of word embeddings, able to capture drifts by training embeddings jointly on all time slices.", "These methods average all the senses of a word into a single vector at each time step.", "Pre-trained language models such as BERT (Devlin et al., 2019) allow each occurrence of a word to have a contextualised vector representation.", "These models, pre-trained on large datasets, have improved the state of the art on numerous NLP tasks.", "Similarly, contextualised embeddings can be applied to semantic change detection (Giulianelli et al., 2020; Montariol et al., 2021), using several aggregation techniques to measure the degree of semantic change of a word from all its contextualised representations over time.", "However, these methods are still outperformed by non-contextualised embeddings for this task (Schlechtweg et al., 2020).", "Semantic change across languages.", "While this topic is actively researched in the linguistics and sociology research communities (Boberg, 2012), it is fairly new in the NLP literature.", "Many authors apply diachronic embedding models to more than one language (Hamilton et al., 2016; Schlechtweg et al., 2020).", "However, prior work comparing the evolution of word usage across languages is very limited.", "Some work studies variations between languages or dialects, without looking into the temporal dimension (Hovy and Purschke, 2018; Beinborn and Choenni, 2020).", "Uban et al. (2019) compare the present meanings of cognate words across 5 Romance languages to differentiate true cognates from false friends and measure the divergence between languages.", "In a temporal fashion, Martinc et al. 
(2020a) study the evolution of 4 word pairs in an English-Slovenian corpus of newspaper articles.", "Finally, Frossard et al. (2020) propose a list of cognates for analysing the similarities in the evolution of English and French, along with a preliminary analysis focusing on the differences in word frequency over time.", "Before presenting systems based on contextualised embeddings, we introduce two methods using non-contextualised ones, as they are known to perform best for the task of semantic change detection (Schlechtweg et al., 2020).", "We use the continuous bag of words (CBOW) architecture of Word2Vec (Mikolov et al., 2013); we apply two different training methods to train it in a diachronic way.", "Then, we describe an anchored-alignment method to obtain bilingual diachronic word embeddings.", "In this section, we consider a monolingual corpus divided into T time slices.", "We rely on a fine-tuning method rather than an alignment-based method, where a new model would be trained from scratch at each time step (Hamilton et al., 2016).", "Indeed, for our cross-lingual task an alignment is already needed to map the embedding spaces of the two languages together; it would not be desirable to multiply this type of transformation, as each alignment introduces uncertainty in the system.", "To begin with, as advised by Rudolph and Blei (2018), we pre-train our CBOW models on a shuf-fled version of the full corpus for each language.", "We use two methods for diachronic training.", "The first on is incremental training (Kim et al., 2014): we incrementally fine-tune the model on each time slice by initialising the weights with those of the previous time slice.", "The second variant is independent training: the model is fine-tuned on each time slice independently by initialising it with the pre-trained embeddings.", "Compared to the incremental method, the latter does not take into account the chronology of the corpus and can lead to less directed drifts.", "However, 
the fact that the embeddings do not go through a large number of successive training updates, contrary to the incremental method, prevents the embeddings from undergoing too extreme drifts (Shoemark et al., 2019).", "We now consider a bilingual corpus, and embeddings trained separately on each language.", "We want to align the representation spaces to make the embeddings comparable.", "Anchoring.", "The supervision signal for the alignment is key to the performance of the overall system, even more than the model architecture itself (Ruder et al., 2019).", "Anchoring is a form of supervision commonly used in NLP to obtain cross-lingual word embeddings.", "The supervision comes from a bilingual dictionary, whose word pairs, the anchors, are used as seeds during the alignment.", "The anchors can be transparent words such as named entities, or an exhaustive bilingual dictionary covering the full vocabulary.", "However, aligning the vectors of the whole vocabulary is not appropriate for semantic change detection, as it tends to lower the disparities between the different vector spaces (Tsakalidis et al., 2019).", "In our case, the alignment forces the embeddings of the word pairs from the supervision dictionary to be the same in the two languages.", "This might hide some behaviors, such as a high disparity at the beginning of the full period and a convergence of meanings over time.", "Consequently, we use a seed dictionary with only the words that we assume are stable during the period in both languages.", "A first set of stable words are stopwords (Azarbonyad et al., 2017; Martinc et al., 2020b); however, by definition they do not carry much meaning.", "Relying only on them for the supervision might result in a poor alignment.", "We complement the list of seed words with word pairs that have the same relative frequency in the corpora of each language, with this frequency being in the top 10% of the full corpus (Azarbonyad et al., 2017; Zhang et al., 2015).", "For all experiments in this
paper, we use the bilingual dictionary from the MUSE tool (https://github.com/facebookresearch/MUSE) (Lample et al., 2018).", "It includes 5000 word pairs and handles word polysemy.", "Alignment.", "First, we train monolingual CBOW embeddings on each language independently, without dividing the corpora into time slices.", "To prepare for the alignment, we apply mean-centering to the embeddings of each language, as Schlechtweg et al. (2019) showed the positive impact of this preprocessing step for vector space alignment.", "For the alignment, we use Orthogonal Procrustes (Schönemann, 1966).", "It consists in finding the mapping W between two embedding spaces E_1 and E_2 which minimizes the sum of squared Euclidean distances between the image E_1 W of the source embedding space and the target embedding space E_2, for the set of selected anchor words in both spaces.", "These aligned embedding vectors are used to initialise the diachronic embeddings, which can then be trained on all the time slices in both languages, incrementally or independently.", "To challenge the systems based on aligned CBOW embeddings, we use M-BERT, the multilingual version of BERT (Devlin et al., 2019).", "It is trained on Wikipedia content in 104 languages, without any additional multilingual mechanism or language identifier.", "Applying a pre-trained multilingual model on a bilingual temporal corpus enables immediate comparison without requiring any alignment.", "Each sequence is labelled with the time it was written and its language.", "We extract contextualised representations for each token of a sequence by summing the top four hidden layers of the pre-trained model.", "BERT representations rely on a system of wordpieces; if a word is divided into several wordpieces, we take the average of all the wordpiece embeddings as the representation for the word.", "To sum up all the information about a word from the set of contextual embeddings of all its occurrences in a time slice, we
experiment with two aggregation techniques: averaging and clustering.", "Averaging: Proposed by Martinc et al. (2020a), this method averages all the token embeddings of a word for each time period and each language.", "We end up with a set of time-specific and language-specific vector representations of a word.", "They can be compared using the cosine distance (Shoemark et al., 2019).", "Clustering: This method, first used by Giulianelli et al. (2020), groups the set of token embeddings of a word into types of usages.", "We apply a clustering algorithm, k-means, to all the embeddings of a word and its translation, on all the time periods jointly.", "Then, we compute the normalised distributions of clusters, for each language and period.", "More precisely, for a given word, we extract the number of tokens in each cluster and for each pair (period, language); we normalise it by the total number of occurrences of the word in the corpus.", "We obtain the probability distributions of the usages of this word at each time slice and in both languages.", "These distributions can be compared between two periods or two languages using the Jensen-Shannon divergence (JSD, Lin, 2006).", "After applying the described systems to a bilingual corpus divided into T time slices, for a given target word in a given language l, we obtain either a sequence of T embeddings u_l^(t) in each language (for CBOW and M-BERT with averaging), or a vector of T cluster distributions c_l^(t) (for M-BERT with clustering).", "We compute the distance between representations: the cosine distance, defined as (1 − cosine similarity), between non-contextual embeddings, and the JSD between cluster distributions: d(t_1, t_2, l_1, l_2) = cos(u_{l_1}^(t_1), u_{l_2}^(t_2)) (averaging or CBOW), or JSD(c_{l_1}^(t_1), c_{l_2}^(t_2)) (clustering) (1).", "In a monolingual setting, we use two metrics commonly used to measure the drifts of a word in each language (Rodina et al., 2019): the incremental
drift, from each time slice to the next one, and the inceptive drift, from the beginning of the period to each time slice.", "We obtain drift vectors in R^(T-1) for each word in each language, by computing d(t_1, t_2, l, l).", "In a bilingual setting, drift measures can be computed for each word pair (one word and its translation).", "First, we compute the distance inside each word pair at each time step.", "We call it the bilingual distance: s_B^(t) = d(t, t, l_1, l_2) for t = 1, 2, ..., T.", "Second, the temporal drift of this distance is measured similarly to the monolingual drift, either incrementally or inceptively.", "The drift is the norm of the difference between the bilingual distance s_B^(t) at two time steps, measuring the divergence of the usage of a word and its translation.", "We call it the bilingual divergence.", "For example, the incremental bilingual divergence is computed as follows: D_B^incr = ( |s_B^(0) − s_B^(1)|, |s_B^(1) − s_B^(2)|, ..., |s_B^(T−1) − s_B^(T)| ) (2).", "Various information can be extracted from the vector of bilingual divergence D_B of a word: the trend (no trend, i.e. stable distance between a word and its translation; decreasing, i.e. convergence; or increasing, i.e. divergence), the degree of divergence (e.g.
by summing all its elements), and the speed of divergence (by estimating the slope).", "The study of semantic change faces the issue of evaluation, as few labeled corpora exist for this task.", "Recent initiatives from the NLP community have started to produce more annotated data (Schlechtweg et al., 2020); however, no corpus is available for bilingual analysis.", "Consequently, we generate a corpus of bilingual synthetic semantic change, following common practice in the literature on monolingual semantic change detection (Shoemark et al., 2019; Schlechtweg and Schulte im Walde, 2020).", "It allows us to control exactly the shape and degree of semantic change in the corpus and thus gain a deeper understanding of the impact of each modeling choice.", "To create synthetic semantic change, common practice involves merging two words that do not share a common sense, creating a pseudo-word, and then generating synthetic change by controlling the proportion of sentences using each of the two original words in the successive time slices of a temporal corpus (Rosenfeld and Erk, 2018; Shoemark et al., 2019).", "However, as advised by Schlechtweg and Schulte im Walde (2020), it is preferable to use the natural polysemy of words for the synthetic drift to be as close as possible to reality: instead of controlling the proportion of sentences containing two unrelated words merged as a pseudo-word, we use sentences containing several senses of a unique word.", "To this end, we need a bilingual sense-annotated corpus with consistent annotations between languages (Pasini and Camacho-Collados, 2020).", "The EuroSense corpus (http://lcl.uniroma1.it/eurosense/) (Delli Bovi et al., 2017) is derived from the Europarl corpus, a large public corpus of proceedings of the European Parliament.", "It has a full and a refined version.", "We use the latter to build our synthetic corpus; it is half the size of the former but more reliable.", "The framework BabelNet (Navigli and Ponzetto, 2012) is used for annotation.", "EuroSense
contains parallel text in 21 European languages.", "We focus on the two languages with the highest amount of annotations in the refined corpus: English and French.", "An example of aligned sentences in these languages can be found in Table 1 (English side: 'The best tools for this are liberalisation and freer competition, which causes train companies to take a greater interest in the wishes of customers.').", "In order to generate and capture variations of distributions of word senses through time and across two languages, we define several scenarios of word usage variations.", "First, we choose two monolingual scenarios of semantic change (labeled M) and generate them using sentences extracted from the EuroSense corpus.", "Assuming we have a target word with at least two senses, the scenarios are: M0: all senses are fully stable; M1: one sense gradually appears or disappears through time, while the others remain stable.", "Second, we define scenarios of semantic divergence (bilingual scenarios, labeled B) derived", "from the monolingual scenarios.", "Assuming we have a target word w_1 and its translation w_2 with at least two senses in common: B0: w_1 and w_2 are M0.", "B1: w_1 is M0, w_2 is M1.", "B2: w_1 and w_2 are the same M1 (they gain/lose the same sense, drifting in the same direction).", "B3: w_1 and w_2 are different M1 (one gains/loses one sense, the other gains/loses another sense: they diverge).", "These 4 scenarios can be linked with distinct phenomena.", "Examples of words for each of them, extracted from a bilingual English-French corpus of newspaper articles spanning 20 years, can be found in Table 3.", "First, scenario B0 deals with words which have a stable meaning and an equivalent word with equally stable meaning in the other language (e.g.
dinosaurs).", "Scenario B1 can be caused by a word being borrowed from one language to another: a loanword.", "After the borrowing, its usage can evolve, for example due to sociocultural specificities impacting the second language, while it stays stable in the source language.", "Similarly, an example of the B3 scenario is cognate words whose usage evolves in their respective languages, diverging into false friends.", "For example, the English noun affair has a common etymology with old French and used to mean 'what one has to do, ordinary business'.", "Its usage evolved across time, gaining in English the new sense of 'a love relationship, usually secret', while in French it often refers to 'a business case'.", "The word ukrop presented in the introduction is also an example of the B3 scenario.", "Finally, scenario B2 deals with words that go through the same semantic change as their equivalent in another language.", "Among other phenomena, a common cause is when a language evolution is triggered by a cultural or technological change that is common to the societies speaking the two languages.", "For example, the sense of the word confinement related to the pandemic became the majority meaning in many languages worldwide following the COVID-19 pandemic.", "For all the sense-annotated lemmas in English and French in EuroSense, we extract their sets of senses.", "We only keep the senses with more than 200 occurrences per language.", "We associate English and French lemmas together if they have at least two senses in common, creating a bilingual dictionary.", "From these lemma pairs, we extract the set of sentences annotated with one of the senses in common to build the pool of sentences for the next step.", "In total, we have 115 English-French lemma pairs, of which 66 have 2 senses (low polysemy) and 49 have between 3 and 5 senses.", "For example, a low-polysemy lemma pair is (project, projet) and a high-polysemy one is (measure, mesure).", "Step 2: creation of sense
distributions.", "For each monolingual scenario, we create probability distributions of senses at each time slice.", "We denote by p(S | T, W, L) the probability that the lemma W conveys the sense S at time T in language L.", "We generate T = 10 time slices and apply each scenario to all the target lemma pairs.", "Since our variables are discrete, for a given lemma w in language l, the probability distribution of a set of 2 senses {s_1, s_2} over time can be characterised by a T × 2 stochastic matrix whose rows sum to 1, with row t containing (p(s_1 | T = t, w, l), p(s_2 | T = t, w, l)) for t = 1, 2, ..., T.", "For a given target lemma, for the M0 scenario, we randomly draw an initial distribution over the set of senses and repeat it at each time slice: p(S | T = t, w, l) = p(S | T = 1, w, l) for t = 2, 3, ..., T.", "For the M1 scenario, we gradually increase or decrease the probability of appearance through time of one of the senses, either linearly or logarithmically, following Shoemark et al.
(2019).", "The other senses have a stable distribution across time.", "For each monolingual scenario, we build the synthetic corpus time slice after time slice, using the set of target lemmas, the pool of sense-annotated sentences and the generated distributions of senses.", "For each target lemma, at each time step t, we sample 200 sentences for each of its senses.", "Then, we add each sampled sentence to time step t with the probability specified in the corresponding distribution of senses of the scenario.", "To prevent the synthetic sense distribution of a target lemma from being disturbed by noise from its appearances as a context word in other sentences, when adding a sentence to the synthetic corpus, we attach the suffix l to its target lemma.", "Note that the 200 sentences sampled for each sense of a lemma can appear only once in each time slice, but can appear in other time slices of the corpus.", "All the bilingual scenarios are built from the monolingual ones.", "Generating them reduces to using the right monolingual scenarios for each word and its translation.", "For example, in the B3 scenario, we generate a corpus using the M1 scenario for both the target lemma and its translation, but select a different sense to appear or disappear in order to induce a divergence.", "The synthetic corpora, for each scenario and each language, have around 7.5M words distributed into the 10 time slices.", "To sum up, at each time t, a word w in a language l is characterised by its sense distribution in the synthetic corpus, p(S | t, w, l).", "This information is similar to the cluster distributions extracted when applying clustering to contextualised embeddings; we can compute the drift measures defined in Section 5, using the JSD to compare the sense distributions.", "The drifts obtained from these measures can then be used as a gold standard for the evaluation of our systems.", "For each system described in Sections 3 and 4 and for each target lemma pair, we output the
vectors of monolingual drift computed on the monolingual scenario synthetic corpora and the vectors of bilingual divergences computed for the bilingual scenarios (see Section 5).", "Table 2: Accuracy of each system for the different semantic change scenarios, with two accuracy values per scenario (one per drift measure), in the order M0 (stable), M1 (drift), B0 (both stable), B1 (stable & drift), B2 (same drift), B3 (diverge). CBOW incremental: 0.65/0.16, 0.54/0.96, 0.87/0.82, 0.66/0.46, 0.76/0.68, 0.63/0.47. CBOW independent: 0.84/0.83, 0.63/0.86, 0.83/0.89, 0.70/0.45, 0.80/0.66, 0.67/0.50. BERT averaging: 0.86/0.87, 0.34/0.55, 0.84/0.90, 0.79/0.4, 0.71/0.69, 0.63/0.47. BERT k-means (k = 5): 0.85/0.86, 0.61/0.19, 0.86/0.97, 0.78/0.41, 0.77/0.91, 0.66/0.40.", "We wish to evaluate whether these series have the same trend as the gold standard.", "For this, we use the Mann-Kendall (MK) Trend Test (Mann, 1945; Kendall, 1975), a non-parametric statistical test used to detect trends of variables.", "It is particularly suited to monotonic trends, which is how we designed the semantic drift in our data.", "The null hypothesis of the test is the absence of monotonic trend.", "The Mann-Kendall test statistic Z_MK relies on comparing every value in the time series with all the values preceding it.", "The sign of the test statistic indicates the trend of the data, given a significance level of 0.05: no monotonic trend (the null hypothesis), increasing trend (Z_MK > 0), or decreasing trend (Z_MK < 0).", "For a given target lemma, if the direction of the detected trend in our data is the same as the one from the gold standard drift, we consider that the semantic change has been correctly identified.", "We compute the accuracy as the proportion of correctly identified trends in the full list of target lemmas.", "We compare the accuracy of our systems on the synthetic corpora generated in the previous section.", "CBOW processing.", "As we rely on stopwords (on top of frequent words) for the alignment, we do not discard them during preprocessing.", "The context size is set to 5 words, and
the dimension of word embeddings to 50.", "Preliminary experiments with larger embedding dimensions exhibited no significant improvement.", "We posit this is due to the small size of the dataset.", "Moreover, the accuracy of incremental fine-tuning of CBOW embeddings for semantic change detection is very sensitive to dimensionality (Kaiser et al., 2020); the optimal embedding dimension is usually quite low, with a clear drop in performance with high embedding dimensions.", "We train all models using 10 epochs.", "For each language, a static model is first trained on the set of all sentences containing the target lemmas.", "Then, we proceed with incremental and independent training.", "BERT processing.", "We use the pre-trained bert-base-multilingual-uncased model from the transformers library.", "We extract the contextualised embeddings from the corpus and apply the two aggregation methods, averaging and clustering.", "We choose k = 5 clusters for k-means, as it is the maximum number of senses that can be found in our list of target lemmas.", "Experiments with higher values of k did not improve the accuracy.", "We remove the l suffix of the target lemmas before extracting their embeddings.", "Table 2 summarises the accuracies measured using the Mann-Kendall trend test (Hussain and Mahmud, 2019) on the 115 lemma pairs.", "It compares the trend of the drift of all systems with the gold standard trend, for each scenario.", "We have three scenarios with stable monolingual drift or stable bilingual divergence (M0 and B0, with all the senses being stable; and B2, where a word and its translation drift in the same direction) and three drifting scenarios (M1 and B1, where one sense drifts; and B3, where a word and its translation drift in different directions).", "The results show that stable scenarios are generally easier to detect accurately compared to the changing ones, especially in the monolingual analysis.", "The best results are obtained with BERT
using k-means clustering.", "This system focuses on the variation in the proportions of the different usages, instead of the evolution of the average word representation; it provides a better focus on the meaningful changes in word usage.", "In the case of CBOW, independent training leads to better performance than incremental training.", "This is in line with the findings of Shoemark et al. (2019): the large amount of training updates, especially in such a small corpus, is harmful to the quality of the representation.", "Overall, the inceptive drift measure leads to better accuracy for stable scenarios, while the incremental drift is more suited to scenarios where the sense distributions change across time.", "Thus, we advise always computing both measures in diachronic studies.", "We analyse the semantic divergence of word-translation pairs in a bilingual corpus of news articles.", "Our goal is to classify all words of a bilingual lexicon into the semantic divergence scenarios defined in Section 6.1.", "The New York Times Annotated Corpus (Sandhaus, 2008) gathers around 1 855 000 articles from January 1987 to June 2007.", "We scrape Le Monde, one of the most read daily newspapers in France, over the same time period.", "We divide both corpora into T = 20 yearly time steps, as a trade-off between getting precise information on semantic drift thanks to a fine granularity and reducing the noise that appears when the granularity is too fine.", "Finally, we select a vocabulary containing the V = 40 000 most frequent words for each corpus.", "The average number of words is around 3.5 M for one time step in the French corpus and 9 M in the English one.", "First, a bilingual lexicon is built using the intersection of the MUSE bilingual dictionary with the French and English vocabularies from our corpora.", "We manually update the bilingual lexicon with domain-specific vocabulary such as named entities, in order to improve the coverage on the corpora.", "The final bilingual
dictionary has 27 351 words.", "To obtain bilingual diachronic embeddings, we use CBOW with incremental training.", "Indeed, even though BERT with k-means clustering leads to better results overall on synthetic corpora, the extraction of each token embedding and the clustering step are computationally heavy.", "Moreover, in a large corpus such as ours, saving in memory as many embedding vectors as occurrences of words from the bilingual lexicon is not feasible.", "Thus, the clustering method is more suited for a fine-grained analysis of the divergence of senses of a limited set of target words, rather than an exploratory analysis on the full vocabulary.", "The experimental setup is the same as the one used on the synthetic corpus; the volume of data being much higher in the newspaper corpus, we increase the capacity of our model by setting the dimension of CBOW embeddings to 100, in order to retain more information.", "We pre-train CBOW models on the English and French corpora and normalise the embeddings to prepare for the alignment.", "The French corpus being the smaller of the two, its embeddings are mapped to the English embedding space.", "Then, we incrementally update the aligned embeddings on both corpora.", "For each word of the bilingual vocabulary, we compute its monolingual drift and its bilingual divergence, following the methodology applied on the synthetic corpora.", "It allows us to identify the words belonging to each of the bilingual scenarios.", "On top of classifying all words into the different bilingual divergence scenarios, we quantify the degree of divergence by summing up the elements of the vectors of inceptive drift and of inceptive bilingual divergence respectively.", "The proportion of each scenario as well as examples selected among the words with the most extreme drifts are in Table 3.", "For example, words belonging to scenario B3 have the highest monolingual drifts in both the English and French corpora, while their bilingual divergence is
among the lowest.", "Words that are stable in both languages (B0) are mostly daily life words (e.g. mayonnaise).", "Words that drift in the same direction in both languages (B2) are concepts related to technology and society that are common to English and French culture (e.g. renewable); while the words that diverge between the two languages (B1-fr (English stable, French drifting), B1-en and B3) belong to more culture-specific concepts (e.g. francs) or controversial topics (e.g. terrorist).", "For example, francs drifts in French, while it is stable in English.", "This is probably due to the change of currency in France in 2002, which had much lower media coverage in the US.", "Similarly, terrorist drifts in both languages but in different directions.", "Indeed, the two countries went through many terrorist attacks during the period under study, but from very different groups, leading to different contexts for this word.", "Table 3 reports the proportion and example words for the different categories of bilingual divergence: B0 (both stable), 58.2%, e.g. dinosaurs, pottery, anniversaries, mayonnaise, joke; B1-fr (English stable, French drifting), 15.5%, e.g. reforms, delinquency, francs, feminine, provincial; B1-en (English drifting, French stable), 16.2%, e.g. bush, horrific, maid, hostages, dealers; B2 (same drift), 4.9%, e.g. genomics, renewable, condom, cinemas, robotic; B3 (different drifts), 5.2%, e.g. steroid, rockets, gay, katrina, terrorist. Overall, the exploratory results on the bilingual newspaper corpora offer interesting insights on perspectives for many applications; both for long-term", "semantic change, studying the joint evolution of cognate words and borrowings; and for short-term change in word usage, for example when studying the disparity in the media resonance of an event in different countries.", "In this paper, we define an experimental framework to measure and classify the semantic divergence of a word and its translation in a bilingual corpus.", "We compare different kinds of word embeddings on various bilingual divergence
scenarios generated in a synthetic corpus.", "We apply our conclusions to a bilingual newspaper corpus to identify words undergoing different types of semantic divergence.", "BERT embeddings coupled with a clustering step lead to the best performance on synthetic corpora.", "The performance of CBOW embeddings is nevertheless very competitive, and they are better suited to an exploratory analysis on a large corpus.", "There is a large margin for future work, be it in terms of the quality of diachronic bilingual representations, the metrics used to measure semantic divergence, or the evaluation method.", "Our evaluation focuses on the trend of the drift, but its degree and its speed can also be quantified and analysed.", "In addition, the underlying bilingual representation learning approach is key for the detection of drifts.", "The transformations applied to create a cross-lingual word embedding space might result in information loss or the generation of spurious drifts in the embeddings.", "To compare word embeddings with the purpose of detecting semantic divergence, the anchored alignment method presented here is not the only option; promising candidates are Temporal Referencing (Schlechtweg et al., 2019) and the Global Anchor method (Yin et al., 2018).", "A limitation of our work is the use of an injective (one-to-one) mapping to define word pairs.", "In his Course in General Linguistics, de Saussure (1916) states that there is no bijective relationship between words in different languages.", "The different meanings and uses of a word in a language cannot have a perfectly identical equivalent in another language.", "Moreover, as noted by Frossard et al.
(2020), a word can have synonyms in one language while the word bearing the same meaning in another language has none; in that case, the usage of the word in the first language is divided among all its synonyms.", "Another limitation is evaluation with synthetic data.", "This method is common in monolingual semantic change analysis, but there is no guarantee that the generated phenomena are similar to real-world data.", "For example, a degree of freedom is the shape of the synthetic drifts generated.", "In this paper, we used logarithmic and linear shapes; but some literature hints that a logistic shape is also a good match for semantic drift (Bailey, 1973; Blythe and Croft, 2012).", "Furthermore, in real data the granularity (the size of the periods used to divide the corpus) might have an important impact on the shape of the semantic evolution.", "Finally, as we build all bilingual scenarios from combinations of two monolingual scenarios, the flaws of the monolingual scenarios are inherited by the bilingual scenarios.", "It can potentially multiply the noise by propagation of uncertainty.", "We wished to overcome the limitations of synthetic evaluation with the application to real corpora, but more thorough interpretation would be necessary for a solid qualitative evaluation.", "To perform quantitative evaluation on real data, an annotated dataset similar to the ones for monolingual semantic change (e.g. Schlechtweg et al., 2020) would be necessary.", "However, the annotation task would be even more complex than for monolingual data.", "An easier entry point for annotating data for this task could be loanwords and cognate words.", "Overall, this is a challenging task and we hope to attract more people to work on it in the future." ]
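The anchored Orthogonal Procrustes alignment and the cosine-distance drift measure described in the sentences above can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions (mean-centred anchor matrices, the closed-form SVD solution for the orthogonal map), not the authors' implementation:

```python
import numpy as np

def procrustes_align(E1, E2):
    """Orthogonal Procrustes: find the orthogonal map W minimising
    ||E1 @ W - E2||_F over the anchor-word embeddings.
    E1, E2: (n_anchors, dim) arrays, assumed mean-centred beforehand."""
    U, _, Vt = np.linalg.svd(E1.T @ E2)
    return U @ Vt

def cosine_distance(u, v):
    # Drift measure for CBOW / BERT-averaging: 1 - cosine similarity.
    return 1.0 - float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Sanity check: aligning an exactly rotated copy of a space recovers the
# rotation, so mapped anchors coincide with their targets.
rng = np.random.default_rng(0)
E1 = rng.normal(size=(50, 8))
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # a random orthogonal matrix
E2 = E1 @ Q
W = procrustes_align(E1, E2)
print(np.allclose(E1 @ W, E2, atol=1e-8))  # True: the rotation is recovered
```

In the bilingual pipeline, W would be estimated on the seed dictionary (stopwords plus frequency-matched pairs) only, then applied to the whole source-language space before diachronic fine-tuning.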
[ "abstain", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "objective", "method", "method", "method", "method", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", 
"method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Contextual sequence mapping is one of the fundamental problems in Natural Language Processing.", "Instead of relying solely on the information presented in a text, the learning agents have access to a strong external signal given to assist the learning process.", "In this paper, we propose a novel family of Recurrent Neural Network units: the Context-dependent Additive Recurrent Neural Network (CARNN), designed specifically to leverage this external signal.", "The experimental results on public datasets in the dialog problem (Babi dialog Task 6 and Frame), contextual language modelling (Switchboard and Penn Discourse Tree Bank) and question answering (TrecQA) show that our novel CARNN-based architectures outperform previous methods.", "Sequence mapping is one of the most prominent classes of problems in Natural Language Processing (NLP).", "This is due to the fact that written language is sequential in nature.", "In English, a word is a sequence of characters, a sentence is a sequence of words, a paragraph is a sequence of sentences, and so on.", "However, understanding a piece of text may require far more than just extracting the information from that piece itself.", "If the piece of text is a paragraph of a document, the reader may have to consider it together with other paragraphs in the document and the topic of the document.", "To understand an utterance in a conversation, the utterance has to be put into the context of the conversation, which includes the goals of the participants and the dialog history.", "Hence the notion of context is an intrinsic component of language understanding.", "Inspired by recent work in dialog systems (Seo et al., 2017; Liu and Perez, 2017), we formalize the contextual sequence mapping problem as a sequence mapping problem with a strong controlling contextual element that regulates the flow of information.", "The system has two sources of signals:", "(i) the main text input, for example, the history utterance sequence in 
dialog systems or the sequence of words in language modelling; and", "(ii) the context signal, e.g., the previous utterance in a dialog system, the discourse information in contextual language modelling or the question in question answering.", "Our contribution in this work is two-fold.", "First, we propose a new family of recurrent units, the Context-dependent Additive Recurrent Neural Network (CARNN), specifically constructed for contextual sequence mapping.", "Second, we design novel neural network architectures based on CARNN for dialog systems and contextual language modelling, and enhance the state-of-the-art architecture (IWAN (Shen et al., 2017)) on question answering.", "Our novel building block, the CARNN, draws inspiration from the Recurrent Additive Network (Lee et al., 2017), which showed that most of the non-linearity in the successful Long Short Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997) is not necessary.", "In the same spirit, our CARNN unit minimizes the use of non-linearity in the model to facilitate the ease of gradient flow.", "We also seek to keep the number of parameters to a minimum to improve trainability.", "We experiment with our models on a broad range of problems: dialog systems, contextual language modelling and question answering.", "Our systems outperform previous methods on several public datasets, which include the Babi Task 6 (Bordes and Weston, 2017) and the Frame dataset (Asri et al., 2017) for dialog, the Switchboard (Jurafsky et al., 1997) and Penn Discourse Tree Bank (Miltsakaki et al., 2004) for contextual language modelling, and the TrecQA dataset (Wang et al., 2007) for question answering.", "We propose a different architecture for each task, but all models share the basic building block, the CARNN.", "Notation.", "As our paper describes several architectures with vastly different setups and input types, we introduce the following notation to maintain consistency and improve readability.", "First, 
the m-th input to the recurrent unit will be denoted e_m.", "In language modelling, e_m is the embedding of the m-th word; while in dialog, it is the embedding of the m-th utterance (which is a combination of the embeddings of the words inside the utterance, x_{m1} . . . x_{mM_m}).", "All the gates are denoted by g, and all the hidden vectors (outputs of the RNN) are denoted by h.", "The W's and b's are the RNN's parameters, σ denotes the sigmoid activation function, and ⊙ denotes the element-wise product.", "LSTM.", "The Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) is arguably one of the most popular building blocks for RNNs.", "The main components of the LSTM are three gates: an input gate g_im to regulate the information flow from the input to the memory cell c_m, a forget gate g_fm to regulate the information flow from the previous time step's memory cell c_{m−1}, and an output gate g_om that regulates how the model produces the outputs (hidden state h_m) from the memory cell c_m.", "The computations of the LSTM are as follows: c̃_m = tanh(W_ch h_{m−1} + W_cx e_m + b_c); g_im = σ(W_ih h_{m−1} + W_ix e_m + b_i); g_fm = σ(W_fh h_{m−1} + W_fx e_m + b_f); g_om = σ(W_oh h_{m−1} + W_ox e_m + b_o); c_m = g_im ⊙ c̃_m + g_fm ⊙ c_{m−1}; h_m = g_om ⊙ tanh(c_m). (1) RAN.", "The Recurrent Additive Network (RAN) (Lee et al., 2017) is an improvement over the traditional LSTM.", "However, there are three major differences between the two.", "First, RAN simplifies the output computations by removing the output gate.", "Second, RAN simplifies the memory cell computations by removing the direct dependency between the candidate update memory cell c̃_m and the previous hidden vector h_{m−1}.", "Finally, RAN removes the non-linearity from the transition dynamic of the RNN by removing the tanh non-linearity from c̃_m.", "The equations for RAN are as follows: c̃_m = W_cx e_m; g_im = σ(W_ih h_{m−1} + W_ix e_m + b_i); g_fm = σ(W_fh h_{m−1} + W_fx e_m + b_f); c_m = g_im ⊙ c̃_m + g_fm ⊙ c_{m−1}; h_m = s(c_m), (2) where s can be an identity function (identity RAN) or the tanh activation function (tanh RAN).", "As shown in (Lee et al., 2017), RAN's memory cells c_m can be decomposed into a weighted sum of the inputs.", "Their experimental results show that RAN performs as well as LSTM for language modelling, while having significantly fewer parameters.", "In this section, we describe our novel recurrent units for the context-dependent sequence mapping problem.", "Our RNN units use a different gate arrangement than that used by RAN.", "However, if we consider a broader definition of identity RAN, i.e., an RNN where hidden unit outputs can be decomposed into a weighted sum of inputs, where the weights are functions of the gates, then our first CARNN unit (nCARNN) can be viewed as an extension of identity RAN with additional controlling context.", "The next two CARNN units (iCARNN and sCARNN) further simplify the nCARNN unit to improve trainability.", "The main components of our recurrent units are the two gates (an update gate g_u and a reset gate g_f), which jointly regulate the information from the input.", "The input vector, after being pushed through an affine transformation, is added to the previous hidden vector h_{m−1}.", "The computations of the unit are as follows: g_um = σ(W_cu c + W_hu h_{m−1} + W_eu e_m + b_u); g_fm = σ(W_cf c + W_hf h_{m−1} + W_ef e_m + b_f); ẽ_m = W_e e_m + b_e; h_m = g_um ⊙ (g_fm ⊙ ẽ_m) + (1 − g_um) ⊙ h_{m−1}. (3) Figure 1: Context-dependent Additive Recurrent Neural Network.", "where c is the representation of the global context.", "Apart from the non-linearity in the gates, our model is a linear function of the inputs.", "Hence, the final hidden layer of our RNN, denoted as h_M, is a weighted sum of the inputs and a bias term B_i (Equation 4), where the weights are functions of the gates and W_e is a dimension reduction matrix.", "h_M = g_uM ⊙ g_fM ⊙ ẽ_M + (1 − g_uM) ⊙ h_{M−1} = Σ_{i=1..M} (g_ui ⊙ g_fi ⊙ Π_{j=i+1..M} (1 − g_uj)) ⊙ ẽ_i = Σ_{i=1..M} [(g_ui ⊙ g_fi ⊙ Π_{j=i+1..M} (1 − g_uj)) ⊙ W_e e_i + B_i] (4) From the decomposition in Equation 4, it seems that the outputs of an RNN with the nCARNN unit can be efficiently computed in parallel.", "That is, we can compute the weight for each input in parallel, and take their weighted sum to produce any desired hidden vector output.", "However, there is one obstacle: since the gates are functions of the previous hidden states, they still need to be computed sequentially.", "But if we assume that the external controlling context c is strong enough to regulate the flow of information, we can remove the previous hidden state (local context h_{m−1}) from the gate computations, and make the RNN computations parallel.", "The next two variants of CARNN implement this idea by removing the local context from gate computations.", "The Gated Recurrent Unit (GRU) (Chung et al., 2014) and LSTM networks use a local context (the previous hidden state h_{m−1}) and the current input to regulate the flow of information.", "In contrast, our model relies on the global controlling context c at every step, and thus might not need the local context h_{m−1} at all.", "Removing the local context can reduce the computational complexity of the model, but it may result in a loss of local sequential information.", "To test the effectiveness of this trade-off, we propose another variant of our unit, the independent gate CARNN (iCARNN), where the gate computations are simplified, and the gates are functions of the controlling context and the inputs.", "This formulation of CARNN is formally defined as follows.", "g_um = σ(W_cu c + W_eu e_m + b_u); g_fm = σ(W_cf c + W_ef e_m + b_f); ẽ_m = W_e e_m + b_e; h_m = g_um ⊙ (g_fm ⊙ ẽ_m) + (1 − g_um) ⊙ h_{m−1}. (5) Compared to the traditional RNN, iCARNN's gate computations do not take into 
account the sequence context, i.e., the previous hidden vector computations, and the gates at all time steps can be computed in parallel.", "However, iCARNN, unlike memory network models (Sukhbaatar et al., 2015; Liu and Perez, 2017), still retains the sequential nature of RNN.", "This is because even though the gates at different time steps do not depend on each other, the hidden vector output at the m-th time step h_m depends on the previous gate (g_{um−1}), and hence on the previous input.", "The standard GRU and the LSTM employ a linear transformation on the input representation before it is incorporated into the hidden representation.", "We have followed this convention with the previous variants of our unit.", "Although this transformation improves the dimensional flexibility of the input/output vectors, and adds representational power to the model with additional parameters, it also increases computational complexity.", "Fixing the output dimension to be the same as the input dimension makes it possible to reduce the computational complexity of the model.", "This leads us to propose another variant of the CARNN where the candidate update ẽ_m is the original embedding of the current input (Equation 6).", "We call this variation the simplified candidate CARNN (sCARNN).", "The combination of lower gate computational complexity and parallelizability allows the parallelized sCARNN version to be 30% faster (30% lower training time per epoch) than nCARNN in the question answering and dialog experiments, and 15% faster in the language model experiment.", "The sCARNN is formally defined as follows.", "sCARNN can still be decomposed into a weighted sum of the sequence of input elements, and retains the parallel computation capability of the iCARNN.", "h_M = g_uM ⊙ g_fM ⊙ e_M + (1 − g_uM) ⊙ h_{M−1} = Σ_{i=1..M} (g_ui ⊙ g_fi ⊙ Π_{j=i+1..M} (1 − g_uj)) ⊙ e_i (7) 4 CARNN-based models for NLP problems In this section, we explain 
the details of our CARNN-based architectures for end-to-end dialog, language modelling and question answering.", "In each of these applications, one of the main design concerns is the choice of contextual information.", "As we will demonstrate in this section, the controlling context c can be derived from various sources: a sequence of words (dialog and question answering) or a class variable (language modelling).", "Virtually any source of strong information that can be encoded into vectors can be used as the controlling context.", "To produce a response, we first encode the whole dialog history into a real vector representation h_his.", "To this end, we perform two steps: first, we encode each utterance (sequence of words) into a real vector, and next, we encode this sequence of real vector representations into h_his.", "We employ the Position Encoder (Bordes and Weston, 2017) for the first step, and CARNNs for the second step.", "Summarizing individual utterances.", "Let's denote the sequence of word embeddings in the m-th utterance x_{m1}, . . ., x_{mN_m}.", "These word embeddings are jointly trained with the model.", "Following previous work in end-to-end dialog systems, we opt to use the Position Encoder (Liu and Perez, 2017; Bordes and Weston, 2017) for encoding utterances.", "The Position Encoder is an improvement over the average embedding of a bag of words, as it takes into account the position of the words in a sequence.", "This encoder has been empirically shown to perform well on the Babi dialog task (Liu and Perez, 2017; Bordes and Weston, 2017); more details about the Position Encoder can be found in (Sukhbaatar et al., 2015).", "Let's denote the embeddings of a sequence of utterances e_1, . . ., e_{M−1}.", "Summarizing the dialog history.", "The CARNN models take the embeddings of the sequence of utterances and produce the final representation h_his.", "We further enhance the output of the CARNN by adding the residual connection to the input (He et al., 2016; Tran et al., 2017), and the attention mechanism (Bahdanau et al., 2015) over the history.", "h_1, .., h_{M−1} = CARNN(e_1, .., e_{M−1}, c);", "∀m ∈ [1..M−1]: h_m = h_m + e_m;", "α_1, .., α_{M−1} = softmax(h_1^T c, .., h_{M−1}^T c);", "h_his = Σ_{m=1..M−1} α_m h_m, (8) where α are the attention weights, h_m is the m-th output of the base CARNN, e_m is the embedding of the m-th input utterance, and c = e_M is the context embedding.", "Our model chooses the response from a set of pre-determined system answers (a task setup following Bordes and Weston (2017); Liu and Perez (2017); Seo et al. (2017)).", "However, in the dialog case, the answers themselves are sequences of words, and treating them as distinct classes may not be the best approach.", "In fact, previous work in memory networks (Liu and Perez, 2017; Bordes and Weston, 2017) employs a feature function to extract features from the candidate responses.", "In our work, we do not use any feature extraction, and simply use the Position Encoder to encode the", "responses, as shown in Figure 2, which depicts our CARNN architecture for dialog.", "∀l ∈ [1..L]: e_l = PositionEncoder(y_l^c). (9) We then put a distribution over the candidate responses conditioned on the summarized dialog history h_his (Equation 10).", "Typically, language models operate at the sentence level, i.e., the sentences are treated independently.", "Several researchers have explored inter-sentence and inter-document level contextual information for language modelling (Ji et al., 2016a,b; Tran et al., 2016; Lau et al., 2017).", "Following Ji et al. 
(2016a,b), we investigate two types of contextual information:", "(i) the previous sentence context; and", "(ii) a latent variable capturing the connection information between sentences, such as the discourse relation in the Penn Discourse Tree Bank dataset or the Dialog Acts in the Switchboard dataset.", "Previous sentence context.", "The previous sentence (time-step t−1) contextual information is encoded by a simplified version of the nCARNN, where the global context is absent.", "The final hidden vector of this sequence is then fed into the current recurrent computation (time-step t) as the context for that sequence.", "Equation 11 shows this procedure.", "c^{t−1} = nCARNN(e_1^{t−1}, .., e_{M_{t−1}}^{t−1});", "h_1^t,", "..", "h_{M_t}^t = CARNN(e_1^t, .., e_{M_t}^t, c^{t−1}); w_{m+1}^t ∼ softmax(W^(l) h_m^t + b^(l)) (11)", "Latent variable context.", "Ji et al. (2016b) proposed to embed the predicted latent variables using an embedding matrix, and use this real vector as the contextual information.", "In our work, we design a multi-task learning scenario where the previous sentence context encoder has additional supervised information obtained from the annotated latent variable (L^{t−1}).", "This additional information from the latent variable is only used to train the previous sentence encoder, and enhance the context c^{t−1} (Equation 12).", "During test time, the language model uses the same computation steps as the previous sentence context version.", "During training, the total loss function (L_{l,w}^t) is the linear combination of the average log-loss from the current sentence's words (L_w^t) and the log-loss from the previous latent variable (L_l^{t−1}).", "where λ is a linear mixing parameter.", "In our experiments, tuning λ does not yield significant improvements, hence we set λ =", "0.5.", "Answer selection is an important component of a typical question answering system.", "This task can be briefly described as follows: Given a question q and a candidate set of sentences c_1, c_2, . . ., c_n, the goal is to identify positive sentences that contain the answer.", "Many researchers have investigated employing neural networks for this task (Rao et al., 2016; Wang et al., 2017; Bian et al., 2017; Shen et al., 2017; Tay et al., 2017; He et al., 2015). Figure 3: CARNN for context-dependent language model.", "Below is an example from the answer selection TrecQA corpus: Question: Who established the Nobel prize awards?", "Positive answer: The Nobel Prize was established in the will of Alfred Nobel, a Swede who invented dynamite and died in 1896.", "Negative answer: The awards aren't given in specific categories.", "The IWAN model proposed in (Shen et al., 2017) achieves state-of-the-art performance on the clean version of the TrecQA dataset (Wang et al., 2007) for answer selection.", "In general, given two sentences, the model aims to calculate a score to measure their similarity.", "For each sentence, the model first uses a bidirectional LSTM to obtain a context-aware representation for each position in the sentence.", "The representations will later be utilized by the model to compute the similarity score of the two sentences according to the degree of their alignment (Shen et al., 2017).", "The original IWAN model employed an LSTM to encode the sentence pair into sequences of real vector representations.", "However, these sequences are independent, and do not take into account the information from the other sentence.", "In order to overcome this limitation, we enhance the IWAN model with a cross-context CARNN-based sentence encoder that replaces the bidirectional LSTM.", "When the cross-context CARNN sentence encoder processes a sentence, it takes the encoding of the other sentence, encoded by a Position Encoder, as the controlling context (Figure 4).", "Datasets.", "For the dialog experiments, we focus on two popular datasets for dialog: the Babi dataset (Bordes and Weston, 2017) and the Maluuba Frame dataset (Asri et al., 2017).", "In our main 
set of experiments for dialog, we use the original Babi task 6 dataset, and test on the end-to-end dialog setting (the same setting used by Seo et al. (2017); Bordes and Weston (2017); Liu and Perez (2017)).", "That is, the systems have to produce complete responses and learn the dialog behaviour solely from the ground truth responses without help from manual features, rules or templates.", "Apart from this main set of experiments, we apply our end-to-end systems as dialog managers and test on a slightly different setting in the next two sets of experiments.", "In the second set of experiments, we use our end-to-end systems as dialog managers.", "The only difference compared to the end-to-end dialog setting is that the systems produce templatized responses instead of complete responses.", "Our motivation for this dialog manager setting is that in our preliminary experiments with the Babi dataset, we found out that many of the classification errors are due to very closely related responses, all of which fit the corresponding context.", "We argue that if we treat the systems as dialog managers, then we can delexicalize and group similar responses.", "Thus following Williams et al. 
(2017), we construct a templatized set of responses.", "Among the Babi tasks, we focus mainly on task 6, which is based on real human-machine interactions.", "The other five Babi datasets comprise synthetically generated data.", "For example, all the responses similar to india house is in the west part of town will be grouped into name is in the loc part of town.", "The set of responses is reduced to 75 templatized responses.", "We call this new dataset Babi reduced.", "The third set of experiments is conducted on the Frame dataset.", "The general theme in this dataset is similar to that of the Babi task 6, but the responses in the Frame dataset are generally in free form, rather than being sourced from a limited set.", "Thus, we define a dialog task on the Frame dataset similar to the Babi reduced dialog task by simplifying and grouping the responses.", "The final set of responses consists of 129 response classes.", "For the experiments on the Frame dataset, we randomly choose 80% of the conversations as the training set, and 10% each for testing and development.", "Baselines.", "In the dialog experiments, we focus on the existing published results with end-to-end settings, namely the Memory Network (MN) (Bordes and Weston, 2017), the Gated Memory Network (GMN) (Liu and Perez, 2017) and the Query Reduction Network (QRN) (Seo et al., 2017).", "For the Frame and Babi reduced datasets, we use the publicly available implementation of the QRN, and our implementation of the GMN with hyperparameters similar to those reported by Liu and Perez (2017) and Seo et al. (2017).", "We use only one of the annotated Dialog acts and its first slot key as a template for the response.", "Williams et al. (2017) and Liu and Lane (2017) reported very strong performances (55.6% and 52.8% respectively) for the Babi dataset.", "However, these systems do not learn dialog behaviour solely from Babi's ground truth responses, and thus do not have end-to-end dialog setups.", "As stated in their papers, Williams et al. use hand-coded rules and task-specific templates, while Liu et al. employ the external users' goal annotations that are outside the Babi dataset.", "https://github.com/uwnlp/qrn (the QRN implementation). Table 1: Dialog accuracy on Babi and Frame among end-to-end systems (Babi / Babi reduced / Frame): nCARNN 51.3%* / 55.8%* / 27.4%*; iCARNN 52.0%* / 55.2%* / 28.5%*; sCARNN 50.9%* / 55.9%* / 25.7%*; CARNN voting 53.2%* / 56.9%* / 29.1%*; QRN (2017) 46.8% / 54.7% / 24.0%; GMN (2017) 47.4% / 54.1% / 23.6%; MN (2017) 41.1% / – / –.", "We do not have access to Williams et al. (2017)'s template set, thus the results in Babi reduced are not comparable to those obtained by Williams et al. (2017).", "Note that the original results presented by Seo et al. (2017) take into account partial matches (matching only a portion of the ground truth response), and hence cannot be directly translated into the standard response accuracy reported by other researchers (we have confirmed this with Seo et al. 
).", "For a direct comparison with the QRN, we use the evaluation settings employed in other papers (Liu and Perez, 2017; Sukhbaatar et al., 2015).", "Results and discussion.", "Table 1 shows the results of the end-to-end models for the dialog task.", "All the CARNN-based systems are implemented in Tensorflow (Abadi et al., 2015) with a hidden vector size of 1024.", "As seen in Table 1, our models achieve the best results, and within the variants of our models, the iCARNN either performs best or comes very close to the best on all datasets.", "Majority voting provides a significant boost to the performance of the CARNN models.", "Upon comparison with the baseline systems, CARNN models tend to perform better on instances which require the system to remember specific information through a long dialog history.", "In Figure 5, the user already mentioned that he/she wants to find a cheap restaurant, but the GMN and QRN seem to forget this information.", "(Figure 5 excerpt. U: im looking for a cheap restaurant. S: ...) We speculate that, due", "to the ease of training, CARNN models summarize the dialog history better, and allow for longer information dependency.", "The CARNN units were originally designed in the dialog context.", "During model calibration, we also tested two other CARNN versions in the dialog experiments, with both higher and lower complexity.", "The lower complexity CARNN version resembles sCARNN without the forget gate, and the higher complexity CARNN version resembles the LSTM unit with all three gates (forget, update and output), with the gates being modified from the original LSTM gates to be functions of the external contextual information.", "Neither of these versions performs as well as the three main CARNN versions (48.7% and 48.6% for the high- and low-complexity versions respectively in the Babi task).", "Datasets.", "We employ two datasets for the experiments with the contextual language model: the Switchboard Dialog Act corpus and the Penn Discourse Tree 
Bank corpus.", "There are 1155 telephone conversations in the Switchboard corpus, where each conversation has an average of 176 utterances.", "There were originally 226 Dialog Act (DA) labels in the corpus, but they are usually clustered into 42 labels.", "The Penn Tree Bank corpus provides discourse relation annotation between the spans of text.", "We used the data preprocessed by Ji et al. (2016b), where the explicit discourse relations are mapped into a dummy relation.", "Our data splits are the same as those described in the baselines (Ji et al., 2016a,b).", "Baselines.", "We compare our system with the Recurrent Neural Net (RNNLM) with LSTM unit (Ji et al., 2016a), the Document Contextual Language Model (DCLM) (Ji et al., 2016a) and the Discourse Relation Language Model (DRLM) (Ji et al., 2016b).", "Table 2: Perplexity on Penn Discourse Tree Bank / Switchboard: nCARNN (w/o latent) 96.95 / 30.17; iCARNN (w/o latent) 94.72 / 32.49; sCARNN (w/o latent) 87.39 / 31.50; nCARNN (with latent) 96.64 / 29.72; iCARNN (with latent) 94.16 / 32.16; sCARNN (with latent) 86.68 / 31.49; RNNLM (2016b) 117.8 / 56.0; DCLM (2016a) 112.2 / 45.3; DRLM (2016b) 108.3 / 39.6.", "The RNNLM's architecture is the same as that described in (Mikolov et al., 2013) with the sigmoid non-linearity replaced by LSTM.", "The DCLM exploits the inter-sentence context by concatenating the representation of the previous sentence with the input vector (context-to-context) or the hidden vector (context-to-output).", "The DRLM introduces the latent variable contextual models using a generative architecture that treats Dialog Acts or discourse relations as latent variables.", "Results and discussion.", "Table 2 shows the test set perplexities across the systems for the Penn Discourse Tree Bank and Switchboard datasets.", "Interestingly, in these experiments, the system with the least computational complexity, the sCARNN, performs best on Penn Discourse Tree Bank, and second best on Switchboard.", 
"Generally, we found that adding the Dialog Act/Discourse supervised signal in a multi-task learning scheme provides a boost to performance, but this improvement is small.", "Datasets.", "The TrecQA dataset (Wang et al., 2007) is a widely-used benchmark for answer selection.", "There are two versions of TrecQA: original and clean.", "The original TrecQA consists of 1,229 training questions, 82 development questions, and 100 test questions.", "Recently, researchers (Rao et al., 2016; Shen et al., 2017) developed a clean version, where they removed questions in the development and test sets with no answers or only positive/negative answers.", "This reduced the development and test sets' sizes to 65 and 68 questions respectively.", "Baselines.", "We compare the performance of our models with that of the state-of-the-art models on the clean version of the TrecQA dataset (Shen et al., 2017; Bian et al., 2017; Wang et al., 2017; Rao et al., 2016; Tay et al., 2017).", "We do not have access to the original implementation of IWAN, hence we use our implementation of the IWAN model as the basis for our models.", "Results and discussion.", "Table 3 shows the MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank) of our systems and the baselines.", "To the best of our knowledge, our systems outperform all previous systems on this dataset.", "Enhancing IWAN with cross-context CARNN statistically significantly improves performance.", "Among the variants, the iCARNN is the most consistent in both MAP and MRR.", "During our error analysis, we noted that the top answer returned by IWAN models with either LSTM or CARNNs is usually good.", "However, in many cases, lower-ranked answers returned by the LSTM model are not as good as those produced by the CARNN models.", "We show an example of this in Table 4.", "In this paper, we propose a novel family of RNN units which are particularly useful for the contextual sequence mapping problem: the CARNNs.", "Together with our 
neural net architectures, CARNN-based systems outperform previous methods on several public datasets for dialog (Frame and Babi Task 6), question answering (TrecQA) and contextual language modelling (Switchboard and Penn Discourse Tree Bank).", "In the future, we plan to investigate the effectiveness of CARNN units in other sequence modelling tasks.", "(Table 4 example) Question: During what war did Nimitz serve?" ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "result", "method", "result", "objective", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "objective", "other", "abstain", "other", "other", "other", "abstain", "abstain", "other", "method", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "objective", "abstain" ]
[ "We introduce a noisy channel approach for language model prompting in few-shot text classification.", "Instead of computing the likelihood of the label given the input (referred as direct models), channel models compute the conditional probability of the input given the label, and are thereby required to explain every word in the input.", "We use channel models for recently proposed few-shot learning methods with no or very limited updates to the language model parameters, via either in-context demonstration or prompt tuning.", "Our experiments show that, for both methods, channel models significantly outperform their direct counterparts, which we attribute to their stability, i.e., lower variance and higher worst-case accuracy.", "We also present extensive ablations that provide recommendations for when to use channel prompt tuning instead of other competitive methods (e.g., direct head tuning): channel prompt tuning is preferred when the number of training examples is small, labels in the training data are imbalanced, or generalization to unseen labels is required.", "Prompting large language models, by prepending natural language text or continuous vectors (called prompts ) to the input, has shown to be promising in few-shot learning (Brown et al., 2020).", "Prior work has proposed methods for finding better prompt (Shin et al., 2020; Li and Liang, 2021; Lester et al., 2021) or better scoring of the output from the model (Zhao et al., 2021; Holtzman et al., 2021).", "These studies directly predict target tokens to determine the prediction for an end task.", "Despite promising results, they can be unstable with high variance across different verbalizers (text expression for labels) and seeds, and the worst-case performance is often close to random (Perez et al., 2021; Lu et al., 2021).", "In this paper, we introduce alternative channel models for prompted few-shot text classification LM =( A three-hour cinema master class. , It was great. 
) ( x , y ) A three-hour cinema master class.", "with large language models, inspired by noisy channel models in machine translation (Brown et al., 1993; Koehn et al., 2003; Yu et al., 2017; Yee et al., 2019) and their extensions to other tasks (Yogatama et al., 2017; Lewis and Fan, 2018).", "Unlike direct models that compute the conditional probability of the label token given the input, channel models compute the conditional probability of the input given the output (Figure 1).", "Intuitively, channel models are required to explain every word in the input, potentially amplifying training signals in the low data regime.", "We study the impact of channel models for language model prompting where the parameters of the language model are frozen.", "In particular, we compare channel models with their direct counterparts for (1) demonstration methods, either concatenation-based (Brown et al., 2020) or our proposed, ensemble-based (Section 4.1.3), and (2) prompt tuning (Lester et al., 2021).", "Our experiments on eleven text classification datasets show that channel models outperform their direct counterparts by a large margin.", "We attribute the strong performance of channel models to their stability: they have lower variance and significantly higher worst-case accuracy then their direct counterparts over different verbalizers and seeds.", "We additionally find a direct model with head tuning tuning the LM head while freezing other parametersis surprisingly effective, often outperforming direct models with other forms of tuning.", "While different methods are preferred given different conditions, the channel model with prompt tuning (denoted as channel prompt tuning) significantly outperforms all direct baselines when (1) the training data is imbalanced, or (2) generalization to unseen labels is required.", "In summary, our contributions are three-fold: 1. 
We introduce a noisy channel approach for language model prompting in few-shot text classification, showing that they significantly outperform their direct counterparts for both demonstration methods and prompt tuning.", "2. We find particularly strong performance of channel models over direct models when the training data is imbalanced or generalization to unseen labels is required.", "3. Based on extensive ablations, we provide recommendations between different models (direct vs. channel and prompt tuning vs. head tuning) based on given conditions such as the target task, the size of training data, the number of classes, the balance between labels in the training data, and whether generalization to unseen labels is required.", "Let x and y be the input and the output, respectively.", "The most widely used models, denoted as direct models, compute P ( y | x ) .", "In contrast, noisy channel models maximize P ( x | y ) P ( y ) (Shannon, 1948; Brown et al., 1993).", "1 While the noisy channel approach has been the most successful in machine translation (Yamada and Knight, 2001; Koehn et al., 2003; Yu et al., 2017; Yee et al., 2019), it has also been studied in more general NLP tasks.", "Prior work provides a theoretical analysis that channel models approach their asymptotic errors more rapidly than their direct counterparts (Ng and Jordan, 2002), and empirically shows that channel models are more robust to distribution shift in text classification (Yogatama et al., 2017) or question answering (Lewis and Fan, 2018), and in a few-shot setup (Ding and Gimpel, 2019).", "1 We follow Yu et al. (2017); Yee et al. 
(2019) in using the terms direct models and channel models.", "They are often referred to as discriminative models and generative models in prior work (Yogatama et al., 2017; Lewis and Fan, 2018).", "In principle, these two distinctions are not always equivalent, e.g., a model that computes P ( x, y ) = P ( y | x ) P ( x ) is generative but not a channel model.", "In this paper, we explore channel models using a large language model on a wide range of text classification tasks, focusing on prompt-based few-shot learning.", "Prior work in few-shot learning has used different approaches, including semi-supervised learning with data augmentation or consistency training (Miyato et al., 2017; Clark et al., 2018; Xie et al., 2020; Chen et al., 2020) and meta learning (Finn et al., 2017; Huang et al., 2018; Bansal et al., 2020).", "Recent work has introduced prompting (or priming ) of a large language model.", "For example, Brown et al. (2020) proposes to use a concatenation of training examples as a demonstration, so that when it is prepended to the input and is fed to the model, the model returns the output following the pattern in the training examples.", "This is especially attractive as it eliminates the need for updating parameters of the language model, which is often expensive and impractical.", "Subsequent work proposes alternative ways of scoring labels through better model calibration (Zhao et al., 2021; Holtzman et al., 2021), or learning better prompts, either in a discrete space (Shin et al., 2020; Jiang et al., 2020; Gao et al., 2021) or in a continuous space (Li and Liang, 2021; Lester et al., 2021; Liu et al., 2021; Zhong et al., 2021; Qin and Eisner, 2021).", "Almost all of them are direct models, computing the likelihood of y given x with the prompts.", "Our work is closely related to two recent papers.", "Tam et al. 
(2021) studies a label-conditioning objective for masked language models; although this is not strictly a generative channel model, conditioning on the output y is similar to our work.", "However, they are still optimizing a discriminative objective, and inference at test time is the same as with the direct model.", "Holtzman et al. (2021) explores zero-shot models that compute the probability of x given y based on Pointwise Mutual Information, but with a restriction that the input and the output are interchangeable.", "To the best of our knowledge, our work is the first that uses a noisy channel model for few-shot language model prompting for classification, and also the first to draw the connection with the noisy channel literature.", "We focus on text classification tasks.", "The goal is to learn a task function f : X → C , where X is the [Table 1: Comparison of zero-shot, concat-based demonstrations, and ensemble-based demonstrations (Section 4.1). Zero-shot / concat-based / ensemble-based. Direct: PLM ( v ( c_i ) | x ) ; PLM ( v ( c_i ) | x_1 , v ( c_1 ) ... x_k , v ( c_k ) , x ) ; ∏_{j=1}^K PLM ( v ( c_i ) | x_j , v ( c_j ) , x ) . Direct++: PLM ( v ( c_i ) | x ) / PLM ( v ( c_i ) | NULL ) ; PLM ( v ( c_i ) | x_1 , v ( c_1 ) ... x_k , v ( c_k ) , x ) / PLM ( v ( c_i ) | x_1 , v ( c_1 ) ... x_k , v ( c_k ) , NULL ) ; ∏_{j=1}^K PLM ( v ( c_i ) | x_j , v ( c_j ) , x ) / PLM ( v ( c_i ) | x_j , v ( c_j ) , NULL ) . Channel: PLM ( x | v ( c_i )) ; PLM ( x | x_1 , v ( c_1 ) ... x_k , v ( c_k ) , v ( c_i )) ; ∏_{j=1}^K PLM ( x | v ( c_j ) , x_j , v ( c_i )) .]", "set of all natural language texts and C = { c 1 ...c m } is a set of labels.", "We consider three formulations.", "Direct computes distributions of labels c_i ∈ C given the input x ∈ X : P ( c_i | x ) .", "This is the most widely used method in modern neural networks.", "Direct++ is a stronger direct model that computes P ( c_i | x ) / P ( c_i | NULL ) instead of P ( c_i | x ) , following the method from Holtzman et al. 
(2021) and the nonparametric method from Zhao et al. (2021).", "This approach is motivated by the fact that language models can be poorly calibrated and suffer from competition between different strings with the same meaning.", "This approach is used for the demonstration methods in Section 4.1.", "Channel uses Bayes' rule to reparameterize P ( c_i | x ) as P ( x | c_i ) P ( c_i ) / P ( x ) .", "As we are generally interested in argmax_{c_i ∈ C} P ( x | c_i ) P ( c_i ) / P ( x ) and P ( x ) is independent of c_i , it is sufficient to model P ( x | c_i ) P ( c_i ) .", "We assume P ( c_i ) = 1 / |C| and only compute P ( x | c_i ) .", "We explore direct and channel models using a causal language model (LM) PLM that gives the conditional probability of the text y when preceded by x .", "More precisely, given the text x = x_1 ... x_{t_x} and y = y_1 ... y_{t_y} ( x_1 ... x_{t_x} , y_1 ... y_{t_y} ∈ V , where V is the vocabulary set), PLM ( y | x ) indicates ∏_{t'=1}^{t_y} PLM ( y_{t'} | x_1 ... x_{t_x} , y_1 ... y_{t'-1} ) .", "2 When learning a task function f : X → C , we also assume a pre-defined verbalizer v : C → X which maps each label into a natural language expression.", "For example, if the task is sentiment analysis with C = { c+ , c− } , an example input text x would be A three-hour cinema master class and an example v would have v ( c+ ) = It was great and v ( c− ) = It was terrible.", "In a few-shot setup, we are also given a set of K training examples 2 In practice, we use length normalization that was found to be effective by Holtzman et al. 
(2021).", "We are interested in methods where there are no trainable parameters (Section 4.1) or the number of trainable parameters is very small, typically less than 0.01% of the total (Section 4.2).", "This follows prior observations that updating and saving a large number of parameters for every task is expensive and often infeasible (Rebuffi et al., 2017; Houlsby et al., 2019; Lester et al., 2021).", "In demonstration methods, there are no trainable parameters.", "We explore three ways of making a prediction, as summarized in Table 1. 4.1.1 Zero-shot We follow Brown et al. (2020) in computing P ( c i | x ) and P ( x | c i ) as PLM ( v ( c i ) | x ) and PLM ( x | v ( c i )) , respectively.", "For example, given x = A three-hour cinema master class, the direct model compares the probabilities of It was great and It was terrible when following A three-hour cinema master class, while the channel model considers the probabilities of A three-hour cinema master class when following It was great or It was terrible.", "We follow the few-shot learning method in Brown et al. 
(2020).", "The key idea is to prepend a concatenation of K training examples to the input so that a language model can learn the task setup from the input.", "The original method was used for a direct model, but can be naturally extended for a channel model.", "Concretely, P ( c i | x ) in direct models is obtained via PLM ( v ( c i ) | x 1 , v ( c 1 ) , , x K , v ( c K ) , x ) , and P ( x | c i ) in channel models is obtained via PLM ( x | v ( c 1 ) , x 1 , , v ( c K ) , x K , v ( c i )) .", "a stronger direct model.", "Instead of concatenating K training examples as one sequence and getting output probabilities from an LM once, we obtain output probabilities from an LMK times conditioned on one training example at a time, and multiply the resulting probabilities.", "Specifically, P ( c i | x ) is computed via Kj =1 PLM ( v ( c i ) | x j , v ( c j ) , x ) and P ( x | c i ) is computed via Kj =1 PLM ( x | v ( c j ) , x j , v ( c i )) .", "This method also reduces the memory consumption the concat-based method uses O ( K 2 ) while this method uses O ( K ) and eliminates the dependency on the ordering of training examples, which has been shown to significantly impact the model performance (Zhao et al., 2021; Lu et al., 2021).", "We also explore methods that tune a very limited number of model parameters, as summarized in Figure 2. 
We study head tuning (Section 4.2.1) and transformation tuning (Section 4.2.2) for direct models.", "We also consider prompt tuning (Section 4.2.3) for both direct and channel models, which we refer to as direct prompt tuning and channel prompt tuning, respectively.", "All models share the same input-output interface with the zero-shot setup in Table 1 during training and inference.", "Head tuning finetunes the head (the matrix in the LM which transforms the hidden representation from the last transformer layer to the logit values).", "Let O ∈ R^{|V| × h} be the head and h_x ∈ R^h be the hidden representation from the last transformer layer given x ; PLM ( v_i | x ) for a token v_i ∈ V is computed via the i -th element of Softmax( O h_x ) .", "We finetune O while freezing all other parameters of the LM.", "Although O is tied with the embedding matrix of the LM during language model pretraining, we separate them during head tuning.", "3 4.2.2 Transformation tuning As an alternative to head tuning, we transform O with a new transformation matrix U ∈ R^{h × h} .", "Specifically, PLM ( v_i | x ) for a token v_i ∈ V is computed via the i -th element of Softmax( O U h_x ) .", "We train U , initialized from an identity matrix, and freeze other parameters including O .", "Prompt tuning is the method that has recently gathered much attention (Li and Liang, 2021; Lester et al., 2021; Liu et al., 2021).", "The key idea is to consider the LM as a black-box model and instead learn continuous prompt embeddings.", "We follow the method from Lester et al. 
(2021) where n prompt tokens u_1 ... u_n are prepended to the input, and the embeddings of u_1 ... u_n are learned.", "In other words, direct models compute P ( c_i | x ) = PLM ( v ( c_i ) | u_1 ... u_n , x ) , and channel models compute P ( x | c_i ) = PLM ( x | u_1 ... u_n , v ( c_i )) .", "The parameters in the LM are frozen except the embeddings of u_1 ... u_n .", "4 5 Experimental Setup 5.1 Datasets We report results for eleven text classification datasets, following Zhang et al. (2015) and Gao 3 This is different from head tuning from prior work, e.g., Le Scao and Rush (2021), which finetunes PLM and uses a separate, randomly initialized head instead of the LM head.", "4 This is different from prompt tuning in Gao et al. (2021); Liu et al. (2021) which jointly trains prompt embeddings and the parameters of the LM.", "et al. (2021): SST-2 (Socher et al., 2013), SST-5 (Socher et al., 2013), MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), Amazon (McAuley and Leskovec, 2013), Yelp (Zhang et al., 2015), TREC (Voorhees and Tice, 2000), AGNews (Zhang et al., 2015), Yahoo (Zhang et al., 2015), DBPedia (Lehmann et al., 2015) and Subj (Pang and Lee, 2004).", "The datasets include a varied number of classes per task, from 2 to 14.", "See Table 10 in Appendix A for dataset samples.", "For few-shot learning, we primarily use training set size K = 16 , but explore K = { 4 , 16 , 64 , Full } in the ablations.", "We sample the K examples uniformly from the true distribution of the training data.", "We relax the assumption from prior work of an equal number of training examples per label (Gao et al., 2021; Logan IV et al., 2021), for more realistic and challenging evaluation.", "We follow all the hyperparameters and details from prior work (Appendix B), which eliminates the need for a held-out validation set.", "The very limited data is better used for training rather than validation, and cross-validation is less helpful when the validation set is extremely small (Perez et al., 
2021).", "We use GPT-2 (Radford et al., 2019) for the LM.", "We primarily use GPT-2 Large but also experiment with varying sizes (Small, Medium, Large and X-Large) for the ablations in Appendix C. While we only experiment with GPT-2, our experiments are easily extendable to other causal language models.", "We use accuracy as a metric for all datasets.", "We experiment with 4 different verbalizers (taken from Gao et al. (2021); full list provided in Appendix A), 5 different random seeds for sampling training data, and 4 different random seeds for training.", "We then report Average accuracy and Worst-case accuracy .", "5 We consider the worst-case accuracy to be as important as the average accuracy given significantly high variance of few-shot learning models, as shown in previous work (Zhao et al., 2021; Perez et al., 2021).", "The worst-case accuracy is likely of more interest in high-risk applications (Asri et al., 2016; Guo et al., 2017).", "This section reports results from demonstration methods (Section 6.1), tuning methods (Sec-tion 6.2) and ablations (Section 6.3).", "Discussion is provided in Section 7.", "Direct vs. Direct++ Direct++ significantly outperforms the naive direct model across all setups, indicating that using P ( c i | x ) P ( c i | NULL ) instead of P ( c i | x ) is highly beneficial as claimed by Holtzman et al. (2021); Zhao et al. (2021).", "Concat vs. Ensemble Our proposed, ensemble-based method is better than the concat-based method in direct models, by 7% absolute in the average accuracy and the worst-case accuracy, when macro-averaged across all datasets.", "In contrast, the ensemble-based method is not always better in channel models; it is better only on the datasets with long inputs.", "We conjecture that the ensemble-based method may suffer when labels in the training data are not balanced, which direct++ explicitly takes into account as described in Zhao et al. (2021).", "Direct++ vs. 
Channel In a few-shot setting, channel models outperform direct models in almost all cases.", "The strongest channel model outperforms the strongest direct model by 3.1% and 7.2% absolute, in terms of the average accuracy and the worst-case accuracy, respectively.", "5 We also report standard deviation and best-case accuracy in the Appendix.", "Standard deviation and the best-case accuracy are reported in Table 11 and Table 12 in the Appendix.", "They indicate that the strong performance of channel models can be attributed to their low variance.", "The highest best-case accuracy is achieved by direct++ on most datasets, but it has a higher variance, having lower average and worst-case accuracy than channel models.", "Zero-shot vs. Few-shot Performance of direct models sometimes degrades in a few-shot setting, which is also observed by prior work (Zhao et al., 2021).", "This is likely because demonstrations provided by the training data may cause the model to be miscalibrated and easily biased by the choice of demonstrations.", "However, channel models achieve few-shot performance that is significantly better than zero-shot methods on all datasets.", "Comparison when prompt tuning When using prompt tuning, channel models consistently outperform direct models by a large margin on all datasets.", "Improvements are 13.3% and 23.5% absolute in the average and the worst-case accuracy, respectively.", "Standard deviation and the best-case accuracy are reported in Table 13 in the Appendix.", "Consistent with the findings in Section 6.1, the strong performance of channel prompt tuning can be explained by the low variance of channel prompt tuning.", "Direct prompt tuning often achieves higher best-case accuracy; however, due to its high variance, its overall accuracy is lower, with significantly lower worst-case accuracy.", "Head tuning vs. 
prompt tuning We find that head tuning is a very strong method, despite often being omitted as a baseline in prior work.", "It significantly outperforms direct prompt tuning in all cases.", "It also outperforms channel prompt tuning on some datasets, particularly significantly on TREC and Subj.", "For these datasets, the task (finding the type of the answer to the question or identifying the subjectivity of the statement) is inherently different from language modeling, and likely benefits from directly updating the LM parameters, rather than using the LM as a black box.", "are achieved on Yahoo and DBPedia.", "In fact, on these datasets, channel prompt tuning even outperforms all finetuning (finetuning all parameters of the LM), which achieves 48.9/43.8 on Yahoo and 66.3/50.4 on DBPedia.", "We conjecture that using K = 16 on these datasets naturally requires generalization to unseen labels due to the large number of classes ( |C| = 10 and 14 ), where channel prompt tuning significantly outperforms direct models, as we show in Section 6.4.", "For the ablations, we report experiments on SST-2, MR, TREC and AGNews, using one train seed (instead of four), and four verbalizers and five data seeds (as in main experiments).", "Varying the number of training examples We vary the number of training examples ( K ) and report the average accuracy in Figure 3. 
All methods achieve higher accuracy as K increases.", "While we confirm strong performance of channel prompt tuning with K ≤ 16 , head tuning outperforms channel prompt tuning when K = 64 .", "When K = Full , both direct prompt tuning and head tuning outperform channel prompt tuning.", "We think this is because (1) training signals amplified by channel models (Lewis and Fan, 2018) are more significant when K is small, and (2) channel models are more beneficial when labels in the training data are imbalanced (confirmed in the next ablation), which is more likely to happen with smaller K .", "It is also worth noting that our experiment with K = Full confirms the finding from Lester et al. (2021) that direct prompt tuning matches the performance of all finetuning (finetuning all parameters of the LM) while being much more parameter-efficient.", "[Figure 4: Impact of imbalance in labels.]", "This only holds with K = Full ; in a few-shot setup, all finetuning significantly outperforms other methods.", "This contradicts traditional analysis that having fewer trainable parameters is better when the training data is scarce (Ng and Jordan, 2002).", "It is likely because such analysis did not take into account language model pretraining, which gives supervision to the model yet is not the training data for an end task.", "Impact of imbalance in labels On binary datasets (SST-2 and MR), we vary the label imbalance in the training data with K = { 16 , 64 } .", "Specifically, let C = { c+ , c− } and p = |{ ( x, c ) ∈ D | c = c− }| / |D| , i.e., the ratio of c− in the training data.", "We vary p to be { 0 , 0 .", "125 , 0 .", "250 , 0 .", "375 , 0 .", "5 } .", "p = 0 .", "5 means the labels are perfectly balanced, and p = 0 means that labels in the training data only include c+ .", "We additionally compare with upsampling baselines 
[Table 5: Model performance when there is at least one label at test time that was unseen during training (average/worst-case accuracy). Columns: Zero-shot Direct++, Zero-shot Channel, Direct All, Direct Head, Direct Trans, Direct Prompt, Channel Prompt. SST-2: 80.3/76.9, 77.1/74.8, 50.2/49.1, 50.2/49.1, 50.2/49.1, 50.2/49.1, 85.5/82.5. SST-5: 33.3/28.8, 29.2/27.7, 40.1/34.8, 34.3/28.0, 32.6/24.5, 30.0/18.1, 37.5/32.6. MR: 77.4/73.2, 74.3/69.3, 50.0/50.0, 50.0/50.0, 50.0/50.0, 50.0/50.0, 80.9/74.8. CR: 77.9/69.7, 65.8/60.2, 50.0/50.0, 50.0/50.0, 50.0/50.0, 50.0/50.0, 80.9/74.8. TREC: 27.7/12.6, 30.5/19.4, 50.8/31.0, 44.8/29.6, 44.6/32.8, 33.9/17.4, 34.3/26.0. Subj: 52.0/48.8, 57.8/51.5, 50.0/50.0, 50.0/50.0, 50.0/50.0, 50.0/50.0, 66.6/57.6.]", "where we upsample training examples with infrequent labels so that the model has seen an equal number of examples per label during training.", "Results are reported in Figure 4. All direct models are sensitive to the imbalance in training data, even though they benefit from upsampling when p is small.", "Channel prompt tuning is insensitive to the imbalance, and significantly outperforms direct models when p is small; it even outperforms all finetuning when p < 0 .", "25 .", "When p is near 0.5, direct head tuning matches or outperforms channel prompt tuning.", "It is also worth noting that direct prompt tuning with upsampling matches or outperforms all finetuning and head tuning when p is small.", "We experiment with a challenging scenario where the model must generalize to unseen labels.", "While it may be seen as an extreme scenario, this is often a practical setting, e.g., the problem is defined with a set of labels but later an addition of a new label may be needed.", "First, we sample K training examples as in main experiments but excluding one random label, so that at least one label at test time was unseen during training.", "Table 5 reports the results.", "All direct models are unable to predict the label that is unseen at training time.", "However, channel prompt tuning can predict 
unseen labels and achieves considerably better performance than zero-shot.", "It outperforms all finetuning on 2-way classification datasets, and outperforms head tuning on five datasets, except for TREC, on which head tuning achieves very strong performance on seen labels.", "Next, we run zero-shot transfer learning, where the model is trained on one dataset and is tested on another dataset.", "Here, head tuning is not applicable when the labels are not shared between two datasets.", "Figure 5 shows the results.", "Channel prompt tuning outperforms all direct models including all finetuning on all datasets except for TREC.", "It is particularly competitive when the tasks are inherently similar, e.g., transfer between 2-way sentiment analysis and 5-way sentiment analysis in the first three figures.", "In fact, in such cases, performance is close to the models trained on in-domain data.", "When tasks are inherently different, e.g., the rest of the figures in Figure 5, gains over zero-shot performance are relatively small; we think more work should be done to make cross-task transfer better and to discover when it is possible.", "In this work, we introduced a noisy channel approach for few-shot text classification through LM prompting, where we either provide demonstrations to the LM or tune the prompt embeddings in the continuous space.", "Our experiments on eleven datasets show that channel models significantly outperform their direct counterparts, mainly because of their stability, i.e., lower variance and better worst-case accuracy.", "We also found that direct head tuning is more competitive than previously thought, and different methods are preferred given different conditions.", "Specifically, channel prompt tuning is preferred in the following scenarios.", "K is small Channel prompt tuning is more competitive when there are fewer training examples.", "We hypothesize two reasons: (1) Channel models are more stable (i.e., achieve low variance and high 
worst-case accuracy), unlike direct models that are highly unstable with small K (Zhao et al., 2021; Perez et al., 2021; Lu et al., 2021).", "(2) Channel models provide more signals by requiring the model to explain the input word-by-word (as claimed in Lewis and Fan (2018)), which is beneficial in the low data regime.", "Data is imbalanced or |C| is large When the training data is even slightly imbalanced, no direct models are competitive.", "We think this is because the LM head relies too much on unconditional distributions of labels.", "Channel prompt tuning is less sensitive because labels are only a conditioning variable.", "Label imbalance in the training data is a real-world problem, especially when K is small and |C| is large.", "We thus suggest this is an important area for future work.", "Generalization to unseen labels is required All direct models are unable to predict labels that are unseen during training, indicating that they overfit in the label space.", "In contrast, channel models can predict unseen labels, likely because the label space is indirectly modeled.", "This is in line with prior work that shows channel models are more competitive under a distribution shift (Yogatama et al., 2017; Lewis and Fan, 2018).", "Task is closer to language modeling If the task is too different from language modeling even with carefully chosen verbalizers (e.g., TREC and Subj), head tuning outperforms prompt tuning.", "This is likely because it benefits from directly updating the parameters of the LM.", "This may mean that causal LMs are not suitable for all tasks, or we need more sophisticated methods to apply causal LMs for such tasks without updating the LM parameters.", "Limitations and future work While we show that channel models are competitive in few-shot text classification, there are limitations that provide avenues for future work.", "First, it is not as easy to use channel models for non-classification tasks where modeling prior distributions is 
non-trivial.", "We think future work can obtain the prior with a separate model and incorporate it to the conditional LM as done by Lewis and Fan (2018), potentially with beam search decoding as in Yu et al. (2017); Yee et al. (2019).", "Second, while this paper focuses on causal LMs, it is an open question how to use a channel model with masked LMs.", "Although we think channel models are not inherently restricted to causal LMs, the specific way in which existing masked LMs are pretrained makes it hard to use channel models without updating the LM parameters, e.g., masked LMs are not trained to generate long sentences.", "One recent approach uses a label-conditioning objective (Tam et al., 2021) as a clever way to introduce a channel-like model with existing masked LMs.", "Extending and further integrating these different approaches would be important for using channel models in a wider range of scenarios.", "We thank Ari Holtzman, Eric Wallace, Gabriel Il-harco, Jungsoo Park, Myle Ott, Peter West and Ves Stoyanov for their helpful comments and discussion.", "This research was supported by NSF IIS-2044660, ONR N00014-18-1-2826, an Allen Distinguished Investigator Award, and a Sloan Fellowship." ]
[ "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "result", "abstain", "objective", "result", "method", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "objective", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"method", "result", "abstain", "result", "method", "method", "abstain", "abstain", "other", "other" ]
[ "We aim to better exploit the limited amounts of parallel text available in low-resource settings by introducing a differentiable reconstruction loss for neural machine translation (NMT).", "This loss compares original inputs to reconstructed inputs, obtained by back-translating translation hypotheses into the input language.", "We leverage differentiable sampling and bi-directional NMT to train models end-to-end, without introducing additional parameters.", "This approach achieves small but consistent BLEU improvements on four language pairs in both translation directions, and outperforms an alternative differentiable reconstruction strategy based on hidden states.", "Neural Machine Translation (NMT) performance degrades sharply when parallel training data is limited (Koehn and Knowles, 2017).", "Past work has addressed this problem by leveraging monolingual data (Sennrich et al., 2016a; Ramachandran et al., 2017) or multilingual parallel data (Zoph et al., 2016; Johnson et al., 2017; Gu et al., 2018a).", "We hypothesize that the traditional training can be complemented by better leveraging the limited training data.", "To this end, we propose a new training objective for this model by augmenting the standard translation cross-entropy loss with a differentiable input reconstruction loss to further exploit the source side of parallel samples.", "Input reconstruction is motivated by the idea of round-trip translation.", "Suppose sentence f is translated forward to ê using the f→e model and then translated back to f̂ using the e→f model; then ê is more likely to be a good translation if the distance between f and f̂ is small (Brislin, 1970).", "Prior work applied round-trip translation to monolingual examples and sampled the intermediate translation ê from a K-best list generated by the f→e model using beam search (Cheng et al., 2016; He et al., 2016).", "However, beam search is not differentiable, which prevents back-propagating reconstruction errors to the f→e model.", "As a result, 
reinforcement learning algorithms, or independent updates to the f→e and e→f models, were required.", "In this paper, we focus on the problem of making input reconstruction differentiable to simplify training.", "In past work, Tu et al. (2017) addressed this issue by reconstructing source sentences from the decoder's hidden states.", "However, this reconstruction task can be artificially easy if hidden states over-memorize the input.", "This approach also requires a separate auxiliary reconstructor, which introduces additional parameters.", "We propose instead to combine benefits from differentiable sampling and bi-directional NMT to obtain a compact model that can be trained end-to-end with back-propagation.", "Specifically, translations are sampled using the Straight-Through Gumbel Softmax (STGS) estimator (Jang et al., 2017; Bengio et al., 2013), which allows back-propagating reconstruction errors.", "Our approach builds on the bi-directional NMT model (Niu et al., 2018; Johnson et al., 2017), which improves low-resource translation by jointly modeling translation in both directions (e.g., Swahili↔English).", "A single bi-directional model is used as both translator and reconstructor (i.e., the e→f and f→e models share parameters) without introducing more parameters.", "Experiments show that our approach outperforms reconstruction from hidden states.", "It achieves consistent improvements across various low-resource language pairs and directions, showing its effectiveness in making better use of limited parallel data.", "Using round-trip translations (f→e→f̂) as a training signal for NMT usually requires auxiliary models to perform back-translation and cannot be trained end-to-end without reinforcement learning.", "For instance, Cheng et al. (2016) added a reconstruction loss for monolingual examples to the training objective.", "He et al. (2016) evaluated the quality of ê with a language model and of f̂ with a reconstruction likelihood.", "Both approaches have symmetric forward and backward translation models which are updated alternately.", "This requires policy gradient algorithms for training, which are not always stable.", "Back-translation (Sennrich et al., 2016a) performs half of the reconstruction process, by generating a synthetic source side for monolingual target-language examples: e→f̂.", "It uses an auxiliary backward model to generate the synthetic data but only updates the parameters of the primary forward model.", "Iteratively updating forward and backward models (Zhang et al., 2018; Niu et al., 2018) is an expensive solution, as back-translations are regenerated at each iteration.", "Prior work has sought to simplify the optimization of reconstruction losses by side-stepping beam search.", "Tu et al. (2017) first proposed to reconstruct NMT input from the decoder's hidden states, while Wang et al. (2018a,b) suggested using both encoder and decoder hidden states to improve translation of dropped pronouns.", "However, these models might achieve low reconstruction errors by learning to copy the input to hidden states.", "To avoid copying the input, Artetxe et al. (2018) and Lample et al. 
(2018) used denoising autoencoders (Vincent et al., 2008) in unsupervised NMT.", "Our approach is based instead on the Gumbel Softmax (Jang et al., 2017; Maddison et al., 2017), which facilitates differentiable sampling of sequences of discrete tokens.", "It has been successfully applied in many sequence generation tasks, including artificial language emergence for multi-agent communication (Havrylov and Titov, 2017), composing tree structures from text (Choi et al., 2018), and tasks under the umbrella of generative adversarial networks (Goodfellow et al., 2014) such as generating context-free grammars (Kusner and Hernandez-Lobato, 2016), machine comprehension (Wang et al., 2017) and machine translation (Gu et al., 2018b).", "NMT is framed as a conditional language model, where the probability of predicting target token e_t at step t is conditioned on the previously generated sequence of tokens e_{<t} and the source sequence f, given the model parameters θ.", "Since each token is indexed and represented as a one-hot vector, its probability is realized as a softmax function over a linear transformation a(h_t), where h_t is the decoder's hidden state at step t: P(e_t | e_{<t}, f; θ) = softmax(a(h_t))⊤ e_t (1).", "The hidden state is calculated by a neural network g given the embeddings of the previous target tokens E(e_{<t}) and the context c_t coming from the source: h_t = g(E(e_{<t}), c_t) (2).", "In our bi-directional model, the source sentence can be either f or e and is respectively translated to e or f.", "The language is marked by a tag (e.g., <en>) at the beginning of each source sentence (Johnson et al., 2017; Niu et al., 2018).", "To facilitate symmetric reconstruction, we also add language tags to target sentences.", "The training data corpus is then built by swapping the source and target sentences of a parallel corpus and appending the swapped version to the original.", "Our bi-directional model performs both forward translation and 
backward reconstruction.", "By contrast, uni-directional models require an auxiliary reconstruction module, which introduces additional parameters.", "This module can be either a decoder-based reconstructor (Tu et al., 2017; Wang et al., 2018a,b) or a reversed dual NMT model (Cheng et al., 2016; He et al., 2016; Wang et al., 2018c; Zhang et al., 2018).", "Here the reconstructor, which shares the same parameters with the translator T(·; θ), can also be trained end-to-end by maximizing the log-likelihood of reconstructing f: L_R = Σ_f log P(f | T(f; θ); θ) (3). Combining with the forward translation likelihood L_T = Σ_{(f‖e)} log P(e | f; θ) (4), we use L = L_T + L_R as the final training objective for f→e.", "The dual e→f model is trained simultaneously by swapping the language direction in bi-directional NMT.", "Reconstruction is reliable only with a model that produces reasonable base translations.", "Following prior work (Tu et al., 2017; He et al., 2016; Cheng et al., 2016), we pre-train a base model with L_T and fine-tune it with L_T + L_R.", "We use differentiable sampling to side-step beam search and back-propagate error signals.", "We use the Gumbel-Max reparameterization trick (Maddison et al., 2014) to sample a translation token at each time step from the softmax distribution in Equation 1: ê_t = one-hot(argmax_k(a(h_t)_k + G_k)) (5), where G_k is i.i.d. and drawn from Gumbel(0, 1).¹", "We use scaled Gumbel with parameter λ, i.e., Gumbel(0, λ), to control the randomness.", "The sampling becomes deterministic (which is equivalent to greedy search) as λ approaches 0.", "Since arg max is not a differentiable operation, we approximate its gradient with the Straight-Through Gumbel Softmax (STGS) (Jang et al., 2017; Bengio et al., 2013): ∇ê_t ≈ ∇ẽ_t, where ẽ_t = softmax((a(h_t) + G)/τ) (6).", "As τ approaches 0, softmax is closer to arg max but training might be more unstable.", "While the STGS estimator is biased when τ is large, it performs well in practice (Gu et al., 2018b; Choi et al., 2018) and is sometimes faster and more effective than reinforcement learning (Havrylov and Titov, 2017).", "To generate coherent intermediate translations, the decoder used for sampling only consumes its previously predicted tokens ê_{<t}.", "This contrasts with the usual teacher forcing strategy (Williams and Zipser, 1989), which always feeds in the ground-truth previous tokens e_{<t} when predicting the current token e_t.", "With teacher forcing, the sequence concatenation [e_{<t}; ê_t] is probably coherent at each time step, but the actual predicted sequence [ê_{<t}; ê_t] would break the continuity.²", "¹ i.e., G_k = −log(−log(u_k)) and u_k ∼ Uniform(0, 1).", "We evaluate our approach on four low-resource language pairs.", "Parallel data for Swahili↔English (SW↔EN), Tagalog↔English (TL↔EN) and Somali↔English (SO↔EN) contains a mixture of domains such as news and weblogs and is collected from the IARPA MATERIAL program³, the Global Voices parallel corpus⁴, Common Crawl (Smith et al., 2013), and the LORELEI Somali representative language pack (LDC2018T11).", "The test samples are extracted from the held-out ANALYSIS set of MATERIAL.", "Parallel Turkish↔English (TR↔EN) data is provided by the WMT news translation task (Bojar et al., 2018).", "We use the pre-processed corpus, newsdev2016, and newstest2017 as training, development, and test sets.⁵", "We apply normalization, tokenization, true-casing, joint source-target BPE with 32,000 operations (Sennrich et al., 2016b) and sentence-filtering (length 80 cutoff) to parallel data.", "Itemized data statistics after preprocessing can be found in Table 1.", "
We report case-insensitive BLEU with the WMT standard '13a' tokenization using SacreBLEU (Post, 2018).", "We build NMT models upon the attentional RNN encoder-decoder architecture (Bahdanau et al., 2015) implemented in the Sockeye toolkit (Hieber et al., 2017).", "Our translation model uses a bidirectional encoder with a single LSTM layer of size 512, multilayer perceptron attention with a layer size of 512, and word representations of size 512.", "³ https://www.iarpa.gov/index.php/research-programs/material ⁴ http://casmacat.eu/corpus/global-voices.html ⁵ http://data.statmt.org/wmt18/translation-task/preprocessed/", "Results table (test BLEU, mean ± standard deviation; columns EN→SW / SW→EN / EN→TL / TL→EN / EN→SO / SO→EN / EN→TR / TR→EN): Baseline 33.60±0.14 / 30.70±0.19 / 27.23±0.11 / 32.15±0.21 / 12.25±0.08 / 20.80±0.12 / 12.90±0.04 / 15.32±0.11; HIDDEN 33.41±0.15 / 30.91±0.19 / 27.43±0.14 / 32.20±0.35 / 12.30±0.11 / 20.72±0.16 / 12.77±0.11 / 15.34±0.10, Δ −0.19±0.24 / 0.21±0.14 / 0.19±0.13 / 0.04±0.17 / 0.05±0.11 / −0.08±0.12 / −0.13±0.13 / 0.01±0.07; λ = 0: 33.92±0.10 / 31.37±0.18 / 27.65±0.09 / 32.75±0.32 / 12.47±0.08 / 21.14±0.19 / 13.26±0.07 / 15.60±0.19, Δ 0.32±0.12 / 0.66±0.11 / 0.42±0.16 / 0.59±0.13 / 0.22±0.04 / 0.35±0.15 / 0.36±0.09 / 0.28±0.11; λ = 0.5: row truncated in the source.", "We apply layer normalization (Ba et al., 
objective LT and report the mean and standard deviation of test BLEU.", "We fine-tune baseline models with objective LT + LR , inheriting all settings except the learning rate which is re-initialized to 0.0001.", "Each randomly seeded model is fine-tuned independently, so we are able to report the standard deviation of BLEU.", "We compare our approach with reconstruction from hidden states (HIDDEN ).", "Following the best practice of Wang et al. (2018a), two reconstructors are used to take hidden states from both the encoder and the decoder.", "The corresponding two reconstruction losses and the canonical translation loss were originally uniformly weighted (i.e. 1 , 1 , 1 ), but we found that balancing the reconstruction and translation losses yields better results (i.e. 0 . 5 , 0 . 5 , 1 ) in preliminary experiments.", "6 We use the reconstructor exclusively to compute the reconstruction training loss.", "6 We observed around 0.2 BLEU gains for TR EN tasks.", "used to re-rank translation hypotheses in prior work, but Tu et al. (2017) showed in ablation studies that the gains from re-ranking are small compared to those from training.", "We evaluate the impact of the Gumbel Softmax hyperparameters on the development set.", "We select = 2 and = 0 / 0 .", "5 based on training stability and BLEU.", "Greedy search (i.e. = 0 ) performs similarly as sampling with increased Gumbel noise (i.e. more random translation selection when = 0 . 
5 ): increased randomness in sampling does not have a strong impact on BLEU, even though random sampling may approximate the data distribution better (Ott et al., 2018).", "We hypothesize that more random translation selection introduces lower quality samples and therefore noisier training signals.", "This is consistent with the observation that random sampling is less effective for back-translation in low-resource settings (Edunov et al., 2018).", "Sampling-based reconstruction is effective even if there is moderate domain mismatch between the training and the test data, such as in the case that the word type out-of-vocabulary (OOV) rate of TR EN is larger than 20%.", "Larger improvements can be achieved when the test data is closer to training examples.", "For example, the OOV rate of SW EN is much smaller than the OOV rate of TR EN and the former obtains higher BLEU.", "Our approach yields more consistent results than reconstructing from hidden states.", "The latter fails to improve BLEU in more difficult cases, such as TR EN with high OOV rates.", "We observe extremely low training perplexity for HID 7 The improvements are significant with p < 0 .", "01 .", "DEN compared with our proposed approach (Fig-ure 1a).", "This suggests that HIDDEN yields representations that memorize the input rather than improve output representations.", "Another advantage of our approach is that all parameters were jointly pre-trained, which results in more stable training behavior.", "By contrast, reconstructing from hidden states requires to initialize the reconstructors independently and suffers from unstable early training behavior (Figure 1).", "We studied reconstructing the input of NMT from its intermediate translations to better exploit training samples in low-resource settings.", "We used a bi-directional NMT model and the Straight-Through Gumbel Softmax to build a fully differentiable reconstruction model that does not require any additional parameters.", "We empirically 
demonstrated that our approach is effective in low-resource scenarios.", "In future work, we will investigate the use of differentiable reconstruction from sampled sequences in unsupervised and semi-supervised sequence generation tasks.", "In particular, we will exploit monolingual corpora in addition to parallel corpora for NMT.", "We thank the three anonymous reviewers for their helpful comments and suggestions.", "We also thank the members of the Computational Linguistics and Information Processing (CLIP) lab at the University of Maryland for helpful discussions.", "This research is based upon work supported in part by an Amazon Web Services Machine Learning Research Award, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract #FA8650-17-C-9117.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
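The sampling procedure in Equations (5)-(6) above — a Gumbel-Max sample in the forward pass with a softmax relaxation for the backward pass — can be sketched in plain Python. This is an illustrative sketch, not the paper's code: the names `tau` (softmax temperature) and `lam` (Gumbel noise scale) are our own labels for the two hyperparameters, whose original symbols were lost in extraction, and no autograd framework is used, so the straight-through gradient is only described in a comment:

```python
import math
import random

def softmax(xs, tau=1.0):
    # Numerically stable softmax over a list of logits, with temperature tau.
    m = max(x / tau for x in xs)
    exps = [math.exp(x / tau - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def stgs_sample(logits, tau=2.0, lam=0.5, rng=random):
    """Straight-Through Gumbel Softmax sampling, sketching Eqs. (5)-(6).
    lam scales the Gumbel noise (lam = 0 recovers greedy search);
    tau is the softmax temperature used for the relaxed sample."""
    # Scaled Gumbel(0, lam) noise: G_k = -lam * log(-log(u_k)), u_k ~ Uniform(0, 1)
    noise = [-lam * math.log(-math.log(rng.uniform(1e-12, 1.0))) for _ in logits]
    perturbed = [a + g for a, g in zip(logits, noise)]
    k = max(range(len(logits)), key=lambda i: perturbed[i])
    e_hard = [1.0 if i == k else 0.0 for i in range(len(logits))]  # Eq. (5)
    e_soft = softmax(perturbed, tau)                               # Eq. (6)
    # Straight-through estimator: the forward pass uses e_hard, while the
    # backward pass treats d e_hard / d logits as d e_soft / d logits
    # (in an autograd framework: e_hard + e_soft - stop_gradient(e_soft)).
    return e_hard, e_soft

hard, soft = stgs_sample([2.0, 1.0, 0.5], tau=2.0, lam=0.0)
# lam = 0 makes the sample deterministic: it picks the argmax token (index 0)
```

The one-hot `e_hard` is what the reconstructor consumes, so the sampled translation stays discrete, while the relaxed `e_soft` is what lets reconstruction errors flow back into the translator.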
[ "objective", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "objective", "objective", "abstain", "other", "other", "other", "other", "other" ]
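The bi-directional training-data construction described in the paper above (mark each sentence with a language tag, then append a swapped copy of the parallel corpus to the original) can be sketched as follows. The `<sw>` tag and the exact tag placement are assumptions for illustration; the text only shows `<en>` as an example tag:

```python
def make_bidirectional_corpus(pairs, f_tag="<sw>", e_tag="<en>"):
    """Tag both sides and append the swapped copy of the parallel corpus.
    Assumption: each sentence is prefixed with the tag of its direction's
    target language, on both the source and the target side."""
    data = []
    for f, e in pairs:                       # forward direction, e.g. sw -> en
        data.append((f"{e_tag} {f}", f"{e_tag} {e}"))
    for f, e in pairs:                       # swapped copy, e.g. en -> sw
        data.append((f"{f_tag} {e}", f"{f_tag} {f}"))
    return data

corpus = make_bidirectional_corpus([("habari za asubuhi", "good morning")])
# corpus[0] trains the sw->en direction, corpus[1] the en->sw direction
```

Because one model sees both directions, the same parameters can serve as translator (f→e) and reconstructor (e→f), which is what lets the approach avoid any auxiliary reconstruction module.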
[ "Affective tasks such as sentiment analysis, emotion classification and sarcasm detection have been popular in recent years due to the abundance of user-generated data, accurate computational linguistic models, and the broad range of relevant applications in various domains.", "At the same time, many studies have highlighted the importance of text preprocessing as an integral step in any natural language processing prediction model and downstream task.", "While preprocessing in affective systems is well-studied, preprocessing in word-vector-based models applied to affective systems is not.", "To address this limitation, we conduct a comprehensive analysis of the role of preprocessing techniques in affective analysis based on word vector models.", "Our analysis is the first of its kind and provides useful insights into the importance of each preprocessing technique when applied at the training phase, commonly ignored in pretrained word vector models, and/or at the downstream task phase.", "Affective tasks such as sentiment analysis, emotion classification and sarcasm detection have enjoyed great popularity in recent years.", "This success can be largely attributed to the fundamental and straightforward nature of the methods employed, the availability of vast amounts of user-generated natural language data, and the wide range of useful applications, spanning from hate speech detection to monitoring the sentiment of financial markets and news recommendation (Djuric et al., 2015; Babanejad et al., 2019).", "Most early models of affect analysis employed pretrained word embeddings that have been obtained under the assumption of the distributional hypothesis (Mikolov et al., 2013; Devlin et al., 2018).", "The distributional hypothesis suggests that two words occurring frequently in similar linguistic contexts tend to be more semantically similar, and therefore should be represented closer to one another in the embedding space.", "However, while such embeddings are useful for 
several natural language processing (NLP) downstream tasks, they are known to be less suitable for affective tasks in particular (Tang et al., 2014; Agrawal et al., 2018).", "Although some authors claim that there is a need for post-processing word embeddings for affective tasks, others find that off-the-shelf vectors are very powerful for affective lexicon learning (Lison and Kutuzov, 2017).", "For example, word2vec (Mikolov et al., 2013) estimates the pair of words 'happy' and 'sad' to be more similar than the pair of words 'happy' and 'joy', which is counterintuitive, and might affect the accuracy performance of the models that depend on it.", "To address the limitations of traditional word embeddings, several techniques have been proposed, including task-specific fine-tuning (Devlin et al., 2018), retrofitting (Faruqui et al., 2014), representing emotion with vectors using a multi-task training framework (Xu et al., 2018) and generating affective word embeddings (Felbo et al., 2017), to name a few.", "Other attempts to overcome the limitations of word vectors include optimization of hyperparameters (Levy et al., 2015), as well as fine-tuned preprocessing strategies tailored to different NLP tasks.", "While these strategies have demonstrated evidence of improving the accuracy performance in tasks such as word similarity, word analogy, and others (Lison and Kutuzov, 2017), their effect in affective tasks has not received considerable attention and remains less explored.", "Our work is motivated by the observation that preprocessing factors such as stemming, stopwords removal and many others make up an integral part of nearly every improved text classification model, and affective systems in particular (Danisman and Alpkocak, 2008; Patil and Patil, 2013).", "However, little work has been done towards understanding the role of preprocessing techniques applied to word embeddings in different stages of affective systems.", "Figure 1: Framework of applying preprocessing in different stages in affective systems.", "To address this limitation, the overarching goal of this research is to perform an extensive and systematic assessment of the effect of a range of linguistic preprocessing factors pertaining to three affective tasks, including sentiment analysis, emotion classification and sarcasm detection.", "Towards that end, we systematically analyze the effectiveness of applying preprocessing to large training corpora before learning word embeddings, an approach that has largely been overlooked by the community.", "We investigate the following research questions:", "(i) what is the effect of integrating preprocessing techniques earlier into word embedding models, instead of later on in downstream classification models?", "(ii) which preprocessing techniques yield the most benefit in affective tasks?", "(iii) does preprocessing of word embeddings provide any improvement over state-of-the-art pretrained word embeddings, and if yes, how much?", "Figure 1 illustrates the difference between (a) the preprocessing word embeddings pipeline (Pre) and (b) the preprocessing classification dataset pipeline (Post), where preprocessing techniques in (a) are applied to the training corpus of the model and in (b) only to the classification dataset.", "In brief, the main contributions of our work are as follows: We conduct a comprehensive analysis of the role of preprocessing techniques in affective tasks (including sentiment analysis, emotion classification and sarcasm detection), employing different models, over nine datasets; We perform a comparative analysis of the accuracy performance of word vector models when preprocessing is applied at the training phase (training data) and/or at the downstream task phase (classification dataset).", "Interestingly, we obtain the best results when preprocessing is applied only to the training corpus or when it is applied to both the training corpus and the classification dataset of interest.", "We evaluate 
the performance of our best preprocessed word vector model against state-of-the-art pretrained word embedding models; We make source code and data publicly available to encourage reproducibility of results.¹", "The rest of the paper is organized as follows: Section 2 presents an overview of the related work.", "Section 3 elaborates on the preprocessing techniques employed in the evaluation of models.", "Section 4 describes the experimental evaluation framework.", "In Section 5, a comprehensive analysis of the results is provided.", "Section 6 concludes the paper with key insights of the research.", "In this section, we present an overview of related work on preprocessing classification datasets and preprocessing word embeddings, and how our work aims to bridge the gap between those efforts.", "Preprocessing is a vital step in text mining, and therefore evaluation of preprocessing techniques has long been a part of many affective systems.", "Saif et al. (2014) indicated that, despite its popular use in Twitter sentiment analysis, the use of a precompiled stoplist has a negative impact on the classification performance.", "Angiani et al. (2016) analyzed various preprocessing methods such as stopwords removal, stemming, negation, emoticons, and so on, and found stemming to be most effective for the task of sentiment analysis.", "Similarly, Symeonidis et al. 
(2018) found that lemmatization increases accuracy.", "Jianqiang and Xiaolin (2017) observed that removing stopwords, numbers, and URLs can reduce noise but does not affect performance, whereas replacing negation and expanding acronyms can improve the classification accuracy.", "Preprocessing techniques such as punctuation and negation (Rose et al., 2018) or pos-tagging and negation (Seal et al., 2020) make up a common component of many emotion classification models (Kim et al., 2018; Patil and Patil, 2013).", "One of the earliest works (Danisman and Alpkocak, 2008) preserved emotion words and negative verbs during stopwords removal, replaced punctuation with descriptive new words, replaced negative short forms with long forms, and concatenated negative words with emotion words to create new words (e.g., not happy → NOThappy).", "Although stemming may remove the emotional meaning from some words, it has been shown to improve classification accuracy (Danisman and Alpkocak, 2008; Agrawal and An, 2012).", "Negations have also been found beneficial, whereas considering intensifiers and diminishers did not lead to any improvements (Strohm, 2017).", "Pecar et al. (2018) also highlight the importance of preprocessing when using user-generated content, with emoticon processing being the most effective.", "Along the same lines, while Gratian and Haid (2018) found pos-tags to be useful, Boiy et al. 
(2007) ignored pos-tagging because of its effect of reducing the classification accuracy.", "The aforementioned works describe preprocessing techniques as applied directly to evaluation datasets in affective systems.", "In contrast, we examine the effectiveness of directly incorporating these known effective preprocessing techniques further upstream into the training corpus of word embeddings, which are widely used across a number of downstream tasks.", "Through a series of extensive experiments, particularly those related to context window size and dimensionality, Levy et al. (2015) indicate that seemingly minor variations can have a large impact on the success of word representation methods in similarity and analogy tasks, stressing the need for more analysis of often ignored preprocessing settings.", "Lison and Kutuzov (2017) also present a systematic analysis of context windows based on a set of four hyperparameters, including window position and stopwords removal, where the right window was found to be better than the left for the English similarity task, and stopwords removal substantially benefited the analogy task but not similarity.", "The effect of factors such as context window size (Hershcovich et al., 2019; Melamud et al., 2016), dimensionality (Melamud et al., 2016), and syntactic dependencies (Levy and Goldberg, 2014; Vulic et al., 2020) on NLP tasks including word similarity (Hershcovich et al., 2019), tagging, parsing, relatedness, and entailment (Hashimoto et al., 2017) and biomedical tasks (Chiu et al., 2016) has been studied extensively in the literature.", "The main conclusion of these studies, however, is that these factors are heavily task-specific.", "Therefore, in this work we explore preprocessing factors for generating word embeddings specifically tailored to affective tasks, which have received little attention.", "A recent study investigated the role of tokenizing, lemmatizing, lowercasing and multiword grouping (Camacho-Collados and Pilehvar, 2018) as applied to sentiment analysis and found 
simple tokenization to be generally adequate.", "In the task of emotion classification, Mulki et al. (2018) examined the role of four preprocessing techniques as applied to a vector space model based on tf-idf trained on a small corpus of tweets, and found stemming, lemmatization and emoji tagging to be the most effective factors.", "Distinct from prior works, we examine a much larger suite of preprocessing factors grounded in insights derived from numerous affective systems, trained over two different corpora, using three different word embedding models.", "We evaluate the effect of the preprocessed word embeddings in three distinct affective tasks including sentiment analysis, emotion classification and sarcasm detection.", "This section describes the preprocessing factors applied to the training corpus that is then used to generate word representations, and the order in which these factors need to be applied to the corpus.", "Basic: A group of common text preprocessing steps applied at the very beginning, such as removing html tags, removing numbers, and lowercasing.", "This step removes all common punctuation from text, such as @%*=()/ +, using the NLTK RegexpTokenizer.²", "² https://www.nltk.org/_modules/nltk/tokenize/regexp.html", "them as is assuming they represent natural language text and its associated complexities.", "In this step, we identify words that may have been misspelled and correct them.³", "As unambiguous spell corrections are not very common and in most cases we have multiple options for correction, we built our own custom dictionary to suggest a replacement by parsing the ukWaC corpus⁴ to retrieve a word-frequency list.", "A misspelled word that has multiple replacements is replaced with the suggested word that has the maximum frequency in the corpus.", "Negation (neg): Negation is a mechanism that transforms a positive argument into its inverse rejection (Benamara et al., 2012).", "Specifically in the task of affective analysis, negation plays a critical role, as negation words can affect the word or sentence polarity, causing the polarity to invert in many cases.", "Our negation procedure is as follows:", "(i) Compilation of an antonym dictionary: The first stage involves compiling an antonym dictionary using the WordNet corpus (Miller, 1995).", "For every synset, there are three possibilities: finding no antonym, one antonym or multiple antonyms.", "The first two cases are trivial (unambiguous replacements).", "In the case of the third option (ambiguous replacement), which represents the most common case, amongst the many choices we consider the antonym with the maximum frequency in the ukWaC corpus, as described in the previous section; and finally the antonym of a word is picked at random from one of its senses in our antonym dictionary.", "(ii) Negation handler: Next, we identify the negation words in tokenized text.⁵", "If a negation word is found, the token following it (i.e., the negated word) is extracted and its antonym looked up in the antonym dictionary.", "If an antonym is found, the negation word and the negated word are replaced with it.", "For example, let the sentence 'I am not happy today' be in its tokenized form ['I', 'am', 'not', 'happy', 'today'].", "First, we identify any negation words (i.e., 'not') and their corresponding negated words (i.e., 'happy').", "Then, we look up the antonym of 'happy' in the antonym dictionary (i.e., 'sad') and replace the phrase 'not happy' with the word 'sad', resulting in a new sentence 'I am sad today'.", "Parts-of-Speech (pos): Four parts-of-speech classes, namely nouns, verbs, adjectives and adverbs, have been shown to be more informative with regard to affect than the other classes.", "³ https://pypi.org/project/pyspellchecker/ ⁴ https://www.sketchengine.eu/ukwac-british-english-corpus/ ⁵ https://pypi.org/project/negspacy/", "Thus, using the NLTK pos-tagger, for each sentence in the corpus we retain only the words belonging to 
one of these four classes, i.e., NN*, JJ*, VB*, and RB*.", "Stopwords ( stop ) : Stopwords are generally the most common words in a language, typically filtered out before classification tasks.", "Accordingly, we remove all the stopwords using the NLTK library.", "Stemming ( stem ) : Stemming, which reduces a word to its root form, is an essential preprocessing technique in NLP tasks.", "We use the NLTK Snowball stemmer for stemming our training corpus.", "While some preprocessing techniques can be applied independently of each other (e.g., removing stopwords and removing punctuation), others need more careful consideration of the sequence in which they are applied in order to obtain a stable result.", "For instance, pos-tagging should be applied before stemming in order for the tagger to work well, and negation should be handled prior to removing stopwords.", "To this end, we apply the preprocessing factors in the following order when combining them: spellchecking, negation handling, pos classes, removing stopwords, and stemming.", "Table 1 summarizes the details of our two training corpora with regards to their vocabulary and corpus sizes after applying various preprocessing settings.", "For some preprocessing factors, such as POS filtering ( pos ) and stopword removal ( stop ), the corpus size reduces dramatically, in some cases by more than 50%, without any significant loss in vocabulary (as indicated by the % ratio of preprocessed to basic), a nontrivial implication with regards to training time.", "Wikipedia : Comparatively a much larger corpus than the News, this corpus consists of 23,046,187 articles from Wikipedia 7 .", "( 6 https://www.kaggle.com/snapcrack/all-the-news 7 https://www.kaggle.com/jkkphys/english-wikipedia-articles-20170820-sqlite )", "Table 1 (details of training corpora; columns: Processing | Vocab size | % | Corpus size | %): News: Basic | 155K | 100 | 123.2M | 100; spell | 149K | 96 | 123.2M | 100; stem | 137K | 88 | 123.2M | 100; punc | 147K | 95 | 111.0M | 90; neg | 152K | 98 | 90.7M | 73; stop | 150K | 97 | 75.6M | 61; pos | 154K | 99 | 70.7M | 57; All-punc | 151K | 97 | 93.7M | 76; All-pos | 140K | 90 | 90.5M | 73; All-stop | 150K | 97 | 75.3M | 61; All | 110K | 71 | 55.2M | 49; All-stem | 110K | 71 | 58.1M | 47; All-spell | 110K | 71 | 56.4M | 46; All-neg | 110K | 71 | 54.3M | 44. Wikipedia: Basic | 5.1M | 100 | 8.1B | 100; All-punc | 4.9M | 96 | 7.2B | 89; All-pos | 4.8M | 94 | 7.0B | 86; All-stop | 4.9M | 96 | 6.8B | 84; All-stem | 4.3M | 84 | 6.4B | 79; All-spell | 4.6M | 90 | 6.1B | 75; All | 4.6M | 90 | 5.6B | 69; All-neg | 4.6M | 90 | 5.0B | 62.", "Table 2 (details of evaluation datasets; columns: Dataset | Genre | Task | Total): IMDB | reviews | sentiment | 50,000; SemEval | tweets | sentiment | 14,157; Airline | tweets | sentiment | 11,541; ISEAR | narratives | emotions | 5,477; Alm | fairy tales | emotions | 1,206; SSEC | tweets | emotions | 1,017; Onion | headlines | sarcasm | 28,619; IAC | response | sarcasm | 3,260; Reddit | comments | sarcasm | 1,010,826.", "4.2 Word Embedding Models We obtain our preprocessed word representations through three models:", "(i) CBOW (Continuous Bag-of-Words) ,", "(ii) Skip-gram : While CBOW takes the context of each word as the input and tries to predict the word corresponding to the context, skip-gram reverses the use of target and context words, where the target word is fed at the input and the output layer of the neural network is replicated multiple times to accommodate the chosen number of context words (Mikolov et al., 2013).", "We train both models on both training corpora using a min count of 5 for News and 100 for Wikipedia, with window sizes of 5 and 10, respectively, setting the dimensionality to 300.", "(iii) BERT (Bidirectional Encoder Representations from Transformers) : BERT is an unsupervised method of pretraining contextualized language representations (Devlin et al., 2018).", "We train the model using the BERT large uncased architecture (24-layer, 1024-hidden, 16-heads, 340M parameters) with the same parameter settings as the original paper.", "We train each of the three models (CBOW, Skip-gram and BERT) 8 times using 16 TPUs (64 TPU chips), Tensorflow 1.15, and 1TB memory on Google Cloud, and two 32-GPU clusters of V100/RTX 2080 Ti with 1TB memory
using the Microsoft CNTK parallelization algorithm 8 on an Amazon server.", "For a large model such as BERT, each training run takes up to 4-5 days.", "We conduct our evaluation on three tasks, namely sentiment analysis, emotion classification and sarcasm detection.", "Table 2 presents the details of our evaluation datasets, and some illustrative examples of text are shown in Table 3.", "Sentiment Analysis : This popular task involves classifying text as positive or negative, and we use the following three datasets for evaluation:", "(i) IMDB : This dataset 9 includes 50,000 movie reviews for sentiment analysis, consisting of 25,000 negative and 25,000 positive reviews (Maas et al., 2011).", "(ii) SemEval 2016 : This sentiment analysis in Twitter dataset 10 consists of 14,157 tweets, of which 10,076 are positive and 4,081 negative (Nakov et al., 2016).", "(iii) Airlines : This sentiment analysis dataset 11 consists of 11,541 tweets about six U.S. airlines from February 2015, with 9,178 tweets labeled as positive and 2,363 negative.", "Emotion Classification : A multiclass classification task, this involves classifying text into a number of emotion categories such as happy, sad, and so on.", "The following datasets are used in our evaluation:", "(i) SSEC : The Stance Sentiment Emotion Corpus (Schuff et al., 2017) is a re-annotation of the SemEval 2016 Twitter stance and sentiment corpus Mohammad et al.
(2017) with emotion labels including anger, joy, sadness, fear, and surprise 12 .", "(ii) ISEAR : This dataset contains narratives of personal experiences evoking emotions (Wallbott and Scherer, 1986).", "We use a subset of the data consisting of five categories: sadness, anger, disgust, fear, joy.", "(iii) Alm : This dataset contains sentences from fairy tales annotated with emotions (Cecilia and Ovesdotter, 2008). ( 8 https://docs.microsoft.com/en-us/cognitive-toolkit/multiple-gpus-and-machines 9 http://ai.stanford.edu/~amaas/data/sentiment/ 10 http://alt.qcri.org/semeval2016/task4/index.php 11 https://www.kaggle.com/crowdflower/twitter-airline-sentiment 12 SSEC: http://www.romanklinger.de/ssec/ )", "Table 3 (illustrative examples; columns: Text | Label | Dataset), e.g., the text 'I must admit that this is one of the worst movies I've ever seen.' and the label 'surprised'.", "Sarcasm Detection : Detecting sarcasm from text, a challenging task due to the sophisticated nature of sarcasm, involves labeling text as sarcastic or not.", "We use the following three datasets:", "(i) Onion : This news headlines dataset 13 collected sarcastic versions of current events from The Onion and non-sarcastic news headlines from HuffPost (Misra and Arora, 2019), resulting in a total of 28,619 records.", "(ii) IAC : A subset of the Internet Argument Corpus (Oraby et al., 2016), this dataset contains response utterances annotated for sarcasm.", "We extract 3,260 instances from the general sarcasm type 14 .", "(iii) Reddit : The Self-Annotated Reddit Corpus (SARC) 15 is a collection of Reddit posts where sarcasm is labeled by the author, in contrast to other datasets where the data is typically labeled by independent annotators Khodak et al.
(2017).", "For classification, we employ an LSTM model, as it works well with sequential data such as text.", "For binary classification, such as sentiment analysis and sarcasm detection, the loss function used is the binary cross-entropy along with sigmoid activation: $\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log(p(y_i)) + (1 - y_i)\log(1 - p(y_i))\right]$, where $y$ is the binary representation of the true label, $p(y)$ is the predicted probability, and $i$ denotes the $i$-th training sample.", "For multiclass emotion classification, the loss function used is the categorical cross-entropy over a batch of N instances and k classes, along with softmax activation: $\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} y_{ij}\log(p(y_{ij}))$, where $p(y)$ is the predicted probability distribution and $p(y_{ij}) \in [0, 1]$. ( 13 https://github.com/rishabhmisra/News-Headlines-Dataset-For-Sarcasm-Detection 14 https://nlds.soe.ucsc.edu/sarcasm2 15 SARC v0.0: https://nlp.cs.princeton.edu/SARC/0.0/ )", "The optimizer is Adam (Kingma and Ba, 2014), all loss functions are sample-wise, and we take the mean over all samples (epochs = 5 or 10, batch size = 64 or 128).", "All sentiment and sarcasm datasets are split into training/testing using an 80%/20% ratio, with 10% of the training data held out for validation.", "For the smaller and imbalanced emotion datasets, we use stratified 5-fold cross-validation.", "We use a dropout layer to prevent overfitting by ignoring randomly selected neurons during training.", "We use early stopping when the validation loss stops improving (patience = 3, min-delta = 0.0001).", "The results are reported in terms of weighted F-score (as some emotion datasets are highly imbalanced), where F-score = $\frac{2pr}{p+r}$, with $p$ denoting precision and $r$ recall.", "A primary goal of this work is to identify the most effective preprocessing factors for training word embeddings for affective tasks.", "Table 4 details the results of our experiments comparing the performance of individual preprocessing factors as well as those of ablation studies
(i.e., including all the factors but one).", "Observing the performance of the individual factors on the News corpus, we first note that even a single simple preprocessing technique can bring improvements, thereby validating our intuition of incorporating preprocessing into the training corpora of word representations.", "Second, negation ( neg ) processing appears to be consistently the most effective factor across all 9 datasets, indicating its importance in affective classification, followed by parts-of-speech ( pos ) processing, where we retained words belonging only to one of four classes.", "Table 4 (F-score results of evaluating the effect of preprocessing factors using CBOW and Skip-gram on the News corpus; columns: IMDB | Semeval | Airline | IAC | Onion | Reddit | Alm | ISEAR | SSEC): CBOW: Basic 83.99 55.69 60.73 65.74 68.23 59.42 36.81 55.43 51.76; stop 84.43 55.72 61.37 66.03 68.17 59.27 36.81 56.01 52.33; spell 86.20 55.93 61.96 66.00 69.57 60.00 36.88 56.41 52.14; stem 86.92 55.72 61.86 65.89 68.49 59.72 36.94 55.84 51.89; punc 86.99 56.41 62.08 65.93 69.85 60.28 36.94 56.89 52.03; pos 85.66 56.83 62.75 66.32 70.25 60.63 37.02 57.04 53.19; neg 88.98 57.29 63.81 66.87 71.12 60.91 37.22 57.39 54.15; All 89.96 57.82 64.58 67.23 70.90 60.84 37.43 57.72 53.71; All-neg 84.67 55.00 61.58 66.02 69.73 59.94 36.91 55.89 51.94; All-pos 85.69 56.31 64.29 66.97 70.48 60.15 37.19 56.27 52.16; All-punc 86.41 56.88 63.01 66.75 70.01 60.00 37.01 57.19 52.43; All-spell 88.23 56.41 63.87 67.23 70.83 60.27 37.22 57.41 53.41; All-stop 90.01 60.82 66.84 67.20 72.49 62.11 38.96 59.28 55.00; All-stem 88.12 60.82 67.12 69.25 72.13 61.73 38.00 59.00 55.42. Skip-gram: Basic 83.07 54.23 61.47 65.51 68.01 59.75 35.87 55.64 51.49; stop 83.23 55.47 62.00 65.62 68.00 59.84 35.94 55.76 51.62; spell 85.90 55.48 62.00 65.61 69.76 60.28 36.10 55.93 52.30; stem 86.00 55.33 61.89 65.60 68.72 59.50 36.00 55.69 51.40; punc 86.68 55.79 62.38 65.89 70.00 60.44 36.41 56.81 52.71; pos 85.91 56.28 63.25 66.24 69.81 60.85 36.44 56.23 52.94; neg 87.28 56.89 63.72 66.87 70.59 61.27 36.87 57.34 53.10; All 88.36 57.04 64.91 66.94 70.73 61.12 37.10 57.92 53.58; All-neg 83.26 54.00 61.95 66.00 69.88 60.00 36.94 55.97 51.89; All-pos 86.21 55.22 65.12 66.06 69.88 61.00 37.00 56.42 52.10; All-punc 85.57 55.99 64.29 66.29 70.00 60.98 37.01 57.02 52.53; All-spell 86.00 56.98 65.00 66.25 70.25 0.61 37.04 57.69 52.86; All-stop 88.74 60.93 67.00 68.57 72.20 62.02 38.92 59.18 55.18; All-stem 88.42 60.67 67.39 69.08 72.00 62.36 37.44 59.48 55.23.", "On the other hand, removing stopwords ( stop ), spellchecking ( spell ) and stemming ( stem ) yield little improvement and mixed results.", "Interestingly, applying all the preprocessing factors is barely better than, or in some cases (Onion, Reddit and SSEC) even worse than, applying just negation.", "Finally, the best performance comes from combining all the preprocessing factors except stemming (All-stem).", "Moreover, Table 5 details the performance of ablation studies on the Wikipedia corpus for all three models, where we note that the best performance for the CBOW model comes from combining all the preprocessing factors except stemming (All-stem), whereas for the Skip-gram and BERT models, the best results are obtained by applying all the preprocessing factors except stopword removal (All-stop).", "Considering that the Wikipedia corpus is almost 160 times bigger than the News corpus, it is unsurprising that the word embeddings obtained from the former yield considerably better results, consistent across all nine datasets.", "We investigate the difference between applying preprocessing to the training corpora for generating word embeddings (Pre) and applying preprocessing to the classification datasets (Post).", "As an example, during Pre , we first apply the preprocessing techniques (e.g., all but stemming) to the training corpus (e.g., Wikipedia), then generate word embeddings, then convert a classification dataset (e.g., IMDB) into word embedding representation, and finally
classify using LSTM.", "Conversely, for Post , we first generate word embeddings from a training corpus (e.g., Wikipedia), then apply the preprocessing techniques (e.g., all but stemming) to the classification dataset (e.g., IMDB), which is then converted to word vector representation, and finally classified using LSTM 16 .", "The results of this experiment are presented in Table 6, where we observe that incorporating preprocessing into the training corpora before generating word vectors (Pre) outperforms preprocessing classification datasets (Post) across all nine datasets of the three affective tasks. ( 16 Note: For settings including stemming, the classification data is also stemmed in order to obtain a compatible vocabulary. )", "Interestingly though, preprocessing both bodies of text (Both) appears to be of little benefit, suggesting the importance of preprocessing the training corpora used for obtaining word embeddings.", "While not a primary focus of this paper, in this final experiment we compare the performance of our preprocessed word embeddings against those of six state-of-the-art pretrained word embeddings 17 .", "17 These vectors, obtained from their original repositories, have been used without any modifications.", "(i) GloVe : Global vectors for word representations (Pennington et al., 2014) were trained on aggregated global word co-occurrences.", "We use the GloVe.6B vectors trained on 6 billion words 18 , uncased, from Wikipedia and Gigaword.", "(ii) SSWE : Sentiment Specific Word Embeddings (unified model) 19 were trained using a corpus of 10 million tweets to encode sentiment information into the continuous representation of words (Tang et al., 2014).", "(iii) FastText : These pretrained word vectors 20 , based on sub-word character n-grams, were trained on Wikipedia using fastText (Bojanowski et al., 2017), an extension of the word2vec model.", "(iv) DeepMoji : These word embeddings 21 were trained using a BiLSTM on 1.2 billion tweets with emojis
(Felbo et al., 2017).", "(v) EWE : Emotion-enriched Word Embeddings 22 were learned on a corpus of 200,000 Amazon product reviews using an LSTM model (Agrawal et al., 2018).", "From the results in Table 7, we notice that BERT is best on eight out of nine datasets, the exception being one sarcasm dataset (Reddit), while word2vec CBOW is the second best on four datasets.", "Overall, our analysis suggests that preprocessing at the word embedding stage (Pre) works well for all three affective tasks.", "Figure 2 summarizes the results obtained for all three tasks in terms of", "(a) absolute F-scores and", "(b) relative improvement (best preprocessing over Basic preprocessing).", "The IMDB dataset achieves the highest F-score overall, most likely because it consists of movie reviews, which are much longer than the text from other genres.", "As expected, the binary classification tasks of sentiment analysis and sarcasm detection achieve comparable results, while multiclass emotion classification typically has much lower F-scores.", "The most interesting observation, however, is seen in Fig.", "2(b), where the emotion datasets show the highest relative improvement, indicating that multiclass classification tasks may benefit the most from applying preprocessing at the word embedding stage (Pre).", "We systematically examined the role of preprocessing training corpora used to induce word representations for affect analysis.", "While all preprocessing techniques improved performance to a certain extent, our analysis suggests that the most noticeable increase is obtained through negation processing ( neg ). ( 21 https://github.com/bfelbo/DeepMoji 22 https://www.dropbox.com/s/wr5ovupf7yl282x/ewe_uni.txt ) (Figure 2: Absolute F-scores vs. relative improvement)", "The overall best performance is achieved by applying all the preprocessing techniques except stopword removal (All-stop).", "Interestingly, incorporating preprocessing into word representations appears to be far more beneficial than applying it downstream to the classification datasets.", "Moreover, while all three affective tasks (sentiment analysis, sarcasm detection and emotion classification) benefit from our proposed preprocessing framework, our analysis reveals that the multiclass emotion classification task benefits the most.", "Exploring the space of subsets of our preprocessing factors might yield more interesting combinations; we leave this for future work.", "We thank the anonymous reviewers for their insightful comments.", "This work is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Big Data Research, Analytics, and Information Network (BRAIN) Alliance established by the Ontario Research Fund Research Excellence Program (ORF-RE).", "In particular, we thank Majid Taghdimi from Questrade for providing us with the computing resources and for help with the parallelization algorithm.", "We would also like to thank Dr. Heidar Davoudi for the helpful discussions and insights in this project." ]
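The negation-handling step described above (looking up an antonym for the token that follows a negation word and replacing the two-token phrase with it) can be sketched minimally as follows. This is an illustrative sketch only: the antonym dictionary and negation-word list here are tiny stand-ins for the WordNet- and ukWaC-derived resources described in the text.

```python
# Minimal sketch of the negation handler described in the text.
# ANTONYMS stands in for the WordNet/ukWaC-derived antonym dictionary;
# NEGATION_WORDS stands in for the negation cues found by the tagger.
ANTONYMS = {"happy": "sad", "good": "bad"}
NEGATION_WORDS = {"not", "never", "no"}

def handle_negation(tokens):
    out, i = [], 0
    while i < len(tokens):
        tok = tokens[i]
        # If a negation word precedes a word with a known antonym,
        # replace the two-token phrase with that antonym.
        if tok.lower() in NEGATION_WORDS and i + 1 < len(tokens):
            antonym = ANTONYMS.get(tokens[i + 1].lower())
            if antonym is not None:
                out.append(antonym)
                i += 2
                continue
        out.append(tok)
        i += 1
    return out

# Example from the text: "I am not happy today" -> "I am sad today"
print(" ".join(handle_negation(["I", "am", "not", "happy", "today"])))
```

When no antonym is found for the negated word, the tokens are left unchanged, as implied by the procedure in the text.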
[ "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "other", "other", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "result", "other", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other" ]
[ "We propose TRACIE , a novel temporal reasoning dataset that evaluates the degree to which systems understand implicit events events that are not mentioned explicitly in natural language text but can be inferred from it.", "This introduces a new challenge in temporal reasoning research, where prior work has focused on explicitly mentioned events.", "Human readers can infer implicit events via commonsense reasoning, resulting in a more comprehensive understanding of the situation and, consequently, better reasoning about time.", "We find, however, that state-of-the-art models struggle when predicting temporal relationships between implicit and explicit events.", "To address this, we propose a neuro-symbolic temporal reasoning model, SYMTIME , which exploits distant supervision signals from large-scale text and uses temporal rules to combine start times and durations to infer end times.", "SYMTIME outperforms strong baseline systems on TRACIE by 5%, and by 11% in a zero prior knowledge training setting.", "Our approach also generalizes to other temporal reasoning tasks, as evidenced by a gain of 1%-9% on MATRES, an explicit event benchmark.", "Understanding temporal relations between events in narrative text is a crucial part of text understanding.", "When reading a story, a human can construct a latent timeline about events' start and end times, similar to the one shown in Fig. 
1 about an automobile accident.", "This timeline not only contains the placements of explicitly mentioned events (e.g., ride a bicycle ), but also accounts for implicit events (e.g., Farrah was distracted so she looked away).", "Such a latent timeline explains the dynamics between events; for example, the possible chain of events between ride and recovered in this context contains get hit and injured .", "(Most of the work was done when the third author was employed at the Allen Institute for AI and the first author was an intern there.)", "The ability to construct such a timeline is essential for understanding the causal dynamics of a situation.", "Without it, NLP systems cannot truly understand situations and reliably solve tasks such as temporal question-answering, causal inference, and scheduling assistance.", "To better evaluate this ability, we introduce a new dataset called TRACIE ( TempoRAl Closure InfErence ) that focuses on temporal relations over implicit events in short stories.", "Our dataset contains high-quality annotations of both start and end time queries that test a system's understanding of the full temporal closure (i.e., both start and end time) of events.", "As a task that requires considerable commonsense knowledge, we follow Zhou et al. (2020) in minimizing the size of the training set, therefore making TRACIE mainly an evaluation set.", "The final TRACIE dataset contains a total of 5.4k human-curated instances, provided in a (multi-premise) textual entailment (TE) format, as illustrated at the bottom of Fig.", "
1. A Pre-trained language model such as T5-Large (Raffel et al., 2020) fine-tuned on our new dataset achieves a modest binary prediction accuracy of 67.9%.", "1 Consistent with other studies on temporal reasoning (Zhou et al., 2020), these results reveal serious limitations in existing pre-trained language models.", "To build models better capable of understanding time with minimal direct training data, we propose a novel distant supervision technique that improves generalization by extracting temporal patterns from large-scale free text as part of an additional pretraining step.", "In contrast to other attempts at extracting temporal data through patterns at the sentence level (Gusev et al., 2011; Zhou et al., 2020), we extract over large windows of text such as paragraphs.", "This allows for capturing global information related to multiple events and extracting signals that do not appear in small-window local contexts.", "The resulting model, PTNTIME (Pattern-Time), achieves a 76.6% accuracy on TRACIE , a 9% gain over using standard T5-Large.", "We also show the applicability of PTNTIME on a standard temporal reasoning benchmark involving only explicit events, MATRES (Ning et al., 2018b), with a 9 point gain in a low-resource setting.", "We achieve further improvements by coupling PTNTIME with a duration model from Zhou et al.
(2020) to create a neural-symbolic reasoning model called SYMTIME .", "The key idea in SYMTIME is to decompose the computation of temporal relations into predictions of relative distances between start times and predictions of durations.", "For example, in Fig. 1, we can decide that distracted likely ends before try starts because the duration of distracted is likely to be shorter than the distance between the two start times.", "This allows for better prediction of end times, which rarely appear in natural text and have previously been shown to be difficult to annotate (Ning et al., 2018b).", "Such a symbolic computation involves a logical combination of the individual models in a way that formalizes part of the Allen interval algebra (Allen, 1983).", "This model, which supports a wider range of temporal computation and can be used with and without task-specific supervision, achieves a final accuracy of 78.9% on TRACIE 's binary classification metric.", "We also show that SYMTIME is more robust to different distributions of the training data, demonstrating the benefits of using a temporal model with a transparent reasoning process.", "In summary, we make the following 3 contributions: (1) a temporal relation dataset TRACIE focusing on implicit events (3); (2) a distant supervision process for temporal understanding of implicit events (4); and (3) a reasoning model that makes end-time comparisons using predictions of start-time distances and durations (5).", "Finally, we demonstrate the effectiveness of our models on TRACIE , as well as the applicability of our approach to an existing temporal benchmark (6).", "Temporal reasoning has received much attention in the NLP community, and to date, there are many datasets that focus on temporal ordering (Pustejovsky et al., 2003; Bethard et al., 2007; Cassidy et al., 2014; Reimers et al., 2016; O'Gorman et al., 2016; Ning et al., 2018b, 2020b), and other temporal knowledge (Pan et al., 2006; Zhou et al., 2019).",
"We focus here on modeling implicit events, which has received relatively little attention.", "Multiple systems have been proposed as part of research into temporal ordering (Do et al., 2012; Moens and Leeuwenberg, 2017; Leeuwenberg and Moens, 2018; Meng and Rumshisky, 2018; Ning et al., 2018c; Han et al., 2019), duration prediction (Vashishtha et al., 2019) and other tasks.", "Our decision to use a textual entailment style follows recent work on natural language inference (Williams et al., 2017; Nie et al., 2020; Bhagavatula et al., 2020), which tends to not focus on time (for recent work on temporal NLI, see Vashishtha et al. (2020)).", "Many have used distant supervision for temporal reasoning (Gusev et al., 2011; Ning et al., 2018a; Zhou et al., 2020).", "Comparatively, our work captures longer-range dependencies in narrative text (for related ideas, see Ammanabrolu et al. (2021)).", "We are inspired by structural predictions and constraints that combat the sparsity of temporal knowledge (Ning et al., 2017; Do et al., 2012), as well as neural module networks (Andreas et al., 2016; Gupta et al., 2019) and other decomposition-based approaches (Talmor and Berant, 2018; Khashabi et al., 2018; Li et al., 2019; Wolfson et al., 2020; Khot et al., 2021).", "In particular, we build neural-symbolic transformer models that operationalize some of the classical interval-based computations used in earlier work on temporal reasoning (Allen, 1983; Gerevini and Schubert, 1995) (for related ideas, compare with Leeuwenberg and Moens (2018); Vashishtha et al. 
(2019)).", "This work is broadly related to work on causal dynamics (Pearl, 2009).", "This combined temporal and causal focus is also related to procedural text modeling (Tandon et al., 2018, 2020).", "The goal of TRACIE is to test a system's ability to compare the start and end times of non-extractive implicit event phrases instead of extractive triggers from the context.", "Such tests in TRACIE take the form of multi-premise textual entailment (TE) (Lai et al., 2017).", "Each TRACIE instance contains 1) a context story (or premise) consisting of a sequence of explicit narrative events; 2) an implicit event in the form of a natural language phrase that is unmentioned but has some role in the story; 3) a comparator of either {starts,ends} ; 4) an explicit event, also in the form of a phrase; and 5) a temporal relation of either {before,after} that marks the relationship, in the dimension defined by the comparator , between the implicit-event and the explicit-event .", "With these 4 components, we are able to generate TE-style instances, using the context story as the premise and temporal queries about pair-wise relations between implicit and explicit events as hypotheses. 2", "For example, in the first positive instance shown in Fig. 1, distracted is the implicit-event , starts is the comparator , try is the explicit-event and before is the temporal-relation .", "They form a positive hypothesis distracted starts before try. 3 ( 2 We release TRACIE and its leaderboard at https://leaderboard.allenai.org/tracie 3 All event phrases are shortened to triggers here for simplicity. )", "We flip the temporal-relation (i.e., before to after and vice versa) to create negative (contradiction) instances, as shown in the second example instance in Fig. 1.", "
Since the start times of explicit-events are more obvious to human annotators, we use them as reference points and compare the implicit-event 's start or end time with them (depending on the comparator ), according to the label definitions shown in Fig. 3.", "In rare cases where two time points are the same (e.g., hit and get hit start at the same time in Fig. 1), we use the causal relation to decide the order, so that hit starts before get hit .", "Such instances are created through a multi-stage annotation process, as detailed (in respective order) below.", "All steps are implemented with the CrowdAQ platform (Ning et al., 2020a) with qualification exams.", "Implicit Event Generation We randomly sample short stories from the ROCStories dataset (Mostafazadeh et al., 2016).", "For each story, one annotator writes 5 implicit event phrases that are not explicitly mentioned by the given story, but are inferable and relevant.", "The annotator additionally rewrites the two explicit events closest to the implicit event's start and end time, respectively.", "With these two events, we can build two TRACIE instances (minus the temporal-relation ) per implicit event, which accounts for 10 instances in total per story.", "Automatic Instance Generation We use AllenNLP (Gardner et al., 2018) to extract all verbs and relevant arguments with its semantic role labeling (SRL) model.", "With all the verbs and their arguments, we construct a pool of explicit events in the form of short phrases.", "For each implicit event, we randomly select two { explicit-event , comparator } pairs from the pool and build 10 additional instances (without the temporal-relation ).", "Label Collection For each of the 20 instances per story, we annotate the temporal-relation with four different annotators.", "Annotators follow the label definition in 3.1 to produce four temporal-relations for each instance.", "We use the majority agreement as the final label and filter out unagreeable instances.", "Two authors
additionally verify the instances with ambiguous verbs (e.g., have) and correct 5 % of the end-time instances.", "We split the data under the independent and identically distributed (i.i.d.) assumption based on stories, with a 20/80 train/test ratio.", "We use a small training set, following Zhou et al. (2019), as we believe temporal relations involve much commonsense knowledge.", "As we later show in 6.3, it is infeasible to collect a large enough human-annotated training set to capture all the knowledge needed to tackle this problem completely, and a system must acquire knowledge from external resources.", "As a result, we use a small training set just to define the task and, at the same time, an extensive testing set for more robust evaluation.", "The authors conduct a human upper-bound analysis on 100 randomly sampled instances, following the procedure in Zhou et al. (2020).", "There is a 94 % agreement and a 98 % resolved accuracy 4 , suggesting that TRACIE has a high annotation quality. ( 4 This is obtained after the authors discuss and resolve any disagreements before comparing with the annotated labels. )", "As argued in 3.2, we believe that it is more efficient to build a model that learns the prior knowledge needed for the task from distant signals and only subsequently learns the task definition through a small training set.", "This section describes how we collect the distant signals related to events' start-time comparisons and pre-train a novel temporally-aware transformer model called PTNTIME .", "While PTNTIME will be used for fine-tuning directly on
collect start time comparisons between pairs of events heuristically from free text using before/after keywords (following much prior work in temporal modeling and extraction (Do et al., 2012)). We use AllenNLP's SRL model to process each input sentence and find verbs with a temporal argument that starts with either before or after and contains at least one other verb.", "If there are multiple verbs in the temporal argument, we take the one with the largest number of tokens as arguments.", "We match the two extracted verbs with the relation indicated by the first word, either before or after.", "As the example in Fig. 4 shows, the extractor identifies that purchase food is before go to park, as indicated by the before keyword mentioned in the text.", "We acquire 2.8 million instances from the May 2020 Wikipedia dump using this process.", "Cross-Sentence Extraction: The data collected from the within-sentence patterns does not reveal the relative distance between two start times.", "In addition, because writers often omit trivial inferences for efficiency, certain event pairs rarely co-occur within a small textual window, so one event in such pairs is often implicit given the other.", "To better collect such signals, we employ a cross-sentence extraction that finds direct temporal expressions of hours and dates.", "Because these temporal expressions (e.g., 2021-01-01) are globally comparable, the compared events can be anywhere in a document.", "Therefore, this process collects more supervision signals about time-point comparisons and their relative distance on event pairs with trivial causal relations.", "We apply the SRL model and find all temporal arguments and their associated verbs.", "We find the exact temporal values by filling unmentioned elements of a temporal expression with the nearest previous mention (e.g., we add January to the expression of the 10th in Fig. 4.)", "
These extractions have high precision, as the SRL model does well at identifying temporal arguments.", "We then construct supervision instances under the assumption that the extracted temporal expressions describe the start times of the associated verbs (e.g., went started on January 1st in Fig. 4).", "Each instance comprises an event pair, a temporal relation, and an estimate of the temporal difference between the two start times.", "Each event is a phrase constructed by taking all relevant arguments of the predicate verb in the SRL parses.", "We represent the difference between the two start times as one of seven coarse temporal units: {minutes, hours, days, weeks, months, years, decades}.", "For example, we get go to park is weeks before write review, as shown in Fig. 4.", "In addition to the event pairs, we randomly sample sentences within the paragraph to use as the context that better defines the events.", "We collect 700k instances from Wikipedia with this cross-sentence extraction process.", "Language Model (LM) Pre-Training Data: We couple the specialized temporal pre-training data described above with additional paragraphs that are used to perform conventional language model pre-training using the original denoising task proposed in Raffel et al. 
(2020).", "This is done to maintain part of the original language model's semantics and to avoid overfitting.", "We use the Gutenberg Dataset (Lahiri, 2014) as the source and collect 1 million paragraphs for this purpose.", "Data Format: We then format the within-/cross-sentence extraction data into consistent instances with input sequences of the form event: [EventA] starts [Relation] [EventB] story: [Paragraph].", "The output sequences take the form answer: [Label] [Distance].", "Here [EventA] represents the tokens that describe the first event; [EventB] represents the ones that describe the second event; and [Paragraph] represents the tokens of the context, which is non-empty only for cross-sentence extractions.", "[Relation] is either before or after, and [Label] is either positive or negative.", "When the label is positive, the relation is the gold relation extracted from the text; when it is negative, the relation is the inverse of the extracted relation.", "We randomly make 50% of the instances negative.", "[Distance] is one of the 7 coarse temporal units, represented with a set of blank tokens [extra_id_N].", "We leave it blank for the within-sentence extractions so that the objective function will not include it in loss computations.", "The LM pre-training data follows the original format in Raffel et al. 
(2020).", "We use a pre-trained sequence-to-sequence model as our base model and additionally pre-train this model using the data collected in Section 4.1 (for modeling details, see Section 6.1).", "We call the resulting model PTNTIME.", "As a result of this additional pre-training step, PTNTIME serves as a new set of temporally-aware model weights that can be used in place of existing pre-trained models and fine-tuned on TRACIE.", "As we describe next, we also use PTNTIME to build a modular temporal reasoning model called SYMTIME that attempts to go beyond a standard language modeling approach and improve start and end point prediction.", "To address the challenge of predicting event end times, for which it is difficult to obtain high-quality direct or distant supervision, we introduce a new reasoning model called SYMTIME in this section.", "This model makes end-time comparisons by symbolically combining start time distance and duration from separate predictions based on some of the components introduced in the previous section.", "Different from Leeuwenberg and Moens (2018) and Vashishtha et al. (2019), our model does not rely on explicit annotations of time points, but only on relative comparisons between them.", "As described in Section 3.1, hypotheses in TRACIE make pair-wise comparisons between two events e_1 and e_2 using a comparator l from {starts, ends} and a query-relation r from {before, after}, based on a provided story context.", "(Figure 5: Decomposition of the relation functions that solve TRACIE instances, equal time points ignored: for l = ends, r_l(e_1, e_2) = before if end_1 < start_2, after otherwise; for l = starts, r_l(e_1, e_2) = before if start_1 < start_2, after otherwise.)", "We associate each e_j with a latent start time start_j and an end time end_j, as well as, for convenience, a duration duration_j = end_j − start_j.", "Under this formulation, a symbolic approach to solving TRACIE involves computing the relation functions r_l shown in Figure 5.", "
For example, given exact numeric values end_1 and start_2, as one would assume in a classical interval-based approach to temporal reasoning (Allen, 1983; footnote 5), determining if the first event ends before the second involves simply computing whether end_1 is less than start_2.", "Given that the exact values of start and end times are latent, we use the intervals to do the same comparisons, as they are more context-invariant.", "For example, we do not need the exact date to know that lunch starts before dinner on the same day, because there is a typical distribution of the relative distance between the two start times.", "Based on this idea, we build a neural-symbolic model that learns approximations of these simple functions in Fig. 5 in a differentiable way.", "Specifically, we use individual neural modules that make predictions about event intervals via distance and duration functions dist(e_i, e_j) and dur(e_j), respectively.", "To understand this decomposition, we define the distance and duration functions computed by these two modules as dist(e_i, e_j) = start_i − start_j and dur(e_j) = duration_j.", "By exploiting the rule that an end point end_j can be computed as end_j = start_j + duration_j, we can, for example, decompose the relation r_ends(e_1, e_2) = before (i.e., e_1 ends before e_2) in terms of our two modules via simple algebraic manipulation: r_ends(e_1, e_2) = before ⟺ end_1 < start_2 ⟺ start_1 + duration_1 < start_2 ⟺ (start_1 − start_2) + duration_1 < 0 ⟺ dist(e_1, e_2) + dur(e_1) < 0.", "(Footnote 5: In the Allen algebra, the values end_x and start_y correspond to the right and left end points x+ and y− in the intervals (x−, x+) and (y−, y+); likewise, our duration_x corresponds to the value (x+ − x−).)", "Hence, we have reduced the computation of the relation ends-before to a symbolic computation over two numeric intervals.", "Conversely, we have r_ends(e_1, e_2) = after ⟺ dist(e_1, e_2) + dur(e_1) > 0 (footnote 6). 
For the starts comparator, we have r_starts(e_1, e_2) = before ⟺ dist(e_1, e_2) < 0, and vice versa for the after relation.", "In what follows, we describe how we approximate the values of the two functions via individual neural modules (see illustration in Fig. 6).", "To obtain a model to estimate dur(·), we pre-train a sequence-to-sequence model with the duration data from Zhou et al. (2020), which is similarly collected via pattern-based extraction.", "The data contains over 1 million events with their corresponding duration values.", "We map each instance to an input sequence event: [Event] story: [Story] and a corresponding output sequence answer: [Value], where [Event] represents the tokens of an event with the trigger verb marked by a special token to its left, [Story] represents down-sampled tokens from the context, and [Value] is one of the 7 unit labels described in Section 4.1 (i.e., {minutes, hours, days, weeks, months, years, decades}).", "We use the output from PTNTIME to approximate the function dist(·).", "Following the sequence formulation of PTNTIME in Section 4, we replace [EventA] with the textual description of e_1, [EventB] with the textual description of e_2, and [Paragraph] with the context (premise), and fix [Relation] to be before.", "(Footnote 6: We note that one drawback of this inference rule is that it does not predict causal relations and, therefore, cannot handle instances where end_1 = start_2, as our label definitions in Section 3.1 describe.)", "We leave this problem for future research.", "By taking the values of the vocabulary indices corresponding to positive and negative from the logits of [Label] and applying a softmax operation, we get P_before and P_after.", "These are the probabilities of e_1 starting before and after e_2, respectively, and are used to define the vector p = [P_before, P_after].", "Similarly, we apply softmax to the logits of [Distance] over the 7 words representing the temporal units to obtain 7 values that approximate the 
probabilities of the distance between two events' start times being closest to each temporal unit.", "We place the 7 values in the temporal units' increasing order in a vector d.", "To represent |start_1 − start_2| with a single value, we take the dot product of the probabilities with an incremental constant vector c = [0, 1, 2, 3, 4, 5, 6].", "To get the direction, we apply the tanh function to the difference between the probabilities in p (footnote 7).", "As a result, we have: dist(·) = start_1 − start_2 = (c^T d) · tanh(INT_max · (p_2 − p_1)) (1). We use the pre-trained model from Section 5.2 to approximate the function dur(·).", "Because the model is pre-trained with markers to the left of trigger verbs, we run a part-of-speech tagger on input phrases and add a marker to the left of the first verb.", "We apply softmax to the logit values of [Value] over the 7 temporal unit words and get, as above, 7 values representing the probabilities of the input event's duration being closest to each unit.", "We form v by placing these values in the temporal units' increasing order.", "With the same constant vector, we have: dur(·) = duration_1 = c^T v (2). For hypotheses with the comparator starts, we use PTNTIME and its sequence-to-sequence objective to learn (i.e., we take the input hypothesis and context as is and use [Label] directly as the prediction).", "For hypotheses where the comparator is ends, we use the inference process in Section 5.1 and the computation process described above to construct logits = [−pred, pred], with pred = dist(e_1, e_2) + dur(e_1), as detailed in Fig. 6.", "
We find the gold temporal-relation in each training instance and compute a two-class cross-entropy loss with logits.", "The PTNTIME that predicts starts hypotheses shares weights with the one used in computing logits.", "(Footnote 7: To ensure that tanh returns a value close to 1 or -1, we multiply the distance by a big number denoted as INT_max.)", "The final model, SYMTIME, can also be used to predict TRACIE instances without any task-specific supervision, as the two functions are initialized with distant supervision.", "In this section, we detail our experimental setup (Sections 6.1-6.2) and report our main results (Sections 6.3-6.5).", "6.1 Baselines and Systems: We use T5-Large implemented by Wolf et al. (2019) as our base sequence-to-sequence model for both PTNTIME and the duration model in Section 5.2, as it provides for faster iterations.", "We use early stopping, a batch size of 32, and otherwise default parameters.", "PTNTIME converges after 45k steps (≈1.4M instances) and the duration model converges after 80k steps (≈2.6M instances).", "We use these pre-trained weights in SYMTIME as well as SYMTIME-ZEROSHOT, which uses no TRACIE supervision.", "We compare our proposed models with a host of baselines based on the same pre-trained language model, including BaseLM: T5-Large, and BaseLM-MATRES: T5-Large fine-tuned on 20k MATRES training instances.", "We also compare with other architectures/models, including a BiLSTM as used in Williams et al. (2017), RoBERTa-Large (Liu et al., 2019), and T5-3B.", "All models and baselines follow a standard TE setup and default parameters.", "We report a 3-run average, and each model is run until convergence.", "We measure system performance on TRACIE separately for start-time hypotheses and end-time hypotheses.", "We also employ a story-wide exact match metric, which is the percentage of stories with all their related hypotheses answered correctly.", "In addition to TRACIE's standard i.i.d. 
split, we propose a pruned version of the training set with balanced prior distributions.", "For example, in the i.i.d. training set, 70% of the examples with the comparator ends and relation after are positive.", "We randomly remove instances from the majority classes to produce a uniform-prior training set such that a model can no longer rely on such prior distributions.", "We believe this setting better evaluates a system's true understanding of the task.", "Table 1 shows system performance in TRACIE's i.i.d. setting.", "We observe that PTNTIME improves on all metrics over the base language model, with 6% on start-time comparisons and 8% on story-wide exact match.", "It also outperforms BaseLM-MATRES, suggesting that distant supervision is more efficient than extensive human annotation.", "With symbolic end-time inference, SYMTIME further improves on all metrics, with 7%, 4%, and 9% gains over the base language model on start time, end time, and story-wide exact match, respectively.", "SYMTIME can further improve the performance on start-time hypotheses over PTNTIME even though they use the same model to predict start-time queries.", "This is because PTNTIME is not designed to understand end times from pre-training, and fine-tuning on such data hurts its representation in general.", "This illustrates the benefits of models using explicit and sensible reasoning processes.", "Under the uniform-prior training setting, a system cannot exploit prior knowledge about the label distribution when making predictions.", "Given this, we see that all baselines produce much lower performance.", "E.g., the BiLSTM, a model that lacks much of the prerequisite knowledge for reasoning, suddenly performs near random chance.", "Compared to the baseline models, PTNTIME drops only 2.7%, suggesting that it is more invariant to evaluation settings and better understands temporal common sense.", "SYMTIME has the smallest drop among all models (1.7%) because of its explicit reasoning process on end-time 
hypotheses.", "SYMTIME-ZEROSHOT does not use any TRACIE training examples, so it has the same performance in the uniform-prior setting, where it outperforms all supervised baselines, including T5-3B.", "To show that our model is not limited to the TRACIE dataset and is general in temporal relation reasoning, we also evaluate on MATRES (Ning et al., 2018b), a temporal relation dataset focused on comparing explicit events' start times.", "We train and evaluate on only the instances with a label of either before or after, which account for about 80% of all instances.", "We compare the performance of SYMTIME (footnote 9) with BaseLM.", "We report four results: OT-NS (original test, no story): train and test with only the sentences containing the trigger verbs; OT: train and test with the entire document (down-sampled to be below the maximum sequence length) as an auxiliary input; OT-MS (original test, minimal supervision): train with 1.2k (6%) training instances; PT (perturbed test): train with the complete training set and test on a perturbed test set from Gardner et al. (2020).", "In OT-NS, we also report a SOTA system from Wang et al. (2020) under the same two-label setting (footnote 10).", "Table 3 shows the performance of our model and the baselines.", "(Footnote 9: This is virtually the same as using PTNTIME, as MATRES evaluates neither duration nor end times.)", "(Footnote 10: Wang et al. (2020) is trained with two additional labels.)", "We see that our model is consistently better than BaseLM and, at the same time, comparable to Wang et al. 
(2020).", "Our model benefits more from input contexts, and drops only 4% in the OT-MS setting with minimal supervision (from 89.6 to 86.1), compared to the 10% drop of T5-Large.", "This shows the effectiveness of our distant signals in Section 4.1, which are also designed to encourage contextual understanding.", "To better understand the improvements from our models, we conduct several ablation studies.", "Table 4 shows the results on TRACIE where the story is not provided as part of the inputs to systems (a no-story setting).", "While such a setting bears some resemblance to the partial-input baselines often employed in TE (Poliak et al., 2018), in our setting it is often possible to predict temporal relations in the absence of stories because of strong commonsense priors.", "Indeed, we estimate that 65% of the instances can be correctly predicted from the hypotheses alone, based on expert analysis in Section 3.2.", "This suggests an 82.5% human upper bound (footnote 11) in this no-story setting.", "Hence, such a setting partly evaluates a model's ability to incorporate commonsense priors when making decisions.", "We see that BaseLM is close to random chance, whereas PTNTIME and SYMTIME improve by 20% and 22%, respectively.", "This suggests that our models better understand temporal common sense through the distant supervision on both start times and durations.", "On the other hand, we observe much smaller drops in our models' performance in this no-story setting.", "This suggests that our models do not improve as much on the 35% of instances that require multi-hop timeline constructions over more than two events, motivating future work.", "We also compare the two sources of distant supervision described in Section 4.1.", "We do this by individually pre-training two models with only within-sentence or only cross-sentence extracted data.", "We see that the cross-sentence extraction brings the most performance gain on TRACIE's start-time binary metric under the uniform-prior training setting.", "This suggests that the global extraction rule is able to introduce new 
knowledge that is not seen in localized language model pretraining.", "Combining the within-sentence data further improves the performance.", "Through analysis of the interval predictions made by SYMTIME, we notice a tendency for the model to predict after for end-time instances, possibly due to over-estimated durations: a byproduct of natural biases in text.", "Given the weak signal used to learn such intervals and these potential biases, this is not altogether surprising.", "We leave the task of learning more robust and faithful interval representations for future work.", "We introduce a challenging dataset, TRACIE, to evaluate systems' temporal understanding of implicit events.", "We propose a distant supervision process that improves language models' understanding of the start times of both explicit and implicit events.", "We further combine this process with a distantly supervised model that estimates events' durations to compare event end times, under the explicit rule that end times are start times plus durations.", "We show that our model improves on TRACIE and MATRES, suggesting the effectiveness of high-precision pre-training and symbolic temporal reasoning.", "Despite these advances, TRACIE continues to be a challenging task for future work on general temporal reasoning.", "This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the BETTER Program, and by Contract FA8750-19-2-1004 with the US Defense Advanced Research Projects Agency (DARPA).", "The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government." ]
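The symbolic end-time comparison described in the record above (e_1 ends before e_2 iff dist(e_1, e_2) + dur(e_1) < 0, with dist and dur given by Equations 1 and 2) can be sketched numerically. The probability vectors below are hypothetical stand-ins for the softmax outputs of PTNTIME and the duration model; the function names are illustrative, not from the paper's code.

```python
import math

# Coarse temporal units in increasing order, shared by the start-time
# distance and the duration predictions.
UNITS = ["minutes", "hours", "days", "weeks", "months", "years", "decades"]
C = list(range(len(UNITS)))  # incremental constant vector c = [0, 1, ..., 6]
INT_MAX = 1e6                # large factor so tanh saturates to +/-1

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dist(d, p):
    """Eq. (1): signed start-time distance start_1 - start_2.
    d: unit distribution over |start_1 - start_2| (softmax of [Distance]);
    p: [P_before, P_after] (softmax over the [Label] logits)."""
    return dot(C, d) * math.tanh(INT_MAX * (p[1] - p[0]))

def dur(v):
    """Eq. (2): duration of e_1 from its unit distribution v."""
    return dot(C, v)

def ends_before(d, p, v):
    """r_ends(e1, e2) = before  iff  dist(e1, e2) + dur(e1) < 0."""
    return dist(d, p) + dur(v) < 0

# Hypothetical example: e1 = "ate lunch", e2 = "ate dinner".
d = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # start-time gap ~ hours
p = [0.9, 0.1]                            # confident e1 starts first
v = [0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]   # lunch lasts minutes-to-hours
print(ends_before(d, p, v))               # dist = -1.0, dur = 0.5 -> True
```

Because tanh saturates, the distance magnitude c^T d is given a clean ±1 direction, keeping the whole comparison differentiable end to end.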
[ "objective", "abstain", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "method", "other", "abstain", "other", "abstain", "other", "method", "abstain", "other", "abstain", "abstain", "other", "method", "abstain", "other", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", 
"abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "result", "abstain", "other", "other" ]
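The pre-training data format described in the first record above (input event: [EventA] starts [Relation] [EventB] story: [Paragraph], output answer: [Label] [Distance], with 50% of instances made negative by inverting the extracted relation) can be sketched with a toy formatter. The function name and the plain-string rendering are assumptions for illustration; the real pipeline represents [Distance] with sentinel tokens such as [extra_id_N].

```python
def format_instance(event_a, event_b, relation,
                    distance="", paragraph="", negative=False):
    """Build one (input, output) pre-training pair in the layout described
    above. A real pipeline would sample negative=True for 50% of instances."""
    label = "positive"
    if negative:
        # Negative instances keep the events but invert the extracted relation.
        relation = "after" if relation == "before" else "before"
        label = "negative"
    src = f"event: {event_a} starts {relation} {event_b}"
    if paragraph:  # context is non-empty only for cross-sentence extractions
        src += f" story: {paragraph}"
    # Distance stays empty for within-sentence extractions, so it is
    # excluded from the loss.
    tgt = f"answer: {label} {distance}".rstrip()
    return src, tgt

# Cross-sentence example from the record above (Fig. 4):
src, tgt = format_instance("go to park", "write review", "before",
                           distance="weeks")
print(src)  # event: go to park starts before write review
print(tgt)  # answer: positive weeks
```

The same formatter covers both extraction sources: within-sentence instances simply omit the distance and the paragraph.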
[ "Abstract: Due to its effectiveness and performance, the Transformer translation model has attracted wide attention, most recently in terms of probing-based approaches.", "Previous work focuses on using or probing source linguistic features in the encoder.", "To date, the way word translation evolves in Transformer layers has not yet been investigated.", "Naively, one might assume that encoder layers capture source information while decoder layers translate.", "In this work, we show that this is not quite the case: translation already happens progressively in encoder layers and even in the input embeddings.", "More surprisingly, we find that some of the lower decoder layers do not actually do that much decoding.", "We show all of this in terms of a probing approach where we project representations of the layer analyzed to the final trained and frozen classifier level of the Transformer decoder to measure word translation accuracy.", "Our findings motivate and explain a Transformer configuration change: if translation already happens in the encoder layers, perhaps we can increase the number of encoder layers while decreasing the number of decoder layers, boosting decoding speed without loss in translation quality?", "Our experiments show that this is indeed the case.", "We can increase speed by up to a factor of 2.3 with small gains in translation quality.", "An 18-4 deep encoder configuration boosts translation quality by +1.42 BLEU (En-De).", "This comes at a speed-up of 1.4.", "Neural Machine Translation (NMT) has achieved great success in the last few years.", "The popular Transformer (Vaswani et al., 2017) model, which outperforms previous RNN/CNN-based translation models (Bahdanau et al., 2014; Gehring et al., 2017), is based on multi-layer self-attention networks and can be parallelized effectively.", "Recently, a wide range of studies related to the Transformer have been conducted.", "For example, Bisazza and Tump (2018) perform a fine-grained analysis of 
how various source-side morphological features are captured at different levels of an NMT encoder.", "Surprisingly, they do not find any correlation between the accuracy of source morphology encoding and translation quality.", "Morphological features are only captured in context and only to the extent that they are directly transferable to target words.", "Voita et al. (2019a) study how information flows across Transformer layers and find that representations differ significantly depending on the objectives (machine translation, standard left-to-right language models and masked language modeling).", "Tang et al. (2019) find that encoder hidden states outperform word embeddings significantly in word sense disambiguation.", "However, to the best of our knowledge, to date there is no study of how the Transformer translation model transforms individual source tokens into corresponding target tokens (i.e., word translations), and specifically, which role each Transformer layer plays in word translation, and at which layer a word is translated.", "To investigate the roles of Transformer layers in translation, in this paper we adopt probing approaches (Adi et al., 2017; Hupkes et al., 2018; Conneau et al., 2018) and propose to measure the word translation accuracy of the output representations of individual Transformer layers by probing how capable they are of translating words.", "Probing uses linear classifiers, referred to as probes, where a probe can only use the hidden units of a given intermediate layer as discriminating features.", "Moreover, these probes cannot affect the training phase of a model, and they are generally added after training (Alain and Bengio, 2017).", "In addition to analyzing the role of each encoder/decoder layer, we also analyze the contribution of the source context and the decoding history in translation by testing the effects of the masked self-attention and cross-attention sub-layers.", "(Figure 1: Analyzing word translations of Transformer layers; the figure shows the encoder and decoder stacks with the frozen classifier, the linear projection layer, and the alignment matrix A, combined from the per-head matrices A_i with weights p_1, ..., p_{d*k}, which maps encoder states E of size source length × input dimension to target-side representations TE of size target length × input dimension.)", "We present empirical results for how word translation is performed in each encoder/decoder layer, and how the alignment modeling (cross-attention sub-layers) and language modeling (masked self-attention sub-layers) contribute to the performance in each decoder layer.", "Our analysis demonstrates how word translation evolves across encoder/decoder layers and provides insights into the impact of the source encoding and the decoding history on the translation of target tokens.", "It reveals the existence of target translations in encoder states (and even source word embeddings) and the translation performed by encoder layers.", "Based on our findings, we show that the proper use of more encoder layers with fewer decoder layers can significantly boost decoding speed without harming quality.", "Recently, Kasai et al. 
(2021) independently of, and similarly to, our encoder-decoder layer-trading approach, compare the performance and speed of a 12-layer-encoder, 1-layer-decoder model with Non-Autoregressive Translation (NAT) approaches, and show that a one-layer autoregressive decoder can yield state-of-the-art accuracy with latency comparable to strong non-autoregressive models.", "Our analysis explains why using a deep encoder with a shallow decoder is feasible, and we show that some encoder-decoder depth configurations deliver both increased speed and increased translation quality.", "To analyze the word translation accuracy of the Transformer, we first freeze a trained Transformer model so that its translation behavior is consistent during our analysis.", "We then extract the output representations of the particular layer analyzed, apply a linear projection layer to extract features related to translation, and feed the projected representations to the frozen decoder classifier of the trained Transformer.", "Our approach is minimally invasive in that only the linear projection layer and the weights of the alignment matrix A, responsible for combining frozen cross-attention alignment matrices from the decoder, are trained and updated on the training set, with the original Transformer being frozen.", "Thus the projection layer will only transform between vector spaces without generating new features for word translation, and the alignment matrix A will only combine frozen cross-attention alignment matrices.", "A high-level illustration of our analysis approach for encoder/decoder layers is shown in Figure 1.", "
2.1 Analysis of Encoder Layers: Analyzing the word translation accuracy of encoder layers requires us to align source tokens with corresponding target tokens.", "We use the frozen alignment matrices computed by the cross-attention sub-layers in decoder layers to align source tokens with target tokens (Figure 1).", "As there are multiple matrices produced by each sub-layer (due to the multi-head attention mechanism) and multiple decoder layers, we have to ensemble them into one matrix of high alignment accuracy using weights.", "Assume there are d decoder layers with k attention heads in each multi-head attention sub-layer, which results in d * k alignment matrices A_1, ..., A_{d*k}.", "We use a d * k dimensional weight vector w to combine all attention matrices.", "The weight vector is normalized by softmax to a probability distribution p: p_i = e^{w_i} / Σ_{j=1}^{d*k} e^{w_j} (1), where i indicates the i-th element in w.", "Then we use p as the weights of the corresponding attention matrices and merge them into one alignment matrix A = Σ_{i=1}^{d*k} p_i A_i (2).", "w is trained with the linear projection layer through backpropagation on the frozen Transformer.", "After we obtain the alignment matrix A, instead of selecting the target token with the highest alignment weight as the translation of a source token, we perform a matrix multiplication between the encoded source representations E (size: source sentence length × input dimension) and the alignment matrix A (size: source sentence length × target sentence length) to transform/re-order source representations to the target side as TE: TE = A^T ⊗ E (3), where A^T and ⊗ indicate the transpose of A and matrix multiplication.", "Thus TE has the same length as the gold translation sequence, and the ground-truth target sequence can be used directly as the translation represented by TE.", "Though source representations are transformed to the target side, we suggest this does not involve any target-side information, as the pre-trained Transformer is frozen and the 
transformation does not introduce any representation from the decoder side.", "We do not retrieve target tokens with the highest alignment score as word translations of corresponding source tokens because translation may involve zero/one/multiple source token(s) to zero/one/multiple target token(s) alignments, and we suggest that using a soft alignment (attention weights) may lead to more reliable gradients than a hard alignment.", "The analysis of the prediction accuracy of the decoder is simpler than that of the encoder, as we can directly use the shifted target sequence (teacher forcing) without needing to bridge different sequence lengths between the source sentence and the target as when analyzing the encoder.", "We use the output representations of the analyzed layer, and evaluate its prediction accuracy after projection.", "However, as studied by Li et al. (2019a), the decoder involves two kinds of translation.", "One (performed by the self-attention sub-layer) translates the history token sequence to the next token, another (performed by the cross-attention sub-layer) translates by attending source tokens.", "We additionally analyze the effects of these two kinds of translation on prediction accuracy by dropping the corresponding sub-layer (either cross- or masked self-attention) of the analyzed decoder layer (i.e., we only compute the other sub-layer and the feed-forward layer, keeping only the residual connection in place of the skipped sub-layer).", "We first trained a Transformer base model for our analysis on the popular WMT 14 English to German news translation task to compare with Vaswani et al. 
(2017).", "We employed a 512 × 512 parameter matrix as the linear projection layer.", "The source embedding matrix, the target embedding matrix and the weight matrix of the classifier were tied.", "Table 1: Word translation accuracy of Transformer layers on the WMT 14 En-De task. Columns: Layer | Encoder Acc (Δ) | Decoder Acc (Δ) | -Self attention Acc (Δ) | -Cross attention Acc (Δ). Rows: 0: 40.73 | 13.72 | - | -; 1: 41.85 (1.12) | 20.52 (6.80) | 17.46 (-3.06) | 16.47 (-4.05); 2: 43.75 (1.90) | 26.06 (5.54) | 21.03 (-5.03) | 22.91 (-3.15); 3: 45.49 (1.74) | 34.13 (8.07) | 26.68 (-7.45) | 27.79 (-6.34); 4: 47.14 (1.65) | 55.00 (20.87) | 39.43 (-15.57) | 35.32 (-19.68); 5: 48.35 (1.21) | 66.14 (11.14) | 62.60 (-3.54) | 55.84 (-10.30); 6: 49.22 (0.87) | 70.80 (4.66) | 70.13 (-0.67) | 69.03 (-1.77).", "Parameters were initialized under the Lipschitz constraint (Xu et al., 2020) to ensure the convergence of deep encoders.", "We implemented our approaches based on the Neutron implementation (Xu and Liu, 2019) of the Transformer translation model.", "We applied joint Byte-Pair Encoding (BPE) (Sennrich et al., 2016b) with 32k merge operations.", "We only kept sentences with a maximum of 256 sub-word tokens for training.", "The concatenation of newstest 2012 and newstest 2013 was used for validation and newstest 2014 as the test set.", "The number of warm-up steps was set to 8k. 1", "The model was trained for 100k training steps with around 25k target tokens in each batch.", "We followed all the other settings of Vaswani et al. 
(2017).", "We averaged the last 5 checkpoints saved with an interval of 1,500 training steps.", "For decoding, we used a beam size of 4, and evaluated tokenized case-sensitive BLEU. 2", "1: https://github.com/tensorflow/tensor2tensor/blob/v1.15.4/tensor2tensor/models/transformer.py#L1818 .", "The averaged model achieved a BLEU score of 27.96 on the test set.", "The projection matrix and the weight vector w of 48 elements for alignment were trained on the training set with the frozen Transformer.", "We monitored the accuracy on the development set, and report results on the test set.", "The analysis results of the trained Transformer are shown in Table", "1. Layer 0 stands for the embedding layer.", "Acc indicates the prediction accuracy.", "-Self attention and -Cross attention in the decoder layer analysis mean bypassing the computation of the masked self-attention sub-layer and the cross-attention sub-layer respectively of the analyzed decoder layer using a residual connection.", "In our layer analysis of the encoder and decoder, Δ indicates improvements in word translation accuracy of the analyzed layer over the previous layer.", "While analyzing the self-attention and cross-attention sub-layers, Δ is the accuracy loss when we remove the computation of the corresponding sub-layer.", "Analyzing encoder layers, Table 1 shows that: 1) encoder layers already perform word translation, and the translation even starts at the embedding layer with unexpectedly high accuracy.", "2) With the stacking of encoder layers, the word translation accuracy improves, and improvements brought about by different layers are relatively similar, indicating that all encoder layers are useful.", "Surprisingly, analyzing decoder layers, Table 1 shows that: 1) shallow decoder layers (0, 1, 2 and 3) perform significantly worse than the corresponding encoder layers (all the way up until the 4th decoder layer, where a word translation accuracy which surpasses the embedding layer of the encoder is achieved); 2) The 
improvements brought about by different decoder layers are quite different.", "Specifically, the relative performance increases between the low-performance decoder layers (0, 1, 2 and 3) are low as well, while layers 4 and 5 bring more improvements than the others.", "While analyzing the effects of the source context (-Cross attention prevents informing translation by the source encoding) and the decoding history (the self-attention sub-layer is responsible for the target language re-ordering, and -Self attention prevents using the decoding history in the analyzed decoder layer), Table 1 shows that in shallow decoder layers (layers 1-3), the decoding history is as important as the source encoding, while in deep decoder layers, the source encoding plays a more vital role than the decoding history.", "Overall, our results provide new insights into the importance of translation already performed by the encoder.", "Since English-German translation naturally shares many sub-words (13.89% of source sub-words, including punctuation, exist in the sub-word set of the corresponding target translation in the training set), we additionally provide results on the WMT 15 Cs-En task in Table", "2. Table 2 confirms our observations reported in Table", "1. 
Zhang and Bowman (2018); Hewitt and Liang (2019); Voita and Titov (2020) articulate concerns about analyses with probing accuracies, as differences in accuracies fail to reflect differences in representations in several sanity checks.", "Specifically, Zhang and Bowman (2018) compare probing scores for trained models and randomly initialized ones, and observe reasonable differences in the scores only when reducing the amount of classifier training data.", "However, we argue that in our work, we use the frozen classifier of the pre-trained Transformer decoder as our probing classifier, and the introduced linear projection, as well as the alignment matrix A, are much smaller and weaker than the frozen classifier and the rest of the frozen Transformer components.", "Table 3: Translation performance of encoder layers on the WMT 14 En-De task. Columns: Layer | BLEU1 (Δ) | BLEU (Δ). Rows: 0: 33.1 | 7.92; 1: 35.7 (2.6) | 8.99 (1.07); 2: 41.0 (5.3) | 11.05 (2.06); 3: 43.3 (2.3) | 11.89 (0.84); 4: 46.8 (3.5) | 13.13 (1.24); 5: 48.1 (1.3) | 13.34 (0.21); 6: 48.6 (0.5) | 13.45 (0.11); FULL: 62.0 (13.4) | 33.26 (19.81).", "Thus we suggest that our approach is minimally invasive and that our analysis is less likely to be seriously affected by this issue even though we use a large training set.", "To empirically verify this, we apply our analysis approach to a randomly initialized encoder and evaluate word translation accuracies obtained by the source embedding layer and the last encoder layer, while the alignment between the source and the target is still from the pre-trained model.", "Both the source embedding layer and the last encoder layer resulted in the same accuracy of 23.66.", "Compared to the corresponding values (40.73 and 49.
22) in Table 1, the gap between the randomly initialized layers and the pre-trained layers in accuracy is significant, and the gap in accuracy improvement from the source embedding layer, propagated through all intermediate layers to the last encoder layer, between pre-trained layers (8.49) and randomly initialized layers (0.00) is also significant.", "Thus, we suggest our analysis is robust.", "Since our approach extracts features for translation from encoder states while analyzing them, is it possible to perform word translation with only these features from encoder layers, without using the decoder except the frozen classifier?", "To answer this question, we feed output representations from an encoder layer to the corresponding linear projection layer, feed the output of the linear projection layer directly to the frozen decoder classifier, and retrieve tokens with the highest probabilities as translations.", "Even though such translations from encoder layers have the same length and the same word order as source sentences, individual source tokens are translated to the target language to some extent.", "We evaluated BPEized 3 case-insensitive BLEU and BLEU1 (1-gram BLEU, which indicates word translation quality), and results are shown in Table", "3. 
FULL is the performance of the whole Transformer model (decoding with a beam size of 4).", "Δ means the improvements obtained by the introduced layer (or the decoder for FULL) over the previous layer.", "Table 3 shows that while there is a significant gap in BLEU scores between encoder layers and the full Transformer, the gap in BLEU1 is relatively smaller than in BLEU.", "It is reasonable that encoder layers achieve a comparably high BLEU1 score but a low BLEU score overall, as they perform word translation in the same order as the source sentence without any word re-ordering of the target language.", "We suggest that the BLEU1 score achieved by only the source embedding layer (i.e., translating with only embeddings) is surprising and worth noting.", "Our probing approach involves crucial information from the decoder (encoder-decoder attention from all decoder layers).", "However, we argue that probe training requires supervision.", "For the decoder, we can directly use gold references.", "On the encoder side, parallel data does not provide word translations for source tokens, and we have to generate this data by aligning target tokens to source tokens.", "One choice is extracting alignments by taking an argmax of alignment matrices or using toolkits like fast_align (Dyer et al., 2013).", "In this case, probe training does not involve attention matrices, but this has drawbacks: multiple/no target tokens may align to one source token.", "We use soft aggregation to preserve more information (other attention possibilities besides the highest are kept) and to alleviate error propagation.", "We argue that the use of attention matrices is only to bring supervision (word translations) from the target side to the source side, which is inevitable.", "Decoder representations cannot flow back to the frozen encoder.", "Our paper also empirically reveals the impact of attention matrices: 1) In Section 3.3, where after the training of source probes, we decode target tokens with 
only encoder layers, the trained probe (without involving cross-attention networks) and the pre-trained classifier.", "3: Since there is no re-ordering into the target language performed, merging the translated sub-word units in the source sentence order would be pointless.", "2) In the last paragraph of Section 3.2, we train probes with alignment matrices from the pre-trained model but a frozen random encoder, showing the effects of cross-attention matrices on the probe.", "From our analysis of the 6-layer Transformer base model (Table 1), we find that in contrast to the improvements of the word translation accuracy with increasing depth on the encoder side, some decoder layers contribute significantly fewer improvements than others (i.e., layers 4 and 5 bring more word translation accuracy improvements than those from layers 1, 2, 3 and 6 in Table 1).", "This suggests that there might be more lazy layers in the decoder than in the encoder, which means that it might be easier to compress the decoder than the encoder, and further we conjecture that simply removing some decoder layers while adding the same number of encoder layers may even improve the translation quality of the Transformer.", "Motivations targeting efficiency include: 1) each decoder layer has one more cross-attention sub-layer than an encoder layer, so increasing encoder layers while decreasing the same number of decoder layers reduces the number of parameters and the computational cost; 2) during inference, the decoder has to autoregressively compute the forward pass for every decoding step (the decoding of each target token), which prevents efficient parallelization, while encoder layers are propagated non-autoregressively and are highly parallelized, so the acceleration from using fewer decoder layers with more encoder layers will be more significant in decoding, which is of practical value.", "We examine the effects of reducing the number of decoder layers while adding corresponding numbers 
of encoder layers, and results are shown in Table", "4. Speed up stands for the decoding acceleration compared to the 6-layer Transformer.", "Table 4 shows that while the acceleration of trading decoder layers for encoder layers in training is small, in decoding it is significant.", "Specifically, the Transformer with 10 encoder layers and 2 decoder layers is 2.32 times as fast as the 6-layer Transformer while achieving a slightly higher BLEU.", "Can we use more than 12 encoder layers with a shallow decoder to benefit both translation quality and inference speed?", "Table 4 shows that the 18-4 model 4 brings about a +1.42 BLEU improvement over the strong baseline, while being 1.38 times as fast in decoding.", "Comparing the 18-4 model to the 8-4 model, the time cost of using 10 more encoder layers only increases by 1 second for translating the test set, suggesting that autoregressive decoding speed is quite insensitive to the encoder depth.", "4: A full grid search over configurations is tedious and expensive; we take inspiration from Table 4, where going from 5 to 4 decoder layers brings about the biggest relative jump in translation quality; we explored a few configurations and find that using more than 18 encoder layers can still bring improvements, but the gains are relatively small.", "Our results show that using more encoder layers with fewer but sufficient decoder layers can significantly boost the decoding speed with small gains in translation quality, and that a good choice in the distribution of encoder and decoder layers (18-4) can result in slightly faster decoding and a substantial increase in translation quality, which is simple but effective and valuable for back-translation (Sennrich et al., 2016a) and production applications.", "We present the word accuracy analysis results of the 10-encoder-layer, 2-decoder-layer Transformer on the En-De task in Table", "5. 
Comparing Table 5 with Table 1, we find that: 1) the differences in improvements (1.71 vs. 0.11) brought by individual layers of the 10-layer encoder are larger than those of the 6-layer encoder (1.90 vs. 0.87), indicating that there might now be some lazy layers in the 10-layer encoder; 2) decreasing the depth of the decoder removes lazy decoder layers in the 6-layer decoder and makes decoder layers rely more on the source encoding (by comparing the effects of skipping the self-attention sub-layer and cross-attention sub-layer on performance).", "Table 6: Verification of deep encoder and shallow decoder on WMT En-De, En-Fr and Cs-En tasks. Columns: Encoder-Decoder depth | En-De | En-Fr | Cs-En. Rows: 6: 27.96 | 40.13 | 28.69; 10-2: 28.47 | 40.49 | 28.87; 18-4: 29.38 | 40.90 | 29.75.", "To investigate how a deep encoder with a shallow decoder will perform in other tasks, we conducted experiments on the WMT 14 English-French and WMT 15 Czech-English news translation tasks in addition to the WMT 14 English-German task.", "Results on newstest 2014 (En-De/Fr) and 2015 (Cs-En) respectively are shown in Table", "6. Table 6 shows that the 10-2 model consistently achieves higher BLEU scores than the 6-layer model, and the 18-4 model consistently leads to significant improvements in all 3 tasks.", "Analysis of NMT Models.", "Belinkov et al. (2020) analyze the representations learned by NMT models at various levels of granularity and evaluate their quality through relevant extrinsic properties.", "Li et al. (2019a) analyze the word alignment quality in NMT and the effect of alignment errors on translation errors.", "They demonstrate that NMT captures word alignment much better for those words mostly contributed from the source than those from the target.", "Voita et al. (2019b) evaluate the contribution of individual attention heads to the overall performance of the model and analyze the roles played by them in the encoder.", "Yang et al. 
(2019) propose a word reordering detection task to quantify how well the word order information is learned by Self-Attention Networks and RNN, and reveal that although recurrence structure makes the model more universally effective on learning word order, learning objectives matter more in the downstream tasks such as machine translation.", "Tsai et al. (2019) regard attention as applying a kernel smoother over the inputs with the kernel scores being the similarities between inputs, and analyze individual components of the Transformer's attention with the new formulation via the lens of the kernel.", "Tang et al. (2019) find that encoder hidden states outperform word embeddings significantly in word sense disambiguation.", "He et al. (2019) measure the word importance by attributing the NMT output to every input word and reveal that words of certain syntactic categories have higher importance while the categories vary across language pairs.", "Voita et al. (2019a) use canonical correlation analysis and mutual information estimators to study how information flows across Transformer layers.", "Early work by Bisazza and Tump (2018) performs a fine-grained analysis of how various source-side morphological features are captured at different levels of the NMT encoder.", "While they are unable to find any correlation between the accuracy of source morphology encoding and translation quality, they discover that morphological features are only captured in context and only to the extent that they are directly transferable to the target words, and suggest encoder layers are lazy.", "Our analysis offers an explanation for their results as the translation already starts at the source embedding layer, and possibly source embeddings already represent linguistic features of their translations.", "Analysis of BERT.", "BERT (Devlin et al., 2019) uses the Transformer encoder, and analysis of BERT may provide valuable references for analyzing the Transformer.", "Jawahar et al. 
(2019) provide support that BERT networks capture structural information, and perform a series of experiments to unpack the elements of English language structure learned by BERT.", "Tenney et al. (2019) employ the edge probing task suite, and find that BERT represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference.", "Pires et al. (2019) present a large number of probing experiments, and show that Multilingual-BERT's robust ability to generalize cross-lingually is underpinned by a multilingual representation.", "Accelerating Decoding.", "Zhang et al. (2018a) propose average attention as an alternative to the self-attention network in the Transformer decoder to accelerate decoding.", "Wu et al. (2019) introduce lightweight convolution and dynamic convolutions.", "The number of operations required by their approach scales linearly in the input length, whereas self-attention is quadratic.", "Zhang et al. (2018b) apply cube pruning to neural machine translation to speed up translation.", "Zhang et al. (2018c) propose to adopt an n-gram suffix-based equivalence function into beam search decoding, which obtains similar translation quality with a smaller beam size, making NMT decoding more efficient.", "Non-Autoregressive Translation (NAT) (Gu et al., 2018; Libovický and Helcl, 2018; Wei et al., 2019; Shao et al., 2019; Li et al., 2019b; Wang et al., 2019; Guo et al., 2019) enables parallelized decoding; while there is still a significant quality drop compared to traditional autoregressive beam search, our findings on using more encoder layers might also be adapted to NAT.", "Recently, and independently of our work, Kasai et al. 
(2021) compare the performance and speed of a 12-layer encoder, 1-layer decoder model with NAT approaches, and show that a one-layer autoregressive decoder yields state-of-the-art accuracy with comparable latency to strong non-autoregressive models.", "Our work explains why using a deep encoder with a shallow decoder is feasible, and we show that substantial increases in decoding speed are possible with small gains in translation quality, and that for some configurations (e.g., 18-4) significant translation quality increases with modest increases in decoding speed are possible.", "We propose approaches for the analysis of word translation accuracy of Transformer layers to investigate how translation is performed.", "To measure word translation accuracy, our approach trains a linear projection layer that bridges representations from the frozen pre-trained analyzed layer and the frozen pre-trained classifier.", "While analyzing encoder layers, our approach additionally learns a weight vector to merge multiple attention matrices into one, and transforms the source encoding to the target shape by multiplying with the merged alignment matrix.", "Both the linear projection layer and the weight vector are trained on the frozen Transformer.", "This is minimally invasive, and training the new parameters does not account for the findings reported.", "For the analysis of decoder layers, we additionally analyze the effects of the source context and the decoding history in word prediction through bypassing the corresponding cross- and self-attention sub-layers.", "Our findings motivate and explain the benefits of trading decoder for encoder layers in our approach and that of Kasai et al. 
(2021).", "Our analysis is the first to reveal the existence of target translations performed by encoder layers (including the source embedding layer).", "We show that increasing encoder depth while removing decoder layers can lead to significant BLEU improvements while boosting the decoding speed.", "We thank anonymous reviewers for their insightful comments.", "Hongfei Xu acknowledges the support of China Scholarship Council ([2018]3101, 201807040056).", "Josef van Genabith is supported by the German Federal Ministry of Education and Research (BMBF) under funding code 01IW20010 (CORA4NLP).", "Deyi Xiong is supported by the National Natural Science Foundation of China (Grant No. 61861130364), the Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400) and the Royal Society (London) (NAF\\R1\\180122)." ]
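The probing transform summarized above (Eqs. 1-3: softmax-merging the d × k frozen cross-attention matrices into one alignment matrix A, then re-ordering encoder states to the target side via TE = A^T ⊗ E) can be sketched in a few lines of numpy. The function name, the argument shapes and the projection matrix P are illustrative assumptions for this sketch, not the authors' released code:

```python
import numpy as np

def probe_encoder_layer(E, attn_mats, w, P):
    """Hypothetical sketch of the probing transform (Eqs. 1-3).

    E:         (src_len, d_model)          frozen encoder-layer states
    attn_mats: (d*k, src_len, tgt_len)     frozen cross-attention alignment matrices
    w:         (d*k,)                      learned combination weight vector
    P:         (d_model, d_model)          learned linear projection
    """
    # Eq. 1: normalize the combination weights with a softmax
    p = np.exp(w - w.max())
    p /= p.sum()
    # Merge the d*k alignment matrices into a single matrix A (the implied Eq. 2)
    A = np.tensordot(p, attn_mats, axes=1)  # (src_len, tgt_len)
    # Eq. 3: TE = A^T (x) E re-orders source states to the target length
    TE = A.T @ E                            # (tgt_len, d_model)
    # Project into the space expected by the frozen decoder classifier
    return TE @ P
```

In the setup described above, only w and P would be trained; E, attn_mats and the downstream classifier stay frozen, matching the minimally invasive design.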
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "method", "abstain", "objective", "result", "other", "other", "other", "other" ]
[ "Previous pre-neural work on structured prediction has produced very effective supervised clustering algorithms using linear classifiers, e.g., structured SVM or perceptron.", "However, these cannot exploit the representation learning ability of neural networks, which would make supervised clustering even more powerful, i.e., general clustering patterns can be learned automatically.", "In this paper, we design neural networks based on latent structured prediction loss and Transformer models to approach supervised clustering.", "We tested our methods on the task of automatically recreating categories of intents from publicly available question intent corpora.", "The results show that our approach delivers an F1 of 95.65%, outperforming the state of the art by 17.24%.", "Recent years have witnessed a vast spread of virtual assistants, such as Google Home, Siri and Alexa, which are based on the research areas of Conversational Agents and Question Answering.", "When designing such systems, the creation of classes of expected questions, a.k.a. intents, is essential for building the main states of a dialog manager.", "In particular, when an assistant is designed for a specific domain, a knowledge engineer needs to analyze typical users' questions, answered by human operators.", "This work would be greatly sped up if the engineers could have questions clustered according to the different topics they ask for.", "For example, the following questions/requests from the intent dataset by Larson et al. 
(2019): 'i want to switch to direct deposit', 'set up direct deposit for me', 'how do i go about setting up direct deposit' all have a common intent of making a direct deposit.", "Thus, the dialog designer will create this intent if the cluster captures a large number of requests.", "However, to be effective, the clustering algorithm must demonstrate sufficient accuracy, which is often not the case for completely unsupervised methods.", "Thus, supervised clustering (Finley and Joachims, 2005), which exploits some training data of the target domain, e.g., previously designed clusters, to discover new clusters, is a viable approach.", "A seminal work on structured prediction was Latent Structural Support Vector Machines (LSSVM) by Yu and Joachims (2009).", "Recently, Haponchyk et al. (2018) have shown that LSSVM as well as the Latent Structured Perceptron (LSP) by Fernandes et al. (2014), originally designed for coreference resolution, were also effective, when provided with the appropriate node similarity function, for clustering questions into intents.", "These approaches used traditional feature engineering (question similarity) and a linear classifier, i.e., SVM, which can be greatly improved by neural networks and pre-trained Transformers, e.g., Devlin et al. 
(2019).", "Indeed, neural models enable representation learning, which can amplify the generalization ability of supervised clustering algorithms.", "In this paper, we design neural supervised clustering (NSC) models using the structured prediction algorithms LSSVM and LSP.", "These are based on a latent representation of clusters using graph structures, which are used to compute an augmented loss.", "The latter, in turn, is used together with the model score to globally select the max-violating constraint at each learning step.", "This is the clustering that maximally penalizes the current model, which is used for a model update.", "We apply the same idea by computing the margin loss for our neural model and then back-propagating it like any other differentiable loss.", "The augmented loss does not depend on the neural model, so our approach can be applied to arbitrary learning settings.", "We applied NSC to two different question intent clustering tasks, defined by two datasets: IC&OOS (Larson et al., 2019), which is an intent classification corpus, and the Quora Intent Corpus (Haponchyk et al., 2018).", "Another interesting contribution of our work is the creation of a training set for clustering from IC&OOS, which enables effective training of NSC.", "Our corpus and software are available to the research community. 1", "The comparative results of NSC using traditional CNNs and Transformer models against traditional methods, e.g., spectral clustering, show an impressive boost in F1 of our NSC-BERT model, e.g., 95.65% vs. 
78.38%, an improvement of more than 17% over the best spectral clustering method.", "This accuracy enables the use of our approach for dialog applications and opens up further directions for other clustering tasks.", "This paper touches on two main research areas: structured prediction, in particular with neural models, and intent clustering, which are described below.", "Structured prediction has produced powerful machine learning algorithms for solving NLP tasks requiring complex output, e.g., syntactic parsing (Smith, 2011) and coreference resolution (Yu and Joachims, 2009; Fernandes et al., 2014).", "This work has mainly regarded traditional frameworks, e.g., SVMs, CRF, perceptron.", "Little work has been devoted to the integration of the above theory in neural networks (LeCun et al., 2006; Durrett and Klein, 2015; Weiss et al., 2015; Kiperwasser and Goldberg, 2016; Peng et al., 2018; Milidiú and Rocha, 2018; Xu et al., 2018; Wang et al., 2019), and, to the best of our knowledge, none to supervised clustering.", "This is partially due to the fact that local solutions have usually produced optimal results.", "For example, in the case of supervised clustering, it is difficult to design a loss function that captures the global information about the clusters.", "Work in neural coreference resolution, e.g., (Lee et al., 2017), uses simple losses, which deliver state-of-the-art results but do not strictly take into account the cluster structure.", "Secondly, this is also due to the complexity associated with adapting the methods from previous work to neural frameworks.", "For example, using ILP (Roth and Yih, 2004) for clustering inference in SPIGOT (Peng et al., 2018), which facilitates the backpropagation through argmax based on a projection onto the feasible set of structured outputs, would inevitably require reducing the computational overhead (Miyauchi et al., 2018). (Footnote 1: https://github.com/iKernels/intent-qa)", "In the line of research on question clustering, Wen et al. 
(2001) proposed to cluster queries with respect to a group of web pages frequently selected by users.", "Deepak (2016) describes a k-means-like algorithm, MiXKmeans, that can cluster threads in Community Question Answering websites.", "These methods are unsupervised and, thus, are likely sensitive to setting the optimal number of clusters or to a heuristic adopted for the clustering criterion.", "Among the classification approaches, there are also semi-supervised and mixed methods that leverage vast amounts of unlabeled queries.", "Li et al. (2008) classify unlabeled queries using their proximity to labeled queries in a click graph.", "Beitzel et al. (2007) classify queries from logs into topics using supervised or unsupervised methods.", "The following classification approaches address new emerging intents.", "Lin and Xu (2019) enable a neural model to detect unknown intents as outliers using a novelty detection technique.", "This model, however, does not have the capability to distinguish between different unknown intents.", "Xia et al. (2018) devise a capsule neural network able to discriminate between different emerging intents.", "Its zero-shot learning ability critically depends on the definition of a similarity between existing and new intents.", "Our approach does not hold any explicit representation of intents.", "The recent work by Lin et al. (2020) proposes a deep intent clustering model which takes advantage of labeled data for discovering new user intents, but it requires specifying the exact number of output clusters.", "Finally, Zhang et al. 
(2021) propose Deep Aligned Clustering, a semi-supervised method, to discover new intents using limited knowledge over intent data.", "We believe their approach is completely compatible with ours, i.e., our supervised clustering models can be integrated into their approach to improve intent discovery.", "LSSVMs train a clustering function from a series of training examples {(x_i, y_i)}_{i=1}^n, where x_i are input sets of elements, and y_i are structured outputs, i.e., gold clusters.", "This function applied to unseen elements x predicts their clusters y .", "Each input x is encoded as a graph G , whose nodes are the elements x_k of x , and edges e = (x_i, x_j) the pairwise links between them.", "The inference step consists of finding a maximum spanning forest h on G , e.g., using Kruskal's algorithm (Kruskal, 1956).", "The nodes appearing in the same connected component (tree) in h are placed together in the same cluster in y (deterministically obtained from h ).", "The approach learns a linear scoring function which decomposes over the edges of h : s(x, h) = Σ_{e ∈ h} w · φ(e),", "where φ(e) is a feature representation for edge e , describing a pair of elements of x .", "Graph structures h are incorporated as latent variables into the latent formulation of LSSVM.", "Haponchyk et al. (2018) adapt the latent structured perceptron (LSP) by Fernandes et al. 
(2014) to the graph structures h and apply the approach to question pairs, (q_i, q_j), to cluster sets of questions into different user intents; we compare to these methods.", "We propose a model for optimizing a structural clustering loss with neural networks.", "As is standard practice in structured prediction, our goal is to train a model with a scoring function s such that the correct clustering y is scored higher than incorrect clusterings ŷ.", "LSSVM optimizes an upper bound, E , on the structural loss Δ, which, in general terms, can be rewritten using the parameters θ as: Δ(y, y(θ), h(θ)) ≤ E(y, y(θ), h(θ)) = max_{(ŷ, ĥ) ∈ Y × H} [Δ(y, ŷ, ĥ) + s(x, ŷ, ĥ)] − max_{h ∈ H} s(x, y, h), (2) where y(θ) is an output of the model with its auxiliary latent structure h(θ); Y and H are the spaces of all possible clusterings and latent trees; Δ(y, ŷ, ĥ) is a standard structural loss, measuring the difference between the gold y and the output ŷ clusters; and (y(θ), h(θ)) = argmax_{(ŷ, ĥ) ∈ Y × H} s(x, ŷ, ĥ).", "The right-hand side of Eq.", "2 is essentially a margin-based objective with the margin rescaled by the loss Δ.", "Its minimization forces the maximum-weighted incorrect ĥ to score lower than the maximum-weighted correct h by at least the value of the loss Δ(y, ŷ, ĥ) on that ĥ under the parameters θ.", "Our neural model optimizes the objective E defined in Eq.", "2.", "We use the loss of Yu and Joachims (2009) based on computing edge mistakes in h , in which negative edge penalties are scaled with an r -parameter.", "s(x, y, h) = Σ_{e=(x_i, x_j) ∈ h} net(e)", "(3) This enables the inference by Kruskal's algorithm, where the network net activates on edge representations (footnote 2).", "Eq.", "3 indicates that our approach is applicable to any network net .", "Our work is inspired by Kiperwasser and Goldberg (2016), who pass the arcs of dependency parses through a multi-layer perceptron and optimize a structured margin loss.", "Differently from them, we elaborate on the case of the margin 
rescaled with the structural loss, which includes max-violating inference.", "The objective E in Eq.", "2 is sub-differentiable as a summation of edge networks net ; Δ in E does not depend on θ.", "We propagate the error from the margin loss E in Eq.", "2 back to the input layer of net .", "One iteration of the algorithm operates on one sample of training data (x, y), where, in the context of the intent task, x = {x_i} is a set of questions, and y are the gold clusters of the questions in x .", "We pass all the pairs of questions, i.e., the edges e = (x_i, x_j), i < j, of a fully connected graph G , through net , and compute the global error E .", "The error computation includes finding", "(i) the max-violating graph, ĥ, among all possible spanning graphs of G ; and", "(ii) the max-scoring correct spanning graph, h , over the set of graphs that comply with the gold label y .", "If E > 0 , the backward pass of the model computes gradients for an update of the model.", "The partial derivatives with respect to the parameters θ_j in the last layer of the network are ∂E/∂θ_j = Σ_{e ∈ ĥ} ∂net(e)/∂θ_j − Σ_{e ∈ h} ∂net(e)/∂θ_j. (Footnote 2: The scoring function follows the standard formulation of structured prediction tasks, where the score of a structure is computed by aggregating the scores of its constituent parts.)", "In our case, it is a summation of edge scores.", "The reader may refer to the work on dependency parsing by Kiperwasser and Goldberg (2016).", "We intentionally do not specify the architecture of net in Sec. 
4.2 as it could be of any form once it encodes a pair of questions (x_i, x_j).", "However, we further describe the architecture with which we experiment in this work.", "We use a simple feedforward neural network, which consists of", "(i) an input layer encoding a pair (x_i, x_j),", "(ii) one fully connected hidden layer with ReLU activation functions, and", "(iii) an output layer, which is a linear operation over the outputs of the hidden layer.", "This way, for an edge e , net(e) is a real number without any restriction on its range.", "The pairwise encoder is practically trained to score good edges higher than bad edges.", "However, doing it jointly for all the edges over a sample, in a structured way, has the goal of producing a more consistent decision in terms of clustering.", "(1) We use a sentence encoder (Severyn and Moschitti, 2016) to map each question x_i into a fixed-size intermediate vector representation φ(x_i).", "The encoder operates on a sentence matrix S , in which the k -th column corresponds to the k -th word in x_i and is a concatenation of the word embedding and overlap embedding: S_k = [word_emb(w_k), ov_emb(w_k)].", "The ov_emb part for x_i , in each pair, is formed in association with the other question of the pair, x_j .", "S is given as input to a series of convolution operations with ReLU activations followed by a max-pooling layer.", "From the obtained question representations φ(x_i) and φ(x_j), we compose a symmetrical pairwise representation as φ(x_i, x_j) = [max(φ(x_i), φ(x_j)), min(φ(x_i), φ(x_j))], where max and min are component-wise vector operations, i.e., max and min are applied to pairs of components, so that two final vectors are obtained.", "(2) We exploit BERT (Devlin et al., 2019) embeddings: φ(x_i, x_j) = 1/2 (bert_emb(x_i, x_j) + bert_emb(x_j, x_i)), where bert_emb for a pair of questions comes from the final hidden layer representations, 
i.e., the [CLS] token from the BERT model.", "In this section, we describe two datasets:", "(i) IC&OOS for intent classification; and", "(ii) Quora for intent clustering.", "We illustrate our procedure to transform the former into a dataset for clustering.", "The dataset for Intent Classification and Out-Of-Scope prediction by Larson et al. (2019), which we denote IC&OOS, is a classification dataset, composed of user queries distributed into 150 different intent classes over 10 domains, plus out-of-scope (OOS) queries falling outside the pre-defined classes.", "The data contains 50, 20, and 30 user queries per intent class in the training, dev.", "and test parts, respectively.", "In addition, there are 100 OOS queries for training, 100 for development, and 1,000 for test.", "For example, we may see class categories such as MEALSUGGESTION , with queries such as 'suggestions for thai food' or 'help me find some new dinner recipes' , which may be challenging to separate from the items of RESTAURANTREVIEWS , e.g., 'at yakamoto how is their sushi' , and even more difficult to discern from 'what are some good sushi restaurants in reno' , belonging to the RESTAURANTSUGGESTION class.", "The data from all of the pre-defined classes are present in the training, dev.", "and test parts.", "The main steps for transforming this dataset into one for clustering are", "(i) merging the items of all the categories together; and", "(ii) using the original class labels as the indication of belonging to different clusters.", "However, a real-world application scenario of automatic clustering would entail that new incoming data can contain items which constitute new clusters (class categories).", "Thus, in order to demonstrate the capability of the supervised clustering models to group together the items of unseen clusters, we use one set of intent classes for training and another set for evaluation, which is constituted by a completely different set of intent classes and questions.", "This way, we retain the queries 
from one third of the intent classes, i.e., 50, from the training part, the dev.", "queries from another third of the intent classes, and the test queries from the remaining one third of the classes, and use them as new training, dev.", "and test parts, respectively.", "Additionally, it should be noted that the original dataset contains OOS queries, all of which we keep.", "Thus, our new split can also be used to analyze OOS queries, which might not be put in any semantically meaningful cluster, as well as unseen intent items, for which we know that their natural clusters (original categories) exist.", "Data sampling and instance creation Training and test examples in a clustering problem are sets of items.", "In order to be able to effectively update the structural clustering objective in Eq.", "2 with NNs, we need to limit the size of the training examples (x, y).", "Precisely, we need to limit the size of the input query sets x , as the number of edges e (Eq. 3) to be passed through the network grows quadratically with it.", "Thus, we split the data into samples:", "(i) from the test set, we just extract random disjoint samples of equal size M ; and", "(ii) from the training and dev.", "sets, we form samples according to a more elaborate procedure to avoid having too many singletons in an instance.", "More specifically, for each class C :", "(i) we shuffle its items in a random order; and", "(ii) we split them into a set P of m disjoint parts (mini-clusters) of random sizes (different sizes, necessarily ≥ 2) each, s.t. ∪_{p ∈ P} p = C .", "Then, to build the training clustering examples, x , we iterate several times over the entire list of classes C, in a random order, and, ∀ C ∈ C, we select a mini-cluster p ∈ P_C , which we append to a current sample S (initialized as empty), if |S ∪ p| ≤ M .", "Otherwise, we start a new sample with S = {p}, and go to the next category, until all P_C are exhausted (this happens simultaneously ∀ C , as the P_C have the same size).", "Now, our x sets consist of the items 
contained in the samples S .", "This procedure makes the presence of each category uniform (binary presence, yes/no): after seeing every N samples S , we encounter elements of all the classes, however, without preserving the original relative proportions of the class distribution.", "This way, by setting the sample size limit M = 100 , we obtain around 28 and 12 clustering examples from the training and dev.", "sets, respectively, and 35 examples from the test set.", "The Quora dataset, made available by Lee et al. (2017), was designed to learn and test question duplication classifiers.", "That is, for automatically detecting if two questions are semantically duplicate or not.", "We use the Quora Intent Corpus by Haponchyk et al. (2018), based on a sample of questions from Quora.", "One main difference with IC&OOS is the fact that the negative examples selected by the organizers of the Quora challenge refer to pairs of questions that always have some degree of lexical overlap.", "For example, the following pair How much water on earth is consumable?", "and How much water is on earth?", "is not duplicate.", "On the other hand, the two questions could surely be put in the same cluster WATER OF EARTH .", "This means that a similarity function learned from Quora labels may not be accurate enough for clustering.", "Also, a simple scalar product between two embeddings would not be enough, as it can only capture lexical overlap.", "The latter would surely fail on the pairs of the following questions What is a recursion tree?", ", What does your family tree look like?", ", How does your Christmas tree look like?", "since their specific semantics is different but the overlap is large.", "These examples suggest that a clustering algorithm must learn a similarity that looks at the entire set of items to be clustered, not just at single pairs.", "This requirement is in line with the characteristics of the methods we presented in Sec 
4. 7 Experiments: We present the results of our empirical evaluation of the neural clustering models using IC&OOS data, followed by that using the Quora Intent Corpus.", "We then summarize the highlights of a deeper investigation that we conducted into the performance of our approach and its errors.", "Data: We created data from IC&OOS as discussed in Sec 6.1, which contains 2,650, 1,100, and 2,470 queries and 28, 12, and 35 clustering examples in the training,", "dev., and test sets, respectively.", "We also evaluate our approaches on the Quora Intent Corpus (Haponchyk et al., 2018) (https://ikernels-portal.disi.unitn.it/repository/intent-qa), based on 1,334 questions from the Quora duplicate detection competition (https://www.kaggle.com/c/quora-question-pairs).", "This corpus contains 270, 146, and 212 question clusters, respectively, in the training, dev.", "and test parts.", "The clusters in each part are split into samples: the training part in 10 samples, and both the dev.", "and test parts in", "5. A part of the test set is also provided with an expert annotation.", "We refer to the whole test set with labels automatically derived from the Quora annotation as the automatic test set, and to its part with expert annotation as the manual test set. Table 1 reports, per model, Precision / Recall / F1 / CEAF_e: (no supervision) spectral clustering + tfidf 78.44 / 78.38 / 78.41 ±0.24 / 71.55 ±0.57; (clustering function over an instance similarity) CNN 75.75 / 75.75 / 75.75 ±0.29 / 69.09 ±0.54, BERT 75.74 / 75.74 / 75.74 ±0.21 / 69.02 ±0.36; (Kruskal) CNN 97.72 / 79.00 / 87.28 ±0.82 / 80.23 ±1.15, BERT 79.95 / 91.30 / 85.25 ±0.46 / 78.59 ±0.61; (our supervised clustering) NSC-CNN 88.83 / 89.61 / 89.19 ±0.99 / 82.66 ±1.07, NSC-BERT 94.98 / 96.36 / 95.65 ±0.82 / 91.76 ±1.51. Table 1: Comparison of clustering models: completely unsupervised, using a supervised instance similarity function, and our supervised clustering on the test set of IC&OOS by Larson et al. 
(2019); disjoint scenario.", "Models: We experiment with two variants of our Neural model for Supervised Clustering (NSC), based on the two ways to encode question pairs outlined in Sec. 5.1:", "(i) NSC-CNN, using word and word overlap embeddings, and", "(ii) NSC-BERT, using BERT embeddings of question pairs.", "For NSC-CNN, we employ fastText word embeddings (https://fasttext.cc/docs/en/pretrained-vectors.html) of dimension 300, pre-trained on English Wikipedia (Bojanowski et al., 2017).", "We set the max length of questions to 50 and pad the shorter questions on the right.", "The size of the hidden layer is set to 1/3 of the size of the input layer.", "The convolution filter width varies from 1 to 3.", "For NSC-BERT, we use the BERT-BASE model, which we fine-tune for 3 epochs on the question pair classification task.", "Since the training samples vary in size, we clip the gradients to have their L norm less than or equal to 1.", "This is to prevent the updates from being dominated by the samples of bigger size.", "Evaluation: We follow the evaluation setting of Haponchyk et al. (2018).", "Thus, we compute", "(i) the clustering F1 measure, based on assigning each cluster to the most frequent (gold/output) cluster,", "(ii) the coreference resolution CEAF_e score (Luo, 2005; Cai and Strube, 2010).", "Parameterization: We use the dev.", "set for tuning the loss parameter r , which takes values from {0.1,", "0.5,", "1.0},", "and for selecting the best epoch with respect to clustering F1.", "Baselines: We consider a number of baselines based on pairwise query similarities.", "We experiment with the following sources of pairwise signals:", "(i) tf-idf scores,", "(ii) outputs of the binary question pair classifier, which we train in two modalities, CNN and BERT.", "We group the pairwise signals into a clustering output using the spectral clustering algorithm (Ng et al., 2001) (implementation from the smile library: http://haifengl.github.io/smile/), which we run on a matrix of pairwise similarities between data points.", "Spectral clustering is unsupervised, and requires the indication of the number of clusters k .", "For each sample, we set the parameter k to the gold number of clusters.", "This means that we are computing an upper bound with unrealistic performance, which can still be used to provide a meaningful comparison (especially if our approach outperforms it).", "As an alternative to spectral clustering, we run Kruskal's algorithm on the graph of pairwise edges, using 0.5 as a threshold for the question pair classifier scores on them; pairs having scores lower than 0.5 are neglected.", "In Tab.", "1, we present the results on the IC&OOS dataset averaged over 10 different sample splits, obtained with 10 different random seeds.", "First, we note that NSC consistently improves over all the baselines in terms of both F1 and CEAF.", "It also shows a good precision/recall balance.", "The significantly lower results of the unsupervised baseline, spectral clustering + tf-idf, suggest that, in IC&OOS, we are dealing with a rather non-trivial task, where queries expressing the same intent do not necessarily have surface closeness.", "This holds even if the model is aware of the true number of intents present in a sample (gold k ).", "The other four baselines capitalize on training a supervised scoring function for query pairs, to be further used as a clustering criterion.", "From our experiments with the IC&OOS data, we conclude that it is also not trivial to train a pairwise classifier to convey a notion of semantic similarity to pairs of queries from unseen classes.", "This is reflected in the results of using a supervised similarity function.", "The performance of spectral clustering on the output of the pairwise classifier, equally low for CNN and BERT, and lower than the model using tf-idf [Table 2 reports, per model, Precision / Recall / F1 / CEAF_e on the manual test set of Quora Intent Corpus: LSSVM 84.92 / 51.76 / 64.32 / 49.72; LSP 71.36 / 89.45 / 79.38 / 59.99; LSP_py 70.80 / 90.00 / 79.22 ±0.33 / 59.82; NSC-CNN 80.25 / 82.16 / 81.12 ±1.76 / 62.80; NSC-BERT 86.93 / 72.96 / 79.19 ±1.41 / 63.89. Table 2: Comparison of our neural models to the structural baselines on the manual test set of Quora Intent Corpus.]", "scores, is clear evidence for this.", "When we run Kruskal's algorithm on top of the output of the pairwise classifier, we observe a huge bias, towards precision in the case of CNN and recall in the case of BERT.", "The use of a threshold does not seem robust when training examples (query pairs) are treated as independent.", "In NSC, we assume, this problem is mitigated by \"collaborative\" updates of the structural loss.", "Overall, we note the impressive performance of NSC, especially when fed by BERT.", "A clustering F1 of 95.65 suggests that NSC can replicate the clusters of questions that the human annotator/knowledge engineer devised.", "We run each model with 10 different random seeds for shuffling training examples and report the averaged results on the manual, Tab.", "2, and automatic test sets, Tab.", "3.", "We compare NSC with two state-of-the-art structural approaches, LSSVM and LSP, proposed in (Haponchyk et al., 2018), reporting their numbers on the same data.", "LSP_py is our LSP reimplementation in Python using text similarity.", "We trained LSP_py for 100 epochs.", "NSC-CNN improves over the state of the art on both test sets for both measures.", "On the manual test set, NSC-BERT achieves, as expected, a higher CEAF.", "One possible explanation for its lower F1 on the manual test set is its small size, which probably does not enable an accurate evaluation.", "Accuracy: BiMPM 88.17, our fine-tuned BERT 90.88. Table 4: Accuracy comparison on the question duplicate detection task on the Quora split by Wang et al. 
(2017).", "In contrast, in the evaluation over the automatic test set, NSC-BERT largely outperforms every model, according to every measure.", "This is mainly due to the fact that the automatic clusters are more consistent with the information present in the training data used for fine-tuning BERT.", "Indeed, we fine-tuned BERT-BASE on the full Quora dataset of question pairs (https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), except for the pairs containing questions from the development and test parts of the Quora Intent Corpus.", "For the sake of transparency, in Tab.", "4, we report the accuracy of the fine-tuned BERT on the Quora split by Wang et al. (2017) compared to the official results of their BiMPM model.", "We believe the result of NSC-BERT is promising, and, in the scope of intent detection, by not being bound to a particular set of intents, it contributes to the existing neural solutions (Xia et al., 2018; Lin and Xu, 2019; Lin et al., 2020).", "In this section, we investigate the general clustering ability of NSC and, in this way, enable the comparison to the upper bound of intent detection, i.e., the intent classifier, and list its most common mistakes.", "Here, we address the standard IC&OOS scenario with the original class distribution of the dataset, where all the 150 intent classes are equally represented in the data.", "Moreover, we explore the upper bound of any clustering algorithm, i.e., the use of a supervised classifier in an unrealistic (useless for clustering) scenario, that is, having in the training data all the clusters (classes) to be discovered.", "To carry out this comparison, we trained two intent classifier [Table 6 reports, per model, Precision / Recall / F1 / CEAF_e / OOS Recall: CNN intent classifier 88.70 / 93.14 / 90.76 ±0.93 / 86.12 ±0.75 / 28.88; BERT intent classifier 90.53 / 98.36 / 94.29 ±0.24 / 89.10 ±0.40 / 38.80; NSC-CNN 89.32 / 91.25 / 90.24 ±1.09 / 84.86 ±1.34 / 85.49 ±3.94; NSC-BERT 93.58 / 97.27 / 95.38 ±0.34 / 92.05 ±0.58 / 71.45 ±4.35. Table 6: Comparison", "of the neural clustering models to the classification baselines on the test set of the IC&OOS dataset by Larson et al. (2019); full scenario.]", "models, CNN and BERT, with 150+1 (OOS) target classes.", "In Tab.", "5, we report their performance in terms of in-class accuracy and OOS recall.", "We also report the performance of the classification models from Larson et al. (2019) for reference.", "As can be observed, our models perform comparably, e.g., our BERT model is just 1.5 points behind.", "We trained NSC on the same data, split into samples, so that we could compare to the above classifiers.", "For this purpose, we follow our sampling procedure in Sec. 6.1, this time keeping all the classes, which gives us around 79 and 32 samples for training and", "dev., respectively, and 66 samples for test.", "To keep the two types of systems aligned, we also evaluate the classifiers in terms of clustering F1 on the same test samples (of size M ), which are then averaged.", "Namely, within a sample, we consider the queries predicted by a classifier model as the same class to form a distinct cluster, while those predicted as OOS form singletons.", "We note that", "(i) as expected, the results of NSC in Tab.", "6 improve with respect to the completely disjoint setting (Tab. 1).", "(ii) NSC-CNN is able to almost replicate the result of the CNN classifier in terms of F1, yielding only 1.3 points in terms of CEAF.", "(iii) Interestingly, the OOS Recall is more than 85% (2-3 times that of the classifiers), which means that 85% of all OOS queries were detected by NSC-CNN (predicted as singletons in their corresponding samples).", "We recognize, though, that it can be easier to detect OOS queries in a small sample than in a big set.", "(iv) NSC-BERT improves over the classifier model on the test samples by 1.5 in terms of clustering F1 and by more than 3 CEAF points, also achieving a better precision/recall balance (same as for the CNN modality).", "We hypothesize that the advantage here of 
the supervised clustering model over the classification models might lie in the latter being generally not as adaptive to class imbalance in the data.", "(v) Again, NSC-BERT highly improves (at least 2 times) the recall of the classifier for the OOS task.", "Analysing the output of NSC (here, we limit the discussion to NSC-BERT in the disjoint scenario of Sec. 7.2), we discovered that the majority of the mistakes made by the clustering algorithm can be traced back to several interpretable causes.", "A trivial case of word overlap, or generally string matching, in Ex.", "4 made NSC put the examples of seemingly distinct classes together.", "The actual ground-truth intent classes are denoted in parentheses.", "(4) cluster : (1) what is the reason humans even exist (meaning_of_life) (2) let me know if you are a human or are a computer (are_you_a_bot) Next, we find the presence of word-indicators of the same semantic category, i.e., SPEED, in Ex.", "5, that misled NSC.", "A frequent type of NSC's mistakes is merging together instances of different intent classes which belong to the same topic domain, especially in the case of rather close subtopics, as in cluster", "Ex.-s 6-7.", "(6) cluster : (1) put on my 90s playlist (play_music) (2) put on some metallica music (play_music) (3) what kind of music on the speaker now (what_song) (7) cluster : (1) how do i freeze my bank account (freeze_account) (2) why is there a stop on my deposit account (account_blocked) In addition, Ex.", "7 has another complicating factor of using semantically very close expressions for distinct intent concepts.", "The exact opposite situation of erroneously splitting the instances of the same intent class is also common, as in", "Ex.-s 8-9.", "(9) cluster : (1) good speaking to you (goodbye) (2) it was great talking to you (goodbye)", "In general, we assume that the last two types of mistakes can be reduced if the model sees, in training, data from the corresponding intent classes.", "NSC also drew some 
(not absolutely meaningless) connections between OOS queries (Ex. 10).", "(1) what is the highest quality carpet available (oos) (2) find schematics for ikea desk assembly (oos) (3) i have a super runny nose and want to find a doctor (oos) (4) what was the latest tremor on the richter scale (oos)", "And finally, the clustering decision in Ex.", "11 potentially highlights an annotation error, with query (2) being a false-positive OOS.", "In this section, we discuss some of the important findings of our paper: First, our experiments suggest that the transformer model boosts the performance of our clustering approach.", "This is in line with mainstream research: with respect to standard embeddings (word2vec, GloVe, etc.), transformer models provide contextual representations of words, i.e., the embedding of a word is defined with respect to the others that are in the same piece of text.", "They provide a very powerful representation of pieces of text.", "Thus, we can obtain a precise similarity between pairs of questions.", "Thanks to our structural loss function, we can back-propagate structural properties of the entire cluster to the transformer model, so that we further enrich its contextual similarity.", "Second, in the field of dialog systems, our approach can be extended to jointly predict intent and slot attributes.", "NSC can use information about slots, and the background knowledge given by attributes and values, to cluster questions into intents.", "The latter will then be more related to the specific task defined by the available slot information.", "Conversely, if we suppose the developer already has the intents, our clustering algorithm could be used to cluster values into attributes.", "Then, since NSC can reach performance similar to supervised classification methods, it would be interesting to see if it can be more accurate than them, considering the critical problems of transfer learning (i.e., when the data for training is different from the 
one the deployed system receives).", "Third, we showed the performance of NSC precisely on unseen clusters.", "Our approach only uses some clusters of the data for training (each cluster is a training example).", "Then, it can predict unseen clusters in the test set.", "In other words, our models generalize what they learn from some clusters to unseen clusters.", "Finally, given one of our models trained on a set of clusters, we can easily continue its training with new examples, i.e., new training clusters, as our neural architecture is an online framework.", "One main scalability question could be: Given one domain for which we have clusters to train our approach, how can we scale to other domains?", "We will need new clusters for the new domains, i.e., target-domain data, which is typically used for effective transfer learning.", "This does not mean that we need a large number of clusters; we just need some of them to transfer our clustering model from one domain to another.", "The transferred models will be able to predict many more new clusters from the new target domain.", "In this work, we first proposed supervised neural clustering based on the traditional LSSVM and LSP models, which hinge on optimizing the structural margin loss.", "This extends the structured prediction methods for supervised clustering to a neural setting.", "Our experiments on the IC&OOS and Quora Intent Corpora show an impressive improvement over the state of the art: 17.24% absolute over unsupervised models, and 8 percentage points more than our proposed semi-supervised approaches.", "This suggests that our neural structured prediction can", "(i) effectively optimize a structural clustering objective function on structured examples, such as sets of questions for intent detection, and", "(ii) uncover clusters of questions of unseen classes, i.e., potential intents not seen in training.", "We would like to thank the anonymous reviewers as well as the entire PC for their valuable work." ]
[ "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "abstain", "result", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", 
"method", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other" ]
[ "A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system.", "Tremendous progress has been made in recent years.", "However, major challenges remain.", "The state-of-the-art accuracy for DST is below 50% for a multi-domain dialogue task.", "A learnable DST for any new domain requires a large amount of labeled in-domain data and training from scratch.", "In this paper, we propose a MetaReinforced Multi-Domain State Generator (MERET).", "Our first contribution is to improve the DST accuracy.", "We enhance a neural model based DST generator with a reward manager, which is built on policy gradient reinforcement learning (RL) to fine-tune the generator.", "With this change, we are able to improve the joint accuracy of DST from 48.79% to 50.91% on the MultiWOZ corpus.", "Second, we explore training a DST meta-learning model with a few domains as source domains and a new domain as the target domain.", "We apply the model-agnostic meta-learning (MAML) algorithm to DST, and the obtained meta-learning model is used for new domain adaptation.", "Our experimental results show this solution is able to outperform the traditional training approach with far less training data in the target domain.", "A Dialogue State Tracker (DST) is a core component of a modular task-oriented dialogue system (Young et al., 2013).", "For each dialogue turn, a DST module takes the user utterance and the dialogue history as input, and outputs a belief estimate of the dialogue state.", "The dialogue state as of today is simplified as a set of requests and goals, both of which are represented as (slot, value) pairs such as (area, centre), (food, Chinese) for a user request I'm looking for a Chinese restaurant in the centre of the city.", "A highly accurate DST is crucial to ensure (Figure 1: a User-System dialogue with Dialog State Tracking; example user turn: I want to book a hotel of moderate price in the south).", "Budzianowski et al. (2018) recently introduced a multi-domain dialogue dataset, Multi-domain Wizard-of-Oz (MultiWOZ), which is more than one order of magnitude larger than all previous annotated task-oriented corpora, with around 10k dialogues involving more than 7 domains.", "A domain of a task-oriented system is often defined by an ontology, which defines all entity attributes, called slots, and all possible values for each slot.", "MultiWOZ presents conversation scenarios very similar to those in real industrial applications.", "Figure 1 shows an example of a multi-domain dialogue, where a user starts a conversation about a hotel reservation and moves on to look for nearby attractions of interest.", "It adds a layer of complexity to the DST and brings new challenges.", "The first new challenge is how to appropriately model DST for a multi-domain dialogue task.", "Multi-domain DST was in its infancy before MultiWOZ (Rastogi et al., 2017).", "Most previous work on DST focuses on one given domain (Henderson et al., 2013, 2014; Mrksic et al., 2017; Zhong et al., 2018; Korpusik and Glass, 2018; Liu et al., 2019).", "As Wu et al.
(2019) pointed out, to process the MultiWOZ data, the DST model has to determine a triplet (domain, slot, value) instead of a pair (slot, value) at each turn of dialogue.", "MultiWOZ contains 30 (domain, slot) pairs over 4,500 possible slot values in total.", "The prediction space is significantly larger.", "This change seems quantitative.", "However, it challenges the foundation of most successful DST models, where DST is cast as a neural model based classification problem, each (slot, value) pair is an independent class, and the number of classes is relatively limited.", "When the number of classes is as large as in MultiWOZ, classification-based approaches are not applicable.", "In real industry scenarios, the prediction space is even larger and it is often not possible to have a full ontology available in advance (Xu and Hu, 2018).", "It's hard to enumerate all possible values for each slot.", "The second challenge is how to model the commonality and differences among domains.", "The number of domains is unlimited in real life.", "Training will not scale if each new domain requires a large amount of annotated data.", "To overcome these challenges, Wu et al.
(2019) proposed a TRAnsferable Dialogue statE generator (TRADE) that generates dialogue states from utterances using a copy mechanism, facilitating knowledge transfer between domains.", "The prominent difference from previous one-domain DST models is that TRADE is based on a generation approach instead of a closed-set classification approach.", "The generation model parameters are shared among various domains and slots.", "TRADE is able to boost the DST accuracy up to 48.62% on the MultiWOZ corpus.", "This accuracy is clearly still far from acceptable.", "In this paper, we are motivated to enhance this generation-based approach with two objectives: higher accuracy and better domain adaptability.", "To improve DST accuracy, we propose a new framework which contains a state generator and a reward manager.", "The state generator follows the same setup as TRADE.", "The Reward Manager calculates the reward to fine-tune the generator through policy gradient reinforcement learning (PGRL).", "We use the reward manager to help the generator alleviate the objective mismatch challenge.", "Objective mismatch is a limitation of encoder-decoder generation approaches, where the training process is set to maximize the log likelihood, but this does not guarantee the best results on discrete evaluation metrics such as DST accuracy.", "Since MultiWOZ provides data for multiple domains, it enables us to study the long-standing domain adaptability problem.", "We hope to train a general DST model from multi-domain data that can be adapted to a new domain with minimal examples from that domain.", "We apply the meta-learning algorithm MAML for this study.", "Our key contributions in this paper are as follows: We propose a new framework as the DST model, which contains a neural model based DST generator and a reward manager.", "With our proposal, we are able to improve the joint accuracy of DST from 48.79% to 50.91%, which is 2.12% absolute
improvement over the latest state-of-the-art on the MultiWOZ corpus.", "We apply MAML to train a meta-learning DST model with a few domains as the training domains and a new domain as the testing domain.", "Our experimental results show this solution is able to outperform the traditional training approach with only 30% of the in-domain training data.", "The overview of our model is illustrated in Figure 2.", "It consists of a generator model and a reward manager.", "In this paper, we take TRADE as our baseline.", "The TRADE model comprises three components: (1) an utterance encoder, (2) a context-enhanced slot classifier, and (3) a state generator.", "We briefly describe the TRADE model in this section.", "The utterance encoder encodes dialogue utterances into a sequence of fixed-length vectors.", "TRADE uses a Bi-GRU (Chung et al., 2014) to encode.", "Instead of initializing by concatenating GloVe embeddings (Pennington et al., 2014), our model explores using BERT (Devlin et al., 2019) as the embedding model.", "We denote a sequence of dialogue turns as a matrix X_t = [U_{t-l}, R_{t-l}, ..., U_t, R_t] ∈ R^{|X_t| × d_emb}, where l is the length of the dialogue history selected, U is the user turn, R represents the system response, and d_emb indicates the turn-level embedding size.", "The encoder encodes X_t into a hidden matrix H_t = [h^enc_1, ..., h^enc_{|X_t|}] ∈ R^{|X_t| × d_hdd}, where d_hdd is the hidden size.", "The state generator uses GRUs as the decoder, which takes the embedding of the jth (domain, slot) pair as well as the kth word as input and outputs a hidden vector h^dec_jk at the kth decoding step.", "This hidden vector is then mapped to a distribution over the vocabulary V and over the dialogue history as shown in Eq (1).", "classes: ptr, none, dontcare.", "With a linear layer parameterized by W_g ∈ R^{3 × d_hdd}, the slot classifier for the jth (domain, slot) pair is defined as G_j = Softmax(W_g (P^history_{j0} H_t)^T) ∈ R^3 (3) If this
slot classifier determines none or dontcare, the system ignores any output from the state generator.", "Optimization is performed jointly for both the state generator and the slot classifier.", "The cross-entropy loss is used for both, with L_s representing the loss for the slot classifier and L_g for the generator.", "They are combined with two weighting hyper-parameters in Eq (4).", "Generally, the cross-entropy loss is used to train a generator.", "In our task, the true words Y^label_j are used and the cross-entropy loss can be defined as: loss_g = Σ_{j=1}^{J} Σ_{k=1}^{|Y_j|} -log(P^final_jk · (y^label_jk)^T) (5) where y^label_jk is the ground truth of the value word for the jth (domain, slot) pair.", "In this paper, we propose an RL-based Reward Manager to work with the generator.", "The Reward Manager is used for calculating the reward to fine-tune the generator through PGRL.", "The specific modeling process of reinforcement learning adaptation for the DST task is summarized in Algorithm 1: We treat the generator as the target agent to be trained.", "The agent interacts with an external environment (utterances, domains, slots and reward manager) by taking actions and receiving environment state and reward.", "The actions are the choices of tokens for the slot value generated for any given (domain, slot) pair.", "The action space is the vocabulary.", "Following each action, the reward manager calculates a reward by comparing the generated token to the corresponding ground-truth token.", "When reaching the last decoding step, the agent updates its parameters towards maximizing the expected reward.", "The RL loss is defined as follows: L_rl = -Σ_{j=1}^{J} Σ_{k=1}^{|Y_j|} r(y^s_jk) log(P^final(y^s_jk)) (6) where y^s_jk is a token sampled from the vocabulary probability distribution and r(y^s_jk) is the reward for the sampled token y^s_jk, computed by a reward function.", "Intuitively, the loss function L_rl enlarges the
probability of the sampled y^s_jk if it obtains a higher reward for the kth token in the jth (domain, slot) pair.", "We also define a combined loss function: L = λ1 L_rl + λ2 L_mix (7) where L_rl is the reinforcement learning loss, L_mix is the cross-entropy loss from TRADE, and λ1, λ2 are the combining hyper-parameters.", "Algorithm 1 shows how this method works.", "The traditional paradigm of supervised learning is to train a model for a specific task with plenty of annotated data.", "Meta-learning aims at learning new tasks with few steps and little data based on existing tasks.", "MAML (Finn et al., 2017) is the most popular meta-learning algorithm.", "It has been successfully employed in various tasks.", "We propose to apply MAML to perform dialogue state tracking for new domains.", "The MAML algorithm tries to build an internal representation of multiple tasks and maximize the sensitivity of the loss function when applied to new tasks, so that a small parameter update can lead to a large improvement in the new-task loss.", "In this paper, we explore how it works with DST, a key component in task-oriented dialogue systems.", "Algorithm 1 REINFORCE algorithm Input: Dialogue history sequence X, ground-truth output slot value sequences Y, a pre-trained model θ.", "Output: Trained model θ′ with the REINFORCE algorithm.", "1: Training Steps: 2: Initialize θ with random weights; 3: Pre-train θ using the cross-entropy loss of the generator and classifier on dataset (X, Y); 4: Initialize θ′ = θ.", "5: while not done do 6: Select a batch of size N from X and Y; 7: for each slot do 8: Sample {Y^s = (y^s_1, ..., y^s_{|Y_j|})}^N_1 from the final probability distribution over the vocabulary; 9: Compute rewards {r(y^s_1), ..., r(y^s_{|Y_j|})}^N_1 as defined in the Reward Manager; 10: end for 11: Compute L_rl and L using Eq (6) and Eq (7); 12: Update the parameters of the network with learning rate η, θ′ ← θ′ + η ∇_{θ′} L; 13: end while 14: Testing Steps: 15: for batch of X and Y do 16: Generate the output Ŷ; 17: end for 18: return The evaluated model θ′; MAML is compatible with any model trained by gradient descent.", "We can denote the baseline model as M.", "Training a typical gradient descent model M involves (1) providing training data and initializing the parameters of M; (2) computing a given objective loss; (3) applying gradient descent to the loss to update the parameters of M.", "With MAML, the training steps become: (1) Initialize M and make n_d copies of M as M′_d; (2) Select training data from each domain and update the M′_d parameters based on gradient descent and a loss function; (3) Calculate a loss for each domain with its updated temporary model M′_d; (4) Sum up the new losses from the training domains into a total loss; (5) Update the parameters of the original M based on the total loss; (6) Repeat the above steps until M converges.", "Algorithm 2 shows step-by-step how MAML combines with our model MERET.", "Suppose we consider n_d dialogue domains; we take n_tr domains as source domains for meta-training and n_ts domains as target domains for meta-testing.", "For each source domain, we divide the source domain data into D^train_d as the support dataset and D^valid_d as the query dataset, where d is the domain index.", "α and β are two hyper-parameters for MAML: α is the learning rate for each domain and β is the learning rate for the meta-learning update.", "There are two cycles.", "The outer cycle is for meta-learning, updating the model parameters of M.", "The inner cycle is for task learning, updating the temporary model M′_d of each domain d.", "For task learning, we select K examples from D^train_d for each domain d, evaluate the gradient of the loss function as in Eq (7), and update the parameters θ′_d with respect to the K examples (Step 4).", "After each domain model is updated once, the M model parameters are updated using the sum of the loss with respect to K′ examples
sampled from each D^valid_d.", "Specifically, we sum the loss of M′_d in each domain to obtain the meta loss L_M, L_M = Σ_d L_d(M′_d, D^valid_d) (8) Finally, we minimize the meta loss to update the current model M until an ideal meta-learned model M is achieved, M ← M - β ∇_M Σ_d L_d(M′_d, D^valid_d) (9) To adapt to a new domain, we start with the meta-learned model M instead of initializing randomly; new-domain training data is used to update the model parameters over multiple batches, and the learnt task model is fit to the new domain.", "In this paper, we use MultiWOZ as our training and testing corpus.", "MultiWOZ is a fully-labeled collection of human-human written conversations spanning multiple domains and topics.", "It contains 8438 multi-turn dialogues with on average 13.", "7 turns per dialogue.", "It has 30 (domain, slot) pairs and over 4,500 slot values.", "We use the five most frequent domains (restaurant, hotel, attraction, taxi, train) in our experiments.", "Two common metrics to evaluate DST models are joint goal accuracy and slot accuracy.", "Joint accuracy measures the accuracy of dialogue states, where a dialogue state is correctly predicted only if [Algorithm 2 MAML algorithm Input: D^train_d; D^valid_d; α; β.]", "Output: Trained model M with the MAML algorithm.", "all the values for all the (domain, slot) pairs are correctly predicted.", "Slot accuracy is the accuracy of the (domain, slot, value) tuples.", "Joint accuracy is a more challenging metric.", "For all experiments, we choose Bi-GRU networks with a hidden size of 768 as the encoder and the decoder.", "The model is optimized using Adam (Kingma and Ba, 2015) with a learning rate of 0.001.", "We halve the learning rate if the validation loss increases.", "We set the batch (Ioffe and Szegedy, 2015) size to 32 and the dropout (Zaremba et al., 2014) rate to 0.2.", "Different reward functions were tried during experimentation.", "We choose a
binary reward: a positive value is given when the output token equals the target and a punishment otherwise, 1 and -0.1 respectively.", "We evaluate the model every epoch and adopt early stopping on the validation dataset.", "In the meta-training phase, we set different numbers of updates for M′ due to the differences in slot complexity for each domain.", "The model was implemented in PyTorch.", "Table 1 shows our experimental results with MERET.", "MERET achieves a joint goal accuracy of 50.", "91%, which is 2.", "12% above the latest state-of-the-art DST model COMER and is 2.", "29% higher than TRADE.", "Table 1 also shows the accuracies of a few recent systems on the same corpus.", "MERET is also able to obtain the best slot accura- [Table 1: The evaluation of existing multi-domain DSTs on MultiWOZ. DST Models (Joint Acc / Slot Acc): MultiWOZ Benchmark (Budzianowski et al., 2018) 25.83 / -; GLAD (Zhong et al., 2018) 35.57 / 95.44; HyST (ensemble) (Goel et al., 2019) 44.22 / -; TRADE (Wu et al., 2019) 48.62 / 96.92; COMER (Ren et al., 2019) 48.79 / -; MERET 50.91 / 97.07; -BERT 50.35 / 96.98; -RL 50.09 / 97.01]", "cy 97.", "07%, which is slightly higher than TRADE, but not substantially so.", "To prove the effectiveness of our structure, we conduct ablation experiments in different setups.", "MERET-BERT (removing BERT; acc 50.35%, +1.73%) uses the same GloVe embeddings as TRADE; the improvement here mainly comes from RL, benefitting from the reward manager, which gives the entire model the ability to explore rather than be greedy at every single step and overcomes the existing limitation of encoder-decoder generation approaches mentioned in the introduction.", "MERET-RL (removing RL; acc 50.09%, +1.47%) shows the increment due to the embedding change, using BERT instead of GloVe and integrating BERT's powerful pre-trained language representations.", "We can see that MERET's advantage mainly comes from the RL.", "The way we employ RL with the generator in this paper is a good baseline.", "We are
encouraged by these experimental results for future exploration in this line of research.", "To test the effectiveness of MERET, we choose hotel, train and restaurant as the source domains, and taxi and attraction as the target domains.", "For each source domain, we utilize on average 3000 dialogues for training and 200 dialogues for testing.", "We utilize 30 dialogues (1% of a source domain) for training on new domains with the pre-trained model.", "In our experiments, we conducted comparison studies with three setups: (1) training a MERET model from scratch using 1% sampled data from each target domain, (2) meta-training a MERET model using the source domain data and then fine-tuning with 1% sampled data from each target domain, (3) training a TRADE model using the source domain data and then fine-tuning", "with 1% sampled data from each target domain.", "Experimental results are listed in Table 2.", "MERET achieves substantially higher accuracy, 64.7% joint goal accuracy for the taxi domain and 43.10% for the attraction domain, compared to the other two setups.", "Similar advantages are obtained for slot accuracies for both target domains.", "To explore the K-shot performance of the MERET model, we conduct experiments to measure the impact of the number of training examples from the target domain.", "We meta-train MERET with the source domains and meta-test on the taxi and attraction domains.", "The number of training samples K from the target domains varies from 1 to 10.", "We use K = (1, 3, 5, 10) as the testing points.", "Figure 3 illustrates our experiments.", "It's natural that the accuracy increases as the training data increases.", "(Figure 6: The changes of joint accuracy over dialogue turns.) We can observe that the accuracy with K = 5 of", "the attraction domain surpasses the accuracy of training MERET from scratch using 1% (30 dialogues) of the attraction domain data.", "This demonstrates our
model's capability to achieve good performance with a fraction of the target data.", "We analyze the wrong predictions and draw a heat map of distributions for the slot classifier, given the importance of its decisions to the final output.", "From the map in Figure 4, we can see that the main cause of errors is the classifier's inertia of omit-prediction from ptr to none, which accounts for a 47.3% proportion.", "Over-prediction is the next most common cause, with a 27.3% rate.", "Values on the diagonal of the lower-left corner show the mis-prediction rate of the generator.", "Comparing the two pictures, we can see that our proposed model has a higher generative ability over state values.", "An overall correct-error analysis of slots across domains is shown in Figure", "5. The number-related slots book stay in the hotel domain and book day in the restaurant domain have the highest correct rates, 98.97% and 98.94%, respectively. The name-related slots in the restaurant, attraction, and hotel domains have the highest error rates, 8.94%, 7.36%, and 7.21%, respectively.", "This is because these slots usually have large sets of possible values and high annotation error rates.", "The type slot of the hotel domain also keeps a high error rate across experiments, even though it is an easy task with only two possible values in the ontology.", "The reason is that labels of the (hotel, type) pair are usually missing in the dataset.", "We further show the performance of our model over different dialogue turns in Figure", "6.
As the number of dialogue turns increases,", "the influence of context on the final results gradually appears, depending on the abilities of the different models.", "We can see that MERET gradually outperforms TRADE.", "This is especially true when the context length is long.", "Our model can carry information over multiple turns to be used by the state generator, with the help of RL maximizing reward expectations in a better way.", "We sample one typical dialogue from MultiWOZ to demonstrate the effectiveness of MERET in the case study.", "Due to limited space, we present the same key parts derived from the two models; the details are shown in Table 3.", "We observe that the constraint for the food slot is dynamic, and MERET is able to capture this context information thanks to the RL-fine-tuned state generator, which encourages greater exploration for DST and better maximizes the reward expectation.", "Mrksic et al. (2017) propose the neural belief tracking (NBT) framework, which does not rely on hand-crafted semantic lexicons.", "The model uses Convolutional Neural Networks (CNN) or Deep Neural Networks (DNN) as the dialogue context encoder and makes a binary decision for (slot, value) pairs.", "Zhong et al.
(2018) propose global-local modules to learn representations of the user utterance and system actions and calculate the similarity between the contextualized representation and the (slot, value) pair.", "Xu and Hu (2018) utilize a pointer network to track dialogue state, introducing the notions of unseen states and unknown states early on.", "Chao and Lane (2019) use BERT as the dialogue context encoder to obtain contextualized representations, which are passed to a classification module that outputs three classes: none, dontcare, span.", "When the class is span, the start and end positions of slot values are obtained in the dialogue context.", "However, both Xu and Hu (2018) and Chao and Lane (2019) suffer from the fact that they cannot produce the correct answer when the value does not exist in the input.", "Wu et al. (2019) propose an approach in which the model generates a sequence of values from utterances via a copy mechanism, which avoids the case where the value is not in the input.", "It also uses a three-way classifier to get a probability distribution over the none, dontcare, ptr classes.", "Ren et al. (2019) achieve state-of-the-art performance on the MultiWOZ dataset by applying a hierarchical encoder-decoder structure to generate a sequence of belief states.", "The model shares parameters and has a constant inference time complexity.", "Reinforcement learning is a way of training an agent during interaction with the environment by maximizing expected reward.", "The idea of the policy gradient algorithm has been applied to the training of sequence-to-sequence models.", "Ranzato et al. (2016) propose the MIXER algorithm, the first application of the REINFORCE algorithm (Williams, 1992) to training sequence-to-sequence models.", "However, an additional model, which is used to predict the expected reward, is required in MIXER.", "Rennie et al.
(2017) proposed self-critical sequence training (SCST).", "It directly optimizes the true, sequence-level evaluation metric, and avoids training a model to estimate expected future rewards.", "Paulus et al. (2018) applied SCST to summary generation, improving the ROUGE scores of the generated results.", "The SCST algorithm was also used by Zhao et al. (2018) to improve story ending generation.", "Keneshloo et al. (2018) present some of the most recent frameworks that combine concepts from RL and deep neural networks and explain how these two areas could benefit from each other in solving complex seq2seq tasks.", "Meta-learning aims at learning target tasks with little data based on source tasks.", "This algorithm is compatible with any model optimized with gradient descent, so it has a wide range of applicability.", "Meta-learning has been applied in various fields such as image classification (Santoro et al., 2016; Finn et al., 2017) and robot manipulation (Duan et al., 2016; Wang et al., 2016).", "In the field of natural language processing, some exploratory work (Gu et al., 2018; Huang et al., 2018; Qian and Yu, 2019; Madotto et al., 2019) has been proposed in recent years.", "Most of it focuses on generation-related tasks and machine translation.", "To our knowledge, little related work on dialogue state tracking (DST) exists to date.", "We propose to apply the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) to train a DST meta-learning model with a few domains as the training domains and a new domain as the testing domain to achieve multi-domain adaptation.", "We introduce an end-to-end generative framework with a pre-trained language model and a copy mechanism, using an RL-based generator to encourage higher semantic relevance in a greater exploration space for DST.", "Experiments on a multi-domain dataset show that our proposed model achieves state-of-the-art performance on the DST task, exceeding the current best
result by over 2%.", "In addition, we train the dialogue state tracker using multiple rich-resource single-domain dialogue datasets via MAML.", "The model is capable of learning a competitive and scalable DST on a new domain with only a few training examples in an efficient manner.", "Empirical results on the MultiWOZ dataset indicate that our solution outperforms non-meta-learning baselines trained from scratch, adapting to new few-shot domains with less data and a faster convergence rate.", "In future work, we intend to further explore the combination of RL and DST through reward design, and to investigate the internal mechanism.", "In the long run, we are interested in combining many tasks into one learning process with meta-learning.", "We thank Zhiqiang Yang and Chao Deng for their insightful discussion and great support.", "We also thank all anonymous reviewers for their constructive comments." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "abstain", "method", "objective", "method", "objective", "result", "objective", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "other", "other", "other", 
"other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "objective", "method", "abstain", "objective", "objective", "abstain", "other", "other" ]
[ "We present Neural Machine Translation (NMT) training using document-level metrics with batch-level documents.", "Previous sequence-objective approaches to NMT training focus exclusively on sentence-level metrics like sentence BLEU, which do not correspond to the desired evaluation metric, typically document BLEU.", "Meanwhile, research into document-level NMT training focuses on data or model architecture rather than training procedure.", "We find that each of these lines of research has a clear space in it for the other, and propose merging them with a scheme that allows a document-level evaluation metric to be used in the NMT training objective.", "We first sample pseudo-documents from sentence samples.", "We then approximate the expected document BLEU gradient with Monte Carlo sampling for use as a cost function in Minimum Risk Training (MRT).", "This two-level sampling procedure gives NMT performance gains over sequence MRT and maximum-likelihood training.", "We demonstrate that training is more robust for document-level metrics than with sequence metrics.", "We further demonstrate improvements on NMT with TER and Grammatical Error Correction (GEC) using GLEU, both metrics used at the document level for evaluations.", "Neural Machine Translation (NMT) research has explored token-level likelihood functions (Sutskever et al., 2014; Bahdanau et al., 2015) and sequence-level objectives inspired by reinforcement learning (Ranzato et al., 2016; Bahdanau et al., 2016) or expected Minimum Risk Training (MRT) (Shen et al., 2016).", "A typical sequence objective in these cases is based on sentence-level BLEU (sBLEU) (Edunov et al., 2018).", "However, sBLEU, even if aggregated over sentences, is only an approximation of the desired metric, document-level BLEU.", "Beyond translation, many metrics for natural language tasks do not have robust sentence-level approximations.", "A logical progression is the extension of sequence-level NMT training objectives to
include context from outside the sentence.", "Document-based NMT, by contrast, aims to use out-of-sentence context to improve translation.", "Recent research explores lexical consistency by providing additional sentences during training (Maruf et al., 2019; Voita et al., 2018, 2019) or inference (Voita et al., 2019; Stahlberg et al., 2019), potentially with adjustments to model architecture.", "However, to the best of our knowledge, no attempt has been made to extend sequence-level neural training objectives to include document-level reward functions.", "This is despite document-level BLEU being arguably the most common NMT metric, and being the function originally optimised by Minimum Error Rate Training (MERT) for Statistical Machine Translation (SMT) (Och, 2003).", "We propose merging lines of research on training objectives and document-level translation.", "We achieve this by presenting a document-level approach to sequence-level objectives which brings the training objective closer to the actual evaluation metric, using MRT as a representative example.", "We demonstrate MRT under document-level BLEU as well as Translation Edit Rate (TER) (Snover, 2006), which while decomposable to sentence level is less noisy when used over documents.", "We consider both pseudo-documents where sentences are assigned randomly to a mini-batch, and true document context where all sentences in the batch are from the same document.", "We finally apply our scheme to supervised Grammatical Error Correction, for which using neural models is becoming increasingly popular (Xie et al., 2016; Sakaguchi et al., 2017; Stahlberg et al., 2019).", "We show gains in GEC metrics GLEU (Napoles et al., 2015) and M2 (Dahlmeier and Ng, 2012).", "Minimum Error Rate Training was introduced for phrase-based SMT with document-level BLEU (Och, 2003).", "Shen et al. 
(2016) extend these ideas to NMT, using expected minimum risk at the sequence level with an sBLEU cost for end-to-end NMT training.", "Edunov et al. (2018) explore random and beam sampling for NMT sequence-MRT, as well as other sequence-level training losses.", "Related developments in NMT include combined reinforcement-learning/cross-entropy approaches such as MIXER (Ranzato et al., 2016), which itself has origins in the REINFORCE algorithm described by Williams (1992).", "We do not explore such approaches, although our document-sampling and document-metric schemes could in principle be extended to them.", "Sequence-level MRT has seen success outside NMT.", "Ayana et al. (2016) use sequence MRT for summarization, while Shannon (2017) uses a related approach for speech recognition.", "MRT can be seen as a special case of neural reinforcement learning, which Sakaguchi et al. (2017) apply to GEC with sequence-level costs.", "Closest to our approach is the work of Jean and Cho (2019) on NMT with a minibatch-context-sensitive training procedure.", "However, they do not optimize on document metrics over those contexts.", "They also sample contexts randomly, while we find diverse context sampling is important for the success of document-MRT.", "Sentence-level MRT for NMT aims to minimize the expected loss on training data with a loss function between sampled target sentences $y$ and gold reference sentences $y^*$.", "For NMT a common sentence-level cost function $\Delta(y, y^*)$ is 1 - sBLEU, where sBLEU is smoothed by setting initial n-gram counts to 1 (Edunov et al., 2018).", "We take N samples for each of the S sentences in a mini-batch.", "We write the cost function between the s-th reference in a mini-batch, $y^{*(s)}$, and its n-th sample, $y_n^{(s)}$, as $\Delta_n^{(s)} = \Delta(y_n^{(s)}, y^{*(s)})$.", "The risk gradient for end-to-end NMT with MRT as in Shen et al. 
(2016), with sample-count scaling, is then: $\nabla R(\theta) = \frac{1}{NS} \sum_{s=1}^{S} \sum_{n=1}^{N} \Delta_n^{(s)} \nabla \log P(y_n^{(s)} | x^{(s)}; \theta)$ (1). 2.2 Document-level MRT. By analogy with sequence-level MRT, we consider MRT over batches of S sentence pairs, which we treat as a pseudo-document.", "In practice we experiment both with sentences chosen randomly from all training data, and with true context where all sentences per batch are from a single document.", "Let $X = [x^{(1)}, \ldots, x^{(S)}]$ be the source document, $Y = [y^{(1)}, \ldots, y^{(S)}]$ be a document of candidate translations, and $Y^* = [y^{*(1)}, \ldots, y^{*(S)}]$ be the reference translations.", "Document-level metric $D(Y, Y^*)$, which may be non-differentiable, replaces the sequence-level metric $\Delta(y, y^{*(s)})$.", "We define the document-level risk: $R(\theta) = \sum_Y D(Y, Y^*) P(Y | X; \theta)$. Using $p \nabla \log p = \nabla p$, and defining $L(Y) = \log P(Y | X; \theta)$ for brevity: $\nabla R(\theta) = \sum_Y D(Y, Y^*) P(Y | X; \theta) \nabla L(Y) = \mathbb{E}[D(Y, Y^*) \nabla L(Y) \,|\, X; \theta]$ (2). Using simple Monte-Carlo, after Shannon (2017), we replace the expectation by an average taken over N sampled translation documents $Y_n \sim P(Y | X; \theta)$: $\nabla R(\theta) \approx \frac{1}{N} \sum_{n=1}^{N} D(Y_n, Y^*) \nabla L(Y_n)$. The n-th sample for the s-th sentence in the batch-level document, $y_n^{(s)}$, contributes the following term to the overall gradient: $\frac{1}{N} \sum_{Y : y^{(s)} = y_n^{(s)}} D(Y, Y^*) \nabla \log P(y_n^{(s)} | x^{(s)}; \theta)$. In other words the gradient of each sample is weighted by the aggregated document-level scores for documents in which the sample appears.", "To generate sample documents we first sample sentences.", "Sentence sampling for NMT generates new tokens in a left-to-right manner (Shen et al., 2016).", "In left-to-right generation each token is sampled from a distribution conditioned on previously sampled tokens, minimizing exposure bias to gold references which the model is unlikely to see at inference time (Ranzato et al., 2016).", "Sampling can be via beam 
search, or random sampling from the model distribution given previously sampled tokens.", "Beam search produces more likely samples which may be less diverse compared to random sampling (Edunov et al., 2018).", "Here we only consider sampling during training.", "While samples can be more easily generated offline with respect to fixed model parameters, such samples are not representative of the current model.", "With N sample translations for each of the S sentence pairs per batch we can construct $N^S$ possible sample documents as sequences of S sentences.", "Considering all possible documents is intractable unless N and S are small.", "It also carries the risk that a single sentence will appear in multiple sampled documents, giving it undue weight.", "Instead we propose creating N documents by first ordering samples for each sentence (e.g. by sBLEU), then creating the n-th sample document $Y_n$ by concatenating the n-th sample from each sentence.", "This gives a set of N diverse documents sampled from $N^S$ possibilities.", "We expect the sampled documents to be diverse in contents, since a given sentence will only ever occur in a single document context, and diverse in score.", "We refer to this scheme as ordered document sampling.", "Figure 1 illustrates ordered document sampling by comparison to a scheme which randomly samples sentences to form documents.", "We report on English-German NMT.", "We initialize with a baseline trained on 17.5M sentence pairs from WMT19 news task datasets (Barrault et al., 2019), on which we learn a 32K-merge joint BPE vocabulary (Sennrich et al., 2016).", "We validate on newstest2017, and evaluate on newstest2018.", "We apply MRT only during fine-tuning, following previous work (Edunov et al., 2018; Shen et al., 2016).", "In early experiments, we found that training from scratch with discriminative objectives (sequence- or document-based) is ineffective.", "We suspect samples produced early in training are so unlike the references that the 
model never receives a strong enough signal for effective training.", "We fine-tune on old WMT news task test sets (2008-2016) in two settings.", "With random batches, sentences from different documents are shuffled randomly into mini-batches.", "In this case doc-MRT metrics are over pseudo-documents.", "With document batches, each batch contains only sentences from one document, and doc-MRT uses true document context.", "We use the same sampling temperatures and the same risk sharpness factors for both forms of MRT for each experiment.", "For Grammatical Error Correction (GEC) we train on sentences from NUCLE (Dahlmeier et al., 2013) and Lang-8 Learner English (Mizumoto et al., 2012) with at least one correction, a total of 660K sentences.", "We evaluate on the JFLEG (Napoles et al., 2017) and CoNLL 2014 (Ng et al., 2014) sets.", "For GEC experiments we use random batching only.", "For all models we use a Transformer model (Vaswani et al., 2017) with the 'base' Tensor2Tensor parameters (Vaswani et al., 2018).", "We train to validation set BLEU convergence on a single GPU.", "The batch size for baselines and MLE is 4096 tokens.", "For MRT, where each sentence in the batch is sampled N times, we reduce batch size by N while delaying gradient updates by the same factor to keep the effective batch size constant (Saunders et al., 2018).", "At inference time we decode using beam size 4.", "All BLEU scores are for cased, detokenized output, calculated using SacreBLEU (Post, 2018).", "Our proposed document-MRT approach is more complex than sequence-MRT due to the additional score-aggregation and context-sampling steps.", "In practice we find that the extra computation of ordering and aggregating sequence scores is negligible when compared to the computational cost of sentence sampling, required for all forms of MRT.", "Our MRT experiments use N = 8 random samples per sentence unless otherwise stated.", "In this we choose the highest N we can practically experiment with, 
since previous work finds MRT performance increasing steadily with more samples per sentence (Shen et al., 2016).", "That we see improvements with so few samples is in contrast to previous work which finds BLEU gains only with 20 or more samples per sentence for sequence-MRT (Shen et al., 2016; Edunov et al., 2018).", "However, we find that document-MRT allows improvements with far fewer samples, perhaps because the aggregation of scores over sentences in a context increases robustness to variation in individual samples.", "Relatedly, we find that add-one BLEU smoothing (Lin and Och, 2004) is required for sequence-MRT as in Shen et al. (2016).", "However we find that doc-MRT can achieve good results without smoothing, perhaps because n-gram precisions are far less likely to be 0 when calculated over a document.", "In Table 1, we fine-tune an en-de baseline on documents from past news sets.", "We compare sentence-BLEU and document-BLEU MRT to fine-tuning with Maximum Likelihood Estimation (MLE).", "MLE fine-tuning degrades the baseline.", "This suggests the baseline is well-converged, as is desirable for applying MRT (Shen et al., 2016).", "The degradation is smaller with batches containing only sentences from the same document.", "We connect this to the idea that NMT batches with fewer sentence pairs have 'noisier' estimated gradients, harming training (Saunders et al., 2018).", "We expect batches of sentences from a single document to be similar and therefore give less noisy gradient estimates.", "Both seq-MRT and doc-MRT improve over the baseline with random sampling and N = 8 .", "We also explore MRT at N = 4 , with batch size adjusted as described in section 3 for the same effective batch size per update, and with fewer training steps such that the model 'sees' a similar proportion of the overall dataset.", "We do not report beam sampling results as early experiments indicate beam sampling gives similarly poor results for both seq-MRT and doc-MRT.", "This may be 
because beam search produces insufficiently diverse samples for this task (Freitag and Al-Onaizan, 2017).", "Sequence-MRT gives a 0.8 BLEU gain over the baseline with both batching schemes using N = 8 samples, but starts to degrade the baseline with N = 4 samples.", "With document batches and N = 8 Doc-MRT (ordered) outperforms seq-MRT by a further 0.4 BLEU.", "With N = 4 doc-MRT (ordered) still achieves a 0.7 BLEU improvement over the baseline, or a 0.8 BLEU improvement over seq-MRT.", "We suggest therefore that doc-MRT (ordered) may be a computationally more efficient alternative to seq-MRT when large sample counts are not practical.", "For contrast with the ordered document sampling approach of Section 2.3, we give results for doc-MRT (random), which uses randomly sampled contexts.", "This approach falls significantly behind doc-MRT (ordered) with either batching scheme.", "Since doc-MRT (random) with random batches is exposed to randomness at the batch construction, sentence sampling and document sampling stages, [Table 3: GEC Precision, Recall, M2, and GLEU after MLE and MRT. Columns give JFLEG P / R / M2 / GLEU, then CoNLL-2014 P / R / M2 / GLEU. Baseline: 67.3 / 38.2 / 58.4 / 50.4, then 54.4 / 21.8 / 41.9 / 67.3. MLE: 64.7 / 37.7 / 56.6 / 50.1, then 51.4 / 20.9 / 39.8 / 67.1. Seq-MRT: 62.7 / 39.1 / 56.0 / 50.0, then 52.4 / 24.5 / 42.7 / 67.1. Doc-MRT (ordered): 64.4 / 41.0 / 57.8 / 51.4, then 53.2 / 24.6 / 43.2 / 67.5.]", "these results are averages over 3 experimental runs, which gave fairly consistent results ( < 0.2 BLEU range).", "In general we do find that results with random batches and random ordering are variable and sensitive to batch size and batching scheme.", "We interpret these results by considering the effect on the per-sentence cost for the different schemes.", "We find MRT works well when sample scores are different enough to be discriminated, but suffers if scores are too different.", "This is in line with the findings of Edunov et al. 
(2018) that including the gold reference causes the model to assign low relative probabilities to every other sample.", "Doc-MRT aggregates scores over many samples, while seq-MRT uses individual scores.", "We believe this explains the stronger performance of doc-MRT for small values of N , especially for the ordered document scheme, which ensures scores are still different enough for MRT to discriminate.", "Our approach can also be used with document-level metrics that are not intended to be used with individual sentences.", "In Table 2 we demonstrate this with TER, which estimates the edit rate required to correct a set of translation hypotheses.", "Document-TER MRT improves over a strong baseline, although batching scheme has less of an impact here.", "Notably seq-level MRT does not improve TER over the baseline, indicating TER may be too noisy a metric for use at the sentence level.", "Finally, we apply our MRT approach to the GEC GLEU metric (Napoles et al., 2015), an n-gram edit measure typically used at the document level.", "Table 3 shows that document MRT fine-tuning improves GLEU over the baseline, MLE fine-tuning, and a sequence-GLEU MRT formulation.", "Also notable is the change in M2, which finds the phrase-level edit sequence achieving the highest overlap with the gold-standard (Dahlmeier and Ng, 2012).", "MLE and sequence-MRT improve recall at a detriment to precision, suggesting over-generation of spurious corrections.", "Document-MRT likewise improves recall, but with a precision score closer to the baseline for more balanced performance.", "There is clear indication of a tension between M2 and GLEU: a small increase in GLEU under doc-MRT on CONLL leads to a large increase in M2, while a large increase in GLEU under doc-MRT on JFLEG leads to a small decrease in M2.", "We note that our improvements on JFLEG are similar to the improvements shown by Sakaguchi et al. 
(2017) for neural reinforcement learning with a sequence-GLEU cost metric.", "However, their results involve N=20 samples and 600k updates, compared to N=8 and 3k updates with our approach.", "We present a novel approach for structured loss training with document-level objective functions.", "Our approach relies on a procedure for sampling a set of diverse batch-level contexts using N-wise sample ordering.", "As well as randomly selecting training data, we assess training with mini-batches consisting only of single document contexts.", "While the scope of this work does not extend to sampling sentences given document context, this would be an interesting direction for future work.", "We demonstrate improvements covering three document-level evaluation metrics: BLEU and TER for NMT and GLEU for GEC.", "We finish by noting that the original MERT procedure developed for SMT optimised document-level BLEU and with our procedure we reintroduce this to NMT.", "This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service 1 funded by EPSRC Tier-2 capital grant EP/P020259/1.", "1 http://www.hpc.cam.ac.uk References Shiqi Shen Ayana, Zhiyuan Liu, and Maosong Sun." ]
[ "method", "abstain", "abstain", "objective", "objective", "method", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "abstain", "method", "method", "result", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "other", "result", "other", "other", "method", "abstain", "method", "method", "other", "other", "other", "objective", "other", "other", "other", "other", "method", "other", "method", "other", "other", "objective", "other", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "method", "method", "objective", "objective", "objective", "other", "other" ]
[ "The large size of pretrained networks makes them difficult to deploy for multiple tasks in storage-constrained settings.", "Diff pruning enables parameter-efficient transfer learning that scales well with new tasks.", "The approach learns a task-specific diff vector that extends the original pretrained parameters.", "This diff vector is adaptively pruned during training with a differentiable approximation to the $L_0$-norm penalty to encourage sparsity.", "As the number of tasks increases, diff pruning remains parameter-efficient, as it requires storing only a small diff vector for each task.", "Since it does not require access to all tasks during training, it is attractive in on-device deployment settings where tasks arrive in stream or even from different providers.", "Diff pruning can match the performance of finetuned baselines on the GLUE benchmark while only modifying 0.5% of the pretrained model's parameters per task and scales favorably in comparison to popular pruning approaches.", "Task-specific finetuning of pretrained deep networks is the dominant paradigm in contemporary NLP, achieving state-of-the-art results across a suite of natural language understanding tasks (Devlin et al., 2019; Liu et al., 2019c; Yang et al., 2019; Lan et al., 2020).", "While straightforward and empirically effective, this approach is difficult to scale to multi-task, memory-constrained settings (e.g. 
for on-device applications), as it requires shipping and storing a full set of model parameters for each task.", "Inasmuch as these models are learning generalizable, task-agnostic language representations through self-supervised pretraining, finetuning the entire model for each task seems especially profligate.", "Code: https://github.com/dguo98/DiffPruning A popular approach to parameter-efficiency is to learn smaller compressed models for each task (Gordon et al., 2020; Sajjad et al., 2020; Zhao et al., 2020; Sanh et al., 2020).", "Such approaches face a steep sparsity/performance tradeoff and keep a substantial amount of nonzero parameters per task (e.g. 10%-30%).", "Multi-task learning and feature-based transfer allow for more parameter-efficient transfer learning per task (Liu et al., 2019b; Clark et al., 2019; Stickland & Murray, 2019; Reimers & Gurevych, 2019).", "These methods train a small number of additional parameters (e.g. a linear layer) on top of a shared model.", "However, multi-task learning generally requires access to all tasks during training to prevent catastrophic forgetting (French, 1999), while feature-based transfer learning (e.g. based on task-agnostic sentence representations) is typically outperformed by finetuning (Howard & Ruder, 2018).", "An appealing middle ground is to finetune an extension of the base model for specific tasks.", "This approach captures the training benefits of finetuning while maintaining the task modularity of feature-based transfer.", "For example, Adapters (Rebuffi et al., 2018) use smaller, task-specific modules that are inserted between layers of a model. This approach does not require access to all tasks during training, targeting realistic settings where new tasks arrive in stream (Houlsby et al., 2019; Pfeiffer et al., 2020a,b,c).", "Houlsby et al. 
(2019) find that adapter layers can match the performance of fully finetuned BERT on the GLUE benchmark while requiring 3.6% additional parameters (on average) per task.", "Diff pruning is a new extension to pretrained models with the goal of even more parameter-efficient transfer learning.", "Instead of modifying the architecture of the model, diff pruning extends the base model through a task-specific difference vector.", "In order to learn this vector, we reparameterize the task-specific model parameters as $\theta_{\text{task}} = \theta_{\text{pretrained}} + \delta_{\text{task}}$, where the pretrained parameter vector $\theta_{\text{pretrained}}$ is fixed and the task-specific diff vector $\delta_{\text{task}}$ is finetuned.", "The diff vector is regularized with a differentiable approximation to the $L_0$-norm penalty (Louizos et al., 2018) to encourage sparsity.", "Diff pruning can become extremely parameter-efficient, as it only requires storing the nonzero positions and weights of the diff vector for each task.", "The cost of storing the shared pretrained model remains constant and is amortized across multiple tasks.", "On the GLUE benchmark (Wang et al., 2019a), diff pruning can match the performance of the fully finetuned BERT baselines while finetuning only", "0.5% of the pretrained parameters per task.", "As the number of tasks increases, diff pruning outperforms popular pruning-based methods in amount of storage required.", "Transfer learning in NLP mostly uses a pretrain-and-finetune paradigm, which initializes a subset of the model parameters for all tasks from a pretrained model and then finetunes on a task-specific objective.", "Pretraining objectives include context prediction (Mikolov et al., 2013), autoencoding (Dai & Le, 2015), machine translation (McCann et al., 2017), and more recently, variants of language modeling (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019) objectives.", "Here we consider applying transfer learning to multiple tasks.", "We consider a setting with a potentially unknown set of tasks 
(which may arrive in stream), where each task $\tau$ has an associated training set $D_\tau = \{x^{(n)}, y^{(n)}\}_{n=1}^{N}$.", "For all tasks, the goal is to produce (possibly tied) model parameters $\theta_\tau$ to minimize the empirical risk, $\min_{\theta_\tau} \frac{1}{N} \sum_{n=1}^{N} C(f_\tau(x^{(n)}; \theta_\tau), y^{(n)}) + \lambda R(\theta_\tau)$, where $f_\tau(\cdot; \theta_\tau)$ is a parameterized function over the input (e.g. a neural network), $C(\cdot, \cdot)$ is a loss function (e.g. cross-entropy), and $R(\theta_\tau)$ is an optional regularizer with hyperparameter $\lambda$.", "We can use the pretrain-finetune approach by simply learning independent parameters for each [footnote 1:] While the loss function can in principle be task-specific, in practice we use cross entropy for all tasks and hence omit the subscript in $C(\cdot, \cdot)$.", "task.", "However, the large size of pretrained models makes this approach exceedingly parameter inefficient.", "For example, widely-adopted models such as BERTBASE and BERTLARGE have 110M and 340M parameters respectively, while their contemporaries have parameter counts in the billions (Raffel et al., 2020; Shoeybi et al., 2019; Rajbhandari et al., 2019).", "Storing the fully finetuned models therefore becomes difficult even for a moderate number of tasks.", "A classic approach to tackling this parameter-inefficiency is to train a single shared model (along with a task-specific output layer) against multiple tasks through joint training (Caruana, 1997).", "However, the usual formulation of multi-task learning requires the set of tasks T to be known in advance in order to prevent catastrophic forgetting (French, 1999), making it unsuitable for applications in which the set of tasks is unknown or when tasks arrive in stream.", "Diff pruning formulates task-specific finetuning as learning a diff vector $\delta_\tau$ that is added to the pretrained model parameters $\theta$, which remain fixed.", "We first reparameterize the task-specific model parameters, $\theta_\tau = \theta + \delta_\tau$, which results in the following empirical risk minimization problem, $\min_{\delta_\tau} L(D_\tau, f_\tau, \theta + \delta_\tau) + \lambda R(\theta + \delta_\tau)$, 
where for brevity we define $L(D_\tau, f_\tau, \theta)$ as $L(D_\tau, f_\tau, \theta) = \frac{1}{N} \sum_{n=1}^{N} C(f_\tau(x^{(n)}; \theta), y^{(n)})$.", "This trivial reparameterization shows that the cost of storing the pretrained parameters is amortized across tasks, and the only marginal cost for new tasks is the diff vector.", "If we can regularize $\delta_\tau$ to be sparse such that $\|\delta_\tau\|_0 \ll \|\theta\|_0$, then this approach can become more parameter-efficient as [footnote 2:] An intriguing line of work suggests that large-scale language models can be used without finetuning for a variety of tasks if given the appropriate context (Radford et al., 2019; Brown et al., 2020).", "While interesting, these models generally underperform task-specific models and require billions of parameters, though recent work suggests that they can be made substantially smaller (Schick & Schutze, 2020).", "[footnote 3:] However, work on continual learning mitigates these issues to an extent (Shin et al., 2017; Lopez-Paz & Ranzato, 2017; Lee et al., 2017; Kirkpatrick et al., 2017).", "This regularizer is difficult to optimize as it is nondifferentiable.", "In order to approximate this $L_0$ objective, we follow an approach for gradient-based learning with $L_0$ sparsity using a relaxed mask vector (Louizos et al., 2018).", "This approach involves relaxing a binary vector into continuous space, and then multiplying it with a dense weight vector to determine how much of the weight vector is applied during training.", "After training, the mask is made deterministic, and a large portion of the diff vector is zero.", "To apply this method we first decompose $\delta_\tau$ into a binary mask vector multiplied with a dense vector, $\delta_\tau = z_\tau \odot w_\tau$, $z_\tau \in \{0,1\}^d$, $w_\tau \in \mathbb{R}^d$.", "We now lower bound the true objective and optimize an expectation with respect to $z_\tau$, whose distribution $p(z_\tau; \alpha_\tau)$ is initially Bernoulli with introduced parameters $\alpha_\tau$: $\min_{\alpha_\tau, w_\tau} \mathbb{E}_{z_\tau \sim p(z_\tau; \alpha_\tau)}[L(D_\tau, f_\tau, \theta + \delta_\tau) + \lambda \|\delta_\tau\|_0]$.", "This objective 
is still complicated by the discrete nature of $z_\tau$'s, but the expectation provides some guidance for empirically effective relaxations.", "We follow prior work (Louizos et al., 2018; Wang et al., 2019b) and relax $z_\tau$ into continuous space $[0,1]^d$ with a stretched Hard-Concrete distribution (Jang et al., 2017; Maddison et al., 2017), which allows for the use of pathwise gradient estimators.", "Specifically, $z_\tau$ is now defined to be a deterministic and (sub)differentiable function of a sample $u$ from a uniform distribution: $u \sim U(0,1)$, $s = \sigma(\log u - \log(1-u) + \alpha_\tau)$, $\bar{s} = s \times (r - l) + l$, $z_\tau = \min(1, \max(0, \bar{s}))$.", "Here $l < 0$ and $r > 1$ are two constants used to stretch $s$ into the interval $(l, r)^d$ before it is [footnote 4:] It is also possible to learn sparse diff vectors through other penalties such as the $L_1$-norm.", "We chose to work with the relaxed $L_0$-norm formulation as past work has shown that SGD-based optimization works well in this setting.", "clamped to $[0,1]^d$ with the $\min(1, \max(0, \cdot))$ operation.", "In this case we have a differentiable closed-form expression for the expected $L_0$-norm: $\mathbb{E}[\|\delta_\tau\|_0] = \sum_{i=1}^{d} \sigma\left(\alpha_{\tau,i} - \log\frac{-l}{r}\right)$.", "$\min_{\alpha_\tau, w_\tau} \mathbb{E}_{u \sim U[0,1]}[L(D_\tau, f_\tau, \theta + z_\tau \odot w_\tau)] + \lambda \sum_{i=1}^{d} \sigma\left(\alpha_{\tau,i} - \log\frac{-l}{r}\right)$,", "and we can now utilize pathwise gradient estimators to optimize the first term with respect to $\alpha_\tau$ since the expectation no longer depends on it.", "After training we obtain the final diff vector by sampling $u$ once to obtain $z_\tau$ (which is not necessarily a binary vector but has a significant number of dimensions equal to exactly zero due to the clamping function), then setting $\delta_\tau = z_\tau \odot w_\tau$.", "3.2 $L_0$-ball projection with magnitude pruning for sparsity control. Differentiable $L_0$ regularization allows us to achieve a high sparsity rate.", "However, it would be ideal to set an exact sparsity rate, especially considering applications which require parameter budgets.", "As the regularization 
coefficient $\lambda$ is a Lagrangian multiplier for the constraint $\mathbb{E}[\|\delta_\tau\|_0] < \eta$ for some $\eta$, this could be achieved in principle by searching over different values of $\lambda$.", "However we found it more efficient and empirically effective to achieve an exact sparsity rate by projecting onto a target $L_0$-ball after training.", "Specifically, we use magnitude pruning on the diff vector and target a sparsity rate of t% by only keeping the top $t\% \times d$ values in $\delta_\tau$.", "Note that unlike standard magnitude pruning, this is based on the magnitude of the diff vector values and not the model parameters.", "We found it important to further finetune with the nonzero masks fixed to maintain good performance, as is often the case [footnote 5:] To reduce notation clutter we subsume the parameters of the task-specific output layer, which is not pretrained, into $\delta_\tau$.", "We do not apply the $L_0$-norm penalty on these parameters during training.", "[footnote 6:] We found sampling once to work as well as other alternatives (e.g. based on multiple samples).", "[footnote 7:] Wang et al. (2019b) show that it also is possible to inject such a constraint softly into the training objective by regularizing the expected model size towards a certain rate.", "However, since the constraint is soft this approach also makes it difficult to target an exact sparsity rate.", "in magnitude pruning (Han et al., 2016).", "Since this type of parameter-efficiency through projection onto the $L_0$-ball can be applied without adaptive diff pruning, such an approach will serve as one of our baselines in the empirical study.", "3.3 Structured Diff Pruning. To allow diff pruning to adapt to the model architecture, we consider a structured extension which incorporates dependence between dimensions.", "We hypothesize that this approach can allow the model to learn to modify parameters in local regions, as opposed to treating each parameter independently.", "We modify the regularizer to first partition the parameter indices into G groups { g (1) , . . . 
, g ( G ) } where g(j) is a subset of parameter indices governed by group g(j).", "We then introduce a scalar $z_\tau^j$ (with the associated parameter $\alpha_\tau^j$) for each group $g(j)$, and decompose the task-specific parameter for index $i \in g(j)$ as $\delta_{\tau,i} = z_{\tau,i} \cdot z_\tau^j \cdot w_{\tau,i}$.", "The expected $L_0$-norm is then given by $\mathbb{E}[\|\delta_\tau\|_0] = \sum_{j=1}^{G} \sum_{i \in g(j)} \mathbb{E}[\mathbb{1}\{z_{\tau,i} \cdot z_\tau^j > 0\}] = \sum_{j=1}^{G} \sum_{i \in g(j)} \sigma\left(\alpha_{\tau,i} - \log\frac{-l}{r}\right) \cdot \sigma\left(\alpha_\tau^j - \log\frac{-l}{r}\right)$.", "4.2 Baselines. We compare both structured and non-structured variants of diff pruning against the following baselines: Full finetuning, which fully finetunes BERTLARGE as usual; Last layer finetuning, which only finetunes the penultimate layer (along with the final output layer); Adapters from Houlsby et al. (2019), which train task-specific bottleneck layers between each layer of a pretrained model, where parameter-efficiency can be controlled by varying the size of the bottleneck layers; and Non-adaptive diff pruning, which performs diff pruning just based on magnitude pruning (i.e., we obtain $\theta_\tau$ through usual finetuning, set $\delta_\tau = \theta_\tau - \theta$, and then apply magnitude pruning followed by additional finetuning on $\delta_\tau$).", "For diff pruning we set our target sparsity rate to 0.5% and investigate the effect of different target sparsity rates in section 6.1.", "We can train with gradient-based optimization as before.", "Parameters in a group are encouraged by the regularizer to be removed jointly.", "For evaluation we use the GLUE benchmark (Wang et al., 2019b) as well as the SQuAD extractive question answering dataset (Rajpurkar et al., 2016).", "Following Adapters (Houlsby et al., 2019), we test our approach on the following subset of the GLUE tasks: Multi-Genre Natural Language Inference (MNLI), where the goal is to predict whether the relationship between two sentences is entailment, contradiction, or neutral (we test on both MNLI m and MNLI mm which respectively test on 
matched/mismatched domains); Quora Question Pairs (QQP), a classification task to predict whether two questions are semantically equivalent; Question Natural Language Inference (QNLI), which", "8 Concretely, one can obtain δ through regular finetuning as the difference from the pretrained parameters, and then apply magnitude pruning followed by additional finetuning on δ.", "9 While groups can be defined in various ways, we found that defining groups based on each matrix/bias vector of the pretrained model was simple and worked well enough.", "must predict whether a sentence is a correct answer to the question; Stanford Sentiment Treebank (SST-2), a sentence classification task to predict the sentiment of movie reviews; Corpus of Linguistic Acceptability (CoLA), where the goal is to predict whether a sentence is linguistically acceptable or not; Semantic Textual Similarity Benchmark (STS-B), which must predict a similarity rating between two sentences; Microsoft Research Paraphrase Corpus (MRPC), where the goal is to predict whether two sentences are semantically equivalent; Recognizing Textual Entailment (RTE), which must predict whether a second sentence is entailed by the first.", "The benchmark uses Matthews correlation for CoLA, Spearman correlation for STS-B, F1 score for MRPC/QQP, and accuracy for MNLI/QNLI/SST-2/RTE.", "For the main experiments and analysis, we use the BERTLARGE model from Devlin et al. (2019) to compare against the adapter-based approach of Houlsby et al. (2019).", "Our implementation is based on the Hugging Face Transformer library (Wolf et al., 2019).", "Diff pruning introduces additional hyperparameters l, r (for stretching the Hard-Concrete distribution) and λ (for weighting the approximate L0-norm penalty).", "We found l = −1.5, r = 1.5, and λ = 1.25 × 10⁻⁷ to work well across all tasks.", "We also initialize the weight vector w to 0, and α to a positive vector (we use 5) to encourage z to be close to 1 at the start of training.", "10 Wu et al. (2020) observe that finetuning later layers generally performs better than finetuning earlier layers.", "Table 1 (columns: Total params, New params per task, QNLI, SST-2, MNLIm, MNLImm, CoLA, MRPC, STS-B, RTE, QQP, Avg): Full finetuning 9.00, 100%, 91.1, 94.9, 86.7, 85.9, 60.5, 89.3, 87.6, 70.1, 72.1, 80.9; Adapters (8-256) 1.32, 3.6%, 90.7, 94.0, 84.9, 85.1, 59.5, 89.5, 86.9, 71.5, 71.8, 80.4; Adapters (64) 1.19, 2.1%, 91.4, 94.2, 85.3, 84.6, 56.9, 89.6, 87.3, 68.6, 71.8, 79.8; Full finetuning 9.00, 100%, 93.4, 94.1, 86.7, 86.0, 59.6, 88.9, 86.6, 71.2, 71.7, 80.6; Last layer 1.34, 3.8%, 79.8, 91.6, 71.4, 72.9, 40.2, 80.1, 67.3, 58.6, 63.3, 68.2; Non-adap.", "Table 1 : GLUE benchmark test server results with BERTLARGE models.", "(Top)", "Results with Adapter bottleneck layers (brackets indicate the size of bottlenecks), taken from Houlsby et al. (2019).", "(Bottom)", "Results from this work.", "QNLI results are not directly comparable across the two works as the GLUE benchmark has updated the test set since then.", "To make our results comparable, the average column is calculated without QNLI.", "While we mainly experiment with BERT models to facilitate comparison against existing work, in preliminary experiments we found these hyperparameters to work for finetuning RoBERTa (Liu et al., 2019c) and XLNet (Yang et al., 2019) models as well.", "For all tasks we initially train for 3 epochs and perform a hyperparameter search over batch size {5, 8, 12, 16} and learning rate {1 × 10⁻⁵, 2 × 10⁻⁵, 5 × 10⁻⁵}.", "Finetuning with the fixed mask after projecting onto the L0-ball with magnitude pruning is done for 3 epochs with a learning rate of 5 × 10⁻⁵ for all datasets except for the MRPC/STS-B/RTE/SST-2 datasets, where we finetune for 5 epochs.", "The exact hyperparameters for each task are given in section A.1 of the appendix.", "Grouping for the structured version of diff pruning is based on the matrix/bias vectors
(i.e. parameters that belong to the same matrix or bias vector are assumed to be in the same group), which results in 393 groups.", "5 Results 5.1 Results on GLUE Our main results on the GLUE benchmark are shown in Table 1. Structured diff pruning can match the performance of a fully finetuned BERTLARGE model while only requiring 0.5% additional parameters per task.", "11 These values were found via a light hyperparameter search on the SST-2 validation set.", "12 However, we found the default settings used for regular finetuning as suggested in the original BERT paper to work well for most tasks.", "13 This definition of groups is implementation-specific since it depends on how one concatenates the input vector before each affine layer.", "Our grouping is based on Hugging Face's BERT implementation at commit 656e1386a296d696327a9db37de2ccccc79e2cc7 .", "We found this simple definition to work well compared to alternative definitions (e.g. based on individual neurons).", "Diff pruning without structured sparsity also performs well, though slightly worse than the structured approach.", "Non-adaptive diff pruning, which magnitude prunes the diff vector without learning the binary mask z, performs significantly worse, indicating the importance of learning the masking vector.", "Compared to Adapters, diff pruning obtains similar performance while requiring many fewer parameters per task, making it a potential alternative for parameter-efficient transfer learning.", "5.2 Results on SQuAD To demonstrate the effectiveness of our approach beyond the GLUE tasks, we additionally experiment on SQuAD (Rajpurkar et al., 2016), an extractive question answering dataset where the model has to select the answer span to a question given a Wikipedia paragraph.", "To make direct comparisons with Houlsby et al.
(2019), we run all experiments on SQuAD v1.1.", "For diff pruning, we use the same general hyperparameters as our full finetuning baseline (see section A.1).", "As shown in Figure 1 (right), diff pruning is able to achieve comparable or better performance with only 1.0% additional parameters.", "Interestingly, diff pruning measurably improves upon the full finetuning baseline while modifying fewer parameters, which indicates that diff pruning can have a useful regularization effect on top of parameter-efficiency.", "In Figure 1 (left), we plot results on the GLUE validation set averaged across all tasks at target sparsity rates of 0.1%, 0.25%, 0.5%, and 1.0% for the different baselines.", "14 Comparing storage costs is a bit more challenging as it is implementation-specific.", "Diff pruning incurs additional storage cost due to storing the nonzero positions of the diff vector.", "See section 6.6 for a storage comparison against Adapters assuming float32 for weights and int32 for positions.", "Figure 1 : (Left) Average performance on the GLUE validation set across different target sparsity rates for the different methods.", "(Right)", "Results with BERTLARGE on the SQuAD v1.1 validation set.", "Table 2 : Structured diff pruning results on the validation set with different target sparsity rates.", "Structured diff pruning consistently outperforms non-structured and non-adaptive variants across different sparsity rates.", "The advantage of adaptive methods becomes more pronounced at extreme sparsity rates.", "In Table 2, we report the breakdown of accuracy of structured diff pruning across different tasks and sparsity rates, where we observe that different tasks have different sensitivity to target sparsity rates.", "This suggests that we can obtain even greater parameter-efficiency through targeting task-specific sparsity rates in the diff vector.", "Structured diff pruning introduces an additional mask per group, which encourages pruning of
entire groups.", "This is less restrictive than traditional group sparsity techniques that have been used with L0-norm relaxations, which force all parameters in a group to share the same mask (Louizos et al., 2018; Wang et al., 2019b).", "However, we still expect entire groups to be pruned out more often, which might bias the learning process towards either eliminating groups completely or clustering nonzero diffs together.", "In Table 3, we indeed find that structured diff pruning leads to finetuned models that are much more likely to leave entire groups unchanged from their pretrained values (zero diffs).", "6.3 Task-specific Sparsity Different layers of pretrained models have been argued to encode different information (Liu et al., 2019a; Tenney et al., 2019).", "Given that each task will likely recruit different kinds of language phenomena embedded in the hidden layers, we hypothesize that diff pruning will modify different parts of the pretrained model through task-specific finetuning.", "Figure 2 shows the percentage of nonzero diff parameters attributable to the different layers for each task.", "We find that different tasks indeed modify different parts of the network, although there are some qualitative similarities between some tasks, for example between QNLI & QQP (both must encode questions), and MRPC & STS-B (both must predict similarity between sentences).", "The embedding layer is very sparsely modified for all tasks.", "While some of the variation in the sparsity distributions is due to simple randomness, we do observe some level of consistency over multiple runs of the same task, as shown in section A.2 of the appendix.", "The ability to modify different parts of the pretrained model for each task could explain the improved parameter-efficiency of our approach compared to Houlsby et al.
(2019)'s Adapters, which can only read/write to the pretrained model at certain points of the computational graph.", "15 To simulate this restricted setting, we tried applying diff pruning only on the fully-connected layers after the self-attention layers, and observed much worse performance.", "Table 3 : Percentage of groups where all of the parameters in the group are fully zero for structured vs. non-structured pruning at 0.5% target sparsity.", "We group based on each matrix/bias vector, resulting in 393 groups in total.", "Figure 2 : Percentage of modified parameters attributable to each layer for different tasks at 0.5% target sparsity.", "The layers are ordered from earlier to later (i.e. the embedding layer is shown at the top).", "The x-axis for each plot goes from 0% to 20%.", "This potentially suggests that Adapters with more fine-grained access into model internals (e.g. Adapters for key/value/query transformations) might result in even greater parameter-efficiency.", "While left as future work, we also note that diff pruning can be applied in conjunction with Adapters, which might further improve results.", "Applying magnitude pruning to project onto the L0-ball was crucial in achieving exact sparsity targets.", "As shown in Table 4, we observed little loss in performance through this approach.", "We reiterate that it was crucial to finetune with a fixed mask, even for the approach which does not apply magnitude pruning.", "6.5 Comparison against BERT compression Direct BERT compression methods also provide a straightforward approach to parameter-efficient transfer learning.", "Here we compare diff pruning against existing BERT compression methods, in particular DistilBERT (Sanh et al., 2019), MobileBERT (Sun et al., 2020b) and TinyBERT (Jiao et al., 2020).", "In these experiments we apply diff pruning on the smaller BERTBASE model as these works typically utilize BERTBASE as the baseline.", "As shown in Table 5, we observe that diff pruning is
more parameter-efficient when considering all GLUE tasks while maintaining better performance.", "Of course, BERT compression methods typically have faster inference time (e.g. TinyBERT4 is 9.4× faster than BERTBASE).", "However, we note that diff pruning can be applied on top of these methods, which may further improve parameter-efficiency while maintaining fast inference.", "16 Without fixed-mask finetuning, GLUE performance decreases from 84.9 to 81.4.", "Finally, Table 6 shows the actual memory requirements for diff pruning compared to Adapters for a Python implementation.", "While diff pruning requires storing positions in addition to the weights (unlike Adapters, which can just store the weights), diff pruning is still more storage-efficient due to the greater parameter-efficiency.", "For training, our approach requires more memory than usual finetuning due to additionally optimizing α and w.", "Since the majority of GPU memory is typically utilized by a minibatch's intermediate layers, this did not present a significant challenge for the pretrained models that we experimented with in this study.", "However, this could present an issue as model sizes get larger and larger.", "After training, storing the task-specific diff vector requires storing a compressed version with both the nonzero positions and weights, which incurs additional storage requirements.", "Finally, while training efficiency was not a primary concern of this work, diff pruning was also approximately 1.5× to 2× slower to train per minibatch than regular finetuning.", "Multi-task learning Multi-task learning (Caruana, 1997), broadly construed, aims to learn models and representations that can be utilized across a diverse range of tasks, and offers a natural approach", "Table 4 : (Top) Sparsity and performance without magnitude pruning on the validation set with structured diff pruning.", "These results also apply fixed-mask finetuning.", "(Bottom)", "Performance with 0.5% target sparsity and fixed-mask
finetuning.", "Table 5 : Comparison against existing BERT compression works on GLUE.", "Total params and New params per task columns use BERTBASE as the baseline, which has 109M parameters.", "For example, this means that MobileBERTTINY has 13.9% × 109M = 15.1M parameters per task.", "(Top)", "Results of different BERT variants, taken from table 1 of Jiao et al. (2020).", "(Bottom)", "Structured diff pruning results on BERTBASE.", "Table 6 : Comparison of file sizes per task based on a basic Python implementation assuming float32 for the weights and int32 for positions.", "to training parameter-efficient deep models.", "Several works have shown that a single BERT model can obtain good performance across multiple tasks when jointly trained (Liu et al., 2019b; Clark et al., 2019; Stickland & Murray, 2019).", "An alternative approach to multi-task learning that does not require access to all tasks during training involves training smaller task-specific layers that interact with a fixed pretrained model (Rebuffi et al., 2018; Zhang et al., 2020a).", "In particular, Adapters (Rebuffi et al., 2018), which learn to read and write to layers of a shared model, have been applied to obtain parameter-efficient BERT models (Houlsby et al., 2019; Pfeiffer et al., 2020a,b,c).", "In recent work, Li & Liang (2021) and Qin & Eisner (2021) explore the use of learned prompts on top of pretrained models to obtain task-specific models.", "Yet another line of work targets extreme parameter-efficiency through task-agnostic sentence representations that can be used without finetuning for downstream tasks (Le & Mikolov, 2014; Kiros et al., 2015; Wieting et al., 2016; Hill et al., 2016; Arora et al., 2017; Conneau et al., 2017; Cer et al., 2018; Zhang et al., 2018; Subramanian et al., 2018; Reimers & Gurevych, 2019; Zhang et al., 2020b).", "These feature-based transfer learning methods are, however, generally outperformed by fully finetuned models (Howard & Ruder, 2018).", "Model
compression There has been much recent work on compressing pretrained models trained with self-supervision (see (Ganesh et al., 2020) for a recent survey).", "A particularly promising line of work focuses on obtaining smaller pretrained models (for subsequent finetuning) through weight pruning (Gordon et al., 2020; Sajjad et al., 2020; Chen et al., 2020) and/or knowledge distillation (Sanh et al., 2019; Sun et al., 2019; Turc et al., 2019; Jiao et al., 2020; Sun et al., 2020b).", "It would be interesting to see whether our approach can be applied on top of these smaller pretrained models for even greater parameter-efficiency.", "Learning to mask Our work is closely related to the line of work on learning to mask parts of deep networks with differentiable relaxations of binary masks for model pruning and parameter sharing (Wang et al., 2019b; Zhao et al., 2020; Sanh et al., 2020; Radiya-Dixit & Wang, 2020; Mallya et al., 2018; Guo et al., 2019; Sun et al., 2020a; Cao et al., 2021).", "While these works also enable parameter-efficient transfer learning, they generally apply the masks directly on the pretrained parameters instead of on the difference vector as in the present work.", "Regularization towards pretrained models Finally, diff pruning is also related to works which regularize the learning process towards pre-trained/shared models for continual learning (Rusu et al., 2016; Kirkpatrick et al., 2017; Schwarz et al., 2018), domain adaptation (Wiese et al., 2017; Miceli Barone et al., 2017), and stable finetuning (Lee et al., 2020).", "These works typically do not utilize sparse regularizers and target a different goal than parameter-efficiency.", "We propose diff pruning as a simple approach for parameter-efficient transfer learning with pretrained models.", "Experiments on standard NLP benchmarks and models show that diff pruning can match the performance of fully finetuned baselines while requiring only a few additional parameters per task, and can sometimes have
a regularization effect and improve upon regular finetuning.", "We also propose a structured variant of diff pruning which provides further improvements.", "Avenues for future work include", "(i) injecting parameter-efficiency objectives directly into the pretraining process (to pretrain models that are better suited towards sparse transfer learning), and", "(ii) combining diff pruning with other techniques (e.g. adapters, model compression) to achieve even greater parameter-efficiency.", "The authors would like to thank the anonymous reviewers for their valuable feedback on the initial draft.", "AMR was supported by NSF 1704834 and NSF Career 2037519." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "method", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "result", "abstain", "method", "abstain", "abstain", "other", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", 
"other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "other", "other" ]
[ "Multilingual question answering tasks typically assume that answers exist in the same language as the question.", "Yet in practice, many languages face both information scarcity, where languages have few reference articles, and information asymmetry, where questions reference concepts from other cultures.", "This work extends open-retrieval question answering to a cross-lingual setting, enabling questions from one language to be answered via answer content from another language.", "We construct a large-scale dataset built on 40K information-seeking questions across 7 diverse non-English languages that TYDIQA could not find same-language answers for.", "Based on this dataset, we introduce a task framework, called Cross-lingual Open-Retrieval Question Answering (XORQA), that consists of three new tasks involving cross-lingual document retrieval from multilingual and English resources.", "We establish baselines with state-of-the-art machine translation systems and cross-lingual pretrained models.", "Experimental results suggest that XORQA is a challenging task that will facilitate the development of novel techniques for multilingual question answering.", "Our data and code are available at https://nlp.cs.washington.edu/xorqa/ .", "Information-seeking questions, questions from people who are actually looking for an answer, have been increasingly studied in question answering (QA) research.", "Fulfilling these information needs has led the research community to look further for answers: beyond paragraphs and articles toward performing open retrieval 1 on large-scale document collections (Chen and Yih, 2020).", "1 We use open retrieval instead of open domain to refer to models that can access answer context from large document collections.", "We avoid using open domain due to its double meaning as covering topics from many domains.", "Yet the bulk of this work has been exclusively on English.", "In this paper, we bring together for the first time
information-seeking questions, open-retrieval QA, and multilingual QA to create a multilingual open-retrieval QA dataset that enables cross-lingual answer retrieval.", "While multilingual open QA systems would benefit the many speakers of non-English languages, there are several pitfalls in designing such a dataset.", "First, a multilingual QA dataset should include questions from non-English native speakers to represent real-world applications.", "Questions in most recent multilingual QA datasets (Lewis et al., 2020; Artetxe et al., 2020; Longpre et al., 2020) are translated from English, which leads to English-centric questions such as questions about American sports, culture, and politics.", "Second, it is important to support retrieving answers in languages other than the original language due to the information scarcity of low-resource languages (Miniwatts Marketing Group, 2011).", "Moreover, questions strongly related to entities from other cultures are less likely to have answer content in the questioner's language due to cultural bias (information asymmetry, Callahan and Herring, 2011).", "For example, Fig.
1 shows that the Japanese Wikipedia article of an American politician, Ron Paul, does not have information about his college degree, perhaps because Japanese Wikipedia editors are less interested in the specific educational backgrounds of American politicians.", "In this paper, we introduce the task of cross-lingual open-retrieval question answering (XORQA), which aims at answering multilingual questions from non-English native speakers given multilingual resources.", "To support research in this area, we construct a dataset (called XOR-TYDIQA) of 40k annotated questions and answers across 7 typologically diverse languages.", "Questions in our dataset are inherited from TYDIQA (Clark et al., 2020); they are written by native speakers and are originally unanswerable due to information scarcity or asymmetry issues.", "XOR-TYDIQA is the first large-scale cross-lingual open-retrieval QA dataset that consists of information-seeking questions from native speakers and multilingual reference documents.", "XOR-TYDIQA is constructed with an annotation pipeline that allows for cross-lingual retrieval from large-scale Wikipedia corpora (2).", "Unanswerable questions in TYDIQA are first translated into English by professional translators.", "Then, annotators find answers to translated queries given English Wikipedia using our new model-in-the-loop annotation framework that reduces annotation errors.", "Finally, answers are verified and translated back to the target languages.", "Building on the dataset, we introduce three new tasks in the order of increasing complexity (3).", "In XOR-RETRIEVE, a system retrieves English Wikipedia paragraphs with sufficient information to answer the question posed in the target language.", "XOR-ENGLISHSPAN goes one step further and finds a minimal answer span from the retrieved English paragraphs.", "Finally, XOR-FULL expects a system to generate an answer end to end in the target language by consulting both English and the target language's
Wikipedia.", "XOR-FULL is our ultimate goal, and the first two tasks enable researchers to diagnose where their models fail and to develop systems with less engineering effort and fewer resources.", "We provide baselines that extend state-of-the-art open-retrieval QA systems (Asai et al., 2020; Karpukhin et al., 2020) to our multilingual retrieval setting.", "Our best baseline achieves an average of 18.7 F1 points on XOR-FULL.", "This result indicates that XOR-TYDIQA poses unique challenges to tackle toward building a real-world open-retrieval QA system for diverse languages.", "We expect that our dataset opens up new challenges to make progress in multilingual representation learning.", "Our XOR-TYDIQA dataset comprises questions inherited from TYDIQA (Clark et al., 2020) and answers augmented with our annotation process across 7 typologically diverse languages.", "We focus on cross-lingual retrieval from English Wikipedia because in our preliminary investigation we were able to find answers to a majority of the questions from resource-rich English Wikipedia, and native speakers with much annotation experience were readily available via crowdsourcing in English.", "Our annotation pipeline proceeds with four steps: 1) collection of questions from TYDIQA without a same-language answer that require cross-lingual reference to answer (2.1.1); 2) question translation from a target language to the pivot language of English, where the missing information may exist (2.1.2); 3) answer retrieval in the pivot language given a set of candidate documents (2.1.3); 4) answer verification and translation from the pivot language back to the original language (2.1.4).", "Fig.
2 shows an overview of the pipeline.", "Our questions are collected from unanswerable questions in TYDIQA.", "A question is unanswerable in TYDIQA if an annotator cannot select a passage answer (a paragraph in the article that contains an answer).", "We randomly sample 5,000 questions without any passage answer annotations (unanswerable questions) from the TYDIQA training data, and split them into training (4,500) and development (500) sets.", "We use the development data from TYDIQA as our test data, since TYDIQA's original test data is not publicly available.", "2 Furthermore, despite the benefits of hidden test sets, the resource-intensive nature of open-retrieval QA is not suitable for code-submission leaderboards.", "This further precluded the use of the original TYDIQA test sets.", "We choose 7 languages with varying amounts of Wikipedia data out of the 10 non-English languages based on the cost and availability of translators: 3 Arabic, Bengali, Finnish, Japanese, Korean, Russian and Telugu.", "We use a professional translation service, Gengo, 4 to translate all collected questions into English.", "Since named entities are crucial for QA, we instruct translators to carefully translate them by searching for common English translations from English Wikipedia or other external sources.", "We perform manual quality assessment by native speakers on 50 translation samples, finding that more than 95% are correct.", "Note that while these translations are a part of the annotation procedure (due to the inherently cross-lingual nature of this task), they are not provided to models during evaluation.", "We use Amazon Mechanical Turk to retrieve answers to translated English questions given English Wikipedia articles.", "Annotators are instructed to select passage answers (gold paragraphs) and minimal answer spans as in Clark et al.
(2020).", "To annotate answers to information-seeking queries, previous work first identifies relevant Wikipedia articles using Google Search, and then annotators attempt to find answers there.", "Asai and Choi (2020) show that in information-seeking QA datasets many questions were annotated as unanswerable due to two systematic errors: retrieval error, where the search engine failed to retrieve a relevant article, and answer annotation error, where the annotator overlooks answer content.", "Importantly, these two types of annotation errors present a tradeoff: if we retrieve many articles, retrieval errors will be reduced at the expense of answer annotation errors, because annotators have to find answer context among many candidate articles.", "3 The cost of translations depends on the number of available translators, and the estimated translation cost for the other three non-English languages was considerably higher.", "Collaborative model-in-the-loop.", "To find a middle ground in the tradeoff, we introduce a collaborative model-in-the-loop framework that uses Google Search and a state-of-the-art paragraph ranker.", "We first run Google Search to retrieve up to the top 10 Wikipedia articles, resulting in 387 paragraphs per question on average.", "We score them with Path Retriever (Asai et al., 2020) and present the five highest scoring paragraphs.", "Annotators are asked to skim these five paragraphs first; if they cannot find any answer content, they are asked to read the rest of the paragraphs, where the Wikipedia section headings guide their reading.", "To incentivize workers to find answers beyond the pre-selected ones, we carefully communicate with workers and send additional rewards to annotators who actively read the rest of the paragraphs and find answers for questions that other annotators may overlook.", "We found about 70% of the answers from the 5 paragraphs and 30% from the rest of the paragraphs in the top 10 articles.", "This means that while our
paragraph ranking was effective, the annotators did not fully rely on it, thereby mitigating the influence of the passage ranking model on the dataset.", "See Appendix B.1 for annotation interface details.", "Quality control for QA annotation.", "We first recruit MTurkers with a high approval rate (≥ 96%) located in English-speaking countries, and all workers first annotate the same qualification batch.", "We assess the quality of those submissions and select high-quality annotators.", "Consequently, 40 out of more than 200 workers were qualified and 24 workers annotated most of our data.", "More details are in Appendix B.3.", "We verify the annotated answers and translate those answers back to the target languages (cross-lingual data).", "Finally, we mix the annotated cross-lingual data with the same-language data from TYDIQA to reflect the actual question distributions from native speakers (in-language data).", "Answer verification.", "We trained undergraduate students who are native English speakers to verify the annotated paragraphs and short answers.", "Only 8% of the answers were marked as incorrect through the verification phase and were later corrected by our pool of high-quality crowdworkers who yielded less than 1% annotation error.", "Answer translation.", "We again use Gengo to translate answers from English back to the original languages.", "We give translators further instructions to normalize answers such that they are consistent with answers in TYDIQA.", "For example, some languages use their own unique set of numerals rather than Arabic numerals to represent numeric answers (e.g., Bengali numerals, Chinese numerals in Japanese text).", "The details of the answer translation process are described in Appendix B.4.", "Note that because of the cost of answer translations, we conduct this answer translation process for evaluation sets only.", "Dataset statistics.", "5 Table 1 shows the percentages of the questions annotated with short answers in the
original TYDIQA and our XOR-TYDIQA, and Table 2 shows statistics of XOR-TYDIQA.", "As seen in Table 1, cross-lingual retrieval significantly increases the answer coverage in all languages by up to 40% (Bengali), and consequently we found answers for more than 50% of the original information-seeking questions in 6 out of the 7 languages.", "(After our initial release in November 2020, we modified the XOR-TYDIQA data and released a new version as XOR-TYDIQA (v1.1).)", "This result confirms the effectiveness of searching multilingual document collections to improve the answer coverage.", "Detailed statistics of the numbers of long answers, short answers, and unanswered questions are in Appendix B.5.", "We also release the 30k manually translated questions for our training set, which could be used to train multilingual models or machine translation models.", "Qualitative examples.", "Table 3 illustrates that finding relevant articles in multilingual document collections is important for answering questions asked by users with diverse linguistic and cultural backgrounds.", "The first question is unanswerable in Korean Wikipedia, but there is a clear description in English Wikipedia of who was the prime minister of France at the time.", "The second example shows that English Wikipedia sometimes contains rich information about a target language-specific topic (e.g., the economy of Krasnodar, a city in Russia).", "These examples demonstrate the effectiveness of searching for answers in another language with more abundant knowledge sources.", "In the last question of Table 3, on the other hand, only the Wikipedia of the target language can provide the answer.", "XORQA allows for both retrieval paths.", "Comparison with other datasets.", "Table 4 compares XOR-TYDIQA and existing multilingual QA datasets.", "XOR-TYDIQA has three key properties that are distinct from these QA benchmarks.", "First, since all questions are inherited from TYDIQA, they are information-seeking 
questions written by native speakers.", "We found that in the Telugu data certain types of questions are very frequent (e.g., what is the pin code of X mandal?); those questions often ask for specific information about local administrative districts, and are often unanswerable because", "(a) they are typically not described in English Wikipedia and", "(b) the overall coverage of Telugu Wikipedia is quite low.", "These questions better reflect native speakers' interests and their own linguistic phenomena.", "This distinguishes XOR-TYDIQA from translation-based datasets such as MLQA (Lewis et al., 2020) and MKQA (Longpre et al., 2020).", "Second, our dataset requires cross-lingual retrieval, unlike other multilingual datasets such as TYDIQA or XQuAD (Artetxe et al., 2020), which focus on same-language QA.", "Lastly, questions in XOR-TYDIQA require open retrieval from Wikipedia, whereas MLQA-R and XQuAD-R (Roy et al., 2020) limit the search space by matching each question with a predetermined set of 21k/31k sentences.", "We introduce three new tasks (Fig. 
3): XOR-RETRIEVE, XOR-ENGLISHSPAN, and XOR-FULL with our newly collected XOR-TYDIQA dataset, and construct strong baselines for each task.", "XOR-FULL defines our goal of building a multilingual open-retrieval QA system that uses both cross-lingual and in-language questions from XOR-TYDIQA.", "To diagnose where models fail and to allow researchers to use the data with less coding effort or fewer computational resources, we also introduce two intermediate tasks that use only the cross-lingual data (Table 2).", "We denote the target language by L_i.", "We also denote the English Wikipedia collection by W_eng and the Wikipedia collection in each target language L_i by W_i.", "We experiment with baselines using black-box APIs as a reference, but we encourage the community to use white-box systems so that all experimental details can be understood.", "Nonetheless, we release the intermediate results from those external APIs to make our results reproducible.", "All of the white-box system results can be reproduced using our codebase.", "Task.", "Given a question in L_i and English Wikipedia W_eng, the task is to retrieve English paragraphs for the question.", "Finding evidence paragraphs in large-scale document collections like Wikipedia is a challenging task, especially when the query and documents are in different languages and systems cannot perform lexical matching.", "Evaluation.", "Different open-retrieval QA models use different units for retrieval.", "To make fair comparisons across various models, we measure recall by computing the fraction of questions for which the minimal answer is contained in the top n tokens selected.", "We evaluate with n = 2k, 5k: R@2kt and R@5kt (kilo-tokens).", "Translate baselines.", "We first translate queries into English and then retrieve paragraphs monolingually.", "For query translation, we train transformer machine translation (MT) models on publicly available corpora for easy replication.", "We 
also run Google's online machine translation service (GMT).", "This is not completely reproducible, as these systems are constantly updated; nor do we know what model and training data they use.", "We encourage the community to use open MT systems where system details are available.", "For retrieval, we explore term-based retrieval (BM25, Robertson and Zaragoza 2009), term-based retrieval followed by neural paragraph ranking (Path Retriever, Asai et al. 2020), and end-to-end neural retrieval (DPR, Karpukhin et al. 2020).", "Multilingual baselines.", "Alternatively, we can directly apply a multilingual pretrained model to retrieve paragraphs.", "We initialize and train a DPR encoder with multilingual BERT to enable multilingual document retrieval (Devlin et al., 2019).", "Task.", "Given a question in L_i and English Wikipedia W_eng, a system retrieves paragraphs from W_eng and extracts an answer.", "This task is equivalent to existing open-retrieval QA tasks (Chen et al., 2017), except that the query is not in English.", "This task involves challenging cross-lingual retrieval and question answering on the L_i query and English evidence paragraphs.", "We use a machine reading model to find a minimal span that answers the question given the paragraphs selected in the previous XOR-RETRIEVE step.", "In particular, for the translate baselines, we use the same approach as state-of-the-art models (Asai et al., 2020; Karpukhin et al., 2020) that jointly predict a span and a relevance score of each paragraph to the question.", "For the multilingual baseline, where queries are not automatically translated during evaluation, we build a reader model with multilingual BERT.", "Task.", "Given a question in target language L_i and Wikipedia in both English and L_i (W_eng and W_i), a system is required to generate an answer in L_i.", "In this task, a system does not know a priori in which language it can find the information that the user is seeking.", "Note that the XOR-FULL evaluation data includes both 
cross-lingual and in-language data, while XOR-RETRIEVE and XOR-ENGLISHSPAN only use cross-lingual data during evaluation.", "Evaluation.", "Some answers in XOR-FULL are translated from English, so the same spans may not exist in the target language's Wikipedia.", "For this reason, we use token-level BLEU scores (Papineni et al., 2002) over a ground-truth token set in addition to F1 and EM.", "The same tokenizer is applied to ground-truth and predicted answers to compute token-level F1 and BLEU.", "Baselines.", "Unlike the previous two tasks, evidence paragraphs can be found both in the target language and in English, and a system has to output final answers based on the most plausible paragraphs.", "In this work, we introduce a simple multilingual baseline that first looks for answers in the target language and then in English if no answers are found in the target language.", "(We use the Moses tokenizer (Koehn et al., 2007) for all languages, except that we apply MeCab (Kudo, 2006) to Japanese.)", "Specifically, we apply monolingual retrieval (i.e., BM25, Google Custom Search) for W_i and a multilingual machine reading model based on XLM-RoBERTa (Conneau et al., 2020) to find in-language answers in the target language (monolingual model; the bottom half of Fig. 3).", "If no answers are found by the monolingual model, we apply an XOR-ENGLISHSPAN baseline and translate English answers into the target language (the top half of Fig. 
3).", "We present results from the baselines discussed above.", "We find that the three XORQA tasks present challenges even for strong models.", "For training, we first finetune the retrieval and machine reading models on the Natural Questions data (Kwiatkowski et al., 2019) and then further finetune on our XOR-TYDIQA data.", "For the BM25 retrieval baseline, we use ElasticSearch (https://www.elastic.co/jp/) to store and search documents using BM25 similarities.", "For both Path Retriever and DPR, we run the official open-source code.", "For our MT systems, we train base-sized (large for Russian) autoregressive transformers (Vaswani et al., 2017) on parallel corpora from OPUS (Tiedemann and Nygaard, 2004), MultiUN (Ziemski et al., 2016), or WMT19 (Barrault et al., 2019).", "All data are encoded into subwords by BPE (Sennrich et al., 2016) or SentencePiece (Kudo and Richardson, 2018).", "We use the fairseq library (Ott et al., 2019).", "Additional experimental details and full lists of hyperparameters are available in Appendix C.", "We evaluate only questions that have answers and do not give credit for predicting no answer, as in prior open-retrieval work (Lee et al., 2019).", "For XOR-RETRIEVE and XOR-ENGLISHSPAN, we use cross-lingual data only, and both cross-lingual and in-language data for XOR-FULL.", "Table 5 shows the R@5kt (as defined in Section 3.1) for different retrieval and query translation systems.", "We also report the performance with the human English translations of the questions used during the dataset collection.", "These human translations serve as an upper bound for the translate baselines.", "The best R@5kt macro-averaged over the 7 languages comes from running DPR on human translations: 72.1.", "Machine translation systems achieve averages of 67.2 (GMT) and 50.0 (our MT), again with DPR.", "The discrepancy between human and machine translation suggests that even state-of-the-art translation systems struggle to translate questions precisely enough to retrieve an evidence 
paragraph.", "Although the difference between GMT and our MT systems shows the effectiveness of industrial MT systems (large parallel data, model architecture, etc.), there remains a substantial performance gap from human translation.", "The translate baselines outperform the multilingual approach apart from Telugu, where our MT suffers from small parallel data (114k sentences), and as a result the multilingual approach performs better.", "BM25 substantially underperforms the other two models across the board.", "DPR generally achieves similar performance to, if not better than, Path Retriever, despite the fact that Path Retriever was used in our annotation (Section 2.1.3).", "As we found that these patterns persisted in all the following experiments, we report only results with DPR.", "Table 6 shows the performance of the baseline models in XOR-ENGLISHSPAN.", "The average macro F1 score with queries translated by human translators is 38.2, substantially higher than that of MT-based models: 32.9 and 20.5 F1 points for GMT and our MT, respectively.", "This suggests that errors in automatic query translation affect later layers in the pipeline.", "The multilingual approach consistently underperforms translation-based methods, similarly to XOR-RETRIEVE.", "As in XOR-RETRIEVE,", "Telugu was an exception.", "The multilingual baseline significantly outperforms the translation-based approach with our MT system (14.4 vs. 
3.6 F1 points).", "In languages with limited parallel data for MT training, query translation errors propagate to and directly impact downstream QA tasks, and machine translation-based approaches may perform poorly.", "This encourages the research community to explore multilingual pretrained models to build robust multilingual open-retrieval QA systems for low-resource languages.", "Similar to the original TYDIQA dataset, the performance on XOR-ENGLISHSPAN varies across languages, which can be partially explained by the differing sets of questions (Clark et al., 2020).", "The best baseline achieves 39.5 F1 points in Arabic compared to 23.5 in Japanese, which may come from differences in question difficulty as well as how the models are trained for each language.", "Table 7 presents results on the XOR-FULL task.", "The first pipeline, which uses GMT, Google Search (GS), and DPR, yields the best average performance: 18.7 F1, 12.1 EM, and 16.8 BLEU points.", "This indicates that systems like GMT and GS, which are typically trained on large data, are effective.", "Yet, we encourage the community to experiment on top of open systems such that all experimental details can be fully reported and understood.", "Replacing GMT with our MT (second row) results in a large performance drop in Bengali (6.6 vs. 19.0 F1 points) and Telugu (1.7 vs. 13.6).", "Further replacing GS with BM25 retrieval in the target languages (third row) causes a large performance drop in all languages (e.g., 9.7 vs. 16.4 in Korean).", "Consistent with the previous tasks, the multilingual approach shown in the fourth row underperforms the translation-based counterpart (15.7 vs. 
18.7 F1 points on average).", "Similar baselines perform considerably better on prior open-retrieval QA datasets, such as MKQA (30 EM points, Longpre et al., 2020) and NQ questions (40 F1, Karpukhin et al., 2020).", "This gap illustrates the multidimensional challenge of XOR-TYDIQA.", "Effects of translation performance on overall QA results.", "Table 8 compares the query translation BLEU scores and the final QA F1 performance of the translation-based baseline with three different MT systems in XOR-ENGLISHSPAN: GMT, our MT, and Helsinki (Tiedemann and Thottingal, 2020).", "GMT significantly outperforms the other two baselines, demonstrating that its training setup may yield large improvements in these languages; similarly, in cases where additional parallel training data is not available, multilingual models may remain strong modeling tools.", "On the other hand, it is noteworthy that high BLEU scores do not always lead to better QA performance.", "In Bengali and Finnish, while Helsinki achieves a considerably better BLEU score than our MT (33.0 vs. 30.8 in Bengali and 29.8 vs. 
27.4 in Finnish), our MT is 3.9 and 1.3 F1 points better in downstream XOR-ENGLISHSPAN, respectively.", "See Appendix D.3 for an example of translation errors resulting in QA errors.", "These results suggest that the BLEU score is not always indicative of downstream performance, and that evaluating MT performance in the context of XORQA would be important for improving multilingual QA systems.", "Single-language Wikipedia ablations in XOR-FULL.", "To assess our models' ability to benefit from multilingual collections, we try restricting the retrieval target to a single language's Wikipedia: English (W_eng) only or the target language (W_i) only.", "In the W_eng-only setting, the best system, which applies GMT and DPR, underperforms the best pipeline that uses both W_i and W_eng in all languages except Finnish and Japanese.", "Similarly, the W_i-only setting generally underperforms the best pipeline that uses both collections.", "These results illustrate the importance of searching multilingual collections.", "See Table 15 for the full results.", "Multilingual QA. Much recent effort has been made to create non-English QA datasets", "to overcome the data scarcity in non-English languages.", "In addition to the datasets we already discussed in Section 2.2, several other non-English reading comprehension datasets have been created (Asai et al., 2018; Lim et al., 2019; Mozannar et al., 2019; d'Hoffschmidt et al., 2020).", "Liu et al. (2019) developed a template-based cloze task, leading to data distributions different from realistic questions, with a great degree of lexical overlap between questions and reference paragraphs (Lee et al., 2019).", "More recently, Hardalov et al. 
(2020) introduced EXAMS, a multilingual multiple-choice reading comprehension dataset from school exams.", "Our XOR-TYDIQA is also closely related to QA@CLEF 2003-2008 (Magnini et al., 2003, 2004; Vallin et al., 2005; Magnini et al., 2006; Giampiccolo et al., 2007; Forner et al., 2008); both QA@CLEF and XOR-TYDIQA attempt to develop and evaluate multilingual QA systems.", "Nevertheless, there are three crucial differences.", "First, our XOR-TYDIQA has the large number of questions required for training current state-of-the-art QA models like DPR, while QA@CLEF has only 200 evaluation questions for each language and no training data (Forner et al., 2010).", "Second, the languages tested in QA@CLEF are all European languages, with the one exception of Indonesian; XOR-TYDIQA includes typologically diverse languages.", "Lastly, the task setup of QA@CLEF 2003-2008 is either monolingual (questions and documents are written in the same non-English language) or cross-lingual (the source and target languages are pre-specified) (Forner et al., 2010).", "In XORQA, questions are asked in a target language, but a system does not know in which language it can find an answer in a non-parallel Wikipedia collection.", "These differences from the QA@CLEF tasks better simulate real-world scenarios and introduce new challenges that have yet to be extensively studied.", "Cross-lingual Information Retrieval. Cross-lingual Information Retrieval (CLIR) is the task of retrieving relevant documents when the document collection is in a different language from the query language (Hull and Grefenstette, 1996).", "The retrieval component in XORQA is closely related to CLIR, but differs in several critical ways.", "First, since the end goal of XORQA is QA, XORQA queries always take the form of questions rather than search keywords.", "Further, while CLIR typically retrieves documents from a single (low-resource) language (Zhang et al., 2019), XORQA considers documents from both English and the query 
language.", "In many applications, we do not know a priori in which language we can find target information.", "Lastly, our document collection is orders of magnitude bigger than typical CLIR benchmarks (Sasaki et al., 2018; Zhang et al., 2019).", "We presented the task of XORQA, in which a system retrieves and reads documents across languages to answer non-English information-seeking questions.", "We introduced a new large-scale XORQA dataset, XOR-TYDIQA, with 40k newly annotated open-retrieval questions that cover seven typologically diverse languages.", "Our experiments showed that XOR-TYDIQA is a challenging benchmark that can benefit from further effort in both QA and multilinguality communities.", "This research was supported by gifts from Google, the Allen Distinguished Investigator Award, the Sloan Fellowship, and the Nakajima Foundation Fellowship.", "We thank Sewon Min, Kristina Toutanova, David Wadden, the members of the UW NLP group, and the anonymous reviewers for their insightful feedback on this paper, Nancy Li, Xun Cao, Hitesh Boinpally, Samek Mulepati, Casey Zhao, Vitaly Nikolaev, Soumyadip Sengupta, Bindita Chaudhuri, and Aditya Kusupati for their help on our annotations and dataset proofing, and Nelson Liu and Pradeep Dasigi for their suggestions on the annotation interface and Amazon Mechanical Turk crowdsourcing.", "Were workers told what the dataset would be used for and did they consent?", "Crowdworkers consented to have their responses used in this way through the Amazon Mechanical Turk Participation Agreement.", "If it relates to people, could this dataset expose people to harm or legal action?", "Our dataset can include incorrect information to the extent that Wikipedia can have wrong information about people.", "Nonetheless, we performed extensive quality control and answer verification to minimize the risk of harming people.", "If it relates to people, does it unfairly advantage or disadvantage a particular social group?", "One 
fundamental problem with the existing question answering benchmarks is that most of their questions are written by native English speakers and overly represent English-centric topics, such as American politics, sports, and culture.", "As such, models trained and developed on those datasets are likely to fail to serve people with diverse linguistic and cultural backgrounds.", "XOR-TYDIQA remedies this long-standing problem by annotating questions from native speakers of diverse languages.", "Thus, we encourage researchers and developers to benchmark on XOR-TYDIQA to mitigate the potential bias and unfairness of QA systems.", "We acknowledge, however, that this dataset still covers a very limited subset of the world's languages.", "We release a datasheet (Gebru et al., 2018) for our dataset to further document ethical implications.", "The datasheet is available at https://nlp.cs.washington.edu/xorqa/XORQA_site/xorqa_datasheet.pdf." ]
[ "abstain", "abstain", "objective", "result", "objective", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "objective", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "objective", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other" ]
[ "We introduce VoxPopuli, a large-scale multilingual corpus providing 400K hours of unlabeled speech data in 23 languages.", "It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning.", "VoxPopuli also contains 1.8K hours of transcribed speeches in 15 languages and their aligned oral interpretations into 15 target languages totaling 17.3K hours.", "We provide speech recognition (ASR) baselines and validate the versatility of VoxPopuli unlabeled data in semi-supervised ASR and speech-to-text translation under challenging out-of-domain settings.", "The corpus is available at", "https://github.com/facebookresearch/voxpopuli.", "Recent progress in speech-to-text tasks such as automatic speech recognition (ASR) and speech translation (ST) has been achieved by the development and application of unsupervised speech pre-training methods (Oord et al., 2018; Schneider et al., 2019; Baevski et al., 2020; Conneau et al., 2020; Wu et al., 2020; Nguyen et al., 2020), with semi-supervised learning (self-training) (Kahn et al., 2020a; Pino et al., 2020; Zhang et al., 2020b; Xu et al., 2020) or a combination of both methods (Xu et al., 2020).", "This line of research leverages large amounts of unlabeled English speech data (Kahn et al., 2020b) that enable improvements in English ASR or out-of-English ST. Large amounts of multilingual audio data are needed in order to achieve similar progress for multilingual ASR and ST. 
Similarly, most ASR and ST research is currently conducted on the LibriSpeech (Panayotov et al., 2015) and MuST-C benchmarks (Cattoni et al., 2020; Di Gangi et al., 2019).", "As a result, the research community has been mostly focused on speech-to-text tasks with English as input.", "(* Equal contribution.)", "While multilingual ASR (Pratap et al., 2020; Ardila et al., 2020) and ST datasets (Wang et al., 2020b; Iranzo-Sanchez et al., 2020) have recently been made available, the amount of data available quickly drops beyond the top few high-resource languages.", "Simultaneous speech translation (interpretation) has witnessed a resurgence with the application of end-to-end encoder-decoder models.", "Most of the recent studies focus on text output and leverage ST corpora that are translated offline in the written form.", "There are differences, however, between translationese and interpretese (Sridhar et al., 2013; He et al., 2016), where interpreters develop a variety of strategies to improve simultaneity.", "Models trained on translation corpora are unlikely to learn these interpretation skills and achieve better quality-latency trade-offs.", "Finally, there has been little research (Jia et al., 2019; Tjandra et al., 2019; Zhang et al., 2020a) into speech output due to the lack of open data.", "Existing corpora (Tohyama et al., 2004; Bendazzoli et al., 2005) are either of limited size or no longer publicly available.", "In this paper, we introduce VoxPopuli, a large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation.", "It contains the largest open unlabeled speech data to date, totaling 400K hours in 23 languages: Bulgarian (Bg), Czech (Cs), Croatian (Hr), Danish (Da), Dutch (Nl), English (En), Estonian (Et), Finnish (Fi), French (Fr), German (De), Greek (El), Hungarian (Hu), Italian (It), Latvian (Lv), Lithuanian (Lt), Maltese (Mt), Polish (Pl), Portuguese (Pt), Romanian (Ro), Slovak (Sk), Slovene 
(Sl), Spanish (Es) and Swedish (Sv).", "VoxPopuli also provides a total of 1.8K hours of transcribed speech in 16 languages (En, De, Fr, Es, Pl, It, Ro, Hu, Cs, Nl, Fi, Hr, Sk, Sl, Et and Lt) and their aligned oral interpretations into 15 target languages (En, De, Fr, Es, Pl, It, Ro, Hu, Cs, Nl, Fi, Sk, Sl, Lt and Da) totaling 17.3K hours.", "We describe our corpus creation methodology in Section 2 and analyze the created corpus in Section 3.", "We provide ASR baselines and demonstrate the value of our multilingual unlabeled data as well as weakly labeled data on several non-English languages in Section 4.", "Data Acquisition. VoxPopuli sources data from 2009-2020 European Parliament (EP) event recordings, which include plenary sessions, committee meetings and other events.", "In each event, speakers give speeches in turn in different European Union (EU) languages.", "These speeches are partially transcribed (for plenary sessions only) and interpreted into 24 EU languages.", "The interpretations are oral only, without any transcription.", "In the following, we refer to the original speech as source speech and to the interpreted one as target speech.", "We download audio clips for both source and target speeches from the official website.", "We also crawl the transcript, speaker information and starting/ending timestamps for each speech (for plenary sessions only) from that source, with which we later align the speech to its transcript and interpretation utterance by utterance.", "The acquired raw data suffers from missing audio, incomplete transcripts and inaccurate timestamps.", "We build data processing pipelines to segment speech paragraphs into utterances and filter out the ones with erroneous transcriptions.", "We construct the VoxPopuli unlabeled set from all source and target speeches in 23 EU languages (excluding Irish because of very limited data availability).", "We segment full-event audios into short clips of 15-30 seconds 
using an energy-based voice activity detection (VAD) algorithm.", "Each audio clip has a maximum of 2 seconds of continuous silence, and silent clips are discarded.", "Around 16% of the data is dropped after silence removal, which leads to a final overall duration of around 400K hours.", "The VoxPopuli transcribed set comes from aligning the full-event source speech audio with the transcripts for plenary sessions.", "Official timestamps are available for locating speeches by speaker in the full session, but they are frequently inaccurate, resulting in truncation of the speech or mixture of fragments from the preceding or the succeeding speeches.", "To calibrate the original timestamps, we perform speaker diarization (SD) on the full-session audio using pyannote.audio (Bredin et al., 2020) and adopt the nearest SD timestamps (by L1 distance to the original ones) instead for segmentation.", "Full-session audios are segmented into speech paragraphs by speaker, each of which has a transcript available.", "The speech paragraphs have an average duration of 197 seconds, which leads to significant memory usage and prevents efficient parallelism (batching) during model training.", "We hence further segment these paragraphs into utterances with a maximum duration of 20 seconds.", "(Table 2: Duration statistics (hours) of aligned speech-to-speech data in VoxPopuli between 15 source languages and 15 target languages.)", "We leverage speech recognition (ASR) systems to force-align speech paragraphs to the given transcripts and cut the utterances at ending punctuation or at the longest silence inside the sentence if it exceeds 20 seconds.", "The ASR systems are TDS models (Hannun et al., 2019) trained with the ASG criterion (Collobert et al., 2016) on audio tracks from in-house de-identified video data.", "The resulting utterance segments may have incorrect transcriptions due to incomplete raw transcripts or inaccurate ASR force-alignment.", "We use the predictions from the same ASR systems as references and filter the candidate segments by a maximum threshold of 20% character error rate (CER).", "We split the filtered utterances into train, development and test sets with disjoint speakers and a target duration ratio of 18:1:1.", "To determine the assignments, we group utterances by speaker and sort them by overall duration in ascending order.", "We assign the sorted groups to the test set in order until it reaches 20 speakers or the target duration (whichever comes later).", "The same process is repeated on the remaining utterance groups to construct the development set (with a minimum of 10 speakers instead).", "Finally, the rest of the utterances make up the train set.", "This approach ensures higher speaker diversity in the test and development sets.", "Even though every source speech is associated with corresponding simultaneous interpretations in target languages, considerable preprocessing and filtering is necessary to make this dataset usable.", "Our strategy is to align source and target at the sentence level using ASR.", "We 
first compare the spectrogram of the source and the target speech to remove the identical parts and segment the target speech into paragraphs.", "These identical segments are due either to the short delay between the times the source speaker and the interpreter started, or to the fact that the source language is the same as the target one, and thus no interpretation is needed.", "For long target paragraphs, we further segment them by silence into audio clips of at most 15 minutes.", "We use the same ASR model described in Section 2.2.2 and a language model (Section 2.2.4) to decode the segmented target audio.", "The decoded text is also force-aligned with the target audio, so that we have the timestamps of every decoded word.", "For each source segment produced in Section 2.2.2, we locate all decoded words that are within a five-second window of its start and end.", "A set of candidate target segments can be generated from all possible combinations of the starting and ending decoded words.", "We compute the cosine similarity between the LASER representation (Artetxe and Schwenk, 2019) of the source text and each decoded text in the candidate set to find the best target segment, i.e.
the one with the highest score.", "We first carry out this process for each source segment, and then fine-tune the boundaries of overlapping target segments for consecutive source segments.", "Finally, a threshold of 0.75 is applied on the similarity score to filter out low-quality alignments.", "In addition to ASR output, we also collect human transcription on 400 hours of English target speech.", "The human annotators were asked to provide timestamps for each word while transcribing, and thus we can apply the same alignment process described above on the human transcription and generate a set of ground truth speech-to-speech alignment data.", "As a by-product of this alignment process, source text and target speech are aligned, which provides speech-to-text translation data in the reversed direction.", "This data is weakly labeled: the label (text) may contain more information than the speech data (interpretation is likely to drop unimportant details) and hence is not exact.", "However, it is still useful for ST model training as an addition to labeled data.", "To train language models (LM) for ASR decoding, we combine the VoxPopuli transcription in the training set with the EuroParl corpus (Koehn, 2005), which is from the proceedings of the European Parliament from 1996 to 2011.", "To process the EuroParl data, we first apply the sentence segmentation tool provided with the corpus.", "We remove all text in parentheses, replace hyphens and slashes with spaces, and remove all other punctuation except apostrophes.", "All digits are converted into words, and all text is normalized to lowercase.", "Table 1 shows the statistics of the LM data.", "Unlabeled speech As we can see from Table 1, VoxPopuli has a total of 400K hours of unlabeled data well-distributed across 23 EU languages, resulting in 8K-24K hours of data for each language.", "This ensures adequate data on languages with
lower ASR resources, which are likely to benefit more from semi-supervised learning.", "It also facilitates multilingual model training since there is not much data imbalance and little need for tuning the data sampling strategy.", "Transcribed speech The VoxPopuli transcribed data contains 16 languages totaling 1.8K hours and 4.3K speakers, whose detailed statistics can be found in Table 1, including duration (hours) by language, number of speakers, percentage of female speakers and number of tokens.", "The data distribution is imbalanced and reflects the natural distribution of the number of native speakers.", "The remaining 7 languages (Pt, Bg, El, Lv, Mt, Sv and Da) are not covered due to either limited data volume or the availability of processing pipelines.", "Speech-to-speech alignment The statistics of the speech-to-speech alignment between all source languages and 15 target languages are shown in Table 2.", "Compared with the total amount of data available for each source language (Transcribed hours in Table 1), we obtain target alignments for more than 70% of the source sentences in En, De, Fr, Es and It, more than 50% for Pl, Ro, Cs, Nl and Hr, while the rest have at least 40% of source segments aligned.", "To examine the quality of our ASR system, we align the ASR output with the human transcription we collect on English target speech and see a word error rate (WER) of 31.7.", "With the human transcription, we can produce ground truth speech-to-speech alignment data that is 1.1 times larger than the alignment data created using ASR output, indicating that around 12% of the low-quality alignments are filtered due to ASR errors.", "If we compare the ASR-based and the ground truth alignment data, there is on average a 0.75-second shift in the target segment boundaries.", "Interpretese vs. translationese We exemplify the differences between simultaneous oral interpretation and offline written translation using VoxPopuli in Table 3.", "The latter is verbatim and exact compared to the original speech, while the former tends to be more general and summarizing, with unimportant details dropped.", "Human interpreters regularly apply these tactics to make better quality-latency trade-offs.", "Speech-to-speech translation models may benefit from these tactics if they are trained on the interpretation data that VoxPopuli provides.", "We provide VoxPopuli ASR baselines and validate the versatility of VoxPopuli unlabeled data in unsupervised representation learning and semi-supervised learning for ASR as well as ST. We also evaluate the quality of the speech-to-speech alignment indirectly via the weakly labeled ST data it produces.", "For representation learning, we perform speaker diarization before VAD-based segmentation so that each utterance contains exactly one speaker.", "We augment the data with time dropout, pitch modification and reverberation (Kharitonov et al., 2020) during model training.", "For non-wav2vec models, we extract 80-dimensional log-mel filterbank speech features with 25ms window size and 10ms shift.", "We apply per-utterance CMVN (cepstral mean and variance normalization) to the extracted features.", "For GPU memory efficiency, we remove training samples that have more than 60 seconds of speech or more than 1024 characters.", "We train wav2vec 2.0 (Baevski et al., 2020) models with the original hyper-parameter settings using fairseq (Ott et al., 2019), except for Table 7 where we use wav2letter (Pratap et al., 2018) and follow Talnikar et al.
(2020) to do finetuning using both the supervised CTC (Graves et al., 2006) loss and the unsupervised wav2vec 2.0 loss.", "The largest model (VP-100K) takes 10 days on 128 V100 GPUs for 1M updates.", "For non-wav2vec models, we train Transformer (Vaswani et al., 2017) with the cross-entropy criterion using fairseq S2T (Wang et al., 2020a).", "For Section 4.2 and Section 4.4.1, we use phoneme vocabularies for models that we evaluate with PER (phone error rate) and character vocabularies for the others.", "For Section 4.4.2, we use Unigram (Kudo and Richardson, 2018) vocabularies with 2K subwords for all models.", "To improve ST model training, we pre-train the encoder on the LibriSpeech (Panayotov et al., 2015) ASR task.", "We use the best checkpoint by validation loss for evaluation, except for Section 4.4.2 where we average the 10 best checkpoints.", "We build n-gram language models for decoding (when specified) using KenLM (Heafield, 2011).", "We provide monolingual Transformer baselines for the 14 languages that have more than 10 hours of transcribed data (see Table 1).", "Both development and test WER are reported in Table 4.", "We see that several low-resource languages (Fi, It, Hr, Sk and Sl) suffer from high recognition errors (> 40% WER) due to the lack of training data.", "Even the highest-resource one (En) has a high WER of around 30%.", "We follow the setting in Rivière et al. (2020) to evaluate unsupervised speech representations by phoneme discriminability on 3 languages (English, French and Mandarin), and report the ABX discriminability score (Schatz et al., 2013) on the 10s test set from ZeroSpeech 2017 (Dunbar et al., 2017).", "Standard deviation (Std.)
of the scores across the 3 languages is also reported as a measure of the generality of the representations.", "As previous studies focus on monolingual representations, we explore multilingual representations and examine their generality across languages.", "We train CPC-based models (Riviere and Dupoux, 2020) on 500-hour English and 500-hour French unlabeled data from VoxPopuli, respectively.", "We combine English and French data with 50% sampling (so that the total duration remains the same) for the multilingual setting.", "We observe from Table 5 that the multilingual model (En+Fr-500) performs comparably to the monolingual ones (En-500 and Fr-500) on their seen languages and performs better on the unseen language (Zh).", "Its scores vary less across languages (lower Std.) compared to En-500.", "The variance of the scores is comparable to Fr-500 while the average is lower.", "Table 8: ST and ASR using VoxPopuli data for self-training or weak supervision.", "We conclude that multilingual representations generalize better across languages and are more robust on unseen languages.", "For quick exploration, we leverage only part of the VoxPopuli unlabeled data and leave the validation on more data to future work.", "We explore two semi-supervised learning settings for the application of VoxPopuli unlabeled data: unsupervised pre-training followed by supervised fine-tuning for ASR, and self-training for ASR as well as
ST.", "Self-supervised (unsupervised) pre-training such as wav2vec 2.0 (Baevski et al., 2020) substantially reduces the need for labeled data in ASR.", "Furthermore, multilingual pre-training (Conneau et al., 2020) allows cross-lingual transfer, which brings extra gains especially to low-resource languages.", "Pre-training wav2vec 2.0 models is, however, resource-intensive, and hence re-training models for each task with different domains is impractical.", "With the large-scale multilingual data in VoxPopuli, we explore whether scaling multilingual pretraining can take us towards the one-model-fits-all paradigm by alleviating the impacts of domain or language mismatch between pre-training and finetuning.", "We train wav2vec 2.0 models (Base, 95M parameters, unless specified otherwise) on 10K-hour, 50K-hour and 100K-hour VoxPopuli data in 23 languages (denoted as VP-10K, VP-50K and VP-100K, respectively).", "We also train models with 4.5K-hour monolingual data (denoted as VP-Mono-5K) for comparison.", "For quick verification, we use only part of the VoxPopuli unlabeled data for pre-training.", "We leave training the models on more data to future work.", "In-domain pre-training We examine the conventional in-domain pre-training setting on the VoxPopuli ASR benchmark.", "We evaluate the VP-10K model, where the pre-training data is filtered so that it has no overlaps with the transcribed development and test sets.", "From Table 4, we see that pre-training using unlabeled data brings significant gains to all the languages (average 59% test WER reduction).", "The gains are most significant on the low-resource languages, where the improvements are qualitative (for example, from nearly 100% test WER on Sl down to around 30%).", "Out-of-domain pre-training We examine the out-of-domain pre-training setting using the Common Voice (CV) ASR corpus (Ardila et al., 2020).", "In contrast with the political-domain oral speech in VoxPopuli, CV consists of more fluent read speech of copyright-free sentences (for example,
Wikipedia articles).", "We adopt the few-shot phoneme recognition setup on CV v3 from Rivière et al. (2020), with which domain adaptation is limited during fine-tuning due to the small data volume: it has a 1-hour train set, a 20-minute development set and a 1-hour test set for each of 10 languages, including 5 VoxPopuli ones.", "We present the performance of VP-Mono-5K, VP-10K and VP-100K with the m-CPC (Rivière et al., 2020) and XLSR (Conneau et al., 2020) baselines in Table 6, where the phone error rate (PER) is reported.", "The XLSR baselines share the same wav2vec 2.0 architecture as our models but are trained with in-domain CV data.", "VP-Mono-5K outperforms XLSR-Mono and XLSR-10 on all 5 VoxPopuli languages (except for a tie on Es with XLSR-Mono).", "VP-100K outperforms XLSR-10 on 8 (9) out of the 10 languages.", "VP-100K (Large) overall performs competitively with XLSR-53, which leverages 52K hours of out-of-domain data in addition to the in-domain CV data.", "Notably, it outperforms XLSR-53 on Zh, which is covered by XLSR-53 but remote from the EU languages in VP-100K.", "This suggests the high generality of the speech representations VP-100K learned.", "We also evaluate our multilingual model (VP-50K) under the normal setup (CV v5.1) and report test WER in Table 7.", "It is compared with supervised baselines from DeepSpeech-Polyglot (https://gitlab.com/Jaco-Assistant/deepspeech-polyglot), which leverage extended CV train sets and several other corpora for training as well as an LM for decoding.", "Our model outperforms the baseline with fine-tuning on the standard CV train set (a subset of the baseline's one), even without using an LM in decoding.", "Out-of-language pre-training In the few-shot phoneme recognition setup (Table 6), VP-100K does not cover 5 of the 10 CV languages (Ky, Ru, Tr, Tt and Zh) in pre-training, but leverages data from 18 additional EU languages.", "It outperforms the in-domain in-language XLSR baselines on most of the uncovered languages (except Ky, which is a remote Central Asian language).", "Moreover, it
performs more stably across all the 10 languages with a smaller variance (standard deviation) on PER.", "Self-training (Scudder, 1965) is a classical semi-supervised learning approach, where unlabeled data is equipped with pseudo-labels from a supervised model and then combined with labeled data for model training.", "We use the combination of EuroParl-ST (Iranzo-Sanchez et al., 2020) and CoVoST 2 (Wang et al., 2020b) for both ASR and ST labeled data in 3 languages (directions).", "The former is created from 2009-2012 EP plenary sessions and hence has the same domain as VoxPopuli.", "The latter is based on Common Voice v4, which has a different domain from VoxPopuli and dominates the combined train set.", "We train Transformer Base (Vaswani et al., 2017) supervised baselines and use 0.8K/3K-hour monolingual VoxPopuli unlabeled data (from 2013-2020 sessions only, to avoid overlaps with EuroParl-ST) to self-train Transformer Large models.", "We upsample labeled data in self-training so that it has the same duration as the unlabeled one.", "We observe from Table 8 that self-training on VoxPopuli improves both in-domain (EP) and out-of-domain (CV) performance with similar magnitude most of the time.", "For ST, self-training helps to narrow the gap between end-to-end models and the cascaded ones (which have more labeled data available) without the addition of expensive labeled data.", "We evaluate the quality of the weakly labeled ST data from our speech-to-speech alignment on the same benchmark as the self-training experiments.", "This also provides an indirect evaluation of our alignment pipeline, since imprecise alignments hurt the ST label quality.", "We examine the performance of weakly supervised training as well as joint training using both labeled and weakly labeled data.", "We see from Table 8 that the former is on par with (or better than) the supervised baseline in the VoxPopuli domain (EP) with 0.3x-1.8x more
training data than the baseline.", "Joint training brings substantial gains to both in-domain (EP) and out-of-domain (CV) performance, and it outperforms self-training.", "This suggests that our weakly labeled data (0.4K hours) is much more informative and efficient than the pseudo-labeled data (3K hours) when combined with labeled data.", "Multilingual speech corpora LibriLight (Kahn et al., 2020b) currently represents the largest-scale unlabeled speech corpus, but it is limited to English.", "MLS (Pratap et al., 2020) is a recently released large-scale multilingual corpus of read speech in 8 languages, derived from LibriVox.", "MAILABS (https://www.caito.de/2019/01/the-m-ailabs-speech-dataset) is also derived from LibriVox and has about 1000 hours available in 9 languages.", "While MLS and MAILABS are derived from audiobooks, VoxForge (http://www.voxforge.org) and Common Voice (Ardila et al., 2020) gather data via crowd-sourcing.", "VoxForge collected data in about 15 different languages with about 300 hours of speech in total; Common Voice currently supports 60 languages for a total of 7327 validated hours available.", "The CMU Wilderness dataset (Black, 2019) collects readings from the New Testament, with 700 different languages available.", "The IARPA Babel program (https://www.iarpa.gov/index.php/research-programs/babel) collected data for 24 languages, mostly from conversational telephone speech.", "The dataset, however, is not released under an open license, and is focused on low-resource languages, with labeled data ranging between 25 and 65 hours per language.", "Speech-to-Text and Speech-to-Speech Translation Apart from machine translation (Koehn, 2005), the European Parliament open data has fostered the development of corpora for speech-to-text translation and for simultaneous interpretation.", "EuroParl-ST (Iranzo-Sanchez et al., 2020) is a multilingual speech-to-text translation corpus with translations between 6 European languages (En, Fr, De, Es, It and Pt).", "Similarly, EPIC (Bendazzoli et al., 2005) is
derived from the European Parliament with simultaneous interpretation speeches in Italian, English and Spanish.", "CIAIR (Tohyama et al., 2004) and STC (Shimizu et al., 2014) are simultaneous interpretation corpora between English and Japanese, with a total of about 180 hours for the former, while the latter is currently unavailable for download.", "The MaSS dataset (Zanon Boito et al., 2020) also provides speech-to-speech alignments for about 8k utterances across 8 languages, for a total of about 23h of speech.", "In this paper, we introduce a large-scale multilingual speech corpus, VoxPopuli, for representation learning, semi-supervised learning and interpretation.", "VoxPopuli provides the largest open unlabeled speech data to date, which has broad applications including unsupervised pre-training and self-training.", "VoxPopuli is also the first corpus providing large amounts of open speech-to-speech interpretation data.", "We provide VoxPopuli ASR baselines and validate the versatility of VoxPopuli unlabeled data in semi-supervised learning under challenging out-of-domain settings.", "The corpus is available at https://github.com/facebookresearch/voxpopuli.", "We thank Gabriel Synnaeve, Tatiana Likhomanenko, Jade Copet, Vineel Pratap, Jiatao Gu and Alexis Conneau for helpful discussions on the project.", "Ethical Considerations We acknowledge the European Union (EU) for creating and publishing the materials used by VoxPopuli.", "We will add citations as well as acknowledgements in our release.", "We paid the market price to transcription vendors for the human annotations we collected.", "VoxPopuli includes all available speeches from the 2009-2020 EP events without any selection on the topics or speakers.", "The speech contents represent the standpoints of the speakers in the EP events, many of whom are EU officials." ]
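The target-segment selection in the alignment pipeline described above reduces to an argmax over cosine similarities between LASER embeddings, with a 0.75 cutoff for low-quality matches. A minimal sketch of that step, assuming embeddings have already been computed (the function names and toy vectors below are illustrative, not the released pipeline code):

```python
import numpy as np

SIM_THRESHOLD = 0.75  # similarity cutoff used to filter low-quality alignments


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def best_target_segment(source_emb: np.ndarray, candidate_embs: list) -> int:
    """Return the index of the best-matching candidate segment,
    or -1 if even the best score falls below the threshold."""
    scores = [cosine(source_emb, c) for c in candidate_embs]
    best = int(np.argmax(scores))
    return best if scores[best] >= SIM_THRESHOLD else -1


# Toy embeddings standing in for LASER sentence representations.
src = np.array([1.0, 0.0, 0.0])
cands = [np.array([0.9, 0.1, 0.0]), np.array([0.0, 1.0, 0.0])]
print(best_target_segment(src, cands))  # -> 0 (first candidate is near-parallel)
```

In the actual pipeline the candidates are all combinations of starting and ending decoded words inside a five-second window; the sketch only shows the scoring and thresholding.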
[ "method", "abstain", "abstain", "method", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "other" ]
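Each record in this file pairs a `sentences` sequence with a parallel `labels` sequence, as in the two arrays above. A small integrity check for such records can be sketched as follows (the label inventory is inferred from the labels visible in this file; the record shown is illustrative):

```python
from collections import Counter

# Label inventory inferred from the labels appearing in this file.
LABEL_SET = {"method", "abstain", "other", "result", "objective"}


def check_example(example: dict) -> Counter:
    """Check that sentences/labels are parallel and every label is known;
    return per-label counts for the example."""
    sentences, labels = example["sentences"], example["labels"]
    if len(sentences) != len(labels):
        raise ValueError("sentences and labels must have the same length")
    unknown = set(labels) - LABEL_SET
    if unknown:
        raise ValueError(f"unknown labels: {unknown}")
    return Counter(labels)


# Illustrative record in the same shape as the arrays above.
record = {
    "sentences": ["We propose a new task.", "Accuracy reaches 69%."],
    "labels": ["objective", "result"],
}
print(check_example(record))  # Counter({'objective': 1, 'result': 1})
```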
[ "Hrituraj Singh Triomics, Noida, India", "Anshul Nasery, Denil Mehta, Aishwarya Agarwal, Jatin Lamba Indian Institute of Technology Bombay, India", "Balaji Vasan Srinivasan Adobe Research, Bangalore, India", "Abstract", "Multimodal research has picked up significantly in the space of question answering, with the task being extended to visual question answering, chart question answering, as well as multimodal input question answering.", "However, all these explorations produce a unimodal textual output as the answer.", "In this paper, we propose a novel task, MIMOQA (Multimodal Input Multimodal Output Question Answering), in which the output is also multimodal.", "Through human experiments, we empirically show that such multimodal outputs provide better cognitive understanding of the answers.", "We also propose a novel multimodal question-answering framework, MExBERT, that incorporates joint textual and visual attention towards producing such a multimodal output.", "Our method relies on a novel multimodal dataset curated for this problem from publicly available unimodal datasets.", "We show the superior performance of MExBERT against strong baselines on both automatic and human metrics.", "Multimodal content is at the heart of the digital revolution happening around the world.", "While the term modality has multiple connotations, one of its common usages is to indicate the content modality, i.e.
images, text, audio, etc.", "It has been shown that multimodal content is more engaging and provides better cognitive understanding to the end user (Dale, 1969; Moreno and Mayer, 2007; Sankey et al., 2010).", "With recent improvements in vision-language grounding and multimodal understanding (Bisk et al., 2020; Luo et al., 2020; Sanabria et al., 2018; Das et al., 2018), several works have explored beyond unimodal machine comprehension (Hermann et al., 2015; Kociský et al., 2018; Nguyen et al., 2016; Kwiatkowski et al., 2019) towards a holistic multimodal comprehension (Antol et al., 2015; Das et al., 2017; Anderson et al., 2018; Zhu et al., 2018; Goyal et al., 2017; Fayek and Johnson, 2020) with significant improvements.", "However, all these explorations on multimodal understanding, question answering in particular, have limited their focus to unimodal outputs even with multimodal inputs.", "For example, the Visual Question Answering (VQA) task takes a textual query and an image to produce a textual answer.", "The multimodal question answering tasks (Antol et al., 2015; Kafle et al., 2018; Lei et al., 2018) take multiple input modalities, but the output is limited to text only.", "Even the recently proposed ManyModalQA (Hannan et al., 2020) relies on multimodal understanding to produce a textual answer.", "These works implicitly assume that textual answers can satisfy the needs of the query across multiple input modalities.", "We posit that such an assumption is not always true; while a textual answer can address several queries, a multimodal answer almost always enhances the cognitive understanding of the end user; understanding the answer through visuals is faster and provides enhanced user satisfaction.", "In this paper, we propose a new task, Multimodal Input Multimodal Output Question Answering (MIMOQA), which not only takes multimodal input but also answers the question with a multimodal output.", "Our key contributions are: 1) We introduce the problem of
multimodal input multimodal output question answering.", "We establish the importance of such multimodal outputs in question-answering for enhanced cognitive understanding via human experiments.", "2) We propose MExBERT, a novel multimodal framework for extracting multimodal answers to a given question, and compare it against relevant strong baselines.", "Our proposed method includes a novel pretraining methodology and uses a proxy supervision technique for the image selection.", "3) We curate a dataset for this problem by extending the MS-MARCO (Nguyen et al., 2016) and Natural Questions (Kwiatkowski et al., 2019) datasets to account for multimodal outputs.", "We propose the use of different automatic metrics and conduct human experiments to show their effectiveness.", "Multimodal output not only provides better understanding to the end user but also provides grounding to the actual answer.", "For example, the multimodal output for the question in Figure 1 (a) aids in better comprehension of the answer, while also providing grounding to words like 'stick', 'knob'.", "In some cases, a textual answer might even be insufficient, especially for questions which seek explicit visual understanding (questions about colors, structures, etc.).", "In such cases, existing systems apply image understanding on top of the images to arrive at a 'textual description' of the desired answer.", "While this might suffice in some cases, a multimodal output can almost always enhance the quality of such answers.", "In Fig. 1 (b), the textual answer is insufficient and gets completed only with the help of the final image-text combination.", "To verify the hypothesis, we collated 200 Question-Answer pairs (refer to the supplementary material for details); for each pair, we created its unimodal and multimodal answers.", "We conducted a human experiment where each question-answer pair was judged by 5 annotators, with each annotator rating whether the textual answer is sufficient for the input query.", "Irrespective of its sufficiency, the annotators were also asked whether the image in the multimodal variant enhances the understanding of the answer and adds value to it.", "To avoid the natural bias towards richer multimodal responses in such experiments, we had explicitly inserted a few questions with irrelevant images (20%) and only considered the annotations which did not exhibit any bias in such questions.", "Out of the 80.27% of total responses where the annotators felt that textual answers were sufficient, 87.5% felt the image enhanced their understanding even with such a sufficient textual answer, validating the importance of a multimodal answer.", "However, only 22.2% of the annotators felt the same when an irrelevant image was shown, indicating the absence of a strong bias towards richer responses.", "When the text was insufficient (19.73% of the responses), the relevant image boosted the understanding in 90.62% of the cases, further indicating that text-only answers are not always sufficient and that, in such cases, an appropriate image can aid in better understanding.", "Here again, only 27.65% felt that an irrelevant image would add such value, again indicating the lack of a strong bias towards multimodal answers just because they are richer.", "This experiment establishes that multimodal answers almost always improve the overall understanding irrespective of the sufficiency of the textual answer.", "Motivated by this, we propose the novel problem of multimodal input, multimodal output (MIMO) QA, which attends to multiple modalities and provides responses in multiple modalities.", "Formally, given a piece of input text T along with a set of related images I and a query Q, our problem is to extract a multimodal answer M from { I, T }.", "In an ideal case, a multimodal answer does not have to be multi-modal, especially when there is no relevant image in the input.", "However, for the sake of simplicity, we assume that there is at least one image in the input that can complement the textual answer even if the image is not extremely critical for the textual answer to make sense.", "This follows our human experiments, which showed that an image adds value to the response over 90% of the time, irrespective of the sufficiency of the textual answers.", "Thus, our multimodal answer M consists of a text M_T and an accompanying image M_I.", "Multimodal Extractive BERT (MExBERT): As we show later, a major problem with independently extracting the textual answer and matching an image is the absence of a joint understanding of the visual and textual requirements for the query.", "We, therefore, propose a joint-attention Multimodal Extractive BERT-based framework (MExBERT) using query Q over both input text T and input images I.", "Figure 2 shows the overall architecture of our proposed MExBERT
framework.", "Inspired by the recent visuo-lingual models (Tan and Bansal, Block FFN Block CA Block FFNNV x Block CA Block FFN Block FFN Block SA Block FFN Block SA x NTb x NTa Text Inputs + Pos + Seg IDs Start / End Point Self Attention Add + Norm FFN Add + Norm Block FFN Block SA Cross-Attention Add + Norm Block CA Visual Stream Textual Stream VGG-19 1 Figure 2: MExBERT . Details of the three blocks in the visual and textual streams are illustrated on the top. The visual stream takes the output of VGG-19 as input while the textual stream takes BERT Embeddings as input 2019; Lu et al., 2019; Chen et al., 2019), our framework has two separate streams textual and visual stream; textual stream takes the query and input passage as input while visual stream takes the images as input.", "The textual stream is extended from the BERT-QA framework (Devlin et al., 2018) and consists of self-attention transformer (Vaswani et al., 2017) layers.", "The input to the textual stream as shown in Figure 2 is tokenized BERT embedding of words in both passage and query.", "We also use the standard [CLS] and [SEP] tokens the former prepended in the beginning and the latter embedded between query and the input passage.", "We use positional embedding to additionally provide positional and segment information for the MExBERT to better distinguish between query and passage.", "Unlike the the canonical BERT-QA, our textual stream employs two types of layers regular self-attention layers and additional cross-attention layers.", "The initial layers of the textual stream include NT a regular self-attention based transformer layers similar to the canonical BERT-QA.", "The latter half of the textual stream is composed of NT b layers each of which consists of an additional cross-attention block along with the regular self-attention.", "Representing the attention computation in query-key-value format, the cross-attention block uses textual tokens as query and image representation from the visual 
stream as keys and values.", "This is different from self-attention, where the query, keys and values are all input textual tokens of the textual stream.", "The cross-attention block enables the framework to choose spans that are also coherent with the visual stream.", "If the i-th textual token's features and the j-th image's features used as input for the k-th textual stream layer and the (k - NT_a)-th visual stream layer (as discussed later) are given by T^i_{k-1} and V^j_{k-1}, and attention with query q, keys k, and values v is attn(q, k, v), the self-attention and cross-attention are given by T^i_{k,self} = attn(T^i_{k-1}, T_{k-1}, T_{k-1}) (1) and T^i_{k,cross} = attn(T^i_{k,self}, V_{k-1}, V_{k-1}) (2), where T_k = {T^0_k, ..., T^n_k} and V_k = {V^0_k, ..., V^m_k}.", "Here, n is the number of textual tokens and m is the number of input images.", "The final layer of the textual stream is used to calculate the start and end positions of the answer, similar to the canonical BERT-QA (Devlin et al., 2018), where one linear layer predicts the starting token and another layer predicts the ending token through a softmax applied over all tokens.", "The goal is to optimize the cross-entropy loss over both token position predictions.", "The visual stream is similar to the textual stream with two key differences:", "(i) There is only one type of layer in the network and the number of layers NV = NT_b, and", "(ii) All the layers consist of only cross-attention blocks (along with feed-forward layers and residual connections) and do not contain a self-attention block, as shown in Figure 2.", "
The self-attention was not used as the images mostly derive their relevance/context from the textual counterparts (powered by the cross-attention) in the input passage or query rather than from other input images.", "The cross-attention is similar to that of the textual stream except that the query is an image feature vector and the keys and values are the textual tokens' representations from the corresponding textual stream layer.", "The input to the visual stream is the global VGG-19 (Simonyan and Zisserman, 2014) features of each of the images.", "We do not use positional/segment encodings in the visual stream.", "We use a linear head on top of the visual features to predict whether a particular image should be in the output answer and use a weighted binary cross-entropy for training, where the weights w and 1 - w come from the proxy supervision values (as discussed later).", "The image with the highest confidence score on inclusion in the answer is regarded as the predicted image during inference.", "Extract & Match: A natural framework to output a multimodal response would be to combine existing state-of-the-art frameworks in question answering and visuo-lingual understanding.", "To illustrate the shortcomings of such an assembled framework and motivate the need for a holistic framework, we implement such a framework using existing models as our Extract & Match baseline.", "Given the input query (Q) and the input text (T) and images (I), we first extract the textual answer using unimodal BERT-QA (Devlin et al., 2018).", "We use this extracted answer, query, and input text to select an image from the input images using UNITER (Chen et al., 2019) to rank the images.", "UNITER has been trained on millions of image-text pairs for the image-text matching task: the task of identifying whether a given image-text pair is actually an image and its caption.", "Due to strong pretraining, UNITER has achieved SOTA performance on a variety of vision and language tasks, including zero-shot
image-text matching.", "So, we use this as our baseline for image selection.", "We provide each image along with the text (answer, query and input) to UNITER and use the classification confidence predicted by the image-text matching head to rank the images.", "The image which receives the highest confidence score for a given text is taken as the matched output.", "Since there is no existing dataset which satisfies the requirements of the task, we curate a new dataset (refer to the supplementary material for details on the curation strategy and data samples) by utilizing existing public datasets.", "We observe that several QA datasets contain answers that come from a Wikipedia article.", "Since most Wikipedia articles come with a set of related images, such images could feature as the input I in our setup.", "Extending this heuristic, we use two QA datasets, MS-MARCO (Nguyen et al., 2016) and Natural Questions (NQ) (Kwiatkowski et al., 2019), to extract those question-answer pairs which are originally extracted from Wikipedia and scrape all images from the original article.", "More details about the curation process and examples of the images scraped for questions can be found in the appendix.", "Table 1 shows various statistics about the dataset.", "The dataset includes a large number of images, making the task of selecting the appropriate image nontrivial.", "The variety of images also necessitates a robust visual and language understanding by our model.", "The passages have been formed by combining the answer's source passage and 2-3 randomly chosen 'distractor' passages from the original Wikipedia article.", "This allows the model to learn to find the right answer in unseen conditions as well.", "The number of tokens in our input passages is large enough for them to be regarded as the full input (instead of using the entire article), considering the focus here is on multimodal output and not article-passage ranking.", "Proxy Supervision: Although we have scraped the images from the original articles, we
do not have any supervision for these images in our dataset.", "In order to train the model to judge which images are relevant to an answer, we heuristically compute proxy targets by using two types of information about the image: its position in the original article and its caption.", "We use the caption and position information only to obtain the target scores during training and not as an explicit input to our model, since such information is not always readily available.", "Thus, our model is able to infer the correct multimodal response irrespective of the availability of such information at inference time.", "Since MS-MARCO and Natural Questions provide information about the original source passage for the final answer, we know the position of the source passage.", "We calculate the proximity distance P between the first token of the answer's source passage and an image, with the number of tokens as the distance unit.", "We further normalize this by the total number of tokens present in the entire article.", "We calculate the TF-IDF similarity of the caption against the query, answer and source passage (Figure 3).", "The overall supervision score is calculated as a weighted sum of these four scores, where the proximity score is calculated as 1 - P.", "The normalized supervision scores (between 0 and 1) are used as targets for the linear layer of the visual stream.", "Pretraining: Vision and language tasks have relied on pretraining to address the complexities in building visuo-lingual relationships (Tan and Bansal, 2019; Lu et al., 2019; Chen et al., 2019).", "Following this, we leverage pretraining to better initialize our model.", "[Table 1: Statistics for all three splits of the curated MIMO Question Answering Dataset. Train: 52,466 pairs, 242.31 avg tokens, 373,230 images; Development: 722 pairs, 180.62 avg tokens, 3,563 images; Test: 3,505 pairs, 242.58 avg tokens, 24,389 images.] Further, our signals (even after including proxy supervision) are relatively
sparse for a visuo-lingual task, calling for a stronger model initialization.", "We use Conceptual Captions (Sharma et al., 2018) as it has been shown to impart a generic V-L understanding (Chen et al., 2019).", "We use the standard Masked Language Modelling (MLM) task over Conceptual Captions to pretrain the textual stream and employ the cross-entropy loss over the masked tokens.", "While the task is intended to train the textual stream, since the entire caption is generated from the visual information through the cross-attention mechanism, the visual stream is also fine-tuned in this process.", "Since our final model uses segment IDs, we randomly assign a segment ID of either query or passage to each caption during pretraining in order to instill language understanding for both types of tokens.", "For pretraining the visual stream, we modify Conceptual Captions (Sharma et al., 2018) by choosing a random number N between 3 and 10 for each caption, followed by selecting N-1 negative images (i.e., those images which have different captions) along with the image that is associated with the caption.", "We provide the caption as input to the textual stream and these N images as input to the visual stream.", "We train the model to predict the image corresponding to the caption by using a binary cross-entropy loss over the images.", "Again, while this task is focused mainly on visual stream initialization, the textual stream is also fine-tuned due to the cross-attention layers between the two streams.", "We conduct extensive experiments and ablations for the proposed MExBERT framework and compare it against the E&M baseline.", "We divide our curated dataset into train, development and test sets as shown in Table 1.", "
As mentioned before, we used the 3.2 million image-caption pairs from the Conceptual Captions dataset (Sharma et al., 2018) for pretraining the MExBERT layers.", "For proxy supervision, we empirically determine the weights: proximity weight w_px = 0.4, passage weight w_p = 0.3, query weight w_q = 0.15 and answer weight w_a = 0.15, after analyzing the manually selected images in the dev set (as discussed later).", "For the E&M baseline, we pretrain the text extraction with the SQuAD dataset (Rajpurkar et al., 2016) and finetune it on our dataset.", "For the image matching, we rank images using the input query (Q), input passage (P) and the extracted answer (A), all concatenated together.", "For MExBERT, we tested different variants: with and without proxy supervision (PS), and with different pretraining setups (pretraining the textual stream alone, the visual stream alone, and both) to test the independent value of each kind of pretraining.", "Except for the pretraining and baseline experiments, all our experiments on MExBERT have been conducted with 3 random seeds and the reported scores have been averaged over the 3 seeds.", "We use BERT pretrained embeddings for the textual stream of MExBERT and use NT_a = NT_b = NV = 6.", "For finetuning MExBERT, we use the Adam optimizer initialized with a learning rate of 0.0001 and train until the validation loss saturates.", "The model was trained over 4 V100 machines using a batch size of 8 for finetuning and 64 for pretraining.", "For pretraining, we use an Adam optimizer with a learning rate of 0.0001 for 2 epochs over the 3.2 million image-text pairs for all our ablations.", "We use 768-dimensional textual embeddings with a vocabulary size of 30,522 and an intermediate hidden embedding size of 3072 for both textual and visual features.", "We project the 4096-dimensional VGG-19 image features into 2048 dimensions and use them as input to the visual stream.", "Evaluation Metrics: We
independently evaluate the text and image parts of the extracted answer using various metrics.", "For the text, we considered standard metrics like ROUGE and BLEU, popularly used in the literature for the textual question answering task.", "For images, we use precision@1, 2 and 3, in which we measure whether the predicted image is in the top-1, 2 or 3 images as selected in the ground truth.", "Although these metrics are standard, we verify their utility in the multimodal case by conducting a human experiment and calculating their correlations with human judgments.", "To further validate the choice of our metrics, we collated a subset of 200 examples which have their ground truth available (collected as discussed later).", "We then apply our best performing model to these examples and generate the multimodal answers.", "For each of the 200 pairs, we have both the predicted and the ground truth counterparts.", "We conduct a human experiment where the annotators are asked to rate the quality of both the textual and image parts of the answer on relevance R and user satisfaction S.", "The overall quality of the answer is high if it is both relevant and provides high user satisfaction.", "For each pair, 5 different annotators rate the answers, resulting in independent ratings for both predicted and ground truth answers.", "We calculate the overall quality of a predicted answer Q_a with respect to the ground truth by calculating the ratio between the quality (which we represent by R*S) of the predicted answer and that of the ground truth answer: Q_a = (R*S for predicted) / (R*S for ground truth).", "We compute the Pearson correlation between the different metrics and Q_a.", "We observe that ROUGE-1, ROUGE-2, ROUGE-L and BLEU yielded correlation scores of 0.2899, 0.2716, 0.2918 and 0.2132 respectively, indicating a moderate correlation and reassuring their viability for evaluating the textual answer even in our multimodal setup.", "For image metrics, we found precision@1 to be most strongly correlated
with human judgement (0.5421).", "While the expectation might be that such a metric has a perfect correlation, the user judgement is also biased by the corresponding textual answer, leading to different scores even if the image is the same in the actual and predicted answer.", "Evaluating Textual Outputs: Table 2 shows the performance of E&M against MExBERT (and its ablations) on extracting the right textual part of the multimodal answer.", "In order to test whether the visual attention on its own makes any difference to the text answer quality, we also compare two variants of MExBERT: one where the visual input is zeroed out and another where the images are given as input without any supervision on the image selection.", "In the latter case we use the average attention weights of an image to determine its relevance to an answer.", "While not drastically large, we observed noticeable improvements with the visual input as compared to zero visual input, affirming our understanding about the value of utilizing multimodal input and cross-modal learning.", "We notice a marginal improvement in the text scores if we use proxy supervision scores during training.", "[Table 2: Results showing the performance of E&M and MExBERT over various textual metrics for the test set (ROUGE-1 / ROUGE-2 / ROUGE-L / BLEU). E&M: 46.77 / 43.26 / 47.22 / 25.17; MExBERT + Zero Img Input: 44.10 / 41.90 / 44.91 / 24.28; MExBERT: 45.13 / 43.02 / 45.77 / 24.96; MExBERT + PS: 45.67 / 43.59 / 46.17 / 25.04; MExBERT + PS + L PT: 48.12 / 46.22 / 48.82 / 28.01; MExBERT + PS + V PT: 46.18 / 44.11 / 47.24 / 25.89; MExBERT + PS + V+L PT: 48.88 / 47.02 / 49.03 / 28.50.] Intuitively, this is because of better focus of the", "query on the target image, which further enhances its attention over the correct part of the answer in the input.", "Due to the relatively smaller corpus as compared to the text-only QA datasets usually used in recent works, we considered pretraining a natural choice to improve our model further.", "While the improvements in text scores with the visual
pretraining are marginal (which is expected since this training is directed at the visual stream), language pretraining yields reasonable improvements as shown in Table 2.", "Evaluating Image Output: We rank the images in the test set using our proxy supervision scores.", "We also select the image with the highest score as predicted by the respective model.", "We deem this image as precise@1, 2 or 3 depending on whether it is present in the top-1, top-2 or top-3 images as ranked by our proxy-supervision mechanism.", "While conducting the evaluation, we skip those data points which have no image or only a single image in the input to avoid any bias in the evaluation.", "After removing such datapoints, there were 2,800 test datapoints with 2 or more images.", "As mentioned before, in E&M, we retrieve the highest scoring image matched based on the concatenation of the query Q, passage P, and the extracted answer A as the matching text, so that the model has access to the whole textual input.", "Evidently, the results obtained are better than random but are still far from accurate.", "In fact, they are just more than half as good as those obtained with our heuristically created proxy scores when compared with human preferences, as shown in Table 4.", "
This shows that the problem is much harder than just using image retrieval models, calling for joint attention to understand the relevance of the question, passage and answer.", "Using questions and answers as the input text for UNITER was either poorer or similar, and hence not reported due to space limitations.", "The power of joint multimodal attention is strongly evident as, even without any visuo-lingual pretraining, we obtain meaningful (better than random) scores with just the averaged attention.", "[Figure example: 'Define the Mistral'. The name mistral comes from the Languedoc dialect of Occitan and means masterly.] The", "assumption, while using the highest average attention weights for selecting the image, is that the model learns to focus on relevant images while being trained to optimize for better textual answer generation.", "Applying our proxy supervision mechanism while training the model, we find a very significant improvement, especially in PRECISION@1 scores.", "PRECISION@2,3 scores are, however, similar to what we obtained with E&M.", "That is perhaps due to the fact that UNITER is good at establishing the relationships between text and images, resulting in good PRECISION@2,3 scores, but it fails at deciding the top image with high confidence due to a lack of explicit understanding about where to focus in the text.", "Such a joint understanding is the main strength of MExBERT.", "Visual pretraining yields larger improvements on the PRECISION@1 metric, while language pretraining provides marginal improvements.", "Human Evaluation: While our proxy scores have been intuitively designed, they are error-prone.", "We therefore collected human annotations over the entire test corpus to further validate our model's performance.", "We conduct a Mechanical Turk experiment where the turkers were asked to select an image from a given set of input images for a (question, answer, source passage) triplet which embellishes the textual response.", "Every question-answer pair was
annotated by 5 annotators, with each annotator annotating 5 such pairs; we pay $0.2 for every such annotation.", "We also provide an option of selecting 'no image' since some inputs might not have any relevant image that could go well with the answer.", "We find an agreement rate of over 50% for the selected image in over 90% of the cases.", "We therefore use the average number of votes per image as a 'preference' score for the image, and use this to compute the precision values in Table 4.", "[Table 4: Results comparing the performance of E&M and MExBERT over the image modality of the multimodal answer based on human evaluation over the test set (PRECISION@1 / @2 / @3). Random: 0.144 / 0.275 / 0.396; E&M: 0.284 / 0.492 / 0.612; MExBERT: 0.196 / 0.385 / 0.498; MExBERT + PS: 0.316 / 0.505 / 0.608; MExBERT + PS + L PT: 0.321 / 0.511 / 0.612; MExBERT + PS + V PT: 0.381 / 0.535 / 0.616; MExBERT + PS + V+L PT: 0.386 / 0.538 / 0.618; Proxy Scores: 0.422 / 0.631 / 0.753.] The performance of MExBERT against such human annotations is better than its performance when calculated over proxy scores, indicating that the proposed MExBERT is robust to the noise that might have crept into the proxy supervision and generalizes well.", "This also explains why the precision is lower in the noisy setting of proxy supervision than in the low-noise setting based on the human annotations.", "The high precision values of the proxy scores over the human preference scores demonstrate the effectiveness of our proposed heuristic for preparing proxy training targets.", "Machine reading comprehension and question answering have been explored for a while, with the earliest works dating back to 1999 (Hirschman et al., 1999).", "Most of these works dealt with a single modality at a time until recently.", "While earlier datasets were small, beginning with SQuAD (Rajpurkar et al., 2016) several large datasets (Rajpurkar et al., 2018; Yang et al., 2018; Choi et al., 2018; Reddy et al., 2019; Kwiatkowski et al., 2019) have been proposed.", "Though many
of these are extractive in nature, there are a few multiple-choice datasets (Mihaylov et al., 2018; Richardson et al., 2013).", "Datasets like QAngaroo and HotpotQA (Welbl et al., 2018; Yang et al., 2018) enable reasoning across multiple documents.", "Recently, several Table-QA datasets have also been proposed, aimed at providing a natural language answer by reasoning over tables.", "While some datasets like WikiTableQuestions (Pasupat and Liang, 2015) and MLB (Cho et al., 2018) have natural language questions, others like TabMCQ (Jauhar et al., 2016) have multiple-choice questions.", "A popular exploration in multimodal question answering is Visual Question Answering or VQA (Antol et al., 2015; Goyal et al., 2017; Anderson et al., 2018; Lu et al., 2016, 2019; Tan and Bansal, 2019), where the input is a textual query along with an image and the output is a text answer.", "Another variant of this, Charts Question Answering (Kafle et al., 2020, 2018; Kahou et al., 2017; Chaudhry et al., 2020), allows for the input to be a chart instead of a natural image.", "While both of these problems involve multimodality (image + question or chart + question), the output is still textual (specifically an answer class, since this is usually modelled as a classification problem).", "While the question is received as text in these problems, the reasoning is performed over a single modality only.", "In our work, we reason across multimodal input by simultaneously attending to images and text in the input to arrive at our target output.", "To overcome unimodal reasoning, there are attempts at truly multimodal reasoning with datasets such as ManyModalQA (Hannan et al., 2020), RecipeQA (Yagcioglu et al., 2018), and TVQA (Lei et al., 2018).", "While RecipeQA aims at reasoning over recipes and the associated pictures, TVQA involves multimodal comprehension over videos and their subtitles.", "The recently proposed ManyModalQA goes a step further by adding tables to the multimodal
reasoning as well.", "However, these datasets provide responses in a single modality only, either an MCQ or a textual response.", "Given the rate at which multimodal consumption is taking place in our lives, it is important that answering systems also enable multimodal output, which, as discussed, can already provide better cognitive understanding when combined with the textual modality.", "We presented one of the first explorations, to the best of our knowledge, of multimodal-output question answering from multimodal inputs and proposed the usage of publicly available textual datasets for it.", "We proposed strong baselines by utilizing existing frameworks to extract textual answers and independently match them with an appropriate image.", "We demonstrate the value of a joint multimodal understanding for multimodal outputs in our problem setup by developing a multimodal framework, MExBERT, which outperformed the baselines significantly on several metrics.", "We also developed a proxy supervision technique in the absence of labelled outputs and showed its effectiveness for improved multimodal question answering.", "We used some existing metrics to compare the different models and justified the usage of these metrics based on a human experiment.", "While it is an interesting and challenging task even in its current shape, we believe there are several limitations in our proposed framework.", "While our datasets had multimodal elements, modeling multimodal reasoning from multimodal inputs and using it to arrive at a multimodal answer calls for a more careful question curation that includes these challenges.", "Recently proposed datasets such as MultimodalQA have created questions explicitly aimed at reasoning across multimodal input, but lack the multimodal output component.", "Future work could include questions which specifically aim for visual elements, making the output requirement multimodal.", "Also, free-form answer generation in the multimodal input/output
context is another interesting subject of further research." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", 
"abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "objective", "objective", "objective", "objective", "method", "objective", "method", "abstain", "abstain", "abstain" ]
[ "Automated generation of conversational dialogue using modern neural architectures has made notable advances.", "However, these models are known to have a drawback of often producing uninteresting, predictable responses; this is known as the diversity problem.", "We introduce a new strategy to address this problem, called Diversity-Informed Data Collection.", "Unlike prior approaches, which modify model architectures to solve the problem, this method uses dynamically computed corpus-level statistics to determine which conversational participants to collect data from.", "Diversity-Informed Data Collection produces significantly more diverse data than baseline data collection methods, and better results on two downstream tasks: emotion classification and dialogue generation.", "This method is generalizable and can be used with other corpus-level metrics.", "It is well-documented that neural dialogue models struggle with generating engaging, relevant responses (Li et al., 2016a) and often produce banal responses such as Yeah.", "While this may be an appropriate response in a chitchat conversation, to keep a human participant engaged, diversity of responses is important.", "Diverse models vary the language used and the content referenced, and the generated utterances differ from the most typical conversation responses some proportion of the time.", "A model which only generates Yeah, No, and I don't know is not diverse and is not engaging to converse with.", "Past work has improved model diversity with innovation on model architectures and decoding strategies (Li et al., 2016a; Baheti et al., 2018; Li et al., 2017; Shao et al., 2017; Cao and Clark, 2017; Serban et al., 2017; Zhao et al., 2017).", "We build upon this work to propose a novel method to collect and determine more diverse data to train these models with.", "Our method can be used in conjunction with existing generation-specific model innovations.", "Some prior work on data collection processes has
prioritized diversity.", "For instance, Rashkin et al. (2019) prompts crowdworkers to choose an underused emotion class to generate dialogue.", "This work encourages coverage of emotion classes, but does not consider the likelihood that some crowdworkers are better at producing certain types of data than others.", "This paper introduces Diversity-Informed Data Collection (DIDC), a new strategy for creating a dataset of conversational utterances via selecting which participants' data to include in the collection.", "The strategy progressively builds up a more diverse sub-corpus from an existing larger collection.", "The main idea is to grow the sub-corpus by adding conversations sequentially and to assess the contribution of a new participant's utterances to the diversity of the entire sub-corpus.", "This strategy is also applicable to on-the-fly collection of new datasets via crowdworking or similar methods.", "We implement DIDC with three diversity metrics: Outlier, Entropy, and Mean-IDF.", "Diversity-Informed Data Collection also provides a new method for finding an upper bound on a current corpus's diversity via a Corpus-Wide Oracle which has access to information about which utterances are most diverse across the corpus.", "Prior work has not used corpus-level statistics to enhance the diversity of the collected data.", "Instead, when collecting data with crowdworkers, researchers have sought more diverse responses by altering the task (Kang et al., 2018) or by altering the stimulus (Larson et al., 2019).", "Prior work that trains neural dialogue models has not made use of subsets of existing datasets that exhibit properties of diversity.", "Our experiments show this strategy yields significantly more diverse data than baseline collection processes.", "It also yields better, more diverse model output on two downstream tasks.", "Additionally, this method can be implemented for other metrics which are defined relative to the corpus.", "Past work in neural 
dialogue generation investigates how to improve diversity in conversational responses.", "Additionally, past work in crowdsourcing data collection has explored optimizing crowdsourcing data collection processes.", "Improving model diversity is an important goal in dialogue generation (Li et al., 2016a), with several related works proposing architecture and training improvements to increase diversity.", "Decoding methods to increase model diversity include Li et al. (2016a) which proposes maximizing mutual information between the source sentence and response rather than maximizing likelihood.", "Other approaches have focused on beam search and incentivizing diverse beams, by adding similarity constraints at decoding (Baheti et al., 2018), penalizing items on the beam that are similar and reranking the resulting items (Li et al., 2016b), or penalizing words which have already been generated in a current beam (Li et al., 2017).", "Shao et al. (2017) uses attention over already-generated words at decode time and beam reranking.", "Adding a temperature parameter to sharpen the decoder's distribution has also been studied (Cao and Clark, 2017).", "Neural architecture improvements have also been explored, such as conditioning on a latent variable at decode time (Serban et al., 2017; Zhao et al., 2017) or a multi-headed attention mechanism which aims to capture different parts of the context (Tao et al., 2018).", "Zhang et al. (2018) explore the use of Generative Adversarial Networks to incentivize diversity.", "These more diverse models and decoding methods can be used in conjunction with Diversity-Informed Data Collection, since it attempts to improve the data that neural models are trained on in an earlier part of the model pipeline.", "Related work in crowdsourcing has approached the optimization problem of how to assign crowdworkers.", "Basu Roy et al.
(2015) formulates the problem of matching crowdworkers to tasks depending on skill levels for a set of concepts, pay rates, and HIT acceptance ratio.", "Follow-up work extends to collaborative crowdwork, where crowdworkers need to work together (Rahman et al., 2015).", "Assadi et al. (2015) pursue a similar task assignment setup.", "Additional work has attempted to automatically evaluate the quality of crowdworker task performance and use the results to assign crowdworkers to new tasks on-the-fly (Fan et al., 2015).", "Further investigations have explored more adaptive assignment of tasks in real-time based on the likelihood that a participant will continually complete tasks (Kobren et al., 2015).", "Relatedly, Kumai et al. (2018) design a task allocation strategy to minimize the stress of workers and maximize the resulting quality in terms of balanced skill performance.", "An additional area related to our work is crowdworker label distribution prediction.", "Liu et al. (2019) study a crowdworker labeling task and train models to predict the 50-label crowdworker distribution from 5-10 labels.", "Yang et al. (2018) aim to predict diversity in crowdworker answers to questions about an image to determine how many crowdworker responses are required to capture this diversity.", "Lin et al. (2018) tackle the task of employing crowdworkers to generate or label minority-class examples to feed an active-learning model.", "They deploy a multi-armed bandit to choose crowdworking tasks based on how cheaply a minority-class example can be generated using the technique.", "Our approach, by contrast, adapts a distributional constraint across the entire collection.", "Zhou et al. 
(2018) explores the related task of changing crowdworker team instruction prompts.", "Data collection approaches to incentivize diverse crowdworker output have also been studied.", "For instance, in EmpatheticDialogues (Rashkin et al., 2019) crowdworkers are conditioned to generate a response and an emotion (such as afraid or proud) associated with it.", "If workers do not generate text with certain emotions, they are prompted to select only from the underused labels.", "This is an example of trying to get better class coverage, but does not compare crowdworker output to the entire corpus of collected responses.", "Past work has also examined how the particular crowdworking task affects the diversity of crowdworker output.", "Kang et al. (2018) compare two crowdsourcing tasks for use in a downstream goal-oriented dialogue system and examine resulting data diversity.", "While Kang et al. (2018) focus on choosing a task which produces diverse utterances, our work focuses on choosing a participant population which produces diverse data compared to data which has already been collected.", "Building on Kang et al. (2018), and perhaps most similar to our work is Larson et al. (2019), which tackles the problem of detecting outlier paraphrases generated by crowdworkers.", "To obtain multiple ways of expressing similar intent (such as opening a bank account), crowdworkers are asked to paraphrase sentences.", "After a round of paraphrase collection, the most diverse (the outlier) paraphrases are identified and placed back onto the crowdsourcing platform for another round of data collection.", "Our method is similarly aimed at increasing diversity of collected data.", "However, our method adapts the participant population for a set of tasks, which can be used in addition to an approach like Larson et al. 
(2019) which adapts the stimulus the population works on.", "We propose a method, Diversity-Informed Data Collection, which progressively builds up a corpus and, while doing so, identifies which conversation participants produce more diverse utterances compared to the rest of the in-progress corpus.", "More formally, our task is to progressively build a sub-corpus, sub c , of a given size from a larger, pre-collected corpus, c , where utterances are tied to IDs of specific participants.", "Our approach is aimed at building a diverse sub-corpus sub c .", "Our approach chooses which population of participants to collect data from for a given round.", "This population changes dynamically depending on participants' calculated diversity scores.", "When utilizing a human-created, pre-existing corpus, we assume responses of the dataset are well-formed and of acceptable quality.", "With this assumption, we can maximize diversity scores without worrying that quality will be sacrificed for this diversity.", "However, when using this approach to collect data on-the-fly, additional quality controls may be necessary to ensure diverse data does not come at the cost of quality.", "We assess two experimental conditions: Simulated Data Collection and Corpus-Wide Oracle Upper-Bound.", "Simulated Data Collection is set up to mimic crowdsourcing data collection processes leveraging a large pre-collected corpus, while Corpus-Wide Oracle Upper-Bound gathers a maximally diverse sub-corpus of utterances.", "For all experiments, we utilize the pre-collected EmpatheticDialogues corpus (Rashkin et al., 2019).", "We experiment with this corpus because it has crowdworker IDs associated with each utterance, which allows us to experiment with varying the participant population.", "Future work should conduct further experimentation to examine this approach's adaptability to other chitchat and goal-oriented datasets.", "The corpus has a large number of utterances (100,000) over 25,000 
conversations.", "Each conversation is centered around a situation (such as getting a promotion at work) and is associated with one of 32 emotions, such as anger, excitement, or guilt.", "Each conversation takes place between two crowdworkers and is an average of 4.3 turns.", "There are 810 unique crowdworkers in this dataset, each completing an average of 132 utterances each across an average of 61 conversations.", "Our task is to select sub c of size 10,000 from the larger EmpatheticDialogues corpus, c .", "We choose 10,000 as it is a sufficient number of utterances to train downstream models but still a small proportion (10%) of the original dataset, allowing examination of differences between sub-corpora.", "Implementation utilizes Cornell Convokit (Chang et al., 2019).", "We simulate real-time crowdsourcing using a large, pre-collected corpus, c .", "This allows for running multiple trials, each time selecting sub c and examining significance of different diversity metrics and participant selection conditions.", "We simulate collecting data on-the-fly using an artificially-constructed environment (formally described in Algorithm 1), which completes multiple rounds of data collection until the progressively built sub-corpus size ( sub c ) is the desired size.", "The Algorithm 1: Data collection simulation environment.", "ComputeDiversity depends on the diversity metric (Table 2), and EvalP articipants depends on the participant selection approach (Table 1).", "procedure assumes a fixed number of conversation participants in each round to gather data from (set to 10 for our experiments).", "We collect 2 conversations from each participant, chosen to allow the algorithm to recover from a participant with low diversity utterances while not judging a participant on just one conversation.", "Given a participant's conversation, the diversity of an utterance in that conversation is stated in Equation 1: div utt = ComputeDiversity ( utt, sub c ) (1) where 
ComputeDiversity depends on the diversity metric examined.", "We obtain a diversity score for each participant p 's set of utterances ( utts p ) by averaging these diversity values: div_p = (1/|utts_p|) \sum_{utt \in utts_p} div_utt (2).", "At the end of each round of data collection, each participant's utterances ( utts p ) are added to sub c .", "Additionally, the algorithm determines which subset of the participant population is retained for the next round based on a Participant Population Selection strategy.", "Our algorithm is greedy, since the order in which participants are added to the simulation and the order in which conversations are sampled both affect a participant's likelihood of being retained for an additional round.", "However, crowdworker data collection itself is usually a greedy approach, with crowdworkers being assigned to tasks in the order they arrive and being allowed to complete many tasks until the dataset has been collected.", "We experiment with three conditions to determine which subset of current participants (participants which were involved in the most recent round of data collection) should be retained for the next round of data collection, summarized in Table 1.", "
Diverse Population: After collecting conversations from current participants, we choose to retain the most-diverse 70% of participants.", "Above Mean Population: Any participant whose diversity average falls above the mean diversity average of sub c is retained in the pool of participants.", "Random Population: We compare to a special random baseline, where at each iteration we retain a random 70% of the participant population, to directly compare to the 70% of crowdworkers retained in Diverse Population.", "Table 1 summarizes each condition (Condition / Description); for example, Diverse Population calculates each participant's average relative diversity for the current data collection round.", "We structure Random Population to collect data from roughly the same number of participants as Diverse Population, to examine differences between the resulting sub c due to the selection of which participants to retain for another round of data collection.", "We experiment with three diversity metrics (Outlier, Entropy, and Mean IDF), summarized in Table 2.", "For all metrics, a new utterance utt is compared to the sub-corpus sub c .", "The same utterance can have different diversity values depending on the utterances in sub c .", "When augmenting pre-collected data, this allows for the collection of new utterances which are relatively diverse.", "Outlier: The embedding-based Outlier metric was proposed by Larson et al. 
(2019).", "Each utterance is encoded using a Universal Sentence Encoder (USE), which creates a sentence embedding by averaging word embeddings and passing the representation through a feedforward neural network, originally trained in a multi-task setting with supervised and unsupervised NLP tasks (Cer et al., 2018).", "An embedding of an utterance is created via: E utt = USE ( utt ) .", "A mean corpus vector is computed by averaging all of sub c 's utterance's vectors: E sub c = 1 size ( sub c ) (cid:88) u sub c USE ( u ) (3) The diversity metric is the Euclidean distance between each new utterance and the mean corpus vector, or: (cid:115)(cid:88) i ( E u i E sub ci ) 2 (4) where i is a dimension in Embedding E .", "Utterances which are farther from the mean corpus vector are given a higher diversity score.", "For Simulated Data Collection, the mean corpus vector shifts as data is collected.", "Therefore, depending on which utterances are already added in the sub-corpus, outlier values will change for a given utterance.", "Entropy: The Entropy score is determined by a non-neural trigram language model with smoothing for unseen words.", "The diversity score is given by: 1 | x T rigram ( utt ) | (cid:88) x Trigram ( utt ) p ( x ) log p ( x ) (5) The language model is only trained on utterances in the sub-corpus.", "Mean IDF: This metric calculates the mean IDF value for each word in the utterance (Baeza-Yates et al., 1999).", "IDF is calculated by treating each utterance in the corpus as a document.", "For a given utterance utt p and sub-corpus sub c , Mean IDF is calculated via: 1 | utt p | (cid:88) w utt p log (cid:18) |{ sub c }| |{ utt | w utt }| (cid:19) (6) where { sub c } is the set of all utterances in the sub c .", "The IDF of a word w in utt is the number of utterances in sub c divided by the number of utterances containing w on a log scale.", "In addition to evaluating the robustness of our approaches, multiple diversity metrics are chosen with different 
conceptual types of diversity in mind.", "Outlier uses Universal Sentence Encoder embeddings which capture content (Cer et al., 2018).", "Entropy considers the probability of short phrases and can capture word-combination diversity.", "Mean IDF considers the rarity of the words being used, for vocabulary diversity.", "Depending on the downstream application for a dialogue agent, the utility of these diversity measures may vary.", "To provide an upper bound for the diversity of a sub-corpus sub c , we create a Corpus-Wide Oracle which knows the value of each utterance's diversity compared to the entire corpus c .", "For each utt \in c , we compute diversity according to the methods in Table 2, where sub c = c .", "For example, for Outlier, the mean corpus vector is (1/|c|) \sum_{x \in c} USE(x) (7), which captures utterances from the entire corpus c .", "We calculate a Corpus-Wide Oracle diversity score, div oracle , for each utterance in c for each diversity metric.", "The Corpus-Wide Oracle is used to construct sub c of any size consisting of the most diverse utterances.", "This sub-corpus can be used to compare against other collection methods, such as those in Simulated Data Collection, or as a way to enhance an existing collection by selecting out the most diverse utterances.", "After the Corpus-Wide Oracle ranks each utterance by diversity, we select the utterances with the top 10,000 diversity values to form sub c .", "This serves as a use-case for collecting the maximally-diverse corpus for a given diversity metric.", "However, the utterances selected by the Corpus-Wide Oracle might not be the best 10,000 utterances to collect for a sub-corpus.", "The Corpus-Wide Oracle selects the utterances with the most diversity compared to the whole corpus, but this might be too much diversity without enough context, since the Simulated Data Collection methods add entire conversations (not utterances in isolation) to sub c .", "We evaluate the collected corpora both in terms of how diverse each 
sub-corpus is as well as performance on two downstream tasks: conversation emotion classification and dialogue generation.", "The first evaluation aims to answer the question of whether our methods produce more diverse sub-corpora than the Random Population baseline.", "We examine the hypothesis that using a collection method with knowledge of diversity will result in sub c that is significantly more diverse.", "For each data collection method, we compare the diversity of the sub-corpus to Random Population.", "Because diversity values are relative to sub c , the diversity of sub c is measured via div oracle values.", "Table 3 shows the resulting div oracle values for datasets collected using our methods.", "Each value is the average of 100 trials, in which each trial collects a 10,000-utterance sub-corpus, sub c .", "Significance results for all experiments use a two-sided t-test compared to the Random Population baseline.", "Both Diverse Population and Above Mean Population produce datasets which contain statistically significantly (p < 0.001) more diverse data compared to the Random Population baseline.", "The Corpus-Wide Oracle method produces the most diverse results overall, as expected, since it is a collection of the top 10,000 most diverse utterances.", "Running Diversity-Informed Data Collection to collect datasets of size 5,000 produced similarly significant differences.", "We also examine the average number of participants, out of the 810 total in c , that are included for each method.", "Note in Table 3 the difference in Average Number of Participants from Random Population and Diverse Population to Above Mean Population and Corpus-Wide Oracle.", "Even though Above Mean Population is more diverse than Diverse Population for Entropy, it comes at the cost of more participants.", "Across all three diversity metrics, Above Mean Population requires about 100-200 additional participants compared to Diverse Population and Random Population.", "In an 
online setting where the cost to train new crowdworkers is high, the tradeoff between number of participants and diversity of content may be worth considering.", "To examine the quality of the resulting sub c 's, we turn to downstream task evaluation.", "We first examine the task of classifying a conversation's emotions from utterance text.", "Following Larson et al. (2019)'s justification, we would expect a more diverse sub c to result in higher classification accuracies, because more diverse responses should cover more variation in how people express emotions in conversation.", "We follow the methodology of Larson et al. (2019), who propose evaluating the diversity of goal-oriented intent paraphrases.", "For their use case, classification models predict the intents from the paraphrase.", "For our case, each conversation in the EmpatheticDialogues corpus is associated with an emotion, such as anger or guilt.", "There are 32 such emotions throughout the corpus.", "The classification task is to predict which of the 32 emotions is expressed from a given utterance.", "Table 4: Results for downstream classification accuracy (SVM / FastText), averaged over 5-fold cross-validation over 10 trials; higher is better. Outlier: Random Population 0.224 / 0.050; Diverse Population 0.234* / 0.052; Above Mean Population 0.229 / 0.077*; Corpus-Wide Oracle 0.100* / 0.057*. Entropy: Random Population 0.218 / 0.052; Diverse Population 0.212 / 0.049; Above Mean Population 0.254* / 0.065*; Corpus-Wide Oracle 0.134* / 0.102*. Mean IDF: Random Population 0.220 / 0.052; Diverse Population 0.236* / 0.052; Above Mean Population 0.257* / 0.064*; Corpus-Wide Oracle 0.131* / 0.065*.", "Following Larson et al. 
(2019), we use two classification models: a Bag-of-Words SVM and a FastText classifier.", "The Bag-of-Words SVM is an SVM using TF-IDF word features for prediction.", "The FastText classifier uses a neural classification model on top of fastText sentence embeddings (Joulin et al., 2017).", "The sub-corpora we collect using the different methods serve as the datasets to train these classification models.", "Classification task results are summarized in Table 4.", "Reported scores are averaged over 5-fold cross-validation and over 10 runs of datasets collected from each method.", "While most conditions show Diverse Population significantly outperforming Random Population, it performs worse than Random Population with Entropy SVM and Entropy FastText and performs the same in Mean IDF FastText.", "Above Mean Population, on the other hand, outperforms the Random Population baseline in all conditions.", "This could potentially be due to the larger number of participants included in Above Mean Population.", "Surprisingly, Corpus-Wide Oracle does not perform the best in each category.", "We conjecture that too many diverse responses do not allow a classification model to learn common characteristics.", "Because the ultimate goal of collecting more diverse dialogue data is generating more diverse text, we evaluate the diversity of neural text generation models trained on the resulting corpora.", "Our task is to generate the next utterance in a dialogue, where the data collection processes collect utterances for sub c .", "To train generation models, the input is the most recent parent utterance for each utt in sub c , and utt is the target sentence to generate.", "When utt is the starting utterance in a conversation, the input is the situation associated with the conversation (such as planning a vacation).", "We train Sequence-to-Sequence models (Sutskever et al., 2014) with a 2-layer bidirectional encoder, hidden size 500, word vector size 64, Adam optimizer (Kingma and Ba, 2014), learning rate 
0.001, trained for 3000 steps with batch size 32.", "Models are implemented using OpenNMT (Klein et al., 2017).", "We opt to use a standard model, as it has fewer parameters to learn from smaller sub-corpora.", "We use the same parameter settings for all trained models.", "Generation task results are summarized in Table 5.", "We report both the mean and median length of model responses.", "Distinct-1 and Distinct-2 measure the proportion of unigrams and bigrams, respectively, in the set of model responses which are unique (Li et al., 2016a).", "We also report the diversity of the generated responses calculated by the metrics used in sub c collection (see Table 2).", "Our method results in models which produce more diverse output compared to the baseline Random Population data collection.", "Interestingly, Diverse Population and Above Mean Population split the win on producing more diverse outputs.", "Corpus-Wide Oracle diversity results are sometimes lower, and its responses are overall shorter, than those of other methods; a potential reason is that this condition only samples utterances, not conversations.", "Responses from the model trained on each sub c are evaluated with all 3 diversity metrics, to examine potential interactions.", "Collecting sub c with Entropy results in higher Mean IDF (and vice versa) compared to Random Population.", "Collecting sub c with Outlier results in slightly lower Mean IDF (and vice versa) for Diverse Population and Above Mean Population compared to Random Population.", "There is not a consistent signal between Outlier and Entropy.", "Future work can further examine the relationships among these diversity metrics.", "Diversity Considerations: Diversity-Informed Data Collection results in more diverse data than the Random Population baseline, and this data is shown to be more effective on downstream tasks.", "Future work can explore the effect of simultaneously optimizing multiple desirable measurements of diversity.", "However, we acknowledge that maximum 
diversity might not be what is desired and does not always result in the best downstream task performance, as indicated by the low Corpus-Wide Oracle downstream task performance.", "While we have not examined the tradeoff between diversity and quality, this can be explored in future work.", "Generalizability: Diversity-Informed Data Collection is generalizable to metrics other than diversity.", "Concretely, DIDC should be used when a desired metric (1) can compare one sample (or set of samples) to the in-progress dataset and (2) has variation among the participant population.", "Additionally, Diversity-Informed Data Collection can be applied to areas outside of dialogue data collection.", "For instance, DIDC could apply to collecting data with different emotions or sentiment.", "Another extension is to a specialized application domain, such as collecting dialogues for educational tutoring purposes, where our method could be used to collect more data from students who generate text consistent with certain types of misconceptions.", "Crowdworking Deployment: We evaluated on simulated crowdworking data by leveraging an existing corpus.", "This choice stems from the desire to test multiple runs of methods in a controlled environment, to reliably determine significance, and to work with data with an assumed level of quality.", "That said, our approach can be applied to real crowdworking tasks.", "Data can be gathered from several participants in parallel, where crowdworkers are added and offered new tasks or assigned qualifications based on their diversity.", "If our method is deployed in paid crowdworking tasks, Diverse Population might be more cost-effective.", "In this particular investigation, we find Diverse Population requires 100-200 fewer participants than Above Mean Population to create a dataset.", "Table 5: Downstream model generation results (Mean Length / Median Length / D-1 / D-2 / Outlier / Entropy / Mean IDF); higher numbers are better for all metrics. Outlier: Random Population 7.6 / 7 / 0.114 / 0.296 / 0.981 / 3.088 / 5.504; Diverse Population 9.7 / 7 / 0.110 / 0.279 / 0.989* / 3.354* / 5.297; Above Mean Population 8.1 / 7 / 0.063 / 0.169 / 0.960* / 3.083 / 5.067*; Corpus-Wide Oracle 3.8 / 4 / 0.204 / 0.448 / 1.042* / 2.968* / 6.789*. Entropy: Random Population 8.8 / 8 / 0.101 / 0.265 / 0.981 / 3.281 / 5.263; Diverse Population 7.7 / 7 / 0.122 / 0.317 / 0.978 / 3.197 / 5.411; Above Mean Population 6.6 / 6 / 0.092 / 0.226 / 0.982 / 3.057* / 5.474*; Corpus-Wide Oracle 4.9 / 5 / 0.112 / 0.316 / 0.985 / 2.935* / 5.781*. Mean IDF: Random Population 6.1 / 6 / 0.120 / 0.294 / 0.988 / 3.036 / 5.526; Diverse Population 6.7 / 6 / 0.131 / 0.322 / 0.986 / 2.955 / 5.797; Above Mean Population 7.2 / 7 / 0.071 / 0.187 / 0.976* / 2.937* / 5.655; Corpus-Wide Oracle 3.4 / 3 / 0.214 / 0.449 / 1.008* / 2.421* / 8.327*.", "Due to the time required to train new participants, there is a tradeoff between training a new worker and collecting more data from current participants.", "Caution should be taken in using this method on-the-fly without a quality check.", "Standard quality control methods (e.g., crowdworker qualifications, manual examination, crowdworker verification) should be deployed for from-scratch data collection.", "Crowdworker Fairness: Another important consideration for a live deployment is the crowdworker's perspective on fairness.", "Because some crowdworkers are retained for more data collection than others, communicating this possibility to crowdworkers is essential (Brawley and Pury, 2016).", "Crowdworking best practices involve disclosing which quality metrics are being used to workers to set clear expectations (Bederson and Quinn, 2011).", "Additionally, combining our method with a method which alters the task crowdworkers complete (Kang et al., 2018), as opposed to restricting the crowdworking population, could be a way to balance fairness with crowdworkers.", "Different task and population combinations could allow for all crowdworkers to participate in more tasks.", "In summary, Diversity-Informed Data Collection collects more diverse datasets than the standard approach, and performs better on downstream 
tasks.", "We define the diversity of an utterance relative to the other utterances in a corpus.", "This allows for measurement of the impact of adding each utterance to the corpus.", "Working under the same assumption that a subset of participants produce diverse data compared to the corpus, our method can be extended to other diversity measures and can be modified to work with other corpus-level metrics.", "This work was supported by an AWS Machine Learning Research Award, an NVIDIA Corporation GPU grant, a UC Berkeley Chancellor's Fellowship, a National Science Foundation (NSF) Graduate Research Fellowship (DGE 1752814) and an NSF CAREER Award (IIS-1453721).", "We thank the three anonymous reviewers for their helpful comments.", "We additionally thank Cathy Chen, David Gaddy, Daniel Fried, Lucy Li, and Nate Weinman for their helpful feedback." ]
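The per-round participant selection described above (Equations 1-2 plus the Diverse Population strategy) can be sketched in a few lines. This is a minimal sketch, not the authors' implementation: it assumes whitespace tokenization, uses Mean IDF as the ComputeDiversity metric, and makes an unspecified smoothing choice (unseen words receive the maximum IDF, log of the sub-corpus size). Function names `mean_idf` and `select_population` are illustrative.

```python
import math
from collections import Counter

def mean_idf(utterance, sub_corpus):
    """Mean-IDF diversity of an utterance relative to sub_corpus (Equation 6).

    Each utterance in sub_corpus is treated as a document; the score is the
    mean over the utterance's words of log(|sub_c| / doc-frequency(w)).
    Unseen words get the maximum IDF, log(|sub_c|) -- a smoothing assumption.
    """
    words = utterance.lower().split()
    n_docs = len(sub_corpus)
    doc_freq = Counter()
    for u in sub_corpus:
        doc_freq.update(set(u.lower().split()))
    idfs = [math.log(n_docs / doc_freq[w]) if doc_freq[w] else math.log(n_docs)
            for w in words]
    return sum(idfs) / len(idfs)

def select_population(participants, sub_corpus, keep_frac=0.7):
    """One Diverse Population round: score each participant by the average
    diversity of their utterances against the in-progress sub_corpus
    (Equation 2), then retain the top keep_frac of participants."""
    scores = {
        pid: sum(mean_idf(u, sub_corpus) for u in utts) / len(utts)
        for pid, utts in participants.items()
    }
    n_keep = max(1, int(len(scores) * keep_frac))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_keep], scores
```

In a full simulation this would run inside a loop: each round collects 2 conversations per retained participant, appends them to the sub-corpus, rescores, and repeats until the sub-corpus reaches the target size.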
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "objective", "method", "objective", "method", "objective", "method", "other", "method", "abstain", "other", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "other", "other", "other" ]
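The Corpus-Wide Oracle upper bound described in the record above (score every utterance against the full corpus, i.e., sub_c = c, then keep the top-k) can be sketched as follows. This is an illustrative sketch, not the paper's code: it again substitutes Mean IDF with whitespace tokenization for the diversity metric, and the helper names are hypothetical.

```python
import math
from collections import Counter

def mean_idf(utt, corpus):
    # Equation 6 with sub_c = c: average IDF of the utterance's words,
    # treating each utterance in `corpus` as a document (unseen words
    # receive log|corpus| as an assumed smoothing choice).
    doc_freq = Counter()
    for u in corpus:
        doc_freq.update(set(u.split()))
    words = utt.split()
    n = len(corpus)
    return sum(math.log(n / doc_freq[w]) if doc_freq[w] else math.log(n)
               for w in words) / len(words)

def corpus_wide_oracle(corpus, k):
    """Rank every utterance by its diversity against the FULL corpus and
    return the k most diverse ones, mirroring the upper-bound condition
    (k = 10,000 in the paper's experiments)."""
    return sorted(corpus, key=lambda u: mean_idf(u, corpus), reverse=True)[:k]
```

Note that, as the paper observes, this selects isolated utterances rather than whole conversations, which is one reason the oracle sub-corpus can underperform on downstream tasks despite being maximally diverse.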
[ "In this work, we explore the implicit event argument detection task, which studies event arguments beyond sentence boundaries.", "The addition of cross-sentence argument candidates imposes great challenges for modeling.", "To reduce the number of candidates, we adopt a two-step approach, decomposing the problem into two sub-problems: argument head-word detection and head-to-span expansion.", "Evaluated on the recent RAMS dataset (Ebner et al., 2020), our model achieves overall better performance than a strong sequence labeling baseline.", "We further provide detailed error analysis, presenting where the model mainly makes errors and indicating directions for future improvements.", "It remains a challenge to detect implicit arguments, calling for more future work on document-level modeling for this task.", "Event argument detection is a key component in the task of event extraction.", "It resembles semantic role labeling (SRL) in that the main target is to find argument spans to fill the roles of event frames.", "However, event arguments can go beyond sentence boundaries: there can be non-local or implicit arguments at the document level.", "Figure 1 shows such an example: for the purchase event, which is triggered by the word bought, its money argument appears in the previous sentence.", "Implicit arguments have been under-explored in event extraction.", "Most previous systems (Li et al., 2013; Chen et al., 2015; Nguyen et al., 2016; Wang et al., 2019) only consider local arguments in the same sentence as the event trigger.", "While incorporating implicit arguments requires corresponding annotations, few exist in most of the widely used event datasets, like ACE2005 (LDC, 2005; Walker et al., 2006) and RichERE (LDC, 2015).", "There are several annotation efforts for implicit arguments in SRL, including G&C (Gerber and Chai, 2010, 2012), SemEval-2010 (Ruppenhofer et al., 2009, 2010), and 80Days (Feizabadi and Pado, 2014).", "Yet most are performed with 
different ontologies such as Nombank ( G&C ) and FrameNet ( SemEval-2010 and 80Days ); on different domains (e.g. novels); and in smaller scales ( G&C and 80Days only cover 10 types of predicates).", "The lack of annotations poses challenges to train and transfer implicit argument models for event extraction.", "Recently, Ebner et al. (2020) create the Roles Across Multiple Sentences ( RAMS ) dataset, which covers multi-sentence implicit arguments for a wide range of event and role types.", "They further develop a span-based argument linking model and achieve relatively high scores.", "However, they mainly explore a simplified setting that assumes the availability of gold argument spans.", "We extend their work and explore the more challenging full detection problem that predicts argument spans among all possible candidates.", "The difficulty of the full problem is highlighted in Figure 1.", "Both 3000 dollars and 1000 dollars are good candidates for the money role of the purchase event, but the selections are different given different contexts.", "When considering all possible candidate spans that may occur in any sentences, their quadratic number poses great challenges for the detection.", "Inspired by dependency-based SRL (Surdeanu et al., 2008; Hajic et al., 2009), we take the syntactical head-words as the proxy for full argument spans, hypothesizing that the head-words can contain enough information to fill the argument roles.", "Based on this, we adopt a two-step approach: first detecting the head-words of the arguments, and adopting a second step of head-to-span expansion.", "Actually, this type of two-step setup is not uncommon in prior work of information extraction, including entity detection (Lin et al., 2019), coreference resolution (Peng et al., 2015) and document-level pseudo-coreference (Jauhar et al., 2015; Liu et al., 2016).", "By considering only individual tokens in the detection step, the system only needs to handle a candidate space whose size 
scales linearly with the number of tokens instead of quadratically.", "Under the same setting of fine-tuning a BERT (Devlin et al., 2019) encoder, we show the effectiveness of our model by obtaining overall better results than a strong sequence-labeling model.", "We further provide detailed error analysis, showing that the main difficulties of the task lie in non-local and non-core arguments.", "Our analysis shows that the implicit argument task is quite challenging, calling for more future work on document-level semantic understanding for this task.", "The goal of event argument detection is to create labeled links between argument spans and the predicate (event trigger).", "Recent state-of-the-art solutions for sentence-level SRL perform the detection in an end-to-end setting, such as span-based models (He et al., 2018; Ouchi et al., 2018) and sequence-labeling models (He et al., 2017; Shi and Lin, 2019).", "However, span-based models face great challenges when considering arguments across sentence boundaries, since their computational complexity grows quadratically to handle O(N^2) span candidates given N tokens.", "While traditional sequence-labeling models can run in linear time, they are less flexible and extensible in complex scenarios like overlapping mentions and multiple roles for one mention.", "In this work, we take a two-step approach that decomposes the problem explicitly into two sub-problems, based on the hypothesis that head-words can usually capture the information of the mention spans.", "Figure 1 illustrates the three main modules of our model:", "1) BERT-based Encoder,", "2) Argument Head-Word Detector, and", "3) Head-to-Span Expander.", "Our encoding module is a BERT-based contextualized encoder.", "The input contains a predicate word (or occasionally a span), which triggers an event, together with its multi-sentence context.", "We refer to the sentence containing the event trigger as the center sentence .", "We
concatenate the tokens within the 5-sentence window (the window size used in RAMS annotation) around the center sentence, and feed them to BERT to obtain the contextual representation e of each token.", "In addition, we add special token-type-id indicators: tokens of the event trigger are assigned 0, other tokens in the center sentence get 1, and tokens in surrounding sentences get 0 (we overload 0 because pre-trained BERT only has two types of token type id; nevertheless, the trigger words are still distinguishable since they appear inside the center sentence, which is separated from other sentences).", "We only adopt the indicators when fine-tuning BERT, since the pre-trained BERT originally uses them as segment ids.", "Instead of directly deciding argument spans, we first identify the head-words of the arguments.", "The hypothesis is that the head-word is able to represent the meaning of the whole span.", "In this way, this sub-problem mimics a token-pairwise dependency-parsing problem.", "Following (Dozat and Manning, 2017, 2018), we adopt a biaffine module to calculate Pr_r(p, c): the probability of a candidate word c filling an argument role r in the frame governed by a predicate p.", "We first take the contextualized representations of the candidate (e_c) and the predicate (e_p), which are calculated by BERT as described in Section 2.1.", "Biaffine_r further gives the pairwise score based on these representations, and Pr_r(p, c) is then Pr_r(p, c) = exp Biaffine_r(e_p, e_c) / Σ_{c′ ∈ C ∪ {ε}} exp Biaffine_r(e_p, e_{c′}), where the normalization is done over the argument candidate set C (plus the null candidate ε, whose score is fixed to 0) for each role, following (Ebner et al., 2020; Ouchi et al., 2018).", "During training, we use the cross-entropy loss to guide the network to pick head-words of gold arguments (or ε if there are no arguments for this role).", "If there are multiple arguments for one role, we view them as individual instances and sum the losses.", "At inference time, we simply pick the maximally-scored argument (or
ε) for each role.", "The second module expands each head-word of the argument to its full span.", "We view it as a combination of left and right boundary classification problems.", "Taking the left-expanding scenario (L) as an example, for each head-word h, we generate a set of candidate spans by adding words one by one on the left, up to K words (we empirically set K = 7), and calculate the probability of word b being the boundary as follows: Pr_L(h, b) = exp MLP_L(e_h, e_b) / Σ_{b′ ∈ (h−K, h]} exp MLP_L(e_h, e_{b′}).", "Here, the input to the Multi-layer Perceptron (MLP) is again the contextualized representations as depicted in Section 2.1.", "During training, we minimize cross-entropy losses on the left and right respectively.", "At test time, we expand to the maximally-scored boundary words on both sides.", "We conduct all experiments on the RAMS (v1.0) dataset and focus on the event argument detection task: given (gold) event triggers and their multi-sentence contexts, predicting the argument spans from raw input tokens.", "Following (Ebner et al., 2020), we only use gold event types in the type-constrained decoding (TCD) setting.", "Throughout our experiments, we adopt the pre-trained bert-base-cased model.", "We train all the models for at most 20 epochs.", "If fine-tuning BERT, we set the initial learning rate to 5e-5; otherwise, it is set to 2e-4.", "We jointly train our model; our implementation is publicly available at https://github.com/zzsfornlp/zmsp .", "Since head-words are not annotated, we apply a simple rule: utilizing predicted dependency trees, we heuristically pick the word that has the smallest arc distance to the dependency root as the head.", "Ties are broken by choosing the rightmost one.", "There are cases where this procedure does not give the perfect head, or where there is no single head-word for a span (e.g., in multi-word expressions or conjunctions).", "Nevertheless, we find this strategy works well in
practice.", "Setting To compare our model with span-based models, we first evaluate in the same setting as (Ebner et al., 2020), which assumes gold argument spans.", "We directly apply the head rule on the gold spans and consider the head-words as candidates.", "We also adopt the same BERT setting: learning a linear combination of layers 9, 10, 11 and 12, and applying neither the special input indicators nor fine-tuning.", "Results Table 1 compares our results with the reported results of the span-based model from (Ebner et al., 2020).", "The results show that the head-word approach can get comparable results to the span-based counterpart.", "This matches our hypothesis that head-words with contextualized embeddings contain sufficient information about their surrounding words, making them reasonable alternatives to full argument spans.", "Setting This setting considers all arguments from any spans in the multi-sentence context.", "Unless otherwise noted, here we use the last layer of BERT and apply fine-tuning for the whole model.", "We compare with a strong BERT-based BIO-styled sequence labeling model (Shi and Lin, 2019).", "We adopt a modified version from AllenNLP and retrain it on RAMS with similar settings: adopting special input indicators and fine-tuning BERT.", "For arguments that have multiple role labels, we simply concatenate the labels as a new class.", "Results Table 2 shows the main results for full argument detection.", "Since the criterion of full-span matching may be too strict, we also report head-word based F1 scores by evaluating solely on head-word matches (obtained using the same head rules).", "The results show that our head-word based approach gets better results on average without type-constrained decoding and significantly better results after adopting type-constrained decoding with gold event types.", "Our head-driven approach is also flexible and easily extensible to more complex scenarios like nested mentions or
multiple roles, while keeping the linear complexity.", "Ablation Table 3 lists the ablation results on the encoder.", "The results show that the BERT encoder contributes much to the performance of our full model.", "(Our modified sequence-labeling baseline is taken from https://github.com/allenai/allennlp/blob/b89ff098372656b674ec71457dda071222fd05ae/allennlp/models/srl_bert.py .)", "Fine-tuning BERT and the special indicator inputs can provide further improvements.", "On Sentence Distances Table 4 lists the performance breakdown on different sentence distances between arguments and triggers (distances d = -2, -1, 0, 1, 2 account for 3.6%, 7.5%, 82.8%, 4.0%, and 2.1% of the arguments, respectively).", "As opposed to the relatively consistent performance in the gold span setting, as shown in (Ebner et al., 2020), we notice a dramatic performance drop on non-local arguments.", "There may be two main reasons:", "1) data imbalance, since non-local implicit arguments appear much less frequently (only around 18% in RAMS ) than local ones;", "2) lack of direct syntax signals, making the connections between the implicit arguments and event triggers much weaker than the local ones.", "On Argument Roles We also investigate performance breakdowns on different argument roles.", "The results are shown in Figure 2, where we take the top-20 frequent roles to get more robust results.", "We can observe that our model performs better on core roles such as communicator , employee and victim (with F1 > 50), but struggles on non-core roles, like instrument , origin and destination , with F1 scores of around 20 to 30.", "The F1 scores correlate well (with Pearson and Spearman correlation coefficients of 0.64 and 0.70, respectively) with the local percentages: the more often a role appears locally around the event trigger, the better results it can obtain.", "These patterns are not surprising if we consider the likely underlying causes.", "The non-core arguments are not closely related to the event trigger, and thus can appear more freely at other places (or sometimes even
be omitted), leading to a lower local percentage and also making them harder to detect.", "To investigate in detail what types of errors the model makes, we sample 200 event frames from the development set and manually compare our model's predictions with the gold annotations.", "Overall, there are 459 annotated arguments and 442 predicted ones.", "For both annotated and predicted arguments, we assign them to one of seven categories, and the results are listed in Table 5.", "(Table 5 lists each category with a description, an example, and its count; e.g., Correct: 348 (38.6%), and Span: unimportant span mismatch, as in The [monument] artifact to fallen Soviet sailors in Limbazi was demolished by activists.)", "Here, the Span errors denote unimportant span mismatches, and they account for nearly 9% of all items.", "If we ignore these errors, the performance can reach around 47%, which roughly matches the automatically evaluated Head-F1 scores.", "In some way, this supports our intuition to adopt a two-step approach, since the decisions on span ranges may be separated from the core problem of argument detection, where head-words can be reasonable representatives.", "Another major source of errors comes from Coref., which is not surprising since the same entities can have multiple mentions at the document level.", "Our analysis indicates that this is a problem that should be further investigated for both modeling and evaluation.", "Another notable type of error is frame mismatch (Frame).", "In the main setting (without type-constrained decoding), our model neither utilizes nor predicts event frame types, meaning that the frame information purely comes from the trigger words.", "Therefore, roles belonging to other event frames may be predicted.", "Finally, the Others category includes the cases where we cannot find obviously intuitive patterns.", "We would identify most of them as the more difficult cases, whose error breakdown follows similar patterns to the overall ones as shown in Figure
2.", "In this work, we propose a flexible two-step approach for implicit event argument detection.", "Our head-word based approach effectively reduces the candidate size and achieves good results on the RAMS dataset.", "We further provide detailed error analysis, showing that non-local and non-core arguments are the main difficulties.", "We hope that this work can shed light on this problem and inspire future work along this line of research.", "This research was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program.", "We thank the three anonymous reviewers for their helpful comments." ]
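The two-step procedure described above (pick a head-word per role against a null candidate with fixed score 0, then expand to the maximally-scored span boundaries) can be sketched in pure Python. The score dictionaries below stand in for the biaffine and boundary-MLP outputs; they are illustrative assumptions, not the paper's implementation:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def pick_head(candidate_scores):
    # Step 1: choose the argument head-word for one role.
    # `candidate_scores` maps token index -> (assumed) biaffine score.
    # A null candidate with fixed score 0.0 competes as well, mirroring
    # the epsilon option; returning None means "no argument for this role".
    options = [(None, 0.0)] + sorted(candidate_scores.items())
    probs = softmax([score for _, score in options])
    best = max(range(len(options)), key=lambda k: probs[k])
    return options[best][0]

def expand_span(head, left_scores, right_scores):
    # Step 2: expand the head-word to a full span by independently
    # picking the maximally-scored left and right boundary tokens.
    left = max(left_scores, key=left_scores.get)
    right = max(right_scores, key=right_scores.get)
    return (left, right)
```

Since softmax is monotonic, taking the highest-probability option is the same as taking the highest raw score, so the null candidate wins exactly when every real candidate scores below 0.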
[ "objective", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "result", "result", "abstain", "other", "other" ]
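Each record in this excerpt stores two parallel sequences, one sentence string per rhetorical-role label. A minimal sketch of how they align (the field names "sentences" and "labels" are assumed from the layout above):

```python
def align(record):
    # Pair each sentence with its label from the parallel sequence.
    # Assumes equal-length 'sentences' and 'labels' lists, as in the
    # records shown in this excerpt.
    sents, labels = record["sentences"], record["labels"]
    if len(sents) != len(labels):
        raise ValueError("parallel sequences differ in length")
    return list(zip(sents, labels))
```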
[ "Cross-lingual named entity recognition is one of the critical problems for evaluating transfer learning techniques on low-resource languages.", "Knowledge distillation between source and target languages using pre-trained multilingual language models has shown its superiority in transfer.", "However, existing cross-lingual distillation models merely consider the potential transferability between two identical single tasks across both domains.", "Other possible auxiliary tasks that could improve the learning performance have not been fully investigated.", "In this study, based on the knowledge distillation framework and multitask learning, we introduce a similarity metric model as an auxiliary task to improve cross-lingual NER performance on the target domain.", "Specifically, an entity recognizer and a similarity evaluator are first trained in parallel as two teachers from the source domain.", "Then, two tasks in the student model are supervised by these teachers simultaneously.", "Empirical studies on three datasets across 7 different languages confirm the effectiveness of the proposed model.", "Named entity recognition (NER for short) refers to identifying entity types, i.e.
location, person, organization, etc., in a given sentence.", "The use of deep neural networks such as Bi-LSTM-CRF (Lample et al., 2016) and Bi-LSTM-CNN (Chiu and Nichols, 2016) has brought significant performance gains on this task.", "However, deep neural networks rely heavily on a large amount of labelled training data, and the annotation process is expensive and time-consuming.", "This situation is even more severe for zero-resource languages.", "Figure 1: Comparison between previous cross-lingual NER models. Directly : direct model transfer; TSL : teacher-student learning model; MTMT : the proposed multiple-task and multiple-teacher model. NER / NER_tea : learned NER model for the source language; NER_stu : learned NER model for the target language; SIM_tea : learned similarity model for the source language; {X, Y}_src : labeled data in the source language; {X}_tgt : unlabeled data in the target language; {X, P}_tgt : target-language data labeled with probabilities; {X, S}_tgt : target-language data labeled with entity similarity scores.", "With the help of transfer learning (Ruder et al., 2019) and multilingual BERT (mBERT for short) (Devlin et al.,
2019), it is possible to transfer the annotated training samples or trained models from a rich-resource domain to a zero-resource domain.", "Many studies have been conducted to solve this cross-lingual NER problem.", "Existing models can be separated into three categories: shared feature space based, translation based, and knowledge distillation based.", "Shared feature space based models exploit language-independent features, but lack the domain-specific features of the target language (Tsai et al., 2016; Wu and Dredze, 2019; Keung et al., 2019).", "Translation based models generate pseudo-labeled target-language data to train the cross-lingual NER model, but noise from the translation process restrains their performance (Mayhew et al., 2017; Xie et al., 2018; Wu et al., 2020b).", "Knowledge distillation based models train a student model using soft labels of the target language (Wu et al., 2020a,b; Chen et al., 2021; Liang et al.,
.", "Also, the tokens Viena and Madrid are recognized correctly as LOC type using the same English model mentioned above.", "Then Arvalo can be recognized correctly as LOC type under the supervisory signal using the similarity between Viena and Madrid .", "To leverage the similarity between the tokens of the source languages, we design an multiple-task and multiple-teacher model (short as MTMT, as shown in Figure 1), which helps the NER learning process on the target languages.", "Specifically, we first introduce the knowledge distillation to build entity recognizer and similarity evaluator teachers in the source language and transfer the learned patterns to the student in the target language.", "In the student model, we then borrow the idea of multitask learning to incorporate a similarity evaluation task as an auxiliary task into the entity recognition classifier.", "During the student learning process, we input unlabelled samples from the target languages into the entity recognizer and evaluator, and take output pesudo labels as supervisory signals for these two tasks in the student model.", "Note that a weighting strategy is also provide therein to take into consideration of the reliability of the teachers.", "We validate the model performance on the three commonly-used datasets across 7 languages and the experimental results show the superiority of our presented MTMT model.", "distillation framework for cross-lingual named entity recognition and develop a teaching and learning procedure under this framework.", "We present a novel multiple-task and multiple-teacher model that introduces an entity similarity evaluator to boost the performance of student recognizer on target languages.", "We conduct extensive experiments on 7 languages compared with state-of-the-art baselines and the results confirm the effectiveness of the presented model.", "Our approach is closely related to the existing works on cross-lingual NER, knowledge distillation, and siamese 
network.", "Cross-Lingual NER aims to extract entities from a target language but assumes only source language is annotated.", "The existing models can be categorized to", "a) Shared feature space based models,", "b) Translation based models,", "c) Knowledge distillation based models.", "Shared feature space based models generally train a language-independent encoder using source and target language data (Tsai et al., 2016).", "Recently, the pre-trained multilingual language model is effective to address the challenge (Devlin et al., 2019).", "Moreover, some research introduces new components on top of the mBERT by directly transferring the model learned from the labeled source language to that of target languages (Keung et al., 2019).", "The performance is still weak due to the lack of annotations of target languages.", "Translation based models generally generate pseudo-labeled target data to alleviate target data scarcity.", "For example, (Wu et al., 2020b; Zhang et al., 2021) gain an improvement by translating the labeled source language to the target language word-by-word.", "Our model achieves considerable improvement by learning entity similarity in target language data without translation.", "Knowledge distillation based models include a teacher model and a student model (Wu et al., 2020c).", "The teacher model is trained on the labeled source language.", "The student model learns from the soft label predicted by the teacher model on unlabeled target language data.", "Therefore, the student model can capture the extra knowledge about target languages.", "In our work, the student model not only learns the recognizer teacher knowledge, but also 171 learns the entity similarity knowledge inspired by multi-task learning.", "Siamese Network is originally introduced by (Bromley et al., 1994) to treat signature verifica-tion as a matching problem.", "It has been successfully applied to transfer learning such as one-shot image recognition (Koch et al., 2015), text 
similarity (Neculoiu et al., 2016).", "However, there is a dilemma to adapt the siamese network to token-level recognition tasks such as NER.", "Siamese network assumes the input is a pair, and the output is a similarity score.", "To handle this issue, we reconstruct the data to pair format.", "To the best of our knowledge, we are the first to learn the entity similarity by siamese network.", "In this section, we introduce our framework and its detailed implementation.", "Our framework is consist of two models: teacher training model learned from the source language and teacher-student distillation learning model learned from the target language.", "In the teacher training model, there are two sub-models, i.e. an entity recognizer teacher and a similarity evaluator teacher.", "These two models are two parallel tasks, wherein the entity recognition teacher focuses on identifying the named entities and the similarity evaluator teacher is to decide if two tokens are in the same type.", "We then present a teacher-student distillation learning model to learn from the two learned teacher models simultaneously.", "We note that, in this learning process, such a knowledge distillation makes the student model combine the advantages of both source language patterns of entity recognition and entity similarity evaluation.", "During the learning process, the samples from the target language are fed into the teacher model and the outputs are taken as the supervisory signal for two tasks in the student model.", "To guarantee the student learning performance, we assign weights for each supervisory signal correspond to the output confidence of teacher sub-models.", "We argue that the student entity recognition task and the student entity similarity evaluation task improve the representation learning of the student encoder in the siamese structure.", "Following standard practice, we formulate cross-lingual NER as a sequence labeling task.", "Given a EvaluatorTeacher E n c o d e r C o 
s E n c o d e r Labeled Source-Language Pairwise Data L i n ea r E n c o d e r RecognizerTeacher Labeled SourceLanguage Data reconstruct training training CELoss BCELoss Figure 2: The training process of teacher models.", "sentence x = { x i } Li =1 with L tokens, a NER model produces a sequence of labels y = { y i } Li =1 , where x i is the i -th token and y i is the corresponding label of x i .", "In the source language, we denote the labeled training data as D Strain = { ( x , y ) } and test data as D Stest .", "In the target language, we denote the unlabeled train data as D Ttrain = { x } and the test data as D Ttest .", "Formally, our goal is to train a model with D Strain and D Ttrain to perform well on D Ttest .", "Here we first consider the training of two teacher models.", "For every two tokens, we define Entity Similarity Metric as a score which is the probability that two tokens belong to the same entity type.", "We aim to find entity similarity to help the cross-lingual NER model in the target language.", "It is a non-trivial task since we lack golden labels to help us distinguish target named entities.", "To address this challenge, we propose a binary classifier called similarity evaluator to leverage the labeled source language data for similarity prediction.", "Our similarity evaluator model, inspired by siamese network (Koch et al., 2015), are able to acquires more powerful features via capturing the invari-ances to transformation in the input space.", "Figure 2 illustrated the two teacher models training.", "The following subsections will illustrate the two teacher models sequentially.", "Since the cross-lingual NER task, we unitize multilingual mBERT (Wu and Dredze, 2019) as basic sequence feature extractor backbone to derive the sequence embedding representation throughout this paper.", "And a linear classifier with softmax upon the pre-trained mBERT output.", "The model network 172 structure could be formulated as, h = mBERT ( x ) y i = softmax 
( W h i + b ) where h = { h i } Li =1 and h i denotes the output of the pre-trained mBERT that corresponds to the input token x i .", "y i denotes the predicted probability distribution for x i .", "W and b are trainable parameters.", "For some sentence sample ( x , y ) D Strain and an entity token query index i , the loss function is, LER ( x , y , i ) = LCE ( y i , y i ) We train this entity recognition teacher model on the source lingual training corpus D Strain = { ( x , y ) } directly.", "To leverage the entity similarity to boost the unsupervised cross-lingual NER performance, we will present our entity pairs construction method and the siamese network model in the following.", "Entity Similarity Pairs Construction According to entity labels, we randomly select sentences pair < x , x (cid:48) > with their some token pair < x i , x (cid:48) j > and associated labels < y i , y (cid:48) j > in D Strain , to form the siamese supervision training dataset, DS siam train = { ( x , x (cid:48) , i, j, t ) } where the target t = 1 indicates y i = y (cid:48) j , and 0 otherwise.", "And the testing entity pairs DS siam test is constructed likewisely.", "Siamese Entity Similarity Network Our similarity backbone model is a siamese neural network with mBERT as feature extraction layer.", "Wherein h and h (cid:48) represent latent sequences encoding features derived by the two symmetric twins with respect to input sentence x and x (cid:48) respectively.", "The inter-entities similarity is measured on the hidden representations h i and h (cid:48) j of the tokens queried by the entity indices < i, j > on the sequences representations.", "The cosine function operator is added to compute on the entity token latent vectors' distance, s to measure the similarity between each siamese twin, which is fed into a single sigmoid output unit for target t estimation.", "More precisely, for a specific entity pair ( x , x (cid:48) , i, j, t ) DS siam train , the siamese network could be 
formulated as, h = mBERT ( x ) , h (cid:48) = mBERT ( x (cid:48) ) t ( x , x (cid:48) , i, j ) = (cos( h i , h (cid:48) j )) CELoss BCELoss Loss SimilarityScore SimilarityScore E n c o d e r C o s L i n ea r Student E n c o d e r L i n ea r E n c o d e r EvaluatorTeacher E n c o d e r C o s E n c o d e r RecognizerTeacher Teacher Inference Student Training Unlabeled Target-Language Pairwise Data L i n ea r CELoss Figure 3: Teacher-student distillation learning.", "where cos is the cosine similarity metric function, is the sigmoid activation function, t [ ( 1) , (1)] denotes the predicted similarity of two queried tokens pair < x i , x (cid:48) j > .", "Larger t value indicates higher similarity between the two queried entities tokens.", "The loss function of the similarity prediction can be formulate as, LSIM ( x , x (cid:48) , i, j, t ) = LBCE ( t, t ) .", "Finally, we can train the siamese entity similarity evaluator on DS siam train , and evaluate the performance on test dataset DS siam test .", "Together with entity recognizer model, this entity similarity evaluator are used as teachers in following knowledge distillation learning process, and transfer knowledge from source to target lingual corpus.", "In this section, we consider transferring the named entity type and similarity knowledge learned on labeled source language corpus to unlabeled target language NER task.", "To this end, we propose a knowledge distillation learning process to train a target language student NER model with its supervisory signals mimicked by the entity type prediction probability by the entity recognizer teacher model and entity representation similarity target by the entity siamese similarity evaluator teacher model.", "Based on the original unlabeled target sentence training data D Ttrain , we again construct unlabeled target-language siamese pairwise entity data DT sim train = { ( x T , x (cid:48) T , i, j ) } , with the sentence pair < x T , x (cid:48) T > randomly sample from 
D Ttrain and the entity token indices pair < i, j > uniformly sampled from the sentences therein.", "The mBERT is also used as an encoder for the sentence siamese pair, and the entity token feature is queried from the latent sequence encoding representation.", "Specifically, for a sentence pair ( x T , x (cid:48) T , i, j ) DT sim train , the student model transform them as follows, h T = mBERT ( x T ) y T i = softmax ( W h T i + b ) h (cid:48) T = mBERT ( x (cid:48) T ) y (cid:48) T j = softmax ( W h (cid:48) T j + b ) t T ( x T , x (cid:48) T , i, j ) = (cos( h T i , h (cid:48) T j )) Then for a specific sentence pair sample in the target siamese dataset, the student loss function has three breaches, LER ( x T , y S , i ) , LER ( x (cid:48) T , y (cid:48) S , j ) , and LSIM ( x T , x (cid:48) T , i, j, t S ) .", "Note that supervision information y S , y (cid:48) S , and t S are taught by the three teacher models.", "Summering over all the samples in DT sim train = { ( x T , x (cid:48) T , i, j ) } , the total student model training loss takes form, L = (cid:88) ( x T , x (cid:48) T ,i,j ) D T sim train ( 1 LER ( x T , y S , i ) + 2 LER ( x (cid:48) T , y (cid:48) S , j ) + LBCE ( t T ( x T , x (cid:48) T , i, j ) , t S )) where 1 , 2 , , and are weights in loss function which are set to make the student model learns less noisy knowledge from teachers.", "The weights are set as follows: 1 ( 2 ) is an increasing function concerning the output of the entity recognizer teacher as shown in Figure 4. And is set such that it is high when the output of the entity similarity teacher is close to 0 or 1 , and it is low when the output is close to 0 .", "5 .", "indicates consistency level between the outputs from two teacher models, e.g. 
for two input tokens, if the output from the entity similarity teacher is high, but the similarity level computed from the outputs of the entity recognizer teacher is low, then their consistency level is low.", "We want the student model to learn from the two teachers as follows: the higher the prediction of the entity recognizer teacher (and the further from 0.5 the prediction of the entity similarity teacher, the higher the consistency level), the more accurate the prediction is, and thus the more attention the student model pays to the input tokens, and vice versa.", "Therefore, we heuristically devise the three weight schedules as functions of the inputs,", "α(·) = (max(y^T_i))^2, β = (2 t̂^T(x^T, x′^T, i, j) − 1)^2, γ = 1 − |σ(cos(y^T_i, y′^T_j)) − t̂^T(x^T, x′^T, i, j)|. 4 Experiment. In this section, we evaluate our multiple-task and multiple-teacher model for cross-lingual NER and compare our model with a series of state-of-the-art models.", "We conducted experiments on three benchmark datasets: CoNLL2002 (Tjong Kim Sang, 2002), CoNLL2003 (Tjong Kim Sang and De Meulder, 2003) and WikiAnn (Pan et al., 2017).", "CoNLL2002 includes Spanish and Dutch, CoNLL2003 includes English and German, and WikiAnn includes English and three non-Western languages: Arabic, Hindi, and Chinese.", "Each language is divided into a training set, a development set and a test set.", "All datasets were annotated with four entity types: LOC, MISC, ORG, and PER.", "Following (Wu and Dredze, 2019), all datasets are annotated using the BIO entity labelling scheme.", "To imitate the zero-resource cross-lingual NER case, following (Wu and Dredze, 2019), we used English as the source language and the other languages as target languages.", "In cross-lingual NER, the training set of the target language, without entity labels, is also available when training the
model.", "We trained the model with the labeled training set of the source language and evaluated the model on the test set of each target language.", "Tables 1 and 2 show the statistics of all datasets.", "We use PyTorch 1.7.1 to implement our model.", "Table 1 (Statistics of CoNLL; Train/Dev/Test): English-en (CoNLL-2003): 14,987/3,466/3,684 sentences, 23,499/5,942/5,648 entities; German-de (CoNLL-2003): 12,705/3,068/3,160 sentences, 11,851/4,833/3,673 entities; Spanish-es (CoNLL-2002): 8,323/1,915/1,517 sentences, 18,798/4,351/3,558 entities; Dutch-nl (CoNLL-2002): 15,806/2,895/5,195 sentences, 13,344/2,616/3,941 entities. All of the feature encoders mentioned in this paper use the", "pre-trained mBERT model (Devlin et al., 2019) in HuggingFace Transformers, which has 12 Transformer blocks, 12 attention heads, and 768 hidden units.", "We set our hyperparameters empirically following (Wu et al., 2020c) with some modifications.", "We do not freeze any layers and we use the output of the last layer as our hidden feature vector.", "We set the batch size to 32, the maximum sequence length to 128, the dropout rate to 0.2, and we use Adam as the optimizer (Kingma and Ba, 2014).", "For the training of the recognition teacher model and the similarity teacher model, we set the learning rates to 1e-5 and 5e-6, respectively.", "For knowledge distillation, we use a learning rate of 1e-6 for the student model's training.", "Note that if a word is divided into several subwords after tokenization, then only the first subword is considered in the loss function.", "Following (Tjong Kim Sang, 2002), we use the entity-level F1-score as the evaluation metric.", "Moreover, we conduct each experiment 5 times and report the mean F1-score.", "Tables 3 and 4 report the zero-resource cross-lingual NER results of different models on 6 target languages.", "Wiki (Tsai et al., 2016) introduces a language-independent model building on cross-lingual wikification for cross-lingual NER.", "WS (Ni et al., 2017)
presents two weakly supervised approaches for cross-lingual NER.", "TMP (Jain et al., 2019) leverages machine translation to improve annotation projection approaches to cross-lingual NER.", "BERT-f (Wu and Dredze, 2019) applies mBERT to cross-lingual NER.", "AdvCE (Keung et al., 2019) improves upon mBERT via adversarial learning for cross-lingual NER.", "TSL (Wu et al., 2020c) proposes a teacher-student learning model for cross-lingual NER.", "Unitrans (Wu et al., 2020b) unifies data transfer and model transfer for cross-lingual NER.", "AdvPicker (Chen et al., 2021) proposes an adversarial discriminator for cross-lingual NER.", "RIKD (Liang et al., 2021) develops a reinforced iterative knowledge distillation for cross-lingual NER.", "TOF (Zhang et al., 2021) transfers knowledge from three aspects for cross-lingual NER.", "It can be seen that our model outperforms the state-of-the-art models.", "Specifically, compared with the remarkable RIKD, AdvPicker, and Unitrans, which also use knowledge distillation but ignore the entity similarity knowledge, our model obtains significant and consistent improvements in F1-score ranging from 0.23 for German[de] to 6.81 for Arabic[ar].", "That demonstrates the benefits of our proposed MTMT model, compared to direct model transfer (Wu and Dredze, 2019).", "Note that BERT-f performs better than our model on the Chinese dataset due to their re-tokenization of the dataset.", "Moreover, compared with the latest models TOF, RIKD, and Unitrans, our model requires much lower computational costs for both translation and iterative knowledge distillation, while reaching superior performance.", "For a fair comparison, we compare our model against the version of TOF w/o continual learning (Zhang et al.,
2021),", "RIKD w/o IKD (Liang et al., 2021) and Unitrans w/o translation (Wu et al., 2020b), as reported in their papers.", "To demonstrate the effectiveness of our approach, we designed the following ablation studies.", "Table 5 presents the results.", "(1) MTST, which merges the multiple teachers into a single teacher.", "That is, the teacher model has the same neural network structure as the student model.", "This causes a performance drop across all languages, since the combined single teacher cannot provide the complementary signals of the two separate teachers.", "(2) MTMT w/o weighting, which sets α(·), β, and γ all to 1 in the student learning loss.", "It can be seen that the performance decrease in terms of F1-score ranges from 0.45 for Dutch(nl) to 0.98 for Spanish(es), which validates that weighting the loss can bring more confident knowledge to the student model.", "(3) MTMT w/o similarity, which removes the similarity teacher model.", "In this case, our approach degrades into the single teacher-student learning model as in TSL (Wu et al., 2020a).", "Without the similarity knowledge fed into the student model, the performance drops significantly.", "We give a case study to show that the failed cases of baseline models can be corrected by our model.", "We try to bring up insights on why the proposed multiple-task and multiple-teacher model works.", "The proposed MTMT model can help to correct labels using the Entity Similarity defined in Section 3.2.", "Specifically, if there is a set of tokens in which every two of them have a high Entity Similarity score, and one of the tokens is predicted to have a distinct label while the other tokens have identical labels, then the one with the distinct label is likely predicted wrongly and is corrected by the student model to the label of all the other tokens.", "As shown in Table 6, in example #1, the entity recognizer teacher fails to identify Arévalo as B-ORG type, while the student model can correctly
predict it.", "The reason is that the entity recognizer teacher correctly predicts Viena (Madrid) as B-LOC type, and the similarity evaluator teacher predicts Viena (Madrid) to have a high similarity score (0.7157, 0.7156) with Arévalo.", "The student learns from both teachers and predicts the correct label for Arévalo.", "Examples #2 and #3 present the same results with different sentences.", "This section investigates the effect of the embeddings of the two different teacher models.", "It can be seen that the embedding distribution of the student model is close to that of the similarity evaluator teacher, as illustrated in Figure 5. We conjecture that the student model captures similarity knowledge from the similarity evaluator teacher, i.e., examples of the same class tend to cluster and examples of different classes tend to separate in the embedding distribution.", "This validates that the proposed MTMT model not only transfers cross-lingual NER knowledge from the source language, but also learns the similarity knowledge of the target-language data.", "In this section, we evaluate the effectiveness of the loss weighting in student learning from a quantitative perspective.", "All of the following experiments are conducted on Spanish(es) data.", "For the analysis of α, we calculate the F1-score in different prediction probability intervals of the entity recognizer teacher, and find that the recognizer teacher tends to predict more correctly in higher probability intervals, as illustrated in Figure 6a.", "Therefore, the student model is better suited to the target language, as it learns fewer low-confidence misrecognitions for the target language.", "For the analysis of β, we observe that the F1-score increases as the entity similarity score moves from 0.5 toward either side, 0 and 1, in Figure 6b.", "The encoder of the student model obtains the clustering information of the target language with the help of β.", "For the analysis of γ, we consider the consistency of the recognition results and similarity scores from the teachers.", "The F1-score and
similarity score of the teachers are both higher in the higher intervals, as shown in Figure 6c.", "The student model learns less from unreasonable results, and it can make more accurate entity recognition for the target language.", "In this paper, we propose an unsupervised multiple-task and multiple-teacher model for cross-lingual NER.", "The student model learns two source-language patterns: entity recognition and entity similarity evaluation.", "Moreover, to guarantee the student's learning performance, we also propose a weighting strategy that takes into consideration the reliability of the teachers.", "Our experimental results show that the proposed model yields significant improvements on six target language datasets and outperforms the existing state-of-the-art approaches.", "This work is supported partly by the Fundamental Research Funds for the Central Universities and by the State Key Laboratory of Software Development Environment." ]
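The siamese similarity scoring used by the evaluator teacher in the record above, t̂ = σ(cos(h_i, h′_j)) trained with a binary cross-entropy loss, can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code; the mBERT token encodings are stood in for by ordinary vectors:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cosine(u, v):
    # cosine similarity between two token representation vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def similarity_score(h_i, h_j):
    # t_hat = sigmoid(cos(h_i, h'_j)); bounded in [sigmoid(-1), sigmoid(1)]
    return sigmoid(cosine(h_i, h_j))

def bce_loss(t_hat, t):
    # binary cross-entropy between predicted similarity t_hat and target t
    eps = 1e-12
    return -(t * math.log(t_hat + eps) + (1.0 - t) * math.log(1.0 - t_hat + eps))
```

Identical encodings give the maximum score σ(1) ≈ 0.73 and orthogonal encodings give exactly 0.5, matching the bounded range [σ(−1), σ(1)] noted in the text.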
[ "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "abstain", "result", "objective", "objective", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "other" ]
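The weight schedules α, β, and γ described in the record above, which down-weight noisy teacher signals during distillation, can be sketched as follows. This is a minimal illustration of the stated formulas, not the released implementation; teacher outputs are passed in as plain probability lists:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def alpha(y):
    # increasing in the recognizer teacher's confidence: (max probability)^2
    return max(y) ** 2

def beta(t_hat):
    # high when the similarity teacher's output is near 0 or 1, low near 0.5
    return (2.0 * t_hat - 1.0) ** 2

def gamma(y_i, y_j, t_hat):
    # consistency between the similarity implied by the recognizer teacher's
    # outputs and the similarity teacher's score; 1 means fully consistent
    return 1.0 - abs(sigmoid(cosine(y_i, y_j)) - t_hat)
```

The per-sample student loss then scales the two recognition terms by α and the similarity term by βγ, so the student attends most to samples on which the teachers are confident and mutually consistent.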
[ "We propose TuringAdvice, a new challenge task and dataset for language understanding models.", "Given a written situation that a real person is currently facing, a model must generate helpful advice in natural language.", "Our evaluation framework tests a fundamental aspect of human language understanding: our ability to use language to resolve open-ended situations by communicating with each other.", "Empirical results show that today's models struggle at TuringAdvice, even multibillion parameter models finetuned on 600k in-domain training examples.", "The best model, a finetuned T5, writes advice that is at least as helpful as human-written advice in only 14% of cases; a much larger non-finetunable GPT3 model does even worse at 4%.", "This low performance reveals language understanding errors that are hard to spot outside of a generative setting, showing much room for progress.", "Language models today are getting ever-larger, and are being trained on ever-increasing quantities of text.", "For an immense compute cost, these models like T5 (Raffel et al., 2019) and GPT3 (Brown et al., 2020) show gains on a variety of standard NLP benchmarks, often even outperforming humans.", "Yet, when a giant model like T5 generates language, we observe clear gaps between machine-level and human-level language understanding even after it has been finetuned for the task at hand.", "Consider Figure 1, in which a woman asks for advice.", "She is assigned to dissect an animal for her class project, but has extreme anxiety about dead animals and her teacher refused to give her another assignment.", "Humans can respond with helpful advice, reflecting our unique ability of real-world language use: to communicate and tackle open-ended issues.", "(Figure 1 excerpt: I'd send a short email to the next higher-up authority figure, ideally a counselor.)", "The helpful advice in this example but not the only one possible suggests that she send a short email to her guidance counselor.", "On the other
hand, not only is T5's advice unhelpful, it also reveals key misunderstandings of the situation.", "It seems to believe that the student is asking the teacher to do a class project involving dead animals.", "This reading comprehension error is particularly strange, as T5 outperforms humans on a variety of reading comprehension benchmarks.", "Others in the community have observed similar issues, raising concerns about what today's benchmark datasets measure (Yogatama et al., 2019; Kryscinski et al., 2019; McClelland et al., 2019; Gardner et al., 2019).", "We argue that there is a deep underlying issue: a gap between how humans use language in the real world, and what benchmarks today can measure.", "Today's dominant paradigm is to study static datasets, and to grade machines by the similarity of their output with predefined correct answers.", "For example, we score multiple choice exams by how often the correct answers are chosen, and evaluate generative tasks like machine translation by similarity with respect to correct translations.", "However, when we use language in the real world to communicate with each other, such as when we give advice or teach a concept to someone, there is rarely a universal correct answer to compare with, just a loose goal we want to achieve.", "We introduce a framework to narrow this gap between benchmarks and real-world language use.", "We propose to evaluate machines by their success in using language to (1) communicate with humans in (2) tackling complex, open-ended, real-world situations.", "Our goal is a machine that, like a human, can generate language that is useful and helpful.", "Doing so necessarily requires a deep understanding of language and the world, as per a line of thought that the complete meaning representation is one that suffices to complete a task (Artzi et al., 2013).", "As a case study of our framework, we introduce TuringAdvice as a new grand challenge for AI systems.", "A machine reads a situation written by
a person seeking advice, like Figure 1, and must then write advice that is helpful to the advice-seeker.", "Like a Turing Test (Turing, 1950), we establish a simple condition required for a model to 'pass': model-generated advice must be at least as helpful to the advice-seeker as human-written advice.", "We make our challenge concrete by introducing a new dataset, RedditAdvice, and an accompanying leaderboard.", "We tie our dataset to the Reddit community, which resolves two additional sources of bias.", "First, Reddit users are intrinsically motivated, seeking advice about highly complex real issues, which past work suggests differ from the hypothetical issues that crowd workers might come up with (e.g. Kwiatkowski et al., 2019; Gurari et al., 2018).", "Second, we make our dataset dynamic, not static: models are evaluated over Reddit situations posted over the previous two weeks at the time of submission.", "Models therefore, like humans, must generalize to new situations and patterns of language.", "incredibly challenging for NLP models.", "Today's largest finetunable model, T5 with 11 billion parameters, produces advice that is preferable to human-written advice 14.5% of the time after being finetuned on 600k examples.", "GPT3, an even larger model with 175 billion parameters that was not released for finetuning, does even worse at 4%.", "Even more concerning, our evaluation finds that it often generates hateful and toxic language.", "We also study our task from the perspective of today's standard 'core' NLP tasks.", "Broadly, we find that machines frequently confuse who is who, are self-contradictory, or seem to miss important world knowledge.", "However, these mistakes tend not to fall into the neat categories defined by standard task definitions.", "We address this by introducing diagnostic questions, which systematically measure these language understanding errors.", "In summary, our paper makes three contributions.", "First, we introduce a new framework for
measuring language understanding through directly tackling real-world language problems.", "Second, we introduce TuringAdvice as a new challenge for AI systems, along with a dynamic dataset and leaderboard.", "Third, we connect our task to existing atomic NLP tasks, introducing a new setting that reveals where progress is still needed.", "We propose to evaluate machines by their success at real-world language use: using language to communicate with a human, in response to a naturally occurring situation, in order to achieve a desired outcome.", "This is how educators often measure (human) language understanding of a second language: by how well the learner can use the language (Council of Europe, 2001).", "Our approach is also inspired by Wittgenstein's notion of semantics, that meaning is use: language is grounded in our desire to make sense of one another and cooperate to meet our needs (Wittgenstein, 1953).", "As machines do not have humanlike needs or desires, we propose to evaluate machines' success at a task by how well they serve a human who is interested in the outcome.", "For example, if a machine orders food on my behalf, then I can evaluate it based on whether I enjoy the dish it ordered.", "Though this requires careful task selection in order to make things feasible for current models, as we will show in Section 3, it results in a powerful and reliable human evaluation.", "Our evaluation relates to pragmatics in NLP, where communication is modeled also through listeners and speakers (Golland et al., 2010; Frank and Goodman, 2012).", "One approach is to introduce a communication game, with an explicit objective.", "For example, Wang et al.
(2016) study a blocks world where humans give commands to a block-placing machine.", "The machine is then graded on accuracy.", "Our proposed evaluation instead covers complex everyday scenarios faced by a human, where the objective is to help them as much as possible.", "Pragmatics can also be studied through machine-machine communication; e.g., through emergent language (Lazaridou et al., 2017).", "Recent work uses pretrained question-answering models to evaluate summarization models (Chen et al., 2018; Scialom et al., 2019; Eyal et al., 2019; Vasilyev et al., 2020).", "However, ensuring that machines communicate in standard English is difficult, as there is usually a more efficient machine-language coding scheme for the task (Kottur et al., 2017).", "Quality of generations.", "The first approach studies generative tasks like chit-chat dialogue or story-writing, and measures the inherent quality of generations, often through attributes such as sensibleness and specificity (e.g., Venkatesh et al., 2018; Hashimoto et al., 2019; Adiwardana et al., 2020).", "This approach is orthogonal to ours: though these attributes might be desirable, they are often insufficient to guarantee success at a task.", "Correctness.", "The second (and perhaps more common) approach is to evaluate models through correctness over static datasets.", "For example, machines can be graded by the similarity of their generated translation to correct translations, 1 or by how often they choose the correct answer on a multiple choice exam.", "Many goal-oriented dialogue and semantics tasks are also evaluated in this way, as a model is evaluated by whether it makes the correct API call, or produces a correct parse.", "Since many language tasks cannot be evaluated through correctness, researchers often introduce 1 Models submitted to the 2019 Conference on Machine Translation were evaluated (by humans) on how well the model's translations agreed with either (1) human-written translations, or (2)
original source text (Barrault et al., 2019).", "proxy tasks that are easy to evaluate, while (hopefully) correlating with the underlying true task.", "For example, SWAG (Zellers et al., 2018) is a multiple-choice proxy task and dataset introduced to study the true task of commonsense reasoning.", "However, there are gaps between datasets for proxy tasks (e.g. multiple choice), and the core tasks they seek to represent (e.g. commonsense reasoning), which we discuss in the next sections.", "When we reduce a complex language task to a simplified setup, with a small label space (like multiple-choice classification), we run the risk of introducing artifacts and biases: patterns that can be exploited in the simplified setup, but that are not representative of the true task (Gururangan et al., 2018; Zellers et al., 2019a).", "Artifacts can enable machines to even outperform humans at the final benchmark, without solving the underlying task.", "While the problem of artifacts has recently taken the spotlight in the NLP community, partially because large Transformers (Vaswani et al., 2017) excel at picking up on artifacts, there is a deeper underlying issue.", "One way to view simplified tasks is that in order to correctly map inputs X to labels Y, a machine must learn a set of attributes A that are representative of the 'true' task.", "We can upper-bound the information contained by A through the information bottleneck principle of Tishby et al. (1999).", "An efficient model minimizes the following, for some β ≥ 0: min_{p(a|x)} I(X; A) − β I(A; Y), (1) where I is mutual information.", "In other words, the model will learn attributes A that maximally compress the inputs X (minimizing I(X; A)), while also remaining good predictors of the labels Y (maximizing I(A; Y)).", "However, the label prediction term is bounded by the information (or entropy, H) of the label space: I(A; Y) = H(Y) − H(Y|A) ≤ H(Y).
(2) Thus, for a task with a small label space, there is no guarantee that a model will learn high-information-content attributes. Models are in fact encouraged to overfit to dataset artifacts, and to unlearn linguistically useful information that is not directly relevant to predicting Y (Pereira, 2000). An alternate approach is to make datasets harder adversarially, so as to have fewer artifacts (Zellers et al., 2018, 2019a; Le Bras et al., 2020). However, it might be impossible to make a dataset with no artifacts, or to know if one has been created. Our proposal, to evaluate models by their real-world language use, addresses the information bottleneck issue in two ways. First, when we use language in the real world, the mapping between possible inputs and outputs is often highly complex. For example, the space of possible advice is vast, and many pieces of advice might be equally helpful given a situation. Second, we directly tackle language problems, without introducing a correctness-based proxy that machines might overfit to. 2.3 Static datasets in a dynamic world To evaluate performance on a real-world task by means of a dataset, we (implicitly) assume that the dataset is a good representation of the world (Torralba and Efros, 2011). This might be questionable when it comes to real-world language use, as static datasets necessarily capture historic patterns of language. For instance, syntactic understanding is often evaluated using the Penn Treebank, with news articles from 1989 (Marcus et al., 1993). However, the world is constantly evolving, along with the language that we use. To bridge this gap, we propose to evaluate machines by their interactions with humans in the present. Models therefore must learn to perform the underlying language task, even for novel situations, rather than fitting to the historic distribution of a fixed test set.
We make this notion concrete in the next section, where we introduce a dynamic dataset and leaderboard for evaluating advice. 3 TuringAdvice: a New Challenge for Natural Language Understanding As a case study of our framework, we introduce TuringAdvice, a new challenge task for AI systems to test language understanding. The format is simple: given a situation expressed in natural language, a machine must respond with helpful advice. To pass the challenge, machine-written advice must be at least as helpful to the advice-seeker as human-written advice, in aggregate. We focus on advice for a few reasons. First, advice-giving is both an important and an everyday task. People ask for and give advice in settings as diverse as relationship advice and tech support (Bonaccio and Dalal, 2006). Thus, we as humans have inherent familiarity with the task, and what it means for advice to be helpful, making it easy to evaluate, as we later show empirically. Moreover, because there are many internet communities devoted to advice-giving, training data is plentiful. Second, the framework of advice-giving allows us to study subtasks such as reading comprehension and natural language inference (Section 5.3); we argue both of these are needed to consistently give good advice. Learning to recognize advice has recently been studied as an NLP task on its own (Govindarajan et al., 2020), though we are not aware of past work in learning to generate advice. 3.1 RedditAdvice: A dynamic dataset for evaluating advice We propose to evaluate models dynamically, through new situations and advice that are posted to Reddit. We call our dynamic dataset RedditAdvice. Many of Reddit's subcommunities (or 'subreddits') are devoted to asking for and giving advice, with subreddits for legal, relationship, and general life advice. 2 During evaluation time, we will retrieve new situations from Reddit as a new test set for models.
Workers on Mechanical Turk then grade the model-written advice versus the Reddit-endorsed human-written advice. 3.1.1 How advice-giving works on Reddit Suppose a Reddit user faces an issue that they are seeking advice about. First, they write up the situation and post it to an advice-oriented subreddit. Users then reply to the situation, offering advice. Importantly, any user can 'upvote' or 'downvote' the advice as well as the situation itself, changing its score slightly. Top-scoring advice is deemed by the wisdom of the crowd as being the most helpful. 3 3.1.2 The ideal evaluation through Reddit? In a sense, human advice-givers are 'evaluated' on Reddit by the score of their advice, representing how well their advice has been received by the community. Similarly, the ideal model evaluation might be to post advice on Reddit directly. If the model writes helpful advice, it should be upvoted. 2 We use advice from the following subreddits: Love, Relationships, Advice, NeedAdvice, Dating_Advice, Dating, Marriage, InternetParents, TechSupport, and LegalAdvice. 3 This is somewhat of a simplification, as other factors also influence what gets upvoted (Anderson et al., 2012; Lakkaraju et al., 2013; Muchnik et al., 2013; Jaech et al., 2015). Figure 2: Crowdsourcing workflow. Mechanical Turk Workers are given a situation, and two pieces of advice. First, they choose which is more helpful (here, B; Definitely/Slightly A or B). Second, they rate the helpfulness of the worse advice (A; Slightly helpful, Not helpful, or Dangerous); last, they answer a diagnostic question (Meaning vs. Writing problem, or Possibly vs. Never helpful in a different situation). However, there is a significant ethical problem with this approach.
The users who post advice questions are real people, with real problems. A user might read advice that was originally written by a machine, think it was human-endorsed, and do something harmful as a result. For this reason, we take an alternate crowdsourcing approach. 3.1.3 A crowdsourced, hybrid evaluation through Mechanical Turk We propose a hybrid approach for dynamic evaluation of models. While the situations and reference advice come from Reddit, we hire workers on Mechanical Turk to rate the relative helpfulness of machine-written advice. Not only is this format more ethical, it also lets us collect diagnostic ratings, allowing us to quantitatively track the natural language understanding errors made by machines. We made our crowdsourcing task as fulfilling as possible, using popular situations from Reddit and pitching the work in terms of helping people. We received feedback from many workers that our tasks were entertaining and fun, suggesting that our workers are to some degree intrinsically motivated. 3.1.4 Mechanical Turk annotation setup In a single round of evaluation, we retrieve 200 popular Reddit situations that were posted in the last two weeks. For each situation, we retrieve the top-rated advice from Reddit, and generate one piece of advice per model. Workers on Mechanical Turk then compare the helpfulness of the model-generated advice with human-written advice, and provide diagnostic ratings. We show an overview of our Mechanical Turk task in Figure 2. A worker is given a situation and two pieces of advice. One is the top-scoring advice from Reddit, and the other is model-generated advice; the worker is not told which is which. The worker first chooses the more helpful piece of advice, then provides diagnostic information for the less helpful advice, rating it Slightly helpful, Not helpful, or Dangerous. If the worse piece of advice was Slightly helpful, they choose whether it is worse due to a Meaning problem or a Writing problem.
Otherwise, they choose if the worse advice could be Possibly helpful in some other situation, or Never helpful in any situation. Three workers rate each model-situation pair, and ratings are combined using a majority vote. We follow best practices on Mechanical Turk, using a qualification exam, paying workers at least $15 per hour, and giving feedback to workers. Still, evaluation is highly economical at $1.86 per example-model pair, or roughly $400 per model evaluated. 3.2 A large static dataset for training We present RedditAdvice2019, a large static dataset for training advice-giving models. Because today's models rely heavily on data for finetuning, we collect data that is in the exact same format as RedditAdvice, yet we expand our selection criteria, optimizing for recall rather than precision (Supp A.2). In total, we extract 616k pieces of advice, over 188k situations. To mirror the dynamic nature of the evaluation, in which models are evaluated on situations posted in 2020 and beyond, we split our dataset into static training and validation sets by date. 4 4 Experimental Results on RedditAdvice In this section, we report results from one round of dynamic evaluation on RedditAdvice. We evaluate the following strong NLP models and baselines: a. Rule-based: a templated system to give legal, relationship, or life advice. The system first randomly chooses an empathetic sentence from ten choices, for example I'm sorry you're facing this.", "It then chooses a random piece of advice that is loosely related to the situation's topic; we infer this from the subreddit the situation was posted on.", "For example, for 4 Our training set contains 600k pieces of advice from July 2009 to June 14, 2019; validation contains 8k from June 14 to July 9th 2019.", "LegalAdvice the model might write I'd suggest getting a lawyer immediately. 
b.", "TF-IDF retrieval: for a new situation, we compute its TF-IDF bag-of-words vector and use it to retrieve the most similar situation from the training set.", "We then reply with the top-scoring advice for that situation.", "c.", "Grover-Mega (Zellers et al., 2019b): a left-to-right transformer model with 1.5 billion parameters.", "Grover was pretrained on news articles with multiple fields, perhaps making it a good fit for our task, with multiple fields of context (like the subreddit, date, and title).", "Our situation-advice pairs are often quite long, so we adapt Grover for length, pretraining it on sequences of up to 1536 characters.", "d.", "T5 (Raffel et al., 2019): a sequence-to-sequence model with a bidirectional encoder and a left-to-right generator, with 11 billion parameters.", "T5 was trained on a large dataset of cleaned web text.", "At the time of writing, T5 is the top-scoring model on the GLUE and SuperGLUE benchmarks (Wang et al., 2019b,a), scoring above human performance on GLUE and near human performance on SuperGLUE.", "e.", "GPT3 (Brown et al., 2020): a left-to-right transformer model with 175 billion parameters.", "GPT3 must be prompted to generate advice, since it has not been released for finetuning.", "We cannot provide few-shot examples in the prompt due to the length of situation-advice pairs; we instead mimic the formatting of a website quoting from Reddit (Appendix B.5).", "Last, to quantify the measurement error of our evaluation, we additionally evaluate: f.", "the second-highest rated Reddit advice for each situation.", "We send this advice through the same pipeline as machine-written advice.", "We finetune all models (except GPT3) and generate using Nucleus Sampling (Holtzman et al., 2020); more details in Appendix B. 
In our study, we exclude purely bidirectional models, such as BERT (Devlin et al., 2019).", "While these models can be made to generate text, these generations are usually worse than those of left-to-right models (Wang and Cho, 2019).", "T5 also tends to outperform them, even on discriminative tasks.", "In Figure 3, we show overall results for one evaluation trial, which featured 200 situations posted on Reddit from October 28 to November 7, 2020.", "As a key metric for measuring the relative usefulness of model-written advice, we evaluate the frequency with which workers prefer the Reddit-written reference advice over the model-written advice.", "If a model's advice was just as helpful as human advice in aggregate, then that model would score 50%.", "Model performance is quite low.", "The best model, T5-11B, scores 14.5%, outperforming a smaller Grover-Mega (4.5%); GPT3 does worse at 4.0%.", "The rule-based and TF-IDF retrieval baselines are not competitive, at 2.5% and 4.0% accuracy respectively.", "As an additional comparison to the 50% upper bound, the second-highest scoring Reddit advice scores 41%.", "This suggests that our workers often prefer the same advice as Reddit users.", "To investigate the measurement error of our evaluation, in Figure 4 we report the statistical significance between pairs of models; details about how this is computed are in Appendix C. 
We observe a large gap in performance between T5 and the other baselines.", "For example, its improvement over Grover-Mega is 10%, which is highly statistically significant.", "On the other hand, the differences in performance between other models are more minor: GPT3 does not outperform TF-IDF, and though it outperforms the rule-based system by 1.5%, it is only somewhat statistically significant.", "Overall, the statistical significance results suggest that our evaluation can stably rank model performance.", "This, along with the finding that model performance is low on our task, suggests that there is ample room for growth on RedditAdvice.", "So far, we have shown that we are able to reliably evaluate models in our dynamic setup, and that doing so results in model performance that is significantly lower than human performance.", "To break down what this gap in performance means, we show a qualitative example in Figure 5. A user is asking for online legal advice about being stuck at work for their entire 4pm-midnight shift with no eating allowed due to COVID-19.", "The top-rated Reddit advice understands this situation and then offers advice, suggesting the advice-seeker organize alongside other workers, as New Jersey doesn't require employers to give any meal breaks.", "Machine advice seems to misunderstand the issue.", "T5 asks if there is a reason that you can't leave the property, even though this reason is stated in the situation.", "GPT3's advice is self-contradictory; it also makes something up about a restaurant in the building.", "As part of our evaluation, we wish to quantitatively measure problems with machine-written advice.", "Recall that in our crowdsourcing setup (Section 3.1.3), three workers select which advice is legaladvice: NJ Work will not let us eat in the building due to covid outbreak.", "We can't leave the property for breaks.", "They have a tent outside for us to eat in but it's getting cold out.", "I am a manager at my work; part of my 
responsibilities is to stay on the property during my entire shift we are given a 1 hour paid lunch break that we do not have to clock out for.", "However due to recent covid outbreak the new policy is no eating in the building.", "They had set up a tent outside for us to eat in but it is starting to get cold out as well there are not heaters in there.", "My question is can they legally do this to us?", "Also I work 4:00 p.m. to midnight shift.", "Top Reddit advice: So by \"clock out for\" you are not a salaried exempt employee?", "Do the workers under you have a union?", "Aside from the cold, New Jersey doesn't require employers to give *any* meal breaks for employees 18 or older, so they could take away the breaks and the tent if they wanted, and just say \"no eating\" (though water is required).", "What is your relationship with the other managers, and room to figure this out, ask for heaters, etc.?", "Rule-Based: If I were you, I'd get a lawyer immediately.", "Definitely less helpful / Slightly helpful / Meaning problem. TF-IDF Retrieval: You were blackout drunk, which means that you were not capable of consent.", "...If anyone should feel violated, it's you.", "Definitely less helpful / Not helpful / Possibly helpful. Grover-Mega: If you are required to be on the premises, they are required to provide you with a place to eat.", "https://www.nj.gov/labor/es/faq.html#13 Definitely less helpful / Not helpful / Possibly helpful. T5-11B: If your shift is from 4pm to midnight, is there a reason that you can't leave the property?", "Definitely less helpful / Not helpful / Never helpful. GPT3-175B: A place of employment is considered the \"premises\".", "You can't just assume that because you can't walk out the door that you can eat wherever you want.", "This is work hours.", "You're not in your home.", "The restaurant, or at least the food, is all provided for you for your benefit.", "But if there are other people there, then I could see how it could be a safety 
issue.", "better, and then annotate problems with the worse piece of advice.", "We found workers had high agreement during the diagnostic annotation.", "5 In Figure 6, we show the distribution of the ratings for model-written versus human-written advice.", "Machine-written advice that was 5 For classifying machine-written advice as 'helpful' versus 'not helpful' or 'dangerous' (combining the two latter categories into one), we have κ = 0.689. For breaking down helpful advice into a 'meaning problem' versus a 'writing problem', we have Cohen's κ = 0.613; for rating unhelpful advice as 'possibly helpful' versus 'never helpful', we have κ = 0.602. [Figure 6 chart: frequency (%) of advice ratings for TF-IDF Retrieval, GPT3-175B, T5-11B, and the second-best Reddit advice, broken down into: preferred over top-rated Reddit advice; slightly helpful (with a writing problem); slightly helpful (with a meaning problem); not helpful (possibly helpful elsewhere); not helpful (never helpful elsewhere); dangerous.] Figure 6: Distribution of ratings for three models: TF-IDF retrieval, GPT3, and T5, along with ratings for the second-best rated Reddit advice. Though deep generators like GPT3 and T5 are often preferred over the retrieval baseline, they also often write advice that would never be helpful (33% GPT3, 13% T5), and that is racist, sexist, or otherwise dangerous (10% GPT3, 3% T5). not preferred over human-written advice can have the following ratings. It can be rated as Slightly helpful (but rated as worse mainly due to a Meaning problem or a Writing problem), as Not helpful, or Dangerous. The diagnostics show several patterns. First, all models frequently commit natural language understanding errors, such as internal contradiction. Because of this, we find that TF-IDF bag-of-words retrieval is competitive with large generators. 
While retrieved advice is often irrelevant (66% of the time), it is almost never complete gibberish, as it comes from top-scoring advice. Only 10% of workers rated this advice as Not helpful for any situation, less than T5. Second, they suggest that models struggle even more without finetuning. A GPT3 model with careful prompting generates language that is Dangerous 10% of the time. These qualitative and quantitative results confirm a pattern observed by many others, that large language models like GPT3 often generate explicitly racist and sexist language out-of-the-box (Sheng et al., 2019; Gehman et al., 2020; Bender et al., 2021, among others). We explore this further in Supplemental F. This is perhaps worrying, since GPT3 is presently being commercialized. 5.2 A Leaderboard for Advice Evaluation So far, we have shown results from one evaluation round; a second is in Supplemental D. We propose a dynamic leaderboard to keep that evaluation ongoing, at rowanzellers.com/advice . Users submit a model API to be dynamically evaluated. Each new model, along with the highest-rated previously-evaluated model, will be evaluated for an additional round using the same approach. The cost of each evaluation is reasonable (Section 3.1.4), which we, the authors, will pay in the short term. An alternative strategy requires submitters to pay the Mechanical Turk fees themselves; this model was used for the HYPE leaderboard in computer vision (Zhou et al., 2019). 
5.3 Relation to existing NLP tasks Shared core tasks such as reading comprehension and natural language inference are of considerable interest to the NLP community.", "Many datasets have been proposed for these tasks, and progress on them is often measured through auto-gradeable correctness metrics.", "However, large models have started to outperform humans on these datasets, raising doubt that further progress on them brings us closer to human-level language understanding.", "We argue two things: first, that many NLP tasks are necessary components of giving advice, and second, that because giving advice remains far from solved, these tasks are also far from solved.", "In Appendix F, we study problems with advice from T5-11B from the point of view of existing NLP tasks.", "For instance, machine advice often contradicts itself, suggesting that today's systems struggle with the general task of natural language inference.", "We have made these diagnostics publicly available to enable progress on automatically spotting these mistakes.", "We introduced new methodology for evaluating language tasks, reducing the gap between benchmarks and the real world.", "We also introduced a new challenge for the community, TuringAdvice, with an accompanying dataset and dynamic leaderboard.", "might have on society.", "In this paper, we presented a sketch of NLP models helping people who need advice on sensitive topics, which could be a measurable goal for the field.", "At the same time, we do not claim that our approach is a panacea.", "There are almost certainly better non-technical solutions to ensure mentorship and legal advice for all (Green, 2019).", "Moreover, there are significant dual-use risks with models that understand language (Hovy and Spruit, 2016; Green and Viljoen, 2020).", "Our evaluation measures some risks of generative models, such as the tendency to generate toxic language, but more work in this area is needed.", "Thanks to the Reddit users who participate in 
its advice subreddits, from asking for help to writing (and voting on) helpful advice.", "Thanks to the Mechanical Turk workers who performed the annotation for our experiments.", "Thanks also to the three anonymous reviewers, along with Katharina Reinecke, Oren Etzioni, Hannah Rashkin, Maarten Sap, Maxwell Forbes, Jesse Thomason, Daniel Khashabi, Gabriel Ilharco, Swabha Swayamdipta, and Yonatan Bisk, for feedback.", "This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), and the NSF-GRFP No.", "DGE-1256082." ]
[ "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "objective", "objective", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "method", "result", "abstain", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other" ]
[ "How to learn a better speech representation for end-to-end speech-to-text translation (ST) with limited labeled data?", "Existing techniques often attempt to transfer powerful machine translation (MT) capabilities to ST, but neglect the representation discrepancy across modalities.", "In this paper, we propose the Speech-TExt Manifold Mixup (STEMM) method to calibrate such discrepancy.", "Specifically, we mix up the representation sequences of different modalities, and take both unimodal speech sequences and multimodal mixed sequences as input to the translation model in parallel, and regularize their output predictions with a self-learning framework.", "Experiments on the MuST-C speech translation benchmark and further analysis show that our method effectively alleviates the cross-modal representation discrepancy, and achieves significant improvements over a strong baseline on eight translation directions.", "Speech-to-text translation (ST) aims at translating acoustic speech signals into text in a foreign language, which has wide applications including voice assistants, translation for multinational video conferences, and so on.", "Traditional ST methods usually combine automatic speech recognition (ASR) and machine translation (MT) in a cascaded manner (Sperber et al., 2017; Cheng et al., 2018; Sperber et al., 2019; Dong et al., 2019b; Zhang et al., 2019a; Lam et al., 2021b), which might suffer from error propagation and high latency.", "To break this bottleneck, end-to-end ST systems attracted much * indicates corresponding authors.", "attention recently (Wang et al., 2020b,c; Dong et al., 2021a,b; Han et al., 2021; Inaguma et al., 2021a; Tang et al., 2021a), which learn a unified model to generate translations from speech directly.", "Some recent work has shown great potential for end-to-end speech translation, even surpassing traditional cascaded systems (Ye et al., 2021; Xu et al., 2021).", "As a cross-modal task, a major challenge in training an 
end-to-end ST model is the representation discrepancy across modalities, which means there is a modality gap between speech representations and text embeddings, as shown in the left sub-figure of Figure 1.", "Existing approaches often adopt a sophisticated MT model to help the training of ST, with some techniques like pretraining (Wang et al., 2020c; Ye et al., 2021; Xu et al., 2021), multitask learning (Ye et al., 2021; Han et al., 2021; Tang et al., 2021a) and knowledge distillation (Liu et al., 2019; Gaido et al., 2020; Inaguma et al., 2021b; Tang et al., 2021a).", "Although these methods have achieved impressive improvements on the ST task, they are not necessarily the best way to leverage the MT knowledge.", "Considering that during training, the input of the translation module includes only speech sequences or text sequences, the lack of multimodal contexts makes it difficult for the ST model to learn from the MT model.", "Inspired by recent studies on some cross-lingual (Lample and Conneau, 2019; Liu et al., 2020a; Lin et al., 2020) and cross-modal (Li et al., 2021b; Zhou et al., 2020; Dong et al., 2019a) tasks, we suggest that building a shared semantic space between speech and text, as illustrated in the right sub-figure of Figure 1, has the potential to benefit the most from the MT model.", "In this paper, we propose the Speech-TExt Manifold Mixup (STEMM) method to bridge the modality gap between text and speech.", "In order to calibrate the cross-modal representation discrepancy, we mix up the speech and text representations as the input and keep the target sequence unchanged.", "Specifically, STEMM is a self-learning framework, which takes both the speech representation and the mixed representation as parallel inputs to the translation model, and regularizes their output predictions.", "Experimental results show that our method achieves promising performance on the benchmark dataset MuST-C (Di Gangi et al., 2019a), and even outperforms a 
strong cascaded baseline.", "Furthermore, we found that our STEMM could effectively alleviate the cross-modal representation discrepancy, and project the two modalities into a shared space.", "In this section, we will begin with the basic problem formulation (Section 2.1) and introduce the model architecture (Section 2.2).", "Then, we introduce our proposed Speech-TExt Manifold Mixup (STEMM) in Section 2.3.", "Finally, we introduce our proposed self-learning framework with STEMM in Section 2.4 and present two mixup ratio strategies in Section 2.5.", "Figure 2 illustrates the overview of our proposed method.", "The speech translation corpus usually contains speech-transcription-translation triples, which can be denoted as D = {(s, x, y)}.", "Here s is the sequence of audio wave, x is the transcription in the source language, and y is the translation in the target language.", "End-to-end speech translation aims to generate the translation y directly from the audio wave s, without generating the intermediate transcription x.", "Inspired by recent works (Dong et al., 2021b; Xu et al., 2021) in end-to-end speech translation,", "we decompose the ST model into three modules: acoustic encoder , translation encoder , and translation decoder . The acoustic encoder first encodes the original audio wave into hidden states, which are fed into the translation encoder to learn further semantic information. Finally, the translation decoder generates the translation based on the output of the translation encoder .", "Acoustic Encoder As recent works (Ye et al., 2021; Han et al., 2021) show that Wav2vec2.0 (Baevski et al., 2020) can improve the performance of speech translation, we first use a pretrained Wav2vec2.0 to extract speech representations c from the audio wave s . 
We add two additional convolutional layers to further shrink the length of the speech representations by a factor of 4, denoted as a = CNN(c).", "Translation Encoder Our translation encoder is composed of N_e transformer (Vaswani et al., 2017) encoder layers, each of which includes a self-attention layer, a feed-forward layer, normalization layers,", "and residual connections. For the MT task, the input of the translation encoder is the embedding of the transcription, e = Emb(x). For the ST task, it is the output sequence of the acoustic encoder , a. The input can also be the multimodal mixed sequence produced by our proposed STEMM (see details in Section 2.3). Generally, for the input sequence, we obtain the contextual representations h(·) after N_e transformer (Vaswani et al., 2017) layers, which are fed into the translation decoder for predicting the translation.", "Translation Decoder Our translation decoder is composed of N_d transformer decoder layers, which contain an additional cross-attention layer compared with transformer encoder layers. For the input sequence, the cross-entropy loss is defined as:", "Pretrain-finetune We follow the pretrain-finetune paradigm to train our model. First, we pretrain the translation encoder and translation decoder with parallel transcription-translation pairs, derived from both the speech translation corpus and the external MT dataset. Also, the acoustic encoder is pretrained on large amounts of unlabeled audio data in a self-supervised manner. We combine those pretrained modules and finetune the whole model for ST.", "As we mentioned in Section 1, to alleviate the representation discrepancy due to the lack of multimodal contexts, we present the Speech-TExt Manifold Mixup (STEMM) method to mix up the sequence of speech representations and word embeddings. 
We first introduce STEMM in this section and later show how to use it to help the training of ST.", "Denote the sequence of sub-word embeddings as e = [e_1, e_2, ..., e_{|e|}] and the sequence of speech representations as a = [a_1, a_2, ..., a_{|a|}], where the sequence lengths usually follow |a| ≫ |e|. We first perform a word-level forced alignment between the speech and text transcriptions to determine when particular words appear in the speech segment. Formally, the aligner recognizes a sequence of word units w = [w_1, w_2, ..., w_T], and for each word w_i, it returns the start position l_i and end position r_i in the sequence of speech representations a. Meanwhile, we denote the corresponding sub-word span", "for word w_i as [x_{m_i} : x_{n_i}], with its embedding matrix [e_{m_i} : e_{n_i}], where m_i and n_i are the start and end positions in the sequence of sub-words. To mix up both sequences, for each word unit w_i, we choose either the segment of speech representations [a_{l_i} : a_{r_i}] or the sub-word embeddings [e_{m_i} : e_{n_i}] with a certain probability p, referred to as the mixup ratio in this paper.", "Note that in terms of the mixup representation sequence length, we have |e| ≤ |m| ≤ |a|. Considering that the positions of tokens have changed after mixup, we add positional encodings to the token embeddings. We further perform layer normalization to normalize the embeddings:", "where Pos(·) is the sinusoid positional embedding (Vaswani et al., 2017). Mixup((s, x), p) denotes the mixup sequence of speech s and text x with probability p, which is fed into the translation encoder for predicting the translation.", "With the help of our proposed STEMM, we are now able to access multimodal mixed sequences, in addition to the unimodal speech sequences. We integrate them into a self-learning framework. 
Specifically, we input both unimodal speech sequences and multimodal mixed sequences into the translation module ( translation encoder and translation decoder ). In this way, the translation of unimodal speech sequences focuses on the ST task itself, while the translation of multimodal mixed sequences is devoted to capturing the connections between representations in different modalities. Besides, we regularize the above two output predictions by minimizing the Jensen-Shannon Divergence (JSD) between the two output distributions,", "L_JSD(s, x, y, p) = Σ_{i=1}^{|y|} JSD{ p(y_i | y_{<i}, h(s)) ‖ p(y_i | y_{<i}, h(Mixup((s, x), p))) }, (5)", "where h(·) is the contextual representation output by the translation encoder . p(y_i | y_{<i}, h(s)) is the predicted probability distribution of the i-th target token given the speech sequence s as input, and p(y_i | y_{<i}, h(Mixup((s, x), p))) is that given the multimodal mixed sequence as input.", "When using our proposed STEMM, an important question is how to determine the mixup ratio p. Here we try two strategies: a static mixup ratio and an uncertainty-aware mixup ratio .", "Static Mixup Ratio We use the same mixup ratio p for all instances throughout the whole training process. We will show how we determined this important hyper-parameter in Section 4.3.", "Uncertainty-aware Mixup Ratio With this strategy, we determine the mixup ratio for each instance according to the prediction uncertainty of the ST task, defined as the average entropy of the predicted distributions of all target tokens:", "where U is a normalization factor which re-scales u to [0, 1], and σ(·) is a sigmoid function to prevent p from dropping too quickly.", "MuST-C We conduct experiments on the MuST-C (Di Gangi et al., 2019a) dataset. 
MuST-C is a multilingual speech translation dataset, which contains", "translations from English (En) to 8 languages: German (De), French (Fr), Russian (Ru), Spanish (Es), Italian (It), Romanian (Ro), Portuguese (Pt), and Dutch (Nl). It is currently one of the largest speech translation datasets, containing at least 385 hours of audio recordings from TED Talks, with their manual transcriptions and translations at the sentence level. We use the dev set for validation and the tst-COMMON set for test.", "MT Datasets Our model architecture allows us to utilize external parallel sentence pairs in large-scale machine translation datasets. Therefore, we incorporate data from WMT for En-De, En-Fr, En-Ru, En-Es, En-Ro, and OPUS100 1 for En-Pt, En-It, En-Nl, as pretraining corpora. The detailed statistics of all datasets included are shown in Table 1.", "Pre-processing For speech input, we use the raw 16-bit 16kHz mono-channel audio wave. To perform word-level forced alignment, we use the Montreal Forced Aligner 2 toolkit, whose acoustic model is trained with LibriSpeech (Panayotov et al., 2015). For text input, we remove the punctuation from the source texts for the ST dataset. Both source and target texts are case-sensitive. For each translation direction, we use a unigram SentencePiece 3 model to learn a vocabulary on the text data from the ST dataset, and use it to segment text from both ST and MT corpora into subword units. 
The vocabulary is shared for source and target with a size of 10k.", "Model Configuration Our model consists of three modules.", "For the acoustic encoder , we use Wav2vec2.0 (Baevski et al., 2020) following the base configuration, which is pretrained on audio data from LibriSpeech (Panayotov et al., 2015) without finetuning 4 .", "We add two additional 1-dimensional convolutional layers to further shrink the audio, with kernel size 5, stride size 2, padding 2, and hidden dimension 1024.", "For the translation encoder , we use N_e = 6 transformer encoder layers.", "For the translation decoder , we use N_d = 6 transformer decoder layers.", "Each of these transformer layers comprises 512 hidden units, 8 attention heads, and 2048 feed-forward hidden units.", "Training and Inference We train our model in a pretrain-finetune manner.", "During pretraining, we train the MT model, i.e., the translation encoder and translation decoder , with transcription-translation pairs.", "The learning rate is 7e-4.", "We train the model with at most 33k input tokens per batch.", "During finetuning, the learning rate is set to 1e-4.", "We finetune the whole model for up to 25 epochs to avoid overfitting, with at most 16M source audio frames per batch.", "The training early-stops if the loss on the dev set does not decrease for ten epochs.", "During both pretraining and finetuning, we use an Adam optimizer (Kingma and Ba, 2015) with β_1 = 0.", "9, β_2 = 0.", "98, and 4k warm-up updates.", "The learning rate decreases proportionally to the inverse square root of the step number after warm-up.", "The dropout is set to 0.1, and the value of label smoothing is set to 0.1.", "We use the uncertainty-aware mixup ratio strategy by default, and the mixup ratio p is set to 0.4 when using the static strategy.", "The weight of the JSD loss is set to 1.0.", "During inference, we average the checkpoints of the last 10 epochs for evaluation.", "We use beam search with a beam size of 5.", "We use sacreBLEU 5 (Post, 
2018) to compute case-sensitive detokenized BLEU (Papineni et al., 2002) scores and the statistical significance of translation results with paired bootstrap resampling (Koehn, 2004) for a fair comparison 6 .", "All models are trained on 8 Nvidia Tesla-V100 GPUs.", "We implement our models based on fairseq 7 (Ott et al., 2019).", "Baseline Systems We compare our method with several strong end-to-end ST systems including: 5 https://github.com/mjpost/sacrebleu 6 sacreBLEU signature: nrefs:1 | bs:1000 | seed:12345 | case:mixed | eff:no | tok:13a | smooth:exp | version:2.0.0 7 https://github.com/pytorch/fairseq Fairseq ST (Wang et al., 2020a), AFS (Zhang et al., 2020), DDT (Le et al., 2020), MTL (Tang et al., 2021b), Self-training (Pino et al., 2020), BiKD (Inaguma et al., 2021a), FAT-ST (Zheng et al., 2021a), JT-S-MT (Tang et al., 2021a), SATE (Xu et al., 2021), Chimera (Han et al., 2021) and XSTNet (Ye et al., 2021).", "Besides, we implement a strong baseline W2V2-Transformer based on Wav2vec2.0.", "It has the same model architecture as our proposed STEMM and is pretrained in the same way.", "The only difference is that it is only finetuned on the ST task, while we adopt a self-learning framework during finetuning.", "Comparison with End-to-end Baselines As shown in Table 2, our implemented W2V2-Transformer is a relatively strong baseline, which proves the effectiveness of the Wav2vec2.0 module and MT pretraining.", "Without external MT data, our method achieves an improvement of 1.0 BLEU (average over 8 directions) over the strong baseline, which proves our proposed self-learning framework could effectively improve the performance of the ST task.", "It even outperforms baselines with external MT data on En-Es, En-It, En-Ro, En-Pt, and En-Nl.", "When we introduce additional MT data, our method also yields a 0.8 BLEU improvement compared with the baseline.", "Note that our performance is slightly worse than XSTNet (Ye et al., 2021).", "However, our method is orthogonal 
to theirs, which focuses on the training procedure of the end-to-end ST model.", "We will investigate how to combine them in the future.", "Comparison with Cascaded Baseline We also implement a strong cascaded system, whose ASR part consists of a pretrained Wav2vec2.0 module and 6 transformer decoder layers, and whose MT part is the same as our pretrained MT module.", "Both the cascaded system and the end-to-end models are trained with the same data (D and D_MT).", "As shown in Table 3, the end-to-end baseline W2V2-Transformer is inferior to the cascaded system, but our method significantly outperforms it, which shows the potential of our STEMM method.", "Is Each Learning Objective Effective?", "As shown in Equation 6, our training objective contains three terms.", "Besides the cross-entropy objective L_CE(s, y) for speech translation, we investigate the effects of the other two auxiliary training objectives.", "As shown in Table 4, when we input the additional multimodal mixed sequence into the model and optimize the cross-entropy loss (Line 3), it already outperforms the baseline (Line 4) significantly.", "When we regularize the two output predictions with the JSD loss (Line 2), the performance is further boosted.", "The uncertainty-aware strategy reduces the cost of searching for the mixup ratio and achieves better performance.", "We present two different mixup ratio strategies in Section 2.5.", "To evaluate their impact, we conduct another ablation study on MuST-C En-De.", "We observe that the BLEU scores on the tst-COMMON set are 28.5 and 28.7 for the static strategy and the uncertainty-aware strategy, respectively.", "The uncertainty-aware strategy slightly improves performance and, more importantly, lowers the manual cost of searching for an optimal mixup ratio.", "When using the static mixup ratio strategy, it is important to choose the mixup ratio p.", "We constrain p to [0.0, 0.2, 0.4, 0.6, 0.
8] for experiments on the MuST-C En-De tst-COMMON set, as shown in Figure 3.", "When p = 0.0, the translation task with the mixed sequence as input degrades to the MT task.", "Interestingly, we find that self-learning with the MT task performs worse (i.e., yields the lowest BLEU) than self-learning with STEMM at any other mixup ratio.", "This confirms what we mentioned in Section 1: the representation discrepancy between speech and text makes the MT task an inferior boost to ST. Our method achieves the best performance at p = 0.4.", "To find a reasonable explanation, we conduct a more in-depth study of the representations of the speech, text, and mixup (STEMM) sequences.", "We plot the bivariate kernel density estimation based on the reduced 2-dim representations.", "We find that when p = 0.4, the mixup representation lies just between the representations of the speech and text sequences.", "That is why it calibrates the cross-modal representation discrepancy more easily and achieves the best ST performance.", "To examine whether our method alleviates the cross-modal representation discrepancy, we conduct an analysis of cross-modal word representations.", "As described in Section 2.3, for each word unit w_i, we identify the corresponding segment of the speech representation [a_{l_i} : a_{r_i}] and the text embedding [e_{m_i} : e_{n_i}].", "We define the word representation in each modality as follows: s_i = AvgPool([a_{l_i} : a_{r_i}]), (9) t_i = AvgPool([e_{m_i} : e_{n_i}]), (10) where AvgPool() denotes the average-pooling operation across the sequence dimension, and s_i and t_i denote the representation of word unit w_i in the speech and text modalities, respectively.", "We calculate the average cosine similarity between s_i and t_i over all word units w_i in the MuST-C", "Figure 4: The bivariate kernel density estimation visualization of the averaged sentence representations of the speech, text, and STEMM sequences after pretraining.", "En-De 
tst-COMMON set.", "As shown in Table 5, our method significantly improves the similarity of word representations across modalities over the baseline.", "We believe this is because, when training with our proposed STEMM, the speech segment and the text segment of a word appear in similar multimodal contexts, which leads to similar representations.", "We also show the visualization of an example in Figure 5, where we can observe that our method brings word representations from different modalities closer together compared with the baseline.", "One important contributor to our strong performance is the use of external MT data.", "Therefore, how the amount of MT data affects the final performance is an important question.", "We vary the amount of available external MT data during pretraining on the En-De direction.", "As shown in Figure 6, we observe a continuous improvement in BLEU scores as the amount of MT data increases, which shows that external MT data is helpful for improving ST. 4.6 Can the Final Model Still Perform the MT Task?", "Our model is first pretrained on the MT task and then finetuned for ST. An important question is whether there is a catastrophic forgetting problem during finetuning.", "We evaluate the model on the MT task and show the results in Table 6 (BLEU scores on the MT task on the MuST-C En-De tst-COMMON set: Pretrained MT 31.7; W2V2-Transformer 19.5; STEMM 31.5).", "We observe that when we only finetune the model on the ST task (W2V2-Transformer), the ability of text translation is largely forgotten.", "In contrast, when we use our self-learning framework during finetuning, even though there is no explicit MT task, the MT capability is still preserved.", "5 Related Work End-to-end ST To overcome the error propagation and high latency of cascaded ST systems, Bérard et al. (2016); Duong et al. 
(2016) demonstrated the potential of end-to-end ST without intermediate transcription, which has attracted much attention in recent years (Vila et al., 2018; Salesky et al., 2018, 2019; Di Gangi et al., 2019b,c; Bahar et al., 2019a; Inaguma et al., 2020).", "Since it is difficult to train an end-to-end ST model directly, training techniques such as pretraining (Weiss et al., 2017; Bérard et al., 2018; Bansal et al., 2019; Stoian et al., 2020; Wang et al., 2020b; Pino et al., 2020; Dong et al., 2021a; Alinejad and Sarkar, 2020; Zheng et al., 2021b; Xu et al., 2021), multi-task learning (Le et al., 2020; Vydana et al., 2021; Tang et al., 2021b; Ye et al., 2021; Tang et al., 2021a), curriculum learning (Kano et al., 2017; Wang et al., 2020c), and meta-learning (Indurthi et al., 2020) have been applied.", "To overcome the scarcity of ST data, Jia et al. (2019); Pino et al. (2019); Bahar et al. (2019b) proposed generating synthesized data from ASR and MT corpora.", "To overcome the modality gap, Han et al. (2021); Huang et al. (2021); Xu et al. (2021) further encode acoustic states that are more adaptive to the decoder.", "Previous works have noted that the modality gap between speech and text is one of the obstacles in the speech translation task, and to overcome this gap, one branch of work (Liu et al., 2020b; Dong et al., 2021b; Xu et al., 2021) introduced a second encoder on top of the conventional encoder-decoder model to extract semantic information from speech and text.", "Recently, Han et al. (2021) built a shared semantic projection module that simulates the human brain, while in this work we explored how to construct an intermediate state between the two modalities via the recent mixup method (i.e., SpeechTExt Manifold Mixup) to narrow this gap.", "Note that our work is orthogonal to Ye et al. (2021)'s study of the training procedure of end-to-end ST models.", "Mixup Our work is inspired by the mixup strategy.", "Zhang et al. 
(2018) first proposed mixup as a data augmentation method to improve model robustness and generalization, where additional data are constructed as the linear interpolation of two random examples and their labels at the surface level.", "Verma et al. (2019) extended surface-level mixup to hidden representations by constructing manifold mixup interpolations.", "Recent work has introduced mixup to machine translation (Zhang et al., 2019b; Li et al., 2021a; Guo et al., 2022; Fang and Feng, 2022), sentence classification (Chen et al., 2020; Jindal et al., 2020; Sun et al., 2020), multilingual understanding (Yang et al., 2022), and speech recognition (Medennikov et al., 2018; Sun et al., 2021; Lam et al., 2021a; Meng et al., 2021), and obtained improvements.", "Our approach is the first to introduce the idea of manifold mixup to the speech translation task with its two modalities, speech and text.", "In this paper, we propose a SpeechTExt Manifold Mixup (STEMM) method to mix up speech representation sequences and word embedding sequences.", "Based on STEMM, we adopt a self-learning framework, which learns the translation of unimodal speech sequences and multimodal mixed sequences in parallel and regularizes their output predictions.", "Experiments and analysis demonstrate the effectiveness of our proposed method, which alleviates the cross-modal representation discrepancy to some extent and improves ST performance. In the future, we will explore how to further eliminate this discrepancy and fill the cross-modal transfer gap for ST. Acknowledgements We thank all the anonymous reviewers for their insightful and valuable comments.", "This work was supported by the National Key R&D Program of China (No. 2017YFE0192900)." ]
[ "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "objective", "abstain", "abstain", "result", "result", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "method", "method", "method", "abstain", "method", "method", "other", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "other", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "method", "objective", "abstain" ]