sentences sequence | labels sequence
---|---
[
"Manual fact-checking does not scale well to serve the needs of the internet.",
"This issue is further compounded in non-English contexts.",
"In this paper, we discuss claim matching as a possible solution to scale fact-checking.",
"We define claim matching as the task of identifying pairs of textual messages containing claims that can be served with one fact-check.",
"We construct a novel dataset of WhatsApp tipline and public group messages alongside fact-checked claims that are first annotated for containing claim-like statements and then matched with potentially similar items and annotated for claim matching.",
"Our dataset contains content in high-resource (English, Hindi) and lower-resource (Bengali, Malayalam, Tamil) languages.",
"We train our own embedding model using knowledge distillation and a high-quality teacher model in order to address the imbalance in embedding quality between the lowand high-resource languages in our dataset.",
"We provide evaluations on the performance of our solution and compare with baselines and existing state-of-the-art multilingual embedding models, namely LASER and LaBSE.",
"We demonstrate that our performance exceeds LASER and LaBSE in all settings.",
"We release our annotated datasets 1 , codebooks, and trained embedding model 2 to allow for further research.",
"Human fact-checking is high-quality but time-consuming.",
"Given the effort that goes into fact-checking a piece of content, it is desirable that a fact-check be easily matched with any content to which it applies.",
"It is also necessary for fact-checkers to prioritize content for fact-checking 1 https://doi.org/10.5281/zenodo.",
"since there is not enough time to fact-check everything.",
"In practice, there are many factors that affect whether a message is fact-check worthy' (Kon-stantinovskiy et al., 2020; Hassan et al., 2017), but one important factor is prevalence.",
"Fact-checkers often want to check claims that currently have high viewership and avoid fact-checking fringe' claims as a fact-check could bring more attention to the claimsan understudied process known as ampli-fication (Phillips, 2018; Wardle, 2018).",
"While the number of exact duplicates and shares of a message can be used as a proxy for popularity, discovering and grouping together multiple messages making the same claims in different ways can give a more accurate view of prevalence.",
"Such algorithms are also important for serving relevant fact-checks via misinformation tiplines' on WhatsApp and other platforms (Wardle et al., 2019; Meedan, 2019; Magallon Rosa, 2019).",
"Identifying pairs of textual messages containing claims that can be served with one fact-check is a potential solution to these issues.",
"The ability to group claim-matched textual content in different languages would enable fact-checking organizations around the globe to prioritize and scale up their efforts to combat misinformation.",
"In this paper, we make the following contributions:",
"(i) we develop the task of claim matching,",
"(ii) we train and release an Indian language XLM-R (I-XLM-R) sentence embedding model,",
"(iii) we develop a multilingual annotated dataset across highand lower-resource languages for evaluation, and",
"(iv) we evaluate the ability of state-of-the-art sentence embedding models to perform claim matching at scale.",
"We formally evaluate our methods within language but also show clusters found using our multilingual embedding model often have messages in different languages presenting the same claims.",
"Barber's salon poses the biggest risk factor for Corona!",
"This threat is going to remain for a long duration.",
"*At an average a barber's napkin touches 5 noses minimum* The US health dept chief J Anthony said that salons have been responsible for almost 50% deaths.",
"consists of 5,066 messages in English, Hindi, Bengali, Malayalam, and Tamil that have been triple annotated for containing claim-like statements' following the definition proposed by fact-checkers in Konstantinovskiy et al. (2020).",
"The second dataset consists of 2,343 pairs of social media messages and fact-checks in the same five languages as the first dataset annotated for claim similarity.",
"Table 1 shows examples of annotated pairs of messages from the second dataset.",
"Semantic textual similarity (STS) refers to the task of measuring the similarity in meaning of sentences, and there have been widely adopted evaluation benchmarks including the Semantic Textual Similarity Benchmark (STS-B) (2017; 2016; 2015; 2014; 2013; 2012) and the Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005).",
"The STS-B benchmark assigns discrete similarity scores of 0 to 5 to pairs of sentences, with sentence pairs scored zero being completely dissimilar and pairs scored five being equivalent in meaning.",
"The MRPC benchmark assigns binary labels that indicate whether sentence pairs are paraphrases or not.",
"Semantic textual similarity is a problem still actively researched with a dynamic state of the art performance.",
"In recent work from Raffel et al. (2020), the authors achieved state-of-the-art performance on STS-B benchmark using the large 11B parameter T5 model.",
"The ALBERT model (Lan et al., 2019) achieved an accuracy of 93.4% on the MRPC benchmark and is considered one of the top contenders on the MRPC leaderboard.",
"While semantic textual similarity is similar to claim matching, the nuances in the latter require special attention.",
"Claim matching is the task of matching messages with claims that can be served with the same fact-check and that does not always translate to message pairs having the same meanings.",
"Moreover, claim matching requires working with content of variable length.",
"In practice, content from social media also has wide variation in lexical and grammatical quality.",
"Embedding models are essential for claim and semantic similarity search at scale, since classification methods require a quadratic number of comparisons.",
"comparisons.",
"While we have seen an increasing number of transformer-based contextual embedding models in recent years (Devlin et al., 2019; Reimers and Gurevych, 2019; Cer et al., 2018), the progress has been asymmetric across languages.",
"The XLM-R model by Conneau et al. (2019) with 100 languages is a transformer-based model with a 250K token vocabulary trained by multilingual masked language modeling (MLM) with monolingual data and gained significant improvements in cross-lingual and multilingual benchmarks.",
"LASER (Artetxe and Schwenk, 2019) provided language-agnostic representation of text in 93 languages.",
"The authors trained a BiLSTM architecture using parallel corpora and an objective function that maps similar sentences in the same vicinity in a high-dimensional space.",
"Language-agnostic BERT sentence embeddings (LaBSE) by Feng et al. (2020) improved over LASER in higher resource languages by MLM and translation language modeling (TLM) pretraining, followed by fine-tuning on a translation ranking task (Yang et al., 2019).",
"Shaar et al. (2020) discussed retrieval and ranking of fact-checked claims for an input claim to detect previously debunked misinformation.",
"They introduced the task, as well as a dataset covering US politics in English, and two BM25 based architectures with SBERT and a BERT-based reranker on top.",
"Vo and Lee (2020) tackled a similar problem by finding relevant fact-check reports for multimodal social media posts.",
"However these projects only focus on English data that mainly cover U.S. politics and at least one of the matching pairs is a claim from a fact-check report.",
"Additionally, the data collection process used in Shaar et al. (2020) might not necessarily capture all possible matches for a claim, since the dataset is constructed by including only the claims mentioned in one fact-check report and not all previous occurrences.",
"This may skew results and increase the risk of the model having a high false negative ratio.",
"Recently, the CheckThat!",
"Lab 2020 (Barron-Cedeno et al., 2020) has presented the same problem as a shared task.",
"We improve on prior work by finding a solution that works for highand low-resource languages and also for matching claims between pairs of social media content and pairs of fact-checks.",
"We explicitly annotated claim pairs that might match, avoiding the aforementioned false negatives issue by design and providing more accurate models and evaluations.",
"The data used in this paper comes from a variety of sources.",
"We use a mixture of social media (e.g., WhatsApp) content alongside fact-checked claims, since it is essential for any claim-matching solution to be able to match content both among fact-checked claims and social media posts as well as within social media posts.",
"Among the prevalent topics in our data sources are the COVID-19 pandemic, elections, and politics.",
"Tiplines.",
"Meedan, a technology non-profit, has been assisting fact-checking organizations to setup and run misinformation tiplines on WhatsApp using their open-source software, Check.",
"A tipline is a dedicated service to which tips' can be submitted by users.",
"On WhatsApp, tiplines are phone numbers to which WhatsApp users can forward potential misinformation to check for existing fact-checks or request a new fact-check.",
"The first tipline in our dataset ran during the 2019 Indian elections and received 37,823 unique text messages.",
"Several additional always-on tiplines launched in December 2019 and ran throughout the 2020 calendar year.",
"We obtained a list of the text of messages and the times at which they were submitted to these tiplines for March to May 2019 (Indian election tipline) and for February 2020 to August 2020 (all other tiplines).",
"We have no information beyond the text of messages and the times at which they were submitted.",
"In particular, we have no information about the submitting users.",
"WhatsApp Public Groups.",
"In addition to the messages submitted to these tiplines, we have data from a large number public WhatsApp groups collected by Garimella and Eckles (2020) during the same time period as the Indian election tipline.",
"The dataset was collected by monitoring over 5,000 public WhatsApp groups discussing politics in India, totaling over 2 million unique posts.",
"For more information on the dataset, please refer to Garimella and Eckles (2020).",
"Such public WhatsApp groups, particularly those discussing politics have been shown to be widely used in India (Lokniti, 2018).",
"fact-checkers and fact-check aggregators.",
"We employ aggregators such as Google Fact-check Explorer, 3 GESIS (Tchechmedjiev et al., 2019), and Data Commons, and include roughly a dozen fact-checking organizations certified by the International Fact-Checking Network with either global or geographically-relevant scope in our dataset.",
"All fact-checks included at minimum a headline and a publish date, but typically also include a lead or the full text of the fact-check, as well as adjudication of the claim (e.g., truth or falsity), and sometimes include information of lesser value for our work such as author, categorization tags, or references to original content that necessitated the fact-check.",
"To construct a dataset for claim matching, we design a two-step sampling and annotation process.",
"We first sample a subset of items with potential matches from all sources and then annotate and select the ones containing claim-like statements.",
"In a second task, we annotate pairs of messages for claim similarity.",
"One of the messages in each pair must have been annotated as containing a claim-like statement in the first annotation task.",
"We sample possible matches in several ways in order to not unnecessarily waste annotator time.",
"We describe these sampling strategies and other details of the process in the remainder of this section.",
"Task 1 presented annotators with a WhatsApp message or fact-check headline and asked whether contained a claim-like statement.",
"We first created a codebook by inductively examining the English-language data, translations of the other-language data, and discussing the task with two fact-checkers (one Hindi-speaking and one Malayalam-speaking).",
"We began with the definition set out by practitioners (Konstantinovskiy et al., 2020) for a claim-like statement and created examples drawn from our data sources.",
"Annotators were asked whether the message had a claim-like statement and allowed to choose Yes, Probably, No, or N/A: The message is not in language X (where X was the language being an-notated).",
"The instructions made clear Probably should be used sparingly and was intended for instances where an image, video, or other context was 3 https://toolbox.google.com/factcheck/ explorer Table 2: Claim-like statements.",
"missing.",
"The detailed instructions and an example of the interface are provided in the supplemental materials.",
"We recruited three native speakers for each of Hindi, Bengali, Tamil, and Malayalam through Indian student societies at different universities as well as independent journalists.",
"All of our annotators had a Bachelor's degree and many were pursuing Masters or PhDs.",
"We onboarded all annotators and discussed the risks of possibly politically charged, hateful, violent, and/or offensive content in the dataset.",
"Our custom-built annotation interface provided the ability to skip any piece of content with one keystroke.",
"We also encouraged annotators to take frequent breaks and calculated these breaks into our payments.",
"Our English-language data is a mix of Indian and global content.",
"Two of our English annotators had previously completed the Hindi and Malayalam tasks while the third English annotator completed only the English-language task.",
"We calculate agreement using Randolph's marginal-free kappa (Randolph, 2005).",
"This measure better estimates intercoder agreement in unbalanced datasets compared to fixed-marginal scores like Fleiss' kappa (Warrens, 2010).",
"All participants annotated 100 items independently.",
"We then discussed disagreements on these 100 items and updated the codebook if needed.",
"The participants then annotated datasets of approximately 1,000 items in each language.",
"Information about this final annotation dataset is presented in Table 2.",
"Agreement between annotators for this task is lower than the next task but on par with annotation tasks for hate speech and other hard tasks' (Del Vigna et al., 2017; Ousidhoum et al., 2019) suggesting determining whether a message has a claim-like statement is harder than determining the similarity of the statements (Task 2).",
"The second task presented annotators with two messages and asked how similar the claim-like statements were in the messages.",
"Annotators were given a four-point scale (Very Similar, Somewhat Similar, Somewhat Dissimilar, and Very Dissimilar).",
"We prepared a codebook with clear instructions for each response and examples in consultation with the two fact-checkers and discussed it with all annotators before annotation began.",
"Annotators could also select N/A: One or more of the messages is not in language X or does not contain a claim-like statement).",
"Our initial testing showed the largest source of disagreement was between Somewhat Dissimilar and Very Dissimilar.",
"We added guidance to the codebook but did not dwell on this aspect as we planned to collapse these categories together.",
"We prioritize our evaluations on Very Similar or Somewhat Similar statements.",
"Although our goal is claim matching, this task asked annotators about the similarity of claim-like statements as the annotators were not all fact-checkers.",
"We found asking the annotators to speculate about whether some hypothetical fact-check could cover both statements was unhelpful.",
"Our codebook is constructed such that Very Similar pairs of messages could be served by one fact-check while Somewhat Similar messages would partially be served by the same fact-check.",
"A link to the codebook is in the supplemental materials.",
"The same annotators from Task 1 completed Task 2 with a few exceptions.",
"One Tamil annotator was unable to continue due to time restrictions, and one Bengali annotator only completed part of the annotations (we calculate agreement with and without this annotator in Table 3).",
"We added a fourth English annotator in case there was an-0.0 0.2 0.4 0.6 0.8 1.0 Cosine Similarity 0.0 0.2 0.4 0.6 0.8 1.0 F r a c t i o n o f p o s t s LaBSE not sim.",
"other dropout but all English annotators completed.",
"Table 3 shows a breakdown of the dataset by language.",
"In general, agreement on this task, even among the same annotators as Task 1, was much higher than Task 1 suggesting claim similarity is an easier task than claim detection.",
"The largest point of disagreement was around the use of the N/A label: discussing this with annotators we found it was again the disagreement about whether certain messages had claims leading to the disagreement.",
"A purely random sample of pairs is very unlikely to find many pairs that match.",
"We considered examining pairs with the highest cosine similarities only, but these pairs were likely to match in trivial and uninteresting ways.",
"In the end, we used random stratified sampling to select pairs for annotation.",
"We first calculate all pairwise cosine similarities using multiple embedding models (described in Section 5).",
"We then use stratified sampling to sample 100 pairs in proportion to a Gaussian distribution with mean 0.825 and standard deviation 0.1 for each model and language.",
"We do this due to our strong prior that pairs close to zero as well as pairs close to one are usually uninteresting.' These represent pairs that either clearly do not match or (very often) clearly match.",
"In practice, we still sample a wide range of values (Figure 1).",
"We also include 100 random pairs for each language with the exception of Tamil due to annotator time limitations.",
"We used LASER, LaBSE, and our Indian XLM-R (I-XLM-R) model (details below) to sample pairs for all languages.",
"Our Bengali and Malayalam annotators had additional capacity and annotated additional pairs drawn in a similar way.",
"We use a GPU-enabled server with one 1080 GPU to train our own embedding model and run the rest of our experiments on desktop computers with minimal runtime.",
"We use the Elasticsearch implementation of the BM25 system and use the Sentence-Transformers (for I-XLM-R), PyTorch (for LASER), and TensorFlow (for LaBSE) 4 to train and retrieve embeddings.",
"We follow the approach of Reimers and Gurevych (2020) for tuning the hyperparameters of our embedding model.",
"We use the knowledge distillation approach presented in Reimers and Gurevych (2020) to train a multilingual embedding model.",
"5 The approach adopts a studentteacher model in which a high quality teacher embedding model is used to align text representations of a student model by mapping embeddings of text in the student language to close proximity of the embeddings of the same text in the teacher language.",
"Using this approach we train a model for English, Hindi, Malayalam, Tamil, and Bengali.",
"We refer to this model as our Indian XLM-R model (I-XLM-R), and use it as one of the models we evaluate for claim matching.",
"Training Data.",
"The knowledge distillation approach requires parallel text in both student and teacher languages for training embedding models.",
"We find the OPUS parallel corpora (Tiedemann, 2012) to be a useful and diverse resource for parallel data.",
"We retrieve parallel data between English and the collection of our four Indian languages from OPUS and use it as training data.",
"Training Procedure.",
"For a teacher model MT and a student model MS and a collection of ( s i , t i ) pairs of parallel text, we minimize the following MSE loss function for a given mini-batch B: 1 | B | (cid:80) i B [( MT ( s i ) MS ( s i )) 2 + ( MT ( s i ) MS ( t i )) 2 ] Intuitively, this loss function forces embeddings of the student model for both t i and s i to be in proximity of the teacher embeddings for s i , 4 We use https://github.com/bojone/labse .",
"5 Trained models from Reimers and Gurevych do not include embeddings for Bengali, Tamil, and Malayalam, which motivated us to train the I-XLM-R model.",
"therefore transferring embedding knowledge from the teacher to the student model.",
"For training our Indian XLM-R model, we pick the English SBERT model as teacher (Reimers and Gurevych, 2019) (for its high quality embeddings) and XLM-Roberta (XLM-R) as the student (for SOTA performance in NLP tasks and a universal vocabulary that includes tokens from 100 languages).",
"We evaluate a retrieval-based claim matching solution built on top of the BM25 retrieval system (Robertson and Zaragoza, 2009) as well as an embeddings-only approach.",
"In the first case, queries are fed into BM25 and the retrieved results are then sorted based on their embedding similarity to the input query.",
"The top ranking results are then used as potential matches for the input claim.",
"In the latter case, we classify pairs of items using features derived from the embedding models.",
"For some applications, it is good enough to be able to rank the most similar claims and treat the problem of claim matching as an information retrieval problem.",
"This is the case, for example, when fact-checkers are examining possible matches to determine if a new content item matches a previous fact-check.",
"We discuss the performance of information retrieval approaches in Section 6.1.",
"In many other applications, however, we seek a system that can determine if the claims in two items match without human intervention.",
"These applications demand a classification approach: i.e., to determine whether two items match.",
"This allows similar items to be grouped and fact-checkers to identify the largest groups of items with claims that have not been fact-checked.",
"We discuss the performance of simple classification approaches in Section 6.2.",
"We find the mean reciprocal rank (MRR) metric to be a good IR-based performance measure for our system, since we only know of one match in the retrieved results by the system for our queries.",
"We use the base BM25 system as a strong baseline to compare against.",
"We also compare our system with other state-of-the-art multilingual embedding models used for reranking, namely LASER and LaBSE.",
"Results are presented in Table 4.",
"The BM25 with I-XLM-R reranking outperforms other systems in all languages, with the exception of Tamil and English where the system performs comparably with the BM25 baseline.",
"The largest lead in performance of the I-XLM-R based model is for Bengali, where the MRR score is more than 0.1 higher than the BM25 baseline.",
"Both LASER and LaBSE fall short on surpassing the baseline for any of the languages.",
"LASER performs the worst on Tamil, where its MRR score is nearly 0.07 less than BM25.",
"Similarly, LaBSE's largest difference with BM25 is in Hindi where it falls short by 0.085.",
"Although there is room for improvement in some languages, the I-XLM-R seems the best choice if only one system is chosen.",
"After calculating MRR we also evaluated the systems on other metrics, namely Mean First Rel-evant (MFR, Fuhr (2018)) and HasPositive@K (Shaar et al., 2020).",
"Both measures did not demonstrate any meaningful patterns useful for selecting the best system.",
"We do not include the details of these evaluations for brevity.",
"Responding to submitted content on a tipline, as well as grouping claims to understand their relative prevalence/popularity, requires more than presenting a ranked list as occurs in the information retrieval approaches in the previous subsection and in previous formulations of this problem (e.g., Shaar et al., 2020).",
"In this section we use the annotated pairs to evaluate how well simple classifiers perform with each model.",
"Threshold Classifier.",
"The first classifier' we evaluate is a simple threshold applied to the cosine similarity of a pair of items.",
"Items above the threshold are predicted to match while items with a similarity below the threshold are predicted to not match.",
"In doing this, we seek to understand the extent to which the embedding models can separate messages with matching claims from those with non-matching claims.",
"An ideal model would assign higher cosine similarity scores to every pair of messages with matching claims than to pairs of messages with nonmatching claims.",
"Table 5 shows the F1 scores averaged across 10 runs of 10-fold cross validation for binary classifiers applied to all languages and each language individually.",
"In general, the Indian XLM-R model performs best at the task with F1 scores ranging from 0.57 to 0.88.",
"As shown in Figure 2, our Indian XLM-R model outperforms LASER primarily in precision and outperforms LaBSE primarily in terms of recall.",
"The numbers reported in Table 5's last column all come from I-XLM-R.",
"The English-only SBERT model performs slightly better with a maximum F1 score of 0.90 0.09 at a threshold of 0.71 on English data, suggesting that the student model may have drifted from the teacher model for English during training.",
"This drift is slight, however, and the cosine similarities across all English-language data for the two models are highly correlated with a Pearson's correlation coefficient of 0.93.",
"The authors of SBERT released two additional multilingual models on that support English and Hindi, but do not support Bengali, Malayalam, or Tamil.",
"6 We find the models have comparable performance to I-XLM-R on English & Hindi while F1 scores for other languages are between 0.17 and 0.61.",
"Our dataset includes both social media messages (namely, WhatsApp messages) and fact-checks.",
"Overall, performance is higher for matching fact-checks to one another than for matching social media messages to one another for all models.",
"As an example, the best-performing model, Indian XLM-R, achieves a maximum F1 score of 0.76 with a threshold 0.87 for matching pairs of fact-checks, but only a maximum F1 score of 0.72 (threshold 0.90) for matching pairs of social media messages.",
"Claim Matching Classifier.",
"We train an AdaBoost binary classifier that predicts if two textual claims match.",
"The features are all precomputed or trivial to compute so that such a system could easily be run to refine a smaller number of candidate matches with minimal additional computation.",
"We use lengths of claims, the difference in lengths, embedding vectors of each item, and their cosine similarity as features.",
"We build a balanced dataset by taking all the Very Similar pairs and matching every item with a randomly selected Not Very Similar (every other label) item from the same language.",
"We do not differentiate between pairs in different languages as our per language data is limited and all features including the embedding vectors translate across languages as they are from mulitilingual embedding models.",
"Claim matching classification results are presented in Table 6.",
"We evaluate models using 10-fold cross validation and report accuracy and F1 Table 6: Claim matching classification results.",
"scores for each class averaged over 10 runs.",
"Consistent with previous outcomes, it is clear that using the I-XLM-R cosine similarity and embeddings as input features results in better performance than other models, including the model with all features.",
"The positive class F1 scores for all models in Table 6 are notably higher than the threshold approaches (Table 5) suggesting information from the embeddings themselves and the lengths of the texts are useful in determining whether the claims in two messages match.",
"The claim matching classifier is language-agnostic and is learning from only 522 datapoints, which underscores the quality of the I-XLM-R embeddings.",
"Error Analysis.",
"We manually inspect the pairs classified in error using the threshold classifier and I-XLM-R.",
"The pairs either have a similarity score above the matching threshold but are Not Similar (false positives, 24/89) or are matches and have a score below threshold (false negatives, 65/89).",
"16 of the 24 false positives are labeled as Somewhat Similar, and manual inspection shows that these pairs all have overlapping claims (i.e., they share some claims but not others).",
"There are no obvious patterns for the false negatives, but some of the errors are made in ambiguous cases.",
"We also examine the errors of one random fold of the AdaBoost classifier to further investigate where our model makes mistakes.",
"There are a total of 10 wrong predictions (6 false negatives and 4 false positives).",
"Of these, 2/6 and 1/4 are annotation errors.",
"Within the false negatives, most other cases are pairs of text that are very similar but minimally ambiguous because of a lack of context, which annotators correctly resolved to being identical.",
"An example of such a false negative is the pair of messages Claim rare flower that blooms once in 400 years in the-himalayas-called-mahameru-pushpam and Images of Mahameru flower blooms once every 400 years in Himalayas.",
"False positives were all Somewhat Similar and Somewhat Dissimilar pairs that the classifier mistook for Very Similar.",
"There were no significant discrepancies among languages in classification errors.",
"Scaling human-led fact-checking efforts requires matching messages with the same claims.",
"In this paper, we train a new model and create an evaluation dataset that moves beyond English and American politics.",
"Our system is being used in practice to support fact-checking organizations.",
"We find that the embedding models can generally match messages with the same claims.",
"Performance for matching fact-checks slightly exceeds that for matching social media items.",
"This makes sense, given that fact-checks are written by professional journalists and generally exhibit less orthographical variation than social media items.",
"Too few examples of fact-checks correctly matched a social media item to evaluate performance in that setting.",
"This is not a major limitation since nearly every fact-check starts from a social media item.",
"So, in practice we only need to be able to match social media items to one another in order to locate other social media items having the same claims as the item that led to a fact-check.",
"We evaluate claim matching within each language, but the embedding models are all multilingual and could serve to match claims across languages.",
"BM25 is not multilingual, but Elasticsearch can index embeddings directly.",
"Previously de Britto Almeida and Santos (2020) developed a Elasticsearch plugin to query embeddings by cosine distance, but since version 7.3 of Elasticsearch this functionality is now available natively in Elasticsearch (Tibshirani, 2019), meaning a large set of embeddings can be searched efficiently to find near matches across languages.",
"As a proof of concept, we took the 37,823 unique text messages sent to the Indian election tipline and clustered them using I-XLM-R and online, single-link hierarchical clustering with a threshold of 0.90.",
"We found 1,305 clusters with 2 or more items; the largest cluster had 213 items.",
"We hired an Indian journalist with experience fact-checking during the Indian 2019 elections to annotate each of the 559 clusters with five or more items by hand.",
"The annotation interface presented three examples from each cluster: one with the lowest average distance to all other messages in the cluster, one with the highest distance, and one message chosen randomly.",
"In 137 cases the examples shown for annotation were from multiple languages, and in 132 of those cases the journalist was able to identify the same claims across multiple languages.",
"Although preliminary, this demonstrates the feasibility and importance of multilingual claim matching with these methods an area we hope further work will tackle.",
"Our findings are supporting over 12 fact-checking organizations running misinformation tiplines.",
"The deployed system uses I-XLM-R and automatically groups text messages with similarities over 0.95 and recommends possible matches from less-similar candidates that fact-checking organizations can confirm or reject.",
"Matches can also be added manually.",
"Initial feedback from the fact-checkers has been positive, and we are collecting data for further research and evaluation.",
"We prioritized the well-being of annotators and the privacy of WhatsApp users throughout this research.",
"Our data release conforms to the FAIR principles (Wilkinson et al., 2016).",
"We have no identifying information about WhatsApp users and any references to personally identifiable information in messages such as phone numbers, emails, addresses and license plate numbers are removed to preserve user privacy.",
"We worked closely with our annotators preparing them for the risk of hateful content, encouraging frequent breaks, and paying well-above minimum wage.",
"We took a compassionate response to COVID disruptions and other life stresses even when this meant less annotated data than was originally envisioned.",
"This work was funded by the Omidyar Network with additional support from Sida, the Robert Wood Johnson Foundation, and the Volkswagen Foundation.",
"Kiran Garimella is supported by the Michael Hammer postdoctoral fellowship at MIT.",
"We are thankful to all of the wonderful annotators and fact-checking organizations who made this research possible.",
"We are grateful to the Meedan team, Prof. Rada Mihalcea, Gautam Kishore Shahi, and our anonymous reviewers."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"other",
"other",
"other"
] |
[
"We propose a general framework called Text Modular Networks (TMNs) for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.",
"To ensure solvability of simpler tasks, TMNs learn the textual input-output behavior (i.e., language ) of existing models through their datasets.",
"This differs from prior decomposition-based approaches which, besides being designed specifically for each complex task, produce decompositions independent of existing sub-models.",
"Specifically, we focus on Question Answering (QA) and show how to train a next-question generator to sequentially produce sub-questions targeting appropriate sub-models, without additional human annotation.",
"These sub-questions and answers provide a faithful natural language explanation of the model's reasoning.",
"We use this framework to build MODULARQA, 1 a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.",
"Our experiments show that MODULARQA is more versatile than existing explainable systems for DROP and HotpotQA datasets, is more robust than state-of-the-art blackbox (uninterpretable) systems, and generates more understandable and trustworthy explanations compared to prior work.",
"An intuitive way to solve more complex tasks, such as multi-hop question-answering (Yang et al., 2018; Khashabi et al., 2018; Khot et al., 2020) and numerical reasoning (Dua et al., 2019), would be to decompose them into already solved simpler problems, e.g., single-fact QA (Rajpurkar et al., 2016).",
"Besides allowing reuse of existing simpler models, this approach would yield an interpretable system that provides a faithful explanation (Jacovi and 1 https://github.com/allenai/modularqa HotpotQA Question: Little Big Girl was a Simpsons episode directed by the animator and artist of what nationality?",
"Goldberg, 2020) of its reasoning as a composition of simpler sub-tasks, as shown in Fig.",
"1. Motivated by this, we ask the following question: Given a set of existing QA models, can one leverage them to answer complex questions by communicating with these existing models?",
"We propose a general framework, Text Modular Networks (TMNs) , that answers this question by learning to decompose complex questions (of any form) into sub-questions that are answerable by existing QA modelssymbolic or neural (hence-forth referred to as sub-models ).",
"2 Unlike previous approaches (Talmor and Berant, 2018; Min et al., 2019a), the decompositions are not based on splits of the complex questions and aren't built independent of the sub-model.",
"Instead, our framework learns to generate sub-questions in the scope 2 TMNs, in fact, treat sub-models as blackboxes, and can thus use any model or function as a module.",
"of existing models.",
"For instance, the second subquestion in the DROP dataset example in Fig. 1 requires the introduction of a new phrase , start to take a dip, which is beyond the scope of standard decomposition approaches.",
"Additionally, the final sub-question targets a symbolic calculator , which operates over a different input language.",
"The core of our TMN framework is a next-question generator that sequentially produces the next sub-question to ask as well as an appropriate sub-model for answering it.",
"The resulting sequence of sub-questions and their answers provides a human-interpretable description of the model's neuro-symbolic reasoning (Mc-Carthy, 1988; Smolensky, 1988), as illustrated in Fig.",
"1. Notably, TMNs learn to produce these decompositions using only distant supervision, without the need for any explicit human annotation.",
"One of our key insights is that the capabilities of existing sub-models can be captured by training a text-to-text system to generate the questions in the sub-model's training dataset (e.g., SQuAD), given appropriate hints .",
"In our case, we train a BART model (Lewis et al., 2020) to generate questions given the context, answer, and preferred vocabulary as hints.",
"We then use these sub-task question models to generate sub-questions (and identify appropriate sub-models) that could lead to the likely intermediate answers extracted for each step of the complex question (Raymond S. and American in the HotpotQA example in Fig. 1).",
"The resulting sub-questions, by virtue of our training, are in the language (i.e., within-scope) of the corresponding sub-models.",
"These sub-question sequences can now be used to train the next-question generator to sequentially produce the next sub-question.",
"We use this trained generator, along with existing QA models, to answer complex questions, without the need for any intermediate answers.",
"We use the TMN framework to develop MODULARQA , a modular system that explains its reasoning in natural language, by decomposing complex questions into those answerable by two sub-models: a neural factoid single-span QA model and a symbolic calculator .",
"MODULAR QA's implementation 1 covers multi-hop questions that can be answered using these two sub-models via five classes of reasoning found in existing QA datasets: composition, conjunction, comparison, difference, and complementation .",
"3 3 Composition and conjunction questions are also referred We evaluate MODULARQA on questions from two datasets, DROP (Dua et al., 2019) and HotpotQA (Yang et al., 2018), resulting in the first cross-dataset decomposition-based interpretable QA system.",
"Despite its interpretability and versatility, MODULARQA scores only 3.7% F1 lower than NumNet+V2 (Ran et al., 2019), a state-of-the-art blackbox model designed for DROP.",
"MODULARQA even outperforms this blackbox model by 2% F1 in a limited data setting and demonstrates higher (+7% F1) robustness (Gardner et al., 2020).",
"MODULARQA is competitive with and can even outperform task-specific Neural Module Networks (Gupta et al., 2020; Jiang and Bansal, 2019) while producing textual explanations.",
"Further, our human evaluation against a split-point based decomposition model trained on decomposition annotation (Min et al., 2019b) for HotpotQA finds our explanations to be more trustworthy, understandable, and preferable in 67%-78% of the cases.",
"Contributions.",
"(1) Text Modular Networks (TMNs), a general framework that leverages existing simpler modelsneural and symbolic as blackboxes for answering complex questions.",
"(2) MODULARQA, 1 an interpretable system that learns to automatically decompose multi-hop and discrete reasoning questions.",
"(3) Experiments on DROP and HotpotQA demonstrating MODULAR QA's cross-dataset versatility, robustness, sample efficiency and ability to explain its reasoning in natural language.",
"Many early QA systems were designed as a combination of distinct modules, often composing outputs of lower-level language tasks to solve higher-level tasks (Moldovan et al., 2000; Harabagiu and Hickl, 2006).",
"However, much of this prior work is limited to pre-determined composition structures (Berant et al., 2013; Seo et al., 2015; Nee-lakantan et al., 2017; Roy and Roth, 2018).",
"Various modular network architectures have been proposed to exploit compositionality (Rosen-baum et al., 2018; Kirsch et al., 2018).",
"The closest models to our work are based on neural module networks (NMN) (Andreas et al., 2016) which compose task-specific simple neural modules.",
"We to as bridge' questions.",
"Complementation refers to questions such as What percentage of X is not Y?' MODULARQA can be easily extended to other reasoning types by defining the corresponding hints (4.3).",
"compare against formulations of NMNs for HotpotQA (Jiang and Bansal, 2019) and DROP (Gupta et al., 2020), both of which target only one dataset and do not reuse existing QA systems.",
"Moreover, they provide attention-based explanations whose interpretability is unclear (Serrano and Smith, 2019; Brunner et al., 2020; Wiegreffe and Pinter, 2019).",
"Question decomposition has been pursued before for ComplexWebQuestions (Talmor and Berant, 2018) and HotpotQA.",
"Both approaches (Tal-mor and Berant, 2018; Min et al., 2019b) focus on directly training a model to produce sub-questions using question spansan approach not suitable for DROP questions (as illustrated in Fig. 1).",
"Our next-question generator overcomes this limitation by generating free-form sub-questions in the language of existing models.",
"Perez et al. (2020) also use a text-to-text model to generate sub-questions for HotpotQA.",
"However, they generate simpler questions without capturing the requisite reasoning, and hence use them mainly for evidence retrieval.",
"BREAK (Wolfson et al., 2020) follows an alternative paradigm of collecting full question decomposition meaning representations (QDMR) annotations.",
"While this can be effective, it relies on costly human annotation that may not generalize to domains with new decomposition operations.",
"Its decompositions are generated in a model-agnostic way and still need QA systems to answer the sub-questions, e.g, high-level QDMR questions such as Which is earlier? and Which is longer? would need special systems that can map these to symbolic comparisons.",
"In contrast, TMNs start with pre-determined models and learn to generate decompositions in their language.",
"While many multi-hop QA models exist for HotpotQA and DROP, these are often equally complex models (Tu et al., 2020; Fang et al., 2020; Ran et al., 2019) focusing on just one of these datasets.",
"Only on HotpotQA, where supporting sentences are annotated, can these models also produce post-hoc explanations, but these explanations are often not faithful and shown to be gameable (Trivedi et al., 2020).",
"TMNs are able to produce explanations for multiple datasets without needing such annotations, making it more generalizable to future datasets.",
"TMNs are a family of architectures consisting of modules that communicate through language learned from these modules, to accomplish a cer-QA",
"tain goal (e.g., answering a question).",
"Figure 2 illustrates this general idea in the context of answering a DROP question.",
"The core of our system is a next-question generator D , a component in charge of generating and distributing sub-tasks among sub-models A s .",
"The system alternates between using D to produce the next question ( NextGen ) and using the corresponding sub-model to answer this question.",
"Formally, solving a complex question qc is an alternating process between the following two steps: Generate the next question q i for submodel t i : (cid:104) t i , q i (cid:105) = D ( qc, q 1 , a 1 , . . . , q i 1 , a i 1 ) Find answer a i by posing q i to submodel t i : a i = A t i ( q i , p ) where q i is the i th generated sub-question and a i is the answer produced by a sub-model t i based on a given context paragraph p .",
"This simple iterative process ends when q i +1 equals a special end-of-sequence symbol (denoted throughout as [EOQ] ) with the final output answer a i .",
"Building a Text Modular Network.",
"The key challenge in building a Text Modular Networks is developing the next-question generator model.",
"Training this model requires a next-question prediction dataset where each example is a step in the iterative progression of sub-question generation.",
"For example, the second step in Fig. 2 is: I n qc : How many years did it take for the services sector to rebound?",
"q 1 : In what year did the services sector rebound?",
"a 1 : 2003 O u t (cid:26) (cid:104) t 2 , q 2 (cid:105) = (cid:104) SQuAD, When did the services sector start to take a dip? (cid:105) While it may be possible to collect task-specific datasets or design a task-specific next-question generator (Min et al., 2019b; Talmor and Berant, 2018), our goal is to build a framework that can be easily extended to new complex QA tasks reusing existing QA sub-models.",
"To achieve this, we present a general framework to generate the next-question training dataset by: (1) Modeling the language of sub-models; (2) Building decompositions in the language of these sub-models using minimal distant supervision hints .",
"To ensure the sub-questions are answerable by existing sub-models, we train a text-to-text sub-task question model on the original sub-task to generate a plausible q i conditioned on hints, e.g., a BART model trained on SQuAD to generate a question given the answer.",
"We can view this utility as characterizing the question language of the submodel.",
"For example, such a model trained on the SQuAD dataset would produce factoid questions the space of questions answerable by a model trained on this dataset.",
"While an unconditional text generation model can also capture the space of questions, it can generate a large number of possibly valid questions, making it hard to effectively train or use such a model.",
"Instead, we scope the problem down to conditional text generation of questions given hints z .",
"Specifically, we use the context p , answer a and question vocabulary v as input conditions to train a question generator model G : z q where z = (cid:104) p, a, v (cid:105) .",
"Such a generator, GS , produces the first two sub-questions in the example in Fig. 4, when using a =2003 (or 2002, resp.) and v = ( qc ) ={service, sector, year, rebound} as hints.",
"To generate training decompositions for a complex question using a sub-task question model, we extract distant-supervision hints z corresponding to each reasoning step.",
"This is akin to the distant supervision approaches used to extract logical forms in semantic parsing (Liang et al., 2013; Berant et al., 2013) and the intermediate entities in a reasoning chain (Gupta et al., 2020; Jiang and Bansal, 2019).",
"In our running DROP example, under the defi-nition of z = (cid:104) p, a, v (cid:105) , we would need to provide the context, answer, and question vocabulary for each reasoning step.",
"We can derive intermediate answers by finding the two numbers whose difference is the final answer (see Fig. 4).",
"We can use words from the input question as vocabulary hints.",
"4 As shown in Fig. 4, we generate the training sub-questions in the language of appropriate systems for each step i using the question generation model G t i : q i = G t i ( z i ) where z i = (cid:104) p i , a i , v i (cid:105) and the model t i is determined by the answer type (or can be a hint too).",
"Note that our framework does not depend on the specific choice of z .",
"Our key idea is to train the sub-task question model conditioned on the same z that we can provide for the complex task.",
"The hint z could be very general (just the context) or very specific (exact vocabulary of the question), trading off the ease of extracting hints with the quality of the generated decomposition.",
"Similarly, these hints don't have to be 100% accurate as they are only used to build the training data and play no role during inference.",
"Finally, we convert the decompositions into training data for the next-question generator.",
"For each question q i generated using the sub-task question model G t i , we create the training example: Input: qc, q 1 , a 1 , . . . , q i 1 , a i 1 Output: (cid:104) t i , q i (cid:105) Training Data Generation Summary.",
"Fig. 3 illustrates the complete process for generating the training data for the next-question generator.",
"For each complex question, we extract a set of possible hints for each potential reasoning chain (e.g., all number pairs that lead to the final answer).",
"For each step, we use the corresponding sub-task question models to generate potential sub-questions that lead to the expected answer.",
"Finally we use these generated sub-question decompositions as the training data for the next-question generator model.",
"We next describe a specific instantiation of the Text Modular Network: MODULARQA a new QA system that works across HotpotQA and DROP.",
"To handle these datasets, we first introduce the two QA sub-models(4.1), the sub-task question models for these models(4.2), our approach to build training data (4.3), and the inference procedure used for question-answering(4.4).",
"Para , p : ...",
"The sector decreased by 7.8 percent in 2002, before rebounding in 2003 ...",
"Question , qc : How many years did it take for the services sector to rebound?",
"Answer a : 1 Hints Sub-Questions (cid:104) a 1 = 2003 , p 1 = p, v 1 = (qc) (cid:105) q 1 = GS ( p 1 , a 1 , v 1 ) : In what year did the services sector rebound?",
"(cid:104) a 2 = 2002 , p 2 = p, v 2 = (qc) (cid:105) q 2 = GS ( p 2 , a 2 , v 2 ) : When did the services sector start to take a dip?",
"(cid:104) a 3 = 1 , p 3 = p, v 3 ={diff, 2003, 2002} (cid:105) q 3 = GC ( p 3 , a 3 , v 3 ) : diff(2003, 2002) (cid:4) q 4 = [EOQ] Figure 4: An example decomposition generated for a DROP example using hints and sub-question generators G .",
"We use two QA models with broad coverage on the two datasets:",
"trained on the entire SQuAD 2.0 dataset including the no-answer questions; and Math calculator model, AC , a symbolic Python",
"program that can perform key operations needed for DROP and HotpotQA, namely: diff (X, Y, Z) that computes the difference between X and Y in unit Z (days/months/years); not (X) that computes the complement % of X, i.e., 100 X; if_then (X <op> Y, Z, W) that returns Z if X <op> Y is true, otherwise returns W.",
"We define two sub-task question models corresponding to each of our QA sub-models.",
"SQuAD Sub-task Question Model, GS .",
"We train a BART-Large model on the answerable subset of SQuAD 2.0 to build our sub-task question model for SQuAD.",
"We use the gold paragraph and answer from the dataset as the input context and answer.",
"For the estimated question vocabulary, we select essential words 5 from the gold questions (re-ferred as the function ) with additional irrelevant words sampled from other questions.",
"6 To train the text-to-text BARTS model, we use a simple concatenation of the passage, vocabulary, and answer (with markers such as H: and A: to indicate each field) as the input sequence and the question as the output sequence.",
"While a constrained-decoding approach (Hokamp and Liu, 2017; Hu et al., 2019a) could be used here to further promote the use of the vocabulary hints, this simple approach was effective and more generally applicable to other hints in our use-case.",
"Once this model is trained, we use it with nucleus sampling (Holtzman et al., 2020) to generate k sub-questions, Q , and filter out those that lead an incorrect or no answer using AS : GS ( p, a, v ) = { q Q | overlaps ( AS ( p, q ) , a ) } Math Sub-task Question Model, GC .",
"Given the symbolic nature of this solver, rather than training a neural generator, we simply generate all possible numeric questions given the context.",
"Similar to GS , we first generate potential questions Q and then filter down to those that lead to the expected answer using AC : GC ( p, a, v ) = { q Q | AC ( p, q ) = a } 4.3 Generating Training Decompositions We broadly identify five classes of questions in HotpotQA and DROP dataset that can be answered using our two models.",
"7 These question classes, how 5 ( q ) = Non-stopword tokens with pos tags {NOUN, VERB, NUM, PROPN, ADJ, RB} 6 More details in Appendix A 7 Other questions require a QA model that can return multiple answers or a Boolean QA model, as discussed in 6.",
"they are identified and how we extract hints for each question type is described next.",
"Note that similar rules for extracting distant supervision hints have been used by prior work for DROP (Gupta et al., 2020) and HotpotQA (Jiang and Bansal, 2019) too.",
"1. Difference ( How many days before X did Y happen? ): We identify these questions based on the presence of term indicating a measurement :how many and terms indicating difference such as shorter, more, days between, etc.",
"Also we check for two dates or numbers in the context such that their difference (in all units) can lead to the final answer.",
"If these conditions are satisfied, for every pair n 1 , n 2 where the difference (in units u ) can lead to the final answer, we generate the hints: p 1 = p ; a 1 = n 1 ; v 1 = ( qc ) p 2 = p ; a 2 = n 2 ; v 2 = ( qc ) p 3 = ; a 3 = a ; v 3 = [ diff , n 1 , n 2 , u ] where refers to the empty string.",
"2. Comparison ( Which event happened before: X or Y? ): We identify the two entities e 1 and e 2 in such questions and find dates/numbers that are mentioned in documents.",
"For every n 1 , n 2 num-ber/date mentioned close to e 1 and e 2 respectively, we create the hints: p 1 = p ; a 1 = n 1 ; v 1 = ( qc ) \\ e 2 p 2 = p ; a 2 = n 2 ; v 2 = ( qc ) \\ e 1 p 3 = ; a 3 = a ; v 3 = [ if_then , n 1 , n 2 , e 1 , e 2 ] The final set of hints are for use by the calculator generator to create the questions: if_then ( n 1 > n 2 , e 1 , e 2 ) and if_then ( n 1 < n 2 , e 1 , e 2 ) .",
"3. Complementation ( What percent is not X? ): We identify these questions mainly based on the presence of .* not .* in the question and a number n 1 such that the a = 100 n 1 .",
"The hints are: p 1 = p ; a 1 = n 1 ; v 1 = ( qc ) p 2 = ; a 2 = a ; v 2 = [ not , n 1 ] 4. Composition ( Where was 44th President born? ): For such questions(only present in Hot-potQA), we need to first find an intermediate entity e 1 that would be the answer to a sub-question in qc (e.g. Who is the 44th President?).",
"This intermediate entity is used by the second sub-question to get the final answer.",
"Given the two gold paras d 1 and d 2 , where d 2 contains the answer, we use the mention of d 2 's title in d 1 as the intermediate entity.",
"8 While we could use the entire complex question vocabulary to create hints, we can reduce some noise by removing terms that appear exclusively in the other document.",
"So the final hints are: p 1 = d 1 ; a 1 = e 1 ; z 1 = ( qc, d 1 , d 2 ) p 2 = d 2 ; a 2 = a ; z 2 = ( qc, d 2 , d 1 ) + e 1 where ( q, d 1 , d 2 ) indicates the terms in ( q ) that appear in d 2 but not in d 1 .",
"9 5. Conjunction ( Who acted as X and directed Y? ): These class of questions do not have any intermediate entity but have two sub-questions with the same answer e.g. Who is a politician and an actor?.",
"If the answer appears in both supporting paragraphs, we assume that it is a conjunction question.",
"The hints for such questions are: p 1 = d 1 ; a 1 = a ; z 1 = ( qc, d 1 , d 2 ) p 2 = d 2 ; a 2 = a ; z 2 = ( qc, d 2 , d 1 ) While decomposition datasets such as BREAK could be used to obtain more direct supervision for these hints, we focus here on the broader feasibility of distant supervision.",
"We observe that our current approach generates hints for 89% of the questions and can find decompositions that lead to the gold answer for 50% of them.",
"So while the hints cannot 8 If not found, we ignore such questions.",
"9 We use the same for comparison questions in HotpotQA.",
"be used directly to produce decompositions, the next-question generator is able to generalize from these examples to generate decompositions for all questions with 81% of them leading to the gold answer.",
"App.",
"D provides more details and example of hints for each question class.",
"As described earlier, given these input hints and our sub-task question models, we can generate the sub-question for each step and the appropriate sub-model (based on the model that produced this question).",
"We use nucleus sampling to sample 5 questions for each reasoning step.",
"To improve the training data quality, we also filter out potentially noisy decompositions.",
"10 We train a BART-Large model, our next-question generator, on this training data to produce the next question given the complex question and previous question-answer pairs.",
"We use best-first search (Dijkstra et al., 1959) to find the best decomposition chain and use the answer produced at the end of the chain as our predicted answer.",
"We sample n 0 sub-questions from the next-question generator using nucleus sampling.",
"Each question is then answered by the appropriate QA sub-model (defined by the prefix in the question).",
"This partial chain is again passed to the next-question generator to generate the next n 1 sub-questions, and so on.",
"11 A chain is considered complete when the next-question generator outputs the end-of-chain marker [EOQ] .",
"We define a scoring function that scores each partial chain u based on the new words introduced in the sub-questions compared to the input question.",
"12 For a complete chain, we additionally add the score from a RoBERTa model trained on randomly sampled chains (chains that lead to the correct answer are labeled as positive).",
"Concretely, we use the negative class score from this classifier, ( u ) , to compute the final chain score as ( u ) + ( u ) , i.e., lower is better.",
"13 5 Experiments To evaluate our modular approach, we use two datasets, DROP and HotpotQA, that contain ques-10 if an intermediate answer is unused or vocabulary of question chain is too different from the input question.",
"See Appendix A.3 for more details.",
"tions answerable using a SQuAD model and a math calculator.",
"We identify 14.4K training questions in DROP that are within the scope of our system, 14 which forms 18.7% of the dataset.",
"15 We similarly select 2973 Dev questions (from 9536), and split them into 601 Dev and 2371 Test questions.",
"We evaluate our system on the entire HotpotQA dataset.",
"Since the test set is blind, we split the Dev set (7405 qns.) into 1481 Dev and 5924 Test questions.",
"For training, we only use 17% of the training dataset containing 15661 questions categorized as hard by HotpotQA authors.",
"16 5.1 Explanation and Interpretability A key aspect distinguishing MODULARQA is that it can explain its reasoning in a human-interpretable fashion, in the form of simpler sub-questions it creates via decomposition.",
"Table 1 illustrates six sample reasoning explanations; the question context and sub-models are omitted for brevity.",
"We see that MODULARQA is able to take oddly phrased questions to create clean sub-questions (example 4), handle yes/no questions (example 6), recognize the unit of comparison (example 1), and map the phrase smaller\" to the appropriate direction of comparison without any manual rules (example 2). Analyzing such explanations for 40 Dev questions (20 from each dataset), we found that among the 28 questions MODULARQA answered correctly, it produced a valid reasoning chain in as many as 93% of the cases, attesting to its strong ability to provide understandable explanations. To further assess the human readability of MODULAR QA's explanations, we compared them with those produced by DecompRC (Min et al., 2019b), the only decomposition-based system for the considered datasets. We identified 155 questions that are within the scope of MODULARQA and for which both systems produce a decomposition. 17 We then asked crowdworkers on Amazon Mechanical Turk to annotate them along three dimensions: (1) given the two explanations, which system's answer do they trust more; (2) which system's explanation do they understand better; and (3) which system's explanation do they generally prefer . 14 See App. D for how this subset is automatically identified. 15 Previous modular systems (Gupta et al., 2020) have targeted even smaller subsets to develop modular approaches. 16 Increasing the training set didn't affect performance. 17 DecompRC failed to produce chains on 6x more questions than our system. See App. C for details on how these questions were selected and how they were normalized. DROP F1 HotpotQA F1 All Diff Comp Cmpl All Br Comp Interpretable Cross-Dataset Models (5.2) MODULARQA 87.9 85.2 81.0 96.6 61.8 64.9 49.2 WordOverlap 80.5 82.5 58.3 95.8 57.5 61.7 40.5 Greedy 60.2 52.2 52.9 76.3 42.4 44.8 33.0 Limited Versatility (5.3) NMN-D 79.1* SNMN 63.1 63.7 60.1 DecompRC 70.3 72.1 63.4 Limited Interpretability (5.4) NumNet+V2 91.6 86.5 94.5 95.5 Quark 75.5 78.1 64.9 Table 2: F1 scores on the DROP and HotpotQA questions and the individual classes: Difference(Diff), Com-parison(Comp), Complementation(Cmpl) and Bridge(Br). TOP: Comparison to variations of MODULARQA that work across datasets. MIDDLE: Comparison to targeted interpretable systems. BOTTOM: Comparison to targeted blackbox systems. MODULARQA is competitive with previous approaches on DROP and mainly lags behind systems on HotpotQA that are able to exploit artifacts. Trust Understand Prefer DecompRC 50 (33%) 34 (22%) 49 (32%) MODULARQA 105 (67%) 121 (78%) 106 (68%) Table 3: Human evaluation of the explanation quality. Across all dimensions, crowdsource workers preferred the explanations of MODULARQA over DecompRC. Table 3 summarizes the aggregate statistic of the majority labels, with 5 annotations per question. Crowdworkers understood MODULAR QA's natural language explanations better in 78% of the cases, trusted more that it pointed to the correct answer, and generally preferred its explanations. 5.2 Interpretable Cross-Dataset Models With MODULARQA being the first interpretable model for DROP and HotpotQA, there were no comparable existing cross-dataset systems. We instead consider two baselines obtained by modifying MODULARQA: (1) only the word-overlap based scoring function ( u ) for chains (no RoBERTa clas-sifier); and (2) greedy inference, i.e., use the most likely question at each step (no search). As shown in Table 2 (top rows), MODULARQA outperforms the purely word-overlap based approach by 7pts F1 on DROP and 4pts on HotpotQA. 
A simple coverage-based decomposition is thus not as effective, although HotpotQA suffers less because of decompositions being explicit in it. 18 Performance drops much more heavily (18pts on DROP and 19pts on HotpotQA) when we do not employ search at all. This is primarily because 18 Recall that our word-overlap based score penalizes missed question words and words introduced during decomposition. the optimal sub-question can often be unanswerable by the intended sub-model while an alternate decomposition may lead to the right answer. 5.3 Comparison to Dataset-Specific Models To assess the price MODULARQA pays for being versatile, we compare it to three interpretable systems that target a particular dataset. Two are Neural Module Networks, with modules designed specifically for a subset of DROP (referred to as NMN-D) (Gupta et al., 2020) and for HotpotQA (referred to as SNMN) (Jiang and Bansal, 2019). The third is DecompRC, whose split-based decomposition, human annotations, as well as answer composition algorithm was specifically designed for HotpotQA. As seen in Table 2 (middle rows), MODULARQA actually substantially outperforms the DROP model NMN-D while being able to produce textual explanations (rather than attention visualiza-tion). 19 On the HotpotQA dataset, MODULARQA is comparable to S-NMN but underperforms compared to DecompRC. Note that DecompRC can choose to answer some questions using single-hop reasoning and potentially exploit many artifacts in this dataset (Min et al., 2019a; Trivedi et al., 2020). 5.4 Comparison to Black-Box Models To assess the price MODULARQA pays for being interpretable, we compare it to two state-of-the-art black-box systems that not only lack interpretability but are also targeted towards specific datasets: NumNet+V2 (Ran et al., 2019) for DROP 19 Since NMN-D focuses on a different subset, we report its score on the shared subset, on which MODULARQA achieves an F1 score of 92.5 (not shown in the table) and Quark (Groeneveld et al., 2020) for HotpotQA. Since we use the SQuAD QA system in our model, we first fine-tune the LM in both of these systems on the SQuAD dataset, and then train them on the same datasets as MODULARQA. As seen in Table 2 (bottom rows), we are competitive with the state-of-the-art model on DROP but underperform compared to the Quark system. Note that Quark relies on supporting fact annotation and trains a single end-to-end QA model, thereby being more likely to exploit dataset artifacts. Upon analyzing MODULAR QA's errors (defined as questions with F1 score under 0.5) on HotpotQA, we found 65% of the errors arise from intermediate questions having multiple or yes/no answers. These are not handled by modules in our current implementation, suggesting a path for improvement. We also analyzed the errors on the DROP dev set and identified question decomposition (53.3%) and QA models (33.33%) as the main sources of error. 20 Within question decomposition, the key cause of error is higher RoBERTa score for an incorrect decomposition (50% of errors). Both the SQuAD and Math QA models were responsible for errors, with the latter erring only due to out-of-scope formats (e.g., date ranges 1693-99). Appendix E provides more details. 5.5 Additional Benefits of TMNs The last set of experiments support two distinct benefits (besides interpretability) of our approach even against state-of-the-art black-box models. Higher Robustness. 
We evaluate on the DROP contrast set (Gardner et al., 2020), a suite of test-only examples created for assessing robustness via minimally perturbed examples. On the 239 (out of 947) questions that are within our scope using the same logic as before, we find that MODULARQA outperforms NumNet+V2 by 7%-10%: Contrast Test EM F1 MODULARQA 55.7 63.3 NumNet+V2 45.2 56.2 Learning with Less Data. We next evaluate the sample efficiency of MODULARQA by considering training sets of 3 different sizes: 100%, 60%, and 20% (14448, 8782, and 2596 questions, resp.) of the training questions selected for DROP. 21 As 20 Remaining errors are due to dataset and scope issues. 21 For simplicity, we train MODULARQA on the DROP questions only here. To obtain sufficient examples, we increase the number of questions sampled for each decomposition step. See App. A.6 for more details. shown below, the gap (in F1 score) between MODULARQA and the state-of-the-art model steadily shrinks, and MODULARQA even outperforms it when both are trained on 20% of the data. Portion of Train set 100% 60% 20% MODULARQA 87.8 89.3 87.0 NumNet+V2 91.6 88.3 85.4 6 Conclusion & Future Work We introduced Text Modular Networks , which provide a general-purpose framework that casts complex tasks as textual interaction between existing, simpler QA modules. Based on this conceptual framework, we built MODULARQA, an instantiation of TMNs that can perform multi-hop and discrete numeric reasoning. Empirically, MODULARQA is on-par with other modular approaches (which are dataset-specific) and outperforms a state-of-the-art model in a limited data setting and on expert-generated perturbations. Importantly, MODULARQA provides easy-to-interpret explanations of its reasoning. It is the first system that decomposes DROP questions into textual sub-questions and can be applied to both DROP and HotpotQA. Extending this model to more question classes such as counting (How many touchdowns were scored by X?) and Boolean conjunction (Are both X and Y musicians?) are interesting avenues for future work.",
"To handle the former class, the first challenge is building models that can return a list of answersa relatively unexplored task until recently (Hu et al., 2019b; Segal et al., 2020).",
"For Boolean questions, the challenge is identifying good sub-questions as there is a large space of questions such as Did musicians work for X? that may have the expected yes/no answer but are not part of the true decomposition.",
"Semantic parsing faces similar issues when questions have a large number of possible logical forms (Dasigi et al., 2019).",
"Finally, end-to-end training of the next-question generator and QA models via REINFORCE (Williams, 1992) can further improve the score and allow for faster greedy inference.",
"We thank the Aristo team at AI2 for helpful input, Beaker team for their support with experiments, Dirk Groeneveld for providing the output of the Quark system for evaluation, and Jonathan Berant, Matt Gardner, and Hanna Hajishirzi for invaluable feedback on initial drafts of this paper."
] | [
"objective",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"abstain",
"other",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"The element of repetition in cyberbullying behavior has directed recent computational studies toward detecting cyberbullying based on a social media session .",
"In contrast to a single text, a session may consist of an initial post and an associated sequence of comments.",
"Yet, emerging efforts to enhance the performance of session-based cyberbullying detection have largely overlooked unintended social biases in existing cyberbullying datasets.",
"For example, a session containing certain demographic-identity terms (e.g., gay or black) is more likely to be classified as an instance of cyberbullying.",
"In this paper, we first show evidence of such bias in models trained on sessions collected from different social media platforms (e.g., Instagram).",
"We then propose a context-aware and model-agnostic debiasing strategy that leverages a reinforcement learning technique, without requiring any extra resources or annotations apart from a pre-defined set of sensitive triggers commonly used for identifying cyberbullying instances.",
"Empirical evaluations show that the proposed strategy can simultaneously alleviate the impacts of the unintended biases and improve the detection performance.",
"Cyberbullying has become a prevalent adverse behavior in online social interactions.",
"Recent findings indicate that over 35% of young people have been victims of cyberbullying and roughly 15% have admitted to cyberbullying others (Hinduja and Patchin, 2020; Kim et al., 2021).",
"The detrimental consequences of cyberbullying have motivated considerable efforts in various fields to combat cyberbullying.",
"For example, in computational studies of cyberbullying detection which have been largely aimed at classifying text posted on social media Equal contribution \u0000\u0013\u0000\u0011\u0000\u0015 \u0000\u0013\u0000\u0011\u0000\u0017 \u0000\u0013\u0000\u0011\u0000\u0019 \u0000\u0013\u0000\u0011\u0000\u001b P ( label = bully | z ) \u0000' \u0000H\u0000Q \u0000V \u0000L \u0000W \u0000\\ \u0000\u0003 \u0000R \u0000I\u0000\u0003\u0000W \u0000K\u0000H \u0000\u0003 \u0000& \u0000R\u0000Q\u0000G \u0000L \u0000W \u0000L \u0000R\u0000Q\u0000D \u0000O \u0000\u0003 \u00003 \u0000U \u0000R\u0000E\u0000D\u0000E \u0000L\u0000O\u0000L \u0000W \u0000\\ \u0000]\u0000\u0003\u0000 \u0000\u0003\u0000Z\u0000L\u0000W\u0000K\u0000R\u0000X\u0000W\u0000\u0003\u0000V\u0000H\u0000Q\u0000V\u0000L\u0000W\u0000L\u0000Y\u0000H\u0000]\u0000\u0003\u0000 \u0000\u0003\u0000V\u0000H\u0000Q\u0000V\u0000L\u0000W\u0000L\u0000Y\u0000H",
"platforms with machine learning and natural language processing (NLP) the primary goal is to improve the overall accuracy and speediness of detection.",
"Partly due to an increased awareness of the repetitive nature of cyberbullying behavior, a number of recent efforts in cyberbullying detection have shifted in focus from classification of a single text to detection in a social media session.",
"A session typically consists of an image/video with a caption, a sequence of comments, and other social content,",
"e.g., number of likes.",
"The promising results, nevertheless, may come from a deeply biased model that captures, uses, and even amplifies the unintended biases embedded in social media data (Zhang et al., 2020).",
"That is, because humans are biased, human-generated language corpora can introduce human social prejudices into model training processes (Caliskan et al., 2017).",
"Evidence of such bias has been found in toxicity detection (Zhang et al., 2020) and hate speech detection (Davidson et al., 2019), revealing that tweets in African-American Vernacular English (AAVE) are more likely to be classified as abusive or offensive.",
"Similarly, a cyberbullying classifier may simply take advantage of sensitive triggers,",
"e.g., demographic-identity information (e.g., gay) and offensive terms (stupid, ni***r), to make decisions.",
"Indeed, we find that in the Instagram data for benchmarking cyberbullying detection released by (Hosseinmardi et al., 2015), 68.4% of sessions containing the word gay are labeled as bullying, 89.4% of sessions containing the word ni***r, and 64.3% of sessions containing the word Mexican.",
"In Figure 1, we showcase differences in the performance of a standard hierarchical attention network (HAN) (Yang et al., 2016) a commonly used model for session-based cyberbullying detection and a HAN that was debiased using our proposed strategy in sessions with and without sensitive triggers using the benchmark Instagram data.",
"Specifically, the x -axis represents the probability of the classifier predicting a session as bullying, i.e., the decision scores F : p ( label = bully | Z ) .",
"The y -axis represents the conditional probability densities of the decision scores, i.e., p ( F| Z ) .",
"Figure",
"1(a) shows that the densities are dependent on Z and the dependencies are largely reduced by our mitigation strategy, as depicted in Figure",
"1(b).",
"This paper aims to mitigate the unintended bias in cyberbullying detection in social media sessions.",
"Our task poses multi-faceted challenges that render recent model-agnostic research in fair text classification especially, data manipulation methods (Dixon et al., 2018; Sun et al., 2019) inapplicable.",
"First, in contrast to a single text (e.g., a tweet), social media sessions with a sequence of comments contain rich contextual information.",
"Bias mitigation cannot be defined without context (Lee et al., 2020).",
"The axiomatic and absolute definitions may render current interventions (e.g., gender-swapping) ineffective and may even misguide cyberbullying classifiers.",
"Second, session-based cyberbullying detection is a sequential decision-making process rather than a one-off operation.",
"Therefore, current decisions made by a cyberbullying classifier can influence its future predictions and debiasing strategies.",
"Third, these data manipulation methods are impractical in our task due to the need for extra data annotation, which is especially time-consuming for sequential social media data with rich context.",
"In addition, these methods consider fairness through a differentiable loss function that may not directly incorporate specific fairness goals or measures.",
"To address these challenges, we propose a context-aware and model-agnostic debiasing training framework for cyberbullying detection.",
"It does not require additional resources, apart from a predefined set of sensitive triggers.",
"In particular, drawing from recent advances in reinforcement learning (RL), we consider a classifier as an agent that interacts with the environment to accumulate experience in cyberbullying detection and bias mitigation.",
"At each timestep, the agent makes decisions based on all comments observed up to that point in time and is updated by the collected feedback.",
"Empirical evaluations on two real-world datasets show that the proposed debiasing framework can effectively mitigate the unintended biases while improving the performance of cyberbullying detection.",
"Cyberbullying Detection.",
"The growing prevalence of social networking sites and convenient access to digital devices and the internet have substantially expedited information-sharing processes.",
"A byproduct of this, however, has been the increased vulnerability of young people, in particular, to one of the most serious online risks cyberbullying.",
"To help combat cyberbullying, researchers have used various techniques in machine learning and NLP to automate the process of cyberbullying detection.",
"This is also evidenced by a number of recent competitions and workshops for related tasks such as detection of hate speech against immigrants and women (Basile et al., 2019), offensive language identification (Zampieri et al., 2020), and toxic spans detection (Pavlopoulos et al., 2021).",
"Early works simplified the task as text classification, the input of which are content-based features (e.g., cyberbullying keywords) extracted from a single text (e.g., a tweet) and labels denoting whether the text is relevant to cyberbullying, see,",
"e.g., (Di-nakar et al., 2011; Xu et al., 2012).",
"To better leverage the rich information included in social media data, many studies proposed to augment textual features with emotion/sentiment (Dani et al., 2017), social network information such as relational centrality and ego networks (Squicciarini et al., 2015; Huang et al., 2014), and other multi-modal information such as location and time (Cheng et al., 2019b).",
"Extensive experimental results revealed that the improvement of these approaches is significant.",
"From the data perspective, research in cyberbullying detection has shifted from modeling a single text to multi-modal data and social media sessions.",
"Underpinning these transitions is an increased recognition of two distinct characteristics of cyberbullying behavior repetitiveness and power imbalance (Smith et al., 2008).",
"To address these characteristics, studies such as (Cheng et al., 2019a, 2021) proposed to model the structure of a session and temporal dynamics among the comments using HAN.",
"Yet, whereas numerous studies have focused on achieving better prediction performance, these approaches tend to carry or reinforce the unintended social biases in the datasets (Gencoglu, 2020).",
"Our work thus complements earlier research by examining and mitigating unintended bias in cyberbullying detection models.",
"Fairness in NLP.",
"Humans are inherently biased, and many studies have revealed human biases and discrimination in natural language (Garg et al., 2018; Jentzsch et al., 2019).",
"Evidence has, for instance, emerged in biased pre-trained word embeddings and semantics derived from language corpora.",
"However, in the field of NLP, the question of how to alleviate bias and promote fairness has only more recently begun to be addressed.",
"Using text classification tasks as an example, one predominant method to make the classifiers fairer is to balance training data in a statistical sense.",
"In particular, one can augment original data with external labeled data (Dixon et al., 2018).",
"Similar methods include data oversampling/downsampling, sample weighting (Zhang et al., 2020), and identity term swapping (Park et al., 2018).",
"Dixon et al. (Dixon et al., 2018) added non-toxic samples containing identity terms from Wikipedia articles into training data.",
"A similar strategy was used in (Nozza et al., 2019) for misogyny detection.",
"Badjatiya et al. (Badjatiya et al., 2019) proposed to replace sensitive words with neutral words or tokens.",
"This balancing strategy, while convenient and easy to implement, is not compatible with session-based cyberbullying detection.",
"First, practical considerations impede us from providing additional labeled data with specific sensitive triggers.",
"Data labeling for session-based cyberbullying detection is especially time-consuming and labor-intensive, given that it requires carefully examining a media object and all associated comments in a social media session.",
"Second, because there are potentially many words or tokens sensitive to cyberbullying, identity term swapping is almost impossible.",
"Third, social media sessions contain sequences of comments that provide contextual information important for both cyberbullying detection and bias mitigation.",
"Simple data augmentation can result in the significant loss of such information.",
"Lastly, balancing can introduce additional calibration parameters that can impair classification performance and bias mitigation (Gencoglu, 2020).",
"Cyberbullying is often characterized as a repeated rather than a one-off behavior (Smith et al., 2008).",
"This unique trait has motivated research that focuses on the detection of cyberbullying in entire social media sessions .",
"In contrast to a single text,",
"e.g., a Facebook comment or a tweet, a social media session is typically composed of an initial post (e.g., an image with a caption), a sequence of comments from different users, timestamps, spatial location, user profile information, and other social content such as number of likes (Cheng et al., 2020).",
"Session-based cyberbullying detection presents a number of characteristics such as multi-modality and user interaction (Cheng et al., 2020).",
"In this work, because our goal is to mitigate bias in natural language, we focus on text (i.e., a sequence of comments) in a social media session.",
"We formally define session-based cyberbullying detection as follows: Definition ( Cyberbullying Detection in a Social Media Session ) .",
"We consider a corpus of N social media sessions C = { 1 , 2 , ..., N } , in which each session consists of a sequence of comments denoted as { c 1 , ..., c C } .",
"A session is labeled as either y = 1 denoting a bullying session or y = 0 denoting a non-bullying session.",
"Let D be the dimension of extracted textual features (e.g., Bag of Words) x i for c i .",
"Session-based cyberbullying detection aims to learn a binary classifier using a sequence of textual data to identify if a social media session is a cyberbullying instance: F : { x 1 , ..., x C } RD { 0 , 1 } .",
"An unbiased model for cyberbullying detection makes decisions based on the semantics in a social media session instead of sensitive triggers potentially related to cyberbullying, such as gay, black, or fat.",
"In the presence of unintended bias, a model may present high performance for sessions with these sensitive triggers without knowing their semantics (Dixon et al., 2018).",
"In this section, we first discuss how to define and assess bias in the context of session-based cyberbullying detection.",
"Bias in a text classification model can be assessed by the False Negative Equality Difference (FNED) and False Positive Equality Difference (FPED) metrics, as used in previous studies such as (Zhang et al., 2020; Gencoglu, 2020; Huang et al., 2020).",
"They are a relaxation of Equalized Odds (Borkan et al., 2019) and defined as FNED = (cid:88) z | FNR z FNR overall | , (2) FPED = (cid:88) z | FPR z FPR overall | , (3) where z denotes cyberbullying-sensitive triggers, such as gay, black, and Mexican.",
"The complete list of sensitive triggers can be found in Appendix A. FNR overall and FPR overall denote the False Negative Rate and False Positive Rate over the entire training dataset.",
"Similarly, FNR z and FPR z are calculated over the subset of the data containing the sensitive triggers.",
"An unbiased cyberbullying model meets the following condition: P ( Y | Z ) = P ( Y ) , (4) where Y stands for the predicted label.",
"By Equation 4, we imply that Y is independent of the cyberbullying-sensitive triggers Z that is, a debiased model performs similarly for sessions with and without Z .",
"Note that the widely-used non-discrimination evaluation sets Identity Phrase Templates Test Sets (IPTTS) (Dixon et al., 2018) are not applicable to our task.",
"IPTTS are generated by predefined templates with slots for specific terms,",
"e.g., I am a boy and I am a girl.",
"They only include examples for single text, whereas a social media session includes a sequence of comments.",
"As we will show in subsection 5.1, the average number of comments in the Instagram dataset is 72, which can pose great challenges for generating synthetic social media sessions and the labeling process.",
"Essentially, a debiasing session-based cyberbullying detection is a sequential decision-making process where decisions are updated periodically to assure high performance.",
"In this debiasing framework, comments arrive and are observed sequentially.",
"At each timestep, two decisions are made Agent Environmnet Session comment comment ... comment Data state (comments[ ]) action Reward Function action state Figure 2: Overview of the proposed model.",
"based on the feedback from past decisions: (1) predicting whether a session is bullying and (2) gauging the performance differences between sessions with and without sensitive triggers.",
"Our debiasing strategy is built on the recent results of RL (Shi et al., 2018; Zou et al., 2019; Mosallanezhad et al., 2019), particularly, the sequential Markov Decision Process (MDP).",
"In this approach, an agent A interacts with an environment over discrete time steps t : the agent selects action a t in response to state s t .",
"a t causes the environment to change its state from s t to s t +1 and returns a reward r t +1 .",
"Therefore, each interaction between the agent and the environment creates an experience tuple M t = ( s t , a t , s t +1 , r t +1 ) .",
"The experience tuple is used to train the agent A through different interactions with the environment.",
"The agent's goal is to excel at a specific task, such as generating text (Shi et al., 2018) or summarizing text (Keneshloo et al., 2019).",
"In this work, we leverage techniques in RL to alleviate the unintended bias when classifying social media sessions into bullying or non-bullying based on user comments.",
"In particular, we consider a standard classifier F (e.g., HAN) as an RL agent and a sequence of comments observed at time { 1 , 2 , ..., t } as state s t .",
"The agent selects an action a t { non-bullying , bullying } according to a policy function ( s t ) .",
"( s t ) indicates the probability distribution of actions a in response to state s t , whereas ( s t , a t ) shows the probability of choosing action a t in response to state s t .",
"The action can be interpreted as the predicted label y using the input comments.",
"The reward r t +1 is then calculated for the state-action set ( s t , a t ) and the cumulative discounted sum of rewards G t is used to optimize the policy function ( s t ) .",
"Below, we provide details of the (1) environment, (2) states, (3) actions, and (4) the reward function for the proposed debiasing approach.",
"Environment is a session comments loader.",
"At each episode, the environment chooses a single session and returns its first t comments as state s t .",
"As such, states are independent from the agent's actions, as they do not affect the next state.",
"When it reaches the maximum number of comments of the selected session C , the process is terminated.",
"State s t is a sequence of comments in a social media session posted by various users from time 1 through time t .",
"Action a t determines a session to be bullying or not, given the input comments or state s t : a t { bullying , non-bullying } .",
"(5) Reward function R is used to optimize the policy function ( s t , a t ) .",
"It is defined based on how successfully the agent predicts the label for the input state s t and how much bias the classifier currently has.",
"We define the bias of a classifier as the harmonic mean of FPED and FNED characterized by the sensitive triggers in cyberbullying.",
"In a debiased classifier, we expect both FPED and FNED to be close to zero.",
"We define the reward function R as R = l F 2 FPED FNEDFPED + FNED , (6) where l indicates the prediction error of the classifier and balances between prediction and the debiasing effect of F .",
"The reward function is calculated based on all sessions in the environment, evaluating the performance and bias of the classifier.",
"Given the environment, state, actions, and the reward function, we aim to learn the optimal action selection strategy ( s t , a t ) .",
"At each timestep t , the agent classifies a session with t comments and the reward r t +1 is calculated using Equation 6, according to the agent's action a t and state s t .",
"The goal Algorithm 1 The Optimization Algorithm Require: The dataset { x , z, y } , initialized ( s 0 , a 0 ) , discount rate , balancing weight , learning rate lr , number of episode E .",
"of the agent is to maximize its reward according to Equation 6.",
"We use the policy gradient algorithm REINFORCE (Sutton et al., 1999) to train the agent.",
"As such, the agent has similar properties to a classifier and the classifier's output distribution can be mapped to the agent's policy function ( s t , a t ) .",
"We use the following function to update the agent: = lr L ( ) , (7) where lr denotes the learning rate, is the parameter w.r.t. the policy function ( s t , a t ) , and L ( ) indicates the policy loss: L ( ) = log( ( s t , a t ) G t ) , (8) where G t = (cid:80) ti =1 i r i +1 is the cumulative sum of rewards with discount rate .",
"The pseudo-code for the optimization algorithm can be seen in Algorithm 1. 5 Evaluation In this section, we conduct both quantitative and qualitative evaluations to examine the efficacy of our debiasing strategy.",
"1 In particular, we show that our method can effectively mitigate the impacts of unintended data biases without impairing the model's prediction performance by answering: 1 The source code is publicly available at https://github.com/GitHubLuCheng/MitigateBiasSessionCB Table 1: Statistics of the Instagram and Vine datasets.",
"(1) Can we mitigate the unintended bias of machine learning models for detecting cyberbullying sessions by leveraging techniques in RL?",
"(2) If so, will this debiasing strategy impair the cyberbullying detection performance?",
"and (3) If no' to (2), what is the source of gain?",
"Two benchmark datasets for cyberbullying detection Instagram (Hosseinmardi et al., 2015) and Vine (Rafiq et al., 2015) are used for empirical evaluation.",
"The number of sessions in Instagram and Vine is 2,218 and 970, respectively.",
"Both datasets were crawled using a snowball sampling method and manually annotated via the crowd-sourcing platform CrowdFlower.",
"2 Sessions containing less than 15 comments were removed to ensure data annotation quality.",
"Annotators were asked to examine the image/video, associated caption, and all of the comments in a session before making the final decisions.",
"Instagram: Instagram 3 is a social networking site ranked as one of the top five networks with the highest percentage of users reporting experiences of cyberbullying (the Label Anti Bullying Charity, 2013).",
"Each social media session consists of image content, a corresponding caption, and a sequence of comments in temporal order.",
"In total, this dataset is composed of 2,218 sessions, with an average number of 72 comments in each session.",
"Vine: Vine 4 was a mobile application that allowed users to upload and comment on six-second looping videos.",
"Each social media session consists of video content, the corresponding caption, and a sequence of comments in temporal order.",
"This dataset contains 970 sessions and each session contains, on average, 81 comments.",
"For social media sessions, standard fairness methods, such as identity swapping and data supplementation, are not applicable.",
"We compare our approach with commonly used machine learning 2",
"models for classification with sequential text data, including HAN, Convolutional Neural Network (CNN), and Gated Recurrent Unit (GRU), as well as a recent model proposed for session-based cyberbullying detection HANCD (Cheng et al., 2019a).",
"HANCD leverages multi-task learning to jointly model the hierarchical structure of a social media session and the temporal dynamics of its sequence of comments to improve the performance of cyberbullying detection.",
"We also include the state-of-the-art model Constrained (Gencoglu, 2020) that imposes two fairness constraints on cyberbullying detection to mitigate biases.",
"In our implementation, we use the HANCD classifier as the cyberbullying model in Constrained for a fair comparison.",
"The parameter w.r.t. the fairness constraints is set to 0.005, as suggested.",
"Both HAN and HANCD use GRU to extract the context of the input data.",
"We use 1-layer GRUs with a hidden size of 100 and 200 neurons for word and comment attention networks, respectively.",
"As our approach is model-agnostic, for each standard machine learning model, there is a corresponding debiased counterpart.",
"For the proposed method, l F in the reward function (Equation 6) is computed as the cross entropy loss between the true label y and the predicted probability p : l F = 1 2 2 (cid:88) i =1 y i log( p i )+(1 y i ) log(1 p i ) .",
"In Algorithm 1, the classifier F is pre-trained for 5 iterations using loss function l F , learning rate 3 e 3 , and the Adam optimizer (Kingma and Ba, 2014).",
"F is then placed in the RL setting discussed in subsection 4.2.",
"We apply the REINFORCE method with E = 500 episodes, learning rate 1 e 5 , = 1 .",
"0 , and = 0 .",
"5 using the Adam optimizer to further update the classifier.",
"Evaluations focus on both the prediction accuracy and the debiasing effect of a model.",
"For prediction performance, we adopt standard metrics for binary classification, including Precision, Recall, F1, and AUC scores.",
"Following (Zhang et al., 2020; Gencoglu, 2020), we use FPED, FNED, and total bias (FPED+FNED) to evaluate how biased a model is w.r.t. sessions with and without sensitive triggers.",
"Lower scores indicate less bias.",
"For all models, pre-trained GloVe word embeddings (Pennington et al., 2014) and 10-fold cross validation with 80/20 split are used for fair comparison.",
"Furthermore, we perform McNemar's test to examine whether a statistically significant difference between baseline and debiased models exists in terms of cyberbullying classification accuracy and equity.",
"The best results are highlighted in bold font.",
"In this section, we show experimental results to answer the first question: Can the proposed framework mitigate unintended bias?",
"As expected, the proposed RL framework can effectively mitigate the impact of the unintended bias embedded in the datasets for cyberbullying detection.",
"We report results for both Instagram and Vine in Table 2. Dedenotes a debiased model,",
"e.g., De-HAN is a HAN debiased by the proposed RL framework.",
"Total stands for the total bias (FPED+FNED).",
"All McNemar's tests resulted in statistical significance with p -values < 0 .",
"05 .",
"We observe the following: (1) Compared to the standard classifiers, the debiased counterparts sig-nificantly improve FNED and FPED scores, indicating that our proposed debiasing strategy can mitigate the unintended bias in data used for predicting cyberbullying sessions, regardless of the dataset or machine learning model.",
"For example, when tested on Instagram with the HAN model, our debiasing method can decrease FPED, FNED, and total bias by 95.7%, 56.7%, and 57.0%, respectively.",
"For Vine , the improvement with HAN is 71.4%, 3.3%, and 50.5%, respectively.",
"(2) Total biases of standard classifiers come from both the FPRs and FNRs for the Instagram experiments, while the main contributor of biases is the FPRs for the Vine experiments.",
"Our approach mitigates total bias in both scenarios.",
"(3) Our debiasing strategy based on RL techniques is also more effective than the fairness constraints proposed in (Gencoglu, 2020), as indicated by the decreased total biases for both Instagram and Vine .",
"By comparing HANCD, Constrained, and De-HANCD, we see that Constrained decreases FPED by sacrificing FNED, while De-HANCD can decrease both.",
"In addition to the quantitative results, we provide qualitative analyses by visualizing FPED and FNED of both the standard and debiased HANCD models.",
"In an experiment with Instagram for sessions containing ten sensitive triggers, as illustrated in Figure 3, we can observe that compared to De-HANCD, HANCD is more biased toward some sensitive triggers, such as fat and stupid.",
"Demographic-identity related bias is also detected in HANCD.",
"For example, sessions containing identity terms including ne**o, gay, and ni**a are more likely to be falsely identified as bullying, as indicated by FPED.",
"By contrast, De-HANCD mitigates various types of unintended biases and has more consistent performance across all of the sensitive triggers.",
"A dilemma often faced by researchers studying bias and fairness in machine learning is the trade-off between fairness and efficiency (Bertsimas et al., 2012).",
"Under this trade-off theory, forcing cyberbullying classifiers to follow the proposed debiasing strategy would invariably decrease the accuracy.",
"This section shows that, somewhat coun-terintuitively, our approach can outperform biased models w.r.t. overall cyberbullying detection acTable 3: Performance comparisons of different models on the Instagram dataset.",
"Higher AUC, precision (PREC), recall (REC), and F1 scores indicate better performance.",
"p -value < 0 .",
"05 for all McNemar's tests.",
"the data.",
"Results are presented in Tables 3-4.",
"We see that the proposed debiasing strategy can both alleviate the bias and retain high prediction accuracy.",
"For instance, for Instagram , our approach achieves the highest AUC and F1 score of all evaluated models.",
"For Vine , the improvement of De-HAN over HAN is 9.8% and 41.9% for AUC and F1 score, respectively.",
"The improvement over Constrained is 15.8% and 15.4%, respectively.",
"Biased models present much lower Precision than Recall for Vine .",
"This result is in line with the findings in Table 2, where we observe that the larger bias component is associated with FPRs in Vine .",
"This indicates that when the sample size is small, these models overfit to sensitive triggers for detecting bullying instances.",
"The debiasing strategy effectively reduces models' reliance on those terms and utilizes contextual information for prediction.",
"(b) Performance w.r.t. cyberbullying detection.",
"accuracy?",
"This non-compromising approach may be attributed to the proposed RL framework that effectively captures contextual information.",
"In this section, we examine the impact of parameter in Equation 6 by varying { 0 .",
"0 , 0 .",
"2 , 0 .",
"4 , 0 .",
"6 , 0 .",
"8 , 1 .",
"0 } .",
"We show performance w.r.t. bias mitigation (total bias) and cyberbullying detection (F1 score) in Figure 4. The results clearly show the efficacy of the proposed RL framework for bias mitigation.",
"In particular, as we increase , the RL agent puts more effort toward alleviating biases by minimizing both FPED and FNED simultaneously.",
"Moreover, by interacting with the environment, the RL agent also leverages contextual information in order to minimize the prediction error and receive a larger reward.",
"As a result, the RL agent largely reduces biases while improving the prediction accuracy, as shown by the slight increase in detection performance of the classifier in Figure 4b.",
"In this work, we examined unintended biases in datasets for session-based cyberbullying detection.",
"In contrast to conventional data for bias mitigation in text classification, social media sessions consist of a sequence of comments with rich contextual information.",
"To alleviate these unintended biases, we propose an effective debiasing strategy by leveraging techniques in RL.",
"Our approach is context-aware, model-agnostic, and does not require additional resources or annotations aside from a predefined set of potentially sensitive triggers related to cyberbullying.",
"Empirical evaluations demonstrated that our approach can mitigate unintended bias in the data without impairing a model's prediction accuracy.",
"Other types of decisions in sequential decision-making processes can impact the underlying user population, thereby influencing future comments generated by users.",
"Future research can be directed towards studying the long-term impact of the debiasing strategy, as well as investigating different types of biases in session-based cyberbullying detection, such as gender bias, racial bias, and language bias.",
"Our approach can also benefit from integrating previous studies that use data augmentation or swapping methods to counteract bias.",
"Due to the challenges of data collection and labeling, validating our approach on datasets across different social media platforms is also an important avenue for future work.",
"This work seeks to advance collaborative research efforts aimed at mitigating bias in session-based cyberbullying detection, a topic that has yet to be studied extensively.",
"Here, we provide preliminary solutions, but more work is needed to elucidate ways to build debiased and effective models.",
"While all data used in this study are publicly available, we are committed to securing the privacy of the individuals in our datasets.",
"To this end, we automatically replaced user names with ordered indices in our analysis.",
"The insulting or offensive terms and the figures used in this paper are for illustrative purposes only and do not represent the views or ethical attitudes of the authors.",
"This material is based upon work supported by the National Science Foundation (NSF) Grants 1719722 and 2036127."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other"
] |
[
"Medical report generation task, which targets to produce long and coherent descriptions of medical images, has attracted growing research interests recently.",
"Different from the general image captioning tasks, medical report generation is more challenging for data-driven neural models.",
"This is mainly due to 1) the serious data bias and 2) the limited medical data.",
"To alleviate the data bias and make best use of available data, we propose a Competence-based Multimodal Curriculum Learning framework (CMCL).",
"Specifically, CMCL simulates the learning process of radiologists and optimizes the model in a step by step manner.",
"Firstly, CMCL estimates the difficulty of each training instance and evaluates the competence of current model; Secondly, CMCL selects the most suitable batch of training instances considering current model competence.",
"By iterating above two steps, CMCL can gradually improve the model's performance.",
"The experiments on the public IU-Xray and MIMIC-CXR datasets show that CMCL can be incorporated into existing models to improve their performance.",
"Medical images, e.g., radiology and pathology images, and their corresponding reports, which describe the observations in details of both normal and abnormal regions, are widely-used for diagnosis and treatment (Delrue et al., 2011; Goergen et al., 2013).",
"In clinical practice, writing a medical report can be time-consuming and tedious for experienced radiologists, and error-prone for inexperienced radiologists.",
"Therefore, automatically generating medical reports can assist radiologists in clinical decision-making and emerge as a prominent attractive research direction in both artificial Corresponding author.",
"intelligence and clinical medicine (Jing et al., 2018, 2019; Li et al., 2018, 2019; Wang et al., 2018; Xue et al., 2018; Yuan et al., 2019; Zhang et al., 2020a; Chen et al., 2020; Liu et al., 2021a,b, 2019c).",
"Many existing medical report generation models adopt the standard image captioning approaches: a CNN-based image encoder followed by a LSTM-based report decoder, e.g., CNN-HLSTM (Jing et al., 2018; Liang et al., 2017).",
"However, directly applying image captioning approaches to medical images has the following problems: 1) Visual data bias : the normal images dominate the dataset over the abnormal ones (Shin et al., 2016).",
"Furthermore, for each abnormal image, the normal regions dominate the image over the abnormal ones.",
"As shown in Figure 1, abnormal regions (Red bounding boxes) only occupy a small part of the entire image; 2) Textual data bias : as shown in Figure 1, in a medical report, radiologists tend to describe all the items in an image, making the descriptions of normal regions dominate the entire report.",
"Besides, many similar sentences are used to describe the same normal regions.",
"3) Training efficiency : during training, most existing works treat all the samples equally without considering their difficulties.",
"As a result, the visual and textual biases could mislead the model training (Jing et al., 2019; Xue et al., 2018; Yuan et al., 2019; Liu et al., 2021a,b; Li et al., 2018).",
"As shown in Figure 1, even a state-of-the-art model (Jing et al., 2018) still generates some repeated sentences of normalities and fails to depict the rare but important abnormalities.",
"To this end, we propose a novel Competence-based Multimodal Curriculum Learning framework (CMCL) which progressively learns medical reports following an easy-to-hard fashion.",
"Such a step by step process is similar to the learning curve of radiologists: (1) first start from simple and easy-written reports; (2) and then attempt to consume harder reports, which consist of rare and diverse abnormalities.",
"In order to model the above gradual working patterns, CMCL first assesses the difficulty of each training instance from multiple perspectives (i.e., the Visual Complexity and Textual Complexity) and then automatically selects the most rewarding training samples according to the current competence of the model.",
"In this way, once the easy and simple samples are well-learned, CMCL increases the chance of learning difficult and complex samples, preventing the models from getting stuck in bad local optima 1 , which is obviously a better solution than the common approaches of uniformly sampling training examples from the limited medical data.",
"As a result, CMCL could better utilize the limited medical data to alleviate the data bias.",
"We evaluate the effectiveness of the proposed CMCL on two public datasets, i.e., IU-Xray (Demner-Fushman et al., 2016) and MIMIC-CXR (Johnson et al., 2019).",
"Overall, the main contributions of this work are: We introduce the curriculum learning in medical report generation, which enables the models to gradually proceed from easy samples to 1 Current models tend to generate plausible general reports with no prominent abnormal narratives (Jing et al., 2019; Li et al., 2018; Yuan et al., 2019; Liu et al., 2021a,b) more complex ones in training, helping existing models better utilize the limited medical data to alleviate the data bias.",
"We assess the difficulty of each training instance from multiple perspectives and propose a competence-based multimodal curriculum learning framework (CMCL) to consider multiple difficulties simultaneously.",
"We evaluate our proposed approach on two public datasets.",
"After equipping our proposed CMCL, which doesn't introduce additional parameters and only requires a small modifi-cation to the training data pipelines, performances of the existing baseline models can be improved on most metrics.",
"Moreover, we conduct human evaluations to measure the effectiveness in terms of its usefulness for clinical practice.",
"Image Captioning and Paragraph Generation The task of image captioning (Chen et al., 2015; Vinyals et al., 2015), which aims to generate a sentence to describe the given image, has received extensive research interests (Anderson et al., 2018; Rennie et al., 2017; Liu et al., 2019a, 2020a).",
"These approaches mainly adopt the encoder-decoder framework which translates the image to a single descriptive sentence.",
"Such an encoder-decoder framework have achieved great success in advancing the state-of-the-arts (Vinyals et al., 2015; Lu et al., 2017; Xu et al., 2015; Liu et al., 2018, 2019b).",
"Specifically, the encoder network (Krizhevsky et al., 2012; He et al., 2016) computes visual representations for the visual contents and the decoder network (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017) generates a target sentence based on the visual representations.",
"In contrast to the image captioning, image paragraph generation, which aims to produce a long and semantic-coherent paragraph to describe the input image, has recently attracted growing research interests (Krause et al., 2017; Liang et al., 2017; Yu et al., 2016).",
"To perform the image paragraph generation, a hierarchical LSTM (HLSTM) (Krause et al., 2017; Liang et al., 2017) is proposed as the decoder to well generate long paragraphs.",
"Medical Report Generation The medical reports are expected to 1) cover contents of key medical findings such as heart size, lung opacity, and bone structure; 2) correctly capture any abnormalities and support with details such as the location and shape of the abnormality; 3) correctly describe potential diseases such as effusion, pneumothorax and consolidation (Delrue et al., 2011; Goergen et al., 2013; Li et al., 2018; Liu et al., 2021a,b).",
"Therefore, correctly describing the abnormalities become the most urgent goal and the core value of this task.",
"Similar to image paragraph generation, most existing medical report generation works (Jing et al., 2018, 2019; Li et al., 2018; Wang et al., 2018; Xue et al., 2018; Yuan et al., 2019; Zhang et al., 2020a,b; Miura et al., 2021; Lovelace and Mortazavi, 2020; Liu et al., 2021b, 2019c) attempt to adopt a CNN-HLSTM based model to automatically generate a fluent report.",
"However, due to the data bias and the limited medical data, these models are biased towards generating plausible but general reports without prominent abnormal narratives (Jing et al., 2019; Li et al., 2018; Yuan et al., 2019; Liu et al., 2021a,b).",
"Curriculum Learning In recent years, curriculum learning (Bengio et al., 2009), which enables the models to gradually proceed from easy samples to more complex ones in training (Elman, 1993), has received growing research interests in natural language processing field, e.g., neural machine translation (Platanios et al., 2019; Kumar et al., 2019; Zhao et al., 2020; Liu et al., 2020b; Zhang et al., 2018; Kocmi and Bojar, 2017; Xu et al., 2020) and computer vision field, e.g., image classification (Weinshall et al., 2018), human attribute analysis(Wang et al., 2019) and visual question answering (Li et al., 2020).",
"For example, in neural machine translation, Platanios et al. (2019) proposed to utilize the training samples in order of easy-to-hard and to describe the difficulty of a training sample using the sentence length or the rarity of the words appearing in it (Zhao et al., 2020).",
"However, these methods (Platanios et al., 2019; Liu et al., 2020b; Xu et al., 2020) are single difficulty-based and unimodal curriculum learning approaches.",
"It is obviously not applicable to medical report generation task, which involves multimodal data, i.e., visual medical images and textual reports, resulting in multi-modal complexities, i.e., the visual complexity and the textual complexity.",
"Therefore, it is hard to design one single metric to estimate the overall difficulty of medical report generation.",
"To this end, based on the work of Platanios et al. (2019), we propose a competence-based multimodal curriculum learning approach with multiple difficulty metrics.",
"In this section, we briefly describe typical medical report generation approaches and introduce the proposed Competence-based Multimodal Curriculum",
"Learning (CMCL).",
"As shown in the top of Figure 2, many medical report generation models adopt the encoder-decoder manner.",
"Firstly, the visual features are extracted from the input medical image via a CNN model.",
"Then the visual features are fed into a sequence generation model, like LSTM to produce the medical report.",
"In the training phase, all training instances are randomly shuffled and grouped into batches for training.",
"In other words, all training instances are treated equally.",
"Different from typical medical report generation models, CMCL builds the training batch in a selective manner.",
"The middle part of Figure 2 displays the framework of CMCL equipped with one single difficulty metric.",
"CMCL first ranks all training instances according to this difficulty metric and then gradually enlarges the range of training instances that the batch is selected.",
"In this manner, CMCL can train the models from easy to difficult instances.",
"Since medical report generation involves multimodal data, like visual medical images and textual reports, it is hard to design one single metric to estimate the overall difficulty.",
"Therefore, we also propose a CMCL with multiple difficulty metrics.",
"As shown in the bottom of Figure 2, the training instances are ranked by multiple metrics independently.",
"At each step, CMCL generates one batch for each difficulty metric and then calculates the perplexity of each batch based on current model.",
"The batch with highest perplexity is selected to train the model.",
"It can be understood that CMCL sets multiple syllabus in parallel, and the model is optimized towards the one with lowest competence.",
"In this section, we define the difficulty metrics used by CMCL.",
"As stated in Section 2, the key challenge of medical report generation is to accurately capture and describe the abnormalities (Delrue et al., 2011; Goergen et al., 2013; Li et al., 2018).",
"There-Random Shuffle Batch Sampled Uniformly Dataset Ranking by Single Difficulty Metrics Batch Sampled by Model Competence Dataset Ranking by Difficulty Metrics 1 Ranking by Difficulty Metrics 2 Ranking by Difficulty Metrics 3 Ranking by Difficulty Metrics n Batch Sampled by Model Perplexity and Competence Dataset Encoder Decoder Model Encoder Decoder Model Encoder Decoder Model Perplexity Update Multiple Difficulty-based Curriculum Learning Single Difficulty-based Curriculum Learning Baseline Figure 2: The top illustrates the typical encoder-decoder approach; The middle illustrates the Single Difficulty-based Curriculum Learning, where only one difficulty metric is used; The bottom illustrates the Multiple Difficulty-based Curriculum Learning, where multiple difficulty metrics are introduced.",
"fore, we assess the difficulty of instances based on the difficulty of accurately capturing and describing the abnormalities.",
"Heuristic Metric d 1 If a medical image contains complex visual contents, it is more likely to contain more abnormalities, which increases the difficulty to accurately capture them.",
"To measure such visual difficulty, we adopt the widely-used ResNet-50 (He et al., 2016) pre-trained on ImageNet (Deng et al., 2009) and fine-tuned on CheXpert dataset (Irvin et al., 2019), which consists of 224,316 X-ray images with each image labeled with occurrences of 14 common radiographic observations.",
"Specifically, we first extract the normal image embeddings of all normal training images from the last average pooling layer of ResNet-50.",
"Then, given an input image, we again use the ResNet-50 to obtain the image embedding.",
"At last, the average cosine similarity between the input image and normal images is adopted as the heuristic metric of visual difficulty.",
"Model Confidence d 2 We also introduce a model-based metric.",
"We adopt the above ResNet-50 to conduct the abnormality classification task.",
"We first adopt the ResNet-50 to acquire the classification probability distribution P ( I ) = { p 1 ( I ) , p 2 ( I ) , . . . , p 14 ( I ) } among the 14 common diseases for each image I in the training dataset, where p n ( I ) [0 , 1] .",
"Then, we employ the entropy value H ( I ) of the probability distribution, defined as follows: H ( I ) = 14 (cid:88) n =1 ( p n ( I ) log ( p n ( I )) + (1 p n ( I )) log (1 p n ( I ))) (1) We employ the entropy value H ( I ) as the model confidence measure, indicating whether an image is easy to be classified or not.",
"Heuristic Metric d 3 A serious problem for medical report generation models is the tendency to generate plausible general reports with no prominent abnormal narratives (Jing et al., 2019; Li et al., 2018; Yuan et al., 2019).",
"The normal sentences are easy to learn, but are less informative, while most abnormal sentences, consisting of more rare and diverse abnormalities, are relatively more difficult to learn, especially at the initial learning stage.",
"To this end, we adopt the number of abnormal sentences in a report to define the difficulty of a report.",
"Following Jing et al. (2018), we consider sentences which contain no, normal, clear, stable as normal sentences, the rest sentences are consider as abnormal sentences.",
"Model Confidence d 4 Similar to visual difficulty, we further introduce a model confidence as a metric.",
"To this end, we define the difficulty using the negative log-likelihood loss values (Xu et al., 2020; Zhang et al., 2018) of training samples.",
"To acquire the negative log-likelihood loss values, we Algorithm 1 Single Difficulty-based Curriculum Learning (Platanios et al., 2019).",
"adopt the widely-used and classic CNN-HLSTM (Jing et al., 2018), in which the CNN is implemented with ResNet-50, trained on the downstream dataset used for evaluation with a cross-entropy loss.",
"It is worth noticing that since we focus on the medical report generation and design the metrics based on the difficulty of accurately capturing and describing the abnormalities, we do not consider some language difficulty metrics used in neural machine translation, e.g., the sentence length (Platan-ios et al., 2019), the n-gram rarity together with Named Entity Recognition (NER) and Parts of Speech (POS) taggings (Zhao et al., 2020).",
"In this section, we first briefly introduce the conventional single difficulty-based curriculum (Pla-tanios et al., 2019).",
"Then we propose the multiple difficulty-based curriculum learning for medical report generation.",
"Platanios et al. (2019) proposed a competence-based and single difficulty-based curriculum learning framework (see Algorithm 1), which first sorts each instance in the training dataset D train according to a single difficulty metric d , and then defines the model competence c ( t ) (0 , 1] at training step t by following functional forms:",
"where c (0) is the initial competence and usually set to 0.01, p is the coefficient to control the curriculum schedule and is usually set to 2, and T is",
"Algorithm 2 Multiple Difficulty-based Curriculum Learning.",
"The Red colored text denotes the differences from Algorithm",
"the duration of curriculum learning and determines the length of the curriculum.",
"In implementations, at training time step t , the top c ( t ) portions of the sorted training dataset are selected to sample a training batch to train the model.",
"In this way, the model is able to gradually proceed from easy samples to more complex ones in training, resulting in first starting to utilize the simple and easy-written reports for training, and then attempting to utilize harder reports for training.",
"The training instances of medical report generation task are pairs of medical images and corresponding reports which is a multi-modal data.",
"It's hard to estimate the difficulty with only one metric.",
"In addition, the experimental results (see Table 4) show that directly fusing multiple difficulty metrics as one ( d 1 + d 2 + d 3 + d 4 ) is obviously inappropriate, which is also verified in Platanios et al. (2019).",
"To this end, we extend the single difficulty-based curriculum learning into the multiple difficulty-based curriculum learning, where we provide the medical report generation models with four different difficulty metrics, i.e., d 1 , d 2 , d 3 , d 4 (see Section 4).",
"A simple and natural way is to randomly or sequentially choose a curricula to train the model, i.e., 1 2 3 4 1. However, a better approach is to adaptively select the most appropriate curricula for each training step, which follows the common practice of human learning behavior: When we have learned some curricula well, we tend to choose the under-learned curricula to learn.",
"Algorithm 2 summarizes the overall learning process of the proposed framework and Figure 3 illustrates the process of Algorithm",
"2. In implementations, similarly, we first sort the training dataset based on the four difficulty metrics and acquire four sorted training datasets in line 1-2.",
"Then, based on the model competence, we acquire the training samples for each curricula, in line 4.",
"In line 5, we further estimate the perplexity (PPL) of model on different training samples B i ( t i ) corresponding to different curricula, defined as: PPL ( B i ( t i )) = (cid:88) R k B i ( t i ) N (cid:118)(cid:117)(cid:117)(cid:116) N (cid:89) m =1 1 P ( w km | w k 1 ,...,w km 1 ) where R k = { w k 1 , w k 2 , . . . , w kN } denotes the k -th report in B i ( t i ) .",
"The perplexity (PPL) measures how many bits on average would be needed to encode each word of the report given the model, so the current curricula with higher PPL means that the model is not well-learned for this curricula and need to be improved.",
"Therefore, the PPL can be used to determine the curricula at each training step dynamically.",
"Specifically, in line 8-9, we select the under-learned curricula, i.e., the curricula with maximum PPL, to train the current model.",
"After that, we again estimate the model competence in the selected curricula in line 11 and compute the PPL of model on the training samples corresponding to the selected curricula in line 12.",
"We firstly describe two public datasets as well as the widely-used metrics, baselines and settings.",
"Then we present the evaluation of our CMCL.",
"We conduct experiments on two public datasets, i.e., a widely-used benchmark IU-Xray (Demner-Fushman et al., 2016) and a recently released large-scale MIMIC-CXR (Johnson et al., 2019).",
"MIMIC-CXR 3 is the recently released largest dataset to date and consists of 377,110 chest X-ray images and 227,835 radiology reports from 64,588 patients of the Beth Israel Deaconess Medical Center.",
"For IU-Xray dataset, following previous works (Chen et al., 2020; Jing et al., 2019; Li et al., 2019, 2018), we randomly split the dataset into 70%-10%-20% training-validation-testing splits.",
"At last, we preprocess the reports by tokenizing, converting to lower-cases and removing non-alpha tokens.",
"For MIMIC-CXR, following Chen et al. (2020); Liu et al. (2021a,b), we use the official splits to report our results, resulting in 368,960 samples in the training set, 2,991 samples in the validation set and 5,159 samples in the test set.",
"We convert all tokens of reports to lower-cases and filter tokens that occur less than 10 times in the corpus, resulting in a vocabulary of around 4,000 tokens.",
"We tested three representative baselines that were originally designed for image captioning and three",
"competitive baselines that were originally designed for medical report generation.",
"NIC: Vinyals et al. (2015) proposed the encoder-decoder network, which employs a CNN-based encoder to extract image features and a RNN-based decoder to generate the target sentence, for image captioning.",
"Spatial-Attention: Lu et al. (2017) proposed the visual attention, which is calculated on the hidden states, to help the model to focus on the most relevant image regions instead of the whole image.",
"Adaptive-Attention: Considering that the decoder tends to require little or no visual information from the image to predict the nonvisual words such as the and of, Lu et al. (2017) designed an adaptive attention model to decide when to employ the visual attention.",
"6.2.2 Medical Report Generation Baselines CNN-HLSTM: Jing et al. (2018) introduced the Hierarchical LSTM structure (HLSTM), which contains the paragraph LSTM and the sentence LSTM.",
"HLSTM first uses the paragraph LSTM to generate a series of high-level topic vectors representing the sentences, and then utilizes the sentence LSTM to generate a sentence based on each topic vector.",
"HLSTM+att+Dual: Harzig et al. (2019) proposed a hierarchical LSTM with the attention mechanism and further introduced two LSTMs, i.e., Normal LSTM and Abnormal LSTM, to help the model to generate more accurate normal and abnormal sentences.",
"Co-Attention: Jing et al. (2018) proposed the co-attention model, which combines the merits of visual attention and semantic attention, to attend to both images and predicted semantic tags 4 simultaneously, exploring the synergistic effects of visual and semantic information.",
"We adopt the widely-used BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005) and ROUGE-L (Lin, 2004), which are reported by the",
"evaluation toolkit (Chen et al., 2015) 5 , to test the performance.",
"Specifically, ROUGE-L is proposed for automatic evaluation of the extracted text summarization.",
"METEOR and BLEU are originally designed for machine translation evaluation.",
"For all baselines, since our focus is to change the training paradigm, which improves existing baselines by efficiently utilizing the limited medical data, we keep the inner structure of the baselines untouched and preserve the original parameter setting.",
"For our curriculum learning framework, following previous work (Platanios et al., 2019), the c (0) and p are set to 0.01 and 2, respectively.",
"For different baselines, we first re-implement the baselines without using any curriculum.",
"When equipping baselines with curriculum, following Platanios et al. (2019), we set T in",
"Eq.(2) to a quarter of the number of training steps that the baseline model takes to reach approximately 90% of its fi-nal BLEU-4 score.",
"To boost the performance, we further incorporate the Batching method (Xu et al., 2020), which batches the samples with similar difficulty in the curriculum learning framework.",
"To re-implement the baselines and our approach, following common practice (Jing et al., 2019; Li et al., 2019, 2018; Liu et al., 2021a,b), we extract image features for both dataset used for evaluation from a ResNet-50 (He et al., 2016), which is pretrained on ImageNet (Deng et al., 2009) and fine-tuned on public available CheXpert dataset (Irvin et al., 2019).",
"To ensure consistency with the experiment settings of previous works (Chen et al., 2020), for IU-Xray, we utilize paired images of a patient as the input; for MIMIC-CXR, we use single image as the input.",
"For parameter optimization, we use Adam optimizer (Kingma and Ba, 2014) with a batch size of 16 and a learning rate of 1e-4.",
"As shown in Table 1, for two datasets, all baselines equipped with our approach receive performance gains over most metrics.",
"The results prove the effectiveness and the compatibility of our CMCL in promoting the performance of existing models by better utilizing the limited medical data.",
"Besides, in Table 2, we further select six existing state-of-the-art models, i.e., HRGR-Agent (Li et al., 2018), CMAS-RL (Jing et al., 2019), SentSAT + KG (Zhang et al., 2020a), Up-Down (Anderson et al., 2018), Transformer (Chen et al., 2020) and 5 https://github.com/tylin/coco-caption Methods Dataset: MIMIC-CXR (Johnson et al., 2019) Dataset: IU-Xray (Demner-Fushman et al., 2016) B-1 B-2 B-3 B-4 M R-L B-1 B-2 B-3 B-4 M R-L NIC (Vinyals et al., 2015) 0.290 0.182 0.119 0.081 0.112 0.249 0.352 0.227 0.154 0.109 0.133 0.313 w/ CMCL 0.301 0.189 0.123 0.085 0.119 0.241 0.358 0.223 0.160 0.114 0.137 0.317 Spatial-Attention (Lu et al., 2017) 0.302 0.189 0.122 0.082 0.120 0.259 0.374 0.235 0.158 0.120 0.146 0.322 w/ CMCL 0.312 0.200 0.125 0.087 0.118 0.258 0.381 0.246 0.164 0.123 0.153 0.327 Adaptive-Attention (Lu et al., 2017) 0.307 0.192 0.124 0.084 0.119 0.262 0.433 0.285 0.194 0.137 0.166 0.349 w/ CMCL 0.302 0.192 0.129 0.091 0.125 0.264 0.437 0.281 0.196 0.140 0.174 0.338 CNN-HLSTM (Krause et al., 2017) 0.321 0.203 0.129 0.092 0.125 0.270 0.435 0.280 0.187 0.131 0.173 0.346 w/ CMCL 0.337 0.210 0.136 0.097 0.131 0.274 0.462 0.293 0.207 0.155 0.179 0.360 HLSTM+att+Dual (Harzig et al., 2019) 0.328 0.204 0.127 0.090 0.122 0.267 0.447 0.289 0.192 0.144 0.175 0.358 w/ CMCL 0.330 0.206 0.133 0.088 0.119 0.272 0.461 0.298 0.201 0.150 0.173 0.359 Co-Attention (Jing et al., 2018) 0.329 0.206 0.133 0.095 0.129 0.273 0.463 0.293 0.207 0.155 0.178 0.365 w/ CMCL 0.344 0.217 0.140 0.097 0.133 0.281 0.473 0.305 0.217 0.162 0.186 0.378 Table 1: Performance of automatic evaluations on the test sets of the MIMIC-CXR and the IU-Xray datasets.",
"vs. Models Baseline wins Tie w/ CMCL' wins CNN-HLSTM (Jing et al., 2018) 15 28 57 Co-Attention (Jing et al., 2018) 24 35 41 Table 3: We invite 2 professional clinicians to conduct the human evaluation for comparing our method with baselines.",
"R2Gen (Chen et al., 2020), for comparison.",
"For these selected models, we directly quote the results from the original paper for IU-Xray, and from Chen et al. (2020) for MIMIC-CXR.",
"As we can see, based on the Co-Attention (Chen et al., 2020), our approach CMCL achieves competitive results with these state-of-the-art models on major metrics, which further demonstrate the effectiveness of the proposed approach.",
"In this section, to verify the effectiveness of our approach in clinical practice, we invite two professional clinicians to evaluate the perceptual quality of 100 randomly selected reports generated by Baselines and Baselines w/ CMCL.",
"For the baselines, we choose a representative model: CNN-HLSTM and a state-of-the-art model: Co-Attention.",
"The clinicians are unaware of which model generates these reports.",
"In particular, to have more documents examined, we did not use the same documents for both clinicians and check the agreements between them.",
"That is to say, the documents for different clinicians do not overlap.",
"The results in Table 3 show that our approach is better than baselines in clinical practice with winning pick-up percentages.",
"In particular, all invited professional clinicians found that our approach can generate fluent reports with more accurate descriptions of abnormalities than baselines.",
"It indicates that our approach can help baselines to efficiently alleviate the data bias problem, which also can be verified in Section 6.7.",
"Analysis on the Difficulty Metrics In this section, we conduct an ablation study by only using a single difficulty metric during the curriculum learning, i.e., single difficulty-based curriculum learning, to investigate the contribution of each difficulty metric in our framework and the results are shown in Table 4.",
"Settings (a-d) show that every difficulty metric can boost the performance of baselines, which verify the effectiveness of our designed difficulty metrics.",
"In particular, 1) the Settings Visual Difficulty Textual Difficulty RouteStrategy Dataset: IU-Xray (Demner-Fushman et al., 2016) HeuristicMetric Model Confidence HeuristicMetric Model Confidence Baseline: CNN-HLSTM (Jing et al., 2018) B-1 B-2 B-3 B-4 M R-L Baseline ---0.435 0.280 0.187 0.131 0.173 0.346",
"model confidence in both visual and textual difficulties achieves better performance than the heuristic metrics.",
"It shows that the model confidence is the more critical in neural models.",
"2) Both the model confidence and heuristic metrics in the textual difficulty achieve better performance than their counterparts in the visual difficulty, which indicates that the textual data bias is the more critical in textual report generation task.",
"When progressively incorporate each difficulty metric, the performance will increase continuously (see settings (e-g)), showing that integrating different difficulty metrics can bring the improvements from different aspects, and the advantages of all difficulty metrics can be united as an overall improvement.",
"Analysis on the Route Strategy As stated in Section 5.2, to implement the multiple difficulty-based curriculum learning, three simple and natural ways is to: 1) Fuse multiple difficulty metrics directly as a single mixed difficulty metric, d 1 + d 2 + d 3 + d 4 ; 2) Randomly choose a curricula and 3) Sequentially choose a curricula (i.e., 1 2 3 4 1) to train the model.",
"Table 4 (h-j) show the results of the three implementations.",
"As we can see, all route strategies are viable in practice with improved performance of medical report generation, which proves the effectiveness and robustness of our CMCL framework.",
"Besides, all of them perform worse than our approach (Setting",
"(g)), which confirms the effectiveness of dynamically learning strategy at each training step.",
"In Figure 1, we give two intuitive examples to better understand our approach.",
"As we can see, our approach generates structured and robust reports, which show significant alignment with ground truth reports and are supported by accurate abnormal descriptions.",
"For example, the generated report correctly describes Blunting of right costophrenic in the first example and Scoliosis is present in the second example.",
"The results prove our arguments and verify the effectiveness of our proposed CMCL in alleviating the data bias problem by enabling the model to gradually proceed from easy to more complex instances in training.",
"In this paper, we propose the novel competence-based multimodal curriculum learning framework (CMCL) to alleviate the data bias by efficiently utilizing the limited medical data for medical report generation.",
"To this end, considering the difficulty of accurately capturing and describing the abnormalities, we first assess four sample difficulties of training data from the visual complexity and the textual complexity, resulting in four different curricula.",
"Next, CMCL enables the model to be trained with the appropriate curricula and gradually proceed from easy samples to more complex ones in training.",
"Experimental results demonstrate the effectiveness and the generalization capabilities of CMCL, which consistently boosts the performance of the baselines under most metrics.",
"This work is partly supported by Tencent Medical AI Lab, Beijing, China.",
"We would like to sincerely thank the clinicians Xiaoxia Xie and Jing Zhang of the Harbin Chest Hospital in China for providing the human evaluation.",
"We sincerely thank all the anonymous reviewers for their constructive comments and suggestions that substantially improved this paper.",
"In this work, we focus on helping a wide range of existing medical report generation systems alleviate the data bias by efficiently utilizing the limited medical data for medical report generation.",
"Our work can enable the existing systems to gradually proceed from easy samples to more complex ones in training, which is similar to the learning curve of radiologist: (1) first start from simple and easy-written reports; (2) and then attempt to consume harder reports, which consist of rare and diverse abnormalities.",
"As a result, our work can promote the usefulness of existing medical report generation systems in better assisting radiologists in clinical decision-makings and reducing their workload.",
"In particular, for radiologists, given a large amount of medical images, the systems can automatically generate medical reports, the radiologists only need to make revisions rather than write a new report from scratch.",
"We conduct the experiments on the public MIMIC-CXR and IU-Xray datasets.",
"All protected health information was de-identified.",
"De-identification was performed in compliance with Health Insurance Portability and Accountability Act (HIPAA) standards in order to facilitate public access to the datasets.",
"Deletion of protected health information (PHI) from structured data sources (e.g., database fields that provide patient name or date of birth) was straightforward.",
"All necessary pa-tient/participant consent has been obtained and the appropriate institutional forms have been archived."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"While it has been shown that Neural Machine Translation ( NMT ) is highly sensitive to noisy parallel training samples, prior work treats all types of mismatches between source and target as noise.",
"As a result, it remains unclear how samples that are mostly equivalent but contain a small number of semantically divergent tokens impact NMT training.",
"To close this gap, we analyze the impact of different types of fine-grained semantic divergences on Transformer models.",
"We show that models trained on synthetic divergences output degenerated text more frequently and are less confident in their predictions.",
"Based on these findings, we introduce a divergent-aware NMT framework that uses factors to help NMT recover from the degradation caused by naturally occurring divergences, improving both translation quality and model calibration on EN FR tasks.",
"While parallel texts are essential to Neural Machine Translation ( NMT ), the degree of parallelism varies widely across samples in practice, for reasons ranging from noise in the extraction process (Roziewski and Stokowiec, 2016) to nonliteral translations (Zhai et al., 2019b, 2020a).",
"For instance (Figure 1), a French SOURCE could be paired with an exact translation into English ( EQ ), with a mostly equivalent translation where only a few tokens convey divergent meaning (fine-DIV ), or with a semantically unrelated, noisy reference (coarseDIV ).",
"Yet, prior work treats parallel samples in a binary fashion: coarse-grained divergences are viewed as noise to be excluded from training (Koehn et al., 2018), whilst others are typically regarded as gold-standard equivalent translations.",
"As a result, the impact of fine-grained divergences on NMT remains unclear.",
"This paper aims to understand and mitigate the impact of fine-grained semantic divergences in Figure 1: Equivalent vs. Divergent references on NMT training.",
"NMT .",
"We first contribute an analysis of how fine-grained divergences in training data affect NMT quality and confidence.",
"Starting from a set of equivalent English-French WikiMatrix sentence pairs, we simulate divergences by gradually corrupting them with synthetic fine-grained divergences .",
"Following Khayrallah and Koehn (2018)who, in contrast, study the impact of noise on MT we control for different types of fine-grained semantic divergences and different ratios of equivalent vs. divergent data.",
"Our findings indicate that these imperfect training references: hurt translation quality (as measured by BLEU and METEOR ) once they overwhelm equivalents; output degenerated text more frequently; and increase the uncertainty of models' predictions.",
"Based on these findings, we introduce a divergent-aware NMT framework that incorporates information about which tokens are indicative of semantic divergences between the source and target side of a training sample.",
"Source-side divergence tags are integrated as feature factors (Had-dow and Koehn, 2012; Sennrich and Haddow, 2016; Hoang et al., 2016), while target-side divergence tags form an additional output sequence generated in a multi-task fashion (Garca-Martnez et al., 2016, 2017).",
"Results on EN FR translation show that our approach is a successful mitigation strategy : it helps NMT recover from the negative impact of fine-grained divergences on translation quality, with fewer degenerated hypotheses, and more confident and better calibrated predictions.",
"We make our code publicly available: https://github.com/Elbria/xling-SemDiv-NMT .",
"Cross-lingual Semantic Divergences We use this term to refer to meaning differences in aligned bilingual text (Vyas et al., 2018; Carpuat et al., 2017).",
"Divergences in manual translation might arise due to the translation process (Zhai et al., 2018) and result in non-literal translations (Zhai et al., 2020a).",
"Divergences might also arise in parallel text extracted from multilingual comparable resources.",
"For instance, in Wikipedia, documents aligned across languages might contain parallel segments that share important content, yet they are not perfect translations of each other, yielding fine-grained semantic divergences (Smith et al., 2010).",
"Finally coarse-grained divergences might result from the process of automatically mining and aligning corpora from monolingual data (Fung and Cheung, 2004; Munteanu and Marcu, 2005), or web-scale parallel text (Smith et al., 2013; El-Kishky et al., 2020; Espl ` a et al., 2019).",
"Noise vs. Semantic Divergences In the context of MT , noise often refers to mismatches in web-crawled parallel corpora that are collected without guarantees about their quality.",
"Khayrallah and Koehn (2018) define five frequent types of noise found in the German-English Paracrawl corpus: misaligned sentences , disfluent text , wrong language , short segments , and untranslated sentences .",
"They examine the impact of noise on translation quality and find that untranslated training instances cause NMT models to copy the input sentence at inference time.",
"Their findings motivated a shared task dedicated to filtering noisy samples from web-crawled data at WMT , since 2018 (Koehn et al., 2018, 2019, 2020).",
"This work moves beyond such coarse divergences and focuses instead on fine-grained divergences that affect a small number of tokens within mostly equivalent pairs and that can be found even in high-quality parallel corpora.",
"Training Assumptions NMT models are typically trained to maximize the log-likelihood of the training data, D { ( x ( n ) , y ( n ) ) } Nn =1 , where ( x ( n ) , y ( n ) ) is the n -th sentence pair consisting of sentences that are assumed to be translations of each other .",
"Under this assumption, model parameters are updated to maximize the token-level cross-entropy loss: J ( ) = N (cid:88) n =1 T (cid:88) t =1 log p ( y ( n ) t | y ( n ) <t , x ( n ) ; ) (1) In Figure 1, we illustrate how semantic divergences interact with NMT training.",
"In the case of coarse divergences, both the prefixes (cid:101) y ( n ) t< 1 and targets (cid:101) y ( n ) t , yield a noisy training signal at each time step t , which motivates excluding them from the training pool entirely.",
"In the case of fine-grained divergences, the assumption of semantic equivalence is only partially broken.",
"Depending on the time step t , we might thus condition the prediction of the next token on partially corrupted prefixes, encourage the model to make a wrong prediction, or do a combination of the above.",
"This suggests that fine-grained divergent samples provide a noisy yet potentially useful training signal depending on the time step.",
"Meanwhile, fine-grained divergences increase uncertainty in the training data, and as a result might impact models' confidence in their predictions, as noisy untranslated samples do (Ott et al., 2018).",
"This work seeks to clarify and mitigate their impact on NMT , accounting for both translation quality and model confidence.",
"We evaluate the impact of semantic divergences on NMT by injecting increasing amounts of synthetic divergent samples during training, following the methodology of Khayrallah and Koehn (2018) for noise.",
"We focus on three types of divergences, which were found to be frequent in parallel corpora.",
"They are fine-grained as they represent discrepancies between the source and target segments at a word or phrase level: LEXICAL SUBSTITUTION aims at mimicking particularization and generalization operations resulting from non-literal translations (Zhai et al., 2019a, 2020b); PHRASE REPLACEMENT mimics phrasal mistranslations; SUBTREE DELETION simulates missing phrasal content from the source or target side.",
"Synthetic divergent samples are automatically generated by corrupting semantically equivalent sentence pairs, following the methodology introduced by Briakou and Carpuat (2020).",
"Equivalents are identified by their Divergent m BERT classifier that yields an F 1 score of 84 , on manually annotated WikiMatrix data, despite being trained on synthetic data.",
"For LEXICAL SUBSTITUTION we corrupt equivalents by substituting words with their hypernyms or hyponyms from WordNet, for PHRASE REPLACEMENT we replace sequences of words with phrases of matching POS tags, and for SUBTREE DELETION we randomly delete subtrees in the dependency parse tree of either the source or the target.",
"Having access to those 4 versions of the same corpus (one initial equivalent and three synthetic divergences), we mix equivalents and divergent pairs introducing one type of divergence at a time (corpora statistics are included in D).",
"Finally, we evaluate the translation quality and uncertainty of the resulting translation models.",
"Training Data We train our models on the parallel WikiMatrix French-English corpus (Schwenk et al., 2019), which consists of sentence pairs mined from Wikipedia pages using language-agnostic sentence embeddings ( LASER ) (Artetxe and Schwenk, 2019).",
"Previous annotations show that 40% of sentence pairs in a random sample contain fine-grained divergences (Briakou and Carpuat, 2020).",
"After cleaning noisy samples using simple rules (i.e., exclude pairs that are",
"a) too short or too long,",
"b) mostly numbers,",
"c) almost copies based on edit distance), we extract equivalent samples using the Divergent m BERT model.",
"Table 1 presents statistics on the extracted pairs, along with the corpus created if we threshold the LASER score at 1 .",
"04 , as suggested by Schwenk et al. (2019).",
"Development and Test data We use the official development and test splits of the TED corpus (Qi et al., 2018), consisting of 4 , 320 and 4 , 866 gold-standard translation pairs, respectively.",
"All models Corpus #Sentences WIKIMATRIX 6 , 562 , 360 + HEURISTIC FILTERING 2 , 437 , 108 + LASER FILTERING 1 , 250 , 683 + divergentm BERT FILTERING 751 , 792 Table 1: WikiMatrix EN-FR corpus statistics.",
"share the same BPE vocabulary.",
"We average results across runs with 3 different random seeds.",
"Preprocessing We use the standard Moses scripts (Koehn et al., 2007) for punctuation normalization, true-casing, and tokenization.",
"We learn 32 KBPE s (Sennrich et al., 2016c) using Sentence-Piece (Kudo and Richardson, 2018).",
"Models We use the base Transformer architecture (Vaswani et al., 2017), with embedding size of 512 , transformer hidden size of 2 , 048 , 8 attention heads, 6 transformer layers, and dropout of 0 .",
"1 .",
"Target embeddings are tied with the output layer weights.",
"We train with label smoothing ( 0 . 1 ).",
"We optimize with Adam (Kingma and Ba, 2015) with a batch size of 4 , 096 tokens and checkpoint models every 1 , 000 updates.",
"The initial learning rate is 0 .",
"0002 , and it is reduced by 30 % after 4 checkpoints without validation perplexity improvement.",
"We stop training after 20 checkpoints without improvement.",
"We select the best checkpoint based on validation BLEU (Papineni et al., 2002).",
"All models are trained on a single GeForce GTX 1080 GPU .",
"Translation Quality Table 2 presents the impact of semantic divergences on BLEU and METEOR .",
"Corrupting equivalent bitext with fine-grained divergences hurts translation quality across the board.",
"In most cases, the degradation is proportional to the percentage of corrupted training samples.",
"LEXICAL SUBSTITUTION causes the largest degradation for both metrics.",
"The degradation is relatively smaller for METEOR than BLEU , which we attribute to the fact that METEOR allows matches between synonyms when comparing references to hypotheses.",
"SUBTREE DELETION and LEXICAL SUBSTITUTION corruptions lead to significant degradation at 50% ( BLEU ; standard deviations across reruns are < 0 . 4 ).",
"By contrast, Transformers are more robust to PHRASE REPLACEMENT corruptions, as degradations are only significant after corrupting 70% ( BLEU ) of equivalents.",
"Token Uncertainty We measure the impact of divergences on model uncertainty at training time and at test time.",
"For the first, we extract the probability of a reference token conditioned on reference prefixes at each time step.",
"For the latter, we compute the probability of the token predicted by the model given its own history of predictions.",
"Figure 2 shows that models trained on EQUIVALENTS are more confident in their token level predictions both at inference and training time.",
"SUBTREE DELETION mismatches affect models' confidence less than other types, while PHRASE REPLACEMENT hurts confidence the most both at inference and at training time.",
"Finally, we observe that differences across divergence types are larger in early decoding steps, while at later steps, they all converge below the EQUIVALENTS .",
"Degenerated Hypotheses When models are trained on 50% or more divergent samples, the total length of their hypotheses is longer than the references.",
"Manual analysis on models trained with 100% of divergent samples suggests that this length effect is partially caused by degenerated text.",
"Following Holtzman et al. (2019)who study this phenomenon for unconditional text generationwe define degenerations as output text that is bland, incoherent, or gets stuck in repetitive loops.",
"1 1 For instance, I've never studied sculpture, engineering and architecture, and the engineering and architecture.",
"We automatically detect degenerated text in model outputs by checking whether they contain repetitive loops of n -grams that do not appear in the reference (details on the algorithm are in C).",
"Figure 3 shows that exposing NMT to divergences increases the percentage of degenerated outputs.",
"Even with large beams, the models trained on divergent data yield more repetitions than the EQUIVALENTS .",
"Moreover, divergences due to phrasal mismatches ( PHRASE REPLACEMENT and SUBTREE DELETION ) yield more frequent repetitions than token-level mismatches ( LEXICAL SUBSTITUTION ).",
"Interestingly, the latter almost matches the frequency of repetitions in EQUIVALENTS with larger beams ( 5 ).",
"Summary Synthetic divergences hurt translation quality, as expected.",
"More surprisingly, our study also reveals that this degradation is partially due to more frequent degenerated outputs, and that divergences impact models' confidence in their predictions.",
"Different types of divergences have different effects: LEXICAL SUBSTITUTION causes the largest degradation in translation quality, SUBTREE DELETION and PHRASE REPLACEMENT increase the number of degenerated beam hypotheses, while PHRASE REPLACEMENT also hurts the models' confidence the most.",
"Nevertheless, the impact of divergences on BLEU appears to be smaller than that of noise (Khayrallah and Koehn, 2018).",
"2 This suggests that noise filtering techniques are suboptimal to deal with fine-grained divergences.",
"We now turn to naturally occurring divergences in WikiMatrix.",
"We will see that their impact on model quality and uncertainty is consistent with that of synthetic divergences ( 4.3).",
"We propose a divergent-aware framework for NMT ( 4.1) that successfully mitigates their impact ( 4.3).",
"We use semantic factors to inform NMT of tokens that are indicative of meaning differences in each sentence pair.",
"We tag divergent source and target tokens in parallel segments as equivalent ( EQ ) or divergent ( DIV ) using an m BERT -based classifier trained on synthetic data.",
"2 While the absolute scores are not directly comparable across settings, Khayrallah and Koehn (2018) report that noise has a more striking impact of 8 to 25 BLEU.",
"The classifier has a 45 F 1 score on a fine-grained divergence test set (Briakou and Carpuat, 2020).",
"The predicted tags are thus noisy, as expected on this challenging task, yet we will see that they are useful.",
"An example is illustrated below: SRC TOKENS votre p`ere est francais FACTORS EQ DIV EQ EQ TGT TOKENS your parent is french FACTORS EQ DIV EQ EQ Source Factors We follow Sennrich and Haddow (2016) who represent the encoder input as a combination of token embeddings and linguistic features.",
"Concretely, we look up separate embeddings vectors for tokens and source-side divergent predictions, which are then concatenated.",
"The length of the concatenated vector matches the total embedding size.",
"Target Factors Target-side divergence tags are an additional output sequence, as in Garca-Martnez et al. (2016).",
"At each time step the model produces two distributions: one over the token target vocabulary and one over the target factors.",
"The model is trained to minimize a divergent-aware loss (Equation 2).",
"Terms in red (also, underlined) correspond to modifications to the traditional NMT loss.",
"At time step t , the model is rewarded to match the reference target y ( n ) t , conditioned on the source sequence of tokens ( x ( n ) ), the source factors ( ( n ) ), the token target prefix ( y ( n ) <t ), and the target factors prefix ( z ( n ) <t ).",
"At the same time ( t ), the model is rewarded to match the factored predictions for the previous time step = t 1 .",
"The time shift between the two target sequences is introduced so that the model learns to firstly predict the reference token at and then its corresponding EQ vs. DIV label, at the same time step.",
"The factored predictions are conditioned again on x ( n ) , ( n ) , the target factor prefix z ( n ) < and the token prefix ( y ( n ) ).",
"L = N (cid:88) n =1 (cid:32) T (cid:88) t =1 log p ( y ( n ) t | y ( n ) <t , z ( n ) <t , x ( n ) , ( n ) ; ) (cid:124) (cid:123)(cid:122) (cid:125) L ( n ) MT + T (cid:88) = t 1 log p ( z ( n ) | z ( n ) < , y ( n ) , x ( n ) , ( n ) ; ) (cid:124) (cid:123)(cid:122) (cid:125) L ( n ) factor (cid:33) (2) Inference At test time, input tokens are tagged with EQ to encourage the model to predict an equivalent translation.",
"We decode using beam search for predicting the translation sequence.",
"The token predictions are conditioned on both the token and the factors prefixes.",
"The factor prefixes are greedily decoded and thus do not participate in beam search.",
"Divergences We conduct an extensive comparison of models exposed to different amounts of equivalent and divergent WikiMatrix samples.",
"Starting from the pool of examples identified as divergent at 3.2, we rank and select the most fine-grained divergences by thresholding the bicleaner score (Ramrez-Sanchez et al., 2020) at 0 .",
"5 , 0 .",
"7 and 0 .",
"8 .",
"For details, see A. Models We compare the factored models ( DIVFACTORIZED ) for incorporating divergent tokens (4.1) against: 1. LASER models are trained on WikiMatrix pairs with a LASER score greater than 1 .",
"04 the noise filtering strategy recommended by Schwenk et al. (2019).",
"Our prior work shows that thresholding LASER might introduce a number of divergent data in the training pool varying from fine to coarse mismatches (Briakou and Carpuat, 2020).",
"2. EQUIVALENTS models are trained on WikiMatrix pairs detected as exact translations (3.2); 3. DIV-AGNOSTIC models are trained on equivalent and fine-grained divergent data without incorporating information that distinguishes between them; 4. DIV-TAGGED models distinguish equivalences from divergences by appending < EQ > vs. < DIV > tags as source-side constraints (Sennrich et al., 2016a).",
"Models' details Our models are implemented in the Sockeye2 toolkit (Domhan et al., 2020).",
"3 We set the size of factor embeddings to 8 , the source token embeddings to 504 and target embeddings to 514 , yielding equal model sizes across experiments.",
"All other parameters are kept the same across models, as discussed in 3.2, except that target embeddings are not tied with output layer weights for factored models.",
"More details are included in B. Other Data & Preprocessing We use the same preprocessing as well as development and test sets as in 3.2, except we learn 5 KBPE s as in 3 https://github.com/awslabs/sockeye Schwenk et al. (2019).",
"DIV-FACTORIZED , DIVAGNOSTIC , and DIV-TAGGED models are compared in controlled setups that use the same training data.",
"We also evaluate out-of-domain on the khresmoi-summary test set for the WMT 2014 medical translation task (Bojar et al., 2014).",
"Evaluation We evaluate translation quality with BLEU (Papineni et al., 2002) and METEOR (Baner-jee and Lavie, 2005).",
"4,5 We compute Inference Expected Calibration Error (Inf ECE ) as Wang et al. (2020), which measures the difference in expectation between confidence and accuracy.",
"6 We measure token-level translation accuracy based on Translation Error Rate ( TER ) alignments between hypotheses and references.",
"7 Unless mentioned otherwise, we decode with a beam size of 5 .",
"We discuss the impact of real divergences along the dimensions surfaced by the synthetic data analysis.",
"Translation Quality Table 3 presents BLEU and METEOR scores across model configurations and data settings on the TED test sets.",
"First, the model trained on EQUIVALENTS represents a very competitive baseline as it performs better or statistically comparable to all models.",
"This result is in line with prior evidence of Vyas et al. (2018) who show that filtering out the most divergent pairs in noisy corpora (e.g., OpenSubtitles and Com-monCrawl) does not hurt translation quality.",
"Interestingly, the EQUIVALENTS model outperforms LASER across metrics and translation directions, despite the fact that it is exposed to only about half of the training data.",
"Gradually adding divergent data ( DIV-AGNOSTIC ) hurts translation quality across the board compared to the EQUIVALENTS model.",
"The drops are significantly larger when divergences overwhelm the equivalent translations, which is consistent with our findings on synthetic data.",
"Second, DIV-FACTORIZED is the most effective mitigation strategy.",
"With segment-level constraints ( DIV-TAGGED ), models can recover from the degradation caused by divergences ( DIV-AGNOSTIC ), but not consistently.",
"By contrast, token-level factors ( DIV-FACTORIZED ) help NMT recover from the impact of divergences across data setups and reach 4 https://github.com/mjpost/sacrebleu 5 https://www.cs.cmu.edu/alavie/METEOR/ 6 https://github.com/shuo-git/InfECE 7 http://www.cs.umd.edu/snover/tercom/ FR EN EN FR METHOD Training size BLEU METEOR BLEU METEOR LASER 1 .",
"Train.",
"size (M) FR EN EN FRLASER 1 .",
"translation quality comparable to that of the EQUIVALENTS model, successfully mitigating the impact of the noisy training signals from divergent samples.",
"Third, when translating the out-of-domain test set, DIV-FACTORIZED improves over the EQUIVALENTS model, as presented in Table 4. DIVAGNOSTIC models perform comparably to EQUIVALENTS , while factorizing divergences improves on the latter by +1 BLEU , for both directions.",
"8 Mitigating the impact of divergences is thus important for NMT to benefit from the increased coverage of out-of-domain data provided by the divergent samples.",
"Degenerated Hypotheses We check for degenerated outputs across models, data setups (we account for different percentages of divergences in the training data), and different beam sizes (Ta-ble 5).",
"As with synthetic divergences, we observe that when real divergences overwhelm the training data ( 55% ), degenerated loops are almost twice as frequent for all beam sizes.",
"This phenomenon is consistently mitigated by DIV-FACTORIZED models across the board.",
"9 Furthermore, in some settings ( 20% , 33% ), DIV-FACTORIZED models decrease the amount of degenerated text by half compared to the EQUIVALENTS models.",
"10 8 We include METEOR results in Appendix E. 9 We observe similar trends for EN FR in Appendix F 10 LASER models degenerate more frequently than EQUIVALENTS and DIV-FACTORIZED .",
"Uncertainty Figures 4a and 4c show that the gold-standard references are assigned lower probabilities by the DIV-AGNOSTIC models than all other models, especially in early time steps ( t < 30 ).",
"We observe similar drops in confidence based on the probabilities of predicted tokens at inference time (4b and 4d).",
"This confirms that exposing models to fine-grained semantic divergences hurts their confidence, whether the divergences are synthetic or not.",
"Furthermore, factorizing divergences helps mitigate the impact of naturally occurring divergences on uncertainty in addition to translation quality.",
"We conduct a calibration analysis to measure the differences between the confidence (i.e., probability ) and the correctness (i.e., accuracy ) of the generated tokens in expectation.",
"Given that deep neural networks are often mis-calibrated in the direction of over-estimation (confidence > accuracy) (Guo et al., 2017), we check whether the increased confidence of DIV-FACTORIZED hurts calibration (Table 6).",
"DIV-FACTORIZED models are on average more confident and more accurate than their DIV-AGNOSTIC counterparts.",
"Interestingly, DIV-AGNOSTIC has smaller calibration errors than EQUIVALENTS and LASER models across the board.",
"We discuss work related to cross-lingual semantic divergences and noise effects in Section 2 and now turn to the literature that connects with the methods used in this paper.",
"Factored Models Factored models are introduced to inject word-level linguistic annotations (e.g., Part-of-Speech tags, lemmas) in translation.",
"Source-side factors have been used in statistical MT (Haddow and Koehn, 2012) and in NMT (Sen-nrich et al., 2016b; Hoang et al., 2016).",
"Target-side factors are used by Garca-Martnez et al. (2017) as an extension to the traditional NMT framework that outputs multiple sequences.",
"Although their main motivation is to enable models to handle larger vocabularies, Wilken and Matusov (2019) propose a list of novel applications of target-side factors beyond their initial purpose, such as word-case prediction and subword segmentation.",
"Our approach draws inspiration from all the aforementioned works, yet it is unique in its use of both source and target factors to incorporate semantics in NMT .",
"Calibration Kumar and Sarawagi (2019) find that NMT models are miscalibrated, even when conditioned on gold-standard prefixes.",
"They attribute this behavior to the poor calibration of the EOS token and the uncertainty of attention and design a recalibration model to improve calibration.",
"Ott et al. (2018) argue that miscalibration can be attributed to the extrinsic uncertainty of the noisy, untranslated references found in the training data.",
"Muller et al. (2019) investigate the effect of label smoothing on calibration.",
"On a similar spirit, Wang et al. (2020) propose graduated label smoothing to improve calibration at inference time.",
"They also link miscalibration to linguistic properties of the data (e.g., frequency, position, syntactic roles).",
"Our work, in contrast, focuses on the semantic properties of the training data that affect calibration.",
"This work investigates the impact of semantic mismatches beyond noise in parallel text on NMT quality and confidence.",
"Our experiments on EN FR tasks show that fine-grained semantic divergences hurt translation quality when they overwhelm the training data.",
"Models exposed to fine-grained divergences at training time are less confident in their predictions, which hurts beam search and produces degenerated text (repetitive loops) more frequently.",
"Furthermore, we also show that, unlike noisy samples, fine-grained divergences can still provide a useful training signal for NMT when they are modeled via factors.",
"Evaluated on EN FR translation tasks, our divergent-aware NMT framework mitigates the negative impact of divergent references on translation quality, improves the confidence and calibration of predictions, and produces degenerated text less frequently.",
"More broadly, this work illustrates how understanding the properties of training data can help build better NMT models.",
"In future work, we will extend our analysis to other properties of parallel text and to other language pairs, focusing on low-resource conditions where divergences are expected to be even more prevalent.",
"We thank Sweta Agrawal, Doug Oard, Suraj Rajap-pan Nair, the anonymous reviewers and the CLIP lab at UMD for helpful comments.",
"This material is based upon work supported by the National Science Foundation under Award No. 1750695 .",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation."
] | [
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"result",
"abstain",
"result",
"result",
"method",
"objective",
"other",
"other",
"other"
] |
[
"Neural networks are surprisingly good at interpolating and perform remarkably well when the training set examples resemble those in the test set.",
"However, they are often unable to extrapolate patterns beyond the seen data, even when the abstractions required for such patterns are simple.",
"In this paper, we first review the notion of extrapolation, why it is important, and how one could hope to tackle it.",
"We then focus on a specific type of extrapolation, which is especially useful for natural language processing: generalization to sequences longer than those seen during training.",
"We hypothesize that models with a separate content-and location-based attention are more likely to extrapolate than those with common attention mechanisms.",
"We empirically support our claim for recurrent seq2seq models with our proposed attention on variants of the Lookup Table task.",
"This sheds light on some striking failures of neural models for sequences and on possible methods to approaching such issues.",
"It is indisputable that, in recent years, neural network research has made stunning progress on a wide variety of tasks that require to process sequential inputs, such as machine translation (Sutskever et al., 2014) and speech recognition (Graves et al., 2013).",
"However, many researchers have questioned the forms of generalization that neural networks exhibit, which significantly diverges from human-like generalization (Lake and Baroni, 2017; Geirhos et al., 2018).",
"This discrepancy with human-like generalization is particularly true when it comes to extrapolating outside the training space (DeLosh et al., 1997; Marcus, 1998).",
"statistical cues (Jo and Bengio, 2017), testing extrapolation and generalization to samples from the long tails of a distribution might be the only way of quantifying their capacity of abstract reasoning (Santoro et al., 2018).",
"Despite this benefit, little work has been done in extrapolation.",
"A possible explanation is that the probability of encountering a test example in the extrapolation setting seems low when the training set D is large.",
"1 However, such an argument fails to consider the high cost of error in extrapolation settings, and this can be a barrier for real-world scenarios (e.g., self-driving cars).",
"In this paper, we focus on extrapolation in sequences.",
"More precisely, how to generalize sequence-to-sequence predictors to inputs of length n > n D , where n D denotes the length of the longest sequence in the training set.",
"Such extrapolation is crucial for language acquisition, where humans have limited learning resources to account for the unbounded nature of language.",
"To successfully generalize, a language learner needs to process new and potentially longer sentences than previously encountered ones (Chomsky, 1956).",
"Accounting for this unbounded nature of language is challenging for neural networks.",
"This issue has recently been uncovered for seq2seq models by looking at simple artificial tasks (Lake and Baroni, 2018; Liska et al., 2018; Weber et al., 2018).",
"Liska et al. (2018) find that seq2seq architectures can converge to local minima that generalize, but rarely do.",
"This suggests that neural networks could generalize but lack inductive biases that favor extrapolatable behavior.",
"1 Extrapolation is still prevalent in practical scenarios as high-dimensional problems would typically require an exponentially large D to be representative, and the underlying distribution may vary over time (Hooker, 2004).",
"current attention mechanisms, which are mainly responsible for recent successes in natural language processing (NLP), are unlikely to extrapolate as they depend on the content of trained embeddings.",
"This leads us to introduce a novel location -based attention that is loosely inspired by human visual attention.",
"To avoid gaining extrapolation capabilities at the cost of expressivity, we introduce an attention mixer that combines contentand position-based attentions.",
"Finally, we show that recurrent models equipped with this new attention mechanism can extrapolate to longer sequences.",
"Extrapolation is often used but rarely formally de-fined.",
"Ebert et al. (2014) have found that when extrapolation is explicitly defined, it often refers to points outside a hull delimited by the training set.",
"E.g., rectangular hull, concave hull, or convex hull.",
"In this work we use the rectangle hull definition (Brooks et al., 1988), as any model which is extrapolatable for this region would also be extrapolatable for the convex and concave definition.",
"Given any finite training dataset D := { x ( n ) } Nn =1 R d , we define the interpolation domain to be the d -dimensional interval I inter := (cid:81) di =1 [min n x ( n ) i , max n x ( n ) i ] and the extrapolation domain its complement I extra := R d \\ I inter .",
"In other words, we define a test example x to be in the extrapolation setting if at least one of its features x j is larger or smaller than any values it took during training (Figure 1).",
"Throughout this paper, we assume that neural networks with inputs or temporary representations in I extra will break.",
"Indeed, for a given target function t : R d R to approximate, there is an infinite amount of predictors that satisfy f ( x ) = t ( x ) , x I inter R d .",
"Without any additional constraints, it is thus extremely unlikely that f ( x ) = t ( x ) , x R d .",
"This could explain why neural networks have empirically been found to break in extrapolation settings (Lohninger, 1999; Hettiarachchi et al., 2005; Mitchell et al., 2018).",
"The rest of the paper discusses how to constrain representations used by our neural models 2 to be in I inter regardless of the source sentence length, without decreasing their expressivity.",
"First and foremost, we would like a model that can extrapolate to sequences longer than the longest training one n D ( Extrapolation Constraint ).",
"As previously discussed, models with inputs or temporary representations in I extra will very likely break.",
"To satisfy the extrapolation constraint, neural models should thus not depend on features that take values in I extra for sequences longer than n D .",
"Second, our model should be able to learn very complex positional attention patterns ( Positional Patterns Constraint ).",
"Finally, although the position of words in a sentence is important, many tasks depend on their semantics.",
"The model should thus still be able to learn content-based attention patterns ( Content Patterns Constraint ).",
"In the following section, we review previously proposed attention-mechanism and discuss why they do not fulfill the three aforementioned desired properties.",
"An attention mechanism (or attender) takes as input a matrix of keys K := { k Ts } n s s =1 R n s d and a query q t R d , and outputs a probability mass function t R n s that will weight a set of values V := { v Ts } n s s =1 R n s d v to generate a glimpse vector g t R d v used for downstream tasks.",
"Following Graves et al. (2014), it is useful to think of the attender as a memory access module, t as the soft address and g t as the accessed vector.",
"Figure 2 illustrates attention in a recurrent seq2seq (Cho et al., 2014), which we will use for our experiments.",
"Both the keys and the values correspond to the set of encoder hidden states 2 Although the sentence length is a scalar, the temporary representations (outputs of a hidden layer) are high dimensional.",
"K = V = E := { e Ts } n s s =1 , while the query corresponds to the current decoder hidden state q t = d t .",
"Most attention mechanisms compute content-based addressing (associative memory) that depend on (partial) matches of the key and query.",
"They take as input K and q t and output a semantic-based attention t R n s .",
"For example, if you wanted to translate a scientific paper, you could understand the main point of the text without remembering the specific technical terms that were used.",
"When translating, you would go back to the text and translate the jargon by knowing what to look for.",
"A number of content-based have been proposed, they usually differ in a score that quantifies the match between k s , q t through multinomial logits: t := { softmax(score( k s , q t )) } n s 1 s =0 (2) score( k s , q t ) := u T tanh([ (cid:101) k s ; (cid:101) q t ]) Additive Bahdanau et al. (2015) k T s (cid:101) q t Multiplicative Luong et al. (2015) k Ts q t d S. Dot Prod.",
"Vaswani et al. (2017) (3) Where (cid:101) x is a shorthand for W x .",
"A location (or position) attention mechanism computes location-based addressing (random access memory) that depend on the index of the key.",
"It takes as input q t and outputs a location attention t R n s .",
"Intuitively, it decides which value to retrieve based on its index.",
"For example, in German sentences, the verb goes at the end of the sentence, after a subordinate clause.",
"When translating from German to English, it might thus make sense to directly attend to the last word in the German source sentence after encoding a subordinate clause.",
"There are many other cases where attending to words based on their positions seems important.",
"E.g. translating from subject-object-verb to subject-verb-object languages, or understanding the emphasis in some languages.",
"Despite the importance of word ordering in natural language, location-based attention is not common in seq2seq frameworks.",
"This is probably because content-based attention can emulate location-based attention in the usual interpolation setting.",
"Indeed, it can learn to encode a positional embedding in the hidden states of the encoder through some internal counter.",
"This counter is unlikely to work in the extrapolation regime, 3 we, therefore, investigate other types of location-attention that could satisfy the extrapolation constraint.",
"Luong et al. (2015) proposed a location-based attention by using Equation 2 with a score that is independent of the key score( k s , q t ) = w T q t .",
"They restrict themselves to sequences of the same length, which is not of interest to our work.",
"Such a mechanism could be extended to sequences of varying lengths but would still lack extrapolation capacity as the model still has to learn to embed the location of the index it wants to retrieve.",
"The Neural Turing Machine (Graves et al., 2014), post-processes the content attention by shifting its location by a predicted number of steps.",
"We use a similar mechanism, which is extrapolatable due to the independence of the sequence length.",
"Nevertheless, on its own, it does not allow positional-only patterns in variable-length sentences.",
"For example, it cannot attend to the i th word irrespective of the sentence length.",
"The same argument holds for other location-based attention developed for architectures with an external memory (Sukhbaatar et al., 2015).",
"More recently, many location-based attention have been proposed in self-attention mechanism.",
"These methods are usually based on sinusoidal encodings (SE), which have been proposed to take into account the word positions while bypassing the need for recurrences in encoder-decoder frameworks.",
"In this paper, we will consider the transformer and transformerXL (relative SE) attention, 3 This assumption can depend on the architecture and the inductive bias it provides (Weiss et al., 2018).",
"For our task, we found that the assumption held for both LSTM and GRU.",
"Transformer (Vaswani et al., 2017)",
"Where p t is a positional encoding with sinu-soidals of different frequencies at every dimension.",
"Although powerful, the sinusoidal encoding and its variants (Shaw et al., 2018; Dai et al., 2019) lack the ability to model location patterns that depend on general word position such as look at the i th word (after ...) in the extrapolation setting.",
"Indeed, the sinusoidal encoding for any fixed offset p t + k is linear in p t but not in k .",
"Location-based processing of attention has also been proposed as a way of constraining content-based attention to some (soft) window.",
"Yang et al. (2018) achieve it by multiplying the content attention by the weights of a predicted Gaussian such that the model has an inductive bias towards attending to words that are close to each other.",
"Sukhbaatar et al. (2019) use a piece-wise window to decrease the computational complexity of the model.",
"These methods nevertheless solve a fundamentally different problem and do not allow location-only extrapolatable patterns of attention.",
"In this section, we propose a location attender that can satisfy the extrapolation and positional patterns constraint.",
"We then discuss how to incorporate content attention to satisfy the content patterns constraint.",
"We would like our position attention to be loosely reminiscent of human attention, whereby we sequentially focus on a single area of the input (e.g., words or pixels) but vaguely perceive neighboring inputs due to the eccentricity effect (Carrasco et al., 1995).",
"The visual acuity of humans is uni-modal, symmetric, and spikes at the fovea, which corresponds to a 0 retinal eccentricity.",
"We model this visual acuity using a Gaussian Probability Density Function (PDF) similarly to Mnih et al. (2014).",
"4 4 Visual acuity is distributed in a Laplace-like distribution, but initial experiments were more encouraging using a Gaussian.",
"I.e. for each step, the Location Attender models a Gaussian attention over the relative word positions.",
"Specifically, it generates a mean t and standard deviation t , which are used to compute the location attention given the values of the PDFs at the relative indices r s := s n s 1 of the keys: t := { PDF t , t ( r s ) } n s 1 s =0 Using relative indices r s instead of the absolute ones s is crucial such that the generated t is bounded (in [0 , 1] ), thereby satisfying the extrapolation constraint.",
"This model, unfortunately, fails to satisfy the positional patterns constraint, as it only allows patterns of attention based on percentile positions.",
"E.g., it can decide to attend to the 10%-percentile word but not to the 2 nd word.",
"This incapacity to satisfy the position pattern constraint is a general issue with commonly used attention mechanisms (including sinusoidal-based) that only becomes apparent when dealing with complex extrapolation patterns.",
"To have a general attention mechanism, we need a t that can:",
"i) attend to locations based on absolute positions;",
"ii) attend to locations based on percentile positions;",
"iii) attend to positions based on the previous attention.",
"We achieve this by defining one building block for each of those requirements ( b t ) such that their weighted average forms t , and the weights t are bounded outputs of the model.",
"The three building blocks are: The step size 1 n s 1 between words allows the attention mechanism to depend on absolute positions.",
"The generated weight is an integer, which dictates the additional number of steps to take.",
"The bias term 1 enables the model to use percentile positions.",
"The generated weight gates it (on or off).",
"The average position of the previous attention t 1 that is gated by the generated weight.",
"This ensures that the model can attend using absolute positions to words at indices not seen during training.",
"E.g., attending to index n D + 5 by first attending to n D then t 1 + 5 .",
"The weights t are generated using a Gated Recurrent Unit (GRU) (Cho et al., 2014).",
"t is clamped to [0 , 1] by a linear function to yield interpretable and extrapolatable behaviour.",
"We also Figure 3: Proposed Location Attender.",
"force t > min and normalize it by n s which respectively avoids division by 0 and makes t comparable regardless of n s .",
"A graphical overview of the Location Attender can be seen in Figure 3.",
"Formally: t := GRU (cid:16) ReLU (cid:16) W ( resize ) q t (cid:17)(cid:17) t := ReLU(W ( ) t ) + min n s t := a (W ( ) t ) b t := { t 1 ; 1 n s 1; 1 } t := clamp( Tt b t ) st := 1 (cid:112) 2 2 t exp (cid:32) ( s n s 1 t ) 2 2 2 t (cid:33) Where clamp is a leaky clamping (2 leaky ReLUs) and min = 0 .",
"27 .",
"a is the activation function that forces each of the three dimensions of t to take on the desired values.",
"Namely a sigmoid activation for the gates, and the following soft-staircase 5 to force the weights of the step size to be approximately integers (Figure 4): softstair ( x ) := (cid:98) x (cid:99) +sigmoid(20( x 0 .",
"5 (cid:98) x (cid:99) )) 5.2 Mix Attender We enforce the content patterns constraint, by using a convex combination of content and location attention (Figure 5): t := % ( ) t t + (1 % ( ) t ) t % ( ) t := sigmoid(W (%) q t ) 5 Straight-through estimators (Bengio et al., 2013) and Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) performed slightly worst and required predefining the maximum number of steps.",
"The fact that humans generate and understand unbounded sentences with a finite experience is often used as proof of the principle of compositionality (Szab, 2017).",
"Following this argument, methods that can extrapolate to longer sequences should exhibit some compositionality.",
"Based on this observation, we evaluate on a compositionality-specific artificial task, lookup tables (Liska et al., 2018), but extend it to better quantify extrapolation.",
"6 This task is especially interesting to us, as there is a clear notion of what a good attention pattern should look like, making it easy to qualitatively and quantitatively analyze attentive models.",
"It is a well-controlled task, which allows us to uncover challenges that prevent models from extrapolating on real-world data.",
"The lookup tables task consists in sequentially applying k pre-defined lookup table functions.",
"The lookup tables are bijective mappings on the set of 6 The extended datasets as well as scripts to generate them can be found at https://github.com/ i-machine-think/machine-tasks/tree/master/LongLookupTables Input Target Target Attention 000 t1 .",
"all 3-bit strings t i : { 0 , 1 } 3 { 0 , 1 } 3 .",
"For example, if t 1 (000) = 110 and t 2 (110) = 100 then t 2 ( t 1 (000)) = t 2 (110) = 100 .",
"Following Hupkes et al. (2018), we write the operations from left to right, as well as add the inputs and temporary steps to the targets.",
"E.g. the previous example corresponds to the input 000 t1 t2 and the target 000 110 100 .",
"General extrapolatable seq2seq models should be able to terminate by outputting an end of sentence token <eos> .",
"We thus append <eos> to the targets and a full stop .",
"to the inputs.",
"7 At each decoding step, the target only depends on the previous output and the current lookup table.",
"E.g. the last decoding step of 000 t1 t2 , only depends on the previous output 110 = t 1 (000) and the current table t 2 .",
"The network thus has to learn the lookup table mappings and use the correct one at each step.",
"The gold standard attention, therefore, corresponds to the position of the current lookup table.",
"Table 1 illustrates a longer example and its correct attention.",
"The various train and test sets are generated by composing 6 random lookup tables t 1 , . . . , t 6 that have as input and output one of the 2 3 = 8 possible 3-bit strings.",
"Specifically, we use k = 1 . . . 4 composed tables in the training set, k = 2 . . . 4 for the interpolation test sets, and k = 5 . . . 9 for the extrapolation test sets.",
"There are 5 different extrapolation test sets, depending on their additional lengths compared to the maximum training examples ( long 1 , . . . , long 5 ).",
"We randomly select only 5000 possible examples for each of these test sets.",
"For the interpolation test sets, we select 3000 examples from all possible input-output pairs.",
"The training set contains all other possible input-output pairs, approximately 10000 examples.",
"To test whether the attention can generate more complex patterns (investigating the Positional Patterns Constraint), we also introduce a dataset which",
"reverses the order of the inputs in the previous dataset.",
"E.g. the last example in Table 1, would be written as t2 t1 t1 000 .",
", the target would not change, and the attention pattern should be 3 2 1 0 4 (attend to . when outputting <eos> ).",
"Although the change seems minor, we hypothesize that such a setting will be much more complicated as the attention pattern is not monotonic and does not follow the encoding nor the decoding steps.",
"Indeed, in the previous task, the model only needs to learn to match the i th decoding step with the i th encoding step.",
"Finally, we introduce another variant that also requires content attention (investigating the Content Patterns Constraint).",
"To do so, we augment each training example with a start token ! between the input and the tables in the source sequence.",
"We then add m U{ 0 , 10 } tables t i before the start token.",
"The target outputs were not modified and are thus independent of the added tables.",
"Solving this task requires to first attend to the input, then to the token which follows ! (content attention) and finally proceed with incremental location attention.",
"Examples of the training data are given in Table 2.",
"The main metric is sequence accuracy ( seqAcc ), which corresponds to the accuracy of predicting the entire sequence correctly (including its length).",
"To get insights about how the model works, we will also use two other losses.",
"Sequence Accuracy Before Eos ( seqAccBE ), which only evaluates the accuracy of the subsequence before the model generated a <eos> .",
"Attention Loss ( attnLoss ), which quantifies the quality of the attention pattern before <eos> .",
"It is computed as the mean squared error between the predicted and gold standard attention.",
"8 The attention loss gives an indication of how far the 8 The loss is overly simplistic as it is symmetric around t even though errors in the temporal direction are less serious as the embeddings contain past information.",
"Concerning baselines, we use three content attention: additive, multiplicative, scaled dot product (Eq.3).",
"We also have two mixed content-location attention baselines: Transformer and TransformerXL (Eq.4).",
"To focus on the attention mechanisms, our model and the baselines all use a smaller version of the best performing recurrent seq2seq architecture on the lookup table task (Hupkes et al., 2018).",
"The model has never been modified during our experimentation and is schematized in Figure 2.",
"The embeddings are of dimension 64 , the recurrent network is a GRU (Cho et al., 2014) with a hidden size of 128 , 50% dropout (Srivastava et al., 2014) is applied on the encoder-decoder bottleneck, and a residual connection is used between the inputs (embeddings) and outputs of the encoder.",
"Training consists of 50 epochs with the Adam (Kingma and Ba, 2015) optimizer.",
"For sanity check, we tested all the baselines and our models (with and without attention mix) on the interpolation setting of the three tasks.",
"Our models and the best baseline (transformer attention) achieved 100% sequence accuracy ( seqAcc ).",
"The major desired property of our model is to be able to extrapolate.",
"We tested the extrapolation capacity of our location attender by evaluating its seqAcc on the long lookup table extrapolation test sets.",
"Figure 6 shows the seqAcc of the location attender against the strongest baseline (transformer attention).",
"As hypothesized, the transformer attention has some extrapolation capacity, but our location attender substantially outperforms it in this simple task.",
"Importantly, the loss in performance in the extrapolation setting for the best baseline is abrupt and goes from 100% to 0% by adding only three tokens to the inputs.",
"This suggests that commonly used models are brittle and cannot even extrapolate by a small amount.",
"To do so, we computed the sequence accuracy before <eos> ( SeqAccBE ).",
"Figure 7 shows that the model outputs are always correct but that it often terminates decoding too soon, which we will refer to as the <eos> problem .",
"This suggests that the decoder keeps an internal counter to increase the probability of outputting <eos> when the decoding step is greater than the ones seen at training time.",
"The model learns this heuristic, which is always correct during training time and can be thought of as a metric hacking.",
"Importantly, it is not a hard boundary: the model is often able to extrapolate a couple of steps but usually stops before the correct number of steps.",
"Having shown that our model can extrapolate well on a simple task, we would like to investigate",
"whether it can do so for tasks that require more complicated attention patterns such as the reversed and noisy task.",
"Although the Mix Attender, outperformed all baselines on both tasks, it was not able to get more than 40% and 5% sequence accuracy for long 1 and long 2 respectively.",
"Figure 8 shows that when considering seqAccBE , the Mix Attender is able to extrapolate well in the noisy setting and a little in the reverse setting.",
"This suggests that it is not able to extrapolate well when considering sequence accuracy because it strongly suffers from the <eos> problem.",
"This is a recurrent problem in our experiments and is more likely to happen in harder tasks and larger models.",
"As previously discussed, variants of the lookup table task are especially interesting as we know the gold standard attention pattern.",
"This enables evaluation of attention patterns through the MSE attention loss ( attnLoss ).",
"Table 3 shows the attention loss averaged over the three tasks.",
"Although not perfect, the Mix Attender performs on average the best across all set-Attention Interp.",
"tings.",
"9 Crucially, it performs similarly in an interpolation setting and simple extrapolation setting ( long 1 ), while all other baselines perform significantly worse after adding a single token.",
"Even in long 2 , it is competitive with all other attention mechanisms in their interpolation domain.",
"This indicates that the model is indeed able to extrapolate by being more precise with its attention pattern.",
"In addition to enabling extrapolation, the temporary variables such as the weight given to each building block are very helpful for debugging the model and improving interpretability.",
"Figure 9 shows the output of a Mix Attender for the lookup tables with noisy start task.",
"The input was sampled from the Long 4 test set.",
"The top-left image shows the final attention.",
"The top-right table shows the value of some interpretable variables at every decoding step.",
"The bottom images correspond to the content and location attention.",
"The first decoding step uses location attention to attend to the first input.",
"For the next three steps, the model outputs a mixing weight % ( ) 0 to focus on content attention.",
"The content attention successfully finds the first non-noisy table (after ! ).",
"10 It then goes back to using the location attention with ( ) = 1 and (1 /n ) = 1 to generate a diagonal attention.",
"Finally, it predicts <eos> when attending to the end of the input . .",
"At each step, = min as it does not need to attend to neighboring words for this task.",
"% ( ) is never exactly 0 or 1, such that the model can easily learn to switch between content and location attention as it does not collapse to using a single form of attention.",
"A single step of content attention should be sufficient, but the model seems to consistently use three steps.",
"In this paper, we focused on one type of extrapolation, which is especially important in NLP: generalization to longer sequences.",
"We propose a new location-based attention, and show that it can extrapolate better than previous models while learning various attention patterns.",
"Despite promising initial results, our model is still unable to extrapolate perfectly for harder tasks.",
"By analyzing its behavior, we uncovered an interesting heuristic used by seq2seq models, namely that they keep track of a decoding counter to know when to output the <eos> token.",
"This is a bottleneck for extrapolation, suggesting that removing this heuristic is key to reaching perfect extrapolation and should be investigated in future work.",
"Once the <eos> problem is solved, we could test the model on real-world datasets.",
"It would also be interesting to test such attention mechanisms in self-attentive seq2seq models without recurrence.",
"Finally, as the location attender is not model dependent, it could be pretrained on complex location patterns and incorporated as a plug-and-play module to get extrapolatable position attention.",
"Taking a step back, we have shown that current deep learning models with common attention mechanisms are unable to extrapolate well on seemingly straightforward tasks.",
"This tends to be overlooked by the field due to standard benchmarks that can be solved using only interpolation.",
"We hope that this paper acts as a reminder that extrapolation is a hard setting that has not been much investigated by the machine learning community.",
"As current methods that memorize and learn superficial cues are unable to extrapolate while humans are, we believe that such a setting might help (and force) the field to come up with more human-like computational models that are capable of abstract reasoning.",
"Dieuwke Hupkes is funded by the Netherlands Organization for Scientific Research (NWO), through a Gravitation Grant 024.001.006 to the Language in Interaction Consortium.",
"Elia Bruni is funded by the European Unions Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 790369 (MAGIC)."
] | [
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Hybrid data combining both tabular and textual content (e.g., financial reports) are quite pervasive in the real world.",
"However, Question Answering (QA) over such hybrid data is largely neglected in existing research.",
"In this work, we extract samples from real financial reports to build a new large-scale QA dataset containing both T abular A nd T extual data, named TAT-QA, where numerical reasoning is usually required to infer the answer, such as addition, subtraction, multiplication, division, counting, comparison/sorting, and their compositions.",
"We further propose a novel QA model termed TAGOP , which is capable of reasoning over both tables and text.",
"It adopts sequence tagging to extract relevant cells from the table along with relevant spans from the text to infer their semantics, and then applies symbolic reasoning over them with a set of aggregation operators to arrive at the final answer.",
"TAGOP achieves 58 .",
"0 % in F 1 , which is an 11 .",
"1 % absolute increase over the previous best baseline model, according to our experiments on TAT-QA.",
"But this result still lags far behind the performance of human expert, i.e. 90 .",
"8 % in F 1 .",
"It demonstrates that our TAT-QA is very challenging and can serve as a benchmark for training and testing powerful QA models that address hybrid data.",
"Our dataset is publicly available for noncommercial use at https://nextplusplus.",
"github.io/TAT-QA/ .",
"Existing QA systems largely focus on only unstructured text (Hermann et al., 2015; Rajpurkar et al., 2016; Dua et al., 2019; Yang et al., 2018; Li et al., 2020; Nie et al., 2020), structured knowledge base (KB) (Berant et al., 2013; Yih et al., 2015; Talmor and Berant, 2018), or semi-structured tables (Pasu-pat and Liang, 2015; Zhong et al., 2017; Yu et al., Corresponding author",
"2018; Zhang and Balog, 2019; Zhang et al., 2020).",
"Though receiving growing interests (Das et al., 2017; Sun et al., 2019; Chen et al., 2020b, 2021), works on hybrid data comprising of unstructured text and structured or semi-structured KB/tables are rare.",
"Recently, Chen et al. (2020b) attempt to simulate a type of hybrid data through manually linking table cells to Wiki pages via hyperlinks.",
"However, such connection between table and text is relatively loose.",
"In the real world, a more common hybrid data form is, the table (that usually contains numbers) is more comprehensively linked to text, e.g., semantically related or complementary.",
"Such hybrid data are very pervasive in various scenarios like scientific research papers, medical reports, financial reports, etc.",
"The left box of Figure 1 shows a real example from some financial report, where there is a table containing row/column header and numbers inside, and also some paragraphs describing it.",
"We call the hybrid data like this example hybrid context in QA problems, as it contains both tabular and textual content, and call the paragraphs associated paragraphs to the table.",
"To comprehend and answer a question from such hybrid context relies on the close relation between table and paragraphs, and usually requires numerical reasoning.",
"For example, one needs to identify revenue from the external customers in the describing text so as to understand the content of the table.",
"As for How much does the commercial cloud revenue account for the total revenue in 2019? , one needs to get the total revenue in 2019, i.e. 125 , 843 million from the table and commercial cloud revenue, i.e. 38 . 1 billion, from the text to infer the answer.",
"To stimulate progress of QA research over such hybrid data, we propose a new dataset, named TAT-QA ( T abular A nd T extual dataset for Q uestion A nswering).",
"The hybrid contexts in TAT-QA are extracted from real-world financial reports, each # Reasoning Question Answer Scale Derivation 1 Word Matching (38.06%) How much revenue came from Linkedin in 2018?",
"composed of a table with row/col header and numbers, as well as at least two paragraphs that describe, analyse or complement the content of this table.",
"Given hybrid contexts, we invite annotators with financial knowledge to generate questions that are useful in real-world financial analyses and provide answers accordingly.",
"It is worth mentioning that a large portion of questions in TAT-QA demand numerical reasoning, for which derivation of the answer is also labeled to facilitate developing explainable models.",
"In total, TAT-QA contains 16 , 552 questions associated with 2 , 757 hybrid contexts from 182 reports.",
"We further propose a novel TAGOP model based on TAT-QA.",
"Taking as input the given question, table and associated paragraphs, TAGOP applies sequence tagging to extract relevant cells from the table and relevant spans from text as the evidences.",
"Then it applies symbolic reasoning over them with a set of aggregation operators to arrive at the final answer.",
"Predicting the magnitude of a number is an important aspect when tackling hybrid data in TAT-QA, including thousand, million, billion, etc. that are often omitted or shown only in headers or associated paragraphs of the table for brevity.",
"We term such magnitude of a number as its scale .",
"Take Question 6 in Figure 1 as an example: How much of the total revenue in 2018 did not come from devices?",
"The numerical value in the answer is obtained by subtraction: 110 , 360 5 , 134 , while the scale million is identified from the first-row header of the table.",
"In TAGOP , we incorporate a multi-class classifier for scale prediction.",
"We test three types of QA models on TAT-QA, specially addressing tabular, textual, and hybrid data.",
"Our TAGOP achieves 58 .",
"0 % in terms of F 1 , which is a 11 .",
"1 % absolute increase over the best baseline model, according to our experiments on TAT-QA.",
"It is worth noting that the results still lag far behind performance of human experts, i.e. 90 .",
"8 % in F 1 .",
"We can see that to tackle the QA task over the hybrid data as in TAT-QA is challenging and more effort is demanded.",
"We expect our TAT-QA dataset and TAGOP model to serve as a benchmark and baseline respectively to contribute to the development of QA models for hybrid data, especially those requiring numerical reasoning.",
"In TAT-QA there are two forms of data: tables and their relevant text, which are extracted from real-world financial reports.",
"In particular, we first download about 500 financial reports released in the past two years from an online website 1 .",
"We adopt the table detection model in (Li et al., 2019) to detect tables in these reports, and apply Apache PDFBox 2 library to extract the table contents to be processed with our annotation tool.",
"We only keep those tables with 3 30 rows and 3 6 columns.",
"Finally, about 20 , 000 candidate tables are retained, which have no standard schema and lots of numbers inside.",
"1 https://www.annualreports.com/ 2 https://pdfbox.apache.org/ The corresponding reports with selected tables are also kept.",
"Note that these candidate tables may still contain errors, such as containing too few or many rows/cols, mis-detected numbers, which will be manually picked out and deleted or fixed during the annotation process.",
"The annotation is done with our self-developed tool.",
"All the annotators are with financial background knowledge.",
"Adding Relevant Paragraphs to Tables We build valid hybrid contexts based on the original reports kept in the previous step.",
"A valid hybrid context in TAT-QA consists of a table and at least two associated paragraphs surrounding it, as shown in the left box in Figure",
"1. To associate enough relevant paragraphs to a candidate table, the annotators first check whether there are 2 paragraphs around this table, and then check whether they are relevant, meaning the paragraphs should be describing, analysing or complementing the content in the table.",
"If yes, then all the surrounding paragraphs will be associated to this table.",
"Otherwise, the table will be skipped (discarded).",
"3 Question-Answer Pair Creation Based on the valid hybrid contexts, the annotators are then asked to create question-answer pairs, where the questions need to be useful in real-world financial analyses.",
"In addition, we encourage them to create questions that can be answered by people without much finance knowledge and use common words instead of the same words appeared in the hybrid context (Rajpurkar et al., 2016).",
"Given one hybrid context, at least 6 questions are generated, including extracted and calculated questions.",
"For extracted questions, the answers can be a single span or multiple spans from either the table or the associated paragraphs.",
"For calculated questions, numerical reasoning is required to produce the answers, including addition, subtraction, multiplication, division, counting, comparison/sorting and their compositions.",
"Furthermore, we particularly ask the annotators to annotate the right scale for the numerical answer when necessary.",
"Answer Type and Derivation Annotation The answers in TAT-QA have three types: a single span or multiple spans extracted from the table or text, as well as a generated answer (usually obtained through numerical reasoning).",
"The annotators will 3 About two thirds of candidate tables were discarded.",
"also need to label its type after they generate an answer.",
"For generated answers, the corresponding derivations are provided to facilitate the development of explainable QA models, including two types: 1) an arithmetic expression, like ( 11 , 386 10 , 353 )/ 10 , 353 ) for Question 8 in Figure 1, which can be executed to arrive at the final answer; and 2) a set of items separated with ## , like device ## enterprise services for Question 4 in Figure 1 where the count of items equals the answer.",
"We further divide questions in TAT-QA into four kinds: Span , Spans , Arithmetic and Counting , where the latter two kinds correspond to the above two types of deviations, to help us better investigate the numerical reasoning capability of a QA model.",
"Answer Source Annotation For each answer, annotators are required to specify the source(s) it is derived from, including Table , Text , and Table-text (both).",
"This is to force the model to learn to aggregate information from hybrid sources to infer the answer, thus lift its generalizability.",
"For example, to answer Question 7 in Figure 1: How much does the commercial cloud revenue account for the total revenue in 2019? , we can observe from the derivation that 125 , 843 million comes from the table while 38 . 1 billion from text.",
"Competent Annotators To build TAT-QA, financial domain knowledge is necessary.",
"Hence, we employ about 30 university students majored in finance or similar disciplines as annotators.",
"We give all candidate annotators a minor test and only those with 95% correct rate are hired.",
"Before starting the annotation work, we give a training session to the annotators to help them fully understand our annotation requirements and also learn the usage of our annotation system.",
"Two-round Validation For each annotation, we ask two different verifiers to perform a two-round validation after it is submitted, including checking and approval, to ensure its quality.",
"We have five verifiers in total, including two annotators who have good performance on this project and three graduate students with financial background.",
"In the checking phase, a verifier checks the submitted annotation and asks the annotator to fix it if any mistake or problem is found.",
"In the approval phase, a different verifier inspects the annotation again that has been confirmed by the first verifier, and then approves it if no problem is found.",
"Averagely, an annotator can label two hybrid contexts per hour; the whole annotation work lasts about three months.",
"Finally, we attain a total of 2 , 757 hybrid contexts and 16 , 552 corresponding question-answer pairs from 182 financial reports.",
"The hybrid contexts are randomly split into training set ( 80% ), development set ( 10% ) and test set ( 10% ); hence all questions about a particular hybrid context belong to only one of the splits.",
"We show the basic statistics of each split in Table 1, and the question distribution regarding answer source and answer type in Table",
"2. In Figure 1, we give an example from TAT-QA, demonstrating the various reasoning types and percentage of each reasoning type over the whole dataset.",
"We introduce a novel QA model, named TAGOP , which first applies sequence TAGging to extract relevant cells from the table and text spans from the paragraphs inspired by (Li et al., 2016; Sun et al., 2016; Segal et al., 2020).",
"This step is analogy to slot filling or schema linking, whose effectiveness has been demonstrated in dialogue systems (Lei et al., 2018; Jin et al., 2018) and semantic parsing (Lei et al., 2020).",
"And then TAGOP performs symbolic reasoning over them with a set of aggregation OPerators to arrive at the final answer.",
"The overall architecture is illustrated in Figure",
"2. 3.1 Sequence Tagging Given a question, TAGOP first extracts supporting evidences from its hybrid context (i.e. the table and associated paragraphs) via sequence tagging with the InsideOutside tagging ( IO ) approach (Ramshaw and Marcus, 1995).",
"In particular, it assigns each token either I or O label and takes Table Text Table-text Total Span 1,801 3,496 1,842 7,139 Spans 777 258 1,037 2,072 Counting 106 5 266 377 Arithmetic 4,747 143 2,074 6,964 Total 7,431 3,902 5,219 16,552 Table 2: Question distribution regarding different answer types and sources in TAT-QA those tagged with I as the supporting evidences for producing the answer.",
"The given question, flattened table by row (Herzig et al., 2020) and associated paragraphs are input sequentially to a transformer-based encoder like RoBERTa (Liu et al., 2019), as shown in the bottom part of Figure 2, to obtain corresponding representations.",
"Each sub-token is tagged independently, and the corresponding cell in the table or word in the paragraph would be regarded as positive if any of its sub-tokens is tagged with I .",
"For the paragraphs, the continuous words that are predicted as positive are combined as a span.",
"During testing, all positive cells and spans are taken as the supporting evidences.",
"Formally, for each sub-token t in the paragraph, the probability of the tag is computed as p tagt = softmax ( FFN ( h t )) (1) where FFN is a two-layer feed-forward network with GELU (Hendrycks and Gimpel, 2016) activation and h t is the representation of sub-token t .",
"Next, we perform symbolic reasoning over obtained evidences to infer the final answer, for which we apply an aggregation operator.",
"In our TAGOP , there are ten types of aggregation operators.",
"For each input question, an operator classifier is applied to decide which operator the evidences would go through; for some operators sensitive to the order of input numbers, an auxiliary number order classifier is used.",
"The aggregation operators are explained as below, covering most reasoning types as listed in Figure",
"1. Span-in-text : To select the span with the highest probability from predicted candidate spans.",
"The probability of a span is the highest probability of all its sub-tokens tagged I .",
"Cell-in-table : To select the cell with the highest probability from predicted candidate cells.",
"The probability of a cell is the highest probability of all its sub-tokens tagged I .",
"Spans : To select all the predicted cell and span candidates; Sum : To sum all predicted cells and spans purely consisting of numbers; Count : To count all predicted cells and spans; Average : To average over all the predicted cells and spans purely consisting of numbers; Multiplication : To multiply all predicted cells and spans purely consisting of numbers; Division : To first rank all the predicted cells and spans purely consisting of numbers based on their probabilities, and then apply division calculation to top-two; Difference : To first rank all predicted numerical cells and spans based on their probabilities, and then apply subtraction calculation to top-two.",
"Change ratio : For the top-two values after ranking all predicted numerical cells and spans based on their probabilities, compute the change ratio of the first value compared to the second one.",
"Operator Classifier To predict the right aggregation operator, a multi-class classifier is developed.",
"In particular, we take the vector of [CLS] as input to compute the probability: p op = softmax ( FFN ( [CLS] ) (2) where FFN denotes a two-layer feed-forward network with the GELU activation.",
"Number Order Classifier For operators of Difference , Division and Change ratio , the order of the input two numbers matters in the final result.",
"Hence we additionally append a number order classifier after them, formulated as p order = softmax ( FFN ( avg ( h t 1 , h t 2 )) (3) where FFN denotes a two-layer feed-forward network with the GELU activation, h t 1 , h t 2 are representations of the top two tokens according to probability, and avg means average.",
"For a token, its probability is the highest probability of all its sub-tokens tagged I , and its representation is the average over those of its sub-tokens.",
"Till now we have attained the string or numerical value to be contained in the final answer.",
"However, a right prediction of a numerical answer should not only include the right number but also the correct scale.",
"This is a unique challenge over TAT-QA and very pervasive in the context of finance.",
"We develop a multi-class classifier to predict the scale.",
"Generally, the scale in TAT-QA may be None , Thousand , Million , Billion , and Percent .",
"Taking as input the concatenated representation of [CLS] , the table and paragraphs sequentially, the multi-class classifier computes the probability of the scale as p scale = softmax ( FFN ([ [CLS] ; h tab ; h p ]) (4) where h tab and h p are the representations of the table and the paragraphs respectively, which are obtained by applying an average pooling over the representations of their corresponding tokens,; denotes concatenation, and FFN denotes a two-layer feed-forward network with the GELU activation.",
"After obtaining the scale, the numerical or string prediction is multiplied or concatenated with the corresponding scale as the final prediction to compare with the ground-truth answer respectively.",
"To optimize TAGOP , the overall loss is the sum of the loss of the above four classification tasks:",
"L = NLL ( log ( P tag ) , G tag ) + NLL ( log ( P op ) , G op ) + NLL ( log ( P scale ) , G scale ) + NLL ( log ( P order ) , G order ) (5)",
"where NLL( ) is the negative log-likelihood loss, G tag and G op come from the supporting evidences which are extracted from the annotated answer and derivation.",
"We locate the evidence in the table first if it is among the answer sources, and otherwise in its associated paragraphs.",
"Note we only keep the first found if an evidence appears multiple times in the hybrid context.",
"G scale uses the annotated scale of the answer; G order is needed when the ground-truth operator is one of Difference , Division and Change ratio , which is obtained by mapping the two operands extracted from their corresponding ground-truth deviation in the input sequence.",
"If their order is the same as that in the input sequence, G order = 0 ; otherwise it is 1 .",
"Textual QA Models We adopt two reading comprehension (RC) models as baselines over textual data: BERT-RC (Devlin et al., 2018), which is a SQuAD-style RC model; and NumNet+ V2 4 (Ran et al., 2019), which achieves promising performance on DROP that requires numerical reasoning over textual data.",
"We adapt them to our TAT-QA as follows.",
"We convert the table to a sequence by row, also as input to the models, followed by tokens from the paragraphs.",
"Besides, we add a multi-class classifier, exactly as in our TAGOP , to enable the two models to predict the scale based on Eq.",
"(4).",
"Tabular QA Model We employ TaPas for Wik-iTableQuestion (WTQ) (Herzig et al., 2020) as a baseline over tabular data.",
"TaPas is pretrained over large-scale tables and associated text from Wikipedia jointly for table parsing.",
"To train it, we heuristically locate the evidence in the table with the annotated answer or derivation, which is the 4 https://github.com/llamazing/numnet plus first matched one if a same value appears multiple times.",
"In addition, we remove the numerical rank id feature in its embedding layer, which ranks all values per numerical column in the table but does not make sense in TAT-QA.",
"Similar to above textual QA setting, we add an additional multi-class classifier to predict the scale as in Eq.",
"(4).",
"Hybrid QA Model We adopt HyBrider (Chen et al., 2020b) as our baseline over hybrid data, which tackles tabular and textual data from Wikipedia.",
"We use the code released in the original paper 5 , but adapt it to TAT-QA.",
"Concretely, each cell in the table of TAT-QA is regarded as linked with associated paragraphs of this table, like hyper-links in the original paper, and we only use its cell matching mechanism to link the question with the table cells in its linking stage.",
"The selected cells and paragraphs are fed into the RC model in the last stage to infer the answer.",
"For ease of training on TAT-QA, we also omit the prediction of the scale, i.e. we regard the predicted scale by this model as always correct.",
"We adopt the popular Exact Match (EM) and numeracy-focused F 1 score (Dua et al., 2019) to measure model performance on TAT-QA.",
"However, the original implementation of both metrics is insensitive to whether a value is positive or negative in the answer as the minus is omitted in evaluation.",
"Since this issue is crucial for correctly interpreting numerical values, especially in the finance domain, we keep the plus-minus of a value when calculating them.",
"In addition, the numeracy-focused F 1 score is set to 0 unless the predicted number multiplied by predicted scale equals exactly the ground truth.",
"Comparison with Baselines We first compare our TAGOP with three types of previous QA models as described in Section 4.1.",
"The results are summarized in Table",
"3. It can be seen that our model is always superior to other baselines in terms of both metrics, with very large margins over the second best, namely 50.1/58.0 vs. 37.0/46.9 in EM/F 1 on test set of TAT-QA respectively.",
"This well reveals the effectiveness of our method that reasons over both tabular and textual data involving lots 5 https://github.com/wenhuchen/HybridQA of numerical contents.",
"For two textual QA baselines, NumNet+ V2 performs better than BERT-RC, which is possibly attributed to the stronger capability of numerical reasoning of the latter, but it is still worse than our method.",
"The tabular QA baseline Tapas for WTQ is trained with only tabular data in TAT-QA, showing very limited capability to process hybrid data, as can be seen from its performance.",
"The HyBrider is the worst among all baseline models, because it is designed for HybridQA (Chen et al., 2020b) which does not focus on the comprehensive interdependence of table and paragraphs, nor numerical reasoning.",
"However, all the models perform significantly worse than human performance 6 , indicating TAT-QA is challenging to current QA models and more efforts on hybrid QA are demanded.",
"Answer Type and Source Analysis Furthermore, we analyze detailed performance of TAGOP w.r.t answer type and source in Table",
"4. It can be seen that TAGOP performs better on the questions whose answers rely on the tables compared to those from the text.",
"This is probably because table cells have clearer boundaries than text spans to the model, thus it is relatively easy for the model to extract supporting evidences from the tables leveraging sequence tagging techniques.",
"In addition, TAGOP performs relatively worse on arithmetic questions compared with other types.",
"This may be because the calculations for arithmetic questions are diverse and harder than other types, indicating the challenge of TAT-QA, especially for the requirement of numerical reasoning.",
"Results of TAGOP with Different Operators We here investigate the contributions of the ten aggregation operators to the final performance of TAGOP .",
"As shown in Table 5, we devise nine variants of the full model of TAGOP ; based on the variant of TAGOP with only one operator (e.g. Span-in-text), for each of other variants, we add one more operator back.",
"As can be seen from the table, all added operators can benefit the model performance.",
"Furthermore, we find that some operators like Span-in-text , Cell-in-table , Difference and Average make 6 The human performance is evaluated by asking annotators to answer 50 randomly sampled hybrid contexts (containing 301 questions) from our test set.",
"Note the human performance is still not 100% correct because our questions require relatively heavy cognitive load like tedious numerical calculations.",
"Comparing human performance of F 1 in SQUAD (Rajpurkar et al., 2016) ( 86 . 8 %) and DROP (Dua et al., 2019)) ( 96 . 4 %), the score ( 90 . 8 %) in our dataset already indicates a good quality and annotation consistency in our dataset.",
"more contributions than others.",
"In comparison, Sum and Multiplication bring little gain or even decline.",
"After analysis, we find this is because the instances of Sum or Multiplication are minor in our test set, which are easily influenced by randomness.",
"Error Analysis We further investigate our TAGOP by analysing error cases.",
"We randomly sample 100 error instances from the test set, and classify them into five categories as shown in Table 6, each with an example: (1) Wrong Evidence ( 55 %), meaning the model obtained wrong supporting evidence from the hybrid context; (2) Missing Model Dev Test EM F 1 EM F 1 + Span-in-text 13.4 20.5 14.1 21.8 + Cell-in-table 25.4 36.0 24.1 35.3 + Spans 33.6 41.3 31.3 39.4 + Sum 33.8 41.3 31.2 39.1 + Count 35.9 43.5 32.7 40.6 + Average 43.3 50.6 38.2 45.9 + Multiplication 44.2 51.4 37.9 46.0 + Division 45.0 52.5 39.2 47.5 + Difference 51.4 58.7 45.1 53.3 + Change ratio (Full) 55.2 62.7 50.1 58.0 Table 5: Performance with different aggregation operators of TAGOP model.",
"Evidence ( 29 %), meaning the model failed to extract the supporting evidence for the answer; (3) Wrong Calculation ( 9 %), meaning the model failed to compute the answer with the correct supporting evidence; (4) Unsupported Calculation ( 4 %), meaning the ten operators defined cannot support this calculation; (5) Scale Error ( 3 %), meaning the model failed to predict the scale of the numerical value in an answer.",
"We can then observe about 84 % error is caused by the failure to extract the supporting evidence from the table and paragraphs given a question.",
"This demonstrates more efforts are needed to strengthen the model's capability of precisely aggregating information from hybrid contexts.",
"After instance-level analysis, we find another interesting error resource is the dependence on domain knowledge.",
"While we encourage annotators to create questions answerable by humans without much finance knowledge, we still find domain knowledge is required for some questions.",
"For example, given the question What is the gross profit margin of the company in 2015? , the model needs to extract the gross profit and revenue from the hybrid context and compute the answer according to the finance formula (gross profit margin = gross profit / revenue) .",
"How to integrate such finance knowledge into QA models to answer questions in TAT-QA still needs further exploration.",
"QA Datasets Currently, there are many datasets for QA tasks, focusing on text, or KB/table.",
"Textual ones include CNN/Daily Mail (Hermann et al., 2015), SQuAD (Rajpurkar et al., 2016), etc.",
"Recently deep reasoning over textual data has gained increasing attention (Zhu et al., 2021), e.g. multihop reasoning (Yang et al., 2018; Welbl et al., 2018).",
"DROP (Dua et al., 2019) is built to develop numerical reasoning capability of QA models, which in this sense is similar to TAT-QA, but only focuses on textual data.",
"KB/Tabular QA aims to automatically answer questions via well-structured KB (Berant et al., 2013; Talmor and Berant, 2018; Yih et al., 2015) or semi-structured tables (Pasupat and Liang, 2015; Zhong et al., 2017; Yu et al., 2018).",
"Comparably, QA over hybrid data receives limited efforts, focusing on mixture of KB/tables and text.",
"HybridQA (Chen et al., 2020b) is one existing hybrid dataset for QA tasks, where the context is a table connected with Wiki pages via hyperlinks.",
"Numerical Reasoning Numerical reasoning is key to many NLP tasks like question answering (Dua et al., 2019; Ran et al., 2019; Andor et al., 2019; Chen et al., 2020a; Pasupat and Liang, 2015; Herzig et al., 2020; Yin et al., 2020; Zhang and Balog, 2020) and arithmetic word problems (Kush-man et al., 2014; Mitra and Baral, 2016; Huang et al., 2017; Ling et al., 2017).",
"To our best knowledge, no prior work attempts to develop models able to perform numerical reasoning over hybrid contexts.",
"We propose a new challenging QA dataset TAT-QA, comprising real-word hybrid contexts where the table contains numbers and has comprehensive dependencies on text in finance domain.",
"To answer questions in TAT-QA, the close relation between table and paragraphs and numerical reasoning are required.",
"We also propose a baseline model TAGOP based on TAT-QA, aggregating information from hybrid context and performing numerical reasoning over it with pre-defined operators to compute the final answer.",
"Experiments show TAT-QA dataset is very challenging and more effort is demanded for tackling QA tasks over hybrid data.",
"We expect our TAT-QA dataset and TAGOP model would serve as a benchmark and baseline respectively to help build more advanced QA models, facilitating the development of QA technologies to address more complex and realistic hybrid data, especially those requiring numerical reasoning.",
"The authors gratefully acknowledge Zhuyun Dai for giving valuable suggestions on this study, Xin-nan Zhang for developing the data annotation tool, and Tong Ye and Ming Wei Chan for their work on checking the annotation quality.",
"Our thanks also go to all the anonymous reviewers for their positive feedback.",
"This research is supported by the NExT Research Centre, Singapore."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Pretrained language models are now ubiquitous in Natural Language Processing.",
"Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages.",
"This makes practical use of such modelsin all languages except Englishvery limited.",
"In this paper, we investigate the feasibility of training monolingual Transformer-based language models for other languages, taking French as an example and evaluating our language models on part-of-speech tagging, dependency parsing, named entity recognition and natural language inference tasks.",
"We show that the use of web crawled data is preferable to the use of Wikipedia data.",
"More surprisingly, we show that a relatively small web crawled dataset (4GB) leads to results that are as good as those obtained using larger datasets (130+GB).",
"Our best performing model CamemBERT reaches or improves the state of the art in all four downstream tasks.",
"Pretrained word representations have a long history in Natural Language Processing (NLP), from noncontextual (Brown et al., 1992; Ando and Zhang, 2005; Mikolov et al., 2013; Pennington et al., 2014) to contextual word embeddings (Peters et al., 2018; Akbik et al., 2018).",
"Word representations are usually obtained by training language model architectures on large amounts of textual data and then fed as an input to more complex task-specific architectures.",
"More recently, these specialized architectures have been replaced altogether by large-scale pretrained language models which are fine-tuned for each application considered.",
"This shift has resulted in large improvements in performance over a wide Equal contribution.",
"range of tasks (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019; Raffel et al., 2019).",
"These transfer learning methods exhibit clear advantages over more traditional task-specific approaches.",
"In particular, they can be trained in an unsupervized manner, thereby taking advantage of the information contained in large amounts of raw text.",
"Yet they come with implementation challenges, namely the amount of data and computational resources needed for pretraining, which can reach hundreds of gigabytes of text and require hundreds of GPUs (Yang et al., 2019; Liu et al., 2019).",
"This has limited the availability of these state-of-the-art models to the English language, at least in the monolingual setting.",
"This is particularly inconvenient as it hinders their practical use in NLP systems.",
"It also prevents us from investigating their language modelling capacity, for instance in the case of morphologically rich languages.",
"Although multilingual models give remarkable results, they are often larger, and their results, as we will observe for French, can lag behind their monolingual counterparts for high-resource languages.",
"In order to reproduce and validate results that have so far only been obtained for English, we take advantage of the newly available multilingual corpora OSCAR (Ortiz Surez et al., 2019) to train a monolingual language model for French, dubbed CamemBERT.",
"We also train alternative versions of CamemBERT on different smaller corpora with different levels of homogeneity in genre and style in order to assess the impact of these parameters on downstream task performance.",
"CamemBERT uses the RoBERTa architecture (Liu et al., 2019), an improved variant of the high-performing and widely used BERT architecture (Devlin et al., 2019).",
"We evaluate our model on four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI).",
"CamemBERT improves on the state of the art in all four tasks compared to previous monolingual and multilingual approaches including mBERT, XLM and XLM-R, which confirms the effectiveness of large pretrained language models for French.",
"First release of a monolingual RoBERTa model for the French language using recently introduced large-scale open source corpora from the Oscar collection and first outside the original BERT authors to release such a large model for an other language than English.",
"1 We achieve state-of-the-art results on four downstream tasks: POS tagging, dependency parsing, NER and NLI, confirming the effectiveness of BERT-based language models for French.",
"We demonstrate that small and diverse training sets can achieve similar performance to large-scale corpora, by analysing the importance of the pretraining corpus in terms of size and domain.",
"From non-contextual to contextual word embeddings The first neural word vector representations were non-contextualized word embeddings, most notably word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) and fastText (Mikolov et al., 2018), which were designed to be used as input to task-specific neural architectures.",
"Contextualized word representations such as ELMo (Peters et al., 2018) and flair (Akbik et al., 2018), improved the representational power of word embeddings by taking context into account.",
"Among other reasons, they improved the performance of models on many tasks by handling words polysemy.",
"This paved the way for larger contextualized models that replaced downstream architectures altogether in most tasks.",
"Trained with language modeling objectives, these approaches range from LSTM-based architectures such as (Dai and Le, 2015), to the successful transformer-based architectures such as GPT2 (Radford et al., 2019), BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and more recently ALBERT (Lan et al., 2019) and T5 (Raffel et al., 2019).",
"Non-English contextualized models Following the success of large pretrained language models, they were extended to the multilingual setting with multilingual BERT (hereafter mBERT) (Devlin et al., 2018), a single multilingual model for 104 different languages trained on Wikipedia data, and later XLM (Lample and Conneau, 2019), which significantly improved unsupervized machine translation.",
"More recently XLM-R (Conneau et al., 2019), extended XLM by training on 2.5TB of data and outperformed previous scores on multilingual benchmarks.",
"They show that multilingual models can obtain results competitive with monolingual models by leveraging higher quality data from other languages on specific downstream tasks.",
"A few non-English monolingual models have been released: ELMo models for Japanese, Portuguese, German and Basque 2 and BERT for Sim-plified and Traditional Chinese (Devlin et al., 2018) and German (Chan et al., 2019).",
"However, to the best of our knowledge, no particular effort has been made toward training models for languages other than English at a scale similar to the latest English models (e.g. RoBERTa trained on more than 100GB of data).",
"BERT and RoBERTa Our approach is based on RoBERTa (Liu et al., 2019) which itself is based on BERT (Devlin et al., 2019).",
"BERT is a multi-layer bidirectional Transformer encoder trained with a masked language modeling (MLM) objective, inspired by the Cloze task (Taylor, 1953).",
"It comes in two sizes: the BERTBASE architecture and the BERTLARGE architecture.",
"The BERTBASE architecture is 3 times smaller and therefore faster and easier to use while BERTLARGE achieves increased performance on downstream tasks.",
"RoBERTa improves the original implementation of BERT by identifying key design choices for better performance, using dynamic masking, removing the next sentence prediction task, training with larger batches, on more data, and for longer.",
"In this section, we present the four downstream tasks that we use to evaluate CamemBERT, namely: Part-Of-Speech (POS) tagging, dependency parsing, Named Entity Recognition (NER) and Natural Language Inference (NLI).",
"We also present the baselines that we will use for comparison.",
"2 https://allennlp.org/elmo Tasks POS tagging is a low-level syntactic task, which consists in assigning to each word its corresponding grammatical category.",
"Dependency parsing consists in predicting the labeled syntactic tree in order to capture the syntactic relations between words.",
"For both of these tasks we run our experiments using the Universal Dependencies (UD) 3 framework and its corresponding UD POS tag set (Petrov et al., 2012) and UD treebank collection (Nivre et al., 2018), which was used for the CoNLL 2018 shared task (Seker et al., 2018).",
"We perform our evaluations on the four freely available French UD treebanks in UD v2.2: GSD (McDonald et al., 2013), Sequoia 4 (Candito and Seddah, 2012; Candito et al., 2014), Spoken (Lacheret et al., 2014; Bawden et al., 2014) 5 , and ParTUT (Sanguinetti and Bosco, 2015).",
"A brief overview of the size and content of each treebank can be found in Table",
"1. Treebank #Tokens #Sentences Genres Blogs, News GSD 389,363 16,342 Reviews, Wiki Medical, News Sequoia 68,615 3,099 Non-fiction, Wiki Spoken 34,972 2,786 Spoken ParTUT 27,658 1,020 Legal, News, Wikis FTB 350,930 27,658 News Table 1: Statistics on the treebanks used in POS tagging, dependency parsing, and NER (FTB).",
"We also evaluate our model in NER, which is a sequence labeling task predicting which words refer to real-world objects, such as people, locations, artifacts and organisations.",
"We use the French Treebank 6 (FTB) (Abeill et al., 2003) in its 2008 version introduced by Candito and Crabb (2009) and with NER annotations by Sagot et al. (2012).",
"The FTB contains more than 11 thousand entity mentions distributed among 7 different entity types.",
"A brief overview of the FTB can also be found in Table",
"1. Finally, we evaluate our model on NLI, using the French part of the XNLI dataset (Conneau et al., 2018).",
"NLI consists in predicting whether a hypothesis sentence is entailed, neutral or contradicts a premise sentence.",
"The XNLI dataset is the exten-3 https://universaldependencies.org 4 https://deep-sequoia.inria.fr 5 Speech transcript uncased that includes annotated disflu-encies without punctuation 6 This dataset has only been stored and used on Inria's servers after signing the research-only agreement.",
"sion of the Multi-Genre NLI (MultiNLI) corpus (Williams et al., 2018) to 15 languages by translating the validation and test sets manually into each of those languages.",
"The English training set is machine translated for all languages other than English.",
"The dataset is composed of 122k train, 2490 development and 5010 test examples for each language.",
"As usual, NLI performance is evaluated using accuracy.",
"Baselines In dependency parsing and POS-tagging we compare our model with: mBERT : The multilingual cased version of BERT (see Section 2.1).",
"We fine-tune mBERT on each of the treebanks with an additional layer for POS-tagging and dependency parsing, in the same conditions as our CamemBERT model.",
"XLMMLM-TLM : A multilingual pretrained language model from Lample and Conneau (2019), which showed better performance than mBERT on NLI.",
"We use the version available in the Hugging's Face transformer library (Wolf et al., 2019); like mBERT, we fine-tune it in the same conditions as our model.",
"UDify (Kondratyuk, 2019): A multitask and multilingual model based on mBERT, UDify is trained simultaneously on 124 different UD treebanks, creating a single POS tagging and dependency parsing model that works across 75 different languages.",
"We report the scores from Kondratyuk (2019) paper.",
"UDPipe Future (Straka, 2018): An LSTM-based model ranked 3 rd in dependency parsing and 6 th in POS tagging at the CoNLL 2018 shared task (Seker et al., 2018).",
"We report the scores from Kondratyuk (2019) paper.",
"UDPipe Future + mBERT + Flair (Straka et al., 2019): The original UDPipe Future implementation using mBERT and Flair as feature-based contextualized word embeddings.",
"We report the scores from Straka et al. (2019) paper.",
"In French, no extensive work has been done on NER due to the limited availability of annotated corpora.",
"Thus we compare our model with the only recent available baselines set by Dupont (2017), who trained both CRF (Lafferty et al., 2001) and BiLSTM-CRF (Lample et al., 2016) architectures on the FTB and enhanced them using heuristics and pretrained word embeddings.",
"Additionally, as for POS and dependency parsing, we compare our model to a fine-tuned version of mBERT for the NER task.",
"For XNLI, we provide the scores of mBERT which has been reported for French by Wu and Dredze (2019).",
"We report scores from XLMMLM-TLM (described above), the best model from Lample and Conneau (2019).",
"We also report the results of XLM-R (Conneau et al., 2019).",
"In this section, we describe the pretraining data, architecture, training objective and optimisation setup we use for CamemBERT.",
"Pretrained language models benefits from being trained on large datasets (Devlin et al., 2018; Liu et al., 2019; Raffel et al., 2019).",
"We therefore use the French part of the OSCAR corpus (Ortiz Surez et al., 2019), a pre-filtered and pre-classified version of Common Crawl.",
"7 OSCAR is a set of monolingual corpora extracted from Common Crawl snapshots.",
"It follows the same approach as (Grave et al., 2018) by using a language classification model based on the fastText linear classifier (Grave et al., 2017; Joulin et al., 2016) pretrained on Wikipedia, Tatoeba and SETimes, which supports 176 languages.",
"No other filtering is done.",
"We use a non-shuffled version of the French data, which amounts to 138GB of raw text and 32.7B tokens after subword tokenization.",
"We segment the input text data into subword units using SentencePiece (Kudo and Richardson, 2018).",
"SentencePiece is an extension of Byte-Pair encoding (BPE) (Sennrich et al., 2016) and WordPiece (Kudo, 2018) that does not require pre-tokenization (at the word or token level), thus removing the need for language-specific tokenisers.",
"We use a vocabulary size of 32k subword tokens.",
"These subwords are learned on 10 7 sentences sampled randomly from the pretraining dataset.",
"We do not use subword regularisation (i.e. sampling from multiple possible segmentations) for the sake of simplicity.",
"7 https://commoncrawl.org/about/ 4.3 Language Modeling Transformer Similar to RoBERTa and BERT, CamemBERT is a multi-layer bidirectional Transformer (Vaswani et al., 2017).",
"Given the widespread usage of Transformers, we do not describe them here and refer the reader to (Vaswani et al., 2017).",
"CamemBERT uses the original architectures of BERTBASE (12 layers, 768 hidden dimensions, 12 attention heads, 110M parameters) and BERTLARGE (24 layers, 1024 hidden dimensions, 16 attention heads, 335M parameters).",
"CamemBERT is very similar to RoBERTa, the main difference being the use of whole-word masking and the usage of SentencePiece tokenization (Kudo and Richardson, 2018) instead of WordPiece (Schuster and Nakajima, 2012).",
"Pretraining Objective We train our model on the Masked Language Modeling (MLM) task.",
"Given an input text sequence composed of N tokens x 1 , ..., x N , we select 15% of tokens for possible replacement.",
"Among those selected tokens, 80% are replaced with the special <MASK> token, 10% are left unchanged and 10% are replaced by a random token.",
"The model is then trained to predict the initial masked tokens using cross-entropy loss.",
"Following the RoBERTa approach, we dynamically mask tokens instead of fixing them statically for the whole dataset during preprocessing.",
"This improves variability and makes the model more robust when training for multiple epochs.",
"Since we use SentencePiece to tokenize our corpus, the input tokens to the model are a mix of whole words and subwords.",
"An upgraded version of BERT 8 and Joshi et al. (2019) have shown that masking whole words instead of individual subwords leads to improved performance.",
"Whole-word Masking (WWM) makes the training task more dif-ficult because the model has to predict a whole word rather than predicting only part of the word given the rest.",
"We train our models using WWM by using whitespaces in the initial untokenized text as word delimiters.",
"WWM is implemented by first randomly sampling 15% of the words in the sequence and then considering all subword tokens in each of this 15% for candidate replacement.",
"This amounts to a proportion of selected tokens that is close to the original 15%.",
"These tokens are then either replaced by 8 https://github.com/google-research/ bert/blob/master/README.md <MASK> tokens (80%), left unchanged (10%) or replaced by a random token.",
"Subsequent work has shown that the next sentence prediction (NSP) task originally used in BERT does not improve downstream task performance (Lample and Conneau, 2019; Liu et al., 2019), thus we also remove it.",
"Optimisation Following (Liu et al., 2019), we optimize the model using Adam (Kingma and Ba, 2014) ( 1 = 0 . 9 , 2 = 0 . 98 ) for 100k steps with large batch sizes of 8192 sequences, each sequence containing at most 512 tokens.",
"We enforce each sequence to only contain complete paragraphs (which correspond to lines in the our pretraining dataset).",
"Pretraining We use the RoBERTa implementation in the fairseq library (Ott et al., 2019).",
"Our learning rate is warmed up for 10k steps up to a peak value of 0 .",
"0007 instead of the original 0 .",
"0001 given our large batch size, and then fades to zero with polynomial decay.",
"Unless otherwise specified, our models use the BASE architecture, and are pretrained for 100k backpropagation steps on 256 Nvidia V100 GPUs (32GB each) for a day.",
"We do not train our models for longer due to practical considerations, even though the performance still seemed to be increasing.",
"We use the pretrained CamemBERT in two ways.",
"In the first one, which we refer to as fine-tuning , we fine-tune the model on a specific task in an end-to-end manner.",
"In the second one, referred to as feature-based embeddings or simply embeddings , we extract frozen contextual embedding vectors from CamemBERT.",
"These two complementary approaches shed light on the quality of the pretrained hidden representations captured by CamemBERT.",
"Fine-tuning For each task, we append the relevant predictive layer on top of CamemBERT's architecture.",
"Following the work done on BERT (Devlin et al., 2019), for sequence tagging and sequence labeling we append a linear layer that respectively takes as input the last hidden representation of the <s> special token and the last hidden representation of the first subword token of each word.",
"For dependency parsing, we plug a bi-affine graph predictor head as inspired by Dozat and Manning (2017).",
"We refer the reader to this article for more details on this module.",
"We fine-tune on XNLI by adding a classification head composed of one hidden layer with a non-linearity and one linear projection layer, with input dropout for both.",
"We fine-tune CamemBERT independently for each task and each dataset.",
"We optimize the model using the Adam optimiser (Kingma and Ba, 2014) with a fixed learning rate.",
"We run a grid search on a combination of learning rates and batch sizes.",
"We select the best model on the validation set out of the 30 first epochs.",
"For NLI we use the default hyper-parameters provided by the authors of RoBERTa on the MNLI task.",
"9 Although this might have pushed the performances even further, we do not apply any regularisation techniques such as weight de-cay, learning rate warm-up or discriminative fine-tuning, except for NLI.",
"We show that fine-tuning CamemBERT in a straightforward manner leads to state-of-the-art results on all tasks and outperforms the existing BERT-based models in all cases.",
"The POS tagging, dependency parsing, and NER experiments are run using Hugging Face's Transformer library extended to support CamemBERT and dependency parsing (Wolf et al., 2019).",
"The NLI experiments use the fairseq library following the RoBERTa implementation.",
"Embeddings Following Strakov et al. (2019) and Straka et al. (2019) for mBERT and the English BERT, we make use of CamemBERT in a feature-based embeddings setting.",
"In order to obtain a representation for a given token, we first compute the average of each sub-word's representations in the last four layers of the Transformer, and then average the resulting sub-word vectors.",
"We evaluate CamemBERT in the embeddings setting for POS tagging, dependency parsing and NER; using the open-source implementations of Straka et al. (2019) and Strakov et al. (2019).",
"10 5 Evaluation of CamemBERT In this section, we measure the performance of our models by evaluating them on the four aforementioned tasks: POS tagging, dependency parsing, NER and NLI.",
"9 More details at https://github.com/pytorch/ fairseq/blob/master/examples/roberta/ README.glue.md .",
"10 UDPipe Future is available at https://github.",
"com/CoNLL-UD-2018/UDPipe-Future , and the code for nested NER is available at https://github.com/ ufal/acl2019_nested_ner .",
"POS tagging and dependency parsing For POS tagging and dependency parsing, we compare CamemBERT with other models in the two settings: fine-tuning and as feature-based embeddings .",
"We report the results in Table",
"2. CamemBERT reaches state-of-the-art scores on all treebanks and metrics in both scenarios.",
"The two approaches achieve similar scores, with a slight advantage for the fine-tuned version of CamemBERT, thus questioning the need for complex task-specific architectures such as UDPipe Future.",
"Despite a much simpler optimisation process and no task specific architecture, fine-tuning CamemBERT outperforms UDify on all treebanks and sometimes by a large margin (e.g. +4.15% LAS on Sequoia and +5.37 LAS on ParTUT).",
"CamemBERT also reaches better performance than other multilingual pretrained models such as mBERT and XLMMLM-TLM on all treebanks.",
"CamemBERT achieves overall slightly better results than the previous state-of-the-art and task-specific architecture UDPipe Future+mBERT +Flair, except for POS tagging on Sequoia and POS tagging on Spoken, where CamemBERT lags by 0.03% and 0.14% UPOS respectively.",
"UDPipe Fu-ture+mBERT +Flair uses the contextualized string embeddings Flair (Akbik et al., 2018), which are in fact pretrained contextualized character-level word embeddings specifically designed to handle misspelled words as well as subword structures such as prefixes and suffixes.",
"This design choice might explain the difference in score for POS tagging with CamemBERT, especially for the Spoken treebank where words are not capitalized, a factor that might pose a problem for CamemBERT which was trained on capitalized data, but that might be properly handle by Flair on the UDPipe Future+mBERT +Flair model.",
"Named-Entity Recognition For NER, we similarly evaluate CamemBERT in the fine-tuning setting and as input embeddings to the task specific architecture LSTM+CRF.",
"We report these scores in Table",
"3. In both scenarios, CamemBERT achieves higher F1 scores than the traditional CRF-based architectures, both non-neural and neural, and than fine-tuned multilingual BERT models.",
"11 Using CamemBERT as embeddings to the traditional LSTM+CRF architecture gives slightly higher scores than by fine-tuning the model (89.08 vs. 89.55).",
"This demonstrates that although CamemBERT can be used successfully without any task-specific architecture, it can still produce high quality contextualized embeddings that might be useful in scenarios where powerful downstream architectures exist.",
"11 XLMMLM-TLM is a lower-case model.",
"Case is crucial for NER, therefore we do not report its low performance (84.37%) Natural Language Inference On the XNLI benchmark, we compare CamemBERT to previous state-of-the-art multilingual models in the fine-tuning setting.",
"In addition to the standard CamemBERT model with a BASE architecture, we train another model with the LARGE architecture, referred to as CamemBERT LARGE , for a fair comparison with XLM-RLARGE .",
"This model is trained with the CCNet corpus, described in Sec. 6, for 100k steps.",
"12 We expect that training the model for longer would yield even better performance.",
"CamemBERT reaches higher accuracy than its BASE counterparts reaching +5.6% over mBERT, +2.3 over XLMMLM-TLM , and +2.4 over XLM-RBASE .",
"CamemBERT also uses as few as half as many parameters (110M vs. 270M for XLM-RBASE ).",
"CamemBERT LARGE achieves a state-of-the-art accuracy of 85.7% on the XNLI benchmark, as opposed to 85.2, for the recent XLM-RLARGE .",
"CamemBERT uses fewer parameters than multilingual models, mostly because of its smaller vocabulary size (e.g. 32k vs. 250k for XLM-R).",
"Two elements might explain the better performance of CamemBERT over XLM-R.",
"Even though XLM-R was trained on an impressive amount of data (2.5TB), only 57GB of this data is in French, whereas we used 138GB of French data.",
"Additionally XLM-R also handles 100 languages, and the authors show that when reducing the number of languages to 7, they can reach 82.5% accuracy for French XNLI with their BASE architecture.",
"Summary of CamemBERT's results CamemBERT improves the state of the art for the 4 downstream tasks considered, thereby confirming on French the usefulness of Transformer-based models.",
"We obtain these results when using CamemBERT as a fine-tuned model or when used as contextual embeddings with task-specific architectures.",
"This questions the need for more complex downstream architectures, similar to what was shown for English (Devlin et al., 2019).",
"Additionally, this suggests that CamemBERT is also able to produce high-quality representations out-of-the-box without further tuning.",
"12 We train our LARGE model with the CCNet corpus for practical reasons.",
"Given that BASE models reach similar performance when using OSCAR or CCNet as pretraining corpus (Appendix Table 8), we expect an OSCAR LARGE model to reach comparable scores.",
"In this section we investigate the influence of the homogeneity and size of the pretraining corpus on downstream task performance.",
"With this aim, we train alternative version of CamemBERT by varying the pretraining datasets.",
"For this experiment, we fix the number of pretraining steps to 100k, and allow the number of epochs to vary accordingly (more epochs for smaller dataset sizes).",
"All models use the BASE architecture.",
"In order to investigate the need for homogeneous clean data versus more diverse and possibly noisier data, we use alternative sources of pretraining data in addition to OSCAR: Wikipedia , which is homogeneous in terms of genre and style.",
"We use the official 2019 French Wikipedia dumps 13 .",
"We remove HTML tags and tables using Giuseppe At-tardi's WikiExtractor .",
"14 CCNet (Wenzek et al., 2019), a dataset extracted from Common Crawl with a different filtering process than for OSCAR.",
"It was built using a language model trained on Wikipedia, in order to filter out bad quality texts such as code or tables.",
"15 As this filtering step biases the noisy data from Common Crawl to more Wikipedia-like text, we expect CCNet to act as a middle ground between the unfil-tered noisy OSCAR dataset, and the clean Wikipedia dataset.",
"As a result of the different filtering processes, CCNet contains longer documents on average compared to OSCAR with smallerand often noisierdocuments weeded out.",
"In order to make the comparison between these three sources of pretraining data, we randomly sample 4GB of text (at the document level) from OSCAR and CCNet, thereby creating samples of both Common-Crawl-based corpora of the same size as the French Wikipedia.",
"These smaller 4GB samples also provides us a way to investigate the impact 13 https://dumps.wikimedia.org/ backup-index.html .",
"of pretraining data size.",
"Downstream task performance for our alternative versions of CamemBERT are provided in Table 5.",
"The upper section reports scores in the fine-tuning setting while the lower section reports scores for the embeddings.",
"Table 5 clearly shows that models trained on the 4GB versions of OSCAR and CCNet (Common Crawl) perform consistently better than the the one trained on the French Wikipedia.",
"This is true both in the fine-tuning and embeddings setting.",
"Unsurprisingly, the gap is larger on tasks involving texts whose genre and style are more divergent from those of Wikipedia, such as tagging and parsing on the Spoken treebank.",
"The performance gap is also very large on the XNLI task, probably as a consequence of the larger diversity of Common-Crawl-based corpora in terms of genres and topics.",
"XNLI is indeed based on multiNLI which covers a range of genres of spoken and written text.",
"The downstream task performances of the models trained on the 4GB version of CCNet and OSCAR are much more similar.",
"16 16 We provide the results of a model trained on the whole CCNet corpus in the Appendix.",
"The conclusions are similar when comparing models trained on the full corpora: downstream results are similar when using OSCAR or CCNet.",
"An unexpected outcome of our experiments is that the model trained only on the 4GB sample of OSCAR performs similarly to the standard CamemBERT trained on the whole 138GB OSCAR.",
"The only task with a large performance gap is NER, where 138GB models are better by 0.9 F1 points.",
"This could be due to the higher number of named entities present in the larger corpora, which is ben-eficial for this task.",
"On the contrary, other tasks don't seem to gain from the additional data.",
"In other words, when trained on corpora such as OSCAR and CCNet, which are heterogeneous in terms of genre and style, 4GB of uncompressed text is large enough as pretraining corpus to reach state-of-the-art results with the BASE architecure, better than those obtained with mBERT (pretrained on 60GB of text).",
"17 This calls into question the need to use a very large corpus such as OSCAR or CCNet when training a monolingual Transformer-based language model such as BERT or RoBERTa.",
"Not only does this mean that the computational (and therefore environmental) cost of training a state-of-the-art language model can be reduced, but it also means that CamemBERT-like models can be trained for all languages for which a Common-Crawl-based corpus of 4GB or more can be created.",
"OSCAR is available in 166 languages, and provides such a corpus for 38 languages.",
"Moreover, it is possible that slightly smaller corpora (e.g. down to 1GB) could also prove sufficient to train high-performing language models.",
"We obtained our results with BASE architectures.",
"Further research is needed to confirm the validity of our findings on larger architectures and other more complex natural 17 The OSCAR-4GB model gets slightly better XNLI accuracy than the full OSCAR-138GB model (81.88 vs. 81.55).",
"This might be due to the random seed used for pretraining, as each model is pretrained only once.",
"language understanding tasks.",
"However, even with a BASE architecture and 4GB of training data, the validation loss is still decreasing beyond 100k steps (and 400 epochs).",
"This suggests that we are still under-fitting the 4GB pretraining dataset, training longer might increase downstream performance.",
"Since the pre-publication of this work (Martin et al., 2019), many monolingual language models have appeared, e.g. (Le et al., 2019; Virtanen et al., 2019; Delobelle et al., 2020), for as much as 30 languages (Nozza et al., 2020).",
"In almost all tested config-urations they displayed better results than multilingual language models such as mBERT (Pires et al., 2019).",
"Interestingly, Le et al. (2019) showed that using their FlauBert, a RoBERTa-based language model for French, which was trained on less but more edited data, in conjunction to CamemBERT in an ensemble system could improve the performance of a parsing model and establish a new state-of-the-art in constituency parsing of French, highlighting thus the complementarity of both models.",
"18 As it was the case for English when BERT was first released, the availability of similar scale language models for French enabled interesting applications, such as large scale anonymization of legal texts, where CamemBERT-based models established a new state-of-the-art on this task (Benesty, 2019), or the first large question answering experiments on a French Squad data set that was released very recently (d'Hoffschmidt et al., 2020) where the authors matched human performance using CamemBERT LARGE .",
"Being the first pre-trained language model that used the open-source Common Crawl Oscar corpus and given its impact on the community, CamemBERT paved the way for many works on monolingual language models that followed.",
"Furthermore, the availability of all its training data favors reproducibility and is a step towards better understanding such models.",
"In that spirit, we make the models used in our experiments available via our website and via the huggingface and fairseq APIs, in addition to the base CamemBERT model.",
"In this work, we investigated the feasibility of training a Transformer-based language model for lan-18",
"guages other than English.",
"Using French as an example, we trained CamemBERT, a language model based on RoBERTa.",
"We evaluated CamemBERT on four downstream tasks (part-of-speech tagging, dependency parsing, named entity recognition and natural language inference) in which our best model reached or improved the state of the art in all tasks considered, even when compared to strong multilingual models such as mBERT, XLM and XLM-R, while also having fewer parameters.",
"Our experiments demonstrate that using web crawled data with high variability is preferable to using Wikipedia-based data.",
"In addition we showed that our models could reach surprisingly high performances with as low as 4GB of pretraining data, questioning thus the need for large scale pretraining corpora.",
"This shows that state-of-the-art Transformer-based language models can be trained on languages with far fewer resources than English, whenever a few gigabytes of data are available.",
"This paves the way for the rise of monolingual contextual pre-trained language-models for under-resourced languages.",
"The question of knowing whether pretraining on small domain specific content will be a better option than transfer learning techniques such as fine-tuning remains open and we leave it for future work.",
"Pretrained on pure open-source corpora, CamemBERT is freely available and distributed with the MIT license via popular NLP libraries ( fairseq and huggingface ) as well as on our website camembert-model.fr .",
"We want to thank Clmentine Fourrier for her proofreading and insightful comments, and Alix Chagu for her great logo.",
"This work was partly funded by three French National funded projects granted to Inria and other partners by the Agence Nationale de la Recherche, namely projects PARSITI (ANR-16-CE33-0021), SoSweet (ANR-15-CE38-0011) and BASNUM (ANR-18-CE38-0003), as well as by the last author's chair in the PRAIRIE institute funded by the French national agency ANR as part of the Investissements d'avenir programme under the reference ANR-19-P3IA-0001."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"method",
"abstain",
"method",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"We propose MULTIOPED 1 , an open-domain news editorial corpus that supports various tasks pertaining to the argumentation structure in news editorials, focusing on automatic perspective discovery .",
"News editorial is a genre of persuasive text, where the argumentation structure is usually implicit .",
"However, the arguments presented in an editorial typically cen-ter around a concise, focused thesis, which we refer to as their perspective .",
"MULTIOPED aims at supporting the study of multiple tasks relevant to automatic perspective discovery , where a system is expected to produce a single-sentence thesis statement summarizing the arguments presented.",
"We argue that identifying and abstracting such natural language perspectives from editorials is a crucial step toward studying the implicit argumentation structure in news editorials.",
"We first discuss the challenges and define a few conceptual tasks towards our goal.",
"To demonstrate the utility of MULTIOPED and the induced tasks, we study the problem of perspective summarization in a multi-task learning setting, as a case study.",
"We show that, with the induced tasks as auxiliary tasks, we can improve the quality of the perspective summary generated.",
"We hope that MULTIOPED will be a useful resource for future studies on argumentation in the news editorial domain.",
"News editorial is a form of persuasive text that conveys consensus opinion on a controversial topic from the editors of a newspaper.",
"Much like an argumentative essay, a news editorial centers around a thesis, which represents the authors' perspective on the topic.",
"Usually, a news editorial argues in favor of the authors' stance on the topic, and is substantiated by extensive factual evidence .",
"As news editorials function as professionally produced written discourse for conveying media attitude and guidance, they have traditionally been studied by the community as a rich resource for many argumantation-related tasks.",
"(Wilson and Wiebe, 2003; Yu and Hatzivassiloglou, 2003; Bal and Saint-Dizier, 2009).",
"This work targets the problem of developing computational methods to identify and comparatively analyze the authors' perspectives and supporting arguments behind news editorials.",
"One challenge to studying the argumentation structure in news editorials is that its elements are rarely expressed explicitly (El Baff et al., 2018).",
"For example, Figure 1 shows two news editorials holding opposite views on whether a lockdown should continue.",
"However, neither of them present their key perspectives explicitly.",
"Instead, the perspective is conveyed through subtle rhetoric strategies to either affirm or challenge the readers' stance from prior belief on the topic, as a study by El Baff et al. (2018) discovers.",
"As Figure 1 shows, the statement The lock down should stop concisely summarizes the perspective expressed in the article on the left.",
"We refer to such statements as perspectives throughout the paper.",
"The ability to abstractively summarize the perspectives from the editorial would allow us to understand multiple topic-aligned editorials in context and reason about their inter-editorial argumentation structure.",
"To facilitate research along the line, we collect data from THEPERSPECTIVE 2 website, and construct MULTIOPED , an open-domain English news editorial corpus that supports various tasks pertaining to the argumentation structure in news editorials, focusing on automatic perspective discovery (Chen et al., 2019).",
"The structure of the data is shown in Figure",
"1. For each of the 1,397 natural language query on a different topic in our dataset, it features two (rather long) news editorials.",
"Each editorial features a single-sentence perspective, which is abstractively summarized from the editorial by human experts.",
"A short abstract then highlights the details in the editorial that support the perspective .",
"The perspectives of the two editorials represents responses of opposite stances towards the query.",
"Naturally, the structure of the dataset induces a range of important argumentation-related natural language understanding tasks.",
"For instance, the presence of the summary perspective allows for stance classification (Hasan and Ng, 2013) with respect to the query, which arguably is more tangible than inferring the stance from the entire editorial.",
"Another example task is the conditional generation of the perspective from the abstract/editorial, which relates to the widely studied task of argument generation (Hua and Wang, 2018; Alshomary et al., 2020).",
"We defer the more detailed description of the induced tasks to Section",
"3. One key advantage of MULTIOPED that is absent from earlier datasets is that a large number of argumentation-related tasks can be studied jointly using a single high quality corpus.",
"To demonstrate this benefit and the utility of the MULTIOPED dataset 3 along with its induced tasks, we study the problem of perspective summarization in a multitask learning setting.",
"We employ perspective relevance and stance classifications as two auxilliary tasks to the summarization objective.",
"Our empiri-2 https://www.theperspective.com/ perspectives/ 3 Our code and data is available at http://cogcomp.",
"cal and human analysis on the generated summaries show that the multi-task learning setting improves the generated perspectives in terms of the argument quality and stance consistency.",
"In summary, our contributions in this work are three-fold.",
"First, we propose a conceptual framework for identifying and abstracting the perspectives and the corresponding argumentation structure in news editorials, and define a set of tasks necessary for achieving this goal.",
"Second, we propose the MULTIOPED dataset, a news editorial dataset that induces multiple argumentation-related tasks.",
"Third, we demonstrate the utility of our multi-purpose dataset and induced tasks, by using the perspective summarization task as a case study.",
"We include the induced tasks as auxiliary objectives in multi-task learning setting, and demonstrate their effectiveness to perspective summarization.",
"Our goal of perspective discovery follows similar definition proposed by Chen et al. (2019), and is closely related to a widely studied area of argumentation mining, i.e. identifying the argumentation structure within persuasive text (Stab and Gurevych, 2014b; Kiesel et al., 2015).",
"However, most studies in this domain focus on extractive methods, which becomes less applicable to our study.",
"As the arguments are usually presented in an subtle and implicit way in news editorials, we instead focus on the generation methods for the perspectives.",
"This closely resembles the argument conclusion generation task (Alshomary et al., 2020).",
"One key distinction here is the presense of query to provide topic guidance during the perspective generation.",
"Compared to other conditional text generation tasks, perspective generation subjects to a few more constraints with respect to the argumentation structure.",
"For example, the perspective must constitute the same stance (Hasan and Ng, 2013) as the editorial towards the query.",
"On the other hand, while the editorial may cover content not directly related to the query, the generated perspective must present a relevant argument in the query's context.",
"Such structural constraints can be studied in the format of classification problems.",
"And being able to study such problems along side the perspective summarization task on one high-quality corpus is important in our case, as it opens up the probability of modeling the tasks jointly.",
"We show the benefit of Dataset Source Open Domain Cross Article Abstractive ARAUCARIADB (Reed et al., 2008) News Ed.",
"doing so by presenting a case study in section",
"5. As the query provides topic guidance, it allows for the study of the topic-aligned pairs of editorials which presents counter-arguments to each other.",
"Such property is absent from notable datasets of similar purposes to ours, as shown in Table",
"1. ARAUCARIADB (Reed et al., 2008) is the first effort to provide large-scale annotations of dense argumentation structure within individual news editorials.",
"Stab and Gurevych (2014a); Eckle-Kohler et al. (2015) provide resources for extractive argumentation structure in persuasive essays and news articles, respectively.",
"Later works (Hua and Wang, 2018; Chen et al., 2019) focus on the abstractive generation or identification of arguments from web corpora.",
"All of these datasets focus on studies of argumentation structure within individual document.",
"Instead, our proposed dataset presents the opportunity to study the cross-document argumentation structure.",
"Following the design principles outlined in the previous section, we propose a topic-aligned English news editorial corpus, MULTIOPED .",
"The structure of an example instance in MULTIOPED is shown in Figure",
"1. To clarify our description of the dataset, we use the following notations.",
"Let q be a query about a controversial topic.",
"Each q in the dataset is paired with two editorials e pro and e con , that constitute supporting and opposing stances to the query q respectively.",
"Each editorial is abstracted into and a single-sentence perspective p , which provides a high-level summarization of the key argument presented in the editorial.",
"The premises, or relevant details to support the perspective, forms the abstract a .",
"Naturally, the relation between these elements induces several tasks, most of which encompass similar definitions to existing argumentation-related tasks.",
"We define and describe the tasks and their connection to our end goal of perspective discovery below.",
"1. Generating an Abstract : Given an editorial e , a system is expected to identify and summarize the relevant arguments into an abstract paragraph a to the context provided by the query q .",
"This is closely related to the task of argument synthesis (El Baff et al., 2019; Hua et al., 2019).",
"We set aside this problem in our case study in section 5, and use the abstract provided by the dataset.",
"2. Perspective Summarization : Given the generated abstract a and the query q , a system is expected to generate the perspective p , a concise summary of the arguments presented in a .",
"Conceptually, this problem resembles the task of argument conclusion generation (Alshomary et al., 2020).",
"We adopt a slightly different setting where the target topic is expressed in the form of a natural language query.",
"editorials e 's stance towards a query q .",
"The generated perspective p from editorial e allows us to focus on a simpler task definition of classifying the stance of the perspective to the query q (Hasan and Ng, 2013; Bar-Haim et al., 2017).",
"4. Assessing the Relevance of Perspective : We want to measure the validity of the perspective by assessing whether the perspective presents a relevant argument towards the query (Chen et al., 2019; Ein-Dor et al., 2020).",
"This can be formulated as a classification problem with the query q and a perspective p as inputs, as we show in section",
"5. Figure 4: Relevance classification: Decide if the perspective is relevant to the query.",
"We extract the query, editorial article pairs, abstract paragraph pairs, along with their perspective summaries from THEPERSPECTIVE 4 website.",
"The website presents controversial topics in the form of queries.",
"For each query, two related editorial articles with opposing views from different sources 4 https://www.theperspective.com/ perspectives/ are selected by the writers from the website.",
"The writers create a concise one-sentence summary of each article as the response to the query, and an abstract paragraph to summarize the relevant arguments from the article.",
"An example structure of the data is shown in Figure.1.",
"To verify the structure from the website, and collect additional annotations, such as stance of the perspectives, we conduct a few annotation experiments with Amazon Mechanical Turk 7 .",
"For all of our annotation experiments, we require the workers to be located in the United States, as the controversial topics covered by the website are most applicable in the U.S. context.",
"We also require the workers to have masters qualifications (i.e. Top performers recognized by MTurk among all workers).",
"We compensate the workers $0 .",
"75 , $1 .",
"00 and $1 .",
"25 per 10 queries for the implicit reference resolution, topic annotation, and stance annotation tasks respectively.",
"The compensation rates are determined by estimating the average completion time for each annotation experiments.",
"Example screenshots of our annotation interface and more detailed annotation guidelines can be found in Appendix B. 4.2.1 Stance Annotation In our dataset, each query is presented with two perspectives with opposite stance to the query.",
"However, the raw data that we collected does not specify the stance of each perspective individually.",
"We ask two expert annotators label whether each perspective is offering a supporting or opposing view with respect to its query.",
"The two experts discuss and adjudicate their decisions.",
"We then ask on average three crowdsource workers per instance to verify the annotations.",
"From the annotations collected by experts, we find that 30 out of 1 , 397 queries do not constitute a clear stance.",
"Such queries are typically \"open-ended\" questions which cannot be responded with a yes or no answer, i.e. why or what questions.",
"We leave these instances unlabeled and exclude them from the next verification step.",
"To assess the quality of stance labels created, we randomly sample 500 perspectives, and ask three MTurk workers per instance to verify stance labels.",
"We computed the inter-rater agreement fleiss' = 0 .",
"81 among workers, and the agreement between majority decision from works and the ex-pert's adjudicated annotations is cohen's = 0 .",
"92 .",
"We describe how we measure the two types of agreements respectively in Appendix A.3.",
"Some of the perspectives in our dataset have implicit references to certain subjects in the query.",
"For instance, for a query Is Trump Right To Criticize Mail-In Voting? , and a perspective It's far too risky for an election , the word \"It\" in the perspective refers to Mail-in Voting in the query.",
"As we assume that a perspective should presents a complete, valid argument on itself, we decide to replace such implicit reference in a perspective with the correct referent in the query.",
"For example, the corrected perspective in the previous example would become Mail-in voting is far too risky for an election .",
"We ask one expert annotator to identify implicit references and make modifications for every perspective in the dataset.",
"In total, 1 , 301 out of 2 , 794 perspectives are identified and corrected by the expert annotator.",
"We ask three Turkers to verify that the modifications do not introduce any grammatical error or change the original meaning.",
"We randomly sample 500 modified perspectives and present Turkers with the question of \"Will this modification change the original meaning or introduce grammar error?\".",
"The percentage of majority answers being No is 84%.",
"We include both changed and original versions of the perspectives in our datasets.",
"We create 9 topic labels according to the categorization from THEPERSPECTIVE website and major news outlets.",
"We then as ask three MTurk workers to assign one of the 9 topic labels to each query.",
"We regard the majority answer by the Turkers as the annotation for its topic category.",
"In cases where all three annotators choose different categories ( 43 cases out of all 1397 queries), we label it as other topics .",
"We show the distribution of topic categories in Figure",
"5. The inter-agreement among three annotators for this 9-class classification task is = 0 .",
"65 Figure 5: Topic distribution of the 1397 queries in MULTIOPED .",
"MULTIOPED consists of 1,397 queries about different news topics.",
"Each query is presented with two perspectives, two abstracts and two linked news editorials.",
"Despite a few stale urls and invalid redirections, we manage to extract the text for 2,584 news editorials.",
"More detailed statistics are reported in Table",
"2. 5 Case Study: Multi-task Learning for Perspective Summarization 5.1 Multi-Task Framework To demonstrate the benefits of modeling the induced tasks on the argumentation structure, we present a case study on the task of perspective summarization.",
"Given a query and an abstract from the related editorial, a system is expected to produce a concise and fluent summary perspective for the editorial.",
"In addition, the generated perspective ideally should satisfy a few structural constraints with respect to the query.",
"For instance, the generated perspective must constitute the same stance as the editorial towards the query.",
"Also the perspective should be relevant in the context of the query.",
"The two requirements resemble the perspective stance and relevance classification tasks defined in section 3 respectively. Motivated by this, we study the two tasks together with perspective summarization in a multitask learning framework. We choose BART (Lewis et al., 2020) as our base summarization model. BART is a pretrained auto-regressive transformer (Vaswani et al., 2017) encoder-decoder model, that have been proven effective in conditional text generation and other NLP tasks. Model ROUGE 1 ROUGE 2 ROUGELBERTSCOREREL . % STANCE % BART 28.24 11.34 26.96 88.67 91.91 72.32 + Rel 28.35 11.51 27.12 88.69 92.98 72.68 + Stance 28.19 11.53 26.93 88.75 91.25 73.39 + Rel & Stance 29.18 11.92 27.94 88.74 94.64 74.29 Table 3: Results of our multitask perspective summarization models. We compare to BART as a baseline, and experiment with different combinations of the auxiliary tasks. We report the F 1 score under ROUGE { 1 , 2 ,L } and BERTSCORE metrics, as well as the percentage of summaries with the correct relevance and stance label, as predicted by our pretrained classification models respectively. See Appendix A for training details and hyperparameters settings. Model RANK %1 STREL . STANCEBART 2.09 49.50 77.00 70.50 + Rel 1.74 60.50 87.00 70.50 + Stance 1.78 58.50 83.00 79.50 + R & S 1.76 59.00 82.00 69.00 Table 4: Human Evaluations results. R ANK shows a model's averaged rank judged by the raters ( 1 = best , 4 = worst ) %1 ST represents the percentage of generated summaries from one model that are ranked the best.",
"We start with a pretrained BART base model with 139 M parameters, and finetune the model to output the target perspective given the query and abstract concatenated as input.",
"In addition, we put two separate linear layers over the pooled embed-dings of the last decoder layer, and predict the relevance and stance labels of the generated summary respectively.",
"The two tasks and the perspective summarization are learned jointly, and share the underlying model parameters from BART.",
"One obvious challenge in the setup is that we do not have access to the ground truth stance and relevance labels for the generated summaries during training.",
"To address this, we adopt similar strategies as in knowledge distillation (Hinton et al., 2015).",
"We first train two separate BERT (De-vlin et al., 2019) classifiers as the teacher models for stance and relevance classificaiton respectively.",
"Due to the size limit of our dataset, we pretrain both models on the PERSPECTRUM dataset (Chen et al., 2019), which contain over 7,000 instances of training data, with similar formats and definition to our ( query, perspective ) pairs.",
"We further fine-tune the models on our training set.",
"When measured against our test set, the relevance and stance models achieve binary accuracy of 92% and 75% respectively.",
"During the perspective summarization model training, we use the pre-trained BERT models for relevance and stance classification to predict labels for each generated summary.",
"We expect the BART plus linear layers to mimic the predictions made by the two pretrained BERT models respectively.",
"Specifically: HQ = EOS ( DBART ( EBART ( Q ))) HA = EOS ( DBART ( EBART ( A ))) We feed the query and the abstract separately through the BART encoder ( EBART ) and decoder ( DBART ).",
"We get their hidden representations HQ and HA as the embedding of the end-of-sentence (</s>') token from the decoder.",
"We then concatenate HQ and HA , and feed the concatenation to the two linear layers.",
"Finally, a softmax layer is applied to get stance/relevance predictions y rel and y stance .",
"Next, We feed the query and the generated summary to the two pretrained BERT classification models to get the soft stance and relevance labels y rel and y stance .",
"We use two mean square error (MSE) loss terms to measure the discrepancy between the BART predictions and the soft labels.",
"We combine LREL and LSTANCE with the summarization objective, LSUM , which is the negative log-likelihood loss between generated and target perspective.",
"The auxiliary losses LREL and LSTANCE are weighted by tunable hyperparameters 1 and 2 respectively.",
"L = LSUM + 1 LREL + 2 LSTANCE 5.2 Results 5.2.1 Automatic Evaluations Table 3 shows our evaluation results of our multitask model with different combinations of auxiliary tasks.",
"The reported results are averaged over three trained models with different random initialization.",
"We first evaluate the generated perspective summaries against the target perspective with ROUGE (Lin, 2004) and BERTSCORE (Zhang et al., 2020) metrics.",
"We observe that relevance and stance auxiliary tasks both increase the ROUGE and BERTSCORE , and combining the two objectives yields the best performance under the ROUGE metrics.",
"To empirically verify whether the perspectives generated by our multi-task model are improved in terms of the relevance and stance correctness, we again use the two pretrained BERT classifiers to measure the percentage of generated summary with correct relevance and stance label.",
"The results potentially suggest that by mimicing the predictions made by the two pretrained classifiers, our multitask framework is able to generate summaries with higher quality along the two dimensions.",
"We randomly sampled 100 instances of abstracts with query from the test set, and ask two human raters to judge the quality of perspectives generated by the four systems.",
"For the four summaries generated from an abstract by the different systems, we shuffle their order and ask the raters to rank each summary by the overall quality, with four criteria considered (1) Fluency (2) Grammatical Correctness (3) Faithfulness to the arguments offered in the original abstract (4) Salience .",
"We allow ties among different summaries.",
"We report their averaged ranks and the number of times a system is ranked first place in Table",
"4. The results are the averaged scores between the two annotators, and the level of agreement between them for this 4-class ranking task is = 0 .",
"35 .",
"For each summary, we ask the raters to annotate whether it (1) represents a relevant argument to the query (2) constitutes the correct stance as the target stance label.",
"The kappa agreement between the two Query Should trump accept democrats' gov't spending bill?",
"raters for these two tasks are 0.54 and 0.70, respectively.",
"We show the human evaluation results in Table",
"4. We observe that while both the relevance and stance auxiliary tasks improve the quality of the generated perspective, combining the two auxiliary tasks does not guarantee a better summary quality.",
"The results on ROUGE , BERTSCORE and human evaluation suggest that the perspective summarization model learning benefits from both the relevance and stance tasks.",
"However, we also observe that the vanilla BART present a strong baseline in both automatic and human evaluations.",
"We list two typical cases where we observe the relevance and stance objectives improve the quality of the generated summary.",
"For the query shown in Table 5, the BART model generates an out-of-context word shutdown, which exists in the abstract, but is not applicable in the context provided by the query.",
"The model with relevance objective, on the other hand, generates a perspective that is coherent to the context provided.",
"For the query shown in Table 6, the baseline BART model incorrectly produces a supporting perspective to the query, while the editorial or abstract presents the opposite stance.",
"The model with the stance objective generates a perspective with a matching stance.",
"While we choose relevance and stance classification as the two auxiliary tasks in this case study, there exist many other candidate tasks that might be helpful in the setting.",
"For instance, measuring the quality (Toledo et al., 2019), or more specifically persuasiveness (Carlile et al., 2018) of the perspective might be two, amongst other, viable options.",
"As our study assumes that the abstract is provided for each editorial, the overall performance of perspective summarization will likely drop, if we use model-generated abstract instead of ground truth as input.",
"News editorials have been studied as a resource for studying many argumentation-related tasks.",
"Wilson and Wiebe (2003); Yu and Hatzivassiloglou (2003) use editorials for the study on sentiments and opinions.",
"Later works (Reed et al., 2008; Bal and Saint-Dizier, 2009; Chow, 2016) shift focus on the argumentation structure within editorials, and their persuasiveness effect (Al Khatib et al., 2016; El Baff et al., 2020).",
"A few other recent studies have explored argument quality (El Baff et al., 2018) and generation (El Baff et al., 2019) when using editorials as a resource.",
"Our proposed dataset and study focus on the interplay between elements of the argumentation structure presented in editorial articles.",
"Unlike previous work, we study these elements as the abstractive instead of extractive summary from the news editorials.",
"Most early efforts in argument generation, i.e. generating components in an argumentation structure, study rule-based synthesis methods based on argumentation theories (Reed et al., 1996; Zuker-man et al., 2000).",
"With the recent progress in neural, sequence to sequence text generation methods (Sutskever et al., 2014), a few studies have adapted such techniques for end-to-end argument generation.",
"(Wang and Ling, 2016; Hua and Wang, 2018; Hua et al., 2019).",
"The task of perspective generation in this work closely relates to argument conclusion generation (Alshomary et al., 2020).",
"Our study focuses on the setting where the target topic, or the query , is given as input to the generation model.",
"Due to the implicit nature of the perspectives (Habernal et al., 2018), one key challenge to the task is keep the semantics of the perspective generated truthful to the abstract and editorial article.",
"We approach this by measuring the compatibility of the perspective to the context along the dimensions of content salience (Bar-Haim et al., 2020) and stance correctness (Bar-Haim et al., 2017).",
"Our multi-task generation approach conceptually resembles the work by Guo et al. (2018), where multiple auxiliary tasks is employed to improve the quality of the generated summary.",
"We present MULTIOPED an open-domain news editorial corpus that induces a number of argumentation-related tasks.",
"The proposed dataset presents a few properties that are absent from existing datasets.",
"First, the elements in the annotation structure are presented as abstraction over the text in editorial, as such elements usually exist implicitly in editorials.",
"Second, as the pairs of editorials are aligned by topic, and exhibit opposing stance to each other, such structure allows for studies on cross-document argumentation structure.",
"Third, the dataset allows for the study of multiple argumentation-related tasks together.",
"To demonstrate the power of having multiple related tasks in a single high-quality dataset, we study the problem of perspective summarization in a multi-task learning setting.",
"Our analysis shows that modeling stance and relevance classification jointly with the summarization task improves the overall quality of the perspective generated.",
"In future work, we hope to utilize the corpus to improve the multi-task framework for perspective summarization.",
"As we set aside the problem of abstract generation in our case study, we would also like to identify the challenges and potential solution to the problem.",
"We hope that MULTIOPED presents opportunities and challenges to future research in argumentation.",
"We collected data for MULTIOPED by automatically extracting data from www.theperspective.",
"com/perspectives .",
"The CEO of the website, Daniel Ravner, granted us permission to extract and use their data for academic research.",
"We further annotated the data using crowd-workers.",
"All crowd-workers were compensated by a fair wage determined by estimating the average completing time of each annotation task.",
"Please refer to section 4.2 for more details.",
"The queries, abstracts, and perspectives in MULTIOPED are written by the professional writers of the website.",
"The website aims at presenting the perspectives in each article without unnecessary subjective interpretation, but there is no guarantee that no subjectivity is involved in their content creation process.",
"The authors would like to thank Daniel Ravner, the CEO of www.theperspective.com , for kindly granting access to data from the site for academic research.",
"This work was supported in part by a Focused Award from Google, and a gift from Tencent."
] | [
"objective",
"abstain",
"method",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Context-aware neural machine translation (NMT) remains challenging due to the lack of large-scale document-level parallel dataset.",
"To break the corpus bottleneck, in this paper we aim to improve context-aware NMT by taking the advantage of the availability of both large-scale sentence-level parallel dataset and source-side monolingual documents.",
"1 To this end, we propose two pre-training tasks.",
"One learns to translate a sentence from source language to target language on the sentence-level parallel dataset while the other learns to translate a document from deliberately noised to original on the monolingual documents.",
"Importantly, the two pre-training tasks are jointly and simultaneously learned via the same model, thereafter fine-tuned on scale-limited parallel documents from both sentence-level and document-level perspectives.",
"Experimental results on four translation tasks show that our approach significantly improves translation performance.",
"One nice property of our approach is that the fine-tuned model can be used to translate both sentences and documents.",
"Document-level context-aware neural machine translation (NMT) aims to translate sentences in a document under the guidance of document-level context.",
"Recent years have witnessed great improvement in context-aware NMT with extensive attempts at effectively leveraging document-level context ((Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Maruf et al., 2019), to name a few).",
"However, the performance of context-aware NMT still suffers from the size of parallel document dataset.",
"On the one hand, unlike Corresponding Author: Junhui Li.",
"sentence-level translation models which could be well trained on large-scale sentence-level parallel datasets, the translation models of context-aware NMT may result in insufficient training.",
"On the other hand, with only scale-limited source-side documents, the context encoders may fail to effectively extract useful context from the whole document.",
"2 On the contrary, large-scale of parallel sentence corpora, and especially monolingual document corpora are much easier to find.",
"In this paper, our goal is to break the corpus bottleneck for context-aware NMT by leveraging both large-scale sentence-level parallel dataset and monolingual documents.",
"Specifically, we aim to use the former to boost the performance of translation models while employ the latter to enhance the context encoders' capability of capturing useful context information.",
"There have been several attempts to boost context-aware NMT performance in the scenarios where the document-level parallel dataset is scale-limited, or even not available.",
"On the one hand, sentence-level parallel dataset is a natural resource to use.",
"For example, Zhang et al. (2018) propose a two-stage training strategy for context-aware NMT by pre-training the model on a sentence-level parallel dataset.",
"On the other hand, Junczys-Dowmunt (2019) leverage large-scale source-side monolingual documents, in which they simply concatenate sentences within a document into a long sequence and explore multi-task training via the BERT-objective (Devlin et al., 2019) on the encoder.",
"Due to that different models are usually required to model sentences and documents, however, it is challenging to effectively take them both in a single model.",
"2 We note that not all, but many context-aware NMT models contain a context encoder to extract global context information from the document.",
"both sentence-level parallel dataset and monolingual documents, in this paper we propose a novel cross-task pre-training approach.",
"As shown in Figure 1, we define two pre-training tasks.",
"One learns to translate a sentence from source language to target language while the other learns to translate a document from deliberately noised to original.",
"Importantly, the two pre-training tasks are jointly learned via the same model synchronously.",
"Then we use document-level parallel dataset to fine-tune the properly pre-trained models.",
"Similarly to the pre-training, we can fine-tune the models from both sentence-level and document-level perspectives.",
"Experimental results on four document-level translation tasks show that our approach significantly improves translation performance, suggesting the effectiveness of our approach in modeling both sentence-level parallel dataset and monolingual documents.",
"One nice property of our approach is that the fine-tuned models can be used to translate both sentences and documents.",
"In the following, we first describe our pre-training tasks defined upon sentence-level parallel dataset and large-scale monolingual documents (Sec-tion 2.1).",
"Then we detail our model which caters such pre-training tasks (Section 2.2).",
"Finally, we present our joint pre-training (Section 2.3).",
"We define two pre-training tasks in our pre-training.",
"One is on sentence-level parallel dataset while the other is on monolingual documents.",
"Sentence-level Translation Given large-scale sentence-level parallel dataset, our pre-training task is quite straight, i.e., sentence-level translation.",
"Document-level Restoration Given monolingual documents, our pre-training task is to restore a document from a noised version.",
"To this end, we deliberately corrupt documents by following the two pre-training objectives, which are inspired by both gap sentence objective (Zhang et al., 2020) and masked language model objective (De-vlin et al., 2019).",
"Context-Aware Gap Sentence Restoration (CA-GSR).",
"Given a document S with N sentences, we randomly select M sentences as gap sentences and replace them with a mask token [MASK1] to inform the model.",
"The gap sentence ratio is, therefore M/N .",
"For each selected gap sentence, we use its left and right neighbours as input while the gap sentence serves as output.",
"To mimic document-level translation task, in the selection the first and the last sentences are always not selected while any two consequent sentences are not both selected.",
"Context-Aware Masked Sentence Restoration (CA-MSR).",
"Given a sentence X , we follow BERT and randomly select 15% tokens in it.",
"The selected tokens are (1) 80% of time replaced by a mask token [MASK2] , or (2) 10% of time replaced by a random token, or (3) 10% of time unchanged.",
"For a sentence, we use its masked (cid:98) X as input while the original X serves as output.",
"Both CA-GSR and CA-MSR are applied simultaneously with the noised document as context.",
"For convenience of presentation, we use a concrete example to illustrate the input and output of our document-level restoration task.",
"As shown in Figure 2, let assume that a document X contains 6 sentences and the third and fifth sentences (i.e., X 3 and X 5 ) are selected as gap sentences while the others are not.",
"On the one hand, for a sentence which is not selected as gap sentence, e.g., X 1 , we use its masked version (e.g., (cid:99) X 1 ) as input while try to predict its original sentence (e.g., X 1 ).",
"On the other hand, for a gap sentence, e.g., X 3 , we concatenate its left and right neighbouring sentences with separator [MASK1] and try to predict the gap sentence (e.g., X 3 ).",
"As shown in Figure 2, sentences from S 1 to S 6 constitute document-level input S while sentences from T 1 to T 6 make up output T .",
"Note that we do not include either gap sentences themselves or their masked version in S , in case the document (cid:2869) : (cid:2870) : (cid:2871) : (cid:2872) : (cid:2873) : (cid:2874) :",
"Overall, the pre-training task of document-level restoration is to predict target output T by giving source input S , which is the same as the task of document-level translation, except that in the restoration S and T are in the same language while in the latter the two are in different languages.",
"We use the same model to cater the above two pre-training tasks.",
"Since the task of document-level restoration is more complicated than the task of sentence-level translation, we first describe the model for document-level restoration (Sec-tion 2.2.1).",
"Then we apply the model for sentence-level translation (Section 2.2.2).",
"We define some notations before describing our model.",
"Given a document-level source input S = ( S 1 , , SN ) and target output T = ( T 1 , , TN ) with N sentence pairs, we assume each source sentence S i = ( s i, 1 , , s i,n ) consists of n words.",
"We use d m as the size of embedding and hidden state throughout the entire model.",
"Figure 3 shows our context-aware model.",
"It contains two parts, namely a global context encoder and a seq2seq model augmented by context representation.",
"Note that for document-level restoration, we take documents as input units.",
"Global Context Encoder For the i -th input sentence S i in document S , the global context encoder aims to extract useful global context for every word s i,j in it.",
"As shown in Figure",
"3(a), the encoder consists of a stack of N g identical encoder layers.",
"Each encoder layer consists of four major sub-layers: a self-attention sub-layer, a sentence representation sub-layer, a global context attention sub-layer and a feed-forward sub-layer.",
"In the k -th encoder layer, the self-attention sublayer takes A ( k ) i R n d m as input and computes a new sequence B ( k ) i with the same length via multihead attention function: B ( k ) i = MultiHead (cid:16) q = A ( k ) i , k = A ( k ) i , v = A ( k ) i (cid:17) , (1) where the output B ( k ) i is in the shape of R n d m , 3 and q , k , v represent the query and key-value pairs in attention mechanism respectively.",
"For the first encoder layer, A (1) i is the addition of S i 's word embedding and its position embedding while for other layers, A ( k ) i is the output of the proceeding encoder layer.",
"In the k -th encoder layer, the sentence representation sub-layer takes B ( k ) i as input and computes a vector to represent the sentence through a linear combination with a vector of weights as: ( k ) i = softmax (cid:18) W 2 tanh (cid:18) W 1 (cid:16) B ( k ) i (cid:17) T (cid:19)(cid:19) (2) where W 1 R d m d m and W 2 R d m are model parameters.",
"The output ( k ) i is a n -sized vector.",
"Then the representation vector of sentence S i is the weighted sum of its hidden states: C ( k ) i = ( k ) i B ( k ) i , (3) where C ( k ) i is a d m -sized vector.",
"We then stack vectors of all sentences in S into C ( k ) , i.e., C ( k ) = (cid:104) C ( k ) 1 , , C ( k ) N (cid:105) .",
"Note that C ( k ) RN d m is at document-level and represents the global context.",
"In the k -th encoder layer, the global context attention sub-layer extracts useful global context for s i,j in S i .",
"This is also done via multi-head attention function: D ( k ) i = MultiHead (cid:16) q = B ( k ) i , k = C ( k ) , v = C ( k ) (cid:17) , (4) where the output D ( k ) i is in the shape of R n d m .",
"In the k -th encoder layer, the Feed forward sublayer is applied to each position separately and 3 The actual output of this sub-layer is LayerNorm ( B ( k ) i + A ( k ) i ) , where LayerNorm is the layer normalization function.",
"For simplicity, we do not include the residual addition and layer normalization functions in our sub-layers.",
"Note that the sentence representation sub-layer is the only exception which does not have residual addition and layer normalization.",
"identically by two linear transformations with a ReLU activation in between.",
"We denote G i R n d m as the final output of the global context encoder, i.e., G i = E ( N g ) i .",
"That is to say, G i represents the context representation for sentence S i .",
"Context-Aware Model As shown in Figure 3",
"(b), the seq2seq model is very similar to the standard Transformer, except that it is now equipped with context representation obtained by the global context encoder.",
"For sentence S i , we denote the sentence encoder output as H i R n d m .",
"To leverage its context representation G i , we define a gate to linearly combine the two kinds of representation via: H (cid:48) i = H i + (1 ) G i , (6) where the gating weight is computed by = sigmoid (cid:16) [ H i ; G i ] WG (cid:17) , (7) where WG R 2 d m d m are model parameters.",
"Then we use H (cid:48) i to replace H i as the input to the decoder.",
"We point out that in the global context encoder and sentence encoder, we share the self-attention sub-layer and the feed forward sublayer.",
"That is to say, compared to the standard Transformer, we introduce new parameters to cater the sentence representation sub-layers, the global context sub-layers, and the gate mechanism to combine the two kinds of representation in Eq.",
"6.",
"In the first pre-training task, sentence-level translation is context-agnostic and does not require the global context encoder.",
"Therefore, it only uses the sentence encoder and decoder, as shown in Figure 3",
"(b).",
"Moreover, we turn off the gate mechanism by setting H (cid:48) i = H i .",
"Since we share the two sub-layers of self-attention and feed forward between the sentence encoder and the global context encoder, updating the model by sentence-level translation will have direct impact on the global context encoder too.",
"As shown in our experimentation, we share the same vocabulary for pre-training tasks.",
"To train the above two pre-training tasks with a single model, we follow the strategy used in Johnson et al. (2017) and add a preceding language tag to each source and target sentence.",
"Our joint pre-training on two tasks falls into the paradigm of multi-task learning (MTL).",
"In training stage, we take turns to load the training data of these pre-training tasks.",
"For example, we update model parameters on a batch of training instances from the first task, and then update parameters on a batch of training instances of the other, and the process repeats.",
"Similar to pre-training tasks, we define the following two different fine-tuning tasks from both sentence-level and document-level.",
"Sentence-level Translation We first extract sentence-level parallel sentence pairs from the document-level parallel dataset for fine-tuning.",
"This fine-tuning task enables the fine-tuned model to translate sentences.",
"In fine-tuning, this task is processed as same as the sentence-level translation task in pre-training.",
"Document-level Translation Given a parallel document ( X , Y ) with N sentence pairs ( X i , Y i ) | N 1 .",
"This fine-tune task is to translate source document X into target document Y .",
"In fine-tuning, this task takes parallel documents as input units and is processed as same as the document-level restoration task in pre-training.",
"The fine-tuning process is quite similar as the pretraining process in Section 2.3.",
"Specifically, we add a preceding language tag to each sentence.",
"Meanwhile in fine-tuning, we alternatively load batches of the two fine-tuning tasks.",
"To test the effect of our approach in leveraging sentence-level parallel dataset and monolingual documents, we carry out experiments on Chinese-to-English (ZH-EN) and English-to-German (EN-DE) translation.",
"Pre-training data settings.",
"The ZH-EN sentence-level parallel dataset contains 2.0M sentence pairs with 54.8M Chinese words and 60.8M English words.",
"4 We use WMT14 EN-DE 4 It consists of LDC2002E18, LDC2003E07, LDC2003E14, news part of LDC2004T08, LDC2002T01, LDC2004T07, LDC2005T06, LDC2005T10, LDC2009T02, translation dataset as the EN-DE sentence-level parallel dataset which consists of 4.4M sentence pairs.",
"5 We use Chinese Gigaword (LDC2009T27) and English Gigaword (LDC2012T21) as monolingual document dataset for ZH-EN and En-DE translation, respectively.",
"For efficient training, we split long documents into sub-documents with at most 30 sentences.",
"We have 2.6M (7.3M) sub-documents with 24M (102M) sentences in total for Chinese (English).",
"Upon the monolingual documents, we prepare training instances for the document-level restoration task and set gap sentence ratio to 20%.",
"All Chinese sentences are segmented by Jieba 6 while all English and German sentences are tok-enized by Moses scripts (Koehn et al., 2007).",
"7 For ZH-EN (EN-DE) translation, we merge the source and target sentences of the parallel dataset and the monolingual document and segment words into sub-words by a BPE model with 30K (25K) operations (Sennrich et al., 2016).",
"Fine-tuning data settings.",
"For ZH-EN, we have one translation task on news domain.",
"The document-level parallel corpus of training set include 41K documents with 780K sentence pairs.",
"8 We use the NIST MT 2006 dataset as the development set, and combine the NIST MT 2002, 2003, 2004, 2005, 2008 datasets as test",
"set..",
"For EN-DE, we test three translation tasks in domains of TED talks, News-Commentary and Europarl.",
"TED, which is from IWSLT 2017 MT track (Cettolo et al., 2012).",
"We combine test2016 and test2017 as our test set while the rest as the development set.",
"News, which is from News Commentary v11 corpus.",
"9 We use news-test2015 and news-test2016 as the development set and test set, respectively.",
"LDC2009T15, LDC2010T03.",
"5 https://www.statmt.org/wmt14/transla tion-task.html 6 https://github.com/messense/jieba-rs 7 As related studies, we lowercase English sentences in ZH-EN while truecase English and German sentences in EN-DE.",
"8 It consists of LDC2002T01, LDC2004T07, LDC2005T06, LDC2005T10, LDC2009T02, LDC2009T15, LDC2010T03.",
"Note that they are also included in ZH-EN parallel dataset.",
"9 http://www.casmacat.eu/corpus/news-co mmentary.html # Model Bi-sent Mo-doc ZH-EN EN-DE (TED) EN-DE (News) EN-DE (Europarl) Avg.",
"Europarl, which is extracted from the Europarl v7.",
"The training, development and test sets are obtained through randomly splitting the corpus.",
"All above EN-DE document-level parallel datasets are downloaded from Maruf et al. (2019).",
"10 Similar to fine-tuning datasets, the pre-processing steps consist of word segmentation, tokenization, long document split.",
"Then we segment the words into subwords using the BPE models trained on pretraining datasets.",
"See Appendix A for more statistics of the fine-tuning datasets.",
"Model settings.",
"We use OpenNMT (Klein et al., 2017) as the implementation of Transformer and implement our models based on it.",
"11 For all translation models, the numbers of layers in the context encoder, sentence encoder and decoder (i.e., N g , N e , and N d in Fig 3) are set to 6.",
"The hidden size and the filter size are set to 512 and 2048, respectively.",
"The number of heads in multi-head attention is 8 and the dropout rate is 0.1.",
"In pre-training, we train the models for 500K steps on four V100 GPUs with batch-size 8192.",
"We use Adam (Kingma and Ba, 2015) with 1 = 0.9, 2 = 0.98 for optimization, and learning rate as 1, the warm-up step as 16K.",
"In fine-tuning, we fine-tune the models for 200K steps on a single V100 GPU with batch-size 8192, learning rate 0.3, and warm-up step 4K.",
"In inferring, we set the beam size to 5.",
"Evaluation.",
"For evaluation, we use two metrics: BLEU (Papineni et al., 2002) and Meteor (Lavie and Agarwal, 2007) to evaluate translation quality.",
"Main results.",
"Table 1 shows the performance of our approach, where Ours-sent and Ours-doc indicate the performance achieved by our approach when we use sentences or documents as input units, respectively.",
"In the scenario where both sentence-level parallel dataset and monolingual documents are not used, we directly train our models from scratch with the two fine-tuning tasks on the fine-tuning datasets.",
"#2 and #3 in the table show that our model is capable of translating both sentences and documents.",
"Interestingly, when we use sentences as translation units, our models (i.e., #2 Ours-sent ) outperform sentence-level Transformer baseline (i.e., #1 who uses sentences as input units in both training and inferring) over all translation tasks with improvement of averaged 1.36 BLEU and 1.72 Meteor.",
"Moreover, when we use documents as translation units, our models (i.e., #3 Ours-doc ) achieve further improvement by modeling document-level context.",
"Compared to previous studies, it also shows that our approach surpasses all context-aware baselines on ZH-EN and EN-DE (TED) tasks and achieves the state-of-the-art on average.",
"In the scenario where both sentence-level parallel dataset and monolingual documents are used, 12 similar performance trends also hold.",
"For example, #5 Ours-sent significantly exceeds Transformer 12 For Transformer baseline (i.e., #4 in the table), the two pre-training objectives in document-level restoration are context-agnostic.",
"baseline with 1.85 BLEU and 1.78 Meteor on average while #6 Outs-doc further achieves the best performance.",
"Ablation study.",
"We take ZH-EN and EN-DE (News) translations as representatives to study the effect of leveraging sentence-level parallel dataset and monolingual documents.",
"Table 2 compares the performance on the the test sets of ZH-EN and EN-DE (News) translations in different scenarios.",
"From it, we have the following observations.",
"Using either sentence-level parallel dataset or monolingual documents helps translation for both Transformer baselines and our context-aware models.",
"However, in the presence of sentence-level parallel dataset, the Transformer baselines fail to achieve higher performance with monolingual documents, as we observe performance drops from 46.99 BLEU to 46.30 on Zh-EN, and from 26.89 to 26.80 on EN-DE.",
"In contrary, our models achieve the highest performance by leveraging the two resources.",
"This suggests the effectiveness of our approach in employing the two resources.",
"It is not surprising to find out that the improvement is mainly contributed by using sentence-level parallel dataset, as translation model is more important than context encoder Finally, our approach consistently outperforms sentence-level Transformer in all scenarios.",
"Encouraging, the performance gap becomes even larger on ZH-EN when more resources are used.",
"Next we use ZH-EN translation to analyze more on how our approach affects translation performance.",
"See Appendix B for parameter analysis and statistics of the pre-trained models.",
"In Section 3 we alternate sentence-level translation and document-level translation in fine-tuning.",
"We investigate the effect of including sentence-level translation as a fine-tuning task.",
"Table 3 compares the performance with respect to different fine-tuning strategies and different input units in inferring.",
"When we use documents as input units in inferring, the joint fine-tuning strategy provides no advantage.",
"However, when the input units are sentences, the joint fine-tuning strategy outperforms the one not including sentence-level translation in fine-tuning.",
"We also want to examine whether the proposed approach actually learns to utilize document context to resolve discourse inconsistencies.",
"Following Voita et al. (2019b) and Zheng et al. (2020), we use the same datasets to train model and contrastive test set for the evaluation of discourse phenomena for English-Russian by Voita et al. (2019b).",
"There are four test sets in the suite regarding deixis, lexicon consistency, ellipsis (inflection and verb phrase).",
"Each testset contains groups of contrastive examples consisting of a positive translation with correct discourse phenomenon and negative translations with incorrect phenomena.",
"The goal is to figure out if a model is more likely to generate a cor-Model Bi-sent Mo-doc Dev Test Trans.",
"rect translation compared to the incorrect variation.",
"We summarize the results in Table 4, which shows that in different scenarios our models are better at resolving discourse consistencies than context-agnostic baselines.",
"We follow Miculicich et al. (2018) and Tan et al. (2019) to evaluate coreference and anaphora using the reference-based metric: accuracy of pronoun translation (Werlen and Popescu-Belis, 2017).",
"Table 5 lists the performance of pronoun translation.",
"From it we observe that our proposed approach can well improve the performance of pronoun translations.",
"A significant hyper-parameter in the pre-training task of document-level restoration is the gap sentence ratio.",
"A low ratio makes the document-level restoration less challenging while choosing gap sentences at a high ratio makes the global context have more overlapped.",
"Table 6 shows that we achieve the best performance when the ratio is set as 20%.",
"As shown in Figure 2, we include two pre-training objectives in document-level restoration, i.e, CA-GSR and CA-MSR.",
"To investigate the effect of CA-GSR, we use CA-MSR as the only objective in this pre-training task.",
"In this way, the S 3 and S 5 in Figure 2",
"(a), for example, will be (cid:99) X 3 and (cid:99) X 5 , respectively.",
"Table 7 compares the performance when the pre-training task is of CA-MSR objective or combination of CA-GSR and CA-MSR.It Pre-training Objective Dev Test CA-GSR + CA-MSR 50.90 50.03 CA-MSR 50.61 49.73 Table 7: Performance (BLEU scores) on dev and test sets of ZH-EN translation with respect to different pretraining objectives in document-level restoration.",
"We describe related studies in the following two perspectives.",
"Cache/Memory-based approaches (Tu et al., 2018; Kuang et al., 2018; Maruf and Haffari, 2018; Wang et al., 2017) store word/sentence translation in previous sentences for future sentence translation.",
"Various approaches with an extra context encoders are proposed to model either local context, e.g., previous sentences (Jean et al., 2017; Wang et al., 2017; Zhang et al., 2018; Bawden et al., 2018; Voita et al., 2018, 2019b; Yang et al., 2019; Huo et al., 2020), or entire document (Maruf and Haffari, 2018; Mace and Servan, 2019; Maruf et al., 2019; Tan et al., 2019; Xiong et al., 2019; Zheng et al., 2020; Kang et al., 2020).",
"Besides, there have been several attempts to improve context-aware NMT with monolingual document data.",
"To make translations more coherent within a document, Voita et al. (2019a) propose DocRepair trained on monolingual target language documents to correct the inconsistencies in sentence-level translation while Yu et al. (2020) train a context-aware language model to rerank sentence-level translations.",
"Finally, Junczys-Dowmunt (2019) use source-side monolingual documents to explore multi-task training via the BERT-objective on the encoder.",
"They simply concatenate sentences within a document into a long sequence, which is different from our approach.",
"While there are substantial studies on improving sentence-level NMT with pre-training, we limit ourselves here to pre-training for document-level (context-aware) NMT.",
"BART (Lewis et al., 2020) is a denoising auto-encoder model which learns to reconstruct the original document from a noised version.",
"Inspired by BART, mBART (Liu et al., 2020) is a model trained on a mixed corpus containing monolingual documents of different languages.",
"Both BART and mBART concatenate sentences in one document into a long sequence, and thus fall into a standard sequence-to-sequence (seq2seq) framework.",
"This is very different from our cross-task pre-training, in which we combine both context-agnostic learning and context-aware learning in a single model.",
"In order to leverage both large-scale sentence-level parallel dataset and source-side monolingual documents for context-aware NMT, in this paper, we have proposed a novel cross-task pre-training approach, which simultaneously learns to translate a sentence from source language to target language while denoising a document from deliberately noised to original.",
"Upon the pre-trained models, we fine-tune them with document-level parallel dataset from both sentence-level and document-level perspectives.",
"Experimental results on multiple document-level translation tasks have demonstrate the effectiveness of our approach.",
"Finally, we also provide insights on how context-aware NMT benefits from our approach.",
"This work was supported by the National Natural Science Foundation of China (Grant No. 62036004 and 61876120)."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"objective",
"method",
"objective",
"method",
"other"
] |
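The denoising objective the row above attributes to BART/mBART (reconstructing an original document from a noised version of it) can be illustrated with a small sketch. The following Python function is a minimal, hypothetical noising step, not the papers' actual implementation: the mask token, the masking and deletion ratios, and the whitespace tokenisation are all assumptions made for illustration.

```python
import random

def noise_document(tokens, mask_token="<mask>", mask_ratio=0.15, delete_ratio=0.05):
    """Apply BART-style noise to a token sequence: randomly delete some
    tokens and mask others. A denoising objective then trains a model to
    reconstruct the original sequence from this corrupted version."""
    noised = []
    for tok in tokens:
        r = random.random()
        if r < delete_ratio:
            continue                    # token deletion
        elif r < delete_ratio + mask_ratio:
            noised.append(mask_token)   # token masking
        else:
            noised.append(tok)
    return noised

# Sentences of a document concatenated into one long sequence, as
# BART/mBART do, before noising (toy example).
doc = "the model translates sentences . context helps coherence .".split()
print(noise_document(doc))
```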
[
"Recent work has improved our ability to detect linguistic knowledge in word representations.",
"However, current methods for detecting syntactic knowledge do not test whether syntax trees are represented in their entirety.",
"In this work, we propose a structural probe , which evaluates whether syntax trees are embedded in a linear transformation of a neural network's word representation space.",
"The probe identifies a linear transformation under which squared L2 distance encodes the distance between words in the parse tree, and one in which squared L2 norm encodes depth in the parse tree.",
"Using our probe, we show that such transformations exist for both ELMo and BERT but not in baselines, providing evidence that entire syntax trees are embedded implicitly in deep models' vector geometry.",
"As pretrained deep models that build contextualized representations of language continue to provide gains on NLP benchmarks, understanding what they learn is increasingly important.",
"To this end, probing methods are designed to evaluate the extent to which representations of language encode particular knowledge of interest, like part-of-speech (Belinkov et al., 2017), morphology (Peters et al., 2018a), or sentence length (Adi et al., 2017).",
"Such methods work by specifying a probe (Con-neau et al., 2018; Hupkes et al., 2018), a supervised model for finding information in a representation.",
"Of particular interest, both for linguistics and for building better models, is whether deep models' representations encode syntax (Linzen, 2018).",
"Despite recent work (Kuncoro et al., 2018; Peters et al., 2018b; Tenney et al., 2019), open questions remain as to whether deep contextual models encode entire parse trees in their word representations.",
"In this work, we propose a structural probe , a simple model which tests whether syntax trees are consistently embedded in a linear transformation of a neural network's word representation space.",
"Tree structure is embedded if the transformed space has the property that squared L2 distance between two words' vectors corresponds to the number of edges between the words in the parse tree.",
"To reconstruct edge directions, we hypothesize a linear transformation under which the squared L2 norm corresponds to the depth of the word in the parse tree.",
"Our probe uses supervision to find the transformations under which these properties are best approximated for each model.",
"If such transformations exist, they define inner products on the original space under which squared distances and norms encode syntax trees even though the models being probed were never given trees as input or supervised to reconstruct them.",
"This is a structural property of the word representation space, akin to vector offsets encoding word analogies (Mikolov et al., 2013).",
"Using our probe, we conduct a targeted case study, showing that ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) representations embed parse trees with high consistency in contrast to baselines, and in a low-rank space.",
"1 In summary, we contribute a simple structural probe for finding syntax in word representations ( 2), and experiments providing insights into and examples of how a low-rank transformation recovers parse trees from ELMo and BERT representations ( 3,4).",
"Finally, we discuss our probe and limitations in the context of recent work ( 5).",
"Our goal is to design a simple method for testing whether a neural network embeds each sentence's",
"1 We release our code at https://github.com/ john-hewitt/structural-probes .",
"dependency parse tree in its contextual word representations a structural hypothesis.",
"Under a reasonable definition, to embed a graph is to learn a vector representation of each node such that geometry in the vector spacedistances and norms approximates geometry in the graph (Hamilton et al., 2017).",
"Intuitively, why do parse tree distances and depths matter to syntax?",
"The distance metricthe path length between each pair of wordsrecovers the tree T simply by identifying that nodes u, v with distance d T ( u, v ) = 1 are neighbors.",
"The node with greater normdepth in the treeis the child.",
"Beyond this identity, the distance metric explains hierarchical behavior.",
"For example, the ability to perform the classic hierarchy test of subject-verb number agreeement (Linzen et al., 2016) in the presence of attractors can be explained as the verb (V) being closer in the tree to its subject (S) than to any of the attactor nouns: S ...",
"Intuitively, if a neural network embeds parse trees, it likely will not use its entire representation space to do so, since it needs to encode many kinds of information.",
"Our probe learns a linear transformation of a word representation space such that the transformed space embeds parse trees across all sentences.",
"This can be interpreted as finding the part of the representation space that is used to encode syntax; equivalently, it is finding the distance on the original space that best fits the tree metrics.",
"In this section we provide a description of our proposed structural probe, first discussing the distance formulation.",
"Let M be a model that takes in a sequence of n words w (cid:96) 1: n and produces a sequence of vector representations h (cid:96) 1: n , where (cid:96) identifies the sentence.",
"Starting with the dot product, recall that we can define a family of inner products, h TA h , parameterized by any positive semi-definite, symmetric matrix A S m m + .",
"Equivalently, we can view this as specifying a linear transformation B R k m , such that A = BTB .",
"The inner product is then ( B h ) T ( B h ) , the norm of h once transformed by B .",
"Every inner product corresponds to a distance metric.",
"Thus, our family of squared distances is defined as: d B ( h (cid:96)i , h (cid:96)j ) 2 = (cid:0) B ( h (cid:96)i h (cid:96)j ) (cid:1) T (cid:0) B ( h (cid:96)i h (cid:96)j ) (cid:1) (1) where i, j index the word in the sentence.",
"2 The parameters of our probe are exactly the matrix B , which we train to recreate the tree distance between all pairs of words ( w (cid:96)i , w (cid:96)j ) in all sentences T (cid:96) in the training set of a parsed corpus.",
"Specifically, we approximate through gradient descent: min B (cid:88) (cid:96) 1 | s (cid:96) | 2 (cid:88) i,j (cid:12)(cid:12) d T (cid:96) ( w (cid:96)i , w (cid:96)j ) d B ( h (cid:96)i , h (cid:96)j ) 2 (cid:12)(cid:12) where | s (cid:96) | is the length of the sentence; we normalize by the square since each sentence has | s (cid:96) | 2 word pairs.",
"Because our structural probe defines a valid distance metric, we get a few nice properties for free.",
"The simplest is that distances are guaranteed nonnegative and symmetric, which fits our probing task.",
"Perhaps most importantly, the probe tests the concrete claim that there exists an inner product on the representation space whose squared distance a global property of the spaceencodes syntax tree distance.",
"This means that the model not only encodes which word is governed by which other word, but each word's proximity to every other word in the syntax tree.",
"3 This is a claim about the structure of the representation space, akin to the claim that analogies are encoded as vector-offsets in uncontextualized word embeddings (Mikolov et al., 2013).",
"One benefit of this is the ability to query the nature of this structure: for example, the dimensionality of the transformed space ( 4.1).",
"The second tree property we consider is the parse depth (cid:107) w i (cid:107) of a word w i , defined as the number of edges in the parse tree between w i and the root of the tree.",
"This property is naturally represented as a norm it imposes a total order on the words in the sentence.",
"We wish to probe to see if there exists a squared norm on the word representation 2 As noted in Eqn 1, in practice, we find that approximating the parse tree distance and norms with the squared vector distances and norms consistently performs better.",
"Because a distance metric and its square encode exactly the same parse trees, we use the squared distance throughout this paper.",
"Also strictly, since A is not positive definite, the inner product is indefinite, and the distance a pseudometric.",
"Further discussion can be found in our appendix.",
"3 Probing for distance instead of headedness also helps avoid somewhat arbitrary decisions regarding PP headedness, the DP hypothesis, and auxiliaries, letting the representation disagree on these while still encoding roughly the same global structure.",
"See Section 5 for more discussion.",
"Table 1 : Results of structural probes on the PTB WSJ test set; baselines in the top half, models hypothesized to encode syntax in the bottom half.",
"For the distance probes, we show the Undirected Unlabeled Attachment Score (UUAS) as well as the average Spearman correlation of true to predicted distances, DSpr.",
"For the norm probes, we show the root prediction accuracy and the average Spearman correlation of true to predicted norms, NSpr.",
"Figure 1 : Parse distance UUAS and distance Spearman correlation across the BERT and ELMo model layers.",
"space that encodes this tree norm.",
"We replace the vector distance function d B ( h i , h j ) with the squared vector norm (cid:107) h i (cid:107) 2 B , replacing Equation 1 with (cid:107) h i (cid:107) A = ( B h i ) T ( B h i ) and training B to recreate (cid:107) w i (cid:107) .",
"Like the distance probe, this norm formulation makes a concrete claim about the structure of the vector space.",
"Using our probe, we evaluate whether representations from ELMo and BERT, two popular English models pre-trained on language modeling-like objectives, embed parse trees according to our structural hypothesis.",
"Unless otherwise specified, we permit the linear transformation B to be potentially full-rank (i.e., B is square.) Later, we explore what rank of transformation is actually necessary for encoding syntax ( 4.1).",
"Representation models We use the 5.5B-word pre-trained ELMo weights for all ELMo representations, and both BERT-base (cased) and BERT-large (cased).",
"The representations we evaluate are denoted ELMOK, BERTBASEK, BERTLARGEK, where K indexes the hidden layer of the corresponding model.",
"All ELMo and BERT-large layers are dimensionality 1024; BERT-base layers are dimensionality 768.",
"Data We probe models for their ability to capture the Stanford Dependencies formalism (de Marn-effe et al., 2006), claiming that capturing most aspects of the formalism implies an understanding of English syntactic structure.",
"To this end, we obtain fixed word representations for sentences of the parsing train/dev/test splits of the Penn Treebank (Marcus et al., 1993), with no pre-processing.",
"4 Baselines Our baselines should encode features useful for training a parser, but not be capable of parsing themselves, to provide points of comparison against ELMo and BERT.",
"They are as follows: LINEAR : The tree resulting from the assumption that English parse trees form a left-to-right chain.",
"A model that encodes the positions of words should be able to meet this baseline.",
"ELMO 0 : Strong character-level word embeddings with no contextual information.",
"As these representations lack even position information, we should be completely unable to find syntax trees embedded.",
"DECAY 0 : Assigns each word a weighted average of all ELMO 0 embeddings in the sentence.",
"The weight assigned to each word decays exponentially as 12 d , where d is the linear distance between the words.",
"PROJ 0 : Contextualizes the ELMO 0 embeddings with a randomly initialized BiLSTM layer of dimensionality identical to ELMo (1024), a surprisingly strong baseline for contextualiza-tion (Conneau et al., 2018).",
"We evaluate models on how well the predicted distances between all pairs of words reconstruct gold parse trees and correlate with the parse trees' distance metrics.",
"To evaluate tree reconstruction, we take each test sentence's predicted parse tree distances and compute the minimum spanning tree.",
"We evaluate the predicted tree on undirected 4 Since BERT constructs subword representations, we align subword vectors with gold Penn Treebank tokens, and assign each token the average of its subword representation.",
"This thus represents a lower-bound on BERT's performance.",
"Figure 2 : Minimum spanning trees resultant from predicted squared distances on BERTLARGE 16 and ELMO 1 compared to the best baseline, PROJ",
"0. Black edges are the gold parse, above each sentence; blue are BERTLARGE 16, red are ELMO 1, and purple are PROJ",
"0. attachment score (UUAS)the percent of undirected edges placed correctlyagainst the gold tree.",
"For distance correlation, we compute the Spearman correlation between true and predicted distances for each word in each sentence.",
"We average these correlations between all sentences of a fixed length, and report the macro average across sentence lengths 550 as the distance Spearman (DSpr.) metric.",
"5 3.2 Tree depth evaluation metrics We evaluate models on their ability to recreate the order of words specified by their depth in the parse tree.",
"We report the Spearman correlation betwen the true depth ordering and the predicted ordering, averaging first between sentences of the same length, and then across sentence lengths 550, as the norm Spearman (NSpr.).",
"We also evaluate models' ability to identify the root of the sentence as the least deep, as the root%.",
"6 4 Results We report the results of parse distance probes and parse depth probes in Table",
"1. We first confirm that our probe can't simply learn to parse on top of any informative representation, unlike parser-based probes (Peters et al., 2018b).",
"In particular, ELMO 0 and DECAY 0 fail to substantially outperform a right-branching-tree oracle that encodes the linear sequence of words.",
"PROJ 0, which has all of the representational capacity of ELMO 1 but none of the training, performs the best among the baselines.",
"Upon inspection, we found that our probe on PROJ 0 improves over the linear hypothesis with 5 The 550 range is chosen to avoid simple short sentences as well as sentences so long as to be rare in the test data.",
"6 In UUAS and root% evaluations, we ignore all punctuation tokens, as is standard.",
"Figure 3 : Parse tree depth according to the gold tree (black, circle) and the norm probes (squared) on ELMO 1 (red, triangle) and BERTLARGE 16 (blue, square).",
"mostly simple deviations from linearity, as visualized in Figure",
"2. We find surprisingly robust syntax embedded in each of ELMo and BERT according to our probes.",
"Figure 2 shows the surprising extent to which a minimum spanning tree on predicted distances recovers the dependency parse structure in both ELMo and BERT.",
"As we note however, the distance metric itself is a global notion; all pairs of words are trained to know their distance not just which word is their head; Figure 4 demonstrates the rich structure of the true parse distance metric recovered by the predicted distances.",
"Figure 3 demonstrates the surprising extent to which the depth in the tree is encoded by vector norm after the probe transformation.",
"Between models, we find consistently that BERTLARGE performs better than BERTBASE , which performs better than ELMO .",
"7 We also find, as in Peters et al. (2018b), a clear difference in syntactic information between layers; Figure 1 reports the performance 7 It is worthwhile to note that our hypotheses were developed while analyzing LSTM models like ELMo, and applied without modification on the self-attention based BERT models.",
"Figure 4 : (left) Matrix representing gold tree distances between all pairs of words in a sentence, whose linear order runs top-to-bottom and left-to-right.",
"Darker colors indicate close words, lighter indicate far.",
"(right)",
"The same distances as embedded by BERTLARGE 16 (squared).",
"More detailed graphs available in the Appendix.",
"Figure 5 : Parse distance tree reconstruction accuracy when the linear transformation is constrained to varying maximum dimensionality.",
"With the result that there exists syntax-encoding vector structure in both ELMo and BERT, it is natural to ask how compactly syntactic information is encoded in the vector space.",
"We find that in both models, the effective rank of linear transformation required is surprisingly low.",
"We train structural probes of varying k , that is, specifying a matrix B R k m such that the transformed vector B h is in R k .",
"As shown in Figure 5, increasing k beyond 64 or 128 leads to no further gains in parsing accuracy.",
"Intuitively, larger k means a more expressive probing model, and a larger fraction of the representational capacity of the model being devoted to syntax.",
"We also note with curiosity that the three models we consider all seem to require transformations of approximately the same rank; we leave exploration of this to exciting future work.",
"Recent work has analyzed model behavior to determine if a model understands hierarchy and other linguistic phenomena (Linzen, 2018; Gulordava et al., 2018; Kuncoro et al., 2018; Linzen and Leonard, 2018; van Schijndel and Linzen, 2018; Tang et al., 2018; Futrell et al., 2018).",
"Our work extends the literature on linguistic probes, found at least in (Pe-ters et al., 2018b; Belinkov et al., 2017; Blevins et al., 2018; Hupkes et al., 2018).",
"Conneau et al. (2018) present a task similar to our parse depth prediction, where a sentence representation vector is asked to classify the maximum parse depth ever achieved in the sentence.",
"Tenney et al. (2019) evaluates a complementary task to ours, training probes to learn the labels on structures when the gold structures themselves are given.",
"Peters et al. (2018b) evaluates the extent to which constituency trees can be extracted from hidden states, but uses a probe of considerable complexity, making less concrete hypotheses about how the information is encoded.",
"Probing tasks and limitations Our reviewers rightfully noted that one might just probe for headedness, as in a bilinear graph-based dependency parser.",
"More broadly, a deep neural network probe of some kind is almost certain to achieve higher parsing accuracies than our method.",
"Our task and probe construction are designed not to test for some notion of syntactic knowledge broadly construed, but instead for an extremely strict notion where all pairs of words know their syntactic distance, and this information is a global structural property of the vector space.",
"However, this study is limited to testing that hypothesis, and we foresee future probing tasks which make other tradeoffs between probe complexity, probe task, and hypotheses tested.",
"In summary, through our structural probes we demonstrate that the structure of syntax trees emerges through properly defined distances and norms on two deep models' word representation spaces.",
"Beyond this actionable insight, we suggest our probe may be useful for testing the existence of different types of graph structures on any neural representation of language, an exciting avenue for future work.",
"We would like to acknowledge Urvashi Khandel-wal and Tatsunori B. Hashimoto for formative advice in early stages, Abigail See, Kevin Clark, Siva Reddy, Drew A. Hudson, and Roma Patel for helpful comments on drafts, and Percy Liang, for guidance on rank experiments.",
"We would also like to thank the reviewers, whose helpful comments led to increased clarity and extra experiments.",
"This research was supported by a gift from Tencent."
] | [
"result",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"objective",
"abstain",
"other",
"other",
"other"
] |
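The structural distance probe in the row above (Eq. 1 and its L1 training objective over squared distances) is compact enough to sketch. The following PyTorch code is a minimal illustration under assumed dimensions (1024-dimensional representations, a rank-128 transform, matching the paper's typical settings); it shows the form of the probe and loss, not the authors' released implementation.

```python
import torch

class DistanceProbe(torch.nn.Module):
    """Minimal sketch of the structural distance probe: a linear map B
    such that squared L2 distances in the transformed space approximate
    parse-tree distances (Eq. 1)."""
    def __init__(self, model_dim=1024, probe_rank=128):
        super().__init__()
        self.B = torch.nn.Parameter(torch.randn(probe_rank, model_dim) * 0.01)

    def forward(self, h):
        # h: (seq_len, model_dim) word representations for one sentence.
        th = h @ self.B.T                       # transformed vectors (seq_len, rank)
        diff = th.unsqueeze(1) - th.unsqueeze(0)
        return (diff ** 2).sum(dim=-1)          # squared distances (seq_len, seq_len)

def probe_loss(pred_dist, tree_dist, seq_len):
    # L1 loss between predicted squared distances and gold tree distances,
    # normalized by the number of word pairs |s|^2, as in the objective.
    return (pred_dist - tree_dist).abs().sum() / (seq_len ** 2)

# Toy usage with random "representations" and a chain-shaped gold tree.
probe = DistanceProbe()
h = torch.randn(5, 1024)
gold = torch.tensor([[abs(i - j) for j in range(5)] for i in range(5)], dtype=torch.float)
loss = probe_loss(probe(h), gold, seq_len=5)
loss.backward()  # B receives gradients; train with any optimizer
```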
[
"The discovery of supporting evidence for addressing complex mathematical problems is a semantically challenging task, which is still unexplored in the field of natural language processing for mathematical text.",
"The natural language premise selection task consists in using conjectures written in both natural language and mathematical formulae to recommend premises that most likely will be useful to prove a particular statement.",
"We propose an approach to solve this task as a link prediction problem, using Deep Convolutional Graph Neural Networks.",
"This paper also analyses how different baselines perform in this task and shows that a graph structure can provide higher F1-score, especially when considering multi-hop premise selection.",
"Mathematical proofs are used to establish the truth value of a mathematical claim.",
"The act of creating a new proof contributes to the development of Mathematics, being one of its central components.",
"Premise selection is a well-defined task in the field of Automated Theorem Proving (ATP), where proofs are encoded using a formal logical representation.",
"Given a set of premises P , and a new conjecture c , premise selection aims to predict those premises from P that will most likely lead to an automatically constructed proof of c , where P and c are both written using a formal language (Irving et al., 2016).",
"The issue with using formal mathematics is that only a small portion of the known mathematical statements is available in a formalised dataset, and formal statements are usually hard for humans to interpret and write.",
"In this paper, we focus on natural language mathematical text (mathematical statements as they are present in scientific papers and textbooks), since it is more accessible for mathematicians to write/read mathematical statements using natural language.",
"The mathematical discourse is composed of a particular combination of words and mathematical terms, where terms follow a different set of syntactic rules and entail a specific lexicon.",
"Nonetheless, words and mathematical terms are interdependent in the context of mathematical discourse.",
"This phenomenon is exclusive to mathematical language, not found in any other natural, or artificial, language (Ganesalingam, 2013), providing a unique and challenging application for semantic evaluation and natural language processing.",
"and Freitas, 2020) task is defined as: Definition (Natural language premise selec-tion): Given a set of premises (or supporting facts) P in a mathematical corpus (containing both natural language and formulae) and a new conjecture c proposed by a user, predict those premises from P that will most likely be useful for generating a proof for c (i.e. partially entails c ).",
"A premise is considered relevant if the knowledge it provides can be reused for generating a proof for a given conjecture.",
"We propose an approach to solve the natural premise selection task, representing all conjectures and premises as nodes and the dependencies as edges, formulating the problem as a link prediction problem.",
"We hypothesise that graph-based embeddings are suitable structures for representing and detecting the dependencies between different mathematical statements.",
"We then use Deep Convolutional Graph Neural Networks (Zhang et al., 2018) over a structural and content-based encoding of proofs in order to obtain the set of useful premises for proving a statement.",
"In order to evaluate this task, we use the dataset PS-ProofWiki.",
"This dataset opens possibilities of applications not only for the premise selection task but also for evaluating different equational embeddings, textual entailment for mathematics and natural language inference in the context of mathematical texts.",
"The performance of the proposed model is compared to a set of baselines.",
"The contributions of this paper can be summarised as follows:",
"(i) Proposal of a novel representation for the natural language premise selection problem.",
"(ii) Proposal of an approach for addressing the natural language premise selection task using link prediction under a Deep Convolutional Graph Neural Network representation.",
"(iii) Quantitative and qualitative evaluation against existing baselines.",
"Latent and explicit representation models have seen a substantial advance in the past years, with the introduction of neural embeddings such as BERT (Devlin et al., 2018), which are able to capture discourse-level relations and semantic abstractions.",
"However, the development of representation models and their evaluation in the context of mathematical discourse is still an open problem.",
"In this section, we present some of the research in NLP applied to mathematics.",
"We also describe existing works that apply premise selection in the domain of ATPs.",
"Mathematical Language Processing A relevant area that intersects both NLP and mathematical discourse is the research on how to automatically solve math word problems.",
"Wang et al. (2018) test how different Seq2Seq models perform on mathematical word problems, where each question has a set of possible solution equations and the different equations are normalised to the same tree representation.",
"Huang et al. (2016) analyse various approaches to solve mathematical word problems and concludes that it is still an unsolved challenge.",
"Xie and Sun (2019) proposes a neural model to generate an expression tree following a reasoning similar to the way humans solve math word problems.",
"Text2Math is an approach to solve arithmetic word problems and equation parsing tasks by proposing a joint representation to learn the correspondence between words and math expressions (Zou and Lu, 2019).",
"On the discourse analysis domain, Zinn (2003) introduces a proof representation structure for mathematical discourse using discourse representation theory and presents a prototype for automating the process of generating proofs.",
"Naproche (Natural language Proof Checking) (Cramer et al., 2009) is a project focused on the development of a controlled natural language (CNL) for mathematical texts and adapting proof checking software to work with this language in order to check syntactic and mathematical correctness.",
"Ganesalingam and Gowers (2017) propose a program that solves elementary mathematical problems, with the focus on metric space theory, and presents solutions similar to the ones introduced by humans.",
"The authors recognise that their system is operating at a disadvantage because human language involves several constraints that rule out many sound and effective tactics for generating proofs.",
"Different works started exploring equational embeddings.",
"EqEmbs (Krstovski and Blei, 2018) is built on exponential family embeddings, considering equations as single elements, modelling part of the equations, such as variables, symbols and operators.",
"EqEmbs considers the context for the equations as a window of sixteen words.",
"Tangent-CFT (Mansouri et al., 2019) uses fastText to produce formula embeddings for symbol layout trees (SLTs) and operator trees (OPTs).",
"The embedding procedure converts the representation into a sequence of tuples, where the elements are tokenised as characters.",
"The tuples are embedded using n-grams computed over the tuple and its neighbouring tuples.",
"Greiner-Petter et al. (2019) developed a skip-gram-based model using as a reference corpus a collection of arXiv papers in HTML format using a term-level tokenisation granularity.",
"The authors found that the induced vector space did not produce meaningful semantic clusters.",
"Wallace et al. (2019) found that CNNs are useful for tasks involving understanding and working with numbers; however, it still struggles to extrapolate beyond the values seen during training.",
"Premise Selection Premise selection is an approach generally used for selecting useful premises to prove conjectures in Automated Theorem Proving (ATP) systems (Alama et al., 2014).",
"Irving et al. (2016) propose a neural architecture for premise selection using formal statements written in Mizar.",
"The authors were able to solve 67.9% of the conjectures present in the Mathematical Mizar Library.",
"Other authors have used machine learning approaches such as Kernel-based Learning (Alama et al., 2014), k-NN algorithm (Gauthier and Kaliszyk, 2015) and Random Forests (Farber and Kaliszyk, 2015).",
"Contrasted to related work, the model proposed on this paper targets capturing both content (local) and structural dependencies (global) across natural language mathematical statements and its evaluation on the natural language premise selection problem.",
"Figure 1 depicts an example of a theorem and its proof, where it can be observed that the proof is based upon two other supporting facts (premises): the theorem for Factors of Composition Series for Prime Power Group and the definition for Solvable",
"Group .",
"In order to evaluate the premise selection, we used a corpus extracted from ProofWiki 1 .",
"ProofWiki is an online compendium of mathematical proofs, with a goal to collect and classify mathematical proofs.",
"ProofWiki contains links between theorems, definitions and axioms in the context of a mathematical proof, determining which dependencies are present.",
"Definitions and axioms are statements accepted without formal proof, while theorems, lemmas and corollaries require one (Solow, 2002).",
"All entries are composed by a statement written in a combination of natural language and mathematical latex notation.",
"The extracted corpus, which is named PS-ProofWiki, contains more than 18 , 000 entries.",
"We also computed how many times each statement is used as a premise, and we observed that most of the statements are used as dependencies for only a small subset of premises.",
"A total of 6 , 866 statements has between one and three dependants.",
"On average, statements contain a total length of 289 symbols (characters and mathematical symbols).",
"The specific number of tokens will depend on the type of tokenisation used for the mathematical symbols.",
"A complete analysis of this corpus is made available in (Ferreira and Freitas, 2020).",
"In the next sections, we describe the proposed model for addressing the premise selection task.",
"The proposed model uses a Deep Graph Convolutional Neural Network (DGCNN) for solving the premise selection task as a link prediction task (Zhang and Chen, 2018).",
"The proposed model aims to encode the natural language and the formu-1 http://proofwiki.org/ lae terms as well as the dependencies and graph-structural patterns of the mathematical text.",
"In Mathematics, theorems are always built on top of previous mathematical knowledge, such as lemmas, corollaries, definitions and other theorems.",
"Thus, Mathematics as a discourse intrinsically entails a network structure.",
"With this hierarchy and interlinking of concepts in mind, we developed a graph representation to represent all mathematical statements present in the corpus and their associated dependencies.",
"The extracted dependency graph is a directed graph G = ( V , E ) where V is a set of vertices, composed by mathematical statements and E is a set of ordered pairs of vertices (edges), in this case the relationship between mathematical statements.",
"If m 1 , m 2 V and ( m 1 , m 2 ) E that means the statement m 1 is a premise to the statement m 2 .",
"From the set of graphs containing all asserted dependency relations, an enclosing sub-graph (with a fixed hop h size of 1 h 2 ) is extracted by selecting a pair of nodes as the target.",
"These pair of nodes will be used to define the link prediction classification context, in which a binary class is assigned, P when ( m 1 , m 2 ) E and NP (not a premise) otherwise (Figure 2).",
"As we predict the link between different statements, we are also predicting the dependencies between different statements, therefore, addressing the natural premise selection problem.",
"Every node m i V is composed of two parts: (1) a label based on a function which encodes its neighbourhood, (2) an embedding of its textual content.",
"The framework generates labels for the nodes using the Double-Radius Node Labelling (DRNL) (Zhang and Chen, 2018) mapping, assuming that the graph is undirected.",
"The labelling technique was altered so it could also work for a directed graph setting.",
"Considering two different statements m 1 , m 2 V , where we want to predict if m 2 is a premise for m 1 ; all nodes are labelled as follows:",
"(i) m 1 is labelled as 1,",
"(ii) m 2 is labelled Let G be a group whose order p n where p is a prime number and n is a positive integer.",
"as 2,",
"(iii) for every x in S reachable from m 1 , label x as the distance between m 1 and x ,",
"(iv) for every y in S unreachable from m 1 , label y as 0.",
"The embedding of the textual content is an embedding of the mathematical statements.",
"A mathematical statement is composed of a hybrid setting of mathematical notation and natural language statements.",
"Paragraph Vector Distributed Memory (PV-DM/Doc2Vec) (Le and Mikolov, 2014) was used to encode a statement-level representation of the constituent statements of the proof (where each statement is a paragraph').",
"The expressions and equations are encoded as a tree, by representing every sub-expression as a token.",
"For example, the expression ( x + y ) c ' is represented as the sequence of tokens [ x ', y ', ( x + y ) ', ( x + y ) c '], capturing the syntactic structure of the mathematical expression.",
"The same model captures both the natural language and the formulae tokens.",
"Figure 3 depicts how the structural and content aspects are represented.",
"A Deep Graph Convolutional Neural Network (DGCNN) architecture (Zhang et al., 2018) was used as the default GNN engine of the premise selection.",
"The architecture was selected due to its ability to encode network features with a consistent performance across different graph network (GN) evaluation scenarios.",
"Moreover, we use the graph encoding proposed in (Zhang and Chen, 2018), which aims for learning subgraph structural patterns using DGCNNs.",
"This approach embeds the learning of a problem-specific graph heuristic function (which is formalised as the -decaying heuristic theory).",
"This can be contrasted with the use of pre-defined methods from a single heuristic framework (such as Katz index, PageRank and SimRank (Zhang and Chen, 2018)), by using a graph-specific approximation instead.",
"The underlying assumption behind the selection of the base architecture is that the premise selection problem requires the encoding of both the statement content and of the graph-dependency patterns.",
"The final problem of premise selection is rephrased as a problem of link prediction, and the final classification layer has a binary classifier.",
"Figure 4 depicts the main components of an end-to-end architecture.",
"A denotes the adjacency matrix of a graph, n the number of vertices where each vertex has a c-dimensional feature vector, denoted as X R n c .",
"For a vertex v , we use ( v ) to denote the set of Figure 3: Pre-processing workflow of the proof corpus.",
"where W R c c is a weight matrix of graph convolution parameters, A = A + I based on the adjacency matrix A , D is a diagonal degree matrix (Zhang and Chen, 2018) and f is a non-linear activation function.",
"D 1 A is a propagation matrix.",
"The graph aggregation layer builds for each node a graph-level feature vector based the individual node states, which is defined by: Z i = f ( 1 | ( i ) | + 1[ X i W + (cid:88) j ( i ) X j W ]) (2) The graph convolution aggregates node patterns, extracting local subgraph patterns.",
"The last graph convolution layer output can be used to sort the graph vertices in an order which reflects the vertices structural roles (Zhang and Chen, 2018).",
"After the aggregation, the DGCNN uses a sort pooling layer , which sorts the final node states based on to the last graph convolution layer's output (Zhang and Chen, 2018).",
"The sorting criteria are based on a topological-based ordering.",
"For example (Niepert et al., 2016) provide a labelling scheme for vertexes based on topological patterns.",
"This topological ordering is consistent across graphs: vertices in two different graphs will be assigned similar relative positions if they have similar structural roles (Zhang et al., 2018).",
"The ordering operation is followed by a max-k pooling operation which creates a representation for the different graphs with uniform dimensions (truncating or extending into k dimensions).",
"This allows the application of a 1-D CNN layer on the node sequence.",
"A final dense layer connected to a softmax layer performs the binary classification of the target vertices into the premise/non-premise case .",
"A standard DGCNN configuration is used (Zhang et al., 2018), containing four graph convolution layers, a sort pooling layer with a k assignment 0.60 (graph coverage), two 1-D convolution layers and a dense layer with 128 neurons.",
"The proposed model has a locality assumption expressed at the statement encoding level, which limits the proof neighbourhood to two hops.",
"This follows the intuition that the premise selection model aims to reflect the mentioned structure of proofs (expanding, however an additional hop) privileging the classification of closer and more specific conjecture-premise relations.",
"More exploratory types of proofs may require the expansion of the hops to cope with longer distance relations.",
"This section evaluates the performance of the proposed model using PS-ProofWiki.",
"We introduce initial baselines using two basic approaches, TF-IDF and PV-DBOW.",
"These are further expanded using a transformer-based architecture (BERT), due to its state-of-art results for the encoding of sentence-level embeddings and their use in tasks such as natural language inference.",
"For the experiments using BERT and the proposed approach, we split the dataset using a 50/20/30 (train/dev/test) split.",
"We run all experiments ten times, evaluating on the test set, and report the average Precision, Recall and F1-score.",
"All evaluation data, as well as the experimental pipeline, can be found online 2 for reproducibility purposes.",
"In order to identify the challenges of the task of natural language premise selection using PS-ProofWiki, we performed initial experiments using two Bag-of-words (BoW) baselines: TF-IDF and PV-DBOW (Le and Mikolov, 2014).",
"We use both weighting schemes to define the vector representations for all mathematical statements.",
"Then we compute the cosine similarity between each entry and rank the results by their distance.",
"The Mean Average Precision (MAP) is computed for each baseline: MAP = (cid:80) Ni =1 AvegP ( s i ) N (3) where N is the total number of statements, s i is the i -th mathematical statement and AvegP is the average precision.",
"2 https://github.com/ai-systems/premise selection graph",
"ranking tasks, such as supporting facts (explana-tions) retrieval (Valentino et al., 2020).",
"Table 1 presents the results for the BoW baselines.",
"Three different types of tokenisations are compared for encoding the mathematical expressions.",
"In the first instance, we treat the expressions and equations as single tokens; for example, the expression x + y + z would be considered a single token.",
"We also considered tokenised expressions, tokenising variables and operators, the example would be tokenised as [ x ', + ', y ', + ', z '].",
"In both examples, the natural language part of the text is tokenised as a sequence of words.",
"Finally, we tokenise the whole text as a sequence of characters.",
"We run PV-DBOW with the default parameters, comparing different sizes of embeddings, with the best results obtained with an embedding size of 100.",
"From the MAPs obtained by the BoW, we can conclude that the task is semantically non-trivial and cannot be addressed with retrieval-based strategies which are based on lexical overlap.",
"We can also notice that better results are obtained when the expressions are tokenised as a sequence of operations and variables, suggesting that the elements inside the expressions have semantic properties that are relevant for determining the relevant premises.",
"For the following experiments, we are using the tokenised expressions and PV-DBOW with an embedding size of 100 for the encoding of the expressions.",
"In Table 2 we compare the results for different sizes of the dataset.",
"We consider the full dataset and three different subsets with different categories of mathematical statements.",
"We can notice that for smaller datasets, both baselines perform better.",
"This result was expected since with smaller datasets there are less possible premises, and elements from the same categories tend to have a higher lexical Table 1: MAP results for TF-IDF and PV-DBOW comparing different tokenisation strategies for the mathematical expressions.",
"We can also consider the fact that premises are transitive, i.e., if one a mathematical text t i has a premise x and a mathematical text t j has t i as a premise, then x should also be a premise of t j .",
"In this case, the task becomes semantically more challenging, as it can be observed in Table 3, where we consider the transitivity within two and three hops of distance.",
"From the results, we notice that the more hops needed to obtain the premise, the worse our baselines perform.",
"In order to use BERT, we reformulate this problem as a pairwise relevance classification problem, as done previously in the context of ATP systems.",
"We have a set of mathematical statements S , a set of conjectures C and a set of premises P , where C P , C S and P S .",
"Considering a conjecture c C and a premise p P , a function f ( c, p ) is defined, where f ( c, p ) = 1 if p is a part of the proof of c and f ( c, p ) = 0 otherwise.",
"the target task with a sequence classifier, adding a linear layer on top of the transformer embeddings.",
"The dataset is imbalanced by the nature of the natural premise selection problem.",
"In order to solve the natural premise selection task, any approach would have to be able to handle a large number of negative examples.",
"There are 10k different possible premises, and some conjectures are only connected to one premise, creating a large number of negative pairs in our dataset, requiring the definition of a cap for the number of negative samples.",
"In order to provide a more constrained setting, we define a subset of the PS-ProofWiki, named PS-ProofWiki TRIG targeting trigonometric functions.",
"The proposed approach outperforms the BERT-based model by 41% in terms of F1-score, as shown in Table 4.",
"We hypothesise that the encoding of the structural patterns of the dependency relations in addition to the content-based similarity better captures the semantic nature of the proof (fundamental to interpret a proof by its neighbourhood).",
"In order to evaluate the robustness of the proposed approach and the baseline with regard to an increase in imbalance (reflecting a notion of scalability of the quality of the inference within the KB), we compare how the F1-score changes as we add more (random) negative examples to the dataset.",
"Figure 5a and Figure 5b presents a comparison between BERT and our approach for the PS-ProofWiki TRIG and the PS-ProofWiki datasets, respectively.",
"The results indicate that the BERT-based clas-sifier performance degrades faster as we increase the number of negative samples in the dataset.",
"For n = 30 , the F1-score reaches a value of almost zero.",
"In contrast, the proposed model presents a significantly slower decline (25%), showing better scalability properties in the context of the premise selection problem.",
"Finally we experiment on how BERT and the proposed model compares when we consider transitivity between premises (n-hop relations), using PS-ProofWiki TRIG and 10 negative examples for each positive example.",
"We report the results in Table 5, where we can see that the proposed model obtains better overall performance as the number of Table 4: Precision (P), recall (R), and F1-score (F1) for the BERT baseline and the proposed approach, with 30 negative examples for each positive case (values are multiplied by 100).",
"hops is increased.",
"These results reinforce the architectural design supported by graph-based models.",
"From the results obtained from our model we observed that the model struggles to encode statements which are centered around pure equational (formulae) content.",
"Embeddings for mathematical symbols should take into consideration more specific semantics of operators: such semantics is not obtained using PV-DM (Doc2Vec) or BERT.",
"This provides evidence on the need for more principled structural embeddings for mathematical formulas, which could most certainly improve the prediction of future work in the natural premise selection task.",
"Even though BERT is not trained in a mathematical corpus, it still obtains relevant results, hinting that training BERT on a mathematical corpus could achieve better results.",
"However, this task is outside the scope of this work and will be left for future work.",
"The proposed DGCNN-based model is capable of finding structural patterns between the statements and to reinforce content-based semantic evidence.",
"We observed that statements that are similar in content, commonly have a significant intersection of premises, as a result of the graph embedding, the DGCNN-model is able to better discriminate more fine-grained semantic cues better.",
"In this work, we introduced an approach for natural language premise selection (finding relevant theorems, axioms and definitions) in large natural language mathematical texts.",
"The proposed approach, which uses Deep Graph Convolutional Neural Networks (DGCNNs) combines both structural and content elements of mathematical statements for addressing the premise selection problem as a link prediction classification problem.",
"Results show that the approach outperforms a BERT-based baseline by 41% in F1-score.",
"Moreover, the proposed model shows significantly lower F1-score degradation concerning class imbalance, a fundamental desirable scalability property for the problem of premise selection.",
"Our approach is also able to obtain better performance when we consider the transitivity of premises.",
"The qualitative analysis indicates that there is the demand to design principled embeddings for better capturing the semantics of proofs which are denser in mathematical formulae.",
"As future work, we will explore different heuristics for navigating in the premises graph, as researched before for textual entailment (Silva et al., 2019, 2018) and selective reasoning (Freitas et al., 2014).",
"The authors would like to thank the anonymous reviewers for the constructive feedback, we also would like to thank Mokanarangan Thayaparan and Marco Valentino for the helpful discussions."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other"
] |
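The evaluation the row above describes for the premise selection baselines (TF-IDF vectors ranked by cosine similarity, scored with MAP per Eq. 3) can be sketched in a few lines. The toy statements and the dependency set below are hypothetical stand-ins for the PS-ProofWiki data; the sketch shows the mean of per-statement average precision over similarity-ranked candidates, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy corpus: statement 0 depends on statements 1 and 2.
statements = ["conjecture about prime power groups",
              "definition of solvable group",
              "theorem on composition series factors of prime power groups"]
true_premises = {0: {1, 2}}

# TF-IDF baseline: rank candidate premises by cosine similarity.
vecs = TfidfVectorizer().fit_transform(statements)
sims = cosine_similarity(vecs)

def average_precision(ranked, relevant):
    """Average precision of a ranked candidate list against a relevant set."""
    hits, score = 0, 0.0
    for rank, idx in enumerate(ranked, start=1):
        if idx in relevant:
            hits += 1
            score += hits / rank
    return score / max(len(relevant), 1)

aps = []
for s, relevant in true_premises.items():
    candidates = [i for i in range(len(statements)) if i != s]
    ranked = sorted(candidates, key=lambda i: -sims[s, i])
    aps.append(average_precision(ranked, relevant))

print("MAP =", np.mean(aps))  # Eq. 3: mean of per-statement average precision
```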
[
"Confidence calibration, which aims to make model predictions equal to the true correctness measures, is important for neural machine translation (NMT) because it is able to offer useful indicators of translation errors in the generated output.",
"While prior studies have shown that NMT models trained with label smoothing are well-calibrated on the groundtruth training data, we find that miscalibration still remains a severe challenge for NMT during inference due to the discrepancy between training and inference.",
"By carefully designing experiments on three language pairs, our work provides in-depth analyses of the correlation between calibration and translation performance as well as linguistic properties of miscalibration and reports a number of interesting findings that might help humans better analyze, understand and improve NMT models.",
"Based on these observations, we further propose a new graduated label smoothing method that can improve both inference calibration and translation performance.",
"1 1 Introduction Calibration requires that the probability a model assigns to a prediction (i.e., confidence ) equals to the correctness measure of the prediction (i.e., accuracy ).",
"Calibrated models are important in user-facing applications such as natural language processing (Nguyen and O'Connor, 2015) and speech recognition (Yu et al., 2011), in which one needs to assess the confidence of a prediction.",
"For example, in computer-assisted translation, a calibrated machine translation model is able to tell a user when the model's predictions are likely to be incorrect, which is helpful for the user to correct errors.",
"Work was done when Shuo Wang was interning at Tencent AI Lab under the Rhino-Bird Elite Training Program.",
"1 The source code is available at https://github.",
"com/shuo-git/InfECE .",
"The study of calibration on classification tasks has a long history, from statistical machine learning (Platt et al., 1999; Niculescu-Mizil and Caruana, 2005) to deep learning (Guo et al., 2017).",
"However, calibration on structured generation tasks such as neural machine translation (NMT) has not been well studied.",
"Recently, Muller et al. (2019) and Kumar and Sarawagi (2019) studied the calibration of NMT in the training setting, and found that NMT trained with label smoothing (Szegedy et al., 2016) is well-calibrated.",
"We believe that this setting would cover up a central problem of NMT, the exposure bias (Ranzato et al., 2015) the training-inference discrepancy caused by teacher forcing in the training of auto-regressive models.",
"In response to this problem, this work focuses on the calibration of NMT in inference, which can better reflect the generative capacity of NMT models.",
"To this end, we use translation error rate (TER) (Snover et al., 2006) to automatically annotate the correctness of generated tokens, which makes it feasible to evaluate calibration in inference.",
"Experimental results on several datasets across language pairs show that even trained with label smoothing, NMT models still suffer from miscalibration errors in inference.",
"Figure 1 shows an example.",
"While modern neural networks on classification tasks have been found to be miscalibrated in the direction of over-estimation (i.e., confidence > accuracy) (Guo et al., 2017), NMT models are also under-estimated (i.e., confidence < accuracy) on low-confidence predictions.",
"In addition, we found that miscalibrated predictions correlate well with the translation errors in inference.",
"Specifically, the over-estimated predictions correlate more with over-translation and mis-translation errors, while the under-estimated predictions correlate more with under-translation errors.",
"This demonstrates the necessity of studying inference calibration for NMT.",
"By investigating the linguistic properties of miscalibrated tokens in NMT outputs, we have several interesting findings: Frequency : Low-frequency tokens generally suffer from under-estimation.",
"Moreover, low-frequency tokens contribute more to overestimation than high-frequency tokens, especially on large-scale data.",
"Position : Over-estimation does not have a bias on the position of generated tokens, while under-estimation occurs more in the left part of a generated sentence than in the right part.",
"Fertility : Predicted tokens that align to more than one source token (fertility 2) suffer more from under-estimation, while tokens with fertility < 1 suffer from over-estimation.",
"Syntactic Roles : Content tokens are more likely to suffer from miscalibration than content-free tokens.",
"Specifically, verbs are more likely to suffer from over-estimation than under-estimation.",
"Word Granularity : sub-words suffer more from both over-estimation and underestimation, while full words are less likely to be miscalibrated.",
"Inspired by the finding that miscalibration on classification tasks is closely related to lack of regularization and increased model size (Guo et al., 2017), we revisit these techniques on the NMT (i.e., structured generation) task: Regularization Techniques : We investigate label smoothing and dropout (Hinton et al., 2012), which directly affect the confidence estimation.",
"Both label smoothing and dropout improve the inference calibration by alleviating the over-estimation.",
"Label smoothing is the key for well-calibration, which is essential for maintaining translation performance for inference in large search space.",
"Inspired by this finding, we propose a novel graduated label smoothing approach, in which the smoothing penalty for high-confidence predictions is higher than that for low-confidence predictions.",
"The graduated label smoothing can improve translation performance by alleviating inference miscalibration.",
"Model Size : Increasing model size consistently improves translation performance at the cost of negatively affecting inference calibration.",
"The problem can be alleviated by increasing the capacity of encoder only, which maintains the inference calibration and obtains a further improvement of translation performance in large search space.",
"To summarize, the main contributions of our work are listed as follows: We demonstrate the necessity of studying inference calibration for NMT, which can serve as useful indicators of translation errors.",
"We reveal certain linguistic properties of miscalibrated predictions in NMT, which provides potentially useful information for the design of training procedures.",
"We revisit recent advances in architectures and regularization techniques, and provide variants that can boost translation performance by improving inference calibration.",
"Calibration on Classification Calibration on classification tasks has been studied for a long history in the statistics literature, including Platt scaling (Platt et al., 1999), isotonic regression (Niculescu-Mizil and Caruana, 2005) and many other methods for non-binary classification (Zadrozny and Elkan, 2002; Menon et al., 2012; Zhong and Kwok, 2013).",
"For modern deep neural networks, Guo et al. (2017) demonstrated Bush held a talk with Sharon in Israel .",
"that recent advances in training and model architecture have strong effects on the calibration.",
"Szegedy et al. (2016) propose the label smoothing technique which can effectively reduce the calibration error.",
"Ding et al. (2019) extend label smoothing to adaptive label regularization.",
"Calibration on Structured Prediction Different from classification tasks, most natural language processing (NLP) tasks deal with complex structures (Kuleshov and Liang, 2015).",
"Nguyen and O'Connor (2015) verified the finding of Niculescu-Mizil and Caruana (2005) in NLP tasks on log-linear structured models.",
"For NMT, some works directed their attention to the uncertainty in prediction (Ott et al., 2018; Wang et al., 2019), Kumar and Sarawagi (2019) studied the calibration of several NMT models and found that the end of a sentence is severely miscalibrated.",
"Muller et al. (2019) investigated the effect of label smoothing, finding that NMT models are well-calibrated in training.",
"Different from previous works, we are interested in the calibration of NMT models in inference, given that the training and inference are discrepant for standard NMT models (Vaswani et al., 2017).",
"Training In machine translation task, an NMT model F : x y maximizes the probability of a target sequence y = { y 1 , ..., y T } given a source sentence x = { x 1 , ..., x S } : T",
"where is a set of model parameters and y <t is a partial translation.",
"At each time step, the model generates an output token of the highest probability based on the source sentence x and the partial translation y <t .",
"The training objective is to minimize the negative log-likelihood loss on the training corpus.",
"Inference NMT models are trained on the ground-truth data distribution ( teaching forcing ), while in inference the models generate target tokens based on previous model predictions, which can be erroneous.",
"The training-inference discrepancy caused by teacher forcing in maximum likelihood estimation training (Equation 1) is often referred to as exposure bias (Ranzato et al., 2015).",
"In this work, we aim to investigate the calibration of NMT in inference, which we believe can better reflect the generation capacity of NMT models.",
"Calibration requires that the probability a model assigns to a prediction (i.e., confidence ) equals to the true correctness measure of the prediction (i.e., accuracy ).",
"Modern neural networks have been found to be miscalibrated in the direction of overestimation (Guo et al., 2017).",
"In this study, we revisit the calibration problem in NMT.",
"If an NMT model is well-calibrated, the gap between the confidence of the generated tokens and the accuracy of them will be small.",
"2 Expected Calibration Error (ECE) ECE is a commonly-used metric to evaluate the miscalibration, which measures the difference in expectation between confidence and accuracy (Naeini et al., 2015).",
"Specifically, ECE partitions predictions into M bins { B 1 , . . . , BM } according to their confidence and takes a weighted average of the bin's accuracy/confidence difference: ECE = M (cid:88) m =1 | B m | N (cid:12)(cid:12)(cid:12) acc ( B m ) conf ( B m ) (cid:12)(cid:12)(cid:12) , (2) where N is the number of prediction samples and | B m | is the number of samples in the m -th bin.",
"2 For example, given 100 predictions, each with confidence 0.7.",
"If the accuracy is also 0.7 (i.e., 70 of the 100 tokens are correct), then the NMT model is well calibrated.",
"ECE in Training and Inference In the case of considering just the topmost token in structured prediction tasks (e.g., machine translation), the prediction is y = arg max y V P ( y ) with P ( y ) as confidence .",
"The accuracy C ( y ) { 1 , 0 } denotes whether the prediction y is correct.",
"In training, the correctness of the prediction y is calculated as whether y matches the ground-truth token y n : C ( y ) { 1 , 0 } .",
"However, in inference it is not straightforward to measure the accuracy of y , since it requires to build an alignment between the generated tokens and the ground-truth tokens.",
"To this end, we turn to the metric of Translation Error Rate (TER) (Snover et al., 2006), which measures the number of edits required to change a model output into the ground-truth sequence.",
"Specifically, it assigns a label l { C, S, I } to each generated token.",
"Figure 2 shows an example of TER labels of each generated token with respect to the reference.",
"As a side product, TER annotations provide the information of translation errors.",
"While TER only labels the mis-translation (S) and over-translation (I) errors, we describe a simple heuristic method to annotate the under-translation error by mapping the label D from the ground-truth sequence to the generated sequence.",
"Data and Setup We carried out experiments on three different language pairs, including WAT17 English-Japanese (En-Jp), WMT14 English-German (En-De), and WMT17 Chinese-English (Zh-En).",
"The training datasets consist of 1.9M, 4.5M, and 20.6M sentence pairs respectively.",
"We employed Byte pair encoding (BPE) (Sennrich et al., 2016) with 32K merge operations for all the three language pairs.",
"We used BLEU (Pap-ineni et al., 2001) to evaluate the NMT models.",
"We used the TER toolkit (Snover et al., 2006) to label whether the tokens in NMT outputs are correctly translated.",
"Normalization was not used, and the maximum shift distance was set to 50.",
"The NMT model that we used in our experiments is Transformer (Vaswani et al., 2017).",
"We used base model as default, which consists of a 6-layer encoder and a 6-layer decoder and the hidden size is 512.",
"The model parameters are optimized by Adam (Kingma and Ba, 2015), with 1 = 0 .",
"9 , 2 = 0 .",
"98 and (cid:15) = 10 9 .",
"We used the same warmup strategy for learning rate as Vaswani et al. (2017) with warmup steps = 4 , 000 .",
"Reliability diagrams are a visual representation of model calibration, which plot accuracy as a function of confidence (Niculescu-Mizil and Caruana, 2005).",
"Specifically, it partitions the output tokens into several bins according to their prediction confidence, and calculate the average confidence and accuracy of each bin.",
"Figure 1 shows the reliability diagrams of both training and inference on En-De and Figure 3 shows those on En-Jp and Zh-En.",
"Results are reported on the validation sets.",
"NMT still suffers from miscalibration.",
"The difference between training and inference ECEs is that when estimating training ECE, NMT models are fed with ground-truth prefixes (Kumar and Sarawagi, 2019; Muller et al., 2019), while for inference ECE, NMT models are fed with previous model predictions.",
"As seen, the training ECE is very small, indicating that NMT models are well-calibrated in training.",
"This is consistent with the findings of Kumar and Sarawagi (2019); Muller et al. (2019).",
"However, the inference ECE is much higher, suggesting that NMT models still suffer from miscalibration in inference.",
"NMT models are miscalibrated in directions of both overand under-estimation.",
"Modern neural networks have been found to be miscalibrated on classification tasks in the direction of overestimation (Guo et al., 2017).",
"In contrast, NMT models also suffer from under-estimation problems.",
"The under-estimation problem is more serious on En-Jp than on Zh-En, which we attribute to the smaller size of the training data of the En-Jp task.",
"We investigated the calibration error of tokens with different TER labels.",
"As the development set is small, to make the results more convincing, we sampled 100K sentences from the training set as a held-out set and retrained the NMT model on the remained training set excluding the held-out set.",
"All results in this section is reported by the retrained model.",
"We firstly compute the gap between the confidence and the accuracy of each token in each confidence bin on the held-out set.",
"Tokens in bins whose gaps are less than a threshold are labeled as well-calibrated, otherwise they are labeled as miscalibrated.",
"We use the inference ECE estimated on the development set as the threshold for each language pair respectively.",
"Miscalibrated tokens can be divided into two categories: over-estimation and under-estimation.",
"As shown in Table 1, correct translations (i.e., C) have higher correlations to well-calibrated predictions and erroneous translations (i.e., S, I, and D) correlate more to miscalibrated predictions.",
"This finding is more obvious when NMT models are trained on larger data (e.g., Zh-En).",
"Table 2 lists the correlation between different translation errors and different kinds of miscalibration.",
"We find that over-estimated predictions are closely correlated with over-translation and misType Under-Est.",
"translation errors, while the under-estimated predictions correlate well with under-translation errors.",
"This finding demonstrates the necessity of studying inference calibration for NMT.",
"In this section, we investigate the linguistic properties of miscalibrated tokens in NMT outputs.",
"We explore the following five types of properties: frequency, position, fertility, syntactic roles, and word granularity.",
"Frequency is generally related to miscalibration; position, fertility, and word granularity are three factors associated with structured prediction; syntactic roles or linguistic roles may vary across language pairs.",
"The results in this section are reported on the held-out set by the retrained model.",
"Relative Change We use the relative change of the proportion of a certain category of tokens to quantify to what extent they suffer from the under/over-estimation.",
"For instance, in the Zh-En task, high-frequency tokens account for 87.6% on the whole held-out set, and among over-estimated tokens, high-frequency tokens account for 77.3%, thus for over-estimation the relative change of high-frequency tokens is (77.3-87.6)/87.6=-11.76% in Zh-En.",
"Accordingly, the value of the red rectangle of Zh-En is -11.76% in Figure 4a.",
"Positive relative change denotes that a certain type of linguistic property accounts more in miscalibrated predictions than in all the predictions, suggesting this type of linguistic property suffers Over-Estimation R e l a ti v e C h a ng e -50% 50% 150% 250% 350% En-Jp En-De Zh-En LowMediumHigh Under-Estimation En-Jp En-De Zh-En Over-Estimation R e l a ti v e C h a ng e -50% -25% 0% 25% 50% En-Jp Jp-En En-DeZh-En LeftMiddleRight Under-Estimation En-Jp Jp-En En-DeZh-En",
"from the miscalibration problem.",
"Similarly, negative relative change suggests that a certainty type of linguistic property is less likely to be impaired by the miscalibration problem.",
"We divide tokens into three categories based on their frequency, including High : the most 3,000 frequent tokens; Medium : the most 3,001-12,000 frequent tokens; Low : the other tokens.",
"Low-frequency tokens are miscalibrated in the direction of under-estimation.",
"As shown in Figure 4, the relative changes of lowand medium-frequency tokens are much bigger than those of high-frequency tokens.",
"The under-estimation in lowand medium-frequency tokens can be alleviated by increasing the size of training data (Fig-ure 4b, data size: En-Jp < En-De < Zh-En).",
"Low-frequency tokens contribute more to overestimation.",
"As shown in Figure 4a, the relative changes of lowand medium-frequency tokens are positive while those of high-frequency tokens are negative, regarding over-estimation.",
"High-frequency tokens are less likely to be miscalibrated.",
"We find the relative changes of high frequency tokens are negative across the three language pairs.",
"The imbalance in token frequency plays an important role in the calibration of NMT.",
"In structured prediction, different positions may behave differently regarding miscalibration.",
"Thus we divide all the tokens equally into three categories: Left : tokens on the left third; Middle : tokens on the middle third; Right : tokens on the right third.",
"Figure 5 depicts the relative changes of these three positions.",
"Since Japanese is a head-final language (Wu et al., 2018), we also include the results of Japanese-English (Jp-En) for comparison.",
"Over-estimation does not have a bias on position.",
"And this holds for both left-branching and right-branching languages.",
"Increasing the size of training data is less likely to affect the over-estimation in different positions.",
"Under-estimation occurs more in the left part.",
"This phenomenon is more obvious in left-branching languages (e.g., Japanese) than in right-branching languages (e.g., English and German), confirming that characteristics of a language play an important role in machine translation (Wu et al., 2018).",
"Fertility indicates how many source tokens a target token is aligned to, which is highly related to inference in NMT.",
"We use Fast Align (Dyer et al., 2013) to extract bilingual alignment.",
"We distinguish between four categories regarding fertility: 2 : target tokens that are aligned to more than one source tokens; 1 : target tokens that are aligned to a single source token; (0 , 1) : target tokens that are aligned to a single source token along with other target tokens; 0 : target tokens that are not aligned to any source token.",
"Figure 6 plots the results.",
"Tokens aligning to less than one source token suffer from over-estimation.",
"The extent grows with Under-Estimation R e l a ti v e C h a ng e -100% -30% 40% 110% En-Jp En-De Zh-En Pos Tags P e r ce n t a g e 0% 10% 20% 30% 40% 50% En-Jp En-De Zh-En Noun Verb Adj Prep.",
"the data size.",
"In addition, these tokens ((0, 1)) are less likely to suffer from under-estimation.",
"Tokens aligning to more than one source token suffer more from under-estimation.",
"The relative change of fertility > = 2 is much larger than that of the other types of fertility.",
"Meanwhile, the null-aligned target tokens (fertility = 0) also suffer from under-estimation problem instead of overestimation problem on the large-scale Zh-En data.",
"In this experiment, we investigate the syntactic roles of miscalibrated tokens.",
"3 Words in English and German sentences are labeled by Stanford POS tagger 4 , and Japanese sentences are labeled by Kytea 5 .",
"We distinguish between the following POS tags: noun, verb, adjective, preposition, determiner, punctuation, and the others.",
"Noun, verb, and adjective belong to content tokens.",
"Preposition, determiner, punctuation and the others belong to content-free tokens.",
"Content tokens are more likely to suffer from miscalibration.",
"From Figure 7 we find that the most relative changes of content tokens (i.e., Noun, Verb and Adj) are positive, while most of the relative changes of the content-free tokens (i.e., Prep., Dete., Punc., Others) are negative.",
"Among content tokens, the verbs (Verb) face the over-estimation problem instead of the underestimation problem.",
"Surprisingly, the adjectives (Adj) suffer from under-estimation problem on large data (e.g., En-De and Zh-En).",
"BPE segmentation is the preliminary step for current NMT systems, which may segment some 3",
"words into sub-words.",
"To explore the effect of word granularity on the miscalibration of NMT models, we divide the tokens after BPE segmentation into two categories: Sub-Words that are divided into word fragments by BPE (e.g., with @@), and Full Words that are not divided by BPE.",
"Figure 8 depicts the results.",
"Sub-words suffer more from miscalibration, while full words are less likely to be miscalibrated.",
"The relative changes of sub-words are all positive for both overand under-estimation, while those of full words are all negative.",
"Sennrich et al. (2016) showed that BPE addresses the open-vocabulary translation by encoding rare and unknown words as sequences of sub-word units.",
"Our results con-firm their claim: the behaviors of sub-words and full words correlate well with those of lowand high-frequency tokens respectively.",
"Guo et al. (2017) have revealed that the miscalibration on classification tasks is closely related to lack of regularization and increased model size.",
"In this section we check whether the conclusion holds on the inference of NMT models, which belong to a family of structured generation.",
"One criticism of NMT inference is that the translation performance inversely decreases with the increase of search space (Tu et al., 2017).",
"Quite recently, Kumar and Sarawagi (2019) claimed that this problem can be attributed to miscalibration.",
"Accordingly, we also report results on large beam size and find that reducing miscalibration can improve the NMT performance in large beam size.",
"Label Smoothing (Szegedy et al., 2016): distributing a certain percentage of confidence from the ground truth label to other labels uniformly in training.",
"Dropout (Hinton et al., 2012): randomly omitting a certain percentage of the neural networks on each training case, which has been shown effective to prevent the over-fitting problem for large neural networks.",
"For comparison, we disable label smoothing or dropout to retrain the model on the whole training set.",
"The results are shown in Table",
"3. We find that label smoothing improves the performance by greatly reducing the over-estimation, at the cost of increasing the percentage of under-estimation error.",
"Dropout alleviates the over-estimation problem, and does not aggravate under-estimation.",
"Although label smoothing only marginally improves performance on top of dropout, it is essential for maintaining the translation performance in larger search space (i.e., Beam Size = 100).",
"As seen from Table 3, reducing ECE can only lead to marginal BLEU gains.",
"We attribute this phenomenon to the fact that ECE is another metric to evaluate NMT models, which is potentially complementary to BLEU.",
"Accordingly, ECE is not necessarily strictly negatively related to BLEU.",
"Graduated Label Smoothing Inspired by this finding, we propose a novel graduated label smoothing approach, in which the smoothing penalty for high-confidence predictions is bigger than that for low-confidence predictions.",
"We firstly use the model trained by vanilla label smoothing to estimate the confidence of each token in the training set, then we set the smoothing penalty to 0.3 for tokens with confidence above 0.7, 0.0 for tokens with confidence below 0.3, and 0.1 for the remaining tokens.",
"As shown in Table 3, the graduated label smoothing can improve translation performance by alle-Enc.",
"viating inference miscalibration, and the improvement is more significant in large beam size.",
"Figure 9 shows the reliability diagrams of different label smoothing strategies.",
"The graduated label smoothing can effectively calibrate the predictions with 0 .",
"4 confidence 0 .",
"8 , while is less effective for low(i.e., < 0 . 4 ) and high-confidence (i.e., > 0 . 8 ) predictions.",
"We believe that the design of more advanced techniques to solve this problem is a worthwhile future direction of research.",
"The model size of NMT models has increased sig-nificantly recently (Bahdanau et al., 2015; Vaswani et al., 2017; Wang et al., 2019).",
"We evaluated the inference calibration of models with different sizes.",
"We increase model size in the following two ways: Deeper model : both the encoder and the decoder are deepened to 24 layers; Wider model : the hidden size of the encoder and the decoder is widened to 1024.",
"The BLEU score and inference ECE of different models are shown in Table",
"4. Increasing model size negatively affects inference calibration.",
"We find that increasing both the encoder and the decoder increases the inference calibration error despite increasing the BLEU, confirming the finding of Guo et al. (2017) that increased model size is closely related to model miscalibration.",
"This leads to a performance drop in a larger search space (i.e., Beam Size = 100).",
"Only enlarging the encoder improves translation quality while maintaining inference calibration.",
"As the decoder is more directly related to the generation, it is more likely to result in miscalibration.",
"In order to maintain the performance improvement and do not aggravate over-estimation, we propose to only increase the size of encoder and keep the decoder unchanged.",
"Results in Table 4 indicate that only enlarging the encoder can achieve better performance with fewer parameters compared to enlarging both the encoder and the decoder.",
"In a larger search space (i.e., Beam Size = 100), models with high inference ECE will generate worse translations while models with low inference ECE can achieve improved translation performance.",
"Although NMT models are well-calibrated in training, we observe that they still suffer from miscalibration during inference because of the discrepancy between training and inference.",
"Through a series of in-depth analyses, we report several interesting findings which may help to analyze, understand and improve NMT models.",
"We revisit recent advances and find that label smoothing and dropout play key roles in calibrating modern NMT models.",
"We further propose graduated label smoothing that can reduce the inference calibration error effectively.",
"Finally, we find that increasing model size can negatively affect the calibration of NMT models and this can be alleviated by only enlarging the encoder.",
"As well-calibrated confidence estimation is more likely to establish trustworthiness with users, we plan to apply our work to interactive machine translation scenarios in the future.",
"We thank all anonymous reviewers for their valuable comments and suggestions for this work.",
"This work was supported by the National Key R&D Program of China (No. 2017YFB0202204), National Natural Science Foundation of China (No. 61925601, No. 61761166008, No. 61772302), Beijing Advanced Innovation Center for Language Resources (No. TYR17002), and the NExT++ project supported by the National Research Foundation, Prime Ministers Office, Singapore under its IRC@Singapore Funding Initiative."
] | [
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"objective",
"result",
"method",
"other",
"other"
] |
[
"It has been exactly a decade since the first es-tablishment of SPMRL, a research initiative unifying multiple research efforts to address the peculiar challenges of Statistical Parsing for Morphologically-Rich Languages (MRLs).",
"Here we reflect on parsing MRLs in that decade, highlight the solutions and lessons learned for the architectural, modeling and lexical challenges in the pre-neural era, and argue that similar challenges re-emerge in neural architectures for MRLs.",
"We then aim to offer a climax, suggesting that incorporating symbolic ideas proposed in SPMRL terms into nowadays neural architectures has the potential to push NLP for MRLs to a new level.",
"We sketch a strategies for designing Neural Models for MRLs (NMRL), and showcase preliminary support for these strategies via investigating the task of multi-tagging in Hebrew, a morphologically-rich, high-fusion, language.",
"The ability to process natural language data and to automatically extract structured meanings out of them has always been the hallmark of Artificial Intelligence (AI), and today it is also of immense practical value in downstream technological applications for Information Extraction, Text Analytics, and diverse Data Science applications.",
"The introduction of deep learning models (Goodfellow et al., 2016) into Natural Language Processing (NLP) has led to an explosion in the Neural models and pretraining techniques applied to NLP tasks from classical tasks as tagging and parsing to end-to-end tasks as machine translation and question answering raising the performance bar on these tasks to an all-times peak.",
"So far though, these advances have been reported mostly for English.",
"Can these advances carry over to languages that are typologically vastly different from English, such as Morphologically-Rich Languages ?",
"The term Morphologically-Rich Languages (MRLs) refers to languages such as Arabic, Hebrew, Turkish or Maltese, in which significant information is expressed morphologically, e.g., via word-level variation, rather than syntactically, e.g., via fixed word-order and periphrastic constructions, as in English.",
"These properties lead to diverse and ambiguous structures, accompanied with huge lexica, which in turn make MRLs notoriously hard to parse (Nivre et al., 2007; Tsarfaty, 2013).",
"A decade ago, Tsarfaty et al. (2010) put forth three overarching challenges for the MRLs research community:",
"(i) The Architectural Challenge: What input units are adequate for processing MRLs?",
"(ii) The Modeling Challenge: What modeling assumptions are adequate for MRLs?",
"(iii) The Lexical Challenge: How can we cope with extreme data sparseness in MRLs lexica?",
"For NLP in the pre-neural era, effective solutions have been proposed and successfully applied to address each of these challenges for MRLs, using data from MRLs treebanks and designated shared tasks (Nivre et al., 2007; Seddah et al., 2013a, 2014a; Nivre et al., 2016).",
"The solutions proposed to the above challenges included:",
"(i) parsing morphemes rather than words,",
"(ii) joint modeling of local morphology and global structures, and",
"(iii) exploiting external knowledge to analyze the long tail of unattested word-forms.",
"Upon the introduction of Neural Network models into NLP (Goldberg, 2016), it was hoped that we could dispense with the need to model different languages differently.",
"Curiously though, this has not been the case.",
"Languages with rich morphology typically require careful treatment, and often the design of additional resources (cf.",
"Czarnowska et al. (2019)).",
"Moreover, current modeling strategies for neural NLP appear to stand in contrast with the pre-neural proposals for processing MRLs.",
"First, unsupervised pre-training techniques employing language modeling objectives (LM, MLM) are applied nowadays to raw words rather than morphemes, and deliver word-embeddings agnostic to internal structure.",
"While some morphological structure may be implicitly encoded in these vectors, the morphemes themselves remain un-accessible (Va-nia et al., 2018; Cotterell and Schutze, 2015).",
"Second, pre-neural models for parsing MRLs call for joint inference over local and global structures, tasking multiple, ambiguous, morphological analyses (a.k.a. lattices ) as input, and disambiguating these morphological structure jointly with the parsing task (Goldberg and Tsarfaty, 2008; Green and Manning, 2010; Bohnet et al., 2013a; Seeker and Centinoglu, 2015; More et al., 2019).",
"In contrast, pre-trained embeddings select a single vector for each input token prior to any further analysis.",
"Finally, pre-trained embeddings trained on words cannot assign vectors to unseen words.",
"The use of unsupervised char-based or sub-word units (Bojanowski et al., 2017) to remedy this situation shows mixed results; while these models learn orthographic similarities between seen and unseen words, they fail to learn the functions of sub-word units (Avraham and Goldberg (2017); Vania and Lopez (2017) and references therein).",
"This paper aims to underscore the challenges of processing MRLs, reiterate the lessons learned in the pre-neural era, and establish their relevance to MRL processing in neural terms.",
"On the one hand, technical proposals as pre-trained embeddings, fine-tuning, and end-to-end modeling, have advanced NLP greatly.",
"On the other hand, neural advances often overlook MRL complexities, and disregard strategies that were proven useful for MRLs in the past.",
"We argue that breakthroughs in Neural Models for MRLs (NMRL) can be obtained by incorporating symbolic knowledge and pre-neural strategies into the end-to-end neural architectures.",
"The remainder of this paper is organized as follows.",
"In Section 2 we survey the methodological changes that neural modeling brought into NLP.",
"In Section 3 we characterize MRLs and qualify the challenges that they pose to neural NLP.",
"In Section 4 we assess the compatibility of pre-neural modeling and current neural modeling practices for MRLs, and in Section 5 we suggest to re-frame pre-neural solution strategies in neural terms.",
"In Section 6 we present preliminary empirical support for these strategies, and in Section 7 we conclude.",
"Classical NLP research has been traditionally devoted to the development of computer programs called parsers , that accept an utterance in a human language as input and deliver its underlying linguistic structure as output.",
"The output may be of various sorts: Morphological parsing analyzes the internal structure of words.",
"Syntactic parsing analyses the structure of sentences.",
"Semantic parsing assigns a formal representation to the utterance, one that reflects its meaning.",
"Discourse parsing identi-fies the discourse units, discourse relations, as well as rhetoric and pragmatic structure associated with complete narratives.",
"Since natural language exhibits ambiguity at all levels of analysis, statistical parsers aim to learn how to pick the best analysis from multiple suitable candidates (Smith, 2011).",
"The introduction of Deep Learning has revolutionized all areas of Artificial Intelligence, and NLP research is no exception (Goldberg, 2016).",
"Neural-network models now demonstrate an all-times peak in the performance of various NLP tasks, from conventional tasks in the NLP pipeline like tagging and parsing (Alberti et al., 2015; Nguyen et al., 2017; Zhou et al., 2019) to diverse downstream applications, such as machine translation (Bahdanau et al., 2014; Luong et al., 2015), question answering (An-dreas et al., 2016), text-to-code generation (Hayati et al., 2018) and natural language navigation (Mei et al., 2016).",
"In addition to revolutionizing empirical NLP, neural models have also altered the methodology of conducting NLP research, in various ways, which we review here in turn.",
"First, while state-of-the-art models for structure prediction in NLP used to rely heavily on intricate formal structures and carefully designed features (or feature-templates) (Zhang and Nivre, 2011; Zhang and Clark, 2011a), current neural models provide a form of representation learning and may be viewed as automatic feature-extractors (Kiper-wasser and Goldberg, 2016; Dozat and Manning, 2018).",
"That is, as long as the input object can be represented as a vector, the neural model will learn how to map it to the appropriate set of structural decisions, without having to write features or feature-templates by hand.",
"Second, most neural models for NLP rely on pre-training , the process of acquiring word-level vector representations termed word-embeddings .",
"These vectors are used as input, instead of actual words.",
"Initially, word embeddings were non-contextualized (Mikolov et al., 2013; Pennington et al., 2014), i.e., they assigned the same vector to the occurrences of a word in different contexts.",
"Later models present contextualized embeddings (Devlin et al., 2018; Peters et al., 2018; Yang et al., 2019; Liu et al., 2019b), they assign different vectors to the occurrences of the same word in different contexts.",
"Embeddings in general, and contextualized ones in particular, dramatically increased the performance of any NLP task they were applied to.",
"Third, working with contextualized embeddings has been so successful, that it shifted the focus of NLP practitioners from training models from scratch to fine-tuning (Liu et al., 2019a) pre-trained embeddings.",
"That is, instead of tailoring hugely complex models for specific tasks and training them from scratch, a huge effort is invested in learning a general language model (LM) that can assign contextualized embeddings to words.",
"These vectors are often argued to capture, or encode , various aspects of structure and meaning (Hewitt and Manning, 2019), and then, a relatively small amount of task-specific data may be used to fine-tune the pre-trained embeddings, so that the model can solve a particular task at hand.",
"Finally, traditional NLP tasks, such as the parsing layers mentioned earlier, were typically organized into a pipeline turning unstructured texts gradually into more complex structures by gradually increasing the complexity of analysis.",
"Eventually, complex semantic structures formed the basis for the design of dialogue systems, question answering systems, etc.",
"Nowadays, NN models for complex semantic tasks are often designed and trained end-to-end (E2E) on examples of input-output pairs.",
"There is an implicit assumption that all relavnt linguistic features are already encoded in the pre-trained representations, and that they will be automatically extracted in the learning process.",
"This methodology of pre-training , automatic feature extraction and fine-tuning has been applied to a wide variety of tasks and saw immense success for English and also for similar languages.",
"Notwithstanding, the majority of achievements and results for complex natural language understanding (NLU) does not yet carry over to all languages, and in particular, for languages known as Morphologically-Rich Languages .",
"The term Morphologically-Rich Languages entered the NLP research community about a decade ago (Tsarfaty et al., 2010) bringing to the forefront of the research a set of languages which are typologically different from English and share a host of similar processing challenges.",
"Subsequent SPMRL events and shared tasks (Seddah et al., 2013b; Tsarfaty, 2013; Seddah et al., 2014b) illustrated how methodologies and modeling assumptions for English NLP often break down in the face of such typologically diversity.",
"That is, while most NLP models can in principle be trained on data in any given language, 1 such models are often developed with English in mind, and the bias injected into such models is not optimal for languages that exhibit flexible word order, and rich word-internal structure, as is the case in MRLs.",
"Let us briefly survey the properties of MRLs and the challenges associated with them, and observe how pre-neural studies proposed to address them.",
"The Essence of MRLs.",
"The term morphologically rich languages (MRLs) refers to languages in which significant information regarding the units in the sentence and the relations between them is expressed morphologically, i.e., via word structure, rather than syntactically, e.g., using word order and rigid structures.",
"Morphologically-marked information may be of various sorts.",
"For example, consider the following Hebrew sentence: 2 (1) hild hpil at hspr fl hildh.",
"literally: the-kid.MASC.SING cause-to-fall.MASC.PAST ACC the-book of the-kid.FEM.SING trans: the boy made the book of the girl fall.",
"There are several lessons to be learned from (1).",
"First note that the 6 tokens in Hebrew correspond to 9 tokens in the English translation we can observe three types of morphological phenomena that has led to this.",
"First, elements such as prepositions, relativizers and the definite markers h (the) in Hebrew always attach as CLITICS to lexical hosts, and do not stand on their own.",
"Second, features as gender, number, person, tense etc. are marked by INFLECTIONAL morphemes.",
"In particular, the final h 1 E.g., via applying them to the universal dependencies (UD) treebanks (Nivre et al., 2016).",
"distinguishes ildh kid.FEM from its ild kid.MASC counterpart.",
"Interestingly, an initial h marks definiteness in hild, hspr and hildh , so there is no 1:1 relation between surface elements (chars) and what they can mark.",
"Finally, the Hebrew verb, hpil , which also begins with an h , corresponds to the construction ( binyan , pattern) cause-to-fall' via a DERIVATIONAL morphological process that combines the pattern h i (causative) and the lexical root n.p.l (to fall).",
"Note that the h i causative morpheme is non-concatenative.",
"Moreover, when combining h i + n.p.l into hpil the n drops, leaving only a part of the root explicit.",
"This word-level complexity then requires decomposition of raw surface tokens into constituent morphemes in order to transfer them to the syntactic, semantic, or downstream tasks that require this information.",
"However, rich morphology may lead to extreme ambiguity in the decomposition of tokens into morphemes.",
"Take for example the two occurrences of the word form hpil in (2): (2) hild hpil at hpil.",
"literally: the-kid.MASC.SING cause-to-fall.MASC.PAST ACC the-elephant translated: the boy made the elephant fall.",
"Two different morphological processes lead to two different decompositions of hpil , one is concatenative: the + elephant ( h+pil ) and one is not: cause-to + fall ( h i + n.p.l).",
"Moreover, neither interpretation is a-priory more likely than the other.",
"We need the global context in order to select the single human-perceived analysis for each form.",
"The Typology of MRLs.",
"The extent to which morphological phenomena is reflected in different languages varies, and linguistic typology describes morphological diversity along two dimensions.",
"One is the synthesis dimension, which captures the ratio of morphemes per word .",
"Isolating languages on one end present one-morpheme-per-word, like most words in English.",
"At the other end we have polysynthetic languages , where multiple morphemes can form a single word, as it is in Turkish.",
"The other dimension is fusion , and it refers to how easy it is to decompose the word into morphemes.",
"In Turkish, which is agglutinative , the segmentation into morphemes is rather straightforward.",
"This stands in contrast with fusional languages, such as Hebrew, where the decomposition of a word like hpil is less trivial due to the intricate fusion' processes that went into creation.",
"Key Challenges in NLP for MRLs The linguistic characteristics of MRLs are known to pose challenges to the development of NLP models, shared across languages and tasks.",
"The overarching challenges are summerized in Tsarfaty et al. (2010):",
"(i) THEARCHITECTURALCHALLENGE : What are the units that should enter as input into the NLP pipeline for MRLs?",
"Are they words?",
"Morphemes?",
"How are these units identified and propagated down the pipeline?",
"(ii) THE MODELINGCHALLENGE : What are the modeling assumptions that are appropriate for models for MRLs?",
"What kind of structure representations and features (or feature-templates) are appropriate?",
"(iii) THELEXICALCHALLENGE : How can we cope with the extreme data sparseness that follows from the complex structure of words and the productivity of morphology?",
"us now survey the solutions proposed for these three overarching challenges in the pre-neural era.",
"In response to the ARCHITECTURAL challenge, several input alternatives have been proposed.",
"The input to processing an MRL can be composed of raw tokens, segmented morphemes, or complete morphological lattices that capture the multiple possible analyses for each input tokens (More et al., 2018).",
"Morphological lattices seem particularly advantageous, since on the one hand they represent the explicit decomposition of words into morphemes, and on the other hand retain the morphological ambiguity of the input stream, to be disambiguated downstream, when information from later phases, syntactic or semantic, becomes available.",
"Lattice-based processing has led to re-thinking the MODELING architectures for MRLs, and to propose JOINT models, where multiple levels of information are represented during training, and are jointly predicted at inference time.",
"Such joint models have been developed for MRLs in the context of phrase-structure parsing (Tsarfaty, 2006; Goldberg and Tsarfaty, 2008; Green and Manning, 2010) and dependency parsing (Bohnet et al., 2013b; Seeker and Cetinoglu, 2015; More et al., 2019).",
"In all cases, it has been shown that joint models obtain better results than their morphological or syntactic standalone counterparts.",
"3 3 Joint models are shown to be effective for other tasks and languages, such as parsing and NER (Finkel and Manning, 2009) or parsing and SRL (Johansson and Nugues, 2008).",
"Finally, the LEXICAL challenge refers to the problem of out-of-vocabulary items.",
"Supervised training successfully analyzes attested forms, but fails to analyze the long tail of morphological forms in the language, not yet attested during training.",
"Pre-neural models for MRLs thus benefit from additional symbolic information beyond the supervised data.",
"It can be in the form of online dictionaries, wide-coverage lexica, or a-priori knowledge of the structure of morphological paradigms in the language (Sagot et al., 2006; Goldberg et al., 2009).",
"Where We're At Upon the introduction of neural models into NLP the hope was that we could dispense with the need to develop language-specific modeling strategies, and that models will seamlessly carry over from any one language (type) to another.",
"Curiously, this was not yet shown to be the case.",
"NLP advances in MRLs still lag behind those for English, with lower empirical results on classical tasks (Straka et al., 2016), and very scarce results for applications as question answering and natural language inference (Hu et al., 2020).",
"More fundamentally, NLP researchers nowadays successfully predict linguistic properties of English via neural models as in Linzen et al. (2016); Gu-lordava et al. (2018), but they are less successful in doing so for languages that differ from English, as in Ravfogel et al. (2018).",
"It is high time for the MRL community to shed light on the methodological and empirical gaps between neural models for English and for MRLs, and to bridge this gap.",
"The point of departure of this paper is the claim that neural modeling practices employed in NLP nowadays are suboptimal in the face of properties of MRLs.",
"In what follows we illuminate this claim for the four neural methodological constructs that we termed pre-training , fine-tuning , feature-extraction and end-to-end modeling .",
"Pre-training of word embeddings presupposes that the input to an NLP architecture consists of raw words.",
"However, word-level embeddings may not be useful for tasks that require access to the actual morphemes.",
"For example, for semantic tasks in MRLs, it is often better to use morphological embeddings of lemmas rather than words (Avraham and Goldberg, 2017).",
"Also, dependency parsing for MRLs requires access to morphological segments, according to the UD scheme (Straka et al., 2016).",
"A reasonable solution might be to morphologically analyze and segment all input words prior to pre-training.",
"Unfortunately, this solution does not fit the bill for MRLs either.",
"First, current neural segmentors and taggers for MRLs are not accurate enough, and errors in the analyses propagate through the pre-training to contaminate the trained embeddings and later tasks.",
"In the universal segmentation work of (Shao et al., 2018), for instance, neural segmentation for languages which are high on both the synthesis and the fusion index, such as Arabic and Hebrew, lags far behind.",
"Beyond that, there is the technical matter of resources.",
"Pre-training models as Devlin et al. (2018); Liu et al. (2019b); Yang et al. (2019) requires massive amounts of data and computing resources, and such training often takes place outside of academia.",
"Training morphological embeddings rather than word embeddings was not taken up by any commercial partner.",
"4 Next, let us turn to the notion of fine-tuning , widely used today in all sorts of NLP tasks, typically in conjunction with contextualized embeddings as (Devlin et al., 2018; Peters et al., 2018; Liu et al., 2019b).",
"An argument may be advanced that contextualized embeddings actually encode accurate disambiguated morphological analyses in their context-based representations, and all we have to do is to probe these vectors and make these morphological analyses explicit.",
"This argument is appealing, but it was never seriously tested empirically, and it is an open question whether we can successfully probe the fine-grained morphological functions from these vectors.",
"A possible caveat for this line of research has to do with the inner-working of contextualized representations.",
"Most contextualized embeddings operate not on words but on word-pieces .",
"A word-pieces algorithm breaks down words into sub-words, and the model assigns vectors to them.",
"The word-pieces representations are later concatenated or pooled together to represent complete words.",
"It is an open question whether these word-pieces capture relevant aspects of morphology.",
"In particular, it is unclear that the strategy of relying on chars or char-strings is adequate for encoding non-concatenative phenomena that go beyond simple character sequencing, such as templatic morphology, substraction, reduplication, and more (Acker-man and Malouf, 2006; Blevins, 2016).",
"4 Possibly since this does not align with the business goals.",
"The notion of word-pieces leads us to consider the LEXICAL challenge.",
"The suggestion to use sub-word units (chars or char n-grams) rather than words could naturally help in generalizing from seen to unseen word tokens.",
"There is a range of subword units that are currently employed (chars, char-grams, BPEs (Sennrich et al., 2015)), nicely compared and contrasted by Vania and Lopez (2017).",
"Vania and Lopez (2017); Vania et al. (2018) show that for the type of sub-word units that are currently used, standard pre-training leads to clustering words that are similar orthographically , and do not necessarily share their linguistic functions .",
"When a downstream task requires the morphological signature (e.g., dependency parsing in (Vania et al., 2018)) this information is not recoverable from models based on sub-word units alone.",
"On the whole, it seems that end-to-end modeling for MRLs cannot completely rely on automatic feature extraction and dispense with the need to explicitly model morphology.",
"It is rather the contrary.",
"Explicit morphological analyses provide an excellent basis for successful feature extraction and accurate downstream tasks.",
"When such analysis is missing, results for MRLs deteriorate.",
"So, we should aim to recover morphological structures rather than ignore them, or jointly infer such information together with the downstream tasks.",
"5 A different, however related, note concerning automatic feature extraction in MRLs has to do with the flexible or free word-order patterns that are exhibited by many MRLs.",
"Many neural models rely on RNNs (Hochreiter and Schmidhuber, 1997) for feature extraction.",
"These models assume complete linear ordering of the words and heavily rely on positions in the process of representation learning.",
"Even pre-training based on attention and self-attention (Vaswani et al., 2017) assign weights to positional embeddings.",
"In this sense, the bias of current neural models to encode positions stands in contrast with the properties of MRLs, that often show discrepancies between the linear position of words and their linguistic functions.",
"It is an open question whether there are more adequate architectures for training (or pre-training) for more flexible or free word-order languages.",
"5 Furthermore, Gonen et al. (2019) have recently shown that one needs to know the explicit morphological analyses in order to effectively ignore or neutralize certain morphemes, for instance discarding gender for reducing bias in the data.",
"The Overarching Goal The purpose of the proposed research theme, which we henceforth refer to as Neural Models for MRLs (NMRL), is to devise modeling strategies for MRLs, for classical NLP tasks (tagging, parsing) and for downstream language understanding tasks (question answering, information extraction, NL inference, and more).",
"This research diverges from the standard methodology of applying DL for NLP in three ways.",
"First, current end-to-end neural models for complex language understanding are developed mostly for English (Wang et al., 2018, 2019).",
"Here we aim to situate neural modeling of natural language understanding in cross-linguistic settings (e.g., (Hu et al., 2020)).",
"Second, while current neural models for NLP assume pre-training with massive amounts of unsupervised data (Ruder et al., 2019; Yang et al., 2019; Liu et al., 2019b), research on MRLs might be realistically faced with resource-scarce settings, and will require models that are more green (Schwartz et al., 2019).",
"Finally, while many neural-based models developed for English presuppose that linguistic information relevant for the downstream task is implicitly encoded in word vectors, and may be successfully predicted by neural models (Linzen et al., 2016), we question the assumption that ready-made pre-trained embeddings, will indeed encode all relevant information required for end-to-end models in MRLs.",
"The key strategies we propose in order to address NMRL include transitionining to",
"(i) morphological-embeddings ,",
"(ii) joint lattice-based modeling , and",
"(iii) paradigm cell-filling (Blevins, 2016; Ackerman et al., 2009), as we detail shortly.",
"Research Questions.",
"To instigate research on NMRL, let us define the three overarching DEEP challenges of MRLs in the spirit of (Tsarfaty et al., 2010).",
"For these challenges, the aim is to devise solutions that respect the linguistic complexities while employing the most recent deep learning advances.",
"THEDEEPARCHITECTURALCHALLENGE : The classical' architectural challenge aimed to define optimal input and output units adequate for processing MRLs.",
"In neural terms, this challenge boils down to a question concerning the units that should enter pre-training.",
"Are they words?",
"Word-pieces?",
"Segmented morphemes?",
"Lemmas?",
"Lattices?",
"Furthermore, should these units be predicted from existing pre-trained embeddings (e.g., multilingual BERT (Ruder et al., 2019) or XLNet (Yang et al., 2019)), or should we develop new pre-training paradigms that will make the relevant morphological units more explicit?",
"THEDEEPMODELINGCHALLENGE : The use of neural models for NLP tasks re-opens an old debate concerning joint vs pipeline architectures for parsing MRLs.",
"The strategy of pre-training word vectors and then employing feature extraction or fine-tuning pre-supposes a pipeline architecture, where a model sets all morphological decisions during pre-training .",
"Joint models assume lattices that encode ambiguity and partial order, and morphological disambiguation happens only later, in the global context of the task.",
"Is it possible to devise neural joint models parsing for MRLs?",
"And if so, would they still outperform a pipeline?",
"THEDEEPLEXICALCHALLENGE : Despite the reliance on pre-trained embeddings and unsupervised data, there is still an extreme amount of unseen lexical items in the long tail of inflected forms in the language, due to the productive nature of morphology.",
"Therefore, we need to effectively handle words outside of the pre-trained vocabulary.",
"How can we cope with the extreme data sparseness in highly synthetic MRLs?",
"Should we incorporate external resources such as dictionaries, lexica, or knowledge of paradigm structure and if so, how should such symbolic information be incorporated into the end-to-end neural model?",
"Solution Strategies.",
"The work on NMRL may proceed along either of these four reserch avenues, each of which groups together research efforts to address a different challenge of NMRL.",
"Neural Language Modeling for MRLs.",
"The strategy here is to empirically examine the ability of existing pre-trained language models to encode rich word-internal structures, and to devise new alternatives for pretraining that would inject relevant biases into the language models, and make morphological information effectively learnable.",
"This may be done by proposing better word-pieces algorithms, and/or devising new pre-training objectives (e.g., lattice-based) that are more appropriate for MRLs.",
"Joint Neural Models for MRLs.",
"The aim here is to devise neural models that parse morphologically ambiguous input words in conjunction to analyzing deeper linguistic layers, and to investigate whether these joint models work better than a pipeline as has been the case in pre-neural models.",
"Neural modeling of morphology may be donw jointly with, named-entity recognition, syntactic or semantic parsing, and downstream tasks as information extraction and question answering.",
"Interleving information from all layers may be done by all at once (e.g., via MultiTask Learning (Caruana, 1997)) or by gradually adding complexity (e.g., via Curriculum Learning (Bengio et al., 2009)).",
"Neural Applications for MRLs.",
"We aim to develop effective strategies for devising end-to-end models for complex language understanding in MRLs.",
"To do so, the community needs high-quality benchmarks for question answering, machine reading and machine reasoning for MRLs.",
"Initially, we need to rely on lessons learned concerning pre-training and joint modeling in the previous items, in order to devise successful architectures for solving these tasks.",
"Moreover, developing benchmarks and annotating them both at the morphological level and for the downstream task will help to evaluate the benefits of explicit morphological modeling versus representation learning, for acquiring word-internal information needed for the downstream task.",
"Closing the Lexical Gap for MRLs.",
"Finally, we need to develop effective strategies for handling out-of-vocabulary (OOV) items in neural models for MRLs.",
"Currently, the main focus of investigation lies in breaking words into pieces, to help generalize from seen to unseen word tokens.",
"As a complementary area of investigation, a plausible direction would be to shift the focus from the decomposition of words into morphemes, to the organization of words as complete paradigms.",
"That is, instead of relying on sub-word units, identify sets of words organized into morphological paradigms (Blevins, 2016).",
"Rather than construct new words from observed pieces, complete unseen paradigms by analogy based on observed complete paradigms.",
"Expected Significance.",
"As has been the case with SPMRL, work on NMRL is expected to deliver architectures and modeling strategies that can carry across MRLs, along with a family of algorithms for predicting, and benchmarks for evaluating, a range of linguistic phenomena in MRLs.",
"From a scientific standpoint, this investigation will advance our understanding of what types of linguistic phenomena neural models can encode, and in what ways properties of the language should guide the choice of our neural architectural decisions.",
"From a technological point of view, such modeling strategies will have vast applications in serving language technology and artificial intelligence advances to a range of languages which do not currently enjoy these technological benefits.",
"Goal.",
"In this section we aim to empirically assess the ability of neural models to recover the word-internal structure of morphologically complex and highly ambiguous surface tokens in Modern Hebrew.",
"Hebrew is a Semitic language which lies high on both the synthesis and fusion typological indices, and thus provides an interesting case study.",
"Specifically, we devised a multi-tagging task where each raw input token is tagged with the sequence of Part-of-Speech tags that represent the functions of its constituent morphemes.",
"For example, the token hpil in Section 3 can assume two different multi-tag analyses: VERB (made-fall) or DET+NOUN (the elephant).",
"The number of distinct tags in the multi-tagging analyses of Hebrew tokens can be up to seven different tags, that represent distinct functions contained in the word token.",
"Models.",
"We compare the results of multi-tagging obtained by a state-of-the-art, pre-neural, morphosyntactic parser (More et al., 2019) that is based on the structured prediction framework of Zhang and Clark (2011b).",
"The pre-neural parser explicitly incorporates three components for addressing the challenges associated with MRLs:",
"(i) it receives complete morphological lattices as input, where each input token is initially assigned the set of all possible morphological analyses for this token, according to a wide-coverage lexicon,",
"(ii) it employs joint training and inference of morphological segmentation and syntactic dependencies, and",
"(iii) it employs unknown-words heuristics based on linguistic rules to assert possible valid analyses of OOV tokens.",
"We compare this pre-neural parser to three neural architectures:",
"An end-to-end language-agnostic LSTM-CRF architecture, trained to predict a single complex tag (multi-tag) per token, encoding words with and without morph/char embeddings .",
"An architecture based on the Hebrew section of multilingual BERT , fine-tuned to predict a single complex tag (multi-tag) per token.",
"As a first approximation of incorporating symbolic morphological constructs into the neural end-to-end architecture, we designed our own COPYNET, a sequence-to-sequence pointer-network where the input consists of complete morphological lattices for each token, and a copy-attention mechanism is trained to jointly select morphological segments and tag associations from within the lattice , to construct the complete multi-tag analyses.",
"Data and Metrics.",
"We use the Hebrew section of the SPMRL shared task (Seddah et al., 2013b) using the standard split, training on 5000 sentences and evaluating on 500 sentences.",
"For generating the lattices we rely on a rule-based algorithm we devised on top of the wide-coverage lexicon of (Adler and Elhadad, 2006), the same lexicon employed in previous work on Hebrew (More and Tsarfaty, 2016; More et al., 2019; Tsarfaty et al., 2019).",
"We report the F-Scores on Seg/POS as defined in More and Tsarfaty (2016); More et al. (2019).",
"Results.",
"Table 1 shows the multi-tagging results for the different models.",
"The pre-neural model obtains 95.5 F1 on joint Seg+POS prediction on the standard dev set.",
"As for the neural models, in an oracle segmentation scenario, where the gold morphological segmentation is known in advance, both BERT and the LSTM-CRF get close to the pre-neural model results.",
"However, they solve an easier and unrealistic task, since in realistic scenarios the gold segmentation is never known in advance.",
"In the more realistic scenarios, where the segmentation is automatically predicted (via More et al. (2019)), the results of the Neural models substantially drop.",
"As expected, morph-based and char-based representations help to improve results of the LSTM-CRF model, though not yet reaching the 95 F-score of the pre-neural model.",
"Finally, employing our COPYNET with symbolic morphological lattices, with OOV segmentation heuristics as in the pre-neural model, leads to the most significant improvement, almost closing the gap with the pre-neural state-of-the-art result.",
"Unfortunately, lattices are incompatible with LSTMs and with BERT, since LSTMs and BERT models assume complete linear ordering of the tokens, while lattices impose only a partial order on the morphemes.",
"The question how to incorporate contextualized embeddings into joint, lattice-based, models is fascinating, and calls for further research.",
"This paper proposes NMRL, a new (or rather, re-defined) research theme aiming to develop neural models, benchmarks, and modeling strategies for MRLs.",
"We surveyed current research practices in neural NLP, characterized the particular challenges associated with MRLs, and demonstrated that some of the neural modeling practices are incompatible with the accumulated wisdom concerning MRLs in the SPMRL literature.",
"We proceeded to define the three DEEP counterparts to the challenges proposed in Tsarfaty et al. (2010), namely, the DEEPARCHITECTURALCHALLENGE , DEEPMODELINGCHALLENGE and DEEPLEXICALCHALLENGE , and sketched plausible research avenues that the NMRL community might wish to explore towards their resolution.",
"Our preliminary experiments on Hebrew multi-tagging confirmed that relying on lessons learned for MRLs in the pre-neural era and incorporating similar theoretical constructs into the neural architecture indeed improves the empirical results on multi-tagging of Hebrew, on the very basic form of analysis of Modern Hebrew a morphologically rich and highly-fusional language.",
"This type of research needs to be extended to the investigation of multiple tasks, multiple languages, and multiple possible pre-training regimes (words, chars, morphemes, lattices) in order to investigate whether this trend extends to other languages and tasks.",
"Whether adopting solution strategies for MRLs proposed herein or devising new ones, it is high time to bring the linguistic and moprhologi-cal complexity of MRLs back to the forefront of NLP research, both for the purpose of getting a better grasp of the abilities, as well as limitations, of neural models for NLP, and towards serving the exciting NLP/AI advances to the understudied, less-privileged, languages.",
"We thank Clara Vania, Adam Lopez, and members of the Edinburgh-NLP seminar, Yoav Goldberg, Ido Dagan, and members of the BIU-NLP seminar, for intriguing discussions on earlier presentations of this work.",
"This research is kindly supported by the Israel Science Foundation (ISF), grant No. 1739/16, and by the European Research Council (ERC), under the Europoean Union Horizon 2020 research and innovation programme, grant No. 677352."
] | [
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"This survey builds an interdisciplinary picture of Argument Mining (AM), with a strong focus on its potential to address issues related to Social and Political Science.",
"More specifically, we focus on AM challenges related to its applications to social media and in the multilingual domain, and then proceed to the widely debated notion of argument quality.",
"We propose a novel definition of argument quality which is integrated with that of deliberative quality from the Social Science literature.",
"Under our definition, the quality of a contribution needs to be assessed at multiple levels: the contribution itself, its preceding context, and the consequential effect on the development of the upcoming discourse.",
"The latter has not received the deserved attention within the community.",
"We finally define an application of AM for Social Good: (semi-) automatic moderation , a highly integrative application which",
"(a) represents a challenging testbed for the integrated notion of quality we advocate,",
"(b) allows the empirical quantification of argu-ment/deliberative quality to benefit from the developments in other NLP fields (i.e. hate speech detection, fact checking, debiasing), and",
"(c) has a clearly beneficial potential at the level of its societal thanks to its real-world application (even if extremely ambitious).",
"Considering Argument Mining (AM) for Social Good implies a strong conceptual shift: the discourse exchange is not to be interpreted as a competition to be won by the most persuasive contribution 1 , but rather as a cooperative endeavor in which",
"1 In this paper, we use the term contribution to refer to a turn in a discourse exchange; more concretely a contribution is a textual unit in a discourse contex, e.g., a post in a forum, a tweet in a discussion thread; a speech in a parliamentary debate).",
"each individual contribution represents a move towards a shared goal.",
"If argumentative discourse is cooperation, it is not to be taken for granted that the perfect debater, most often the primary objective in AM research, is necessarily also the best team player.",
"Building on this assumption, we review recent developments in the field of AM from the perspective of its application in socially relevant contexts.",
"Our survey has a strong interdisciplinary perspective, putting the focus on the collaboration between NLP and the Social Sciences and, more specifically, in argumentation targeted at decision-making ( deliberation ).",
"Deliberative discourse historically characterizes parliamentary debates; however, it pervades, more and more frequently, discussions in digital democracy forums and, beyond that, specific strands of discussions in generalistic social media.",
"Looking at argumentation through the lens of deliberation has a 2-fold benefit.",
"From a purely NLP perspective, the insights gained through modeling deliberative features can in turn be employed in applications targeting discourse in deliberative forums and social media more broadly, allowing systems to be more adaptable to real-world discourse settings.",
"Social Sciences, in turn, can enormously benefit from the possibility of scaling up to a larger public with the support of NLP methods.",
"The novelty of this survey with respect to literature (Cabrio and Villata, 2018; Lawrence and Reed, 2019) is precisely in its interdisciplinary focus, which leads us to a novel formulation of the widely debated notion of argument quality (Wachsmuth et al., 2017a,b), which we put in direct comparison to Deliberative Quality (Bachtiger and Parkinson, 2019).",
"The take-home message of this comparison is that the quality of a contribution to an argument cannot only be quantified in terms of its textual (linguistic/logical) properties and the relation to the preceding contributions (as commonly done in argument quality), but also the relation to the cooperation challenge needs to be brought in the picture.",
"In other words, a good contribution is one that ensures the discourse to unfold productively.",
"2 We conclude the survey by defining the conceptual coordinates and the practical challenges of (semi-) automatic moderation , a highly integrative application of AM for Social Good which represents a natural testbed for the integrated definition of quality discussed above.",
"We propose to implement moderation as a form of discourse optimization, and spell out the objective of such optimization that is to say, the desiderata for an NLP-based moderator.",
"We discuss the concrete challenges related to the tasks of an NLP moderator, and review existing work that, albeit not targeted at NLP moderation directly, can be brought in as part of a puzzle which is both ambitious and worthwhile to pursue.",
"Argument(ation) Mining (AM) is a field encompassing varying tasks that deal with the automated analysis of arguments from natural language text.",
"Habernal and Gurevych (2017) defines AM as the general task of analyzing discourse on the pragmatics level and applying a certain argumentation theory to model and automatically analyze the data at hand.",
"The progress in the field of NLP in recent years has also influenced this research area: automatic recognition and identification of arguments has been enabled in various domains and different models for the analysis and representation of argumentative structure have been developed.",
"Furthermore, there is a growing research interest in other aspects of AM, such as argument quality.",
"Cabrio and Villata (2018) provide an elaborate overview of the AM framework in their data-driven analysis of the state of the art after five years of significant developments in the field of AM.",
"Generally speaking, given a collection of natural language texts, the task at hand is implemented in two stages: Argument extraction The system first identifies the documents which contain the argumentative structure and the specific textual spans in which 2 The productive quality of a contribution can be defined in relation to Social Sciences literature (Steenbergen et al., 2003; Steiner et al., 2005), c.f. Section 3 argumentation is encoded.",
"Once the textual boundaries are defined, subportions of the argumentative spans are assigned to a set of pre-established argument components (e.g. claims, premises, rebuttal, etc.).",
"A variety of models were used for this including Naive Bayes (Moens et al., 2007), SVMs (Mochales and Moens, 2011), RNNs (Nicu-lae et al., 2017; Eger et al., 2017), Pre-trained Language Models (Chakrabarty et al., 2019; Lugini and Litman, 2020), and other supervisedlearning techniques (Ein-Dor et al., 2020).",
"Relation assignment The goal of the second stage is to model the relations between the argumentative spans identified in the first stage.",
"These relations can exist between different arguments (support, attack) as well as within an argument (connecting the premises with the claim).",
"Recent approaches to argumentative relation classification investigate for example relational models (Traut-mann et al., 2020) or inject background knowledge by leveraging features from different knowledge bases (Kobbe et al., 2019).",
"Detecting these relations is necessary to model the overall structure of the argumentation (discourse/debate).",
"As this structure can be complex, the task is difficult, involving high-level knowledge representation and reasoning issues.",
"After the relations are detected, the discourse structure can then be mapped to a graph representation, called argumentation graph, with the arguments as nodes and relations as edges.",
"To simplify the problem, some approaches reduce the graph to a tree-structure representation (Peldszus and Stede, 2015; Stab and Gurevych, 2017).",
"Different methods to generate the structure have been investigated, e.g. SVMs (Habernal and Gurevych, 2017; Niculae et al., 2017) or textual entailment (Cabrio and Villata, 2013; Cocarascu et al., 2020).",
"Modeling the relations and argumentation flow within a debate is an important fac-tor when defining the notion of argument quality, which will be presented in Section 3.",
"Consider the following example taken from an online debate about compulsory vaccinations 3 which demonstrates the framework quite clearly.",
"Given a statement presenting background and context, participants are asked to discuss the question Does public health demand vaccinations? ( Claims are in bold, and premises are underlined.) 3 http://debatepedia.idebate.org/en/ index.php/Debate:_Compulsory_vaccination A 1 : A vaccine is the best way to prevent an outbreak of a disease or to reduce its negative effects.",
"Vaccinated people become immune to a certain pathogen and do not develop a disease.",
"Although there are occasionally side effects, these affect only a tiny number of people compared to the protection offered to the vast majority.",
"A 2 : Many vaccines have serious and sometimes deadly side effects .",
"With many vaccines the immunity is not lifelong.",
"Sometimes the vaccines itself can cause a serious disease to develop as a side effect.",
"If governments know that compulsory mass vaccination is likely to cause death or permanent disability in even a few cases, it is immoral for them to make it compulsory.",
"Here, the argumentative text boundaries are first determined from the natural language discussion and the argument components (claims and premises) are extracted.",
"Then, the relations between the two arguments are as follows: A 1 supports the argument while A 2 attacks it.",
"However, consider another example, extracted from an online debate platform Kialo 4 .",
"Here, the participants' contribution and the structure mirror a more direct and conversational dynamic to argumentation.",
"The seemingly simple example of an online exchange shows how a more conversational environment provides vaguer boundaries of argumentation structure and components.",
"Each argument is more direct, not necessarily consisting of a claim-premise configuration, and the strength and productive quality of each argument is particularly relative to the context, each contribution affecting the argument differently either at a local or global level.",
"Note, however, that the relations between arguments and claim are still relatively clear (e.g. A 2 supports while A 5 attacks the main claim in A 1 ; A 3 attacks A 2 directly; and A 4 closes any further 4 https://www.kialo.com/explore/ featured discussion on A 3 's premise).",
"Clearly, the environment and type of platform under consideration have a significant impact on a system's capacity to implement such a framework and on the degree of complexity found in the components and relations to extract, assign, and predict.",
"Working in the realm of overtly argumentative text (such as persuasive essays (Stab and Gurevych, 2017)), while challenging of course, can be quite standardized.",
"The language use is generally in line with natural language expectations and often standard (e.g. claim, premise and stance are clear), the structure and collective goal of the debate are rather controlled and topic-specific, and the collection of participants involved is often a closed or an easily-classified set (e.g. in parliamentary debates, news forums, etc.).",
"In social media While overtly argumentative text, like those described above, represents the natural domain of application for AM, social media constitute a powerful source of large amounts of data (billions of words) despite facing particular challenges in AM.",
"Social media plays an increasingly significant role in modern political and social discourse, yet resources built for conducting AM on this type of data structure remain limited for clear reasons.",
"These platforms inherently collect and spread a wide range of content, including personal opinions, facts, fake news, and additional information of interest to users.",
"Distinguishing between personal opinion, fact, and fake news, for example, is not always straightforward, as seen in recent work on fake news detection (Kotonya and Toni, 2020).",
"Further, the language used on such platforms is infamously chaotic and often non-standard in comparison to the language use in more structured environments, like parliamentary debates.",
"The combination of these aspects introduces the unique challenge of implementing AM to particularly heterogeneous, poorly annotated data.",
"Recent work has aimed to tackle such challenges in social media.",
"Dusmanu et al. (2017) apply a supervised classification approach to identify arguments on Twitter, focusing on the tasks of facts recognition and source identification.",
"They study the feasibility of the approaches proposed to address these tasks on a set of tweets related to the Grexit and Brexitnews topics.",
"Habernal and Gurevych (2017) provide an extensive analysis of the steps and the modeling strategies necessary to analyze social media data (e.g. forum posts) in terms of their argumentative structure, while Simpson and Gurevych (2018) tackle the issue of the scalability of AM algorithms.",
"Despite the rising attention and developments to AM in social media, one of the major challenges currently facing the field is the lack of consensus on how exactly to analyse argumentative user-generated texts such as online comments (Bauwelinck and Lefever, 2020).",
"On the one hand, the amount of annotations available for the scale of this heterogeneous data remains limited.",
"Recent work by Schaefer and Stede (2020), among others, have aimed to construct large Twitter corpora annotated for argument components, including argumentative spans within tweets.",
"On the other hand, annotation guidelines are not necessarily clear, and the theoretical motivations underlying the proposed guidelines used to generate labelled corpora rarely include motivation for the use of a particular theoretical basis.",
"Bauwelinck and Lefever (2020) introduce a pilot study and aim to provide a clear justification of the theories and definitions underlying the design of a set of guidelines.",
"The linguistic, structural, and logistic complexity and openness of such platforms clearly present unique challenges.",
"However, being able to work well with argumentative text from social media and discussion forums is essential considering the continuously growing impact on the political and social framework of modern times.",
"Multilingual argument mining Multilinguality is an important area of research in NLP that has gained more attention recently because of the cross-lingual transfer potentials of Pre-trained Language Models (Devlin et al., 2019; Conneau et al., 2020) and because of the potentials for a societal impact at a global scale.",
"The latter is particularly important when considering AM for Social Good since language should not be a barrier for participation if the goal is to allow any productive contribution.",
"Various recent studies have investigated multilinguality for AM.",
"Eger et al. (2019) discuss a series of experiments on using machine translation and annotation projection for AM, specifically argument components extraction and classification in German, English, and Chinese.",
"A similar approach to build training data in other languages using machine translation is done in Toledo-Ronen et al. (2020), which use a pre-trained multilingual BERT (Devlin et al., 2019) for modeling.",
"This approach is shown to perform well for classifying argument stance and detecting evidence, but not for predicting argument quality scores.",
"Multilingual stance detection in political social media text (Vamvas and Sennrich, 2020) is also investigated in Lai et al. (2020) using stylistic, structural, affective and contextual features from text and analysing the scenarios in which each of these features is effective.",
"Other work has also dealt with building non-English datasets (Lindahl, 2020; Bauwelinck and Lefever, 2020; Schaefer and Stede, 2020; Zotova et al., 2020), but there still seems to be a focus on Indo-European languages (and sometimes Chinese) with a lack of datasets and analysis extending to other languages.",
"This is a general issue in NLP research that extends to performance bias in favor of standard dialects for example in English (Blodgett et al., 2016) and bias that could target certain user groups instead of protecting them as was shown for Hate Speech Detection (Davidson et al., 2019).",
"This is an important limitation to address in AM as well for more inclusivity and towards a more positive societal impact.",
"The second stage in the framework of AM is defined as relation assignment (c.f. Section 2.1); a complex task that aims to predict the relations holding between the arguments defined in the first stage.",
"Being able to model the relations between arguments and components within the structure, for example in argument graphs (Besnard and Hunter, 2014; Craven and Toni, 2016), allows us to actually work with the argumentative text in an application-based setting, understand the stance and context of arguments, and develop a story for the consequential impact of arguments on the discourse, among other things.",
"Generally speaking, we can use this task as an approach to analyze argument quality (AQ).",
"However, within the AM community, an open question concerns the adequate definition and op-erationalization of the notion of AQ.",
"Despite this, to move forward with the task of AQ analysis and to create large corpora with crowd-sourced annotations, some approaches rely on the relative assessment of quality: Given two arguments, which is more convincing?",
"(Habernal and Gurevych, 2016; Toledo et al., 2019; Gretz et al., 2020)",
"Thus the natural way of quantifying the success of an argument is in terms of its persuasiveness.",
"Indeed, plenty of previous work has explored the many factors which contribute to the persuasiveness of a message: the linguistic features employed by the authors (Persing and Ng, 2017), the semantic type of claims and premises (Hidey et al., 2017), the different sources of evidence produced to support an argument (Addawood and Bashir, 2016), the effects of the personality traits and prior beliefs on persuasiveness (Lukin et al., 2017; Durmus and Cardie, 2018; Al Khatib et al., 2020), the interaction with other participants (Ji et al., 2018; Egawa et al., 2020), the use of argument invention when debating about unknown topics (Bilu et al., 2019), the structure of the arguments (Li et al., 2020), and the effect of the style of the text in achieving persuasion (El Baff et al., 2020).",
"Persuasiveness is, however, not the only way to define whether an argument is good at least not from a deliberation point of view.",
"A good contribution to a debate is one which uncovers a previously unnoticed aspect of a problem, thus generating a perturbation in the discourse (controversies can be productive!).",
"Or else, a good contribution is one that settles an issue, by stating the differences between opposing views and allowing the discourse to stabilize in a series of clusters (convergence on just one position is not necessarily a good outcome).",
"Most recent research projects (Wachsmuth et al., 2017b) aim to address the challenge of redefining the notion of AQ, away from persuasiveness and towards a more situated definition which has to do with the needs of argumentation in a real-world scenario.",
"This new definition has been the basis for the creation of new corpora from different domains (Ng et al., 2020), where feature-based (Wachsmuth and Werner, 2020) and neural models were tested for automatic prediction (Lauscher et al., 2020).",
"Other aspects of AQ have become the subject of AM research such as the relevance and impact of arguments (Durmus et al., 2019), the verifiability (Park and Cardie, 2018), local acceptability (Yang et al., 2019) and the best deliberative move (Al-Khatib et al., 2018).",
"We argue that this shift is necessary for two reasons: (1) Working with real-world applications of AM naturally forces us into the more heterogeneous realm of data structures, such as social media, in which language, structure, and content are less uniform and confined to the classic notion of logical debate; and (2) In order to encourage deliberation from an open audience of citizens, we need to redefine our concept of AQ and productive discourse such that there is equal worth and participation granted to each contributor of the argument.",
"Deliberative Quality We therefore propose adapting the definition of quality to integrate the abundant research on the topic from the field of Social Sciences.",
"Here, the quality of a discourse has been investigated in the context of deliberation with the focus on inclusivity : how can the interplay of the different participants in the discourse lead to an optimal outcome for the collective?",
"The focus here is not on the quality of the individual contributions.",
"Instead, an overall quality of the discourse is determined by the fact that the individual quality dimensions are distributed among different contributions (e.g some participants do more rational reasoning, others share personal experiences).",
"We would like to integrate those aspects that focus on inclusivity and cooperation.",
"Similar to Wachsmuth et al. (2017b), social scientists have developed a taxonomy, the discourse quality index (DQI), that describes the different desirable aspects of a discourse (Steenbergen et al., 2003).",
"This taxonomy has been used to analyze the quality of deliberation in different contexts, ranging from more formal contexts, such as parliamentary debates (Steiner et al., 2005), to informal discussions in online forums (Trenel, 2004).",
"Both implementations integrate logical coherence as one dimension, cogency in Wachsmuth et al. (2017b), justification in the DQI.",
"Some aspects of inclusivity are also being touched upon in the rhetorical and dialectical dimension of Wachsmuth et al. (2017b), such as using appropriate language ( Appropriateness ) or whether an argument supports conflict resolution ( global relevance ).",
"We concentrate on the following dimensions from the DQI, which particularly focus on the collaborative aspect of discourse.",
"Respect : this dimension includes respectful tone, respect for other social groups/backgrounds, and openness towards other opinions.",
"Equality / Participation : it is not desirable that some dominant participants make the bulk of contributions while many others remain passive.",
"All participants should have equal opportunities to contribute and all topics, including those that DQI (Steenbergen et al., 2003) AQ (Wachsmuth et al., 2017b) Description Logical coherence Local acceptability Argument should be sound, rationally worthy Justification level Local sufficiency (Enough) premises should support the claim Local relevance Premises should be suitable to support claim Personal experiences Emotional appeal Argumentation should increase empathy Emotional balance Appropriateness Suitable language and amount of emotions Credibility Is the participant credible?",
"may only affect minorities, are equally relevant.",
"Interactivity : beyond simply sharing opinions, acknowledging other viewpoints and interacting with other participants through listening and responding lead to new perspectives arising compromises can emerge.",
"Testimoniality / Report of personal accounts : sharing stories and personal narratives as an alternative form of communication can involve more people in the discourse, especially those who cannot identify themselves with rational argumentation.",
"It can also make other participants aware of other perspectives as it generally increases empathy.",
"Especially when traditional or universal norms need to be questioned, narratives are particularly well suited, as their ambiguity and vagueness creates room for interpretation.",
"This is particularly important when new ideas or perspectives are introduced, since they cannot yet be rationally articulated.",
"Table 1 establishes a direct comparison between discourse quality dimensions of the DQI (Steenber-gen et al., 2003; Steiner et al., 2005) and argument quality dimensions as defined in Wachsmuth et al. (2017b).",
"Apart from the potential theoretical insights, the existing guidelines can be applied to annotate new or enrich existing corpora for AM.",
"Despite the small size, the data already annotated based on the DQI can be made usable and extended for NLP.",
"In addition, some of the quality dimensions can be further quantified or approximated using statistical methods.",
"For example, interactivity or equality can be assessed with frequency-based methods, such as frequency of posts by distinct participants and response rate.",
"Summing up The overview of the definitions of AQ along with the discussion of the potential of the integration of Deliberative Quality features into an AM framework has one strong take-home message: The need for the scope of the investigation to go beyond",
"(a) the persuasiveness of a an argumentative text (speeches, forum posts, tweets), and",
"(b) their relation to the immediate preceding discourse.",
"Instead, we pointed out the need to also assess the potential of the impact of that argumentative text on the upcoming discourse: this dimension of quality, inherently related to the interpretation of argumentation as a cooperation challenge, is currently lacking in current approaches to AQ.",
"Grounding AQ in a discourse perspective which quantifies team-playing and its impact on discourse dynamics is a clear challenge, both theoretically, in the Social Sciences and Argumentation Theory, and concretely, as the empirical quantification of discourse-grounded AQ will require large annotation efforts, real-time implementations, and thorough evaluation strategies.",
"We propose to make a first step in tackling this challenge by mapping it into a concrete application: (semi-)automatic moderation implemented as a form of discourse optimization , or, as it is commonly referred to in the Social Sciences, facilitation (Kaner et al., 2007; Trenel, 2009).",
"To illustrate the dynamics of moderation, let us start from concrete examples from a deliberation platform, RegulationRoom .",
"This discussion forum has been employed by public institutions to gather citizens contributions on discussions targeting very heterogeneous issues (more details can be found in Appendix).",
"Let us consider the following example from a discussion on the distracted driving by commercial vehicle operators (e.g., truckers and bus drivers).",
"The posts we selected (arrows indicate comment nesting) are from the discussion sub-thread: Texting what are the risks?",
"5 User 1 : In 2004,... the driver failed to move out of the low-clearance lane while talking on a",
"cellphone. This ac-cident happened in 2004!",
"He was TALKING on a CELLPHONE!",
"IMO, Turn Off Cell B/4 Driving! should have become law long B/4 NOW!!",
"All these years have gone by, hundreds of LIVES have been lost, & our society is just NOW starting to work on this issue?",
"AND we think we need to start with small steps like banning TEXTING (& sometimes in just commercial vehicles?)?",
"[...] User 2 : A driver in California recently caused an accident because he spilled his coffee.",
"Another driver almost wrecked because he was trying to light a cigarette.",
"The bottom line is that ANY distraction while driving a car can cause an accident.",
"Where do we draw the line?",
"Also, there are millions of people out there who are completely capable of using their cell phone AND driving, at the same time.",
"Are we proposing that they should be punished, for the inabilities of others?",
"For people who spend much of their time in the car, this time might be their only chance to communicate with loved ones, do business, or make important calls.",
"If they are physically capable to use their phones safely while driving, why restrict their freedoms?",
"Moderator : It's true that any distraction can cause an accident.",
"The agency decided that texting was particularly unsafe, in part on the basis of the VTTI study that we reference lower on the page.",
"Click the graphic to get a sense of the safety risks associated with different activities.",
"A question: do you think that this rule imposes an undue burden on personal communication?",
"What alternative restrictions on texting, if any, would you propose to impose on professional drivers?",
"The example involves two users who clearly differ in their argumentation style and position.",
"User 1 has a clear position on the topic (claim in bold: not just texting, but all cellphone interactions should be banned), which she/he supports with personal reports (underlined text) an emotional tone, and a style which is typical of social media text.",
"User 2 replies, opening the post on a sarcastic note, which serves as the first premise to her/his (implicit) claim which is encoded in three rethorical questions (in bold): there should be no restrictions at all, because imposing them would be unfair.",
"This is the case because (premises underlined): any distraction can cause an accident, some people are capable of using their phone while driving, people who spend lot of time in the car for professional reasons still need 5 archive.regulationroom.org/texting/ design-and-operation/index.html to communicate with loved ones.",
"A moderator then joins the discussion to",
"(a) provide a clarification as to why the focus is on texting and a link to further information on the matter, and",
"(b) ask User 2 to elaborate on the personal communication issue, and to propose alternatives.",
"In the Appendix we report another example from the same topic and thread, where the user acts as a problematizer, challenging the scope and definition of the rule under discussion and the moderator acts as a discourse traffic director, pointing out that the user should read and contribute to different threads in the discussion.",
"The guidelines for human moderators in RegulationRoom have been defined in advance in a 'moderator protocol' (eRulemaking Initiative et al., 2017) which reflect the moderator actions mentioned in the examples.",
"In the protocol the moderator roles were divided into two main classes.",
"Supervision functions include general moderator actions that do not necessarily target the specific content of the posts, e.g., greeting participants, monitoring compliance with netiquette (policing), or helping with technical difficulties.",
"Substantive moderator functions aim to improve the quality of comments and promote fruitful discourse.",
"As the examples above clearly show, this can both mean that the moderator encourages exchanges between discourse participants and participation in other posts (broadening the scope of the discus-sion), or helping users to improve the content of their posts (requests for clarification, focusing on one topic, substantive reasoning, sharing personal experiences).",
"RegulationRoom represents an excellent example of the beneficial role of the moderator in maintaining productive argumentation from participants.",
"However, to the best of our knowledge, there is little to no NLP work targeting moderation modeling.",
"Park et al. (2012) used data from RegulationRoom and conducted an annotation study to empirically categorize the types of moderator interventions specified in the moderator protocol.",
"Classification experiments were conducted using SVM to predict the type of action a moderator would perform, given the previous comment.",
"However this work is limited as it only focuses on two types of moderator interventions (broadening the scope of the discussion, improving argument quality) and as it does not predict whether the moderator should intervene, building on the assumption that a given comment has already been flagged as in need for moderation.",
"Besides the concrete example of RegulationRoom, moderation and discourse facilitation have been, and still are, a crucial topic in digital democracy.",
"6 The know-how of digital democracy experts is an invaluable starting point for the application of AM to moderation, as current research targets both the integration of digital solutions to facilitate online campaigns, and a critical reflection of the effects of such innovations on the deliberation outcomes.",
"Digital innovation supporting deliberation Argument maps (Walton, 2005) are widely employed to support online discussions, as an emerging optimization of the deliberation.",
"Given a specific topic, for example possible reactions to climate change, users who wish to contribute to the discussion are requested to structure their contribution by producing an item in a conceptual map and optionally writing an accompanying post.",
"Their contribution to the argument maps is often reviewed by a moderator.",
"So in a sense, the argument map for a given deliberation process is the outcome of a process that comes both from below (the user) and above (the moderator).",
"Thanks to argument maps, the overall discourse picture can be overviewed and it is easier for the group of contributors to express support for one (or many) of the available options, without having to read a large number of long posts.",
"An example of this approach is represented in Deliberatorium 7 , an e-deliberation platform which has been extensively employed in many reference studies on the effect of digital innovation on deliberation (Klein, 2011).",
"Another example of a digital deliberation platform which integrates argument maps and offers an option for moderation is COLAGREE (Yang et al., 2021; Ito, 2018).",
"Among the studies testing the impact of such digital platforms on online deliberation, Spada et al. (2015) tests the effect of Deliberatorium 's argument maps on an online discussion among the supporters of the Italian Democratic party concerning the desired features of electoral law to be proposed by the party to the Parliament.",
"This study compared the discussion of users employing Deliberatorium and a control group using a traditional forum format which was then encoded into argument maps.",
"The comparison showed that 6 See Dahlberg (2011) for an outline of positions in deliberative democracy.",
"the argument map modality did not discourage participation, and while it appeared to make users less creative (fewer new ideas as compared to the traditional forum), it also reduced the rate of claims without further discussion.",
"Yet, the need for trained moderators tends to be a significant bottleneck (both in terms of time and of costs) in digital deliberation.",
"Moreover, empirical research on the effect of moderation on deliberation has uncovered the risks of biased moderation.",
"For example, the experiment in Spada and Vreeland (2013) tests the extent to which moderators can influence participants' behavior by expressing their views during the moderation process.",
"NLP-supported moderation represents a clear solution to the bottleneck problem affecting facilitation in digital democracy.",
"Automatic tools can take over some of the tasks that human moderators typically perform when monitoring online discussions.",
"For example, in Social Sciences, one of the most discussed issues in crowd-scale deliberation is flaming, i.e., aggressive and disrespectful communicative behavior (Lampe et al., 2014).",
"Here, moderators could benefit from hate-speech and trolling detection methods in NLP.",
"NLP methods to support deliberative decision-making have already been applied for the real-time visualisation of argument maps (El-Assady et al., 2017).",
"Deliberation in real-time applications has the clear potential of structured arguments extraction from the news media (Daxenberger and Gurevych, 2020), the identification of the argumentative structure in deliberative contexts (Liebeck et al., 2016), as well as automatic argument summarization (Lawrence et al., 2017).",
"Beyond the real-time support to users (and moderators) provided by the methods described above, further tasks specific to AM which are part of the role of a human or (semi-)automoated moderator include: detecting fallacies (Habernal et al., 2018b), reasoning and common-sense (Habernal et al., 2018a), relevance estimation (Potthast et al., 2019).",
"In addition, detecting and highlighting parts of an argument that are a good target for attacks (Jo et al., 2020a) can help the moderator to motivate more participation and argumentation from opposing sides of a discussion.",
"Another important source is the detection of implicitly asserted prepositions (Jo et al., 2020b) which has a counterpart in the framing detection task (Card et al., 2015; Akyurek et al., 2020), as framing is a manipulation strategy which highlights specific aspects of an issue under discussion to promote certain interpretations.",
"Further NLP tasks which can play a crucial role in ensuring a healthy interaction are, for example, Hate Speech Detection (Warner and Hirschberg, 2012; Waseem and Hovy, 2016; Schmidt and Wie-gand, 2017), Fact Checking (Vlachos and Riedel, 2014; Kotonya and Toni, 2020), Facts recognition and source identification (Dusmanu et al., 2017).",
"How to represent discourse?",
"Thus far, we have discussed the main ingredients of a rich NLP-informed approach to deliberative discourse.",
"These components, together with the deliberation-augmented definition of AQ sketched in section 3 are the features that the NLP moderator takes as an input.",
"One question remains open: How to represent the argumentative discourse within a contribution (e.g. a forum post) and across contributions (e.g. an entire online deliberation campaign)?",
"We can approach also this question from an interdisciplinary perspective.",
"Reference work in political science aims at modeling the mechanisms of political discourse in forms of discourse networks, as defined in Leifeld (2017).",
"A discourse network is a bipartite graph, containing two classes of nodes: actors (e.g. Angela Merkel; the left-wing party; etc.) and claims (e.g. housing opportunities should be established for refugees); Edges between actors and claims indicate the support or opposition of a certain actor to a specific claim.",
"Discourse coalitions (Hajer, 1993) and argumentative clusters are the projection of the affiliation network on the actor and claim sides of the network (Leifeld and Haunss, 2012; Haunss et al., 2013).",
"Recent NLP research has targeted integration machine learning in the discourse network analysis workflow (Pado et al., 2019; Haunss et al., 2020).",
"Crucially for AM, discourse networks can integrate claims and actors with a third class of nodes, the frame nodes, which encode the reason put forward by an actor to support or reject a claim.",
"This type of representation is perfectly compatible with a graph-based approach on argument representation which has already been established as to be preferred to a tree-structure representation both empirically (Niculae et al., 2017) and theoretically (Afantenos and Asher, 2014).",
"Several moderation desiderata can be recast in terms of such a network: participant inclusion can be enforced by ensuring that the contributions of peripheral actor nodes receive the deserved salience; argument mapping and summarization can be modeled by identifying hot sub-graphs in the network; and the impact of a contribution (the grounded notion of AQ we have been advocating thus far) can be quantified as the perturbation introduced in the network, with its long-term effects on convergence or polarization.",
"Who moderates the (NLP) moderators?",
"The problem of biased moderation obviously relates to the issue of bias in NLP (Blodgett et al., 2020; Caliskan et al., 2017; Bolukbasi et al., 2016; Spli-ethover and Wachsmuth, 2020) and it has a clear implication in the application of NLP methods to moderation.",
"For example, we would not want our NLP models to infer a negative impact on AQ from cues which just reveal that the user belongs to certain groups.",
"This is a real risk when quality is equated to success, in turn quantified in terms of likes, replies, retweets.",
"The public of a forum may be sensitive to such cues, but the moderator should be unbiased with respect to them.",
"Another source of bias is the degree of literacy of a contribution: while users who express themselves poorly are likely to be less popular with the forum public, their contributions may still be a very good move in the cooperation challenge one that moderators (NLP or humans, online or in-person) have to ensure will not be left unexploited.",
"While there are clear social drawbacks to working with data and approaches to AM that limit the participation of the argumentation/deliberation, opening the floodgates to unregulated, evenly weighted contribution of all arguments also presents a dilemma.",
"We present an interdisciplinary formulation of the notion of argument quality, which is more apt to work with heterogeneous data and platforms, such as discussion forums and social media.",
"With the goal of ensuring a productive development of the discourse, we propose NLP-supported moderation to facilitate argumentation and deliberation in digital democracy.",
"We acknowledge funding by the Bundesministeri-umfur Bildung und Forschung (BMBF) through the project E-DELIB ( Powering up E-deliberation: towards AI-supported moderation )."
] | [
"abstain",
"method",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"other"
] |
[
"We find that existing language modeling datasets contain many near-duplicate examples and long repetitive substrings.",
"As a result, over 1% of the unprompted output of language models trained on these datasets is copied verbatim from the training data.",
"We develop two tools that allow us to deduplicate training datasetsfor example removing from C4 a single 61 word English sentence that is repeated over 60 , 000 times.",
"Deduplication allows us to train models that emit memorized text ten times less frequently and require fewer training steps to achieve the same or better accuracy.",
"We can also reduce train-test overlap, which affects over 4% of the validation set of standard datasets, thus allowing for more accurate evaluation.",
"Code for deduplication is released at https://github.com/google-research/ deduplicate-text-datasets .",
"A key factor behind the recent progress in natural language processing is the development of large-scale text corpora used to train increasingly large language models.",
"These datasets have grown from single gigabytes to as much as a terabyte over the past few years (Chelba et al., 2013; Xue et al., 2020; Graff et al., 2003; Brown et al., 2020).",
"Because it is so expensive to perform manual review and curation on massive datasets, they tend to suffer in quality compared to their smaller predecessors.",
"This has implications far beyond metrics like perplexity and validation loss, as learned models re-flect the biases present in their training data (Ben-der et al., 2021; Wallace et al., 2019; Sheng et al., 2020).",
"Quantitatively and qualitatively understanding these datasets is therefore a research challenge in its own right (Dodge et al., 2021a).",
"We show that one particular source of bias, duplicated training examples, is pervasive: all four common NLP datasets we studied contained duplicates.",
"Additionally, all four corresponding validation sets contained text duplicated in the training set.",
"While naive deduplication is straightforward (and the datasets we consider already perform some naive form of deduplication), performing thorough deduplication at scale is both computationally challenging and requires sophisticated techniques.",
"We propose two scalable techniques to detect and remove duplicated training data.",
"Exact substring matching identifies verbatim strings that are repeated.",
"This allows us to identify cases where only part of a training example is duplicated (4.1).",
"Approximate full document matching uses hash-based techniques (Broder, 1997) to identify pairs of documents with high n -gram overlap (4.2).",
"1. Over 1% of tokens emitted unprompted from a model trained on standard datasets (e.g., C4) are part of a memorized sequence (See 6.2) even though the 1.5 billion parameter model is much smaller than the 350GB dataset it was trained on.",
"By deduplicating the training dataset we reduce the rate of emitting memorized training data by a factor of 10 .",
"2. Train-test overlap is common in non-deduplicated datasets.",
"For example, we find a 61-word sequence 1 in C4 (Raffel et al., 2020) that is repeated 61 , 036 times verbatim in the training dataset and 61 times in the validation set ( 0 . 02% of the samples in each dataset).",
"1 by combining fantastic ideas, interesting arrangements, and follow the current trends in the field of that make you more inspired and give artistic touches. We'd be honored if you can apply some or all of these design in your wedding. believe me, brilliant ideas would be perfect if it can be applied in real and make the people around you amazed! 1 8424 This train-test set overlap not only causes researchers to over-estimate model accuracy, but also biases model selection towards models and hyperparameters that intentionally overfit their training datasets.",
"3. Training models on deduplicated datasets is more efficient.",
"Processing a dataset with our framework requires a CPU-only linear-time algorithm.",
"And so because these datasets are up to 19% smaller, even including the deduplication runtime itself, training on deduplicated datasets directly reduces the training cost in terms of time, dollar, and the environment (Bender et al., 2021; Strubell et al., 2019; Patterson et al., 2021).",
"4. Deduplicating training data does not hurt perplexity: models trained on deduplicated datasets have no worse perplexity compared to baseline models trained on the original datasets.",
"In some cases deduplication reduces perplexity by up to 10% .",
"Further, because recent LMs are typically limited to training for just a few epochs (Radford et al., 2019; Raffel et al., 2020), by training on higher quality data the models can reach higher accuracy faster.",
"To summarize, data duplication offers significant advantages and no observed disadvantages.",
"In the remainder of this paper we present our text deduplication framework in 4, and study the extent of duplicate content in common NLP datasets (e.g., C4, Wiki-40B, and LM1B) in 5.",
"We then examine the impact of deduplication on test perplexity (6.1) and on the frequency of emitting memorized content (6.2).",
"Finally, we analyze to what extent perplexity on existing, released models are skewed as a result of overlap between the train and test/validation splits (6.3).",
"Large language model datasets.",
"While we believe our results are independent of model architecture, we perform our analysis on Transformer-based decoder-only language models (Vaswani et al., 2017) trained for open-ended text generation.",
"These current state-of-the-art models are trained on internet text.",
"For example, the GPT-2 family of models Radford et al. (2019) is trained on Web-Text, a dataset of web documents highly ranked on Reddithowever this dataset was not made available publicly.",
"A common dataset starting point is CommonCrawl, an index of public webpages.",
"Among the models trained on CommonCrawl include GPT-3 (Brown et al., 2020) with the addition of book datasets, GROVER (Zellers et al., 2019) on a restricted subset filtered to news domains called RealNews, and T5 (Raffel et al., 2020) on a cleaned version of common crawl called C4.",
"Other models are trained on more curated Internet sourcesfor example Guo et al. (2020) used high quality processed Wikipedia text from 40 different languages to train monolingual 141.4M parameter language models.",
"Non-English models necessarily use different datasets; Zeng et al. (2021) for instance introduced PANGU , a family of models with up to 200B parameters that were trained on a non-public corpus of cleaned and filtered Chinese-language documents from CommonCrawl and other sources.",
"Since many of these datasets are not public, we deduplicate three that are: Wiki-40B, C4, and RealNewsas well as the One Billion Word Language Model Benchmark (Chelba et al., 2013), a smaller dataset commonly used for evaluation.",
"Contamination of downstream tasks.",
"When models are trained on datasets constructed by crawling the Internet, it is possible the model will train on the test set of downstream target tasks.",
"For example, Radford et al. (2019, 4) performed a posthoc analysis to identify 8-gram overlaps between GPT-2's training set and datasets used for evaluation, and Dodge et al. (2021b) analyzed C4 and found that up to 14.4% of test examples for various standard tasks were found verbatim (normalizing for capitalization and punctuation) in the dataset.",
"A more proactive approach removes contaminated data.",
"Trinh and Le (2018, Appendix B) removed documents from their CommonCrawl-based train set that overlapped substantially with the commonsense reasoning used for evaluation.",
"And GPT-3 (Brown et al., 2020, 5) did the reverse and removed downstream evaluation examples from their training data by conservatively filtering out any train set examples with a 13-gram overlap with any evaluation example.",
"Up to 90% of tasks were flagged as potentially contaminated.",
"In our research, we do not focus on the impact of duplicate text in pretrained models on downstream benchmark tasks; instead we address how duplicate text in the LM training and validation sets impacts model perplexity and the extent to which generated text included memorized content.",
"Memorizing training data.",
"The privacy risks of data memorization, for example the ability to extract sensitive data such as valid phone numbers and IRC usernames, are highlighted by Carlini et al. (2020).",
"While their paper finds 604 samples that GPT-2 emitted from its training set, we show that over 1% of the data most models emit is memorized training data.",
"In computer vision, memorization of training data has been studied from various angles for both discriminative and generative models (e.g. Arpit et al., 2017; Webster et al., 2019; Feldman and Zhang, 2020; Stephenson et al., 2021; Teter-wak et al., 2021).",
"Duplicate text in training data.",
"The Book Corpus (Zhu et al., 2015), which was used to train popular models such as BERT, has a substantial amount of exact-duplicate documents according to Bandy and Vincent (2021).",
"Allamanis (2019) shows that duplicate examples in code datasets cause worsened performance on code understanding tasks.",
"We analyze the presence of duplicate text in four datasets of varying sizes that have been used for training natural language generation systems, producing general-purpose pre-trained models, and for language model benchmarking.",
"While this paper restricts itself to English datasets, we expect that non-English datasets suffer from similar issues and could likewise benefit from de-duplication.",
"Wikipedia (Wiki-40B) consists of multi-lingual cleaned Wikipedia text (Guo et al., 2020).",
"We take the English portion, which contains 2.9M Wikipedia pages with an average length of 768 BPE tokens.",
"The dataset creators do not indicate any deduplication was performed aside from removing redirect-pages (e.g., sunflower to Helianthus).",
"One-Billion Word benchmark (LM1B) contains 30M sentences of news commentary (Chelba et al., 2013).",
"Unlike the other datasets we analyze, LM1B's examples are one sentence long rather than multi-sentence documents.",
"The average example length is 32 BPE tokens.",
"While this dataset is extremely standard for benchmarking language models, Radford et al. (2019, Sec 4) note it has 13.2% overlap of the test set with the train set.",
"Colossal Cleaned Common Crawl (C4) is made up of 360M web documents, with an average length of 486 BPE tokens (Raffel et al., 2020).",
"C4 was introduced as a pre-training dataset for T5, a set of encoder-decoder models which have been widely used in fine-tuned downstream tasks.",
"The dataset was previously deduplicated in a more sophisticated process than the prior two datasets.",
"Each paragraph was hashed and paragraphs resulting in hash collisions were removed.",
"This was followed by a pass that removed placeholder text, code, and prohibited words.",
"See Dodge et al. (2021a) for a detailed breakdown of the source text in C4.",
"RealNews is a subset of the Common Crawl consisting of articles from news domains (Zellers et al., 2019).",
"It contains 31M documents with average length 793 BPE tokens.",
"RealNews was deduplicated by inserting a hash of the first 100 characters of each document into a bloom filter (Bloom, 1970) and then excluding any document which resulted in a hash collision.",
"Like C4, examples with duplicate URLs were excluded.",
"The simplest technique to find duplicate examples would be to perform exact string matching between all example pairs, but as we will show, this is insufficient.",
"We introduce two complementary methods for performing deduplication.",
"First, using a suffix array (Manber and Myers, 1993), we remove duplicate substrings from the dataset if they occur verbatim in more than one example.",
"Second, we use MinHash (Broder, 1997), an efficient algorithm for estimating the n -gram similarity between all pairs of examples in a corpus, to remove entire examples from the dataset if they have high n -gram overlap with any other example.",
"We consider a dataset D = { x i } Ni =1 as a collection of examples x i .",
"Each of these examples is itself a sequence of tokens : x i = (cid:2) x 1 i , x 2 i , , x s i i (cid:3) .",
"Due to the diversity of possibilities in human language, it is rare for the same idea to be expressed identically in multiple documents unless one expression is derived from the other, or both are quoting from a shared source.",
"This observation motivates deduplicating exact substrings.",
"We call our approach EXACTSUBSTR .",
"When two examples x i and x j share a sufficiently long substring (that is, a substring for which x",
"a..a + k i = x",
"b..b + k j ), that substring is removed from one of them.",
"Based on statistical analyses (B), we select k = 50 tokens as the minimum matching substring length.",
"This exact-substring-matching criterion, while conceptually simple, is computationally prohibitive with naive (quadratic) all-pair matching.",
"To improve the efficiency, we concatenate all the examples of the entire dataset D into a giant sequence S , and construct a Suffix Array A of S .",
"A suffix array (Manber and Myers, 1993) is a representation of a suffix tree (Weiner, 1973) that can be constructed in linear time in (cid:107)S(cid:107) (Krkkinen and Sanders, 2003) and enables efficient computation of many substring queries; in particular, they allow us to identify duplicated training examples in linear time.",
"Suffix arrays have the advantage over suffix trees in that they are 10100 more memory efficient (Manber and Myers, 1993), requiring just 8 bytes per input token, though they are asymptotically less efficient for some query types.",
"They have been used widely in NLP, such as for efficient TF-IDF computation (Yamamoto and Church, 2001) and document clustering (Chim and Deng, 2007).",
"The suffix array A for a sequence S is a lexicographically-ordered list of all suffixes contained in the sequence.",
"Formally, A ( S ) = arg sort all_suffixes ( S ) For example, the suffixes of the sequence banana are (banana, anana, nana ana, na, a) and so the suffix array is the sequence (6 4 2 1 5 3).",
"In practice, we construct S from the bytes of the BPE tokenization of the text (6).",
"After constructing A , it is straightforward to identify duplicated training examples.",
"Suppose that the sequence s was repeated exactly twice in the training dataset S at positions i and j , that is, S",
"i..i + | s | = S",
"j..j + | s | .",
"Then the indices i, j will occur adjacent to each other in the suffix array A .",
"Finding all repeated sequences is thus a matter of linearly scanning the suffix array from beginning to end and looking for sequences A i , A i +1 that share a common prefix of at least some threshold length.",
"Any satisfying sequences are recorded.",
"This algorithm is embarrassingly parallel, and so we can efficiently process the dataset.",
"Based on experimentation (Appendix B), we choose a threshold length of 50 BPE tokens for all experiments.",
"We also perform approximate deduplication based on matching entire examples.",
"This method, which we call NEARDUP , is a good complement to the exact substring matching, especially for web crawl text, as it handles the very common case of documents being identical except for interspersed templated fields (such as the last row of Table 1).",
"MinHash (Broder, 1997) is an approximate matching algorithm widely used in large-scale deduplication tasks (Versley and Panchenko, 2012; Gabriel et al., 2018; Gyawali et al., 2020), including to deduplicate the training set for a large Chinese-language LM (Zeng et al., 2021).",
"Given two documents x i and x j , the main idea is to represent each document by its respective set of n -grams d i and d j .",
"We can then use hash functions to approximate the Jaccard Index (Jaccard, 1912): Jaccard( d i , d j ) = | d i d j | / | d i d j | If the Jaccard Index between d i and d j is sufficiently high, it is likely that documents are approximate matches of each other.",
"To efficiently approximate the Jaccard index, MinHash constructs document signatures by sorting each of the n -grams via a hash function, and then keeping only the k smallest hashed n -grams.",
"There are multiple ways to construct estimators of the Jaccard index from these kinds of signatures (Cohen, 2016).",
"In our implementation, we use 5-grams and a signature of size 9,000.",
"The probability that two documents are considered a potential match is Pr( d i , d j | Jaccard( d i , d j ) = s i,j ) = 1 (1 s bi,j ) r where b = 20 and r = 450 are user-settable parameters to control the strength of the filter.",
"See Appendix A for more details.",
"For each pair of documents identified as a potential match, more computationally expensive similarity metrics can be employed as a subsequent filtering step.",
"In particular, we identify two documents as duplicates if they are matched by the MinHash algorithm and their edit similarity is greater than 0.8.",
"The edit similarity between token sequences x i and x j is defined as: EditSim( x i , x j ) = 1 EditDistance( x i , x j ) max( | x i | , | x j | ) To build clusters of similar documents, we construct a graph that has an edge between two documents if they are considered a match.",
"Then, we 4 8427 Dataset Example Near-Duplicate Example Wiki-40B \\n_START_ARTICLE_\\nHum Award for Most Impactful Character \\n_START_SECTION_\\nWinners and nomi-nees\\n_START_PARAGRAPH_\\nIn the list below, winners are listed first in the colored row, followed by the other nominees.",
"use the method introduced in acki et al. (2018) to identify connected components. A breakdown of the computation needed is given in Appendix A.",
"We deduplicate each of the four datasets with both of our two techniques. When text was duplicated across multiple data splits, we prioritized keeping a copy in the test or validation set and removing it from the train set.",
"With NEARDUP , we found that the web-scrape datasets contain between 3.04% (on C4) to 13.63% (on RealNews) near duplicates (Table 2). Near-duplicate text is much less common in Wiki-40B, forming only 0.39% of the train set. 2 In C4, the majority (1.8M) of near-duplicate clusters consisted of just a single pair of examples that matched against each other, but there were 280 clusters with over 5,000 examples in them (Figure 1), including one cluster of size 250,933.",
"On average with EXACTSUBSTR , we remove more total content than with NEARDUP (de-spite EXACTSUBSTR not removing any examples outright)for example removing 7 . 18% of the tokens in C4. The exception is LM1B, where EXACTSUBSTR removes 8 less data than NEARDUP . On investigation, we find this is due to the fact that LM1B documents are significantly shorter: 90% of all documents are under 50 tokens, and so are not even candidates for potential matches even if the entire sequence matched verbatim. We find that both NEARDUP and EXACTSUBSTR remove similar content 77% of the training examples that NEARDUP removes from C4 have at least one verbatim length50 match found by EXACTSUBSTR .",
"While the authors of both RealNews and C4 explicitly attempted deduplication during dataset construction, the methods were insufficient to capture the more subtle types of duplicate text commonly found on the internet. In C4 and Wiki-40B, we qualitatively observe that much of the text identified as near-duplicated is computer-generated. The text is identical except for the names of places, businesses, products, dates, and so on. Because these examples frequently differ by just a few words at a time, deduplication strategies relying on exact string matching would fail to identify a match. Example duplicate pairs from each dataset can be found in Table 1 (more examples in the Appendix).",
"For RealNews and LM1B, derived from news sites, we observe that many near-duplicates occur because the same news article appears on multiple news sites with slightly different formatting. For example, in LM1B, there is one example that starts MINEOLA , N.Y. New York officials say [...] and another that starts ( AP ) New York officials say [...].",
"The two examples are otherwise identical.",
"Both deduplication methods identify overlap between the train set and the validation set (Table 2).",
"For example, 4.6% of the C4 validation set and 14.4% of the RealNews validation set examples had an approximate duplicate in their respective training sets.",
"Such duplication is problematic since it could cause evaluation metrics to be unfairly in-flated for models that are better at memorizing their train sets.",
"We evaluate the effect of this leakage on publicly released models in Section 6.3.",
".",
"We trained 1.5B parameter XL\", decoder-only, Transformer-based language models similar to GPT-2, on C4-O RIGINAL , C4-N EARDUP , and C4-E XACTSUBSTR , respectively. We use the T5 codebase and model architecture from Raffel et al. (2020), and each model was trained for about two epochs on its respective dataset. To better understand the amount of variance in the perplexities of trained models, we also trained three different random seeds of the 110M parameter base\" model for each of the above three datasetsfor a total of nine base-sized models. For all experiments, we used a Byte Pair Encoding (BPE) vocabulary trained on C4-N EARDUP 0 5 10 15 20 25 30 35 Perplexity C4 Original C4 Duplicates C4 Unique LM1B Wiki40B E v a l u a t i o n d a t a s e t Training data Original NearDup ExactSubstr Figure 2: Impact of deduplicating the training set on validation perplexity. We plot the results from T5 XL (see Appendix for base-sized model). For C4, we evaluate on C4 Original , the original validation set; C4 Unique , a subset of the validation set identified by NEARDUP as having zero matches across C4; and C4 Duplicates , a subset of the validation set identified by NEARDUP as having a match in the C4 train set. with a budget of 50K tokens, which resulted in a vocabulary the same size as GPT-2's. We trained with a maximum sequence length of 512 tokens (for longer documents, we randomly extracted subsequences of this length.) Further training details can be found in Appendix C. 6.1 Model Perplexity We computed the perplexity of our trained models on the validation sets of LM1B and Wiki-40B, and on subsets of the C4 validation set (Figure 2). For the base size, we observe that all models have similar perplexity on the original C4 validation set and on validation set examples that were identified as unique (no near-duplicate in either train or validation). However, both models trained on deduplicated data have significantly higher perplexity on validation set examples that have duplicates in the training set than the model trained on the original C4. EXACTSUBSTR -deduplicated results in higher perplexity than NEARDUP -deduplicated. These trends holds true for the XL sized model as well. While this may suggest EXACTSUBSTR duplication results in models least overfit on the train set, note that both of these techniques have used separate duplicate thresholds and a different choice of thresholds could change the results. When evaluating on the validation sets of LM1B and Wiki-40B, we found that models trained on NEARDUP -deduplicated C4 consistently achieved lowest perplexity (for LM1B eval with base models, see Appendix Figure 7). EXACTSUBSTR deduplication decreases perplexity of the XL model by almost 3 points perplexity on Wiki-40B which is 6 8429 Model 1 Epoch 2 Epochs XL-ORIGINAL 1.926% 1.571% XL-NEARDUP 0.189% 0.264% XL-EXACTSUBSTR 0.138% 0.168% Table 4: When generating 100k sequences with no prompting, over 1% of the tokens emitted from a model trained on the original dataset are part of a 50-token long sequence copied directly from the training dataset. This drops to 0 . 1% for the deduplicated datasets. much larger than the variation of about 1 point perplexity we observed in the base models. This is despite seeing fewer tokens of training data overall. Lastly, we note all our XL models achieved <35 perplexity on LM1B, which is less than the 42.16 perplexity reported for the 1.5B GPT-2 using a vocabulary the same size as ours. 
6.2 Generated Text Data duplication has the effect of biasing the trained LM towards particular types of examples. This can contribute to a lower diversity of generations, and increased likelihood that the generated content is copied from the training data (Carlini et al., 2020). For our generation experiments, we use topk random sampling with k = 50 and experiment with prompted and unprompted generation. No prompt. We first evaluate memorization tendencies in the case where the model is asked to generate text without any prompt sequence. We generate 100,000 samples, each up to 512 tokens in length (examples provided in the Ap-pendix). For each generated token, we say the token is memorized if it is part of a 50-token substring that is exactly contained in the training data. On XL-ORIGINAL , over 1% of the generated tokens belong to memorized sub-sequences (see Table 4). This is 10 more memorization than XL-EXACTSUBSTR or XL-NEARDUP . Some example subsequences that were copied verbatim from the train set can be found in Table 9 in the Appendix. With prompting. In most real use cases, language model generation is controlled by providing a prompt for the model to continue. We experiment with four possible prompt sources: training examples identified by EXACTSUBSTR as having near-duplicates in the train set (train dup), training examples identified as unique (train unique), validation set examples with a near-duplicate in the train set (valid in train), and validation set ex-0.0 0.1 0.2 0.3 0.4 Fraction of LM continuations matching true continuation train dup train unique valid in train valid unique P r o m p t s o u r c e Training data Original NearDup ExactSubstr Figure 3: The proportion of generations which have edit similarity above 0.8 with the groundtruth continuation when using the LM to generate continuations for 32-token prompts identified by NEARDUP as either duplicated or unique. Model Dataset Orig Dups Unique Transformer-XL LM1B 21.77 10.11 23.58 GROVER-Base RealNews 15.44 13.77 15.73 GROVER-XL RealNews 9.15 7.68 9.45 Table 5: For each model, the perplexity of the official validation set ( Orig ), valid set examples which were identified by NEARDUP as matches of train set examples ( Dups ), and valid set examples identified by NEARDUP as unique ( Unique ). Due to the size of the RealNews validation set, we evaluated on only the first 25k examples meeting each condition. amples identified as unique across all splits (valid unique). We select the first 32 tokens of each example as the prompt, which means we can evaluate the fraction of generations which are near-duplicates with the ground-truth continuation for the prompt (Figure 3). When the prompt comes from duplicate examples in the train set, XL-ORIGINAL reproduces the groundtruth continuation over 40% of the time. XL-EXACTSUBSTR and XL-NEARDUP still copy the groundtruth more often when the prompt comes from a duplicate example than when the prompt comes from a unique example, suggesting that more stringent deduplication may be necessary to remove memorization tendencies entirely. 6.3 Impact on Existing Models Train-test leakage does not just impact models trained on C4. Table 5 shows that the presence of near-duplicates of the evaluation set in the train set has a significant impact on model perplexity for two standard models: Transformer-XL (Dai et al., 2019), which was trained on LM1B, and GROVER (Zellers et al., 2019), which was trained on RealNews. 
"For Transformer-XL, the perplexity halves on examples identified as near-duplicates.",
"For GROVER, the difference, though not quite as stark, is present in both model sizes considered.",
"Existing models also suffer from the problem of generating text from their train sets.",
"We find that 1.38% of the tokens in the official release of 25k GROVER-Mega outputs (gs://grover-models/generation_examples/generator=mega~dataset=p0.90.jsonl) are part of verbatim matches in RealNews of at least length 50.",
"Likewise, more than 5% of the tokens in ~200k sequences output by GPT-Neo 1.3B (Black et al., 2021) are part of 50-token matches of its training data, the Pile (Gao et al., 2020).",
"7 Discussion",
"The focus of this paper is on the datasets used to train language models.",
"While recent work has focused on documenting the potential harms that could arise from problematic datasets (Bender and Friedman, 2018; Gebru et al., 2020), less work has been done to quantitatively analyze properties of real language modelling datasets, as Dodge et al. (2021a) have done for C4.",
"Our paper provides analysis on one particular axis, that of data duplication.",
"Our experiments measured what could be quantified: the amount of duplicate content in common datasets, the effect of deduplication on trained model perplexity, and the reduction of memorized content in trained models through deduplication.",
"We do not focus on the nature of the data being removed by deduplication or memorized by LMs.",
"Privacy is an important subject for future work, as memorized training data has significant privacy consequences.",
"By this, we mean the standard privacy definition that a model should not reveal anything particular to the specific dataset it was trained on, as opposed to another training dataset from a similar distribution (Shokri et al., 2017); another interpretation of privacy focuses on the sensitivity of the data involved, when a model is trained on and able to reproduce personal identifiers or other forms of private data, and our definition is more expansive.",
"Training on standard datasets that have not yet been deduplicated results in models that are particularly sensitive to examples that happened to be repeated multiple times, and this has negative privacy implications.",
"For instance, it could violate a person's expectations of privacy if their publicly available personal data appeared in a different, surprising context.",
"Downstream applications of LMs, such as the game AI Dungeon (https://play.aidungeon.io/), should also not output memorized content like adverts for real products.",
"We stress that in our experiments, we do not distinguish between undesired memorized text (such as phone numbers), innocuous memorized text (common phrases), and text we may want to be memorized (such as a quote by a public figure), and instead treat all instances of the LM generating text that closely matches the training set as problematic.",
"While we qualitatively observed that much of the identified memorized content was relatively innocuous, a more systematic study of the risks associated with the detected memorization was beyond the scope of this work.",
"We also do not investigate the negative consequences of deduplication.",
"Some language tasks explicitly require memorization, like document retrieval or closed-book question answering.",
"Also, text that gives attribution is often duplicated across documents, so removing duplicate substrings could correspond to removing just the attribution, which could result in models that learn the content without its attached attribution.",
"Deduplication is also not sufficient to remove privacy-sensitive data like bank passwords and medical records which should never be used in training data (Brown et al., 2022).",
"Ultimately, whether memorization is a desired property of a language model, or else risky and unwanted, depends both on the nature of the text that has been memorized and on the downstream applications of the trained model.",
"However, since the trend has been towards creating datasets and models that are application-agnostic, we encourage researchers to think carefully about the limitations of the data they have collected and the how the model's intended usage constrains what should be part of the training set.",
"Developing techniques to memorize or forget specific sequences depending on the end application is a promising research direction.",
"We encourage future language model research to perform dataset deduplication, either by training on the deduplicated datasets we release, using the deduplication tools we release, or following our approach to deduplicate datasets with new tools.",
"The exact technique used to perform deduplication is less important than performing stringent deduplication in the first place.",
"On the whole, dedu-5 https://play.aidungeon.io/ 8 8431 plication does not harm, and sometimes improves, model perplexity, despite the fact that the deduplicated datasets are smaller and faster to train on.",
"It is especially important that there are no duplicates between the training and testing sets, because overlap here explicitly encourages selecting models that memorize the training data.",
"Lastly, deduplication helps to reduce some of the privacy concerns around LMs memorizing their training data.",
"The developers of large language models typically attempt to create training data that reflects natural human communication, but current methods to collect and curate such datasets are fallible.",
"There are multiple reasons some text ends up over-represented.",
"For example, bot replies, auto-generated templates, and licenses are repeated for structural (e.g., legal, economical) reasons (as was also observed by Dodge et al. (2021a)).",
"Additionally, common techniques for acquiring and cleaning data can result in an over-representation of particular subsets of world users, often those who are English-speaking and publishing in established forums.",
"This effectively under-represents non-English speakers as well as groups whose communication mostly occurs outside of the public web.",
"In this paper, we focus on the problem of over-representation of some types of text (struc-tural duplicates) but do not address the problem of under-representation of others.",
"Additionally, while we discuss when memorized content might be desired and when it might not be desired, our analysis does not disambiguate these two cases.",
"Work to disambiguate helpful from harmful memorization is tremendously complex and would require a different set of research methodologies than are presented in this work.",
"We are grateful to the many researchers whose technical help, feedback, and discussions shaped this project: Jacob Austin, Samy Bengio, Olivier Bousquet, James Bradbury, Fernando Diaz, Mark Diaz, Noah Fiedel, Jonathan Frankle, David Grangier, Stefanie Karp, David Mimno, Gaurav Mishra, Michael Mozer, Sharan Narang, Alex Pas-sos, Adam Roberts, Hanie Sedghi, Jascha Sohl-dickstein, David So, Florian Tramer, and Yun William Yu.",
"We are also grateful to the Google Brain women who have given us continuous support.",
"Chris Callison-Burch and Daphne Ippolito's research is supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), and the IARPA BETTER Program (con-tract 2019-19051600004).",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, IARPA, or the U.S. Government.",
"Each of the authors on this paper significantly contributed to the final results.",
"Katherine trained the models used in the paper, built and ran the eval and text generation pipelines, contributed significantly to writing, analysis, and project organization and management.",
"Daphne ran the approximate matching data deduplication pipelines, extracted prompts and evaluation datasets, ran eval pipelines, and contributed significantly to planning, writing, and analysis.",
"Andrew wrote the code to perform deduplication with approximate matching, helped evaluate energy expenditure, and helped with analysis.",
"Chiyuan helped generate plots and contributed to project scoping, writing, and data analysis.",
"Chris offered mentorship and guidance throughout the project and contributed to writing.",
"Doug offered mentorship and guidance throughout the project and contributed to writing.",
"Nicholas wrote the suffix array implementation, ran all EXACTSUBSTR deduplication experiments, contributed significantly to planning, writing, and analysis, as well as scoping the project."
] | [
"result",
"abstain",
"objective",
"result",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"result",
"method",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"abstain",
"result",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Current predominant neural machine translation (NMT) models often have a deep structure with large amounts of parameters, making these models hard to train and easily suffering from over-fitting.",
"A common practice is to utilize a validation set to evaluate the training process and select the best checkpoint.",
"Average and ensemble techniques on checkpoints can lead to further performance improvement.",
"However, as these methods do not affect the training process, the system performance is restricted to the checkpoints generated in the original training procedure.",
"In contrast, we propose an online knowledge distillation method.",
"Our method on-the-fly generates a teacher model from checkpoints, guiding the training process to obtain better performance.",
"Experiments on several datasets and language pairs show steady improvement over a strong self-attention-based baseline system.",
"We also provide analysis on data-limited setting against over-fitting.",
"Furthermore, our method leads to an improvement on a machine reading experiment as well.",
"Neural Machine Translation (NMT) (Cho et al., 2014; Sutskever et al., 2014) has been rapidly developed during the past several years.",
"For further performance improvement, deeper and more expressive structures (Johnson et al., 2017; Barone et al., 2017b; Gehring et al., 2017; Vaswani et al., 2017) have been exploited.",
"However, all of these models have more than hundreds of millions of parameters, which makes the training process more challenging.",
"During the training of NMT models, we notice the following two problematic phenomena: First, the training process is unstable.",
"This is evidenced by the decreasing of training loss with Corresponding Author.",
"fluctuate performance on the validation set.",
"Second, the performance on validation set usually begins to worsen after several epochs, while the training loss keeps decreasing, which suggests the model being at risk of over-fitting.",
"In order to alleviate these issues, the common practice is to periodically evaluate models on a held-out set (with each evaluated model saved as a checkpoint ).",
"Training is terminated when m consecutive checkpoints show no improvement and select the checkpoint with best evaluation score as the final model.",
"Further improvement can be achieved by utilizing more checkpoints, by smoothing , which averages these checkpoints' parameters to generate more desirable parameters (Sennrich et al., 2016a); or by ensemble , which averages these checkpoints' output probabilities at every step during inference (Chen et al., 2017).",
"However, we notice that all of these methods have a limitation.",
"Once the training process gets parameters with poor performance, selecting, smoothing or ensemble from the checkpoints in this process may have limited generalization performance as well.",
"We impute the limitation to the offline property of these methods.",
"In other words, only employing checkpoints after training cannot affect the original training process.",
"In this paper, we propose to utilize checkpoints to lead the training process.",
"Our method is carried out in a knowledge distillation manner.",
"At each training step, because being evaluated on the held-out validation data, the best checkpoint up to the current training step can be seen as a model with the best generalization ability so far.",
"Therefore, we employ this checkpoint as the teacher model, and let the current training model, as the student, learn from the output probability distributions of the teacher model, as well as truth translations in the training data.",
"Such kind of knowledge distillation is performed on-the-fly because the teacher model could always be updated once any latest better checkpoint is generated.",
"We call our method O nline D istillation from C heckpoints (ODC).",
"We conduct experiments on four translation tasks (including two low-resource tasks), and one machine reading comprehension task.",
"All the results demonstrate that our ODC method can achieve improvement upon strong baseline systems.",
"ODC also outperforms checkpoint smoothing and ensemble methods, without extra cost during inference.",
"We can achieve further improvement by combining ODC with those methods.",
"Major contributions of our work include:",
"1. In contrast to checkpoint smoothing and ensemble which do not affect the training process, we explore the way to distill knowledge from checkpoints to lead the training process in an on-the-fly manner( 3.1, 3.2).",
"We obtain better performance by replacing the best checkpoint with moving average parameters at that step.",
"( 3.3)",
"2. We conduct experiments on four translation tasks, including two low resource tasks.",
"In all the tasks our method outperforms strong baseline systems ( 4.2, 4.3).",
"We also conduct an experiment on machine reading comprehension task and the result shows that our method can be applied to other tasks too ( 4.4).",
"3. We conduct comprehensive analysis and show that our method can significantly alleviate over-fitting issue in low-resource condition ( 5.1), and help to find a wider minimum which brings better generation ( 5.2).",
"Neural Machine Translation (NMT) systems learn a conditional probability P ( Y | X ) for translating a source sentence X = ( x 1 , ..., x M ) to a target sentence Y = ( y 1 , ..., y N ) , in which x i and y j are the i -th word and j -th word in sentence X and Y , respectively.",
"An NMT model usually consists of an encoder (parameterized by enc ) and a decoder (parameterized by dec ).",
"The encoder transforms a sequence of source tokens into a sequence of hidden states: H ( X ) = ( h 1 , ..., h M ) = f enc ( X ; enc ) .",
"The decoder of NMT is usually a network computing the conditional probability of each target words y j based on its previous words and the source sentence:",
"where s j is the hidden state of decoder at time step j , p is the distribution of NMT model and is all the parameters of NMT model.",
"The standard way to train an NMT model is to minimize the cross-entropy between the one-hot distribution of the target sentence and the NMT model's output distribution: L ( ) = N (cid:88) j =1 |V| (cid:88) k =1 1 { y j = k } (3) log p ( y j = k | y <j , X ; ) , = arg min L ( ) , (4) where 1 ( ) is the indicator function and V is the target vocabulary.",
"Knowledge distillation is a class of methods which transfers knowledge from a pre-trained teacher model T , to a student model S .",
"The teacher model can be a model with large capacity (Bucila et al., 2006) or an ensemble of several models (Hinton et al., 2015).",
"In knowledge distillation, the student model learns to match the predictions of the teacher model.",
"Concretely, assuming that we learn a classification model (parameterized by ) on a set of training samples in the form of ( x, y ) with |V| classes.",
"Instead of minimizing the cross-entropy loss between one-hot label y and model's output probability p ( y | x ; ) , knowledge distillation uses the teacher model's distribution q ( | x ) as soft tar-gets and optimizes the loss: LKD ( ) = |V| (cid:88) k =1 q ( y = k | x ; T ) (5) log p ( y = k | x ; ) , where T parameterizes the teacher model and p ( | x ) is the distribution of the student model.",
"Kim and Rush (2016) proposed that, as the loss of NMT model (Equation 4) can be factored into minimizing cross-entropy loss between the target Checkpoints: Teacher Models: training steps direction update teacher models knowledge distillation validation score (darker means better) +1 +2 +1 Figure 1: Illustration of online distillation from checkpoints(ODC).",
"words and word-level probabilities of the NMT model for every position at target side, knowledge distillation on multi-class classification can be naturally applied.",
"They defined word-level knowledge distillation (W-KD) on a sentence as: LW-KD ( ) = N (cid:88) i = j |V| (cid:88) k =1 q ( y j = k | y <j , H ( x ); T ) (6) log p ( y j = k | y <j , H ( x ); ) , where V is the target vocabulary.",
"They further proposed sequence-level knowledge distillation (S-KD), which optimizes the student model by matching the predictions of the teacher model in the probability distribution over the space of all possible target sequences: LS-KD ( ) = (cid:88) Y q ( Y | X ; T ) log p ( Y | X ; ) , (7) where is the space of target side sentences.",
"As summing over exponential numbers of samples here is intractable, they proposed to train student model on samples generated by teacher model as an approximation.",
"which not only requires one pre-training process to obtain the teacher model, but also limits the power of leading the training process.",
"In contrast, we are aiming at a more integrated process where the teacher model does not come from a separate training process, but from the current training routine itself.",
"More specifically, we update the teacher along with the training process, so the distilled knowledge could be updated when a stronger model comes out.",
"Figure 1 illustrates the paradigm of our method.",
"In generation tasks, the knowledge distillation could be performed at the word-level or sequence-level.",
"In this paper, we focus on the word-level distillation because this distillation only needs forced teaching, which could be performed efficiently together with the training of the student model compared to generating translations from teacher model.",
"It is more computational-friendly, especially when the NMT models are built with parallelizable convolution (Gehring et al., 2017) or self-attention structures (Vaswani et al., 2017).",
"Observed from the training process of NMT models, performance on the validation set does not improve monotonically.",
"When the performance of the training model on the validation set declines, we could always select the best checkpoint so far as the teacher, because it has the best generalization performance.",
"Specially, when the best checkpoint is generated at the current time step, we only update the teacher model but perform no distillation.",
"The online distillation process is summarized in Algorithm",
"1. We use t to denote the training step and t to denote the parameters at time step t .",
"We denote T k as the time step the k -th time when the model is evaluated on validation and T as the validation interval, for which T k +1 = T k + T .",
"Let T k ( T k T k ) be the time step when the best checkpoint is obtained up to T k , and T as the teacher's parameters to lead the following training process.",
"If the current checkpoint is the best checkpoint so far, i.e. T k = T k , we update the teacher to be this new checkpoint T = T k (in Line 16 and 20).",
"The loss for the training process at time step t ( T k < t < T k +1 ) is defined as follows: L t ( ) = (cid:40) L ( ) T k = T k L ( ) + LW-KD ( ) otherwise , (8) where L ( ) and L ( ) W-KD is defined in Equation 4 and 7, respectively (in Line 5-8).",
"Knowledge distillation usually works better when teacher models have better performance.",
"As Tarvainen and Valpola (2017) proposed in their work, averaging model parameters over training steps tends to produce a more accurate model that using final parameters directly.",
"They called this method as Mean Teacher .",
"Following Tarvainen and Valpola (2017), besides updating parameters, we maintain the exponential moving average (EMA) of the model parameters as: (cid:48) t = (cid:48) t 1 + (1 ) t , (9) where t is the update step, is the parameters of the training model and (cid:48) the parameters of EMA.",
"is the decay weight which is close to 1.0, and typically in multiple-nines range, i.e., 0.999, 0.9999.",
"By doing so, at each timestep t , parameters of NMT model t has their corresponding EMA parameters (cid:48) t .",
"Whenever we update teacher model T with the current best checkpoint, we can use its EMA parameters instead (in Line 17-18).",
"It can further improve the generalization ability of the teacher model, and bring a better performance of knowledge distillation.",
"We will show in 4.2 that using meaning teacher indeed achieves better performance.",
"Algorithm 1: Online Distillation from Checkpoints 1 Input: validation interval T ; validation count k ;EMA decay weight ; initial model parameters 0 2 Initialization: k = 0 ; t = 0 ; T 0 = 1 , T 0 = 1 ; T = ; (cid:48) 0 = 0 ; L 0 ( ) = L ( ) 3 while not reach stopping criteria do 4 repeat 5 if T k = T k then 6 L t ( ) = L ( ) 7 else 8 L t ( ) = L ( ) + LW-KD ( ) 9 minimize L t ( ) and update t ; 10 (cid:48) t = (cid:48) t 1 + (1 ) t ; 11 t = t + 1 ; 12 until t mod T == 0 ; 13 T k +1 = t ; 14 evaluate on validation set; 15 if get better checkpoint then 16 T k +1 = t ; 17 if use EMA as teacher then 18 T = (cid:48) t ; 19 else 20 T = t ; 21 else 22 T k +1 = T k ; 23 k = k + 1 ; 4 Experiments 4.1 Setups To evaluate the effectiveness of our method, we conduct experiments on four machine translation tasks: NIST Chinese-English, WMT17 Chinese-English, IWSLT15 English-Vietnamese, and WMT17 English-Turkish.",
"We conduct experiments based on an open source implementation of Transformer (Vaswani et al., 2017) model in NJUNMT-pytorch 1 .",
"For all the translation experiments, we use SacreBLEU 2 to report reproducible BLEU scores.",
"We also present an experiment on machine reading comprehension, showing our method could also be applied to other tasks.",
"Datasets For NIST Chinese-English translation task, training data consists of 1.34M LDC sentence pairs 3 , with 40.8M Chinese words and 45.8M English words, respectively.",
"We use NIST2003 dataset set as the validation set and NIST 2004, 1 https://github.com/whr94621/NJUNMT-pytorch 2 https://github.com/awslabs/sockeye/tree/master /sockeye contrib/sacrebleu 3 The corpora includes LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06 SYSTEMS NIST Chinese-English NIST03 NIST04 NIST05 NIST06 Average RNNSearch (Zhang et al., 2018b) 36.59 39.57 35.56 35.29 -Transformer-base(Yang et al., 2018) 42.23 42.17 41.02 -baseline 43.78 44.26 40.97 38.93 41.39 baseline + LKS 44.12 44.87 41.59 39.22 41.89 +0.50 baseline + BKS 44.23 44.98 41.62 39.74 42.11 +0.73 baseline + BKE 44.30 45.01 41.86 40.05 42.31 +0.92 ODC 45.33 45.18 42.60 39.67 42.48 +1.10 ODC + LKS 45.05 45.49 42.99 40.48 42.99 +1.60 ODC + BKS 45.35 45.49 43.21 39.96 42.89 +1.50 ODC + BKE 45.34 45.92 43.35 40.30 43.19 +1.80 ODC-EMA 45.52 45.72 43.01 40.65 43.13 +1.74 Table 1: Case-insensitive BLEU scores of Chinese-English translation on NIST datasets.",
"2005, 2006 as test sets.",
"We filter out sentence pairs whose source or target side contain more than 50 words.",
"We use BPE (Sennrich et al., 2016b) with 30K merge operations on both sides.",
"For WMT17 Chinese-English translation task, we use the pre-processed version released by WMT 4 .",
"We only use CWMT part of WMT Corpus.",
"We use newsdev2017 as the validation set and newstest2017 s the test set.",
"We learn a BPE model with 32K merge operations and keep all the BPE tokens in the vocabulary.",
"We limit the maximal sentence length as 100 after BPE segmentation.",
"For IWSLT15 English-Vietnamese translation task, we directly use the pre-processed data used in Luong and Manning (2015) 5 , which has 133K sentence pairs, with 2.70M English words and 3.31M Vietnamese words.",
"We use the released validation and test set, which has 1553 and 1268 sentences respectively.",
"Following the settings in Huang et al. (2017), the Vietnamese and English vocabulary size are 7,709 and 17,191, respectively.",
"For WMT17 English-Turkish translation task, We use the pre-processed data released by WMT17 6 .",
"It has 207K sentence pairs, with 5.21M English words and 4.63 Turkish words.",
"We use newstest2016 as our validation set and newstest2017 as the test set.",
"We use joint BPE segmentation (Sennrich et al., 2017) to process the whole training data.",
"The merge operations are 16K.",
"4 http://data.statmt.org/wmt18/translation-task/preprocessed/zh-en/ 5 https://github.com/tefan-it/nmt-en-vi 6 http://data.statmt.org/wmt17/translation-task/preprocessed/tr-en/ Implementation Details Without specific statement, we follow the transformer base v1 hyper-parameters settings 7 , with 6 layers in both encoder and decoder, 512 hidden units and 8 attention heads in multi-head attention mechanism and 2048 hidden units in feed-forward layers.",
"Parameters are optimized using Adam(Kingma and Ba, 2014).",
"The initial learning rate is set as 0.1 and scheduled according to the method proposed in Vaswani et al. (2017), with warm-up steps as 4000.",
"We periodically evaluate the training model on the validation set by doing translation and compute the BLEU scores.",
"We stop training when 50 subsequent of BLEU scores on validation set do not get improvement.",
"We use beam search with beam size as 5.",
"We first evaluate the capability of our method for improving performance when there are plenty of training data.",
"We conduct experiments on both NIST and WMT17 Chinese-English Translation tasks.",
"Results on NIST Dataset We compare our method with several ways to utilize checkpoints 8 : last-k-smoothing : After training the baseline model, we average the parameters of the last k checkpoints as the final model.",
"as the final model.",
"In this case, checkpoints may have better performance but higher variance which could be harmful to parameters averaging.",
"best-k-ensemble : Do ensemble inference (av-erage the output probabilities) with the best k checkpoints (Chen et al., 2017).",
"As shown in Table 1, our baseline is comparable to the other two recent published results (Zhang et al. (2018b), Yang et al. (2018)).",
"In consistent with Chen et al. (2017), using checkpoints for smoothing or ensemble does improve the baseline system.",
"Using EMA parameters also improve the baseline system as well, which is in consist with (Tarvainen and Valpola, 2017).",
"Compared to the baseline, our approach ODC brings translation improvement across different test sets and achieves 42.48 BLEU scores on aver-age(+1.09 BLEU v.s. baseline).",
"This result con-firms that using best checkpoint as teacher indeed helps improving the performance of the translation model.",
"Besides, ODC is comparable to the best results among smoothing and ensemble on baseline's checkpoints (achieved by best-k-ensemble).",
"Considering that best-k-ensemble needs to decode with k models, while ODC decodes only one, our model enjoys a better efficiency.",
"Furthermore, we can achieve further improvement by combining these methods on checkpoints generated by ODC.",
"Results also show that ODC-EMA ( 3.3) could achieve additional improvement from ODC itself (43.13 v.s. 42.48 BLEU), demonstrating that using EMA of the best checkpoint instead can bring better knowledge distillation performance, as it generates a better teacher model.",
"Results on WMT17 Dataset We present the results on WMT17 Chinese-English translation task in Table",
"2. We report the results of the baseline, ODC and a recent result published by Zhang et al. (2018c).",
"To make a fair comparison, we follow the experiment setting in Zhang et al. (2018c).",
"The experiment results show similar trends with those on the NIST datasets.",
"Applying ODC leads to the result of 24.22 BLEU, which is 0.85 BLEU higher compared with baseline.",
"We also apply our method to two low resource translation tasks, i.e., IWSLT2015 English-Vietnamese",
"(EN2VI) and WMT17 English-Turkish (EN2TR).",
"Due to the limited amount of training data, models are more likely to suffer from over-fitting.",
"Therefore, we use a higher dropout rate of 0.2 and weight decay, another common technique against over-fitting, with decay weight set as 10 3 as the default setting.",
"We implement weight decay as AdamW (Loshchilov and Hutter, 2017) does.",
"Besides, we further experiment with grid search on the validation set for optimal hyper-parameters of dropout rate and weight decay, which may lead to better results.",
"We adopt a simple heuristic, which first searches an optimal dropout rate, and then further searches weight decay coefficients based on this dropout.",
"We experiment with dropout as 0.2, 0.3, 0.4, and weight decay as 10 1 , 10 2 and 10 3 .",
"As in Table 3, our baseline is comparable to two recent published results, respectively: EN2TR from Zhang et al. (2018c) and EN2VI from offi-cial release tensor2tensor problem 9 .",
"Grid hyper-parameter search does improve the baseline system.",
"ODC leads to better results compared to the baseline, as well as the baseline with grid parameter search.",
"ODC can achieve further improvement after searching for optimal hyper-parameters of dropout and weight decay.",
"Although our main research is focused for the task of machine translation, the idea of ODC could be applied to other tasks as well.",
"We experiments on the Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016), a machine reading comprehension task.",
"SQuAD contains 107,785 human-generated reading comprehension questions, with 536 Wikipedia articles.",
"Each question is associated with a paragraph extracted from an article, and the corresponding answer is a span from this article.",
"A machine reading comprehension model is designed to predict the start and end positions in the article of the answer.",
"The state-of-the-art machine reading comprehension system also employs a deep neural network structure, which is similar to NMT.",
"We apply our ODC method on BiDAF++ (Choi et al., 2018), a multi-layer SQuAD model that augments BiDAF (Seo et al., 2016) with self-attention and contextualized embeddings.",
"We evaluate the model after each epoch and implement the knowledge distillation by teaching the student with the output distribution of answer start and end positions predicted by the best checkpoint.",
"For the results, ODC improves a base BiDAF++ from 76.83 to 77.40, in EM scores, showing that our method can be applied to a broader range of tasks.",
"We conduct further analysis to probe into the reasons for the advantages of ODC.",
"We first show that our method can significantly alleviate the over-fitting issue in data-limited condition.",
"After that, we show that parameters gained from our method tend to be wider minimums, which represents better generalization.",
"Taking IWSLT15 English-Vietnamese as a test-bed, we analyze whether our method could help handle the over-fitting issue.",
"We first plot the curve of the loss on the validation set at each training step for the different models (in Figure 2, the top curve with rounds).",
"It is easy to see that the loss curve of the baseline increases as the training goes after 50K steps, indicating a severe over-fitting.",
"With better dropout rate and weight decay, the over-fitting is Figure 2: Loss curves (top) and final BLEU scores (bot-tom) on the validation set of baseline, baseline with grid-search and ODC, respectively.",
"less severe; while with ODC the loss curve shows a more steady trend of decrease, and is almost always under the other two's.",
"The final BLEU score on the validation set (Figure 2, bottom) shows corresponding result.",
"The grid search of hyper-parameters improves the BLEU from 26.06 to 26.42 in BLEU, while ODC achieves 26.99.",
"Both results indicate that our method is more effective at handling the over-fitting problem.",
"We hold that minimizing the cross-entropy between the teacher model and the student model serves as regularization to the training of the student model, which avoids the model getting into over-fitting.",
"In the training process in Chinese-English tasks, we do not observe obvious over-fitting issue as shown in low resource translation tasks.",
"In this section, we analyze how ODC helps the model generalization.",
"Keskar et al. (2016) proposed that the width Figure 3: The upper plot shows the validation losses curve along the line segment decided by parameters of baseline and ODC.",
"of the minimum in a loss surface is related to its generalization ability.",
"Therefore, we compare the generalization capability between baseline system and our ODC method by exploring around the parameters.",
"We make use of the visualization technique employed in (Goodfellow and Vinyals, 2014) and analyze the results on the NIST data set.",
"Let base and ODC denote the final parameters obtained from baseline and ODC.",
"Consider the line: ( ) = ODC + (1 . 0 ) base , (10) which connects base ( = 0 . 0 ) and ODC ( = 1 . 0 ).",
"We plot the value of Equation 4 as a function of (normalized by count of words per sentence) with = ( ) .",
"We draw from 1 .",
"0 to 2 .",
"0 at an interval of 0.02.",
"In this way, the width of base and ODC can be represented as the steepness of the curve nearby.",
"To further quantitatively represent the steepness, we compute the standard deviation of values on this curve within different distances to the two parameters, respectively.",
"We plot them in Figure",
"From Figure 3 we can see that the loss curve behaves steeper around the parameters of baseline than of ODC.",
"Besides, the standard deviations of losses around the baseline model are consistently higher than ODC within all the distances.",
"It is evident that the parameters of ODC act as a wider minimum c and explains why ODC can lead to a more generalized model.",
"Regularization has broad applications in training NMT models to improve performance and avoid over-fitting.",
"There are some common regularization techniques, such as L 2 normalization and dropout (Srivastava et al., 2014).",
"These methods are simple and easy to implement but need carefully tuning on the validation set.",
"These methods are also orthogonal to our method.",
"There are also some works to exploit regularization techniques in fine tuning of NMT model.",
"Barone et al. (2017a) proposed a tuneout method which randomly replaces columns of weight matrices of out-of-domain parameter matrices.",
"Khayral-lah et al. (2018) shared similar training object with us, as they computed the KL divergence between out-of-domain and in-domain model.",
"Both of their works request a pre-trained teacher model, while we are work on a more general training problem which does not require such kind of model.",
"While traditional knowledge distillation requires a static, pre-trained teacher model, online knowledge distillation tends to overcome this problem by selecting or generating a teacher dynamically from scratch.",
"To the best of our knowledge, Zhang et al. (2017) is the first trial to replace the offline teacher model.",
"They trained peer models to teach each other simultaneously.",
"Compared to their work, our method uses the best checkpoint as the teacher, which avoids introducing extra parameters.",
"Furlanello et al. (2018) tends to update teacher model during the training procedure iteratively, but their method needs to train the teacher model until convergence in each iteration.",
"Instead, our method only needs one phase of training, whose overhead is relatively small.",
"Lan et al. (2018) using an ensemble of several branches of the model as teacher for computer vision tasks, which only needs one-phase training as well.",
"However, their method relies heavily on the multi-branch structures of the tasks, which are not widely applicable in neural machine translation.",
"In this paper, we propose an online knowledge distillation method with the teacher model generated from checkpoints during the training procedure.",
"Experiments on four machine translation tasks and a machine reading task show that our method outperforms strong baseline systems.",
"Further analysis shows that our method can effectively alleviate the over-fitting issue, and tend to find a wider minimum.",
"We would like to thank the anonymous reviewers for their insightful comments.",
"We also thank Boxing Chen from Alibaba Group for his helpful comments.",
"This work is supported by the National Science Foundation of China (No. 61772261, 61672277) and the Jiangsu Province Research Foundation for Basic Research (No. BK20170074).",
"Part of this work is supported by 13th Five-Yea All-Army Common Information System Equipment Pre-Research Project (No. 31510040201)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"objective",
"objective",
"result",
"objective",
"method",
"result",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"method",
"other",
"abstain",
"other",
"method",
"other",
"abstain",
"other",
"other",
"objective",
"result",
"result",
"other",
"other",
"other",
"other"
] |
[
"Contextual features always play an important role in Chinese word segmentation (CWS).",
"Wordhood information, being one of the contextual features, is proved to be useful in many conventional character-based segmenters.",
"However, this feature receives less attention in recent neural models and it is also challenging to design a framework that can properly integrate wordhood information from different wordhood measures to existing neural frameworks.",
"In this paper, we therefore propose a neural framework, WMSEG , which uses memory networks to incorporate wordhood information with several popular encoder-decoder combinations for CWS.",
"Experimental results on five benchmark datasets indicate the memory mechanism successfully models wordhood information for neural segmenters and helps WMSEG achieve state-of-the-art performance on all those datasets.",
"Further experiments and analyses also demonstrate the robustness of our proposed framework with respect to different wordhood measures and the efficiency of wordhood information in cross-domain experiments.",
"1 1 Introduction Unlike most written languages in the world, the Chinese writing system does not use explicit delimiters (e.g., white space) to separate words in written text.",
"Therefore, Chinese word segmentation (CWS) conventionally serves as the first step in Chinese language processing, especially for many downstream tasks such as text classification (Zeng et al., 2018), question answering (Liu et al., 2018), machine translation (Yang et al., 2018), etc.",
"In the past two decades, the mainstream methodology of CWS treated CWS as a character-based Partially done as an intern at Sinovation Ventures.",
"Corresponding author.",
"1 WMSEG (code and the best performing models) is released at https://github.com/SVAIGBA/WMSeg .",
"sequence labeling task (Tseng et al., 2005; Song et al., 2006; Sun and Xu, 2011; Pei et al., 2014; Chen et al., 2015; Zhang et al., 2016; Chen et al., 2017; Ma et al., 2018; Higashiyama et al., 2019; Qiu et al., 2019), where various studies were proposed to effectively extract contextual features to help better predicting segmentation labels for each character (Zhang et al., 2013; Zhou et al., 2017; Higashiyama et al., 2019).",
"Among all the contextual features, the ones measuring wordhood for n-grams illustrate their helpfulness in many nonneural CWS models (Sun et al., 1998; Xue and Shen, 2003; Feng et al., 2004; Song and Xia, 2012).",
"Later, following the track of the sequence labeling methodology, recent approaches with neural networks are proved to be powerful in this task (Chen et al., 2015; Ma et al., 2018; Higashiyama et al., 2019).",
"However, since neural networks (e.g., LSTM) is considered to be able to provide a good modeling of contextual dependencies, less attention is paid to the idea of explicitly leveraging wordhood information of n-grams in the context as what had previously been done in non-neural models.",
"Although some studies sidestepped the idea by incorporating contextual n-grams (Pei et al., 2014; Zhou et al., 2017) or word attention (Higashiyama et al., 2019) into the sequence labeling process, they are limited in either concatenating word and character embeddings or requiring a well-defined word lexicon.",
"Therefore, it has not been fully explored what would be the best way of representing contextual information such as wordhood features in neural CWS models.",
"Moreover, consider there are various choices of wordhood measures, it is also a challenge to design a framework that can incorporate different wordhood features so that the entire CWS approach can be general while being effective in accommodating the input from any measures.",
"CWS by leveraging wordhood information.",
"In detail, we utilize key-value memory networks (Miller et al., 2016) to incorporate character n-grams with their wordhood measurements in a general sequence labeling paradigm, where the memory module can be incorporated with different prevailing encoders (e.g., BiLSTM and BERT) and decoders (e.g., softmax and CRF).",
"For the memory, we map n-grams and their wordhood information to keys and values in it, respectively, and one can use different wordhood measures to generate such information.",
"Then for each input character, the memory module addresses all the n-grams in the key list that contain the character and uses their corresponding values to generate an output vector to enhance the decoder for assigning a segmentation label to the character.",
"Experimental results from five widely used benchmark datasets confirm that WMSEG with wordhood information can improve CWS over powerful baseline segmenters and ourperform previous studies, where state-of-the-art performance is observed on all the datasets.",
"Further experiments and analyses are also performed to investigate different factors affecting WMSEG 's performance.",
"Following previous studies, we regard CWS as character-based sequence labeling task.",
"The architecture of WMSEG is illustrated in Figure 1, where the general sequence labeling paradigm is the top part with a memory module inserted between the encoder and the decoder.",
"The model predicts a tag (e.g., tag B for the 1st character in a word) for each character, and the predicted tag sequence is then converted to word boundary in the system output.",
"The bottom part of the figure starts with a lexicon N , which is simply a list of n-grams and can be built by various methods (see Section 2.1).",
"Given an input sentence X = x 1 x 2 ...x i ...x l , for each character x i in X , our approach uses the lexicon N to generate (keys, values) for x i and send it to the memory module.",
"In all, the process of WMSEG to perform CWS can be formalized as (cid:98) Y = arg max YT l p ( Y|X , M ( X , N )) (1) where T denotes the set of all types of segmentation labels, and l stands for the length of the input sentence X .",
"The output Y is the corresponding label sequence for X with (cid:98) Y representing the best label sequence according to the model.",
"M is the memory module proposed in this paper that consumes X and N and provides corresponding wordhood information for X to maximize p .",
"In the rest of this section, we describe the construction of the n-gram lexicon, the proposed wordhood memory networks, and how it is integrated with different encoders and decoders, respectively.",
"To build the wordhood memory networks, the first step is to construct the lexicon N because the keys in the memory module are built upon N , where each n-gram in N is stored as a key in it.",
"2 In this study, N is simply a list of n-grams, and technically, it can be constructed through many existing resources or automatic methods.",
"Compared to using an off-the-shelf lexicon or the word dictionary from the training data, it is hypothesized that, for the purpose of incorporating wordhood information into the general sequence labeling framework, unsupervised wordhood measures, such as accessor variety (AV) (Feng et al., 2004), pointwise mutual information (PMI) (Sun et al., 1998), and description length gain (DLG) (Kit and Wilks, 1999), would perform better.",
"For example, AV measures the wordhood of an n-gram k by AV ( k ) = min ( L av ( k ) , R av ( k )) (2) where L av ( k ) and R av ( k ) denote the number of different character types that can precede (left access number) or follow (right access number) the n-gram k .",
"Normally, the higher the AV score is, the more likely the n-gram forms a word.",
"To encode both n-grams and the wordhood information they carry, one requires an appropriate framework to do so for CWS.",
"Compared with other network structures that can exploit n-grams such as the attention mechanism, key-value memory networks are more appropriate to model such pairwise knowledge via transforms between keys and values.",
"In the memory, we map n-grams and their wordhood information to keys and values, respectively.",
"Following Miller et al. (2016), we illustrate how our memory module generates and operates the (keys, values) pair for each x i in this subsection.",
"N-gram Addressing For each x i in a train-ing/test instance, normally there are many n-grams in N that contain x i .",
"Therefore, the n-gram addressing step is to generate all n-grams from x i 's context (including x i ) and keep only the ones that appear in N , resulting K i = [ k i, 1 , k i, 2 , k i,j , k i,m i ] that x i is a part of k i,j .",
"For example, in the input sentence shown in Figure 1, the n-grams that contain the character x 4 = ( people ) form the list K 4 = [ ( people ), 2 Therefore n-gram and key are equivalent in the memory. Rule v i,j x i is the beginning of the key k i,j VB x i is inside the key k i,j VI x i is the ending of the key k i,j VE x i is the single-character key k i,j VS Table 1: The rules for assigning different values to x i according to its position in a key k i,j . ( resident ), ( livelihood ), ( residents' life ) ] , which are highlighted in the dashed boxes illustrated at the bottom part of the figure.",
"Then, the memory module activates the corresponding keys in it, addresses their embeddings (which are denoted as e ki,j for each k i,j ), and computes the probability distribution for them with p i,j = exp ( h i e ki,j ) (cid:80) m i j =1 exp ( h i e ki,j ) (3) for each key, where h i is the vector for x i which can be generated from any text encoder.",
"Wordhood Reading Values in the memory represent the wordhood information for a given x i and k i,j pair, which is not a straightforward mapping because x i may have different roles in each k i,j .",
"For example, k i,j delivers different wordhood information when x i appears at the beginning or the ending of k i,j .",
"Therefore, we set rules in Table 1 to read a value for a key according to different situations of x i in k i,j , where we use a set of values { VB , VI , VE , VS } with embeddings { e VB , e VI , e VE , e VS } (illustrated in different colors in Figure 1) so that all n-grams should map to one of the values based on x i 's position in k i,j .",
"To illustrate that, in the aforementioned example, n-grams in K 4 for x 4 = ( people ) are mapped to a value list V 4 = [ VS , VE , VB , VI ] (see Figure 1).",
"As a result, each K i for x i has a list of values denoted by V i = [ v i, 1 , v i, 2 , v i,j . v i,m i ] .",
"Then the total wordhood memory for x i is computed from the weighted sum of all keys and values by o i = m i (cid:88) j =1 p i,j e vi,j (4) where e vi,j is the embedding for v i,j .",
"Afterwards, o i is summed element-wise with h i and the result is passed through a fully connected layer by a i = W o ( h i + o i ) (5) MSR PKU AS CITYU CTB6 TRAINTESTTRAINTESTTRAINTESTTRAINTESTTRAINDEVTESTCHAR # 4,050K 184K 1,826K 173K 8,368K 198K 2,403K 68K 1,056K 100K 134K WORD # 2,368K 107K 1,110K 104K 5,500K 123K 1,456K 41K 641K 60K 82K CHARTYPE # 5K 3K 5K 3K 6K 4K 5K 3K 4K 3K 3K WORDTYPE # 88K 13K 55K 13K 141K 19K 69K 9K 42K 10K 12K OOV RATE -2.7 -5.8 -4.3 -7.2 -5.4 5.6 Table 2: Statistics of the five benchmark datasets, in terms of the number of character and word tokens and types in each training and test set.",
"where W o is a trainable parameter and the output a i R |T | is a weight vector with its each dimension corresponding to a segmentation label.",
"[ h 1 , h 2 , ..., h i , ..., h l ] = Encoder ( X ) (6)",
"where the Encoder can be different models, e.g., Bi-LSTM and BERT (Devlin et al., 2019), to represent a sequence of Chinese characters into vectors.",
"Once all a i are generated from the memory for each x i , a decoder takes them to predict a sequence of segmentation labels (cid:98) Y = (cid:98) y 1 (cid:98) y 2 (cid:98) y l for X by (cid:98) Y = Decoder ( A ) (7) where A = a 1 a 2 a i a l is the sequence of output from Eq.",
"where a ti is the value at dimension t in a i .",
"Or one can use CRF for the Decoder : (cid:98) y i = arg max y i T exp ( W c a i + b c ) (cid:80) y i 1 y i exp ( W c a i ) + b c (9) where W c R |T ||T | and b c R |T | are trainable parameters to model the transition for y i 1 to y i .",
"We employ five benchmark datasets in our experiments: four of them, namely, MSR, PKU, AS, and CITYU, are from SIGHAN 2005 Bakeoff (Emer-son, 2005) and the fifth one is CTB6 (Xue et al., 2005).",
"AS and CITYU are in traditional Chinese characters whereas the other three use simplified BC BN MZ NW WEBCHAR # 275K 483K 403K 443K 342K WORD # 184K 287K 258K 260K 210K CHARTYPE # 3K 3K 4K 3K 4K WORDTYPE # 12K 23K 26K 21K 21K OOV RATE 3.4 6.0 8.9 5.9 7.1 Table 3: Statistics of CTB7 with respect to five different genres.",
"ones.",
"Following previous studies (Chen et al., 2015, 2017; Qiu et al., 2019), we convert traditional Chinese characters in AS and CITYU into simplified ones.",
"3 For MSR, AS, PKU, and CITYU, we follow their official training/test data split.",
"For CTB6, we use the same split as that stated in Yang and Xue (2012); Chen et al. (2015); Higashiyama et al. (2019), and only use its test set for the final experiment.",
"Table 2 show the statistics of all datasets in terms of the number of characters and words and the percentage of out-of-vocabulary (OOV) words in the dev/test sets with respect to the training set.",
"In addition, we also use CTB7 (LDC2010T07) to perform our cross-domain experiments.",
"There are five genres in CTB7, including broadcast conversation (BC), broadcast news (BN), magazine (MZ), newswire (NW), and weblog (WEB ).",
"The statistics of all the genres are reported in Table 3, where the OOV rate for each genre is computed according to the union of all other genres.",
"For example, the OOV rate for BC is computed with respect to the union of BN, MZ, NW, and WEB .",
"We experiment with three wordhood measures to construct N .",
"The main experiment adopts the aforementioned AV as the measure to rank all n-grams, because AV was shown to be the most effective wordhood measure in previous CWS studies (Zhao and Kit, 2008).",
"Since AV is sensitive to 3 The conversion scripts are from https://github.",
"corpus size, in our experiments we use different AV thresholds when building the lexicon for each dataset: the threshold is 2 for PKU, CITYU, CTB6 and CTB7, and 5 for MSR and AS.",
"To test the the robustness of WMSEG , we also try two other wordhood measures, i.e., PMI (Sun et al., 1998) and DLG (Kit and Wilks, 1999).",
"PMI measures pointwise mutual information between two Chinese characters, x (cid:48) and x (cid:48)(cid:48) , via P MI ( x (cid:48) , x (cid:48)(cid:48) ) = log p ( x (cid:48) x (cid:48)(cid:48) ) p ( x (cid:48) ) p ( x (cid:48)(cid:48) ) (10) where p computes the probability of an n-gram (i.e., x (cid:48) , x (cid:48)(cid:48) and x (cid:48) x (cid:48)(cid:48) ) in a dataset.",
"A high PMI score indicates that the two characters co-occur a lot in the dataset and are likely to form a word.",
"Hence, we use a threshold to determine whether a word boundary delimiter should be inserted between two adjacent characters in the dataset.",
"In our experiments, we set the threshold to 0, PMI score lower than it will result in a segmentation.",
"In other words, for each dataset, we use PMI to perform unsupervised segmentation and collect the segmented words from it to build the n-gram lexicon N .",
"The other measure, DLG, computes wordhood of an n-gram s according to the change of the description length of a dataset D with and without treating that n-gram as a segment: DLG ( s ) = DL ( D ) DL ( D [ r s ] s ) (11) where D denotes the original dataset and D [ r s ] s represents a new dataset by treating s as a new segment, replacing all the occurrences of s with a new symbol r (which can be seen as an index for newly identified segment s ), and then appending s at the end.",
"DL ( D ) is the Shannon-Fano code length of a dataset D , calculated by DL ( D ) = (cid:88) x V c ( x ) log c ( x ) |D| (12) where V refers to the vocabulary of D and c ( x ) the count of segment x .",
"We set the threshold for DLG to 0 and use the n-grams whose DLG is higher than it to build lexicon N for each dataset.",
"All aforementioned measures are conducted on the union of the training and test sets, so that n-grams and their wordhood information are shared in both the learning and prediction phase.",
"We remove all white spaces from the data and use the resulted raw texts to perform these measures.",
"Table 4 shows the sizes of the lexicons created with these wordhood measures on the five datasets.",
"Following previous studies (Sun and Xu, 2011; Chen et al., 2015, 2017; Ma et al., 2018; Qiu et al., 2019), we use four segmentation labels in our experiments, i.e., T = { B, I, E, S } .",
"Among them, B , I , and E indicate a character is the beginning, inside, and the ending of a word and S denotes that the character is a single-character word.",
"Since text representation plays an important role to facilitate many tasks (Conneau et al., 2017; Song et al., 2017, 2018; Sileo et al., 2019), we try two effective and well-known encoders, i.e., Bi-LSTM and BERT 4 .",
"In addition, we test WMSEG on a pre-trained encoder for Chinese language, i.e., ZEN 5 (Diao et al., 2019), which learns n-gram information in its pre-training from large raw corpora and outperforms BERT on many Chinese NLP tasks.",
"Table 5 shows the hyperparameter settings for all the encoders: for the Bi-LSTM encoder, we follow the setting of Chen et al. (2015) and adopt their character embeddings for e x i , and for BERT and ZEN encoders, we follow the default settings in their papers (Devlin et al., 2019; Diao et al., 2019).",
"For the decoders, we use softmax and CRF, and set their loss functions as cross-entropy and negative log-likelihood, respectively.",
"The memory module can be initialized by random or pre-trained word embeddings for keys and values.",
"In our experiments, we use random initialization for them.",
"6 4 We use the Chinese base model from https://s3.",
"In this section, we firstly report the results of WMSEG with different configurations on five benchmark datasets and its comparison with existing models.",
"Then we explore the effect of using different lexicon N and different wordhood measures in WMSEG .",
"We also use a cross-domain experiment to illustrate the effectiveness of WMSEG when more OOVs are in the test set.",
"Lastly, a case study is performed to visualize how the wordhood information used in WMSEG helps CWS.",
"In the main experiment, we illustrate the validity of the proposed memory module by comparing WMSEG in different configurations, i.e., with and without the memory in integrating with three encoders, i.e., Bi-LSTM, BERT, and ZEN, and two decoders, i.e., softmax and CRF.",
"The experimental results on the aforementioned five benchmark datasets are shown in Table 6, where the overall F-score and the recall of OOV are reported.",
"With five datasets and six encoder-decoder configurations, the table includes results from 30 pairs of experiments, each pair with or without using the memories.",
"There are several observations drawn from the results.",
"First, the overall comparison clearly indicates that, WMSEG (i.e., the model with wordhood memories) outperforms the baseline (i.e., the model without wordhood memories) for all 30 pairs in terms of F-scores and for 25 pairs in terms of ROOV .",
"Second, the proposed memory module works smoothly with different encoders and decoders, where some improvement is pretty significant; for instance, when using Bi-LSTM as the encoder and CRF as the decoder, WMSEG improves the F-score on the AS dataset from 94.39 to 95.07 and ROOV from 61.59 to 68.17.",
"With BERT or ZEN as the encoder, even when the baseline system performs very well, the improvement of WMSEG on F-scores is still decent.",
"Third, among the models with ZEN, the ones with the memory module further improve their baselines, although the context information carried by n-grams is already learned in pre-training ZEN.",
"This indicates that wordhood information provides additional cues (besides the contextual features) that can benefit CWS, and our proposed memory module is able to provide further task-specific guidance to an n-gram integrated encoder.",
"Fourth, the wordhood memory shows its robustness with different lexicon size when we consider WMSEG 's performance with the lexicon statistics reported in Table 4 together.",
"To summarize, the results in this experiment not only confirm that wordhood information is a simple yet effective source of knowledge to help CWS without requiring external support such as a well-defined dictionary or manually crafted heuristics, but also fully illustrate that the design of our model can effectively integrate this type of knowledge.",
"To further illustrate the validity and the effectiveness of WMSEG , we compare our best-performing model with the ones in previous studies on the same benchmark datasets.",
"The comparison is presented in Table 7, where WMSEG (both the one with BERT and ZEN) outperforms all existing models with respect to the F-scores and achieves new state-of-the-art performance on all datasets.",
"As domain variance is always an important factor affecting the performance of NLP systems especially word semgenters (Song et al., 2012; Song and Xia, 2013), in addition to the experiments on benchmark datasets, we also run WMSEG on CTB7 across domains (genres in this case) with and without the memory module.",
"To test on each genre, we use the union of the data from the other four genres to train our segmenter and use AV to extract n-grams from the entire raw text from CTB7 in this experiment.",
"Table 8 reports the results in F-score and OOV recall, which show a similar trend as that in Table 6, where WMSEG outperforms baselines for all five genres.",
"Particularly, for genres with large domain variance (e.g., the ones with high OOV rates such as MZ and WEB ), CWS is difficult, and its relatively low F-scores in Table 8 from baseline models confirm that.",
"Yet WMSEG offers a decent way to improve cross-domain CWS performance without any help from external knowledge or complicated model design, which further illustrates the effectiveness of the memory module.",
"The reason could be that many n-grams are shared in both training and test data; these n-grams with their wordhood information present a strong indication to the model on what combinations of characters can be treated as words, even though some of them never appear in the training data.",
"To analyze the robustness of WMSEG with respect to the lexicon, we compare four ways (ID: 2-5 in Table 9) of constructing the lexicon ( N ): the first one",
"simply uses the vocabulary from the training data (marked as GOLDLABEL in Table 9; ID: 2); the other three ways use AV to extract n-grams from the unsegmented training data only (ID: 3), the test data only (ID: 4), and training + test set (ID: 5), respectively.",
"7 Table 9 shows the results of running BERT-CRF on the WEB genre of CTB7 without the wordhood memories (ID: 1) and with the memories (ID: 2-5), following the cross-domain setting in 4.2.",
"While the four methods with memories achieve similar results on the F score, indicating the robustness of our proposed framework, the one that builds N using the raw texts from both training and test sets through unsupervised method (ID: 5) achieves the biggest improvement on ROOV , demonstrating the advantage of including the unlabeled test set by incorporating the results from unsupervised wordhood measures into the models.",
"WMSEG provides a general way of integrating wordhood information for CWS, we expect other wordhood measures to play the same role in it.",
"Therefore, we test PMI and DLG in our model and compare them with the previous results from AV (see Table 6).",
"Specifically, we use our best performing BERT-based model, i.e., BERT-CRF, with the n-gram lexicons constructed by the aforementioned three measures and run it on all benchmark datasets.",
"We draw the histograms of the F-scores obtained from WMSEG with each measure (red, green, and blue bars for AV, PMI, and DLG, re-7 One could also use an external corpus to build N , which is not considered in this experiment.",
"spectively) in Figure 2, where the F-scores of the baseline model are also presented in orange bars.",
"As shown in the figure, the performances of using the three measures are very similar, which indicates that WMSEG is able to robustly incorporate the wordhood information from various measures, despite that those measures focus on different aspects of n-grams when determining whether the n-grams should be treated as words.",
"Particularly, consider that the lexicons produced by the three measures are rather different in their sizes (as shown in Table 4), the results in Figure 2 strongly demonstrate the effectiveness of our proposed approach in learning with a limited number of n-grams.",
"This observation also reveals the possibility that many n-grams may be redundant for our model, and WMSEG is thus able to identify the most useful ones from them, which is analyzed in the case study.",
"To investigate how the memory learns from the wordhood information carried by n-grams, we conduct a case study with an example input sentence / / / / ( He learned computer techniques since childhood ).",
"In this sentence, the Figure 2: The F-scores of WMSEG (BERT) using three different wordhood measures, namely AV (red), PMI (green), and DLG (blue), on five benchmark datasets.",
"n-gram is ambiguous with two possible interpretations: / ( learn since childhood ) and / ( from primary school ).",
"Native Chinese speakers can easily choose the first one with the given context but a word segmenter might incorrectly choose the second segmentation.",
"We feed this case into our BERT-CRF model with the memory module.",
"In Figure 3, we visualize the resulted weights that learned from keys",
"(a) and values",
"(b) of the memory, as well as from the final tagger",
"(c).",
"The heatmaps of all keys and values in the memory with respect to each corresponding input character clearly illustrate that the appropriate n-grams, e.g., ( he ), ( learn ), ( from childhood ), etc., receive higher weights than others and the corresponding values for them are also emphasized, which further affects final CWS tagging so that the weight distributions from",
"(b) and",
"(c) look alike to each other.",
"Therefore, this visualization explains, to some extent, that the proposed memory mechanism can identify and distinguish important n-grams within a certain context and thus improves CWS performance accordingly.",
"As one of the most fundamental NLP tasks for Chinese language processing, CWS has been studied for decades, with two steams of methods, i.e., word-based and character-based ones (Xue and Shen, 2003; Peng et al., 2004; Levow, 2006; Zhao et al., 2006; Zhao and Kit, 2008; Li and Sun, 2009; Song et al., 2009a; Li, 2011; Sun and Xu, 2011; Mansur et al., 2013; Zhang et al., 2013; Pei et al., 2014; Chen et al., 2015; Ma and Hinrichs, 2015; Liu et al., 2016; Zhang et al., 2016; Wang and Xu, 2017; Zhou et al., 2017; Chen et al., 2017; Ma et al., 2018; Higashiyama et al., 2019; Gong et al., 2019; Qiu et al., 2019).",
"Among these studies, most of them follow the character-based paradigm to predict segmentation labels for each character in an input sentence; n-grams are used in some of these studies to enhance model performance, which is also observed in many other NLP tasks (Song et al., 2009b; Xiong et al., 2011; Shrestha, 2014; Shi et al., 2016; Diao et al., 2019).",
"Recently, CWS benefits from neural networks and further progress are made with embeddings (Pei et al., 2014; Ma and Hinrichs, 2015; Liu et al., 2016; Zhang et al., 2016; Wang and Xu, 2017; Zhou et al., 2017), recurrent neural models (Chen et al., 2015; Ma et al., 2018; Higashiyama et al., 2019; Gong et al., 2019) and even adversarial learning (Chen et al., 2017).",
"To enhance CWS with neural models, there were studies leverage external information, such as vocabularies from auto-segmented external corpus (Wang and Xu, 2017; Higashiyama et al., 2019), where Higashiyama et al. (2019) introduced a word attention mechanism to learn from large granular texts during the CWS process.",
"In addition, the studies from Chen et al. (2017) and Qiu et al. (2019) try to improve CWS by learning from data annotated through different segmentation criteria.",
"Moreover, there is a study leveraging auto-analyzed syntactic knowledge obtained from off-the-shelf toolkits to help CWS and part-of-speech tagging (Tian et al., 2020).",
"Compare to these studies, WMSEG offers an alternative solution to robustly enhancing neural CWS models without requiring external resources.",
"In this paper, we propose WMSEG , a neural framework for CWS using wordhood memory networks, which maps n-grams and their wordhood information to keys and values in it and appropriately models the values according to the importance of keys in a specific context.",
"The framework follows the sequence labeling paradigm, and the encoders and decoders in it can be implemented by various prevailing models.",
"To the best of our knowledge, this is the first work using key-value memory networks and utilizing wordhood information for neural models in CWS.",
"Experimental results on various widely used benchmark datasets illustrate the effectiveness of WMSEG , where state-of-the-art performance is achieved on all datasets.",
"Further experiments and analyses also demonstrate the robustness of WMSEG in the cross-domain scenario as well as when using different lexicons and wordhood measures."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"objective",
"abstain",
"abstain"
] |
[
"Abstract Learning disentangled representations of textual data is essential for many natural language tasks such as fair classification, style transfer and sentence generation, among others.",
"The existent dominant approaches in the context of text data either rely on training an adversary (discriminator) that aims at making attribute values difficult to be inferred from the latent code or rely on minimising variational bounds of the mutual information between latent code and the value attribute.",
"However, the available methods suffer of the impossibility to provide a fine-grained control of the degree (or force) of disentanglement.",
"In contrast to adversarial methods, which are remarkably simple, although the adversary seems to be performing perfectly well during the training phase, after it is completed a fair amount of information about the undesired attribute still remains.",
"This paper introduces a novel variational upper bound to the mutual information between an attribute and the latent code of an encoder.",
"Our bound aims at controlling the approximation error via the Renyi's divergence, leading to both better disentangled representations and in particular, a precise control of the desirable degree of disentanglement than state-of-the-art methods proposed for textual data.",
"Furthermore, it does not suffer from the degeneracy of other losses in multi-class scenarios.",
"We show the superiority of this method on fair classification and on textual style transfer tasks.",
"Additionally, we provide new insights illustrating various trade-offs in style transfer when attempting to learn disentangled representations and quality of the generated sentence.",
"Learning disentangled representations hold a central place to build rich embeddings of high-dimensional data.",
"For a representation to be disentangled implies that it factorizes some latent cause or causes of variation as formulated by (Bengio et al., 2013).",
"For example, if there are two causes for the transformations in the data that do not generally happen together and are statistically distinguishable (e.g., factors occur independently), a maximally disentangled representation is expected to present a sparse structure that separates those causes.",
"Disentangled representations have been shown to be useful for a large variety of data, such as video (Hsieh et al., 2018), image (Sanchez et al., 2019), text (John et al., 2018), audio (Hung et al., 2018), among others, and applied to many different tasks, e.g. , robust and fair classification (Elazar and Goldberg, 2018), visual reasoning (van Steenkiste et al., 2019), style transfer (Fu et al., 2017), conditional generation (Denton et al., 2017; Burgess et al., 2018), few shot learning (Kumar Verma et al., 2018), among others.",
"In this work, we focus our attention on learning disentangled representations for text, as it remains overlooked by (John et al., 2018).",
"Perhaps, one of the most popular applications of disentanglement in textual data is fair classification (Elazar and Goldberg, 2018; Barrett et al., 2019) and sentence generation tasks such as style transfer (John et al., 2018) or conditional sentence generation (Cheng et al., 2020b).",
"For fair classification, perfectly disentangled latent representations can be used to ensure fairness as the decisions are taken based on representations which are statistically independent fromor at least carrying limited information aboutthe protected attributes.",
"However, there exists a trade-offs between full disentangled representations and performances on the target task, as shown by (Feutry et al., 2018), among others.",
"For sequence generation and in particular, for style transfer, learning disentangled representations aim at allowing an easier transfer of the desired style.",
"To the best of our knowledge, a depth study of the relationship between disentangled representations based either on adversarial losses solely or on vCLUB S and quality of the generated sentences remains overlooked.",
"Most of the previous studies have been focusing on either trade-offs between metrics computed on the generated sentences (Tikhonov et al., 2019) or performance evaluation of the disentanglement as part of (or convoluted with) more complex modules.",
"This enhances the need to provide a fair evaluation of disentanglement methods by isolating their individual contributions (Yamshchikov et al., 2019; Cheng et al., 2020b).",
"Methods to enforce disentangled representations can be grouped into two different categories.",
"The first category relies on an adversarial term in the training objective that aims at ensuring that sensitive attribute values ( e.g. race, sex, style) as statistically independent as possible from the encoded latent representation.",
"Interestingly enough, several works (John et al., 2018; Elazar and Goldberg, 2018; Bao et al., 2019; Yi et al., 2020; Jain et al., 2019; Zhang et al., 2018; Hu et al., 2017), Elazar and Goldberg (2018) have recently shown that even though the adversary teacher seems to be performing remarkably well during training, after the training phase, a fair amount of information about the sensitive attributes still remains, and can be extracted from the encoded representation.",
"The second category aim at minimising Mutual Information (MI) between encoded latent representation and the sensitive attribute values, i.e. , without resorting to an adversarial discriminator.",
"MI acts as an universal measure of dependence since it captures non-linear and statistical dependencies of high orders between the involved quantities (Kin-ney and Atwal, 2014).",
"However, estimating MI has been a long-standing challenge, in particular when dealing with high-dimensional data (Paninski, 2003; Pichler et al., 2020).",
"Recent methods rely on variational upper bounds.",
"For instance, (Cheng et al., 2020b) study vCLUB-S (Cheng et al., 2020a) for sentence generation tasks.",
"Although this approach improves on previous state-of-the-art methods, it does not allow to fine-tuning of the desired degree of disentanglement, i.e., it enforces light or strong levels of disentanglement where only few features relevant to the input sentence remain (see Feutry et al. (2018) for further discussion).",
"A novel objective to train disentangled representations from attributes.",
"To overcome some of the limitations of both adversarial losses and vCLUB-S we derive a novel upper bound to the MI which aims at correcting the approximation error via either the Kullback-Leibler (Ali and Silvey, 1966) or Renyi (Rnyi et al., 1961) divergences.",
"This correction terms appears to be a key feature to fine-tuning the degree of disentanglement compared to vCLUB-S .",
"Applications and numerical results.",
"First, we demonstrate that the aforementioned surrogate is better suited than the widely used adversarial losses as well as vCLUB-S as it can provide better disentangled textual representations while allowing fine-tuning of the desired degree of disentanglement .",
"In particular, we show that our method offers a better accuracy versus disentanglement trade-offs for fair classification tasks.",
"We additionally demonstrate that our surrogate outperforms both methods when learning disentangled representations for style transfer and conditional sentence generation while not suffering (or degenerating) when the number of classes is greater than two, which is an apparent limitation of adversarial training.",
"By isolating the disentanglement module, we identify and report existing tradeoffs between different degree of disentanglement and quality of generated sentences.",
"The later includes content preservation between input and generated sentences and accuracy on the generated style.",
"We introduce notations, tasks, and closely related work.",
"Consider a training set D = { ( x i , y i ) } ni =1 of n sentences x i X paired with attribute values y i Y { 1 , . . . , |Y|} which indicates a discrete attribute to be disentangled from the resulting representations.",
"We study the following scenarios: Disentangled representations.",
"Learning disentangled representations consists in learning a model M : X R d that maps feature inputs X to a vector of dimension d that retains as much as possible information of the original content from the input sentence but as little as possible about the undesired attribute Y .",
"In this framework, content is defined as any relevant information present in X that does not depend on Y .",
"Applications to binary fair classification.",
"The task of fair classification through disentangled representations aims at building representations that are independent of selective discrete (sensitive) attributes ( e.g. , gender or race).",
"This task consists in learning a model M : X { 0 , 1 } that maps any input x to a label l { 0 , 1 } .",
"The goal of the learner is to build a predictor that assigns each x to either 0 or 1 oblivious of the protected attribute y .",
"Recently, much progress has been made on devising appropriate means of fairness, e.g. , (Zemel et al., 2013; Zafar et al., 2017; Mohri et al., 2019).",
"In particular, (Xie et al., 2017; Barrett et al., 2019; Elazar and Goldberg, 2018) approach the problem based on adversarial losses.",
"More precisely, these approaches consist in learning an encoder that maps x into a representation vector h x , a critic C c which attempts to predict y , and an output classifier f d used to predict l based on the observed h x .",
"The classifier is said to be fair if there is no statistical information about y that is present in h x (Xie et al., 2017; Elazar and Goldberg, 2018).",
"Applications to conditional sentence generation.",
"The task of conditional sentence generation consists in taking an input text containing specific stylistic properties to then generate a realistic (synthetic) text containing potentially different stylistic properties.",
"It requests to learn a model M : X Y X that maps a pair of inputs ( x, y t ) to a sentence x g , where the outcome sentence should retain as much as possible of the original content from the input sentence while having (potentially a new) attribute y g .",
"Proposed approaches to tackle textual style transfer (Zhang et al., 2020; Xu et al., 2019) can be divided into two main categories.",
"The first category (Prabhumoye et al., 2018; Lample et al., 2018) uses cycle losses based on back translation (Wieting et al., 2017) to ensure that the content is preserved during the transformation.",
"Whereas, the second category look to explicitly separate attributes from the content.",
"This constraint is enforced using either adversarial training (Fu et al., 2017; Hu et al., 2017; Zhang et al., 2018; Yamshchikov et al., 2019) or MI minimisation using vCLUB-S (Cheng et al., 2020b).",
"Traditional adversarial training is based on an encoder that aims to fool the adversary discriminator by removing attribute information from the content embedding (Elazar and Goldberg, 2018).",
"As we will observe, the more the representations are disentangled the easier is to transfer the style but at the same time the less the content is preserved.",
"In order to approach the sequence generation tasks, we build on the Style-embedding Model by (John et al., 2018) (StyleEmb) which uses adversarial losses introduced in prior work for these dedicated tasks.",
"During the training phase, the input sentence is fed to a sentence encoder, namely f e , while the input style is fed to a separated style encoder, namely f s e .",
"During the inference phase, the desired stylepotentially different from the input styleis provided as input along with the input sentence.",
"This section describes the proposed approach to learn disentangled representations.",
"We first review MI along with the model overview and then, we derive the variational bound we will use, and discuss connections with adversarial losses.",
"The MI is a key concept in information theory for measuring high-order statistical dependencies between random quantities.",
"Given two random variables Z and Y , the MI is defined by I ( Z ; Y ) = EZY (cid:20) log p ZY ( Z, Y ) p Z ( Z ) p Y ( Y ) (cid:21) , (1) where p ZY is the joint probability density function (pdf) of the random variables ( Z, Y ) , with p Z and p Y representing the respective marginal pdfs.",
"MI is related to entropy h ( Y ) and conditional entropy h ( Y | Z ) as follows: I ( Z ; Y ) = h ( Y ) h ( Y | Z ) .",
"(2) Our models for fair classification and sequence generation share a similar structure.",
"These rely on an encoder that takes as input a random sentence X and maps it to a random representation Z using a deep encoder denoted by f e .",
"Then, classification and sentence generation are performed using either a classifier or an auto-regressive decoder denoted by f d .",
"We aim at minimizing MI between the latent code represented by the Random Variable (RV) Z = f e ( X ) and the desired attribute represented by the RV Y .",
"The objective of interest L ( f e ) is defined as: L ( f e ) L down.",
"where L down.",
"represents a downstream specific (target task) loss and is a meta-parameter that controls the sensitive trade-off between disentanglement ( i.e. , minimizing MI) and success in the downstream task ( i.e. , minimizing the target loss).",
"In Sec. 5, we illustrate theses different trade-offs.",
"Applications to fair classification and sentence generation.",
"For fair classification, we follow standard practices and optimize the cross-entropy between prediction and ground-truth labels.",
"In the sentence generation task L down.",
"represents the negative log-likelihood between individual tokens.",
"Estimating the MI is a long-standing challenge as the exact computation (Paninski, 2003) is only tractable for discrete variables, or for a limited family of problems where the underlying data-distribution satisfies smoothing properties, see recent work by (Pichler et al., 2020).",
"Different from previous approaches leading to variational lower bounds (Belghazi et al., 2018; Hjelm et al., 2018; Oord et al., 2018), in this paper we derive an estimator based on a variational upper bound to the MI which control the approximation error based on the Kullback-Leibler and the Renyi divergences (Daudel et al., 2020).",
"Theorem 1 (Variational upper bound on MI) Let ( Z, Y ) be an arbitrary pair of RVs with ( Z, Y ) p ZY according to some underlying pdf, and let q (cid:98) Y | Z be a conditional variational distribution on the attributes satisfying p ZY (cid:28) p Z q (cid:98) Y | Z , i.e., absolutely continuous.",
"Then, we have that I ( Z ; Y ) EY (cid:20) log (cid:90) q (cid:98) Y | Z ( Y | z ) p Z ( z ) dz (cid:21) + EY Z (cid:104) log q (cid:98) Y | Z ( Y | Z ) (cid:105) + KL (cid:0) p ZY (cid:107) p Z q (cid:98) Y | Z (cid:1) , (4) where KL (cid:0) p ZY (cid:107) p Z q (cid:98) Y | Z (cid:1) denotes the KL divergence.",
"Similarly, we have that for any > 1 , I ( Z ; Y ) EY (cid:20) log (cid:90) q (cid:98) Y | Z ( Y | z ) p Z ( z ) dz (cid:21) + EY Z (cid:104) log q (cid:98) Y | Z ( Y | Z ) (cid:105) + D (cid:0) p ZY (cid:107) p Z q (cid:98) Y | Z (cid:1) , (5) where ( 1) D (cid:0) p ZY (cid:107) p Z q (cid:98) Y | Z (cid:1) = log EZY [ R 1 ( Z, Y )] denotes the Renyi divergence and R ( z, y ) = p Y | Z ( y | z ) q (cid:98) Y | Z ( y | z ) , for ( z, y ) Supp ( p ZY ) .",
"Proof: The upper bound on H ( Y ) is a direct application of the the (Donsker and Varadhan, 1985) representation of KL divergence while the lower bound on H ( Y | Z ) follows from the monotonicity property of the function: (cid:55) D (cid:0) p ZY (cid:107) p Z q (cid:98) Y | Z (cid:1) .",
"Further details are relegated to Appendix A. Remark: It is worth to emphasise that the KL divergence in (4) and Renyi divergence in (5) control the approximation error between the exact entropy and its corresponding bound.",
"From theoretical bounds to trainable surrogates to minimize MI: It is easy to check that the inequalities in (Eq. 4) and (Eq. 5) are tight provided that p ZY p Z q (cid:98) Y | Z almost surely for some adequate choice of the variational distribution.",
"However, the evaluation of these bounds requires to obtain an estimate of the density-ratio R ( z, y ) .",
"Density-ratio estimation has been widely studied in the literature (see (Sugiyama et al., 2012) and references therein) and confidence bounds has been reported by (Kpotufe, 2017) under some smoothing assumption on underlying data-distribution p ZY .",
"In this work, we will estimate this ratio by using a critic C R which is trained to differentiate between a balanced dataset of positive i.i.d samples coming from p ZY and negative i.i.d samples coming from q (cid:98) Y | Z p Z .",
"Then, for any pair ( z, y ) , the density-ratio can be estimated by R ( z, y ) ( C R ( z,y )) 1 ( C R ( z,y )) , where ( ) indicates the sigmoid function and C R ( z, y ) is the unnormalized output of the critic.",
"It is worth to mention that after estimating this ratio, the previous upper bounds may not be strict bounds so we will refer them as surrogates.",
"Adversarial approaches : In order to enhance our understanding of why the proposed approach based on the minimization of the MI using our variational upper bound in Th.",
"1 may lead to a better training objective than previous adversarial losses, we discuss below the explicit relationship between MI and cross-entropy loss.",
"Let Y Y denote a random attribute and let Z be a possibly high-dimensional representation that needs to be disentangled from Y .",
"Then, I ( Z ; Y ) H ( Y ) EY Z (cid:104) log q (cid:98) Y | Z ( Y | Z ) (cid:105) = Const CE ( (cid:98) Y | Z ) , (6) where CE ( (cid:98) Y | Z ) denotes the cross-entropy corresponding to the adversarial discriminator q (cid:98) Y | Z , noting that Y comes from an unknown distribution on which we have no influence H ( Y ) is an unknown constant, and using that the approximation error: KL (cid:0) q ZY (cid:107) q (cid:98) Y | Z p Z (cid:1) = CE ( (cid:98) Y | Z ) H ( Y | Z ) .",
"Eq.",
"6 shows that the cross-entropy loss leads to a lower bound (up to a constant) on the MI.",
"Although the cross-entropy can lead to good estimates of the conditional entropy, the adversarial approaches for classification and sequence generation by (Barrett et al., 2019; John et al., 2018) which consists in maximizing the cross-entropy, induces a degeneracy (unbounded loss) as increases in the underlying optimization problem.",
"As we will observe in next section, our variational upper bound in Th.",
"1 can overcome this issue, in particular for |Y| > 2 .",
"vCLUB-S : Different from our method, Cheng et al. (2020a) introduce I vCLUB which is an upper bound on MI defined by I vCLUB ( Y ; Z ) = EY Z [log p Y | Z ( Y | Z )] EYEZ [log p Y | Z ( Y | Z )] .",
"It would be worth to mention that this bound follows a similar approach to the previously introduced bound in (Feutry et al., 2018).",
"Fair classification task.",
"We follow the experimental protocol of (Elazar and Goldberg, 2018).",
"The main task consists in predicting a binary label representing either the sentiment (positive/negative) or the mention.",
"The mention task aims at predicting if a tweet is conversational.",
"Here the considered protected attribute is the race.",
"The dataset has been automatically constructed from DIAL corpus (Blodgett et al., 2016) which contained race annotations over 50 Million of tweets.",
"Sentiment tweets are extracted using a list of predefined emo-jis and mentions are identified using @mentions tokens.",
"The final dataset contains 160k tweets for the training and two splits of 10K tweets for validation and testing.",
"Splits are balanced such that the random estimator is likely to achieve 50% accuracy.",
"Style Transfer For our sentence generation task, we conduct experiments on three different datasets extracted from restaurant reviews in Yelp.",
"The first dataset, referred to as SYelp, contains 444101, 63483, and 126670 labelled short reviews (at most 20 words) for train, validation, and test, respectively.",
"For each review a binary label is assigned depending on its polarity.",
"Following (Lample et al., 2018), we use a second version of Yelp, referred to as FYelp, with longer reviews (at most 70 words).",
"It contains five coarse-grained restaurant category labels ( e.g. , Asian, American, Mexican, Bars and Dessert).",
"The multi-category FYelp is used to access the generalization capabilities of our methods to a multi-class scenario.",
"Efficiency measure of the disentanglement methods.",
"(Barrett et al., 2019) report that offline classifiers (post training) outperform clearly adversarial discriminators.",
"We will re-training a classifier on the latent representation learnt by the model and we will report its accuracy.",
"Measure of performance within the fair classification task.",
"In the fair classification task we aim at maximizing accuracy on the target task and so we will report the corresponding accuracy.",
"Measure of performance within sentence generation tasks.",
"Sentences generated by the model are expected to be fluent, to preserve the input content and to contain the desired style.",
"For style transfer, the desired style is different from the input style while for conditional sentence generation, both input and output styles should be similar.",
"Nevertheless, automatic evaluation of generative models for text is still an open problem.",
"We measure the style of the output sentence by using a fastText classifier (Joulin et al., 2016b).",
"For content preservation, we follow (John et al., 2018) and compute both:",
"(i) the cosine measure between source and generated sentence embeddings, which are the concatenation of min, max, and mean of word embedding (sen-timent words removed), and",
"(ii) the BLEU score between generated text and the input using SACRE-BLEU from (Post, 2018).",
"Motivated by previous work, we evaluate the fluency of the language with the perplexity given by a GPT-2 (Radford et al., 2019) pretrained model performing fine-tuning on the training corpus.",
"We choose to report the log-perplexity since we believe it can better reflects the uncertainty of the language model (a small variation in the model loss would induce a large change in the perplexity due to the exponential term).",
"Besides the automatic evaluation, we further test our disentangled representation effectiveness by human evaluation results are presented in Tab.",
"1.",
"vCLUB-S , KL refers to a model trained using the vCLUB-S and KL surrogate (see Eq. 14) respectively; and D refers to a model trained based on the -Renyi surrogate (Eq. 15), for { 1 .",
"3 , 1 .",
"5 , 1 .",
"8 } .",
"In this section, we present our results on the fair classification and binary sequence generation tasks, see Ssec.",
"5.1 and Ssec.",
"5.2, respectively.",
"We additionally show that our variational surrogates to the MIcontrarily to adversarial lossesdo not suffer in multi-class scenarios (see Ssec. 5.3).",
"Upper bound on performances.",
"We first examine how much of the protected attribute we can be recovered from an unfair classifier ( i.e. , trained without adversarial loss) and how well does such classifier perform.",
"Results are reported in Fig. 1.",
"We observe that we achieve similar scores than the ones reported in previous studies (Barrett et al., 2019; Elazar and Goldberg, 2018).",
"This experiment shows that, when training to solve the main task, the classifier learns information about the protected attribute, i.e. , the attacker's accuracy is better than random guessing.",
"In the following, we compare the different proposed methods to disentangle representations and obtain a fairer classifier.",
"Methods comparisons.",
"Fig. 1 shows the results of the different models and illustrates the trade-offs between disentangled representations and the target task accuracy.",
"Results are reported on the testset for both sentiment and mention tasks when race is the protected.",
"We observe that the classifier trained with an adversarial loss degenerates for > 5 since the adversarial term in Eq.",
"3 is influencing much the global gradient than the downstream term ( i.e. , cross-entropy loss between predicted and golden distribution).",
"Remarkably, both models trained to minimize either the KL or the Renyi surrogate do not suffer much from the aforementioned multiclass problem.",
"For both tasks, we observe that the KL and the Renyi surrogates can offer better disentangled representations than those induced by adversarial approaches.",
"In this task, both the KL and Renyi achieve perfect disentangled representations ( i.e. , random guessing accuracy on protected attributes) with a 5% drop in the accuracy of the target task, when perfectly masking the protected attributes.",
"As a matter of fact, we observe that vCLUB-S provides only two regimes: either a light protection (attacker accuracy around 60%), with almost no loss in task accuracy ( < 1 ), or a strong protection (attacker accuracy around 50%), where a few features relevant to the target task remain.",
"1 On the sentiment task, we can draw similar conclusions.",
"However, the Renyi's surrogate achieves slightly better-disentangled representations.",
"Overall, we can observe that our proposed surrogate enables good control of the degree of disentangling.",
"Additionally, we do not observe a degenerated behaviouras it is the case with adversarial losseswhen increases.",
"Furthermore, our surrogate allows simultaneously better disentangled representations while preserving the accuracy of the target task.",
"In the previous section, we have shown that the proposed surrogates do not suffer from limitations of adversarial losses and allow to achieve better disentangled representations than existing methods relying on vCLUB-S .",
"Disentanglement modules are a core block for a large number of both style transfer and conditional sentence generation algorithms (Tikhonov et al., 2019; Yamshchikov et al., 2019; Fu et al., 2017) that place explicit constraints to force disentangled representations.",
"First, we assess the disentanglement quality and the control over desired level of disentanglement while changing the downstream term, which for the sentence generation task is the cross-entropy loss on individual token.",
"Then, we exhibit the existing trade-offs between quality of generated sentences, measured by the metric introduced in Ssec.",
"4.2, and the resulting degree of disentanglement.",
"The results are presented for SYelp 5.2.1 Evaluating disentanglement Fig. 2a shows the adversary accuracy of the different methods as a function of .",
"Similarly to the fair classification task, a fair amount of information can be recovered from the embedding learnt with adversarial loss.",
"In addition, we observe a clear degradation of its performance for values > 1 .",
"In this setting, the Renyi surrogates achieves consistently better results in terms of disentanglement than the one minimizing the KL surrogate.",
"The curve for Renyi's surrogates shows that exploring different values of allows good control of the 1 This phenomenon is also reported in (Feutry et al., 2018) on a picture anonymization task.",
"(d) Figure 1: Numerical results on fair classification.",
"Trade-offs between target task and attacker accuracy are reported in Fig. 1a, Fig. 1b for mention task, and Fig. 1c, Fig. 1d for sentiment task.",
"For low values of some points coincide.",
"As increases the level of disentanglement increases and the proposed methods using both KL ( KL ) and Reny divergences ( D ) clearly offer better control than existing methods.",
"disentanglement degree.",
"Renyi surrogate generalizes well for sentence generation.",
"Similarly to the fairness task vCLUB-S only offers two regimes: \"light\" disentanglement with very little polarity transfer and \"strong\" disentanglement.",
"The quality of generated sentences are evaluated using the fluency (see Fig. 3c ), the content preservation (see Fig. 3a), additional results using a cosine similarity are given in Appendix D, and polarity accuracy (see Fig. 3b ).",
"For style transfer, and for all models, we observe trade-offs between disentanglement and content preservation (measured by BLEU) and between fluency and disentanglement.",
"Learning disentangled representations leads to poorer content preservation.",
"As a matter of fact, similar conclusions can be drawn while measuring content with the cosine similarity (see Appendix D).",
"For polarity accuracy, in non-degenerated cases (see below), we observe that the model is able to better transfer the sentiment in presence of disentangled representations.",
"Transferring style is easier with disentangled representations, however there is no free lunch here since disentangling also removes important information about the content .",
"It is worth noting that even in the \"strong\" disentanglement regime vCLUB-S struggles to transfer the polarity (accuracy of 40% for { 1 , 2 , 10 , 15 } ) where other models reach 80%.",
"It is worth noting that similar conclusions hold for two different sentence generation tasks: style transfer and conditional generation, which tends to validate the current line of work that formulates text generation as generic text-to-text (Raffel et al., 2019).",
"Quality of generated sentences.",
"Examples of generated sentences are given in Tab.",
"2 , providing qualitative examples that illustrate the previously observed trade-offs.",
"The adversarial loss degenerates for values 5 and a stuttering phenomenon appears (Holtzman et al., 2019).",
"Tab.",
"1 gathers results of human evaluation and show that our surrogates can better disentangle style while preserving more content than available methods.",
"In Fig. 2b we report the adversary accuracy of our different methods for the values of using FYelp",
"dataset with category label.",
"In the binary setting for 1 , models using adversarial loss can learn disentangled representations while in the multi-class setting, the adversarial loss degenerates for small values of ( i.e sentences are no longer fluent as shown by the increase in perplexity in Fig. 4c).",
"Minimizing MI based on our surrogates seems to mitigate the problem and offer a better control of the disentanglement degree for various values of than vCLUB S .",
"Further results are gathered in Appendix G. 6 Summary and Concluding Remarks We devised a new alternative method to adversarial losses capable of learning disentangled textual representation.",
"Our method does not require adversarial training and hence, it does not suffer in presence of multi-class setups.",
"A key feature of this method is to account for the approximation error incurred when bounding the mutual information.",
"Experiments show better trade-offs than both adversarial training and vCLUB-S on two fair classification tasks and demonstrate the efficiency to learn disentangled representations for sequence generation.",
"As a matter of fact, there is no free-lunch for sentence generation tasks: although transferring style is easier with disentangled representations, it also removes important information about the content .",
"The proposed method can replace the adversary in any kind of algorithms (Tikhonov et al., 2019; Fu et al., 2017) with no modifications.",
"Future work includes testing with other type of labels such as dialog act (Chapuis et al., 2020; Colombo et al., 2020), emotions (Witon et al., 2018), opinion (Gar-cia et al., 2019) or speaker's stance and confidence (Dinkar et al., 2020).",
"Since it allows more fine-grained control over the amount of disentanglement, we expect it to be easier to tune when combined with more complex models.",
"The authors would like to thanks Georg Pichler for the thorough reading.",
"The work of Prof. Pablo Piantanida was supported by the European Commis-sion's Marie Sklodowska-Curie Actions (MSCA), through the Marie Sklodowska-Curie IF (H2020-MSCAIF-2017-EF-797805).",
"The PhD of Pierre is fully founded by IBM GBS France in collaboration with Telecom Paris."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"other",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"method",
"objective",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"In this paper, we present CorefQA, an accurate and extensible approach for the coreference resolution task.",
"We formulate the problem as a span prediction task, like in question answering: A query is generated for each candidate mention using its surrounding context, and a span prediction module is employed to extract the text spans of the coreferences within the document using the generated query.",
"This formulation comes with the following key advantages: (1) The span prediction strategy provides the flexibility of retrieving mentions left out at the mention proposal stage; (2) In the question answering framework, encoding the mention and its context explicitly in a query makes it possible to have a deep and thorough examination of cues embedded in the context of coreferent mentions; and (3) A plethora of existing question answering datasets can be used for data augmentation to improve the model's generalization capability.",
"Experiments demonstrate significant performance boost over previous models, with 83.1 (+3.5) F1 score on the CoNLL-2012 benchmark and 87.5 (+2.5) F1 score on the GAP benchmark.",
"1 1 Introduction Recent coreference resolution systems (Lee et al., 2017, 2018; Zhang et al., 2018a; Kantor and Globerson, 2019) consider all text spans in a document as potential mentions and learn to find an antecedent for each possible mention.",
"There are two key issues with this paradigm, in terms of task formalization and the algorithm.",
"At the task formalization level, mentions left out at the mention proposal stage can never be recovered since the downstream module only operates on the proposed mentions.",
"A2: [ the poison ] Q3: Who were poisoned and did not know how to protect themselves against the poison? A3: [ many people , themselves ] Q4: Whom did they not know how to protect against the poison? A4: [ many people , They ] Q5: They were poisoned and did not know how to protect themselves against what? A5: [ toxic gas ]",
"2018a).",
"The coreference datasets can only provide a weak signal for spans that correspond to entity mentions because singleton mentions are not explicitly labeled.",
"Due to the inferiority of the mention proposal model, it would be favorable if a coreference framework had a mechanism to retrieve left-out mentions.",
"At the algorithm level, existing end-to-end methods (Lee et al., 2017, 2018; Zhang et al., 2018a) score each pair of mentions only based on mention representations from the output layer of a contextualization model.",
"This means that the model lacks the connection between mentions and their contexts.",
"Semantic matching operations between two mentions (and their contexts) are performed only at the output layer and are relatively superficial.",
"Therefore it is hard for their models to capture all the lexical, semantic and syntactic cues in the context.",
"To alleviate these issues, we propose CorefQA, a new approach that formulates the coreference resolution problem as a span prediction task, akin to the question answering setting.",
"A query is generated for each candidate mention using its surrounding context, and a span prediction module is further employed to extract the text spans of the coreferences within the document using the generated query.",
"Some concrete examples are shown in Figure",
"1. 2 This formulation provides benefits at both the task formulation level and the algorithm level.",
"At the task formulation level, since left-out mentions can still be retrieved at the span prediction stage, the negative effect of undetected mentions is significantly alleviated.",
"At the algorithm level, by generating a query for each candidate mention using its surrounding context, the CorefQA model explicitly considers the surrounding context of the target mentions, the influence of which will later be propagated to each input word using the self-attention mechanism.",
"Additionally, unlike existing end-to-end methods (Lee et al., 2017, 2018; Zhang et al., 2018a), where the interactions between two mentions are only superficially modeled at the output layer of contextualization, span prediction requires a more thorough and deeper examination of the lexical, semantic and syntactic cues within the context, which will potentially lead to better performance.",
"Moreover, the proposed question answering formulation allows us to take advantage of existing question answering datasets.",
"Coreference annotation is expensive, cumbersome and often requires linguistic expertise from annotators.",
"Under the proposed formulation, the coreference resolution has the same format as the existing question answering datasets (Rajpurkar et al., 2016a, 2018; Dasigi et al., 2019a).",
"Those datasets can thus readily be used for data augmentation.",
"We show that pre-training on existing question answering datasets improves the model's generalization and 2 This is an illustration of the question formulation.",
"transferability, leading to additional performance boost.",
"Experiments show that the proposed framework significantly outperforms previous models on two widely-used datasets.",
"Specifically, we achieve new state-of-the-art scores of 83.1 (+3.5) on the CoNLL-2012 benchmark and 87.5 (+2.5) on the GAP benchmark.",
"Coreference resolution is a fundamental problem in natural language processing and is considered as a good test of machine intelligence (Morgenstern et al., 2016).",
"Neural network models have shown promising results over the years.",
"Earlier neural-based models (Wiseman et al., 2016; Clark and Manning, 2015, 2016) rely on parsers and hand-engineered mention proposal algorithms.",
"Recent work (Lee et al., 2017, 2018; Kantor and Globerson, 2019) tackled the problem in an end-to-end fashion by jointly detecting mentions and predicting coreferences.",
"Based on how entity-level information is incorporated, they can be further categorized as (1) entity-level models (Bjorkelund and Kuhn, 2014; Clark and Manning, 2015, 2016; Wiseman et al., 2016) that directly model the representation of real-world entities and (2) mention-ranking models (Durrett and Klein, 2013; Wiseman et al., 2015; Lee et al., 2017) that learn to select the antecedent of each anaphoric mention.",
"Our CorefQA model is essentially a mention-ranking model, but we identify coreference using question answering.",
"Machine reading comprehension is a general and extensible task form.",
"Many tasks in natural language processing can be framed as reading comprehension while abstracting away the task-specific modeling constraints.",
"McCann et al. (2018) introduced the decaNLP challenge, which converts a set of 10 core tasks in NLP to reading comprehension.",
"He et al. (2015) showed that semantic role labeling annotations could be solicited by using question-answer pairs to represent the predicate-argument structure.",
"Levy et al. (2017) reduced relation extraction to answering simple reading comprehension questions, yielding models that generalize better in the I was hired to do some Christmas music, and it was just Jingle Bells and I brought my cat with me to the studio, and I was working on the song and the cat jumped up into the record booth and started meowing along, meowing to me.",
"zero-shot setting.",
"Li et al. (2019a,b) cast the tasks of named entity extraction and relation extraction as a reading comprehension problem.",
"In parallel to our work, Aralikatte et al. (2019) converted coreference and ellipsis resolution in a question answering format, and showed the benefits of training joint models for these tasks.",
"Their models are built under the assumption that gold mentions are provided at inference time, whereas our model does not need that assumption it jointly trains the mention proposal model and the coreference resolution model in an end-to-end manner.",
"Data augmentation is a strategy that enables practitioners to significantly increase the diversity of data available for training models.",
"Data augmentation techniques have been explored in various fields such as question answering (Talmor and Berant, 2019), text classification (Kobayashi, 2018) and dialogue language understanding (Hou et al., 2018).",
"In coreference resolution, Zhao et al. (2018); Emami et al. (2019); Zhao et al. (2019) focused on debiasing the gender bias problem; Aralikatte et al. (2019) explored the effectiveness of joint modeling of ellipsis and coreference resolution.",
"To the best of our knowledge, we are the first to use existing question answering datasets as data augmentation for coreference resolution.",
"In this section, we describe our CorefQA model in detail.",
"The overall architecture is illustrated in Figure",
"2. 3.1 Notations Given a sequence of input tokens X = { x 1 , x 2 , ..., x n } in a document, where n denotes the length of the document.",
"N = n ( n + 1) / 2 denotes the number of all possible text spans in X .",
"Let e i denotes the i -th span representation 1 i N , with the start index FIRST (i) and the end index LAST (i).",
"e i = { x FIRST ( i ) , x FIRST ( i )+1 , ..., x LAST ( i ) 1 , x LAST ( i ) } .",
"The task of coreference resolution is to determine the antecedents for all possible spans.",
"If a candidate span e i does not represent an entity mention or is not coreferent with any other mentions, a dummy token (cid:15) is assigned as its antecedent.",
"The linking between all possible spans e defines the final clustering.",
"We use the SpanBERT model 3 to obtain input representations following Joshi et al. (2019a).",
"Each token x i is associated with a SpanBERT representation x i .",
"Since the speaker information is indispensable for coreference resolution, previous methods (Wiseman et al., 2016; Lee et al., 2017; Joshi et al., 2019a) usually convert the speaker information into binary features indicating whether two mentions are from the same speaker.",
"However, we use a straightforward strategy that directly concatenates the speaker's name with the corresponding utterance.",
"This strategy is inspired by recent research in personalized dialogue modeling that use persona information to represent speakers (Li et al., 2016; Zhang et al., 2018b; 3 https://github.com/facebookresearch/ SpanBERT Mazare et al., 2018).",
"In subsection 5.2, we will empirically demonstrate its superiority over the feature-based method in Lee et al. (2017).",
"To fit long documents into SpanBERT, we use a sliding-window approach that creates a T -sized segment after every T /2 tokens.",
"Segments are then passed to the SpanBERT encoder independently.",
"The final token representations are derived by taking the token representations with maximum context.",
"Similar to Lee et al. (2017), our model considers all spans up to a maximum length L as potential mentions.",
"To improve computational efficiency, we further prune the candidate spans greedily during both training and evaluation.",
"To do so, the mention score of each candidate span is computed by feeding the first and the last of its constituent token representations into a feed-forward layer: s m ( i ) = FFNN m ([ x FIRST ( i ) , x LAST ( i ) ]) (1) where x FIRST ( i ) and x LAST ( i ) represent the first and the last token representation of the i -th candidate span.",
"FFNN m ( ) denotes the feed-forward neural network that computes a nonlinear mapping from the input vector to the mention score.",
"We only keep up to n (where n is the document length) spans with the highest mention scores.",
"Given a mention e i proposed by the mention proposal network, the role of the mention linking network is to give a score s a ( i, j ) for any text span e j , indicating whether e i and e j are coreferent.",
"We propose to use the question answering framework as the backbone to compute s a ( i, j ) .",
"It operates on the triplet { context (X), query (q), answers (a) } .",
"The context X is the input document.",
"The query q ( e i ) is constructed as follows: given e i , we use the sentence that e i resides in as the query, with the minor modification that we encapsulates e i with special tokens < mention >< /mention > .",
"The answers a are the coreferent mentions of e i .",
"Following Devlin et al. (2019), we represent the input query and the context as a single packed sequence.",
"Since a mention can have multiple coreferent mentions, we follow Li et al. (2019a,b) and generate a BIO tag for each token.",
"BIO tags respectively mark the beginning ( B ), inside ( I ) and outside ( O ) of a coreferent mention.",
"It is worth noting that there exist unanswerable queries where labels for tokens in X are all O .",
"4 A query is considered unanswerable in the following scenarios: (1) the candidate span e i does not represent an entity mention or (2) the candidate span e i represents an entity mention but is not coreferent with any other mentions in X .",
"FFNN tag () represents the feed-forward neural network that computes a nonlinear mapping from the input vector to the tag logit.",
"We further extend the token-level score in Eq.",
"2 to the span level.",
"The anaphora score s a ( j | i ) , the compatibility score of span j being a answer for span i , is calculated by the log probability of its beginning word taking the B tag and the rest taking the I tag: s a ( j | i ) = 1 | e j | [log p BFIRST ( j ) + k = LAST ( j ) (cid:88) k = FIRST ( j )+1 log p I k ] (3) A closer look at Eq.3 reveals that it only models the uni-directional coreference relation from e i to e j , i.e., e j is the answer for query q ( e i ) .",
"This is suboptimal since if e i is a coreference mention of e j , then e j should also be the coreference mention e i .",
"We thus need to optimize the bi-directional relation between e i and e j .",
"5 The final score s a ( i, j ) is thus given as follows: s a ( i, j ) = 1 2( s a ( j | i ) + s a ( i | j )) (4) s a ( i | j ) can be computed in the same way as s a ( j | i ) , in which q ( e j ) is used as the query.",
"For a pair of text span e i and e j , the premises for them being coreferent mentions are (1) they are mentions and (2) they are coreferent.",
"This makes the overall score s ( i, j ) for e i and e j the combination of Eq.1 and Eq.4: s ( i, j ) = s m ( i ) + s m ( j ) + s a ( i, j ) (5) 4 In the rare cases where coreferent answers are nested, we simply treat all tokens of the inner mentions as I .",
"5 This bidirectional relationship is actually referred to as mutual dependency and has shown to benefit a wide range of NLP tasks such as machine translation (Hassan et al., 2018) or dialogue generation (Li et al., 2015).",
"Given a document X with length n and the number of spans O ( n 2 ) , the computation of Eq.5 for all mention pairs is intractable with the complexity of O ( n 4 ) .",
"Given an extracted mention e i , the computation of Eq.5 for ( e i , e j ) regarding all e j is still extremely intensive since the computation of the backward span prediction score s a ( i | j ) requires running question answering models on all query q ( e j ) .",
"A further pruning procedure is thus needed: For each query q ( e i ) , we collect C span candidates only based on the s a ( j | i ) scores.",
"For each mention e i proposed by the mention proposal network, it is associated with C potential spans proposed by the mention linking network based on s ( j | i ) , we aim to optimize the marginal log-likelihood of all correct antecedents implied by the gold clustering.",
"Following Lee et al. (2017), we append a dummy token (cid:15) to the C candidates.",
"The model will output it if none of the C span candidates is coreferent with e i .",
"For each mention e i , the model learns a distribution P ( ) over all possible antecedent spans e j based on the global score s ( i, j ) from Eq.",
"5: P ( e j ) = e s ( i,j ) (cid:80) j (cid:48) C e s ( i,j (cid:48) ) (6) The mention proposal module and the mention linking module are jointly trained in an end-to-end fashion using training signals from Eq.6, with the SpanBERT parameters shared.",
"Given an input document, we can obtain an undirected graph using the overall score, each node of which represents a candidate mention from either the mention proposal module or the mention linking module.",
"We prune the graph by keeping the edge whose weight is the largest for each node based on Eq.6.",
"Nodes whose closest neighbor is the dummy token (cid:15) are abandoned.",
"Therefore, the mention clusters can be decoded from the graph.",
"We hypothesize that the reasoning (such as synonymy, world knowledge, syntactic variation, and multiple sentence reasoning) required to answer",
"the questions are also indispensable for coreference resolution.",
"Annotated question answering datasets are usually significantly larger than the coreference datasets due to the high linguistic expertise required for the latter.",
"Under the proposed QA formulation, coreference resolution has the same format as the existing question answering datasets (Rajpurkar et al., 2016a, 2018; Dasigi et al., 2019a).",
"In this way, they can readily be used for data augmentation.",
"We thus propose to pretrain the mention linking network on the Quoref dataset (Dasigi et al., 2019b), and the SQuAD dataset (Rajpurkar et al., 2016b).",
"Comparing with existing models (Lee et al., 2017, 2018; Joshi et al., 2019b), the proposed question answering formalization has the flexibility of retrieving mentions left out at the mention proposal stage.",
"However, since we still have the mention proposal model, we need to know in which situation missed mentions could be retrieved and in which situation they cannot.",
"We use the example in Figure 1 as an illustration, in which { many people, They, themselves } are coreferent mentions: If partial mentions are missed by the mention proposal model, e.g., many people and They, they can still be retrieved in the mention linking stage when the not-missed mention (i.e., themselves) is used as query.",
"But, if all the mentions within the cluster are missed, none of them can be used for query construction, which means they all will be irreversibly left out.",
"Given the fact that the proposal mention network proposes a significant number of mentions, the chance that mentions within a mention cluster are all missed is relatively low (which exponentially decreases as the number of entities increases).",
"This explains the superiority (though far from perfect) of the proposed model.",
"However, how to completely remove the mention proposal network remains a problem in the field of coreference resolution.",
"The special tokens used to denote the speaker's name ( < speaker >< /speaker > ) and the special tokens used to denote the queried mentions ( < mention >< /mention > ) are initialized by randomly taking the unused tokens from the SpanBERT vocabulary.",
"The sliding window size T = 512, and the mention keep ratio = 0.2.",
"The maximum length L for mention proposal = 10 and the maximum number of antecedents kept for each mention C = 50.",
"The SpanBERT parameters are updated by the Adam optimizer (Kingma and Ba, 2015) with initial learning rate 1 10 5 and the task parameters are updated by the Range optimizer 6 with initial learning rate 2 10 4 .",
"We compare the CorefQA model with previous neural models that are trained end-to-end:",
"e2e-coref (Lee et al., 2017) is the first end-to-end coreference system that learns which spans are entity mentions and how to best cluster them jointly.",
"Their token representations are built upon the GLoVe (Pennington et al., 2014) and Turian (Turian et al., 2010) embeddings.",
"c2f-coref + ELMo (Lee et al., 2018) extends Lee et al. (2017) by combining a coarse-to-fine pruning with a higher-order inference mechanism.",
"Their representations are built upon ELMo embeddings (Peters et al., 2018).",
"c2f-coref + BERT-large(Joshi et al., 2019b) builds the c2f-coref system on top of BERT (Devlin et al., 2019) token representations.",
"EE + BERT-large (Kantor and Globerson, 2019) represents each mention in a cluster via an approximation of the sum of all mentions in the cluster.",
"c2f-coref + SpanBERT-large (Joshi et al., 2019a) focuses on pre-training span representations to better represent and predict spans of text.",
"The English data of CoNLL-2012 shared task (Pradhan et al., 2012) contains 2,802/343/348 train/development/test documents in 7 different genres.",
"The main evaluation is the average of three metrics MUC (Vilain et al., 1995), B 3 (Bagga and Baldwin, 1998), and CEAF 4 (Luo, 2005) on the test set according to the official CoNLL-2012 evaluation scripts 7 .",
"6 https://github.com/lessw2020/ Ranger-Deep-Learning-Optimizer 7 http://conll.cemantix.org/2012/ software.html We compare the CorefQA model with several baseline models in Table",
"1. Our CorefQA system achieves a huge performance boost over existing systems: With SpanBERT-base, it achieves an F1 score of 79.9, which already outperforms the previous SOTA model using SpanBERT-large by 0.3.",
"With SpanBERT-large, it achieves an F1 score of 83.1, with a 3.5 performance boost over the previous SOTA system.",
"The GAP dataset (Webster et al., 2018) is a gender-balanced dataset that targets the challenges of resolving naturally occurring ambiguous pronouns.",
"It comprises 8,908 coreference-labeled pairs of (ambiguous pronoun, antecedent name) sampled from Wikipedia.",
"We follow the protocols in Webster et al. (2018); Joshi et al. (2019b) and use the off-the-shelf resolver trained on the CoNLL-2012 dataset to get the performance of the GAP dataset.",
"Table 2 presents the results.",
"We can see that the proposed CorefQA model achieves state-of-the-art performance on all metrics on the GAP dataset.",
"We perform comprehensive ablation studies and analyses on the CoNLL-2012 development dataset.",
"Results are shown in Table",
"3. 5.1 Effects of Different Modules in the Proposed Framework Effect of SpanBERT Replacing SpanBERT with vanilla BERT leads to a 3.8 F1 degradation.",
"This verifies the importance of span-level pre-training for coreference resolution and is consistent with previous findings (Joshi et al., 2019a).",
"Effect of Pre-training Mention Proposal Network Skipping the pre-training of the mention proposal network using golden mentions results in a 7.5 F1 degradation, which is in line with our expectation.",
"A randomly initialized mention proposal model implies that mentions are randomly selected.",
"Randomly selected mentions will mostly be transformed to unanswerable queries.",
"This makes it hard for the question answering model to learn at the initial training stage, leading to inferior performance.",
"Effect of QA pre-training on the augmented datasets One of the most valuable strengths of MUC B 3 CEAF 4 P R F1 P R F1 P R F1 Avg.",
"converting anaphora resolution to question answering is that existing QA datasets can be readily used for data augmentation purposes.",
"We see a contribution of 0.7 F1 from pre-training on the Quoref dataset (Dasigi et al., 2019a) and a contribution of 0.3 F1 from pre-training on the SQuAD dataset (Rajpurkar et al., 2016a).",
"Effect of Question Answering We aim to study the pure performance gain of the paradigm shift from mention-pair scoring to query-based span prediction.",
"For this purpose, we replace the mention linking module with the mention-pair scoring module described in Lee et al. (2018), while others 1 2 3 4 5 6 7+ 10 20 30 40 50 60 70 80 90 100 Number of speakers per document % F1(Speaker as feature) F1(Speaker as input) Frequency Figure 3: Performance on the development set of the CoNLL-2012 dataset with various number of speakers.",
"remain unchanged.",
"We observe an 8.4 F1 degradation in performance, demonstrating the significant superiority of the proposed question answering framework over the mention-pair scoring framework.",
"We compare our speaker modeling strategy (de-noted by Speaker as input ), which directly concatenates the speaker's name with the corresponding utterance, with the strategy in Wiseman et al. (2016); Lee et al. (2017); Joshi et al. (2019a) (denoted by Speaker as feature ), which converts speaker information into binary features indicating whether two mentions are from the same speaker.",
"We show the average F1 scores breakdown by 10 20 30 40 50 30 40 50 60 70 80 90 100 the number of spans kept per word M e n ti on R eca ll ( % ) Joshi et al. (2019a) (various ) Joshi et al. (2019a) (actual ) Our model (various ) Our model (actual ) Figure 4: Change of mention recalls as we increase the number of spans kept per word.",
"documents according to the number of their constituent speakers in Figure",
"3. Results show that the proposed strategy performs significantly better on documents with a larger number of speakers.",
"Compared with the coarse modeling of whether two utterances are from the same speaker, a speaker's name can be thought of as speaker ID in persona dialogue learning (Li et al., 2016; Zhang et al., 2018b; Mazare et al., 2018).",
"Representations learned for names have the potential to better generalize the global information of the speakers in the multi-party dialogue situation, leading to better context modeling and thus better results.",
"Since the proposed framework has the potential to retrieve mentions missed at the mention proposal stage, we expect it to have higher overall mention recall rate than previous models (Lee et al., 2017, 2018; Zhang et al., 2018a; Kantor and Globerson, 2019).",
"We examine the proportion of gold mentions covered in the development set as we increase the hyperparameter (the number of spans kept per word) in Figure",
"4. Our model consistently outperforms the baseline model with various values of .",
"Notably, our model is less sensitive to smaller values of .",
"This is because missed mentions can still be retrieved at the mention linking stage.",
"We provide qualitative analyses to highlight the strengths of our model in Table",
"4. Shown in Example 1, by explicitly formulating the anaphora identification of the company as a 1 [ Freddie Mac ] is giving golden parachutes to two of its ousted executives.",
". . . Yesterday Federal Prosecutions announced a criminal probe into [ the company ].",
"2 [ A traveling reporter ] now on leave and joins us to tell [ her ] story.",
"Thank [ you ] for coming in to share this with us.",
"3 Paula Zahn: [ Thelma Gutierrez ] went inside the forensic laboratory where scientists are trying to solve this mystery.",
"Thelma Gutierrez: In this laboratory alone [ I ] 'm surrounded by the remains of at least twenty different service members who are in the process of being identified so that they too can go home.",
"query, our model uses more information from a local context, and successfully identifies Freddie Mac as the answer from a longer distance.",
"The model can also efficiently harness the speaker information in a conversational setting.",
"In Example 3, it would be difficult to identify that [ Thelma Gutierrez ] is the correct antecedent of mention [ I ] without knowing that Thelma Gutierrez is the speaker of the second utterance.",
"However, our model successfully identifies it by directly feeding the speaker's name at the input level.",
"In this paper, we present CorefQA, a coreference resolution model that casts anaphora identification as the task of query-based span prediction in question answering.",
"We showed that the proposed formalization can successfully retrieve mentions left out at the mention proposal stage.",
"It also makes data augmentation using a plethora of existing question answering datasets possible.",
"Furthermore, a new speaker modeling strategy can also boost the performance in dialogue settings.",
"Empirical results on two widely-used coreference datasets demonstrate the effectiveness of our model.",
"In future work, we will explore novel approaches to generate the questions based on each mention, and evaluate the influence of different question generation methods on the coreference resolution task.",
"We thank all anonymous reviewers for their comments and suggestions.",
"The work is supported by the National Natural Science Foundation of China (NSFC No. 61625107 and 61751209)."
] | [
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other"
] |
[
"This paper studies the (often implicit) human values behind natural language arguments, such as to have freedom of thought or to be broadminded .",
"Values are commonly accepted answers to why some option is desirable in the ethical sense and are thus essential both in real-world argumentation and theoretical argumentation frameworks.",
"However, their large variety has been a major obstacle to modeling them in argument mining.",
"To overcome this obstacle, we contribute an operationalization of human values, namely a multi-level taxonomy with 54 values that is in line with psychological research.",
"Moreover, we provide a dataset of 5270 arguments from four geographical cultures, manually annotated for human values.",
"First experiments with the automatic classification of human values are promising, with F 1 -scores up to 0.81 and 0.25 on average.",
"How come people disagree on the best course forward in controversial issues, even if they use the same information to form their opinion?",
"A way to get to the bottom of such disagreement is to repeatedly ask them why they see something as desirable.",
"We observe that people have different beliefs and priorities of what is generally worth striving for (e.g., personal achievements vs. humility) and how to do so (e.g., being self-directed vs. respecting traditions), often referred to as (human) values (Searle, 2003).",
"Some values tend to conflict and others to align (see Figure 1), which can cause disagreement on the best course forward, but also the support, if not formation, of political parties that promote the respective highly revered values.",
"Moreover, one can observe different value priorities between cultures and disagreement thereon.",
"Due to their outlined importance, human values are studied both in the social sciences (Schwartz, 1994) and in formal argumentation (Bench-Capon, 2003) for decades.",
"According to the social sciences, a value is a (1) belief (2) pertaining to desirable end states or modes of conduct, that (3) transcends specific situations, (4) guides selection or evaluation of behavior, people, and events, and (5) is ordered by importance relative to other values to form a system of value priorities.",
"As Schwartz continues, these features make it possible to conclude that security and independence are values, 4459 whereas thirst and a preference for blue ties are not.",
"Social media is good for us. Though it might make people less polite, it makes our lives much easier.",
"To understand the pragmatics of this argument, a reader has to acknowledge the belief (Point 1 in the definition above) that the end state (2) of having a comfortable life is desirable in general (3).",
"To concur with the statement (4), the reader further has to prefer having a comfortable life over being polite (5)ignoring other arguments on the topic for the sake of the example.",
"Within computational linguistics, human values thus provide the context to categorize, compare, and evaluate argumentative statements, creating several possibilities: to inform social science research on values through large-scale datasets; to assess argumentation with respect to scope and strength; to generate or select arguments based on the value system of a target audience; and to identify opposing and shared values on both sides of a controversial topic.",
"However, the task to identify values in arguments seems daunting due to their large number, often implicit use in arguments, and vague definitions.",
"On the other hand, the creation of larger argumentation datasets, advancements in natural language understanding, and the decade-long rigorous tax-onomization of values by social scientists has put such an automatic identification within reach.",
"As a first endeavor on the automatic identification of values in written arguments, this paper makes three contributions: (1) a consolidated multilevel taxonomy of 54 human values taken from four authoritative cross-cultural social science studies (Section 3); (2) a dataset of 5270 arguments from the US (most arguments), Africa, China, and India, each of which manually annotated for all values by three annotators, corresponding to about 850k human judgments (Section 4); and (3) first classification results per taxonomy level, establishing a baseline and revealing promising results both within and across cultures (Section 5).",
"Human values are of concern to most if not to all social sciences (Rokeach, 1973) and have also been integrated into computational frameworks of argumentation (Bench-Capon, 2003).",
"In NLP, values have been analyzed for personality profiling (Ma-heshwari et al., 2017), but not yet for argument mining, as considered here.",
"Rokeach (1973) already described the two concepts of (1) a value as a belief pertaining to desirable end states or modes of conduct and (2) a value system as prioritization of values based on cultural, social, and personal factors.",
"These definitions attribute values to persons rather than to objects, facilitating a systematic analysis (Rokeach, 1973).",
"The paper at hand follows these definitions and targets the personal values behind arguments, that is, the values that the arguments, mostly implicitly, resort to.",
"Several proposed value schemes are domain-independent and hence suited to analyze generic argumentation.",
"Our consolidated value taxonomy (Section 3) is thus based on these schemes.",
"Combining research from anthropology, sociology, philosophy, and psychology, Rokeach (1973) estimates the total number of human values to be fewer than hundreds, and develops a practical survey of 36 values that distinguishes between values pertaining to desirable end states and desirable behavior.",
"Specifically for cross-cultural analysis, Schwartz et al. (2012) derived 48 value questions from the universal needs of individuals and societies, including obeying all the laws and to be humble .",
"Moreover, Schwartz (1994) proposes a relatedness of values by their tendency to be compatible in their pursuit (see Figure 1).",
"This relatedness reflects two higher order conflicts: (1) openness to change/own thoughts vs. conservation/submission, and (2) self-transcension (directed towards oth-ers/the environment) vs. self-enhancing (directed towards one's self), allowing to analyse values at several levels.",
"Cheng and Fleischmann (2010) consolidates 12 schemes into a meta-inventory with 16 values, such as honesty and justice , revealing a large overlap in schemes across fields of research.",
"However, as the meta-inventory is strictly more coarse-grained than Schwartz et",
"al.'s theory we do not investigate it further in this paper.",
"Other schemes, however, pertain to specific purposes, making them less suited for our study.",
"We give an overview for completeness.",
"England (1967) suggested 66 values related to management decisions, such as high productivity and prestige , and categorized them by relevant entity, for example business organizations and individuals .",
"Brown and Crace (2002) looked at 14 values for counseling and therapy, such as responsibility and spirituality , and Kahle et al. (1988) at nine for consumer research, such as warm relationships and excitement .",
"Formal argumentation employs value systems to model audience-specific preferences, that is, an argument's strength depends on the degree to which the audience reveres the values the argument resorts to.",
"Examples include value-based argumentation schemes (van der Weide et al., 2009), defeasible logic programming (Teze et al., 2019), and the value-based argumentation framework of Bench-Capon (2003).",
"The latter is an extension of the abstract argumentation framework of Dung (1995) that has already been applied manually to analyze interactions with reasoning and persuasion subject to a specific value system (Atkinson and Bench-Capon, 2021).",
"This paper presents a first step towards the large-scale automatic application of these works as it takes values to argument mining.",
"Feldman (2021) recently showed the strong connection between values and the moral foundation theory (Haidt, 2012).",
"Like personal values, this theory analyzes ethical reasoning behind human choices, but considers five rather abstract founda-tions: care, fairness, loyalty, authority, and purity.",
"Alshomary and Wachsmuth (2021) hypothesized that the foundations could be used for audience-specific argument generation.",
"Kobbe et al. (2020) tried to classify arguments by foundations, but noted a low human agreement due to the vagueness of the foundations.",
"We assume values can here contribute to the classification by foundations.",
"Values overlap with idea of framing in communication, that is, the selection and emphasis of specific aspects of (perceived) reality to promote a particular problem, causal interpretation, ethical evaluation, and/or recommendation (Entman, 1993).",
"In frames, values can define the costs and benefits of options (Entman, 1993), while common value systems are used for evaluation.",
"Framing has often been studied computationally for news (Naderi and Hirst, 2015; Chen et al., 2021), but also for political speech (De Vreese, 2005), and argumentation (Ajjour et al., 2019).",
"In the latter, some values are so prevalent that they constitute frames of their own, indicating a potential use of values in frame identification.",
"For example, 14 out of 54 values we use are also frames in the dataset of Ajjour et al. 1 Values may be considered as aspects under which to group arguments.",
"Some researchers have mined aspects from text (Trautmann, 2020) or used them to control argument generation (Schiller et al., 1 Per Jaccard similarity of value and frame names 0 . 5 . 2021).",
"Others have studied the task of opinion summarization in arguments (Egan et al., 2016; Misra et al., 2016; Chen et al., 2019), aiming at the most important aspects discussed in a debate.",
"Related, the task of key point analysis (Bar-Haim et al., 2020; Friedman et al., 2021) is to generate a small set of concise statements that each represent a different aspect.",
"We argue that analyzing the values found in a collection of arguments provides a new perspective to aspects in argumentation, focusing on the why behind an argument's reasoning.",
"Human values have been considered in formal argumentation since about 20 years (Bench-Capon, 2003).",
"However, to the best of our knowledge, our paper is the first that aims at identifying the values behind arguments computationally.",
"The term be-hind reflects the fact that many arguments do not explicate values; for example, in the argument no matter they felt forced to commit it: anyone who commits a crime should be prosecuted no value is mentioned literally.",
"The argument gains its persuasive strength when being connected to values, which can be both desirable behavior ( behaving properly ) or end states ( a safe country ).",
"By putting forward an argument, its proponent wants the audience to connect the argument with its values.",
"Formally, values are connected specifically with the argument's premise.",
"However, automatic models might still improve when incorporating the textual conclusion as context for the textual premise.",
"The task studied in this paper is to draw this connection between arguments and values automatically.",
"The heart of a value-based argumentation framework is a value taxonomy (or a set of values) that is both accepted and relevant.",
"The research presented in this paper is largely based on the refined theory of Schwartz et al. (2012), 2 which, however, has been extended by us: Comparing Schwartz et",
"al.'s refined theory with three other widespread value lists against a sample of our dataset, we decided to add and integrate nine values (see Table 1).",
"We also asked the annotators to comment on supposedly missing values (see Section 4).",
"For most of the additional 48 value descriptions that we received ( be humane , be fair , be modern , etc.), we identified existing values or value combinations in the taxonomy that subsume them, suggesting to extend the value descriptions rather than adding new values.",
"Only two of the added values are not directly related to the universal needs that Schwartz (1994) based the value categories on.",
"The proposed category universalism: objectivity integrates well between the outward thinking of universalism: tolerance and the free thinking of self-direction: thought (see Figure 1).",
"We adopt a uniform naming scheme where the value names reflect the distinction of Rokeach (1973) into instrumental ( be . . . ) and terminal ( have . . . ) values, and are easy to embed in sentences, for example, it is good to be creative .",
"The taxonomy levels are chosen based on usefulness in social science research.",
"The values at Level 1 are intended to be the items in surveys (Schwartz, 1994), which is why we also suggest to use them for dataset annotation.",
"Moreover, Level 1 values can still be classified into being either instrumental or terminal.",
"One could, however, create arbitrarily coarseand fine-grained levels.",
"3 The close connection of our taxonomy to social science research enables studies of value systems across disciplines that are beyond the scope of this paper.",
"The grouping of values at higher levels allows for classifications at coarser levels of granularity, enabling investigations such as, whether a specific set of arguments focus on persons or society mainly, or whether they imply a rather anxiety-free or a rather anxiety-avoiding background (cf. Figure 1).",
"Also, the circular organization of the taxonomy enables the analysis of major directions in a collection of arguments, which can, for example, be used to study value differences in argumentation datasets of different cultures.",
"In addition, for the 41 values with a link to the World Values Survey (the WVS column in Table 1, Haerpfer et al., 2020), the corresponding dataset contains information on people's value priorities (i.e., value systems) collected rigorously for 51 territories, with the earliest survey from 1981 and the latest from 2020.",
"These links allow comparing value distributions identified in regional datasets with survey data.",
"This section presents the first dataset for studying human values behind arguments.",
"Each of the 5270 arguments included was annotated by three crowdworkers for all 54 values from Section",
"3. The dataset, taxonomy description, and annotation interface are available online as Webis-ArgValues-22.",
"4 3 For example, with values such as have no broken legs.",
"Following the aspiration of a cross-cultural value taxonomy and using territories as a proxy for cultures, the dataset is composed of four parts: Africa , China , India , and USA .",
"Each argument consists of one premise, one conclusion, and a stance attribute indicating whether the premise is in favor of (pro) or against (con) the conclusion.",
"As existing argument datasets are almost exclusively from a Western background, we had to collect new suitable arguments for the non-US parts, drastically limiting their size.",
"The respective non-US sources were recommended to us for their authenticity by students from the respective territory that work with our groups.",
"Note that this data is not intended to represent the respective culture, but to train and benchmark classifiers across sources.",
"Africa We manually extracted 50 arguments from recent editorials of the debating ideas section of a pan-African news platform, African Arguments .",
"5 Premises could often be extracted literally, but conclusions were mostly implicit and had to be compiled from several source sentences.",
"China We extracted 100 arguments from the recommendation and hotlist section of a Chinese question-answering website, Zhihu .",
"6 We manually identified key points (premises and conclusions) in the answers and manually translated them to English using automated translation for a first draft.",
"India We extracted 100 arguments from the controversial debate topics 2021 section of Group Discussion Ideas .",
"7 This blog collects pros and cons on various topics from Indian news to support discussions.",
"Premises and conclusions were used as-is.",
"USA We took 5020 arguments with a manual argument quality rating of at least 0.5 from the 30,497 arguments of the IBM-ArgQ-Rank-30kArgs dataset (Gretz et al., 2020).",
"For the dataset, crowdworkers wrote one pro and one con argument for one of 71 common controversial topics.",
"We rephrased the topics to represent conclusions.",
"Due to the difficulty of collecting datasets from various cultures, the number of respective arguments (250) is small compared to the US part.",
"However, we will mainly use them for testing the robustness of identifying values in arguments.",
"Table 2 shows one example from each part.",
"Note that we do not see any part as representative for the respective culture, but rather as a necessary approximation (see Section 7 for a discussion).",
"Table 3 provides an overview of the dataset.",
"Premises are longer than conclusions, with USA having the lowest average for both.",
"The Africa part has the fewest premises per conclusion (2.2) and the US part the most (70.7).",
"The skew between pros and cons is highest for Africa with a ratio of about 3:1.",
"All these observations are results of the collection process and are natural variations for arguments.",
"We employed a custom three-part annotation interface, optimized for speed and task expertise acquisition through keyboard shortcuts and a clear template-like structure (see Appendix A for screen-shots).",
"Besides instructions and example arguments, a brief explanation of specific terms was given if needed (e.g., for the 996 overtime system mentioned in several arguments from China).",
"Below this introductory material, the main part of the interface consists of three panels.",
"The first panel places the argument to be annotated in a scenario: Imagine someone is arguing [in favor of/against] [conclusion] by saying: [premise].",
"The second panel formulated the annotation task for a value as a yes/no question.",
"8 The question follows the operationalization of Section 3: If asked Why is that good?, might this be their justification?",
"Because it is good to [value].",
"For illustration, example implications of matching arguments were provided.",
"Instructions stated that one to five values are typical for an argument, and more than 10 should be avoided.",
"A third panel shows the annotation progress.",
"Annotators could write feedback on both arguments and values.",
"The crowdsourcing ran on the MTurk platform, with annotators taking 2:40 minutes per argument on average, and totaling 90 days of 8-hour work.",
"We required them to have an approval rate of at least 98%, at least 100 approved work tasks, and for language proficiencybeing located in the US.",
"No further personal information was gathered.",
"The annotators were first restricted to three annotation tasks.",
"Manual quality checks at this stage resulted in 154 work rejections (5% rejection rate) due to ignored instructions.",
"We then selected 27 annotators for annotating the bulk of arguments, ensuring at least 3 annotations per argument.",
"As mandatory for MTurk, annotators were paid on a task basis, which led to an average hourly wage of $8.12 (cur-rent US federal minimum wage: $7.25).",
"Additionally, we paid bonuses of total $65.65, especially to annotators who wrote extensive comments.",
"Despite the difficulty of the annotation task, the crowdworker annotators reached an average value-wise agreement of 0.49 (Krippendorff, 2004).",
"We found most disagreement arose from the complexity of annotating 54 values at once, with annotators sometimes confusing values despite the descriptions.",
"For follow-up datasets, one could likely reduce such problems by training annotators on the arguments of our dataset with highest disagreement.",
"One step we implemented for quality assurance is that we manually checked the 48 arguments ( < 1%) to which MACE assigned more than 10 values, reducing their values to the most prevalent 57 ones.",
"The right side of Table 1 shows the frequency of each value in each dataset part, revealing that each value occurs at least once.",
"A value in the ground truth also automatically led to an assignment of all parent labels in the taxonomy (see Figure 1).",
"Figure 2 shows the resulting level-wise distribution of labels per argument.",
"As the majority of arguments are assigned both labels for Levels 4a and b, these base dichotomies for values are hence mostly not dichotomous for arguments.",
"So, like the value systems of people, many arguments seem to resort to a broad spectrum of values from the value continuum at once.",
"For example, the first argument in Table 2 resorts to both having a comfortable life (personal focus, self-protection) and having equality (social focus, growth).",
"Similar to observations of Rokeach (1973, p. 50f) on value systems, this example showcases an interaction between values that change their psychological significance, where having equality gives having a comfortable life a social focus.",
"We believe that our dataset enables scholars to study such interactions for arguments in the future.",
"This section presents a first attempt at automatically identifying human values using standard approaches.",
"The first experiment focuses on the USA dataset part alone, the second on a cross-cultural setting.",
"We compare three approaches, for which we provide our implementation online: 9 BERT .",
"Fine-tuned multi-label bert-base-uncased with batch size 8 and learning rate 2 5 (20 epochs).",
"SVM .",
"A linear kernel scikit-learn support vector machine trained label-wise with C = 18 .",
"1-Baseline .",
"Classifies each argument as resorting to all values.",
"Thus always achieves a recall of 1.",
"Our evaluation focuses on the label-wise F 1 score and its mean over all labels (macro-average), as well as its constituents precision and recall.",
"We report accuracy for completeness, though the heavily skewed label distribution makes it less suited.",
"The evaluation employs macro-averages for all metrics to give the same weight to all values.",
"Note that the 1-Baseline is especially strong for the F 1 -score since it always achieves a recall of 1.",
"By definition this baseline achieves at least as highand in most cases higherF 1 -scores than label-wise random guessing according to the label frequency.",
"For calculating the p -values when comparing approaches we employ the Wilcoxon signed rank significance test (Wilcox, 1996).",
"As detailed in Section 4, most arguments actually have both labels of the base dichotomies (Levels 4a and b) assigned to them, so we do not discuss these levels deeper here.",
"We first report results on the main part of our dataset (USA) as an experiment with matching training and test set.",
"The approaches are trained on the arguments from 60 unique conclusions (4240 arguments, ~85%), validated on 4 (277, ~5%), and tested on 7 (503, ~10%).",
"The conclusions were selected so that the different sets contain roughly the specified percentage of arguments.",
"Unfortunately, this process led to different value distributions in the different sets.",
"However, we deemed the conclusion-wise split more important for our experiments, as we want to test whether classifiers generalize to unseen conclusions.",
"Only one very rare value, be neat and tidy (0.2% of arguments in USA part), does not occur in the test set.",
"We thus exclude this value from evaluation.",
"Table 4 shows the results averaged over all labels.",
"BERT performs best according to F 1 -score for Level 1 ( p = 0 . 007 vs. SVM and p = 0 . 001 vs. 1-Baseline; n = 53 ) and for Level 2 ( p = 0 . 153 and p = 0 . 117 ; n = 20 ), but is worse than or at the baseline for higher levels ( n too small for test).",
"The comparably bad performance at higher levels is somewhat surprising, as it indicates that the categories at these higher levels are harder to separate by state-of-the-art language-based approaches.",
"Maybe hierarchical classification approaches (e.g., Babbar et al., 2013) can address this comparably weak performance by utilizing signals at each level of the hierarchy simultaneously.",
"Moreover, while a F 1 -score of 0.25 at Level 1 is encouraging for largely out-of-the-box approaches, clearly more work is needed.",
"Though a recall of 0.19 may be acceptable for applications that not rely on completeness, a precision of 0.40 is clearly too low for practical uses.",
"As Figure 3 shows, however, considerably higher F 1 -scores are reached by BERT for several values and value categories.",
"Specifically, the identification works exceptionally well for the value have good health (F 1 : 0.81) and the value-category security: personal (F 1 : 0.78) that contains it.",
"Other value categories with F 1 0 .",
"5 are universalism: concern , self-direction: action , achievement , and benevolence: caring .",
"The out-of-the-box models thus perform reasonably well for a few selected values and categories within the USA part.",
"Moreover, Figure 3 indicates some correlation of value frequency (grey bars) with classifier performance (colored lines).",
"One reason for this correlation could be that the dataset is too small for training reliable classifiers on the infrequent values.",
"Another reason might be that there is a more developed vocabulary concerning frequent values, making it easier for classifiers to identify these values.",
"The results are distributed alongside the dataset for follow-up analyses.",
"The non-US parts are considerably smaller and as a result ~28% of the values are lacking arguments (cf. Table 1).",
"However, the 1-Baseline is equally affected by this lack, thus providing for a comparison with the previous setting.",
"Table 5 shows the F 1 -scores for each test set averaged over all labels.",
"Once more, BERT performed best by the F 1 -score for Level 1 ( p = 0 . 006 vs. SVM and p < 0 . 001 vs. 1-Baseline; n = 169 ) and Level 2 (both p < 0 . 001 ; n = 74 ), whereas no significant difference was found for Level 3 ( p = 0 . 179 and p = 0 . 856 ; n = 16 ).",
"BERT and SVM perform on Level 1 and 2 similar across parts.",
"Maybe due to the clarity of its editored arguments, BERT performs best for India, despite the 1-Baseline performing best for USA.",
"These findings constitute first evidence that using a cross-cultural value taxonomy could result in robust methods for identifying the values behind arguments, even though more data and research seem necessary to get there.",
"A computational identification of human values behind arguments is a challenging but also necessary task.",
"With our research we contribute (1) a multilevel taxonomy with 54 values based on social science research, (2) a labeled dataset comprised of 5270 arguments from four sources, and (3) empirical analyses that cover multiple value granularity levels and compare different cultures.",
"Based on this work a logical next step are analyses that fully exploit relationships between labels.",
"Hierarchical classification approaches appear promising here (e.g., Babbar et al., 2013); learning rules for multi-label classification (e.g., Loza Menca and Jannsen, 2016) can provide insights into value-relationships.",
"include data from more cultures or territories, genres (e.g., blog posts), modalities (offline and spoken argumentation), and languages.",
"Probably an automated translation with manual assurance, as we did for the dataset's China part, may not be sufficient.",
"Though we optimized the annotation process, the argument acquisition requires a community effort to ensure the widest variety of data.",
"Employing annotators from different cultures is a requirement to analyze and mitigate potential sources of bias.",
"A subsequent step of ranking the annotated values by importance can be beneficial for certain use cases, especially when using the higher taxonomy levels.",
"Values are a major contributor to argument strength (Bench-Capon, 2021), and the large-scale mining from web data could improve all of argument categorization, assessment, and generation.",
"For example, matching values between arguments could be effective for both supporting and countering arguments.",
"Clearly expressing values behind arguments could avoid misunderstandings between humans and automated argumentation systems (Kiesel et al., 2021).",
"Similarly, an objective highlighting of common values behind arguments across political camps could be a step towards resolving seemingly fundamental disagreements.",
"Finally, the analysis of values in large-scale text corpora can also be of interest of social science scholars.",
"How are values expressed online?",
"Combined with Internet archive data, one could even analyse references to values over time.",
"We thus hope that this work can serve as a first step towards a better understanding of how the public sees and saw human values in everyday (digital) life.",
"Identifying values in argumentative texts could be used in various applications like argument faceted search, value-based argument generation, and value-based personality profiling.",
"In all these applications, an analysis of values has the opportunity to broaden the discussion (e.g., by present-4467 ing a diverse set of arguments covering a wide spectrum of personal values in search or inviting people with underrepresented value-systems to dis-cussions).",
"At the same time, a value-based analysis could risk to exclude people or arguments based on their values.",
"However, in other cases, for example hate speech, such an exclusion might be desirable.",
"While we tried to include texts from different cultures in our dataset, it is important to note that these samples are not representative of their respective culture, but intended as a benchmark for measuring classification robustness across sources.",
"A more significant community effort is needed to collect more solid datasets from a wider variety of sources.",
"To facilitate the inclusivity of different cultures, we adopted a personal value taxonomy that has been developed targeting universalism and tested across cultures.",
"However, in our study, the annotations have all been carried out by annotators from a western background.",
"Even though the value taxonomy strives for universalism, a potential risk is that an annotator from a specific culture might fail to correctly interpret the implied values in a text written by people from a different culture.",
"Finally, as mentioned in Section 4, we did not gather any personal information in our annotation studies, and we ensured that all our annotators get paid more than the minimum wage in the U.S. References Yamen Ajjour, Milad Alshomary, Henning Wachsmuth, and Benno Stein."
] | [
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"We tackle the problem of generating a pun sentence given a pair of homophones (e.g., died and dyed ).",
"Supervised text generation is inappropriate due to the lack of a large corpus of puns, and even if such a corpus existed, mimicry is at odds with generating novel content.",
"In this paper, we propose an unsupervised approach to pun generation using a corpus of unhumorous text and what we call the local-global surprisal principle : we posit that in a pun sentence, there is a strong association between the pun word (e.g., dyed ) and the distant context, as well as a strong association between the alternative word (e.g., died ) and the immediate context.",
"This contrast creates surprise and thus humor.",
"We instantiate this principle for pun generation in two ways:",
"(i) as a measure based on the ratio of probabilities under a language model, and",
"(ii) a retrieve-and-edit approach based on words suggested by a skip-gram model.",
"Human evaluation shows that our retrieve-and-edit approach generates puns successfully 31% of the time, tripling the success rate of a neural generation baseline.",
"Generating creative content is a key requirement in many natural language generation tasks such as poetry generation (Manurung et al., 2000; Ghazvininejad et al., 2016), story generation (Meehan, 1977; Peng et al., 2018; Fan et al., 2018; Yao et al., 2019), and social chat-bots (Weizenbaum, 1966; Hao et al., 2018).",
"In this paper, we explore creative generation with a focus on puns.",
"We follow the definition of puns in Aarons (2017); Miller et al. (2017): A pun is a form of wordplay in which one sign (e.g., a word or a phrase) suggests two or more meanings by exploiting polysemy, homonymy, or phonological Equal contribution. Yesterday I accidentally swallowed some food coloring. The doctor says I'm OK, but I feel like I've dyed a little inside. Alternative word: died. Pun word: dyed. Local context Global context Figure 1: An illustration of a homophonic pun. The pun word appears in the sentence, while the alternative word , which has the same pronunciation but different meaning, is implicated. The local context refers to the immediate words around the pun word, whereas the global context refers to the whole sentence. similarity to another sign, for an intended humorous or rhetorical effect.",
"We focus on a typical class of puns where the ambiguity comes from two (near) homophones.",
"Consider the example in Figure 1: Yesterday I accidentally swallowed some food coloring. The doctor says I'm OK, but I feel like I've dyed (died) a little inside. .",
"The pun word shown in the sentence ( dyed ) indicates one interpretation: the person is colored inside by food coloring.",
"On the other hand, an alternative word ( died ) is implied by the context for another interpretation: the person is sad due to the accident.",
"Current approaches to text generation require lots of training data, but there is no large corpus of puns.",
"Even such a corpus existed, learning the distribution of existing data and sampling from it is unlikely to lead to truly novel, creative sentences.",
"Creative composition requires deviating from the norm, whereas standard generation approaches seek to mimic the norm.",
"Recently, Yu et al. (2018) proposed an unsupervised approach that generates puns from a neural language model by jointly decoding conditioned on both the pun and the alternative words, thus injecting ambiguity to the output sentence.",
"However, Kao et al. (2015) showed that ambiguity alone is insufficient to bring humor; the two meanings must also be supported by distinct sets of words in the sentence.",
"Inspired by Kao et al. (2015), we propose a general principle for puns which we call local-global surprisal principle .",
"Our key observation is that the strength for the interpretation of the pun and the alternative words flips as one reads the sentence.",
"For example, in Figure 1, died is favored by the immediate (local) context, whereas dyed is favored by the global context (i.e. ...food color-ing... ).",
"Our surprisal principle posits that the pun word is much more surprising in the local context than in the global context, while the opposite is true for the alternative word.",
"We instantiate our local-global surprisal principle in two ways.",
"First, we develop a quantitative metric for surprise based on the conditional probabilities of the pun word and the alternative word given local and global contexts under a neural language model.",
"However, we find that this metric is not sufficient for generation.",
"We then develop an unsupervised approach to generate puns based on a retrieve-and-edit framework (Guu et al., 2018; Hashimoto et al., 2018) given an unhumorous corpus (Figure 2).",
"We call our system SURGEN (SURprisal-based pun GENeration).",
"We test our approach on 150 pun-alternative word pairs.",
"1 First, we show a strong correlation between our surprisal metric and funniness ratings from crowdworkers.",
"Second, human evaluation shows that our system generates puns successfully 31% of the time, compared to 9% of a neural generation baseline (Yu et al., 2018), and results in higher funniness scores.",
"We assume access to a large corpus of raw (unhu-morous) text.",
"Given a pun word w p (e.g., dyed ) and an alternative word w a (e.g., died ) which are (near) homophones, we aim to generate a list of pun sentences.",
"A pun sentence contains only the pun word w p , but both w p and w a should be evoked by the sentence.",
"pun word position, and are tickled by the relation between the pun word and the rest of the sentence.",
"Consider the following cloze test: Yesterday I accidentally swallowed some food coloring. The doctor says I'm OK, but I feel like I've a little inside. .",
"Most people would expect the word in the blank to be died whereas the actual word is dyed .",
"Locally, died a little inside is much more likely than dyed a little inside .",
"However, globally when looking back at the whole sentence, dyed is evoked by food coloring .",
"Formally, w p is more surprising relative to w a in the local context, but much less so in the global context.",
"We hypothesize that this contrast between local and global surprisal creates humor.",
"Let us try to formalize the local-global surprisal principle quantitatively.",
"To measure the amount of surprise due to seeing the pun word instead of the alternative word in a certain context c , we define surprisal S as the log-likelihood ratio of the two events: S ( c ) def = log p ( w p | c ) p ( w a | c ) = log p ( w p , c ) p ( w a , c ) .",
"We define the local surprisal to only consider context of a span around the pun word, and the global surprisal to consider context of the whole sentence.",
"Letting x 1 , . . . , x n be a sequence of tokens, and x p be the pun word w p , we have S local def = S ( x p d : p 1 , x p +1: p + d ) , (2) S global def = S ( x 1: p 1 , x p +1: n ) , (3) where d is the local window size.",
"For puns, both the local and global surprisal should be positive because they are unusual sentences by nature.",
"However, the global surprisal should be lower than the local surprisal due to topic words hinting at the pun word.",
"We use the following unified metric, local-global surprisal , to quantify whether a sentence is a pun: S ratio def = (cid:40) 1 S local < 0 or S global < 0 , S local /S global otherwise .",
"We hypothesize that larger S ratio is indicative of a good pun.",
"Note that this hypothesis is invalid when either S local or S global is negative, in which case we consider the sentences equally unfunny by setting S ratio to 1 .",
"cut.",
"The surprisal metric above can be used to assess whether a sentence is a pun, but to generate puns, we need a procedure that can ensure grammaticality.",
"Recall that the surprisal principle requires (1) a strong association between the alternative word and the local context; (2) a strong association between the pun word and the distant context; and (3) both words should be interpretable given local and global context to maintain ambiguity.",
"Our strategy is to model puns as deviations from normality.",
"Specifically, we mine seed sentences (sentences with the potential to be transformed into puns) from a large, generic corpus, and edit them to satisfy the three requirements above.",
"Figure 2 gives an overview of our approach.",
"Suppose we are generating a pun given w p = hare and w a = hair .",
"To reinforce w a = hair in the local context despite the appearance of hare , we retrieve sentences containing hair and replace occurrences of it with hare .",
"Here, the local context strongly favors the alternative word ( hair cut ) relative to the pun word ( hare cut ).",
"Next, to make the pun word hare more plausible, we insert a hare -related topic word ( greyhound ) near the beginning of the sentence.",
"In summary, we create local surprisal by putting w p in common contexts for w a , and connect w p to a distant topic word by substitution.",
"We describe each step in detail below.",
"Local surprisal.",
"The first step is to retrieve sentences containing w a .",
"A typical pattern of pun sentences is that the pun word only occurs once towards the end of the sentence, which separates local context from pun-related topics at the beginning.",
"Therefore, we retrieve sentences containing exactly one w a and rank them by the position of w a in the sentence (later is better).",
"Next, we replace w a in the retrieved sentence with w p .",
"The pun word usually fits in the context as it often has the same part-of-speech tag as the alternative word.",
"Thus the swap creates local surprisal by putting the pun word in an unusual but acceptable context.",
"We call this step RETRIEVE +S WAP , and use it as a baseline to generate puns.",
"Global surprisal.",
"While the pun word is locally unexpected, we need to foreshadow it.",
"This global association must not be too strong that it eliminates the ambiguity.",
"Therefore, we include a single topic word related to the pun word by replacing one word at the beginning of the seed sentence.",
"We see this simple structure in many human-written puns as well.",
"For example, Old butchers never die, they only meat their fate. , where pun words and their corresponding topic words are underlined.",
"We define relatedness between two words w i and w j based on a distant skip-gram model p ( w j | w i ) , where we train p to maximize p ( w j | w i ) for all w i , w j in the same sentence between d 1 to d 2 words apart.",
"Formally: i d 2 (cid:88) j = i d 1 log p ( w j | w i ) + i + d 2 (cid:88) j = i + d 1 log p ( w j | w i ) .",
"We take the topk predictions from p ( w | w p ) where w p is the pun word, as candidate topic words w to be further filtered next.",
"Type consistent constraint.",
"The replacement must maintain acceptability of the sentence.",
"For example, changing person to ship in Each person must pay their fare share does not make sense even though ship and fare are related.",
"Therefore, we restrict the deleted word in the seed sentence to nouns and pronouns, as verbs have more constraints on their arguments and replacing them is likely to result in unacceptable sentences.",
"In addition, we select candidate topic words that are type-consistent with the deleted word, e.g., replacing person with passenger as opposed to ship .",
"We define type-consistency (for nouns) based on WordNet path similarity.",
"2 Given two words, we get their synsets from WordNet constrained by their POS tags.",
"3 If the path similarity between any pair of senses from the two respective synsets is larger than a threshold, we consider the two words type-consistent.",
"In summary, the first noun or pronoun in the seed sentence is replaced by a type-consistent topic word.",
"We call this baseline RETRIEVE +S WAP +T OPIC .",
"Improve grammaticality.",
"Directly replacing a word with the topic word may result in ungrammatical sentences, e.g., replacing i with negotiator and getting negotiator am just a woman trying to peace her life back together. .",
"Therefore, we use a sequence-to-sequence model to smooth the edited sentence (RETRIEVE +S WAP +T OPIC +S MOOTHER ).",
"We smooth the sentence by deleting words around the topic word and train a model to fill in the blank.",
"The smoother is trained in a similar fashion to denoising autoencoders: we delete immediate neighbors of a word in a sentence, and ask the model to reconstruct the sentence by predicting missing neighbors.",
"A training example is shown below: Original: the man slowly walked towards the woods .",
"Input: <i> man </i> walked towards the woods .",
"Output: the man slowly During training, the word to delete is selected in the same way as selecting the word to replace in a seed sentence, i.e. nouns or pronouns at the beginning of a sentence.",
"At test time, the smoother is expected to fill in words to connect the topic word with the seed sentence in a grammatical way, e.g., the negotiator is just a woman trying to peace her life back together. (the part rewritten by the smoother is underlined).",
"We first evaluate how well our surprisal principle predicts the funniness of sentences perceived by humans (Section 4.2), and then compare our pun generation system and its varia-2",
"tions with a simple retrieval baseline and a neural generation model (Yu et al., 2018) (Section 4.3).",
"We show that the local-global surprisal scores strongly correlate with human ratings of funniness, and all of our systems outperform the baselines based on human evaluation.",
"In particular, RETRIEVE +S WAP +T OPIC (henceforth SURGEN ) achieves the highest success rate and average funniness score among all systems.",
"We use the pun dataset from 2017 SemEval task7 (Doogan et al., 2017).",
"The dataset contains 1099 human-written puns annotated with pun words and alternative words, from which we take 219 for development.",
"We use BookCorpus (Zhu et al., 2015) as the generic corpus for retrieval and training various components of our system.",
"We evaluate the surprisal principle by analyzing how well the local-global surprisal score (Equa-tion (4)) predicts funniness rated by humans.",
"We first give a brief overview of previous computational accounts of humor, and then analyze the correlation between each metric and human ratings.",
"Prior funniness metrics.",
"Kao et al. (2015) proposed two information-theoretic metrics: ambiguity of meanings and distinctiveness of supporting words.",
"Ambiguity says that the sentence should support both the pun meaning and the alternative meaning.",
"Distinctiveness further requires that the two meanings be supported by distinct sets of words.",
"In contrast, our metric based on the surprisal principle imposes additional requirements.",
"First, surprisal says that while both meanings are acceptable (indicating ambiguity), the pun meaning is unexpected based on the local context.",
"Second, the local-global surprisal contrast requires the pun word to be well supported in the global context.",
"Given the anomalous nature of puns, we also consider a metric for unusualness based on normalized log-probabilities under a language model (Pauls and Klein, 2012): Unusualness def = 1 n log (cid:32) p ( x 1 , . . . , x n ) / n (cid:89) i =1 p ( x i ) (cid:33) .",
"of puns.",
"Each sentence has a latent variable z { w p , w a } corresponding to the pun meaning and the alternative meaning.",
"Each word also has a latent meaning assignment variable f controlling whether it is generated from an unconditional unigram language model or a unigram model conditioned on z .",
"Ambiguity is defined as the entropy of the posterior distribution over z given all the words, and distinctiveness is defined as the symmetrized KL-divergence between distributions of the assignment variables given the pun meaning and the alternative meaning respectively.",
"The generative model relies on p ( x i | z ) , which Kao et al. (2015) estimates using human ratings of word relatedness.",
"We instead use the skip-gram model described in Section 3.3 as we are interested in a fully-automated system.",
"For local-global surprisal and unusualness, we estimate probabilities of text spans using a neural language model trained on WikiText-103 (Merity et al., 2016).",
"4 The local context window size ( d in Equation (2)) is set to",
"2. Human ratings of funniness.",
"Similar to Kao et al. (2015), to test whether a metric can differentiate puns from normal sentences, we collected ratings for both puns from the SemEval dataset and non-puns retrieved from the generic corpus containing either w p or w a .",
"To test the importance of 4 https://dl.fbaipublicfiles.com/ fairseq/models/wiki103_fconv_lm.tar.bz2 .",
"surprisal, we also included swap-puns where w p is replaced by w a , which results in sentences that are ambiguous but not necessarily surprising.",
"We collected all of our human ratings on Amazon Mechanical Turk (AMT).",
"Workers are asked to answer the question How funny is this sentence? on a scale from 1 (not at all) to 7 (ex-tremely).",
"We obtained funniness ratings on 130 sentences from the development set with 33 puns, 33 swap-puns, and 64 non-puns.",
"48 workers each read roughly 1020 sentences in random order, counterbalanced for sentence types of non-puns, swap-puns, and puns.",
"Each sentence is rated by 5 workers, and we removed 10 workers whose maximum Spearman correlation with other people rating the same sentence is lower than 0.2.",
"The average Spearman correlation among all the remaining workers (which captures inter-annotator agreement) is 0.3.",
"We z -scored the ratings of each worker for calibration and took the average z scored ratings of a sentence as its funniness score.",
"Table 1 shows the statistics of our annotated dataset (SEMEVAL ) and Kao et al. (2015)'s dataset (KAO ).",
"Note that the two datasets have different numbers and types of sentences, and the human ratings were collected separately.",
"As expected, puns are funnier than both swap-puns and non-puns.",
"Swap-puns are funnier than non-puns, possibly because they have inherit ambiguity brought by the RETRIEVE +S WAP operation.",
"Automatic metrics of funniness.",
"We analyze the following metrics: local-global surprisal ( S ratio ), ambiguity, distinctiveness, and unusualness, with respect to their correlation with human ratings of funniness.",
"For each metric, we standardized the scores and outliers beyond two standard deviations are set to +2 or 2 accordingly.",
"5 We then compute the metrics' Spearman correlation with human ratings.",
"On KAO , we directly took the ambiguity scores and distinctiveness scores from the original implementation which requires human-annotated word relatedness.",
"6 On SEMEVAL , we used our reimplemen-tion of Kao et al. (2015)'s algorithm but with the skip-gram model.",
"The results are shown in Table",
"2. For puns and non-puns, all metrics correlate strongly with human scores, indicating all of them are useful for pun detection.",
"For puns and swap-puns, only local-global surprisal ( S ratio ) has strong correlation, which shows that surprisal is important for characterizing puns.",
"Ambiguity and distinctiveness do not differentiate pun word from the alternative word, and unusualness only considers probability of the sentence with the pun word, thus they do not correlate as significantly as S ratio .",
"Within puns, only distinctiveness has significant correlation, whereas the other metrics are not fine-grained enough to differentiate good puns from mediocre ones.",
"Overall, no single metric is robust enough to score funniness across all types of sentences, which makes it hard to generate puns by optimizing automatic metrics of funniness directly.",
"There is slight inconsistency between results on SEMEVAL and KAO .",
"Specifically, for puns and non-puns, the distinctiveness metric shows a significant correlation with human ratings on KAO but not on SEMEVAL .",
"We hypothesize that it is mainly due to differences in the two corpora and noise from the skip-gram approximation.",
"For example, our dataset contains longer sentences with an average length of 20 words versus 11 words for KAO .",
"Further, Kao et al. (2015) used human annotation of word relatedness while we used the skip-gram model to estimate p ( x i | z ) .",
"5 Since both S ratio and distinctiveness are unbounded, bounding the values gives more reliable correlation results.",
"Systems.",
"We compare with a recent neural pun generator (Yu et al., 2018).",
"They proposed an unsupervised approach based on generic language models to generate homographic puns.",
"7 Their approach takes as input two senses of a target word (e.g., bat.n01 , bat.n02 from WordNet synsets), and decodes from both senses jointly by taking a product of the probabilities conditioned on the two senses respectively (e.g., bat.n01 and bat.n02 ), so that both senses are reflected in the output.",
"To ensure that the target word appears in the middle of a sentence, they decode backward from the target word towards the beginning and then decode forward to complete the sentence.",
"We adapted their method to generate homophonic puns by considering w p and w a as two input senses and decoding from the pun word.",
"We retrained their forward / backward language models on the same BookCorpus used for our system.",
"For comparison, we chose their best model (NEURALJOINTDECODER ), which mainly captures ambiguity in puns.",
"In addition, we include a retrieval baseline (RETRIEVE ) which simply retrieves sentences containing the pun word.",
"For our systems, we include the entire progression of methods described in Section 3 (RETRIEVE +S WAP , RETRIEVE +S WAP +T OPIC , and RETRIEVE +S WAP +T OPIC +S MOOTHER ).",
"Implementation details.",
"The key components of our systems include a retriever, a skip-gram 7 Sentences where the pun word and alternative word have the same written form (e.g., bat ) but different senses.",
"model for topic word prediction, a type consistency checker, and a neural smoother.",
"Given an alternative word, the retriever returned 500 candidates, among which we took the top 100 as seed sentences (Section 3.3 local surprisal).",
"For topic words, we took the top 100 words predicted by the skip-gram model and filtered them to ensure type consistency with the deleted word (Section 3.3 global surprisal).",
"The WordNet path similarity threshold for type consistency was set to 0.3.",
"The skip-gram model was trained on BookCorpus with d 1 =5 and d 2 =10 in Equation (5).",
"We set the word embedding size to 300 and trained for 15 epochs using Adam (Kingma and Ba, 2014) with a learning rate of 0.0001.",
"For the neural smoother, we trained a single-layer LSTM (512 hidden units) sequence-to-sequence model with attention on BookCorpus.",
"The model was trained for 50 epochs using AdaGrad (Duchi et al., 2010) with a learning rate of 0.01 and a dropout rate of 0.1.",
"Human evaluation.",
"We hired workers on AMT to rate outputs from all 5 systems together with expert-written puns from the SemEval pun dataset.",
"Each worker was shown a group of sentences generated by all systems (randomly shuffled) given the same pun word and alternative word pair.",
"Workers were asked to rate each sentence on three aspects: (1) success ( Is the sentence a pun? ), 8 (2) funniness ( How funny is the sentence? ), and (3) grammaticality ( How grammatical is the sentence? ).",
"Success was rated as yes/no, and funniness and grammaticality were rated on a scale from 1 (not at all) to 5 (very).",
"We also included a N/A choice (does not make sense) for funniness to exclude cases where the sentence are not understandable.",
"Workers were explicitly instructed to try their best to give different scores for sentences 8 They were shown the definition from Miller et al. (2017).",
"We evaluated 150 pun/alternative word pairs.",
"Each generated sentence was rated by 5 workers and their scores were averaged.",
"N/A ratings were excluded unless all ratings of a sentence were N/A, in which case we set its score to",
"0. We attracted 65, 93, 66 workers for the success, funniness, and grammaticality surveys respectively, and removed 3, 4, 4 workers because their maximum Spearman correlation with other workers was lower than 0.2.",
"We measure inter-annotator agreement using average Spearman correlation among all workers, and the average inter-annotator Spearman correlation for success, funniness, and grammaticality are 0.57, 0.36, and 0.32, respectively.",
"Table 3 shows the overall results.",
"All 3 of our systems outperform the baselines in terms of success rate and funniness.",
"More edits (i.e. swapping, inserting topic words) made the sentence less grammatical, but also much more like puns (higher success rate).",
"Interestingly, introducing the neural smoother did not improve grammaticality and hurt success rate slightly.",
"Manual inspection shows that ungrammaticality is often caused by improper topic word, thus fixing its neighboring words does not truly solve the problem.",
"For example, filling drum (related to lute ) in if that was it was likely that another body would turn up soon, because someone probably wouldn't want to share the lute. .",
"In addition, when the neural model is given a rare topic word, it tends to rewrite it to a common phrase instead, again showing that supervised learning is against the spirit of generating novel content.",
"For example, inserting gentlewoman to not allow me to ... produces these people did not allow me to ... .",
"Overall, our Method Example Rating",
"SURGEN performs the best and tripled the success rate of NEURALJOINTDECODER with improved funniness and grammaticality scores.",
"Nevertheless, there is still a significant gap between generated puns and expert-written puns across all aspects, indicating that pun generation remains an open challenge.",
"Table 4 shows the pairwise comparison results among our best model SURGEN , NEURALJOINTDECODER , and expert-written puns.",
"Given the outputs of two systems, we decided win/lose/tie by comparing the average scores of both outputs.",
"We see that SURGEN dominates NEURALJOINTDECODER with > 50% winning rate on funniness and grammaticality.",
"On success rate, the two methods have many ties since they both have relatively low success rate.",
"Our generated puns were rated funnier than expert-written puns around 10% of the time.",
"In Table 5, we show example outputs of our SURGEN , the NEURALJOINTDECODER baseline, and expert-written puns.",
"SURGEN sometimes generates creative puns that are rated even funnier than human-written puns (example 1).",
"In contrast, NEURALJOINTDECODER at best generates ambiguous sentences (example 2 and 3) and sometimes the sentences are ungrammatical (example 1) or hard to understand (example 4).",
"The examples also show the current limitation of SURGEN .",
"In example 3, it failed to realize that butter is not animate thus cannot want since our type consistency checker is very simple.",
"To gain further insights on the limitation of our system, we randomly sampled 50 unsuccessful generations (labeled by workers) to analyze the issues.",
"We characterized the issues into 6 non-exclusive categories: (1) weak association between the local context and w a (e.g., ...in the form of a batty (bat) ); (2) w p does not fit in the local context, often due to different POS tags of w a and w p (e.g., vibrate with a taxed (text) ); (3) the topic word is not related to w p (e.g., pagan vs fabrication ); (4) the topic word does not fit in its immediate context, often due to inconsistent types (e.g., slider won't go... ), (5) grammatical errors; and (6) fail to obtain seed sentences or topic words.",
"A breakdown of these errors is shown in Figure",
"3. The main issues lie in finding seed sentences that accommodate both the pun word and the topic word.",
"There is also room for improvement in predicting pun-related topic words.",
"Humor involves complex cognitive activities and many theories attempt to explain what might be considered humorous.",
"Among the leading theories, the incongruity theory (Tony, 2004) is most related to our surprisal principle.",
"The incongruity theory posits that humor is perceived at the moment of resolving the incongruity between two concepts, often involving unexpected shifts in perspectives.",
"Ginzburg et al. (2015) applied the incongruity theory to explain laughter in dialogues.",
"Prior work (Kao et al., 2015) on formalizing incongruity theory for puns focuses on ambiguity between two concepts and the heterogeneity nature of the ambiguity.",
"Our surprisal principle further formalizes unexpectedness (local surprisal) and incongruity resolution (global association).",
"The surprisal principle is also related to studies in psycholinguistics on the relation between surprisal and human comprehension (Levy, 2013; Levy and Gibson, 2013).",
"Our study suggests it could be a fruitful direction to formally study the relationship between human perception of surprisal and humor.",
"Early approaches to joke generation (Binsted, 1996; Ritchie, 2005) largely rely on templates for specific types of puns.",
"For example, JAPE (Binsted, 1996) generates noun phrase puns as question-answer pairs, e.g., What do you call a [murderer] with [fiber]? A [cereal] [killer]. Petrovic and Matthews (2013) fill in a joke template based on word similarity and uncommonness.",
"Similar to our editing approach, Valitutti et al. (2013) substitutes a word with a taboo word based on form similarity and local coherence to generate adult jokes.",
"Recently, Yu et al. (2018) generates puns from a generic neural language model by simultaneously conditioning on two meanings.",
"Most of these approaches leverage some assumptions of joke structures, e.g., incongruity, relations between words, and word types.",
"Our approach also relies on specific pun structures; we have proposed and operationalized a local-global surprisal principle for pun generation.",
"Our work is also built upon generic text generation techniques, in particular recent neural generation models.",
"Hashimoto et al. (2018) developed a retrieve-and-edit approach to improve both grammaticality and diversity of the generated text.",
"Shen et al. (2017); Fu et al. (2018) explored adversarial training to manipulate the style of a sentence.",
"Our neural smoother is also closely related to Li et al. (2018)'s delete-retrieve-edit approach to text style transfer.",
"Creative generation is more challenging as it requires both formality (e.g., grammaticality, rhythm, and rhyme) and novelty.",
"Therefore, many works (including us) impose strong constraints on the generative process, such as Petrovic and Matthews (2013); Valitutti et al. (2013) for joke generation, Ghazvininejad et al. (2016) for poetry generation, and Yao et al. (2019) for storytelling.",
"In this paper, we tackled pun generation by developing and exploring a local-global surprisal principle.",
"We show that a simple instantiation based on only a language model trained on non-humorous text is effective at detecting puns (though is not fine-grained enough to detect the degree of funniness within puns).",
"To generate puns, we operationalize the surprisal principle with a retrieve-and-edit framework to create contrast in the amount of surprise in local and global contexts.",
"While we improve beyond current techniques, we are still far from human-generated puns.",
"While we believe the local-global surprisal principle is a useful conceptual tool, the principle itself is not quite yet formalized in a robust enough way that can be be used both as a principle for evaluating sentences and can be directly optimized to generate puns.",
"A big challenge in humor, and more generally, creative text generation, is to capture the difference between creativity (novel but well-formed material) and nonsense (ill-formed material).",
"Language models conflate the two, so developing methods that are nuanced enough to recognize this difference is key to future progress.",
"This work was supported by the DARPA CwC program under ISI prime contract no.",
"W911NF-15-1-0543 and ARO prime contract no.",
"W911NF-15-1-0462.",
"We thank Abhinav Moudgil and Justine Kao for sharing their data and results.",
"We also thank members of the Stanford NLP group and USC Plus Lab for insightful discussions.",
"All code, data, and experiments for this paper are available on the CodaLab platform: https://worksheets.",
"codalab.org/worksheets/ 0x5a7d0fe35b144ad68998d74891a31ed6 ."
] | [
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"objective",
"method",
"other",
"other",
"abstain",
"other",
"abstain",
"objective",
"result",
"objective",
"result",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain"
] |
[
"Syntactic analysis plays an important role in semantic parsing, but the nature of this role remains a topic of ongoing debate.",
"The debate has been constrained by the scarcity of empirical comparative studies between syntactic and semantic schemes, which hinders the development of parsing methods informed by the details of target schemes and constructions.",
"We target this gap, and take Universal Dependencies (UD) and UCCA as a test case.",
"After abstracting away from differences of convention or formalism, we find that most content divergences can be ascribed to: (1) UCCA's distinction between a Scene and a non-Scene; (2) UCCA's distinction between primary relations, secondary ones and participants; (3) different treatment of multi-word expressions, and (4) different treatment of inter-clause linkage.",
"We further discuss the long tail of cases where the two schemes take markedly different approaches.",
"Finally, we show that the proposed comparison methodology can be used for fine-grained evaluation of UCCA parsing, highlighting both challenges and potential sources for improvement.",
"The substantial differences between the schemes suggest that semantic parsers are likely to benefit downstream text understanding applications beyond their syntactic counterparts.",
"Semantic representations hold promise due to their ability to transparently reflect distinctions relevant for text understanding applications.",
"For example, syntactic representations are usually sensitive to distinctions based on POS (part of speech), such as between compounds and possessives.",
"Semantic schemes are less likely to make this distinction since a possessive can often be paraphrased as a compound and vice versa (e.g., US presi-dent/president of the US), but may distinguish different senses of possessives (e.g., some of the presidents and inauguration of the presidents).",
"Nevertheless, little empirical study has been done on what distinguishes semantic schemes from syntactic ones, which are still in many cases the backbone of text understanding systems.",
"Such studies are essential for (1) determining whether and to what extent semantic methods should be adopted for text understanding applications; (2) defining better inductive biases for semantic parsers, and allowing better use of information encoded in syntax; (3) pointing at semantic distinctions unlikely to be resolved by syntax.",
"The importance of such an empirical study is emphasized by the ongoing discussion as to what role syntax should play in semantic parsing, if any (Swayamdipta et al., 2018; Strubell et al., 2018; He et al., 2018; Cai et al., 2018).",
"See 8.",
"This paper aims to address this gap, focusing on content differences.",
"As a test case, we compare relatively similar schemes (2): the syntactic Universal Dependencies (UD; Nivre et al., 2016), and the semantic Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013).",
"We UCCA-annotate the entire web reviews section of the UD EWT corpus (3), and develop a converter to assimilate UD and UCCA, which use formally different graphs (4).",
"We then align their nodes, and identify which UCCA categories match which UD relations, and which are unmatched.",
"1. UCCA's distinction between words and phrases that evoke Scenes (events) and ones that do not.",
"For example, eventive and non-eventive nouns are treated differently in UCCA, but similarly in UD.",
"2. UCCA's distinction between primary relations, secondary relations and Participants, in contrast to UD's core/non-core distinction.",
"3. Different treatment of multi-word expressions (MWEs), where UCCA has a stronger tendency to explicitly mark them.",
"4. UCCA's conflation of several syntactic realizations of inter-clause linkage, and disambiguation of other cases that UD treats similarly.",
"We show that the differences between the schemes are substantial, and suggest that UCCA parsing in particular and semantic parsing in general are likely to benefit downstream text understanding applications.",
"For example, only 72.9% of UCCA Participants are UD syntactic arguments, i.e., many semantic participants cannot be recovered from UD.",
"1 Our findings are relevant to other semantic representations, given their significant overlap in content (Abend and Rappoport, 2017).",
"A methodology for comparing syntactic and semantic treebanks can also support fine-grained error analysis of semantic parsers, as illustrated by Szubert et al. (2018) for AMR (Banarescu et al., 2013).",
"To demonstrate the utility of our comparison methodology, we perform fine-grained error analysis on UCCA parsing, according to UD relations (6).",
"Results highlight challenges for current parsing technology, and expose cases where UCCA parsers may benefit from modeling syntactic structure more directly.",
"2 2 Representations The conceptual and formal similarity between UD and UCCA can be traced back to their shared design principles: both are designed to be applicable across languages and domains, to enable rapid annotation and to support text understanding applications.",
"This section provides a brief introduction to each of the schemes, whereas the next sections discuss their content in further detail.",
"3 UCCA is a semantic annotation scheme rooted in typological and cognitive linguistic theory.",
"It aims to represent the main semantic phenomena in text, abstracting away from syntactic forms.",
"Shown to be preserved remarkably well across translations (Sulem et al., 2015), it has been applied to improve text simplification (Sulem 1 This excludes cases of shared argumenthood, which are partially covered by enhanced UD .",
"See 4.1.",
"2 Our conversion and analysis code is public available at https://github.com/danielhers/synsem .",
"3 See Supplementary Material for a definition of each category in both schemes, and their abbreviations.",
"et al., 2018b), and text-to-text generation evaluation (Birch et al., 2016; Choshen and Abend, 2018; Sulem et al., 2018a).",
"Formally, UCCA structures are directed acyclic graphs (DAGs) whose nodes (or units ) correspond either to words, or to elements viewed as a single entity according to some semantic or cognitive consideration.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Figure 1 shows a legend of UCCA abbreviations.",
"A Scene is UCCA's notion of an event or a frame, and is a description of a movement, an action or a state which persists in time.",
"Every Scene contains one primary relation, which can be either a Process or a State.",
"Scenes may contain any number of Participants, a category which also includes abstract participants and locations.",
"They may also contain temporal relations (Time), and secondary relations (Adverbials), which cover semantic distinctions such as manner, modality and aspect.",
"4 Scenes may be linked to one another in several ways.",
"First, a Scene can provide information about some entity, in which case it is marked as an Elaborator.",
"This often occurs in the case of participles or relative clauses.",
"For example, (child) who went to school is an Elaborator Scene in The child who went to school is John.",
"A Scene may also be a Participant in another Scene.",
"For example, John went to school in the sentence: He said John went to school.",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: When L [he arrives] H , [he will call them] H .",
"Non-Scene units are headed by units of the category Center, denoting the type of entity or thing described by the whole unit.",
"Elements in non-Scene units include Quantifiers (such as dozens of people) and Connectors (mostly coordinating conjunctions).",
"Other modifiers to the Center are marked as Elaborators.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges, 4 Despite the similar terminology, UCCA Adverbials are not necessarily adverbs syntactically.",
"which allow for a unit to participate in several super-ordinate relations.",
"See example in Figure",
"1. Primary edges form a tree, whereas remote edges (dashed) enable reentrancy, forming a DAG.",
"UD is a syntactic dependency scheme used in many languages, aiming for cross-linguistically consistent and coarse-grained treebank annotation.",
"Formally, UD uses bi-lexical trees, with edge labels representing syntactic relations.",
"One aspect of UD similar to UCCA is its preference of lexical (rather than functional) heads.",
"For example, in auxiliary verb constructions (e.g., is eating), UD marks the lexical verb ( eating ) as the head, while other dependency schemes may select the auxiliary is instead.",
"While the approaches are largely inter-translatable (Schwartz et al., 2012), lexical head schemes are more similar in form to semantic schemes, such as UCCA and semantic dependencies (Oepen et al., 2016).",
"Being a dependency representation, UD is structurally underspecified in an important way: it is not possible in UD to mark the distinction between an element modifying the head of the phrase and the same element modifying the whole phrase (de Marneffe and Nivre, 2019).",
"An example UD tree is given in Figure",
"2. UD relations will be written in typewriter font.",
"We annotate 723 English passages (3,813 sentences; 52,721 tokens), comprising the web reviews section of the English Web Treebank (EWT; Bies et al., 2012).",
"Text is annotated by two UCCA annotators according to v2.0 of the UCCA guidelines 5 and cross-reviewed.",
"As these sentences are 5 http://bit.ly/ucca_guidelines_v2 Train Dev Test # Passages 347 192 184 # Sentences 2,723 554 535 # Tokens 44,804 5,394 5,381 Table 2: Data split for the shared gold-standard corpus.",
"included in the UD English_EWT treebank, this is a shared gold-standard UCCA and UD annotated corpus.",
"6 We use the standard train/develop-ment/test split, shown in Table",
"2. 4 Comparison Methodology To facilitate comparison between UCCA and UD, we first assimilate the graphs by abstracting away from formalism differences, obtaining a similar graph format for both schemes.",
"We then match pairs of nodes in the converted UD and UCCA trees if they share all terminals in their yields.",
"UD annotates bi-lexical dependency trees, while UCCA graphs contain non-terminal nodes.",
"In 4.1, we outline the unified DAG converter by Hershcovich et al. (2018a,b), 7 which we use to reach a common format.",
"In 4.2, we describe a number of extensions to the converter, which abstract away from further non-content differences.",
"Figure 3 presents the same tree from Figure 2 after conversion.",
"The converter adds one pre-terminal per token, and attaches them according to the original dependency tree: traversing it from the root, for each head it creates a non-terminal parent with the edge label head , and adds the dependents as children of the created non-terminal.",
"Relation subtypes are stripped, leaving only universal relations.",
"For example, the language-specific definite article label det:def is replaced by the universal det .",
"Reentrancies.",
"Remote edges in UCCA enable reentrancy, forming a DAG together with primary edges.",
"UD allows reentrancy when including enhanced dependencies (Schuster and Manning, 2016), 8 which form (bi-lexical) graphs, representing phenomena such as predicate ellipsis (e.g., gapping), and shared arguments due to coordination, control, raising and relative clauses.",
"UCCA is more inclusive in its use of remote edges, and accounts for the entire class of implicit arguments termed Constructional Null Instantiation in FrameNet (Ruppenhofer et al., 2016).",
"For example, in The Pentagon is bypassing official US intelligence channels [...] in order to create strife (from EWT), remote edges mark Pentagon as a shared argument of bypassing and create .",
"Another example is if you call for an appointment [...] so you can then make one, where a remote edge in UCCA indicates that one refers to appointment .",
"Neither is covered by enhanced UD.",
"In order to facilitate comparison, we remove remote edges and enhanced dependencies in the conversion process.",
"We thus compare basic UD and UCCA trees, deferring a comparison of UCCA and enhanced UD to future work.",
"Unanalyzable units.",
"An unanalyzable phrase is represented in UCCA as a single unit covering multiple terminals.",
"In multi-word expressions (MWEs) in UD, each word after the first is attached to the previous word, with the flat , fixed or goeswith relations (depending on whether the expression is grammaticalized, or split by error).",
"We remove edges of these relations and join the corresponding pre-terminals to one unit.",
"Promotion of conjunctions.",
"The basic conversion generally preserves terminal yields: the set of terminals spanned by a non-terminal is the same as the original dependency yield of its head terminal (e.g., in Figure 3, the yield of the non-terminal headed by graduation is After graduation, the same as that of graduation in Figure 2).",
"Since UD attaches subordinating and coordinating conjunctions to the subsequent conjunct, this results in them being positioned in the same conjunct they relate (e.g., After will be included in 8 https://universaldependencies.org/u/ overview/enhanced-syntax.html the first conjunct in After arriving home, John went to sleep; and will be included in the second conjunct in John and Mary).",
"In contrast, UCCA places conjunctions as siblings to their conjuncts (e.g., [After] [arriving home], [John went to sleep] and [John] [and] [Mary]).",
"To abstract away from these convention differences, we place coordinating and subordinating conjunctions (i.e., cc -labeled units, and mark labeled units with an advcl head such as when , if , after ) as siblings of their conjuncts.",
"Using the shared format, we turn to analyzing the content differences between UCCA and UD.",
"9 5.1 Confusion Matrix Table 3 presents the confusion matrix of categories between the converted UD and UCCA, calculated over all sentences in the training and development sets of the shared EWT reviews corpus.",
"We leave the test set out of this evaluation to avoid contamination for future parsing experiments.",
"In case of multiple UCCA units with the same terminal yield (i.e., units with a single non-remote child), we take the top category only, to avoid double-counting.",
"Excluding punctuation, this results in 60,434 yields in UCCA and 58,992 in UD.",
"Of these, 52,280 are common, meaning that a UCCA parser developed this way would get a very high F1 score of 87.6%, if it is provided with the gold UCCA label for every converted edge.",
"Some yields still have more than one UCCA category associated with them, due to edges with multiple categories ( A (cid:12)(cid:12) P and A (cid:12)(cid:12) S ).",
"For presentation reasons, 0.15% of the UCCA units in the data are not presented here, as they belong to rare (< 0.1%) multiple-category combinations.",
"Only 82.6% of UD's syntactic arguments ( ccomp , csubj , iobj , nsubj , obj , obl and xcomp ) are UCCA Participants, and only 72.9% of the Participants are syntactic argumentsa difference stemming from the Scene/non-Scene (5.2) and argument/adjunct (5.3) distinctions.",
"Moreover, if we identify predicates as words having at least one argument and Scenes as units with at least one Participant, then only 92.1% of UD's predicates correspond to Scenes (many are secondary relations within one scene), and only 80% 9 See http://bit.ly/uccaud for a detailed explanation of each example in this section.",
"of Scenes correspond to predicates (e.g., eventive nouns, which are not syntactic predicates).",
"Examining the head row in Table 3 allows us to contrast the schemes' notions of a head.",
"head labeled units have at least one dependent in UD, or are single-clause sentences (technically, they are non-terminals added by the converter).",
"Of them, 75.7% correspond to Processes, States, Parallel Scenes or Centers, which are UCCA's notions of semantic heads, and 11.6% are left unmatched, mostly due to MWEs analyzed in UD but not in UCCA (5.4).",
"Another source of unmatched units is inter-Scene linkage, which tends to be flatter in UCCA (5.5).",
"The rest are mostly due to head swap (e.g., all of Dallas, where all is a Quantifier of Dallas in UCCA, but the head in UD).",
"In the following subsections, we review the main content differences between the schemes, as reflected in the confusion matrix, and categorize them according to the UD relations involved.",
"UCCA distinguishes between Scenes and non-Scenes.",
"This distinction crosses UD categories, as a Scene can be evoked by a verb, an eventive or stative noun ( negotiation , fatigue ), an adjective or even a preposition (this is for John).",
"ex-cellent).",
"However, when describing a Scene, the subject may be a Process/State (e.g., but service is very poor).",
"Some wh-pronouns are the subjects or objects of a relative clause, but are Linkers or Relators, depending on whether they link Scenes or non-Scenes, respectively.",
"For example, who in overall, Joe is a happy camper who has found a great spot is an nsubj , but a Linker.",
"Other arguments are Adverbials or Time (see 5.3), and some do not match any UCCA unit, especially when they are parts of MWEs (see 5.4).",
"Adjectival modifiers are Adverbials when modifying Scenes ( romantic dinner), States when describing non-Scenes ( beautiful hotel) or when semantically predicative (such a convenient lo-cation), or Elaborators where defining inherent properties of non-Scenes ( medical school).",
"Nominal and clausal modifiers.",
"Most are Participants or Elaborators, depending on whether they modify a Scene (e.g., discount on services and our decision to buy when we did are Participants, but my car's gears and brakes and Some of the younger kids that work there are Elabo-rators).",
"Unmatched acl are often free relative clauses (e.g., in the prices were worth what I got , what is the obj of worth but a Participant of I got ).",
"Case markers.",
"While mostly Relators modifying non-Scenes (e.g., the team at Bradley Chevron), some case markers are Linkers linking Scenes together (e.g., very informative web-site with a lot of good work).",
"Others are Elaborators (e.g., over a year) or States when used as the main relation in verbless or copula clauses (e.g., it is right on Wisconsin Ave).",
"Coordination.",
"Coordinating conjunctions ( cc ) are Connectors where they coordinate non-Scenes (e.g., Mercedes and Dan) or Linkers where they coordinate Scenes (e.g., outdated but not bad).",
"Similarly, conjuncts and list elements ( conj , list ) may be Parallel Scenes (H), or Centers when they are non-Scenes.",
"10 Determiners.",
"Articles are Functions, but determiners modifying non-Scenes are Elaborators (e.g., I will never recommend this gym to any woman).",
"Where modifying Scenes (mostly negation) they are marked as Adverbials.",
"For example, no feathers in stock, what a mistake, and the rear window had some leakage are all Adverbials.",
"UD distinguishes core arguments, adverb modifiers, and obliques (in English UD, the latter mostly correspond to prepositional dependents of verbs).",
"UCCA distinguishes Participants, including locations and abstract entities, from secondary relations (Adverbials), which cover manner, aspect and modality.",
"Adverbials can be verbs (e.g., begin , fail ), prepositional phrases ( with disrespect ), as well as modals, adjectives and adverbs.",
"Adverbs and obliques.",
"Most UD adverb modifiers are Adverbials (e.g., I sometimes go), but they may be Participants, mostly in the case of semantic arguments describing location (e.g., here ).",
"Obliques may be Participants (e.g., wait for Nick ), Time (e.g., for over 7 years ) or Adverbialsmostly manner adjuncts ( by far ).",
"Clausal arguments are Participant Scenes (e.g., it was great that they did not charge a service fee , did not really know what I wanted or I asked them to change it ).",
"However, when serving as complements to a secondary verb, they will not match any unit in UCCA, as it places secondary verbs on the same level as their primary relation.",
"For example, to pay is an xcomp in they have to pay, while the UCCA structure is flat: have 10 While in UD the conjunction cc is attached to the following conjunct, in UCCA coordination is a flat structure.",
"Auxiliary verbs are Functions (e.g., do not for-get), or Adverbials when they are modals (e.g., you can graduate).",
"Semi-modals in UD are treated as clausal heads, which take a clausal complement.",
"For example, in able to do well, UD treats able as the head, which takes do well as an xcomp .",
"UCCA, on the other hand, treats it as an Adverbial, creating a mismatch for xcomp .",
"UD and UCCA treat MWEs differently.",
"In UD they include names, compounds and grammaticalized fixed expressions.",
"UCCA treats names and grammaticalized MWEs as unanalyzable units, but also a range of semantically opaque constructions (e.g., light verbs and idioms).",
"On the other hand, compounds are not necessarily unanalyzable in UCCA, especially if compositional.",
"Compounds.",
"English compounds are mostly nominal, and are a very heterogeneous category.",
"Most compounds correspond to Elaborators (e.g., industry standard), or Elaborator Scenes (e.g., out-of-place flat-screen TV), and many are unanalyzable expressions (e.g., mark up).",
"Where the head noun evokes a Scene, the dependent is often a Participant (e.g., food craving), but can also be an Adverbial (e.g., first time buyers) depending on its semantic category.",
"Other compounds in UD are phrasal verbs (e.g., figure out , cleaned up ), which UCCA treats as unanalyzable (leading to unmatched units).",
"Core arguments.",
"A significant number of subjects and objects are left unmatched as they form parts of MWEs marked in UCCA as unanalyzable.",
"UD annotates MWEs involving a verb and its argument(s) just like any other clause, and therefore lacks this semantic content.",
"Examples include light verbs (e.g., give a try ), idioms (bites the dust ), and figures of speech (e.g., when it comes to, offer a taste (of)), all are UCCA units.",
"Complex prepositions.",
"Some complex prepositions (e.g., according to or on top of ), not encoded as MWEs in UD, are unanalyzable in UCCA.",
"Head selection.",
"UCCA tends to flatten linkage, where UD, as a dependency scheme, selects a head and dependent per relation.",
"This yields scope ambiguities for coordination, an inherently flat structure.",
"For instance, unique gifts and cards is ambiguous in UD as to whether unique applies only to gifts or to the whole phraseboth annotated as in Figure 4a.",
"UCCA, allowing non-terminal nodes, disambiguates this case (Figure 4b).",
"Clausal dependents.",
"UD categorizes clause linkage into coordination, subordination, argumenthood (complementation), and parataxis.",
"UCCA distinguishes argumenthood but conflates the others into the Parallel Scene category.",
"For example, We called few companies before we decided to hire them and Check out The Willow Lounge, you'll be happy are Parallel Scenes.",
"Note that while in UD, mark (e.g., before ) is attached to the dependent adverbial clause, a UCCA Linker lies outside the linked Scenes.",
"To reduce unmatched advcl instances, this convention difference is fixed by the converter (4.2).",
"Many remaining unmatched units are due to conjunctions we could not reliably raise.",
"For instance, the marker to introducing an xcomp is ambiguous between Linker (purposive to ) and Function (infinitive marker).",
"Similarly, wh-pronouns may be Linkers (he was willing to budge a little on the price which means a lot to me), but have other uses in questions and free relative clauses.",
"Other mismatches result from the long tail of differences in how UD and UCCA construe linkage.",
"Consider the sentence in Figure 5.",
"While moment is an oblique argument of know in UD, From the moment is analyzed as a Linker in UCCA.",
"Appositions in UD always follow the modified noun, but named entities in them are UCCA Centers, regardless of position (e.g., in its sister store Peking Garden, the UD head its sister store is an Elaborator, while Peking Garden is the Center).",
"Copulas.",
"UCCA distinguishes copular constructions expressing identity (e.g., This is the original Ham's restaurant) where the copula is annotated as State, and cases of attribution (e.g., Mercedes and Dan are very thorough) or location (e.g., Excellent chefs are in the kitchen), where the copula is a Function.",
"Discourse markers and interjections.",
"Units relating a Scene to the speech event or to the speaker's opinion are Ground (e.g., no , Warwick in New Jersey and Please visit my website).",
"On the other hand, discourse elements that relate one Scene to another are Linkers (e.g., anyway ).",
"Vocatives are both Ground and Participants if they participate in the Scene and are the party addressed.",
"For example, Mark in Thanks Mark is both the person addressed and the one thanked.",
"11 Expletives and subjects.",
"Expletives are generally Functions, but some instances of it and that are analyzed as nsubj in UD and as Function in UCCA (e.g., it 's like driving a new car).",
"Excluded relations.",
"We exclude the following UD labels, as they are irrelevant to our evaluation: root (always matches the entire sentence); punct (punctuation is ignored in UCCA evalu-ation); dep (unspecified dependency), orphan (used for gapping, which is represented using remote edges in UCCAsee 4.1); fixed , flat and goeswith (correspond to parts of unanalyzable units in UCCA, and so do not represent units on their ownsee 4.2); reparandum and dislocated (too rare in EWT).",
"In 5 we used our comparison methodology, consisting of the conversion to a shared format and matching units by terminal yield, to compare gold-standard UD and UCCA.",
"In this section we ap-11 The A (cid:12)(cid:12) G column is omitted from Table 3 as this category combination occurs in only 0.02% of edges in the corpus.",
"Data.",
"In addition to the UCCA EWT data (3), we use the reviews section of the UD v2.3 English_EWT treebank (Nivre et al., 2018), 12 annotated over the exact same sentences.",
"We additionally use UDPipe v1.2 (Straka et al., 2016; Straka and Strakov, 2017), trained on English_EWT, 13 for feature extraction.",
"We apply the extended converter to UD as before (4.2).",
"Parser.",
"We train TUPA v1.3 (Hershcovich et al., 2017, 2018a) on the UCCA EWT data, with the standard train/development/test split.",
"TUPA uses POS tags and syntactic dependencies as features.",
"We experiment both with using gold UD for feature extraction, and with using UDPipe outputs.",
"Evaluation by gold-standard UD.",
"UCCA evaluation is generally carried out by considering a predicted unit as correct if there is a gold unit that matches it in terminal yield and labels.",
"Precision, Recall and F-score (F1) are computed accordingly.",
"For the fine-grained analysis, we split the gold-standard, predicted and matched UCCA units according to the labels of the UD relations whose dependents have the same terminal yield (if any).",
"Table 4 presents TUPA's scores on the UCCA EWT development and test sets.",
"Surprisingly, using UDPipe for feature extraction results in better scores than gold syntactic tags and dependencies.",
"Table 5 shows fine-grained evaluation by UD relations.",
"TUPA does best on auxiliaries and determiners, despite the heterogeneity of corresponding 12 https://hdl.handle.net/11234/1-2895 13 https://hdl.handle.net/11234/1-2898 UCCA categories (see Table 3), possibly by making lexical distinctions (e.g., modals and auxiliary verbs are both UD auxiliaries, but are annotated as Adverbials and Functions, respectively).",
"Copulas and coordinating conjunctions pose a more difficult distinction, since the same lexical items may have different categories depending on the context: State/Function for copulas, due to the distinction between identity and attribution, and Connector/Linker for conjunctions, due to the distinction between Scenes and non-Scenes.",
"However, the reviews domain imposes a strong prior for both (Function and Linker, respectively), which TUPA learns successfully.",
"Inter-clause linkage ( conj , advcl , xcomp , ccomp , parataxis , acl and csubj ) is a common source of error for TUPA.",
"Although the match between UCCA and UD is not perfect in these cases, it is overall better than TUPA's unlabeled performance, despite using gold-standard syntactic features.",
"Our results thus suggest that encoding syntax more directly, perhaps using syntactic scaffolding (Swayamdipta et al., 2018) or guided attention (Strubell et al., 2018), may assist in predicting unit boundaries.",
"However, TUPA often succeeds at making distinctions that are not even encoded in UD.",
"For example, it does reasonably well (71%) on distinguishing between noun modifiers of Scene-evoking nouns (Participants) and modifiers of other nouns (Elaborators), surpassing a majority baseline based on the UD relation (51%).",
"Lexical resources that distinguish eventive and relational nouns from concrete nouns may allow improving it even further.",
"In the similar case of compounds, lexical resources for light verbs and idioms may increase performance.",
"NLP tasks often require semantic distinctions that are difficult to extract from syntactic representations.",
"Consider the example after graduation, John moved to Paris again.",
"While graduation evokes a Scene (Figure 1), in UD it is an oblique modifier of moved , just like Paris is (Figure 2).",
"The Scene/non-Scene distinction (5.2) would assist structural text simplification systems in paraphrasing this sentence to two sentences, each one containing one Scene (Sulem et al., 2018a).",
"Another example is machine translation translating the same sentence into Hebrew, which does not have a word for graduation , would re-aux det cop cc expl iobj nsubj case list advmod amod nummod mark vocative compound obj nmod conj advcl obl xcomp discourse ccomp parataxis appos acl csubj",
"quire a clause to convey the same meaning.",
"The mapping would therefore be more direct using a semantic representation, and we would benefit from breaking the utterance into two Scenes.",
"The use of syntactic parsing as a proxy for semantic structure has a long tradition in NLP.",
"Indeed, semantic parsers have leveraged syntax for output space pruning (Xue and Palmer, 2004), syntactic features (Gildea and Jurafsky, 2002; Hershcovich et al., 2017), joint modeling (Surdeanu et al., 2008; Hajic et al., 2009), and multi-task learning (Swayamdipta et al., 2016, 2018; Hershcovich et al., 2018a).",
"Empirical comparison between syntactic and semantic schemes, however, is still scarce.",
"Rudinger and Van Durme (2014) mapped Stanford Dependencies (precursor to UD) to Hobbsian Logical Form, identifying semantic gaps in the former.",
"PredPatt (White et al., 2016), a framework for extracting predicate-argument structures from UD, was evaluated by Zhang et al. (2017) on a large set of converted PropBank annotations.",
"Szubert et al. (2018) proposed a method for aligning AMR and UD subgraphs, finding that 97% of AMR edges are evoked by one or more words or syntactic relations.",
"Damonte et al. (2017) refined AMR evaluation by UD labels, similar to our fine-grained evaluation of UCCA parsing.",
"Some syntactic representation approaches, notably CCG (Steedman, 2000), directly reflect the underlying semantics, and have been used to transduce semantic forms using rule-based systems (Basile et al., 2012).",
"A related line of work tackles the transduction of syntactic structures into semantic ones.",
"Reddy et al. (2016) proposed a rule-based method for converting UD to logical forms.",
"Stanovsky et al. (2016) converted Stanford dependency trees into proposition structures (PROPS), abstracting away from some syntactic detail.",
"We evaluated the similarities and divergences in the content encoded by UD and UCCA.",
"We annotated the reviews section of the English Web Treebank with UCCA, and used an automated methodology to evaluate how well the two schemes align, abstracting away from differences of mere convention.",
"We provided a detailed picture of the content differences between the schemes.",
"Notably, we quantified the differences between the notions of syntactic and semantic heads and arguments, finding substantial divergence between them.",
"Our findings highlight the potential utility of using semantic parsers for text understanding applications (over their syntactic counterparts), but also expose challenges semantic parsers must address, and potential approaches for addressing them.",
"This work was supported by the Israel Science Foundation (grant No. 929/17), and by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister's Office.",
"We thank Jakob Prange, Nathan Schneider and the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"method",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"result",
"result",
"other",
"other"
] |
[
"Previous works that integrated news articles to better process stock prices used a variety of neural networks to predict price movements.",
"The textual and price information were both encoded in the neural network, and it is therefore difficult to apply this approach in situations other than the original framework of the notoriously hard problem of price prediction.",
"In contrast, this paper presents a method to encode the influence of news articles through a vector representation of stocks called a stock embedding .",
"The stock embedding is acquired with a deep learning framework using both news articles and price history.",
"Because the embedding takes the operational form of a vector, it is applicable to other financial problems besides price prediction.",
"As one example application, we show the results of portfolio optimization using Reuters & Bloomberg headlines, producing a capital gain 2.8 times larger than that obtained with a baseline method using only stock price data.",
"This suggests that the proposed stock embedding can leverage textual financial semantics to solve financial prediction problems.",
"News articles influence the dynamics of financial markets.",
"For example, after the release of breaking news, the share prices of related stocks are often observed to move.",
"This suggests the possibility of using natural language processing (NLP) to aid traders by analyzing this influence between news article texts and prices.",
"Recent studies (Ding et al., 2015; Hu et al., 2018; Chen et al., 2019; Yang et al., 2018) have indeed reported that news articles can be leveraged to improve the accuracy of predicting stock price movements.",
"These previous works have used deep learning techniques.",
"They train neural networks with article texts and financial market prices, attempting to improve price prediction.",
"In these approaches, the overall mutual effect between texts and prices is distributed over the neural network, which makes it difficult to extract this effect and apply it to tasks other than price prediction.",
"Therefore, we take a new approach by explicitly describing this mutual effect in terms of a vector.",
"A stock is represented by a vector so that its inner product with an embedding of a text produces a larger value when the text is more related to the stock.",
"In the rest of the paper, we call this vector a stock embedding .",
"The names of stocks, such as AAPL (the ticker symbol for Apple Inc. ), typically appear in a financial news article text.",
"Because these names form part of the text, usual NLP techniques can be applied to acquire an embedding of a stock.",
"Such general textual embedding, however, does not incorporate the financial reality of stock price changes.",
"Hence, the proposed stock embedding represents the price as well as the semantics of the text, as we acquire it by training on both news articles and stock prices.",
"Precisely, our stock embedding is trained through a binary classification problem, namely, whether a stock price goes up or down in comparison with the previous day's price.",
"As a result, an acquired stock embedding captures the relation between a stock name and a news article even when the article has no direct mention of the stock.",
"Our stock embedding can be considered as one technique to specialize, or ground, a symbol that has a practical reality outside of text.",
"Furthermore, two major advantages come with the vector form of our stock embedding.",
"The first is that the training can be effectuated for all stocks at once, rather than stock by stock.",
"This is an important advantage to alleviate data sparseness and prevent overfitting, as discussed in Section 4.",
"The second advantage lies in the portability of a vector.",
"In contrast to previous works, in which stock-specific information was distributed among the parameters of a neural network, a vector representing all the characteristics of a stock is much easier to extract and apply to other uses besides price prediction.",
"Hence, this paper shows an example of portfolio optimization, one of the most important applications in finance.",
"To the best of our knowledge, this is the first report of incorporating NLP into modern portfolio theory (Markowitz, 1952).",
"Our method differs from previous works that used NLP to enhance investment strategies.",
"Many previous works focused on stock price forecasting only (Ding et al., 2015; Hu et al., 2018) and did not attempt to apply the learned results to other financial tasks.",
"Another previous work (Song et al., 2017) investigated portfolios with texts.",
"It obtained a ranking of stocks from texts by using a neural network technique and then evaluated investment in the highest/lowest ranked stocks.",
"That work was not based on modern portfolio theory, however, nor did it integrate price and text data.",
"In contrast, our method uses NLP in addition to price data to acquire a general representation in the form of an embedding applicable to different targets.",
"In our experiments, a portfolio generated using stock embeddings achieved an annual gain 2.8 times greater than that of a portfolio generated with price data only.",
"This provides evidence that the stock embedding well encodes both text and price information.",
"The main idea of this article is based on important techniques of NLP.",
"It is now common to represent discrete entities in natural language by continuous vectors.",
"These vectors are called embed-dings and usually obtained from neural network models.",
"Examples include the word embedding (Mikolov et al., 2013), phrase embedding (Zhang et al., 2014), sentence embedding (Lin et al., 2017), and event embedding (Ding et al., 2016).",
"One advantage of these continuous representations is that the geometry of an embedding system contains rich semantic information, as has been discovered at many levels (Mikolov et al., 2013; Reif et al., 2019).",
"The acquisition of stock embeddings in this paper is based on the original idea developed for linguistic entities.",
"Here, we extend the idea further so that the embeddings reflect the reality of a stock market outside text.",
"A stock embedding is trained using the attention mechanism (Bahdanau et al., 2015), which is another current NLP technique.",
"The basic idea of the original attention mechanism is to assign higher weights to more relevant word vectors and make the weights adaptive to different contexts.",
"Our framework is based on the classification task for text-driven stock price movement, which has been studied intensely as follows.",
"Early research on exploiting financial news articles for better stock price prediction dates back to Ou and Penman (1989), in which financial indicators were extracted manually from financial statements.",
"Later, in Fung et al. (2002), NLP methods were adopted for automatic text feature extraction.",
"Since the 2000s, Twitter and other text-centered social media platforms have become essential sources of financial signals.",
"Bollen et al. (2011) found evidence for causality between the public mood extracted from tweets and the Dow Jones Industrial Average index.",
"In Nguyen et al. (2015), post texts collected from the Yahoo! Finance Message Board were used to predict whether the prices of 18 US stocks would rise or drop on the next trading day.",
"As deep learning methods for NLP have become more common, many papers have reported the use of neural networks for text-driven stock classification (or prediction) tasks.",
"Ding et al. (2015) proposed an event embedding to represent a news headline with a vector and used a convolutional neural network for classification.",
"In that work, all the event embeddings of news articles published on the same day were simply averaged to summarize that day's market information.",
"Hu et al. (2018) was among the first works that applied the attention mechanism to the task of news-driven stock price movement classification.",
"They developed a dual-level attention framework, in which news articles were assigned different weights depending on the output of a logistic regression component with a bias term, so that the most informative news articles were highlighted.",
"The method of weighting news articles in this paper is similar to that previous work.",
"The stock-specific information in Hu et al. (2018) was encoded in the neural network, however, making it focused on the price prediction task.",
"In contrast, we represent such stock-specific information by the stock embedding, i.e., a vector, which is easy to interpret geometrically and extract for other applications.",
"For one such application, we evaluated our stock embedding in terms of portfolio optimization.",
"To the best of our knowledge, this is the first paper applying NLP techniques to modern portfolio theory.",
"We use the mean-variance minimization portfolio model (introduced in Section 7) proposed in Markowitz (1952), which directly led to the capital asset pricing model (Sharpe, 1964).",
"In this paper, the stock embedding is trained with a deep learning system through binary classification of price movements.",
"Let p t be the stock price on day t , and let y t be the desired output of the system.",
"Here, t { 1 , 2 , . . . , T } , and T is the number of trading days in the considered time period.",
"The binary classification problem indicates that y t is classified in the following way: y t = (cid:26) 1 , p t p t 1 0 , p t < p t 1 .",
"To train such a deep learning system, news articles are used as the input.",
"In this work, news articles are considered daily (i.e., treated in units of days).",
"We denote the set of articles published on day t by N t , and each article by n i N t , with i = 1 , . . . , | N t | .",
"This paper considers a time window around day t , denoted as [ t d 1 , t + d 2 ] given two constants d 1 , d 2 .",
"Let N [ t d 1 ,t + d 2 ] be the set of news articles published within the time window.",
"When d 2 = 1 , indicating the use of articles until day t 1 , the task is called prediction , as the training does not use any articles published on or after day t .",
"In general, this task is acknowledged as very hard (Fama, 1970; Basu, 1977; Timmermann and Granger, 2004) according to the efficient-market hypothesis (EMH) 1 , and such prediction provides only a limited gain, if any.",
"Note that previous NLP studies concerning stock prices were all aimed at this hard problem (Ding et al., 2015; Hu et al., 2018; Xu and Cohen, 2018; Yang et al., 2018).",
"On the other hand, when d 2 0 , this paper refers to the task as classification .",
"The performance on classification shows how well the model understands a news article.",
"Because the prediction problem is too hard and offers limited gain, as proven by many previous works, our target lies in classification .",
"The aims are thus to acquire embeddings that are highly sensitive to textual context and to 1 According to the EMH, in an efficient market, prices reflect the true values of assets by having incorporated all past information, so nobody can predict the price.",
"The EMH is hypothesized to hold but has also attracted criticism.",
"apply them to tasks other than price prediction.",
"Therefore, in this paper, we set d 1 = 4 and d 2 = 0 .",
"Let the classification model be represented by a mapping f .",
"The probability that the price of a stock j , where j = 1 , . . . , J , goes up on day t is y jt = f (cid:0) N [ t 4 ,t ] (cid:1) .",
"(2) In the process of model optimization, the model should reduce the mean cross-entropy loss between every true label y jt and its corresponding estimate y jt , as follows: l j = 1 TT (cid:88) t =1 (cid:16) y jt log y jt + (1 y jt ) log(1 y jt ) (cid:17) .",
"This function describes the loss for only one stock, but a stock market includes multiple stocks.",
"This work considers all stocks in a market equally important.",
"The overall loss function is therefore a simple average of the cross-entropy loss for all stocks, i.e., l = ( (cid:80) Jj =1 l j ) /J .",
"Let s j represent a stock embedding, where j = 1 , 2 , ..., J .",
"This is initialized as a random vector and then trained via a neural model to obtain s j , whose inner product with the embedding of a related text becomes large.",
"This section describes the proposed method to acquire stock embeddings by building up a neural network for price movement classification.",
"The neural network consists of two parts: a text feature distiller and a price movement classifier.",
"Text feature distiller.",
"The text feature distiller first converts every news article n i into a pair of vectors ( n Ki , n Vi ) corresponding to key and value vectors, respectively.",
"Let N Kt = { n Ki } t , N Vt = { n Vi } t denote the sets of key/value vectors of the articles released on day t .",
"Such dual-vector representation of a text was proposed and adopted successfully in Miller et al. (2016) and Daniluk et al. (2017).",
"The pair of vectors contains the semantic information of the article text at two different levels.",
"Roughly, n Ki represents the article at the word level, whereas n Vi represents it at the context level.",
"The text feature distiller calculates the attention score for every article i published on day t .",
"The attention score between article i and stock j is given by the inner product of the two vectors n K i and s j : score i,j = n Ki s j .",
"Note that there are other possible definitions of this inner product, such as the cosine similarity or a generalized inner product using some arbitrary function.",
"Because this work focuses on the most basic capability of the stock embedding, it uses the most basic inner product (i.e., the dot product).",
"Let ji denote the weight put on news article i with respect to stock j , to classify whether the stock price will go up or down.",
"With the use of score i,j defined above, ji is given as the following: ji exp(score i,j ) (cid:80) i (cid:48) exp(score i (cid:48) ,j ) .",
"By using ji as the weights put on news articles, we compute the market status of stock j on day t as the following, which is the input to the classifier: m jt = (cid:88) n Vi N Vt ji n Vi .",
"(3) Therefore, m jt is computed over a set of n Vi , representing the context of texts on day t .",
"ji is thus acquired as the softmax function of the scores across the articles released on the same day.",
"We call m jt the market vector , to which we will return in Section 6.",
"Price movement classifier.",
"The input of the price movement classifier is a sequence of vectors, M j [ t 4 ,t ] = [ m jt 4 , m jt 3 , . . . , m jt ] , with respect to stock j .",
"This is processed by a recurrent neural network using a bidirectional gated recurrent unit (Bi-GRU).",
"The choice of a Bi-GRU was made by considering the model capacity and training diffi-culty.",
"The classifier estimates the probability y jt : h Ot = GRU( M j [ t 4 ,t ] ) , y jt = (MLP( h Ot )) , (4) where ( x ) = 1 / (1 + exp( x )) , and GRU and MLP stand for the Bi-GRU and a multilayer per-ceptron, respectively.",
"An optional re-weighting technique over the GRU's output vectors h O ( [ t 4 , t ]) (Hu et al., 2018) can be applied.",
"In this case, after the first line of formula (4), the re-weighting is conducted in the following way: h O = t (cid:80) = t 4 h O , and this h O becomes the input of the second line instead of h Ot .",
"Here, , the weight for day , decides how much one day is considered in the classification.",
"In our implementation, = exp( v t h O ) (cid:80) 0 = 4 exp( v h Ot + ) , where the vector v differentiates the temporal effects of news articles released around day t .",
"v Figure 1: Illustration of the classifier sharing mechanism across stocks on day t :",
"Such formulation of neural network training has the advantage of avoiding overfitting.",
"A common problem in the task of stock movement classification or prediction is small sample sizes, especially when adopting units of days.",
"In contrast, the proposed model does not suffer from small sample sizes, because the price movement classifier can be trained across all the stocks by sharing one classifier, rather than by generating one classifier for each individual stock like in many previous works (Ding et al., 2015; Hu et al., 2018; Xu and Cohen, 2018).",
"We call this a classifier sharing mechanism.",
"Figure 1 illustrates the difference between models with and without classifier sharing.",
"The upper figure",
"(a) shows the conventional setting without sharing, in which J classifiers are generated, one for each stock.",
"In contrast, the lower figure",
"(b) shows one classifier generated for all stocks.",
"This setting enables learning of the correlation among stocks, in addition to avoiding overfitting and the problem of small sample sizes.",
"Specifically, the classifier is shared across all stocks, thus achieving a sample size about 50 to 100 times larger.",
"We used two news article datasets to build stock embeddings: the Wall Street Journal (WSJ, in the",
"following) dataset and the Reuters & Bloomberg (R&B) dataset 2 , as listed in Table 1.",
"WSJ contains around 400,000 news headlines published across 16 years, whereas R&B contains around 550,000 articles across 7 years.",
"Compared with R&B, WSJ has a relatively more uniform distribution of news articles across time (see the standard deviations listed in parentheses in the fifth column of Table 1).",
"Following previous studies reporting that the main body of a news text produces irrelevant noise (Ding et al., 2015), we extracted only the headlines in both datasets.",
"As for the stocks, we selected two subsets of the stocks in Standard & Poor's S&P 500 index, one for each of the WSJ and R&B datasets.",
"These subsets consisted only of stocks that were mentioned in no fewer than 100 different news articles, so that mutual effects between the articles and the price history would appear pretty often in the texts.",
"More importantly, this ensured that keyword retrieval-based methods that locate related articles by explicit keyword matching could be applied for comparison.",
"For the WSJ and R&B datasets, the subsets had 89 and 50 stocks, respectively.",
"All other stocks were removed from consideration.",
"As seen in formula (2), the input for the neural network is N [ t 4 ,t ] , the set of articles around day t , and the output is y jt .",
"The label y jt is the binarized price movement of stock j at day t .",
"This is measured by the log-return between two subsequent days: log return jt = log p jt log p jt 1 .",
"The distribution of log-returns is typically bell shaped with a center close to 0, as also mentioned in Hu et al. (2018).",
"The return values of the days were separated into three categories of negative, ambiguous, and positive by the use of thresholds 3 .",
"Here, ambiguous refers to those samples close to 0.0, which were removed.",
"Thus, by using only the clearly negative and positive days, the returns were binarized.",
"2 This dataset was made open source in Ding et al. (2015).",
"3 We used the thresholds [ 0 . 0053 , 0 . 0079] for the WSJ dataset and [ 0 . 0059 , 0 . 0068] for the R&B dataset.",
"The margins were asymmetric around 0 because these datasets had slightly more rising days than declining ones.",
"Through such filtering, the number of samples for each stock became about two-thirds of the number of all trading days, or around 4 2600 and 1200 samples for each stock, for the WSJ and R&B datasets, respectively.",
"The Adam optimizer (Kingma and Ba, 2015) was used with cosine annealing (Loshchilov and Hutter, 2017) to train the neural network.",
"The initial learning rate was set to 5e-4.",
"The mini-batch size was 64.",
"We stopped the training process when the value of the loss function with respect to the validation set no longer dropped, and then we measured the accuracy on the test set for evaluation.",
"As for the dual-vector representation of news article texts, introduced in Section 4, the key and value vectors were calculated as described here.",
"The key vector n Ki is defined as follows 5 by using word embeddings w k acquired by Word2vec: n Ki = (cid:80) k k w k (cid:80) k k , where k = TF k IDF k is the TFIDF (Manning and Schutze, 2001) score of word k .",
"The dimension of n Ki equals that of the Word2vec model trained on the news corpus, i.e., 64 in our implementation.",
"As for the value vector n Vi , we used vectors acquired through a BERT encoder 6 .",
"We used the pretrained BERT model available from Google Research , with 24 layers trained on an uncased corpus.",
"This model outputs vectors of 1024 dimensions, but we reduced the dimensions to 256 by using principal component analysis (PCA), to suppress the number of parameters in the neural network.",
"Along with the effect of the stock embedding, the effect of the dual-vector representation (DVR) is also 4 The number of samples after filtering differed slightly among stocks, because the distribution of log-returns differed, while the same thresholds were used.",
"5 We chose this method after examining several options, including the smooth inverse frequency (SIF) (Arora et al., 2017), TFIDF-weighted word embeddings, and several other methods.",
"We found that TFIDF-weighted word embeddings with Word2vec worked best.",
"6 BERT (Bidirectional Encoder Representations from Transformer) is a neural network model (Devlin et al., 2019) that can be used to encode text into vectors with a fixed dimension.",
"The basic effect of the stock embedding was evaluated through the performance on the price movement classification task, as stated in Section 3.",
"The whole dataset described in Section 5.1 was randomly divided into nonoverlapping train-ing/validation/test sets in the ratios of 0.6/0.2/0.2.",
"The training/validation/test parts did not share any samples from the same dates.",
"Every method below was tested for 10 different random divisions, and the average performance is reported here.",
"The proposed model is abbreviated as WA+CS+DVR , for weighted average with classifier sharing and dual-vector representation .",
"For an ablation test, four models were considered, which varied the market vector of the day (defined in formula (3) in Section 4 (Ding et al., 2015)) and were with or without the dual-vector representation and classifier sharing (Ding et al., 2015; Hu et al., 2018; Xu and Cohen, 2018; Yang et al., 2018), as follows.",
"representations of the same day is taken as the market vector of the day, as proposed by Ding",
"et al. (2015).",
"Weighted average (WA): As stated in formula (3), the market vector of the day is averaged by using the weights from the stock-text inner products, as proposed in Hu et al. (2018).",
"Note again that their work did not apply classifier sharing but instead produced one classifier for each stock, nor did it adopt the dual-vector representation.",
"WA + classifier sharing (CS): This refers to WA with classifier sharing across stocks.",
"This variant does not adopt the dual-vector representation, i.e., n Ki is set equal to n Vi for every news article i .",
"Thus, the same BERT text embedding is used for both n Ki and n Vi .",
"WA + dual-vector representation (DVR): This refers to WA with the dual-vector representation of news texts.",
"This variant does not adopt classifier sharing.",
"Furthermore, to examine the effect of the data size, we tested different dataset portions: 1 year, 3 years, and the whole dataset.",
"Therefore, the experimental variants involved five methods (four comparison + our proposal) and three data sizes, or a total of 15 experiments.",
"Figure 2 summarizes the complete experimental results.",
"The uppermost bar of each bar group, in red, corresponds to our model with classifier sharing (CS) and the dual-vector representation (DVR).",
"The other bars, in orange, blue, purple, and green, correspond to the four ablation variants.",
"The ablation datasets with only 1-year data contained around 150 training samples and were too small for most variants to work well, yet our proposed model, WA+CS+DVR , could still obtain positive results (classification accuracy over 50%).",
"With the 3-year datasets, our WA+CS+DVR model widened the performance gap, whereas the simple average and weighted average models still failed to work better than random guessing.",
"These results show the superiority of our model in handling the overfitting problem with small datasets.",
"Finally, the significant differences between WA+CS+DVR (in red) and WA+CS (in blue) and between WA+DVR (in orange) and WA (in purple) strongly supported the advantage of adopting the dual-vector representation (DVR), especially when classifier sharing was combined.",
"Thus far, the evaluation on classification has shown the capability of our framework in understanding news articles.",
"For financial applications, however, the task must be in the form of prediction ; that is, it must produce some gain ahead of the time when a news article is published.",
"As one such predictive example, we present portfolio optimization, one of the most important financial tasks, and we show how our stock embedding can be applied to it.",
"A portfolio is essentially a set of weights assigned to stocks, representing the proportions of capital invested in them.",
"Intuitively, a portfolio bears a bigger risk if a large proportion is invested in two highly positively correlated stocks, rather than two uncorrelated or negatively correlated stocks.",
"Based on this idea, the mean-variance minimization model in Markowitz (1952) is formulated as follows: min w risk = w T w (5a) subject to w T r = E, (5b) Figure 2: Mean classification accuracy percentages (with SD in parentheses) over 10 replications.",
"0 w j 1 j = 1 , ..., J, (5d) where is the risk matrix; w is the vector of investment weights; r is a vector such that r j equals the mean historic return of stock j ; 1 is a vector of ones; and E , the expected portfolio return, is a parameter decided by an investor's preference.",
"Note that higher E usually means higher risk born by the investor.",
"In the original model of Markowitz, is the covariance matrix of the historic return time series of stocks, ij = Cov( { r i } t , { r j } t ) ( i, j { 1 , ..., J } ) .",
"According to Markowitz (1952), the solution of this optimization problem, which can be obtained via quadratic programming, gives the portfolio with the smallest risk for an expected overall return E .",
"Using the covariance matrix as the risk matrix is limited, however, for two reasons.",
"First, the overwhelming noise in price movements prevents accurate estimation of the covariance.",
"More importantly, it ignores the events described in news articles that indeed cause price movements.",
"On the other hand, the stock embeddings built here provide much abundant textual information for defining .",
"Concretely, i,j = cos ( s i , s j ) .",
"This should work because the stock embedding reflects a stock's responsiveness to a certain class of news events.",
"In other words, close stock embeddings indicate a correlated response pattern to an event described in news articles.",
"Stock embeddings capture this correlation much better than the covariance matrix does, and this correlation is what a good portfolio relies on.",
"By solving the same optimization problem but with a different matrix , we get another vector of investment ratios, w , with respect to the stocks.",
"By virtually investing according to w and observing the result within a certain period, can be evaluated.",
"For each of the WSJ and R&B datasets, we ran one investment simulation for various definitions of , as follows.",
"S&P 500 index: As a market baseline, we used an S&P 500 index portfolio, in which all 505 stocks in the index were considered and the investment weight w j was in proportion to the market capitalization of stock j .",
"The price history of the portfolio was provided by Dow Jones .",
"This method did not use to form the portfolio.",
"S&P 89*/50*: This approach was the same as above but with the set of stocks reduced to those tested in our work, as explained in Section 5.1: 89 stocks for the WSJ dataset 7 , and 50 for the R&B dataset.",
"Covariance matrix of historic stock returns: was the covariance matrix as originally proposed by Markowitz.",
"Word2vec-general: (text only) was the cosine matrix of the word embeddings trained on general corpora (fastText word embeddings (Bojanowski et al., 2017) were used in our experiments).",
"For each stock, we used the word embedding of its ticker symbol, e.g., the word embedding of AAPL for Apple Inc .",
"Word2vec-news: (text only) was the cosine matrix of the word embedding vectors trained 7 The S&P 89* portfolio was evaluated during the period of 2001 to 2016.",
"The market capitalization history of the stocks before the year 2005 is not available, so the record was estimated for this missing period.",
"First, the number of shares outstanding was extrapolated from the data of 2005-2016, in which the values were pretty stable during the whole period.",
"The market capitalization was then acquired by multiplying the price by the shares outstanding.",
"on news text corpora.",
"We used the full text of the R&B dataset for training, in which all mentions of a stock in the text were replaced by the stock's ticker symbol.",
"Covariance stock embedding: (text and price) was the result of element-wise multiplication of the covariance matrix and the cosine matrix of the stock embeddings.",
"Weighted BERT: (text only) was the cosine matrix of stock vectors acquired as follows, where the BERT-based text representation n Vi was used.",
"For a stock j , the vector was obtained as a weighted average of n Vi for which the text mentioned the stock or company.",
"Here, the weight of article i was defined as follows: i (# of mentions of j in i ) (# of mentions of all stocks in i ) .",
"The portfolio evaluation was conducted in a yearly setting, as illustrated in Figure 3.",
"At the beginning of each year, given some expected gain E , the portfolio was computed by using all news articles and historic prices until the end of the previous year.",
"In other words, for each year, the training set in the experiment consisted of samples strictly earlier than those constituting the test set.",
"Therefore, the evaluation was conducted in a prediction setting.",
"Then, investments were made according to the yearly renewed portfolio as in Figure 3; that is, capital was allocated to stocks according to w .",
"The realized annual gain of the portfolio followed this equation: annual gain = J (cid:88) j =1 w j ( p j end of year p j begin of year 1) , where w j is the proportion of investment in stock j , and p j is the price of j .",
"In this way, for each of the WSJ and R&B, we obtained results over 16 and 7 years, respectively.",
"For different expected gains E { 0 .",
"05 , 0 .",
"06 , ..., 0 .",
"29 } , which cover typical cases in real-world portfolio construction, the average annual gain was computed.",
"Figure 4 shows the experimental results.",
"The upper graphs show the annual gain with respect to different values of E (horizontal axes) for",
"(a) the WSJ and",
"(b) the R&B, averaged over years.",
"Every curve corresponds to a different definition of .",
"It can be seen that the proposed stock embedding method outperformed the other methods, except for larger E with WSJ 8 .",
"Especially for the R&B dataset, stock embedding greatly outperformed all other methods at all E .",
"The lower bar graph summarizes the overall aggregate gain for each method.",
"The values in the bars indicate the average realized annual gains, while those above the bars are the ratios of the gains in comparison with that of the standard covariance method (in blue).",
"The leftmost two bars in each bar graph show the gains of the S&P 500 portfolio and the S&P 89*/50* portfolio, respectively.",
"As described above, the S&P 500 portfolio consisted of an average of around 500 stocks traded in the US, while the S&P 89*/50* portfolio, which was calculated with the same method but on a smaller set of stocks (89 for the WSJ, and 50 for the R&B), achieved higher gains than its S&P 500 sibling did.",
"The values of the S&P portfolios generally went up during the periods of both datasets, and therefore, the gains were positive.",
"The dashed horizontal line in each bar graph indicates the result for the standard covariance method as a baseline.",
"Its gains were only 12.5% and 12.7% for the WSJ and R&B, respectively, but with stock embeddings, the gains increased to 17.2% and 35.5%, or 1.37 and 2.80 times greater than the baseline results, respectively.",
"This per-8 Our method did not perform well only for large E .",
"The mean-variance minimization model has been reported to become unstable under the two conditions of large E and low overall market gain (Dai and Wang, 2019).",
"The return of the WSJ period (2000-2015) was lower than that of the R&B period (2006-2013), and therefore, these two conditions were more likely to be met for WSJ.",
"The results for the method that integrated the covariance matrix and stock embedding (in green) did not much outperform the baselines.",
"A possible reason is that the stock embedding had already integrated the price information.",
"As for the other variants based on pure text (in purple, orange, and brown), the results improved slightly.",
"Among them, weighted BERT outperformed the other methods for both datasets.",
"This indicates the potential of BERT and other recent neural language models for portfolio optimization.",
"This paper has proposed the idea of a stock embedding , a vector representation of a stock in a financial market.",
"A method was formulated to acquire such vectors from stock price history and news articles by using a neural network framework.",
"In the framework, the stock embedding detects news articles that are related to the stock, which is the essence of the proposed method.",
"We trained stock embeddings for the task of binary classification of stock price movements on two different datasets, the WSJ and R&B.",
"The improvements in classification accuracy with our framework, due to the classifier sharing and dual-vector text representation proposed in this paper, implied that the stock embeddings successfully incorporated market knowledge from both the news articles and price history.",
"Because the stock embedding is a vector that can be separated from the other components of the classification model, it can be applied to other tasks besides price movement classification.",
"As an example, we showed the use of stock embeddings in a portfolio optimization task by replacing the risk matrix in the portfolio objective function with a cosine matrix of stock embeddings.",
"In investment simulations on the R&B dataset, our stock embedding method generated 2.80 times the annual return obtained using the covariance matrix of the historic return series.",
"This significant gain suggests further potential of our stock embedding for modeling the correlation among stocks in a financial market, and for further applications, such as risk control and asset pricing.",
"We sincerely thank the anonymous reviewers for their comments.",
"This paper was supported by the Research Institute of Science and Technology for Society (HITE 17942497), and by the University of Tokyo Gap Fund Program.",
"The paper reflects the view of the authors only."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"method",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"result",
"result",
"other",
"other",
"other"
] |
[
"In classification, there are usually some good features that are indicative of class labels.",
"For example, in sentiment classification, words like good and nice are indicative of the positive sentiment and words like bad and terrible are indicative of the negative sentiment.",
"However, there are also many common features ( e.g. , words) that are not indicative of any specific class ( e.g. , voice and screen, which are common to both sentiment classes and are not discriminative for classification).",
"Although deep learning has made significant progresses in generating discriminative features through its powerful representation learning, we believe there is still room for improvement.",
"In this paper, we propose a novel angle to further improve this representation learning, i.e., feature projection .",
"This method projects existing features into the orthogonal space of the common features.",
"The resulting projection is thus perpendicular to the common features and more discriminative for classification.",
"We apply this new method to improve CNN, RNN, Transformer, and Bert based text classification and obtain markedly better results.",
"Text classification is an important task in natural language processing and text mining.",
"It has a very wide range of applications, such as sentiment classification (Liu, 2012), question classification (Li and Roth, 2002), and deception detection (Liu, 2012; Feng et al., 2012).",
"In recent years, deep learning models have been shown to outperform traditional classification methods (Kim, 2014; Iyyer et al., 2015; Tang et al., 2015; Dai and Le, 2015; Jin et al., 2016; Joulin et al., 2017; Shen et al., 2018).",
"Given the input document, the system applies a mapping function ( e.g. , averaging or summation, a Equal Contribution.",
"convolution neural network (CNN), recurrent neural network (RNN), and so on) to learn a dense representation of the document and then uses this representation to perform the final classification.",
"Representation learning is one of the key strengthes of deep learning.",
"In this paper, we propose to further improve the representation learning, i.e., to make the representation more discriminative for classification.",
"Note that throughout the paper we will use sentence sentiment classification as an example to explain different ideas, but in our experiments, non-sentiment classification datasets are also used to show the generality of the proposed method.",
"For text classification, many neural networks and embedding techniques have been devised and applied, e.g., RNN, CNN, Transformer (Vaswani et al., 2017) and Bert (Devlin et al., 2018).",
"For example, RNN can model the whole sentence and also capture the long-term dependencies within the sentence.",
"However, modeling the entire sequence may neglect some key local contexts that are important for classification (Yin et al., 2017).",
"CNN is able to extract more local and position-invariant features (Scherer et al., 2010; Collobert et al., 2011).",
"However, these methods may not give enough weights to some special or discriminative words.",
"To solve this problem, the attention mechanism was introduced.",
"For example, by exploiting attention, Transformer and Bert (which maximizes Transformer's ability to extract sentence semantic information) can achieve even better results than both CNN and RNN on many tasks.",
"We will see some other related methods to produce effective representations in the related work section.",
"Although the existing models are already able to produce excellent representations, we will show that these representations can still be improved.",
"This paper explores in an entirely different direction, i.e., feature projection.",
"In a typical sentence or document, there are usually some words or features that are correlated with some class labels, but there are also many other common features that cannot distinguish different classes.",
"For example, in sentiment classification, words like Good and Nice are indicative of the positive sentiment, and words like Bad and Terrible are indicative of the negative sentiment.",
"Words like picture , price , and battery are not indicative of any sentiment, i.e., they are not discriminative.",
"However, they may still interfere the representation learning to produce sub-optimal feature representations for the final classification.",
"Even though the attention mechanism can alleviate this problem to some extent by giving higher weights to words associated with classes and lower weights to the other words that are not indicative of any specific classes.",
"However, due to the idiosyncrasy of the data and the inaccuracy of the attention mechanism, the problem remains.",
"In this paper, we propose a novel feature projection method to improve feature representation learning to make it more discriminative for classification.",
"The proposed method is called Feature Purification Network ( FP-Net ).",
"Specifically, FP-Net consists of two sub-networks, a common feature learning network referred to as the C-net and a projection network referred to as the P-net.",
"C-net uses a Gradient Reverse Layer ( GRL ) (Ganin and Lempitsky, 2014; Zhang et al., 2019) to extract common features (cid:126)b ( i.e. , invariant features (Zhang et al., 2019)) that are shared by multiple classes and have little discriminative power for classification.",
"At the same time, P-net uses a traditional feature extractor to learn the feature vector (cid:126)a for the input sentence or document.",
"Then the feature (or representation) vector (cid:126)a is projected onto the vector of the common features (cid:126)b ( i.e. , vector (cid:126)b ) to get a projection vector (cid:126)c , which represents the input sentence's own common features.",
"Then, we project the feature vector (cid:126)a onto the orthogonal direction of the vector of the common features (cid:126)c to produce the final purer features for classification.",
"It is quite clear and intuitive that this orthogonal project is to get rid of the common features and make the system focusing on those discriminative features only.",
"We will explain why two projections are used in Section",
"3. In summary, the key contribution of this paper is the improvement to representation learning through feature vector projection.",
"To the best of our knowledge, this is the first such technique.",
"Specifically, an Orthogonal Projection Layer ( OPL ) is proposed to map the features obtained by a traditional feature extractor to the classification-specific semantic space, which is orthogonal to the common features such that we obtain a more relevant and discriminative (or purer) feature representation from the original document for classification.",
"Extensive experiments have been conducted to verify the effectiveness of the proposed method on two sentence sentiment classification datasets MR and SST2, a natural language inference dataset SNLI, and a question classification dataset TREC.",
"The results show that the proposed method can improve the classification accuracy of RNN, CNN, Transformer and Bert based classification methods markedly, which shows that feature projection is a highly promising direction to explore.",
"It is well known that one of the key strengths of deep neural networks is their superb ability to learn highly effective representations or features from the raw data, which have been shown to be very successful for all kinds of applications including natural language processing tasks such as text classification (Jin et al., 2016), machine translation (Bah-danau et al., 2014; Vaswani et al., 2017) dialogue (Wang and Jiang, 2016), etc.",
"Previous work on learning representations broadly falls in two main categories: supervised and unsupervised methods.",
"Our work focuses on improving the representation of text for supervised classification.",
"Supervised methods: These methods improve data utilization efficiency and discriminative feature distillation as they can obtain better training signals from the labeled data.",
"Sequence models such as recurrent neural networks (RNN), Long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent unit (GRU) (Chung et al., 2014) networks are suitable for handling text because a sentence or document can be regarded as a sequence.",
"Therefore, a large amount of work based on RNN and its variants for feature extraction and downstream tasks has been done (Tang et al., 2015; Wang and Tian, 2016; He et al., 2016).",
"Unlike RNN's sequence modeling approach, CNN (Convolutional Neural Network) uses different sized windows to capture local correlations and position-invariant information (Kim, 2014; Conneau et al., 2016; Lai et al., 2015; Xiao and Cho, 2016; Wang, 2018).",
"A common approach of these methods is to create an instance-level representation by using the final hidden state of the RNN, the maximum (or average) pooling of the RNN hidden states, or convolutional n-grams.",
"However, they may ignore the importance of special words that are highly discriminative for classification.",
"After Bahdanau et al. (2014) introduced the attention mechanism in machine translation, attention mechanism has been exploited in many natural language processing tasks including text classification to solve the above problem.",
"For example, Yang et al. (2016) introduced attention as an integral part of the model for text classification.",
"Lin et al. (2017) proposed a new model for extracting interpretable sentence embeddings using self-attention.",
"Ma et al. (2018) showed that attention mechanism is also effective for sentiment classification.",
"Vaswani et al. (2017) further illustrated that they can get a stronger sentence-level representation by stacking multiple blocks of self-attention.",
"Bert (Devlin et al., 2018) combines Transformer and a large corpus to produce an even more complete and better sentence-level representation.",
"Some other studies improved the representation of sentences from the perspective of language structures ( e.g. , parse trees and dependency trees) (Tai et al., 2015; Mou et al., 2015).",
"Subramanian et al. (2018) utilized a single multi-task framework to combine the benefits of diverse sentence representation learning objectives.",
"However, to the best of our knowledge, these existing works and others have not used feature projection to improve (or purify) representations for supervised learning, which we believe is a promising direction to explore.",
"Unsupervised methods: These methods utilize a large unlabeled text corpus to learn word representations which are then composed into sentence and document representations.",
"For example, Kiros et al. (2015) constructed sentence representations by trying to reconstruct neighbouring sentences.",
"Hill et al. (2016) proposed a log-linear bag-of-words models for sentence representation.",
"The unsupervised smooth inverse frequency method in (Etha-yarajh, 2018) built on this but used a weighted average of word embeddings and principal component removal for sentence representations.",
"Our work is again clearly different from these unsupervised methods as the proposed method works under supervised learning.",
"Existing unsupervised methods also do not use feature projection.",
"Some other works have also been done for semi-supervised representation learning (Kevin Clark, 2018) and transfer learning (Tamaazousti et al., 2018).",
"Jason Phang (2019) also proposed to use some data-rich intermediate supervised tasks for pre-training to help produce better representation for the end task.",
"To the best of our knowledge, all these previous studies tried to improve representations using external data or knowledge, which are quite different from our method as we don't use any external information.",
"Also, the philosophy of our approach is entirely different as we try to eliminate commonalities among classes through feature projection, which is orthogonal to existing representation learning approaches.",
"Finally, our work is related to several other works.",
"Ganin and Lempitsky (2014) introduced the gradient reverse layer (GRL) for extracting common features in the context of domain adaptation.",
"It embeds domain adaptation into the process of learning representations so that the final classification decision has more discriminative and invariant characteristics for domain changes.",
"We also use GRL to extract irrelevant or common features.",
"However, we do not work on domain adaptation and they do not use feature projection.",
"Belinkov et al. (2019) used adversarial learning to encourage models to learn representations free of hypothesis-only biases in the SNLI dataset.",
"Zhang et al. (2019) combined GRL and aspect attention to study cross-domain sentiment classification.",
"They found common features across domains and then extracted information from the aspects (which are product features) with the help of common features to do classifica-tions.",
"Our work is clearly different because none of these existing works improve representation learning through feature projection.",
"The overall framework of our model is shown in Figure",
"1. The whole model consists of two parts, the first part is the projection network ( i.e. , P-net) and the other is the common feature learning network ( i.e. , C-net).",
"As mentioned earlier, the goal of C-net is to extract common features and the goal of P-net is to compute the purified features for classification, which is done by projecting the learned full information vector of the input document into a more discriminative semantic space to eliminate the influence of the common features.",
"P-net consists of four parts: the input layer X , the feature extractor F p , Orthogonal Projection C-Net P-Net YOPLYGRL word embedding Figure 1: The architecture of FP-Net Layer ( OPL ), and the final classification layer C p .",
"C-net is also composed of four parts: the input layer X , the feature extractor F c ( F p and F c 's parameters are not shared) 1 , the Gradient Reverse Layer ( GRL ) and the classification layer C c .",
"The key idea of the proposed technique is as follows: The feature vector f p computed by the feature extractor F p is projected to the orthogonal direction of the feature vector f c extracted by F c of the C-net.",
"That is, f p (the full information extracted from the input document) is projected to the discriminative semantic space to be purified for the final classification.",
"However, in order to perform the orthogonal projection, two operations are required, which we will explain shortly.",
"Next we use CNN as an example feature extractor to detail each component of the proposed FP-Net.",
"CNN Extractor : Given a dataset D = { ( x i , y i ) } Ni =1 , where x i is an input document with the length L (after padding or cut) and y i is the label corresponding to the sample x i .",
"Let V ij R k be the word vector corresponding to the j th word of the document x i .",
"X i RL k is the embedding matrix of x i .",
"Recall our FP-Net model consists of two sub-networks, i.e., P-net and C-net, with the same input x i .",
"The two sub-networks also have the same structure for the feature extractor CNN, but there are no shared parameters between them.",
"The feature extractors of P-net and C-net are F p , F c .",
"1 The feature extractor can be any existing extractor.",
"In this work, we verified the effectiveness of our purification network using CNN, RNN, Transformer, and Bert as feature extractors as we will see in the experiment section.",
"We use F c as an example to introduce the working of CNN.",
"When the feature extractor F c receives X i from the input layer, F c extracts the advanced features f c from X i in the form of n-grams, which is: f c = [ c 1 , c 2 , ..., c l n +1 ] = [ c j ] l n +1 j =1 , (1) where c j represents the output produced by CNN's filter on X i [ j : j + n 1 , :] .",
"Mathematically, a convolution operation consists of a filter W R n k and a bias b R .",
"Then c j can be expressed as: c j = g ( W X i [ j : j + n 1 , :] + b ) , (2) where g is a nonlinear activation function such as Relu .",
"We use a Maxpooling operation over the feature map and take the maximum value f c = max { f c } as the feature corresponding to this particular filter.",
"The same feature extractor F p will also get the advanced features f p from the input layer.",
"We refer to the features of the P-net and C-net respectively as f p = CNNp ( X ) , (3) f c = CNNc ( X ) .",
"C-net Module : The goal of C-net is to extract the common features, which are the semantic information of the input example that is not discriminative for the classification task.",
"As mentioned earlier, common features are those shared by all classes of the problem.",
"The classifier C c should not use them to distinguish different classes.",
"To obtain common features, we add a Gradient Reverse Layer ( GRL ) (Ganin and Lempitsky, 2014; Ganin et al., 2016) after the feature extractor F c to reverse the gradient direction.",
"Through this training module, we can obtain the common features that are shared among classes.",
"Without loss of generality, we can think of the gradient reverse layer as a pseudo-function de-fined by two incompatible equations describing its forward and back-propagation behaviors: GRL ( x ) = x, (5) GRL ( x ) x = I, (6) Figure 2: Working of the Orthogonal Projection Layer.",
"where W c and b c are the weights and bias of C c respectively.",
"By optimizing the objective function Loss c , the feature extractor F c is able to extract the common features of different classes.",
"P-net Module : The goal of P-net is to first extract the full semantic information from the input example and then project it into the semantic space purified for classification.",
"In order to achieve this, we perform the projection of the feature f p extracted by the feature extractor F p onto the orthogonal direction of the common feature f c , extracted by F c .",
"The feature space orthogonal to the common feature vector should contain features that are pure and highly effective for classification ( e.g. , sentiment related information in sentiment classifi-cation).",
"Projecting the traditional feature vector f p to this orthogonal feature space preserves the discriminative information and remove those common features of the classes that are unhelpful and even confusing to the classification task.",
"The Orthogonal Projection Layer ( OPL ) helps us accomplish this goal.",
"Figure 2 illustrates the idea of OPL using a two-dimensional space example.",
"Mathematically, we first project the tradition feature vector f p onto the common feature vector f c : f p = Proj ( f p , f c ) , (9) where Proj is a projection function.",
"Proj ( x, y ) = x y | y | y | y | , (10) where x , y are vectors.",
"We then do the projection in the orthogonal direction of the projected feature f p to get the purer classification feature vector: (cid:102) f p = Proj ( f p , ( f p f p )) .",
"Clearly, it is easy to show that the feature vector (cid:102) f p obtained by Eq.",
"11 is equivalent to f p f p .",
"Using the traditional feature vector f p and the projected feature vector f p , we can build a plane (in three dimensions ).",
"The intersection of this plane and the orthogonal plane of the projected feature vector f p is our pure feature vector.",
"In other words, the projection in Eq.",
"9 is a constraint on the common feature vector.",
"That is to say: the modulus of the common feature vector is limited by projecting the traditional feature vector of the input x i to the common feature vector, so the semantic information of the new common feature vector ( i.e. , the projected feature f p ) contains only the common semantic information in x i .",
"This makes the final purified feature vector (cid:102) f p coming from the traditional feature vector f p rather than any vector in any plane orthogonal to the common feature vector f c .",
"Finally, we use the purified feature vector (cid:102) f p to do the classification.",
"Note that here Loss p and Loss c are trained simultaneously, and they use different optimizers.",
"Loss p uses the Adam optimizer.",
"Since Ganin and Lempitsky (2014) used Moment SGD as the domain classi-fier's optimizer, our C-net loss function Loss c also uses Moment SGD optimizer.",
"2 Gradients are also passed back through feature f c when optimizing Loss p .",
"Although the two losses are opposite to each other in terms of optimization targets of the feature extractor F c , the effect of Loss p on F c is in the orthogonal direction of f c .",
"A balance will be found to make the extracted feature f c closer to the real common features.",
"The complete training algorithm of the proposed FP-Net is given in Algorithm 1, which is self-explanatory.",
"2 We have conducted experiments using the Adam optimizer for both C-Net and P-Net.",
"The results are about the same as using two different optimiers.",
"3 https://github.com/Qqinmaster/FP-Net/ Algorithm 1 Feature Purification Network 1: Input : Dataset D = { ( x i , y i ) } Ni =1 , x i 's embedding matrix X i R Lk ; Randomly initialized FP-Net's parameters .",
"to verify whether the proposed feature purification is general and effective for different deep learning classification models (or more precisely, feature extractors) on diverse datasets.",
"We carried out experiments on four diverse benchmark datasets:",
"MR : This is a movie review dataset for sentiment classification.",
"It has two classes: positive and negative (Pang and Lee, 2005).",
"4 SST2 : This is the Stanford Sentiment Treebank dataset.",
"5 Each sample is marked as negative or positive.",
"TREC : This is a question classification dataset, which is to classify a question into one of the six question types (Li and Roth, 2002).",
"6 SNLI : This is a popular text entailment dataset.",
"It contains 570k human annotated sentence pairs, in which the premises are drawn from the captions of the Flickr 30 corpus and hypotheses are manually annotated (Bowman et al., 2015).",
"For this SNLI dataset, we created the following settings to suit our needs: (1) we concatenated the two sentences (in a pair) as a single sample; (2) when using 4 http://www.cs.cornell.edu/people/ pabo/movie-review-data/ 5 http://nlp.stanford.edu/sentiment/ 6 http://cogcomp.cs.illinois.edu/Data/ QA/QC/ Data c l T rain T est | V | MR 2 45 8,529 1,066 17,884 SNLI 3 40 54,936 9,824 33,944 SST2 2 35 6,920 1,821 16,789 TREC 6 15 5,000 952 8,834 Table 1: Dataset statistics.",
"Bert as a feature extractor, we reduced the number of training set samples to 25,000 to speed up the training process.",
"For other feature extractors (see below), the complete data is used.",
"Since our goal is to perform feature purification so that the purified features are more conducive for classification, to verify the validity of the proposed FP-Net model, we compare the classification results with and without purification using the following popular feature extractors:",
"LSTM : The long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997) for solving the gradient disappearing problem of the traditional RNN.",
"CNN : We use the Convolution Neural Networks in (Kim, 2014) as the feature extractor to generate representations.",
"Transformer : We use the encoder part of the model proposed by (Vaswani et al., 2017) as the feature extractor, followed by a classifier.",
"Bert : We fine-tuned on the trained Bert base (Devlin et al., 2018).",
"Bert base includes 12-layer, 768-hidden, 12-heads and 110M parameters.",
"In particular, we use Bert-base Uncased, where Uncased means that the text has been lower cased before WordPiece tokenization.",
"Note, those existing feature learning or feature enhancement approaches discussed in Section 2 are not compared as they are entirely different from our approach.",
"They mainly relied on external data or information to improve representation learning.",
"Our method does not use any external data or information.",
"However, we do include Bert as a baseline as it is perhaps one of the most successful feature learning methods using external data.",
"Our method can improve on top of Bert.",
"First, all the word embeddings in our experiments are randomly initialized as 200-dimension vectors and then modified during training (except Bert).",
"For each type of feature extractor, we have the following configuration: 1) For the RNN-based models, we use a two-layer LSTM for feature extraction and the hidden state of each layer is set to 256.",
"2) For the CNN-based models, in order to obtain more fine-grained features, we use filter sizes of [2,3,4,5,6] with 100 feature maps each.",
"3) For the Transformer-based models, we use Transformer's encoder as the feature extractor, specifically with single-head and 3 blocks.",
"4) For the Bert-based models, we fine-tuned the pre-trained Bert-base parameters.",
"These settings are exactly the same in the baseline as in our FP-Net.",
"In the training of the C-net module, we use a stochastic gradient with 0.9 as the momentum and the following annealing learning rate (Ganin and Lempitsky, 2014).",
"where p is the training progress linearly changing from 0 to 1, l 0 = 0 .",
"01 , = 10 and = 0 .",
"75 .",
"In GRL, the hyper-parameters swept [0 . 05 , 0 . 1 , 0 . 2 , 0 . 4 , 0 . 8 , 1 . 0] .",
"In our experiments, we adopt the classification accuracy as the evaluation metric.",
"We summarize the experimental results in Table 2, where FP + X means that the model trained by the proposed FP-Net using X as the feature extractor.",
"Each of the two lines compares the experimental results of the traditional model with our proposed model on these four datasets.",
"From Table 2, we can make the following observations.",
"1. Our FP-Net model consistently improves the results of the baseline feature extractors ( i.e. , LSTM, CNN, Transformer and Bert) using the proposed feature projection.",
"This verifies the effectiveness of the proposed feature purification method of projecting the traditional feature vectors to the orthogonal direction of the common features.",
"2. Compared with the traditional CNN, the FP + CNN model increases the accuracy by 2 .",
"56% on the MR dataset and 1 .",
"46% on the SNLI dataset.",
"The improvement of FP + LSTM is less, increased by 0 .",
"67% and 0 .",
"94% on the MR and SNLI datasets.",
"This shows that the way that CNN extracts input features (concatenate the feature after using different sliding window sizes for extracting local features) is quite effective in extracting more complete semantic information, which leads to more irrelevant features being used.",
"That is why the projection on the CNN features brings more improvements compared to the RNN-based model.",
"3. By comparing the experimental results of the attention-base model ( i.e. , Transformer and Bert), we can see that our FP-Net can improve the feature representation capabilities of these feature extractors.",
"For example, in the Bert-based experiment, our FP+Bert can increases the accuracy by 3 .",
"11% on MR and 1 .",
"66% on TREC.",
"That is to say our orthogonal projection method can make the representation of attention-based obtain a higher discriminative power for classification.",
"Outperforming Bert is particularly significant because Bert is perhaps one of the best feature extractors, if not the best.",
"In order to analyze the effectiveness of each component of FP-Net, we performed the following two ablation experiments.",
"First, in Table 3, we report the results of the ablation test of each component of FP-Net, where FP+CNN-G (or O, G-O) represents FP-Net with the GRL (or OPL, or both GRL and OPL) removed while using CNN as the feature extractor.",
"The parameters of all the experiments compared in the first block are exactly the same.",
"In order to keep the parameter size consistent, we performed element-wise summation of the features of FP-Net's two sub-networks f p and f c in the FP+CNN-G-O experiment.",
"By comparing the experimental results of the first block, we observe the following: 1) Whether GRL or OPL is removed or both GRL and OPL are removed at the same time, the accuracy will drop significantly compared with the complete FP-Net.",
"For example, for the MR dataset, when we remove the GRL and keep the OPL ( i.e. , FP+CNN-G), the accuracy decreases by 1 .",
"03% ; When we remove both GRL and OPL, and then execute f p + f c ( i.e. , FP+CNN-G-O(plus)), the accuracy decreases by 2 .",
"36% , etc.",
"These results show that each component in FP-Net is important, and the absence of any one component will lead to decline in accuracy.",
"2) In the experiment of FP+CNN-O, we remove OPL and keep GRL, which means that we use f p f c instead of the orthogonal projection ( i.e. , f p f p ).",
"As stated in P-Net module of Section 3, such a replacement will give up a constraint that gets the common feature f p of the current input x i from the base common feature f c .",
"The results showed that the accuracy decreases by 2 .",
"10% on MR and decreases by 1 .",
"27% on SNLI, which mean that the projection operation ( i.e. , Eq. 9) is necessary.",
"3) Clearly, adding f p and f c of FP-Net is not the only way to connect the two sub-networks of FP+CNN-G-O.",
"We can do f p f c , where is the concatenation operator.",
"Although this method has more parameters in the P-net classifier, we can still observe that the accuracy of FP+CNN-G-O is not as good as the accuracy of FP+CNN.",
"For example, FP+CNN-G-O reduced the accuracy by 2 .",
"36% on MR and 1 .",
"30% on SNLI, which can also prove the effectiveness of GRL and OPL in our FP-Net.",
"Second, we show that the improvement in accuracy by FP-Net is not due to the increase in the number of parameters.",
"We doubled the parameters of traditional CNN and Transformer and compared with our FP+CNN, FP+Trans.",
"The results of this part of the experiments are shown in Table 4, where the index 'Dp' means the Doubled parameter size.",
"For example, Tans Dp increases the number of blocks of Transformer in the baseline from 3 to 6.",
"All experimental results show that increasing the number of parameters of the baseline models will improve classification accuracy slightly, but there is still a large gap with the proposed model.",
"In this paper, we proposed a novel Feature Purification Network (FP-Net) to improve the representation for text classification.",
"The method is based on feature projection.",
"The proposed model uses two sub-networks, one for identifying common features that are not discriminative for classification, and the other for feature projection that projects the traditional features to the orthogonal direction of the common features.",
"To the best of our knowledge, this is the first method that uses feature projection to improve text classification.",
"Through a large number of comparative experiments, we showed the effectiveness of the proposed feature projection method.",
"Our current method is designed only for traditional text classification methods such as LSTM, CNN, and Transformer.",
"In our future work, we will consider extending it to graph-based methods such as GCN for graph data, and to generation-based methods such as GAN for adversarial learning.",
"The project was funded by Peking University."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"other"
] |
[
"Negation is a core construction in natural language.",
"Despite being very successful on many tasks, state-of-the-art pre-trained language models often handle negation incorrectly.",
"To improve language models in this regard, we propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences from a raw text corpus.",
"By training BERT with the resulting combined objective we reduce the mean top 1 error rate to 4% on the negated LAMA dataset.",
"We also see some improvements on the negated NLI benchmarks.",
"Negation is an important property in many language understanding tasks, such as sentiment analysis, question answering, knowledge base completion and natural language inference (Kassner and Schtze, 2019; Naik et al., 2018).",
"While Pretrained Language Models (PLMs) such as BERT pushed the state-of-the-art on these tasks (Devlin et al., 2019; Petroni et al., 2019), they fail dramatically on instances that require understanding negation.",
"Kassner and Schtze (2019) show that current PLMs cannot correctly distinguish between the negated and non-negated forms of fill-in-the-blank tests.",
"For instance, when asked to predict the [MASK] token in sentences such as The capital of Cuba is [MASK] and The capital of Cuba is not [MASK] , BERT often generate the same answer Havana , indicating that it may not be appropriately modeling the distribution of negative sentences.",
"Additional evidence is given by the fact that, when fine-tuned on natural language inference tasks, PLMs tend to mis-classify examples which Figure 1: An overview of the unlikelihood objective.",
"contain not or no as contradiction when the true label is neutral or entailment (Naik et al., 2018).",
"Recently, Hossain et al. (2020b) proposed new natural language inference test sets to specifically target the model's understanding of negation and show that current state-of-the-art models perform poorly on these test sets.",
"In this work, we investigate whether we can alleviate the modeling bias of PLMs on negated sentences.",
"Our approach is composed of two core contributions:",
"i) a syntactic data augmentation scheme to automatically generate negated sentences;",
"ii) a new training paradigm, dubbed unlikelihood training with reference (Fig. 1), based on the recently proposed unlikelihood training (Welleck et al., 2020).",
"At first, we generate a large number of negated sentences by negating sentences mined from an openly available text corpus (Wikipedia).",
"Our sentence negator uses the dependency parse of the sentence, part of speech tags, and morphological features of each word in the sentence and deterministically negates the sentence.",
"Given a negated version of a sentence, we replace its object with the [MASK] token and use unlikelihood training to make the object unlikely under the PLM distribution (e.g. we minimize the probability of improvements as depicted in Fig. 1).",
"Importantly, in order to ensure that the negated sentence is factually false, we use the positive sentence as context (i.e., as a reference) for the unlikelihood prediction task.",
"Concretely, we provide the concatenation of the positive and the masked negated sentence as input to the PLM.",
"Our method can be thought of a type data augmentation, which has be shown to be effective at improving robustness across many tasks in language, such as text classification (Wei and Zou, 2019), natural language inference (Min et al., 2020; McCoy et al., 2019) and semantic parsing (Andreas, 2019).",
"For our negation experiments, we fine-tune pretrained BERT with our new objective and a knowledge distillation objective.",
"We test our model on the negated LAMA dataset (Kassner and Schtze, 2019), which is the negated version of knowledge probing dataset LAMA, introduced in Petroni et al. (2019).",
"Our model achieves a mean error rate of 4% (a improvement of 5 points) on the negated LAMA dataset while maintaining the performance on the original LAMA dataset without any direct training on the negated LAMA sentences.",
"We also fine-tune BERT on RTE (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) tasks and achieve better results on the language inference benchmark including negation from (Hossain et al., 2020b).",
"Pre-trained language models have shown impressive results across many tasks, such as question answering (Alberti et al., 2019) and natural language inference (Liu et al., 2019).",
"These models are also known to encode factual and common-sense knowledge (Radford et al., 2019; Petroni et al., 2019; Bosselut et al., 2019).",
"Despite these abilities, Kassner and Schtze (2019) found that these models fail at understanding negation through analysing negated factual statements.",
"Extensive literature looks at the linguistic knowledge learned by language models (McCoy et al., 2019; Jumelet and Hupkes, 2018; Gulordava et al., 2018; Marvin and Linzen, 2018; Tenney et al., 2019; Warstadt and Bowman, 2019; Talmor et al., 2019).",
"Recent work has also studied the shortcomings in negation scope detection (Jumelet and Hupkes, 2018; Fancellu et al., 2016, 2017; Morante and Daelemans, 2009; Li and Lu, 2018; Zhao and Bethard, 2020; Chen, 2019) and focus detection (Shen et al., 2019; Zou et al., 2014, 2015; Hossain et al., 2020a).",
"Naik et al. (2018) and McCoy et al. (2019) systematically study the linguistic abilities of these models using NLI, and show that these models rely on erroneous syntactic heuristics.",
"Our work is in this spirit for negations.",
"Noji and Takamura (2020) propose taking advantage of negative examples and unlikelihood in the training of language models to increase their syntactic abilities.",
"Similarly, Min et al. (2020) show the effectiveness of syntactic data augmentation in the case of robustness in NLI.",
"Neither of these works focus on negations.",
"We generate the negated versions of sentences using a syntactic augmentation method.",
"The method gets as input the dependency parse of the sentence, POS tags and morphological information of each word and negates the sentence using a set of rules.",
"Each rule has a dependency tree regular expression pattern (Semgrex; Chambers et al. 2007).",
"We use Semgrex patterns to identify different syntactic templates, and then transform the sentence based on a list of actions defined in the rule.",
"These actions can be move , replace , insert and lemmatize .",
"The unlikelihood token which will be discussed later is also chosen using Semgrex patterns (see Appendix C for some examples).",
"We use Stanza (Qi et al., 2020) to get the dependency parse of the sentences, parts of speech tags, lemma, and morphological features of the words.",
"We also filter out sentences with more than 20 words.",
"To test the coverage of our Semgrex patterns, we randomly sampled 930 sentences from Wikipedia.",
"Only 31 of them did not match any of our Semgrex patterns (See table 8 in Appendix B for the number Model SQuAD ConceptNet T-REx Google-RE BERT 13.53 15.65 29.10 10.24 BERT + KL 13.64 15.64 29.28 10.27 BERTNOT 13.97 15.49 29.25 10.31 Table 1: Mean precision at k = 1 ( p @ 1 ) for original LAMA queries (higher is better) of pre-trained BERT, BERT trained with distillation objective, and BERT with unlikelihood and distillation objectives (BERTNOT, sec 4.2).",
"of matches for each rule in our rule set for these 930 sentences).",
"In addition, to get a better sense of the correctness of our method, 100 random sentences (from Wikipedia) were negated and reviewed by a native English speaker.",
"The precision for these negations is 94 .",
"00% .",
"Table 7 in Appendix B shows examples of original and negated sentences.",
"Applying unlikelihood to a word in any random sentence is problematic, unless the sentence is a factual statement (e.g. unlikelihood on improvements in He did not advocate navigational improvements on the Sangamon River. in Fig 1 is problematic as this sentence is not grounded in reality).",
"Moreover, using solely factual sentences limits the application of this method.",
"1 To be able to use any generic (not necessarily factual) sentence and pick an unlikelihood token in it, there needs to be some sort of grounding or context.",
"In this setup, each training example is of the form < sentence A , sentence B > where sentence A is the reference for sentence B , and provides the grounding or context for it.",
"The unlikelihood loss has recently been proposed by Welleck et al. (2020) to mitigate the problem of repetition in neural text generation.",
"Noji and Takamura (2020) also adopted this loss to penalize the desirability of an incorrect token in a sentence.",
"We adopt this method to penalize the likelihood of a token in sentence B that makes this sentence contradictory with the reference sentence A .",
"(1) A Humans have a rational soul.",
"In the example 1, assuming that sentence A is true, we want the model to avoid assigning soul in sentence B a high probability.",
"To this end, the probability of the unlikelihood token x u = soul is penalized with the unlikelihood loss LUL as: LUL ( x u ) = log(1 p ( x u | x 1: T )) , (1) where x 1: T is the whole input sequence ( sentence A concatenated with sentence B which is the negated version of sentence A as illustrated in Fig 1).",
"To have a balanced augmentation data set, we also include examples where sentence B is the copy of sentence A and therefore not contradictory with it.",
"In this context, we want the model to perform as it was untouched (before any fine-tuning).",
"The KL divergence knowledge distillation loss is used for these examples on the same token: (2) A Humans have a rational soul.",
"The loss LKL for token x l = [MASK] is written as: LKL ( x l ) = DKL ( p LM || p ) (2) where p LM is the probability distribution over the vocabulary for the masked token x l under the LM before any fine-tuning.",
"In our experiments, we use the BERT-base model and further train it with two objectives, the unlikelihood objective (Eq. 1) and the knowledge Query Top 3 words with log probs from BERT Top 3 words with log probs from BERTNOT iOS is developed by [MASK].",
"distillation objective (Eq. 2).",
"We also use original Wikipedia sentences for the latter to prevent catastrophic forgetting of language modeling.",
"The probability of the unlikelihood token p ( x u | x 1: T ) and the distribution for masked token x l are computed using the language modeling head of the BERT model by replacing x u and x l in the input sequences with the [MASK] token.",
"Examples for each objective are sampled uniformly.",
"We will refer to our model as BERTNOT.",
"We report our main results on LAMA and Negated LAMA for knowledge base completion.",
"The cloze statements from LAMA are facts or commonsense knowledge generated from either subject-relation-object triples (X, rel, Y) or question-answers pairs.",
"The cloze statements for the triples are generated using a template for each relation which includes the placeholders X and Y (e.g. X is located in Y).",
"X is replaced for the subject and Y is replaced with the [MASK] token to be predicted by the model.",
"In the question-answer pairs, the answer is replaced with [MASK] token.",
"The facts in the LAMA dataset are from multiple sources: 1) Google-RE relations, namely place of birth, date of birth and place of death; 2) T-REx, a subset of Wikidata triples with 41 relations (ElSa-har et al., 2018); 3) ConceptNet with 16 relations (Li et al., 2016); 4) SQuAD, a subset of 305 context-insensitive questions manually rephrased as cloze-style questions (Rajpurkar et al., 2016).",
"Negated LAMA was created by manually negating the templates or questions (Kassner and Schtze, 2019).",
"Following Petroni et al. (2019) we use mean precision at k ( P @ k ) for LAMA.",
"For negated LAMA we report mean top 1 error rate.",
"As discussed in section 4.2, we train a pre-trained BERT base cased model for 5 epochs, with 20k examples for each objective, a maximum sequence length of 128 and a learning rate of 1e-5.",
"To see the effects of the unlikelihood objective more clearly, we also train a pre-trained BERT base cased model with only the KL knowledge distillation objective with the same data and hyper-parameters.",
"Tables 1 and 2 respectively show the mean precision at rank 1 (averaged over all the relations) for LAMA, and mean top 1 error rate for negated LAMA queries.",
"2 The mean error rate on the negated LAMA queries decreases to below 4% while the results on original LAMA stay the same.",
"These results are achieved without any direct training on LAMA queries (negated or non-negated).",
"Table 3 shows the top 3 predicted words for a pretrained BERT model and the model trained with our method.",
"Pre-trained BERT seems to ignore negation and mostly predict based on the subject of the query, but the prediction probability in the negated queries seems to be generally lower.",
"Our method is as good as the vanilla model (BERT) on original queries.",
"For the negated queries, our model predictions are far-superior than the vanilla model.",
"We also tried out method on BERT-large.",
"See appendix E for results and discussion.",
"We fine-tune our model with a language inference objective on RTE, SNLI and MNLI tasks.",
"Table 4 shows the accuracies on the original development splits and the new splits from Hossain et al. (2020b) containing negation for each task.",
"We used the hyper-parameters from Hossain et al. (2020b) to fine-tune all of our models.",
"Our model achieves superior results on RTE (low-resource setting) and slightly better accuracies on SNLI and MNLI (high-resource setting) on all the new splits containing negation, while keeping roughly the same scores on the original dev splits.",
"We conjecture that fine-tuning on large-amounts of data (SNLI and MNLI) may have resulted in catastrophic forgetting of the negation knowledge, decreasing the gap between BERT and BERTNOT.",
"We tried to alleviate the catastrophic forgetting by mixing in some unlikelihood training and knowledge distillation along the NLI training, but that did not help.",
"You can see these results for MNLI in appendix D. We leave further exploration of better fine-tuning objectives while preserving the pretrained knowledge for future work.",
"Table 5 shows some of the examples of the new RTE split containing negation from Hossain et al. (2020b), along with the predictions from BERT and BERTNOT.",
"Examples 4 and 6 show the failure cases of BERTNOT.",
"As it can be seen, for the fifth example, the true label is incorrect, but BERTNOT predicts the correct label for this pair of premise and hypothesis.",
"In this work, we propose a combination of the unlikelihood objective with a reference based setup for input sentences to model negation.",
"This allows us to utilize generic sentences, and negate them with our data augmentation method to be used as examples for the unlikelihood objective.",
"Our method notably improves the error rate on the negated LAMA dataset while keeping the same performance on the original LAMA queries.",
"We also test our method on the original development sets and new splits containing negation from Hossain et al. (2020b) of RTE, SNLI and MNLI tasks.",
"We see large improvements on the negated splits in low-resource setting (RTE) and slight improvements in high-resource setting (SNLI and MNLI), while also maintaining similar results as BERT on original splits."
] | [
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"result",
"objective",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"result"
] |
[
"We extracted information from the ACL Anthology (AA) and Google Scholar (GS) to examine trends in citations of NLP papers.",
"We explore questions such as: how well cited are papers of different types (journal articles, conference papers, demo papers, etc.)?",
"how well cited are papers from different areas of within NLP?",
"etc.",
"Notably, we show that only about 56% of the papers in AA are cited ten or more times.",
"CL Journal has the most cited papers, but its citation dominance has lessened in recent years.",
"On average, long papers get almost three times as many citations as short papers; and papers on sentiment classification , anaphora resolution , and entity recognition have the highest median citations.",
"The analyses presented here, and the associated dataset of NLP papers mapped to citations, have a number of uses including: understanding how the field is growing and quantifying the impact of different types of papers.",
"The origins of Natural Language Processing (NLP) go back to the earliest work in Computer Science when Alan Turing published his seminal paper exploring whether machines can think, and proposed what is now known as the Turing test (Turing, 1950, 2009).",
"A crucial factor in the evolution of NLP as a field of study in its own right was the formation of the Association for Computational Linguistics (ACL) in 1962, and the first ACL conference in 1965.",
"1 Today NLP is a broad interdisciplinary field with a growing number of researchers from Computer Science, Linguistics, Information Science, Psychology, Social Sciences, Humanities, and more joining its ranks.",
"1 One can make a distinction between NLP and Computational Linguistics; however, for this work, we will consider them to be synonymous.",
"Also, ACL was originally named the Association for Machine Translation and Computational Linguistics (AMTCL).",
"It was changed to ACL in 1968.",
"Organizations such as ACL, ELRA, and AFNLP publish peer-reviewed NLP papers that include both journal articles and conference proceedings.",
"Historically, the need for a faster review process has made conference proceedings the dominant form of published research in Computer Science and NLP.",
"With time, the conferences and the types of papers they publish, have evolved.",
"Some conferences, such as EMNLP and ACL, are highly competitive, while others, such as most workshops and LREC, deliberately choose to keep more generous acceptance rates.",
"The publications themselves can be of different types: journal articles, conference papers, short papers, system demonstration papers, shared task papers, workshop papers, etc.",
"New ideas and paradigms have evolved: for example, the rise of statistical NLP in the 1990s and deep learning in the 2010s.",
"With the dawn of a new decade and NLP research becoming more diverse and more popular than it ever has been, this work looks back at the papers already published to identify broad trends in their impact on subsequent scholarly work.",
"Commonly used metrics of research impact on subsequent scholarly work are derived from citations including: number of citations, average citations, h-index, relative citation ratio, and impact factor (Bornmann and Daniel, 2009).",
"However, the number of citations is not always a reflection of the quality or importance of a piece of work.",
"Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical or in an area where the number of scientific publications is low.",
"Furthermore, the citation process can be abused, for example, by egregious self-citations (Ioannidis et al., 2019).",
"Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar (GS), and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact.",
"Thus citation metrics are often a factor when making decisions about funding research and hiring scientists.",
"Citation analysis can also be used to gauge the influence of outside fields on one's field and the influence of one's field on other fields.",
"Therefore, it can be used to determine the relationship of a field with the wider academic community.",
"As part of a broader project on analyzing NLP Literature, we extracted and aligned information from the ACL Anthology (AA) and Google Scholar to create a dataset of tens of thousands of NLP papers and their citations (Mohammad, 2020b, 2019).",
"2 In this paper, we describe work on examining the papers and their citations to identify broad trends within NLP researchoverall, across paper types, across publication venues, over time, and across research areas within NLP.",
"Notably, we explored questions such as: how well cited are papers of different types (journal articles, conference papers, demo papers, etc.)?",
"how well cited are papers published in different time spans?",
"how well cited are papers from different areas of research within NLP?",
"etc.",
"The dataset and the analyses have many uses including: understanding how the field is growing; quantifying the impact of different types of papers on subsequent publications; and understanding the impact of various conferences and journals.",
"Perhaps most importantly, though, they serve as a record of the state of NLP literature in terms of citations.",
"All of the data and interactive visualizations associated with this work are freely available through the project homepage.",
"3 2 Background and Related Work The ACL Anthology is a digital repository of public domain, free to access, articles on NLP.",
"4 It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP.",
"5 As of June 2019, it provided access to the full text and metadata for 50K articles published since 1965 (the year of the first ACL confer-2 In separate work we have used the NLP Scholar data to explore gender gaps in Natural Language Processing research; especially, disparities in authorship and citations (Mohammad, 2020a).",
"We have also developed an interactive visualization tool that allows users to search for relevant related work in the ACL Anthology Mohammad (2020c).",
"3 http://saifmohammad.com/WebPages/nlpscholar.html 4 https://www.aclweb.org/anthology/ 5 ACL licenses its papers with a Creative Commons Attribution 4.0 International License.",
"ence).",
"It is the largest single source of scientific literature on NLP.",
"Various subsets of AA have been used in the past for a number of tasks including: the study of citation patterns and intent (Pham and Hoffmann, 2003; Aya et al., 2005; Teufel et al., 2006; Mohammad et al., 2009; Nanba et al., 2011; Zhu et al., 2015; Radev et al., 2016), generating summaries of scientific articles (Qazvinian et al., 2013), and creating corpora of scientific articles (Bird et al., 2008; Mariani et al., 2018).",
"Perhaps the work closest to ours is that by Anderson et al. (2012), who examine papers from 1980 to 2008 to track the ebb and flow of topics within NLP, the influence of subfields on each other, and the influence of researchers from outside NLP.",
"However, that work did not examine trends in the citations of NLP papers.",
"Google Scholar is a free web search engine for academic literature.",
"6 Through it, users can access the metadata associated with an article such as the number of citations it has received.",
"Google Scholar does not provide information on how many articles are included in its database.",
"However, sciento-metric researchers estimated that it included about 389 million documents in January 2018 (Gusen-bauer, 2019)making it the world's largest source of academic information.",
"Thus, there is growing interest in the use of Google Scholar information to draw inferences about scholarly research in general (Howland, 2010; Orduna-Malea et al., 2014; Khabsa and Giles, 2014; Mingers and Leydesdorff, 2015; Martn-Martn et al., 2018) and on scholarly impact in particular (Priem and Hemminger, 2010; Yogatama et al., 2011; Bulaitis, 2017; Ravenscroft et al., 2017; Bos and Nitza, 2019; Ioannidis et al., 2019).",
"This work examines patterns of citations of tens of thousands of NLP papers, both overall and across paper types, venues, and areas of research.",
"We now briefly describe how we extracted information from the ACL Anthology and Google Scholar to facilitate the citation analysis.",
"(Further details about the dataset, as well as an analysis of the volume of research in NLP over the years, are available in Mohammad",
"(2020b).) We aligned the information across AA and GS using the paper title, year of publication, and first author last name.",
"The ACL Anthology provides access to its data through its website and a github repository (Gildea et al., 2018).",
"7 We extracted paper title, names of authors, year of publication, and venue of publication from the repository.",
"8 As of June 2019, AA had 50K entries; however, this includes forewords, schedules, etc. that are not truly research publications.",
"After discarding them we are left with a set of 44,894 papers.",
"9 3.2 Google Scholar Data Google Scholar does not provide an API to extract information about the papers.",
"This is likely because of its agreement with publishing companies that have scientific literature behind paywalls (Martn-Martn et al., 2018).",
"We extracted citation information from Google Scholar profiles of authors who published at least three papers in the ACL Anthology.",
"A Google Scholar Profile page is a user-created page where authors can include their papers (along with the GS-provided citation information for the papers).",
"Scraping author profile pages is explicitly allowed by GS's robots exclusion standard.",
"This is also how past work has 7 https://www.aclweb.org/anthology/ https://github.com/acl-org/acl-anthology 8 Multiple authors can have the same name and the same authors may use multiple variants of their names in papers.",
"The AA volunteer team handles such ambiguities using both semi-automatic and manual approaches (fixing some instances on a case-by-case basis).",
"Additionally, the AA repository includes a file that has canonical forms of author names.",
"9 We used simple keyword searches for terms such as foreword, invited talk, program, appendix and session in the title to pull out entries that were likely to not be research publications.",
"These were then manually examined to verify that they did not contain any false positives.",
"studied Google Scholar (Khabsa and Giles, 2014; Orduna-Malea et al., 2014; Martn-Martn et al., 2018).",
"We collected citation information for 1.1 million papers in total.",
"We will refer to this dataset as GScholar-NLP .",
"Note that GScholar-NLP includes citation counts not just for NLP papers, but also for non-NLP papers published by the authors.",
"GScholar-NLP includes 32,985 of the 44,894 papers in AA (about 74%).",
"We will refer to this subset of the ACL Anthology papers as AA (cid:48) .",
"The citation analyses presented in this paper are on AA (cid:48) .",
"Future work will analyze both AA (cid:48) and GScholar-NLP to determine influences of other fields on NLP.",
"A. 1.2 million citations (as of June 2019).",
"Figure 1 shows the screenshot of an interactive timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year.",
"Further, the bar has colored segments corresponding to each of the papers; the height of a segment is proportional to the number of citations the paper has received.",
"Thus it is easy to spot the papers that received a large number of citations.",
"Hovering over individual papers reveals additional metadata.",
"Discussion: With time, not only have the number of papers grown, but also the number of high-citation papers.",
"We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations.",
"The 2010s papers will likely surpass the 2000s papers in the years to come.",
"Q2.",
"How well cited are individual AA (cid:48) papers, as in, what is the average number of citations, what is the median, what is the distributison of citations?",
"How well cited are the different types of papers: journal papers, main conference papers, workshop papers, etc.?",
"A. In this and all further analyses, we do not include AA (cid:48) papers published in 2017 or later (to allow for at least 2.5 years for the papers to collect citations).",
"There are 26,949 AA (cid:48) papers that were published from 1965 to 2016.",
"Figure 2 shows box and whisker plots for: all of these papers (on the left) and for individual paper types (on the right).",
"The whiskers are at a distance of 1.5 times the inter-quartile length.",
"The average number of citations are indicated with the horizontal green dotted lines.",
"Creating a separate class for Top-tier Conference is somewhat arbitrary, but it helps make certain comparisons more meaningful.",
"For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences based on low acceptance rates and high citation metrics, but certainly other groupings are also reasonable.",
"Discussion: Overall, the median citation count is 12.",
"75% of the papers have 34 or fewer citations.",
"The average number of citations (45) is markedly higher than the median (12); this is because of a small number highly cited papers.",
"When comparing different types of papers, we notice a large difference between journal papers and the rest.",
"Even though the number of journal papers in AA (and AA (cid:48) ) is very small (about 2.5%), these papers have the highest median and average citations (55 and 204, respectively).",
"Top-tier conferences come next, followed by other conferences.",
"The differences between each of these pairs is statistically significant (KolmogorovSmirnov (KS) test, p < .01).",
"10 Interestingly, the workshop papers and the shared task papers have higher medians 10 KS is a non-parametric test that can be applied to compare distributions without needing to make assumptions about the nature of the distributions.",
"Since the citations data is not normally distributed, KS is especially well suited.",
"and averages than the non-top-tier conferences.",
"These differences are also significant (KS, p < .01).",
"Q3.",
"How well cited are recent AA (cid:48) papers: say those published in the last decade (20102016)?",
"How well cited are papers that were all published in the same year, say 2014?",
"Are the citation distributions for individual years very different from those for larger time spans, say 20102016?",
"Also, how well cited are papers 5 years after they are published?",
"A. The top of Figure 3 shows citation box plots for 20102016; the bottom shows plots for papers published in 2014.",
"Discussion: Observe that, in general, these numbers are markedly lower than the those in Figure 2.",
"That is expected as these papers have had less time to accrue citations.",
"Observe that journal papers again have the highest median and average; however, the gap between journals and top-tier conferences has reduced considerably.",
"The shared task papers have a signifi-20102016 2014 Figure 3: Citation box plots for papers: published 20102016 (top) and published in 2014 (bottom).",
"cantly higher average than workshop and non-top-tier conferences.",
"Examining the data revealed that many of the task description papers and the competition winning systems' system-description papers received a large number of citations (while the majority of the other system description papers received much lower citations).",
"Shared tasks have also been particularly popular in the 2010s compared to earlier years.",
"The plots for 2014 (bottom of Figure 3) are similar to that of 20102016.",
"(Although, system demo papers published in that year are better cited Figure 4: Citation box plots for journal articles and top-tier conference papers from various time spans. than the larger set from the 20102016 period.)",
"This plot also gives an idea of citation patterns for papers 5 years after they have been published.",
"Q4.",
"If we only consider journal papers and top-tier conferences, how well cited are papers from various time spans?",
"Discussion: Observe that the 1990s and the 2000s have markedly higher medians and averages than other time periods.",
"The early 1990s, which have the highest average, were an interesting period for NLP with the emergence of statistical approaches (especially from speech processing) and the use of data from the World Wide Web.",
"The 20002010 period, which saw an intensification of the statistical data-driven approaches, is notable for the highest median.",
"The high average in the 1990s is likely because of some seminal papers that obtained a very high number of citations.",
"(Also the 1990's had fewer papers than the 2010s, and thus the average is impacted more by the very high-citation papers.)",
"The drop off in the average and median for recent papers is largely because they have not had as much time to collect citations.",
"A. Figure 5 (top) shows the citation box plots for 19652016 papers from individual venues.",
"The plots for workshops, system, demos, shared tasks, and tutorials are shown as well for ease of comparison.",
"Figure 5 (bottom) shows the same box plots for 20102016 papers.",
"Discussion: CL Journal has the highest median and average citation numbers.",
"ACL comes second, closely followed by EMNLP and NAACL.",
"The gap between CL Journal and ACL is considerably reduced when considering the 20102016 papers.",
"IJCNLP and LREC have the highest numbers among the non-top-tier conferences, but their numbers remain lower than the numbers for SemEval, non-SemEval shared tasks, and workshops.",
"TACL, a journal, has substantially lower citation numbers than CL Journal, ACL, EMNLP, and NAACL (Figure 5 top).",
"However, it should be noted that TACL only began publishing since 2013.",
"(Also, with a page limit of about ten, TACL papers are arguably more akin to conference papers than journal papers.)",
"When considering only the 2010 2016 papers, TACL's citation numbers are second only to CL Journal (Figure 5 bottom).",
"When considering 20102016 papers, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high averages (surpassing or equalling those of COLING and EACL); however their median citations are lower.",
"(This is consistent with the trends we saw earlier in Q3.)",
"A. Short papers were introduced by ACL in 2003.",
"Since then ACL is by far the venue with the highest number of short papers (compared to other venues).",
"So we compare long and short papers published at ACL since 2003 to determine their average citations.",
"Figure 6 shows the citation box plots for long and short papers published between 2003 and 2016 at ACL.",
"The two distributions are statistically different (KolmogorovSmirnov test, p < .01).",
"Discussion: In 2003, the idea of short papers was a novelty.",
"It was conceived with the idea that there needs to a be a place for focused contributions that do not require as much space as a long paper.",
"The format gained popularity quickly, and short papers at ACL tend to be incredibly competitive (sometimes having a lower acceptance rate than long papers).",
"While there have been several influential short papers, it remains unclear how well-cited they are as a category.",
"This analysis sheds some light to that end.",
"We find that, on average, long papers get almost three times as many citations as short papers; the median for long papers is two-and-half times that of short papers.",
"Q7.",
"How do different venues and paper types compare in terms of the volume of papers pertaining to various amounts of citation?",
"A. Figure 7 shows a stream graph of #papers by #citations.",
"The contributions of each of the venues and paper types are stacked one on top of another (bands of colors).",
"For a given point on the citations axis (say k ), the width of the stream corresponds to the number of papers with k citations.",
"Discussion: It is not surprising to see that the #pa-pers by #citations curve follows a power law distribution.",
"(There are lots of papers with 0 or few citations, but the number drops of exponentially with the number of citations.)",
"Workshop papers (light grey) are the most numerous, followed by LREC (green)as observable from their wide bands.",
"The bands for ACL, COLING, EMNLP, and NAACL are easily discernable but the bands for many others, especially CL Journal and TACL are barely discernable indicating low relative volume of their papers.",
"Observe that the bands for workshops and LREC are markedly wider in the 0 to 10 citations range than in the 11 and more citations range of the x axis.",
"In contrast, the widths of the bands for top-tier conferences, such as ACL and EMNLP, remain relatively stable.",
"Nonetheless, in terms of raw volume, it is worth noting that the workshops and LREC each produce more papers that are cited ten or more times than any other venue.",
"As one considers even higher citations, the top-tier conferences become more dominant.",
"Q8.",
"Discussion: About 56% of the papers are cited ten or more times.",
"6.4% of the papers are never cited.",
"(Note also that some portion of the 19 bin likely includes papers that only received self-citations.)",
"It would be interesting to compare these numbers with those in other fields such as medical sciences, physics, linguistics, machine learning, and psychology.",
"Q9.",
"How well cited are areas within NLP?",
"A. We used word bigrams in the titles of papers to sample papers from various areas.",
"12 The title has a privileged position in a paper.",
"It serves many functions, but most importantly, it conveys what the paper is about.",
"For example, a paper with the bigram machine translation in the title is likely about machine translation (MT).",
"We removed function words from the titles of papers in AA, and extracted all bigrams.",
"Figure 9 shows, in order of decreasing frequency, the list of 66 bigrams that occurred in more than 100 papers.",
"For each bigram, the yellow/green bar shows the median citations of the corresponding papers.",
"The average citations and the number of papers are shown in parenthesis.",
"Other approaches such as clustering are also reasonable; however, results with those might not be easily reproducible.",
"We chose the title bigrams approach for its simplicity.",
"Discussion: The graph shows, for example, that the bigram machine translation occurred in 1,659 AA (cid:48) papers that have a median citation count of 14, while the average is 68.8.",
"The average is one of the highest among the bigrams, despite the median being more middle of the pack.",
"This suggests the presence of heavily cited, outlier, papers.",
"Indeed, the most cited paper in all of AA (cid:48) is an MT paper with more than 9000 citations (Papineni et al., 2002).",
"Note that not all MT papers have machine translation in the title.",
"Although non-random, this sample of 1,659 papers is arguably a reasonably representative sample of MT papers.",
"Third in the list are papers with statistical machine in the titlemost commonly from the phrase statistical machine translation .",
"One expects considerable overlap across these sets of papers.",
"However, machine translation likely covers a broader range of research including work done before statistical MT was introduced, as well as work on neural MT and MT evaluation.",
"The bigrams with the highest median include: sentiment classification (31), anaphora resolution (30), and entity recognition (25).",
"The bigrams with the lowest median include: language resources (5), textual entailment (8), translation system (9), and cross language (9).",
"The bigrams with the highest average include: sentiment classification (181.6), speech tagging (107.9), sentiment analysis (104.0), and statistical machine (90.1).",
"13 One can access the lists of highly cited papers, pertaining to each of the bigrams, through the interactive visualization.",
"We list below some ideas of future work that we did not explore in this paper:",
"Analyze NLP papers that are published outside of the ACL Anthology.",
"Measure involvement of the industry in NLP publications over time.",
"Measure the impact of research publications in other ways beyond citations.",
"Identify papers that have made substantial contributions in non-standard ways.",
"A list of limitations and ethical considerations associated with this work is available online.",
"14 13 Note that simply composing titles with these high-citation bigrams is not expected to attract a large number of citations.",
"14 https://medium.com/@nlpscholar/about-nlp-scholar62cb3b0f4488 6 Conclusions We extracted citation information for 1.1M papers from Google Scholar profiles of researchers who published at least three papers in the ACL Anthology.",
"We used the citation counts of a subset ( 27K papers) to examine patterns of citation across paper types, venues, over time, and across areas of research within NLP.",
"We showed that only about 56% of the papers are cited ten or more times.",
"CL Journal has the most cited papers, but the citation gap between CL journal and top-tier conferences has reduced in recent years.",
"On average, long papers get almost three times as many citations as short papers.",
"In case of popular shared tasks, the task-description papers and competition-winning system-description papers often receive a considerable number of citations.",
"So much so that the average number of citations for the shared task papers is higher than the average for non-top-tier conferences.",
"The papers on sentiment classification , anaphora resolution , and entity recognition have the highest median citations.",
"Workshop papers and the shared task papers have higher median and average citations than the non-top-tier conferences.",
"The analyses presented here, and the associated dataset of papers mapped to citations, have a number of uses including, understanding how the field is growing and quantifying the impact of different types of papers.",
"In separate work, we explored the use of the dataset to detect gender disparities in authorship and citations (Mohammad, 2020a).",
"The dataset can potentially also be used to compare patterns of citations in NLP with those in other fields.",
"Finally, we note again that citations are not an accurate reflection of the quality or importance of individual pieces of work.",
"A crucial direction of future work is to develop richer ways of capturing scholarly impact.",
"This work was possible due to the helpful discussion and encouragement from a number of awesome people including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Iryna Gurevych, Rebecca Knowles, Isar Ne-jadgholi, and Peter Turney.",
"Also, a big thanks to the ACL Anthology and Google Scholar Teams for creating and maintaining wonderful resources."
] | [
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Despite recent successes of large pre-trained language models in solving reasoning tasks, their inference capabilities remain opaque.",
"We posit that such models can be made more interpretable by explicitly generating interim inference rules, and using them to guide the generation of task-specific textual outputs.",
"In this paper we present COINS , a recursive inference framework that",
"i) iteratively reads context sentences,",
"ii) dynamically generates contextualized inference rules, encodes them, and",
"iii) uses them to guide task-specific output generation.",
"We apply COINS to a Narrative Story Completion task that asks a model to complete a story with missing sentences, to produce a coherent story with plausible logical connections, causal relationships, and temporal dependencies.",
"By modularizing inference and sentence generation steps in a recurrent model, we aim to make reasoning steps and their effects on next sentence generation transparent.",
"Our automatic and manual evaluations show that the model generates better story sentences than SOTA baselines, especially in terms of coherence.",
"We further demonstrate improved performance over strong pre-trained LMs in generating commonsense inference rules.",
"The recursive nature of COINS holds the potential for controlled generation of longer sequences.",
"Narrative story understanding, and similarly story generation, requires the ability to construe meaning that is not explicitly stated through commonsense reasoning over events in the story (Rashkin et al., 2018a).",
"Previous work in modeling narrative stories has focused on learning scripts 1 (Schank and Abelson, 1977; Mooney and DeJong, 1985) and learning narrative schemas using corpus statis-1 Scripts are structured knowledge about stereotypical event sequences together with their participants.",
"tics (Chambers and Jurafsky, 2009; Balasubramanian et al., 2013; Nguyen et al., 2015).",
"Recently, large pretrained language models (LMs) such as GPT-2 have shown remarkable performance on various generation tasks.",
"While these pretrained LMs learn probabilistic associations between words and sentences, they still have difficulties in modeling causality (Mostafazadeh et al., 2020).",
"Also, in narrative story generation, models need to be consistent with everyday commonsense norms.",
"Hence, to address a story generation task,",
"i) models need to be equipped with suitable knowledge,",
"ii) they need effective knowledge integration and reasoning methods, and ideally",
"iii) we want to be able to make the effectiveness of these methods transparent.",
"In this work we focus on the aspects",
"i) to",
"iii), by investigating new methods that build on pretrained LMs to generate missing sentences from an incomplete narrative story.",
"Specifically, we focus on Narrative Story Completion (NSC) , a new task setting for story generation.",
"Given an incomplete story, specified only through its beginning and ending, the task is to generate the missing sentences to complete the story (see Figure 1).",
"Our hypothesis is that in order to obtaining a consistent and coherent narrative story, the task requires a model's ability to perform commonsense inference about events and entities in a story.",
"Unlike other existing tasks, NSC requires:",
"i) generating multiple sentences to complete a story, and",
"ii) ensuring that the generated sentences are coherent with respect to both beginning and ending of the story.",
"Hence, the NSC task offers a challenging setup for investigating the reasoning capacities of a story generation model.",
"Humans excel in drawing inferences and constructing causal chains that explain the connection between events (Kintsch and Dijk, 1978).",
"Figure 1 illustrates this with an example from our NSC task.",
"2 From Janie was excited to see her sister's play in theatre ( s 1 ) .",
"Janie got a call from her boss about new work ( s 2 ) and the outcome Janie watched a video of the play later.",
"( s 5 ) we can construct inference rules in forward and backward direction: forward via EFFECT : Someone B ( boss ) gave work to Someone A ( Janie ); backward via CAUSE : Someone A ( Janie ) wasn't able to go Somewhere B ( to the theatre ).",
"By combining these inferences, we can obtain a representation from which to generate a connection that completes the story, e.g., Janie's boss wanted her to look after the issue ( s 3 ) .",
"She missed the theatre play ( s 4 ) .",
"In this work, we propose COINS : a recursive model that jointly learns to",
"i) dynamically generate commonsense inference rules 3 grounded in the context and to",
"ii) perform controled and coherent story generation, using the generated inferences as a guide.",
"We hypothesize that jointly learning to generate contextualized inference rules from dynamically predicted contextualized inference rules and learning to generate story sentences incrementally while taking the inferences into account, will improve the quality of both the predicted inference rules and of generated story sentences.",
"Moreover, the recursive nature of the model and the individuation of the inference prediction and sentence generation tasks make the process more interpretable : the generated inference rules can be viewed as intermediate representations, and can serve as explanations of how the dynamically produced inferences influence the quality of generated story sentences.",
"Our main contributions are as follows: 1) We propose a new setting for a Narrative Story Completion task, which asks a system to complete a narrative story given its beginning and ending, 2 We use the ROCstories dataset to frame the NSC task.",
"with the aim of examining the reasoning capacities of a model that solves the task.",
"2) We propose an integrated reasoning and NL generation model, COINS , that based on its current context generates contextualized commonsense inference rules and follow-up sentences, in a stepwise recurrent process.",
"3) We conduct extensive experiments with automatic and human evaluation.",
"Automatic evaluations show that COINS outperforms strong baselines ( +2 . 2 BLEU score).",
"Human evaluation shows that compared to strong baselines, our model yields better sentence generations with respect to coherence ( +50 . 5% ) and grammaticality ( +20 . 5% ).",
"4) We show that COINS generates better inference rules ( +2 . 3 BLEU score) compared to a fine-tuned GPT-2 model, and that jointly learning to generate inferences and story sentences improves the quality of the generated inference rules.",
"Our code is made publicly available.",
"4 2 Related Work Sentence-level Commonsense Inference and Beyond.",
"Recent research in this area has focused on commonsense knowledge acquisition (Sap et al., 2019; Zhang et al., 2020; Speer et al., 2017; Malaviya et al., 2020) and commonsense reasoning (Zellers et al., 2019; Talmor et al., 2018).",
"In our work, we focus on inferential knowledge about events, and entities participating in such events.",
"Rashkin et al. (2018b) introduced a knowledge resource of commonsense inferences regarding peo-ple's intents and reactions towards a diverse set of events.",
"With COMET , Bosselut et al. (2019) have shown that pre-trained neural language models can be fine-tuned using large knowledge bases (such as ATOMIC , Sap et al. (2019)) to generate inferences for a given event or sentence.",
"However, the generated knowledge from COMET is non-contextualized and hence, can be inconsistent.",
"Recently, Mostafazadeh et al. (2020) proposed GLUCOSE , a new resource and dataset that offers semi-structured commonsense inference rules that are grounded in sentences of specific stories.",
"They show that fine-tuning a pre-trained LM on the GLUCOSE dataset helps the model to better generate inferrable commonsense explanations given a complete story.",
"In concurrent work, Gabriel et al. (2021) proposed PARA-COMET, a model that in-4 https://github.com/Heidelberg-NLP/ COINS corporates paragraph-level information to generate coherent commonsense inferences from narratives.",
"In this work, we investigate how well a neural model can generate contextualized commonsense inference rules for an incomplete story.",
"Learning to predict iterative inference steps for successive events in a narration using semi-structured knowledge rules is still a difficult and underexplored task.",
"We propose a model that learns to iteratively generate a coherent completion of an incomplete narrative story utilizing semi-structured knowledge as offered by the GLUCOSE framework.",
"Commonsense Reasoning in Narrative Stories.",
"Early work on narrative events focused on script learning , by defining stereotypical event sequences together with their participants (Schank and Abelson, 1977).",
"In later works, Chambers and Jurafsky (2008, 2009); Balasubramanian et al. (2013); Nguyen et al. (2015); Pichotta and Mooney (2014) proposed methods to learn narrative event chains using a simpler event representation that allows for efficient learning and inference.",
"Chambers and Jurafsky (2009) acquired Narrative Event Schemata from corpora and established the Narrative Cloze Task (Chambers and Jurafsky, 2008) that evaluates script knowledge by predicting a missing event (verb and its arguments) in a sequence of observed events.",
"More recently, Mostafazadeh et al. (2016) proposed the story cloze task that selects a plausible (right) over an implausible (wrong) story ending.",
"Bhagavatula et al. (2020) proposed an abductive reasoning task to test a model's ability to generate plausible explanations for an incomplete set of observations.",
"Paul and Frank (2020) proposed a multi-head knowledge attention method to dynamically incorporate non-contextualized inferential knowledge to address the abductive reasoning task .",
"Qin et al. (2020) proposed an unsupervised decoding algorithm that can flexibly incorporate both the past and future contexts using only off-the-shelf language models to generate plausible explanations.",
"Concurrent to our work, Paul and Frank (2021) presented a method for addressing the abductive reasoning task by explicitly learning what events could follow other events in a hypothetical scenario.",
"In our work, we make use of the ROCStories dataset (Mostafazadeh et al., 2016) to build a Narrative Story Completion task that tests a model's ability of generating missing sentences in a story.",
"We propose a model that aims to produce coherent narrative stories by performing iterative commonsense inference steps.",
"Narrative Story Generation.",
"Much existing work on story generation relied on symbolic planning methods (Lebowitz, 1987; P Erez and Sharples, 2001; Jozefowicz et al., 2016).",
"With the advances of Seq2Seq models, several works applied them in automatic story generation tasks (Roemmele, 2016; Jain et al., 2017).",
"Fan et al. (2018) proposed a hierarchical approach to generate short stories from initial prompts.",
"Recently, many works have focused on integrating external commonsense knowledge from large static knowledge bases like ATOMIC (Sap et al., 2019) or ConceptNet (Speer et al., 2017) for different tasks such as story ending generation (Ji et al., 2020; Guan et al., 2019) or story generation (Guan et al., 2020; Xu et al., 2020).",
"In concurrent work, Ammanabrolu et al. (2021) look into causality for a commonsense plot generation task.",
"In our work, we model the assumption that contextualized inference rules provide inferred information that can guide a system in generating both contextually grounded and coherent follow-up sentences in a story generation task.",
"We formulate the Narrative Story Completion task (NSC) as follows: given an incomplete story ( S = s 1 , s 2 , s n ) as a sequence of tokens t = { t 1 , t 2 , ..., t SEP , ..., t m } (with t SEP a mask token delimiting s 2 and s n ), the goal is to generate the missing sentences ( s 3 , ..., s n 1 ) as a sequence of tokens y s i = { y s i 1 , y s i 2 , ..., y s i v } (with i = 3 , ..., n 1 and v the maximum length of each sentence).",
"In the setting of the NSC task, we expect the completed story to be coherent.",
"That is, the generated sentences should exhibit reasonable logical connections, causal relationships, and temporal dependencies with each other and the given beginning and ending of the story.",
"In this paper, we define a discourse to be coherent if successive sentences that are about the same entities, and the reported events involving them can be construed to reflect common knowledge about how events are typically connected in a temporal sequence or by causal relations.",
"Similar to Hobbs (1985), the criteria to conclude that discourse is coherent include require that there are reflections of causality in the text.",
"Our take on this task is to incrementally generate contextualized inference rules from the given context, and to make use of this knowledge to generate missing story sentences.",
"This section details how we construct training data for the NSC task, by enriching stories with automatically predicted contextualized inferences.",
"5 We utilize the GLUCOSE (Mostafazadeh et al., 2020) dataset, which contains implicit commonsense knowledge in form of semi-structured general and specific inference rules 6 (cf. Table 1) that are grounded in the context of individual stories from ROCStories.",
"In GLUCOSE , given a story S and a selected sentence X from the story, the authors define ten dimensions d of commonsense causal explanations related to X , inspired by human cognitive psychology.",
"Only a small part of ROCStories is annotated with GLUCOSE inferences (Table 3).",
"Given the amount of commonsense knowledge needed for real-world tasks, a static knowledge resource is always incomplete.",
"Thus, we fine-tune a pre-trained GPT-2 model on the annotated part of GLUCOSE to dynamically generate inference rules for each sentence X i of each story S i from the underlying ROCStories data.",
"We fine-tune two separate language models CSI gen and CSI spec for general and specific rules, respectively (Table 2).",
"im-5 For testing we rely on GLUCOSE 's manually validated inference rules on a small subset of the ROCStories corpus.",
"6 Specific means rules grounded in a given context and general corresponds to rules that are applicable to other contexts.",
"plicit causes and effects of a sentence X in a given story.",
"In our work, we are interested in inference rules that explain a sentence's causes and effects, to study the impact of such inferences on narrative story completion.",
"We therefore cluster all dimensions d into the two categories EFFECT vs. CAUSE (Table 1) and aggregate all rules from the respective categories (preserving their dimensions).",
"Once our models ( CSI gen , CSI spec ) are trained, we apply them to our NSC task training data, to enrich it with inference rules for each sentence and story.",
"In this section we introduce a recursively operating reasoning and sentence generation model: COINS .",
"An overview is given in Figure 2. In each iteration, the model applies two consecutive steps: (1) Inference Step : Given an incomplete story context S (cid:48) = X S i and relation r , an inference model CSI ( gen or spec ) generates COntextualized inference rules of type r .",
"(2) Generation Step : a sentence generator reads the generated inference rules concatenated with the current context S (cid:48) and generates the next story sentence s i +1 .",
"The context S (cid:48) is updated with s i +1 and steps (1) and (2) are repeated (cf. Algorithm 1).",
"This formulation allows us to",
"i) examine inference and generation capabilities separately from each other,",
"ii) helps determine the impact of inferential knowledge on story generation, and",
"iii) can give us insight into how knowledge can guide story generation in a recursive inference framework.",
"Inference Step.",
"We define the initial story context S (cid:48) = { s 1 , s 2 , [SEP] , s n } , a selected sentence as s i , and relation type r { EFFECT , CAUSE } , where i [2 , . . . n 1] , s i = { w s i 1 ,",
".., w s i v } .",
"We adopt a pretrained GPT-2 (base) (Radford et al., 2019) transformer model with multiple Transformer blocks of multi-head self-attention and fully connected layers.",
"During training, in each iteration the input to the model is a concatenation of the current source ( S (cid:48) , s i , r ) and target sequence i.e., the inference Contextualized Inference Rules ( I i ) Sentence ( s i ) Output Sentence ( s i+1 ) (GPT-2) (GPT-2) Generate Semi-Structured Inference Rules Generate Missing Sentence I i + S' Context ( S' ) Update Context Figure 2: Architecture of the COINS model.",
"h 0 p = e p + P p , h lp = block ( h l 1 <p ) , l [1 , L ] p ( y p | y <p , p ) = softmax ( h Lp WT ) (1)",
"where h 0 p is a summation of token embedding e p and position embedding P p for the p -th token; h lp is the l -th layer's output at position p , computed through transformer blocks with the masked multi-head self attention mechanism; h Lp is the final layer's hidden state and y <p indicates the left context of position p .",
"The softmax layer defines the model to output the most probable target sequence: the most likely inference rules ( E i and C i ) for each relation type (cf. Algorithm Line 4-5).",
"During training, we minimize the objective (2) LI ( ) = m + N (cid:88) k = m log p ( E ki | S (cid:48) , s i , EFFECT ) m + N (cid:88) k = m log p ( C ki | S (cid:48) , s n , CAUSE ) (2) where m, N denote the number of tokens in the source ( S (cid:48) , s i , r ) and target sequence (inference rules) respectively; refers to model parameters.",
"In this work, we focus on the NSC task, which requires our model to capture temporal dependencies and causal relationships between events.",
"While we designed our sentence generation model in such a way that it can utilize inference rules from both forward and backward directions for each sentence, we here trigger the generation of CAUSE inference rules for s n , since we expect that events , motivations or attributes that cause s n will be relevant for generating the preceding sentences [ s 3 , . . . s n 1 ] .",
"Similarly, we generate EFFECT relations for s i , assuming that an event , changes of emotion or changes of attribute that are possible effects caused by s i will be most relevant for generating the missing follow-up sentences.",
"In principle, however, for NSC and other story generation tasks, we may consider CAUSE and EFFECT relations for all sentences, letting the model freely choose from the full space of inferences.",
"We concatenate the generated inference rules ( I i = E i C i ) 7 and store the last hidden representation in Mem IR IRN L H , where N is the number of sentences, L the maximum inference sequence length and H the hidden state dimensions.",
"Mem IR is updated with the hidden representations of inference rules in each iteration.",
"Hence, Mem IR could act as an intermediate representation, and as a basis for providing explanations for observed story sentence generations.",
"Mem IR may also be used as a memory for long-form text generation tasks, to keep track of implicit knowledge triggered by previously generated text, and could support flexible discourse serialization patterns.",
"8 Generation Step.",
"Given the generated inference rules I i (in form of tokens) and the incomplete story context S (cid:48) , we aim to generate the next missing sentence.",
"We pass the input through another pretrained GPT-2 (base) model (cf. Equation 1).",
"The loss function for the sentence generator is LS ( ) = v (cid:88) k =1 log P ( y s i +1 k | I i , [ EOK ] , S (cid:48) ) (3) where y k denotes the k -th token and v the maximum length of the generated sentence; 7 We use [ SEP ] token to delimit the individual E i and C i when concatenating them.",
"i [2 , n 1] ; [ EOK ] denotes the end of knowledge rule tokens, and refers to model parameters.",
"Update Story Context.",
"In the final step we update the story context by inserting the generated sentence s i +1 into the previous story context (cf. Algorithm 1, line 12).",
"Training and Inference.",
"We add the losses LI for inference generation and LS for sentence generation to make the models dependent on each other (Algorithm 1, line. 10-11).",
"For both the inference and the generation step model, we minimize the negative log likelihood loss of the respective target sequence.",
"We apply COINS to the NSC and the Story Ending Generation tasks.",
"9 For data statistics see Table 3. Narrative Story Completion.",
"We follow the task definition as introduced in 3.",
"Data Collection.",
"We construct the NSC dataset on the basis of the ROCStories corpus (Mostafazadeh et al., 2016), which contains 98,162 five-sentence stories with a clear beginning and ending, thus making it a good choice for this task.",
"We choose the first two sentences ( s 1 , s 2 ) as beginning rather than just s 1 because the first sentence ( s 1 ) tends to be short in length, and usually introduces characters or sets the scene (Mostafazadeh et al., 2016), wherease the second sentence ( s 2 ) provides more information about the initial story.",
"Parameter size.",
"For GPT-2 we use the GPT-2 small checkpoint (117M parameters) based on the implementation of HuggingFace (Wolf et al., 2020).",
"Decoding Strategy.",
"In the inference stage, we adopt beam search decoding with a beam size of 5 for all our models and all baselines we produce.",
"We used the following set of hyperparameters for our COINS model: batch size: { 2 , 4 } ; epochs: { 3 , 5 } ; learning rate: { 1 e 5 , 5 e 6 } .",
"We use Adam Optimizer, and dropout rate = 0 .",
"1 .",
"We ran our experiments with GPU sizes of 11 GB and 24 GB.",
"We compare our COINS model to the following baselines:",
"9 The results for Story Ending Generation will corroborate our results for NSC .",
"All details are given in the Appendix .",
"(a) GPT-2 (Radford et al., 2018) (with 12-layer, 768-hidden, 12-heads), trained with an objective to predict the next word.",
"The input to the GPT-2 model is the concatenation of the source and the target story sequence.",
"We follow the standard procedure to fine-tune GPT-2 on the NSC task during training and minimize the loss function: log ( s 3 , s 4 | [ SOS ] s 1 , s 2 , [ SEP ] , s 5 [ EOS ]) (4)",
"et al., 2020) is the current SOTA for ROCStories generation.",
"It first fine-tunes a pre-trained GPT-2 (small) model with knowledge triples from commonsense datasets (ConceptNet [CN] Speer et al. (2017) and ATOMIC [AT] Sap et al. (2020)).",
"The knowledge triples were converted to sentences using templates.",
"A multitask learning framework further fine-tunes this model on both the Story Ending Generation task and classifying corrupted stories from real ones.",
"As our baseline we choose the version without multi-tasking, since the corrupted story setting is not applicable for the NSC task.",
"(c) GRF (Ji et al., 2020) is the current SOTA for the Abductive Reasoning and the Story Ending Generation tasks.",
"GRF enables pre-trained models (GPT-2 small) with dynamic multi-hop reasoning on multi-relational paths extracted from the external ConceptNet commonsense knowledge graph.",
"(d) GLUCOSE -GPT-2 Similar to Guan et al. (2020), we fine-tune pretrained GPT-2 (small) on the GLUCOSE dataset using general rules (GR).",
"We follow the same procedure as Guan et al. (2020) and",
"(i) first fine-tune a pre-trained GPT-2 , but here on the GLUCOSE dataset, with the following loss: log ( I i | S, s i , r ) , (5) where r: CAUSE /E FFECT , I i : Inference rules.",
"The main difference between GLUCOSE-GPT-2 and COINS is: COINS explicitly learns to generate (contextualized) inference rules on the fly during the inference step and incorporates them in the story generation step.",
"For automatic evaluation in the NSC task we use as metrics Perplexity (indicates fluency of text genera-tion), BLEU -1/2 (Papineni et al., 2002) and ROUGEL (Lin, 2004).",
"We report performance on the test Model PPL ( ) BLEU-1/2 ( ) ROUGE-L ( ) GPT-2 11.56 16.66/6.8 17.2 KE [CN, AT] 12.61 17.55/7.6 17.9 GLUCOSE-GPT-2 12.7 17.9/7.8 17.5 GRF [CN] 12.18 20.8/8.2 17.6 COINS (SR) 6.7 22.53/10.10 18.9 COINS (GR) 6.9 22.82/10.52 19.4 COINS Oracle (SR) (Test-only) 30.75/22.76 32.5 COINS Oracle (GR) (Test-only) 26.37/17.01 27.38 Human 24.53/12.10 20.2 Table 4: Automatic evaluation results for Story Completion.",
"sets by averaging results obtained for 5 different seeds.",
"All improvements across all model variants are statistically significant at p < 0.05).",
"4 and 6. NSC task.",
"Table 4 shows the results for the models described in 6.3 and evaluated as per 6.4.",
"We observe the following:",
"(i) COINS outperforms all strong baseline models that utilize pre-trained language models and incorporate external commonsense knowledge with respect to all automatic evaluation metrics.",
"Note that GLUCOSE -GPT2 and COINS are using the same knowledge resource, hence the clear performance increase of COINS ( +4 . 92 BLEU score) indicates that jointly learning to generate contextualized inferences rules and missing sentences in a recursive manner can enhance generation quality.",
"10",
"(ii) Similar to Ji et al. (2020) we observe that fine-tuning GPT-2 over knowledge triples ([C N ], [AT ] OMIC or [GL ] UCOSE ) doesn't improve the overall performance by much (Table 4, line 2: [CN +A T ] vs. line 3: [GL ] vs. line 1: [no CSK]).",
"(iii) For COINS , general rules (GR) boost performance more than specific rules, indicating that the sentence generation model generalizes well.",
"(iv) In the oracle settings at inference time we provide the model with the silver inference rules (generated as per 4) that use the complete story context as background.",
"The result indicates that SR performs better than GR when the model sees the full story context.",
"In general we observe that story generation benefits from higher-quality, contextualized inference 10 Since GRF 's architecture is specific for ConceptNet, we cannot exclude that the better performance of COINS ( +2 . 2 BLEU ) is in part due to differences in the used knowledge.",
"rules from GLUCOSE (for COINS ).",
"11 The improvement of COINS over GLUCOSE -GPT-2 indicates that our model is well able to utilize and profit from the inference rules.",
"In the oracle setting, SR performs much better than GR.",
"This is expected, since oracle rules with access to the full context will deliver more contextually-relevant inferences, while GR rules may diverge more from the story context.",
"However, in the realistic NSC task setting (Table 4, lines 5,6) GR outperforms SR, which again underlines the generalization capacities of COINS .",
"Impact of different inputs for the Generation Step.",
"In Table 5 we investigate the performance of COINS with different inputs to the sentence generation component at inference time :",
"(i) When only inference rules (from the inference step) are given to the model without any story context ( S (cid:48) = { s 1 , s 2 , [SEP] , s n } ) ( IR only ), sentence generation benefits when specific rules are used.",
"This is expected since the specific rules contain statements with concrete character names and paraphrased events from the story.",
"(ii) When only the story beginning ( s 1 , 2 ) is provided to the sentence generation model without the ending sentence s n ( w/oSE ) nor inference rules ( w/oIR ) we observe that the performance drops compared to models given the full incomplete context ( S (cid:48) ), indicating that knowing the story ending helps the model to generate missing sentences that are coherent with the story.",
"However,",
"(iii) when adding inference rules IR (from the inference step i.e., E i + C i ) to the context ( s 1 , 2 ) without ending sentence ( w/oSE ), performance again improves (+ 5 . 85 BLEU scores).",
"Note that the inference rule contains the CAUSE relation for s n .",
"This indicates that the model is able to utilize inference rules for story generation.",
"12 11 Automatic (silver) GLUCOSE inference rules (cf. 4) of type GR yield 60 .",
"Performance of inference rule generation.",
"We now investigate how difficult it is to generate contextualized inference rules (specific and general) when multiple sentences are missing from a story.",
"For this we compare COINS to a GPT-2 model fine-tuned on GLUCOSE data to generate inference rules (cf. 4).",
"We study the impact of jointly and dynamically learning sentence and inference rule generation (in COINS ) on the inference generation task while the fine-tuned GPT-2 model only learns to generate inference rules conditioned on the static story context.",
"We specifically examine the difficulty of generating inference rules for two consecutive sentences ( s 3 and s 4 ) in a 5-sentence context, as opposed to shorter sequences, in three different scenarios:",
"i) when the complete story context S is given;",
"ii) when the incomplete context S (cid:48) (i.e., s 1 , s 2 and s 5 ) is given, plus either s 3 or s 4 ( 1-missing sentence ), and",
"iii) when S (cid:48) is given, but neither of the intermediate sentences s 3 and s 4 ( 2-missing sentences ).",
"In each setting, we generate EFFECT and CAUSE rules for the targeted sentences s 3 , s 4 , and compare their quality.",
"The results are reported in Table 6. We observe that in the 2-missing sentences setting, COINS outperforms GPT-2 (by +2 . 3 BLEU score on average).",
"This indicates that learning to perform inference rule generation jointly with sentence generation is beneficial for filling-in multiple story sentences.",
"Interestingly, for increasing numbers of missing sentences, performance drops drastically for CAUSE (as opposed to EFFECT ), but less so for COINS as opposed to GPT-2.",
"A possible reason for this may be the conditional, uni-directional nature of the underlying GPT-2 language model, which is trained to predict follow-up words in forward direction.",
"This may favor future-directed EFFECT rules as opposed to CAUSE relations.",
"The milder effect on COINS could indicate that the concurrent inference model supports the sentence generation model to overcome this weakness.",
"13 8 Manual Evaluation Automatic metrics can give us some indication of NLG quality, however, these metrics do not necessarily reflect the coherence of generated story sentences.",
"We thus conduct a human evaluation focusing on the grammaticality and coherence of the generated sentences in their story context.",
"We 13 In future work, we will test the above hypothesis by experimenting with a bi-directional transformer generation model.",
"conduct pairwise comparisons for randomly sampled 100 instances of our best model, i.e., COINS with GR (according to automatic metrics) with four strong baseline models (GPT-2, GLUCOSE -GPT-2, GRF, KE).",
"For each pair of instances (one from COINS , the other from a baseline model), we present the generated sentences in their story context, and asked three annotators to give a preference rating ( win , tie , lose ) according to the criteria grammaticality and coherence .",
"For grammaticality, we present each sentence in isolation and ask the annotators to rate which sentence is more fluent, readable, and compliant with the English standard usage.",
"For coherence, we ask the annotators to assess which of the two generated sentences are more logically coherent with each other and the story beginning and ending, in terms of causal and temporal dependencies.",
"We applied majority voting among the three annotators to obtain final decisions.",
"More details about the annotation are given in Appendix .",
"The human evaluation results are presented in Table 7. 14 The results show that our model produces more coherent and more grammatically correct sentences compared to all baselines.",
"This indicates that with support of learned contextualized inference rules based on GLUCOSE knowledge, our model generates more coherent story sentences that are causally and temporally well connected.",
"further conduct human evaluation to validate the effectiveness and relevance of the generated inference rules.",
"We randomly select 50 instances from the NSC dev set.",
"We asked three annotators to evaluate the (GR) inference rules 15 .",
"We define an inference rule to be relevant if",
"(a) it captures im-14 We report inter-annotator agreement scores calculated with Fless' kappa (Fleiss, 1971), calculated for each comparison.",
"We find moderate or fair agreement.",
"15 We report only COINS (GR), our best model according to automatic metrics.",
"plicit causes and effects of a selected sentence X given an incomplete story S (cid:48) , and",
"(b) it is providing useful explanations for the incomplete story S (cid:48) .",
"The result for this evaluation is shown in Fig.3, for EFFECT and CAUSE relations.",
"We find that in 36% and 34% of cases for effects and causes, respectively (computed on the basis of majority agreement), our algorithm was able to generate relevant inference rules.",
"Our annotations yielded fair inter-annotator agreement of Fleiss' = 0 .",
"45 .",
"Case Study.",
"We provide an example from NSC with different generation outputs (Table 8).",
"Note that the generated sentences are grounded to the inference rules obtained from the inference step.",
"Hence, the rules provide both an intermediate representation and explanations for how knowledge can guide or influence story generation.",
"We provide more qualitative examples in the Appendix.",
"We addressed a Narrative Story Completion task that allows us to probe the coherence capabilities of a neural generation model.",
"We proposed COINS , a model that iteratively generates commonsense inference rules grounded in the context and generates story sentences, using the generated inferences as a guide.",
"Human and automatic eval-IncompleteStory: s 1 : Ken was driving around in the snow.",
"By individuating the inference rule and sentence generation steps, COINS can make the contribution of commonsense knowledge on story generation transparent.",
"The recursive nature of the inference-driven generation model holds potential for knowledge-driven control in the generation of longer sequences.",
"In future work we will explore how an enhanced memory of generated inferences can realize more complex narrative patterns that diverge from strictly ordered narrative sequences.",
"This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No.",
"GRK 1994/1.",
"We thank our annotators for their valuable annotations.",
"We also thank NVIDIA Corporation for donating GPUs used in this research."
] | [
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Developing bots demands high quality training samples, typically in the form of user utterances and their associated intents.",
"Given the fuzzy nature of human language, such datasets ideally must cover all possible utterances of each single intent.",
"Crowdsourcing has widely been used to collect such inclusive datasets by paraphrasing an initial utterance.",
"However, the quality of this approach often suffers from various issues, particularly language errors produced by unqualified crowd workers.",
"More so, since workers are tasked to write open-ended text, it is very challenging to automatically asses the quality of paraphrased utterances.",
"In this paper, we investigate common crowdsourced paraphrasing issues, and propose an annotated dataset called Para-Quality , for detecting the quality issues.",
"We also investigate existing tools and services to provide baselines for detecting each category of issues.",
"In all, this work presents a data-driven view of incorrect paraphrases during the bot development process, and we pave the way towards automatic detection of unqualified paraphrases.",
"With the increasing advances in deep learning as well as natural language processing, a new generation of conversational agents is attracting significant attention (Dale, 2016).",
"Also known as dialogue systems , virtual assistants , chatbots or simply bots (Campagna et al., 2017; Su et al., 2017), some advanced bots are now designed to perform complex tasks (e.g., flight booking ), many of which are built using machine learning techniques.",
"At the heart of building such task-oriented bots lies the challenge of accurately capturing the user's intent (e.g., find cafes in Chicago ), and then extracting its entities to service the request (e.g term= cafes, location=Chicago ).",
"However, its success relies heavily on obtaining both, large and high quality corpora of training samples showing mappings between sample utterances and intents.",
"This is necessary given the ambiguous nature of the human language (Wasow et al., 2005) and large variations of expressions (Wang et al., 2012; Zamanirad et al., 2017).",
"A lack of variations in training samples can result in incorrect intent detection and consequently execution of undesirable tasks (e.g., booking an expensive hotel instead of a cheap room) (Hen-derson et al., 2018).",
"Likewise, quality issues in the training samples can lead to unmitigated disasters (Neff and Nagy, 2016) as it happened to Microsoft's Tay by making a huge number of offensive commentaries due to biases in the training data (Henderson et al., 2018).",
"It is therefore not surprising that research and development into training data acquisition for bots has received significant consideration (Campagna et al., 2017; Kang et al., 2018).",
"Collecting training samples usually involves two primary steps:",
"(i) firstly, obtaining an initial utterance for a given user intent (e.g., find a cafe in Chicago ); and",
"(ii) secondly, paraphrasing this initial expression into multiple variations (Su et al., 2017; Campagna et al., 2017).",
"Paraphrasing is thus vital to cover the variety of ways an expression can be specified (Yang et al., 2018a).",
"As summarized in (McCarthy et al., 2009), a quality paraphrases has three components: semantic completeness, lexical difference, and syntactic difference.",
"To obtain lexically and syntactically diverse paraphrase, crowdsourcing paraphrases has gained popularity in recent years.",
"However, crowdsourced paraphrases need to be checked for quality, given that they are produced by unknown workers with varied skills and motivations (Cam-pagna et al., 2017; Daniel et al., 2018).",
"For example, spammers, malicious and even inexperienced crowd-workers may provide misleading, erroneous, and semantically invalid paraphrases (Li et al., 2016; Campagna et al., 2017).",
"Quality issues may also stem from misunderstanding the intent or not covering important information such as values of the intent parameters (Su et al., 2017).",
"The common practice for quality assessment of crowdsourced paraphrases is to design another crowdsourcing task in which workers validate the output from others.",
"However, this approach is costly having to pay for the task twice, making domain-independent automated techniques a very appealing alternative.",
"Moreover, quality control is especially desirable if done before workers submit their paraphrases, since low quality workers can be removed early on without any payment.",
"This can also allow crowdsourcing tasks to provide feedback to users in order to assist them in generating high quality paraphrases (Nilforoshan et al., 2017; Nilforoshan and Wu, 2018).",
"To achieve this, it is therefore necessary to automatically recognize quality issues in crowdsourced paraphrases during the process of bot development.",
"In this paper, we investigate common paraphrasing errors when using crowdsourcing, and we propose an annotated dataset called Para-Quality in which each paraphrase is labelled with the error categories.",
"Accordingly, this work presents a quantitative data-driven study of incorrect paraphrases in bot development process and paves the way towards enhanced automated detection of unqualified paraphrased utterances.",
"More specifically, our contributions are two-folded: We obtained a sample set of 6000 paraphrases using crowdsourcing.",
"To aim for a broad diversity of samples, the initial expressions were sourced from 40 expressions of highly popular APIs from various domains.",
"Next, we examined and analyzed these samples in order to identify a taxonomy of common paraphrase errors errors (e.g., cheating, misspelling, linguistic errors).",
"Accordingly, we constructed an annotated dataset called Para-Quality (using both crowdsourcing and manual verification), in which the paraphrases were labeled with a range of different categorized errors.",
"We investigated existing tools and services (e.g., spell and grammar checkers, language identifiers) to detect potential errors.",
"We formulated baselines for each category of errors to determine if they were capable to automatically detect such issues.",
"Our experiments indicate that existing tools often have low precision and recall, and hence our results advocates the need for new approaches in effective detection of paraphrasing issues.",
"Various types of paraphrasing issues have been reported in the literature, namely: spelling errors (Braunger et al., 2018), grammatical errors (Jiang et al., 2017; Negri et al., 2012), and missing slot-value (happens when a worker forget to include an entity in paraphrases) (Su et al., 2017).",
"We collected paraphrases for two main reasons:",
"(i) to have a hands-on experience on how incorrect paraphrases are generated, and",
"(ii) to annotate the dataset for building and evaluating paraphrasing quality control systems.",
"Methodology.",
"We obtained 40 expressions from various domains (i.e. Yelp, Skyscanner, Spotify, Scopus, Expedia, Open Weather, Amazon AWS, Gamil, Facebook, Bing Image Search ) indexed in ThingPedia (Campagna et al., 2017) and API-KG 1 .",
"We then launched a paraphrasing task on Figure-Eight 2 .",
"Workers were asked to provide three paraphrases for a given expression (Jiang et al., 2017), which is common practice in crowdsourced paraphrasing to reduce repetitive results (Campagna et al., 2017; Jiang et al., 2017).",
"In the provided expression, parameter values were highlighted and crowd-workers were asked to preserve them.",
"Each worker's paraphrases for an initial utterance are normalized by lowercasing and removing punctuation.",
"Next, the initial utterance and the paraphrases are compared to forbid submitting empty strings or repeated paraphrases, and checked if they contain highlighted parameter values (which is also a common practice to avoid missing parameter values) (Mitchell et al., 2014).",
"We collected paraphrases from workers in English speaking countries, and created a dataset containing 6000 paraphrases (2000 triple-paraphrases) in total 3 .",
"sourced paraphrases, and recognized 5 primary categories of paraphrasing issues.",
"However, we only considered paraphrase-level issues related to the validity of a paraphrase without considering dataset-level quality issues such as lexical diversity (Negri et al., 2012) and bias (Henderson et al., 2018).",
"Misspelling has been reported as one of most common mistakes in paraphrasing (Inaba et al., 2015; Wang et al., 2012; Chklovski, 2005; Braunger et al., 2018).",
"In our sample set, we also noticed misspellings were generated both intentionally (as an act of cheating to quickly generate a paraphrase such as Example 2 in Table 1) and unintentionally (due to a lack of knowledge or a simple mistake such as Example 3 in Table 1).",
"Linguistic errors are also common in crowdsourced natural language collections (Jiang et al., 2017; Negri et al., 2012).",
"Verb errors, preposition errors, vocabulary errors (improper word substitu-tions), and incorrect singular/plural nouns, just to name a few.",
"Moreover, capitalization and article errors seems abundant (e.g., Example 5 in Table 1).",
"Given that real bot users also make such errors, it is important to have linguistically incorrect utterances in the training samples (Bapat et al., 2018).",
"However, at a very least, detecting linguistic errors can contribute to quality-aware selection of crowd workers.",
"This occurs when a paraphrase deviates from the meaning of the initial utterance (e.g., find cafes in Chicago ).",
"As reported in various studies, workers may forget to mention parameter values (also known as missing slot)(e.g., find cafes ) 4 (Cross-ley et al., 2016; Su et al., 2017; Ravichander et al., 2017; Wang et al., 2012; Braunger et al., 2018), provide wrong values (e.g., find cafes in Paris ) (Su et al., 2017; Ravichander et al., 2017; Wang et al., 2012; Negri et al., 2012; Braunger et al., 2018), or add unmentioned parameter values(Wang et al., 2012) (e.g., find two cafes in Chicago ).",
"Workers may also incorrectly use a singular noun instead of its plural form, and vice versa.",
"For instance, 4 In our task design, this type of error cannot happen since parameter values are checked using regular expressions before submission in Example 6 of Table 1, the paraphrase only asks for the status of one specific burglar alarm while the expression asks for the status of all burglar alarms .",
"Making mistakes in paraphrasing complementary forms of words also exists in the crowdsourced dataset.",
"For instance, in Example 7 of Table 1, assuming that the bot answers the question only by saying YES or NO, the answer for the paraphrase differs from that of the expression.",
"However, it will make no difference if the bot's response is more descriptive (e.g., it's working, it isn't working.) Finally, some paraphrases significantly diverge from expressions.",
"For instance, in Example 8 of Table 1, the intent of paraphrase is to turn off the TV; however, that of initial utterance is to query about the TV status.",
"In some cases, workers misunderstood the task and provided translations in their own native languages (referred to as Translation issues) (Cross-ley et al., 2016; Braunger et al., 2018; Bapat et al., 2018), and some mistakenly thought they should provide answers for expressions phrased as questions (referred to as Answering issues) such as Example 9 in Table 1.",
"This occurred even though workers were provided with comprehensive instructions and examples.",
"We infer that some workers did not read the instructions, ignoring the possibility of cheating.",
"In crowdsourced tasks, collecting paraphrases is not immune to unqualified workers, cheaters, or spammers (Daniel et al., 2018; Crossley et al., 2016; Chklovski, 2005).",
"Detecting malicious behaviour is vital because even constructive feedback may not guarantee quality improvements as workers act carelessly on purpose.",
"Cheating is thus considered a special case of Semantic Error which is done intentionally.",
"It is difficult even for experts to detect if someone is cheating or unintentionally making mistakes.",
"However, it becomes easier when we consider all three paraphrases written by a worker for a given expression at once.",
"For example, in Example 10 of Table 1, the malicious worker removes words one by one to generate new paraphrases.",
"In this example, we also notice that it is still possible that a cheater produces a valid paraphrase accidentally such as the first paraphrase in Example 10.",
"Workers may also start providing # Label Sample 1 Correct Expression Create a public playlist named new playlist Paraphrase Make a public playlist named new playlist 2 Spelling Errors Expression Estimate the taxi fare from the airport to home Paraphrase Estimate the taxi fare from the airport to hom 3 Spelling Errors Expression Estimate the taxi fare from the airport to home Paraphrase Tell me about the far from airport to home 4 Spelling Errors Expression Where should I try coffee near Newtown?",
"faulty paraphrases after generating some correct paraphrases as shown in Example 14 of Table 1.",
"Based on our observations, the simplest mode of cheating is to add a few random characters to the source sentence as shown in Example",
"11. Next is adding a few words to the source sentence without much editing as shown in Example",
"12. Finally, there are cheaters who rewrite and change the sentences substantially in a very random way such as Example",
"13. 4 Dataset Annotation Next, we designed another crowdsourcing task to annotate the collected paraphrases according to the category of issues devised above.",
"Namely, using following labels: Correct , Semantic Error , Misspelling , Linguistic Error , Translation , Answering , and Cheating .",
"We split the category of misunderstanding issues into Translation and Answering because they require different methods to detect.",
"Methodology.",
"In the annotation task, crowd workers were instructed to label each paraphrase with the paraphrasing issues.",
"Next, to further in-crease the quality of annotations 5 , two authors of this paper manually re-annotated the paraphrases to resolve disagreements between crowd annotators.",
"Moreover, contradictory labels (e.g., a paraphrase cannot be labeled both Correct and Misspelling simultaneously) were checked to ensure consistency.",
"The overall Kappa test showed a high agreement coefficient between the annotators ( McHugh, 2012) by Kappa being 0 .",
"85 .",
"Table 2 also shows the pair-wise inter-annotator agree-5 because of weak agreement between crowd workers Label Kappa Correct 0.900 Misspelling 0.972 Linguistic Errors 0.879 Translation 1.000 Answering 0.855 Cheating 0.936 Semantic Errors 0.833 Table 2: Pairwise Inter-Annotator Agreement ment (Cohen, 1960).",
"Next, the authors discussed and revised the re-annotated labels to further in-crease the quality of annotations by discussing and resolving disagreements.",
"Statistics.",
"Figure 1 shows the frequencies of each label in the crowdsourced paraphrases as well as their co-occurrences in an UpSet plot (Lex et al., 2014) using Intervene (Khan and Mathe-lier, 2017).",
"Accordingly we infer that only 61% of paraphrases are labeled Correct .",
"This plot also shows how many times two labels co-occurred.",
"For example, all paraphrases which are labeled Translation (24 times), are also labeled Cheating 6 .",
"Automatically detecting paraphrasing issues, especially when done during the crowd task, can minimize the cost of crowdsourcing by eliminating malicious workers, reducing the number of erroneous paraphrases, and eliminating the need for launching another crowdsourced validation task.",
"Moreover, by detecting Misspelling and Linguistic Errors , users can be provided with proper feedback to help them improve the quality of paraphrasing by showing the source of error and suggestions to address the error (e.g., Spelling error detected: articl article ).",
"Detecting Semantic Errors , such as missing parameter values, can also help crowd workers to generate high quality correct paraphrases.",
"Automated methods can also be used to identify low quality workers, and particularly cheaters who may generate potentially large amount of invalid paraphrases intentionally.",
"Moreover, providing suggestions to cheaters will not help and therefore early detection is of paramount.",
"6 We used Google Translate to check whether they were proper translations or just random sentences in other languages Spell Checker Precision Recall F1 Aspell 8 0.249 0.618 0.354 Hunspell 9 0.249 0.619 0.355 MySpell 10 0.249 0.619 0.355 Norvig 11 0.488 0.655 0.559 Ginger 12 0.540 0.719 0.616 Yandex 13 0.571 0.752 0.650 Bing Spell Check 14 0.612 0.737 0.669 LanguageTool 15 0.630 0.727 0.674 Table 3: Comparison of Spell Checkers In a pre-hoc quality control approach for crowdsourced paraphrases, the most important metric seems to be the precision of detecting invalid paraphrases (Nilforoshan et al., 2017).",
"That is because the main aim of using such a quality control approach is rejecting invalid paraphrases without rejecting correct ones (Burrows et al., 2013).",
"This is essential because rejecting correct paraphrases would be unfair and unproductive.",
"For instance, sincere and trustful crowd workers might not get paid as a result of false-positives (incorrectly detected errors).",
"On the other hand, having a high recall in detecting invalid paraphrases is important to eliminate faulty paraphrases and consequently obtain robust training samples.",
"Moreover, such a quality control technique should ideally be domain-independent, accessible, and easily-operated to minimize the cost of customization for a special domain and requiring paid experts (e.g., an open source pre-built machine learning model).",
"In the rest of this section, we examine current tools and approaches and discuss their effectiveness in assessing the paraphrasing issues.",
"We employed several spell checkers as listed in Table 3 to examine if they are effective in recognizing spelling errors.",
"We looked up Wikipedia, Github, and ProgrammableWeb 7 to find available tools and APIs for this purpose.",
"7 https://www.programmableweb.com 8 http://aspell.net/ 9 http://hunspell.github.io/ 10 http://www.openoffice.org/lingucomponent/dictionary.html 11 https://github.com/barrust/pyspellchecker 12 https://www.gingersoftware.com/ grammarcheck 13 https://tech.yandex.ru/speller/ 14 https://azure.microsoft.com/en-us/services/cognitive-services/spell-check/ 15 https://languagetool.org 0 1000 (cid:23) 000 (cid:44) (cid:81) (cid:87) (cid:72) (cid:85) (cid:86) (cid:72) (cid:70) (cid:87) (cid:76) (cid:82)(cid:81) (cid:3) (cid:41) (cid:85) (cid:72)(cid:84)(cid:88)(cid:72)(cid:81) (cid:70)(cid:92) (cid:647)(cid:677)(cid:660)(cid:673)(cid:678)(cid:671)(cid:660)(cid:679)(cid:668)(cid:674)(cid:673)(cid:3)(cid:628)(cid:673)(cid:678)(cid:682)(cid:664)(cid:677)(cid:668)(cid:673)(cid:666)(cid:3)(cid:640)(cid:668)(cid:678)(cid:678)(cid:675)(cid:664)(cid:671)(cid:671)(cid:668)(cid:673)(cid:666)(cid:3)(cid:646)(cid:664)(cid:672)(cid:660)(cid:673)(cid:679)(cid:668)(cid:662) (cid:632)(cid:677)(cid:677)(cid:674)(cid:677) (cid:3) (cid:3) (cid:639)(cid:668)(cid:673)(cid:666)(cid:680)(cid:668)(cid:678)(cid:668)(cid:679)(cid:662) (cid:632)(cid:677)(cid:677)(cid:674)(cid:677) (cid:3) (cid:630)(cid:667)(cid:664)(cid:660)(cid:679)(cid:668)(cid:673)(cid:666)(cid:3)(cid:630)(cid:674)(cid:677)(cid:677)(cid:664)(cid:662)(cid:679) (cid:19)(cid:21) (cid:1) (cid:9) (cid:17)(cid:6)(cid:10) (cid:18)(cid:21)(cid:25) (cid:9) (cid:19) (cid:6)(cid:10) (cid:19)(cid:24)(cid:25) (cid:1) (cid:9) (cid:22) (cid:6)(cid:10) (cid:24)(cid:18)(cid:21) (cid:1) (cid:9) (cid:18) (cid:19) (cid:6)(cid:10) (cid:26)(cid:17)(cid:17) (cid:9) (cid:18) (cid:22) (cid:6)(cid:10) (cid:18)(cid:17)(cid:22)(cid:26) (cid:9) (cid:18) (cid:25) (cid:6)(cid:10) (cid:9)(cid:23)(cid:18)(cid:6)(cid:10) (cid:41)(cid:85)(cid:72)(cid:84) (cid:88)(cid:72)(cid:81)(cid:70)(cid:92) (cid:20)(cid:22)(cid:26) (cid:18) (cid:22)(cid:25)(cid:23)(cid:21)(cid:24)(cid:21) (cid:21)(cid:20)(cid:25)(cid:20)(cid:18)(cid:22)(cid:18)(cid:17)(cid:26)(cid:18)(cid:17)(cid:24) (cid:18)(cid:17)(cid:21)(cid:26)(cid:18)(cid:25)(cid:26) (cid:22)(cid:19) (cid:20)(cid:21) (cid:20)(cid:18) (cid:19)(cid:24) (cid:19)(cid:21) (cid:18)(cid:18)(cid:18) (cid:18) (cid:20)(cid:23)(cid:25)(cid:19) Figure 1: Dataset Label Statistics Even though detecting misspelled words seems easy with existing automatic spellcheckers, they fall short in a few cases.",
"This can be also concluded from Table 3 by considering the precision and recall of each spell checker in detecting only paraphrases with misspellings.",
"For instance, spell checkers are often unable to identify homonyms (Perelman, 2016), incorrectly mark proper nouns and unusual words (Bernstein et al., 2015), and sometimes do not identify wrong words that are properly spelled (Chisholm and Henry, 2005).",
"For instance, in Example 1 of Table 1, the new playlist is incorrectly detected as a misspelled word by LanguageTool (the best performer as listed in Table 3).",
"In Example 3, the word far is not detected even though the worker has misspelled the word fare .",
"In Example 4, the word Newtown (a suburb in Sydney) is mistakenly detected as a misspelling error.",
"Some of these deficiencies can be addressed.",
"For instance, in the case of spelling errors, assuming that the initial expressions given to the crowd are free of typos, we can ignore false-positives like the Newtown and new playlist.",
"We investigated how well grammar checkers perform in detecting linguistic errors.",
"We employed several grammar checkers as listed in Table",
"4. Our experiments shows that spell checkers have both low precision and recall.",
"Perelman (Perelman, 2016) also conducted several experiments with major commercial and noncommercial grammar checkers, and identified that Grammar Checker Precision Recall F1 AfterDeadline 17 0.228 0.069 0.106 Ginger 0.322 0.256 0.285 GrammarBot 18 0.356 0.139 0.200 LanguageTool 0.388 0.098 0.156 Table 4: Comparison of Grammar Checkers grammar checkers are unreliable.",
"Based on our observations, grammar checkers often fail in detecting linguistic errors as shown in Table",
"4. Examples include improper use of words (e.g., Who is the latest scientific article of machine learn-ing? ), random sequence of words generated by cheaters (e.g., Come the next sing ), and missing articles 16 (e.g., I'm looking for flight for Tehran to Sydney ).",
"Given these examples, we believe that language models can be used to measure the likelihood of a sequence of words to detect if it is linguistically acceptable.",
"We also investigated several language detectors to evaluate how well they perform when crowd workers use another language instead of English.",
"The results of experiment in Table 5 indicate that these tools detect almost all sentences in other languages.",
"But they produce lots of false-positives including for correct English sentences (e.g., play next song ).",
"As a result, the tools in our experi-16 Missing articles in expressions similar to newspaper headlines are not considered error in the dataset (e.g., Hotel near Disneyland ) 17 https://www.afterthedeadline.com 18 https://www.grammarbot.io Language Detector Precision Recall F1 FastText 19 0.072 1.000 0.135 LangDetect 20 0.080 0.917 0.147 LanguageIdentifier 21 0.080 0.917 0.147 IBM Watson 22 0.170 0.958 0.289 DetectLanguage 23 0.344 0.917 0.500 DetectLanguage+ 0.909 1.000 0.952 Table 5: Language Detection ment have low precision in detecting languages as shown in Table",
"5. Most of the false-positives are caused by sentences that contain unusual words such as misspellings and named entities in the sentence (e.g., Email Phil saying I got you).",
"One possible approach to improve the precision of such tools and APIs is to check if a given paraphrase has spelling errors prior to using language detection tools.",
"We therefore extended the DetectLanguage (the best performing tool) by adding a constraint: a sentence is not written in another language unless it has at least two spelling errors.",
"This constraint is based on the assumption that spell checkers treat foreign words as spelling errors and a sentence has at least two words to be called a sentence.",
"This approach ( DetectLanguage+ in Table 5) significantly reduced the number of false-positives and thus improved precision.",
"Dialog Acts (DAs) (Jurafsky and Martin, 2018), also known as speech acts, represent general intents of an utterance.",
"DA tagging systems label utterances with a predefined set of utterance types ( Directive, Commissive, Informative, etc (Mezza et al.,",
"2018).) Based on the fact that DAs must remain consistent during paraphrasing, we employed a state-of-art, domain-independent, pre-trained DA tagger proposed in (Mezza et al., 2018).",
"For example, if an initial utterance is a question (e.g., are there any cafes nearby? ) it is acceptable to paraphrase it into a directive sentence (e.g., find cafes nearby. ), but its speech act cannot be informative (e.g., there is a cafe on the corner.).",
"Overall, due to the lack of any other 19 https://fasttext.cc/blog/2017/10/02/blog-post.html(Joulin et al., 2017) 20 https://pypi.org/project/langdetect/ 21 https://github.com/saffsd/langid.py 22 https://console.bluemix.net/apidocs/language-translator#identify-language 23 https://ws.detectlanguage.com domain-independent DA tagger for the English language, we only investigated this tagger.",
"We found that it has a precision of 2% with recall of 63%.",
"This shows that detecting speech acts is a very challenging task especially for domain-independent environments.",
"Advances in speech act detection and availability of public speech act datasets can assist in detecting this category of the paraphrasing issues.",
"Moreover, it is feasible to automatically generate pairs of questions and answers by mining datasets in the fields of Question Answering and dialog systems.",
"Automatically building such pairs can help building a dataset which is diverse enough to be used in practice.",
"Such a dataset can be fed into deep learning algorithms to yield better performance in detecting Answering issues.",
"To the best our knowledge, there is not yet an approach to distinguish between categories of semantically invalid paraphrases.",
"Paraphrase detection and textual semantic similarity (STS) methods are designed to measure how two pieces of text are semantically similar.",
"However, they do not differentiate between different types of errors (e.g., Cheating , Answering , Semantic Errors ) in our settings.",
"As such, these techniques are not directly applicable.",
"In the rest of this section, we focus on building machine learning models to detect the paraphrasing errors.",
"For this purpose, we used 38 established features from the literature as summarized in Table",
"6. Using these features and Weka (Hall et al., 2009), we built various classifiers to detect the following paraphrasing issues: Answering , Semantic Errors , and Cheating .",
"We chose to test the five classification algorithms applied in paraphrasing literature as mentioned in (Burrows et al., 2013): C4.5 Decision Tree, K-Nearest Neighbor (K=50), Maximum Entropy, Naive Bayes, and Support Vector Machines (SVM) using default Weka 3.6.13 parameters for each of the classification algorithms.",
"We also experimented with Random Forest algorithm since it is a widely-used classifier.",
"We did not apply deep learning based classifiers directly due to the lack of expressions in the collected dataset which seems essential for developing domain independent classifiers.",
"While our dataset is reasonably large, it contains only 40 expressions (each having 150 paraphrases).",
"Given that deep learn-Category # Description N-gram Features 12 N-gram overlap, exclusive longest common prefix n-gram overlap, and SUMO all proposed in (Joao et al., 2007), as well as Gaussian, Parabolic, and Trigonometric proposed in (Cordeiro et al., 2007), Paraphrase In N-gram Changes (PINC) (Chen and Dolan, 2011), Bilingual Evaluation Understudy (BLEU) (Papineni et al., 2002), Google's BLEU (GLEU) (Wu et al., 2016), NIST (Doddington, 2002), Character n-gram F-score (CHRF) (Popovic, 2016), and the length of the longest common subsequence.",
"ing techniques are data thirsty (Goodfellow et al., 2016; Yang et al., 2018a), to use these kinds of models and eliminate the burden of manual feature engineering, much more expressions are needed.",
"Instead, we benefited from the state-of-art sentence encoders via Transfer Learning as listed in Table",
"6. Table 7, 8, and 9 demonstrate the performance of various classifiers (excluding classifiers with F1 being less than 0 . 2 ) for each of paraphrasing issues using 10-fold cross validation.",
"To keep the classifiers domain-independent, we split the dataset based on the expressions without sharing any paraphrases of a single expression between the test and train samples.",
"It can be seen that automatically detecting these quality issues is very challenging; even the best performing classifier has a very low F1 score especially for detecting Answering and Semantic Error issues.",
"Based on manual exploration, we also found that the classifiers fail to recognize complex cheating behaviours such as Example 13 in Table 1 as discussed in Section 3.",
"Therefore, new approaches are required to accurately detect paraphrasing issues.",
"Based on our explorations and a prior work (McCarthy et al., 2009), we postulate that accurately detecting linguistic errors such as grammatically incorrect paraphrases can play indispensable role in detecting cheating behaviours.",
"Moreover, advances in measuring semantic similarity between sentences can help differentiate between semantically invalid paraphrases and correct ones.",
"We also assessed the performance of detecting incorrect paraphrases regardless of their categories.",
"In this setting, we labeled all incorrect sentences with a single label ( Incorrect ) regardless of their categories.",
"Table 10 demonstrates the performance of various classifiers.",
"Detecting incorrect paraphrases is useful for post-hoc quality control to remove incorrect paraphrases after crowdsourcing paraphrases and consequently eliminate the need for crowdsourced validation task.",
"To the best of our knowledge, that our work is the first to categorize paraphrasing issues and propose",
"an annotated dataset for assessing quality issues of paraphrased user expressions.",
"Nevertheless, our work is related to the areas of",
"(i) quality control in crowdsourced natural language datasets; and",
"(ii) semantic similarity.",
"Quality Control.",
"Quality can be assessed after or before data acquisition.",
"While post-hoc methods evaluate quality when all paraphrases are collected, pre-hoc methods can prevent submission of low quality paraphrases during crowdsourcing.",
"The most prevalent post-hoc approach is launching a verification task to evaluate crowdsourced paraphrases (Negri et al., 2012; Tschirsich and Hintz, 2013).",
"However, automatically removing misspelled paraphrases (Wang et al., 2012) and discarding submissions from workers with low/high task completion time (Ma et al., 2017) are also applied in literature.",
"Machine learning models have also been explored in plagiarism detection systems to assure quality of crowdsourced paraphrases (Crossley et al., 2016; Burrows et al., 2013).",
"Pre-hoc methods, on the other hand, rely on online approaches to asses the quality of the data provided during crowdsourcing (Nilforoshan et al., 2017).",
"Sophisticated techniques are required to avoid generation of erroneous paraphrases (e.g., automatic feedback generation was used to assist crowd workers in generating high quality para-phrases).",
"Precog (Nilforoshan et al., 2017) is an example of such tools which is based on a supervised method for generating automatic writing feedback for multi-paragraph text designed mostly for crowdsourced product reviews (Nil-foroshan et al., 2017; Nilforoshan and Wu, 2018).",
"This paper aims for paving the way for building automatic pre-hoc approaches, and providing appropriate online feedback to users to assist them in generating appropriate paraphrases.",
"However, the provided dataset can also be used for building post-hoc methods to automatically omit faulty paraphrases.",
"Semantic Similarity.",
"Measuring similarity between units of text plays an important role in Natural Language Processing (NLP).",
"Several NLP tasks have been designed to cover various aspects and usages of textual similarity.",
"Examples include textual entailment, semantic textual similarity (Yang et al., 2018b; Fakouri-Kapourchali et al., 2018), paraphrase detection (Agarwal et al., 2018; Issa et al., 2018), duplicate question detection (Mannarswamy and Chidambaram, 2018) tasks which are studied well in NLP.",
"Moreover, recent success in sentence encoders (e.g., Sent2Vec (Pagliardini et al., 2018), InferSent (Conneau et al., 2017), Universal Sentence Encoder (Cer et al., 2018), and Concatenated Power Mean Embeddings (Ruckle et al., 2018)) can be exploited to detect paraphrasing issues with more accuracy.",
"These techniques can be borrowed with some domain specific considerations to build automatic quality control systems for detecting low quality paraphrases.",
"In this paper, we employed a data-driven approach to investigate and quantitatively study various crowdsourced paraphrasing issues.",
"We discussed how automatic techniques for detecting various quality issues can assist the manual process of crowdsourced paraphrasing.",
"We collected an annotated dataset of crowdsourced paraphrasing in which each paraphrase is labeled with associated paraphrasing issues.",
"We used this dataset to assess existing tools and techniques and to determine whether they are sufficient for automatically detecting such issues.",
"Our experiments revealed that automated detection of errors in paraphrases is a challenging task.",
"As a future work, we will be working on devising automated-assisted methods for detection of paraphrasing issues.",
"This will be based on a two-way feedback mechanism: generating feedback for workers, while at the same time the system learns from the (data of) users to improve its machine intelligence.",
"In time, we envision increasingly less dependence on users.",
"This research was supported fully by the Australian Government through the Australian Research Council's Discovery Projects funding scheme (project DP1601104515)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"method",
"objective",
"objective",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Targeted syntactic evaluation of subject-verb number agreement in English (TSE) evaluates language models' syntactic knowledge using hand-crafted minimal pairs of sentences that differ only in the main verb's conjugation.",
"The method evaluates whether language models rate each grammatical sentence as more likely than its ungrammatical counterpart.",
"We identify two distinct goals for TSE.",
"First, evaluating the systematicity of a language model's syntactic knowledge: given a sentence, can it conjugate arbitrary verbs correctly?",
"Second, evaluating a model's likely behavior : given a sentence, does the model concentrate its probability mass on correctly conjugated verbs, even if only on a subset of the possible verbs?",
"We argue that current implementations of TSE do not directly capture either of these goals, and propose new metrics to capture each goal separately.",
"Under our metrics, we find that TSE overestimates systematicity of language models, but that models score up to 40% better on verbs that they predict are likely in context.",
"As neural language models have emerged as both broadly useful engineering tools (Devlin et al., 2018; Radford et al., 2019) and potential models of human language processing (Linzen and Leonard, 2018; Ettinger et al., 2018; Futrell et al., 2019), evaluations targeting their syntactic ability have been developed to better understand their capabilities.",
"One such method for syntactic evaluation tests models' knowledge of English subject-verb (S/V) number agreement (Linzen et al., 2016; Gulordava et al., 2018).",
"These studies consider minimal pairs of sentences, such as The keys to the cabinet is/are on the table , that differ only in verb number, and test if models rate grammatical sentences as more probable.",
"The syntactically correct of the two sentences is sampled from natural corpora (Linzen et al., 2016; Kuncoro et al., 2018) or constructed from templates.",
"The use of templates, known as Targeted Syntactic Evaluation (TSE), allows for the fine-grained evaluation of models on specific, often rare, syntactic phenomena (Marvin and Linzen, 2018; Ettinger et al., 2018; Warstadt et al., 2020), but (when evaluating S/V number agreement) relies on researchers hand-specifying a small set of verb lemmas that are substituted into each template.",
"In this work, we improve the TSE methodology by disentangling its broad objective of evaluating syntactic ability into two distinct goals, and we introduce two variants of TSE to separately capture each goal.",
"These evaluations demonstrate that neural models do not generally consider well-conjugated verbs more likely than their incorrect conjugations, but instead prefer to correctly conjugate verbs they deem likely.",
"We argue that the objective of evaluating syntactic ability can be decomposed into two goals and that current implementations of TSE do not achieve either of them.",
"The first goal is measuring systematicity : for a specific syntactic construction, does the model correctly conjugate arbitrary verbs with the grammatical number of the subject?",
"TSE currently fails to capture this because it evaluates models using only a small set of verbs for each syntactic construction.",
"If models only conjugate these verbs correctly, they receive a high score, even if they conjugate other verbs incorrectly.",
"The second goal is measuring likely behavior : when we sample verbs from the model in a specific syntactic construction, will they be properly conjugated?",
"TSE fails to directly capture this because the small set of verbs used in evaluation might differ from the verbs that are likely in context under the model.",
"If models conjugate these hand-specified verbs incorrectly, they receive a low score, even if they correctly conjugate more likely verbs.",
"The keys to the cabinet exist/exists on the table.",
"where for simplicity we assert that the only possible verbs are: is/are ( be ) and exists/exist ( exist ).",
"Let the model assign higher probability mass to the correct conjugation for the be pair but not for the exist pair (Table 1).",
"First, consider evaluating systematicity.",
"To re-flect how TSE chooses a small subset of the possible verbs for evaluation, in this toy example let it choose only be .",
"Thus, the model scores 1 out of 1 , whereas a test of systematicity should penalize the model for incorrectly conjugating exist .",
"Now, consider evaluating likely behavior.",
"Let this same model generate either of the two correct conjugations (are/exist) with total probability of 0 .",
"7 and generate either of the incorrect conjugations with total probability 0 .",
"3 .",
"Thus, when we sample from the model, it generates a correct conjugation with probability 0 .",
"7 , but TSE cannot measure this, since it gives a binary score to each verb pair.",
"The first of our proposed evaluations, equally-weighted syntactic evaluation (EW), addresses systematicity.",
"To better approximate a model's ability to conjugate any verb, EW expands TSE to consider a much larger set of verbs than given in the templates used by prior work.",
"model-weighted syntactic evaluation (MW), addresses likely behavior.",
"This method computes the probability mass that models put on producing the correct verb conjugation given a particular syntactic context.",
"It rates the syntactic quality of samples models need not conjugate all verbs, but instead be likely to generate some well-conjugated verb.",
"We conduct these evaluations on four pretrained language models using two template datasets: M&L (Marvin and Linzen, 2018) and BLiMP (Warstadt et al., 2020).",
"Overall, we find that the EW scores are lower than the TSE scores, indicating that the verb choices in these templates overestimate models' systematicity with respect to subject-verb number agreement.",
"This lack of systematicity is particularly apparent when we test verb lemmas that models find unlikely, with scores dropping by up to 40% .",
"In contrast, the MW scores are high, suggesting that language models preferentially conjugate verbs they deem likely.",
"Moreover, this ability improves when the tail of the distribution is truncated, as it is in decoding strategies like nucleus sampling (Holtzman et al., 2020).",
"1 2 Methods To define our metrics, we introduce some notation.",
"TSE has two components: the model M to evaluate, and the set of templates T with interesting syntactic phenomena (e.g., from Marvin and Linzen (2018)).",
"In S/V number agreement, each template contains a context c , including the subject that specifies the correct verb inflection; and the verb lemma (cid:96) with correct and incorrect inflections in the third person present tense ( (cid:96) + and (cid:96) , respectively).",
"M takes c and produces a distribution PM ( | c ) over its vocabulary, which we assume includes (cid:96) + and (cid:96) .",
"We then compute a score for each template and average the scores over all templates to get a final score for M .",
"The TSE score for a template can be expressed as: 1 [ PM ( (cid:96) + | c ) > PM ( (cid:96) | c )] .",
"The crux of our proposal is to use a large set of lemmas, L , while drawing contexts c from a prede-fined set of templates T .",
"We define two evaluation methods using L : Equally-Weighted (EW) Here we average (1) over all (cid:96) in L , evaluating systematicity.",
"Model-Weighted (MW) Here we compute the total probability of generating a correct inflection conditioned on generating a lemma in L :",
"Data We use S/V number agreement TSE templates from Marvin and Linzen (2018) and BLiMP (Warstadt et al., 2020) (for BLiMP we use the minimal pairs differing in verb, not subject).",
"For our MW and EW evaluations, we only keep templates with unique contexts (i.e., templates not differing solely in verb lemma).",
"We also ensure that all sentences start with a capital letter (for cased models) and end with a sentence-final period (for bidirectional models).",
"Our list of English verb lemmas contains 3,562 lemmas and is compiled by combining the top 1,000 most frequent verb lemmas from COCA, extracting all tokens with the VB part-of-speech tag in the Penn Treebank (1,951 lemmas), and scraping 3,250 lemmas from the Giant Verb List (Davies, 2008; Marcus et al., 1993; Essay, 2015).",
"2 Masked LMs may assign a different number of tokens to plural and singular forms of the same lemma, and they may not model joint probabilities over multiple tokens.",
"To enable a fairer comparison between LMs and masked LMs, we only consider lemmas where both inflections are in the wordpiece vocabulary of the models.",
"This choice leaves 980 lemmas for BERT cased, 1159 for BERT uncased, and 1265 for GPT2 and RoBERTa (so results are not comparable between models).",
"This verbal variety situates our work between Gulordava et al. (2018)'s and Marvin and Linzen (2018)'s: our verbs can be infelicitous like the sentences in Gulordava et al. (2018), but our contexts are felicitous.",
"See Section 5 for additional discussion.",
"Models We evaluate both bidirectional and unidirectional models, including BERT-large-uncased, BERT-large-cased, GPT2-XL, and RoBERTa-large (Devlin et al., 2018; Radford et al., 2019; Liu et al., 2019), all using the Huggingface Tranformers library (Wolf et al., 2020).",
"To understand models' performances at the head and tail of their distributions, we additionally restrict L to the lemmas assigned high and low probabilities.",
"To consider the high-confidence lemmas, for each template in the dataset, we record the MW and EW scores computed using the inflections that fall into the top p percentile of the model's distribution.",
"We choose p { 10 , 20 , 30 , 40 , 50 , 60 , 70 , 80 , 90 , 95 , 97 , 100 } , noting that for each p , the distribution we use is the same as the one used by nucleus sampling (with a nucleus of size p ).",
"Analogously, to focus on the low-confidence lemmas, we consider the lemmas where both inflections fall into the bottom p percentile of the model's distribution.",
"Here, we choose p { 50 , 10 , 1 , 0 .",
"1 , 0 .",
"01 , 0 .",
"001 , 0 .",
"0001 } .",
"3 4 Results Our results can be found in Table 2.",
"We find that EW scores are almost always lower than TSE 3 At times, a cut-off lies within the probability mass on an inflection of interest.",
"scores, indicating that TSE overestimates systematicity.",
"On the other hand, higher MW scores reveal that sampling from the models is likely to result in correct conjugations.",
"A potential confounder for unidirectional LMs (GPT2) is that they only receive the left context and subject verb pairs sometimes look like noun phrases.",
"For example, a sentence starting with The officer can be continued by experiences joy or by experience is overwhelming .",
"This is not an issue when there are phrases or clauses between the subject and verb, and it may not occur for other English syntactic phenomena or in other languages.",
"To investigate the extent to which models perform well on likely lemmas and poorly on unlikely lemmas, we plot these scores for the top and bottom p percentiles in Figure 1.",
"In general, the models perform better on lemmas that they assign high probability to in both evaluations.",
"For example, consider the BERT cased model assessed on object relative clause constructions.",
"The MW plot illustrates that sampling from the top 60% of the distribution will produce a grammatical output with 97% probability, while sampling from the entire distribution only does so with 91% probability.",
"The EW plot shows that the model attains a score under 80% when assessed on verbs in the bottom 0 .",
"001% of the model's probability mass, even though considering verbs in the top 90% of the model's probability mass would yield a score over 94% .",
"These observations extend previous work on nucleus sampling, showing that cutting off the tails of the distribution generates more syntactically correct outputs (Holtzman et al., 2020).",
"There are two additional factors to keep in mind for these plots.",
"First, the heads and tails of the distributions often contain very few lemmas eligible for use in score calculation.",
"Second, models often assign probability mass to other lemma inflections (e.g. the past tense) that do not allow us to assess models' S/V number agreement ability.",
"See the Appendix for related plots.",
"Earlier, we motivated MW with the consideration that the lemmas TSE uses might be unlikely, and therefore give an unrealistic depiction of models' likely syntactic behavior.",
"Two examples where this happens and leads to a deceptively low score on a template for a model (here BERT-large-cased) are in Table 3.",
"In the first column, the lemma set used by TSE contains like , hate , and love , and the model puts more probability on like than likes , leading to a TSE score of 0 .",
"67 .",
"However, the most probable lemmas are meet , encounter , see , and face , all of which the model conjugates correctly.",
"In the second column, there is another example where the MW score rewards models for correct conjugations while TSE does not.",
"Like the last example, the lemma set used by TSE contains like , hate , and love , and like is conjugated incorrectly.",
"However, the more probable lemmas pilot , control , employ , train , use , include , have , order , command , and feature are all conjugated correctly.",
"Evaluating Models Some previous work has focused on using minimal pairs to evaluate syntactic representations of models.",
"Goldberg (2019) and Wolf (2019) assess the syntactic abilities of large transformers like BERT and GPT2, while Kuncoro et al. (2018), Tran et al. (2018) and Kim The senators that the skater [mask] are young.",
"et al. (2019) evaluate architectures designed to capture syntax (e.g., Ordered Neurons (Shen et al., 2019) and Recurrent Neural Network Grammars (Dyer et al., 2016)).",
"In these cases, minimal pair evaluations should align with models' performance as language models, which is measured by our MW score.",
"Psycholinguistics Recent work has also applied experimental procedures from psycholinguistics to compare human and neural model language processing (Futrell et al., 2019).",
"Experiments investigating garden path sentences' surprisal, S/V number agreement, and other specific syntactic phenomena reveal that models and humans have different patterns of errors and processing (Linzen and Leonard, 2018; Ettinger et al., 2018; Wilcox et al., 2020; van Schijndel and Linzen, 2020).",
"Many of these phenomena are rare, so evaluations with tem-plated minimal pairs complement perplexity as a metric for evaluating models' syntactic generalization (Hu et al., 2020).",
"When measuring syntactic ability on arbitrary lemmas, our EW metric would be preferred.",
"Lexical Choice in Syntactic Evaluation Prior work also considered how the lexical items in minimal pairs affect the syntactic evaluation of models.",
"Marvin and Linzen (2018) note that certain verbs are preferentially conjugated correctly (they observe RNNs conjugate be correctly more often than swim ) and claim that this is due to unigram frequency of the verbs.",
"Similarly, we observe that models succeed on our MW metric indicating that they correctly inflect verbs with high in-context probability under the model.",
"Relatedly, Yu et al. (2020) investigate the nouns used in TSE minimal pairs and find that language model performance at subject-verb number agreement is uncorrelated with unigram probability of the noun.",
"We instead focus on model-estimated in-context probability of the verb in minimal pairs, finding that model performance increases with the model probability.",
"Finally, Gulordava et al. (2018) argue that the results of syntactic evaluations are influenced by semantic associations between tokens, so they remove these associations by substituting each token with a different randomly selected token with the same syntactic role.",
"Although the resulting minimal pairs are infelicitous, models are still able to predict the correct inflection with above-chance accuracy.",
"Our methods are similar in that some of the verbs in our evaluation set are infelicitous, however the contexts we use are semantically coherent.",
"Rather than avoiding semantic effects by creating infelicitous contexts, we marginalize them out by using a large set of verb lemmas.",
"This makes our metrics less stringent than those of Gulordava et al. (2018), but captures a potentially more realistic setting where we expect our models to perform systematically.",
"As neural models have proven successful at NLP tasks and as potential psycholinguistic models, we continue to refine our understanding of how and whether they capture human-like language faculties.",
"TSE provides a useful framework to address this question, but its current implementation focuses on a limited group of hand-chosen verbs, so it inaccurately reflects models' syntactic generalization abilities.",
"In response, we propose two minimal pair evaluations: equally-weighted and model-weighted syntactic evaluation.",
"The first focuses on systematicity by expanding the set of verbs TSE considers, and illustrates that language models still struggle with S/V agreement for unlikely verbs.",
"The second focuses on likely behavior by computing the probability of producing a correctly conjugated verb, and illustrates that despite systematic shortcomings, language models generate syntactically valid utterances with high probability.",
"By introducing these metrics, we hope to arrive at a clearer picture of the syntactic abilities of language models.",
"The metrics we propose have been developed specifically with corpora using Standard American English in order to evaluate models' abilities to understand Standard American English syntax.",
"This focus means that models performing well under these evaluations may perform poorly in other English dialects, and they may not understand all syntactic systems, for example in other languages.",
"Finally, our MW metric concerns itself with how models are likely to preform during generative processes (such as beam search or sampling).",
"Performing well on this metric means models will be able to generate more human-like text which has potential downstream harms such as misinformation generation or other inauthentic behavior in situations where written language is the medium used for communication.",
"The authors would like to thank the reviewers for their helpful feedback, along with Tal Linzen, Chris Manning, Rishi Bommasani, Kawin Etha-yarajh, Lisa Li, Nelson Liu, Yasuhide Miura, Aaron Mueller, and Tianyi Zhang for their invaluable comments and discussions.",
"JH was supported by an NSF Graduate Research Fellowship under grant number DGE1656518, and by Two Sigma under their 2020 PhD Fellowship Program."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Given an untrimmed video and a text query, natural language video localization (NLVL) is to locate a matching span from the video that semantically corresponds to the query.",
"Existing solutions formulate NLVL either as a ranking task and apply multimodal matching architecture, or as a regression task to directly regress the target video span.",
"In this work, we address NLVL task with a span-based QA approach by treating the input video as text passage.",
"We propose a video span localizing network (VSLNet), on top of the standard span-based QA framework, to address NLVL.",
"The proposed VSLNet tackles the differences between NLVL and span-based QA through a simple and yet effective query-guided highlighting (QGH) strategy.",
"The QGH guides VSLNet to search for matching video span within a highlighted region.",
"Through extensive experiments on three benchmark datasets, we show that the proposed VSLNet outperforms the state-of-the-art methods; and adopting span-based QA framework is a promising direction to solve NLVL.",
"1 1 Introduction Given an untrimmed video, natural language video localization (NLVL) is to retrieve or localize a temporal moment that semantically corresponds to a given language query.",
"An example is shown in Figure 1. As an important vision-language understanding task, NLVL involves both computer vision and natural language processing techniques (Kr-ishna et al., 2017; Hendricks et al., 2017; Gao et al., 2018; Le et al., 2019; Yu et al., 2019).",
"Clearly, cross-modal reasoning is essential for NLVL to correctly locate the target moment from a video.",
"Prior works primarily treat NLVL as a ranking task, which is solved by applying multimodal Corresponding author.",
"Language Query : Men are celebrating and an old man gives a trophy to a young boy.",
"matching architecture to find the best matching video segment for a given language query (Gao et al., 2017; Hendricks et al., 2018; Liu et al., 2018a; Ge et al., 2019; Xu et al., 2019; Chen and Jiang, 2019; Zhang et al., 2019).",
"Recently, some works explore to model cross-interactions between video and query, and to regress the temporal locations of target moment directly (Yuan et al., 2019b; Lu et al., 2019a).",
"There are also studies to formulate NLVL as a sequence decision making problem and to solve it by reinforcement learning (Wang et al., 2019; He et al., 2019).",
"We address the NLVL task from a different perspective.",
"The essence of NLVL is to search for a video moment as the answer to a given language query from an untrimmed video.",
"By treating the video as a text passage, and the target moment as the answer span, NLVL shares significant similarities with span-based question answering (QA) task.",
"The span-based QA framework (Seo et al., 2017; Wang et al., 2017; Huang et al., 2018) can be adopted for NLVL.",
"Hence, we attempt to solve this task with a multimodal span-based QA approach.",
"There are two main differences between traditional text span-based QA and NLVL tasks.",
"First, video is continuous and causal relations between video events are usually adjacent.",
"Natural language, on the other hand, is inconsecutive and words in a sentence demonstrate syntactic structure.",
"For instance, changes between adjacent video frames are usually very small, while adjacent word tokens may carry distinctive meanings.",
"As the result, many events in a video are directly correlated and can even cause one another (Krishna et al., 2017).",
"Causalities between word spans or sentences are usually indirect and can be far apart.",
"Second, compared to word spans in text, human is insensitive to small shifting between video frames.",
"In other words, small offsets between video frames do not affect the understanding of video content, but the differences of a few words or even one word could change the meaning of a sentence.",
"As a baseline, we first solve the NLVL task with a standard span-based QA framework named VSLBase .",
"Specifically, visual features are analogous to that of text passage; the target moment is regarded as the answer span.",
"VSLBase is trained to predict the start and end boundaries of the answer span.",
"Note that VSLBase does not address the two aforementioned major differences between video and natural language.",
"To this end, we propose an improved version named VSLNet (Video Span Localizing Network).",
"VSLNet introduces a Query-Guided Highlighting ( QGH ) strategy in addition to VSLBase.",
"Here, we regard the target moment and its adjacent contexts as foreground, while the rest as background, i.e., foreground covers a slightly longer span than the answer span.",
"With QGH, VSLNet is guided to search for the target moment within a highlighted region.",
"Through region highlighting, VSLNet well addresses the two differences.",
"First, the longer region provides additional contexts for locating answer span due to the continuous nature of video content.",
"Second, the highlighted region helps the network to focus on subtle differences between video frames, because the search space is reduced compared to the full video.",
"Experimental results on three benchmark datasets show that adopting span-based QA framework is suitable for NLVL.",
"With a simple network architecture, VSLBase delivers comparable performance to strong baselines.",
"In addition, VSLNet further boosts the performance and achieves the best among all evaluated methods.",
"Natural Language Video Localization.",
"The task of retrieving video segments using language queries was introduced in (Hendricks et al., 2017; Gao et al., 2017).",
"Solutions to NLVL need to model the cross-interactions between natural language and video.",
"The early works treat NLVL as a ranking task, and rely on multimodal matching architecture to find the best matching video moment for a language query (Gao et al., 2017; Hendricks et al., 2017, 2018; Wu and Han, 2018; Liu et al., 2018a,b; Xu et al., 2019; Zhang et al., 2019).",
"Although intuitive, these models are sensitive to negative samples.",
"Specifically, they need to dense sample candidate moments to achieve good performance, which leads to low efficiency and lack of flexibility.",
"Various approaches have been proposed to overcome those drawbacks.",
"Yuan et al. (2019b) builds a proposal-free method using BiLSTM and directly regresses temporal locations of target moment.",
"Lu et al. (2019a) proposes a dense bottom-up framework, which regresses the distances to start and end boundaries for each frame in target moment, and select the ones with highest confidence as final result.",
"Yuan et al. (2019a) proposes a semantic conditioned dynamic modulation for better correlating sentence related video contents over time, and establishing a precise matching relationship between sentence and video.",
"There are also works (Wang et al., 2019; He et al., 2019) that formulate NLVL as a sequence decision making problem, and adopt reinforcement learning based approaches, to progressively observe candidate moments conditioned on language query.",
"Most similar to our work are (Chen et al., 2019) and (Ghosh et al., 2019), as both studies are considered using the concept of question answering to address NLVL.",
"However, both studies do not explain the similarity and differences between NLVL and traditional span-based QA, and they do not adopt the standard span-based QA framework.",
"In our study, VSLBase adopts standard span-based QA framework; and VSLNet explicitly addresses the differences between NLVL and traditional span-based QA tasks.",
"Span-based Question Answering.",
"Span-based QA has been widely studied in past years.",
"Wang and Jiang (2017) combines match-LSTM (Wang and Jiang, 2016) and Pointer-Net (Vinyals et al., 2015) to estimate boundaries of the answer span.",
"BiDAF (Seo et al., 2017) introduces bi-directional attention to obtain query-aware context representation.",
"Xiong et al. (2017) proposes a coattention network to capture the interactions between context and query.",
"R-Net (Wang et al., 2017) integrates mutual and self attentions into RNN encoder for feature refinement.",
"QANet (Yu et al., 2018) lever-Q u e r y : T h i s p e r s on s t a r t s c ook i ng a t t h e s t ov e 3 D C o n v N e t This person stove G l o V e F e a t u re E n c o d er C o n t e x tQ u er y A tt e n t i o n C o nd i t i o n e d Sp a n P re d i c t o r !",
"ages a similar attention mechanism in a stacked convolutional encoder to improve performance.",
"Fu-sionNet (Huang et al., 2018) presents a full-aware multi-level attention to capture complete query information.",
"By treating input video as text passage, the above frameworks are all applicable to NLVL in principle.",
"However, these frameworks are not designed to consider the differences between video and text passage.",
"Their modeling complexity arises from the interactions between query and text passage, both are text.",
"In our solution, VSLBase adopts a simple and standard span-based QA framework, making it easier to model the differences between video and text through adding additional modules.",
"Our VSLNet addresses the differences by introducing the QGH module.",
"Very recently, pre-trained transformer based language models (Devlin et al., 2019; Dai et al., 2019; Liu et al., 2019; Yang et al., 2019) have elevated the performance of span-based QA tasks by a large margin.",
"Meanwhile, similar pre-trained models (Sun et al., 2019a,b; Yu and Jiang, 2019; Rahman et al., 2019; Nguyen and Okatani, 2019; Lu et al., 2019b; Tan and Bansal, 2019) are being proposed to learn joint distributions over multimodality sequence of visual and linguistic inputs.",
"Exploring the pre-trained models for NLVL is part of our future work and is out of the scope of this study.",
"We now describe how to address NLVL task by adopting a span-based QA framework.",
"We then present VSLBase (Sections 3.2 to 3.4) and VSLNet in detail.",
"Their architectures are shown in Figure 2. 3.1 Span-based QA for NLVL We denote the untrimmed video as V = { f t } Tt =1 and the language query as Q = { q j } mj =1 , where T and m are the number of frames and words, respectively.",
"s and e represent the start and end time of the temporal moment i.e., answer span.",
"To address NLVL with span-based QA framework, its data is transformed into a set of SQuAD style triples ( Context, Question, Answer ) (Rajpurkar et al., 2016).",
"For each video V , we extract its visual features V = { v i } ni =1 by a pre-trained 3D ConvNet (Carreira and Zisserman, 2017), where n is the number of extracted features.",
"Here, V can be regarded as the sequence of word embeddings for a text passage with n tokens.",
"Similar to word embeddings, each feature v i here is a video feature vector.",
"Since span-based QA aims to predict start and end boundaries of an answer span, the start/end time of a video sequence needs to be mapped to the corresponding boundaries in the visual feature sequence V .",
"Suppose the video duration is T , the start (end) span index is calculated by a s ( e ) = (cid:104) s ( e ) / T n (cid:105) , where (cid:104)(cid:105) denotes the rounding operator.",
"During the inference, the predicted span boundary can be easily converted to the corresponding time via s ( e ) = a s ( e ) /n T .",
"After transforming moment annotations in NLVL dataset, we obtain a set of ( V , Q, A ) triples.",
"Visual features V = [ v 1 , v 2 , . . . , v n ] act as the passage with n tokens; Q = [ q 1 , q 2 , . . . , q m ] is the query with m tokens, and the answer A = [ v a s , v a s +1 , . . . , v a e ] corresponds to a piece in the passage.",
"Then, the NLVL task becomes to find the correct start and end boundaries of the answer span, a s and a e .",
"We already have visual features V = { v i } ni =1 R n d v .",
"Word embeddings of a text query Q , Q = { q j } mj =1 R m d q , are easily obtainable e.g., GloVe.",
"We project them into the same dimension d , V (cid:48) R n d and Q (cid:48) R m d , by two linear layers (see Figure",
"2(a)).",
"Then we build the feature encoder with a simplified version of the embedding encoder layer in QANet (Yu et al., 2018).",
"Instead of applying a stack of multiple encoder blocks, we use only one encoder block.",
"This encoder block consists of four convolution layers, followed by a multi-head attention layer (Vaswani et al., 2017).",
"A feed-forward layer is used to produce the output.",
"Layer normalization (Ba et al., 2016) and residual connection (He et al., 2016) are applied to each layer.",
"The encoded visual features and word embeddings are as follows: (cid:101) V = FeatureEncoder ( V (cid:48) ) (cid:101) Q = FeatureEncoder ( Q (cid:48) ) (1) The parameters of feature encoder are shared by visual features and word embeddings.",
"After feature encoding, we use context-query attention (CQA) (Seo et al., 2017; Xiong et al., 2017; Yu et al., 2018) to capture the cross-modal interactions between visual and textural features.",
"CQA first calculates the similarity scores, S R n m , between each visual feature and query feature.",
"Then context-to-query ( A ) and query-to-context ( B ) attention weights are computed as: A = S r (cid:101) Q R n d , B = S r S Tc (cid:101) V R n d where S r and S c are the rowand column-wise normalization of S by SoftMax, respectively.",
"where V q R n d ; FFN is a single feed-forward layer; (cid:12) denotes element-wise multiplication.",
"We construct a conditioned span predictor by using two unidirectional LSTMs and two feed-forward layers, inspired by Ghosh et al. (2019).",
"The main difference between ours and Ghosh et al. (2019) is that we use unidirectional LSTM instead of bidirectional LSTM.",
"We observe that unidirectional LSTM shows similar performance with fewer parameters and higher efficiency.",
"The two LSTMs are stacked so that the LSTM of end boundary can be conditioned on that of start boundary.",
"Then the hidden states of the two LSTMs are fed into the Query : He uses the tool to take off all of the nuts one by one.",
"corresponding feed-forward layers to compute the start and end scores: h st = UniLSTM start ( v qt , h st 1 ) h et = UniLSTM end ( h st , h et 1 ) S st = W s ([ h st ; v qt ]) + b s S et = W e ([ h et ; v qt ]) + b e (3) Here, S st and S et denote the scores of start and end boundaries at position t ; v qt represents the t -th feature in V q .",
"Then, the probability distributions of start and end boundaries are computed by P s = SoftMax ( S s ) R n and P e = SoftMax ( S e ) R n , and the training objective is defined as: L span = 1 2 (cid:2) f CE ( P s , Y s ) + f CE ( P e , Y e ) (cid:3) (4) where f CE represents cross-entropy loss function; Y s and Y e are the labels for the start ( a s ) and end ( a e ) boundaries, respectively.",
"During inference, the predicted answer span ( a s , a e ) of a query is generated by maximizing the joint probability of start and end boundaries by: span ( a s , a e ) = arg max a s , a e P s ( a s ) P e ( a e ) s.t. 0 a s a e n (5) We have completed the VSLBase architecture (see Figure",
"A Query-Guided Highlighting (QGH) strategy is introduced in VSLNet, to address the major differences between text span-based QA and NLVL tasks, as shown in Figure",
"2(b).",
"With QGH strategy, we consider the target moment as the foreground, and the rest as background, illustrated in Figure 3. The target moment, which is aligned with the language query, starts from a s and ends at a e with length L = a e a s .",
"QGH extends the boundaries of the foreground to cover its antecedent and consequent C o n v1 D & S i g m o i d !",
"video contents, where the extension ratio is controlled by a hyperparameter . As aforementioned in Introduction, the extended boundary could potentially cover additional contexts and also help the network to focus on subtle differences between video frames.",
"By assigning 1 to foreground and 0 to background, we obtain a sequence of 0 1 , denoted by Y h . QGH is a binary classification module to predict the confidence a visual feature belongs to foreground or background. The structure of QGH is shown in Figure 4. We first encode word features (cid:101) Q into sentence representation (denoted by h Q ), with self-attention mechanism (Bahdanau et al., 2015). Then h Q is concatenated with each feature in V q as V q = [ v q 1 , . . . , v qn ] , where v qi = [ v qi ; h Q ] . The highlighting score is computed as:",
"Accordingly, feature V q in Equation 3 is replaced by (cid:101) V q in VSLNet to compute L span . The loss function of query-guided highlighting is formulated as:",
"L = L span + LQGH .",
"We conduct experiments on three benchmark datasets: Charades-STA (Gao et al., 2017), ActivityNet Caption (Krishna et al., 2017), and TACoS (Regneri et al., 2013), summarized in Table 1.",
"The videos are about daily indoor activities. There are 12 , 408 and 3 , 720 moment annotations for training and test, respectively.",
"ActivityNet Caption contains about 20 k videos taken from ActivityNet (Heilbron et al., 2015). We follow the setup in Yuan et al. (2019b), leading to 37 , 421 moment annotations for training, and 17 , 505 annotations for test.",
"TACoS is selected from MPII Cooking Composite Activities dataset (Rohrbach et al., 2012). We follow the setting in Gao et al. (2017), where 10 , 146 , 4 , 589 and 4 , 083 annotations are used for training, validation and test, respectively.",
"Metrics. We adopt R@ n, IoU = and mIoU as the evaluation metrics, following (Gao et al., 2017; Liu et al., 2018a; Yuan et al., 2019b). The R@ n, IoU = denotes the percentage of language queries having at least one result whose Intersection over Union (IoU) with ground truth is larger than in topn retrieved moments. mIoU is the average IoU over all testing samples. In our experiments, we use n = 1 and { 0 . 3 , 0 . 5 , 0 . 7 } .",
"Implementation. For language query Q , we use 300 d GloVe (Pennington et al., 2014) vectors to initialize each lowercase word; the word embeddings are fixed during training. For untrimmed video V , we downsample frames and extract RGB visual features using the 3D ConvNet which was pre-trained on Kinetics dataset (Carreira and Zisserman, 2017). We set the dimension of all the hidden layers in the model as 128; the kernel size of convolution layer is 7 ; the head size of multi-head attention is 8 . For all datasets, the model is trained for 100 epochs with batch size of 16 and early stopping strategy. Parameter optimization is performed by Adam (Kingma and Ba, 2015) with learning rate of 0 . 0001 , linear decay of learning rate and gradient clipping of 1 . 0 . Dropout (Srivastava et al., 2014) of 0 . 2 is applied to prevent overfitting.",
"We compare VSLBase and VSLNet with the following state-of-the-arts: CTRL (Gao et al., 2017), ACRN (Liu et al., 2018a), TGN (Chen et al., 2018), ACL-K (Ge et al., 2019), QSPN (Xu et al., 2019), SAP (Chen and Jiang, 2019), MAN (Zhang et al., 2019), SM-RL (Wang et al., 2019), RWM-RL (He et al., 2019), L-Net (Chen et al., 2019), ExCL (Ghosh et al., 2019), ABLR (Yuan et al.,",
"2019b) and DEBUG (Lu et al., 2019a). In all result tables, the scores of compared methods are reported in the corresponding works. Best results are in bold and second best underlined.",
"The results on Charades-STA are summarized in Table 2. For fair comparison with ExCL, we follow the same setting in ExCL to use the C3D model fine-tuned on Charades dataset as visual feature extractor. Observed that VSLNet significantly outperforms all baselines by a large margin over all metrics. It is worth noting that the performance improvements of VSLNet are more significant under more strict metrics. For instance, VSLNet achieves 7 . 47% improvement in IoU = 0 . 7 versus",
"0 . 78% in IoU = 0 . 5 , compared to MAN. Without query-guided highlighting, VSLBase outperforms all compared baselines over IoU = 0 . 7 , which shows adopting span-based QA framework is promising for NLVL. Moreover, VSLNet bene-fits from visual feature fine-tuning, and achieves state-of-the-art results on this dataset.",
"Table 3 summarizes the results on ActivityNet Caption dataset. Note that this dataset requires YouTube clips to be downloaded online. We have 1 , 309 missing videos, while ExCL reports 3 , 370 missing videos. Strictly speaking, the results reported in this table are not directly comparable. Despite that, VSLNet is superior to ExCL with 2 . 06% and 0 . 16% absolute improvements over IoU = 0 . 7 and IoU = 0 . 3 , respectively. Meanwhile, VSLNet surpasses other baselines.",
"Similar observations hold on TACoS dataset. Reported in Table 4, VSLNet achieves new state-of-the-art performance over all evaluation metrics. Without QGH, VSLBase shows comparable per-Module",
"We conduct ablative experiments to analyze the importance of feature encoder and context-query attention in our approach.",
"We also investigate the impact of extension ratio (see Figure 3) in query-guided highlighting (QGH).",
"Finally we visually show the effectiveness of QGH in VSLNet, and also discuss the weaknesses of VSLBase and VSLNet.",
"We study the effectiveness of our feature encoder and context-query attention (CQA) by replacing them with other modules.",
"Specifically, we use bidirectional LSTM (BiLSTM) as an alternative feature encoder.",
"For context-query attention, we replace it by a simple method (named CAT) which concatenates each visual feature with max-pooled query feature.",
"Recall that our feature encoder consists of Convolution + Multi-head attention + Feed-forward layers (see Section 3.2), we name it CMF.",
"With the alternatives, we now have 4 combinations, listed in Table 5. Observe from the results, CMF shows stable superiority over CAT on all metrics regardless of other modules; CQA surpasses CAT whichever feature encoder is used.",
"This study indicates that CMF and CQA are more effective.",
"Table 6 reports performance gains of different 0.0 0.05 0.1 0.2 0.3 0.4 0.5 1.0 1.5 2.0 3.0 Extension Ratio ( ) 67.0 67.5 68.0 68.5 69.0 69.5 70.0 R @ 1 , I o U = 0 .",
"modules over R@ 1 , IoU = 0 . 7 metric.",
"The results shows that replacing CAT with CQA leads to larger improvements, compared to replacing BiLSTM by CMF.",
"This observation suggests CQA plays a more important role in our model.",
"Specifi-cally, keeping CQA, the absolute gain is 1 .",
"61% by replacing encoder module.",
"Keeping CMF, the gain of replacing attention module is 3 .",
"09% .",
"Figure 5 visualizes the matrix of similarity score between visual and language features in the context-query attention (CQA) module ( S R n m in Section 3.3).",
"This figure shows visual features are more relevant to the verbs and their objects in the query sentence.",
"For example, the similarity scores between visual features and eating (or sandwich ) are higher than that of other words.",
"We believe that verbs and their objects are more likely to be used to describe video activities.",
"Our observation is consistent with Ge et al. (2019), where verb-object pairs are extracted as semantic activity concepts.",
"In contrast, these concepts are automatically captured by the CQA module in our method.",
"We now study the impact of extension ratio in query-guided highlighting module on Charades-STA dataset.",
"We evaluated 12 different values of from 0 .",
"0 to in experiments.",
"0 .",
"0 represents no answer span extension, and means that the entire video is regarded as foreground.",
"The results for various 's are plotted in Figure 6. It shows that query-guided highlighting consistently contributes to performance improvements, regardless of values, i.e., from 0 to .",
"Along with raises, the performance of VSLNet first increases and then gradually decreases.",
"The optimal performance appears between = 0 .",
"05 and 0 .",
"2 over all metrics.",
"Note that, when = , which is equivalent to no region is highlighted as a coarse region to locate target moment, VSLNet remains better than VSLBase.",
"Shown in Figure 4, when = , QGH effectively becomes a straightforward concatenation of sentence representation with each of visual features.",
"The resultant feature remains helpful for capturing semantic correlations between vision and language.",
"In this sense, this function can be regarded as an approximation or simulation of the traditional multimodal matching strategy (Hendricks et al., 2017; Gao et al., 2017; Liu et al., 2018a).",
"Figure 7 shows the histograms of predicted results on test sets of Charades-STA and ActivityNet Caption datasets.",
"Results show that VSLNet beats VSLBase by having more samples in the high IoU ranges, e.g., IoU 0 .",
"7 on Charades-STA dataset.",
"More predicted results of VSLNet are distributed in the high IoU ranges for ActivityNet Caption dataset.",
"This result demonstrates the effectiveness",
"of the query-guided highlighting (QGH) strategy.",
"We show two examples in Figures",
"8(a) and",
"8(b) from Charades-STA and ActivityNet Caption datasets, respectively.",
"From the two figures, the localized moments by VSLNet are closer to ground truth than that by VSLBase.",
"Meanwhile, the start and end boundaries predicted by VSLNet are roughly constrained in the highlighted regions S h , computed by QGH.",
"We further study the error patterns of predicted moment lengths, as shown in Figure 9. The differences between moment lengths of ground truths and predicted results are measured.",
"A positive length difference means the predicted moment is longer than the corresponding ground truth, while a negative means shorter.",
"Figure 9 shows that VSLBase tends to predict longer moments, e.g., more samples with length error larger than 4 seconds in Charades-STA or 30 seconds in ActivityNet.",
"On the contrary, constrained by QGH, VSLNet tends to predict shorter moments, e.g., more samples with length error smaller that 4 seconds in Charades-STA or 20 seconds in ActivityNet Caption.",
"This observation is helpful for future research on adopting span-based QA framework for NLVL.",
"In addition, we also exam failure cases (with IoU predicted by VSLNet lower than 0 . 2 ) shown in Figure 10. In the first case, as illustrated by Figure",
"10(a), we observe an action that a person turns towards to the lamp and places an item there.",
"The QGH falsely predicts the action as the beginning of the moment turns off the light.",
"The second failure case involves multiple actions in a query, as shown in Figure",
"10(b).",
"QGH successfully highlights the correct region by capturing the temporal information of two different action descriptions in the given query.",
"However, it assigns pushes with higher confidence score than grabs.",
"Thus, VSLNet only captures the region corresponding to the pushes action, due to its confidence score.",
"By considering a video as a text passage, we solve the NLVL task with a multimodal span-based QA framework.",
"Through experiments, we show that adopting a standard span-based QA framework, VSLBase, effectively addresses NLVL problem.",
"However, there are two major differences between video and text.",
"We further propose VSLNet, which introduces a simple and effective strategy named query-guided highlighting, on top of VSLBase.",
"With QGH, VSLNet is guided to search for answers within a predicted coarse region.",
"The effectiveness of VSLNet (and even VSLBase) suggest that it is promising to explore span-based QA framework to address NLVL problems.",
"This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project #A18A1b0045 and #A18A2b0046)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"other"
] |
[
"Understanding the dynamics of international politics is important yet challenging for civilians.",
"In this work, we explore unsupervised neural models to infer relations between nations from news articles.",
"We extend existing models by incorporating shallow linguistics information and propose a new automatic evaluation metric that aligns relationship dynamics with manually annotated key events.",
"As understanding international relations requires carefully analyzing complex relationships, we conduct in-person human evaluations with three groups of participants.",
"Overall, humans prefer the outputs of our model and give insightful feedback that suggests future directions for human-centered models.",
"Furthermore, our model reveals interesting regional differences in news coverage.",
"For instance, with respect to US-China relations, Singaporean media focus more on strength-ening and purchasing, while US media focus more on criticizing and denouncing.",
"In the context of growing globalization (Baylis et al., 2017), understanding complex international relations is increasingly relevant to our daily life.",
"Yet this is a challenging task due to the inherently dynamic nature of international relations.",
"As Kissinger famously said, America has no permanent friends or enemies, only interests.",
"Staying informed becomes even harder in the continuous streams of information from news outlets and social media.",
"This very availability of such information, however, opens up exciting opportunities for natural language processing to support individuals in understanding international relations.",
"Supervised extraction has been incredibly useful at identifying pre-defined relations and events (Dodding-ton et al., 2004; Mintz et al., 2009) but fails to capture emerging or complex information needs.",
"Topic models and neural models have been proposed to explore relations between entities without supervision (O'Connor et al., 2013; Chaney et al., 2016; Iyyer et al., 2016).",
"In particular, Iyyer et al. (2016) introduces an unsupervised neural model for tracking relations between fictional characters, and this approach outperforms baselines from topic models and hidden Markov models.",
"In this work, we incorporate linguistic insights into this model to track relation dynamics between nations from news articles.",
"Our model reconstructs textual information in the embedding space using relation embeddings, as proposed in Iyyer et al. (2016).",
"We integrate simple yet effective linguistic insights: verbal predicates often describe the relationship between entities, 1 while nouns and proper nouns provide the context of this relationship.",
"For example, in U.S. denounces Russia for its interference in the 2016 election, denounce describes the relation, and election and interference provide the context.",
"We show that this intuition leads the model to discover relation descriptors that are easier to interpret and less noisy.",
"Evaluating these exploratory models for subjective tasks poses a challenge as there are no gold labels.",
"Along with the model, we propose new approaches for evaluation.",
"We introduce a quantitative metric which aligns pre-annotated key events with the temporal trends of relationships produced by the models.",
"Since this task requires careful analysis of complex international relations, we conduct in-person user studies with NLP researchers and undergraduate students recruited from political science and linguistics courses.",
"Both quantitative evaluation and human evaluation indicate that our model better rep-1 We use entities, countries, and nations interchangeably in this work.",
"resents the dynamic relationships between nations than the prior model (Iyyer et al., 2016): 75.9% of participants preferred our model for finding natural language words describing international relations and 85.5% preferred temporal trends generated by our model.",
"Finally, we qualitatively explore the context of relations provided by an attention-based mechanism and demonstrate a practical application of our model by studying regional differences in news coverage of relationships between two countries.",
"We conclude with discussions on future directions for buildings models that can support individuals in navigating a large collection of news articles.",
"Our code is available at https:// github.com/BoulderDS/LARN .",
"We start by introducing our dataset of news articles, the shallow linguistic information that we extract, and our annotation of key events.",
"News article collection.",
"Our dataset is derived from the NOW corpus, the largest corpus of news articles that is available in full-text format.",
"2 The NOW corpus collects the news articles that are available on Google News in 20 English-speaking countries and thus include news articles that span a wide variety of topics, ranging from politics, to sports, to celebrity news from 23K media outlets.",
"In this work, we consider the news articles in recent years, i.e., from January 2016 to June 2018, to facilitate human evaluation.",
"We consider 12 nations (U.S., Russia, China, UK, Germany, Canada, France, India, Japan, Iran, Israel, and Syria) and the 66 nation pairs between them.",
"To identify mentions of each nation in news articles, we manually construct a set of aliases for each nation to cover common abbreviations and the names of political leaders (e.g., Trump, Putin).",
"On average, each nation has 3.5 aliases.",
"We then use these aliases to find sentences that contain a pair of nations under consideration and obtain 1.2M sentences associated with 634K articles.",
"Adding Shallow Linguistic Information.",
"To incorporate shallow linguistic knowledge, we process the news article collection for each nation pairs to extract (1) verbal predicates and (2) nouns 2 The dataset can be obtained from https://www.",
"and proper nouns from each sentence.",
"Specifi-cally, we use a dependency parser to detect verbal predicates and their subjects and objects, and only include predicates for which both subjects and objects were found.",
"For sentences with such predicates, we find nouns and proper nouns using part of speech tags.",
"All data processing was done in spaCy (Honnibal and Montani, 2017) and full details can be found in the appendix.",
"Key events annotation.",
"The main goal of our model is to support the exploration of international relations, which is very challenging to evaluate.",
"To derive quantitative evaluation measures and provide the basic context of international relations, we manually identify key events over the 30 months for 8 most frequently mentioned nation pairs (i.e. US-[China, Russia, UK, India, Canada, Japan, Syria], China-India) by reading through the top Google search results for each two countries and each month.",
"3 We identified roughly five key events per nation pair.",
"For example, one key event for US-China relation is Chairman Xi's visit to the US in April 2017.",
"A complete list of key events is shown in the appendix.",
"Table 1 summarizes the data statistics.",
"In this section, we formally introduce our model that builds on Relation Modeling Network ( RMN ) (Iyyer et al., 2016).",
"Our main contribution is to integrate shallow linguistic information (i.e., verbal predicates and nouns/proper nouns) and identify the context of relations.",
"The intuition behind our model follows RMN, i.e., inferring relation embeddings by reconstruct-3",
"reconstruct-3 This annotation is done by the first author and is thus inherently subjective.",
"We add a robustness check in the appendix based on another set of independently annotated key events by the third author.",
"Automatic evaluation shows a similar trend holds for the two sets of annotations.",
"ing textual information in the embedding space.",
"Specifically, we learn a fixed set of relation embeddings and use a convex combination of these relation embeddings to reconstruct information in sentences that mention both entities.",
"Our main hypothesis is that relation information is often encoded in verbal predicates and we can obtain more interpretable and robust relations if we focus on predicates.",
"For each pair of entities, we extract information from the predicates in sentences with both entities and reconstruct these predicates using shared relation embeddings.",
"In addition, we use nouns to provide the context for the relations.",
"We refer to our model as Linguistically Aware Relationship Networks ( LARN ).",
"We now formally define our problem.",
"The input of our model is a collection of news articles.",
"For each entity pair e i , e j , we obtain a set of articles, A e i ,e j , containing at least one sentence mentioning both entities.",
"We identify sentences where both e i and e j occur based on any alias associated with an entity (nation).",
"We extract all the verbal predicates from these sentences in article a A e i ,e j as { p a,e i ,e j 1 , . . . , p a,e i ,e j N } , as well as the proper nouns or nouns as { n a,e i ,e j 1 , . . . , n a,e i ,e j M } .",
"Preprocessing details could be found in the appendix.",
"We then use GloVe embeddings to represent these words, i.e., v p a,ei,ejk for p a,e i ,e j k and v n a,ei,ejk for n a,e i ,e j k (Pennington et al., 2014).",
"These word embeddings are static in the entire learning process.",
"Our model learns relation embeddings R RK d , where K is a hyperparameter for the number of relations and d is the dimension of the relation embedding as well as the word embedding.",
"Following Iyyer et al. (2016), we obtain a list of natural language descriptors for each relation using the nearest neighbors of the relation embedding within the 500 most common predicates.",
"4 The model also provides (1) a probability distribution over relations for each article, (2) a probability distribution over nouns for each relation between each entity pair, and (3) an embedding for each entity.",
"Figure 1 describes the overall architecture.",
"We will describe the construction of v a label in 3.2, a 4 Refer to the appendix for a version that include all words.",
"The relation descriptors from RMN required a manual filtering step in Iyyer et al. (2016), and become unintelligible without the 500 common words constraint.",
"Note that our model produces intelligible descriptors even without the 500 most common predicates constraint.",
"We compute the representation for each article to be reconstructed as the sum of bag-of-words embeddings.",
"While Iyyer et al. (2016) considers all the words in a window in which both entities occur, we only consider predicates in the sentences where both entities show up: v a,e i ,e j label = N (cid:88) k =1 v p a,ei,ejk .",
"v a,e i ,e j label depends on both the news article and the entity pair.",
"In the rest of the paper, We omit e i , e j in the superscript here for simplicity, i.e., v a label .",
"We represent each article as a weighted sum of relation embeddings.",
"Assuming d a represents a weight vector over the K relation embeddings that sums to 1 for an article a with respect to a pair of entities ( e i , e j ) , we obtain r a as follows: r a = R (cid:62) d a .",
"d a can also be thought of as a distribution over relations.",
"This distribution over relations depends on entity pair ( v ae ), predicate information ( v ap ), and noun information ( v an ).",
"While RMN simply takes the average of all words in the sentence, LARN focuses on verbal predicates and nouns to capture our intuition that predicates describe the main relations, whereas nouns provide background information to explain the relations.",
"We now describe these three components in details.",
"Representing predicates and entity pair.",
"We follow RMN to construct embedding for each entity pair and for the predicates.",
"The entity pair vector, v a e , simply adds the embedding of the two entities.",
"The predicate vector, v ap , is equivalent to v a label except for word dropout during training, i.e., setting b ak to be 0 or 1 with a probability of 0.5.",
"Representing context with nouns.",
"To understand relations between an entity pair in a sentence, nouns should be considered in addition to predicates.",
"For example, tariff is indicative of the relation between US and China in Originally, Trump favoured the simple imposition of a tariff on products from selected countries, especially China and Mexico, despite the seemingly positive predicate favour .",
"As nouns are much more common than predicates (see Table 1) and not all of them are meaningful for understanding international relations, we employ a weighted sum of noun vectors.",
"We use an attention mechanism (Conneau et al., 2017; Bahdanau et al., 2015) and consider each entity pair as a unique key to compute the attention weights, since the same noun can be interpreted differently across different entity pairs.",
"To this end, we train an attention query embedding q e i ,e j for each entity pair separately.",
"We further encode the temporal information by concatenating a one-hot vector t a that indicates the month when the article was published with the noun representation v n ak .",
"This allows us to capture the shifts in a word's meaning over time.",
"h n ak = tanh ( W proj [ v n ak ; t a ]) , n ak = exp ( h n ak q e i ,e j ) (cid:80) Mk (cid:48) =1 exp ( h n ak (cid:48) q e i ,e j ) , v an = M (cid:88) k =1 n ak h n ak .",
"Finally, we concatenate the three representations as the input to a feedforward network and pass to a softmax layer to create the weight vector d a .",
"This d a is multiplied with the descriptor matrix R to get the final representation r a .",
"Different from RMN , we do not consider temporal dependencies between time steps in our model because it is important to understand sudden shifts in international relations rather than assuming that the relations slowly evolve.",
"We also found the temporal dependencies were not helpful empirically in our do-main but rather computationally expensive.",
"The reconstruction objective pushes r a to resemble v alabel .",
"Our formulation is identical to RMN : the loss function consists of a contrastive max-margin loss term, J , and an auxiliary loss term, X , to encourage unique relation embeddings.",
"L ( ) = J ( ) + X ( ) , J ( ) = (cid:88) a A ei,ej (cid:88) v a label (cid:48) N max(0 , 1 r a v a label || r a || || v a label || + r a v a label (cid:48) || r a || || v a label (cid:48) || ) , X ( ) = (cid:107) RR (cid:62) I (cid:107) , where v (cid:48) label is a randomly sampled negative example, N is a collection of them, and is a hyperparameter for balancing two loss terms.",
"In this section, we compare our model to RMN .",
"For both models, we fixed the number of descriptors to 30 following Iyyer et al. (2016).",
"As tracking dynamic international relations requires careful analysis, we hosted onsite user studies for quality control and in-person feedback.",
"We first describe the model outputs, and then present both quantitative and qualitative evaluation results.",
"Given a set of time-stamped news articles and a list of nations of interest, both models provide a set of relation descriptors, where each one defines a type of relation and a temporal trend analysis of these descriptors that shows how the relation evolves .",
"Relation descriptors.",
"Table 2 shows the top five and bottom five descriptors from LARN and RMN sorted by the average weights over all news articles related to the most frequently mentioned eight nation pairs.",
"5 By using predicates to describe relations, our descriptors seem to contain more semantically meaningful words.",
"For instance, the top relation in RMN consists of exclusively noncontent words.",
"Another interesting advantage of our model is that the five relations with the lowest 5 We focus on the top eight nation pairs to be consistent with human evaluation.",
"weight have much higher weights in LARN than in RMN .",
"This suggests that RMN tends to generate useless relations that do not show up in the data, while even bottom relations in LARN remain useful for describing the data.",
"Temporal trends.",
"We visualize the temporal trends of the most prominent relations between nation pairs.",
"We further provide our annotated key events as the context to interpret these temporal trends.",
"Figure 2 gives an example for US and China.",
"The top three relations based on LARN are denounce, strengthen, and leave, 6 while the top three based on RMN are seem, range, and negotiate.",
"We find our model generally aligns better with the key events: for instance, the denounce relation peaked around the time that Trump started issuing a series of tariffs based on 6 For space reasons, we only include the top word in the relation descriptor.",
"We leverage our manually annotated key events to develop a novel automatic metric to evaluate how well the temporal trends from our model are aligned with key events.",
"We define a change rate at each month t as the weighted average of changes in relation weights in the top three relations: t = (cid:88) i top 3 relations w t,i | d t,i d t prev ,i | d t prev ,i , where d t,i is the average weight for the i -th relation in all the articles published at t , d t prev ,i is the average weight before t in a window of W preceding months ( 1 W (cid:80) Ww =1 d ( t w ) ,i ), and weight w t,i is the normalized weight for top three descriptors ( d t,i (cid:80) j top3relations d t,j ).",
"Our results are robust to the choices of W , and we set it to 6 for presentation.",
"We expect change rates to be greater when sig-nificant events happen in international politics.",
"Figure 3 compares the change rate at months where key events occurred with other months for eight nation pairs for which we annotated key events.",
"Both models present more abrupt changes when key events occurred.",
"Unlike RMN , our model does not have temporal dependencies between relation distributions over time, 7 and thus has a higher discontinuity in general.",
"However, even in relative terms, our model fluctuated more substantially than RMN when key events occurred.",
"We also did a robustness check with another set of independently annotated key events and the results can be found in the appendix.",
"This measure captures whether the model can detect the change points, but does not measure whether the model correctly captures the semantics of the key events, i.e., did a negative relation increase after hostile events such as war?",
"To this end, we performed human evaluations.",
"We hosted three human evaluations with participants from different demographics: undergraduate students from political science classes, graduate students from a computer science department (mostly in NLP), and undergraduate students taking a linguistics class.",
"The total number of participants was 29, roughly equally divided among the three groups.",
"The participants were shown outputs from RMN and LARN , and asked to choose the output that better aligns with their intuitions.",
"Each participant answered about 10 questions and provided justification for their answers to each question, taking roughly 30 minutes to an hour.",
"Table 3 summarizes the results of our human evaluations.",
"Relation descriptor evaluation.",
"The participants were shown a list of top five descriptors (as in Table 2) from two models, and prompted to select a set which adequately covers possible relationships that can occur between countries.",
"75.9% of participants preferred our model.",
"Temporal trends evaluation.",
"We showed temporal trends between nation pairs annotated with key events, one from RMN and the other from LARN (as in Figure 2).",
"We asked them to evaluate whether the temporal trends accurately reflect the dynamics in nation-to-nation relationships.",
"Each participant evaluated four randomly chosen nation pairs.",
"The temporal trend from our model was preferred more frequently (85.5% of total responses).",
"Nation pair matching.",
"We designed a novel task where we showed the participants the other four temporal trends without annotated key events and asked them to match each trend with the corresponding nation pair from the four candidates, based on their world knowledge about nation-nation relations.",
"Each participant did the matching twice, once for RMN and once for LARN .",
"The participants found correct temporal trends for 45.2% of entity pairs for RMN , and 38.0% of entity pairs for LARN , when random pairing would yield 25.0%.",
"The difference between two models here is not statistically significant.",
"Most participants found this task very challenging, as they did not know much about the relationship between certain entity pairs (e.g., a participant said As an American, there's no way to know the relation between China and India.).",
"Even political science students do not perform better than the other two groups.",
"Discussion.",
"Overall the output from our model is preferred by the participants.",
"We found that political science students paid more attention to detail, took a longer time to finish, and were more ambivalent between the performance of the two models.",
"For example, for temporal trends, they preferred our model for 71.4% of the examples, compared to other groups which preferred our model for 90% of the examples.",
"They also preferred RMN 's relation descriptors slightly (42.9% selected our model) and commented that a few concepts from RMN , like infrastructure, supply, and value, are more concrete (e.g., a participant said I chose the left one ( RMN ), because it is easy to determine and remember the positive of items such as infrastructure, value, and supply. Those have more positive undertones, while it is easy to gauge negative sentiments with terrorism,' con-demn' and those.).",
"As LARN encodes the background context specific to each nation pair separately, our relation descriptors do not contain such concrete concepts.",
"In the next section, we will discuss contexts for relations, where these concepts appear in LARN .",
"exFigure 4: Top contextual words for US-China's denounce relation derived from three approaches.",
"Top : Showing word's avg.",
"attention score in all denounce articles each month.",
"Middle : Showing word's frequency in all denounce articles each month.",
"Bottom : Showing word's average attention score multiplied by log( appearance ) in each month.",
"All figures are normalized by the global maximum score in the figure.",
"amine the context (nouns) associated with a relation between two nations based on an attention-based mechanism, which RMN does not handle.",
"Second, we perform an in-depth analysis to show how our model can reveal regional differences in news coverage on the same topic.",
"To help users better understand the inferred relations, we offer specific contexts that relate to the inferred relations based on the attention-based mechanism introduced in",
"3. For each relation, we find articles that place the most weight in that relation, and rank the nouns and proper nouns in those articles by their average attention score (i.e., n ai in 3.3).",
"The top part of Figure 4 shows the top 10 words for the denounce relation in Figure 2a, the temporal trend for US-China relations.",
"Since RMN does not support such mechanism, we show the most frequent nouns that occurred in the documents that mention both entities as a baseline (middle part).",
"8 We find that the attended nouns from our model are more informative than frequent nouns: tariff is the most attended word; words such as sanction, treaty, and pressure also show up, while the frequency baseline centers around words like China and president.",
"As we noticed that the frequency baseline can capture alignment with key events, we incorporate attention score and frequency in the bottom part of Figure",
"4. This augmented version captures informative words (e.g., tariff, sanction, and missile) 9 and closely aligns with the key events.",
"To further demonstrate the utility of our model, we explore regional differences in news coverage, as it is possible to build real knowledge by comparing perspectives from different social con-texts. 10 This also relates to the longstanding literature on framing in news coverage, i.e., se-lecting some aspects of a perceived reality and make them more salient to promote problem def-inition/interpretation (Entman, 1993; Chong and Druckman, 2007).",
"We picked two countries, Singapore and US, to study US-China relations.",
"11 Using country source of media outlets in the NOW corpus, we found 10K articles from Singaporean media and 5.7K articles from US media on US and China.",
"Table 4 shows the top five relations sorted by their absolute weight differences between US media outlets and Singaporean media outlets.",
"Singaporean media more frequently use positive descriptors such as strengthen and purchase, whereas US media report negative relations such as denounce and criticize more frequently.",
"Table 5 shows two example sentences from articles with the most weight in the denounce relation.",
"Even though two media sources are focusing on events leading to the same type of relation, a reader who mainly consume news articles in Singapore would get a clearly different impression of US-China relations from those who read US news.",
"8 We had a comparison between the top figure and the middle figure in the human evaluation, but we found an error in visualization and thus focus on qualitative comparisons.",
"9 missile points us to another event related to US deploying missile defense system in South Korea, which also impacts US-China relations.",
"10 https://www.publicbooks.org/why-an-age-of-machine-learning-needs-the-humanities/ 11 China is not an English speaking country and is thus not in the NOW corpus.",
"Singapore contained the most articles containing both US and China.",
"US media: President Donald Trump is preparing to impose a package of $60 billion in annual tariffs against Chinese products, following through on a longtime threat that he says will punish China for intellectual property theft and create more American jobs.",
"Singaporean media: It does not look like just a trade war , but rather the US is trying to bully China and the rest of the world in order to make China concede economic resources and development opportunities to the US and make the US forever big and strong.",
"Prior work (Chambers et al., 2015; Choi et al., 2016; Rashkin et al., 2017) studied entity-entity relations in terms of positive and negative sentiments between them.",
"Similarly, literature on relation extraction (Riedel et al., 2010; Gardner and Krishnamurthy, 2017; Elson et al., 2010; Srivas-tava et al., 2016) focused on pre-defined relations between a pair of entities in the database schema.",
"In comparison, our work discovers descriptors for relations between entity pairs instead of finding entity pairs matching pre-defined relation schema.",
"Topic modeling has been an important method to grasp important concepts from a large collection of documents in an unsupervised fashion (Blei et al., 2003; Das et al., 2015; Chang et al., 2009; Schein et al., 2015).",
"Similar to our work, O'Connor et al. (2013) incorporates linguistic insights with topic models to identify event classes and detect conflicts.",
"Our work additionally models the context of relations through nouns and focuses on exploring the potential of neural models.",
"Most relevant to our work is Iyyer et al. (2016), which suggests RMN better capture dynamic relationships in literature than hidden Markov model (Gruber et al., 2007) and LDA (Blei et al., 2003).",
"Recent work extended and applied RMN to other settings such as studying user roles in online communities (Wang et al., 2016; Frermann and Szarvas, 2017).",
"Notably, Chaturvedi et al. (2017) suggests HMM with shallow linguistic features (i.e., frame net parses) and global constraints can outperform RMN for modeling relations in literature.",
"In this work, we incorporate linguistic insights with RMN and apply it to news domain.",
"Last but not least, researchers have studied the dynamics of media coverage from a wide range of perspectives, ranging from framing (Card et al., 2015; Field et al., 2018), to relationship between ideas (Tan et al., 2017), to quotes of politicians (Niculae et al., 2015; Tan et al., 2018; Leskovec et al., 2009).",
"There is also signifi-cant effort for building event databases in political science (Leetaru and Schrodt, 2013), and assisting journalists with tools (Handler and O'Connor, 2017), and dating historical text (Niculae et al., 2014).",
"We investigate the promise of unsupervised neural models for automatically inferring relations between nations.",
"We find that incorporating shallow linguistic information is a simple yet effective strategy for deriving robust and interpretable relations.",
"We develop a novel quantitative evaluation metric for understanding international relations and in-depth human evaluation also confirms the effectiveness of our model.",
"We further show that our models can provide the background of relations using attention score and reveal regional differences for future studies on media framing.",
"Meanwhile, our work suggests important future directions for using NLP technologies to support individuals in navigating a large collection of news articles.",
"Our participants often find it challenging to infer information simply from temporal dynamics of our inferred relations based on natural language descriptors.",
"It is thus important to incorporate human cognitive preferences in developing such models and provide narratives beyond words such as key events.",
"Furthermore, different populations pay attention to different parts of information.",
"We need to understand the diversity when developing NLP technologies for end users and provide helpful personalized hints to lower the barrier of benefiting from model outputs.",
"We thank the anonymous reviewers, Dallas Card, and Mohit Iyyer for helpful feedback and discussion.",
"We thank all the participants in our human evaluation from LING 3813 at Georgia Tech by Lelia Glass, students at X-lab at University of Washington, and undergraduate students at University of Colorado Boulder.",
"We also thank Andy Baker and Sven Steinmo for recruiting students in their classes.",
"Choi is supported by a Facebook Fellowship."
] | [
"abstain",
"objective",
"objective",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"objective",
"method",
"method",
"method",
"result",
"objective",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"other",
"method",
"objective",
"abstain",
"other",
"other",
"method",
"other",
"other",
"objective",
"result",
"objective",
"result",
"method",
"result",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Self-training is a semi-supervised learning approach for utilizing unlabeled data to create better learners.",
"The efficacy of self-training algorithms depends on their data sampling techniques.",
"The majority of current sampling techniques are based on predetermined policies which may not effectively explore the data space or improve model generalizability.",
"In this work, we tackle the above challenges by introducing a new data sampling technique based on spaced repetition that dynamically samples informative and diverse unlabeled instances with respect to individual learner and instance characteristics.",
"The proposed model is specifically effective in the context of neural models which can suffer from overfitting and high-variance gradients when trained with small amount of labeled data.",
"Our model outperforms current semi-supervised learning approaches developed for neural networks on publicly-available datasets.",
"It is often expensive or time-consuming to obtain labeled data for Natural Language Processing tasks.",
"In addition, manually-labeled datasets may not contain enough samples for downstream data analysis or novelty detection (Wang and Hebert, 2016).",
"To tackle these issues, semi-supervised learning (Zhu, 2006; Chapelle et al., 2009) has become an important topic when one has access to small amount of labeled data and large amount of unlabeled data.",
"Self-training is a type of semi-supervised learning in which a downstream learner (e.g. a classifier) is first trained with labeled data, then the trained model is applied to unlabeled data to generate more labeled instances.",
"A select sample of these instances together with their pseudo (pre-dicted) labels are added to the labeled data and the learner is re-trained using the new labeled dataset.",
"This process repeats until there is no more unlabeled data left or no improvement is observed in model performance on validation data (Zhu, 2006; Zhu and Goldberg, 2009).",
"Conventional self-training methods often rely on prediction confidence of their learners to sample unlabeled data.",
"Typically the most confident unlabeled instances are selected (HEARST, 1991; Yarowsky, 1995; Riloff and Jones, 1999; Zhou et al., 2012).",
"This strategy often causes only those unlabeled instances that match well with the current model being selected during self-training, therefore, the model may fail to best generalize to complete sample space (Zhang and Rudnicky, 2006; Wu et al., 2018).",
"Ideally, a self-training algorithm should explore the space thoroughly for better generalization and higher performance.",
"Recently Wu et al. (2018) developed an effective data sampling technique for co-training (Blum and Mitchell, 1998) methods which require two distinct views of data.",
"Although effective, this model can't be readily applied to some text datasets due to the two distinct view requirement.",
"In the context of neural networks, pretraining is an effective semi-supervised approach in which layers of a network are first pretrained by learning to reconstruct their inputs, and then network parameters are optimized by supervised fine-tuning on a target task (Hinton and Salakhutdinov, 2006; Bengio et al., 2007; Erhan et al., 2010).",
"While pretraining has been effective in neural language modeling and document classification (Dai and Le, 2015; Miyato et al., 2016), it has an inherent limitation: the same neural model or parts thereof must be used in both pretraining and fine-tuning steps.",
"This poses a major limitation on the design choices as some pretraining tasks may need to exploit several data types (e.g., speech and text), or might require deeper network architectures.",
"The above challenges and intuitions inspire our work on developing a novel approach for neural self-training.",
"The core part of our approach is a data sampling policy which is inspired by find-ings in cognitive psychology about spaced repetition (Dempster, 1989; Cepeda et al., 2006; Averell and Heathcote, 2011); the phenomenon in which a learner (often a human) can learn efficiently and effectively by accurately scheduling reviews of learning materials.",
"In contrast to previous self-training approaches, our spaced repetition-based data sampling policy is not predetermined, explores the entire data space, and dynamically selects unlabeled instances with respect to the strength of a downstream learner on a target task, and easiness of unlabeled instances.",
"In addition, our model relaxes the same model constraint of pretraining-based approaches by naturally decoupling pretraining and fine-tuning models through spaced repetition.",
"The contributions of this paper are",
"(a): we propose an effective formulation of spaced repetition for self-training methods; to the best of our knowledge, this is the first work that investigates spaced repetition for semi-supervised learning,",
"(b): our approach dynamically samples data, is not limited to predetermined sampling strategies, and naturally decouples pretraining and fine-tuning models, and",
"(c): it outperforms current state-of-the-art baselines on large-scale datasets.",
"Our best model outperforms standard and current state-of-the-art semi-supervised learning methods by 6 .",
"5 and 4 .",
"1 points improvement in macro-F1 on sentiment classification task, and 3 .",
"6 and 2 .",
"2 points on churn classification task.",
"Further analyses show that the performance gain is due to our model's ability in sampling diverse and informative unlabeled instances (those that are different from training data and can improve model gener-alizability).",
"Conventional self-training methods employ the following steps to utilize unlabeled data for semi-supervised learning: (1) train a learner, e.g. a classifier, using labeled data, (2) iteratively select unlabeled instances based on a data sampling technique, and add the sampled instances (together with their predicted pseudo labels) to the labeled data, and (3) iteratively update the learner using the new labeled dataset.",
"The core difference between self-training algorithms is in the second step: data sampling policy.",
"In this paper, we develop a new data sampling technique based on spaced repetition which dynamically explores the data space and takes into account instance and learner characteristics (such as easiness of instances or learner strength on target task) to sample unlabeled data for effective self-training.",
"Figure 1 illustrates our proposed neural self-training framework.",
"We assume the downstream learner is a neural network that, at every self-training episode,",
"(a): takes current labeled and unlabeled data as input,",
"(b): uses labeled data to iteratively optimize its parameters with respect to a target task, and",
"(c): dynamically explores unlabeled data space through spaced repetition to inform a data sampler that selects unlabeled data for the next self-training episode.",
"Spaced repetition (Dempster, 1989; Cepeda et al., 2006; Averell and Heathcote, 2011) was presented in psychology and forms the building block of many educational devices, including flashcards, in which small pieces of information are repeatedly presented to a learner on a schedule determined by a spaced repetition algorithm.",
"Such algorithms show that humans and machines can better learn by scheduling reviews of materials so that more time is spent on difficult concepts and less time on easier ones (Dempster, 1989; Novikoff et al., 2012; Amiri et al., 2017).",
"In this paper, we focus on a specific spaced repetition framework called Leitner system (Leitner, 1974).",
"Suppose we have n queues { q 0 , . . . , q n 1 } .",
"In general, Leitner system initially places all instances in the first queue, q 0 .",
"During training, if an instance from q i is correctly classified by the learner, it will be promoted to q i +1 (solid/green arrows in Figure 1), otherwise it will be demoted to the previous queue, q i 1 (dashed/red arrows in Figure 1).",
"Therefore, as the learner trains through time, higher queues will accumulate instances that are easier for the learner, while lower queues will accumulate harder instances.",
"To use Leitner system for neural self-training, we assume our learner is a neural network, place all unlabeled instances in the first queue of Leitner system (line 2 in Algorithm 1), and gradually populate them to other queues while training the network.",
"Our Leitner system uses iteration-specific network predictions on unlabeled instances and current pseudo labels of these instances to move them between queues (see line 4 5 in Algorithm 1); pseudo labels can be obtained through posterior predictions generated by any trained downstream learner (see Section 2.2).",
"Instances with similar class predictions and pseudo labels will be promoted to their next queues, and those with opposite predictions and labels will be demoted to lower queues.",
"We note that, errors (e.g. inaccurate pseudo labels or network predictions) can inversely affect instance movements among queues.",
"However, our sampling technique (see below) alleviates this issue because such misleading instances, if sampled, can't improve the generalizability of downstream learners.",
"Details of our Leitner system is shown in Table 1. 2.2 Self-Training with Leitner Queues We formulate the data sampling process as a decision-making problem where, at every self-training episode, the decision is to select a subset of unlabeled instances for self-training using information from Leitner queues.",
"A simple, yet effective, approach to utilize such information is a greedy one in which instances of the queue that most improves the performance of the current model on validation data will be selected.",
"We refer to this queue as designated queue: Algorithm 2 shows details of our self-training approach.",
"At every episode, we use current labeled data to train a task-specific neural net-Algorithm 1. Leitner system Input: L , U , V : labeled, unlabeled, and validation data y : pseudo labels for U k : number of training epochs n : number of queues Output: Q : Leitner queue populated with U 1 Q = [ q 0 , q 1 , . . . , q n 1 ] 2 q 0 = [ U ] , q i = [] for i [1 , n 1] 3 for epoch = 1 to k : 4 model = epoch train ( L , V ) 5 promos, demos = eval ( Q, y , model ) 6 Q = schedule ( Q, promos, demos ) 7 end for 8 return Q Table 1: Leitner system for neural self-training.",
"work (line 2 ).",
"Here, we weight the loss function using class size to deal with imbalanced data, and weight pseudo-labeled instances (as a function of episodes) to alleviate the effect of potentially wrong pseudo labels while training the network.",
"We then use the trained network to generate pseudo labels for current unlabeled instances (line 3 ).",
"These instances are then populated in Leitner queues as described before (line 4 ).",
"Given the populated Leitner queues, the sample for current self-training episode is then created using instances of the designated queue, the queue that most improves the performance of the current network on validation data (lines 5 8 ).",
"Instances of the designated queue will be removed from unlabeled data and added to labeled data with their pseudo labels treated as gold labels (lines 9 10 ).",
"We note that finding designated queues (lines 5 8 in Algorithm 2) imposes computational complexity on our model.",
"However, in practice, we observe that designated queues are almost always among middle or higher queues in Leitner system, i.e. q i , i [ (cid:98) n/ 2 (cid:99) , n 1] where n in the number of queues.",
"This can help accelerating the search Algorithm 2. Neural Self-training Input: L , U , V : labeled, unlabeled, and validation data K : number of self-training episodes Output: M : classification model 1 for episode = 1 to K : 2 ML = train ( L , V ) 3 y = label ( ML , U ) 4 Q = Leitner system ( L , U , V , y ) \\\\ Alg.",
"process.",
"In addition, learning a data sampling policy from movement patterns of instances among queues may help alleviating/eliminating the need for such an iterative search; see Section 4.4.",
"Finally, at test time, we apply the resulting self-trained network to test data and use the result for model comparison.",
"We compare different self-training approaches in two settings where learners (neural networks) have low or high performance on original labeled data.",
"This consideration helps investigating sensitivity of different self-training algorithms to the initial performance of learners.",
"As datasets, we use movie reviews from IMDb and short microblog posts from Twitter.",
"These datasets and their corresponding tasks are described below and their statistics are provided in Table 3. In terms of preprocessing, we change all texts to lowercase, and remove stop words, user names, and URLs from texts in these datasets: Train Val.",
"IMDb : The IMDb dataset was developed by Maas et al. (2011) 1 for sentiment classification where systems should classify the polarity of a given movie review as positive or negative.",
"The dataset contains 50 K labeled movie reviews.",
"For the purpose of our experiments, we randomly sample 1 K, 1 K, and 48 K instances from this data (with balanced distribution over classes) and treat them as labeled (training), validation, and test data respectively.",
"We create five such datasets for robustness against different seeding or data partitions.",
"This dataset also provides 50 K unlabeled reviews.",
"Churn : This dataset contains more than 5 K tweets about three telecommunication brands and was developed by Amiri and Daume III (2015) 2 for the task of churn prediction 3 where systems should predict if a twitter post indicates user intention about leaving a brand classifying tweets as churny or non-churny with respect to brands.",
"We replace all target brand names with the keyword BRAND and other non-target brands with BRAND-OTHER for the purpose of our experiments.",
"Similar to IMDb, we create five datasets for experiments.",
"We also crawl an additional 100 K tweets about the target brands and treat them as unlabeled data.",
"As downstream neural networks (referred to as base classifiers), we consider current state-of-the-art deep averaging networks (DANs) (Shen et al., 2018; Iyyer et al., 2015; Joulin et al., 2017; Arora et al., 2017) for IMDb, and a basic CNN model for Churn dataset with parameters set from the work presented in (Gridach et al., 2017) except for pretrained embeddings.",
"In terms of DANs, we use FastText (Joulin et al., 2017) for its high per-1 http://ai.stanford.edu/amaas/data/ sentiment/ 2 https://scholar.harvard.edu/hadi/ chData 3 Churn is a term relevant to customer retention in marketing discourse; examples of churny tweets are my days with BRAND are numbered, debating if I should stay with BRAND, and leaving BRAND in two days. formance and simplicity.",
"FastText is a feedfor-ward neural network that consists of an embedding layer that maps vocabulary indices to embeddings, an averaging layer that averages word embeddings of inputs, and several hidden layers (we use two layers of size 256 ) followed by a prediction layer with sigmoid activation.",
"We use 300 -dimensional word embeddings provided by Google's word2vec toolkit (Mikolov et al., 2013).",
"In Algorithm 1, we set the number of training epochs to k = 32 , and stop training when F1 performance on validation data stops improving with patience of three continuous iterations, i.e. after three continuous epochs with no improvement, training will be stopped.",
"In addition, we set the number of training episodes to K = 20 and stop training when this number of episodes is reached or there is no unlabeled data left for sampling; the latter case is often the reason for stopping in our self-training method.",
"In addition, we experiment with different number of Leitner queues chosen from n = { 3 , 5 , 7 , 9 , 11 } .",
"Standard self-training : This approach iteratively trains a network on current labeled data and applies it to current unlabeled data; it uses a prediction confidence threshold to sample unlabeled instances (Zhu, 2006).",
"We set the best confidence threshold from { .",
"80 , .",
"85 , .",
"90 , .",
"95 } using validation data.",
"Autoencoder self-training (Dai and Le, 2015): This approach first pretrains a network using unlabeled data (through a layer-wise training approach to optimally reconstruct the inputs), and then fine-tunes it using labeled data with respect to the target task.",
"Adversarial self-training (Miyato et al., 2016): This model utilizes pretraining as described above, but also applies adversarial perturbations to word embeddings for more effective learning (perturbation is applied to embeddings instead of word inputs because words or their one-hot vectors do not admit infinitesimal perturbation; the network is trained to be robust to the worst perturbation).",
"Knowledge Transfer self-training (Noroozi et al., 2018): This model uses a clustering approach (e.g. k-means) to create clusters of",
"unlabeled instances that have similar representations, where representations are derived from standard pretraining as described above.",
"The model then pretrains a network by learning to classify unlabeled instances to their corresponding clusters.",
"The resulting pretrained network is then fine-tuned with respect to the target task using labeled data (with slight modification at prediction layer which makes the network suitable for target task).",
"We set the best number of clusters from { 10 , 20 , . . . , 100 } based on model performance on validation data.",
"Table 4 reports Macro-F1 performance of different models; we report average performance across five random test sets for each task (see Section 3.1 and Table 3).",
"The performance of base classifiers in supervised settings, where the networks are only trained on original labeled datasets, is reasonably high on IMDb ( 73 . 02 ) and low on Churn ( 65 . 77 ).",
"Standard ST (SST) improves performance on IMDb but not on Churn dataset.",
"SST achieves its best performance (on validation data) in the first few episodes when, on average, 1 .",
"4 K and 0 instances are sampled for IMDb and Churn datasets respectively.",
"Beyond that, the performance considerably decreases down to 66 .",
"94 (IMDb) and 57 .",
"04 (Churn) respectively.",
"This is perhaps due to imbalanced class size in Churn dataset, failure of SST to explore the data space, or classification mistakes that reinforce each other.",
"Several previous works also observed no improvement with SST (Gollapalli et al., 2013; Zhu and Goldberg, 2009; Zhang and Rudnicky, 2006); but some successful applications have been reported (Wu et al., 2018; Zhou et al., 2012; Riloff and Jones, 1999; Yarowsky, 1995; HEARST, 1991).",
"The result also show that pretraining and adversarial-based training, PST and AST in Table 4 respectively, improve the performance of base classifiers by 3 .",
"34 and 3 .",
"37 points in macro-F1 on IMDb, and by 1 .",
"5 and 1 .",
"93 points on Churn dataset.",
"In addition, since PST and AST show comparable performance, we conjecture that when original labeled data has a small size, adversarial-based self-training do not considerably improve pretraining.",
"But, considerable improvement can be achieved with larger amount of labeled data, see (Miyato et al., 2016) for detailed comparison on pretraining and adversarial-based training.",
"The results also show that knowledge transfer (KST) outperforms PST and AST on IMDb indicating that good initial labels derived through clustering information could help semi-supervised learning, even with small amount of seed labeled data.",
"Table 4 also shows the result of our model, Leitner ST (LST).",
"The best performance of LST is obtained using n = 5 and n = 7 queues for IMDb and Churn datasets respectively.",
"Considering these queue lengths, our model outperforms base classifiers by 5 .",
"25 and 4 .",
"13 points in Macro-F1 on IMDb and Churn datasets respectively; similar to PST and AST, our model results in a greater gain when the learner has higher initial performance.",
"It also improves the best self-training baseline, KST for IMDb and AST for Churn, by 1 .",
"16 and 2 .",
"2 points in macro-F1 on IMDb and Churn datasets respectively where both differences are significant (average -values based on t-test are . 004 and . 015 respectively).",
"We investigate several questions about our model to shed light on its improved performance.",
"One partial explanation is that by differentiating instances and augmenting the informative ones, we are creating a more powerful model that better explores the space of unlabeled data.",
"In this section, we elaborate on the behavior of our model by conducting finer-grained analysis at queue-level and investigating the following questions in the context of challenges of semi-supervised learning.",
"Due to space limit, we mainly report results on IMDb and discuss corresponding behaviors on Churn dataset in the text.",
"We analyze queue level performance to understand how instances of different queues contribute in creating better models during the self-training process.",
"For this experiment, we train networks using our Leitner self-training framework as normal (where, at every iteration, only instances of the designated queue are added to training data), and report the average macro-F1 performance of the networkon validation dataif it is trained with instances of each queue.",
"Concretely, we report average macro-F1 performance of models learned at line 6 of Algorithm 2 (see M q s in Table 2).",
"Figures",
"2(a) and",
"2(b) show the results on IMDb and Churn datasets for n = 5 and n = 7 queues respectively.",
"Note that the last queue for Churn dataset, q 6 , has never been reached by any instance.",
"This is perhaps because of the difficulty of this task 4 and low initial performance of the network on Churn dataset.",
"q 2 on IMDb and q 4 on Churn dataset result in the best average performance across training episodes, both queues are close to the middle.",
"In addition, the result show that the highest queues ( q 4 for IMDb and q 5 for Churn) are often not the best queues.",
"This result can justify the lower performance of Standard ST (SST) as instances in these queues are the easiest (and perhaps most confident ones) for the network; we further analyze these queues in Section 4.2.",
"5 4.2 What's the Issue with Highest Queues?",
"As we discussed before, instances in the highest queues, although easy to learn for the classifier, are not informative and do not contribute to training an improved model; therefore, highest queues are often not selected by our model.",
"To understand the reason, we try to quantify how well instances of these queues match with training data.",
"For this purpose, we compute cosine similarity between representations of training instances (see below) and those in the highest and designated queues 4 Churn prediction is a target-dependent task, largely affected by negation and function words, e.g. compare switching from and switching to , and language complexity, e.g. the tweets hate that I may end up leaving BRAND cause they have the best service is a positive yet churny tweet.",
"5 Note that the performance on lower queues (e.g. q 1 for IMDb and q 0 for Churn) are higher than expected.",
"This is because, at the end of each iteration, instances of designated (best-performing) queuesbut not lower queuesare added to training data; instances of designated queues help creating better and more robust models which still perform well even if instances of lower queues are added.",
"where T e R m e d and Q e R p e d indicate representations of training instances and those of a given target queue respectively (where d indicates the dimension of representations, and m e and p e indicate number of instances in training data and target queue at episode e respectively), and cosine(.,.) computes L 2 -normalized dot product of its input matrices.",
"To obtain the above representations for instances, we compute the output of the last hidden layer (the layer below prediction layer) of the trained network at each episode.",
"These outputs can be considered as feature representations for inputs.",
"For finer-grained comparison, we compute similarities with respect to positive and negative classes.",
"As the results in Figure",
"2(c) show, instances in the highest queue match well with current training data (and hence the current model), and, therefore, are less informative.",
"On the other hand, instances in the designated queues show considerably smaller similarity with training instances in both positive and negative classes, and, therefore, do not match well with training data.",
"These instances are more informative, and help the network to better explore the space of unlabeled data and optimize for the target task.",
"We analyze different queues to measure the extent of diversity that each queue introduces to training data during our normal self-training process q_0 q_1 q_2 q_3 q_4 q_desig Queue ID 0.0 0.1 0.2 0.3 0.4 0.5 0.6 D i v e r s i t y 0.44 0.47 0.60 0.39 0.39 0.57 Figure 3: The amount of diversity that instances of each queue introduce if added to training data (on IMDb).",
"where, at every iteration, only instances of the designated queue are added to training data.",
"Specifically, we compute the extent of diversity that each given queue introduces as follows: 1 KK (cid:88) e =1 1 cosine ( T e , concat ( T e , Q e )) where, as before, T e and Q e indicate the representations of training and queue instances at episode e respectively, and concat(.,.) is a function that creates a new dataset by vertically concatenating T e and Q e .",
"Figure 3 shows the results.",
"On IMDb, q 2 and designated queues show greater diversity to training data compared to other queues.",
"We note that q 0 carries a greater diversity than q 3 and q 4 , but, as we observed in Figure 2, instances of q 0 do not improve performance of the model, perhaps due to their difficulty or wrong pseudo labels.",
"We observe similar behavior in case of Churn dataset where q 4 introduces the highest diversity.",
"From this analysis, we conclude that Leitner self-training enables sampling diverse sets of instances that contributes to training an improved model.",
"For this analysis, we create a considerably more diverse queue at every self-training episode and treat it as the designated queue.",
"We create the diverse queue by sampling instances with high prediction confidence from all queues.",
"In particular, at every episode, we rank instances of each queue based on their prediction confidence and create a diverse queue by combining top r % instances of each queue, where r indicates the rate of adding new instances and set to r = 10% .",
"We note that a smaller rate is better for adding instances because it allows the model to gradually consume unlabeled instances with high prediction confidence.",
"Table 5 shows the effect of diverse queues on the performance of our model on both IMDb and Churn datasets.",
"The results show that diverse queues improve the performance of our Leitner self-training model from 78 .",
"27 (reported in Table 4) to 80 .",
"71 on IMDb, i.e. 2 .",
"44 points improvement in macro-F1.",
"However, the corresponding performance on Churn dataset decreases from 69 .",
"90 to 68 .",
"56 , i.e. 1 .",
"34 points decrease in macro-F1.",
"The inverse effect of diverse queues in case of Churn dataset is because diverse queues suffer from the issue of considerable class imbalance more than designated queues.",
"This is because highly confident instances which accumulate in higher queues are often negative instances in case of Churn prediction.",
"Although we tackle this issue by weighting the loss function during training, diverse positive instances which are different from their training counterparts are still needed for performance improvement.",
"We investigate the challenges associated with our data sampling policy by conducting finer-grained analysis on instance movement patterns among queues.",
"To illustrate, assume that we have a Leit-0 1 2 3 4 5 Queue# 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 S t a n d a r d d e v i a t i o n IMDB Churn Figure 4: Deviation in instance movements for each queue (in terms of average standard deviation over all training episodes).",
"ner queue of size n = 3 and the following movement patterns for four individual instances that ultimately home in on q 0 (recall that correct prediction promotes an instance to a higher queue, while wrong prediction demotes it to a lower queue): q 0 q 0 q 0 q 0 q 0 : always in q 0 q 0 q 1 q 0 q 0 q 0 : mainly in q 0 q 0 q 1 q 0 q 1 q 0 : partially in q 0 q 0 q 1 q 2 q 1 q 0 : partially in q 0 & q 1 .",
"Although all these instances ultimately home in on the same queue, they may have different contributions to the training of a model because there is a considerable difference in the ability of the downstream network in learning their labels.",
"Therefore, if there is a large deviation among movement patterns of instances of the same queue, better data sampling policies could be developed, perhaps through finer-grained queue-level sampling.",
"For this analyses, we keep track of instance movements among queues and measure standard deviation among movement patterns of instances of the same queue at every self-training episode, and report the average of these deviations.",
"Figure 4 shows the results.",
"On both datasets, there is considerably greater deviation in movements for middle queues than lower/higher queues.",
"This is meaningful because Leitner system (and other spaced repetition schedulers) are expected to keep easy and hard instances at higher and lower queues respectively.",
"Since such instances mainly stay at lower or higher queues, we observe smaller deviation in their movements.",
"On the other hand, the corresponding values for middle queues indicate that movements in these queues are spread out over a larger range of queues.",
"From these results, we conjecture that a data sampling policy that conducts finer-grained analysis at queue-level (e.g. by taking into account queue movement patterns) could create better data samples.",
"Verifying this hypothesis will be the subject for future work.",
"Semi-supervised learning (Zhu, 2006; Chapelle et al., 2009) is a type of machine learning where one has access to a small amount of labeled data and a large amount of unlabeled data.",
"Self-training is a type of semi-supervised learning to boost the performance of downstream learners (e.g. classifiers) through data sampling from unlabeled data.",
"Most data sampling policies rely on prediction confidence of the downstream learner for sampling unlabeled data (Zhu and Goldberg, 2009).",
"Self-training has been successfully applied to various tasks and domains including word sense disambiguation (HEARST, 1991; Yarowsky, 1995), information extraction (Riloff and Jones, 1999), and object recognition (Zhou et al., 2012).",
"In addition, co-training (Blum and Mitchell, 1998; Zhang and Rudnicky, 2006; Wu et al., 2018) is another type of semi-supervised learning.",
"It assumes that each instance can be described using two distinct feature sets that provide different and complementary information about the instance.",
"Ideally, the two views should be conditionally independent, i.e., the two feature sets of each instance are conditionally independent given the class, and each view should be sufficient, i.e., the class of an instance can be accurately predicted from each view alone.",
"Co-training first learns separate downstream learners for each view using a small set of labeled data.",
"The most confident predictions of each learner on the unlabeled data are then used to iteratively construct additional labeled training data.",
"Recently Wu et al. (2018) developed an effective model based on reinforcement learning (specifically, a joint formulation of a Q-learning agent and two co-training classifiers) to learn data sampling policies and utilize unlabeled data space in the context of co-training methods.",
"Effective semi-supervised learning algorithms based on pretraining techniques (Hinton and Salakhutdinov, 2006; Bengio et al., 2007; Erhan et al., 2010) have been developed for text classification, deep belief networks (Hinton and Salakhutdinov, 2006), and stacked autoen-coders (Vincent et al., 2010; Bengio et al., 2007).",
"In particular, Dai and Le (2015) developed an autoencoder for the later supervised learning process.",
"Miyato et al. (2016) applied perturbations to word embeddings and used pretraining technique and adversarial training for effective semi-supervised learning.",
"These models although effective have not been well studied in the context of semi-supervised learning where models may have low initial performance or limited amount of labeled data.",
"In addition, pretraining is limited by the same architecture requirement in both pretraining and fine-tuning steps.",
"In this work, we extend previous work in self-training by developing a new and effective data sampling policy based on spaced repetition (Dempster, 1989; Cepeda et al., 2006; Averell and Heathcote, 2011) which addresses some of the above challenges.",
"In particular, our model's data sampling policy is not predetermined, it explores the entire data space and dynamically selects unlabeled instances with respect to the strength of a learner on a target task and easiness of unlabeled instances, and it relaxes the same model constraint of pretraining-based approaches by decoupling pretraining and fine-tuning steps.",
"We propose a novel method based on spaced repetition to self-train neural networks using small amount of labeled and large amount of unlabeled data.",
"Our model can select high-quality unlabeled data samples for self-training and outperforms current state-of-the-art semi-supervised baselines on two text classification problems.",
"We analyze our model from various perspectives to explain its improvement gain with respect to challenges of semi-supervised learning.",
"There are several venues for future work including",
"(a): finer-grained data sampling at queue level,",
"(b): extending our model to other machine learning algorithms that employ iterative training, such as boosting approaches, and",
"(c): applying this model to areas where neural networks have not been investigated, e.g. due to limited availability of labeled data.",
"I sincerely thank Mitra Mohtarami and anonymous reviewers for their insightful comments and constructive feedback."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"objective",
"method",
"method",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"result",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"other"
] |
[
"Procedures are inherently hierarchical.",
"To make videos , one may need to purchase a camera , which in turn may require one to set a budget .",
"While such hierarchical knowledge is critical for reasoning about complex procedures, most existing work has treated procedures as shallow structures without modeling the parent-child relation.",
"In this work, we attempt to construct an open-domain hierarchical knowledge-base (KB) of procedures based on wikiHow, a website containing more than 110 k instructional articles, each documenting the steps to carry out a complex procedure.",
"To this end, we develop a simple and efficient method that links steps ( e.g., purchase a camera ) in an article to other articles with similar goals ( e.g., how to choose a camera ), recursively constructing the KB.",
"Our method significantly outperforms several strong baselines according to automatic evaluation, human judgment, and application to downstream tasks such as instructional video retrieval.",
"1 1 Introduction A procedure includes some steps needed to achieve a particular goal (Momouchi, 1980).",
"Procedures are inherently hierarchical: a high-level procedure is composed of many lower-level procedures.",
"For example, a procedure with the goal make videos consists of steps like purchase a camera , set up lighting , edit the video , and so on, where each step itself is a procedure as well.",
"Such hierarchical relations between procedures are recursive: the lower-level procedures can be further decomposed into even more fine-grained steps: one may need to arrange the footage in order to edit the video .",
"Relatively little attention has been paid to hierarchical relations in complex procedures in the field Equal contribution.",
"1 A demo with partial data can be found at https://wikihow-hierarchy.github.io/.",
"of NLP.",
"Some work performed a shallow one-level decomposition and often required costly resources such as human expert task-specific annotation (Chu et al., 2017; Zhang et al., 2020a, 2021).",
"More attention has been paid in fields adjacent to NLP.",
"For example, Lagos et al. (2017) and Pareti et al. (2014) both create hierarchical structures in how-to documents by linking action phrases in one procedure to another procedure or by linking steps in how-to articles to resources like DBPedia (Auer et al., 2007).",
"This kind of linking is helpful for explaining complex steps to readers who do not have prior knowledge of the topic being explained.",
"In this paper, we revisit this important but understudied task to develop a simple and effective algorithm (Figure 1) to construct a hierarchical knowledge-base (KB) for over 110 k complex procedures spanning a wide range of topics from wikiHow, a large-scale how-to website that has recently become a widely-used resource in NLP (Zhou et al., 2019; Zellers et al., 2019; Zhang et al., 2020d,c).",
"2 From each wikiHow article which represents a procedure, we follow Zhang et al. (2020d) and extract the title as the goal ( e.g., g 1 in Figure 1), and the paragraph headlines as steps ( e.g., s 1 . . . s n ).",
"Next, we decompose the steps by linking them to articles with the same or a similar goal ( e.g., s 1 to g 2 ).",
"The steps of the linked article are treated as the finer-grained steps ( s i to s j ) of the linked step (s1).",
"In this way, the procedural hierarchies go from shallow (B1) to deep (B4).",
"To link steps and article goals, we employ a retrieve-then-rerank approach, a well-established paradigm in related tasks (Wu et al., 2019; Humeau et al., 2019).",
"Our hierarchy discovery model (3) first independently encodes each step and goal in wikiHow and searches the k nearest goals of similar meaning for each step (B2).",
"Then, it applies a dedicated joint encoder to calculate the similarity score between the step and each candidate goal, 2 www.wikihow.com 2998 : Make videos g 1 : Purchase a camera s 1 : Set up equipment s 2 :Choose a camera g 2 : Consider use case s j : Set a budget s i B1: Input : Record the video s 3 : Practice editing your videos s 4 cat ( s 1 , g 2 ) cat ( s 1 , g i ) cat ( s 1 , g j ) sim ( s 1 , g i ) = 0.3 sim ( s 1 , g 2 ) = 0.6 sim ( s 1 , g j ) = 0.1 c b 3.1 g 3 g 4 g k g j g 2 g i s 1 B5: Application 1 (4&5) Enrich wikiHow step-goal hyperlinks S: step collection Purchase a camera Set up equipment Consider use case G: goal collection Make videos Choose a camera Edit videos B2: Candidate retrieval (3.1) B3: Reranking (3.2) Make videos Purchase a camera Set up equipment Consider use case Set a budget g n s j s i B4: Output g s g g s s The suggested link is helpful B6: Application 2 (6) Video retrieval Stain cabinet s (retrieved) !\" Figure 1: The overview of our proposed method.",
"thus reranking the goals (B3).",
"This pipeline can efficiently search over a large candidate pool while accurately measuring the similarity between steps and goals.",
"With each step linked to an article goal, a hierarchical KB of procedures is thus constructed.",
"We evaluate our KB both intrinsically and extrinsically.",
"Intrinsically, the discovered links can be directly used to complete missing step-goal hyperlinks in wikiHow, which have been manually curated (B5).",
"Our proposed method outperforms strong baselines ( e.g., Lagos et al. (2017)) according to both automatic and human evaluation, in terms of recall and usefulness respectively (4, 5).",
"Extrinsically, we consider the task of retrieving instructional videos given textual queries.",
"We observe that queries that encode deeper hierarchies are better than those that do not (6).",
"This provides evidence that our KB can bridge the high-level instructions and the low-level executions of procedures, which is important for applications such as robotic planning.",
"We represent a procedure as a tree where the root node n represents a goal and its children nodes Ch ( n ) represent the steps of n .",
"We formulate the hierarchy discovery task as identifying the steps among Ch ( n ) that can themselves be a goal of some other finer-grained steps (sub-steps), which are inserted into the tree.",
"Figure 1, each article comprises a goal ( g ), and a series of steps ( Ch ( g ) ).",
"Therefore, each article forms a procedure tree of depth one.",
"We denote the collection of all goals and steps in wikiHow as G and S respectively.",
"Our hierarchy discovery algorithm aims to link a step s i S to a goal g G such that g has the same meaning as s i .",
"It then treats Ch ( g ) as Ch ( s i ) .",
"Given that g and s i are both represented by textual descriptions, the discovery process can be framed as a paraphrase detection task .",
"This discovery process can be applied recursively on the leaf nodes until the resulting leaf nodes reach the desired granularity, effectively growing a hierarchical procedure tree (B4 of Figure 1).",
"For each of the 1.5 million steps in the wikiHow corpus, we aim to select one goal that expresses the same procedure as the step from over 110 k goals.",
"We propose a simple and efficient method to deal with such a large search space through a two-stage process.",
"First, we perform retrieval , encoding each step and goal separately in an unsupervised fashion and select the k most similar goals for each step s .",
"This process is fast at the expense of accuracy.",
"Second, we perform reranking , jointly encoding a step with each of its candidate goals in a supervised fashion to allow for more expressive contextualized embeddings.",
"This process is more accurate at the expense of speed, since calculating each similarity score requires a forward pass in the neural network.",
"The goal with the highest similarity score is se-2999 lected and the step is expanded accordingly, as in B4 of Figure",
"In the first stage, we independently encode each step s S and goal g G with a model M b , resulting in embeddings e s 1 , e s 2 , ..., e s n and e g 1 , e g 2 , ..., e g m .",
"The similarity score between s and g is calculated as the cosine similarity between e s and e g .",
"We denote this first-stage similarity score as sim 1 ( s, g ) .",
"Using this score, we can obtain the topk most similar candidate goals for each step s , and we denote this candidate goal list as C ( s ) = [ g 1 , ..., g k ] .",
"To perform this topk search, we use efficient similarity search libraries such as FAISS (Johnson et al., 2017).",
"We instantiate M b with two learning-based paraphrase encoding models.",
"The first is the SP model (Wieting et al., 2019, 2021), which encodes a sentence as the average of the sub-word unit embeddings generated by SentencePiece (Kudo and Richardson, 2018).",
"The second is SBERT (Reimers and Gurevych, 2019), which encodes a pair of sentences with a siamese BERT model that is finetuned on paraphrase corpus.",
"For comparison, we additionally experiment with search engines as M b , specifically Elasticsearch with the standard BM25 weighting metric (Robertson and Zaragoza, 2009).",
"We index each article with its title only or with its full article.",
"We also experiment with Bing Search API where we limit the search to wikiHow website only 3 .",
"The BM25 with the former setting resembles the method proposed by Lagos et al. (2017).",
"While efficient, encoding steps and goals independently is likely sub-optimal as information in the steps cannot be used to encode the goals and vice-versa.",
"Therefore, we concatenate a step with each of its topk candidate goals in C ( s ) and feed them to a model M c that jointly encodes each step-goal pair.",
"Concretely, we follow the formulation of Wu et al. (2019) to construct the input of each step-goal pair as: [CLS] ctx [ST] step [ED] goal [SEP] where [ST] and [ED] are two reserved tokens in the vocabulary of a pretrained model, which mark the location of the step of interest.",
"ctx is the context for a step ( e.g., its surrounding steps or its goal) that could provide additional information.",
"3 www.bing.com The hidden state of the [CLS] token is taken as the final contextualized embedding.",
"The second-stage similarity score is calculated as follows: sim 2 ( s, g i ) = proj ( M c ( s, g i )) + sim 1 ( s, g i ) (1) where proj ( ) takes an d -dimension vector and turns it to a scalar with weight matrix W R d 1 , and is the weight for the first-stage similarity score.",
"Both W and are optimized through back-propagation (see more about labeled data in 4.1).",
"With labeled data, we finetune M c to minimize the negative log-likelihood of the correct goal among the topk candidate goal list, where the log-likelihood is calculated as: ll ( s, g i ) = log (cid:32) softmax (cid:32) sim 2 ( s, g i ) (cid:80) g j C ( s ) sim 2 ( s, g j ) (cid:33)(cid:33) (2) Compared to the randomly sampled in-batch negative examples, the topk candidate goals are presumably harder negative examples (Karpukhin et al., 2020) and thus the model must work harder to distinguish between them.",
"We will explain the extraction of the labeled step-goal pairs used to train this model in 4.1.",
"Concretely, we experiment with two pretrained models as M c , specifically BERT-base (Devlin et al., 2019) and DEBERTA -large finetuned on the MNLI dataset (He et al., 2021).",
"We pick them due to their high performance on various tasks (Zhang et al., 2020e).",
"4 In addition, we consider including different ctx in the reranking input.",
"For each step, we experiment with including no context, the goal of the step, and the surrounding steps of the step within a window-size n ( n =1).",
"Some steps in wikiHow could not be matched with any goal.",
"Such steps are unlinkable because of several reasons.",
"First, the step itself might be so fine-grained that further instructions are unnecessary (e.g. Go to a store ).",
"Second, although wikiHow spans a wide range of complex procedures, it is far from comprehensive.",
"Some goals simply do not exist in wikiHow.",
"Hence, we design a mechanism to predict whether a step is linkable or not explicitly.",
"More specifically, we add a special token unlinkable , 4 https://cutt.ly/oTx5gMM.",
"taken from the reserved vocabulary of a pretrained model, as a placeholder goal to the topk candidate goal list C ( s ) , and this placeholder is treated as the gold-standard answer if the step is determined to be unlinkable.",
"The similarity score between a step and this placeholder goal follows Equation 1 and sim 1 ( s, unlinkable ) is set to the lowest first-stage similarity score among the candidate goals retrieved by the first-stage model.",
"Accurately labeling a step as unlinkable is nontrivial it requires examining whether the step can be linked to any goal in G .",
"Instead, we train the model to perform this classification by assigning unlinkable to steps that have a ground-truth goal but this goal does not appear in the topk candidate goal list.",
"The loss follows Equation",
"2. 4 Automatic Step Prediction Evaluation To train our models and evaluate how well our hierarchy discovery model can link steps to goals, we leverage existing annotated step-goal links.",
"In wikiHow, there are around 21 k steps that already have a hyperlink redirecting it to another wikiHow article, populated by editors.",
"We treat the title of the linked article as the ground-truth goal for the step.",
"For example, as in B5 of Figure 1, the ground-truth goal of the step Create a channel is Make a Youtube Channel .",
"We build the training, development and test set with a 7:2:1 ratio.",
"Candidate Retrieval The SP model achieves the best recall of all models, outperforming SBERT by a significant margin.",
"Models based on search engines with various configurations, including the commercial Bing Search, are less effective.",
"In addition, BM25 (goal only), which does not consider any article content, notably outperforms BM25 (ar-ticle) and Bing Search, implying that the full articles may contain undesirable noise that hurts the search performance.",
"This interesting observation suggests that while commercial search engines are powerful, they may not be the best option for specific document retrieval tasks such as ours.",
"The recall@ 30 of the SP model is 72.5%, which bounds the performance of any reranker.",
"6 As seen in the bottom half of Table 1, reranking is highly effective, as the best configuration brings a 19.6% improvement on recall@ 1 , and the recall@ 10 almost reaches the upper bound of this stage.",
"We find that under the same configuration, DEBERTA -large finetuned on MNLI (He et al., 2021) outperforms BERT by 1.7% on recall@ 1 , matching the reported trends from BERTScore.",
"5 To qualitatively understand the benefit of the reranker, we further inspect randomly sampled predictions of SP and DEBERTA .",
"We find that the reranker largely resolves partial matching problems observed in SP.",
"As shown in C1 of Table 2, SP tends to only consider the action ( e.g., learn) or the object ( e.g., bike) and mistakenly rank those partially matched goals the highest.",
"In contrast, the reranker makes fewer mistakes.",
"In addition, we observed that the reranker performed better on rare words or expressions.",
"For example, as shown in the last column of C1 , the reranker predicts that vinyl records is closely related to LP records and outputs the correct goal while SP could not.",
"Second, we observe that the surrounding context and the goal of the query step are helpful in general.",
"Incorporating both contexts brings a 3% improvement in recall@ 1 .",
"While steps are informative, 6 We only experiment with SP because it is the best retrieval model, providing a larger improvement headroom.",
"they could be highly dependent on the contexts.",
"For example, some steps are under-specified, using pronouns to refer to previously occurring contents or simply omitting them.",
"The additional information introduced by the context helps resolve these uncertainties.",
"In the first example of C2 , the context minecraft is absent in the query step but present in the goal of that step.",
"Similarly, in the second example, the context eyebrows is absent in the query step but present in both the goal and the surrounding steps.",
"Finally, adding unlinkable prediction harms the recall@ 1 due to its over-prediction of unlinkable for steps whose ground-truth goal exists in the topk candidate list.",
"We also experiment with setting a threshold tuned on the development set to decide which steps are unlinkable, in which case the recall@ 1 degrades from 55.4% to 41.9%.",
"Therefore, this explicit learnable prediction yields more balance between the trade-offs.",
"In 5, we will demonstrate that this explicit unlinkable prediction is overall informative to distinguish steps of the two types through crowdsourcing annotations.",
"We empirically find that setting the weight of sim 1 ( s, g ) ( ) to 0 is beneficial in the unlinkable prediction setting.",
"The automatic evaluation strongly indicates the effectiveness of our proposed hierarchy discovery model.",
"However, it is not comprehensive because the annotated hyperlinks are not exhaustive.",
"We complement our evaluation with crowdsourced human judgments via Amazon Mechanical Turk (MTurk).",
"Each example of annotating is a tuple of a step, its original goal from wikiHow, and the top-ranked e x a c t h e l p f u l r e l a t e d unh e l p f u l 0 50 100 150 200 250 300 350 400 linkable e x a c t h e l p f u l r e l a t e d unh e l p f u l unlinkable DeBERTa-UL DeBERTa SP Figure 2: Crowd workers' ratings of step-goal links predicted by our models.",
"goal predicted by one of our models.",
"For each example, we ask three MTurk workers to judge whether the steps in the article of the linked goal are exact, helpful, related, or unhelpful with regard to accomplishing the queried step.",
"Details about the task design, task requirements, worker pay, example sampling, etc. are in A. We select SP, DEBERTA , and DEBERTA with unlinkable prediction and = 0 (DEBERTA-UL ) for comparison.",
"We attempt to answer the following questions.",
"First, does the performance trend shown in automatic evaluation hold in human evaluation?",
"Second, can the unlinkable predictions help avoid providing users with misleading information (Rajpurkar et al., 2018)?",
"For the purpose of the second question, we separate the examples into two groups.",
"One contains linkable examples.",
"Namely, those whose top1 prediction is not predicted as unlinkable by the DEBERTA-UL model.",
"Ideally, the linked articles from these examples should be helpful.",
"The other 3002 group contains unlinkable examples.",
"For these, we evaluate the second-highest ranked prediction of the DEBERTA-UL model.",
"Ideally, the linked articles from these examples should be unhelpful.",
"The corresponding crowd judgment is shown in Figure",
"2. Comparing the models, the DEBERTA model and the DEBERTA-UL model have similar performance, while greatly outperforming the SP model.",
"This shows that our proposed model decomposes much more helpful finer-grained steps to assist users with tasks, similar to the trend observed in our automatic evaluation.",
"Comparing the two graphs, it is apparent that when the DEBERTAUL model predicts unlinkable for a step, the suggested decompositions of all models are more likely to be unhelpful.",
"This implies the high precision of the unlinkable prediction, effectively avoiding misleading predictions.",
"Note that our study does not explicitly require subjects to carry out the task, but only annotates whether they find the instructions helpful.",
"In addition to intrinsic evaluation, we take a further step to study the usefulness of our open-domain hierarchical KB to downstream tasks.",
"We select video retrieval as the extrinsic evaluation task, which aims at retrieving relevant how-to videos for a textual goal to visually aid users.",
"More formally, given a textual goal g , the task is to retrieve its relevant videos v g from the set of all videos, with a textual query q .",
"Intuitively, our KB can be useful because videos usually contain finer-grained steps and verbal descriptions to accomplish a task.",
"Therefore, the extra information presented in decomposed steps could benefit retrieving relevant videos.",
"We use Howto100M (Miech et al., 2019) for evaluation.",
"It is a dataset of millions of instructional videos corresponding to over 23 k goals.",
"We construct our video retrieval corpus by randomly sampling 1 , 000 goals ( e.g., record a video ) with their relevant videos.",
"The relevant videos v g = { v 1 , v 2 , ..., v n } of each goal g in the dataset are obtained by selecting the top 150 videos among the search results of the goal on YouTube.",
"7 For 7 Although the relevance between a goal and a video is not explicitly annotated in the Howto100M dataset, we argue that with the sophisticated engineering of the YouTube video search API and hundreds of thousands user clicks, the highly Query R/P@1 R/P@10 R/P@25 R/P@50 MRL 0 2.2/89.2 19.2/78.1 39.9/66.0 56.6/48.2 79.49 L 1 2.2/88.0 19.2/78.0 40.1/66.4 58.1/49.6 75.79 FIL-L 1 2.2/ 89.9 20.2/81.7 43.1/71.2 63.2/53.8 66.32 FIL-L 2 2.2/89.4 20.3/82.7 43.9/72.3 65.0/55.2 63.38 L 0 12.1/81.7 59.8/42.8 71.9/20.8 77.9/11.3 41.60 L 1 11.8/79.7 61.2/43.9 74.1/21.4 80.5/11.6 36.70 FIL-L 1 12.4/83.7 66.0/47.3 77.4/22.4 82.9/ 12.0 33.35 FIL-L 2 12.5/84.4 66.1/47.7 78.0/22.5 83.3/12.0 32.30 L 0 11.4/82.6 59.2/45.2 71.8/22.1 77.8/12.0 43.11 L 1 11.2/81.3 60.4/46.2 73.8/22.7 79.9/12.3 38.19 FIL-L 1 11.7/85.1 64.8/49.5 77.2/23.8 82.2/ 12.7 34.76 FIL-L 2 11.6/84.5 65.5/50.0 77.9/24.0 82.7/12.7 34.13 Table 3: The Recall/Precision@ N (%, ) and mean rank (MR, ) with different queries on the relevant video retrieval task on the training (top), development (middle) and the test set (bottom).",
"each goal g , we randomly split its relevant videos v g into three sub-sets v tr g , v dev g and v test g with a ratio of 7.5:1.25:1.25, as the training, development, and testing sets.",
"8 6.2 Setup Since our KB is fully textual, we also represent each video textually with its automatically generated captions.",
"For the search engine, we use Elasticsearch with the standard BM25 metric (Robertson and Zaragoza, 2009).",
"9 We denote the relevance score calculated by BM25 between the query q and a textually represented video v as Rel ( q, v ) .",
"We experiment with four different methods, which differ in how they construct the query q : L 0 : Goal only.",
"The query is the goal g itself.",
"This is the minimal query without any additional hierarchical information.",
"The relevance score is simply Rel ( q, v ) = Rel ( g, v ) .",
"L 1 : Goal + Children.",
"The query is a concatenation of the goal g and its immediate children steps Ch ( g ) .",
"This query encodes hierarchical knowledge that already exists in wikiHow.",
"The relevance score is then defined as a weighted sum, Rel ( q, v ) = w g Rel ( g, v ) + w s (cid:80) s Ch ( g ) Rel ( s, v ) .",
"The weights w g and w s are tuned on a development set and set to 1.0 and 0.1 respectively.",
"FIL-L 1 : Goal + Filtered children.",
"The query is a concatenation of the goal g and a filtered sequence of its children Ch ( g ) .",
"Intuitively, decomposing a goal introduces richer information ranked videos likely demonstrate the queried goal.",
"8 We explain more about the appropriateness of the downstream video retrieval task setup in B.1.",
"but also introduces noise, since certain steps may not visually appear at all ( e.g., enjoy yourself ).",
"Therefore, we perform filtering and only retain the most informative steps, denoted by Ch (cid:48) ( g ) .",
"Specifically, to construct Ch (cid:48) ( g ) for a goal g , we use a hill-climbing algorithm to check each step s from Ch ( g ) , and include s into the query only if it yields better ranking results for the ground-truth videos in the training set v train g .",
"10 The relevance score is defined as Rel ( q, v ) = w g Rel ( g, v )+ w s (cid:80) s Ch (cid:48) ( g ) Rel ( s, v ) , where w g is set to 1.0 and w s is set to 0.5 after similar tuning.",
"FIL-L 2 : Goal + Filtered children + Filtered grand-children.",
"The query is the concatenation of the goal g and a filtered sequence of its immediate children Ch ( g ) and grandchildren Ch ( s ) ( s Ch ( g ) ).",
"These filtered steps are denoted by Ch (cid:48) ( g + Ch ( g )) .",
"This two-level decomposition uses the knowledge from our KB, therefore including lower-level information about the execution of the goal.",
"We perform the same filtering algorithm as in FIL-L 1 , and we define Rel ( q, v ) = w g Rel ( g, v )+ w s (cid:80) s Ch (cid:48) ( g + Ch ( g )) Rel ( s, v ) .",
"w g is set to 1.0 and w s is set to 0.5.",
"We report the precision@ N , recall@ N and mean rank (MR) following existing work on video retrieval (Luo et al., 2021) (see B.2 for metric def-initions).",
"Table 3 lists the results.",
"First, queries that encode hierarchies of goals ( L 1 , FIL-L 1 and FIL-L 2 ) are generally more beneficial than queries that do not ( L 0 ).",
"The steps of goals enrich a query and assist the retrieval.",
"Second, video-oriented filtering yields significant improvement over the un-filtered L 1 queries since it produces a set of more generalizable steps that are shared among multiple videos.",
"Although steps in wikiHow articles are human-written, they are not grounded to real-world executions of that goal.",
"Many steps do not have corresponding executions in the videos and become noisy steps in the L 1 queries.",
"More interestingly, we observe that queries using deeper hierarchies (FIL-L 2 ) outperform the shallower ones (FIL-L 1 ) in most cases.",
"This is probably due to the fact that how-to videos usually contain detailed (verbal) instructions of a procedure, which are better aligned with more fine-grained steps found in FIL-L 2 .",
"In our qualitative study, we investigate how FIL-L 2 queries with deeper hierarchies help retrieval.",
"Table 4 list FIL-L 1 and FIL-L 2 queries for two goals.",
"We find that the FIL-L 2 queries are more informative and cover more aspects.",
"For example, the FIL-L 2 queries for stain cabinet and make avocado fries consist of the preparation, actual operations, and the post-processing steps, while the FIL-L 1 query only contains the first one.",
"In addition, we search the goals on Google and list the key moments of some randomly sampled videos.",
"11 These key moments textually describe the important clips of the videos, and therefore they presumably also serve as the query for the goal.",
"We find that the FIL-L 2 query of make avocado fries explains a few necessary steps to accomplish this goal, while the key moment is mostly composed of the ingredients of this dish.",
"This comparison suggests the potential integration of our induced hierarchical knowledge to identify key moments in videos in the future.",
"In this section, we study the properties of the hierarchies.",
"First, what kind of steps are likely to be linked to another goal and are thus decomposed?",
"Second, what do the decomposed steps look like?",
"We group steps into two clusters.",
"The first contains the immediate steps of a goal ( s Ch ( g ) ) whose prediction is not unlinkable .",
"The second contains the decomposed steps of the steps in the first cluster ( s (cid:48) Ch ( s ) ).",
"We use spaCy (Hon-nibal et al., 2020) to extract and lemmatize the verb in each step and rank the verbs by their frequency in each cluster.",
"Next, the top100 most frequent verbs in each cluster are selected and we measure the rank difference of these verbs in the two clusters.",
"Figure 3 plots the verbs with largest rank difference and the full figure is in Figure",
"4. We observe that verbs that convey complex actions and intuitively consist of many other actions become less frequent after the decomposition ( e.g., decorate).",
"On the other hand, verbs that describe the action itself gain in frequency after the decomposition ( e.g., push, hold, press).",
"This observation follows our assump-tion that the decomposition would lead to more fine-grained realizations of a complex procedure.",
"Some other more abstract actions such as learn and decide also increase in frequency, as some low-level goals are explained with more complex steps.",
"Linking Procedural Events To the best of our knowledge, two other pieces of work Pareti et al. (2014); Lagos et al. (2017) tackled the task of linking steps in procedures to other procedures.",
"Both of them also drew the procedures from wikiHow.",
"While we share the same task formulation, our work makes several additional contributions: (1) a retrieval-then-rerank method significantly increases linking recall; (2) more comprehensive experiments with the manual and the downstream evaluation that showcases the quality and usefulness of the linked data and (3) experiments and data with broader coverage over all of WikiHow, not just the Computer domain.",
"Procedural Knowledge Procedural knowledge can be seen as a subset of knowledge pertaining to scripts (Abelson and Schank, 1977; Rudinger et al., 2015), schemata (Rumelhart, 1975) or events.",
"A small body of previous work (Mujtaba and Ma-hapatra, 2019) on procedural events includes extracting them from instructional texts (Paris et al., 2002; Delpech and Saint-Dizier, 2008; Zhang et al., 2012) and videos (Alayrac et al., 2016; Yang et al., 2021a), reasoning about them (Takechi et al., 2003; Tandon et al., 2019; Rajagopal et al., 2020), or showing their downstream applications (Pareti, 2018; Zhang et al., 2020d; Yang et al., 2021b; Zhang et al., 2020b; Lyu et al., 2021), specifically on intent reasoning (Sap et al., 2019; Dalvi et al., 2019; Zhang et al., 2020c).",
"Most procedural datasets are collected by crowdsourcing then manually cleaned (Singh et al., 2002; Regneri et al., 2010; Li et al., 2012; Wanzare et al., 2016; Rashkin et al., 2018) and are hence small.",
"Existing work has also leveraged wikiHow for large-scale knowledge-base construction (Jung et al., 2010; Chu et al., 2017; Park and Motahari Nezhad, 2018), but our work is the first to provide a comprehensive intrinsic and extrinsic evaluation of the resulting knowledge-base.",
"We propose a search-then-rerank algorithm to effectively construct a hierarchical knowledge-base of procedures based on wikiHow.",
"Our hierarchies are shown to help users accomplish tasks by accurately providing decomposition of a step and improve the performance of downstream tasks such as retrieving instructional videos.",
"One interesting extension is to further study and improve the robustness of our two-stage method to tackle more complex linguistic structures of steps and goals ( e.g., negation, conjunction).",
"Another direction is to enrich the resulting knowledge-base by applying our method to other web resources, 12 or to other modalities ( e.g., video clips).",
"Future work 12 e.g., https://www.instructables.com/, https://www.diynet work.com/how-to 3005 could also explore other usages such as comparing and clustering procedures based on their deep hierarchies; or applying the procedural knowledge to control robots in the situated environments.",
"This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), the IARPA BETTER Program (contract 2019-19051600004), and the Amazon Alexa Prize TaskBot Competition.",
"Approved for Public Release, Distribution Unlimited.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of Amazon, DARPA, IARPA, or the U.S. Government.",
"We thank Ziyang Li and Ricardo Gonzalez for developing the web demo, John Wieting for support on implementation, and the anonymous crowd workers for their annotations."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"other",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"objective",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"The performance of multilingual pretrained models is highly dependent on the availability of monolingual or parallel text present in a target language.",
"Thus, the majority of the world's languages cannot benefit from recent progress in NLP as they have no or limited textual data.",
"To expand possibilities of using NLP technology in these under-represented languages, we systematically study strategies that relax the reliance on conventional language resources through the use of bilingual lexicons, an alternative resource with much better language coverage.",
"We analyze different strategies to synthesize textual or labeled data using lexicons, and how this data can be combined with monolingual or parallel text when available.",
"For 19 under-represented languages across 3 tasks, our methods lead to consistent improvements of up to 5 and 15 points with and without extra monolingual text respectively.",
"Overall, our study highlights how NLP methods can be adapted to thousands more languages that are under-served by current technology.",
"1 1 Introduction Multilingual pretrained models (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020) have become an essential method for cross-lingual transfer on a variety of NLP tasks (Pires et al., 2019; Wu and Dredze, 2019).",
"These models can be finetuned on annotated data of a down-stream task in a high-resource language, often English, and then the resulting model is applied to other languages.",
"This paradigm is supposed to benefit under-represented languages that do not have annotated data.",
"However, recent studies have found that the cross-lingual transfer performance of a language is highly contingent on the availability of monolingual data in the language during pretraining (Hu et al., 2020).",
"Languages with more 1 Code and data are available at: https: //github.com/cindyxinyiwang/expand-via-lexicon-based-adaptation .",
"Several works propose methods to adapt the pretrained multilingual models to low-resource languages, but these generally involve continued training using monolingual text from these languages (Wang et al., 2020; Chau et al., 2020; Pfeiffer et al., 2020, 2021).",
"Therefore, the performance of these methods is still constrained by the amount of monolingual or parallel text available, making it difficult for languages with little or no textual data to benefit from the progress in pretrained models.",
"Joshi et al. (2020) indeed argue that unsupervised pretraining makes the resource-poor poorer'.",
"Fig. 1 plots the language coverage of multilingual BERT (mBERT; Devlin et al., 2019), a widely used pre-trained model, and several commonly used textual data sources.",
"2 Among the 7,000 languages in the world, mBERT only covers about 1% of the languages while Wikipedia and CommonCrawl, the two most common resources used for pretraining and adaptation, only contain textual data from 4% of the languages (often in quite small quantities, partially because language IDs are difficult to obtain for low-resource languages (Caswell et al., 2020)).",
"Ebrahimi and Kann (2021) show that continued pretraining of multilingual models on a small amount of Bible data can significantly improve the performance of uncovered languages.",
"Although the Bible has much better language coverage of 23% , its relatively small data size and 2 Statistics taken from Ebrahimi and Kann (2021) and panlex.org .",
"constrained domain limits its utility (see 6)and 70% of the world's languages do not even have this resource.",
"The failure of technology to adapt to these situations raises grave concerns regarding the fairness of allocation of any benefit that may be conferred by NLP to speakers of these languages (Joshi et al., 2020; Blasi et al., 2021).",
"On the other hand, linguists have been studying and documenting under-represented languages for years in a variety of formats (Gippert et al., 2006).",
"Among these, bilingual lexicons or word lists are usually one of the first products of language documentation, and thus have much better coverage of the worlds' languages than easily accessible monolingual text, as shown in Fig. 1.",
"There are also ongoing efforts to create these word lists for even more languages through methodologies such as rapid word col-lection (Boerger, 2017), which can create an extensive lexicon for a new language in a number of days.",
"As Bird (2020) notes: After centuries of colonisation, missionary endeavours, and linguistic fieldwork, all languages have been identified and classified.",
"There is always a wordlist.",
". . . In short, we do not need to discover the language ex nihilo (L1 acquisition) but to leverage the available resources (L2 acquisition).",
"However, there are few efforts on understanding the best strategy to utilize this valuable resource for adapting pretrained language models.",
"Bilingual lexicons have been used to synthesize bilingual data for learning cross-lingual word embeddings (Gouws and Sgaard, 2015; Ruder et al., 2019) and task data for NER via word-to-word translation (Mayhew et al., 2017), but both approaches precede the adoption of pre-trained multilingual LMs.",
"Khemchandani et al. (2021) use lexicons to synthesize monolingual data for adapting LMs, but their experimentation is limited to several Indian languages and no attempt was made to synthesize downstream task data while Hu et al. (2021) argue that bilingual lexicons may hurt performance.",
"In this paper, we conduct a systematic study of strategies to leverage this relatively under-studied resource of bilingual lexicons to adapt pretrained multilingual models to languages with little or no monolingual data.",
"Utilizing lexicons from an open-source database, we create synthetic data for both continued pretraining and downstream Figure 2: Results for baselines and adaptation using synthetic data for both resource settings across three NLP tasks.",
"task fine-tuning via word-to-word translation.",
"Empirical results on 19 under-represented languages on 3 different tasks demonstrate that using synthetic data leads to significant improvements on all tasks (Fig. 2), and that the best strategy depends on the availability of monolingual data ( 5, 6).",
"We further investigate methods that improve the quality of the synthetic data through a small amount of parallel data or by model distillation.",
"We focus on the cross-lingual transfer setting where the goal is to maximize performance on a downstream task in a target language T .",
"Due to the frequent unavailability of labeled data in the target language, a pretrained multilingual model M is typically fine-tuned on labeled data in the downstream task D Slabel = { ( x Si , y Si ) } Ni =1 in a source language S where x Si is a textual input, y Si is the label, and N is the number of labeled examples.",
"The fine-tuned model is then directly applied to task data D Ttest = { x Ti , y Ti } i in language T at test time.",
"3 The performance on the target language T can often be improved by further adaptation of the pretrained model.",
"There are two widely adopted paradigms for adapting pretrained models to a target language using monolingual or parallel text.",
"MLM Continued pretraining on monolingual text D Tmono = { x Ti } i in the target language (Howard and Ruder, 2018; Gururangan et al., 2020) using a masked language model (MLM) objective has proven effective for adapting models to the target language (Pfeiffer et al., 2020).",
"Notably, Ebrahimi and Kann (2021) show that using as little as several thousand sentences can significantly improve the model's performance on target languages not covered during pretraining.",
"Trans-Train For target languages with sufficient parallel text with the source language D STpar = { ( x Si , x Ti ) } i , one can train a machine translation (MT) system that translates data from the source language into the target language.",
"Using such an MT system, we can translate the labeled data in the source language D Slabel into target language data (cid:98) D Tlabel = { ( (cid:98) x Ti , y Si ) } Ni =1 , and fine-tune the pretrained multilingual model on both the source and translated labeled data D Slabel (cid:98) D Tlabel .",
"This method often brings significant gains to the target language, especially for languages with high-quality MT systems (Hu et al., 2020; Ruder et al., 2021).",
"Both methods above require D Tmono or D STpar in target language T , so they cannot be directly extended to languages without this variety of data.",
"Joshi et al. (2020) classified the around 7,000 languages of the world into six groups based on the availability of data in each language.",
"The two groups posing the biggest challenges for NLP are: The Left-Behinds, languages with virtually no unlabeled data.",
"The Scraping-Bys, languages with a small amount of monolingual data.",
"We refer to this as the Few-Text setting.",
"These languages make up 85% of languages in the world, yet they do not benefit from the development of pretrained models and adaptation methods due to the lack of monolingual and parallel text.",
"In this paper, we conduct a systematic study of strategies directly targeted at these languages.",
"Since the main bottleneck of adapting to underrepresented languages is the lack of text, we adopt a data augmentation framework (illustrated in Fig. 3) that leverages bilingual lexicons, which are available for a much larger number of languages.",
"Given a bilingual lexicon DST lex between the source language S and a target language T , we create synthetic sentences (cid:101) x Ti in T using sentences x Si in S via word-to-word translation, and use this synthetic data in the following adaptation methods.",
"Pseudo MLM Using monolingual text D Smono = { x Si } i , we generate pseudo monolingual text (cid:101) D Tmono = { (cid:101) x Ti } i for T by replacing the words in x Si with their translation in T based on the lexicon DST lex .",
"We keep the words that do not exist in the lexicon unchanged, so the pseudo text (cid:101) x Ti can include words in both S and T .",
"We then adapt the pretrained multilingual model on (cid:101) D Tmono using the MLM objective.",
"For the Few-Text setting where some gold monolingual data D Tmono is available, we can train the model jointly on the pseudo and the gold monolingual data (cid:101) D Tmono D Tmono .",
"Pseudo Trans-train Given the source labeled data D Slabel = { ( x Si , y Si ) } Ni =1 , for each text example x Si we use DST lex to replace the words in x Si with its corresponding translation in T , resulting in pseudo labeled data (cid:101) D Tlabel = { ( (cid:101) x Ti , y Si ) } Ni =1 .",
"We keep the original word if it does not have an entry in the lexicon.",
"We then fine-tune the model jointly on both pseudo and gold labeled data (cid:101) D Tlabel D Slabel .",
"Since these methods only require bilingual lexicons, we can apply them to both No-Text and Few-Text settings.",
"We can use either of the two methods or the combination of both to adapt the model.",
"Challenges with Pseudo Data Our synthetic data (cid:101) DT could be very different from the true data DT because the lexicons do not cover all words in S or T , and we do not consider morphological or word order differences between T and S .",
"4 Nonetheless, we find that this approach yields significant improvements in practice (see Tab. 3).",
"We also outline two strategies that aim to improve the quality of the synthetic data in the next section.",
"Label Distillation The pseudo labeled data (cid:101) DT label = { ( (cid:101) x T i , y Si ) } N i =1 is noisy because the syn-4",
"thetic examples (cid:101) x Ti could have a different label from the original label y Si (see Tab. 1).",
"To alleviate this issue, we propose to automatically correct the labels of pseudo data using a teacher model.",
"Specifically, we fine-tune the pretrained multilingual model as a teacher model using only D Slabel .",
"We use this model to generate the new pseudo labeled data (cid:101) D Tdistill = { ( (cid:101) x Ti , (cid:101) y Ti ) } Ni =1 by predicting labels (cid:101) y Ti for the pseudo task examples (cid:101) x Ti .",
"We then fine-tune the pretrained model on both the new pseudo labeled data and the source labeled data (cid:101) D Tdistill D Slabel .",
"Induced Lexicons with Parallel Data For the Few-Text setting, we can leverage the available parallel data D STpar to further improve the quality of the augmented data.",
"Specifically, we use unsupervised word alignment to extract additional word pairs (cid:101) D STlex from the parallel data, and use the combined lexicon (cid:101) D STlex D STlex to synthesize the pseudo data.",
"In this section, we outline the tasks and data setting used by all experiments.",
"We will then introduce the adaptation methods and results for the No-Text setting in 5 and the Few-Text setting in",
"6. 4.1 Tasks, Languages and Model We evaluate on the gold test sets of three different tasks with relatively good coverage of underrepresented languages: named entity recognition (NER), part-of-speech (POS) tagging, and dependency parsing (DEP).",
"We use two NER datasets: WikiAnn NER (Pan et al., 2017; Rahimi et al., 2019) and MasakhaNER (Adelani et al., 2021).",
"We use the Universal Dependency 2.5 (Nivre et al., 2018) dataset for both the POS and DEP tasks.",
"We use English as the source language for all experiments.",
"For each dataset, we use the English training data and select the checkpoint with the best performance on the English development set.",
"For MasakhaNER, which does not have English training data, we follow Adelani et al. (2021) and Language iso Family Task Lex Count Acehnese ace Austronesian NER 0.5k Bashkir bak Turkic NER 3.4k Crimean Turkish crh Turkic NER 4.4k Hakka Chinese hak Sino-Tibetan NER 8.5k Igbo ibo Niger-Congo NER 3.6k Ilokano ilo Austronesian NER 4.0k Kinyarwanda kin Niger-Congo NER 4.7k Eastern Mari mhr Uralic NER 21.7k Maltese mlt Afro-Asiatic All 1.0k Maori mri Austronesian NER 13.8k Hausa hau Niger-Congo NER 5.6k Wolof wol Niger-Congo All 1.9k Luganda lug Niger-Congo NER 3.5k Luo luo NER 0.7k Bambara bam Mande POS,Parsing 4.4k Manx glv Indo-European POS,Parsing 37.6k Ancient Greek grc Indo-European POS,Parsing 8.0k Swiss German gsw Indo-European POS,Parsing 2.5k Erzya myv Uralic POS,Parsing 7.4k Table 2: Languages used for evaluation.",
"use the CoNLL-2003 English NER training data.",
"We run each fine-tuning experiment with 3 random seeds and report the average performance.",
"For NER and POS tagging, we follow the data processing and fine-tuning hyper-parameters in Hu et al. (2020).",
"We use the Udify (Kondratyuk and Straka, 2019) codebase and configuration for parsing.",
"Languages For each task, we select languages that have task data but are not covered by the mBERT pretraining data.",
"The languages we use can be found in Tab.",
"2. Most fall under the Few-Text setting (Joshi et al., 2020).",
"We employ the same languages to simulate the No-Text setting as well.",
"Model We use the multilingual BERT model (mBERT) because it has competitive performance on under-represented languages (Pfeiffer et al., 2020).",
"We find that our mBERT performance on WikiNER and POS is generally comparable or exceeds the XLM-R large results in Ebrahimi and Kann (2021).",
"We additionally verify our results also hold for XLM-R in 7.",
"Lexicon We extract lexicons between English and each target language from the PanLex database.",
"5 The number of lexicon entries varies from about 0.5k to 30k, and most of the lexicons have around 5k entries.",
"The lexicon statistics for each language can be found in Tab.",
"2. Pseudo Monolingual Data English Wikipedia articles are used to synthesize monolingual data.",
"We first tokenize the English articles using Stanza (Qi et al., 2020) and keep the first 200k sentences.",
"To create pseudo monolingual data for a given target language, we replace each English word with its translation if the word exists in the bilingual lexicon.",
"We randomly sample a target word if the English word has multiple possible translations because it is difficult to estimate translation probabilities due to lack of target text.",
"Pseudo Labeled Data Using the English training data for each task, we simply replace each English word in the labeled training data with its corresponding translation and retain its original label.",
"For the sake of simplicity, we only use lexicon entries with a single word.",
"We analyze the results of the following adaptation methods for the setting where we do not have any monolingual data.",
"Pseudo MLM The mBERT model is trained on the pseudo monolingual data using the MLM objective.",
"We train the model for 5k steps for the NER tasks and 10k steps for the POS tagging and Parsing tasks.",
"Pseudo Trans-train We fine-tune mBERT or the model adapted with Pseudo MLM for a downstream task on the concatenation of both the English labeled data and the pseudo labeled data.",
"Label Distillation We use the model adapted with Pseudo MLM as the teacher model to generate new labels for the pseudo labeled data, which we use jointly with the English labeled data to finetune the final model.",
"The average performance of different adaptation methods averaged across all languages in each task",
"Pseudo Trans-train is the best method for No-Text.",
"Pseudo MLM and Pseudo Trans-train can both bring significant improvements over the mBERT baseline for all tasks.",
"Pseudo Trans-train leads to the best aggregated result across all tasks, and it is also the best method or very close to the best method for each task.",
"Adding Pseudo Trans-train on top of Pseudo MLM does not add much improvement.",
"Label Distillation generally leads to better performance, but overall it is comparable to only using Pseudo Trans-train.",
"We test same adaptation methods introduced in 5 for the Few-Text setting where we have a small amount of gold data.",
"First we introduce the additional data and adaptation methods for this setting.",
"Gold Monolingual Data We use the JHU Bible Corpus (McCarthy et al., 2020) as the monolingual data.",
"Following the setup in Ebrahimi and Kann (2021), we use the verses from the New Testament, which contain 5000 to 8000 sentences for each target language.",
"Gold Parallel Data We can use the parallel data between English and the target languages from the Bible to extract additional word pairs.",
"We use an existing unsupervised word alignment tool, eflo-mal (stling and Tiedemann, 2016), to generate word alignments for each sentence in the parallel Bible data.",
"To create high quality lexicon entries, we only keep the word pairs that are aligned more than once, resulting in about 2k extra word pairs for each language.",
"We then augment the PanLex lexicons with the induced lexicon entries.",
"Gold MLM The mBERT model is trained on the gold monolingual Bible data in the target language using the MLM objective.",
"Following the setting in Ebrahimi and Kann (2021), we train for 40 epochs for the NER task, and 80 epochs for the POS and Parsing tasks.",
"Pseudo MLM We conduct MLM training on both the Bible monolingual data and the pseudo monolingual data in the target language.",
"The Bible data is up-sampled to match the size of the pseudo monolingual data.",
"We train the model for 5k steps 867 Method Lexicon WikiNER MasakhaNER POS Parsing Avg.",
"Pseudo MLM is the competitive strategy for Few-Text.",
"Unlike the No-Text setting, Pseudo Trans-train only marginally improves or even decreases the performance for three out of the four datasets we consider.",
"On the other hand, Pseudo MLM, which uses both gold and pseudo monolingual data for MLM adaptation, consistently and significantly improves over Gold MLM for all tasks.",
"Again, using Pseudo Trans-train on top of Pseudo MLM does not help and actually leads to relatively large performance loss for the syntactic tasks, such as POS tagging and Parsing.",
"improvements for the two syntactic tasks.",
"Notably, it is the best performing method for POS tagging, but it still lags behind Pseudo MLM for Parsing.",
"This is likely because Parsing is a much harder task than POS tagging to generate correct labels.",
"The effect of Label Distillation on the NER task is less consistentit improves over Pseudo Trans-train for WikiNER but not for MasakhaNER.",
"This is because the named entity tags of the same words in different languages likely remain the same so that the pseudo task data probably has less noise for Label Distillation to have consistent benefits.",
"Adding Induced Lexicons We examine the effect of using the lexicons augmented by word pairs induced from the Bible parallel data.",
"The results can be found in Tab.",
"3. Adding the induced lexicon significantly improves the NER performance, while it hurts the two syntactic tasks.",
"To understand what might have prevented the syntactic tasks from benefiting from the extra lexicon entries, we plot the distribution of the part-of-speech tags of the words in PanLex lexicons and the lexicons induced from the Bible in Fig.",
"4. PanLex lexicons have more nouns than the Bible lexicons while the Bible lexicons cover more verbs than PanLex.",
"However, the higher verb coverage in induced lexicons actually leads to a larger prediction accuracy drop for verbs in the POS tagging task.",
"We hypothesize that the pseudo monolingual data created using the induced lexicons would contain more target language verbs with the wrong word order, which could be more harmful for syntactic tasks than tasks that are less sensitive to word order such as NER.",
"Discrepancies between the two NER datasets While WikiNER, along with POS tagging and Parsing, benefit the most from Pseudo MLM for Few-Text, MasakhaNER achieves the best result with Pseudo Trans-train.",
"One possible explanation is that MasakhaNER contains data from the news domain, while WikiNER is created from Wikipedia.",
"The pseudo monolingual data used for MLM is created from English Wikipedia articles, which could benefit WikiNER much more than MasakhaNER.",
"On the other hand, the English NER training data for MasakhaNER is from the news domain, which potentially makes Pseudo Trans-train a stronger method for adapting the model simultaneously to the target language and to the news domain.",
"One advantage of Pseudo MLM is that the English monolingual data is much cheaper to acquire, while Pseudo Trans-train is constrained by the amount of labeled data for a task.",
"We show in A.4 that Pseudo MLM has more benefit for MasakhaNER when we use a subset of the NER training data.",
"Performance with XLM-R We mainly use mBERT because it has competitive performance for under-represented languages and it is more computationally efficient due to the smaller size.",
"Here we verify our methods have the same trend when used on a different model XLM-R (Conneau et al., 2020).",
"We focus on a subset of languages in the POS tagging task for the Few-Text setting and the results are in Tab.",
"4. We use the smaller XLM-R base for efficiency, and compare to the best result in prior work, which uses XLM-R large (Ebrahimi and Kann, 2021).",
"Tab.",
"4 shows that our baseline is comparable or better than prior work.",
"Similar to the conclusion in 6, Pseudo MLM is the competitive strategy that brings significant improvements over prior work.",
"While adding Pseudo Trans-train to Pseudo MLM does not help, using Label Distillation further improves the performance.",
"Effect of Baseline Performance Using pseudo data might be especially effective for languages with lower performance.",
"We plot the improvement of different languages over the baseline in Fig. 5, where languages are arranged with increasing baseline performance from left to right.",
"We mainly plot Pseudo MLM and Pseudo Trans-train for simplicity.",
"Fig. 5 shows that for both resource settings, lower performing languages on the left tend to have more performance improvement by using pseudo data.",
"Using NMT Model to Synthesize Data One problem with the pseudo data synthesized using word-to-word translation is that it cannot capture the correct word order or syntactic structure in the target language.",
"If we have a good NMT system that translates English into the target language, we might be able to get more natural pseudo monolingual data by translating the English sentences to the target language.",
"Since the target languages we consider are usually not supported by popular translation services, we train our own NMT system by fine-tuning an open sourced many-to-many NMT model on the Bible parallel data from English to the target language (details in A.2).",
"Instead of creating pseudo monolingual data using the lexicon, we can simply use the fine-tuned NMT model to translate English monolingual data into the target language.",
"The results of using NMT as opposed to lexicon for Pseudo MLM on all four tasks can be found in Tab.",
"5. Unfortunately, NMT is consistently worse than word-to-word translation using lexicons.",
"We find that the translated monolingual data tend to have repeated words and phrases that are common in the Bible data, although the source sentence is from Wikipedia.",
"This is because the NMT model overfits to the Bible data, and it fails to generate good translation for monolingual data from a different domain such as Wikipedia.",
"examples in the target language can significantly outperform the zero-shot transfer baseline for languages included in mBERT.",
"We focus on the zero-shot setting in this paper because the languages we consider have very limited data and it could be expensive or unrealistic to annotate data in every task for thousands of languages.",
"Nonetheless, we experiment with k -shot learning to examine its performance on low-resource languages in the MasakhaNER task.",
"Tab.",
"6 shows that using 10 labeled examples brings improvements over the mBERT baseline for a subset of the languages, and it is mostly worse than our best adapted model without using any labeled data.",
"When we have access to 100 examples, few-shot learning begins to reach or exceed our zero-shot model.",
"In general, few-shot learning seems to require more data to consistently perform well for under-represented languages while our adaptation methods bring consistent gains without any labeled data.",
"Combining the best adapted model with few-shot learning leads to mixed results.",
"More research is needed to understand the annotation cost and benefit of few-shot learning for low-resource languages.",
"Several methods have been proposed to adapt pretrained language models to a target language.",
"Most of them rely on MLM training using monolingual data in the target languages (Wang et al., 2020; Chau et al., 2020; Muller et al., 2021; Pfeiffer et al., 2020; Ebrahimi and Kann, 2021), competitive NMT systems trained on parallel data (Hu et al., 2020; Ponti et al., 2021), or some amount of labeled data in the target languages (Lauscher et al., 2020).",
"These methods cannot be easily extended to low-resource languages with no or limited amount of monolingual data, which account for more than 80% of the World's languages (Joshi et al., 2020).",
"Bilingual lexicons have been commonly used for learning cross-lingual word embeddings (Mikolov et al., 2013; Ruder et al., 2019).",
"Among these, some work uses lexicons to synthesize pseudo bilingual (Gouws and Sgaard, 2015; Duong et al., 2016) or pseudo multilingual corpora (Ammar et al., 2016).",
"Mayhew et al. (2017) propose to synthesize task data for NER using bilingual lexicons.",
"More recently, Khemchandani et al. (2021) synthesize monolingual data in Indian languages for adapting pretrained language models via MLM.",
"Hu et al. (2021) argue that using bilingual lexicons for alignment hurts performance compared to word-level alignment based on parallel corpora.",
"Such parallel corpora, however, are not available for truly under-represented languages.",
"Reid and Artetxe (2021) employ a dictionary denoising objective where a word is replaced with its translation into a random language with a certain probability.",
"This can be seen as text-to-text variant of our approach applied to multilingual pre-training.",
"None of the above works provide a systematic study of methods that utilize lexicons and limited data resources for adapting pretrained language models to languages with no or limited text.",
"We propose a pipeline that leverages bilingual lexicons, an under-studied resource with much better language coverage than conventional data, to adapt pretrained multilingual models to underrepresented languages.",
"Through comprehensive studies, we find that using synthetic data can significantly boost the performance of these languages while the best method depends on the data availability.",
"Our results show that we can make concrete progress towards including under-represented languages into the development of NLP systems by utilizing alternative data sources.",
"Our work also has some limitations.",
"Since we focus on different methods of using lexicons, we restrict experiments to languages in Latin script and only use English as the source language for simplicity.",
"Future work could explore the effect of using different source languages and combining transliteration (Muller et al., 2021) or vocabulary extension (Pfeiffer et al., 2021) with lexicon-based data augmentation for languages in other scripts.",
"We also did not test the data augmentation methods on higher-resourced languages as MLM fine-tuning 870 and translate-train are already effective in that setting and our main goal is to support the languages with little textual data.",
"Nonetheless, it would be interesting to examine whether our methods can deliver gains for high-resource languages, especially for test data in specialized domains.",
"We point to the following future directions: First, phrases instead of single word entries could be used to create pseudo data.",
"Second, additional lexicons beyond PanLex could be leveraged.",
"6 Third, more effort could be spent on digitizing both existing monolingual data such as books (Gref, 2016) and lexicons into a format easily accessible by NLP practitioners.",
"Although PanLex already covers over 5000 languages, some language varieties have only as little as 10 words in the database, while there exist many paper dictionaries that could be digitized through technologies such as OCR (Rijhwani et al., 2020).",
"7 Lexicon collection is also relatively fast, which could be a more cost effective strategy to significantly boost the performance of many languages without lexicons.",
"Finally, the quality of synthetic data could be improved by incorporating morphology.",
"However, we find that there is virtually no existing morphological analysis data or toolkits for the languages we consider.",
"Future work could aim to improve the morphological analysis of these low-resource languages.",
"This work was supported in part by the National Science Foundation under Grant Numbers 1761548 and 2040926.",
"XW was supported in part by an Apple Graduate Fellowship.",
"The authors would like to thank Aditi Chaudhary, Arya McCarthy, Shruti Rijhwani for discussions about the project, and Daan van Esch for the general feedback and pointing out additional linguistic resources."
] | [
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"result",
"result",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"Abstract One of the reasons Transformer translation models are popular is that self-attention networks for context modelling can be easily parallelized at sequence level.",
"However, the computational complexity of a self-attention network is O ( n 2 ) , increasing quadratically with sequence length.",
"By contrast, the complexity of LSTM-based approaches is only O ( n ) .",
"In practice, however, LSTMs are much slower to train than self-attention networks as they cannot be parallelized at sequence level: to model context, the current LSTM state relies on the full LSTM computation of the preceding state.",
"This has to be computed n times for a sequence of length n .",
"The linear transformations involved in the LSTM gate and state computations are the major cost factors in this.",
"To enable sequence-level parallelization of LSTMs, we approximate full LSTM context modelling by computing hidden states and gates with the current input and a simple bag-of-words representation of the preceding tokens context.",
"This allows us to compute each input step efficiently in parallel, avoiding the formerly costly sequential linear transformations.",
"We then connect the outputs of each parallel step with computationally cheap element-wise computations.",
"We call this the Highly Parallelized LSTM.",
"To further constrain the number of LSTM parameters, we compute several small HPLSTMs in parallel like multi-head attention in the Transformer.",
"The experiments show that our MHPLSTM decoder achieves significant BLEU improvements, while being even slightly faster than the self-attention network in training, and much faster than the standard LSTM.",
"The Transformer translation model (Vaswani et al., 2017) has achieved great success and is used extensively in the NLP community.",
"It achieves outstanding performance compared to previous RNN/CNN based translation models (Bahdanau et al., 2015; Gehring et al., 2017) while being much faster to train.",
"The Transformer can be trained efficiently due to the highly parallelized self-attention network.",
"It enables sequence-level parallelization in context modelling, as all token representations can be computed in parallel, and linear transformations are only required to compute the sequence once.",
"On the other hand, previous RNN-based methods process a sequence in a token-by-token manner, which means that they have to compute linear layers once for each token, i.e. n times if the number of tokens in the sequence is n .",
"However, the complexity of a self-attention network which compares each token with all the other tokens is O ( n 2 ) , while for LSTM (Hochreiter and Schmidhuber, 1997) it is only O ( n ) .",
"In practice, however, LSTM is slower than the self-attention network in training.",
"This is mainly due to the fact that the computation of its current step relies on the computation output of the previous step, which prevents efficient parallelization over the sequence.",
"As for the performance of using recurrent models in machine translation, Chen et al. (2018) shows that an LSTM-based decoder can further improve the performance over the Transformer.",
"In this paper, we investigate how we can efficiently parallelize all linear transformations of an LSTM at the sequence level, i.e. compute its linear transformations only once with a given input sequence.",
"Given that linear transformations are implemented by matrix multiplication, compared to the other element-wise operations, we suggest that they take the largest part of the model's overall computation, and parallelizing the linear transformations at sequence level may significantly accelerate the training of LSTM-based models.",
"Our contributions are as follows: i t o t-1 Concat * c t-1 * + * f g t o g t i g t h t c t o t Figure 1: LSTM.",
"We present the HPLSTM model, which computes LSTM gates and the hidden state with the current input embedding and a bag-of-words representation of preceding representations, rather than with the current input and the full LSTM output of the previous step, to enable efficient parallelization over the sequence and handling long sequences; We propose to divide a high-dimensional HPLSTM computation into several low-dimensional HPLSTM transformations, namely Multi-head HPLSTM, to constrain both the number of parameters and computation cost of the model; We empirically show that the MHPLSTM decoder can achieve improved performance over self-attention networks and recurrent approaches, while being even slightly faster in training, and significantly faster in decoding.",
"We design our HPLSTM based on the Layer Normalization (Ba et al., 2016) enhanced LSTM (LN-LSTM) presented by Chen et al. (2018) as illustrated in Figure 1, which achieves better performance than the Transformer when used in decoding.",
"For the computation of gates and the hidden state, the model concatenates the input i t of the current step t to the output of the previous step o t 1 : v t = i t | o t 1 (1) where | indicates concatenation, and v t is the concatenated vector.",
"Next, it computes three gates (input gate i tg , forget gate f tg and output gate o tg ) and the hidden representation h t with v t : i tg = (LN( W i v t + b i )) (2) f tg = (LN( W f v t + b f )) (3) o tg = (LN( W o v t + b o )) (4) h t = (LN( W h v t + b h )) (5) where W i , W f , W o , W h and b i , b f , b o , b h are weight and bias parameters, indicates the sigmoid activation function, is the activation function for the hidden state computation, LN is the layer normalization.",
"Layer normalization (Ba et al., 2016) is computed as follows: LN Output = LN Input w LN + b LN (6) where LN Input is the input, and stand for the mean and standard deviation of LN Input , w LN and b LN are two vector parameters initialized by ones and zeros respectively.",
"After the computation of the hidden state, the cell c t and the output of the LSTM unit o t are computed as: c t = c t 1 f tg + h t i tg (7) o t = c t o tg (8) where indicates element-wise multiplication.",
"Equation 1 shows that the computation of the hidden state and gates for step t requires the output of the step t 1 .",
"This prevents the LSTM from efficient parallelization at the sequence level: unless o t 1 is ready, we cannot compute o t .",
"To enable the LSTM to compute o t in parallel, we propose the HPLSTM, as shown in Figure",
"2. Linear Linear Linear i 1 |LN(s 1 ), i 2 |LN(s 2 ), , i n-1 |LN(s n-1 ), i n |LN(s n ) h * i g c 0 i 1 |c 1 , * f g1 + i 2 |c 2 , * + h r1 f g2 + h rn i n |c n Linear , o g * o 1 , o 2 , , o n-1 , o n h r2 c Figure 2: HPLSTM.",
"The HPLSTM uses a bag-of-words representation s t of preceding tokens for the computation of gates and the hidden state: s t = t 1 (cid:88) k =1 i k (9) where s 1 is a zero vector.",
"The bag-of-words representations s t can be obtained efficiently via the cumulative sum operation.",
"Next, we concatenate the input i and the corresponding layer normalized bag-of-words representation LN ( s ) for subsequent computing: v = i | LN( s ) (10) the layer normalization is introduced to prevent potential explosions due to accumulation in Equation 9 to stabilize training.",
"Next, we compute the input gate, forget gate and the hidden state: i g = (LN( W i v + b i )) (11) f g = (LN( W f v + b f )) (12) h = (LN( W h v + b h )) (13) Since v is computed over the sequence before the computation of these gates and the hidden states, Equations 11, 12 and 13 are only required to be computed once for the whole sequence, enabling efficient sequence-level parallelization of high cost linear transformations, while in the original LSTM, they (Equations 2, 3 and 5) have to be computed one after the other as many times as the number of items in the sequence.",
"However, the bag-of-words context representation s t lacks a weighting mechanism compared to the previous step output o t 1 of the original LSTM, thus we also try to use a two-layer feed-forward network for the hidden state computation to alleviate potentially related drawbacks: h = W h 2 (LN( W h 1 v + b h 1 )) + b h 2 (14) Then we update the hidden state h with the input gate i g : h r = h i g (15) where h r is the updated hidden state.",
"With h r and f g , we compute LSTM cells across the sequence: c t = c t 1 f tg + h tr (16) Equation 16 preserves the step-by-step recurrence update of the LSTM cell and cannot be parallelized across the sequence, but it only contains element-wise multiplication-addition operations, which are light-weight and, compared to linear transformations, can be computed very fast on mod-ern hardware.",
"Unlike the original LSTM which computes the output gate o g based on the concatenated vector v t (Equation 4), we compute the output gate with the newly produced cell state c and the input to the LSTM, as c is expected to have better quality than the bag-of-words representation.",
"o g = (LN( W o i | c + b o )) (17) Finally, we apply the output gate to the cell, and obtain the output of the HPLSTM layer.",
"Both Equation 17 (including the linear transformation for the computation of the output gate) and 18 can also be efficiently parallelized over the sequence.",
"Computing n smaller networks in parallel can remove the connections between hidden units across sub-networks, reducing both computation and the number of parameters.",
"Take for example a 512 512 transformation: using a densely fully-connected linear layer costs 8 times the number of parameters and computation compared to splitting the 512 dimension input into 8 folds and processing them with 8 64 64 linear transformations correspondingly.",
"Since our HPLSTM involves more parameters and computation than a self-attention network with the same input size, to constrain the number of parameters, we compute n low-dimensional HPLSTMs in parallel.",
"The resulting Multi-head HPLSTM (MHPLSTM) is illustrated in Figure",
"3. Specifically, the MHPLSTM first transforms its input i into n different embedding spaces of HPLSTM transformations with a linear transformation and splits the transformed representation into n folds: i 1 | ... | i n = W s i + b s (19) Next, the k th input i k is fed into the corresponding HPLSTM network HPLSTM k , and the output o k is obtained: Models En-De En-Fr Transformer Base 27.55 39.54 HPLSTM 28.37 40.31 Transformer Big 28.63 41.92 HPLSTM 29.76 42.84 Table 1: Results on WMT 14 En-De and En-Fr.",
"In practice, the forward propagation of each HPLSTM is independent, thus for each HPLSTM Equation 20 is computed in parallel.",
"Finally, outputs of all individual HPLSTM networks are concatenated and transformed by another linear transformation as the output of the MHPLSTM layer o : o = W m ( o 1 | ... | o n ) + b m (21) 4 Experiments We replace the self-attention layers of the Transformer decoder with the MHPLSTM in our experiments.",
"To compare with Vaswani et al. (2017), we conducted our experiments on the WMT 14 English to German and English to French news translation tasks.",
"The concatenation of newstest 2012 and newstest 2013 was used for validation and newstest 2014 as test set.",
"We applied joint Byte-Pair Encoding (BPE) (Sennrich et al., 2016) with 32 k merging operations on all data sets.",
"We only kept sentences with a maximum of 256 subword tokens for training.",
"Training sets were randomly shuffled in each training epoch.",
"We followed Vaswani et al. (2017) for the experiment settings.",
"The training steps for Transformer Base and Transformer Big were 100 k and 300 k respectively.",
"We used a dropout of 0 .",
"1 for all experiments except for the Transformer Big setting on the En-De task which was 0 .",
"3 .",
"For the Transformer Base setting, the embedding dimension and the hidden dimension of the position-wise feed-forward neural network were 512 and 2048 respectively, the corresponding values for the Transformer Big Model BLEU Para.",
"setting were 1024 and 4096 respectively.",
"The dimension of each head is 64 , thus there were 8 and 16 heads for the base setting and the big setting respectively.",
"We implemented our approaches based on the Neutron implementation (Xu and Liu, 2019) of the Transformer translation model.",
"Parameters were initialized under the Lipschitz constraint (Xu et al., 2020c).",
"We used a beam size of 4 for decoding, and evaluated tokenized case-sensitive BLEU with the averaged model of the last 5 checkpoints for the Transformer Base setting and 20 checkpoints for the Transformer Big setting saved with an interval of 1500 training steps.",
"We also conducted significance tests (Koehn, 2004).",
"We first verify the performance by comparing our approach with the Transformer in both the base setting and the big setting.",
"Results are shown in Table",
"1. Table 1 shows that using an LSTM-based decoder can bring significant improvements over the self-attention decoder.",
"Specifically, using MHPLSTM improves +0 .",
"82 and +0 .",
"77 BLEU on the En-De and En-Fr task respectively using the base setting, +1 .",
"13 and +0 .",
"92 correspondingly using the big setting.",
"The fact that using an LSTM-based decoder can improve the translation quality is consistent with Chen et al. (2018), with MHPLSTM further improving over LN-LSTM (Table 2).",
"We also compare our approach with the Averaged Attention Network (AAN) decoder (Zhang et al., 2018a), LN-LSTM and the Addition-subtraction Twin-gated Recurrent (ATR) network (Zhang et al., 2018b) on the WMT 14 En-De task.",
"The AAN consists of an average layer that averages preceding embeddings, a feed-forward network to perform context-aware encoding based on the averaged context embedding, and a gating layer to enhance the expressiveness.",
"With a simple addition and subtraction operation, Zhang et al. (2018b) introduce a twin-gated mechanism to build input and forget gates which are highly correlated, and present a heavily simplified ATR which has the smallest number of weight matrices among units of all existing gated RNNs.",
"Despite this simplification, the essential non-linearities and capability of modelling long-distance dependencies are preserved.",
"As LN-LSTM and ATR lead to the out-of-memory issue when handling long sentences, we follow Zhang et al. (2018b) to use sentences no longer than 80 subwords for their training, but we keep the batch size and training steps the same as the others for fairness.",
"Their training without excluding these long sentences is slower than we reported.",
"Results are shown in Table",
"2. Table 2 shows that the MHPLSTM is not only the fastest in both training and decoding, but also leads to the best performance compared to baselines.",
"Surprisingly, MHPLSTM even surpasses LN-LSTM.",
"We conjecture potential reasons that MHPLSTM surpasses both self-attention and LN-LSTM might be: The self-attention network relies on absolute positional embedding for position encoding, which has its drawbacks (Shaw et al., 2018; Wang et al., 2019; Chen et al., 2019a; Wang et al., 2020), while LSTMs seem to have natural advantages in (relative) positional encod-Approach BLEU Para.",
"ing (Chen et al., 2019b).",
"LSTMs lack a mechanism to directly connect distant words, which may lead to overlooking neighboring information, while the use of a bag-of-words representation (Equation 9) enables MHPLSTM to connect tokens directly regardless of the distance, thus MHPLSTM is able to leverage both local (Equation 16) and global patterns (Xu et al., 2019).",
"(Please refer to Section 4.7 for empirical verification.) Compared to the self-attention network, the MHPLSTM computation is more complex.",
"The computation for the LSTM hidden state (Equation 14) and output gate (Equation 17) in MHPLSTM is enhanced compared to the LN-LSTM.",
"We conducted ablation studies on the WMT 14 En-De task.",
"Since the LSTM hidden state computation may take the role of the position-wise Feed-Forward Network (FFN) sub-layer of decoder layers, we first study removing the FFN sub-layer in decoder layers.",
"Results are shown in Table",
"3. Table 3 shows that removing the FFN layer of the MHPLSTM-based decoder can lead to further acceleration while performing competitively with the Transformer baseline with fewer parameters.",
"However, it hampers MHPLSTM performance, thus we keep the feed-forward layer in the other experiments.",
"We also study the effects of using a 1-layer or a 2-layer neural network for the computation of the MHPLSTM hidden states (Equations 13 and 14) and gates (Equations 11 and 12).",
"Results are shown in Table",
"4. Table 4 shows that using a 2-layer neural network for the computation of hidden states is important for the performance, but the impact of using a 2-layer neural network for the gate computation is neglectable.",
"Thus we only apply the 2-layer network for the computation of the LSTM hidden states in the other experiments.",
"We examined the effects of the impact of the number of MHPLSTM heads on performance and effi-ciency with the base setting (input dimension: 512 ).",
"Results are shown in Table",
"5. Table 5 shows that reducing the number of heads increases both parameters and time consumption with small performance gains compared to using 8 heads (with a dimension of 64 per head).",
"Using 16 heads significantly hampers the performance with only a small reduction in the number of parameters and a slight acceleration.",
"Thus we use a head dimension of 64 ( 8 heads for the base setting, 16 for the big setting) in our experiments, consistent with the Transformer.",
"We tested the performance of using a bidirectional MHPLSTM for encoding.",
"Results are shown in Table",
"6. Table 6 shows that using MHPLSTM for encoding leads to a significant performance drop with more parameters: it even underperforms the baseline, while slowing down both training and decoding.",
"We conjecture that the self-attention network has advantages in encoding compared to the MHPLSTM: it can collect and process bi-directional # Heads BLEU Para.",
"context in one forward pass, while MHPLSTM has to compute 2 forward passes, one for the forward direction, another one for the reverse direction.",
"For each direction, relevant context is processed separately in the recurrent models.",
"To analyze the effects of MHPLSTM on performance with increasing input length, we conducted a length analysis on the news test set of the WMT 14 En-De task.",
"Following Bahdanau et al. (2015); Tu et al. (2016); Xu et al. (2020b), we grouped sentences of similar lengths together and computed BLEU scores of the MHPLSTM and our baselines for each group.",
"BLEU score results and decoding speed-up of each group are shown in Figure 4 and 5 respectively.",
"Figure 4 shows that MHPLSTM surpasses the other approaches in most length groups, and improvements of using an MHPLSTM based-decoder 20 40 60 80 100 120 140 160 180 200 220 15 30 45 >45 Transformer AAN ATR LN-LSTM MHPLSTM Figure 5: Decoding speed on a single GTX 1080Ti GPU with respect to various input sentence length.",
"are more significant for long sentences than short sentences.",
"Figure 5 shows that all recurrent-based approaches are faster than the self-attention decoder in all length groups, and MHPLSTM achieves comparable decoding speed as LSTM and ATR.",
"Even though the decoding speed of all approaches decreases very fast with increasing sentence length, the acceleration of MHPLSTM is more significant with long sentences ( 1 . 91 times faster than Transformer for sentences longer than 45 ) than with short sentences ( 1 . 41 times faster than Transformer for sentences no longer than 15 ).",
"We compare the ability of the MHPLSTM and baselines in capturing dependencies of various distances with the linguistically-informed verb-subject agreement analysis on the Lingeval97 dataset (Sennrich,",
"In German, subjects and verbs must agree with one another in grammatical number and person.",
"In Lingeval97 , each contrastive translation pair consists of a correct reference translation, and a contrastive example that has been minimally modified to introduce one translation error.",
"The accuracy of a model is the number of times it assigns a higher score to the reference translation than to the contrastive one, relative to the total number of predictions.",
"Results are shown in Figure",
"6. Figure 6 shows that the MHPLSTM outperforms baselines in almost all cases.",
"For distances longer than 15 , the self-attention network still performs best, indicating its strong ability in long-distance relation learning, but the MHPLSTM still surpasses the other recurrent approaches.",
"Sequence-to-sequence neural machine translation models started with recurrent models (Sutskever et al., 2014; Bahdanau et al., 2015; Cho et al., 2014).",
"But recurrent models cannot be parallelized at the sequence level.",
"Convolutional models (Gehring et al., 2017; Wu et al., 2019) and the Transformer (Vaswani et al., 2017) have been proposed.",
"Due to the O ( n 2 ) self-attention network complexity, which slows down decoding, Zhang et al. (2018a) presented the average attention network to accelerate decoding.",
"Even though LSTMs cannot be parallelized at the sequence level, its complexity is O ( n ) , and Chen et al. (2018) shows that using the layer normalization enhanced LSTM-based decoder can bring improvements in translation quality and accelerate decoding.",
"recurrent models.",
"To accelerate RNN models, Zhang et al. (2018b) propose a heavily simplified ATR network to have the smallest number of weight matrices among units of all existing gated RNNs.",
"Peter et al. (2016) investigate exponentially decaying bag-of-words input features for feed-forward NMT models.",
"In addition to sequence-level parallelization, asynchronous optimization (Heigold et al., 2014) and data parallelization with a larger batch size (Ott et al., 2018; Chen et al., 2018; Xu et al., 2020a) can also accelerate training.",
"In this paper, we observe that the sequence-level parallelization issue of LSTM is due to the fact that its computation of gates and hidden states of the current step relies on the computation result of the preceding step, and linear transformations have to be propagated the same number of times as the sequence length.",
"To improve the sequence-level parallelization of the LSTM, we propose to remove the dependency of the current step LSTM computation on the result of the previous step by computing hidden states and gates with the current input embedding and a bag-of-words representation of preceding tokens, and present the Highly Parallelized LSTM.",
"To constrain the number of LSTM parameters, we compute several small HPLSTMs in parallel like multi-head self-attention.",
"In our experiments, we empirically show that the MHPLSTM model achieves better performance than self-attention networks, while being even slightly faster in training, and much faster in decoding, than the self-attention Transformer decoder.",
"We thank anonymous reviewers for their insightful comments.",
"Hongfei Xu acknowledges the support of China Scholarship Council ([2018]3101, 201807040056).",
"Josef van Genabith is supported by the German Federal Ministry of Education and Research (BMBF) under funding code 01IW20010 (CORA4NLP).",
"Deyi Xiong is partially supported by the joint research center between GTCOM and Tianjin University and the Royal Society (London) (NAF \\ R1 \\ 180122).",
"Meng Zhang is partially supported by MindSpore, 1 which is a new deep learning computing framework."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"objective",
"method",
"result",
"other",
"other",
"other",
"other",
"other"
] |
[
"This paper explores data augmentation methods for training Neural Machine Translation to make use of similar translations, in a comparable way a human translator employs fuzzy matches.",
"In particular, we show how we can simply feed the neural model with information on both source and target sides of the fuzzy matches, we also extend the similarity to include semantically related translations retrieved using distributed sentence representations.",
"We show that translations based on fuzzy matching provide the model with copy information while translations based on embedding similarities tend to extend the translation context .",
"Results indicate that the effect from both similar sentences are adding up to further boost accuracy, are combining naturally with model fine-tuning and are providing dynamic adaptation for unseen translation pairs.",
"Tests on multiple data sets and domains show consistent accuracy improvements.",
"To foster research around these techniques, we also release an Open-Source toolkit with efficient and flexible fuzzy-match implementation.",
"For decades, the localization industry has been proposing Fuzzy Matching technology in CAT tools allowing the human translator to visualize one or several fuzzy matches from translation memory when translating a sentence leading to higher productivity and consistency (Yamada, 2011).",
"Hence, even though the concept of fuzzy match scores is not standardized and differs between CAT tools (Bloodgood and Strauss, 2014), translators generally accept discounted translation rate for sentences with high fuzzy matches 1 .",
"With improving machine translation technology 1 https://signsandsymptomsoftranslation.",
"and training of models on translation memories, machine translated output has been progressively introduced as a substitute for fuzzy matches when no sufficiently good fuzzy match is found and proved to also increase translator productivity given appropriate post-editing environment (Plitt and Masselot, 2010).",
"These two technologies are entirely different in their finality indeed, for a given source sentence, fuzzy matching is just a database retrieval and scoring technique always returning a pair of source and target segments, while machine translation is actually building an original translation.",
"However, with Statistical Machine Translation, the two technologies are sharing the same simple idea about managing and retrieving optimal combination of longest translated n-grams and this property led to the development of several techniques like use of fuzzy matches in SMT decoding (Koehn and Senellart, 2010; Wang et al., 2013), adaptive machine translation (Zaretskaya et al., 2015) or fuzzy match repairing (Ortega et al., 2016).",
"With Neural Machine Translation (NMT), the integration of Fuzzy Matching is less obvious since NMT does not keep nor build a database of aligned sequences and does not explicitly use n-gram language models for decoding.",
"The only obvious and important use of translation memory is to use them to train an NMT model from scratch or to adapt a generic translation model to a specific domain (fine-tuning) (Chu and Wang, 2018).",
"While some works propose architecture changes (Zhang et al., 2018) or decoding constraints (Gu et al., 2018); a recent work (Bulte and Tezcan, 2019; Bulte et al., 2018) has proposed a simple and elegant framework where, like for human translation, translation of fuzzy matches are presented simultaneously with source sentence and the network learns to use this additional information.",
"Even though this method has showed huge gains in quality, it also opens many questions.",
"In this work, we are pushing the concept further",
"a) by proposing and evaluating new integration methods,",
"b) by extending the notion of similarity and showing that fuzzy matches can be extended to embedding-based similarities,",
"c) by analyzing how online fuzzy matching compares and combines with offline fine-tuning.",
"Finally, our results also show that introducing similar sentence translation is helping NMT by providing sequences to copy ( copy effect ), but also providing additional context for the translation ( context effect ).",
"NMTA translation memory (TM) is a database that stores translated segments composed of a source and its corresponding translations.",
"It is mostly used to match up previous translations to new content that is similar to content translated in the past.",
"Assuming that we translated the following English sentence into French: [How long does the flight last?] [Combien de temps dure le vol?] .",
"Both the English sentence and the corresponding French translation are saved to the TM.",
"This way, if the same sentence appears in a future document (an exact match ) the TM will suggest to reuse the translation that has just been saved.",
"In addition to exact matches, TMs are also useful with fuzzy matches .",
"These are useful when a new sentence is similar to a previously translated sentence, but not identical.",
"For example, when translating the input sentence: [How long does a cold last?] , the TM may also suggest to reuse the previous translation since only two replacements ( a cold by the flight ) are needed to achieve a correct translation.",
"TMs are used to reduce translation effort and to increase consistency over time.",
"More formally, we consider a TM as a set of K sentence pairs {( s k , t k ) k = 1 , . . . , K } where s k and t k are mutual translations.",
"A TM must be conveniently stored so as to allow fast access to the pair ( s k , t k ) that shows the highest similarity between s k and any given new sentence.",
"Many methods to compute sentence similarity have been explored, mainly falling into two broad categories: lexical matches ( i.e. fuzzy match) and distributional semantics .",
"The former relies on the number of overlaps between the sentences taken into account.",
"The latter counts on the generalisation power of neural networks when building vector representations.",
"Next, we describe the similarity measures employed in this work.",
"Fuzzy Matching Fuzzy matching is a lexicalised matching method aimed to identify non-exact matches of a given sentence.",
"We define the fuzzy matching score F M ( s i , s j ) between two sentences s i and s j as: F M ( s i , s j ) = 1 ED ( s i , s j ) max ( s i , s j ) where ED ( s i , s j ) is the Edit Distance between s i and s j , and s is the length of s .",
"Many variants have been proposed to compute the edit distance, generally performed on normalized sentences (ignoring for instance case, number, punctuation, space or inline tags differences that are typically handled at a later stage).",
"Also, IDF and stemming techniques are used to give more weight on significant words or less weight on morphological variants (Vanallemeersch and Vandeghinste, 2015; Bloodgood and Strauss, 2014).",
"Since we did not find an efficient TM fuzzy match library, we implemented an efficient and parameterizable algorithm in C++ based on suffix-array (Manber and Myers, 1993) that we open-sourced 2 .",
"Fuzzy matching offers a great performance under large overlapping conditions.",
"However, in some cases, sentences with large overlaps may receive low F M scores.",
"Consider for instance the input: [How long does the flight arriving in Paris from Barcelona last?] and the TM entry of our previous example: [How long does the flight last?] [Combien de temps dure le vol?] .",
"Even though the TM entry may be of great help when translating the input sentence, it receives a low score ( 1 512 = 0 . 583 ) because of the multiple insertion/deletion operations needed.",
"We thus introduce a second lexicalised similarity measure that focuses on finding the longest of n -gram overlap between sentences.",
"N -gram Matching 3 We define the N -gram matching score NM ( s i , s j ) between s i and s j : NM ( s i , s j ) = (cid:187)(cid:187)(cid:187)(cid:187)(cid:187)(cid:187)(cid:187)(cid:187) max ({ S ( s i ) S ( s j )})(cid:187)(cid:187)(cid:187)(cid:187)(cid:187)(cid:187)(cid:187)(cid:187)",
"where S ( s ) denotes the set of n -grams in sentence s , max ( q ) returns the longest n -gram in the set q and r is the length of the n -gram r .",
"For N gram matching retrieval we also use our in-house open-sourced toolkit.",
"Distributed Representations The current research on sentence similarity measures has made tremendous advances thanks to distributed word representations computed by neural nets.",
"In this work, we use sent2vec 4 (Pagliardini et al., 2018) to generate sentence embeddings.",
"The network implements a simple but efficient unsupervised objective to train distributed representations of sentences.",
"The authors claim that the algorithm performs state-of-the-art sentence representations on multiple benchmark tasks in particular for unsupervised similarity evaluation.",
"where h denotes the magnitude of vector h .",
"To implement fast retrieval between the input vector representation and the corresponding vector of sentences in the TM we use the faiss 5 toolkit (Johnson et al., 2019).",
"Given an input sentence s , retrieving TM matches consists of identifying the TM entry ( s k , t k ) for which s k shows the highest matching score.",
"However, with the exception of perfect matches, not all words in s k or s are present in the match.",
"Considering the example in Section 2, the words the flight and a cold are not related to each other, from that follows that the TM target words le vol are irrelevant for the task at hand.",
"In this section we 3 Note that this practice is also called subsequence or chunk matching in CAT tools and is usually combined with source-target alignment in order to help human translators easily find translation fragments.",
"discuss an algorithm capable of identifying the set of target words T t k that are related to words of the input sentence s .",
"Thus, we define the set T as: T = t t k s S ( s, t ) A s S ( s, t ) A where A is the set of word alignments between words in s k and t k and S is the LCS (Longest Common Subsequence) set of words in s k and s .",
"The LCS is computed as a by-product of the edit distance (Paterson and Danck, 1994).",
"S is found as a sub-product of computing fuzzy or n -gram matches.",
"Word alignments are performed by fast align 6 (Dyer et al., 2013).",
"Figure 1 illustrates the alignments and LCS words between input sentences and their corresponding fuzzy (top) and N -gram (bottom) matches.",
"6 https://github.com/clab/fast_align",
"{ How, long, does, last, ?",
"} .",
"The set of related target words T is also composed of 5 words { Combien, de, temps, dure, ?",
"} , all aligned to at least one word in S and to no other word.",
"The N gram match example has a LCS set of 4 words S = { How, long, does, a } , while related target words consists of T = { Combien, de, temps, un } .",
"The target word dure is not part of T as it is aligned to work and work S .",
"Notice that sets S and T consist of collections of indices (word positions in their corresponding sentences) while word strings are used in the previous examples to facilitate reading.",
"We retrieve fuzzy, n -gram and sentence embedding matches as detailed in the previous section.",
"We explore various ways to integrate matches in the NMT workflow.",
"We follow the work by (Bulte and Tezcan, 2019) where the input sentence is augmented with the translation retrieved from the TM showing the highest matching score ( FM , NM or EM ).",
"One special integration of fuzzy matching, denoted FMT , is rescoring fuzzy matches based on the target edit distance .",
"This special integration, that is only performed on training data, is discussed in the Target Fuzzy matches section.",
"Figure 2 illustrates the main integration techniques considered in this work and detailed below.",
"The input English sentence [How long does the flight last?] is differently augmented.",
"For each alternative we show: the TM (English) sentence producing the match; the augmented input sentence with the corresponding TM (French) translation.",
"Note that LCS words are displayed in boldface.",
"FM # We implement the same format as detailed in (Bulte and Tezcan, 2019).",
"The input English sentence is concatenated with the French translation with the (highest-scored) fuzzy match as computed by F M ( s i , s j ) .",
"The token is used to mark the boundary between both sentences.",
"7 FM We modify the previous format by masking the French words that are not related to the input sentence.",
"Thus, sequences of unrelated tokens are replaced by the token.",
"The mechanism to identify relevant words is detailed in Section 2.2.",
"FM + As a variant of FM , we now mark target words which are not related to the input sentence in an attempt to help the network identify those target 7 The original paper uses @@@ ' as break token.",
"words that need to be copied in the hypothesis.",
"However, we use an additional input stream (also called factors ) to let the network access to the entire target sentence.",
"Tokens used by this additional stream are: S for source words; R for unrelated target words and T for related target words.",
"NM + In addition to fuzzy matches, we also consider arbitrary large n -gram matching.",
"Thus, we use the same format as for FM + but considering the highest scored n -gram match as computed by NM ( s i , s j ) .",
"EM + Finally, we also retrieve the most similar TM sentences as computed by EM ( s i , s j ) .",
"In this case, marking the words that are not related to the input sentence is not necessary since similar sentences retrieved following EM score do not necessarily present any lexical overlap.",
"Note from the example in Table 2 that similar sentences retrieved with distributed representations may contain many word reorderings or synonyms ( i.e. : duration last or flu cold ) that makes it difficult to align both sentences.",
"Hence, the same format employed for FM can be used here.",
"However, since we plan to combine different kind of matches in a single model we adopt the format employed by NM + and FM + with a new factor label E .",
"We used the following corpora in this work 8 (Tiede-mann, 2012): Proceedings of the European Parliament (EPPS); News Commentaries (NEWS); TED talk subtitles (TED); Parallel sentences extracted from Wikipedia (Wiki); Documentation from the European Central Bank (ECB); Documents from the European Medicines Agency (EMEA); Legislative texts of the European Union (JRC); Localisation files (GNOME, KDE4 and Ubuntu) and Manual texts (PHP).",
"Detailed statistics about these are provided in Appendix A. We randomly split the corpora by keeping 500 sentences for validation, 1 , 000 sentences for testing and the rest for training.",
"All data is preprocessed using the Open-NMT tokenizer 9 (conservative mode).",
"We train a 32K joint byte-pair encoding (BPE) (Sennrich et al., 2016b) and use a joint vocabulary for both source and target.",
"Our NMT model follows the state-of-the-art Transformer base architecture (Vaswani et al., 2017) implemented in the OpenNMT-tf 10 toolkit (Klein et al., 2017).",
"Further configuration details are given in Appendix B. 3.2 TM Retrieval We perform fuzzy matching, ignoring exact matches, and keep the single best match if F M ( s i , s j ) 0 .",
"6 with no approximation.",
"Similarly, the largest N -gram match is used for each test sentence with a threshold NM ( s i , s j ) 5 .",
"A similarity threshold EM ( s i , s j ) 0 .",
"8 is also employed when retrieving similar sentences using distributed representations.",
"The EM model is trained on the source training data with default fasttext params on 200 dimension, and 20 epochs.",
"The faiss search toolkit is used through python API with exact FlatIP index.",
"Building and retrieval times for each algorithm on a 2M sentences translation memory (Europarl corpus) are provided in Table",
"1. Note that all retrieval algorithms are significantly faster than NMT Transformer decoding, thus, implying a very limited decoding overhead.",
"We compare our baseline model, without augmenting input sentences, to different augmentation formats and retrieval methods.",
"Our base model is built using the concatenation of all the original corpora.",
"All other models extend the original corpora with sentences retrieved following various retrieval methods.",
"It is worth to notice that extended bitexts share the target side with the original data.",
"Individual comparison of Matching algorithms and Augmentation methods In this experiment, all corpora are used to build the models while matches of a given domain are retrieved from the training data of this domain.",
"Models are built using the original source and target training data ( base ), and after augmenting the source sentence as detailed in Section 2.3: FM # , FM # T , FM , FM + , NM + and EM + .",
"Test sentences are augmented following the same technique as for training sentences 11 .",
"Table 2 summarises the results that are divided in three blocks, showing results for the three types of matching studied in this work ( FM , NM and EM ).",
"Best scores are obtained by models using augmented inputs except for corpora not suited for translation memory usage: News, TED for which we observe no gains correlated to low matching rates.",
"For the other corpora, large gains are achieved when evaluating test sentences with matches (up to + 19 BLEU on GNOME corpus), while a very limited decrease in performance is observed for sentences that do not contain matches.",
"This slight decrease is likely to come from the fact that we kept the corpus size and number of iterations identical while giving harder training tasks.",
"Results are totally on par with the findings of (Bulte and Tezcan, 2019).",
"All types of matching indicate their suitability showing accuracy gains.",
"In particular for fuzzy matching, which seems to be the best for our task.",
"Among the different techniques used to insert fuzzy matching, FM + obtains the best results, validating 11 Except for FM # T for which we use FM # test set Model News TED ECB EMEA JRC GNOME KDE4 PHP Ubuntu Avg % FM 3.1% 10.3% 49.8% 69.8% 50.1% 59.7% 47.3% 41.0% 23.3% base 37.16 43.23 49.19 50.14 59.19 51.14 50.16 30.24 45.52 47.94 57.69 41.95 54.88 44.10 66.34 52.84 55.80 47.92 53.05 48.77 42.19 25.25 56.05 42.27 FM # 36.68 42.93 55.15 61.16 66.35 61.82 54.37 33.10 48.26 54.32 69.79 41.54 70.87 43.53 80.46 53.55 73.61 45.83 65.57 47.85 47.04 26.08 66.72 42.08 FM # T 36.79 43.14 55.41 60.32 66.41 62.01 53.65 33.22 49.75 54.40 70.46 41.41 68.63 44.90 80.57 53.57 74.05 45.58 64.77 47.20 46.31 26.30 69.16 43.32 FM 36.44 43.27 54.52 59.49 65.24 59.54 53.30 32.77 48.74 53.37 68.43 41.68 67.64 44.85 77.59 54.10 70.16 45.19 62.63 48.00 44.50 26.31 68.34 42.20 FM + 37.12 42.62 56.18 61.97 66.91 62.68 54.59 33.81 48.62 54.97 72.26 41.25 71.52 44.72 81.58 53.62 74.99 45.83 65.95 48.01 47.74 26.27 67.49 42.37 % NM 45.5% 36.9% 69.9% 60.4% 69.6% 31.1% 22.9% 33.7% 14.1% base 37.16 43.23 49.19 50.14 59.19 51.14 50.16 30.24 45.52 47.94 49.97 46.44 50.94 47.43 60.32 55.70 53.86 46.59 54.16 45.89 34.64 26.88 58.29 40.68 NM + 36.74 43.07 55.40 59.17 65.60 58.46 51.54 31.87 46.16 52.60 58.65 44.06 62.69 46.60 69.24 54.32 70.05 42.21 59.87 42.11 39.35 26.10 63.22 39.59 base 37.16 43.23 49.19 50.14 59.19 51.14 50.16 30.24 45.52 47.94 52.09 40.74 52.07 40.08 62.60 48.16 54.20 45.88 51.62 48.60 42.22 21.42 52.20 41.82 EM + 36.50 42.89 54.02 56.41 66.04 58.07 53.70 32.37 49.88 52.93 58.52 40.86 59.47 40.16 71.45 48.33 66.09 44.06 59.43 47.43 46.91 20.96 62.04 43.20 Table 2: The first row in each block indicates the percentage of test sentences for which a match was found.",
"our hypothesis that marking related words is beneficial for the model.",
"Masking sequences of unrelated words, FM under-performs showing that the neural network is more challenged when dealing with incomplete sentences than with sentences containing unrelated content.",
"Target fuzzy matches To evaluate if the fuzzy match quality is really the primary criterion for the observed improvements, we consider FM # T where the fuzzy matches are rescored (on the training set only) with the edit distance between the reference translation and the target side of the fuzzy match.",
"By doing so, we reduce the fuzzy match average F M source score by about 2%, but increase target edit distance from 61% to 69%.",
"The effect can be seen in Table 2 in the line FM # T vs. FM # .",
"In average, this technique is performing better with large individual gains of + 1 .",
"5 BLEU on the Ubuntu corpus.",
"This shows that in this configuration where we do not differentiate related and unrelated words, the model mainly learns to copy fuzzy target words.",
"Unseen matches Note that in the previous experiments, matches were built over domain corpora that are already used to train the model.",
"This is a common use case: the same translation memory used to train the system will be used in run time, but now we evaluate the ability of our model in a different context where a test set is to be translated for which we have a new TM that has never been seen when learning the original model.",
"This use case corresponds to typical translation task where new entries will be added continuously to the TM and shall be used instantly for translation of following sentences.",
"Hence, we only use EPPS, News, TED and Wiki data to build two models: the first employs only the original source and target sentences ( base ) the second learns to use fuzzy matches ( FM + ).",
"Table 4 shows results for this use case.",
"As it can be seen, the model using fuzzy matches shows clear accuracy gains.",
"This confirms that gains obtained by FM + are not limited to remember an example previously seen during training.",
"The model using fuzzy matches acquired the ability to actually copy or recycle words from the provided fuzzy matches and therefore is suitable for adaptive translation workflows.",
"Note that all scores are lower than those showed in Table 2 as a result of discarding all in-domain data when training the models showing also that online use of translation memory is not a substitute for in-domain model fine-tuning as we will further investigate in Fine Tuning .",
"Combining matching algorithms Next, we evaluate the ability of our NMT models to combine different matching algorithms.",
"First, we use ( M 1 , M 2 , ... ) to denote the augmentation of an input sentence that considers first the match specified by M 1 , if no match applies for the input sentence then it considers using the match specified by M 2 , and so on.",
"Note that at most one match is used.",
"Sentences for which no match is found are kept without augmentation.",
"Similar to Table 2, models are learned using all the available training data.",
"Table 3 (2 nd block) illustrates the results of this experiment.",
"The first 3 lines show BLEU scores of models combining FM + , NM + and EM + .",
"The last row illustrates the results of a model that learns to use two different matching algorithms.",
"We use the best combination of matches obtained so far ( FM + and EM + ) and augment input sentences with both matches.",
"Figure 3 illustrates an example of an input sentence augmented with both a fuzzy match and an embedding match ( FM + and EM + ).",
"Notice that the model is able to distinguish between both types of augmented sequences by looking at the token used in the additional stream ( factor ).",
"As it can be seen in Table 3 (2 nd block), the best combination of matches is achieved by ( FM + ,EM + ) further boosting the performance of previous configurations.",
"It is only surpassed by ( FM + ,EM + ) in two test sets by a slight margin.",
"Fine Tuning Results so far evaluate the ability of NMT models to integrate similar sentences.",
"However, we have run our comparisons over a generic model built from a heterogeneous training data set while it is well known that these models do not achieve best performance on homogeneous test sets.",
"Thus, we now assess the capability of our augmentation methods to enhance fine-tuned (Luong and Manning, 2015) models, a well known technique that is commonly used in domain adaptation scenarios obtaining state-of-the-art results.",
"Table 3 illustrates the results of the model configurations previously described after fine-tuning the models towards each test set domain.",
"Thus, building 7 fine-tuned models for each configuration.",
"Note that similar sentences (matches) are retrieved from the same in-domain data sets used for fine tuning.",
"As ( FM + ,EM + ) How long does a cold last ?",
"Combien de temps dure le vol ?",
"Combien de temps dure un vaccin ?",
"S S S S S S S R T T T T R R T E E E E E E E E Figure 3: Input sentence augmented with a fuzzy match FM + and an embedding match EM + .",
"shown in Table 3 (3 rd block), models with FM / EM also increase performance of fine-tuned models gaining in average + 6 BLEU on fine-tuned model baselines, and + 2 .",
"5 compared to FM / EM on generic translation.",
"This add-up effect is interesting since both approaches make use of the same data.",
"Copy Vs. Context We observe that models allowing for augmented input sentences effectively learn to output the target words used as augmented translations.",
"Table 5 illustrates the rates of usage.",
"We compute for each word added in the input sentence as T (part of a lexicalised match), R (not in the match) and E (from an embedding match), how often they appear in the translated sentence.",
"Results show that T words increase their usage rate by more than 10% compared to the corresponding base models.",
"Considering R words, models incorporating fuzzy matches increase their usage rate compared to base models, albeit with lower rates than for T words.",
"Furthermore, the number of R words output by FM + is clearly lower than those output by FM # , demonstrating the effect of marking unrelated matching words.",
"Thus, we can confirm the copy behaviour of the networks with lexicalised matches.",
"Words marked as E (embed-ding matches) increase their usage rates when compared to base models but are far from the rates of T words.",
"We hypothesize that these sentences are not copied by the translation model, rather they are used to further contextualise translations.",
"Our work stems on the technique proposed by (Bulte and Tezcan, 2019) to train an NMT model to leverage fuzzy matches inserted in the source sentence.",
"We extend the concept by experimenting with more general notions of similar sentences and techniques to inject fuzzy matches.",
"The use of similar sentences to improve translation models has been explored at scale in (Schwenk et al., 2019), where the authors use multilingual sentence embeddings to retrieve pairs of similar sentences and train models uniquely with such sentences.",
"In (Niehues et al., 2016), input sentences are augmented with pre-translations performed by a phrase-based MT system.",
"In our approach, similar sentence translations are provided dynamically to guide translation of a given sentence.",
"Similar to our work, (Farajian et al., 2017; Li et al., 2018) retrieve similar sentences from the training data to dynamically adapt individual input sentences.",
"To compute similarity, the first work uses n -gram matches, the second includes dense vector representations.",
"In (Xu et al., 2019) the same approach is followed but authors consider for adaptation a bunch of semantically related input sentences to reduce adaptation time.",
"Our approach combines source and target words within a same sentence the same type of approach has also been proposed by (Dinu et al., 2019) for introduction of terminology translation.",
"Last, we can also compare the extra-tokens appended in augmented sentences as side constraints activating different translation paths on the same spirit than the work done by (Sennrich et al., 2016a; Kobus et al., 2017) for controlling translation.",
"This paper explores augmentation methods for boosting Neural Machine Translation performance by using similar translations.",
"Based on neural fuzzy repair technique, we introduce tighter integration of fuzzy matches informing neural network of source and target and propose extension to similar translations retrieved from their distributed representations.",
"We show that the different types of similar translations and model fine-tuning provide complementary information to the neural model outperforming consistently and significantly previous work.",
"We perform data augmentation at inference time with negligible speed overhead and release an Open-Source toolkit with an efficient and flexible fuzzy-match implementation.",
"In our future work, we plan to optimise the thresholds used with the retrieval algorithms in order to more intelligently select those translations providing richest information to the NMT model and generalize the use of edit distance on the target side.",
"We would also like to explore better techniques to inject information of small-size n -grams with possible convergence with terminology injection techniques, unifying framework where target clues are mixed with source sentence during translation.",
"As regards distributed representations, we plan to study alternative networks to more accurately model the identification and incorporation of additional context.",
"We would like to thank Professor Francois his insightful comments as well as the reviewers for the useful suggestions.",
"M. Amin Farajian, Marco Turchi, Matteo Negri, and Marcello Federico.",
"2017.",
"Multi-domain neural machine translation through unsupervised adaptation.",
"In Proceedings of the Second Conference on Machine Translation , pages 127137, Copenhagen, Denmark.",
"Association for Computational Linguistics.",
"Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor OK Li.",
"2018.",
"Search engine guided neural machine translation.",
"In Thirty-Second AAAI Conference on Artificial Intelligence .",
"Jeff Johnson, Matthijs Douze, and Herv e J egou.",
"2019.",
"Billion-scale similarity search with gpus.",
"IEEE Transactions on Big Data .",
"Minh-Thang Luong and Christopher D. Manning.",
"2015.",
"Stanford neural machine translation systems for spoken language domain.",
"In International Workshop on Spoken Language Translation , Da Nang, Vietnam."
] | [
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"method",
"method",
"other",
"other",
"abstain",
"method",
"objective",
"objective",
"result",
"method",
"method",
"objective",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"We examine the extent to which supervised bridging resolvers can be improved without employing additional labeled bridging data by proposing a novel constrained multi-task learning framework for bridging resolution, within which we (1) design cross-task consistency constraints to guide the learning process; (2) pretrain the entity coreference model in the multitask framework on the large amount of publicly available coreference data; and (3) integrate prior knowledge encoded in rule-based resolvers.",
"Our approach achieves state-of-the-art results on three standard evaluation corpora.",
"Bridging (Clark, 1975) plays an important role in establishing entity coherence in a text.",
"In contrast to direct anaphors , which indicate the coreference relation between a nominal expression and its antecedent, bridging anaphors or associative anaphors link to their antecedents via non-identical relations.",
"Bridging resolution is the task of recognizing and resolving bridging anaphors in a text.",
"Bridging resolution and coreference resolution are closely related to Information Status (IS henceforth) classification, the goal of which is to assign an IS to each discourse entity that indicates how these entities are referred to in a text (Prince, 1981; Nissim et al., 2004; Markert et al., 2012).",
"In general, an entity is old if it is coreferent with an entity that has been mentioned before (e.g., [The busi-ness] and [its] in Figure 1).",
"Bridging anaphors are discourse-new but hearer-old.",
"They have not been introduced in the discourse directly, but are inferrable from previously mentioned entities (e.g., [the customers] in Figure 1).",
"New entities are introduced into the discourse for the first time and are not known to the hearer before (e.g. [The Bakersfield Supermarket] in Figure 1).",
"data.",
"While one of the largest annotated entity coreference resolution datasets, OntoNotes, is composed of 2802 English documents in its training split, the two most commonly used English corpora for bridging resolution research, ISNotes (Mark-ert et al., 2012) and BASHI (Rsiger, 2018), are composed of 50 WSJ documents each.",
"Perhaps the most straightforward way to mitigate this data scarcity problem is to combine existing annotated bridging datasets to create a larger training set (Yu and Poesio, 2020).",
"While it makes sense to combine corpora that are created using the same annotation guidelines (e.g., ISNotes and BASHI), attempting to combine corpora created using different guidelines (e.g., ARRAU (Poesio and Artstein, 2008) and ISNotes) will likely confuse the learner, thus limiting the applicability of this method.",
"Some researchers have instead attempted to create automatically labeled data via lexico-syntactic patterns (Hou, 2018) and distant supervision (Hou, 2020), but a manual analysis of the resulting data instances reveals that they may be too noisy for training: on average only one-fourth of them are correctly labeled (Hou, 2020).",
"By contrast, we aim to investigate the extent to which supervised bridging resolvers can be improved without increasing the amount of labeled bridging data.",
"To this end, we begin by proposing a novel constrained multi-task learning (MTL) framework for bridging resolution.",
"While Yu and Poesio (2020) develop a standard MTL model for 759 bridging resolution and use coreference resolution as the only auxiliary task, we propose to (1) exploit the close connection between IS and bridg-ing/coreference resolution by introducing IS classification as the third task into the MTL framework and (2) guide the learning process by designing cross-task consistency constraints.",
"For instance, in Figure 1, the prediction from the coreference resolution module indicating that both [The business] and [The murder] are old entities can help the bridging resolution module to avoid misclassifying these two mentions as bridging anaphors.",
"Similarly, if the IS classification module predicts [the customers] as a bridging anaphor, then the bridging resolution module should find an antecedent for it.",
"We hypothesize that such constraints can guide the training of a complex model to produce a more coherent output across different tasks, thereby improving bridging resolution performance.",
"While the cross-task consistency constraints could improve performance, they could also hurt performance.",
"Returning to our example in Figure 1, if the IS classification module misclassifies \"[the customers]\" as non-bridging, the constraints will propagate this error to the bridging resolution module, causing it not to resolve the mention.",
"To address this problem, we (1) formulate these constraints as soft rather than hard constraints, and (2) improve entity coreference resolution performance by leveraging the large amount of publicly-available coreference-annotated data in OntoNotes to pre-train the coreference module.",
"Finally, since previous work (Hou et al., 2014; Roesiger et al., 2018) has shown that manually defined rules based on various syntactic and semantic properties are valuable to recognize and resolve bridging anaphors, we integrate such prior knowledge about bridging into our MTL framework.",
"Note that the only hybrid rule-based and learning-based approach to bridging resolution (Kobayashi and Ng, 2021) merely applies the rule-based resolver and the learning-based resolver in a sequential manner, without combining them into a single model.",
"In sum, our contributions are two-fold.",
"First, we propose a novel constrained MTL framework that jointly learns three tasks, bridging resolution, coreference resolution, and IS classification, via the use of soft cross-task consistency constraints, prior knowledge provided by rule-based approaches, and pre-training on coreference data.",
"Second, experimental results demonstrate that our framework achieves new state-of-the-art results for full bridging resolution on three datasets (ISNotes, BASHI, and ARRAU).",
"The rest of the paper is structured as follows.",
"Section 2 describes related work on bridging resolution and constrained multi-task learning with deep neural networks.",
"Section 3 describes our model, including our multi-task framework for jointly learning IS classification, entity coreference resolution and bridging resolution, our cross-task consistency constraints, and how we integrate rule knowledge into the framework.",
"We present evaluation results in Section 4 and our conclusions in Section",
"5. 2 Related Work Bridging resolution.",
"Bridging resolution is composed two sub-tasks: bridging anaphora recognition and antecedent selection .",
"Most previous work tackles them separately.",
"One line of research models bridging recognition as part of IS classification (Rahman and Ng, 2011; Markert et al., 2012; Cahill and Riester, 2012; Rahman and Ng, 2012; Hou, 2021), while others have focused on antecedent selection based on gold bridging anaphors (Poesio et al., 2004; Lassalle and Denis, 2011; Hou et al., 2013; Hou, 2020).",
"There are a few studies tackling the challenging task of full bridging resolution (i.e., bridging anaphor recognition and resolution).",
"Hou et al. (2014) and Roesiger et al. (2018) develop rules to identify bridging links based on syntactic and semantic constraints.",
"Hou et al. (2018) propose a pipeline system built on top of complex manually designed features.",
"Yu and Poesio (2020) design a MTL neural model for bridging resolution that uses coreference resolution as an auxiliary task.",
"Recently, Kobayashi and Ng (2021) show the effectiveness of a hybrid rule-based and MTL approach for bridging resolution.",
"For a detailed overview of these approaches, we refer the reader to a recent survey by Kobayashi and Ng (2020).",
"Constrained multi-task learning with deep neural networks.",
"Multi-task learning has been widely adopted in various NLP applications to improve the performance of individual tasks (Ruder, 2017).",
"Recently, several studies have demonstrated that multi-task training in neural networks can be further improved by integrating logical constraints to enforce a coherent output across different tasks (Li et al., 2019; Wang et al., 2020; Lu and Ng, 760 2021).",
"However, for a complex task like bridging resolution, it is non-trivial to choose auxiliary tasks and model the relationships between these tasks in deep neural networks.",
"In this work, we (1) jointly train three tasks (i.e., bridging resolution, coreference resolution, and IS classification); (2) design five soft cross-task consistency constraints to guide the training process; and (3) integrate prior knowledge about bridging into our MTL model.",
"In this section, we present our constrained MTL framework for bridging resolution.",
"Inspired by Yu and Poesio's (2020) span-based model for bridging resolution, which employs an unconstrained MTL framework that jointly learns bridging and coreference, our model takes as input a document D represented as a sequence of word tokens and gold mentions M , from which we create span representations.",
"Our model simultaneously learns three tasks, namely IS classification, bridging, and coreference, as defined below.",
"The IS classification task aims to assign each span i an IS y is taken from an IS inventory.",
"The model predicts the IS of i to be y is = arg max y is s is ( i, y is ) , where s is is a function suggesting i 's likelihood of having y is as its IS.",
"The bridging resolution task involves determining an antecedent for each bridging anaphor.",
"Formally, it assigns span i an antecedent y b , where y b Y ( i ) = { 1 , ..., i 1 , } .",
"In other words, the value of each y b is the id of its antecedent, which can be one of the preceding spans or a dummy antecedent (if the mention underlying i is not a bridging anaphor) in the associated document.",
"We define the following scoring function: s b ( i, j ) = (cid:40) 0 j = s a ( i, j ) j = (1) where s a ( i, j ) is a pairwise bridging score computed over i and a preceding span j .",
"The model predicts the antecedent of i to be y b = arg max y b Y ( i ) s b ( i, y b ) .",
"The entity coreference resolution task involves determining an antecedent for each identity anaphor.",
"Formally, it aims to assign span i an antecedent y c based on a scoring function s c that can be defined in an analogous manner as the s b function in the bridging resolution task.",
"Figure 2 shows the structure of our constrained MTL framework.",
"Below we describe the details.",
"Span Representation Layer Following Yu and Poesio (2020), we use BERT embeddings as the input to a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) to encode tokens and their contexts.",
"Then, we set g i , the representation of span i , to [ h start ( i ) ; h end ( i ) ; h head ( i ) ; f i ] , where h start ( i ) and h end ( i ) are the hidden vectors of the start and end tokens of i , h head ( i ) is an attention-based head vector and f i is a span width feature embedding.",
"1 IS Prediction Layer For each span i , we pass its representation g i to FFNN is , a standard feed-forward neural network.",
"FFNN is outputs a vector oi i of dimension of S , where S is the number of possible IS labels.",
"Specifically: oi i = FFNN is ( g i ) (2) s is ( i, y is ) = oi i ( y is ) (3) where oi i ( y is ) , the y is -th element of oi i , is a score that indicates i 's likelihood of belonging to IS y is .",
"This score is then used to compute s is .",
"Bridging Prediction Layer To predict bridging links, we define the pairwise score between span i and span j as follows: s a ( i, j ) = FFNN b ([ g i ; g j ; g i g j ; u ij ]) (4) where denotes element-wise multiplication, g i g j encodes the similarity between span i and span j , u ij is a feature embedding encoding the distance between two spans 1 , and FFNN b is the FFNN used in the bridging prediction layer.",
"This pairwise score is then used to compute s b (see Equation (1)).",
"Coreference Prediction Layer The coreference prediction layer is defined in the same way as the bridging prediction layer, with the coreference pairwise score s c ( i, j ) between two spans i and j computed by another FFNN, FFNN c .",
"Note that the first few layers of FFNN c and FFNN b are shared.",
"As noted before, we propose to guide the learning process by incorporating consistency constraints on the three tasks involved in our model.",
"Below we design five cross-task consistency constraints and show how they can be incorporated into our model in a soft manner.",
"1 This feature embedding is originally proposed by Clark and Manning (2016).",
"See their paper for details.",
"To enforce P1 in a soft manner in our model, we define a penalty function p 1 , which imposes a penalty on span i if it violates the constraint, as shown below: p 1 ( i ) = 0 argmax y is Y s is ( i,y is ) = brid s is ( i, brid ) max y is Y\\{ brid } s is ( i,y is ) otherwise (5) where Y is the set of possible IS labels.",
"Intuitively, p 1 estimates the minimum amount that needs to be adjusted so that span i 's IS type is not BRIDGING .",
"In particular, p 1 returns 0 (i.e., no penalty) if i 's IS type is not BRIDGING .",
"s b ( i, ) = s b ( i, ) 1 p 1 ( i )",
"where 1 is a positive constant that controls the hardness of the constraint.",
"The smaller 1 is, the softer the constraint is.",
"Intuitively, if P1 is violated, s b ( i, ) will be lowered by the penalty term, and the dummy antecedent will less likely be selected as the antecedent of i .",
"Constraint P2: If a span i has OLD as its IS value, then its coreference antecedent must not be the dummy antecedent.",
"Similar to P1 , we define a penalty function p 3 to enforce P3 : p 3 ( i ) = 0 argmax y Y s is ( i,y ) = brid max y Y\\{ brid } s is ( i,y ) s is ( i, brid ) otherwise (7) We employ p 3 to update s b as follows: s b ( i, j ) = s b ( i, j ) 3 p 3 ( i ) (8) where 3 , like 1 , is the hardness coefficient.",
"This penalty is applied only when P3 is violated.",
"Specifically, if IS task predicts a span i as non-B RIDGING but its antecedent selected in the bridging task is not the dummy antecedent, then the penalty term will lower the s b score for each of i 's non-dummy antecedents, which in turn makes it more likely for the dummy antecedent to be selected as the antecedent of i .",
"Constraint P4: If a span i does not have OLD as its IS value, then its coreference antecedent must be the dummy antecedent.",
"Constraint P5: If a span i has a non-dummy antecedent as its coreference antecedent, then its bridging antecedent must be the dummy antecedent.",
"The penalty function p 5 used to enforce P5 is defined as follows: p 5 ( i ) = 0 arg max j Y ( i ) s c ( i, j ) = max j Y ( i ) \\{ } s c ( i, j ) otherwise (9) 762 where Y ( i ) is the set of candidate antecedents of span i .",
"where 5 is the hardness coefficient.",
"Next, we incorporate the prior knowledge provided by rule-based resolvers into our model.",
"Specifically, we employ the set of corpus-specific rules designed by Rsiger et al. (2018).",
"Recall that the output of a rule-based bridging resolver is a set of links between a bridging anaphor and one of its antecedents.",
"We incorporate these bridging links into our model by encoding them as a binary feature, r ij , whose value is 1 if and only if the rule-based resolver posits a bridging link between span i and span j .",
"This feature will be used as an additional feature for FFNN b and FFNN c .",
"As noted by Rsiger et al. (2018), rule-based resolvers are precisionrather than recall-oriented.",
"The reason is that these hand-crafted rules are designed to resolve specific (rather than all) categories of bridging anaphors.",
"For instance, one rule is designed to resolve a building part (e.g., \"the door\") to the building of which it is a part (e.g., \"the house\").",
"Because of the low-recall nature of rule-based resolvers, the feature r ij , which we compute based on the rule-based outputs, could be perceived as not particularly useful by our model.",
"Consequently, to encourage the model to seriously take into consideration the potentially useful information encoded in r ij , we design a rule loss (see Section 3.4), which imposes a penalty on the model during training if the antecedent selected by the model is a non-dummy antecedent that is neither a correct antecedent of i nor the one selected by the rules (as encoded in r ij ).",
"The loss function, L () , consists of the losses of the three tasks and the rule loss as follows:",
"where d is the number of training documents and the hyperparameters (i.e., the 's), which determine the trade-off between the task losses, are tuned using grid search to maximize the average resolution F-scores on development data.",
"Defining the bridging loss is tricky since the antecedents for each bridging anaphor are evaluated in the form of coreference clusters.",
"We adopt the entity coreference loss function originally defined by Wiseman et al. (2015).",
"Specifically, let GOLD b ( i ) denote the set consisting of span i 's bridging antecedent as well as the spans preceding i that are coreferent with the antecedent, and y lb be arg max y GOLD b ( i ) s b ( i, y ) .",
"In other words, y lb is the highest scoring (latent) antecedent of i according to s b among all the antecedents of i .",
"The loss function for bridging is defined as: L b () = (cid:80) ni =1 max j Y ( i ) ( b ( i, j )(1 + s b ( i, j ) s b ( i, y lb ))) (12) where b ( i, j ) is a mistake-specific cost function that returns the cost associated with a particular type of error if an error exists and 0 otherwise (Dur-rett and Klein, 2013).",
"2 Intuitively, the loss function penalizes a span i if the predicted antecedent j has a higher score than the correct latent antecedent y lb .",
"The task loss for coreference, L c , is defined in the same way as the bridging loss, having an analogous mistake-driven cost function c ( i, j ) .",
"3 The task loss for the IS prediction task, L is , is the weighted softmax cross entropy loss, where misclassified bridging mentions and non-bridging mentions are weighted according to a mistake-driven cost function is ( i, j ) .",
"4 The rule loss is motivated by the bridging loss.",
"Specifically, the model will be penalized if there exists an incorrect non-dummy candidate antecedent whose s b score is higher than the score of the antecedent chosen by the rules, as shown below: L r () = (cid:80) i N max j Y ( i ) \\ ( r ( i, j )(1 + s b ( i, j ) s b ( i, y r ))) (13) where N is the set of candidate anaphors for which the rule-based system found a (non-dummy) an-2 In b ( i, j ) , there are three error types: (1) false link (incorrectly resolved anaphoric mentions); (2) false new (anaphoric mentions misclassified as non-anaphoric); and (3) wrong link (non-anaphoric mentions misclassified as anaphoric).",
"We use hyperparameters b 1 , b 2 , and b 3 to determine their trade-offs.",
"3 In c ( i, j ) , the error types are the same as those in b ( i, j ) .",
"We use hyperparameters c 1 , c 2 , and c 3 to determine their trade-offs.",
"4 In is ( i, j ) , there are two error types: (1) false new (bridging mentions misclassified as non-bridging); and (2) false bridging (non-bridging mentions misclassified as bridg-ing).",
"We use hyperparameters is 1 and is 2 to determine their trade-offs.",
"tecedent, y r is the antecedent selected by the rules, and r ( i, j ) is an indicator function that returns 0 if j is the correct antecedent and 1 otherwise.",
"As mentioned in the introduction, we pre-train the coreference module in our MTL framework on the English portion of OntoNotes 5.0 5 , excluding those documents that appear in ISNotes or BASHI.",
"To do so, we pre-train the full model shown in Figure 2, setting b to 1 and the remaining 's to 0 in the loss function so that only the network weights associated with the coreference module will be updated.",
"Note that we follow Yu and Poesio (2020) and use the softmax cross entropy loss rather than the max-margin loss for L b during pre-training, the reason being that this could simplify pre-training by obviating the need to tune the hyperparameters associated with the mistake-specific cost functions.",
"We use three English corpora that are arguably the most widely used corpora for bridging evaluation, namely ISNotes (composed of 50 WSJ articles in OntoNotes) (Markert et al., 2012) , BASHI (The Bridging Anaphors Hand-annotated Inventory, composed of another 50 WSJ articles in OntoNotes) (Rsiger, 2018), and ARRAU (composed of articles from four domains, RST, GNOME, PEAR, and TRAINS) (Poesio and Artstein, 2008; Uryupina et al., 2020).",
"Following previous work, we report results only on RST, the most comprehensively annotated segment of ARRAU.",
"Table 1 shows the statistics on these corpora.",
"For ARRAU RST, we use the standard train-test split.",
"For ISNotes and BASHI, we divide the documents in each corpus into 10 folds (8 folds for training, 1 fold for development, and 1 fold for testing) and report 10-fold cross-validation results.",
"Following previous work (Hou et al., 2014; Roesiger et al., 2018), we report results for full bridging resolution based on gold mentions.",
"In this setting, a system is given as input both a document and its the gold mentions.",
"The goal is to identify bridging anaphors from the gold mentions and resolve them 5 https://catalog.ldc.upenn.edu/ LDC2013T19 Corpora Docs Tokens Mentions Anaphors ISNotes 50 40,292 11,272 663 BASHI 50 57,709 18,561 459 ARRAU RST 413 228,901 72,013 3,777 Table 1: Statistics on different corpora.",
"the gold mentions.",
"There is a caveat in this evaluation setting, however.",
"In ISNotes and BASHI, some bridging antecedents correspond to events (see Example (4) in Table 5), and previous studies differ in terms of how event antecedents should be handled.",
"The reason is that while these event antecedents are annotated, they are not annotated as gold mentions.",
"When reporting results on resolving gold mentions, some previous work (e.g., Hou et al. (2014), Hou et al. (2018)) chose not to include these event antecedents in the list of candidate antecedents and others (e.g., Roesiger et al. (2018), Yu and Poesio (2020)) did.",
"Obviously, the setting in which gold event antecedents are not included in train-ing/evaluation is harsher because it implies that anaphors with event antecedents will always be resolved incorrectly.",
"We believe that including gold event antecedents during evaluation does not represent a realistic setting, and will only report results using the \"harsh\" setting in this paper.",
"Following Yu and Poesio (2020), we report results for bridging recognition and resolution in terms of precision (P), recall (R), and F-score (F).",
"For recognition, recall is the fraction of gold bridging anaphors that are correctly identified, whereas precision is the fraction of bridging anaphors identified by the system that is correct.",
"For resolution, recall and precision are defined in a similar fashion.",
"In addition, we report IS classification results in terms of accuracy and coreference results in terms of CoNLL score (Pradhan et al., 2014), which is the unweighted average of the F-scores provided by three metrics, MUC (Vilain et al., 1995), B 3 (Bagga and Baldwin, 1998), and CEAF e (Luo, 2005).",
"To train the neural models in our experiments, we use ADAM (Kingma and Ba, 2014) as the optimizer and set all model parameters that originated in Yu and Poesio's (2020) model to the same values as those reported in their paper.",
"Each model is trained for up to 150 epochs in ISNotes and BASHI 764 Model Bridging IS Coreference Recognition Resolution Classification Resolution P R F P R F Accuracy CoNLL ISNotes Roesiger et al. (2018) 46.8 17.7 25.6 32.0 12.1 17.5 -Y&P-MTL 51.8 27.2 36.7 ( 1 . 6) 25.3 12.5 17.4 ( 1 . 3) -62.6 Hybrid 44.8 35.5 39.6 ( 0 . 2) 24.7 19.6 21.9 ( 1 . 6) -62.6 MM-MTL 45.5 41.6 43.4 ( 0 . 8) 21.1 19.3 20.2 ( 0 . 7) -64.5 Full model 54.1 48.0 50.9 ( 0 . 2) 27.6 24.5 26.0 ( 0 . 0) 78.0 76.3 BASHI Roesiger et al. (2018) 33.5 22.9 27.2 17.3 11.8 14.0 -Y&P-MTL 35.7 15.2 21.3 ( 1 . 5) 19.3 8.2 11.5 ( 0 . 8) -57.2 Hybrid 32.4 32.3 32.3 ( 0 . 7) 16.3 16.3 16.0 ( 0 . 4) -57.2 MM-MTL 37.9 27.7 32.0 ( 0 . 3) 15.6 11.4 13.2 ( 0 . 6) -57.0 Full model 40.7 35.3 37.5 ( 0 . 7) 20.1 17.5 18.6 ( 0 . 1) 85.3 72.6 ARRAU RST Roesiger et al. (2018) 18.3 33.9 23.7 11.7 21.7 15.2 -Y&P-MTL 27.6 23.1 25.2 ( 0 . 3) 20.5 17.2 18.7 ( 0 . 1) -55.9 Hybrid 16.8 43.2 24.2 ( 0 . 1) 11.3 29.1 16.3 ( 0 . 1) -55.9 Full model 26.1 45.6 33.2 ( 1 . 2) 17.1 29.8 21.7 ( 0 . 0) 84.5 61.2 Table 2: Results of different resolvers on bridging resolution and related tasks.",
"and up to 200 epochs in ARRAU, with early stopping based on the development set.",
"For our model, we pre-train the coreference model for 15 epochs, and the remaining parameters are chosen jointly using grid search to maximize resolution F-score on development data.",
"Specifically, the weights associated with each task and the rule in the loss function (i.e., the i 's) are searched out of { 0 .",
"1 , 0 .",
"5 , 1 , 5 , 10 , 20 , 30 } .",
"The weights associated with the mistake-driven cost functions (i.e., the i 's) are searched out of { 0 .",
"1 , 0 .",
"5 , 1 , 5 , 10 , 15 , 20 } .",
"The hardness coefficients of the consistency constraints (i.e., the i 's) are searched out of { 0 .",
"05 , 0 .",
"1 , 0 .",
"5 , 1 , 5 , 10 , 20 , 30 } .",
"6 4.2 Baseline Systems We employ three baselines.",
"The first one is Rsiger et",
"al.'s (2018) rule-based approach, which consists of rules that are built on top of Hou et al. (2014).",
"7 The second one, Y&P-MTL, is Yu and Poesio's (2020) MTL system.",
"8 The third one is the Hybrid rule-based and learning-based system proposed by Kobayashi and Ng (2021) in which the rules are first applied and then Y&P-MTL is used to resolve the remaining bridging anaphors.",
"6 See Appendix A for the final hyperparameters chosen for the full model.",
"Results are shown in Table 2.",
"A few points about the baseline results deserve mention.",
"First, in terms of bridging recognition and resolution performance, the best baselines are Hybrid for both ISNotes and BASHI and Y&P-MTL for ARRAU RST.",
"Hence, these two baselines can be viewed as the prior state of the art.",
"Second, while Rsiger et",
"al.'s rule-based model never achieves the best results on any of the three datasets, it is not always the worst performer: Y&P-MTL is the worst baseline on BASHI in terms of resolution.",
"9 Third, Hybrid fails to improve the performance of Y&P-MTL in ARRAU RST, meaning that the rules fail to provide additional benefits to Hybrid.",
"This could be attributed to the fact that the rules in ARRAU RST have much lower recognition and resolution precision scores than those in ISNotes and BASHI (Roesiger et al., 2018).",
"While Y&P-MTL uses undersampling (to reduce the number of negative examples used to train the bridging module) and a likelihood loss, we additionally experiment with a max-margin loss (see Section 3.4) without undersampling in our model.",
"To see how these two changes impact performance, we create another model, MM-MTL, which is simply a max-margin version of Y&P-MTL without 9 The baseline results in Table 2 are lower than those reported in the original papers because (1) we report results using the \"harsh\" setting (see Section 4.1.2); (2) Roesiger et al. (2018) and Kobayashi and Ng (2021) postprocess the system output with gold coreference information, and (3) Yu and Poesio (2020) and Kobayashi and Ng (2021) use additional labeled data for model training.",
"undersampling.",
"Results on the development set are mixed: while MM-MTL outperforms Y&P-MTL on ISNotes and BASHI, the reverse is true on ARRAU RST.",
"Consequently, we use the max-margin loss without undersampling when training our model on ISNotes and BASHI, but fall back on the likelihood loss with undersampling for ARRAU RST.",
"To better understand the impact of using a max-margin loss with undersampling, we show in Table 2 the test results of MM-MTL.",
"As we can see, MM-MTL outperforms Y&P-MTL by 6.710.7% points in F-score for bridging recognition and 1.7 2.8% points in F-score for bridging resolution in ISNotes and BASHI.",
"The last row of each section of Table 2 shows the results of our full model, which outperforms the best baseline by 5.211.3% points in F-score for bridging recognition and 2.64.1% points in F-score for bridging resolution.",
"Hence, the full model establishes new state-of-the-art results on these three datasets.",
"For bookkeeping purposes, we also report the scores for each component of our model in terms of IS classification accuracy and coreference CoNLL score.",
"To evaluate the contribution of the different components in our full model, we show in Tables 3 and 4 ablation results on ISNotes, which we obtain by removing one component at a time from the model and retraining it.",
"Note that for coreference we show the anaphor recognition results as they are affected by the consistency constraints.",
"Consistency constraints.",
"Ablating the consistency constraints means removing all the penalty terms from s b and s c .",
"The resulting system resembles a typical multi-task learning setup, where the different tasks only interact via a shared representation.",
"As we can see in Table 3, bridging resolution F-score drops by 1.7% points, coreference recognition F-score drops by 0.5% points, and IS bridging recognition F-score drops by 1.2% points.",
"These results suggest the effectiveness of using consistency constraints in a multi-task setup.",
"Soft Hard.",
"Next, we replace soft constraints with hard constraints.",
"Comparing with the results in row 2, bridging resolution F-score drops by 1.2% points.",
"This indicates that having hard constraints is worse than having no constraints at all.",
"rule loss and by 2.4% points when ablating both the rule loss and the rule feature.",
"These results suggest that the rule feature is useful and that the rule loss enhances the effectiveness of the rule feature.",
"Pre-training.",
"Next, we do not pre-train the coreference component in the multi-task framework.",
"This causes bridging resolution F-score and coreference recognition F-score to drop abruptly by 5.8% points and 3.9% points respectively, suggesting the important role played by pre-training.",
"Coreference resolution and IS classification tasks.",
"Next, we ablate one of the tasks in the multi-task framework.",
"Bridging resolution F-score drops by 3.4% points when ablating coreference and by 3.3% points when ablating IS classification.",
"These results suggest that both tasks contribute considerably to bridging resolution performance.",
"Individual soft constraints.",
"Finally, we ablate one soft constraint at a time from the full model.",
"Results are shown in Table",
"4. Bridging resolution F-score drops by 1.22.3% points, suggesting the positive contribution of each soft constraint.",
"While our discussion of these results has focused on bridging resolution, the same trends can be observed for bridging recognition for the most part.",
"Overall, these results suggest that each component contributes positively to bridging resolution.",
"Although our full model outperforms all previous models for bridging resolution, it is still far from perfect.",
"To better understand what areas of improvement are required, we discuss some common 766 errors made by our full model in this subsection.",
"Bridging anaphora recognition errors.",
"Recall errors in bridging anaphora recognition are the result of a system's failure in identifying bridging anaphors.",
"We find that on the three datasets, the highest proportion of the recall errors (57% on ISNotes, 61% on ARRAU, and 82% on BASHI) is due to the fact that a large number of bridging anaphors are misclassified as new or other 10 mentions in the IS classification module, such as income in Example (1) in Table",
"5. Precision errors in bridging anaphora recognition are the result of a system's misclassification of non-bridging mentions as bridging anaphors.",
"Similar to the recall errors described above, most precision errors are new or other mentions being misclassified as bridging, which account for 50%, 74% and 82% of the precision errors in ISNotes, ARRAU, and BASHI, respectively.",
"In Example (2), service is misclassified by both the bridging and IS components as a bridging anaphor.",
"In general, it seems that our system struggles to distinguish bridging anaphors from generic new mentions with simple syntactic structures, an observation that has also been reported in previous work (Hou, 2021; Kobayashi and Ng, 2021).",
"Note that most of these bridging or new mentions are relational nouns (de Bruin and Scha, 1988).",
"Normally, whether additional implicit arguments are required to interpret such relational nouns depends on the surrounding context.",
"In Example (1), the industry is necessary to fully understand the meaning of income ; while in Example (2), no additional implicit arguments are required to understand the meaning of service .",
"Precision errors in bridging anaphora resolution appear when a system selects the wrong antecedent for a bridging anaphor.",
"A major reason for this error is that our model largely fails to exploit contextual information.",
"In Example (3), the model links the bridging anaphor a spokesman to the wrong antecedent [the state], which is reasonable if one does not look into the context.",
"However, according to the context, the correct antecedent should be Gov. Deukmejian , which requires a system to know that Gov. is the abbreviation for 10 Unlike ISNotes and ARRAU, BASHI does not have IS annotations.",
"We use heuristics to derive four IS types: old , mediated/bridging , mediated/comparative and other .",
"A men-tion's IS is other if it is not annotated as mediated and is not coreferent with any previous mentions.",
"(1) In 1984, an attempt was made to crack down on the industry with tougher restrictions.",
"Then, in 1988, a proposal to keep better track of income by selling prepaid cards for pachinko was fielded in parliament.",
"(2) The Bay Area Rapid Transit system, which runs subway trains beneath the bay, is braced for a doubling of its daily regular ridership to 300,000.",
"BART has increased service to 24 hours a day in preparation for the onslaught.",
"(3) Both Mr.Brown, the state's most influential legislator, and Gov. Deukmejian favor a temporary sales tax increase should more money be needed than [the state] can raise from existing sources and the federal government.",
"According to a spokesman , the governor is also studying the possibility of raising state gasoline taxes.",
"(4) ... the drug still lacks federal approval for use in the youngest patients.",
"As a result , many youngsters have been unable to obtain the drug ...",
"Governor and that normally a governor will have a spokesman.",
"In addition, on ISNotes, 6% of the bridging anaphors have a non-mention antecedent (see a result in Example (4)) and 12% of the bridging anaphors have antecedents that are more than five sentences away.",
"Currently our system does not handle these difficult cases.",
"We proposed the first neural model for full bridging resolution that (1) exploits the connection between information status classification, entity coreference resolution, and bridging resolution in a multi-task learning framework, (2) employs soft cross-task consistency constraints to guide the learning process, (3) pre-trains the entity coreference model, and (4) integrates prior knowledge encoded in handcrafted bridging resolution rules into the learning framework.",
"Our model outperformed several strong baselines and achieved state-of-the-art results on three evaluation datasets.",
"Ablation results provided suggestive evidence that each component of our model contributed positively to bridging resolution performance.",
"We thank the four anonymous reviewers for their insightful comments on an earlier draft of the paper.",
"This work was supported in part by NSF Grants IIS-1528037 and CCF-1848608.",
"Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of the NSF."
] | [
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"result",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"objective",
"other",
"other",
"other"
] |
[
"Computational linguistic research on language change through distributional semantic (DS) models has inspired researchers from fields such as philosophy and literary studies, who use these methods for the exploration and comparison of comparatively small datasets traditionally analyzed by close reading.",
"Research on methods for small data is still in early stages and it is not clear which methods achieve the best results.",
"We investigate the possibilities and limitations of using distributional semantic models for analyzing philosophical data by means of a realistic use-case.",
"We provide a ground truth for evaluation created by philosophy experts and a blueprint for using DS models in a sound methodological setup.",
"We compare three methods for creating specialized models from small datasets.",
"Though the models do not perform well enough to directly support philosophers yet, we find that models designed for small data yield promising directions for future work.",
"Philosophers apply text analysis to understand and delineate the precise meaning of concepts and the relations between them in a given text.",
"This includes comparative research that investigates differences in how concepts are viewed in different philosophical schools or by individual philosophers.",
"Betti and van den Berg (2014) point out that comparative research on concepts should follow a conceptual model approach.",
"This approach states that we should not look at shifts of individual concepts in isolation, but rather address changes of a conceptual model as a whole.",
"In such a system, relations between concepts are made explicit and comparative studies should identify how such relations change.",
"Previous studies have shown that distributional methods can be used to support philosophical research by retrieving passages relevant to concepts in an author's work (e.g., the concept of grounding within the work of Bernard Bolzano, van Wierst et al., 2016; Ginammi et al., 2020), but can we also generate distributional semantic (DS) models that are precise enough to identify differences in concepts?",
"This paper takes a first stab at addressing this question.",
"In particular, we address the challenges involved in dealing with highly technical domain-specific terms that are defined in small corpora.",
"As such, our use case has properties difficult for DS modeling, but typical for disciplines working with comparatively limited data.",
"We compare domain-specific embeddings created using Word2Vec (Mikolov et al., 2013a,b) and a count-based SVD model (Levy et al., 2015) to those created by Nonce2Vec (Herbelot and Baroni, 2017), specifically designed for dealing with tiny data.",
"Taking into account previous work criticizing the use of DS models for detecting sense-shift, we construct a data-specific ground truth, apply multiple evaluation metrics and verify whether results are stable across various random initializations.",
"Our results confirm that SVD representations are su-perior to Word2Vec for small data and show that Nonce2Vec outperforms Word2Vec and, in most cases, SVD.",
"However, results are currently not accurate enough for providing evidence or new insights to philosophers.",
"Nevertheless, we are hopeful that better results can be obtained in the future by optimizing Nonce2Vec to deal with small rather than tiny data and by creating a bigger, more balanced ground truth.",
"The main contributions of this paper are (1) a new ground truth of philosophical concepts linked to a clean philosophical corpus that is particularly challenging to model; (2) a blueprint for investigating DS models for domain specific research; (3) a comparative study of different approaches of creating embeddings for highly domain-specific terms.",
"1 1 The ground truth, details of results and code can be found on GitHub: https://github.com/YOortwijn/ 2512 After presenting related work, we describe the philosophical context: requirements, corpus and our ground truth.",
"In Section 4, we outline how the DS models we use are created.",
"We then present our evaluation and results in Section 5 which is followed by our conclusions and discussion.",
"In this section we cover (1) other work related to distributional semantics (DS) for specific concepts and conceptual change (2) critical reflection on evaluation and the methodology involved and (3) work on small datasets and identification of domain specific meaning.",
"A well-known application of DS is the use of diachronic word embeddings to track and analyze changes in the meaning of words over periods of time (Kim et al., 2014; Kulkarni et al., 2015; Mitra et al., 2015; Hamilton et al., 2016b,a; Kenter et al., 2015; Tahmasebi and Risse, 2017; Montariol and Allauzen, 2019; Giulianelli et al., 2020, e.g.).",
"Most of these approaches study what is called sense-shift , which is the change in (dominant) sense of a specific word by comparing the word's meaning representations in different time periods (Kutuzov et al., 2018).",
"DS methods have also been used to study concepts related to gender and intersectional-ity (Herbelot et al., 2012), studying cultural stereotypes (Lewis and Lupyan, 2019) or harm-related concepts in psychological research papers (Vylo-mova et al., 2019).",
"Wevers and Koolen (2020) survey three ways in which distributional semantic representations can help trace concept change.",
"However, none of these methods requires historians of ideas to fix initial and testable hypotheses on the meaning of concepts as Sommerauer and Fokkens (2019) recommend on the basis of Betti and van den Berg (2014).",
"Betti and van den Berg argue that concepts are not isolated, but part of conceptual models.",
"Sommerauer and Fokkens (2019) show that translating conceptual models to words representing them is one of the challenges involved in using DS models for studying conceptual change.",
"They ground their conceptual model of Racism' in literature by sociologists, anthropologists and historians, but argue that domain experts would ideally be involved directly, as is done in the current pa-Challenging_DMs .",
"per.",
"Betti et al. (2020) introduce a concept-focused ground truth designed by domain experts, QuiNE-GT, where paragraphs of philosophical text are annotated in terms of their relation to a conceptual model of the concept of naturalized epistemology in Quine's works.",
"We also make use of conceptual modeling methodology to build a ground truth, but our task is to extract knowledge on target term relations rather than to perform an information retrieval task searching for paragraphs relevant to a research question.",
"While QuiNE-GT contains exhaustive lists of words pertaining to a particular research question, we aim for broader coverage of different terms used by Quine and their relations.",
"An interdisciplinary collaboration with domain experts can lead to hypotheses about shifts or nearest neighbors of specific terms, which can be tested by methods also used for detecting sense shift.",
"These methods are not without challenges.",
"The meaning representations are affected by random factors such as initialization and order of example (Hellrich and Hahn, 2016a) and frequency effects (Dubossarsky et al., 2017).",
"A major obstacle in addressing these critical points is the lack of high quality evaluation sets (Tahmasebi et al., 2018; Kutuzov et al., 2018) and a tendency to use a single evaluation metric (Gladkova and Drozd, 2016) while each metric has downsides (Bakarov, 2018) .",
"Evaluations on small sets of hand-picked examples that exhibit strong sense-shift (e.g. Hamilton et al. (2016a)) leave it unclear whether they are also suitable for making new discoveries or exploring data.",
"van Aggelen et al. (2019) introduce a large-scale evaluation set derived from a thesaurus and show that performance of distributional methods is much lower on this more challenging set.",
"These critical findings stress the need for methodologies that allow us to establish the quality of embeddings and to tell the difference between a stable, reliable finding and an artefact of the method.",
"Dubossarsky et al. (2017) propose the use of shuffled and synchronic corpora for verifica-tion.",
"Rosenfeld and Erk (2018) use synthetic words that consist of two real words merged together that result in a shift of these words' senses for evaluation.",
"Sommerauer and Fokkens (2019) recommend stress-testing through control words (that should not change) and by comparing results on multiple models.",
"We supplement these proposals for 2513 diachronic models by providing methods that can be used as a strict test of synchronic model quality, independently of measuring change (so that frequency effects are not a risk).",
"Differences in the interpretation of a term in different authors' work can be understood as a difference in its relations to other terms.",
"A difference 2514 in how terms can be clustered together would then show a difference in the conceptual relations these terms have to each other.",
"To ground these methods, we introduce a novel, high quality ground truth containing fine-grained meaning distinctions in the philosophical domain.",
"We stress-test our findings by applying multiple evaluation metrics and control for random factors by initializing our models multiple times.",
"2.3 Dealing with Small Datasets In addition to the challenges outlined above, we are faced with the issue that domain-specific corpora are typically small, i.e. up to a few million tokens rather than web scale.",
"Learning embeddings from small corpora is not an easy task, where SVD models outperform Word2Vec (W2V) (Sahlgren and Lenci, 2016, on 1M words, Asr et al., 2016, on 8M words), and learning them for rare words presents further difficulty (Luong et al., 2013).",
"Nonce2Vec (N2V) (Herbelot and Baroni, 2017) addresses this issue through high-risk' incremental learning with an initially high but decaying learning rate, allowing them to learn embeddings from single sentences (called tiny data).",
"Faruqui et al. (2015) incorporate ontological information from lexical-semantic databases as a postprocessing step, which can be done when training data is sparse.",
"However, when working in a specific domain, such as the texts of a particular philosopher, words may have different and specific uses, and general-purpose evaluation resources or training data do not always reflect these meanings (Betti et al., 2020).",
"Bloem et al. (2019) confirm the domain-specific character of philosophical writings showing that two vectors for the same word, one trained on Wikipedia and one trained on the works of a specific philosopher, can have low similarity, especially for high-frequency terms.",
"Shoemark et al. (2019) find that the top ranked words are domain-specific to the Twitter data they used.",
"Wohlgenannt et al. (2019) evaluate DS models trained on two fantasy book series by having domain experts manually compile evaluation datasets addressing the relevant word senses, incorporating domain knowledge in both training and evaluation.",
"Roy et al. (2019) propose incorporating text annotation of in-domain vocabulary and semantic relations into the word embeddings to improve the quality of domain-specific word embeddings learned from relatively small data sets.",
"In this paper, we investigate how different approaches for learning embeddings deal with the domain specific concepts we are dealing with.",
"We compare Herbelot and Baroni's N2V to continuing training with W2V and to directly creating SVD models on our corpus.",
"The goal of this section is to provide some insights into the process of interpreting philosophical texts and the use case for our experiments.",
"We briefly describe the process and challenges of close-reading and how it could be supported by DS models.",
"Then we present a corpus of philosophical texts and a ground truth for philosophy.",
"Many philosophical research questions focus on the interpretation and comparison of philosophical views expressed in writing.",
"These questions revolve around specific concepts and how they are defined and viewed by different philosophers.",
"Often, different philosophers use the same terms to describe different concepts.",
"For example, Quine sees reference as a relation between a singular term and a physical object, where a physical object is not part of reality , but of our ontology (Quine, 1960).",
"This is opposed to many other philosophers, who take what we refer to and what we receive stimulation from as the same thing, i.e., physical objects in reality.",
"To make solid comparisons between views, it is necessary to determine which concepts are closely related to each other, or which concept pairs stand in similar relations to others.",
"To do this, philosophical experts practice close-reading.",
"The interpretation of only a single passage requires close-reading and expertise of not only the work the passage is in, but often also other works by the same author or even other authors.",
"Conclusions are often drawn on a small subset of the relevant available data.",
"It almost always requires making a selection of sources to consider and thus allows for cherrypicking data.",
"The use of computational linguistic methods, instead, could make it possible to consider all available data as a basis for evidence, and thereby prevent biased source selection.",
"Replication instructions are available here: https://github.com/ YOortwijn/QuiNE-ground-truth 3 A more detailed and accessible explanation of the conceptual network, including further motivation for the categorization of terms can be found at https://github.com/ YOortwijn/Challenging_DMs 2515 Quine.",
"Computational methods that can capture this aspect of meaning can be applied in various stages of philosophical research.",
"Exploration.",
"In the first stages of research, a philosopher might have a single or a few passages or terms that should be interpreted.",
"At this point, they may want a rough overview of other passages or terms relevant to the one(s) under consideration.",
"DS models may help the researcher to find relevant passages without any of the search terms they may use in key term search.",
"These passages can provide input for more directed searches and be a start for a traditional research path with close-reading of the identified passages.",
"The recall of the method for this application need not be very high: as long as the researcher identifies some new relevant passages without being overloaded with irrelevant ones and the selection is not biased towards a specific interpretation, DS models enrich the philosopher's research.",
"Testing Hypotheses about the Text.",
"When a researcher already has some competing hypotheses for interpretation based on close-reading of some works or passages, or based on secondary literature, DS models can help to compile evidence for both hypotheses and compare the results.",
"If there are multiple possible interpretations of a term, a DS model could provide insight into which terms are most closely related to this term, giving evidence for the correct interpretation.",
"If the outcome of such a comparison is to be used as direct evidence, it is essential that the DS model is highly accurate and a methodology is applied to distinguish veritable observations from noise.",
"A researcher may however also use these results in a more surveying manner.",
"In this case, more accuracy is needed than in the case of identifying passages, but a certain amount of error is acceptable.",
"In this paper, we aim to investigate the level of accuracy we can obtain on philosophical text with either of these applications in mind (surveying hypotheses or providing evidence for a hypothesis).",
"We make use of a large corpus that comprises the virtually complete oeuvre in English of Willard V. O. Quine, the QUINE corpus (Version 0.5, Betti et al., 2020), 2 for creating our DS models.",
"The cor-2 The corpus was derived from copyrighted works by Betti et al. (2020).",
"pus includes texts on various topics, from formula-heavy logic works to philosophy of language.",
"Version 0.5 of this corpus consisting of 228 books and articles by Quine, containing 2,150,356 word tokens and 38,791 word types.",
"It is a high quality corpus where scanned page images were OCR-processed and corrected manually.",
"Establishing a ground truth for philosophical concepts is not trivial (see e.g. van den Berg et al. (2018), Betti et al. (2020)).",
"We address this by building on the methods described by Betti and van den Berg (2014) for building conceptual models.",
"Instead of trying to understand the meaning of a term in isolation, we focus on the interrelations of terms.",
"We base our ground truth on Quine's Word and Object (Quine, 1960), which encompasses many of the terms and themes that Quine discusses throughout the rest of his work.",
"3 .",
"We obtain this book's most important terminology from its index.",
"The philosophical expert on our team established a conceptual network representing the term-clusters and relations.",
"The expert categorized each word as either belonging to one of five clusters ( L A N G UAG E , O N TO L O G Y , R E A L I T Y , M I N D , M E TA-L I N G U I S T I C ) or as a relational term (i.e. part of either the reference or regimentation relation that connects (parts) of clusters to each other).",
"Any two terms in the same cluster can be seen as conceptually related (e.g. noun and verb are conceptually related since they are both linguistic items and are therefore both in the L A N G UAG E cluster).",
"The reference relation connects terms from the language and ontology cluster, i.e. elements of language refer to elements of the ontology.",
"Regimentation connects parts of the language and meta-linguistic cluster.",
"So the terms that are clustered together are semantically similar to each other, while the relational terms are related terms that are not necessarily semantically similar.",
"Our conceptual network contains 74 clustered terms and 43 relational terms (overlapping the 74).",
"The conceptual network was checked independently by two other philosophers specialized in that can show they own the original works.",
"There was a 100% consensus among the experts on the clustering of the 74 terms and relations of the 43 terms.",
"Since these terms are core terms in the work of Quine for which most experts agree on their coarse interpretation, high consensus was expected.",
"However, differences in interpretations and disagreement between experts is more likely upon more fine-grained analysis and even though consensus was expected, a fourth consulted expert may still disagree with the interpretation.",
"Even high-quality DS models have certain limitations when it comes to representing words accurately due to their architecture (e.g. expressing very fine-grained differences and polysemy).",
"We identified the following potential challenges prior to examining vector representations from our DS models: First, terms that are related by the reference relation might be closer to each other than to other terms in their respective clusters.",
"For instance, a singular term (cluster L A N G UAG E ) refers to a physical object (cluster O N TO L O G Y ).",
"Therefore, they might be closer to each other than to other terms in their clusters ( relative clause and class , respectively).",
"Second, the L A N G UAG E and M E TA L I N G U I S T I C clusters are relatively similar.",
"While they can be distinguished in Word and Object by their relation to ontology and regimentation but this is not necessarily the case for all of Quine's works.",
"Examples of terms that could be misplaced due to this are article and noun .",
"Third, there are terms that are comparatively distinct from the other terms in their cluster (but nevertheless clear members of the cluster), such as phoneme in the R E A L I T Y cluster.",
"Fourth, the clusters contain some polysemous terms and terms that can be used in both a technical and a non-technical way within Quine's works, e.g., name , particular , context , form .",
"Finally, some terms, such as prelinguistic quality space , might have an extremely low number of occurrences.",
"Based on these observations, we divide our ground truth in the following subsets: (1) terms that should be assigned to the correct cluster and (2) terms that could be assigned to a wrong but also plausible cluster given the corpus and the first two potential challenges by way of the reference or regimentation relation.",
"The focus will be on (1), but (2) will be used in the first task.",
"embeddings for some philosophical terms can be learned from Wikipedia-data.",
"As a baseline, we include a model trained exclusively on a 2019 Wikipedia dump using default Word2Vec (W2V), wikipedia-W2V .",
"Multi-word target terms were linked by underscores to have a single vector per target term and 85 of the 99 target terms are in the vocabulary of this model.",
"We test an SVD count-based model, using the PPMI-SVD approach from Levy et al. (2015) and two predictive approaches for creating our DS models: W2V (Mikolov et al., 2013a,b) in its Gensim implementation ( Rehurek and So-jka, 2010), as well as Nonce2Vec (Herbelot and Baroni, 2017, N2V) adapted for small, in-domain data situations (Bloem et al., 2019).",
"To learn an embedding for a specific term, N2V uses the sentences in which this term occurs to map it into a previously learned general-domain semantic background space trained on Wikipedia data.",
"4 This is done by initializing the vector for the target term to the sum of the background space vectors of words in the in-domain context sentence from the Quine corpus, following Lazaridou et al. (2017).",
"Training then takes place with an initial high learning rate and parameter decay, while the background space is frozen and only the target term is learned.",
"Using W2V, we learn embeddings for specific terms by training only on the in-domain context sentences of our target terms.",
"We test two initialization methods: random initialization, and using the additive model of N2V.",
"We also modify N2V to have a random initialization condition for comparison, giving us four conditions: W2V-random , N2V-random , W2V-additive and N2V-additive (the N2V default).",
"We carry out various preprocessing steps to ensure that we (1) find the maximum of target term mentions and (2) regularize the contexts so we can exploit the full potential of the small corpus.",
"In part, we make use of the preprocessing steps already performed on the QUINE corpus (v0.5), which was sentence-split and tokenized using UCTO 5 and lemmatized using Spacy 6 using its core model for English.",
"7 The QUINE corpus features a rather high number of mathematical expressions.",
"Rather than treating them as unique expressions, they were nor-4 We used the same Wikipedia dump for wikipedia-W2V .",
"malized by replacing them by the symbol XfZ for formulas, and XsZ for symbols.",
"We assume that the specific expressions do not add to the distributional information.",
"For (1), we need to ensure that all instances of the terms in the evaluation set are identified in the corpus.",
"We search for all morphological variants of the target terms and replace them by the unmarked singular form, by means of a manually created list.",
"Furthermore, many of the target terms consist of two or more words, which should receive a single representation.",
"As with the Wikipedia baseline, we search for all mentions of the target terms in the corpus and join all target terms from the ground truth that consist of multiple words (MWEs) by underscores to turn them into a single token.",
"We did not handle MWEs that were not target terms, so no automatic MWE identification took place.",
"We propose a framework for fine-tuning models specifically designed for domain-specific experiments with small data.",
"As the size of our ground truth is comparatively limited (for computational purposes), we do not want to waste' portions of it for fine-tuning.",
"Instead, we use proxy' terms and a proxy' corpus to evaluate and compare models on an artificial task.",
"We aim to select data representative of the target data (inspired by fine-tuning for low-resource languages, Sgaard, 2011).",
"Terms and Corpus .",
"As target terms we select 20 technical terms from the legal domain.",
"Similar to the philosophical target terms, many technical legal terms have distinct or more specific meanings in legal scholarship as opposed to generic corpora.",
"To select a proxy corpus, we compare the contexts of the target terms to the contexts of the legal terms in four candidate corpora: the British Law Corpus (BLC), the Open Access Journal corpus, Wikipedia, and the British National Corpus (BNC).",
"We compare the contexts in terms of easily computable metrics which characterize properties we expect to have an impact on training a DS model: average relative frequency of all the context words, their average polysemy (in terms of Word-Net synsets (Fellbaum, 2010; Miller, 1995)), their entropy (based on unigram frequency), type count, token count, and type/token ratio.",
"We rank each corpus by similarity to the Quine corpus on each metric.",
"Out of the four corpora, Wikipedia and the BNC had an average rank of 1.8, while the BLC was the least similar with 4.",
"Out of the two equal choices in terms of means, the Wikipedia corpus was more similar to the Quine corpus in terms of variance, so we chose this corpus for extracting contexts of the legal proxy terms.",
"Task .",
"As we do not have a conceptual ground truth for the legal terms, we rely on an artificial task.",
"We approximate embedding quality in terms of consistency.",
"Bloem et al. (2019) define a model as consistent if its output does not vary when its input should not trigger variation (i.e. because it is sampled from the same text or domain).",
"We test whether a model creates consistent representations of a term when trained on only a subset of its contexts using artificial examples in the following way: Our artificial examples consist of contexts of two terms, which are merged to become one pseudo-term.",
"Since the pseudo-term's contexts are split evenly between contexts of term1 and term2 , its embedding is expected to be somewhere half-way between the embeddings of the two terms.",
"8 We train separate vectors (cid:126)t 1 and (cid:126)t 2 for term1 and term2 on the basis of 100 occurrences of each, as well as (cid:126)t p for the pseudo-term term1_term2 , based on 50 occurrences of each component term.",
"We then compute the vector half-way between (cid:126)t 1 and (cid:126)t 2 .",
"In a consistent model, the cosine similarity between this vector and (cid:126)t p should be high.",
"In tuning, we perform a grid search and take the average of this metric computed over 10 random pairs of legal terms for each hyperparameter combination.",
"9 The results show that our models can learn vectors for artificial combined terms that are consistent with the middle point between the vectors of the two component terms in vectorial space.",
"There is great variation for different hyperparameter sets.",
"Average cosine similarities varied from 0.08 to 0.87 (N2V-additive) or 0.96 (W2V-random).",
"We found that with the additive initialization, lower learning rates performed better, while with the random initialization, higher number of negative samples had the greatest impact on the consistency scores.",
"For N2V, the lowest parameter decay 8 Our assumption on the expected position of the pseudo-term embedding oversimplifies the nature of DS models.",
"The structure of semantic spaces and the distances between embeddings are still poorly understood, and it is not guaranteed that the embedding of a merged term should ideally be positioned in between its two constituent terms.",
"However, we only assume that such a middle position is a good approximation when evaluating the consistency of a distributional semantic model using artificial data in tuning, not in testing our models.",
"9 The full parameter space can be found in our code repository.",
"rates performed best, probably because our artificial terms have more occurrences (50 and 100) than N2V was designed for (1-4).",
"The initial high learning rate is a core feature of N2V, so we also include the best setting with a learning rate of 1 as an additional condition ( N2V-additive-a1 ).",
"The tuned models were evaluated against the ground truth.",
"This section presents multiple evaluation tasks and results to (1) explore different aspects of model quality and (2) stress-test our findings.",
"In these tasks, we use the 74 terms from the conceptual network that were clustered by the experts.",
"Cluster similarity Our similarity task is defined as follows: Given a target term t t , a term from the same cluster t sc and a term from a different cluster t dc , we test whether the target term t t is closer to t sc than to t dc .",
"If the cosine similarity between t t and t sc is higher than the cosine similarity between t t and t dc , it is counted as correct, else as incorrect.",
"We carry out this comparison for all possible term combinations and report the percentage of correct outcomes.",
"We report the proportion of target-terms that are classified in the correct cluster.",
"We also show the proportion of target terms that are clustered incorrectly but plausibly given their relation (via reference or regimentation ) to other clusters.",
"We exclude all three terms that are out of vocabulary in any of the models, as a difference in target terms distorts the comparison.",
"This way, we ensure that all models are evaluated on the same terms.",
"Table 1 shows that N2V outperforms the W2V models in most cases.",
"The best performance is by N2V with additive initialization (standard N2V), pairing 65.0% correct according to the clusters, and 72.6% when additional relations between terms are also considered correct.",
"The count-based SVD model performs similarly well.",
"These are the only two models that beat the Wikipedia baseline.",
"The best W2V model (W2V-random), pairs 56.4% correct according to the clusters, and 64.3% with additional relations.",
"To evaluate the stability of our best result, we train 25 identically parameterized models as in Hellrich and Hahn's (2016b) reliability metric and obtain similarity scores in a range of 64.04%-65.22% (mean 64.65%, cf. 64.95% in testing) indicating high stability.",
"Dunn index The Dunn Index (DI) is a general metric of cluster quality and can be used to measure how well embeddings from the same cluster are clustered in semantic space (Huang et al., 2016).",
"It is the ratio of the minimum inter-cluster distance to the maximum cluster size, and higher values indicate tighter clusters and better separation.",
"The DI results in table 1 confirm that N2V models outperform W2V models.",
"N2V with additive initialization achieved the highest DI value 0.56, followed by the SVD model (0.35).",
"We can compare this to Huang et al. (2016), who used DI to evaluate word embeddings in the medical domain, using six semantic clusters taken from an expert-defined controlled vocabulary of medical terms using far larger data sources (e.g. PubMed and Wikipedia).",
"In their experiment with 800 terms (we have 99) and six clusters (comparable to our five), their DI scores were 0.16-0.20 for a bag-of-words baseline model, and 0.43 (PubMed) to 0.25 (Wikipedia) for W2V.",
"This is comparable to our wikipedia-W2V condition which scored 0.17 on our clusters and data, indicating that our task is more difficult.",
"In light of this, the 0.56 DI of our N2V-additive model seems quite good, while the 0.08 of W2V-random indicates poor cluster quality.",
"But why did N2V cluster better than SVD while the two did not differ much in the cluster similarity task?",
"DI is determined by both inter and intra distances.",
"We found SVD has greater intra-cluster distances (0.51 inter, 1.47 intra) than N2V (0.55, 0.99) after normalization to unit vectors.",
"This means clusters are more compact in the N2V model, potentially making the cluster similarity task easier.",
"K-means clustering and Centroids We clustered terms from each model into five clusters using the K-means clustering algorithm from scikit-learn (Pedregosa et al., 2011) and evaluated using three of its performance evaluation metrics:",
"(i) ad-2518 Low Mid High 0.45 0.5 0.55 0.6 0.65 0.7 0.75 W2V-add W2V-rand N2V-add N2V-add-a1 N2V-rand SVD Figure 1: Similarity scores from Table 1 split by term frequency.",
"6 Conclusion and Discussion The results show that, in general, N2V and SVD represent the ground truth clusters better than W2V on this type of data.",
"Furthermore, we see that using N2V or SVD for smaller, domain-specific data outperforms a larger domain-general W2V model 2519 trained on a large corpus.",
"justed Rand index,",
"(ii) adjusted mutual information and",
"(iii) Fowlkes-Mallows index.",
"Results for",
"(i) and",
"(ii) show scores close to zero for all models within the bounded range [-1,1], indicating results close to random.",
"The best model is the SVD model",
"((i) 0.12,",
"(ii) 0.17).",
"On",
"(iii), with scores in range [0,1], the best N2V model (0.48) outperforms the best SVD and W2V.",
"Manual inspection of the clusters shows that in many cases the majority of terms is put into a single cluster and the other clusters have only a few terms in them.",
"We also applied a centroid-based approach to evaluate clustering.",
"We calculated the mean of the normalized vectors for each cluster to determine its centroid.",
"We then calculated the F-score by checking for each term whether its cosine distance was closer to its cluster centroid than to another.",
"The best performing model is W2V-random (F-score: 0.10), followed by N2V-random (0.08).",
"All other models perform approximately equally bad (0.04).",
"K-nearest neighbors In our final evaluation, we classify terms into clusters using K-nearest neighbors (KNNs).",
"We compute the macro-averaged F1 score for each term using leave-one-out cross validation.",
"For both k =3 and k =1, the SVD model performs best with an F-score of respectively 0.45 and 0.42.",
"For k =3, the best N2V outperforms W2V, while for k =1 the best W2V outperforms N2V, scoring almost the same as the SVD model.",
"Manual inspection shows that for all models most of the terms from any cluster are either classified as part of the language or the meta-linguistic cluster, which are the two largest clusters.",
"evaluation task again, but with the target terms split by frequency.",
"This allows us to see how the quantity of training data affects the cluster similarity.",
"We distinguish between low-frequency terms (1-49 occurrences, n =22), medium-frequency terms (50-750 occurrences, n =55) and high-frequency terms (750-6730 occurrences, n=19, cutoffs were set to have a reasonable number of terms in the low and high frequency class).",
"For reference, N2V was designed to train on 1-4 occurrences of a term, while for W2V, more is better.",
"We expect additive initialization to outperform random initialization for low frequencies where an informed initialization can make up for a lack of training data.",
"Figure 1 shows that most models benefit from more data, but N2V clearly outperforms W2V in the low frequency condition, even with random initialization.",
"Secondarily, models with additive initialization outperform their randomly initialized variants, possibly due to the transfer of domain-specific information for low-frequency terms noted by Bloem et al. (2019).",
"N2V-add-a1 forms an exception, where the high learning rate may cause massive changes to the initial vector position after only a few training occurrences, performing worse than random.",
"SVD does not pattern with N2V here, performing very poorly on the low frequency terms.",
"This model performs best in the 50-750 occurrence range.",
"The SVD models cannot benefit from additive initialization and should therefore be compared to the randomly initialized models.",
"In the high-frequency range, N2V again performs best, probably due to the low rates of parameter decay selected in the hyperparameter tuning.",
"As expected, W2V performs quite well with more data in its standard random initialization condition.",
"Unexpectedly, it performs quite poorly with the additive initialization.",
"This might be an issue with our tuning process: as all our artificial terms had a relatively low frequency of 100, the tuning task may have selected a model that relies too much on the initialization, and learns poorly for the w2v-additive condition.",
"This shows the importance of the tuning data resembling the target data closely.",
"N2V is able to learn higher-quality embeddings than W2V from small texts, as it was designed to, and we confirm previous work showing that the same holds for count-based models to a limited extent.",
"Clustering methods (centroid and k-means) do not detect anything close to the clusters defined in the ground truth, whereas more fine-grained methods (cluster similarity and KNN) do yield results that are clearly above chance.",
"The evaluation in terms of the Dunn index is also promising.",
"Despite the overall low performance, we take this as an indication that the models group the terms with some systematicity.",
"Furthermore, the rankings of the different models remain consistent across evaluations.",
"Arriving at the same results through various methods can be seen as a fulfillment of Sommerauer and Fokkens's (2019) stress-test requirement.",
"Manual inspection of the clusters indicates that the imbalance in the (already very small) dataset is problematic for a K-nearest neighbors classifier, which assigned almost all words to the two biggest clusters.",
"We expect that the same may hold for centroids and k-means.",
"In hindsight, we could have controlled for this by extending the dataset beyond the terms in the Index of Word and Object .",
"While this might have provided more accurate insights, we expect that most use-cases that work with small (or even tiny) data are most likely also working with similarly unbalanced data.",
"Standard machine-learning techniques aiming to abstract over examples are most likely not able to pick up (potentially weak) signals based on just a few examples.",
"We therefore consider fine-grained and example-based methods a more promising direction.",
"Overall, research on small data is still in an early phase.",
"We see that models designed to work with tiny data outperform others on low-frequency terms, but yield only slightly better or comparative results when compared on midor high-frequency terms.",
"It has to be considered that these models are overall very similar to the standardly used models.",
"Future research should explore more balanced approaches which combine the strengths of both versions.",
"For instance, by adjusting the settings of N2V based on the frequency of a target term.",
"From the perspective of a philosopher who may want to make use of DS models to support their work, the results we obtained in this study are not good enough yet.",
"The minimum for exploratory work would be that the vast majority of the terms is correctly clustered and all categories are exem-plified.",
"Currently most terms are placed in the two largest categories, which might even give high accuracy but still does not represent the data well.",
"Thereby, exploration of the data with these models could give a wrong impression about how the terms relate to each other.",
"For hypothesis testing, the required accuracy depends on the hypothesis being tested, but in principle it is possible when the model no longer makes clear mistakes (but it may not be able to always distinguish between conceptual and relational connections and still make errors on clear borderline cases).",
"Unfortunately, this level of accuracy was not yet reached either.",
"More broadly, as studying language change through diachronic word embeddings adds a layer of complexity beyond the synchronic word embeddings we investigated, we expect that diachronic word embeddings trained on small data sets will not be able to reflect actual conceptual change and thus directly support philosophical research at this stage.",
"We do, however, think that our results have laid the groundwork for using DS models for exploratory purposes.",
"We should keep in mind that the conceptual network based on Word and Object calls for very fine-grained distinctions.",
"While this may still be too challenging, we expect that exploring differences in terms used by different authors could be more realistic.",
"Oortwijn and Meyer's initial contribution to this research was funded by the Vrije Universiteit's Network Institute.",
"Sommerauer and Fokkens were funded by the Netherlands Organization of Sci-entific Research (NWO) PGW.17.041 awarded to Pia Sommerauer and NWO VENI grant 275-89-029 awarded to Antske Fokkens.",
"Bloem and Oortwijn were funded by NWO VICI grant 277-20-007 awarded to Arianna Betti.",
"Oortwijn was also funded by NWO grant 314-99-117 awarded to Bettina Speckmann and by the Human(e) AI grant Small data, big challenges funded by the University of Amsterdam.",
"We would like to thank Thijs Ossenkoppele and Arianna Betti for their evaluation of the ground truth.",
"We furthermore thank the eIdeas research group and anonymous reviewers for feedback.",
"All remaining errors are our own."
] | [
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"method",
"result",
"result",
"abstain",
"result",
"objective",
"other",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"To achieve the long-term goal of machines being able to engage humans in conversation, our models should captivate the interest of their speaking partners.",
"Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014).",
"In this work we study large-scale architectures and datasets for this goal.",
"We test a set of neural architectures using state-of-the-art image and text representations, considering various ways to fuse the components.",
"To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019).",
"Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits.",
"Automatic metrics and human evaluations of engagingness show the ef-ficacy of our approach; in particular, we obtain state-of-the-art performance on the existing IGC task, and our best performing model is almost on par with humans on the Image-Chat test set (preferred 47.7% of the time).",
"A key way for machines to exhibit intelligence is for them to be able to perceive the world around them and to be able to communicate with humans in natural language about that world.",
"To speak naturally with humans it is necessary to understand the natural things that humans say about the world they live in, and to respond in kind.",
"This involves understanding what they perceive, e.g. the images they see, what those images mean semantically for humans, and how mood and style shapes the language and conversations derived from these observations.",
"In this work we take a step towards these goals by considering grounded dialogue involving open-ended discussion of a given image, a setting that is naturally fun for humans (Hu et al., 2014), and study neural conversational models for task.",
"In particular, we explore both generative and retrieval models that handle multimodal dialogue by fusing Transformer architectures (Vaswani et al., 2017) for encoding dialogue history and responses and ResNet architectures (He et al., 2016) for encoding images.",
"We propose ways to fuse those modalities together and perform a detailed study including both automatic evaluations, ablations and human evaluations of our models using crowdworkers.",
"To train and evaluate such models, we collect a large set of human-human crowdworker conversations, with the aim of training a model to engage a human in a similar fashion, consisting of 202k diverse images and 401k utterances over the images, with 215 different style traits (e.g., optimistic, skeptical or frivolous) to promote engaging conversation.",
"The dataset is made publicly available in ParlAI (Miller et al., 2017) 1 .",
"Our results show that there is a significant gap between state-of-the-art retrieval and generative models on this task.",
"Our best fused retrieval models set a strong baseline, being preferred to human conversationalists 47.7% of the time.",
"We show that both large-scale image and text pre-training, and utilization of style traits, are critical for best results.",
"We then consider transfer to the existing Image Grounded Conversations (IGC) task of Mostafazadeh et al. (2017), where we obtain state-of-the-art results.",
"The majority of work in dialogue is not grounded in perception, e.g. much recent work explores sequence-to-sequence models or retrieval models for goal-directed (Henderson et al., 2014) or chit-1",
"chat tasks (Vinyals and Le, 2015; Zhang et al., 2018).",
"While these tasks are text-based only, many of the techniques developed can likely be transferred for use in multimodal systems, for example using state-of-the-art Transformer representations for text (Mazare et al., 2018) as a sub-component.",
"In the area of language and vision, one of the most widely studied areas is image captioning, whereby a single utterance is output given an input image.",
"This typically involves producing a factual, descriptive sentence describing the image, in contrast to producing a conversational utterance as in dialogue.",
"Popular datasets include COCO (Chen et al., 2015) and Flickr30k (Young et al., 2014).",
"Again, a variety of sequence-to-sequence (Vinyals et al., 2015; Xu et al., 2015; Anderson et al., 2018) and retrieval models (Gu et al., 2018; Faghri et al., 2018; Nam et al., 2016) have been applied.",
"These tasks measure the ability of models to understand the content of an image, but not to carry out an engaging conversation grounded in perception.",
"Some works have extended image captioning from being purely factual towards more engaging captions by incorporating style while still being single turn, e.g. (Mathews et al., 2018, 2016; Gan et al., 2017; Guo et al., 2019; Shuster et al., 2019).",
"Our work also applies a style component, but concentrates on image-grounded dialogue, rather than image captioning.",
"Visual question answering (Antol et al., 2015) and visual dialogue (Das et al., 2017) are another set of tasks which employ vision and language.",
"They require the machine to answer factual questions about the contents of the image, either in single turn or dialogue form.",
"They do not attempt to model natural conversation, but rather assess whether the machine can perform basic perception over the image via a series of questions.",
"There are some works which directly address dialogue grounded with vision.",
"The work of Pasunuru and Bansal (2018) assesses the ability to execute dialogue given video of computer soccer games.",
"The work of Huber et al. (2018) investigates the use of sentiment-based visual features and facial expressions for emotional image-based dialogue.",
"Perhaps the most related work to ours is Mostafazadeh et al. (2017).",
"Their work considers (visual context, textual context, question, response) tuples, and builds validation and test sets based on 4k eventful images called Image Grounded Conversations (IGC).",
"No training data is provided, but instead the authors use Twitter for that in their experiments.",
"In contrast, we provide training, validation and testing sets over 202k images for our task (that do not overlap with IGC), and consider a general set of images and dialogues, not just events and questions plus responses.",
"In our experiments we also show strong transfer ability of our models to the IGC task.",
"While there are many ways to measure dialogue quality, human engagement is a popular metric.",
"Engagement itself can be measured in many ways (Bohus and Horvitz, 2009; Yu et al., 2016) but here we adopt the common approach of simply asking humans which speaker they find more engaging, following other works (Li et al., 2019; Dinan et al., 2020).",
"The IMAGE-CHAT dataset is a large collection of (image, style trait for speaker A, style trait for speaker B, dialogue between A & B) tuples that we collected using crowd-workers, Each dialogue consists of consecutive turns by speaker A and B. No particular constraints are placed on the kinds of utterance, only that we ask the speakers to both use the provided style trait, and to respond to the given image and dialogue history in an engaging way .",
"The goal is not just to build a diagnostic dataset but a basis for training models that humans actually want to engage with.",
"Style Traits A number of works have shown that style traits for image captioning help provide creative captions (Mathews et al., 2018, 2016; Gan et al., 2017; Shuster et al., 2019).",
"We apply that same principle to image grounded dialogue, considering a set of 215 possible style traits, using an existing set from Shuster et al. (2019).",
"The traits are categorized into three classes: positive (e.g., sweet, happy, eloquent, humble, witty), neutral (e.g., old-fashioned, skeptical, solemn, questioning) and negative (e.g., anxious, childish, critical, fickle, frivolous).",
"We apply these to both speakers A and B, who will be assigned different style traits for each given conversation.",
"Dialogue For each image, we pick at random two style traits, one for speaker A and one for speaker 2 https://multimediacommons.wordpress.com/yfcc100m-core-dataset/ A: Peaceful B: Absentminded A: Fearful B: Miserable A: Erratic B: Skeptical A: I'm so thankful for this delicious food.",
"B, and collect the dialogue using crowdworkers who are asked to both assume those roles, and to be engaging to the other speaker while doing so.",
"It was emphasized in the data collection instructions that the style trait describes a trait of the speaker, not properties of the content of the image they are discussing.",
"Some examples from the training set are given in Figure 1. Data Quality During data collection crowd-sourcers were manually monitored, checking to ensure they were following the instructions.",
"Poor performers were banned, with comments discarded.",
"A verification process was also conducted on a subset of the data, where separate annotators were asked to choose whether the utterance fit the image, style, or both, and found that 92.8% of the time it clearly fit the image, and 83.1% the style, and 80.5% both.",
"Note, given that not all utterances should directly reference an image property or invoke the style, we do not expect 100%.",
"Overall Dataset The overall dataset statistics are given in Table 1. This is a fairly large dialogue dataset compared to other existing publicly available datasets.",
"For example, PersonaChat (Zhang et al., 2018) (which is not grounded in images) consists of 162k utterances, while IGC (Mostafazadeh et al., 2017) (grounded in images) consists of 4k of validation and test set examples only, compared to over 400k utterances in IMAGE-CHAT .",
"We consider two major types of dialogue model: retrieval and generative.",
"Both approaches make use of the same components as building blocks.",
"We use three sub-networks for the three modalities of input:",
"(i) an image encoder,",
"(ii) a dialogue history encoder; and",
"(iii) a style encoder.",
"In the retrieval model these are then fed into a combiner module for combining the three modalities.",
"Finally, there is a response encoder for considering candidate responses and this is scored against the combined input representations.",
"An overview of the retrieval archictecture is shown in Figure 2. For the generative model, the three encoders are used as input, and a further decoder Transformer is used for outputting a token sequence; beam search is applied.",
"Image Encoder We build our models on top of pretrained image features, and compare the performance of two types of image encoders.",
"The first is a residual network with 152 layers described in He et al. (2016) trained on ImageNet (Rus-sakovsky et al., 2015) to classify images among 1000 classes, which we refer to in the rest of the pa-Figure 2: The TRANSRESNETRET multimodal architecture for grounded dialogue.",
"There are several options: different image encoders (ResNet152 or ResNeXt-IG-3.5B), text encoders (shared or separate Transformers for history and response), and different multimodal combiners (sum or attention-based).",
"per as ResNet152 features.",
"We used the implementation provided in the torchvision project (Marcel and Rodriguez, 2010).",
"The second is a ResNeXt 32 48 d (Xie et al., 2017) trained on 3.5 billion Instagram pictures following the procedure described by Mahajan et al. (2018), which we refer to in the rest of the paper as ResNeXt-IG-3.5B .",
"The representation r I of an image I is obtained by using the 2048-dimensional output of the image encoder as input to a feed-forward network: a multi-layer perceptron with ReLU activation units and a final layer of 500 dimensions in the retrieval case, and a linear layer in the generative case.",
"Style Encoder To condition on a given style trait, we embed each trait to an N -dimensional vector to obtain its representation r S .",
"We used N = 500 for retrieval and N = 300 for generation.",
"Dialogue Encoder The entire dialogue history D is encoded into a fixed size vector r D using a Transformer architecture (Vaswani et al., 2017), followed by a linear layer.",
"Such Transformers have been shown to perform strongly on a variety of dialogue tasks previously (Yang et al., 2018; Mazare et al., 2018).",
"We use a Transformer with 4 layers, 300 hidden units, and 6 attention heads.",
"The outputs are pooled (mean) to give a final vectorial encoding.",
"We pretrain the entire encoder following the setup described in Mazare et al. (2018): we train two encoders on a next-utterance retrieval task on a Reddit dataset of dialogues containing 1.7 billion pairs of utterances, where one encodes the context and another the candidates for the next utterance; their dot product indicates the degree of match, and they are trained with negative log-likelihood and k -negative sampling.",
"We then initialize our system using the weights of the candidate encoder only, and then train on our task in either generative or retrieval mode.",
"two possible combiner modules for the inputs:",
"Multimodal sum combiner (MM-sum) : Given an input image, style trait and dialogue ( I, S, D ) , together with a candidate response C , the score of the final combination is computed as s ( I, S, D, C ) = ( r I + r S + r D ) r C .",
"Multimodal attention combiner (MM-att) : A more sophisticated approach is to use an attention mechanism to choose which modalities are most relevant for each example by stacking Transformers.",
"We concatenate the three representation vectors r I , r S and r D and feed them to a second Transformer (4 attention heads, 2 layers, 500 hidden units) which performs self-attention over them.",
"The three modalities are thus reweighted by the corresponding attention weights to give the final input representation vector r T , which is used to compute the score for a given candidate using r T r C .",
"Response encoder We employ the same Transformer architecture as in the dialogue encoder for encoding candidate responses.",
"We tried two variants: either sharing or not sharing the weights with the input dialogue encoder.",
"Training and Inference Given a tuple I, S, D , and a set of candidates ( c 1 , .., c N ) , at inference time the predicted utterance is the candidate c i that maximizes the score s ( I, S, D, c i ) .",
"At training time we pass a set of scores through a softmax and train to maximize the log-likelihood of the correct responses.",
"We use mini-batches of 500 training examples; for each example, we use the gold responses of the other examples of the batch as negatives.",
"During final human evaluation all candidates from the training set are considered to produce a response (356k candidates in our experiments).",
"Dialogue Decoder The encoding from the image encoder has a final linear layer of dimension 2048 300.",
"This projects it to the same size of the token encoding of the dialogue decoder.",
"We thus add it as an extra token at the end of the Transformer's encoder output.",
"For style, we simply prepend the style to the beginning of the dialogue history, and it is thus encoded in the dialogue encoder.",
"We then treat this as a standard seq2seq Transformer in order to generate dialogue responses.",
"Training and Inference We train with a batch size of 32 and learning rate of .",
"0001 using adam, and apply beam search with a beam of size 2 and trigram blocking at inference time.",
"Hyperparameters are chosen on the validation set.",
"We test our models on the IMAGE-CHAT and IGC datasets using automatic metrics and human evaluations.",
"We analyze the performance of the different module and architecture choices, as well as ablation studies to determine the importance of each of the model's inputs.",
"Module Choices We first compare various module configurations of our TRANSRESNETRET model, and additionally show the results for a simple information retrieval baseline, in which the candidates are ranked according to their weighted word overlap to the input message.",
"We measure recall at 1 and 5 (R@1/100 and R@5/100) retrieval metrics, where for each sample there are 100 candidates to rank: 99 random candidates chosen from the test set, and the true label.",
"Note that in human evaluations we use all the train set candidates.",
"The results are shown in Table 2. We report the average metrics for the total task, as well as the breakdown of the performance on each turn of dialogue (turns 1, 2 and 3).",
"The average metrics indicate that using the ResNeXt-IG-3.5B image encoder features improves performance significantly across the whole task, as we obtain 50.3% R@1 for our best ResNeXt-IG-3.5B model and only 40.6% for our best ResNet152 model.",
"When broken down by turn, it appears that the ResNeXt-IG-3.5B features are particularly important in the first round of dialogue, in which only the image and style are considered, as the difference between their best models increases from 9.7% in the full task to 19.5% in the first turn.",
"Our baseline multimodal sum combiner (MM-Sum) outperforms the more sophisticated self-attention (MM-Att) combiner, with the latter scoring 49.3% on the full task.",
"Having separate candidate and dialogue history text encoders also works better than sharing weights.",
"In subsequent experiments we use the best performing system for our retrieval model.",
"As ResNeXt-IG-3.5B performs best we use that for our generative model going forward as well.",
"Full & Ablation Study We now perform experiments for both retrieval and generative models for the full system, and additionally we remove modalities (image, style, and dialogue history).",
"For the generative models we report the ROUGE-L metric.",
"The results are shown in Table 3, which we now analyze.",
"Turn 1: In the first round of dialogue the models produce utterances given the image and style only, as there is no dialogue history yet.",
"For both models, image is more important than style, but using both together helps.",
"Turn 2: In the second turn, in which a model produces a response to a first utterance, the models perform similarly when using only the image or only the dialogue history, while performing poorly with just the style.",
"Any combination of two modalities improves the results, with the style + dialogue combination performing slightly higher than the other two.",
"Using all modalities works best.",
"Turn 3: By the third turn of dialogue, the conversation history proves to be by far the most important in isolation compared to the other two modalities in isolation.",
"Conditioning on the style+dialogue is the most effective of any combination of two modalities.",
"Again, using all modalities still proves best.",
"Evaluation Setup We use a set of 500 images from YFCC-100M that are not present in IMAGECHAT to build a set of three-round dialogues pairing humans with models in conversation.",
"We then Model Combiner Text Encoders Image Encoder Turn 1 Turn 2 Turn 3 All R@1 R@1 R@1 R@1 R@1 R@1 R@5 IR Baseline n/a n/a n/a --2.15 5.86 TRANSRESNETRET MM-Att Separate ResNet152 35.7 44.5 40.5 40.2 67.0 TRANSRESNETRET MM-Sum Separate ResNet152 34.5 46.0 41.3 40.6 67.2 TRANSRESNETRET MM-Sum Shared ResNeXt-IG-3.5B 53.6 47.0 41.3 47.3 73.1 TRANSRESNETRET MM-Att Shared ResNeXt-IG-3.5B 54.4 49.0 43.3 48.9 74.2 TRANSRESNETRET MM-Att Separate ResNeXt-IG-3.5B 53.5 50.5 43.8 49.3 74.7 TRANSRESNETRET MM-Sum Separate ResNeXt-IG-3.5B 54.0 51.9 44.8 50.3 75.4 Table 2: Module choices on IMAGE-CHAT .",
"conduct evaluations at each round of dialogue for each example in the evaluation set; we have a separate set of human evaluators look at the provided conversation turns, and ask them to compare two possible utterances for the next turn of conversation, given the image, dialogue history and relevant style (which is the same for both human author and model, so there is no advantage).",
"We ask the evaluators in a blind test to choose the more engaging of the two possible utterances: one from a human, and the other from a model.",
"Human annotation vs. TRANSRESNET model We compare human-authored utterances to those produced by our models.",
"The human conversations are collected in the same fashion as in IMAGE-CHAT but on test images.",
"As for humans, the model outputs are conditioned on the image, style and previous dialogue history.",
"TRANSRESNETGEN simply generates a response, whereas TRANSRESNETRET retrieves candidate utterances from the IMAGE-CHAT training set.",
"The latter is given a separate set of candidates corresponding to the round of dialogue e.g. when producing a response to turn 1, the model retrieves from all possible round 1 utterances from the train set (in that case 186,858 possible choices).",
"The results are shown in Fig. 4, comparing all models on the first round (left): TRANSRESNETGEN and TRANSRESNETRET using ResNeXt-IG-3.5B, and TRANSRESNETRET using ResNet152 features.",
"As in automatic evaluations, ResNet152 features performed more poorly.",
"The retrieval model outperformed the generative model, a result that has been observed in other (text-only) dialogue tasks (Dinan et al., 2019; Zhang et al., 2018).",
"In turn 1, TRANSRESNETRET (ResNeXt-IG-3.5B) has a win rate against humans of 49.4% (difference not significant using a binomial two-tailed test, p > 0 . 5 ), while both other models are significantly outperformed by humans ( p < 2 10 7 compared to ResNet152 features), showing the importance of our retrieval architecture and image feature choices.",
"We thus compare only TRANSRESNETRET (ResNeXt-IG-3.5B) to humans in all three turns (Fig. 4, right).",
"That model performs well, with an overall win rate against humans of 47.7% (difference is significant, p < 7 10 5 ).",
"Example predictions of TRANSRESNETRET (ResNeXt-IG-3.5B) are given in Figure 3. 5.3 Transfer to the IGC Task To test the strength of our task and models we consider transfer to the IGC of task of Mostafazadeh et al. (2017).",
"In particular, we focus on their response task, which provides an image and a dialogue history of two utterances: a context utterance, followed by a question.",
"The task is to then pro-Image Style Conversation Turn 1 examples Model predictions: A: Artful This looks like a painting.",
"duce a response.",
"This is clearly related to our task, except it focuses on answering questions, which our task does not.",
"Our task is more varied as it was collected in an unconstrained way, unlike in IGC where they were asked to write a question.",
"Nevertheless, assuming a question contains a ?",
"or starts with who , what , when , where , why or how , our dataset contains 40,076 training utterances that are questions (11.3% of the data) and so it could be possible to produce responses to them.",
"Without any fine-tuning at all, we thus simply took exactly the same best trained models and used them for their question response task as well.",
"Unfortunately, after contacting the authors of Mostafazadeh et al. (2017) they no longer have the predictions of their model available, nor have they made available the code for their human evaluation setup.",
"However, the test set is available.",
"We therefore attempted to reproduce the same setup as in their experiments, which we will also make publicly available upon acceptance.",
"Automatic Evaluation We measure our best TRANSRESNETGEN model's performance on the IGC test set in terms of BLEU-4.",
"The results are shown in Fig. 5 (right).",
"We find that our model outperforms the model from Mostafazadeh et al. (2017), achieving a score of 2.30 compared to 1.49.",
"Human Evaluation We compare the provided human response (from the test set) with 7 variants of our TRANSRESNETRET model (mimicking their setup), whereby we have our model condition on 7 styles for which it performed well on evaluations in section 5.2.",
"Annotators rated the quality of responses on a scale from 1 to 3, where 3 is the highest, reporting the mean over 2k questions.",
"We then scale that by the score of human authored Figure 5: IGC Evaluations.",
"responses, to give a percentage.",
"The results are shown in Fig. 5 (left).",
"Our model narrows the gap between human and model performance, yielding a higher percentage of the human score (62.9% vs. 54.2%).",
"More detailed results and example predictions of our model can be found in Appendices E and F, including examples of highly rated and poorly rated outputs from our model.",
"This paper presents an approach for improving the way machines can generate grounded conversations that humans find engaging.",
"Focusing on the case of chit-chatting about a given image, a naturally useful application for end-users of social dialogue agents, this work shows that our best proposed model can generate grounded dialogues that humans prefer over dialogues with other fellow humans almost half of the time (47.7%).",
"This result is made possible by the creation of a new dataset IMAGE-CHAT 3 .",
"Our work shows that we are close to having models that humans can relate to in chit-chat conversations, which could set new ground for social dialogue agents.",
"However, our retrieval models outperformed their generative versions; closing that gap is an important challenge for the community.",
"While our human evaluations were on short conversations, initial investigations indicate the model as is can extend to longer chats, see Appendix G, which should be studied in future work.",
"The next challenge will also be to combine this engagingness with other skills, such as world knowledge (Antol et al., 2015) relation to personal interests (Zhang et al., 2018), and task proficiency."
] | [
"objective",
"abstain",
"objective",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"result",
"objective",
"abstain"
] |
[
"Previous representation learning techniques for knowledge graph representation usually represent the same entity or relation in different triples with the same representation, without considering the ambiguity of relations and entities.",
"To appropriately handle the semantic variety of entities/relations in distinct triples, we propose an accurate text-enhanced knowledge graph representation learning method, which can represent a relation/entity with different representations in different triples by exploiting additional textual information.",
"Specifically, our method enhances representations by exploiting the entity descriptions and triple-specific relation mention.",
"And a mutual attention mechanism between relation mention and entity description is proposed to learn more accurate textual representations for further improving knowledge graph representation.",
"Experimental results show that our method achieves the state-of-the-art performance on both link prediction and triple classification tasks, and significantly outperforms previous text-enhanced knowledge representation models.",
"Knowledge graphs such as Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007) and WordNet (Miller, 1995) are among the most widely used resources in NLP applications.",
"Typically, a knowledge graph consists of a set of triples { ( h, r, t ) } , where h, r, t stand for head entity, relation and tail entity respectively.",
"Learning distributional representation of knowledge graph has attracted many research attentions in recent years.",
"By projecting all elements in a knowledge graph into a dense vector space, the semantic distance between all elements can be easily calculated, and thus enables many applications Figure 1: A demonstration of our accurate text-enhanced model.",
"such as link prediction and triple classification (Socher et al., 2013).",
"Recently, translation-based models, including TransE (Bordes et al., 2013), TransH (Wang et al., 2014), TransD (Ji et al., 2015) and TransR (Lin et al., 2015b), have achieved promising results in distributional representation learning of knowledge graph.",
"ComplEx (Trouillon et al., 2016) has achieved the state-of-the-art performance on multiple tasks, such as triple classification and link prediction.",
"Unfortunately, all of these methods only utilize the structure information of knowledge graph, which inevitably suffer from the sparseness and incompleteness of knowledge graph.",
"Even worse, structure information usually cannot distinguish the different meanings of relations and entities in different triples.",
"To address the above problem, additional information is introduced to enrich the knowledge representations, including entity types and logic rules.",
"However, most researches of this line are limited by manually constructed logic rules, which 745 are knowledge graph sensitive and require the expert knowledge.",
"Another type of widely used resources is textual information, such as entity descriptions and words co-occurrence with entities (Socher et al., 2013; Wang et al., 2014; Zhong et al., 2015).",
"The main drawback of the above methods is that they represent the same entity/relation in different triples with a unique representation.",
"Unfortunately, by detailed analyzing the triples in knowledge graph, we find two problems of the unique representation: (1) Relations are ambiguous, i.e., the accurate semantic meaning of a relation in a specific triple is related to the entities in the same triple.",
"For example, the relation parentOf may refer to two different meanings of (i.e., father and mother ), depending on the entities in triples.",
"(2) Because different relations may concern different attributes of an entity, the same entity may express different aspects in different triples.",
"For example, different words in the description of Barack Obama should be emphasized by relations parentOf and professionOf .",
"The ambiguity of en-tity/relation has been considered as one of the primary reasons why translation-based models cannot handle 1-to-N, N-to-1 and N-to-N categories of relations (Wang et al., 2014).",
"Wang et al. (2016) tried to solve the two issues using words co-occurrence with the entities in the same sentences.",
"Despite its apparent success, there remains a ma-jor drawback: this method suffers from noisy text, which reduces the value of textual information.",
"To solve above problems, this paper proposes an accurate text-enhanced knowledge representation model, which can enhance the representations of entities and relations by incorporating accurate textual information for each triple.",
"To learn the representation of a given triple, we first extract its accurate relation mentions from text corpus, which reflect the specific relationship between its head entity and tail entity.",
"Then a mutual attention mechanism between relation mention and entity descriptions (extracted from knowledge graph), is introduced to enhance the representations of entities and relations.",
"For example, the two triples in Figure 1 have the same parentOf relationship, but have different underlying semantics was the father of and was the mother of respectively.",
"Besides, our mutual attention mechanism enables knowledge representation focusing more on related information from text information.",
"For example, the parentOf relation will concern more about the social relations and gender attributes of a person, rather than his/her jobs, which are also contained in its descriptions.",
"And such a relation-specific entity description will make an entity has more appropriate, relation-specific representations in different triples.",
"Concretely, we employ BiLSTM model (Schus-ter and Paliwal, 1997; Graves and Schmidhuber, 2005) with mutual attention mechanism (Zhou et al., 2016) to learn representations for relation mentions and entity descriptions.",
"Specifically, in order to generate triple-specific textual representation of entities and relation, a mutual attention mechanism is proposed to model relation between entity descriptions and relation mention of one triple.",
"Then the learned textual representations are incorporated with previous traditional transition-based representations, which are, learned from structural information of knowledge graph, directly to obtain enhanced triple specific representations of elements.",
"We evaluate our method on both link prediction task and triple classification task, using benchmark datasets from Freebase 1 and WordNet 2 with the text corpus.",
"Experimental results show that, our model achieves the state-of-the-art performance, and significantly outperforms previous text-enhanced models.",
"The main contributions are threefold:",
"(i) To the best of our knowledge, this is the first work which simultaneously exploits both relation mention and entity description to handle the ambiguity of relations and entities (Section 3).",
"(ii) We propose a mutual attention mechanism which exploits the textual representations of relation and entity to enhance each other (Section 3.2).",
"(iii) This paper achieves new state-of-the-art performances on triple classification tasks over two most widely used benchmarks (Section 4).",
"Currently, a lot of structural-based knowledge representation learning methods have been proposed for knowledge graph completion, including Bilinear Model (Sutskever, 2009), Distance Model (Bordes et al., 2011), Unstructured Model (Bor-des et al., 2012), Neural Tensor Network (Socher",
"et al., 2013), Single Layer Model (Socher et al., 2013).",
"And many translation-based methods are introduced, including TransE (Bordes et al., 2013) and its extensions like TransH (Wang et al., 2014), TransD (Ji et al., 2015), TransR (Lin et al., 2015b).",
"Xiao et al. (2016a) proposed a manifold-based embedding principle to deal with the overstrict geometric form of translation-based assumption.",
"Trouillon et al. (2016) employed complex value embeddings to understand the structural information.",
"In recent years, many methods improve the knowledge representation by exploiting additional information.",
"For example, both the path information and logic rules have been proved to be beneficial for knowledge representation (Lin et al., 2015a; Toutanova et al., 2016; Xiong et al., 2017; Xie et al., 2016; Xu et al., 2016).",
"One other direction to enhance knowledge representation is to utilize entity descriptions of entities and relations.",
"Socher et al. (2013) proposed a neural tensor network model which enhances an entity's representation using the average of the word embeddings in its name.",
"Wang et al. (2014) proposed a model which combines entity embeddings with word embeddings using its names and Wikipedia anchors.",
"Zhong et al. (2015) further improved the model of Wang et al. (2014) by aligning entity and text using entity descriptions.",
"Zhang et al. (2015) proposed to model entities with word embeddings of entity names or entity descriptions.",
"Xie et al. (2016) proposed a model to learn the embeddings of a knowledge graph by modelling both knowledge triples and entity descriptions.",
"Xu et al. (2016) learns different representations for entities based on the attention from relation.",
"The textual mentions of relations are also explored by Fan et al. (2014).",
"The universal schema based models (Riedel et al., 2013; Toutanova et al., 2015) enhance knowledge representation by incorporating textual triples, which assume that all the extracted triples express a relationship between the entities and they treat each pattern as a separate relation.",
"The main drawback of these methods is that they assume all the relation mentions will express relationship between entity pairs, which inevitably introduces a lot of noisy information.",
"For example, the sentence Miami Dolphins in 1966 and the Cincinnati Bengals in 1968 does not express any relationship between miami dolphins and cincinnati bengals .",
"Even worse, the diversity of language often leads to the data sparsity problem.",
"To resolve the ambiguity of entities and relations in different triples (i.e., a relation/entity may have different meanings in different triples), Xiao et al. (2016b) proposed a generative model to handle the ambiguous relations.",
"Wang et al. (2016) extended the translation-based models by textual information, which assigns a relation with different representations for different entity pairs, using words co-occurred with both entities in a triple.",
"However, the words co-occur with an entity pair nay also not express the meanings of the relation between them, which will inevitably introduce noisy information for the specific triple.",
"Compared with these methods the main advantages of our methods are:",
"(i) We filters out noisy textual information for accurate enrich knowledge representation.",
"(ii) We simultaneously take the ambiguity of entities and relations in various triples into consideration.",
"This section presents our accurate text-enhanced knowledge graph representation learning framework.",
"We first describe how to extract accurate textual information for a given triple, and then we propose a textual representation learning model, which generates textual representations for both entities and relation in a specific triple.",
"Finally, we describe how to enhance knowledge representations based on the textual representations.",
"Given a triple, our method will first extract accurate textual mentions of its relation from a text corpus.",
"For example, we will extract the relation mention Barack Hussein Obama Sr was the father of Barack Obama. for the triple (Barack Hussein Obama Sr, parentOf, Barack Obama) ]].",
"We collect relation mentions by two steps: (1) Entity linking: linking entity names in a text corpus to entities in a knowledge graph.",
"(2) Relation mention extraction: collecting accurate relation mentions which express the meanings of the relation in a given triple.",
"Entity Linking.",
"Given a sentence D = ( w 1 , w 2 , ..., w n ) , and an entity set E = ( e 1 , e 2 , ..., e m ) , we first recognize entities of E in D to construct a new sentence D 0 = ( w 1 , ..., e 1 , ..., e m , ..., w n ) , where w i represents the ith word in D and e j corresponds to the jth entity in E .",
"There are many general entity linking tools can be used for this purpose.",
"The proposed method employs a simple and precise method to link entities of Freebase and WordNet as Wang et al. (2016).",
"Concretely, we link a Wikipedia inner-link as an entity of Freebase if they have the same titles, and link a word in the corpus as a WordNet entity if the word belongs to one of its synsets.",
"Relation Mention Extraction.",
"To extract accurate relation text mentions for a specific triple, we first collect all sentences containing both entities of the triple as candidate text mentions.",
"And then, we calculate the similarity between a text mention and the relation based on WordNet.",
"For example, for the triple of (Steve Jobs, /peo-ple/person/parents, Paul Jobs) , we treat a sentence as its accurate relation mention only if the sentence contains both of its entities and at least one hyponym/synonyms word of the relation.",
"We collect accurate relation mentions for triples in WordNet in a similar way.",
"In this way, we can extract accurate relation mentions for triples with high precision.",
"However, if a relation mention doesn't contain any hyponym/synonym words of the relation, our method would be unable to identify it.",
"For example, the sentence In 1961 Obama was born in Hawaii, US expresses the meanings of /people/person/nationality in the triple (Barack Obama,/people/person/nationality, USA ) but without any words belonging to the hyponym or synonyms of nationality .",
"For this, we further employ word embeddings to compute the similarity.",
"Concretely, we represent a relation by averaging the pre-trained word embeddings of its last two words.",
"Then we extract a sentence as an accurate relation mention of a given triple if the similarity between a word in the sentence and the relation representation is above a threshold, with the similarity between a word and a relation is calculated by the cosine similarity of their representations.",
"As mentioned above, the underlying semantics of entities and relations vary from different triples, and different attributes of an entity are concerned by different relations.",
"In this section, we first utilize BiLSTM to encode relation mentions and entity descriptions.",
"And then, we propose a mutual attention mechanism to learn more accurate text representations of relations and entities.",
"Our model contains four layers including Embedding layer, BiLSTM layer and Mutual Attention Layer, and the details of these layers are described as follows.",
"Embedding Layer.",
"To learn the distributional representation of relation mentions and entity descriptions, we convert words into distributional representations based on lookup word embeddings matrix (Mikolov et al., 2013).",
"Concretely, given a relation mention m = { w 1 , w 2 , w 3 , ..., w n } , we transform the word w i into its distributional representation ~e i d w using a word embeddings ma-748 trix.",
"BiLSTM Layer.",
"To learn the representation of text mentions, we utilize a BiLSTM (Long Short-Term Memory) (Hochreiter and Schmidhu-ber, 1997; Le and Zuidema, 2015; Zhou et al., 2016) model to compose the words in a sequence into the distributional representation.",
"Concretely, we employ a two layer Bidirectional LSTM network to generate text representations.",
"The detailed description of LSTM is presented in (Hochreiter and Schmidhuber, 1997).",
"Two different BiLSTM networks are employed to encode relation mentions and entity descriptions respectively.",
"Mutual Attention Layer.",
"Attention based neural networks have recently achieved success in a wide range of tasks, including machine translation, speech recognition and paraphrase detection (Luong et al., 2015; Yang et al., 2016; Yin et al., 2016; Vaswani et al., 2017).",
"In this paper, we introduce a mutual attention to improve text representations.",
"Given a triple, the goal of our mutual attention mechanism is two-fold.",
"On one hand, our model wants to identify words in relation mention associated with the entity descriptions in the same triple.",
"On the other hand, our model wants to recognize words in entity descriptions which are emphasized by its relation.",
"To achieve the above goal, we first infer the representations of entity descriptions using relation representation as attention: a i ( e ) = exp ( score ( ~h i , ~r 0 )) P i 0 exp ( score ( ~h i 0 , ~r 0 )) (1) score ( ~h i , ~r 0 ) = ~h iT W e ~r 0 (2) where ~r 0 d w is the representation of the relation mention by averaging all the hidden vectors of BiLSTM, ~ h i is the hidden representation of w i , and W e d w 2 h is a trained parameter matrix.",
"The relation-sensitive representation of the entity description is generated as follows: ~e = tanh ( ~a Te H e ) (3) where ~a e d m is the relation-specific attention vector over the words in the entity description, d m is the length of the description, H e d m h is the hidden representation matrix generated by BiLSTM, and ~e d h is the representation of the description.",
"In this way, we learn the representations of entity descriptions of head entity ~e h d h and tail entity ~e t d h with the attention from relation representation.",
"The above two entity description representations are utilized as the attention for learning the triple-sensitive relation mention representation as follows: ~e = ~e h + ~e t (4) a i ( r ) = exp ( score ( ~h i ,~e )) P i 0 exp ( score ( ~h i 0 ,~e )) (5) score ( ~h i ,~e ) = ~h iT W r ~e (6) where ~e h and ~e t are representations of head entity description and tail entity description respectively, ~h i is the hidden vector of w i for each word in the text mention, and W r d w 2 h is a trained parameter matrix.",
"The representation of the triple-sensitive relation mention is generated as Formula (7): ~r = tanh ( ~a rT H r ) (7) where ~a rT d n is the triple-sensitive attention vector over the words in the relation mention, d n is the length of the relation mention, H r d n h is the hidden representation matrix generated by BiLSTM, and ~r d h is the representation of the mention.",
"In this way, we learn the triple-attention representation of all text mentions.",
"In this section, we introduce how to incorporate the learned textual representations with representations learned from knowledge graph structure using previous methods.",
"For each given triple and its accurate textual information, we enhance the representations of the relation and entities based on the text representations of entities ~e h d h , ~e t d h and relation ~r d h .",
"Specifically, we enhance the relation and entity representations as follows: Re ( ~r ate ) = Re ( ~r kg ) + (1 ) ~r , 0 1 (8) Re ( ~h ate ) = Re ( ~h kg ) + (1 ) ~e h , 0 1 (9) Re ( ~t ate ) = Re ( ~t kg ) + (1 ) ~e t , 0 1 (10) where represents the weight factor for the structural representations, ~r kg d h , ~h kg d h and ~t kg d h represent the distributional representations of relation r head entity h and tail entity t 749 learned from structural information of knowledge graph, ~r d h , ~e h d h and ~e t d h represent the vectors of the text mention, head and tail entity descriptions for the triple, ~r ate d h , ~h ate and ~t ate are the accurate text-enhanced representations of relation, head and tail entity, respectively.",
"Note that, we enhance the real part vector of an entity with the textual representation of the entity as Formula (9) and (10), and treat the matrix representation of a relation as a vector with each element the same as the element in diagonal matrix, and then enhance its real part as Formula (8).",
"In this way, we enhance the representation of knowledge graph, and calculate the plausibility of a triple based on their score functions.",
"If there is no accurate relation mention extracted for a triple, we only utilize the knowledge embeddings to estimate the plausibility of the triple, and the weight factor is set to 1 in this case.",
"For example, if there is no accurate relation mention extracted for triple (Su Shi, /people/person/profession, Artist) , then only its structural representations will be utilized to compute the plausibility of the triple.",
"And is set to 1 for the triples if none of the entities in it is linked.",
"In the training process, the (h, r, t, h t , r t , t t ) tuples are used as supervision, where h t , r t and t t are the description of head entity, relation text mention and the description of tail entity, respectively.",
"Since there are only correct triples in the knowledge graph, following Lin et al. (2015a), we construct the corrupted tuples ( h 0 , r, t 0 , h t , r t , t t ) KG 0 for a ( h, r, t, h t , r t , t t ) KG by randomly replacing head/tail entity with entities from knowledge graph using Bernoulli Sampling Method (Wang et al., 2014).",
"Furthermore, to train the model of text representation model, we construct the corrupted tuples ( h, r, t, h 0 t , r 0 t , t 0 t ) KG 0 for a ( h, r, t, h t , r t , t t ) KG by random replacing the text information.",
"We use the following margin-based ranking loss: L = X q KGX q 0 KG 0 max (0 , + f ( q ) f ( q 0 )) (11) where f is the score function of our model, and > 0 is the margin between golden tuples and negative tuples, KG is the set of tuples in training dataset, and KG 0 is the corrupted set of tuples.",
"The parameters of our model are optimized using the stochastic gradient descent (SGD) algorithm.",
"To accelerate the training process and avoid overfitting, we initialize the representations of entities and relations using base models and initialize word representations with the pre-trained word embeddings, and all these embeddings are fine-tuned during training.",
"In this section, we first describe the settings in our experiments, and then we conduct experiments of link prediction and triple classification tasks and compare our method with base models and the state-of-the-art baselines.",
"In this paper, we evaluate our model on four benchmark datasets: WN11, WN18, FB13 and FB15k (Bordes et al., 2013; Socher et al., 2013; Wang et al., 2014).",
"For the text corpus, we use a snapshot of the English Wikipedia (Wiki) (Shaoul and Westbury, 2010) 3 dump in April 2016, which contains more than 1 .",
"2 billion tokens.",
"We link entities in the text corpus to entities in Freebase and synsets in WordNet as described above, and replace entities with HEAD TAG and TAIL TAG.",
"The text descriptions of entities are freely available 4 .",
"In addition, we pre-process the word-entity corpus, including stemming, lowercasing and removing words with fewer than 5 occurrences.",
"The statistics of the datasets and linked-entities in text corpus are shown in Table 1.",
"cal tasks: link prediction and triple classification.",
"We refer AAT E E as the proposed model which enhances TransE with accurate textual informations and mutual attention mechanism, and refer AT E E as the proposed model without mutual attention mechanism to reveal the effect of our attention mechanism.",
"To speed up training and reduce overfitting, we employ the SkipGram model of word2vec (Mikolov et al., 2013) to pre-train the word embeddings with the dimension of word embeddings is d w = 200 , the windows size is 5, the number of iterations is 5, and the number of negative samples is 10.",
"And we pre-train the representations of entities and relations of knowledge graph using the mentioned base models, and the parameters are empirically tuned as follows: the dimension of vectors is d kg = 200 , the number of epochs is 2000 and the margin is 1 .",
"0 .",
"We implement our model based on the OpenKE 5 framework.",
"In our experiments, the hyper-parameters of BiLSTM are empirically set as follows: the number of hidden units is d h = 200 , the learning rates for SGD are among { 0 .",
"1 , 0 .",
"001 , 0 .",
"0001 } , the margin values are among { 0 .",
"5 , 1 .",
"0 , 2 .",
"0 } and the batch sizes are among { 100 , 500 , 2000 } .",
"We employ two different BiLSTM networks with the same hyper-parameters to learn the representations of text mentions and entity descriptions.",
"And all the parameters are learned jointly, including BiLSTM networks and knowledge representations.",
"Link prediction aims to predict missing head or tail entity of a triple, which is a widely employed evaluation task for knowledge graph completion models (Bordes et al., 2011; Wang et al., 2016).",
"Concretely, given a head entity h (or tail entity t ) and a relation r , the system will return a rank list of candidate entities for tail entity.",
"Following (Bordes et al., 2013; Lin et al., 2015b), we conduct the link prediction task on WN18 and FB15k datasets.",
"In the testing phase, for each triple ( h, r, t ) , we replace its head/tail entity with all entities to construct candidate triples, and extract text mentions from the text corpus for each candidate triple.",
"Then we rank all these entities in descending order of the scores, which are calculated by our 5 http://openke.thunlp.org/ score function.",
"Based on the entity ranking list, we employ two evaluation metrics from (Bor-des et al., 2013): (1) mean rank of correct entities (MR); and (2) proportion of correct entities in top-10 rank entities Hit@10 ( Hit 10 ).",
"A good link predictor should achieve low MR and high Hit @10 .",
"We tuned model parameters using validate datasets.",
"We implement our framework using TransE, TransH, TransR and ComplEx as base models, and treat these base models as baselines.",
"Furthermore, we also compare our method with the state-of-the-art results from Unstructured, SME, TransD, TEKE , Jointly (Xu et al., 2016), TransG and Mainifold, and we report the results from their original papers.",
"The overall results are presented in Table 2.",
"From Table 2, we can see that both ATE and AATE models surpass all base models (TransE, TransH, TransR and ComplEx) on all metrics.",
"This result verifies that the textual information is beneficial for structure-based knowledge graph representation learning models.",
"Compared with the ATE models, the AATE models achieve better results on link prediction task, which verifies that the mutual attention between entity description and relation mention is effect for selecting meaningful words and enhancing the learning of knowledge graph representation.",
"For translation-based models, the proposed method achieves the best result based on TransE.",
"This is probably because TransH and TransR have tried to project the entity embeddings into the space of relation space, which may lead to the fact that the text information could not enhance the entity representation directly.",
"In addition, our method implemented based on ComplEx has achieve better performances w.r.t TEKE (Wang et al., 2016) on all metrics, that verifies the importance of filtering out the noisy information.",
"To better analyse the effect of textual information for knowledge graph representation learning, this section presents the results of our model on different categories of relations including 1-N, N-1 and N-N on link prediction task.",
"We present the results of our models based on TransE and of all baselines.",
"From Table 3, we can see that, both of our proposed methods have achieved higher performance over the base model on all types of relations (1-to-N, N-to-1 and N-to-N).",
"In addition, our AATE model achieves better results than the Jointly(A-LSTM) model.",
"Since both of AATE and Joint (A-LSTM) are implemented based on TransE, we verify that the triple-specific relation mention is valuable to improving the knowledge representation.",
"Another reason why our proposed model achieves better results is that the attention from textual representation of relation and entity is more effective than the attention using structural representation for textual representation.",
"To gain more insight, we present a failure analysis to explore possible limitations and weaknesses of our model.",
"In particular, several illustrative triples from the test set of FB15K are listed in Table",
"4. The tail entities of those triples are failed to be ranked in the top-10 candidates.",
"It can be seen from Table 4 that, the failures are mostly caused by the data sparsity problem, which results in relatively limited occurrences of entities and relations.",
"All of Elementary school , Abugida , interests/collection category /sub categories and martial arts/ martial artist/martial art appear less than 4 times in training data.",
"It must also be mentioned that the triple (Abugida, language /language writing system/ languages, Khmer language) is included in the training data.",
"Therefore, we can infer the first triple in Table 4 based on the above triple due to the general logic that language/human language/writing system and /language/language writing system/languages are a pair of inverse relations.",
"Consequently, we believe it is important to incorporate the logic rules into knowledge embeddings, especially for the entities and relations with limited occurrences.",
"In this section, we assess different models on the triple classification task.",
"Triple classification aims to judge whether a given triple ( h, r, t ) is true fact or not, and it is usually modeled as a binary classification task (Socher et al., 2013; Bordes et al., 2013; Wang et al., 2016).",
"Following Socher et al. (2013) we evaluate different systems on WN11 and FB13 datasets.",
"Given a triple ( h, r, t ) and all its accurate relation mentions and entity descriptions of this triple, In our experiments, a triple will be classified as a true fact if the score obtained by function f is below the relation-specific threshold r , otherwise it will be classified as a false fact.",
"The r and the weight factor of are optimized by maximizing classification accuracy on validation dataset, and different values of r will be set for different relations.",
"We use the same settings as link prediction task, all parameters are optimized on the validation datasets to obtain the best accuracies.",
"We compare our method with all base models and the state-of-the-art performances from TransD, TEKE (Wang et al., 2016), TransG, Mainfold, and we report the best results from their original papers.",
"The results are listed in Table",
"5. From Table 5, we can see that: (1) The accurate textual information can consistently increase the accuracies on triple classification task.",
"In all of the four base models, our model achieves significant improvements over TransE, TransH, TransR and ComplEx.",
"This results verify that our method is a useful framework for exploiting textual information to enhance structure-based models; (2) Our method achieves better results on all datasets than TEKE.",
"This result reveals that it is important to filter out the noisy data for knowledge graph representation learning.",
"(3) Compared with the ATE model, our relation-sensitive attention 752 Prediction Head (Hits@10) Prediction Tail (Hits@10) Relation Category 1-to-N N-to-1 N-to-N 1-to-N N-to-1 N-to-N #Triples in Test 2,078 6,084 109,526 2,078 6,084 109,526 Jointly(A-LSTM) 95.1 21.1 47.9 30.8 94.7 53.1 TransE 65.7 18.2 47.2 19.7 66.7 50.0 ATE E 80.2 22.1 47.6 20.3 67.7 60.0 AATE E 96.1 35.2 49.1 32.2 98.3 60.3 Table 3: Hit@10 of link prediction on different type of relations on FB15k dataset.",
"model improves the accuracies on all the datasets.",
"We believe this is because mutual attention mechanism can better identify the relation-sensitive words from entity descriptions and extract entity-sensitive words from relation mention.",
"The results demonstrate that, our method has achieved the best performances on the triple classification task, which verifies that it is critical to filter out noisy text information to determine whether a triple should be added into knowledge graph or not.",
"In this paper, we propose an accurate text-enhanced knowledge graph representation framework, which can utilize accurate textual information enhance the knowledge representations of a triple, and can effectively handle the ambiguity",
"ambiguity of relations and entities through a mutual attention model between relation mentions and entity descriptions.",
"Experiment results show that our method can achieve the state-of-the-art performance, and significantly outperforms previous text-enhanced knowledge representation models.",
"And the mutual attention between relation mentions and entity descriptions can significantly improve the performance of knowledge representation.",
"For future work, we want to further exploit entity types and logic rules as constraints to further improve knowledge representations.",
"This work is supported by the National Natural Science Foundation of China under Grants no. 61433015, 61772505 and 61572477, and the Young Elite Scientists Sponsorship Program no.",
"YESS20160177.",
"Moreover, we sincerely thank the reviewers for their valuable comments."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other"
] |
[
"Neural networks lack the ability to reason about qualitative physics and so cannot generalize to scenarios and tasks unseen during training.",
"We propose ESPRIT, a framework for commonsense reasoning about qualitative physics in natural language that generates interpretable descriptions of physical events.",
"We use a two-step approach of first identifying the pivotal physical events in an environment and then generating natural language descriptions of those events using a data-to-text approach.",
"Our framework learns to generate explanations of how the physical simulation will causally evolve so that an agent or a human can easily reason about a solution using those interpretable descriptions.",
"Human evaluations indicate that ESPRIT produces crucial fine-grained details and has high coverage of physical concepts compared to even human annotations.",
"Dataset, code and documentation are available at https://github.com/ salesforce/esprit .",
"Humans learn to understand and reason about physical laws just by living in this world and doing everyday things.",
"AI models, on the other hand, lack this ability and so are unable to generalize to new scenarios that require reasoning about abstract physical concepts like gravity, mass, inertia, friction, and collisions (Bakhtin et al., 2019).",
"We propose Explaining Solutions to Physical ReasonIng Tasks (ESPRIT), a framework for explaining qualitative physics reasoning using natural language.",
"Neural networks with knowledge of qualitative physics would have commonsense reasoning abilities about the way the world works (Forbus, 1988).",
"In turn, this could, for example, improve performance on tasks that involve interacting with humans and make human-robot interactions more efficient and trustworthy.",
"Ideally, AI systems would reason about and generate natural language commonsense explanations of physical concepts that are relevant to their behavior and prediction.",
"A key intuition is that natural language can provide an efficient low-dimensional representation of complicated physical concepts.",
"To equip AI systems with this ability, we collected a set of open-ended natural language human explanations of qualitative physics simulations.",
"The explanations include descriptions of the initial scene, i.e., before any physics is at play, and a sequence of identified pivotal events in a physics simulation.",
"Three physical concepts are crucial for our simulation to reach a specified goal state: gravity, collision, and friction.",
"Our work attempts to build an interpretable framework for qualitative physics reasoning with strong generalization abilities mirroring those of humans.",
"ESPRIT is the first-ever framework that unifies commonsense physical reasoning and interpretability using natural language explanations.",
"Our framework consists of two phases: (1) identifying the pivotal physical events in tasks, and (2) generating natural language descriptions for the initial scene and the pivotal events.",
"In the first phase, Figure 2: The end-to-end ESPRIT framework for identifying pivotal physical events, extracting the features from pivotal events in a table, and explaining solutions using a table-to-text model for natural language generation.",
"our model learns to classify key physical events that are crucial to achieving a specified goal whereas in the second phase, our model generates natural language descriptions of physical laws for the events selected in the first phase.",
"We demonstrate ESPRIT on the PHYsical REasoning (PHYRE) benchmark (Bakhtin et al., 2019).",
"PHYRE provides a set of physics simulation puzzles where each puzzle has an initial state and a goal state.",
"The task is to predict the action of placing one or two bodies (specifically, red balls of variable diameters) in the simulator to achieve a given goal.",
"Figure 1 shows an example of a task with a specified goal.",
"The input to ESPRIT is a sequence of frames from a physics simulation and the output is a natural language narrative that reflects the locations of the objects in the initial scene and a description of the sequence of physical events that would lead to the desired goal state, as shown in Figure 2.",
"The first phase of the framework uses a neural network classifier to identify salient frames from the simulation.",
"For the second phase we experimented with table-to-text models (Puduppully et al., 2019a,b) as well as pre-trained language models (Radford et al., 2018).",
"We evaluated our framework for natural language generated reasoning using several automated and human evaluations with a focus on the understanding of qualitative physics and the ordering of a natural sequence of physical events.",
"We found that our model achieves very high performance for phase one (identifying frames with salient physical events) and that, for phase two, the table-to-text models outperform pre-trained language models on qualitative physics reasoning.",
"We build our dataset by extending PHYRE (Bakhtin et al., 2019), a recent benchmark dataset for PHYsical REasoning.",
"1 PHYRE consists of a set of physics puzzles in a simulated 2D environment.",
"This environment follows simple deterministic Newtonian physics with a constant downward gravitational force and a small amount of friction.",
"All objects (balls, bars, standing sticks, and jars) are non-deformable, and each object color corresponds to an object type: red is the user-added dynamic object; green and blue are used for dynamic objects that are part of the goal state; purple is for static goal objects; gray is for dynamic scene objects; black is for static scene objects.",
"Each task starts with an initial scene and has a goal state, described in natural language.",
"The task can be solved by placing one or two red balls in the simulation environment and choosing their sizes in a way that when the simulation runs according to the laws of physics the goal state is achieved.",
"No further action can be taken after the simulation starts.",
"In this paper, we focus on the 25 task templates in the PHYRE dataset that involve the placement 1 https://phyre.ai/ Templates 25 Tasks 2441 Train/Val/Test 1950/245/246 Objects / Task 14 Frames / Task 658 Events / Task 54 Salient Events / Task 7 Tokens / Initial State Description 36 Tokens / Simulation Description 45 Vocabulary Size 2172 Table 1: Statistics for the ESPRIT Dataset.",
"of a single ball to reach the goal state.",
"Each template defines a set of 100 similar tasks generated by using different parameters for a template such as positions and sizes of objects.",
"All tasks within the same template have the same goal (e.g., make the blue ball touch the green ball) but somewhat different initial configurations.",
"We represent the simulation frames as structured tables by extracting information using the simulator module in the PHYRE API.",
"2 The simulations consist of 60 frames per second.",
"For each object, we collect its id, type (boundary, bar, jar, circle), color (red, green, blue, purple, gray, black), state (dynamic, static), and ( x, y ) coordinates.",
"Jars also have an angle of rotation, width, base length, and side length (referred to as just length).",
"Bars have length, width, and angle of rotation while circles have a radius.",
"For each collision between two objects, we collect the ( x, y ) coordinates, velocity as a ( v x , v y ) vector, and the angle of rotation in radians for each object involved in the collision.",
"Extracting data from the PHYRE simulator.",
"To track the motion of objects through a simulation, we intercepted PHYRE's built-in simulator.",
"First, we created a dictionary of objects and their attributes in the simulation's initial scene (includ-ing the predicted action that was performed).",
"It is important to note that the dictionary contains properties of both static and dynamic objects.",
"But because static objects such as the simulation boundary are not affected by the physics in the simulation and their properties never change.",
"So, unless a static object is involved in a collision, we did not 2 https://phyre.ai/docs/simulator.html collect any other data about that object during the simulation.",
"Once this initial pass was made, we extracted the images of frames generated for the 2500 single-ball simulations.",
"Each simulation was run for a maximum of 1000 time steps or approximately 16 seconds.",
"After the initial action is taken, a simulation is considered successful if it reaches the goal state and remains in that state for at least 180 consecutive time steps, the equivalent of three seconds.",
"If a simulation does not satisfy this goal condition, it is considered unsuccessful.",
"In this way, we found solution simulations for 2441 out of 2500 tasks.",
"The remaining 59 task simulations seem more complex and would possibly require a prohibitive number of trials ( > 10000 ) to reach the goal successfully and so we excluded those from our dataset.",
"Finally, we mapped the dictionary of objects and attributes in the initial state to the frames derived from the simulator so that we could track how the object's properties change from one frame to another.",
"Generating tables.",
"The three physical concepts at play in the simulations friction, collision, and gravity are either a cause or an effect of some collision.",
"Therefore, collisions were the most common physical event in the simulations (average = 54 per task) and so we decided to only record collisions.",
"For every collision extracted, we applied a window of size 3 to fetch frames before and after the collisions to remove any noise and get the more precise timestamp of the collision.",
"Because pivotal events in a solution simulation only occur when two objects collide or separate, like a ball falling onto another or a ball rolling off of an elevated bar, we treat both cases identically.",
"Based on the simulation screenshots of the initial state and the collision, we employed a two-stage annotation procedure using Amazon MTurk.",
"In the first stage, we showed the goal, the initial state, and all collisions during the simulation.",
"We asked annotators to pick pivotal or salient events by selecting all and only the collisions that are causally related to the placement of the red ball and are necessary for the completion of the goal.",
"In the second stage, we collected human annotations of natural language descriptions for the initial scene and explanations for the sequence of salient collisions annotated during the first stage.",
"We showed the annotators the goal, the initial state with the red ball added, an animated GIF of the simulation, and the frames of salient collisions.",
"We asked them to include descriptions of the shape, color, and position of the objects involved.",
"The annotations for the initial scene and salient collisions are collected in separate text boxes.",
"Our data statistics are summarized in Table 1.",
"We generated solutions for 2441 tasks, covering 25 different templates.",
"These tasks have an average of 14 objects, 658 total frames, and 54 collision events.",
"We split the tasks randomly into 1950 train, 245 validation, and 246 test.",
"On average, each task has 7 events marked as salient by the annotators.",
"Also, on average the description of the initial state and simulation each have about 40 tokens, with a vocabulary size of 2172 .",
"Pivotal event detection.",
"Given all the collision events in the simulation, select collisions that are crucial to achieving the goal state.",
"Pivotal or salient collisions are collisions that fulfill the following two criteria:",
"(i) causally related to the placement of the red ball, and",
"(ii) necessary for the completion of the given goal.",
"To train a classifier to detect salient events, we use the following features from the table representation: collision time step, each object's shape, position ( x, y ) , velocity ( v x , v y ) , and angle of rotation.",
"This totals 13 input features.",
"The first object is often static, such as the boundary, while the second is often dynamic, such as the user-placed red circle.",
"We experimented with a decision tree and a neural network MLP classifier to compare with a baseline that classifies every frame as salient.",
"The MLP has three layers with 128 , 128 , and 32 nodes.",
"There is a 15% dropout to avoid overfitting and batch normalization between each layer.",
"Finally, a sigmoid node converts the output into a probability from 0 to 1 (anything above 50% is classified as salient).",
"The models are trained on 59179 collisions ( 52066 negative, 7113 positive) and tested on 6893 collisions ( 6000 negative, 893 positive).",
"position, type) in the initial frames, generate a corresponding natural language description of the initial scene.",
"The generated text should faithfully describe all the objects in the corresponding input frame.",
"Natural language explanations for sequences of pivotal events.",
"Given a sequence of pivotal events for a simulation and the goal, generate a natural language description to explain the solution simulation.",
"The generated text should faithfully summarize the simulation by explaining the causal sequence of salient events in it.",
"The goal of natural language generation for our task is to explain the pivotal physical events in the simulation so that an end user can solve the task more efficiently and reliably.",
"Hence, we experimented with treating the physical event description generation as (1) Table-to-Text Generation and as (2) Language Modeling .",
"The salient event detection component of our system serves as the content selection component of the natural language generation pipeline.",
"We describe the two approaches in the following sections.",
"For the initial state description, the input is the structured table representation of the initial state, and the model generates a textual description conditioned on the input table.",
"Similarly, for the salient events explanation, the model produces the description given the structured table representation of all the salient events as the input.",
"Effective table-to-text generation can be leveraged to teach AI agents to solve tasks in natural language and output explanation for the steps in the task solution.",
"For both generation tasks, we use the model from Puduppully et al. (2019b) which is a neural model for table-to-text generation by explicitly modeling entities.",
"3 Since our desired generations are entity coherent, in that their coherence depends on the introduction and discussion of entities in discourse (Karamanis et al., 2004), the entity-based table-to-text generation model is a proper method for our task.",
"Unlike previous neural models treating entities as ordinary tokens, following Puduppully et al. (2019b), we explicitly create entity representations for our objects in the physical environment and update their representation as the text is generated.",
"We also tried to use Puduppully et al. (2019a), but it requires a domain-specific relation extraction model to generate a specialized input, so we could not use it.",
"as { r j,l } Ll =1 ,j =1 ,..., | r | where | r | is the number of records for this example, and L is the number of features for each record.",
"For example, r j, 1 are values and r j, 2 are entities.",
"The output y is description with words y = [ y 1 , . . . , y | y | ] where | y | is the length of the description.",
"Encoder.",
"We first create embeddings r j,l of the features r j,l , and then use a feed-forward layer to obtain the record embeddings r j .",
"where W r and b r are model parameters.",
"From the record embeddings, we then use two methods to create the encoder outputs { e j } | r | j =1 : AVG .",
"We use e j = r j , and the first hidden state of the decoder is the average of the record representations: avg ( { e j } | r | j =1 ) .",
"BiLSTM .",
"To account for the chronological order in the physical simulation, we use a BiLSTM over [ r 1 , . . . , r | r | ] , whose hidden states are extracted as { e j } | r | j =1 .",
"The first hidden state of the decoder is initialized with the concatenation of the final step hidden states of the BiLSTM.",
"Entity memory.",
"For each unique entity k (i.e., one of r j, 2 values), we compute x k as the average embeddings of all records which satisfy r j, 2 = k .",
"During each decoding step t , we maintain an entity memory representation u t,k , and initialize it at t = 1 as: u t = 1 ,k = W i x k , where W i is a model parameter.",
"Denote the hidden state of the decoder at t as d t .",
"We update the entity representation u k,t at each t with a gating mechanism as follows: t = ( W d d t + b d ) , t,k = t (cid:12) ( W e d t + b e + W f u t 1 ,k + b f ) , u t,k = W g d t , u t,k = (1 t,k ) (cid:12) u t 1 ,k + t,k (cid:12) u t,k , where W d,e,f,g and b d,e,f are model parameters, and (cid:12) is element-wise product.",
"t indicates if there should be an update at t , and t,k controls the update by interpolating between the previous u t 1 ,k and candidate entity memory u t,k .",
"Hierarchical attention.",
"We then use a hierarchical attention mechanism such that the decoder can first focus on entities and then the records for these entities.",
"We can rearrange the encoder output e j in two-dimensional g k,z , where k is index for entities and z is the index for records of corresponding entities.",
"For each entity, we can compute the attention over its records along z , and compute the entity context vector s t,k : t,k,z exp( d (cid:124) t W a g k,z ) , (cid:88) z t,k,z = 1 , s t,k = (cid:88) z t,k,z g k,z .",
"Then we compute the higher level attention over entities along k , and compute the encoder context vector q t : t,k exp( d (cid:124) t W h u t,k ) , (cid:88) k t,k = 1 , q t = (cid:88) k t,k s t,k .",
"In both generation tasks, we fine-tune the entity model provided by Puduppully et al. (2019b) for 125 epochs.",
"We use the same training hyperparam-eters and select the best model using token-match accuracy following Puduppully et al. (2019b).",
"We fine-tune a language model (LM) to generate descriptions of the initial state and explanations for sequences of pivotal physical events using the training split of our dataset.",
"We use the pre-trained GPT-large (Radford et al., 2018) LM, which is a multi-layer transformer-based (Vaswani et al., 2017) model.",
"For the generation of initial state descriptions, the LM is fine-tuned conditioned on the objects (such as ball, jar, etc.) and their attributes (such as dynamic, static, color, size, etc.) extracted from the simulator described in Section 2.2 and the human written descriptions.",
"So, the input context during training is defined as follows: C init = o 1 , o 2 , . . . , o n , In the physical simulation where o 1 , o 2 , ..., o n is the list of extracted objects with their attributes, e.g., small red dynamic ball.",
"The model is trained to generate the initial scene description s according to a conditional language modeling objective.",
"The objective is to maximize: (cid:88) i log P ( s i | s i k , . . . , s i 1 , C init ; ) , where k is the size of the context window (in our case k is always greater than the length of s so that the entire explanation is within the context).",
"The conditional probability P is modeled by a neural network with parameters conditioned on C init and previous tokens.",
"For explanations of the salient physical events in the simulation, the LM is fine-tuned conditioned on the initial state descriptions and the human generated reasoning.",
"So, the input context during training is defined as follows: C sim = init scene. The red ball is placed and The model is trained to generate the physical reasoning r by maximizing the following objective: (cid:88) i log P ( r i | r i k , . . . , r i 1 , C sim ; ) .",
"We generate sequences of maximum length 40 , use a batch size of 12 , train for a maximum of 50 epochs, selecting the best model based on validation BLEU and perplexity scores.",
"The learning rate was set to 10 6 , warmed up linearly with proportion 0 .",
"01 and weight decay 0 .",
"01 .",
"We experimented both with temperature 1 .",
"0 and lower temperatures ( 0 . 1 , 0 . 2 ) to restrict generation to the physics domain and avoid diversity.",
"For word sampling, we tried top k as 3 and 5 as well as greedy ( k = 1 ).",
"We found that the temperature of 0 .",
"1 with k = 3 worked best.",
"We note that it is not fair to compare the generated text by the table-to-text model and the LM because the input to the table-to-text model is structured with fine-grained details while the input to the LM is an unstructured prompt.",
"A promising approach would be one that uses a table encoder with a pre-trained language model that is more robust and generalizable.",
"We use precision, recall, and F1 for the pivotal event classification task which can be formulated as a binary classification problem.",
"For the natural language description of initial frames and solution simulations, we use automatic metrics including BLEU-1, BLEU-2, ROUGE L, and METEOR using the implementation from Sharma et al. (2017).",
"The automated metrics for generation evaluation are very crude and do not measure the correctness and coverage of actual physical concepts or even the natural ordering in which physical events occur in a given simulation.",
"For example, an object first falls and then it hits the ground or an object first falls on some other object which then causes the second object to be set in motion.",
"So, we deployed human evaluations to measure the quality of the physical concepts captured by our language generation models in terms of validity and coverage .",
"To measure the validity of initial scene descriptions, we showed humans the generated description for a task, the initial frames from that task, and three random distractor initial scenes from other tasks which may or may not be from the same template.",
"Then, we asked them to select the frame that belongs to the task being described.",
"This evaluates how faithful and accurate the generated description is to the input initial state.",
"If the generated text does not include a detailed description of the objects, their attributes, and their positions, it would be difficult for humans to map them to the correct initial scene.",
"For evaluating the validity of pivotal events descriptions, we showed humans the generated text for a task, the initial state of that task, and three distractor initial states generated from the same task but with positions of the red ball that do not solve the task.",
"Then, we asked them to select the correct initial state with the red ball that would eventually reach the task goal.",
"A good simulation description should give higher accuracy for humans to choose the correct solution.",
"Note that we also evaluated the human generated initial state description and pivotal events description by asking annotators to match the human natural language descriptions that we collected and found the average accuracy to only be 70 .",
"2% for the initial scene description and 44 .",
"7% for the pivotal events description (Ta-Precision Recall F1 Positive 0.01 0.11 0.02 Decision Tree 0.87 0.86 0.87 MLP 0.90 0.91 0.90 Table 2: Results on pivotal events classification. ble 4).",
"This is because of reporting bias, i.e., humans rarely state events that are obvious (Forbes and Choi, 2017).",
"For example, a falling ball would bounce multiple times or an object pushed off an elevated bar by another object would have a projectile motion.",
"Lack of such fine-grained explanations is what makes the human evaluation of human generated descriptions especially for the sequence of pivotal events have poor accuracy.",
"The PHYRE tasks incorporate three physical concepts in every simulation gravity, collision, friction.",
"So, to measure coverage , we show humans just the natural language description of the simulation and ask them to select words that would imply any of the three concepts.",
"For example, rolling or slipping would imply friction, falling would imply gravity, hit would imply collision, etc.",
"We note that many physical concepts are very abstract and even difficult to be noticed visually, let alone describe in natural language.",
"For example, moving objects slow down due to friction, but this physical concept is so innate that humans would not generally use words that imply friction to describe what they see.",
"This metric gives us an overview of what degree of coverage the text generation models have for each of the three physical concepts.",
"For all our human evaluations we used MTurk and collected 3 annotations per instance and report the majority.",
"We paid Turkers 50 cents per instance for the validity evaluation and 50 cents per instance for the coverage evaluation.",
"Table 2 summarizes the performance of the pivotal events classifiers.",
"The decision tree and MLP clas-sifiers get very high performance with 0 .",
"87 and 0 .",
"9 F1 scores respectively.",
"The baseline classifies every event as pivotal and thus performs very poorly.",
"From the decision tree, we extract feature importance values for each of the 13 input variables described in Section",
"3. The most important variable is the time step of the collision, with a weight of 0 .",
"178 .",
"The most important features for classification were an object's collision position, its velocity, and then its angle of rotation.",
"Given such strong results for identifying pivotal events, we were able to predict the salient events of previously unseen simulations and that helped in the next step of generation descriptions of salient events.",
"Table 3 shows the performance of the three text generation models using automatic metrics.",
"The table-to-text models perform better than the language model on most of the metrics.",
"The AVG model performs slightly better than the BiLSTM on both generation tasks.",
"However, these metrics are a very crude measure of physical reasoning performance and are not intuitive.",
"The human evaluations, on the other hand, are more informative and insightful.",
"Human evaluation validity.",
"While the GPT model can achieve scores comparable to the data-to-text models using automated metrics, its performance using human evaluation is as good as chance, as shown in Table",
"4. We found that the GPT LM generation was very high-level and is not useful for humans to decide which tasks (among the correct and distractor choices) the generated solution explanation of the initial state and pivotal events match.",
"By contrast, AVG and BiLSTM have significantly higher accuracy, mainly because their output is more fine-grained and so gives a more thorough explanation of the solution.",
"Surprisingly, the human annotations of the descriptions that we collected as ground truth are not perfect either, indicating that humans tend to produce sentences that are not sufficiently discriminate and even sometimes skip obvious details such as whether the ball rolls to the left vs. right.",
"Human evaluation coverage.",
"Table 5 shows the results for coverage of physical concepts.",
"The outputs of the GPT model are repetitive and not grammatical, containing little explanation of physical concepts.",
"AVG and BiLSTM, on the other hand, can generate text that contains fine-grained descriptions of physical concepts even sometimes better than those generated by humans.",
"This is because humans don't describe everyday commonsense concepts using fine-grained language, while the AVG and BiLSTM models tend to generate long detailed descriptions containing various words for gravity (e.g., falls, drop, slope, land), friction (e.g., roll, slide, trap, travel, stuck, remain), and collision (e.g., hit, collide, impact, land, pin, bounce).",
"Initial state description Pivotal events description BLEU-1 BLEU-2 ROUGE L METEOR BLEU-1 BLEU-2 ROUGE L METEOR GPT (Radford et al., 2018) 15.37 2.25 20.08 9.93 24.32 3.89 26.82 12.14 AVG (Puduppully et al., 2019b) 15.37 11.38 22.53 24.09 20.53 15.89 29.11 27.38 BiLSTM (Puduppully et al., 2019b) 14.74 10.59 21.35 23.00 20.36 15.48 27.93 26.91 Table 3: Automatic evaluation of initial state and pivotal events descriptions on the test set.",
"Qualitative analysis.",
"An example of the model inputs and outputs is in Table 6 and taken from simulation id 00014:394.",
"Here we make two observations.",
"First, the generated descriptions are not as succinct as the gold annotations, because our model is obtained from fine-tuning an entity-based model pre-trained on generating long Rotowire game summaries (Wiseman et al., 2017).",
"Second, the output generated by the BiLSTM model predicts the incorrect direction of motion for the green ball, an error that is occasionally seen across generation descriptions of both models.",
"This indicates that a table-to-text paradigm for generating such solution explanations is not adequate for learning the direction of motion for the physical reasoning required for these explanations.",
"Qualitative physics and Visual reasoning.",
"Qualitative physics aims to represent and reason about the behavior of physical systems (Forbus, 1988).",
"McCloskey and Kohl (1983); McCloskey et al. (1983) suggests that people use simplified intuitive theories to understand the physical world in day-to-day life.",
"Earlier works explored using probabilistic simulations to train physical inference through physical simulations (Battaglia et al., 2013; Zhang et al., 2016).",
"Recent papers use neural networks over visual inputs to predict future pixels (Finn et al., 2016; Lerer et al., 2016; Mirza et al., 2016; Du and Narasimhan, 2019) or make qualitative predictions (Groth et al., 2018; Li et al., 2016, 2017; Janner et al., 2019; Wu et al., 2015; Mao et al., 2019).",
"Furthermore, several frameworks and benchmarks have been introduced to test visual reasoning such as PHYRE (Bakhtin et al., 2019), Mujoco (Todorov et al., 2012), and Intphys (Riochet et al., 2018), some of which are combined with natural language for question answering such as NLVR (Suhr et al., 2017), CLEVR (Johnson et al., 2017), and VQA (Antol et al., 2015).",
"In a parallel work, Yi et al. (2020) introduced the CLEVRER dataset for reasoning about collision events from videos with different types of questions.",
"In contrast, we develop the ability to reason and explain the behavior of dynamic physical systems by generating natural language.",
"to use natural language for explanation and commonsense reasoning (Lei et al., 2016; Camburu et al., 2018; Forbes and Choi, 2017; Chai et al., 2018; Forbes et al., 2019; Rajani et al., 2019; DeY-oung et al., 2020).",
"Lei et al. (2016), for example, generate textual rationales for sentiment analysis by highlighting phrases in the input.",
"Forbes and Choi (2017) learn the physical knowledge of actions and objects from natural language.",
"Camburu et al. (2018) propose e-SNLI by generating explanations for the natural language inference problem at a cost of performance.",
"Rajani et al. (2019) propose to use LMs to generate explanations that can be used during training and inference in a classifier and significantly improve CommonsenseQA performance.",
"Bisk et al. (2020) propose to use a question answering task to test the model's physical commonsense and reasoning ability.",
"In contrast to the previous work, we focus on identifying pivotal physical events and then generating natural language explanations for them.",
"We find that this two-step approach works more effectively.",
"Table-to-text generation.",
"Table-to-text generation aims to produce natural language output from structured input.",
"Applications include generating sports commentaries from game records (Tanaka-Ishii et al., 1998; Chen and Mooney, 2008; Taniguchi et al., 2019), weather forecasts (Liang et al., 2009; Konstas and Lapata, 2012; Mei et al., 2016), biographical texts from Wikipedia infoboxes (Lebret et al., 2016; Sha et al., 2018; Liu et al., 2018; Perez-Beltrachini and Lapata, 2018), descriptions of knowledge bases (ODonnell et al., 2000; Trisedya et al., 2018; Zhu et al., 2019; Yu et al., 2019) and source code (Iyer et al., 2016), and dialog response generation from slot-value pairs (Wen et al., 2015).",
"Recently, neural encoder-decoder models (Sutskever et al., 2014; Cho et al., 2014) based on attention (Bahdanau et al., 2015; Luong et al., 2015) and copy mechanisms (Gu et al., 2016; Gulcehre et al., 2016) have shown promising results on table-to-text tasks (Wiseman et al., 2017; Gehrmann et al., 2018; Puduppully et al., 2019a,b; Iso et al., 2019; Castro Ferreira et al., 2019; Zhao et al., 2020; Chen et al., 2020a).",
"While traditional methods use different modules for each generation stage in a pipeline (Reiter and Dale, 2000), neural table-to-text models are trained on large-scale datasets, relying on representation learning for generating coherent and grammatical texts.",
"Puduppully et al. (2019a) propose a neural network approach that first selects data records to be mentioned and then generates a summary from the selected data, in an end-to-end fashion.",
"Chen et al. (2020b) use pre-trained language models to generate descriptions for tabular data in a few shot setting.",
"ESPRIT uses a two-step approach for qualitative physical reasoning.",
"To train models that can describe physical tasks, we collected open-ended natural language text descriptions of initial states and pivotal physical events in a 2D simulation from human annotators.",
"We then trained a model to identify these pivotal events and then fine-tuned on pre-trained table-to-text generation and language models without using the image representations of the actual simulation frames.",
"Our results indicate that table-to-text models perform better than language models on generating valid explanations of physical events but there is a lot more room for improvement compared to human annotations.",
"We hope that the dataset we collected will facilitate research in using natural language for physical reasoning.",
"Reinforcement Learning (RL) agents may be able to solve physical tasks much more efficiently by leveraging natural language reasoning as opposed to model-free approaches that are often highly sample-inefficient.",
"An RL agent that leverages natural language descriptions of physical events to reason about the solution for a given goal (similar to Zhong et al. (2020)) or for reward shaping (similar to Goyal et al. (2019)) could be a compelling line of future research.",
"More importantly, having a model that can meaningfully reason about commonsense qualitative physics could be interpretable and more robust, as they might focus on the parts of physical dynamics that are relevant for generalization to new scenarios.",
"Such systems are widely applicable to self-driving cars or tasks that involve human-AI interactions, such as robots performing everyday human tasks like making coffee or even collaboratively helping with rescue operations.",
"We would like to thank Abhinand Sivaprasad for his helpful discussions and annotations.",
"We also thank the anonymous reviewers for their feedback."
] | [
"abstain",
"objective",
"objective",
"method",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Table fact verification aims to check the correctness of textual statements based on given semi-structured data.",
"Most existing methods are devoted to better comprehending logical operations and tables, but they hardly study generating latent programs from statements, with which we can not only retrieve evidences efficiently but also explain reasons behind verifications naturally.",
"However, it is challenging to get correct programs with existing weakly supervised semantic parsers due to the huge search space with lots of spurious programs.",
"In this paper, we address the challenge by leveraging both lexical features and structure features for program generation.",
"Through analyzing the connection between the program tree and the dependency tree, we define a unified concept, operation-oriented tree, to mine structure features, and introduce Structure-Aware Semantic Parsing to integrate structure features into program generation.",
"Moreover, we design a refined objective function with lexical features and violation punishments to further avoid spurious programs.",
"Experimental results show that our proposed method generates programs more accurately than existing semantic parsers, and achieves comparable performance to the SOTA on the large-scale benchmark TABFACT.",
"With the rise of misleading information on the Internet, such as fake news, rumors and political deceit, fact-checking has been developed as a means of detecting and filtering false information.",
"Table fact verification (TFV) is a specific fact-checking task that requires performing logical operations such as comparison, superlative and aggregation over given tables to verify textual statements.",
"Programs play an important role in TFV.",
"On one hand, correct programs can provide rationales for model decisions, which make reasoning analysis * Corresponding author Season Podiums 1980 9 1981 8 1981 0 ...",
"and failure diagnosis feasible (Zhou et al., 2018).",
"On the other hand, they can be used to fetch the key evidences for verification.",
"Figure 1 gives an example of mainstream methods (Zhong et al., 2020a; Shi et al., 2020b; Yang et al., 2020; Shi et al., 2021) for TFV.",
"It first generates latent programs from statements, then collects evidences from tables by executing the programs over the tables, and finally leverages all information for final predictions.",
"Compared with naive methods(Chen et al., 2020; Zhang et al., 2020a) which simply put statements and linearized tables into language models for verification, the mainstream methods additionally introduce programs to reveal the evidences (e.g., verbalized evidence V1 ) covered by logical operations (e.g., max([row1, row2]], podiums) ) and to fetch the key information from the table (e.g., 8 ).",
"But an incorrect or spurious program may introduce irrelevant or even contradictory evidences.",
"So it is crucial to get correct programs that properly extract evidences from tables, especially when tables are too large to be encoded by neural networks.",
"Despite being important, program generation remains underexplored for TFV.",
"To the best of our knowledge, only LPA (Chen et al., 2020) works on program generation.",
"It first searches programs 7624 with human-designed features, then ranks them with a neural network, and finally uses the execution result of the top program as the prediction.",
"However, it exhibits an unacceptable performance which means it generates incorrect programs.",
"The remaining approaches just predict the correctness of statements but never concern about generating correct programs.",
"In TFV, there is still a need to find better solutions for program generation.",
"Intuitively, we can resort to weakly supervised semantic parsing (Liang et al., 2011) for the program generation, but existing semantic parsers may fail in TFV for the amplified spurious program problem caused by the binary label.",
"Due to the lack of program labels, existing methods will sample label consistent programs for model training.",
"In TFV, any sampled program that outputs a Boolean value has a 50% chance of hitting the correct label; hence there are many label consistent programs, while only a small part of label consistent programs are correct, implying that the rest are all spurious.",
"In this paper, we carefully examine the syntax structures of statements and find that task-related structure features are the key to address the issue mentioned above.",
"We propose a unified operation-oriented tree constructed in three steps.",
"Firstly, we link entities between the table, trigger dictionary and statement.",
"Secondly, we obtain the original tree using a dependency parser with the linked statement as input.",
"Thirdly, the original tree is pruned and merged to a simplified tree that contains only information related to operations.",
"Such a unified tree can provide distant supervision, assisting our model in generating single operations correctly and generating all operations in the correct order.",
"As a result, we have a higher probability of getting correct programs and evading spurious ones.",
"Then we introduce Structure-Aware Semantic Parsing (SASP) by designing a scoring function based on the proposed tree and fusing the sample distributions computed by the scoring function and neural network.",
"At last, we design a refined objective function with lexical features and violation punishments to avoid spurious programs further.",
"Experimental results on Tabfact and Logic2Text show that SASP improves the performance of the baseline model significantly, and achieves comparable performance to the State-Of-The-Art method.",
"Our contributions are as follows: We propose an operation-oriented tree to provide distant supervision for semantic parsing.",
"We propose SASP which leverages both lexical features and structure features for the serious spurious problem in weakly supervised semantic parsing for TFV.",
"With the proposed method, we can generate more accurate programs which can not only boost existing mainstream methods for TFV, but also provide explanation for verification.",
"Fact Verification Fact verification aims at identifying the truthfulness of online textual statements given different sources of evidences, including document sets (Thorne et al., 2018; Nie et al., 2019; Zhong et al., 2020b; Wan et al., 2021), images (Suhr et al., 2019; Li et al., 2020) and structured tables (Chen et al., 2020; Zhong et al., 2020a; Shi et al., 2020b; Zhang et al., 2020a; Yang et al., 2020; Shi et al., 2021).",
"Despite the sources of evidences used to support the verification vary, the methods for different tasks appear to have the same idea.",
"They first locate the key evidences that will aid in their verification, then fuse the collected key evidences with the original statement to make the final prediction.",
"In this paper, we focus on generating better programs that allow existing methods to get key evidences from tables efficiently, hence benefiting existing methods for TFV.",
"There are also many explainable fact verification works(Kotonya and Toni, 2020a).",
"Attention based methods(Popat et al., 2018; Lu and Li, 2020; Wu et al., 2020) highlight key evidences according to attention weights.",
"Atanasova et al. (2020); Kotonya and Toni (2020b) generate explanations in natural language with text summarization technology.",
"Gad-Elrab et al. (2019); Ahmadi et al. (2020) use horn rules and knowledge graphs to mine explanations.",
"Our work is similar to the third line of works from the perspective of explainability.",
"Semantic Parsing Due to the expensive cost of annotated programs, weakly supervised semantic parsing (Liang et al., 2011; Berant et al., 2013; Artzi and Zettlemoyer, 2013) has been proposed to learn program generation from sentence-label pairs.",
"Compared with full supervision, weak supervision brings spurious problems: there may be spurious programs that accidentally reach the right answer for the wrong reason, and they will provide wrong supervision for model training.",
"Previous work (Pa-supat and Liang, 2016) uses crowd-sourced deno-7625 tations to prune spurious programs.",
"Liang et al. (2018) use both programs inside and outside the memory buffer to compute the expected return objective in case the neural model is misled by spurious programs inside memory.",
"Dasigi et al. (2019); Misra et al. (2018); Agarwal et al. (2019) rely on lexical features to differentiate between spurious and correct programs.",
"Most recently, Cao et al. (2019); Ye et al. (2019); Shao et al. (2021) exploit the semantic correlations between sentences and programs to rule out spurious programs via jointly learning semantic parser and sentence generator.",
"In this paper, we focus on a more complex problem, learning program generation with (sentence, binary label) pairs, in this field, and take the above approaches a step further by leveraging both lexical features and structure features.",
"There already exist many works utilizing the structural correlations between a sentence and its programs.",
"Previous works(Reddy et al., 2016; Hu et al., 2018) directly transform the dependency structure of a sentence into a program, which is not satisfactory on complex sentences.",
"In recent years, some works(Wang et al., 2019; Herzig and Berant, 2021; Li et al., 2021) treat structural constraints as latent variables, then parse a sentence into a program under the constraints.",
"However, it is difficult to learn latent variables in a noisy environment.",
"Simultaneously, modeling structural correlations explicitly requres human",
"annotations.(Sun et al., 2020; Shi et al., 2020a).",
"In this paper, we propose a concise and robust method to integrate the structural correlations into semantic parsing.",
"Structure-Aware Semantic Parsing (SASP) centers around the operation-oriented tree to deconstruct some compositionality of statement and generate program correctly.",
"Figure 3 gives an overview of our proposed SASP.",
"In this section, we will first introduce the task formulation, then describe how to construct the operation-oriented tree, and give the way to generate programs following the well-designed tree at last.",
"Given a table T = { cell i,j | i R, j C } with the table header H = { col j | j C } as evidence, a statement S = { w i | i W } with W words and a true label y Y = { T rue, F alse } where T rue means T entails S and F alse means T refutes S ,",
"we aim to train a model to do explainable verification.",
"More specifically, we train a model to translate S into an executable program z , then predict a label y z Y by accessing the table T with program z such that y z = y .",
"Different from most existing methods, which just pay attention to predicting a label y Y such that y = y , our model also generates a program as accurate as possible to explain and support the verification.",
"Program A program z can be seen as a set of executable operations { op i | i M } .",
"Considering the program example in figure 1, there are six operations in total, and each operation op i = { op i",
".func, ..., op i",
".arg j , ..., op i",
".out } has one operator op i",
".func (e.g., filter_eq in the figure), multiple operands op i",
".arg j , 0 < j relevant to the table T (e.g., all_rows , season and 1981 ) and one output v i = op i",
".out which may be selected as an operand by subsequent operations.",
"When the whole program is executed by an interpreter, it will be parsed into a tree as shown in figure 1 and executed from bottom to up.",
"According to the execution correctness and the semantic consistency, we divide programs from the executable program set Z into three categories, as shown in figure",
"2. 3.2 Operation-Oriented Dependency Tree In this part, we first reveal the connection between the program tree and the dependency tree.",
"Then, we design a unified operation-oriented dependency tree for making full use of the connection.",
"Syntactic structures, the organization of tokens in a sentence and how the contexts among them are interrelated, can be revealed by a dependency tree whose nodes and edges correspond to words and grammatical relations in the sentence.",
"We observe that: (1) the operations related to descendants tend to be executed before those related to ancestors in the dependency tree; (2) the operator and operands The definition of specific operations are listed in Appendix A.1 7626 root In 1981 season, the highest number of podiums was 8 and the lowest was 0 Pruning Irrelevant Info Merging Relevant Nodes [;1981; season] [highest;8;podiums] [lowest;0;podiums/wins] root Operation-oriented tree root [1981] [season] [highest] [podiums] [8] [lowest] [0] In 1981 season, the highest number of podiums was 8 and the lowest was 0 Dependency Parsing neural flow symbolic flow attention function representation column representation cell representation [CLS] Statement [SEP] Table [SEP] BERT + + + LSTM cell _ 1981 Program Interpreter Final Prediction sample Figure 3: An overview of our proposed approach.",
"within one operation tend to have shorter distances in the dependency tree; in the correct program compared with the incorrect or spurious one.",
"Use the dependency tree in figure 3 and the program in figure 1 as an example.",
"The operation filter_eq related to the child node is executed before the operation eq(v1, 8) corresponding to the father node.",
"What's more, the distance of operands in the incorrect operation filter_eq(all_rows, podiums, 1981) is 6, while that in filter_eq(all_rows, season, 1981) , a correct operation, is just",
"1. The observations above suggest that there exist some structural correlations between a statement and its programs.",
"We will present how to make use of them in the next section.",
"Before that, we propose an operation-oriented dependency tree to strengthen the above rules in two steps.",
"First, we prune the original dependency tree to focus exclusively on the operation-related structure.",
"Then, we merge the information around every operation to make information in a single operation more com-pact.",
"What's more, it is more convenient to define and calculate the distance in a simplified tree.",
"The left part of figure 3 illustrates how to construct the proposed tree.",
"First of all, we do rule-based entity linking to find potential operators and operands from the statement.",
"For operators detection, we match strings between the statement and the pre-defined trigger words , and give the matched entities a function type.",
"As for operands, we divide them into two types, cell and column , as they are linked to table cells and the table header respectively (e.g., 1981 has a cell type and season has All pre-defined trigger words are listed in Appendix A.2",
"a column type).",
"Then we pass tokens and linked entities with types into a general dependency parser to get a dependency tree .",
"Every linked entity node n = { n.children, n.type, n.val } , n has a list type with one type and a list val with one entity.",
"For every token node, its type list and val list are both empty.",
"After that, for every entity node with a cell type value cell i,j , we will add column and col j into its type list and val list respectively.",
"At last, 7627 we call PRUNE in algorithm 1 using as input and get output .",
"The nodes left in the tree may contain function info corresponding to the logical operations, cell info and column info from tables.",
"In this section, we will introduce SASP, which unifies both structural features and lexical features with one operation-oriented dependency tree.",
"As shown in the right part of figure 3, we first employ BERT (Devlin et al., 2019) to encode the statement S and the table T following TABERT (Yin et al., 2020).",
"Then we get representations for the statement and entities with different types, which will be fed into the decoder.",
"During decoding, the logits are computed by an LSTM with attention mechanism(Luong et al., 2015): h t = LST M ( h t 1 , x t 1 ) a t = MLP ([ h t ; Attention ( h t , S )]) l t = MatMul ( X t , a t ) (1) where h t is the hidden state, x t 1 is the token generated previously, X t is the candidate token list selected from the vocabulary according to the token type at timestep t (e.g., the type for the second token in the program being predicted is column ), and l t are the logits for the t th token over X t .",
"However, in TFV, it is difficult to find the correct optimization direction with only attention mechanism, especially at the beginning of the training, because of the serious spurious problem.",
"So we bias the logits with our proposed tree additionally.",
"As a result, our model can give the correct program a higher probability, therefore exploring search space efficiently and evading spurious programs.",
"More specifically, we design two scoring mechanisms in line with the two rules found in the previous section.",
"As shown in algorithm 2, given < 1 , score = distance means the closer distances, the higher scores.",
"For operator selection, we calculate the average distance from the candidate x X to its leaves in the tree , and set the distance to be + if it is not in the tree.",
"For example, the candidate operator max (triggered by highest ) has a score of 1 .",
"In this way, we give operators closer to leaves higher scores, which leads to operations related to descendants being generated before those related to ancestors.",
"For operand selection, we compute the average distance from the candidate x X to tokens in the operation op .",
"Use the operation in figure 3 as an example, the score of the Algorithm 2 Scoring function with candidate token list X , operation-oriented tree and operation being predicted op as input, where < 1 is a hyper-parameter.",
"candidate 1981 is 0 when the timestep t = 3 .",
"In this way, we prioritize the tokens closed to existing information in the operation being generated, so that the distances inside one operation tend to be shorter in the dependency tree.",
"At last, we combine the scores t given by algorithm 2 and the logits l t computed by Equation 1 to get the final sample distribution: t = Score ( X t , , op ) P ( X t | S, T, x <t ) = Softmax ( l t + t ) (2) where is a hyper-parameter, is the operation-oriented tree and op is the operation being predicted.",
"After we sample x t P ( X t | S, T, x <t ) , it will be used to update h t , and op .",
"We give more details in Appendix A.3.",
"Previous works(Agarwal et al., 2019; Dasigi et al., 2019) measure the relevance between a sentence and a program by their coverage, and use that lexical coverage to augment the reward function.",
"In a similar spirit, we design the reward based on our proposed tree.",
"Our intuition is that different types of tokens play different roles in the operation-oriented tree, and therefore should be treated under varying degrees.",
"And our reward is defined below.",
"are hyper-parameters, and y z is the label predicted by accessing the table T with the program z .",
"Since all operation-related tokens of a statement are reserved in the operation-oriented tree, we can calculate the relevance between a statement and a program by r = (cid:80) n 1 { i, n.type [ i ] = n.val [ i ] z } (cid:80) n 1 { i, n.type [ i ] = } (4) where { n | n } are nodes of our proposed tree.",
"For further improvement, we modify the generalized update equation in PolicyShaping (Misra et al., 2018) to get Maximum Likelihood Most Violation Reward.",
"The final objective function is: J = (cid:88) ( S,T ) D (cid:16) (cid:88) z Z set R ( z ) ( z | S, T ) max z Z err ( ( z | S, T )) (cid:17) (5) where D contains all S-T pairs, Z set is the set of sampled executable programs, Z err Z set is the set of incorrect programs, is the sample policy, is a hyper-parameter and contains all the trainable parameters.",
"We think such an update equation more robust than REINFORCE helps the model learn better with many spurious programs in Z set .",
"Dataset and Evaluation Metrics We conduct experiments on the large-scale dataset TABFACT (Chen et al., 2020), which aims to study fact verification given semi-structured data as evidence.",
"TABFACT contains 16,573 tables and 118,275 statements which are divided into training (80%), validation (10%) and testing (10%) sets.",
"The testing set is further partitioned into simple and complex sets.",
"The statements in the complex set are more complicated in semantic compositionality than those in the simple set.",
"Because there is no program ground-truth provided in TABFACT, we just use the label accuracy as metric for comparison, which is also called execution accuracy (Ex.Acc).",
"We also conduct experiments on WikiTableQues-tion (WTQ) (Pasupat and Liang, 2015), a commonly used weakly supervised semantic parsing dataset, for further evaluation.",
"And we use the same setting as previous works.",
"10,000 correct statement-table-program tuples, to evaluate parse tree matching accuracy (PT.Match) (Kim et al., 2020) for programs generated by our method and other methods that also provide programs.",
"Because there are only \"ENTAILED\" statements in Logic2Text, we use the model trained on TABFACT to predict programs without tuning.",
"Implementation Details We use CRF2o (Zhang et al., 2020b) for dependency parsing.",
"For semantic parsing, we use pytorch neural symbolic machine (Liang et al., 2017, 2018; Yin et al., 2020) as our baseline and improve it with the operation-oriented tree.",
"Further, to bootstrap SASP, we use t in Equation 2 to sample around 10 label consistent programs per example, and load them into memory buffer before training.",
"For BERT parameters, we set the hidden size to 768, and use Adam optimizer with lr 5e-5, warmup step 30k, dropout 0.2.",
"For LSTM parameters, we set hidden size to 200, and use Adam optimizer with lr 3e-3, train step 150k, dropout 0.2.",
"As for hyper-parameters , , func , cell , column and , we set them to 0.7, 2, 0.2, 0.4, 0.4 and 0.2 respectively.",
"All experiments were conducted on a workstation with 128 GB of RAM and 2 RTX 3090 GPUs.",
"Our source code is available at: https://github.com/ousuixin/SASP .",
"Compared Systems We compare our model with the following baselines, including six that focus on label prediction and two that pay extra attention to program generation.",
"Among the former five methods, Table-BERT (Chen et al., 2020) and SAT (Zhang et al., 2020a) focus on table linearization, so they use different ways to change 2-dimensional tables into 1-dimensional sequences composed of tokens, and then feed them into BERT for label prediction.",
"LFC (Zhong et al., 2020a), HeterTFV (Shi et al., 2020b), ProgVGAT (Yang et al., 2020) and LERGV(Shi et al., 2021) pay attention to comprehending tables and programs.",
"They use different ways to encode programs (generated by LPA-ranking) and tables for verification, although the programs they use are not precise at all.",
"The latter two methods will generate programs and use program execution results as final predictions, including LPA-ranking (Chen et al., 2020) and MAPO (Liang et al., 2018) with BERT.",
"Table 1 gives the overall performance of all eight baselines and our proposed SASP, from which we can observe that: 7629",
"(1) As a semantic parsing method, our method achieves performance comparable to the State-Of-The-Art method LERGV while maintaining explainability.",
"This is what previous semantic parsers can not do, and shows our superiority in TFV.",
"(2) Our proposed method works better than Table-BERT and SAT, demonstrating the power of the content snapshot proposed by Tabert in catching key information from a table.",
"(3) SASP has a lead of 1.2% on the the complex set compared with ProgVGAT, but falls behind on the simple set.",
"There are two reasons for that.",
"On one hand, mainstream methods like ProgVGAT can fix some errors caused by the symbolic interpreter (e.g., executing eq(\"USA\", \"America\") to False ).",
"While SASP uses the execution result of the generated program as prediction.",
"Due to the limited expression ability, our interpreter can not cover every statement with a correct program, leading to a lower probability of predicting a correct answer.",
"On the other hand, ProgVGAT can not deal with structural mistakes (e.g., replacing max with min operation) in programs generated by LPA.",
"As a result, ProgVGAT performs worse in complicated semantic environment where LPA has a higher probability of making a structural mistake.",
"Performance on WTQ Table 2 shows the experimental results on WTQ.",
"Our model just has comparable performance with our baseline, MAPO w/ BERT.",
"We give two possible reasons below: Model Dev Test Pasupat and Liang (2015) 37.0 37.1 Dasigi et al. (2019) 43.1 44.3 Agarwal et al. (2019) 43.2 44.1 Wang et al. (2019) 43.7 44.5 MAPO w/ BERT (Yin et al., 2020) 49.6 49.4 SASP 49.3 49.5 Table 2: Performance (execution accuracy) of different methods on WikiTableQuestion.",
"(1) As can be seen in figure 1, the program has more than three operations, which is quite common in TFV, while they use at most three operations to answer a question in previous works (Pasupat and Liang, 2015; Zhong et al., 2017; Liang et al., 2018).",
"Because the compositionality of WTQ is lower than TABFACT, our proposed operation-oriented tree can only provide very limited help.",
"(2) The spurious program problem is further amplified by the binary label in TABFACT.",
"Any program that outputs a Boolean value has a 50% chance of hitting the correct label; hence there are many label consistent programs.",
"While in WTQ, it is not that easy to hit the correct label.",
"Suppose that the vocabulary list has N tokens, but only one token corresponds to the answer.",
"Every executable program in WTQ will output an answer with the string type, so it only has a 1 N probability of hitting the correct label.",
"WTQ has much fewer spurious programs, so lexical features are enough to rule out 7630 Model PT.Match Ex.Acc MAPO w/ BERT 13.4 70.1 LPA 15.6 56.7 SASP 47.9 75.9 Table 3: Performance (matching accuracy and execution accuracy) of different methods on Logic2Text dataset.",
"Performance on Logic2Text Results of different semantic parsing methods are shown in table",
"3. Our model outperforms other methods with a considerable margin on PT.Match metric.",
"This means SASP can generate more correct programs, which makes it behave well in table fact verification.",
"In program generation for TFV, the search space is too large to be explored completely.",
"To tackle this problem, MAPO w/ refined reward performs systematic search space exploration guided by lexical features in the advanced reward function.",
"It only obtains PT.Match accuracy of 13.4% on Logic2Text.",
"The high Ex.Acc score shows that it just predicts spurious programs executed to \"True\".",
"For LPA, it first collects all programs under the search space restricted by a lexical feature based algorithm, then ranks these programs with a neural network (BERT).",
"And LPA also has poor behavior in program generation here.",
"The big gaps (more than 40% in MAPO and LPA) between PT.Match and Ex.Acc accuracy suggest that with only lexical features, there are still many spurious programs being explored.",
"Use the spurious program in figure 2 as an example, it conforms to lexical features by making full use of sentence tokens, and would be a promising candidate in MAPO and LPA.",
"However, such kind of programs will differ from the correct ones in the order of operators or the position of operands, so they can be distinguished from correct programs by structure features.",
"Our method captures both lexical and structure features, therefore evading such spurious programs and biasing generated programs from label consistent towards semantic consistent.",
"The smaller gap (28% in SASP) between PT.Match and Ex.Acc accuracy confirms our analysis above.",
"We further conduct an ablation study to evaluate the necessity of leveraging structure information through rules (1) and (2).",
"For rule (1), which defines the operator selection mechanism, we just drop types and values related to function in our proposed tree to see how it influence.",
"For rule (2), which defines the operand selection mechanism, we drop types and values related to cell or column .",
"If we drop all types from the tree, the algorithm degenerates into MAPO w/ BERT refined-reward violation.",
"The experimental results are given in Table",
"4. We can see that function is the most important type, then is column type, followed by cell type.",
"And all of the types make significant contributions to the final performance.",
"The results above show that both mechanisms associated with the rule (1) and the rule (2) are crucial for our model because both operator and operand selections are crucial for program generation.",
"Effect of Objective Function To evaluate the im-pact of the refined objective function in Equation 5, we conduct another ablation study, and the results are shown in table",
"5. We change the reward function in Equation 3 with a binary reward function for comparison.",
"The result shows that refined feedback taking lexical features into account plays an essential role in our model.",
"Without the refined reward, some operations may be omitted because the partial programs are already executed to the right label, resulting in a much worse performance.",
"We also remove the violation punishment to investigate the necessity of a conservative update policy.",
"The result shows that the robust update policy 7631 Statement LPA SASP In the 1993 -94 belarusian premier league , the venue with the highest capacity wasminskat41040.",
"makes around 1% improvement.",
"The reward function we designed just prioritizes programs that use tokens related to logical operators or tables as much as possible, leading to label inconsistent programs that meet the condition.",
"Giving such programs a punishment complements the refined reward.",
"In figure 4, we provide two cases to demonstrate the effectiveness of our method for program generation.",
"In both cases, our method generates correct programs that are semantic consistent with the statement, while LPA screws them all up.",
"In the first case, max is the descendant compared with minsk in the dependency tree, so our method uses max before minsk , while LPA gets the wrong order.",
"This confirms that our method generates programs in the correct order with the operator selection mechanism.",
"In the second case, devin has a more close relation to not in the dependency tree, so our method chooses devin as an operand of filter_not_eq , while LPA selects an incorrect operand milwaukee for filter_not_eq .",
"This confirms that our method generates single operations correctly with the operand selection mechanism.",
"To check the generalizability and limitations of our proposed method, we randomly sampled 200 examples from the validation set of TABFACT, and manually inspected the top one program of the beam search using SASP.",
"We found that SASP generated correct programs for 99 examples, spurious programs for 57 examples and incorrect programs for 44 examples.",
"The proportion of correct programs (49.5%) and spurious programs (28.5%) is similar to that in table 3 (47.9% and 28%).",
"This shows the generalizability of SASP and the rationality of using Logic2Text for PT.Match evaluation.",
"What's more, we classified the causes of 101 spurious or incorrect programs into four main categories.",
"Unsupported operations cause 30 error examples.",
"For instance, in \"the new york rangers beat the at-lanta flames by 2 points\" , the minus operation in a single table cell \"4 2\" is not supported by our interpreter.",
"The second category of errors occur when the functions or entities can not be detected and added to dependency tree nodes correctly.",
"Use \"the maroon played 3 teams located in the united states\" as an example, \"the united states\" can not be linked to \"America\" in the given table; hence it will not be added to the operation tree.",
"31 error examples are caused by this reason.",
"The first two categories can not be handled by our proposed method, and we leave the development of powerful interpreter and robust entity linker for future work.",
"The third category is structure error, causing 13 error examples.",
"In other words, the order of operators or the position of operands in the predicted program differs from the correct one.",
"The wrong programs in figure 2 are all this kind of error cases.",
"Underutilized information causes 23 error examples.",
"For the statement in figure 1, \"fil-ter_eq(all_rows, season, 1981); max(v0, podiums), eq(v1, 8)\" causes this kind of error.",
"In this paper, we have proposed a novel approach to do explainable verification by structure-aware semantic parsing.",
"Firstly, we define a unified operation-oriented tree by entity linking, dependency parsing and tree pruning.",
"Then, we demonstrate how to integrate our proposed tree into semantic parsing with the operator-related and the operand-related principles.",
"At last, we introduce the refined objective function which could reduce the influence of spurious programs.",
"Experimental results confirm that our proposed method can bias program generation from label consistent towards semantic consistent and achieve acceptable performance on the benchmark dataset TABFACT.",
"Future work will collect evidences that are more precise and get better verification performance by replacing LPA with SASP in the first stage of mainstream methods.",
"We appreciate the discussion with Weilin Luo, Weinan He and Yeliang Xiu.",
"We acknowledge support from the Natural Science Foundation of China under Grant No. 62076261."
] | [
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"abstain",
"method",
"result",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"Sequence-to-sequence models excel at handling natural language variation, but have been shown to struggle with out-of-distribution compositional generalization.",
"This has motivated new specialized architectures with stronger compositional biases, but most of these approaches have only been evaluated on synthetically-generated datasets, which are not representative of natural language variation.",
"In this work we ask: can we develop a semantic parsing approach that handles both natural language variation and compositional generalization?",
"To better assess this capability, we propose new train and test splits of non-synthetic datasets.",
"We demonstrate that strong existing approaches do not perform well across a broad set of evaluations.",
"We also propose NQG-T5, a hybrid model that combines a high-precision grammar-based approach with a pre-trained sequence-to-sequence model.",
"It outperforms existing approaches across several compositional generalization challenges on nonsynthetic data, while also being competitive with the state-of-the-art on standard evaluations.",
"While still far from solving this problem, our study highlights the importance of diverse evaluations and the open challenge of handling both compositional generalization and natural language variation in semantic parsing.",
"Sequence-to-sequence (seq2seq) models have been widely used in semantic parsing (Dong and Lapata, 2016; Jia and Liang, 2016) and excel at handling the natural language variation 1 of human-generated queries.",
"However, evaluations on synthetic 2 tasks such as SCAN (Lake and Baroni, 1 We use the term natural language variation in a broad sense to refer to the many different ways humans can express the same meaning in natural language, including differences in word choice and syntactic constructions.",
"2 We make a coarse distinction between synthetic datasets, where natural language utterances are generated by a program, Specializedarchitectureswithstrong compositional bias Under-explored General-purposepre-trainedmodels(e.g.seq2seq) PREDOMINANT APPROACHESSYNTHETIC NON-SYNTHETICNATURAL LANGUAGE VARIATIONCOMPOSITIONALGENERALIZATION Figure 1: We study whether a semantic parsing approach can handle both out-of-distribution compositional generalization and natural language variation.",
"2018) have shown that seq2seq models often generalize poorly to out-of-distribution compositional utterances, such as jump twice when only jump, walk, and walk twice are seen during training.",
"This ability to generalize to novel combinations of the elements observed during training is referred to as compositional generalization .",
"This has motivated many specialized architectures that improve peformance on SCAN (Li et al., 2019; Russin et al., 2019; Gordon et al., 2019; Lake, 2019; Liu et al., 2020; Nye et al., 2020; Chen et al., 2020).",
"However, most approaches have only been evaluated on synthetic datasets.",
"While synthetic datasets enable precise, interpretable evaluation of specific phenomena, they are less representative of the natural language variation that a real-world semantic parsing system must handle.",
"and non-synthetic datasets, where natural language utterances are collected from humans.",
"Surprisingly, this question is understudied.",
"As visualized in Figure 1, most prior work evaluates either out-of-distribution compositional generalization on synthetic datasets, or in-distribution performance on non-synthetic datasets.",
"Notably, designing approaches that can handle both compositional generalization and the natural language variation of non-synthetic datasets is difficult.",
"For example, large pre-trained seq2seq models that perform well on in-distribution evaluations do not address most of the compositional generalization challenges proposed in SCAN (Furrer et al., 2020).",
"Our research question has two important motivations.",
"First, humans have been shown to be adept compositional learners (Lake et al., 2019).",
"Several authors have argued that a greater focus on compositional generalization is an important path to more human-like generalization and NLU (Lake et al., 2017; Battaglia et al., 2018).",
"Second, it is practically important to assess performance on non-synthetic data and out-of-distribution examples, as random train and test splits can overestimate real-world performance and miss important error cases (Ribeiro et al., 2020).",
"Therefore, we are interested in approaches that do well not only on controlled synthetic challenges of compositionality or in-distribution natural utterances, but across all of the diverse set of evaluations shown in Figure",
"2. Our contributions are two-fold.",
"First, on the evaluation front, we show that performance on SCAN is not well-correlated with performance on non-synthetic tasks.",
"In addition, strong existing approaches do not perform well across all evaluations in Figure",
"2. We also propose new Target Maximum Compound Divergence (TMCD) train and test splits, extending the methodology of Keysers et al. (2020) to create challenging evaluations of compositional generalization for non-synthetic datasets.",
"We show that TMCD splits complement existing evaluations by focusing on different aspects of the problem.",
"Second, on the modeling front, we propose NQG, a simple and general grammar-based approach that solves SCAN and also scales to natural utterances, obtaining high precision for non-synthetic data.",
"In addition, we introduce and evaluate NQG-T5, a hybrid model that combines NQG with T5 (Raf-fel et al., 2020), leading to improvements across several compositional generalization evaluations while also being competitive on the standard splits of GEOQUERY (Zelle and Mooney, 1996) and SPITRAIN AND TEST SPLITSSYNTHETIC NON-SYNTHETICNATURAL LANGUAGE VARIATIONCOMPOSITIONALGENERALIZATIONMCD (Keysersetal.,2020) Add Primitive (LakeandBaroni,2018) Length TMCD Template (Finegan-Dollaketal.,2018) Length Random Figure 2: We evaluate semantic parsing approaches across a diverse set of evaluations focused on natural language variation, compositional generalization, or both.",
"DER (Yu et al., 2018).",
"Our results indicate that NQG-T5 is a strong baseline for our challenge of developing approaches that perform well across a diverse set of evaluations focusing on either natural language variation, compositional generalization, or both.",
"Comparing five approaches across eight evaluations on SCAN and GEOQUERY , its average rank is 1, with the rank of the best previous approach (T5) being 2.9; performance is also competitive across several evaluations on SPIDER .",
"While still far from affirmatively answering our research question, our study highlights the importance of a diverse set of evaluations and the open challenge of handling both compositional generalization and natural language variation.",
"3 2 Background and Related Work In this section, we survey recent work related to compositional generalization in semantic parsing.",
"Evaluations To evaluate a model's ability to generalize to novel compositions, previous work has proposed several methods for generating train and test splits, as well as several synthetic datasets.",
"A widely used synthetic dataset for assessing compositional generalization is SCAN (Lake and Baroni, 2018), which consists of natural language commands (e.g., jump twice) mapping to action sequences (e.g., I_JUMP I_JUMP ).",
"One split for SCAN is the length split, where examples are separated by length such that the test set contains longer 3 Our code and data splits are available at https://github.com/google-research/language/tree/master/language/nqg .",
"examples than the training set.",
"Another is the primitive split, where a given primitive (e.g., jump) is seen by itself during training, but the test set consists of the primitive recombined with other elements observed during training (e.g., jump twice).",
"Other synthetic datasets have been developed to evaluate aspects of compositional generalization beyond SCAN, including NACS (Bast-ings et al., 2018), CFQ (Keysers et al., 2020), and COGS (Kim and Linzen, 2020).",
"In addition to introducing the CFQ dataset, Keysers et al. (2020) propose Maximum Compound Divergence (MCD) splits based on the notion of a compound distribution.",
"Their algorithm generates train and test splits that maximize the divergence of their respective compound distributions while bounding the divergence of their respective atom distributions.",
"We extend their methodology to create new TMCD splits for non-synthetic datasets.",
"Another method for generating train and test splits is the template 4 split (Finegan-Dollak et al., 2018).",
"Unlike the aforementioned evaluations, template splits have been applied to non-synthetic datasets, primarily for text-to-SQL.",
"In template splits, any parse template (defined as the target SQL query with entities anonymized) appearing in the training set cannot appear in the test set.",
"We analyze and discuss template splits in 6.1.",
"Finally, Herzig and Berant (2019) studies biases resulting from methods for efficiently collecting human-labeled data, providing further motivation for out-of-distribution evaluations.",
"Approaches Many specialized architectures have been developed to address the compositional generalization challenges of SCAN.",
"Several of them have recently reached 100% accuracy across multiple SCAN challenges (Liu et al., 2020; Nye et al., 2020; Chen et al., 2020).",
"Similarly to the NQG-T5 approach we propose in 4, all of these models incorporate discrete structure.",
"However, unlike NQG-T5, they have only been evaluated on synthetic parsing tasks.",
"Recently, Herzig and Berant (2020) also begins to address our research question, proposing an approach that not only solves several SCAN challenges but also achieves strong performance on the standard and template splits of the non-synthetic dataset GEOQUERY .",
"However, their approach requires some manual task-specific engineering.",
"We compare NQG-T5 with this approach and other 4 Also referred to as a query split.",
"SCAN-inspired architectures.",
"Oren et al. (2020) and Zheng and Lapata (2020) also explored compositional generalization on non-synthetic datasets by focusing on the template splits proposed by Finegan-Dollak et al. (2018), demonstrating improvements over standard seq2seq models.",
"The effect of large-scale pre-training on compositional generalization ability has also been studied.",
"Furrer et al. (2020) finds that pre-training alone cannot solve several compositional generalization challenges, despite its effectiveness across NLP tasks such as question answering (Raffel et al., 2020).",
"While our work focuses on modeling approaches, compositional data augmentation techniques have also been proposed (Jia and Liang, 2016; Andreas, 2020).",
"NQG-T5 outperforms previously reported results for these methods, but more in-depth analysis is needed.",
"The existing evaluations targeting compositional generalization for non-synthetic tasks are template splits and length splits.",
"Here we propose an additional method which expands the set of available evaluations by generating data splits that maximize compound divergence over non-synthetic datasets, termed Target Maximum Compound Divergence (TMCD) splits.",
"As we show in 6, it results in a generalization problem with different characteristics that can be much more challenging than template splits, and contributes to the comprehensiveness of evaluation.",
"In standard MCD splits (Keysers et al., 2020), the notion of compounds requires that both source and target are generated by a rule-based procedure, and therefore cannot be applied to existing non-synthetic datasets where natural language utterances are collected from humans.",
"For TMCD, we propose a new notion of compounds based only on the target representations.",
"We leverage their known syntactic structure to define atoms and compounds.",
"For instance, example atoms in FunQL are longest and river , and an example compound is longest(river) .",
"Detailed definitions of atoms and compounds for each dataset we study can be found in Appendix B.3.",
"Given this definition of compounds, our definition of compound divergence, DC , is the same as that of Keysers et al. (2020).",
"Specifically, DC = 1 C 0 .",
"where FTRAIN and FTEST are the weighted frequency distributions of compounds in the training and test sets, respectively.",
"The Chernoff coefficient C ( P (cid:107) Q ) = (cid:80) k p k q 1 k (Chung et al., 1989) is used with = 0 .",
"1 .",
"For TMCD, we constrain atom divergence by requiring that every atom appear at least once in the training set.",
"An atom constraint is desirable so that the model knows the possible target atoms to generate.",
"A greedy algorithm similar to the one of Keysers et al. (2020) is used to generate splits that approximately maximize compound divergence.",
"First, we randomly split the dataset.",
"Then, we swap examples until the atom constraint is satisfied.",
"Finally, we sequentially identify example pairs that can be swapped between the train and test sets to increase compound divergence without violating the atom constraint, breaking when a swap can no longer be identified.",
"We propose NQG-T5, a hybrid semantic parser that combines a grammar-based approach with a seq2seq model.",
"The two components are motivated by prior work focusing on compositional generalization and natural language variation, respectively, and we show in 5 that their combination sets a strong baseline for our challenge.",
"The grammar-based component, NQG , consists of a discriminative N eural parsing model and a flexible Q uasi-synchronous G rammar induction algorithm which can operate over arbitrary pairs of strings.",
"Like other grammar-based approaches, NQG can fail to produce an output for certain inputs.",
"As visualized in Figure 3, in cases where NQG fails to produce an output, we return the output from T5 (Raffel et al., 2020), a pre-trained seq2seq model.",
"This simple combination can work well because NQG often has higher precision than T5 for cases where it produces an output, especially in out-of-distribution settings.",
"NQG is inspired by more traditional approaches to semantic parsing based on grammar formalisms such as CCG (Zettlemoyer and Collins, 2005, 2007; Kwiatkowski et al., 2010, 2013) and SCFG (Wong and Mooney, 2006, 2007; Andreas et al., 2013; Li",
"et al., 2015).",
"NQG combines a QCFG induction algorithm with a neural parsing model.",
"Training is a two-stage process.",
"First, we employ a compression-based grammar induction technique to construct our grammar.",
"Second, based on the induced grammar, we build the NQG semantic parsing model via a discriminative latent variable model, using a powerful neural encoder to score grammar rule applications anchored in the source string x .",
"Grammar Formalism Synchronous context-free grammars (SCFGs) synchronously generate strings in both a source and target language.",
"Compared to related work based on SCFGs for machine translation (Chiang, 2007) and semantic parsing, NQG uses a slightly more general grammar formalism that allows repetition of a non-terminal with the same index on the target side.",
"Therefore, we adopt the terminology of quasi-synchronous context-free grammars (Smith and Eisner, 2006), or QCFGs, to refer to our induced grammar G .",
"5 Our grammar G contains a single non-terminal symbol, NT .",
"We restrict source rules to ones containing at most 2 non-terminal symbols, and do not allow unary productions as source rules.",
"This enables efficient parsing using an algorithm similar to CKY (Cocke, 1969; Kasami, 1965; Younger, 1967) that does not require binarization of the grammar.",
"Induction Procedure To induce G from the training data, we propose a QCFG induction algorithm that does not rely on task-specific heuristics or pre-computed word alignments.",
"Notably, our approach makes no explicit assumptions about the source or target languages, beyond those implicit in the QCFG formalism.",
"Table 1 shows examples of induced rules.",
"5 See Appendix A.1 for additional background on QCFGs.",
"seek the smallest, simplest grammar that explains the data well.",
"We follow the Minimum Description Length (MDL) principle (Rissanen, 1978; Grun-wald, 2004) as a way to formalize this intuition.",
"Specifically, we use standard two-part codes to compute description length, where we are interested in an encoding of targets y given the inputs x , across a dataset D consisting of these pairs.",
"A two-part code encodes the model and the targets encoded using the model; the two parts measure the simplicity of the model and the extent to which it can explain the data, respectively.",
"For grammar induction, our model is simply our grammar, G .",
"The codelength can therefore be expressed as H ( G ) (cid:80) x , y D log 2 PG ( y | x ) where H ( G ) corresponds to the codelength of some encoding of G .",
"We approximate H ( G ) by counting terminal ( CT ) and non-terminal ( CN ) symbols in the grammar's rules, R .",
"For PG , we assume a uniform distribution over the set of possible derivations.",
"6 As the only mutable aspect of the grammar during induction is the set of rules R , we abuse notation slightly and write our approximate codelength objective as a function of R only: L ( R ) = l NCN ( R ) + l TCT ( R ) (cid:88) ( x , y ) D log 2 | ZG x , y | | ZG x , | , where ZG x , y is the set of all derivations in G that yield the pair of strings x and y , while ZG x , ZG x , y 6 This can be viewed as a conservative choice, as in practice we expect our neural parser to learn a better model for P ( y | x ) than a naive uniform distribution over derivations.",
"is the set of derivations that yield source string x and any target string.",
"The constants l N and l T can be interpreted as the average bitlength for encoding non-terminal and terminal symbols, respectively.",
"In practice, these are treated as hyperparameters.",
"We use a greedy search algorithm to find a grammar that approximately minimizes this codelength objective.",
"We initialize G by creating a rule NT (cid:104) x , y (cid:105) for every training example ( x , y ) .",
"By construction, the initial grammar perfectly fits the training data, but is also very large.",
"Our algorithm iteratively identifies a rule that can be added to G that decreases our codelength objective by enabling 1 rule(s) to be removed, under the invariant constraint that G can still derive all training examples.",
"The search completes when no rule that decreases the objective can be identified.",
"In practice, we use several approximations to efficiently select a rule at each iteration.",
"Additional details regarding the grammar induction algorithm are described in Appendix A.2.",
"Based on the induced grammar G , we train a discriminative latent variable parsing model, using a method similar to that of Blunsom et al. (2008).",
"We define p ( y | x ) as: p ( y | x ) = (cid:88) z ZG x , y p ( z | x ) , where ZG x , y is the set of derivations of x and y in G .",
"We define p ( z | x ) as: p ( z | x ) = exp( s ( z , x )) (cid:80) z (cid:48) ZG x , exp( s ( z (cid:48) , x )) , where s ( z , x ) is a derivation score and the denominator is a global partition function.",
"Similarly to the Neural CRF model of Durrett and Klein (2015), the scores decompose over anchored rules.",
"Unlike Durrett and Klein (2015), we compute these scores based on contextualized representations from a BERT (Devlin et al., 2019) encoder.",
"Additional details regarding the model architecture can be found in Appendix A.3.",
"At training time, we use a Maximum Marginal Likelihood (MML) objective.",
"We preprocess each example to produce parse forest representations for both ZG x , y and ZG x , , which correspond to the numerator and denominator of our MML objective, respectively.",
"By using dynamic programming to efficiently sum derivation scores inside the training loop, we can efficiently compute the exact MML objective without requiring approximations such as beam search.",
"At inference time, we select the highest scoring derivation using an algorithm similar to CKY that considers anchored rule scores generated by the neural parsing model.",
"We output the corresponding target if it can be derived by a CFG defining valid target constructions for the given task.",
"We note that NQG is closely related to work that uses synchronous grammars for hierarchical statistical machine translation, such as Hiero (Chiang, 2007).",
"Unlike Hiero, NQG does not rely on an additional word alignment component.",
"Moreover, Hiero simply uses relative frequency to learn rule weights.",
"Additionally, in contract with traditional SCFG models for machine translation applied to semantic parsing (Wong and Mooney, 2006; Andreas et al., 2013), our neural model conditions on global context from the source x via contextual word embeddings, and our grammar's rules do not need to carry source context to aid disambiguation.",
"We evaluate existing approaches and the newly proposed NQG-T5 across a diverse set of evaluations to assess compositional generalization and handling of natural language variation.",
"We aim to understand how the approaches compare to each other for each type of evaluation and in aggregate, and how the performance of a single approach may vary across different evaluation types.",
"For our main experiments, we focus on evaluation across multiple splits of two datasets with compositional queries: SCAN (Lake and Baroni, 2018) and GEOQUERY (Zelle and Mooney, 1996; Tang and Mooney, 2001).",
"The two datasets have been widely used to study compositional generalization and robustness to natural language variation, respectively.",
"Both datasets are closed-domain and have outputs with straightforward syntax, enabling us to make clear comparisons between synthetic vs. non-synthetic setups.",
"Approaches For NQG-T5, to assess the effect of model size, we compare two sizes of the underlying T5 model: Base (220 million parameters) and 3B (3 billion parameters).",
"To evaluate NQG individually, we treat any example where no output is provided as incorrect when computing accuracy.",
"We select strong approaches from prior work that have performed well in at least one setting.",
"We group them into two families of approaches described in Figure",
"1. First, for general-purpose models that have shown strong ability to handle natural language variation, we consider T5, a pre-trained seq2seq model, in both Base and 3B sizes.",
"Second, for specialized methods with strong compositional biases, we consider approaches that have been developed for SCAN.",
"Some previous approaches for SCAN require task-specific information such as the mapping of atoms (Lake, 2019; Gordon et al., 2019) or a grammar mimicking the training data (Nye et al., 2020), and as such are dif-ficult to adapt to non-synthetic datasets.",
"Among the approaches that do not need task-specific resources, we evaluate two models with publicly available code: Syntactic Attention (Russin et al., 2019) and CGPS (Li et al., 2019).",
"We report results on SCAN from the original papers as well as new results on our proposed data splits.",
"Datasets For the SCAN dataset, we evaluate using the length split and two primitive splits, jump and turn left , included in the original dataset (Lake and Baroni, 2018).",
"We also evaluate using the SCAN MCD splits from Keysers et al. (2020).",
"GEOQUERY (Zelle and Mooney, 1996) contains natural language questions about US geography.",
"Similarly to prior work (Dong and Lapata, 2016, 2018), we replace entity mentions with placeholders.",
"We use a variant of Functional Query Language (FunQL) as the target representation (Kate et al., 2005).",
"In addition to the standard split of Zettlemoyer and Collins (2005), we generate multiple splits focusing on compositional generalization: a new split based on query length and a TMCD split, each consisting of 440 train and 440 test examples.",
"We also generate a new template split consisting of 441 train and 439 test examples.",
"7 7 We generate a new template split rather than use the GEOQUERY template split of Finegan-Dollak et al. (2018) to avoid overlapping templates between the train and test sets when mapping from SQL to FunQL.",
"Results The results are presented in Table",
"2. The results for T5 on SCAN are from Furrer et al. (2020).",
"Additionally, we include results for GECA 9 (Andreas, 2020), a data augmentation method, as well as LANE (Liu et al., 2020) and NSSM (Chen et al., 2020) 10 .",
"We also compare with SpanBasedSP 11 (Herzig and Berant, 2020).",
"From the results, we first note that the relative performance of approaches on compositional splits of SCAN is not very predictive of their relative performance on compositional splits of GEOQUERY .",
"For example, GGPS is better than T5 on the length split of SCAN but is significantly worse than T5 on the length split of GEOQUERY .",
"Similarly, the ranking of most methods is different 8 For GEOQUERY we report the mean of 3 runs for NQG, with standard deviations reported in Appendix B.5 9",
"GECA reports GEOQUERY results on a setting with Pro-log logical forms and without anonymization of entities.",
"Note that the performance of GECA depends on both the quality of the generated data and the underlying parser (Jia and Liang, 2016), which can complicate the analysis.",
"10 These SCAN-motivated approaches both include aspects of discrete search and curriculum learning, and have not been demonstrated to scale effectively to non-synthetic parsing tasks.",
"Moreover, the code is either not yet released (NSSM) or specialized to SCAN (LANE).",
"11 SpanBasedSP preprocesses SCAN to add program-level supervision.",
"For GEOQUERY , they similarly use FunQL, but uses slightly different data preprocessing and report denotation accuracy.",
"We computed NQG-T5's denotation accuracy to be 2.1 points higher than exact-match accuracy on the standard split of GeoQuery.",
"on the (T)MCD splits of the two datasets.",
"Second, the proposed NQG-T5 approach combines the strengths of T5 and NQG to achieve superior results across all evaluations.",
"It improves over T5 on compositional generalization for both synthetic and non-synthetic data while maintaining T5's performance on handling in-distribution natural language variation, leading to an average rank of 1.0 compared to 2.9 for T5.",
"(To the best of our knowledge, both T5 and NQG-T5 achieve new state-of-the-art accuracy on the standard split of GEOQUERY .)",
"Finally, we note that there is substantial room for improvement on handling both compositional generalization and natural language variation.",
"We now compare the approaches on SPIDER (Yu et al., 2018), a non-synthetic text-to-SQL dataset that includes the further challenges of schema linking and modeling complex SQL syntax.",
"SPIDER contains 10,181 questions and 5,693 unique SQL queries across 138 domains.",
"The primary evaluation is in the cross-database setting, where models are evaluated on examples for databases not seen during training.",
"The primary challenge in this setting is generalization to new database schemas, which is not our focus.",
"Therefore, we use a setting where the databases are shared between train and test examples.",
"12 We gen-12 This is similar to the example split discussed in Yu et al. (2018).",
"However, we only consider examples in the original training set for databases with more than 50 examples to ensure sufficient coverage over table and column names in SPIDER-SSP System Rand.",
"erate 3 new splits consisting of 3,282 train and 1,094 test examples each: a random split, a split based on source length, and a TMCD split.",
"We also generate a template split by anonymizing integers and quoted strings, consisting of 3,280 train and 1,096 test examples.",
"We adopt the terminology of Suhr et al. (2020) and use SPIDER-SSP to refer to these same-database splits, and use SPIDERXSP to refer to the standard cross-database setting.",
"We prepend the name of the target database to the source sequence.",
"For T5, we also serialize the database schema as a string and append it to the source sequence similarly to Suhr et al. (2020).",
"We report exact set match without values, the standard Spider evaluation metric (Yu et al., 2018).",
"Results Table 3 shows the results of T5 and NQG-T5 on different splits of SPIDER-SSP.",
"We also show T5-Base performance without the schema string appended.",
"The text-to-SQL mapping is not well modeled by NQG.",
"Nevertheless, the performance of NQG-T5 is competitive with T5, indicating a strength of the hybrid approach.",
"Table 4 shows the results on SPIDER-XSP, which focuses on handling unseen schema rather than compositional generalization.",
"To our surprise, T5-3B proves to be competitive with the state-of-the-art (Choi et al., 2020) for approaches without access to database contents beyond the table and column names.",
"As NQG-T5 simply uses T5's output when the induced grammar lacks coverage, it too is competitive.",
"the accuracy of T5-Base across various splits.",
"For GEOQUERY , the TMCD split is significantly more challenging than the template split.",
"However, for SPIDER , the template and TMCD splits are similarly challenging.",
"Notably, template splits do not have an explicit atom constraint.",
"We find that for the SPIDER template split, T5-Base accuracy is 53.9% for the 30.3% of test set examples that contain an atom not seen during training, and 61.6% on the remainder, indicating that generalization to unseen atoms can contribute to the difficulty of template splits.",
"13 Length splits are also very challenging, but they lead to a more predictable error pattern for seq2seq models, as discussed next.",
"We analyze NQG-T5's components, starting with T5.",
"On length splits, there is a consistent pattern to the errors.",
"T5's outputs on the test set are not significantly longer than the maximum length observed during training, leading to poor performance.",
"This phenomenon was explored by Newman et al. (2020).",
"Diagnosing the large generalization gap on the (T)MCD splits is more challenging, but we noticed several error patterns.",
"For T5-Base on the GEOQUERYTMCD split, in 52 of the 201 incorrect predictions (26%), the first incorrectly predicted symbol occurs when the gold symbol has 0 probability under a trigram language model fit to the training data.",
"This suggests that the decoder's implicit target language model might have over-fitted to the distribution of target sequences in the training data, hampering its ability to generate novel compositions.",
"Non-exclusively with these errors, 53% of the incorrect predictions occur when the gold target contains an atom that is seen in only 1 13 Future work could explore different choices for constructing template and TMCD splits, such as alternative compound definitions and atom constraints.",
"example during training, suggesting that T5 struggles with single-shot learning of new atoms.",
"In other cases, the errors appear to reflect over-fitting to spurious correlations between inputs and outputs.",
"Some error examples are shown in Appendix B.6.",
"To analyze NQG, we compute its coverage (frac-tion of examples where NQG produces an output) and precision (fraction of examples with a correct output among ones where an output is produced) on different data splits.",
"The results in Table 5 show that NQG has high precision but struggles at coverage on some data splits.",
"There is a significant difference in the effectiveness of the grammar induction procedure among the three datasets.",
"Induction is particularly unsuccessful for SPIDER , as SQL has complicated syntax and often requires complex coordination across discontinuous clauses.",
"Most of the induced rules are limited to simply replacing table and column names or value literals with non-terminals, such as the rule shown in Table 1, rather than representing nested sub-structures.",
"The degree of span-to-span correspondence between natural language and SQL is seemingly lower than for other formalisms such as FunQL, which limits the effectiveness of grammar induction.",
"Intermediate representations for SQL such as SemQL (Guo et al., 2019) may help increase the correspondence between source and target syntax.",
"For both GEOQUERY and SPIDER , NQG is limited by the expressiveness of QCFGs and the simple greedy search procedure used for grammar induction, which can lead to sub-optimal approximations of the induction objective.",
"Notably, QCFGs cannot directly represent relations between source strings, such as semantic similarity, or relations between target strings, such as logical equivalence (e.g. intersect(a,b) intersect(b,a) ), that could enable greater generalization.",
"However, such extensions pose additional scalability challenges, requiring new research in more flexible approaches for both learning and inference.",
"Our experiments and analysis demonstrate that NQG and T5 offer different strengths.",
"NQG generally has higher precision for out-of-distribution examples, but is limited by the syntactic constraints of the grammar formalism and by requiring exact lexical overlap with induced rules in order to provide a derivation at inference time.",
"T5's coverage is not limited by such constraints, but precision can be significantly lower for out-of-distribution examples.",
"With NQG-T5, we offer a simple combination of these strengths.",
"While accuracy is still limited for out-of-distribution examples where NQG lacks coverage, we believe it sets a strong and simple baseline for future work.",
"More broadly, our work highlights that evaluating on a diverse set of benchmarks is important, and that handling both out-of-distribution compositional generalization and natural language variation remains an open challenge for semantic parsing.",
"We thank Kenton Lee, William Cohen, Jeremy Cole, and Luheng He for helpful discussions.",
"Thanks also to Emily Pitler, Jonathan Herzig, and the anonymous reviewers for their comments and suggestions.",
"This paper proposed to expand the set of benchmarks used to evaluate compositional generalization in semantic parsing.",
"While we hope that ensuring semantic parsing approaches perform well across a diverse set of evaluations, including ones that test out-of-distribution compositional generalization, would lead to systems that generalize better to languages not well represented in small training sets, we have only evaluated our methods on semantic parsing datasets in English.",
"Our NQG-T5 method uses a pre-trained T5 model, which is computationally expensive in fine-tuning and inference, especially for larger models (see Appendix B.1 for details on running time and compute architecture).",
"Our method does not require pre-training of large models, as it uses pre-existing model releases.",
"NQG-T5-base outperforms or is comparable in accuracy to T5-3B on the non-SQL datasets, leading to relative savings of computational resources."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain"
] |
[
"Document-level event extraction aims to recognize event information from a whole piece of article.",
"Existing methods are not effective due to two challenges of this task:",
"a) the target event arguments are scattered across sentences;",
"b) the correlation among events in a document is non-trivial to model.",
"In this paper, we propose Heterogeneous G raph-based I nteraction Model with a T racker (GIT ) to solve the aforementioned two challenges.",
"For the first challenge, GIT constructs a heterogeneous graph interaction network to capture global interactions among different sentences and entity mentions.",
"For the second, GIT introduces a Tracker module to track the extracted events and hence capture the interdependency among the events.",
"Experiments on a large-scale dataset (Zheng et al., 2019) show GIT outperforms the existing best methods by 2.8 F1.",
"Further analysis reveals GIT is effective in extracting multiple correlated events and event arguments that scatter across the document.",
"Our code is available at https: //github.com/RunxinXu/GIT .",
"Event Extraction (EE) is one of the key and challenging tasks in Information Extraction (IE), which aims to detect events and extract their arguments from the text.",
"Most previous methods (Chen et al., 2015; Nguyen et al., 2016; Liu et al., 2018; Yang et al., 2019; Du and Cardie, 2020b) focus on sentence-level EE, extracting events from a single sentence.",
"The sentence-level model, however, fails to extract events whose arguments spread in multiple sentences, which is much more common in real-world scenarios.",
"Hence, extracting events at the document-level is critical.",
"It has attracted much attention recently (Yang et al., 2018; Zheng et al., 2019; Du and Cardie, 2020a; Du et al., 2020).",
"* Corresponding author.",
"[1] On Nov 6, 2014 , the company received a letter of share reduction from Mingting Wu , the shareholder of the company.",
"[2] Mingting Wu decreased his holding of 7.2 million shares of the company on the Shenzhen Stock Exchange on Nov 6, 2014 .",
"[3] The 7.2 million shares of the company Mingting Wu reduced this time were transferred to Xiaoting Wu .",
"[4] Xiaoting Wu is the daughter of Mingting Wu , and they were identified as persons acting in concert according to relevant regulations.",
"EventTypeEquityHolderTradedShares StartDate 7.2 million Nov 6, 2014 Xiaoting Wu EO EU Mingting Wu 7.2 million Nov 6, 2014 Figure 1: An example document from a Chinese dataset proposed by Zheng et al. (2019) in the financial domain, and we translate it into English for illustration.",
"Though promising, document-level EE still faces two critical challenges.",
"Firstly , the arguments of an event record may scatter across sentences, which requires a comprehensive understanding of the cross-sentence context.",
"Figure 1 illustrates an example that one Equity Underweight (EU) and one Equity Overweight (EO) event records are extracted from a financial document.",
"It is less challenging to extract the EU event because all the related arguments appear in the same sentence (Sentence 2 ).",
"However, for the arguments of EO record, Nov 6, 2014 appears in Sentence 1 and 2 while Xiaoting Wu in Sentence 3 and 4 .",
"It would be quite challenging to identify such events without considering global interactions among sentences and entity mentions.",
"Secondly , a document may express several correlated events simultaneously, and recognizing the interdependency among them is fundamental to successful extraction.",
"As shown in Figure 1, the two events are interdependent because they correspond to exactly the same transaction and therefore share the same StartDate .",
"Effective modeling on such interdependency among the correlated events remains a key challenge in this task.",
"Yang et al. (2018) extracts events from a central sentence and query the neighboring sentences for missing arguments, which ignores the cross-sentence correspondence between augments.",
"Though Zheng et al. (2019) takes a first step to fuse the sentences and entities information via Transformer, they neglect the interdependency among events.",
"Focusing on single event extraction, Du and Cardie (2020a) and Du et al. (2020) concatenate multiple sentences and only consider a single event, which lacks the ability to model multiple events scattered in a long document.",
"To tackle the aforementioned two challenges, in this paper, we propose a Heterogeneous G raph-based I nteraction Model with a T racker (GIT ) for document-level EE.",
"To deal with scattered arguments across sentences, we focus on the Global Interactions among sentences and entity mentions.",
"Specifically, we construct a heterogeneous graph interaction network with mention nodes and sentence nodes, and model the interactions among them by four types of edges (i.e., sentence-sentence edge, sentence-mention edge, intra-mention-mention edge, and inter-mention-mention edge) in the graph neural network.",
"In this way, GIT jointly models the entities and sentences in the document from a global perspective.",
"To facilitate the multi-event extraction, we target on the Global Interdependency among correlated events.",
"Concretely we propose a Tracker module to continually tracks the extracted event records with a global memory.",
"In this way, the model is encouraged to incorporate the interdependency with other correlated event records while predicting.",
"We summarize our contributions as follows: We construct a heterogeneous graph interaction network for document-level EE.",
"With different heterogeneous edges, the model could capture the global context for the scattered event arguments across different sentences.",
"We introduce a novel Tracker module to track the extracted event records.",
"The Tracker eases the difficulty of extracting correlated events, as interdependency among events would be taken into consideration.",
"Experiments show GIT outperforms the previous state-of-the-art model by 2 .",
"8 F1 on the large-scale public dataset (Zheng et al., 2019) with 32 , 040 documents, especially on cross-sentence events and multiple events scenarios (with 3 . 7 and 4 . 9 absolute increase on F1).",
"We first clarify some important notions.",
"a) entity mention : a text span within document that refers to an entity object;",
"b) event argument : an entity playing a specific event role.",
"Event roles are predefined for each event type;",
"c) event record : an entry of a specific event type containing arguments for different roles in the event.",
"For simplicity, we use record for short in the following sections.",
"Following Zheng et al. (2019), given a document composed of sentences D = { s i } |D| i =1 and a sentence containing a sequence of words s i = { w j } | s i | j =1 , the task aims to handle three sub-tasks : 1) entity extraction : extracting entities E = { e i } |E| i =1 from the document to serve as argument candidates.",
"An entity may have multiple mentions across the document.",
"2) event types detection : detecting specific event types that are expressed by the document.",
"3) event records extraction : find-ing appropriate arguments for the expressed events from entities, which is the most challenging and also the focus of our paper.",
"The task does not require to identify event triggers (Zeng et al., 2018; Liu et al., 2019b), which reduces manual effort of annotation and the application scenarios becomes more extensive.",
"As shows in Figure 2, GIT first extracts candidate entities through sentence-level neural extractor (Sec 3.1).",
"Then we construct a heterogeneous graph to model the interactions among sentences and entity mentions (Sec 3.2), and detect event types expressed by the document (Sec 3.3).",
"Finally we introduce a Tracker module to continuously track all the records with global memory, in which we utilize the global interdependency among records for multi-event extraction (Sec 3.4).",
"corresponding token and position embeddings.",
"We extract entities at the sentence level and formulate it as a sequence tagging task with BIO (Be-gin, Inside, Other) schema.",
"We leverage a conditional random field (CRF) layer to identify entities.",
"For training, we minimize the following loss: L ner = (cid:88) s D log P ( y s | s ) (1) where y s is the golden label sequence of s .",
"An event may span multiple sentences in the document, which means its corresponding entity mentions may also scatter across different sentences.",
"Identifying and modeling these entity mentions in the cross-sentence context is fundamental in document EE.",
"Thus we build a heterogeneous graph G which contains entity mention nodes and sentence nodes in the document D .",
"In the graph G , interactions among multiple entity mentions and sentences can be explicitly modeled.",
"For each entity mention node e , we initialize node embedding h (0) e = Mean ( { g j } j e ) by averaging the representation of the contained words.",
"For each sentence node s , we initialize node embedding h (0) s = Max ( { g j } j s ) + SentPos ( s ) by max-pooling all the representation of words within the sentence plus sentence position embedding.",
"To capture the interactions among sentences and mentions , we introduce four types of edges.",
"Sentence-Sentence Edge (S-S) Sentence nodes are fully connected to each other with S-S edges.",
"In this way, we can easily capture the global properties in the document with sentence-level interactions, e.g., the long range dependency between any two separate sentences in the document would be modeled efficiently with S-S edges.",
"Sentence-Mention Edge (S-M) We model the local context of an entity mention in a specific sentence with S-M edge, specifically the edge connecting the mention node and the sentence node it belongs to.",
"Intra-Mention-Mention Edge (M-M intra ) We connect distinct entity mentions in the same sentences with M-M intra edges.",
"The co-occurrence of mentions in a sentence indicates those mentions are likely to be involved in the same event.",
"We explicitly model this indication by M-M intra edges.",
"Inter-Mention-Mention Edge (M-M inter ) The entity mentions that corresponds to the same entity are fully connected with each other by M-M inter edges.",
"As in document EE, an entity usually corresponds to multiple mentions across sentences, we thus use M-M inter edge to track all the appearances of a specific entity, which facilitates the long distance event extraction from a global perspective.",
"In Section.",
"4.5, experiments show that all of these four kinds of edges play an important role in event detection, and the performance would decrease without any of them.",
"After heterogeneous graph construction * , we apply multi-layer Graph Convolution Network (Kipf and Welling, 2017) to model the global interactions inspired by Zeng et al. (2020).",
"Given node u at the l -th layer, the graph convolutional operation is defined as follows: h ( l +1) u = ReLU (cid:88) k K (cid:88) v N k ( u ) (cid:83) { u } 1 c u,k W ( l ) k h ( l ) v where K represents different types of edges, W ( l ) k R d m d m is trainable parameters.",
"N k ( u ) denotes the neighbors for node u connected in k -th type edge and c u,k is a normalization constant.",
"We then derive the final hidden state h u for node u , h u = W a [ h (0) u ; h (1) u ; . . . ; h ( L ) u ] where h (0) u is the initial node embedding of node u , and L is the number of GCN layers.",
"Finally, we obtain the sentence embedding matrix S = [ h (cid:62) 1 h (cid:62) 2 . . . h (cid:62)|D| ] R d m |D| and entity embedding matrix E R d m |E| .",
"The i -th entity may have many mentions, where we simply use string matching to detect entity coreference following Zheng et al. (2019) , and the entity embedding E i is computed by the average of its mention node embedding, E i = Mean ( { h j } j Mention( i ) ) .",
"In this way, the sentences and entities are interactively represented in a context-aware way.",
"where Q R d m T and W t R d m are trainable parameters, and T denotes the number of possible event types.",
"MultiHead refers to the standard multi-head attention mechanism with Query/Key/Value.",
"Therefore, we derive the event types detection loss with golden label (cid:98) R RT : L detect = T (cid:88) t =1 I (cid:16) (cid:98) R t = 1 (cid:17) log P ( R t |D ) + I (cid:16) (cid:98) R t = 0 (cid:17) log (1 P ( R t |D )) (2) 3.4 Event Records Extraction Since a document is likely to express multiple event records and the number of records cannot be known in advance, we decode records by expanding a tree orderly as previous methods did (Zheng et al., 2019).",
"However, they treat each record independently.",
"Instead, to incorporate the interdependency among event records , we propose a Tracker module, which improves the model performance.",
"we extract event records of a specific event type.",
"The arguments extraction order is predefined so that the extraction is modeled as a constrained tree expanding task .",
"Taking Equity Freeze records as an example, as shown in Figure 3, we firstly extract EquityHolder , followed by FrozeShares and others.",
"Starting from a virtual root node, the tree expands by predicting arguments in a sequential order.",
"As there may exist multiple eligible entities for the event argument role, the current node will expand several branches during extraction, with different entities assigned to the current role.",
"This branching operation is formulated as multi-label classification task.",
"In this way, each path from the root node to the leaf node is identified as a unique event record.",
"Interdependency exists extensively among different event records.",
"For example, as shown in Figure 1, an Equity Underweight event record is closely related to an Equity Overweight event record, and they may share some key arguments or provide useful reasoning information.",
"To take advantage of such interdependency, we propose a novel Tracker module inspired by memory network (Weston et al., 2015).",
"Intuitively, the Tracker continually tracks the extracted records on-the-fly and store the information into a global memory.",
"When predicting arguments for current record, the model will query the global memory and therefore make use of useful interdependency information of other records.",
"In detail, for the i -th record path consisting of a sequence of entities, the Tracker encodes the corresponding entity representation sequence U i = [ E i 1 , E i 2 , ... ] into an vector G i with an LSTM (last hidden state) and add event type embedding.",
"Then the compressed record information is stored in the global memory G , which is shared across different event types as shown in Figure 3.",
"For extraction, given a record path U i R d m ( J 1) with the first J 1 arguments roles, we predict the J -th role by injecting role-specific information into entity representations, E = E + Role J , where Role J is the role embedding for the J -th role.",
"Then we concatenate E , sentences feature S , current entities path U i , and the global memory G , followed by a transformer to obtain new entity feature matrix (cid:101) E R d m |E| , which contains global role-specific We simply adopt the order used by Zheng et al. (2019).",
"We treat the path expansion as a multi-label classification problem with a binary classifier over (cid:101) E i , i.e., predicts whether the i -th entity is the next argument role for the current record and expand the path accordingly as shown in Figure 3.",
"where ND denotes the nodes set in the event records tree, and y nt is the golden label.",
"If the t -th entity is validate for the next argument in node n , then y nt = 1 , otherwise y nt = 0 .",
"More training details are shown in Appendix A.",
"We evaluate our model on a public dataset proposed by Zheng et al. (2019) , which is constructed from Chinese financial documents.",
"It consists of up to 32 , 040 documents which is the largest document-level EE dataset by far.",
"It focuses on five event types: Equity Freeze (EF), Equity Repurchase (ER), Equity Underweight (EU), Equity Overweight (EO) and Equity Pledge (EP), with 35 different kinds of argument roles in total.",
"We follow the standard split of the dataset, 25 , 632 / 3 , 204 / 3 , 204 documents for training/dev/test set.",
"The dataset is quite challenging, as a document has 20 sentences and consists of 912 tokens on average.",
"Besides, there are roughly 6 sentences involved for an event record, and 29% documents express multiple events.",
"In our implementation of GIT , we use 8 and 4 layers Transformer (Vaswani et al., 2017) in encoding and decoding module respectively.",
"The dimensions in hidden layers and feed-forward layers are the same as previous work (Zheng et al., 2019), i.e., 768 and 1 , 024 .",
"We also use L = 3 layers of GCN, and set dropout rate to 0 .",
"1 , batch size to 64 .",
"GIT is trained using Adam (Kingma and Ba, 2015) as optimizer with 1 e 4 learning rate for 100 epochs.",
"We set 1 = 0 .",
"05 , 2 = 3 = 1 for the loss function.",
"Yang et al. (2018) proposes DCFEE that extracts arguments from the identified central sentence and queries surrounding sentences for missing arguments.",
"The model has two variants, DCFEE-S and DCFEE-M .",
"DCFEE-S produces one record at a time, while DCFEE-M produces multiple possible argument combinations by the closest distance from the central sentence.",
"Besides, Doc2EDAG (Zheng et al., 2019) uses transformer encoder to obtain sentence and entity embeddings, followed by another transformer to fuse cross-sentence context.",
"Then multiple events are extracted simultaneously.",
"Greedy-Dec is a variant of Doc2EDAG, which produces only one record greedily.",
"Three sub-tasks of the document-level EE are all evaluated by F1 score.",
"Due to limited space, we leave the results of entity extraction and event types detection in Appendix B, which shows GIT only slightly outperform Doc2EDAG, because we mainly focus on event record extraction and the methods are similar to Doc2EDAG for these two sub-tasks.",
"In the following, we mainly report and analyze the results of event record extraction .",
"Overall performance .",
"The results of the overall performance on the document-level EE dataset is illustrated in Table 1.",
"As Table 1 shows, our GIT consistently outperforms other baselines, thanks to better modelling of global interactions and interdependency.",
"Specifically, GIT improves 2 .",
"8 micro F1 compared with the previous state-of-the-art, Doc2EDAG, especially 4 .",
"5 improvement in Equity Underweight (EU) event type.",
"Cross-sentence records scenario.",
"There are more than 99 .",
"5% records of the test set are cross-sentence event records, and the extraction becomes gradually more difficult as the number of their involved sentences grows.",
"To verifies the effectiveness of GIT to capture cross-sentence information, we first calculate the average number of sentences that the records involve for each document, and sort them in ascending order.",
"Then we divide them into four sets I/II/III/IV with equal size.",
"Documents in Set.",
"IV is considered to be the most challenging as it requires the most number of sentences to successfully extract records.",
"As Table 2 shows, GIT consistently outperforms Doc2EDAG, especially on the most challenging Set.",
"IV that involves the most sentences, by 3 .",
"7 F1 score.",
"It suggests that GIT can well capture global context and mitigate the arguments-scattering challenge, with the help of the heterogeneous graph interaction network.",
"Multiple records scenario.",
"GIT introduces the tracker to make use of global interdependency among event records, which is important in multiple records scenario.",
"To illustrate its effectiveness, we divide the test set into single-record set (S.) containing documents with one record, and multi-record set (M.) containing those with multiple records.",
"As shown in Table.",
"3, F1 score on M. Model EF ER EU EO EP Overall S. M. S. M. S. M. S. M. S. M. S. M. DCFEE-S 55.7 38.1 83.0 55.5 52.3 41.4 49.2 43.6 62.4 52.2 69.0 50.3 DCFEE-M 45.3 40.5 76.1 50.6 48.3 43.1 45.7 43.3 58.1 51.2 63.2 49.4 Greedy-Dec 74.0 40.7 82.2 50.0 61.5 35.6 63.4 29.4 78.6 36.5 77.8 37.0 Doc2EDAG 79.7 63.3 90.4 70.7 74.7 63.3 76.1 70.2 84.3 69.3 81.0 67.4 GIT (ours) 81.9 65.9 93.0 71.7 82.0 64.1 80.9 70.6 85.0 73.5 87.6 72.3 Table 3: F1 scores on single-record (S.) and multi-record (M.) sets.",
"is much lower than that on S., indicating it is challenging to extract multiple records.",
"However, GIT still surpasses other strong baselines by 4 .",
"9 35 .",
"3 on multi-record set (M.).",
"This is because GIT is aware of other records through the T racker module, and leverage the interdependency information to improve the performance .",
"Nguyen et al. (2016) maintain three binary matrices to memorize entities and events states.",
"Although they aim at sentence-level EE that contains fewer entities and event records, it would be also interesting to compare with them and we leave it as future work.",
"We conduct further experiments to analyze the key modules in GIT more deeply.",
"On the effect of heterogeneous graph interaction network .",
"The heterogeneous graph we constructed contains four types of edges.",
"To explore their functions, we remove one type of edges at a time, and remove the whole graph network finally.",
"Results are shown in Table 4, including micro F1 and F1 on the four sets, which are divided by the number of involved sentences for records as we did before.",
"The micro F1 would decreases 1 .",
"0 1 .",
"4 without a certainty type of edge.",
"Besides, removing the whole graph causes an significant drop by 2 .",
"0 F1, especially for Set IV by 2 .",
"5 , which requires the most number of sentences to extract the event record.",
"It demonstrates that the graph interaction network helps improve the performance, especially on records involving many sentences, and all kinds of edges play an important role for extraction.",
"On the effect of Tracker module .",
"GIT can leverage interdependency among records based on the information of other event records tracked by Tracker .",
"To explore its effect, firstly, we remove the global interdependency information between records of different event types, by clearing the global memory whenever we extract events for an[5] The shareholder of the company, Quanlie Chen , pledged 52.4 million to GDZQ Co., Ltd. in 2018, and supplemented the pledge recently because of the decline of the share price.",
"Next, we remove all the tracking information except the own path for a record, to explore whether the tracking of other records makes effect indeed (GITO wn P ath).",
"Finally, we remove the whole Tracker module (GITN o T racker).",
"As Table 5 shows, the F1 in GIT -OT/G IT-OP decreases by 0 .",
"5 / 1 .",
"2 , suggesting the interdependency among records of both the same and different event types do play an essential role.",
"Besides, their F1 decrease in M. by 0 .",
"7 / 1 .",
"5 are more than those in S. by 0 .",
"8 / 1 .",
"0 , verifying the effectiveness of the Tracker in multi-event scenarios.",
"Moreover, the performances are similar between GIT-OP and GIT-NT, which also provides evidence that other records do help.",
"We also reveal F1 on documents with different number of records in Figure 4.",
"The gap between models with or without Tracker raises as the number of records increases, which validates the effectiveness of our Tracker .",
"Figure 5 demonstrates a case of the predictions of Doc2EDAG and GIT for Equity Pledge (EP) event types.",
"The TotalHoldingShares and TotalPledgedShares information lies in Sentence 8 , while the PledgedShares and Pledgee information for Record 2 lies in Sentence 5 .",
"Though Doc2EDAG fails to extract these arguments in Record 2 (colored in red), GIT succeeds because it can capture interactions between long-distance sentences, and utilize the information of Record 1 ( 325.4 million and 218.6 million ) thanks to the Tracker model.",
"Sentence-level Event Extraction .",
"Previous approaches mainly focus on sentence-level event extraction.",
"Chen et al. (2015) propose a neural pipeline model that identifies triggers first and then extracts argument roles.",
"Nguyen et al. (2016) use a joint model to extract triggers and argument roles simultaneously.",
"Some studies also utilize dependency tree information (Liu et al., 2018; Yan et al., 2019).",
"To utilize more knowledge, some studies leverage document context (Chen et al., 2018; Zhao et al., 2018), pre-trained language model (Yang et al., 2019), and explicit external knowledge (Liu et al., 2019a; Tong et al., 2020) such as WordNet (Miller, 1995).",
"Du and Cardie (2020b) also try to extract events in a Question-Answer way.",
"These studies usually conduct experiments on sentence-level event extraction dataset, ACE05 (Walker et al., 2006).",
"However, it is hard for the sentence-level models to extract multiple qualified events spanning across sentences, which is more common in real-world scenarios.",
"Document-level Event Extraction .",
"Document-level EE has attracted more and more attention recently.",
"Yang and Mitchell (2016) use well-defined features to handle the event-argument relations across sentences, which is, unfortunately, quite nontrivial.",
"Yang et al. (2018) extract events from a central sentence and find other arguments from neighboring sentences separately.",
"Although Zheng et al. (2019) use Transformer to fuse sentences and entities, interdependency among events is neglected.",
"Du and Cardie (2020a) try to encode the sentences in a multi-granularity way and Du et al. (2020) leverage a seq2seq model.",
"They conduct experiments on MUC-4 (Sundheim, 1992) dataset with 1 , 700 documents and 5 kinds of entity-based arguments, and it is formulated as a table-filling task, coping with single event record of single event type.",
"However, our work is different from these studies in that",
"a) we utilize heterogeneous graph to model the global interactions among sentences and mentions to capture cross-sentence context,",
"b) and we leverage the global interdependency through Tracker to extract multiple event records of multiple event types.",
"Although promising in practical application, document-level EE still faces some challenges such as arguments-scattering phenomenon and multiple correlated events expressed by a single document.",
"To tackle the challenges, we introduce Heterogeneous G raph-based I nteraction Model with a T racker (GIT ).",
"GIT uses a heterogeneous graph interaction network to model global interactions among sentences and entity mentions.",
"GIT also uses a Tracker to track the extracted records to consider global interdependency during extraction.",
"Experiments on large-scale public dataset (Zheng et al., 2019) show GIT outperforms previous state-of-the-art by 2 .",
"8 F1.",
"Further analysis verifies the effectiveness of GIT especially in cross-sentence events extraction and multi-event scenarios.",
"The authors would like to thank Changzhi Sun, Mingxuan Wang, and the anonymous reviewers for their thoughtful and constructive comments.",
"This paper is supported in part by the National Key R&D Program of China under Grand No.2018AAA0102003, the National Science Foundation of China under Grant No.61936012 and 61876004."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Dependency parsing research, which has made significant gains in recent years, typically focuses on improving the accuracy of singletree predictions.",
"However, ambiguity is inherent to natural language syntax, and communicating such ambiguity is important for error analysis and better-informed downstream applications.",
"In this work, we propose a transition sampling algorithm to sample from the full joint distribution of parse trees defined by a transition-based parsing model, and demonstrate the use of the samples in probabilistic dependency analysis.",
"First, we define the new task of dependency path prediction , inferring syntactic substructures over part of a sentence, and provide the first analysis of performance on this task.",
"Second, we demonstrate the usefulness of our Monte Carlo syntax marginal method for parser error analysis and calibration.",
"Finally, we use this method to propagate parse uncertainty to two downstream information extraction applications: identifying persons killed by police and semantic role assignment.",
"1 1 Introduction Dependency parsers typically predict a single tree for a sentence to be used in downstream applications, and most work on dependency parsers seeks to improve accuracy of such single-tree predictions.",
"Despite tremendous gains in the last few decades of parsing research, accuracy is far from perfectbut perfect accuracy may be impossible since syntax models by themselves do not incorporate the discourse, pragmatic, or world knowledge necessary to resolve many ambiguities.",
"In fact, although relatively unexamined, substantial ambiguity already exists within commonly used discriminative probabilistic parsing models, 1 Supporting code available at https://github.com/slanglab/ transition sampler Figure 1: Example of a sentence with inherent ambiguity.",
"which define a parse foresta probability distribution p ( y | x ) over possible dependency trees y Y ( x ) for an input sentence x .",
"For example, the top of Figure 1 shows the predicted parse y ( greedy ) from such a parser (Chen and Manning, 2014), which resolves a prepositional (PP) attachment ambiguity in one manner; this prediction was selected by a standard greedy transition-based algorithm ( 2.1).",
"However, the bottom of Figure 1 shows marginal probabilities of individual (relation, governor, child) edges under this same model.",
"These denote our estimated probabilities, across all possible parse structures, that a pair of words are connected with a particular relation ( 2.4).",
"For example, the two different PP attachment readings both exist within this parse forest with marginal probabilities p ( nmod ( saw 2 , telescope 7 ) | x ) = 0 .",
"These types of irreducible syntactic ambiguities exist and should be taken into consideration when analyzing syntactic information; for instance, one could transmit multiple samples (Finkel et al., 2006) or confidence scores (Bunescu, 2008) over 917 ambiguous readings to downstream analysis components.",
"In this work, we introduce a simple transition sampling algorithm for transition-based dependency parsing ( 2.2), which, by yielding exact samples from the full joint distribution over trees, makes it possible to infer probabilities of long-distance or other arbitrary structures over the parse distribution ( 2.4).",
"We implement transition samplinga very simple change to pre-existing parsing softwareand use it to demonstrate several applications of probabilistic dependency analysis: Motivated by how dependency parses are typically used in feature-based machine learning, we introduce a new parsing-related task dependency path prediction .",
"This task involves inference over variable length dependency paths , syntactic substructures over only parts of a sentence.",
"To accomplish this task, we define a Monte Carlo syntax marginal inference method which exploits information across samples of the entire parse forest.",
"It achieves higher accuracy predictions than a traditional greedy parsing algorithm, and allows tradeoffs between precision and recall ( 4).",
"We provide a quantitative measure of the model's inherent uncertainty in the parse, whole-tree entropy , and show how it can be used for error analysis ( 3).",
"We demonstrate the method's (surprisingly) reasonable calibration ( 5).",
"Finally, we demonstrate the utility of our method to propagate uncertainty to downstream applications.",
"Our method improves performance for giving probabilistic semantics to a rule-based event extractor to identify civilians killed by police ( 6), as well as semantic role assignment ( 7).",
"We examine the basic form of the Universal Dependencies formalism (Nivre et al., 2016), where, for a sentence x of length N , a possible dependency parse y is a set of (relation, governorToken,",
"childToken) edges, with a tree constraint that every token in the parse has exactly one governor that is, for every token w { 1",
"..N } , there is exactly one triple ( r, g, w ) y where it participates as a child.",
"The governor is either one of the observed tokens, or a special ROOT vertex.",
"There exist a wide variety of approaches to machine learned, discriminative dependency parsing, which often define a probability distribution p ( y | x ) over a domain of formally legal dependency parse trees y Y ( x ) .",
"We focus on transition-based dependency parsers (Nivre, 2003; Kubler et al., 2009), which (typically) use a stack-based automaton to process a sentence, incrementally building a set of edges.",
"Transition-based parsers are very fast, have runtimes linear in sentence length, feature high performance (either state-of-the-art, or nearly so), and are easier to implement than other modeling paradigms ( 2.5).",
"A probabilistic transition-based parser assumes the following stochastic process to generate a parse tree: Initialize state S 0 For n = 1 , 2 , . . . : (A) a n p ( a n | S n 1 ) (B) S n := Update ( S n 1 , a n ) (C) Break if InEndState( S n ) Most state transition systems (Bohnet et al., 2016) use shift and reduce actions to sweep through tokens from left to right, pushing and popping them from a stack to create the edges that populate a new parse tree y .",
"The action decision probability, p ( a next | S current ) , is a softmax distribution over possible next actions.",
"It can be parameterized by any probabilistic model, such as log-linear features of the sentence and current state (Zhang and Nivre, 2011), multilayer perceptrons (Chen and Manning, 2014), or recurrent neural networks (Dyer et al., 2015; Kiperwasser and Goldberg, 2016).",
"To predict a single parse tree on new data, a common inference method is greedy decoding , which runs a close variant of the above transition model as a deterministic automaton, replacing stochastic step (A) with a best-action decision, a n := arg max a n p ( a n | S n 1 ) .",
"2 An inferred ac-2 Since greedy parsing does not require probabilistic semantics for the action modelthe softmax normalizer does not need to be evaluatednon-probabilistic training, such as with hinge loss (SVMs), is a common alternative, including in some of the cited work.",
"In this work, we propose to analyze the full joint posterior p ( y | x ) , and use transition sampling , a very simple forward/ancestral sampling algorithm, 3 to draw parse tree samples from that distribution.",
"To parse a sentence, we run the automaton stochastically, sampling the action probability in step (A).",
"This yields one action sequence a 1: n from the full joint distribution of action sequences, and therefore a parse y ( a 1: n ) from the distribution of parses.",
"We can obtain as many parse samples as desired by running the transition sampler S times, yielding a collection (multiset) of parse structures { y ( s ) | s { 1",
"..S }} , where each y ( s ) p ( y | x ) is a full dependency parse tree.",
"4 Runtime to draw one parse sample is very similar to the greedy al-gorithm's runtime.",
"We denote the set of unique parses in the sample Y ( x ) .",
"We implement a transition sampler by modifying an implementation of Chen and Manning's multilayer perceptron transition-based parser 5 and use it for all subsequent experiments.",
"One minor use of transition sampling is a method for predicting a single parse, by selecting the most probable (common) parse tree in the sample,",
"y MC-MAP = arg max y Y p ( y | x ) (3) = arg max y Y c ( y ) S (4)",
"where p ( y | x ) denotes the Monte Carlo estimate of a parse's probability, which is proportional to how many times it appears in the sample: c ( y ) P Ss 1 { y = y ( s ) } .",
"Note that p ( y | x ) correctly accounts for the case of an ambiguous transition system where multiple different action sequences can yield the same treei.e., y ( a 1: n ) is not one-to-onesince the transition sampler can sample the multiple different paths.",
"3 Ancestral refers to a directed Bayes net (e.g. Barber (2012)) of action decisions, each conditioned on the full history of previous actionsnot ancestors in a parse tree.",
"4 Dyer et al. (2016) use the same algorithm to draw samples from a transition-based constituency parsing model, as an importance sampling proposal to support parameter learning and single-tree inference.",
"This MC-MAP method is asymptotically guaranteed to find the model's most probable parse ( arg max y p ( y | x ) ) given enough samples.",
"6 By contrast, greedy decoding and beam search have no theoretical guarantees.",
"MC-MAP's disadvantage is that it may require a large number of samples, depending on the difference between the top parse's probability compared to other parses in the domain.",
"Beyond entire tree structures, parse posteriors also define marginal probabilities of particular events in them.",
"Let f ( y ) { 0 , 1 } be a boolean-valued structure query function of a parse treefor example, whether the tree contains a particular edge: f ( y ) = 1 { dobj(kill,Smith) y } or more complicated structures, such as a length-2 dependency path: f ( y ) = 1 { nsubj(kill , cop ) dobj ( kill , Smith ) y } .",
"More precisely, these queries are typically formulated to check for edges between specific tokens, and may check tokens' string forms.",
"Although f ( y ) is a deterministic function, since the parsing model is uncertain of the correct parse, we find the marginal probability , or expectation, of a structure query by integrating out the posterior parse distributionthat is, the predicted probability that the parse has the property in question: p ( f ( y ) | x ) = X y Y ( x ) f ( y ) p ( y | x ) (5) p ( f ( y ) | x ) = X y Y ( x ) f ( y ) c ( y ) S .",
"(6) Eq.",
"5 is the expectation with regard to the model's true probability distribution ( p ) over parses from the domain of all possible parse trees Y ( x ) for a sentence, while Eq.",
"6 is a Monte Carlo estimate of the query's marginal probabilitythe fraction of parse tree samples where the structure query is true.",
"We use this simple method for all inference 6 This holds since the Monte Carlo estimated probability of any tree converges to its true probability, according to, e.g., Hoeffding's inequality or the central limit theorem.",
"Thus, with enough samples, the tree with the highest true probability will have estimated probability higher than any other tree's.",
"in this work, though importance sampling (Dyer et al., 2016), particle filters (Buys and Blunsom, 2015), or diverse k-best lists (Zhang and McDonald, 2014) could support more efficient inference in future work.",
"Our transition sampling method aims to be an easy-to-implement algorithm for a highly performant class of dependency models, that conducts exact probabilistic inference for arbitrary structure queries in a reasonable amount of time.",
"A wide range of alternative methods have been proposed for dependency inference that cover some, but perhaps not all, of these goals.",
"For transition-based parsing, beam search is a commonly used inference method that tries to look beyond a single structure.",
"Beam search can be used to yield an approximate K -best list by taking resulting structures on the beam, though there are no theoretical guarantees about the result, and runtime is no better than the transition sampler.",
"7 Finkel et al. (2006) further discuss tradeoffs between beam search and sampling, and find they give similar performance when propagating named entity recognition and PCFG parse information to downstream tasks.",
"Graph-based parsers are the major alternative modeling paradigm for dependency parsing; instead of a sequence of locally normalized decisions, they directly parameterize an entire tree's globally normalized probability.",
"Parse samples could be drawn from a graph-based model via Markov chain Monte Carlo (Zhang et al., 2014), which is asymptotically correct, but may require a large amount of time to obtain non-autocorrelated parses.",
"A range of methods address inference for specific queries in graph-based modelsfor example, edge marginals for edge-factored models via the matrix-tree theorem (Koo et al., 2007), or approximate marginals with loopy belief propagation (Smith and Eisner, 2008).",
"8 By contrast, our method is guaranteed to give correct marginal 7 Loosely, if it takes N transitions to complete a parse, and B possible actions at each transition must be evaluated, our method evaluates KNB actions to obtain K trees.",
"Beam search evaluates a similar number of actions when using a K sized beam, but also requires non-parallelizable management of the beam's priority queue.",
"8 These papers infer marginals to support parameter learning, but we are not aware of previous work that directly analyzes or uses dependency parse marginals.",
"inferences for arbitrary, potentially long-distance, queries.",
"Given the strong performance of graph-based parsers in the single-structure prediction setting (e.g. Zeman et al. (2017); Dozat et al. (2017)), it may be worthwhile to further explore probabilistic inference for these models.",
"For example, Niculae et al. (2018) present an inference algorithm for a graph-based parsing model that infers a weighted, sparse set of highly-probable parse trees, and they illustrate that it can infer syntactic ambiguities similar to Figure",
"1. Dynamic programming for dependency parsing, as far as we are aware, has only been pursued for single-structure prediction (e.g. Huang and Sagae (2010)), but in principle could be generalized to calculate local structure query marginals via an inside-outside algorithm, or to sample entire structures through an inside-outside sampler (Eisner, 2016), which Finkel et al. (2006) use to propagate parse uncertainty for downstream analysis.",
"In this section we directly explore the model's intrinsic uncertainty, while 5 conducts a quantitative analysis of model uncertainty compared to gold standard structures.",
"Parse samples are able to both pass on parse uncertainty and yield useful insights that typical error analysis approaches cannot.",
"For a sentence x , we can calculate the whole-tree entropy , the model's uncertainty of whole-tree parse frequencies in the samples: H ( p ) = X y Y ( x ) p ( y | x ) log p ( y | x ) H ( p ) = X y Y ( x ) c ( y ) S log c ( y ) S .",
"(7) Since this entropy estimate is only based on an S sample approximation of p , it is upper bounded at log( S ) in the case of a uniform MC distribution.",
"Another intuitive measure of uncertainty is simply the number of unique parses, that is, the cardinality of the MC distribution's domain ( | Y| ) ; this quantity is not informative for the true distribution p , but in the MC distribution it is intuitively upper bounded by S .",
"9 9 Shannon entropy, domain support cardinality, and top probability ( max y Y p ( y ) ), which we show in Table 1, are all instances of the more general Renyi entropy (Smith and Eisner, 2007).",
"Domain Size Top 3 Freq.",
"Entropy In Ramadi , there was a big demonstration .",
"We run our dependency sampler on the 2002 sentences in the Universal Dependencies 1.3 English Treebank development set, generating 100 samples per sentence; Table 1 shows example sentences along with | Y| and entropy statistics for each sentence.",
"We find that in general, as sentence length increases, so does the entropy of the parse distribution (Fig. 2).",
"Moreover, we find that entropy is a useful diagnostic tool.",
"For example, 7% of sentences in the UD development corpus with fewer than 15 tokens and H ( p ) 2 exhibit uncertainty around the role of -' (compare Sciences principally biology and thought-provoking ), and another 7% of such sentences exhibit uncertainty around s' (potentially representing a plural or a possessive).",
"Here we examine the utility of marginal inference for predicting parts of dependency parses, using the UD 1.3 Treebank's English development set to evaluate.",
"10 10 UD 1.3 is the UD version that this parsing model is most similar to: https://mailman.stanford.edu/pipermail/ 4.1 Greedy decoding Using its off-the-shelf pretrained model with greedy decoding, the CoreNLP parser achieves 80.8% labeled attachment score (LAS).",
"LAS is equivalent to both the precision and recall of predicting (rel,gov,child) triples in the parse tree.",
"11 4.2 Minimum Bayes risk (MBR) decoding A simple way to use marginal probabilities for parse prediction is to select, for each token, the governor and relation that has the highest marginal probability.",
"This method gives a minimum Bayes risk (MBR) prediction of the parse, minimizing the model's expected LAS with regards to local uncertainty; similar MBR methods have been shown to improve accuracy in tagging and constituent parsing (e.g. Goodman (1996); Petrov and Klein (2007)).",
"This method yields 81.4% LAS, outperforming greedy parsing, though it may yield a graph that is not a tree.",
"An alternative view on dependency parsing is to consider what structures are needed for downstream applications.",
"One commonly used parse substructure is the dependency path between two words, which is widely used in unsupervised lexical semantics (Lin and Pantel, 2001), distantly supervised lexical semantics (Snow et al., 2005), relation learning (Riedel et al., 2013), and supervised semantic role labeling (Hacioglu, 2004; Das et al., 2014), as well as applications in economics (Ghose et al., 2007), political science (O'Connor et al., 2013), biology (Fundel et al., 2006), and the humanities (Bamman et al., 2013, 2014).",
"parser-user/2017-November/003460.html 11 LAS is typically defined as proportion of tokens whose governor (and relation on that governor-child edge) are correctly predicted; this is equivalent to precision and recall of edges if all observed tokens are evaluated.",
"If, say, punctuation is excluded from evaluation, this equivalence does not hold; in this work we always use all tokens for simplicity.",
"In this work, we consider a dependency path to be a set of edges from the dependency parse; for example, a length-2 path p = { nsubj (3 , 1) , dobj (3 , 4) } connects tokens 1 and 4.",
"Let P d ( y ) be the set of all lengthd paths from a parse tree y .",
"12 Figure 3's Greedy table column displays the F-scores for the precision and recall of retrieving P d ( y ( gold ) ) from the prediction P d ( y ( greedy ) ) for a series of different path lengths.",
"P 1 gives individual edges, and thus is the same as LAS (80.8%).",
"Longer length paths see a rapid decrease in performance; even length-2 paths are retrieved with only 66% precision and recall.",
"13 We are not aware of prior work that evaluates dependency parsing beyond single edge or whole sentence accuracy.",
"We define dependency path prediction as the task of predicting a set of dependency paths for a sentence; the paths do not necessarily have to come from the same tree, nor even be consistent with a single syntactic analysis.",
"We approach this task with our Monte Carlo syntax marginal method, by predicting paths from the transition sampling parser.",
"Here we treat each possible path 12 Path construction may traverse both up and down directed edges; we represent a path as an edge set to evaluate its existence in a parse.",
"A path may not include the same vertex twice.",
"The set of all paths for a parse includes all paths from all pairs of vertexes (observed tokens and ROOT).",
"13 For length 1 paths, precision and recall are identical; this does not hold for longer paths, though precision and recall from a single parse prediction are similar.",
"as a structure query ( 2 . 4 ) and return all paths whose marginal probabilities are at least threshold t .",
"Varying t trades off precision and recall.",
"We apply this method to 100 samples per sentence in the UD treebank.",
"When we take all length-1 paths that appear in every single sample (i.e., estimated marginal probability 1 . 0 ), precision greatly increases to 0 .",
"969 , while recall drops to 0 .",
"317 (the top-left point on Figure 3's teal length-1 curve.) We can also accommodate applications which may prefer to have a higher recall: predicting all paths with at least 0 .",
"01 probability results in 0 .",
"936 recall (the bottom-right point on the curve in Figure 3).",
"14 This marginal path prediction method dominates the greedy parser: for length-1 paths, there are points on the marginal decoder's PR curve that achieve both higher precision and recall than the greedy decoder, giving F1 of 82.4% when accepting all edges with marginal probability at least 0 .",
"45 .",
"Furthermore, these advantages are more prominent for longer dependency paths.",
"For example, for length-3 paths, the greedy parser only achieves 50.6% F1, while the marginal parser im-14 The 6.4% of gold-standard edges with predicted 0 probability often correspond to inconsistencies in the formalism standards between the model and UD; for example, 0.7% of the gold edges are name' relations among words in a name, which the model instead analyzes as compound'.",
"Inspecting gold edges' marginal probabilities helps error analysis, since when one views a single predicted parse, it is not always clear whether observed errors are systematic, or a fluke for that one instance.",
"proves a bit to 55.0% F1; strikingly, it is possible to select high-confidence paths to get much higher 90.1% precision (at recall 11.6%, with confidence threshold t = 0 . 95 ).",
"Figure 3 also shows the pre-cision/recall points on each curve for thresholds t = 0 .",
"9 and t = 0 .",
"1 .",
"We also evaluated the MC-MAP single-parse prediction method ( 2.3), which slightly, but consistently, underperforms the greedy decoder at all dependency lengths.",
"More work is required to understand whether this is is an inference or modeling problem: for example, we may not have enough samples to reliably predict a high-probability parse; or, as some previous work finds in the context of beam search, the label bias phenomenon in this type of locally-normalized transition-based parser may cause it to assign higher probability to non-greedy analyses that in fact have lower linguistic quality (Zhang and Nivre, 2012; Andor et al., 2016).",
"The precision-recall analysis shows that the predicted marginal probabilities are meaningful in a ranking sense, but we can also ask whether they are meaningful in a sense of calibration : predictions are calibrated if, among all structures with predicted probability q (cid:15) , they exist in the gold parses with probability q .",
"That is, predictions with confidence q have precision q .",
"15 If probabilities are calibrated, that implies expectations with regard to their distribution are unbiased, and may also justify intuitive interpretations of probabilities in exploratory analysis ( 3).",
"Calibration may also have implications for joint inference, EM, and active learning methods that use confidence scores and confidence-based expectations.",
"We apply Nguyen and O'Connor (2015)'s adaptive binning method to analyze the calibration of structure queries from an NLP system, by taking the domain of all seen lengthd paths from the 100 samples' parse distribution for the treebank, grouping by ranges of predicted probabilities to have at least 5000 paths per bin, to ensure stability of the local precision estimate.",
"16 We find that probabilities are reasonably well calibrated, if slightly overconfidentFigure 4 shows the average predicted probability per bin, compared to how often these paths appear in the gold standard (local precision).",
"For example, for edges (length-1 paths), predictions near 60% confidence (the average among predictions in range [0 . 42 , 0 . 78] ) correspond to edges that are actually in the gold standard tree only 52.8% of the time.",
"The middle confidence range has worse calibration error, and longer paths perform worse.",
"Still, this level of calibration seems remarkably good, considering there was no attempt to re-calibrate predictions (Kuleshov and Liang, 2015) or to use a model that specifically parameterizes the energy of dependency paths (Smith and Eisner, 2008; Martins et al., 2010)these predictions are simply a side effect of the overall joint model for incremental dependency parsing.",
"Supervised learning typically gives the most accurate information extraction or semantic parsing systems, but for many applications where train-15",
"train-15 This is a local precision, as opposed to the more usual tail probability of measuring precision of all predictions higher than some t the integral of local precision.",
"For example, Figure 3's length-1 t = 0 .",
"9 precision of 0 .",
"942 ( 4 ) is the average y value of several rightmost bins in Figure 4.",
"This contrast corresponds to Efron (2010)'s dichotomy of local versus global false discovery rates.",
"16 This does not include gold-standard paths with zero predicted probability.",
"As Nguyen and O'Connor found for sequence tagging and coreference, we find the prediction distribution is heavily skewed to near 0 and 1, necessitating adaptive bins, instead of fixed-width bins, for calibration analysis (Niculescu-Mizil and Caruana, 2005; Bennett, 2000).",
"ing data is scarce, Chiticariu et al. (2013) argue that rule-based systems are useful and widespread in practice, despite their neglect in contemporary NLP research.",
"Syntactic dependencies are a useful abstraction with which to write rule-based extractors, but they can be brittle due to errors in the parser.",
"We propose to integrate over parse samples to infer a marginal probability of a rule match, increasing robustness and allowing for precision-recall tradeoffs.",
"We examine the task of extracting the list of names of persons killed by police from a test set of web news articles in SeptDec 2016.",
"We use the dataset released by Keith et al. (2017), consisting of 24,550 named entities e E and sentences from noisy web news text extractions (that can be diffi-cult to parse), each of which contains at least one e (on average, 2.8 sentences/name) as well as keywords for both police and killing/shooting.",
"The task is to classify whether a given name is a person who was killed by police, given 258 gold-standard names that have been verified by journalists.",
"Keith et al. present a baseline rule-based method that uses Li and Ji (2014)'s off-the-shelf RPI-JIE ACE event parser to extract (event type, agent, patient) tuples from sentences, and assigns f JIE ( x i , e ) = 1 iff the event type was a killing, the agent's span included a police keyword, and the patient was the candidate entity e .",
"An entity is classified as a victim if at least one sentence is classified as true, resulting in a 0.17 F1 score (as reported in previous work).",
"17 We define a similar syntactic dependency rule system using a dependency parse as input: our extractor f ( x, e, y ) returns 1 iff the sentence has a killing keyword k , 18 which both",
"1. has an agent token a (defined as, governed by nsubj or nmod ) which is a police keyword, or a has a ( amod or compound ) modifier that is a police keyword; and,",
"2. has a patient token p (defined as, governed by nsubjpass or dobj ) contained in the candidate name e 's span.",
"17 This measures recall of the entire gold-standard victim database, though the corpus only includes 57% of the victims.",
"18 Police and killing/shooting keywords are from Keith et",
"al.'s publicly released software.",
"Applying this f ( x, e, y ) classifier to greedy parser output, it performs better than the RPI-JIE-based rules (Figure 5, right), perhaps because it is better customized for the particular task.",
"Treating f as a structure query, we then use our Monte Carlo marginal inference ( 2) method to calculate the probability of a rule match for each sentencethat is, the fraction of parse samples where f ( x, e, y ( s ) ) is trueand infer the entity's probability with the noisy-or formula (Craven and Kumlien, 1999; Keith et al., 2017).",
"This gives soft classifications for entities.",
"The Monte Carlo method achieves slightly higher F1 scores once there are at least 10 samples (Fig. 5, right).",
"More interestingly, the soft entity-level classifications also allow for precision-recall tradeoffs (Fig. 5, left), which could be used to prioritize the time of human reviewers updating the victim database (filter to higher precision), or help ensure victims are not missed (with higher recall).",
"We found the sampling method retrieved several true-positive entities where only a single sentence had a non-zero rule prediction at probability 0.01that is, the rule was only matched in one of 100 sampled parses.",
"Since current practitioners are already manually reviewing millions of news articles to create police fatality victim databases, the ability to filter to high recalleven with low precisionmay be useful to help ensure victims are not missed.",
"Sampling also slightly improves supervised learning for this problem.",
"We modify Keith et",
"al.'s logistic regression model based on a dependency 924 path feature vector f ( x i , y ) , instead creating feature vectors that average over multiple parse samples ( E p ( y ) [ f ( x i , y )] ) at both train and test time.",
"With the greedy parser, the model results in 0.229 F1; using 100 samples slightly improves performance to 0.234 F1.",
"Semantic role labeling (SRL), the task to predict argument structures (Gildea and Jurafsky, 2002), is tightly tied to syntax, and previous work has found it beneficial to conduct it with joint inference with constituency parsing, such as with topk parse trees (Toutanova et al., 2008) or parse tree samples (Finkel et al., 2006).",
"Since 4 shows that Monte Carlo marginalization improves dependency edge prediction, we hypothesize dependency sampling could improve SRL as well.",
"SRL includes both identifying argument spans, and assigning spans to specific semantic role labels (argument types).",
"We focus on just the second task of semantic role assignment: assuming argument spans are given, to predict the labels.",
"We experiment with English OntoNotes v5.0 annotations (Weischedel et al., 2013) according to the CoNLL 2012 test split (Pradhan et al., 2013).",
"We focus only on predicting among the five core arguments (Arg0 through Arg4) and ignore spans with gold-standard adjunct or reference labels.",
"We fit a separate model for each predicate 19 among the 2,160 predicates that occur at least once in both the training and test sets (115,811 and 12,216 sentences respectively).",
"Our semantic model of label z t { A 0",
"..A 4 } for argument head token t and predicate token p , p sem ( z t | p, y ) , is simply the conditional probability of the label, conditioned on y 's edge between t and p if one exists.",
"20 (If they are not directly connected, the model instead conditions on a no edge' feature.) Probabilities are maximum likelihood estimates from the training data's (predicate, argument label, path) counts, from either greedy 19 That is, for each unique (lemma, framesetID) pair, such as (view, view-02).",
"20 The dataset's argument spans must be reconciled with predicted parse structures to define the argument head t ; 90% of spans are consistent with the greedy parser in that all the span's tokens have the same highest ancestor contained with the span, which we define as the argument head.",
"For inconsistent cases, we select the largest subtree (that is, highest within-span ancestor common to the largest number of the span's tokens).",
"It would be interesting to modify the sampler to restrict to parses that are consistent with the span, as a form of rejection sampling.",
"parses, or averaged among parse samples.",
"To predict at test time, the greedy parsing model simply uses p ( z t | p, y ( greedy ) ) .",
"The Monte Carlo model, by contrast, treats it as a directed joint model and marginalizes over syntactic analyses: p MC ( z t | p, x ) = X y Y ( x ) p sem ( z t | p, y ) p syn ( y | x ) .",
"The baseline accuracy of predicting the predicate's most common training-time argument label yields 0.393 accuracy, and the greedy parser performs at 0.496.",
"The Monte Carlo method (with 100 samples) improves accuracy to 0.529 (Table 2).",
"Dependency samples' usefulness in this limited case suggests they may help systems that use dependency parses more broadly for SRL (Hacioglu, 2004; Das et al., 2014).",
"In this work, we introduce a straightforward algorithm for sampling from the full joint distribution of a transition-based dependency parser.",
"We explore using these parse samples to discover both parsing error and structural ambiguities.",
"Moreover, we find that our Monte Carlo syntax marginal method not only dominates the greedy method for dependency path prediction (especially for longer paths), but also allows for control of precision-recall tradeoffs.",
"Propagating dependency uncertainty can potentially help a wide variety of semantic analysis and information extraction tasks.",
"The authors would like to thank Rajarshi Das, Daniel Cohen, Abe Handler, Graham Neubig, Emma Strubell, and the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"objective",
"result",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"method",
"objective",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"other"
] |
[
"We study a new problem setting of information extraction (IE), referred to as text-to-table.",
"In text-to-table, given a text, one creates a table or several tables expressing the main content of the text, while the model is learned from text-table pair data.",
"The problem setting differs from those of the existing methods for IE.",
"First, the extraction can be carried out from long texts to large tables with complex structures.",
"Second, the extraction is entirely data-driven, and there is no need to explicitly define the schemas.",
"As far as we know, there has been no previous work that studies the problem.",
"In this work, we formalize text-to-table as a sequence-to-sequence (seq2seq) problem.",
"We first employ a seq2seq model fine-tuned from a pre-trained language model to perform the task.",
"We also develop a new method within the seq2seq approach, exploiting two additional techniques in table generation: table constraint and table relation embeddings.",
"We consider text-to-table as an inverse problem of the well-studied table-to-text, and make use of four existing table-to-text datasets in our experiments on text-to-table.",
"Experimental results show that the vanilla seq2seq model can outperform the baseline methods of using relation extraction and named entity extraction.",
"The results also show that our method can further boost the performances of the vanilla seq2seq model.",
"We further discuss the main challenges of the proposed task.",
"The code and data are available at https://github.",
"com/shirley-wu/text_to_table .",
"1 1 Introduction Information extraction (IE) is a task that aims to extract information of interest from text data and represent the extracted information in a structured form.",
"Traditional IE tasks include named entity recognition which recognizes entities and their 1 The work was done when Xueqing Wu was an intern at ByteDance AI Lab.",
"types (Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016; Devlin et al., 2019), relation extraction which identifies the relationships between entities (Zheng et al., 2017; Zeng et al., 2018; Luan et al., 2019; Zhong and Chen, 2021), etc.",
"Since the results of IE are structured, they can be easily used by computer systems in different applications such as text mining.",
"In this work, we study IE in a new setting, referred to as text-to-table.",
"First, the system receives a training dataset containing text-table pairs.",
"Each text-table pair contains a text and a table (or tables) representing information needed for the target application extracted from the text.",
"The system learns a model for information extraction.",
"Next, the system employs the learned model to conduct information extraction from a new text and outputs the result in a table (or tables).",
"Figure 1 gives an example of text-to-table, where the input (above) is a report of a basketball game, and the output (below) is two tables summarizing the scores of the 2518 teams and players from the input.",
"Text-to-table is unique compared to the traditional IE approaches.",
"First, text-to-data can be performed at both sentence-level and document-level.",
"While the distinction between sentence and document level is vague, document-level extraction can produce a more complex output.",
"As in the example in Figure 1, extraction of information is performed from the entire document.",
"The extracted information contains multiple types of scores of teams and players in a basketball game structured in table format.",
"Second, the schemas for extraction are implicitly included in the training data such as header names.",
"There is no need to explicitly define the schemas, which reduces the need for manual efforts for schema design and annotations.",
"Our work is inspired by research on the so-called table-to-text (or data-to-text) problem, which is the task of generating a description for a given table.",
"Table-to-text is useful in applications where the content of a table needs to be described in natural language.",
"Thus, text-to-table can be regarded as an inverse problem of table-to-text.",
"However, there are also differences.",
"Most notably, their applications are different.",
"Text-to-table systems can automatically produce tables for text summarization and text mining.",
"For example, the score tables of sports games and infoboxes of Wikipedia articles can serve as summaries of original documents.",
"The score tables can be utilized to evaluate the ath-letes' performances, and the infoboxes can be used to construct a knowledge graph.",
"In this work, we formalize text-to-table as a sequence-to-sequence (seq2seq) task.",
"More specifically, we translate the text into a sequence representation of a table (or tables), where the schema of the table is implicitly contained in the representation.",
"We build the seq2seq model on top of a pre-trained language model, which is the state-of-the-art approach for seq2seq tasks (Lewis et al., 2020; Raffel et al., 2020).",
"Although the approach is a natural application of existing technologies, as far as we know, there has been no previous study to investigate to what extent the approach works.",
"We also develop a new method for text-to-table within the seq2seq approach with two additional techniques, table constraint, and table relation embeddings.",
"Table constraint controls the creation of rows in a table and table relation embeddings affect the alignments between cells and their row headers and column headers.",
"Both are to make the generated table well-formulated.",
"The approach to IE based on seq2seq has already been proposed.",
"Methods for conducting individual tasks of relation extraction (Zeng et al., 2018; Nayak and Ng, 2020; Huang et al., 2021), named entity recognition (Chen and Moschitti, 2018; Yan et al., 2021), event extraction (Li et al., 2021; Lu et al., 2021) and role-filler entity extraction (Du et al., 2021; Huang et al., 2021) have been developed.",
"Methods for jointly performing multiple tasks of named entity recognition, relation extraction, and event extraction have also been devised (Paolini et al., 2021).",
"Most of the methods exploit suitable pre-trained models such as BERT.",
"However, all the existing methods rely on predefined schemas for extraction.",
"Moreover, their models are designed to extract information from short texts, rather than long texts, and extract information with simple structures (such as an entity and its type), rather than information with complicated structures (such as a table).",
"We conduct extensive experiments on the four datasets.",
"Results show that the vanilla seq2seq model fine-tuned from BART (Lewis et al., 2020) can outperform the state-of-the-art IE models fine-tuned from BERT (Devlin et al., 2019; Zhong and Chen, 2021).",
"Furthermore, results show that our proposed approach to text-to-table with the two techniques can further improve the extraction accuracies.",
"We also summarize the challenging issues with the seq2seq approach to text-to-table for future research.",
"Our contributions are summarized as follows:",
"1. We propose the new task of text-to-table for IE.",
"We derive four new datasets for the task from existing datasets.",
"2. We formalize the task as a seq2seq problem and propose a new method within the seq2seq approach using the techniques of table constraint and table relation embeddings.",
"3. We conduct extensive experiments to verify the effectiveness of the proposed approach.",
"Information Extraction (IE) is the task of extracting information (structured data) from a text (un-structured data).",
"For example, named entity recognition (NER) recognizes entities appearing in a text.",
"Relation extraction (RE) identifies the relationships between entities.",
"Event extraction (EE) discovers events occurring in a text.",
"Role-filler entity extrac-2519 tion (REE) fills entities into event templates and is similar to EE.",
"Traditionally, researchers formalize the task as a language understanding problem.",
"The state-of-the-art methods for NER perform the task on the basis of the pre-trained language model BERT (De-vlin et al., 2019).",
"The pipeline approach to RE divides the problem into NER and relation classification, and conducts the two sub-tasks in a sequential manner (Zhong and Chen, 2021), while the end-to-end approach jointly carries out the two sub-tasks (Zheng et al., 2017; Zeng et al., 2018; Luan et al., 2019).",
"The state-of-the-art methods for EE also employ BERT and usually jointly train the models with other tasks such as NER and RE (Wad-den et al., 2019; Zhang et al., 2019; Lin et al., 2020).",
"All the methods assume the use of pre-defined schemas (e.g., entity types for NER, entity and relation types for RE, and event templates for EE).",
"Besides, most methods are designed for extraction from short texts.",
"Therefore, existing methods for IE cannot be directly applied to text-to-table.",
"Another series of related work is open information extraction (OpenIE), which aims to extract information from texts without relying on explicitly defined schemas (Banko et al., 2007; Wu and Weld, 2010; Mausam et al., 2012; Stanovsky et al., 2018; Zhan and Zhao, 2020).",
"However, OpenIE aims to extract information with simple structures (i.e., relation tuples) from short texts, and the methods in OpenIE cannot be directly applied to text-to-table.",
"IE is also conducted at document level, referred to as doc-level IE.",
"For example, some NER methods directly perform NER on a long document (Strubell et al., 2017; Luo et al., 2018), and others encode each sentence in a document, use attention to fuse document-level information, and perform NER on each sentence (Hu et al., 2020; Xu et al., 2018).",
"There are also RE methods that predict the relationships between entities in a document (Yao et al., 2019; Nan et al., 2020).",
"However, existing doc-level IE approaches usually do not consider the extraction of complex relations between many items.",
"Sequence-to-sequence (seq2seq) is the general problem of transforming one text into another text (Sutskever et al., 2014; Bahdanau et al., 2015), which includes machine translation, text summarization, etc.",
"The use of the pre-trained language models of BART (Lewis et al., 2020) and T5 (Raf-fel et al., 2020) can significantly boost the performances of seq2seq, such as machine translation (Lewis et al., 2020; Raffel et al., 2020; Liu et al., 2020) and text summarization (Lewis et al., 2020; Raffel et al., 2020; Huang et al., 2020).",
"Recently, some researchers also formalize the IE problems as seq2seq, that is, transforming the input text into an internal representation.",
"One advantage is that one can employ a single model to extract multiple types of information.",
"Results show that this approach works better than or equally well as the traditional approach of language understanding, in RE (Zeng et al., 2018; Nayak and Ng, 2020), NER (Chen and Moschitti, 2018; Yan et al., 2021), EE (Li et al., 2021; Lu et al., 2021) and REE (Du et al., 2021; Huang et al., 2021).",
"Methods that jointly perform multiple tasks including NER, RE, and EE have also been devised (Paolini et al., 2021).",
"Data-to-text aims to generate natural language descriptions from the input structured data such as sports commentaries (Wiseman et al., 2017).",
"The structured data is usually represented as tables (Wiseman et al., 2017; Thomson et al., 2020; Chen et al., 2020), sets of table cells (Parikh et al., 2020; Bao et al., 2018), semantic representations (Novikova et al., 2017), or sets of relation triples (Gardent et al., 2017; Nan et al., 2021).",
"The task requires the model to select the salient information from the data, organize it in a logical order, and generate an accurate and fluent natural language description (Wiseman et al., 2017).",
"Data-to-text models usually adopt the encoder-decoder architecture.",
"The encoders are specifically designed to model the input data, such as multi-layer percep-tron (Puduppully et al., 2019a,b), recurrent neural network (Juraska et al., 2018; Liu et al., 2018; Shen et al., 2020), graph neural network (Marcheggiani and Perez-Beltrachini, 2018; Koncel-Kedziorski et al., 2019), or Transformer (Gong et al., 2019).",
"As shown in Figure 1, text-to-table takes a text as input and produces a table or several tables to summarize the content of the text.",
"Formally, the input is a text denoted as x = x 1 , , x | x | .",
"The output is one table or multiple tables.",
"For simplicity suppose that there is only one table denoted as T .",
"Further, suppose that T has n r rows and n c columns.",
"Thus, T contains n r n c cells, where the cell of row i and column j is a sequence of words t i,j = t i,j, 1 , ..., t i,j, | t i,j | .",
"There are three types of table: one that has both column headers and row headers, one that only has column headers, and one that only has row headers.",
"For example, the player table in Figure 1 has both column headers (Assists, Points, etc) and row headers (Al Horford, Isaiah Thomas, etc).",
"We let t 1 ,j , j = 2 , , n c denote the column headers, let t i, 1 , i = 2 , , n r denote the row headers, and let t i,j , i = 2 , , n r , j = 2 , , n c denote the non-header cells of the table.",
"For example, in the player table in Figure 1, t 1 , 2 = Assists, t 2 , 1 = Al Horford, and t 2 , 2 = 5.",
"The information extracted via text-to-table can be leveraged in many different applications such as document summarization and text mining.",
"For example, in Figure 1, one can quickly obtain the key information of the text by simply looking at the tables summarized from the text.",
"There are differences between text-to-table and traditional IE settings.",
"As can be seen from the example in Figure 1, extraction of information is performed from the entire document.",
"The extracted information (structured data) is in a complex form, specifically multiple types of scores of teams and players in a basketball game.",
"Furthermore, the data-driven approach is taken, and the schemas of the tables do not need to be explicitly defined.",
"The task of text-to-table also has challenges.",
"First, parallel data containing texts and tables is difficult to obtain.",
"Manual construction of such data is usually expensive.",
"Second, structured information may not be easily represented as tables.",
"For example, a knowledge graph may not be easily converted into tables.",
"Third, evaluation of table extraction may not be easy, which includes multiple factors, such as header, content, and structure.",
"We develop a method for text-to-table using the seq2seq approach and the two techniques of table constraint and table relation embeddings.",
"We formalize text-to-table as a sequence-to-sequence (seq2seq) problem (Sutskever et al., 2014; Bahdanau et al., 2015).",
"Specifically, given an input text, we generate a sequence representing the output table (or tables).",
"We introduce two special tokens, a separation token denoted as s and a new-line token denoted as n .",
"For a table t , we represent each row t i with a sequence of cells Figure 2: The sequence representation of the player table in Figure",
"t = s , t 1 , 1 , s , , s , t 1 ,n c , s , n , (2) s , t 2 , 1 , s , , s , t 2 ,n c , s , n , s , t n r , 1 , s , , s , t n r ,n c , s",
"Figure 2 shows the sequence of the player table in Figure",
"1. When there are multiple tables, we create a sequence of tables using the captions of the tables as delimiters.",
"Let x = x 1 , , x | x | and y = y 1 , , y | y | denote the input and output sequences respectively.",
"In inference, the model generates the output sequence based on the input sequence.",
"The model conducts generation in an auto-regressive way, which generates one token at each step based on the tokens it has generated so far.",
"In training, we learn the model based on the text-table pairs { ( x 1 , y 1 ) , ( x 2 , y 2 ) , , ( x n , y n ) } .",
"The objective of learning is to minimize the cross-entropy loss.",
"We refer to the method described above as vanilla seq2seq.",
"There is no guarantee, however, that the output sequence of vanilla seq2seq represents a well-formulated table.",
"We add a postprocessing step to ensure that the output sequence is a table.",
"The post-processing method takes the first row generated as well-defined, deletes extra cells at the end of the other rows, and inserts empty cells at the end of the other rows.",
"We develop two techniques to improve table generation, called table constraint and table relation embeddings.",
"We use our method to denote the seq2seq approach with these two techniques.",
"2 2 Our methods is able to generate the output containing multiple tables.",
"Our method exploits a constraint in the decoding process to ensure that the output sequence represents a well-formulated table.",
"Specifically, our method calculates the number of cells in the first row it generates, and then forces the following rows to contain the same number of cells.",
"Our method also incorporates table relation embeddings including row relation embeddings and column relation embeddings into the self-attention of the Transformer decoder.",
"Given a token in a non-header cell, the row relation embeddings Kr and Vr indicate which row header the token is aligned to, and the column relation embeddings Kc and Vc indicate which column header the token is aligned to.",
"Let us consider the self-attention function in one block of Transformer decoder: at each position, self-attention only attends to the previous positions.",
"For simplicity, let us only consider one head in the self-attention.",
"At the t -th position, the input of self-attention is the sequence of representations z = ( z 1 , , z t ) and the output is the sequence of representations h = ( h 1 , , h t ) , where z i R d and h i R d are the representations at the i -th position ( i = 1 , , t ) .",
"In a conventional Transformer decoder, self-attention is defined as follows, h i = i (cid:88) j =1 ij ( z j WV ) WO , (3) ij = e e ij (cid:80) ij =1 e e ij , e ij = ( z i WQ )( z j WK ) T d k , (4) i = 1 , , t, j = 1 , , i where WQ , WK , WV R d d k are the query, key, and value weight matrices respectively, and WO R d k d is the output weight matrix.",
"where r Kij and r Vij are relation vectors representing",
"the relationship between the i -th position and the j -th position.",
"The relation vectors r Kij and r Vij are defined as follows.",
"For the token at the i -th position, if the token at the j -th position is a part of its row header, then r Kij and r Vij are set to the row relation embeddings Kr and Vr .",
"Similarly, for the token at the i -th position, if the token at the j -th position is a part of its column header, then r K ij and r V ij are set to the column relation embeddings Kc and Vc .",
"Otherwise, r Kij and r Vij are set to 0 .",
"In inference, to identify the row header or the column header of a token, we parse the sequence generated so far to create a partial table using the new-line tokens and separation tokens in the sequence.",
"Figure 3 illustrates how relation vectors are constructed.",
"We make use of four existing datasets which are traditionally utilized for data-to-text: Rotowire (Wise-man et al., 2017), E2E (Novikova et al., 2017), WikiTableText (Bao et al., 2018), and WikiBio (Le-bret et al., 2016).",
"In each dataset, we filter out the content in the tables that does not appear in the texts.",
"We plan to make the processed datasets publicly available for future research.",
"Table 2 gives the statistics of the Rotowire dataset and Table 1 gives the statistics of the other three datasets.",
"Rotowire is from the sports domain.",
"Each instance is composed of a text and two tables, where the text is a report of a basketball game and the two tables represent the scores of teams and players respectively (cf., Figure 1).",
"Each table has column headers describing the types of scores, and row headers describing the names of teams or players.",
"The texts are long and may contain irrelevant information such as the performance of players in other games.",
"Therefore, this is a challenging dataset.",
"stance is a pair of short text and an automatically constructed table, where the text is a description of a restaurant, and the table has two columns with row headers summarizing the characteristics of the restaurant.",
"The tables are automatically constructed, where the texts in the tables are from a limited set and thus lack diversity.",
"WikiTableText is an open-domain dataset.",
"Each instance includes a text and a table, where the text is a description and the table has a row and two columns with row headers collected from Wikipedia.",
"The texts are short and contain information similar to that in the tables.",
"WikiBio is extracted from the Wikipedia biography pages.",
"Each instance consists of a text and a table, where the text is the introduction of Wikipedia page 3 and the table is from the infobox of a Wikipedia page and has two columns with row headers.",
"The input texts are usually long and contain more information than the tables.",
"We know of no existing method that can be directly employed in text-to-table.",
"For each dataset, we first define the schemas based on the training data, then use an existing method of relation extraction (RE) or named entity extraction (NER) to extract information, and finally create tables based on the schemas and extracted information.",
"We take 3 The original dataset only uses the first sentence of the introduction.",
"We use the entire introduction.",
"it as the baseline for the dataset.",
"No baseline can be applied to all four datasets.",
"For RE, we use PURE, a state-of-the-art method (Zhong and Chen, 2021).",
"For NER, we use BERT (Devlin et al., 2019).",
"Training: For vanilla seq2seq and our method, we adopt Transformer (Vaswani et al., 2017) as the model and fine-tune the models from BART-base.",
"We also experiment with BART-large.",
"For RE and NER, we fine-tune the models from BERT-base-uncased.",
"All models are trained with Adam optimizer until convergence.",
"Hyper-parameters are shown in Appendix A. For the small datasets of Rotowire and WikiTableText, we run experiments five times with different random seeds and take the average of results to reduce variance.",
"Evaluation: We evaluate the performance of a method based on (1) the number of correct headers and (2) the number of correct non-header cells.",
"We adopt the F1 score as the evaluation measure.",
"For each table, we compare the set of predicted results y against the set of ground-truth y .",
"Precision is defined as the percentage of the correctly predicted results among the predicted results, i.e., P = 1 | y | (cid:80) y y max y y O ( y, y ) .",
"Recall is defined as the percentage of the correctly predicted results among the ground-truth, i.e., R = 1 | y | (cid:80) y y max y y O ( y, y ) .",
"Finally, F 1 = 2 / (1 /P + 1 /R ) .",
"Here, O ( ) denotes a way of similarity calculation.",
"We consider three ways: exact match, chrf (Popovic, 2015) and rescaled BERTScore (Zhang et al., 2020).",
"Exact match conducts an exact match between two texts.",
"Chrf calculates character-level n-gram similarity between two texts.",
"BERTScore calculates the similarity of BERT embeddings between two texts.",
"For non-header cells, we use not only the content but also the header(s) to ensure that the cell is on the right row (and column), and calculate the similarity O ( ) as the product of header similarity and cell content similarity.",
"4 We evaluate the measures of a 4 As shown in Figure 1, the tables in the dataset contain empty cells.",
"The empty cells do not contain information.",
"Therefore, we ignore the empty cells and only use the nonempty cells in the evaluation.",
"generated table and then take the average on all tables.",
"This evaluation assumes that the order of rows and columns is not important.",
"We find that this assumption is applicable to the four datasets and many real-world scenarios.",
"We also evaluate the percentage of output sequences that cannot represent well-formulated tables, referred to as error rate.",
"Table 3 shows the results on the Rotowire dataset.",
"One can see that our method performs the best followed by vanilla seq2seq in terms of most of the measures, especially the F1 score on non-header cells.",
"Both outperform the baselines of doc-level RE and sent-level RE.",
"The RE baselines perform quite well, but they heavily rely on rules and cannot beat the seq2seq approach.",
"Among them, the doc-level RE performs better than sent-level RE, because some information in Rotowire can only be extracted when the cross-sentence context is provided.",
"We implement two baselines of RE, namely doc-level RE and sent-level RE.",
"We take team names, player names, and numbers of scores as entities and take types of scores as relations.",
"Sent-level RE predicts the relations between entities within each sentence.",
"Doc-level RE predicts the relations between entities within a window (the window size is 12 entities) and uses the approximation model proposed by Zhong and Chen (2021) to speed up inference.",
"Table 4 shows the results of our method, vanilla seq2seq, and the baseline of NER on E2E, WikiTableText, and WikiBio.",
"Again, the seq2seq approach outperforms the baseline.",
"Our method and vanilla seq2seq are comparable, because the table structures in the three datasets are very simple (there are only two columns in the tables), and the use of the two techniques does not further improve the performances.",
"The NER baseline has high precision but low recall, mainly because NER can only make the right decision when it is clear.",
"We implement the baseline of NER in the following way.",
"We view the non-head cells in the tables as entities and their row headers as entity types.",
"In training, we match the non-head cells into the texts 2524 Pre TC TRE Rotowire/Team Rotowire/Player E2E WikiTableText WikiBio 28.05 7.75 94.45 46.37 67.51 30.61 10.67 95.53 47.13 67.43 82.97 81.96 97.87 59.26 68.98 83.09 82.24 97.88 59.29 68.98 83.30 82.50 97.87 59.12 69.02 83.36 82.53 97.88 59.14 69.02 Table 5: Results of ablation study on our method by excluding pre-trained language model (Pre), table constraint (TC) and table relation embeddings (TRE).",
"Method Rotowire/Team Rotowire/Player E2E WikiTableText WikiBio Vanilla seq2seq (BART base) 82.97 81.96 97.87 59.26 68.98 Our method (BART base) 83.36 82.53 97.88 59.14 69.02 Vanilla seq2seq (BART large) 86.31 86.59 97.94 62.71 69.66 Our method (BART large) 86.31 86.83 97.90 62.41 69.71 Table 6: Results of our method and vanilla seq2seq with base and large BART models on all four datasets.",
"and take them as entities in the texts.",
"Only a proportion of the non-header cells can be matched into the texts ( 85% for E2E, 74% for WikiTableText, and 69% for WikiBio).",
"We carry out an ablation study on our method.",
"Specifically, we exclude pre-trained language model, table constraint (TC), and table relation embeddings (TRE) from our method.",
"Note that our method without TC and TRE is equivalent to vanilla seq2seq.",
"Table 5 gives the results on the four datasets.",
"It can be seen that the use of both TC and TRE can significantly improve the performance on Rotowire, which indicates that our method is particularly effective when the tables are large with many rows and columns.",
"There are no significant improvements on E2E, WikiTableText, and WikiTableText, apparently because the formulation of tables is easy for the three datasets.",
"Therefore, we conclude that the two techniques of TC and TRE are helpful when the task is difficult.",
"Rotowire and WikiTableText.",
"This indicates that pre-trained language model is particularly helpful when the task is difficult and the size of training data is small.",
"We observe that vanilla seq2seq makes more formatting errors than our method, especially on player tables in Rotowire that have a large number of columns.",
"It indicates that for vanilla seq2seq, it is difficult to keep track of the columns in each row and make alignments with the column headers.",
"In contrast, the two techniques of our method can help effectively cope with the problem.",
"Figure 4 shows a bad case of vanilla seq2seq, where the model correctly infers the column of assists but fails to infer the columns of personal fouls, points, and total rebounds for the row of Rajon Rondo.",
"In contrast, our method can successfully handle the case, because TC can eliminate the incorrectly formatted output, and TRE can make correct alignments with the column headers.",
"We also investigate the effect of the scale of pre-trained language model BART.",
"We use both BART-base and BART-large and conduct fine-tuning on top of them for vanilla seq2seq and our method.",
"Table 6 gives the results on the four datasets.",
"The 2525 results show that the use of BART-large can further boost the performances on all four datasets, indicating that it is better to use larger pre-trained models when computation cost is not an issue.",
"We analyze the experimental results on the four",
"datasets and identify five challenging issues.",
"(1) Text Diversity: Extraction of the same content from different expressions is one challenge.",
"For example, the use of synonyms is very common in Rotowire.",
"The team of Knicks is often referred to as New York, its home city.",
"Identification of the same entities from different expressions is needed in the task.",
"(2) Text Redundancy: There are cases such as those in WikiBio, in which the texts contain much redundant information.",
"This poses a challenge to the text-to-table model to have a strong ability in summarization.",
"It seems that the seq2seq approach works well to some extent but further improvement is undoubtedly necessary.",
"(3) Large Table: The tables in Rotowire have large numbers of columns, and the extraction from them is challenging even for our method of using TC and TRE.",
"(4) Background Knowledge: WikiTableText and WikiBio are from open domain.",
"Thus, performing text-to-table on such kind of datasets require the use of much background knowledge.",
"A possible way to address this challenge is to use more powerful pre-trained language models or external knowledge bases.",
"(5) Reasoning: Sometimes the information is not explicitly presented in the text, and reasoning is required to conduct correct extraction.",
"For example, an article in Rotowire reports a game between the two teams Nets and Wizards.",
"From the sentence: The Nets seized control of this game from the very start, opening up a 31 14 lead after the first quarter, humans can infer that the point of Wizards is 14 , which is still difficult for machines.",
"We propose employing text-to-table as a new way of information extraction (IE), which extracts information of interest from the input text and summarizes the extracted information in tables.",
"The advantage of the approach is that one can easily conduct information extraction from either short texts or long texts to create simple tables or complex tables without explicitly defining the schemas.",
"Text-to-table can be viewed as an inverse problem of table-to-text.",
"We formalize text-to-table as a sequence-to-sequence problem on top of a pre-trained model.",
"We further propose an improved method using a seq2seq model and table constraint and table relation embeddings techniques.",
"We conduct experiments on four datasets derived from existing table-to-text datasets.",
"The results demonstrate that our proposed approach outperforms existing methods using conventional IE techniques.",
"We further analyze the challenges of text-to-table for future study.",
"The issues include diversity of text, redundancy of text, large table, background knowledge, and reasoning."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"abstain",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"abstain"
] |
[
"An increasing number of people in the world today speak a mixed-language as a result of being multilingual.",
"However, building a speech recognition system for code-switching remains difficult due to the availability of limited resources and the expense and significant effort required to collect mixed-language data.",
"We therefore propose a new learning method, meta-transfer learning , to transfer learn on a code-switched speech recognition system in a low-resource setting by judiciously extracting information from high-resource monolingual datasets.",
"Our model learns to recognize individual languages, and transfer them so as to better recognize mixed-language speech by conditioning the optimization on the code-switching data.",
"Based on experimental results, our model outperforms existing baselines on speech recognition and language modeling tasks, and is faster to converge.",
"In bilingual or multilingual communities, speakers can easily switch between different languages within a conversation (Wang et al., 2009).",
"People who know how to code-switch will mix languages in response to social factors as a way of communicating in a multicultural society.",
"Generally, code-switching speakers switch languages by taking words or phrases from the embedded language to the matrix language.",
"This can occur within a sentence, which is known as intra-sentential code-switching or between two matrix language sentences, which is called inter-sentential code-switching (Heredia and Altarriba, 2001).",
"Learning a code-switching automatic speech recognition (ASR) model has been a challenging task for decades due to data scarcity and diffi-culty in capturing similar phonemes in different These two authors contributed equally.",
"languages.",
"Several approaches have focused on generating synthetic speech data from monolingual resources (Nakayama et al., 2018; Winata et al., 2019).",
"However, these methods are not guaranteed to generate natural code-switching speech or text.",
"Another line of work explores the feasibility of leveraging large monolingual speech data in the pre-training and applying fine-tuning on the model using a limited source of code-switching data, which has been found useful to improve the performance (Li et al., 2011; Winata et al., 2019).",
"However, the transferability of these pretraining approaches is not optimized on extracting useful knowledge from each individual languages in the context of code-switching, and even after the fine-tuning step, the model forgets about the previously learned monolingual tasks.",
"In this paper, we introduce a new method, meta-transfer learning 1 , to learn to transfer knowledge from source monolingual resources to a code-switching model.",
"Our approach extends the model-1 The code is available at https://github.com/audioku/meta-transfer-learning agnostic meta learning (MAML) (Finn et al., 2017) to not only train with monolingual source language resources but also optimize the update on the code-switching data.",
"This allows the model to leverage monolingual resources that are optimized to detect code-switching speech.",
"Figure 1 illustrates the optimization flow of the model.",
"Different from joint training, meta-transfer learning computes the first-order optimization using the gradients from monolingual resources constrained to the code-switching validation set.",
"Thus, instead of learning one model that is able to generalize to all tasks, we focus on judiciously extracting useful information from the monolingual resources.",
"The main contribution is to propose a novel method to transfer learn information efficiently from monolingual resources to the code-switched speech recognition system.",
"We show the effectiveness of our approach in terms of error rate, and that our approach is also faster to converge.",
"We also show that our approach is also applicable to other natural language tasks, such as code-switching language modeling tasks.",
"Meta-learning Our idea of learning knowledge transfer from source monolingual resources to a code-switching model comes from MAML (Finn et al., 2017).",
"Probabilistic MAML (Finn et al., 2018) is an extension of MAML, which has better classification coverage.",
"Meta-learning has been applied to natural language and speech processing (Hospedales et al., 2020).",
"Madotto et al. (2019) extends MAML to the personalized text generation domain and successfully produces more persona-consistent dialogue.",
"Gu et al. (2018) and Qian and Yu (2019) and Lin et al. (2019) propose to apply meta-learning on low-resource learning.",
"Yu et al. (2020) applies MAML to hypernym detection.",
"Several applications have been proposed in speech applications, such as cross-lingual speech recognition (Hsu et al., 2019), speaker adaptation (Klejch et al., 2018, 2019), and cross-accent speech recognition (Winata et al., 2020).",
"Code-Switching ASR Li and Fung (2012) introduces a statistical method to incorporate a linguistic theory into a code-switching speech recognition system, and Adel et al. (2013a,b) explore syntactic and semantic features on recurrent neural networks (RNNs).",
"Baheti et al. (2017) adapts effective curriculum learning by training a network Algorithm 1 Meta-Transfer Learning Require: D src , D tgt Require: , : step size hyperparameters 1: Randomly initialize 2: while not done do 3: Sample batch data D tra ( D src , D tgt ) , D val D tgt 4: for all D tra T i D tra do 5: Evaluate LD tra T i ( f ) using D tra T i 6: Compute adapted parameters with gradient descent: (cid:48)T i = LD tra T i ( f ) 7: end for 8: (cid:80) i LD val (cid:16) f (cid:48)T i (cid:17) 9: end while with monolingual corpora of two languages, and subsequently training on code-switched data.",
"Prat-apa et al. (2018) and Lee et al. (2019) propose to use methods to generate artificial code-switching data using a linguistic constraint.",
"Winata et al. (2018) proposes to leverage syntactic information to improve the identification of the location of code-switching points, and improve the language model performance.",
"Finally Garg et al. (2018) and Winata et al. (2019) propose new neural-based methods using SeqGAN and pointer-generator (Pointer-Gen) to generate diverse synthetic code-switching sentences that are sampled from the real code-switching data distribution.",
"We aim to effectively transfer knowledge from source domains to a specific target domain.",
"We denote our model by f with parameters .",
"Our model accepts a set of speech inputs X = { x 1 , . . . , x n } and generates a set of utterances Y = { y 1 , . . . , y m } .",
"The training involves a set of speech datasets in which each dataset is treated as a task T i .",
"Each task is distinguished as either a source D src or target task D tgt .",
"For each training iteration, we randomly sample a set of data as training D tra , and a set of data as validation D val .",
"In this section, we present and formalize the method.",
"To facilitate the model to achieve a good generalization on the code-switching data, we sample the source dataset D src from monolingual English ( en ) and Chinese ( zh ) and code-switching ( cs ) corpora,",
"and choose the target dataset D tgt only from the code-switching corpus.",
"The code-switching data samples between D src and D tgt are disjoint.",
"In this case, we exploit the meta-learning update using meta-transfer learning to acquire knowledge from the monolingual English and Chinese corpora, and optimize the learning process on the code-switching data.",
"Then, we slowly fine-tune the trained model to become closer to the code-switching domain by avoiding aggressive updates that can push the model to a worse position.",
"Our approach extends the meta-learning paradigm to adapt knowledge learned from source domains to a specific target domain.",
"This approach captures useful information from multiple resources to the target domain, and updates the model accordingly.",
"Figure 1 presents the general idea of meta-transfer learning.",
"The goal of the meta-transfer learning is not to focus on generalizing to all tasks, but to focus on acquiring crucial knowledge to transfer from monolingual resources to the code-switching domain.",
"As shown in Algorithm 1, for each adaptation step on T i , we compute updated parameters (cid:48)T i via stochastic gradient descent (SGD) as follows: (cid:48)T i = LD tra T i ( f ) , (1) where is a learning hyper-parameter of the inner optimization.",
"Then, a cross-entropy loss LD val is calculated from a learned model upon the generated text given the audio inputs on the target domain: LD val = (cid:88) D val D tgt log p ( y t | x t ; (cid:48)T i ) .",
"(2) We define the objective as follows: min (cid:88) D tra T i , D val LD val ( f (cid:48)T i ) = (3) (cid:88) D tra T i , D val LD val ( f L Dtra T i ( f ) ) , (4) where D tra T i ( D src , D tgt ) and D val D tgt .",
"We minimize the loss of the f (cid:48)T i upon D val .",
"Then, we apply gradient descent on the meta-model parameter with a meta-learning rate.",
"2018; Winata et al., 2019).",
"The encoder employs VGG (Simonyan and Zisserman, 2015) to learn a language-agnostic audio representation and generate input embeddings.",
"The decoder receives the encoder outputs and applies multi-head attention to the decoder input.",
"We apply a mask into the decoder attention layer to avoid any information flow from future tokens.",
"During the training process, we optimize the next character prediction by shifting the transcription by one.",
"Then, we generate the prediction by maximizing the log probability of the sub-sequence using beam search.",
"To further improve the prediction, we incorporate Pointer-Gen LM (Winata et al., 2019) in a beam search process to select the best sub-sequence scored using the softmax probability of the characters.",
"We define P ( Y ) as the probability of the predicted sentence.",
"We add the pointer-gen language model p lm ( Y ) to rescore the predictions.",
"We also include word count wc(Y) to avoid generating very short sentences.",
"P ( Y ) is calculated as follows: P ( Y ) = P ( Y | X )+ p lm ( Y )+ (cid:112) wc ( Y ) , (5) where is the parameter to control the decoding probability, is the parameter to control the language model probability, and is the parameter to control the effect of the word count.",
"We use SEAME Phase II, a conversational English-Mandarin Chinese code-switching speech corpus that consists of spontaneously spoken interviews and conversations (Nanyang Technological University, 2015).",
"The data statistics and code-switching metrics, such as code mixing index (CMI) (Gamback and Das, 2014) and switch-point Model CER Winata et al. (2019) 32.76% + Pointer-Gen LM 31.07% Only CS 34.51% Joint Training ( EN + ZH ) 98.29% + Fine-tuning 31.22% Joint Training ( EN + CS ) 34.77% Joint Training ( ZH + CS ) 33.93% Joint Training ( EN + ZH + CS ) 32.87% + Fine-tuning 31.90% + Pointer-Gen LM 31.74% Meta-Transfer Learning ( EN + CS ) 32.35% Meta-Transfer Learning ( ZH + CS ) 31.57% Meta-Transfer Learning ( EN + ZH + CS ) 30.30% + Fine-tuning 29.99% + Pointer-Gen LM 29.30% Table 2: Results of the evaluation in CER, a lower CER is better.",
"fraction (Pratapa et al., 2018) are depicted in Table 1.",
"For monolingual speech datasets, we use HKUST (Liu et al., 2006) as the monolingual Chinese dataset, and Common Voice (Ardila et al., 2019) as the monolingual English dataset.",
"2 We use 16 kHz audio inputs and up-sample the HKUST data from 8 to 16 kHz.",
"Our transformer model consists of two encoder layers and four decoder layers with a hidden size of 512, an embedding size of 512, a key dimension of 64, and a value dimension of 64.",
"The input of all the experiments uses spectrogram, computed with a 20 ms window and shifted every 10 ms. Our label set has 3765 characters and includes all of the English and Chinese characters from the corpora, spaces, and apostrophes.",
"We optimize our model using Adam and start the training with a learning rate of 1e-4.",
"We fine-tune our model using SGD with a learning rate of 1e-5, and apply an early stop on the validation set.",
"We choose = 1 , = 0 .",
"1 , and = 0 .",
"1 .",
"We draw the sample of the batch randomly with a uniform distribution every iteration.",
"We conduct experiments with the following approaches:",
"(a) only CS ,",
"(b) joint training on EN + ZH ,",
"(c) joint training on EN + ZH + CS , and",
"(d) meta-transfer learning.",
"Then, we apply fine-tuning",
"(b) ,",
"(c) , and",
"(d) models on CS .",
"We apply 2 We downloaded the CommonVoice version 1 dataset from https://voice.mozilla.org/.",
"LM rescoring on our best model.",
"We evaluate our model using beam search with a beam width of 5 and maximum sequence length of 300.",
"The quality of our model is measured using character error rate (CER).",
"The results are shown in Table",
"2. Generally, adding monolingual data EN and ZH as the training data is effective to reduce error rates.",
"There is a significant margin between only CS and joint training (1.64%) or meta-transfer learning (4.21%).",
"According to the experiment results, meta-transfer learning consistently outperforms the joint-training approaches.",
"This shows the effectiveness of meta-transfer learning in language adaptation.",
"The fine-tuning approach helps to improve the performance of trained models, especially on the joint training ( EN + ZH ).",
"We observe that joint training ( EN + ZH ) without fine-tuning cannot predict mixed-language speech, while joint training on EN + ZH + CS is able to recognize it.",
"However, according to Table 3, adding a fine-tuning step badly affects the previous learned knowledge (e.g., EN : 11.84% 63.85%, ZH : 31.30% 78.07%).",
"Interestingly, the model trained with meta-transfer learning does not suffer catastrophic forgetting even without focusing the loss objective to learn both monolingual languages.",
"As expected, joint training on EN + ZH + CS achieves decent performance on all tasks, but it does not optimally improve CS .",
"The language model rescoring using Pointer-Gen LM improves the performance of the meta-transfer Model CS EN ZH Only CS -66.71% 99.66% Joint Training ( EN + ZH ) -63.78% 11.84% 31.30% + Fine-tuning 3.29% 63.85% 78.07% Joint Training ( EN + ZH + CS ) 1.64% 13.88% 30.46% + Fine-tuning 2.61% 57.56% 76.20% Meta-Transfer Learning ( EN + ZH + CS ) 4.21% 16.22% 31.39% Table 3: Performance on monolingual English CommonVoice test set ( EN ) and HKUST test set ( ZH ) in CER.",
"learning model by choosing more precise code-switching sentences during beam search.",
"Pointer-Gen LM improves the performance of the model, and outperforms the model trained only in CS by 5.21% and previous state-of-the-art by 1.77%.",
"Convergence Rate Figure 2 depicts the dynamics of the validation loss per iteration on CS , EN , and ZH .",
"As we can see from the figure, meta-transfer learning is able to converge faster than only CS and joint training, and results in the lowest validation loss.",
"For the validation losses on EN and ZH , both joint training ( EN + ZH + CS ) and meta-transfer learning achieve a similar loss in the same iteration, while only CS achieves a much higher validation loss.",
"This shows that meta-transfer learning is not only optimized on the code-switching domain, but it also preserves the generalization ability to monolingual domains, as depicted in Table",
"3. 5.4 Language Modeling Task We further evaluate our meta-transfer learning approach on a language model task.",
"We simply take the transcription of the same datasets and build a 2-layer LSTM-based language model following the model configuration in Winata et al. (2019).",
"To further improve the performance, we apply fine-tuning with an SGD optimizer by using a learning rate of 1.0, and decay the learning rate by 0.25x for every epoch without any improvement on the validation performance.",
"To prevent the model from over-fitting, we add an early stop of 5 epochs.",
"As shown in Table 4, the meta-transfer learning approach outperforms the joint-training approach.",
"We find a similar trend for the language model task results to the speech recognition task where meta-transfer learning without additional fine-tuning performs better than joint training with fine-tuning.",
"Compared to our baseline model (Only CS ), meta-transfer learning is able to reduce the test set perplexity by 3.57 points (65.71 62.14), and the post fine-tuning step reduces the test set perplexity even further, from 62.14 to 61.97.",
"We propose a novel method, meta-transfer learning , to transfer learn on a code-switched speech recognition system in a low-resource setting by judiciously extracting information from high-resource monolingual datasets.",
"Our model recognizes individual languages and transfers them so as to better recognize mixed-language speech by conditioning the optimization objective to the code-switching domain.",
"Based on experimental results, our training strategy outperforms joint training even without adding a fine-tuning step, and it requires less iterations to converge.",
"In this paper, we have shown that our approach can be effectively applied to both speech processing and language modeling tasks.",
"Finally, we will explore further the generability of our meta-transfer learning approach to more downstream multilingual tasks in our future work.",
"This work has been partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government, and School of Engineering Ph.D.",
"Fellowship Award, the Hong Kong University of Science and Technology, and RDC 1718050-0 of EMOS.AI."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"result",
"result",
"objective",
"other",
"other"
] |
[
"This paper presents a novel task to generate poll questions for social media posts.",
"It offers an easy way to hear the voice from the public and learn from their feelings to important social topics.",
"While most related work tackles formally-written texts (e.g., exam papers), we generate poll questions for short and colloquial social media messages exhibiting severe data sparsity.",
"To deal with that, we propose to encode user comments and discover latent topics therein as contexts.",
"They are then incorporated into a sequence-to-sequence (S2S) architecture for question generation and its extension with dual decoders to additionally yield poll choices (answers).",
"For experiments, we collect a large-scale Chinese dataset from Sina Weibo containing over 20K polls.",
"The results show that our model outperforms the popular S2S models without exploiting topics from comments and the dual decoder design can further benefit the prediction of both questions and answers.",
"Human evaluations further exhibit our superiority in yielding high-quality polls helpful to draw user engagements.",
"Social media is a crucial outlet for people to exchange ideas, share viewpoints, and keep connected with the world.",
"It allows us to hear the public voice for decision making and better understanding our society.",
"Nevertheless, for the silent majority, they tend to read others' messages instead of voicing their own opinions with words, possibly because of the introvert personality, busy schedule, and others.",
"How shall we better engage them into the discussions and learn from their thoughts?",
"In this work, we present a novel application to automatically generate a poll question for a social media post.",
"It will encourage public users, especially those reluctant to comment with words, to Jing Li is the corresponding author.",
"[ P 1 ] : ...B ( The market value of B site exceeds iQiyi )... [ Q 1 ] : app",
"( Which app do you usually use to watch videos? ) [ A 1 ] : ( Tencent Video ); ( Youku ); ( iQiyi ); B ( B site ) [ P 2 ] : ... vocal ... ... ... ( A rational analysis of Akira and Curley G: Curley's vocal is indeed great, but ... her dancing is not that good; Akira dances well ... but her singing is weaker ...) [ Q 2 ] : c",
"( Who should take the center position? ) [ A 2 ] : ( Akira ); ( Curley G ) Figure 1: Example polls from Sina Weibo.",
"input their reflections via voting.",
"For example, the statistics of our dataset show that 13K users on average engaged in a poll compared with 173 commented to a post.",
"For a better illustration of the task, Figure 1 shows two example poll questions on Sina Weibo 1 , henceforth Weibo, a popular Chinese microblog.",
"The goal of our task is to output an opinion question, such as Q 1 and Q 2 , and invite other users to engage in the discussion to a source post (e.g., P 1 and P 2 ); poll choices (answers like A 1 and A 2 ) can be produced together to allow easy public engagement (via voting).",
"To date, most progress made in question generation is built upon the success of encoder-decoder frameworks (Du et al., 2017).",
"Despite of the extensive efforts made in this line (Sun et al., 2018; Yao et al., 2018; Chai and Wan, 2020; Sun et al., 2020), most previous work focus on the processing of formally-written texts, such as exam questions 1 weibo.com in reading comprehension tests.",
"The existing methods are therefore suboptimal to handle social media languages with short nature and informal styles, which might present challenges to make sense of the source posts and decide what to ask.",
"For example, from the limited words in P 1 , it is hard to capture the meanings of B ( B site ) and ( iQiyi ) as video apps, which is nevertheless crucial to predict Q 1 .",
"Moreover, the question itself, being in social media fashion, is likely to contain fresh words, such as c ( center position ) in Q 2 , which may further hinder the models' capability to predict the poll questions in social media style.",
"To tackle these challenges, we first enrich the short contexts of source posts with other users' comments; a neural topic model is employed to discover topic words therein and help identify the key points made in source posts.",
"It is based on the assumption that the salient words in a source post are likely to be echoed in its comments (Wang et al., 2019b), potentially useful to learn the map from posts to poll questions.",
"For example, the core words in Q 1 app and ( video ) co-occur frequently in the comments with B ( B site ) and ( iQiyi ), which may help the model to link their meanings together.",
"The topic representations are then incorporated into a sequence-to-sequence (S2S) architecture to decode poll questions word by word.",
"Furthermore, we extend the basic S2S to a version with dual decoders to generate questions and answers in a multi-task learning setting and further exploit their correlations.",
"For example, modeling answers in A 2 might help indicate that P 2 centers around ( Akira ) and ( Curley G ), two celebrities.",
"To the best of our knowledge, this work is the first to study poll questions on social media, where their interactions among answer choices, source posts, and reader users' comments are comprehensively explored .",
"As a pilot study over social media polls, we also contribute the very first dataset containing around 20K Weibo polls associated with their source posts and user comments.",
"2 We believe our dataset, being the first of its kind, will largely benefit the research on social media polls and how they help promote the public engagements.",
"automatic evaluation results show that the latent topics learned from the first few pieces of user comments is already helpful they result in our models' significantly better performance than the S2S baselines and their trendy extensions proposed for other tasks.",
"For example, our full model achieves 38.24 ROUGE-1 while S2S with RoBERTa (Liu et al., 2019) yields 34.08.",
"Human evaluation further demonstrates our models' capability to generate poll questions relevant to the source post, fluent in language, and particularly engaging to draw user attentions for discussions.",
"We then quantify models' sensitivities to the length of varying source posts and poll questions, where the scores of our model are consistently better.",
"Next, we find our model exhibits an increasing trend in predicting poll questions that will engage more comments in the future, which suggests the potential helpfulness of comments to indicate engaging questions.",
"At last, the performance of dual decoder designs are discussed and it is shown that joint prediction of questions and their answers can benefit both tasks.",
"Our major input is a social media post (i.e., source post ) and the main output a poll question that continue the senses of the source post and encourage public users to voice opinions.",
"For each question, possible answer choices (i.e., answers ) may also be yielded as a side product to enable participants to easily input their thoughts.",
"To enrich the contexts of source posts, their reply messages (i.e., user comments ) are also encoded as external features.",
"Data Collection.",
"Weibo allows users to create polls, asking questions to the public and inviting others to share their thoughts via voting.",
"It enables the construction of a dataset with user-generated polls.",
"At the beginning, we gathered around 100K random Weibo posts, whereas less than 0.1% of them contain polls.",
"The sparse distribution of polls presents the challenge to scale up the dataset.",
"To deal with that, we looked in to the sampled polls and draw two interesting points: first, many polls carry trendy hashtags (user-annotated topic labels like #COVID19) to draw user attentions; second, a user who once created a poll is likely to do it again.",
"Post Comment Qs Ans Choice Voter Num Len Num Len Len Num Len Num 20,252 54.0 173 16.9 11.0 3.4 5.9 13,004 Table 1: Statistics of our dataset.",
"Num: number; Num: average number per post.",
"Len: average count of words per post; Qs: question; Ans: answer.",
"Inspired by these observations, we first obtained the popular hashtags since Nov 2019.",
"3 Then, we gathered the posts under the hashtag through the Weibo search API, from which the ones containing polls are picked out.",
"4 Next, we examined the authors of these polls and access their posting history to gather more polls they created from Weibo user timeline API.",
"5 Afterwards, for each post, we crawled its comments via the comment API.",
"6 Finally, 20,252 polls were obtained from 1,860 users.",
"Data Analysis.",
"The statistics of the dataset is displayed in Table 1.",
"As can be seen, comments are shorter than posts, probably because users tend to put more efforts in crafting original posts than replying to others and hence comments may be relatively nosier than original posts; both questions and answers are short, which follow the fashion of user-generated contents on social media.",
"To further investigate the data sparsity in social media contents, we sample some texts from LDC news corpus (formally-written texts) (Ahtaridis et al., 2012) the samples contain the same token number as our social media texts.",
"Our corpus's vocabulary size and entropy are 24,884 and 7.46, while those for news corpus are 9,891 and 5.98.",
"This suggests the sparsity of social media data.",
"We also observe that each post exhibits more voters than comments, implying that users may prefer to voice opinions via voting, which is easier than commenting with words.",
"We further analyze the effects of polls on user engagements and draw an interesting finding.",
"For the same author, their posts with polls exhibit 1.65, 22.2, and 1.80 times comments, likes, and reposts on average compared to posts without polls.",
"7 This implies that adding polls indeed help to draw user engagements to a post.",
"3 https://open.weibo.com/wiki/Trends/en 4 https://open.weibo.com/wiki/C/2/ search/statuses/limited 5 https://open.weibo.com/wiki/C/2/ statuses/user_timeline_batch 6 https://open.weibo.com/wiki/2/ comments/show 7 For each author, we additionally sample 500 posts without polls for comparison.",
"For each poll, there are less than 4 answer choices on average.",
"To further characterize that, Figure",
"2(a) shows the count of polls over varying numbers of answer choices appearing in them and the statistics suggest that most users are not willing to craft over 5 poll choices, which, interestingly, exhibit similar statistics in exam questions.",
"In addition, we probe into what types of topics are more likely to contain polls.",
"To that end, we examined source posts with hashtags and manually categorized the hashtags into 11 topics.",
"Figure",
"2(b) shows the poll distribution over topics.",
"Most polls fall in social events category, which mostly concern public emergency and in our dataset tremendous posts focus on the outbreak of COVID-19.",
"There are also a large proportion of polls concern entertainment topics such as celebrities and TV shows, probably initiated for advertising purpose.",
"This section introduces our framework with two variants: one based on a basic S2S (single decoder) and the other is its extension with dual decoders to predict poll questions and answer choices in a multitask learning setting.",
"The model architecture of the dual decoder model is shown in Figure 3.",
"Following the common practice in S2S (Du et al., 2017), we encode a source post P in the form of word sequence (cid:104) w 1 , w 2 , ..., w | P | (cid:105) , where | P | is the number of words in the post.",
"For user comments C , bag of words (BOW) representations are employed for topic modeling, henceforth C bow over BoW vocabulary.",
"More details are provided below.",
"Source Post Encoding.",
"To encode the post sequence P , a bidirectional gated recurrent unit (Bi-GRU) (Cho et al., 2014) is adopted.",
"For the i -th word w i P , we first convert it into an embedding vector i , which is later processed into hidden Figure 3: The architecture of the dual decoder S2S (sequence-to-sequence) model to jointly generate questions and answers.",
"states in the forward ( h i ) and backward ( h i ) directions, respectively.",
"They are then concatenated as h i = [ h i ; h i ] and sequentially put into a memory bank M = (cid:104) h 1 , h 1 , ..., h | P | (cid:105) , which will be further delivered to decoders for their attentive retrieval.",
"User Comments Modeling.",
"Considering the noisy nature of user comments, latent topics are employed to recognize the salient contents therein.",
"They are explored based on word statistics and represented as clusters of words tending to co-occur in the comments of some posts (probably concerning similar topics), such as the names of video apps in Figure 1.",
"In topic modeling, we assume there are K topics and each topic k is represented with a topic-word distribution over the BoW vocabulary.",
"A post P has a topic mixture , which is learned from the words appearing in its comments C bow .",
"Our topic learning methods (from comments) are inspired by the neural topic model (NTM) based on variational auto-encoder (VAE) (Miao et al., 2017; Zeng et al., 2018), which allows the end-to-end training of NTM with other modules in an unified neural architecture.",
"It employs an encoder and a decoder to resemble the data reconstruction process of the comment words in BoW.",
"Concretely, the input C bow is first encoded into prior parameters and using neural perceptrons.",
"Then, through Gaussian transformation, they are applied to draw a latent variable: z = N ( , 2 ) , which is further taken to produce the topic composition of comments ( ) with softmax transformation.",
"At last, the decoder reconstructs comments and produces a BOW vector C (cid:48) bow (conditioned on the latent topic ) through another neural perception.",
"Here we further describe how we generate questions (and answers in the dual decoders settings) with the encoded source posts and comments.",
"Question Generation.",
"To handle the output of a question Q , the corresponding decoder (i.e., question decoder ) is formed with a uni-directional GRU and fed with the memory bank M from source post encoding and the topic distribution from user comment modeling.",
"The words in Q are predicted sequentially with the following formula: Pr ( Q | P, C bow ) = | q | (cid:89) j =1 Pr ( q j | q <j , M , ) (1) where q j means the j -th word in Q and q <j refers to Q 's predicted word sequence from slot 1 to j 1 .",
"To leverage comment modeling results in the decoding, we incorporate into the attention weights (defined below) over source posts and concentrate on topic words therein for question generation.",
"ij = exp ( f ( h i , s j , )) (cid:80) | P | i (cid:48) =1 exp ( f ( h i (cid:48) , s j , )) (2) s j is the GRU decoder's j -th hidden states and: f ( h i , s j , ) = v T tanh ( W [ h i ; s j ; ] + b ) (3) In addition, we adopt copy mechanism (See et al., 2017) to allow the generated questions to contain the keywords from the source posts: p j = j p gen + (1 j ) p copy (4) p gen refers to the likelihood to generate a word while p copy is the extractive distribution derived from the attention weights over the source input.",
"The soft switcher j [0 , 1] can determine whether to copy a word or generate a new one in aware of the comments' topics: j = sigmoid ( W [ u j ; s j ; t j ; ] + b ) (5) t j is the context vector (weighted sum) of the attention to predict the Q 's j -th word, whose embedding is u j .",
"W and b are both learnable parameters.",
"Answer Generation.",
"To further explore the relations between questions ( Q ) and answers ( A ), we replicate the question decoder's architecture and form another decoder to handle answer generation ( answer decoder ).",
"The answer choices are concatenated to form an answer sequence and neighboring choices are separated with a special token < sep > .",
"The answer decoder also adopts the same topic-aware attentions (Eq.",
"2) as the question decoder (denoted as ij here) and copy mechanisms (Eq.",
"4) to be able to put topic words from the source into the answer choices, such as ( Akira ) and ( Curley G ) in Figure 1.",
"Question decoder and answer decoder work together in a dual decoders setting, whose parameters are updated simultaneously to exploit the essential correlations of poll questions and their answers.",
"This subsection describes how we jointly train the neural topic model (henceforth NTM) for comment modeling and the decoders for question and answer generation with multi-task learning.",
"The loss function for NTM is defined as: LNTM = DKL ( p ( z ) || q ( z | C )) E q ( z | C ) [ p ( C | z )] (6) The C above refers to C bow .",
"The first term is the KL divergence loss and the second is the reconstruction loss in VAE.",
"For question generation, the loss is: LQG = N (cid:88) n =1 log ( Pr ( Q n | P n , n )) (7) N is the number of training samples; Q n , P n , and n are the target poll question, source post, and topic distribution of the n -th training sample.",
"Answer generation loss LAG is defined similarly.",
"The training loss of the entire model are defined as: L = LNTM + Q LQG + A LAG (8) where Q and A balance the weights over NTM and the two decoders.",
"Data Preprocessing.",
"First, we removed meta data (e.g., author's locations and emoji labels) and replaced links, mentions (@username), and digits with generic tags URL, MENT, and DIGIT.",
"Then, for some poll questions echoed in the source posts, we took them away for fair experiments.",
"Next, an open-source toolkit jieba is employed for Chinese word segmentation.",
"8 Afterwards, we filtered out stop words and for the remaining, we maintained two vocabularies with the most frequent 50K words for sequences (input and output) and another 100K words for BoW.",
"Finally, comments are capped at the first 100 words to examine poll question generation with the early comments and their potential to draw future user engagements.",
"In evaluations, we split our data into 80% for training, 10% for validation and 10% for test.",
"Baselines and Comparisons.",
"For baselines, we first consider the basic S2S (Sutskever et al., 2014) (i.e., BASE); also compared are the S2S with pre-trained models from the BERT family tiny ER-INE (Sun et al., 2019) (i.e., ERINE), BERT (De-vlin et al., 2019) (i.e., BERT), and RoBERTa (Liu et al., 2019) (i.e., ROBERTA ), which were implemented with the paddle hub platform 9 .",
"For all S2S with pre-trained models, their pre-trained parameters were further fine-tuned on our training data.",
"Then, we consider the following S2S extensions with copy mechanism (i.e., COPY ) (Meng et al., 2017), topic modeling from posts (i.e., TOPIC ) (Wang et al., 2019a), and bidirectional attentions over posts and comments (i.e., CMT (BIATT )) (Wang et al., 2019b).",
"All of them were proposed for keyphrase generation tasks and set up following their original papers.",
"For our models, we consider two variants CMT (NTM) in the single decoder archetecture and its dual decoder version DUALDEC .",
"10 Model Settings.",
"All the hyperparameters are tuned on the validation set via grid search.",
"For NTM, it is pre-trained for 50 epochs before joint training and afterwards different modules take turns to update parameters.",
"We adopt two-layers bidirectional GRU to build source post encoder and one-layer unidirectional GRU question and answer decoders.",
"The hidden size of each GRU is 300.",
"9 https://www.paddlepaddle.org.cn/hub 10 We also finetuned BERT with our models yet cannot observe much performance gain.",
"It is because NTM is able to learn essential features from the input and BERT cannot provide additional benefits.",
"Another possible reason is that social media BERT is unavailable in Chinese and that trained on out-domain data (e.g., news) might not fit well with Weibo languages.",
"Large-scale Weibo data might be acquired for continue pre-training (Gururangan et al., 2020), which is beyond the scope of this paper and will be explored in future work.",
"For a word embedding, the size is set to 150 and randomly initialized.",
"In training, we apply Adam optimizer with initial learning rate as 1e-3, gradient clipping as 1.0, and early-stopping strategy adopted.",
"The weights to trade off losses in multitask learning is set to Q = A = 1 (Eq. 8).",
"Evaluation Metrics.",
"We adopt both automatic measures and human ratings for evaluations.",
"For the former, we examine two popular metrics for language generation tasks ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002).",
"For the latter, human annotators rates with 4 point Likert scale (i.e., { 0 , 1 , 2 , 3 } ) and over three criteria are considered: the relevance to the source posts ( relevance ), how fluent the generated language reads ( fluency ), the attractiveness degree of the questions in drawing people's engagements ( engagingness ).",
"In this section, we first show the main comparison results on poll question generation involving both automatic evaluations and human ratings (in 5.1).",
"Then, model sensitivity to varying lengths of source posts and poll questions are discussed in 5.2, followed by the analyses of models' capability to handle poll questions exhibiting varying degrees of user engagements ( 5.3).",
"Next, 5.4 discusses the performance of dual decoders that jointly generate questions and answers.",
"A case study is presented at last (in 5.5) to interpret the sample outputs.",
"We first show the comparison results on poll question generation, where we will discuss automatic evaluations and human ratings in turn below.",
"automatic measured results on question generation.",
"As can be seen, our task is challenging and basic S2S performs poorly.",
"Pre-trained models from the BERT family can offer some help though limited.",
"It is probably because the pre-training data is from other domains (e.g., news and online encyclope-dia), where the representations learned cannot fully reflect the styles of social media languages.",
"We then observe copy mechanism and latent topics (learn from posts) are both useful, where the former allows the keyword extracted from the post to form a question while the latter further helps find topic words to be copied.",
"On the contrary, user MODEL ROUGE-1 ROUGE-L BLEU-1 BLEU-3 S2S Baselines BASE 21.62 0.7 20.64 0.7 20.35 0.7 2.11 0.5 +E RNIE 29.62 0.5 27.82 0.4 21.66 0.5 3.25 0.4 +BERT 33.62 1.2 31.57 1.1 24.43 0.7 4.54 0.4 +R OBERTA 34.08 1.3 31.98 1.2 24.88 1.0 4.85 0.5 S2S Extensions +C OPY 35.13 0.4 33.20 0.4 30.27 0.4 7.95 0.3 +T OPIC 36.65 0.6 34.70 0.6 31.11 0.5 8.66 0.5 +C MT (BIATT ) 27.74 0.4 26.21 0.4 23.97 0.3 4.15 0.2 Our Models +C MT (NTM) 37.95 0.4 35.97 0.3 32.07 0.2 8.89 0.3 +D UALDEC 38.24 0.3 36.14 0.3 32.27 0.4 9.04 0.3 Table 2: Main comparison results for poll question generation.",
"comments, though able to provide useful information, are noisy (also implied by Table 1).",
"So, it is important to encode the comments in an appropriate way CMT (NTM) captures salient topic features from the comments and performs much better than CMT (BIATT ), which might be hindered by the noise and exhibit the second worst results.",
"In addition, we notice DUALDEC slightly outperforms its single decoder variant CMT (NTM), though the gain is small.",
"To better examine their prediction results, we conduct human evaluations.",
"Human Ratings.",
"Here we sampled 400 source posts (and their outputs), and invited four native Chinese speakers to rate the poll questions in a 4 point Likert scale 0 for extremely bad, 1 for bad, 2 for good, and 3 for extremely good without knowing where the results come from.",
"Each annotator reviews 100 samples and one's assignments vary with others' and Table 3 shows the average ratings over the four annotators.",
"All the models are rated worse than the gold standard, which means automatic poll question generation still has a long way to go.",
"We also observe that models with latent topics exhibit relatively better relevance.",
"This may be because topic models allow the capture of salient contents from the input and detail injection to the output.",
"Besides, CMT (NTM) and DUALDEC perform the best in engagingness, probably because user comments and poll answers might provide implicit clues (e.g., fresh words) helpful to predict engaging questions.",
"For fluency, BASE outperforms our models by a small margin, as it tends to yield short and generic questions, such as ( What's your viewpoint? ) based on our observation.",
"More-Relevance Fluency Engagingness Gold Standard 2.79 2.84 2.74 BASE 1.26 2.14 1.35 ROBERTA 1.33 1.06 0.96 TOPIC 1.81 1.66 1.50 CMT (NTM) 1.91 1.67 1.55 DUALDEC 2.02 1.87 1.67 Table 3: Average human ratings.",
"over, we measure the length of questions generated by BASE and DUAL (our full model) and find that 11.0% questions generated by BASE contain less than 5 words whereas the number for DUAL is only 1.6%.",
"This again demonstrates our potential to generate longer questions with richer details.",
"We further quantify the question generation results over varying lengths of source posts and poll questions and show the corresponding ROUGE-1 scores in Figure",
"4. Here, we compare BASE and ROBERTA , TOPIC , and our CMT (NTM).",
"11 Figure 4: ROUGE-1 scores (y-axis) over varying length (word count in x-axis) of source posts (on the left) and poll questions (on the right).",
"Post length seems not to affect much on the models' performance, probably attributed to the length limitation in Weibo even the relatively longer posts contain limited words.",
"On the contrary, for the question length, the two S2S baselines both exhibit obvious performance drops when generating long questions, while TOPIC and CMT (NTM) perform steadily.",
"This suggests that latent topics, either captured from posts or comments, may have the potential to enrich questions with detailed descriptions, and hence can better tackle long questions.",
"Nevertheless, CMT (NTM) presents consistently better ROUGE-1 in diverse scenarios.",
"11 In 5.2 and 5.3, we experiment in the single decoder settings so as to focus on the quality of generated questions.",
"We will further discuss the dual decoders in 5.4.",
"As shown in the human ratings ( 5.1), comments might help to generate engaging poll questions.",
"For a further discussion, Figure 5 shows the ROUGE-1 of ROBERTA , TOPIC , and CMT (NTM) in handling questions for polls that later engage varying user comment numbers.",
"Interestingly, CMT (NTM) performs better when predicting questions that engage more comments at the end.",
"This means that early comments might provide useful clues for models to distinguish attractive questions with the potential to draw more public engagements in the future.",
"Lacking the ability to learn from comments, TOPIC exhibits relatively more stable trends.",
"The previous two subsections are discussed in the single decoder setting and here we further examine the effectiveness to jointly predict questions and answers.",
"BASE , COPY , TOPIC , and CMT (NTM) with single and dual decoders are discussed.",
"We first compare question generation results and Figure 6 shows the ROUGE-1 scores.",
"It is seen that dual decoders can boost the results of BASE and COPY , implying that questions and answers are indeed related and exploiting their interactions can successfully bring performance gain.",
"However, we cannot observe large-margin improvements in TOPIC and CMT (NTM), probably because many words in answers, such as ( Akira ) and ( Curley G ) in Figure 1, are also topic words that can be discovered with topic models.",
"Therefore, jointly generating answers only provides limited help to their question generation results.",
"Then, we analyze how the multitask learning ability of dual decoders influence the prediction of poll answers.",
"Table 4 displays the comparison results with pipeline models that sequentially generate questions and then answers.",
"By examining the pipeline results, we first find that source posts are Figure 6: ROUGE-1 scores of BASE , COPY , TOPIC , and CMT (NTM) from left to right.",
"helpful in answer generation, which results in the outperformance of PT +Q S over QS ONLY .",
"Besides, answer generation trained with predicted questions or the gold standards do not make much difference.",
"Gold standard questions might exhibit higher quality while predicted questions may better fit the tests (answer choices should be predicted without knowing the human-crafted questions).",
"For dual decoders, CMT (NTM) still performs the best, implying that latent topics from user comments can also contribute to better prediction of poll answers.",
"In comparison with the best pipeline model (PT +Q S ), the scores from CMT (NTM) are competitive, though the dual decoder allows end-to-end training and is easier to be used (with less manual efforts in model training and application).",
"To provide more insights, we further take the two Weibo posts in Figure 1 as the input cases and examine",
"examine the output of varying models in Table",
"5. 12 Unsurprisingly, BASE tends to yield generic questions as limited features are encoded from the noisy source.",
"ROBERTA sometimes produces repeated words (e.g., its output to P 1 ), hindering its capability to generate fluent language (also indicated by Table 3).",
"This is possibly caused by the overfitting problem as RoBERTa might rely on large-scale in-domain data for fine-tuning.",
"We also find that modeling topics and user comments may enable the output to contain trendy wordings, making it more engaging, such as c ( center point ) in CMT (NTM)'s output question for P 2 and the names of many new video apps in DUALDEC 's generated answer choices for P 1 .",
"Furthermore, the dual decoders might learn the cohesive relations between questions and answers, such as the Akira and Curley G occurring in both the generated questions and answer choices ( P 2 ).",
"Our work is in the line with question generation, where most prior efforts focus on how to ask good exam questions given an article and the pre-defined answers.",
"Some adopt manually-crafted rules or features (Labutov et al., 2015; Dhole and Manning, 2020; Fabbri et al., 2020), largely relying on the labor-intensive process for rule design or feature engineering.",
"To simplify the training, automatic feature learning hence becomes popular.",
"For example, Chali and Hasan (2015) first employs a Bayesian model to learn topic features and then leverages them to yield questions.",
"These pipeline methods require the expertise involvement to manually customize the model inference algorithms, while our neural network design allows end-to-end training of topic modeling and question generation.",
"Recently, S2S-based question generation architecture has demonstrated promising results (Du et al., 2017; Chai and Wan, 2020).",
"To better encode the input, researchers adopt successful training design from other tasks, such as self-attention mechanism (Zhao et al., 2018; Scialom et al., 2019), language model pre-training (Pan et al., 2019), variational inference (Yao et al., 2018), and reinforcement learning (Yuan et al., 2017; Pan et al., 2019).",
"Heuristic features, e.g., the answers' positions in the article (Zhou et al., 2017; Sun et al., 2018; 12 Here we analyze the case with two examples while similar observations can be drawn from many output cases.",
"More cases will be discussed in Figure 6 (in the Appendix).",
"Kim et al., 2019; Liu, 2020) are sometimes considered.",
"For question decoding, certain constraints are added to control the generation, such as some aspects to be contained (Hu et al., 2018), varying levels of difficulty (Gao et al., 2018) and specificity (Cao et al., 2019).",
"We are also related with previous work handling the generation of questions and answers in a multitask learning setting (Wang et al., 2017; Tang et al., 2017; Sun et al., 2020).",
"Nonetheless, none of the aforementioned research concerns poll questions and answers on social media, which exhibit very different language styles compared with any existing studies and has not been extensively explored.",
"We have presented a novel task to generate social media poll questions.",
"User comments encoded with a neural topic model are leveraged in a S2S framework; dual decoder architecture is further adopted to explore the interactions between questions and answers.",
"Extensive experiments on a large-scale dataset newly collected from Weibo have demonstrated the effectiveness of our proposed model.",
"This work was partially done when Zexin Lu was an intern at Tencent AI Lab under CCF-Tencent",
"Rhino-Bird Young Faculty Open Research Fund (R-ZDCJ).",
"The research is also supported by NSFC Young Scientists Fund (62006203) and PolyU internal funds (1-BE2W, 4-ZZKM, and 1-ZVRH).",
"The authors would like to thank Lida Li, Yue Wang, Yubo Zhang, Zhe Wang, and anonymous reviewers from ACL-IJCNLP 2021 for their insightful suggestions on various aspects of this work.",
"The task will not pose ethical problems.",
"First, the polls are open access to the public users (so as to collect their opinions).",
"Second, Weibo allows any users to report suspicious cases with ethical concerns and the reported contents will be removed immediately.",
"Third, the polls are running in an anonymous way to protect the privacy of voters.",
"The dataset is collected through the official APIs of Weibo and is consistent with the Weibo terms of use.",
"We also manually examined the data to ensure the following points.",
"First, we conduct data anonymization and manually examined the data to ensure there are no privacy and ethical concerns, e.g., personal information, toxic language, and hate speech.",
"In the generated polls, we didn't spot any cases that might have the concern.",
"Second, the involved Weibo users are all public ones.",
"To that end, we automatically filtered out personal users without the official confirmation of Weibo (the con-firmed public users can be identified with a VIP tag).",
"The user list is manually checked again to mitigate the ethical concern.",
"For the annotation, we recruited part-time research assistants to work with the pay 15.7 USD/hour and at most 20 hours per week."
] | [
"objective",
"abstain",
"method",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"objective",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Attention mechanisms have achieved substantial improvements in neural machine translation by dynamically selecting relevant inputs for different predictions.",
"However, recent studies have questioned the attention mechanisms' capability for discovering decisive inputs.",
"In this paper, we propose to calibrate the attention weights by introducing a mask perturbation model that automatically evaluates each input's contribution to the model outputs.",
"We increase the attention weights assigned to the indispensable tokens, whose removal leads to a dramatic performance decrease.",
"The extensive experiments on the Transformer-based translation have demonstrated the effectiveness of our model.",
"We further find that the calibrated attention weights are more uniform at lower layers to collect multiple information while more concentrated on the specific inputs at higher layers.",
"Detailed analyses also show a great need for calibration in the attention weights with high entropy where the model is uncon-fident about its decision 1 .",
"Attention mechanisms have been ubiquitous in neural machine translation (NMT) (Bahdanau et al., 2015; Vaswani et al., 2017).",
"It dynamically encodes source-side information by inducing a conditional distribution over inputs, where the ones that are most relevant to the current translation are expected to receive more attention.",
"However, many studies doubt whether highly-attended inputs have a large impact on the model outputs.",
"Smith, 2019), which can be attributed to that unimportant words (e.g., punctuations) are frequently assigned with high attention weights (Mohanku-mar et al., 2020).",
"On the other hand, Jain and Wallace (2019) state that attention weights are inconsistent with other feature importance metrics in text classification tasks.",
"It further proves that attention mechanisms are incapable of precisely identifying decisive inputs for each prediction, which would result in wrong-translation or over-translation in NMT (Tu et al., 2016).",
"We take Figure 1 as an example.",
"After producing the target-side word deaths, attention mechanisms wrongly attribute most attention to the (cid:104) EOS (cid:105) , making parts of the source sentence untranslated.",
"inputs.",
"To test what inputs affect the model prediction most, we tend to observe how the model decision changes as perturbing parts of inputs.",
"We define the perturbation operation as applying a learnable mask to scale each attention weight.",
"Then, we perform a deletion game, which aims to find the smallest perturbation extents that cause the significant quality degradation.",
"In this manner, we can find the most informative inputs for the prediction.",
"Based on the results detected by the mask perturbation model, we further calibrate attention weights by reallocating more attention to informative inputs.",
"We design three fusion methods to incorporate the calibrated attention weights into original attention weights: (1) fixed weighted sum, (2) annealing learning, and (3) gating mechanism.",
"The mask perturbation model and NMT model are jointly trained, while the attention weights in NMT are corrected based on the actual contributions measured by the mask perturbation model.",
"Recall the example in Figure 1.",
"After producing the target word in, our mask perturbation model finds that the source word [countryside] with a high attention weight is exactly the decisive input for the prediction.",
"Therefore, we strengthen the corresponding attention weight of [coun-tryside].",
"However, after the prediction deaths, the highly-attended (cid:104) EOS (cid:105) is not the decisive input at the current step.",
"We redistribute the attention weights to the source words ( [traffic] and [interruption]) which receive little attention but are important for the subsequent translation discovered by our mask perturbation model.",
"After calibration, the missing source information traffic interruption is well-translated.",
"We conduct extensive experiments to verify our method's effectiveness on Transformer-based translation (NIST Zh En, WMT14 En De, WMT16 En Ro, WMT17 En Fi, and En Lv).",
"Experimental results show that our calibration methods can significantly boost performance.",
"We further visualize calibrated attention weights and investigate when attention weights need to be corrected.",
"The contributions of this paper are three-fold: We propose a mask perturbation model to automatically assess each input's contribution for translation, which is simple yet effective.",
"We design three methods to calibrate original attention weights by highlighting the informative inputs, which are experimentally proved to outperform strong baselines.",
"Detailed analyses show that calibrated attention weights are more uniform at lower layers while more focused at the higher layers.",
"High-entropy attention weights are found to have great needs for calibration at all layers.",
"In this section, we first briefly introduce the framework of Transformer (Vaswani et al., 2017) with a focus on the Multi-head attention (MHA).",
"Then we present an analysis of the learned attention weights, the correlation with feature importance measures, which motivates our ideas discussed afterward.",
"The Transformer is an encoder-decoder framework with stacking layers of attention blocks.",
"The encoder first transforms an input x = { x 1 , x 2 , ...x n } to a sequence of continues representations h = { h 1 , h 2 , ...h n } , from which the decoder generates an output sequence y = { y 1 , y 2 , ...y m } .",
"Multi-head attention between encoder and decoder enables each prediction to attend overall inputs from different representation subspaces jointly.",
"For the single head, we first project h = { h 1 , h 2 , ...h n } to keys K and values V using different linear projections.",
"At the t -th position, we project the hidden state of the previous decoder layer to the query vector q t .",
"Then we multiply q t by keys K to obtain an attention a t , which is used to calculate a weighted sum of values V .",
"Attn ( q t , K , V ) = a t V a t = softmax (cid:18) q t KT d k (cid:19) (1) where d k is the dimension of the keys.",
"For MHA, we use different projections to obtain the queries, keys, and values representations for each head.",
"It is noted that Transformer (base model) performs N = 6 cross-lingual attention layers and employs h = 8 parallel attention heads for each time.",
"Thus we implement our methods on N h attention operations separately.",
"For simplicity, we next denote the query, keys, and values as q t , K , V regardless of what layers and heads they come from.",
"Attention mechanisms provide a distribution over the context representations of inputs, which are",
"often presented as communicating the relative importance of inputs.",
"However, recent work has cautioned against whether the inputs accorded high attention weights decide the model outputs (Jain and Wallace, 2019).",
"Our analysis examines the correlation with attention weights and feature importance metrics in NMT to test if the attention mechanisms focus on the decisive inputs.",
"We apply gradient-based methods (Simonyan et al., 2014; Li et al., 2016) to measure the importance of each contextual representation h i for model output y t : it = | h i p ( y t | x 1:n ) | (2) We train a baseline Transformer model on NIST Zh En dataset and extract the averaged attention weights over heads.",
"Figure 2 reports the statistics of Kendall correlation for each attention layer, where the observed correlation is all modest (0 indicates no correlation, while 1 implies perfect concordance).",
"The inconsistency with feature importance metrics reveals that the high-attention inputs are not always responsible for the model prediction.",
"It further motivates us whether we can calibrate the attention weights to focus more on the decisive inputs to achieve better translation.",
"We aim to make the attention mechanism more focused on the informative inputs.",
"The first step is to discover what inputs are essential for the model prediction.",
"As shown in Figure 3, we design a Mask Perturbation Model to worsen the performance with limited perturbation on the original attention weights.",
"By doing this, we can automatically detect what inputs decide the model outputs.",
"Then, we design an Attention Calibration Network (ACN) to correct the original attention weights, highlighting the decisive inputs based on what inputs are perturbed by the mask perturbation model.",
"To search the source-side inputs that the model relies on to produce the output, we can observe how the model prediction changes as perturbing different parts of the input sentence.",
"We apply a mask to scale each input's attention weight, which simulates the process of perturbation.",
"Formally, let m t be a mask at t -th step.",
"where 0 is a uniform distribution (an average vector of 1 n ) and (cid:12) denotes element-wise multiplication.",
"The mask m t is obtained based on the hidden state in the decoder q t and keys K : m t = (cid:32) q t WQ ( KWK ) T d k (cid:33) (4) Here, ( ) is the sigmoid function.",
"A smaller value of m t means a larger perturbation extent on original attention weights.",
"Considering the structure of multi-head attention in Transformer, WQ and WK differ among layers and heads.",
"To test the effect of perturbing distinct regions of inputs, we borrow the idea deletion game to find the smallest perturbation extent, which leads to a significant performance decrease.",
"The objective function of mask perturbation model is: L ( m ) = L NMT ( a pt , ) + L c ( m ) (5) where denotes the parameters of the original Transformer.",
"LNMT ( a pt , ) is the cross-entropy loss of the translation model when using perturbed attention weights a pt .",
"m = { WQ , WK } represents the parameters of mask perturbation model.",
"The first term indicates that the perturbation operation aims to harm the translation quality.",
"The second one serves as a penalty term to encourage most of the mask to be turned off (perturb inputs as few as possible).",
"The perturbation extent is determined by the hyperparameter .",
"Notably, earlier studies employ masks and deletion game as the analytical tools to explore the importance of each attention head (Fong and Vedaldi, 2017) or the contributions of the pixels in the figure to the model outputs (Voita et al., 2019).",
"However, we extend to probing the inputs' contributions to the model prediction in NMT and further use the masks to calibrate the attention mechanisms based on the analytical results.",
"As aforementioned, our mask perturbation model removes the most informative input to deteriorate the translation by setting the corresponding masks to zero.",
"In other words, a smaller mask means a larger perturbation, namely a more significant impact on the prediction.",
"We propose to calibrate the original attention weights in NMT by highlighting the essential inputs for each model prediction.",
"We increase the attention weights of key inputs which suffer large perturbation extents.",
"The attention weights of other less-informative inputs are correspondingly decreased.",
"We design three methods to incorporate a ct into the original one a t to obtain combined attention weights a combt : Fixed Weighed Sum .",
"In this method, the calibrated attention weights are added to the original attention weights of fixed ratio as: a combt = softmax( a t + a ct ) (8) Annealing Learning .",
"Considering the mask perturbation model is not well-trained at the early stage, we expect the effect of a ct to be smaller at first and gradually grow with the training step s .",
"To this end, we use annealing learning to control the ratio of a ct as: a combt = ( s ) a t + (1 ( s )) a ct ( s ) = e s/ 10 5 (9) Gating Mechanism .",
"the information from the perturbation model in the decoding process.",
"a combt = g t a t + (1 g t ) a ct g t = ( q t W g + b g ) (10) where W g and b g are trainable parameters vary among different layers and heads.",
"Our mask perturbation model and NMT model are jointly optimized.",
"As shown in Figure 3, the mask perturbation model is trained to worsen the performance by limited perturbation on the attention weights (Equation 5).",
"Given what inputs are perturbed, we can figure out the decisive inputs for each model prediction and calibrate the original attention weights in the NMT model by ACN.",
"With the calibrated attention weights, the NMT model is finally optimized by: LNMT ( ) = m (cid:88) t =1 log p ( y t | y <t , x ; a combt , ) (11) During testing, the mask perturbation model also helps identify the informative inputs based on the hidden state in the decoder at each step (as seen in Equation 4).",
"The NMT model decodes with the calibrated attention weights.",
"Moreover, our method can provide the saliency map between inputs and outputs based on the generated mask, an accessible measurement of the inputs' contributions to the model predictions.",
"We evaluate our method in LDC Chinese-English (Zh En), WMT14 English-German (En De), WMT16 English-Romanian (En Ro), WMT17 English-Finnish (En Fi) and English-Latvian (En Lv).",
"We tokenize the corpora using a script from Moses (Koehn et al., 2007).",
"Byte pair encoding (BPE) (Sennrich et al., 2016) is applied to all language pairs to construct a join vocabulary except for Zh En where the source and target languages are separately encoded.",
"For Zh En, we remove the sentences of more than 50 words.",
"We use NIST 2002 as validation set, NIST 2003-2006 as the testbed.",
"For En De, newstest2013 and newstest2014 are set as validation and test sets.",
"We use the standard 4-gram BLEU (Papineni et al., 2002) on the true-case output to score the performance.",
"For En Ro, we use newsdev2016 and newstest2016 as development and test sets.",
"For En Lv and En Fi, news-dev2017 and newstest2017 are validation set and test set.",
"See Table 1 for statistics of the data.",
"We implement the described models with fairseq 5 toolkit for training and evaluating.",
"We experiment with Transformer Base (Vaswani et al., 2017): hidden size d model = 512 , 6 encoder and decoder layers, 8 attention heads and 2048 feed-forward inner-layer dimension.",
"The dropout rate of the residual connection is 0.1 except for Zh En (0.3).",
"During training, we use label smoothing of value (cid:15) ls = 0 .",
"1 and employ the Adam ( 1 = 0 . 9 , 2 = 0 . 998 ) for parameter optimization with a scheduled learning rate of 4,000 warm-up steps.",
"All the experiments last for 150k steps except for small-scale En Ro translation tasks (100k).",
"For evaluation, we average the last ten checkpoints and use beam search 1 The corpora includes LDC2000T50, LDC2002T01, LDC2002E18, LDC2003E07, LDC2003E14, LDC2003T17 and LDC2004T07.",
"Following previous work, we use case-insensitive tokenized BLEU to evaluate the performance.",
"2 http://www.statmt.org/wmt14/translation-task.html 3 http://www.statmt.org/wmt17/translation-task.html 4 http://www.statmt.org/wmt16/translation-task.html 5 https://github.com/pytorch/fairseq Model TEST GNMT (Wu et al., 2016) 24.61 Conv (Gehring et al., 2017) 25.16 AttIsAll (Vaswani et al., 2017) 27.3 (Feng et al., 2020) 27.55 (Weng et al., 2020) 27.7 Our Implemented Baseline 27.37 Ours Fixed 27.38 Anneal 28.1 Gate 27.75 Table 2: The comparison of our model, Transformer baselines and related work on the WMT14 En De using case-sensitive BLEU.",
"Besides, the hyperparameter in Equation 8 decides how much the calibrated attention weights are incorporated in the Fixed Weighted Sum method.",
"We set = 0 .",
"1 in all experiments for comparison.",
"To comprehensively compare with the existing baselines and similar work, we report the results of some competitive models including GNMT (Wu et al., 2016), Conv (Gehring et al., 2017) and AttIsAll (Vaswani et al., 2017) on WMT14 En De translation task.",
"Besides, we also compare our method against related researches about introducing word alignment information to guide translation (Weng et al., 2020; Feng et al., 2020).",
"As presented in Table 2, our method exhibits better performance than the above models.",
"Unlike supervised attention with external word alignment, our model yields a significant gain by looking into what inputs affect the model's internal training.",
"Table 3 shows the translation quality measured in BLEU score for NIST Zh En.",
"Our proposed model significantly outperforms the baseline by 0.96 (MT02), 0.84 (MT03), 0.58 (MT04), 1.02 (MT05) and 0.76 (MT06), respectively.",
"We also conduct our experiments on WMT17 En Fi and En Lv.",
"As shown in Table 4, our methods improve the performance over baseline by 0.54 BLEU (En Fi), 0.6 BLEU (Fi En), 0.57 BLEU (En Lv) and 0.95 BLEU (Lv En).",
"For the small-scale WMT16 En Ro, our methods achieve a substantial improvement of 1.44 more BLEU (En Ro) and 0.95 BLEU (Ro En).",
"Com-Model DEV MT03 MT04 MT05 MT06 AVE Baseline 48.56 49.58 48.58 49.95 47.22 48.24 Ours Fixed 48.42 49.41 48.56 50.32 47.89 48.44 Anneal 48.22 49.73 48.85 50.97 47.49 48.74 Gate 49.52 50.42 49.16 50 .",
"pared to the large-scale dataset, the insufficient training data make it harder to learn the relationship between inputs and outputs, leaving a greater need for calibrating attention weights.",
"Overall, our proposed model significantly outperforms the strong baselines, especially for the small-scale dataset.",
"More importantly, the parameter size is tiny (6M), which cannot add much cost to the training and inference process.",
"Effect of Fusion Methods For three fusion methods, the fixed weighted sum has a limited gain.",
"Annealing learning is comparatively more stable, which reduces the impact of ACN when the mask perturbation model is not well-trained at the initial stage.",
"But it is challenging to design an annealing strategy that can be applied to all language pairs.",
"Gate mechanism mostly achieves the best performance for dynamically controlling the proportions of original and calibrated attention weights.",
"Effect of Hyperparameter The hyperparameter in the loss function of the mask perturbation model (as in Equation 5) decides how much masks would turn on to perturb the original attention weights.",
"Figure 4 exhibits the average value of generated masks across heads as the function of the setting of .",
"A larger forces the model to turn off most masks, which makes the value of the mask closer to 1, resulting in a smaller perturbation extent on the attention weights.",
"Correlation with Feature Importance Metrics Figure 5 reports the correlation between our generated mask ( m ) and the gradient-based importance measures 6 ( it ).",
"We find that the masks are relatively closer to the gradient-based importance measures than the original attention weights, which 6 Though these measures are insufficient for telling what inputs are important (Kindermans et al., 2019), they do provide measures of individual feature importance with known semantics (Ross et al., 2017).",
"prove the effectiveness of our mask perturbation model to discover decisive inputs.",
"In this section, we explain how our proposed method helps produce better translation by investigating: (1) what attention weights need to calibrate and (2) calibrated attention weights are more focused or more uniform.",
"Specifically, we delve into the differences between layers, which give insights into the attention mechanism's inner working.",
"We conduct analyses on Zh En NIST03 and En De newstest2014 to understand our model from different perspectives.",
"where a = a 1 + a 2 2 .",
"A high JSD means the calibrated attention weights are distant from the original one.",
"Besides, we use the entropy changes of attention weights to test whether the calibrated attention weights become more uniform or focused.",
"(cid:52)",
"Ent ( a 1 , a 2 ) = ent ( a 1 ) ent ( a 2 ) (13) where ent ( a ) = (cid:80) mi =1 a i log a i , a metric to describe the uncertainty of the distribution.",
"attention layers are not well-trained in the original NMT model and have an urgent need to calibrate.",
"Figure 6 depicts the JSD between original and calibrated attention weights.",
"We find high JSD for high layers and low JSD for low layers in Zh En task.",
"However, a different pattern is observed in En De task, where JSD in the high layer is lower than in the low layers.",
"We speculate that the difference is due to the language discrepancy and we will explore this phenomenon in our future work.",
"High or low entropy?",
"More focused contributions of inputs suggest that the model is more con-fident about the choice of important tokens (Voita et al., 2020).",
"We attempt to validate whether the attention weights are more likely to be calibrated when the NMT model is uncertain about its decision.",
"Figure 7 shows the positive relationship between calibration extent and the entropy of attention weights.",
"Take the 6-th attention layer in Zh En translation as an example (as seen in Figure",
"7(b)).",
"The averaged JSD is 0.0084 for the attention weights in rang [0,0.8], while the value is 0.0324 for the attention weights where the entropy is larger than 3.2.",
"These findings can also be observed at different attention layers and language pairs.",
"We infer that a higher entropy indicates the NMT model relies on multiple inputs to generate the layer Zh En En De 1 + 0.0203 + 0.1846 2 0.011 + 0.0762 3 0.0023 + 0.0207 4 0.0224 0.0336 5 0.0303 0.0595 6 0.0083 0.01 All 0.0336 0.0224 Table 5: Entropy differences ( (cid:52) Ent ) between the original and calibrated attention weights.",
"translation, which increases the probability of information redundancy or error signals.",
"Our proposed model is more likely to calibrate these attention weights to makes the NMT model pay more attention to the informative inputs.",
"There are multiple reasons why the calibrated attention weights can boost performance.",
"Section 4.3 states that our generated masks are much closer to the gradient-based feature importance measures compared with attention weights.",
"On the other hand, we present the entropy differences of the original and calibrated attention weights in Table 5 where the entropy of attention weights are overall smaller after calibration.",
"However, the changes vary across layers.",
"For En De translation, the calibrated attention weights are more uniform at 1-3 layers and more focused at 4-6 layers, while the attention weights become more focused for all layers except the 1-st layer on Zh En task.",
"These findings prove that each attention layer plays a different role in the decoding process.",
"The low layers generally grasp information from various inputs, while the high layers look for some particular words tied to the model predictions.",
"The attention mechanism is first introduced to augment vanilla recurrent network (Bahdanau et al., 2015; Luong et al., 2015), which are then the backbone of state-of-the-art Transformer (Vaswani et al., 2017) for NMT.",
"It yields better performance and provides a window into how a model is operating (Belinkov and Glass, 2019; Du et al., 2020).",
"This section reviews the recent researches on analyzing and improving attention mechanisms.",
"The Attention Debate Many recent studies have spawned interest in whether attention weights faithfully represent each input token's responsibility for model prediction.",
"Serrano and Smith flip the model's decision by permuting some small attention weights, with high-weighted components not being the reason for the decision.",
"Some work (Jain and Wallace, 2019; Vashishth et al., 2019) find a weak correlation between attention scores and other well-ground feature importance metrics, specially gradient-based and leave-one-out methods, in various text classification tasks.",
"We also present the correlation analysis in the less-discussed Transformer-based NMT and reach a similar conclusion.",
"As opposed to the critiques of regarding attention weights as explanation, Wiegreffe and Pinter claim that the trained attention mechanisms do learn something meaningful about the relationship between inputs and outputs, such as syntactic information (Raganato and Tiedemann, 2018; Vig and Belinkov, 2019; Pham et al., 2019).",
"Can Attention be improved?",
"There is plenty of work on supervising attention weights with lexical probabilities (Arthur et al., 2016), word alignment (Chen et al., 2016; Liu et al., 2016; Mi et al., 2016; Cohn et al., 2016; Garg et al., 2019; Feng et al., 2020), human rationales (Strout et al., 2019) and sparsity regularization (Zhang et al., 2019).",
"Unlike them, we never introduce any external knowledge but highlight the inputs whose removal would significantly decrease Transformer's performance.",
"Another work line aims to make attention better indicative of the inputs' importance (Kitada and Iyatomi, 2020; Tutek and Snajder, 2020; Mohanku-mar et al., 2020) which is designed for analysis with no significant performance gain, while our methods incorporate the analytical results to enhance the NMT performance.",
"In this paper, we present a mask perturbation model to automatically discover the decisive inputs for the model prediction.",
"We propose three methods to calibrate the attention mechanism by focusing on the discovered vital inputs.",
"Extensive experimental results show that our approaches obtain significant improvements over the state-of-the-art system.",
"Analytical results indicate that our proposed methods make the low layer's attention weights more dispersed to grasp multiple information.",
"In contrast, high-layer attention weights become more focused on specific essential inputs.",
"We further find a greater need for calibration in the original attention weights with high entropy.",
"Our work provides insights on future work about learning more useful information via attention mechanisms in other attention-based frameworks.",
"The research work has been funded by the Natural Science Foundation of China under Grant No.",
"U1836221 and the National Key Research and Development Program of China under Grant No. 2018YFC0823404.",
"The research work in this paper has also been supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0504).",
"This work is also supported by Youth Innovation Promotion Association CAS No. 2017172."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"objective",
"result",
"result",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"objective",
"result",
"objective",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"abstain",
"result",
"method",
"other",
"other",
"other",
"other"
] |
[
"In this paper, we argue that elementary discourse unit (EDU) is a more appropriate textual unit of content selection than the sentence unit in abstractive summarization.",
"To well handle the problem of composing EDUs into an informative and fluent summary, we propose a novel summarization method that first designs an EDU selection model to extract and group informative EDUs and then an EDU fusion model to fuse the EDUs in each group into one sentence.",
"We also design the reinforcement learning mechanism to use EDU fusion results to reward the EDU selection action, boosting the final summarization performance.",
"Experiments on CNN/Daily Mail have demonstrated the effectiveness of our model.",
"Abstractive summarization focuses on generating fluent and concise text from the original input document and has achieved considerable performance improvement with the rapid development of deep learning technology (See et al., 2017; Paulus et al., 2017; Celikyilmaz et al., 2018; Gehrmann et al., 2018).",
"In abstractive summarization, the recently popular and practical paradigm usually generates summary sentences by independently compressing or rewriting each pre-extracted sentence, which is from the source documents (Chen and Bansal, 2018; Lebanoff et al., 2019).",
"However, a single document sentence usually cannot provide enough information that a summary sentence expresses, which is supported by the re-cent study of Lebanoff et al. (2019).",
"They show that a high percentage of summary sentences include information from more than one document sentences, and composing a summary through only compressing sentences can cause performance degradation.",
"Simultaneously, in contrast to the brevity requirements of a summary, each document sentence usually offers trivial details and expresses a relatively independent meaning, posing difficulty of combining multiple sentences into one summary sentence.",
"So we hope to seek a new summary composition unit which is more information-intensive and elementary than sentence.",
"In this paper, we choose to use Elementary Discourse Unit (EDU) as the summarization unit, which is first proposed from Rhetorical Structure Theory (Mann and Thompson, 1988) and defined as a clause.",
"The finer granularity makes EDU more suitable than sentence to be the basic summary composition unit (Li et al., 2016).",
"At the same time, benefited from the development of EDU segmentation technology, which can achieve a high accuracy of 94% (Wang et al., 2018), it is feasible to automatically obtain EDUs from the text.",
"Next, the problems are: (1) which EDUs should be selected to compose a good summary?",
"Moreover, (2) how to well assemble the selected EDUs into a fluent summary?",
"To solve the problems above, we need to extract the information-intensive EDUs from the source documents and effectively fuse the related EDUs into fluent summary sentences.",
"With such an idea, inspired by Chen and Bansal (2018)'s work, we design an abstractive summarization method which is composed of two parts: EDU selection and EDU fusion.",
"EDU selection aims to extract informative EDUs and group them while EDU fusion takes the grouped EDUs as input to generate a sentence.",
"As the EDU selection process lacks labeling training data, we apply the EDU fusion results as the feedback to tune the EDU selection model which in turn influences the EDU fusion process.",
"Here, the actor-critic reinforcement learning algorithm is employed to train our EDU-based summarization method.",
"To the best of our knowledge, we are the first to propose a practical solution to compose EDUs in summarization.",
"Experiments show that compared to previous models, our EDU based model achieves a significant improvement on the CNN/Daily Mail dataset.",
"Our model is mainly composed of two modules: EDU Selection and EDU Fusion .",
"EDU Selection aims to extract salient EDUs from the source document and group the closely related EDUs.",
"Here, we adopt a smart unified end-to-end method to implement both the extraction and grouping.",
"Next, EDU Fusion takes the EDUs in a group to generate a fluent and informative sentence.",
"To train our method, we adopt reinforcement learning to leverage both the two modules.",
"Figure 1 shows the whole architecture of our method.",
"The EDU selection model is mainly based on a sequence-to-sequence pointer network.",
"In the encoding stage, we use a hierarchical encoder to get the contextual representation of each EDU, which consists of a word-level temporal convolutional neural network (Kim, 2014) and an EDU-level Bidirectional Long Short-Term Memory Network(Bi-LSTM) (Hochreiter and Schmidhuber, 1997).",
"In the decoding stage, we design an LSTM decoder to identify the informative EDUs with their group information.",
"To group the related EDUs, we design a particular label truncate whose representation is a trainable parameter h truncate .",
"We also add another special label stop with its representation h stop to determine the end of the selection process.",
"h truncate and h stop are first randomly initialized and then learned in the training process.",
"In each decoding step, the decoder computes a selection probability distribution on EDUs, truncate and stop .",
"Assuming at time step t , the indices of the EDUs which have been extracted are included in the set Sel t , the decoder first uses the Luong attention (Luong et al., 2015) to get the context c t and then computes a score s ti for each EDU or label by: s ti = (cid:40) v T p tanh ( W p [ c t ; h i ]) i not in Sel t otherwise (1) where i represents the index of an EDU, truncate or stop , and h i denotes the corresponding representation.",
"v p and W p are the trainable parameters.",
"In order to avoid repeated selection of the same EDUs, we assign the score of to the EDUs that have been extracted.",
"It is noted that the label truncate can be generated multiple times since it EDU Selection EDUFusion 1 gt",
"is not included in Sel t .",
"Finally, we get the selection probability at time step t by applying softmax to regularize the scores.",
"Once the decoder selects the stop label, it stops the selection process and gets a sequence which is composed of EDUs, truncate labels and one stop label.",
"Next, the EDUs separated by truncate are grouped for fusion.",
"The EDU fusion module uses the standard pointer generator (See et al., 2017) to generate one sentence for each group of EDUs.",
"This design allows the model to directly copy words from the inputted EDUs to the generated sentence, which is benefi-cial to keeping the cross-sentence information in the source documents.",
"At the same time, benefited from the conditional language model training objective, the coherence of the generated sentences is highly improved to remedy the poor readability of EDUs.",
"To leverage EDU selection and fusion for generating a good summary, reinforcement learning mechanism is designed to use EDU fusion results to tune the selection process, which in turn affects the fusion performance.",
"We introduce the learning process detailedly in Section",
"3. 3 Learning We firstly pre-train the EDU selection and EDU fusion module separately and then use the pre-trained model as initialization for reinforcement learning(RL).",
"Because the summarization datasets do not label the salient EDUs, we propose a greedy method to provide the labeled data for pre-training.",
"For each pair of the document and summary, we select several groups of EDUs from the document as the oracle EDU labels, with each group corresponding to a summary sentence.",
"For each summary sentence, we construct a group of EDUs iteratively.",
"We start from an empty group and repeatedly select the EDU from the document that can maximize the ROUGE-L recall score between the ground-truth summary sentence and the group of EDUs after the EDU is added into the group until no EDU can increase the score.",
"We use ROUGE-L recall so that the EDU selection module can select as much information as possible for EDU fusion.",
"With such a dataset, we pre-train the EDU selection module.",
"To pre-train the EDU fusion module, the input and output are the concatenation of oracle EDUs and summary sentences.",
"We pre-train the two modules separately by optimizing maximum likelihood (ML).",
"We use the Advantage Actor-Critic (A2C) algorithm to train our model end-to-end.",
"Following Chen and Bansal (2018)'s work, we fix the parameters of the EDU fusion module during RL training.",
"Here, we regard the EDU selection module as the agent whose decoding stage is formulated as a Markov Decision Process (MDP).",
"In each decoding step, the agent executes one selection action, which is selecting an EDU or a label ( truncate or stop ) according to the selection probability.",
"Then the agent gets a reward according to the EDU fusion results.",
"As for reward computation, given the group i of the selected EDUs, we use the EDU fusion module to generate a sentence s i and compute its score r i to measure the overlap between s i and the sentence gt i in the ground truth summary.",
"where n is the number of sentences in the ground truth summary.",
"For each selection action to compose the group, we set its reward as r i l i , where l i is the action number of selecting an EDU or truncate .",
"Similar to (Chen and Bansal, 2018), we compute the ROUGE1 F score between the Model R-1 R-2 R-L Lead-3 40.34 17.70 36.57 NN(2016) 35.5 14.7 32.2 REFRESH 40.0 18.2 36.6 Pointer Generator 39.53 17.28 36.38 Fan et al. (2017) 39.75 17.29 36.54 Fast-Abs 40.88 17.80 38.54 EDUSum sel + RL 40.89 18.30 37.79 EDUSum 41.40 18.03 38.79 Table 1: Model Comparison ground-truth summary and the whole fused sentences as the reward for the final action that selects the stop label.",
"We conduct experiments on the non-anonymized version of the CNN/Daily Mail dataset (Hermann et al., 2015; See et al., 2017).",
"Using the same processing method as See et al. (2017), the dataset contains 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs.",
"To segment the documents into EDUs, we use Wang et al. (2018)'s model which achieves a 94% F-score in EDU segmentation.",
"To evaluate summarization performance, we use the ROUGE metrics (R-1, R-2 and R-L) (Lin, 2004).",
"For our model, the dimensions of hidden states and word embeddings are set 256 and 128 respectively.",
"The batch size of training is 32, and the discount factor for reward in RL training is set to 0.95.",
"The optimizer is Adam (Kingma and Ba, 2015) with a 0.001 learning rate for pre-training and 0.0001 learning rate for RL training.",
"1 4.2 Results To evaluate model performance, we compare our model (named EDUSum ) with the state-of-the-art extractive and abstractive summarization methods.",
"Three extractive methods are a strong Lead -3 baseline, NN (Cheng and Lapata, 2016) which applies neural networks with attention to extract sentences directly, and REFRESH (Narayan et al., 2018) which uses reinforcement learning to rank sentences.",
"Three abstractive methods for comparison include: Pointer Generator (See et al., 2017), a controllable text generation method (Fan et al., 2017), and Fast-Abs (Chen and Bansal, 2018) which uses 1 The source code is available at https://github.com/PKU-TANGENT/EDUSum Model R-1 R-2 R-L EDUSum SameSent 41.17 17.84 38.62 EDUSum group 1 40.02 17.21 37.76 EDUSum group 2 41.09 17.59 38.54 EDUSum group 3 40.20 17.06 37.53 EDUSum 41.40 18.03 38.79 Table 2: Ablation Study on EDU Selection Module reinforcement learning to extract and rewrite sentences.",
"As we can see in Table 1, EDUSum outperforms all the baselines.",
"Compared to Fast-Abs which is similar to EDUSum in model architecture, EDUSum achieves better performance with respect to the three metrics, showing EDU is more informative than sentence and appropriate to be the basic selection unit in summarization.",
"From the table, we can also see that all the summarization methods with RL achieve comparable performance, meaning the RL mechanism can effectively supervise a system to acquire valuable information.",
"We also design a model EDUSum sel + RL which is similar to EDUSum except that it does not include the EDU fusion module and directly concatenates the selected EDUs as a summary.",
"EDUSum sel + RL performs worse with respect to R-1 and R-L when the EDU fusion module is removed, because the direct concatenation of EDUs may bring redundancy into the summary and EDU fusion can make the summary sentence more informative.",
"We also note that EDUSum sel + RL performs better than EDUSum with respect to R-2, perhaps because EDU fusion may generate some fake information and need further improvement which will be our future work.",
"Further, we conduct a thorough analysis of the EDU selection module which is the main component of our method.",
"Compared to previous work, the EDU selection module can automatically determine which EDUs and how many EDUs can be grouped.",
"Such a design is convenient for capturing cross-sentence information effectively.",
"To evaluate whether it is necessary to capture cross-sentence information in summarization, we add a constraint to our model: the EDU selection module can only select those EDUs that belong to the same sentence into the same group.",
"We name this model EDUSum SameSent .",
"From Table 2, we can see that EDUSum SameSent behaves a little worse than EDUSum .",
"This makes sense because the content of each summary sentence mostly derive from one source sentence and is supplemented by some infor-Model Read.",
"mation from other sentences.",
"We also evaluate the grouping effects of our model and remove the automatic grouping mechanism by grouping every K adjacent selected EDUs into a group.",
"We set K as 1, 2, and 3 respectively where the value of 1 means no group at all.",
"Table 2 shows EDUSum group 2 performs the best among all the size settings, but performs worse than EDUSum and EDUSum Samesent .",
"This means that a summary sentence is usually composed of two EDUs but a hard grouping can degrade the performance.",
"We also give a summary sentence generated by our method as an example to illustrate the advantage of our model, as in Figure",
"2. We can see that our model can well select and group the EDUs (the underlined EDUs in Sent. 1 and Sent. 2) which have similar meanings, and fuse the grouped EDUs coherently by grabbing the key entity information (i.e., person and team information in Sent. 1) and combining them into the final summary sentence.",
"To evaluate the abstractive ability of our method, we conduct a human evaluation on the two aspects of readability and non-redundancy.",
"Readability measures how easy a text is to read, and depends on the elements of grammaticality and coherence.",
"Non-redundancy mainly denotes the degree of linguistic brevity of a text in conveying the main idea.",
"To save labor, we only choose two baselines Fast-Abs and EDUSum sel + RL , which perform well with ROUGE metrics, for comparison.",
"Comparing to scoring, ranking is relatively easy for an annotator to implement and we follow the evaluation method of (Wu and Hu, 2018).",
"We randomly sample 50 test documents and generate their summaries using our model and the two baselines.",
"Three annotators are asked to rank each set of three summaries with respect to readability and non-redundancy.",
"The best is ranked the first while the worst is the third, and the ranks are allowed to be tied.",
"Then we compute the average ranks of the three models, as shown in Table",
"3. We see that EDUSum can Original sentences segmented into EDUs System-generated sentence Ground truth Sent 1: [Juan Mata has collected his player of the month award for March from Manchester United] [and was quick to thank his supporters after receiving the gong .] Sent 2: [Mata scored both goals as united overturned Liverpool with a 2-1 win at Anfiled.] [while also producing an impressive display in the 3-0 home victory over Tottenham] Juan Mata scored both goals as Manchester United overturned Liverpool's 2-1 win at Anfield.",
"well leverage readability and non-redundancy compared to the two baselines.",
"Both EDUSum and EDUSum sel + RL achieve a significant improvement in non-redundancy, because the fine-grained EDUs can contain more informative cross-sentence information and make the summaries briefer.",
"We can also see EDUSum sel + RL suffers from bad readability because it simply concatenates EDUs into a sentence, which is the main problem that EDU based models are faced with.",
"As for EDUSum , benefited from EDU fusion, this model can achieve nearly the same readability as the sentence based model Fast-Abs .",
"In this paper, we choose EDU as the basic summary unit and propose a novel EDU based summarization model EDUSum .",
"In our model, the module of EDU selection is designed to extract and group salient EDUs and the module of EDU fusion to convert groups of EDUs into summary sentences.",
"We also apply reinforcement learning to leverage EDU selection and EDU fusion for improving summarization performance.",
"With such a design, EDUSum can fuse cross-sentence information and remedy the poor readability problem brought by EDUs.",
"Compared to previous work, this work has provided a feasible and effective method which makes full use of EDUs in summarization.",
"We thank the anonymous reviewers for their helpful comments on this paper.",
"This work was partially supported by National Key Research and Development Project (2019YFB1704002) and National Natural Science Foundation of China (61876009 and 61572049).",
"The corresponding author of this paper is Sujian Li."
] | [
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"method",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"result",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"The state-of-the-art on basic, single-antecedent anaphora has greatly improved in recent years.",
"Researchers have therefore started to pay more attention to more complex cases of anaphora such as split-antecedent anaphora, as in Time-Warner is considering a legal challenge to Telecommunications Inc's plan to buy half of Showtime Networks Inca move that could lead to all-out war between the two powerful companies .",
"Split-antecedent anaphora is rarer and more complex to resolve than single-antecedent anaphora; as a result, it is not annotated in many datasets designed to test coreference, and previous work on resolving this type of anaphora was carried out in unrealistic conditions that assume gold mentions and/or gold split-antecedent anaphors are available.",
"These systems also focus on split-antecedent anaphors only.",
"In this work, we introduce a system that resolves both single and split-antecedent anaphors, and evaluate it in a more realistic setting that uses predicted mentions.",
"We also start addressing the question of how to evaluate single and split-antecedent anaphors together using standard coreference evaluation metrics.",
"1 1 Introduction Thanks in part to the latest developments in deep neural network architectures and contextual word embeddings (e.g., ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019)), the performance of models for single-antecedent anaphora resolution has greatly improved (Wiseman et al., 2016; Clark and Manning, 2016b; Lee et al., 2017, 2018; Kantor and Globerson, 2019; Joshi et al., 2020).",
"So recently, the attention has turned to more complex cases of anaphora, such as anaphora requiring some sort of commonsense knowledge as in the Winograd Schema Challenge (Rahman and Ng, 1 The code is available at https://github.com/ juntaoy/dali-full-anaphora 2012; Peng et al., 2015; Liu et al., 2017; Sakaguchi et al., 2020); pronominal anaphors that cannot be resolved purely using gender (Webster et al., 2018), bridging reference (Hou, 2020; Yu and Poesio, 2020), discourse deixis (Kolhatkar and Hirst, 2014; Marasovic et al., 2017; Kolhatkar et al., 2018) and, finally, split-antecedent anaphora (Zhou and Choi, 2018; Yu et al., 2020a) plural anaphoric reference in which the two antecedents are not part of a single noun phrase.",
"However, a number of hurdles have to be tackled when trying to study these cases of anaphora, ranging from the lack of annotated resources to the rarity of some of these phenomena in the existing ones.",
"Thus, most previous work on resolving these anaphoric relations focused on developing dedicated systems for the specific task.",
"The systems are usually enhanced by transfer-learning to utilise extra resources, as those anaphoric relations are sparsely annotated.",
"The most frequently used extra resource is single-antecedent anaphors.",
"Due to the complexity of these tasks, previous work is usually based on assuming that either gold anaphors (Hou, 2020; Yu et al., 2020a) or gold mentions (Zhou and Choi, 2018; Yu and Poesio, 2020) are provided.",
"By contrast, in this work we introduce a system that resolves both single and split-antecedent anaphors, and is evaluated in a more realistic setting that does not rely on gold anaphors/mentions.",
"We evaluate our system on the ARRAU corpus (Poesio and Artstein, 2008; Uryupina et al., 2020), in which both single and split-antecedent anaphors are annotated, although the latter are much rarer than the former.",
"We use the state-of-the-art coreference resolution system on ARRAU (Yu et al., 2020b) as our base system for single-antecedent anaphors.",
"This cluster-ranking system interprets single-antecedent anaphors, singletons and non-referring expressions jointly.",
"In this work, we extend the system to resolve split-antecedent anaphors.",
"The extended part of the system shares mention representations and candidate clusters with the base system, and outputs binary decisions between a mention and individual candidate clusters.",
"We configure our system to learn the split-antecedent part and the base system in both JOINT and PRE-TRAINED fashion.",
"The results show both versions work much better than naive baselines based on heuristics and random selection.",
"The PRE-TRAINED version works equally well as the JOINT version on split-antecedent anaphors, but it is better for the other aspects of anaphoric interpretation.",
"In the paper we also begin to address the question of how a system carrying out both single and split-antecedent anaphora resolution should be evaluated.",
"Specifically, we introduce an extended version of LEA (Moosavi and Strube, 2016), a standard coreference metric which can be used to give partial credit for resolution, to evaluate single and split-antecedent anaphors together.",
"Using this metric, we find that our best model achieves a better LEA score than the baselines.",
"We further evaluate our best system in the gold setting to compare with the Yu et al. (2020a) system.",
"The model achieved better performance when compared to their system that is designed solely for split-antecedent task.",
"Single-antecedent anaphora resolution is an active research topic.",
"The first neural model was introduced by Wiseman et al. (2015) and later extended in (Wiseman et al., 2016).",
"Clark and Manning (2016b) introduced a hybrid cluster/mention-ranking approach, whereas Clark and Manning (2016a) adapted reinforcement learning to a mention-ranking model.",
"Lee et al. (2017) introduced the first end-to-end system, performing mention detection and coreference resolution jointly.",
"The Lee et al. (2017) system was also simpler than previous systems, using only a small number of hand-coded features.",
"As a result, the Lee et al. (2017) system has become the blueprint for most subsequent systems.",
"Lee et al. (2018) and Kantor and Globerson (2019) showed that employing contextual ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) embeddings in the system by Lee et al. (2017) can significantly improve performance.",
"(Joshi et al., 2019, 2020) fine-tuned BERT and SpanBERT to further improve performance.",
"Recently, Wu et al. (2020) framed coreference resolution task as question answering and showed that the additional pre-training on a large question answering dataset can further improve performance.",
"However, those systems are only focused on single-antecedent anaphors and do not consider the other anaphoric relations.",
"Interpreting nominal expressions with respect to a discourse model is not simply a matter of identifying identity links; it also involves recognizing that certain potential anaphors are in fact non-referring, or singletons; other expressions refer to entities which have to be introduced in the discourse model via accomodation processes involving for instance the construction of a plural object out of other entities, as in the case of split-antecedent anaphors; other expressions again are related to existing entities by associative relations, as in one -anaphora or bridging reference.",
"These other anaphoric interpretation processes are much less studied, primarily because the relevant information is not annotated in the dominant corpus for coreference, OntoNotes (Pradhan et al., 2012).",
"Systems such as the Stanford Deterministic Coreference Resolver (Lee et al., 2013) do use linguistically-based heuristic rules to recognize and filter singletons and non-referring expressions, but these aspects of the system are not evaluated.",
"Carrying out such an evaluation requires a corpus with richer anaphoric annotations, such as ARRAU (Uryupina et al., 2020).",
"Yu et al. (2020b) is the only neural system that targets singletons and non-referring expressions.",
"The system uses the mention representation from Lee et al. (2018); Kantor and Globerson (2019) and applies a cluster-ranking algorithm to incrementally attach mentions directly to their clusters.",
"Yu et al. (2020b) showed that performance on single-antecedent anaphors improves by up to 1.4 p.p. when jointly training the model with non-referring expressions and singletons.",
"We use Yu et al. (2020b) as our base system, and extend it to resolve split-antecedent anaphors.",
"A few systems resolving split-antecedent anaphors have been proposed in recent years.",
"Vala et al. (2016) introduced a system to resolve plural pronouns they and them in a fiction corpus they themselves annotated.",
"Zhou and Choi (2018) introduced an entity-linking corpus based on the transcripts of the Friends sitcom.",
"The mentions (in-cluding plural mentions) are annotated if they are linked to the main characters.",
"Coreference clusters are then created for mentions linked to the same entities.",
"One issue with this corpus is that it is mainly created for entity-linking, so it is problematic as a coreference dataset, as many mentions are linked to general entities that are not annotated in the text.",
"Zhou and Choi (2018) trained a CNN classifier to determine the relation between mention pairs, jointly performing single and split-antecedent resolution.",
"Another issue with this work is evaluation.",
"Zhou and Choi (2018) evaluate their system using the standard CONLL scorer; in order to do this, they encode split-antecedent anaphora by adding the plural mention to each cluster.",
"So, for instance, in John met Mary.",
"They went to the movies , they would have two gold clusters: {John, They} and {Mary, They}.",
"This is clearly problematic, as They is not a mention of the individual entity John, but of the set consisting of John and Mary.",
"In this work, we propose an alternative, an extended version of LEA (Moosavi and Strube, 2016) that does joint evaluation of single/split-antecedent anaphors by explicitly representing plural entities.",
"Yu et al. (2020a) introduced the first system to resolve all split-antecedent anaphors annotated in the ARRAU corpus.",
"Their work focuses on the data sparsity problem; split-antecedent anaphora resolution is helped using four auxiliary corpora created from a crowdsourced corpus and other anaphoric annotations in the ARRAU corpus.",
"However, their approach focuses on split-antecedent anaphora only, and assumes gold split-antecedent anaphors and gold mentions are provided during the evaluation, which is not realistic.",
"In this work, we resolve both single and split-antecedent anaphora and evaluate our system on predicted mentions.",
"In this work, we use the system of Yu et al. (2020b) as starting point, and extend it to handle split-antecedent anaphora.",
"Yu et al. (2020b) is a cluster-ranking system that jointly processes single-antecedent anaphors, singletons and non-referring expressions.",
"The system uses the same mention representations as in Lee et al. (2018); Kantor and Globerson (2019).",
"The input to the system is a concatenation of contextual BERT (Devlin et al., 2019) embeddings, context-independent GLOVE embeddings (Pennington et al., 2014) and learned character-level embeddings based on convolutional neural network ( CNN s).",
"The system then uses a multi-layer BILSTM to encode the document at the sentence level to create the word representations ( T i ).",
"The candidate mention representations ( M i ) are created by the concatenation of the word representations at the start/end positions of the mention as well as a weighted sum of all the tokens within the mention boundary.",
"After that, the candidate mentions are pruned according to their mention scores ( s m ( i ) ) computed by applying a feed-forward neural network ( FFNN ) to the M i .",
"The top-ranked candidate mentions are then used by the cluster-ranking model to form the entity clusters and to identify the non-referring expressions.",
"The cluster-ranking model incrementally links the candidate mentions to the clusters according to the scoring function ( s ( i, j ) ) between candidate mention M i and partial clusters created so far ( C ji 1 ).",
"More precisely, s ( i, j ) is defined as: s ( i, j ) = s no ( i ) j = NO s nr ( i ) + s m ( i ) j = NR s dn ( i ) + s m ( i ) j = DN s m ( i ) + s c ( j ) + s mc ( i, j ) j C i 1 where s no ( i ) , s nr ( i ) and s dn ( i ) are the likelihood for a candidate mention to be a non-mention ( NO ), a non-referring expression ( NR ) or a discourse new mention ( DN ) respectively.",
"s m ( i ) , s c ( j ) and s mc ( i, j ) are the mention scores (computed for mention pruning), cluster scores (a weighted sum of s m for the mentions in the cluster) and cluster-mention pairwise scores.",
"The system employs additional methods to enhance performancee.g., keeping cluster histories and training the system on the oracle clusters.",
"We refer the reader to (Yu et al., 2020b) for more details.",
"We use the default settings of the system in our experiments.",
"To resolve split-antecedent anaphors, we follow Yu et al. (2020a) who framed the task as a binary clas-sification task.",
"The system uses a scoring function to assign each cluster-mention pair a score s p ( i, j ) specifying the likelihood that that cluster is one of the split-antecedents of the mention.",
"During training, we add a dummy score ( s (cid:15) ( i ) = 0 ) for the cases in which a mention is not a split-antecedent anaphor.",
"Formally, s p ( i, j ) is calculated as follows: s p ( i, j ) = (cid:26) 0 j = (cid:15) s m ( i ) + s c ( j ) + s pmc ( i, j ) j C i 1 The extension for split-antecedents uses the same mention/cluster representations as well as the candidate mentions/clusters of the single-antecedent component.",
"This benefits the split-antecedent anaphors part of the system, that can share the representations learned from more numerous single-antecedent anaphors.",
"As a result, the extension shares the same s m ( i ) and s c ( j ) scores as the base system.",
"s pmc is calculated by applying a FFNN to the cluster-mention pairwise representations.",
"At test time, we convert s p ( i, j ) into probabilities ( p p ( i, j ) ), and assign split-antecedents to plural mentions when the p p ( i, j ) between the plural mentions and the candidate clusters are above the threshold (e.g., 0 . 5 ).",
"p p ( i, j ) is calculated by applying a sigmoid function to s p ( i, j ) : p p ( i, j ) = 1 1 + e s p ( i,j ) To make sure the final system outputs (single-antecedent anaphors, singletons, non-referring expressions and split-antecedent anaphors) do not contradict each other, we only allow discourse-new mentions to become split-antecedent anaphors.",
"We also constrain split-antecedent anaphors to have at least two and at most five antecedents.",
"Since we are working with predicted clusters, to evaluate using lenient and strict scores as in Yu et al. (2020a), we need to find a way to align the predicted clusters with the gold clusters.",
"Here we use the standard coreference alignment function CEAF 4 to align predicted and gold clusters.",
"The alignment between predicted and gold clusters is at the centre of the CEAF 4 scores, which gives exactly what we need for our evaluation.",
"To train, we add to the original loss ( loss s ) a second dedicated loss ( loss p ) for split-antecedent anaphors.",
"We use marginal log-likelihood loss, and optimize on all oracle clusters that belong to the gold split-antecedent cluster list GOLD p ( i ) of split-antecedent anaphors M i .",
"Formally, loss p = log N (cid:89) j =1 (cid:88) c GOLD p ( j ) s p ( c, j ) Since the vast majority of mentions (99%) are negative examples (non-split-antecedent anaphors), training is highly imbalanced.",
"So during training we also use the mentions from the same cluster as the split-antecedent anaphors as additional positive examples.",
"In this way we managed to nearly double the number of positive training examples.",
"We multiply the losses of the negative examples an adjustment parameter to balance the training.",
"We train our system both in JOINT and PRETRAINED mode.",
"For JOINT learning, we train our system on the sum of two losses and weigh them by a factor that determines the relative importance of the losses.",
"Formally, we compute the joint loss as follows: loss j = (1 ) loss s + loss p To use a joint loss the split-antecedent part of the system can have an impact on the mention representations hence might lead to better performance.",
"Our PRE-TRAINED approach is based on the hypothesis that mention/cluster representations trained on the single-antecedent anaphors are suf-ficient as pre-trained embeddings for downstream tasks like split-antecedent anaphors.",
"The PRETRAINED approach minimises the changes to the base system, and one can even reuse the models trained solely for the base system.",
"The training for the split-antecedent part is inexpensive.",
"We use the pre-trained models for our base system to supply mention/cluster representations and other necessary information and optimise the split-antecedent part of the system solely on loss p .",
"If the interpretation of a split-antecedent anaphor were only given credit when all antecedents are correctly detected and grouped together, without giving any reward to systems that find at least some of the antecedents, systems that get closer to the gold would be unfairly penalized, particularly for the cases with 3 or more split antecedents (25% in our data).",
"Consider example 4.1, in which their i,j refers to the set {Mary, John}, and they i,j,p to the set {Mary, John, Jane}.",
"And take two systems A and B that resolve their i,j to {Alex, Jane} and {Mary, Jane}, respectively and they i,j,p to {Alex} and {Mary i , John j }, respectively.",
"Neither system is perfect, but intuitively, system B is more accurate in resolving split-antecedent anaphors (it correctly identifies 1 antecedent of their i,j and 2 of they i,j,p , versus 0 for A)yet both systems will receive the same 0 score if only a perfect match is credited.",
"Example 4.1.",
"Mary i and John j were on their way to visit Alex k when Mary i saw Jane p on their i,j way and realized they i,j,p all wore the same shirt.",
"split-antecedent resolution three issues have to be addressed.",
"First of all, it is necessary to have some way to represent plural entities.",
"Second, we need some way of ensuring that systems that propose different but equivalent resolutions for split-antecedent plurals score the same.",
"Third, we need a metric allowing some form of partial credit.",
"2 We discuss how we addressed each issue in turn.",
"Plural mentions First of all, we propose to have two types of mentions in our coreference chains: in addition to the standard individual mentions (Mary), we also allow plural mentions ({Mary, Jane}).",
"Normalizing coreference chains As Example 4.1 shows, a text may contain multiple individual mentions of the same entity that participate in a plural mention (e.g. Mary').",
"Plural mentions whose antecedents are mentions of the same entity should be equivalent.",
"To do this, we use the first mention of each gold coreference chains as the representative of the entity.",
"We normalize every plural mention in a systemproduced coreference chain by",
"(i) aligning the system-produced coreference chains for the individual mentions in the plural mention to the gold coreference chains using CEAF , and",
"(ii) replacing each individual mention in the plural mention with the first mention in the aligned gold coreference chains.",
"Partial credit A natural way to obtain a scorer for coreference resolution giving partial credit is to extend the LEA evaluation metric (Moosavi and Strube, 2016) to handle split-antecedents.",
"For each entity e , LEA evaluates",
"(a) how important is e , and",
"(b) how well it is resolved.",
"Thus, for computing recall, LEA evaluates a set of system-detected entities E as follows: 3 (cid:80) e E importance ( e ) resolution-score ( e ) (cid:80) e E importance ( e ) (1) where resolution-score is the ratio of correctly resolved coreference links in the entity, and the importance measures how important is entity e in the given text.",
"In the default implementation, importance is set to the size of the entity.",
"However, it can be adjusted based on the use case.",
"2 This third issue is the reason why (Vala et al., 2016; Yu et al., 2020a) used lenient metrics for scoring split-antecedent resolution, although ones that did not score single antecedent resolution as well.",
"3 We can compute precision by switching the role of system and key entities in LEA computations.",
"Let e be an entity in the system output E consisting of n mentions, and K be the set of gold entities.",
"The resolution-score ( RS ) of e is computed as: RS ( e ) = 1 | L ( e ) | (cid:88) l L ( e ) B ( l, K ) (2) where L ( e ) is the set of all coreference links in e 4 , and B ( l, K ) is defined as B ( l, K ) = (cid:40) 1 { k K | l L ( k ) } 0 otherwise (3) (3) states that for each coreference link l in system entities, the system receives a reward of one if l also exists in gold entities, and zero otherwise.",
"If any of the mentions that are connected by l is a partially resolved plural mention, the system receives a zero score.",
"To extend LEA to handle split-antecedents, we change B to also reward a system if any of the corresponding mentions of l , i.e., mentions that are connected by l , is a plural mention and is partially resolved.",
"Let P ( m ) be an ordered list of all subsets of m , including m , by descending order of their size.",
"If m is a singular mention, P will only contain { m }.",
"If m is a plural mention, P will contain m as well as all the subsets of m 's containing mentions.",
"For instance, P ({Mary, John})=[ {Mary, John}, {John}, {Mary}].",
"Assuming the corresponding mentions of l are m i and m j , we update B ( l, K ) as follows: | s i || s j | | m i || m j | { k K,s i P ( m i ) ,s j P ( m j ) | l s i ,s j L ( k ) } | m i || m j | | m k || m p | { k K,m i P ( m k ) ,m j P ( m p ) | l m k ,m p L ( k ) } 0 otherwise where l s i ,s j is the link connecting s i and s j that are the largest subset of P ( m i ) and P ( m j ) , respectively, that exist in gold entities and are coreferent.",
"m k and m p are gold coreferring mentions that m i and m j are a subset of, respectively.",
"For instance, consider the system chain { m 1 ={Mary, Jane}, m 2 =their i,j } for Example 4.1.",
"The coreference link between m 1 and m 2 does not exist in the gold entities.",
"However, m 1 is a subset of a gold mention, i.e., m k ={Mary, John, Jane}, and m 1 P ( m k ) .",
"Therefore, system B receives a reward of 2 1 3 1 for resolving the coreference link between m 1 and m 2 based on RS .",
"Importance As discussed, the number of entities that contain split-antecedents in our annotated data is negligible compared to entities with singular mentions.",
"Therefore, we will not see a big difference in the overall score when the system resolves both singular and plural mentions.",
"In order to put more emphasize on harder coreference links, i.e., resolving split-antecedents, we adapt the importance measure to assign a higher weight to entities containing split-antecedent as follows: importance ( e ) = importance-factor ( e ) | e | (cid:80) e i importance-factor ( e i ) | e i | The importance-factor assigns Imp split times higher importance on plural entities compared to entities of singular mentions: importance-factor ( e ) = (cid:40) Imp split If e is a plural entity 1 If e is singular 5 Experiments 5.1 Datasets We evaluated our system on the RST portion of the ARRAU corpus (Uryupina et al., 2020).",
"ARRAU provides a wide range of anaphoric information (referring expressions including singletons and non-referring expressions; split-antecedent plurals; generic references; discourse deixis; and bridging references) and was used in the CRAC shared task (Poesio et al., 2018); RST was the main evaluation subset in that task; the RST portion of the ARRAU corpus consists of 1/3 of the Penn Treebank (news texts).",
"Table 1 summarizes the key statistics about the corpus.",
"In separate evaluation , we follow standard practice to report CONLL average F1 score (macro average of MUC, B 3 and CEAF 4 ) for single-antecedent anaphors, and F1 scores for",
"non-referring expressions.",
"For split-antecedent anaphors, we report three F1 scores: the strict F1 score that only gives credit when both anaphors and all their split-antecedents are resolved correctly 5 ; the lenient F1 score that gives credit to anaphors that resolved partially correct (Vala et al., 2016); and the anaphora recognition F1 score.",
"For joint evaluation of single/split-antecedent anaphors, we report the LEA score using the upgraded script described in Section 4.",
"We use the default parameter settings of Yu et al. (2020b) and use their hybrid approach for handling the non-referring expressions.",
"The split-antecedent part of the system uses an FFNN with two hidden layers and a hidden size of 150.",
"The negative example loss adjustment parameter and the loss weight parameter (used for JOINT learning) are set to 0.01 and 0.1 respectively after tuning on the development set.",
"Table 2 provides details on our parameter settings.",
"We first evaluate our two proposed systems in the separate evaluation setting, in which we report separate scores for single-antecedent anaphors, non-referring expressions and split-antecedent anaphors.",
"Showing individual scores for different aspects provide a clear picture of the different models.",
"Baselines Like (Vala et al., 2016; Yu et al., 2020a), we include baselines based on heuristic rules or random selection.",
"For all baselines, we use the same model as used by our PRE-TRAINED approach to supply the candidate split-antecedent anaphors/singular clusters.",
"The anaphora recognition baseline classifies as split-antecedent anaphors the discourse-new mentions belonging to a small list of plural pronoun (e.g., they , their , them , we ).",
"6 The recent-x baseline chooses the x closest singular clusters as antecedents for these candidates.",
"The random baseline assigns two to five antecedents randomly to each chose split-antecedent anaphors.",
"Results Table 3 shows the comparison between our two systems and the baselines.",
"Since plural pronouns are the most frequent split-antecedent anaphors, the simple heuristic gives a reasonably good F1 score of up to 36.2% for anaphora recog-6 We also tried a random selection based approach, but such an approach only gets less than 5% split-antecedent anaphors correctly.",
"nition.",
"In term of the scores on full resolution, the baselines only achieved a maximum F1 of 17% and 5.7% when evaluated in the lenient and strict settings respectively.",
"The low F1 scores indicate that split-antecedent anaphors are hard to resolve.",
"When compared with the baselines, both of our approaches achieved much better scores for all three evaluations.",
"Our models achieved substantial improvements over the baselines of up to 19%, 19.9% and 14.7% for anaphora recognition, full resolution (lenient and strict) respectively.",
"The model trained in a JOINT setting achieves a better recall for both lenient evaluation and anaphora recognition, while the PRE-TRAINED setting has much better precision.",
"We expect this is because the joint system could have an impact on candidate mentions/clusters, hence potentially recover more antecedent-anaphora pairs.",
"By contrast, the candidate mentions/clusters are fixed in the PRE-TRAINED setting.",
"Overall, the JOINT model achieves a slightly better lenient F1 score but a lower strict F1 score, whereas the PRE-TRAINED setting has a better overall performance when compared with the JOINT model.",
"The JOINT system also has a lower CONLL average F1 score and non-referring F1 score when compared with the system trained in a PRE-TRAINED fashion.",
"This indicates that jointly training is not helpful for the single-antecedent anaphors and non-referring expressions.",
"Hence we use the PRE-TRAINED approach for further experiments.",
"We then evaluate our models with the newly extended LEA scores to show how split-antecedent anaphors could impact the results when evaluated together with single-antecedent anaphors.",
"Table 4 shows the LEA score comparison between our best model ( PRE-TRAINED ) and the baselines.",
"As only half of the test documents contain split-antecedent anaphors, we report the results on those test documents to give a clear picture on the evaluation.",
"We carried out two evaluations.",
"The first setting is the traditional evaluation setting for coreference, in which split-antecedent anaphors are weighed equally as single antecedent anaphors (i.e., they are treated in LEA as a single mention, Imp split = 1).",
"We do not believe, however, that treating all anaphors equally is the most informative approach to evaluating coreference, for it is wellknown that some anaphors are much easier to resolve than others (Barbu and Mitkov, 2001; Webster et al., 2018).",
"LEA makes it possible to give more weight to anaphors that are harder to resolve.",
"So in our second evaluation we give more importance to split-antecedent anaphors (Imp split = 10) since they are much harder to resolve and also infrequent when compare to the single-antecedent anaphors.",
"To have slightly higher importance for split-antecedents will give us a better view of their impact.",
"The results in Table 4 show that our best model achieved moderate improvements of 0.3% 0.5% on the first LEA score setting when compared with the baselines.",
"This is mainly because the split-antecedent anaphors are less than 1% of the mentions.",
"But ss expected, the improvements become more clear in the second evaluation setting, in which our model is 2.6% 3.7% better than the baselines.",
"To compare with the state-of-the-art system on ARRAU , (Yu et al., 2020a), we train our best setting ( PRE-TRAINED ) as Yu et al. (2020a) did, i.e., assuming both gold mention and gold split-antecedent anaphors are provided.",
"We first train the base model using gold mentions, then train the split-antecedent part of the system using gold mentions and gold split-antecedent anaphors.",
"Since Yu et al. (2020a)'s system is evaluated on the full ARRAU corpus and with a customised train/test split priorities the split-antecedent anaphors, we retrain their system using the same standard RST split as used in our evaluation.",
"We train their system with both baseline and the best settings using a single auxiliary corpus ( SINGLE-COREF ).",
"7 As shown in Table 5, our best model achieved both better lenient and 7 The best setting, that uses multi-auxiliary corpora, is more complex to train and only moderately improves the results.",
"better strict accuracy than the Yu et al. (2020a) system, even though theirs is a dedicated system concerned only with split-antecedent anaphora.",
"The results suggest the pre-trained mention/cluster representations are suitable for low-resource tasks that reply heavily on such representations.",
"In this section, we carry a qualitative analysis on the system outputs to find out the main courses of the performance gaps between the gold and predicted settings.",
"We also report a more detailed comparison between our system and the Yu et al. (2020a) system to see if there is a systematic difference between the two systems on the gold settings.",
"The Challenge of Using Predicted Setting The split-antecedent anaphora resolution task is more complex than its single-antecedent counterpart.",
"The semantic relation between each individual antecedent and the anaphora is not identity, but element-of; and the number of antecedents can also vary.",
"The results on evaluations with gold mentions and gold split-antecedent anaphors provided are promising.",
"However, when evaluated using predicted mentions we have two main challenges: anaphora recognition and noisy candidate mentions/clusters.",
"For anaphora recognition, our best model ( PRE-TRAINED ) only recalls 45% of the anaphors.",
"The performance of our anaphora recognition is affected by the predicted mentions, and further capped by the fact that we only attempt to classify as split-antecedent the mentions classed as discourse-new by the base model.",
"To assess the impact of these two factors, we computed the recall of split-antecedent anaphors by predicted mentions and discourse-new mentions.",
"Virtually all split-antecedent anaphors are recalled among the predicted mentions98.33%but only 65% are recalled among the discourse-new mentions.",
"This has a big impact on our results for split anaphora recognition, since 35% of the anaphors are not accessible to our system.",
"To understand the impact of this gap on the result, we supply to our system the 98.33% of split-antecedent anaphors recognized as predicted.",
"We keep everything else predicted mentions, and clustersunchanged.",
"When run this way, our system achieves a lenient F1 score of 47.7%, which is 11.3 p.p. better than the score (36.4%) achieved using predicted anaphors, although still 20.4% lower than the model trained and evaluated with gold mentions and gold split-antecedent anaphors (68.1%).",
"We suggest this additional difference is mainly a result of noise in the predicted mentions and clusters.",
"Overall, then, the noise in the predicted mentions and clusters contributed 2/3 of the score difference, while problems with anaphora recognition are responsible for the rest.",
"In Depth Comparison with Yu et al. (2020a) .",
"Next, we compared our model's outputs in the gold setting with those of the best model of Yu et al. (2020a) in more detail.",
"We split the test set in two different ways and compute the system performances on different categories.",
"First, we follow Yu et al. (2020a) and split the split antecedent anaphors in the test set into two classes according to the number of gold split-antecedents: one class includes the anaphors with two split-antecedents, whereas the second class includes the anaphors with three or more split-antecedents (about 23% of the total).",
"Table 6 compares these two classes.",
"As we can see from the Table, with lenient evaluation the two systems work equally well for the anaphors with two split-antecedents, but our model is 8.5% better for mentions with three or more split-antecedents.",
"In terms of strict evaluation, our model outperforms the (Yu et al., 2020a) model by 8.7% and 14.3% for two classes respectively.",
"Overall, the model presented here achieved substantial performance gains on anaphors with three or more split-antecedents.",
"We then split the data into two classes according to a different criterion: the part-of-speech of the anaphor.",
"The first class consists of pronoun anaphors, such as they or their.",
"The second class consists of all other split antecedent anaphors, such as those companies or both.",
"As shown in Table 7, the (Yu et al., 2020a) model achieves better scores for pronoun anaphors (mainly they and their).",
"However, our new model outperforms the old system with non-pronominal anaphors by 5.4% according to lenient F1, and doubled their strict accuracy.",
"In this paper, we introduced a neural system performing both single and split-antecedent anaphora",
"resolution, and evaluated the system in a more realistic setting than previous work.",
"We extended the state-of-the-art coreference system on ARRAU to also resolve split-antecedent anaphors.",
"The proposed system achieves much better results on split-antecedent anaphors when compared with the baselines using heuristic and random selection when using the predicted mentions/clusters.",
"Our system also achieves better results than the previous state-of-the-art system on ARRAU (Yu et al., 2020a), which only attempted single-antecedent anaphora resolution from gold mentions, when evaluated on the same task.",
"In addition, we also proposed an extension of the LEA coreference evaluation metric to evaluate both single and split-antecedent anaphors in a single metric.",
"This research was supported in part by the DALI project, ERC Grant 695662."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"objective",
"other"
] |
[
"Previous work on multimodal machine translation (MMT) has focused on the way of incorporating vision features into translation but little attention is on the quality of vision models.",
"In this work, we investigate the impact of vision models on MMT.",
"Given the fact that Transformer is becoming popular in computer vision, we experiment with various strong models (such as Vision Transformer) and enhanced features (such as object-detection and image captioning).",
"We develop a selective attention model to study the patch-level contribution of an image in MMT.",
"On detailed probing tasks, we find that stronger vision models are helpful for learning translation from the visual modality.",
"Our results also suggest the need of carefully examining MMT models, especially when current benchmarks are small-scale and biased.",
"Our code could be found at https: //github.com/libeineu/fairseq_mmt .",
"Multimodal machine translation (MMT) has emerged as an active field of research which marries the worlds of computer vision (CV) and natural language processing (NLP) (Specia et al., 2016).",
"Early models of this kind produce a translation given the fused representation of both the visual and textual inputs (Caglayan et al., 2016; Libovick and Helcl, 2017; Calixto and Liu, 2017).",
"As expected, such a paradigm achieves promising BLEU improvements and inspires the community to follow up.",
"But soon researchers found that MMT systems did not act as what they ordinarily designed: the visual modality contributes to translation little.",
"For example, it is not harmful to MMT systems when the input image is irrelevant to the text (Grnroos et al., 2018; Lala et al., 2018), or even when the vision features are absent (Elliott, 2018).",
"More recently, Wu et al. (2021) have pointed out that the Corresponding author.",
"use of the visual modality is a way of regularization for training but not a complement to the text modality.",
"As another response to the analysis of MMT, Caglayan et al. (2019) investigate how the vision features correlate to the text.",
"They find that the input image helps translation when some of the input words are masked.",
"Note that previous work has for the most part focused on integrating off-the-shelf vision models (such as ResNet-50) into MMT.",
"The underlying assumption here is that the existing vision models are powerful enough to encode the image.",
"This implicitly ignores the quality of vision models in representing images.",
"But computer vision is facing a new trend by moving from CNNs to Transformer as the backbone model (Dosovitskiy et al., 2021; Liu et al., 2021b; Carion et al., 2020).",
"A natural question that arises is: how will MMT systems behave if stronger vision models are adopted?",
"In this work, we address this question by a systematic study of using various vision models in MMT, in particular using the most successful models in recent studies (such as Vision Transformer, or ViT for short).",
"We find that the patch method used in Transformer-based vision models offers an opportunity to detail the patch-level contribution of the image.",
"This leads us to develop a selective attention model to correlate words with image patches.",
"Beyond this, we introduce object-detection and image captioning features into MMT for further improvements of the vision models (Carion et al., 2020; Fang et al., 2021).",
"Following Caglayan et al. (2019)'s work, we design more detailed probing tasks to examine to what degree the visual modality contributes to MMT.",
"We run an extensive set of experiments on En-De and En-Fr MMT tasks.",
"Our findings are Stronger vision models help.",
"For example, ViT can beat ResNet-50 on the probing tasks though the superiority is not significant on standard MMT data.",
"Automatic evaluation on current MMT tasks might not be a good indicator for the effectiveness of MMT models.",
"For example, models enhanced with object-detection and image captioning features yield good BLEU scores on the original MMT task but show modest or no contributions on the probing tasks.",
"We start with a description of the probing tasks.",
"It is followed by a design of vision features and a selective attention mechanism for introducing ViT-like representations into MMT.",
"To know how an image contributes to translation, a way is to mask some of the input words (call this insufficient text) and force the translation model to learn from the image.",
"Following the previous design of color deprivation and entity-based masking, we present detailed probing tasks which are complementary to Caglayan et al. (2019)'s work.",
"In preliminary experiments 1 , we find that color, character and noun are three kinds of words which could be complemented according to the visual modality once the corresponding texts are masked.",
"The following probing tasks are designed accordingly.",
"Color-based Probing In training, all source words referring to a color are replaced by a special token [ Mask_C ] .",
"There are 8 , 919 sentences involving color words, and nearly one third of them involve more than one color.",
"It is worth noting that each color may have two or more translations due to the rich morphology in German and French.",
"For example, the English green can be translated to 1 We choose the Multi30K En-De and En-Fr datasets for experiments.",
"grn, grne, grnes, grner, grnen and grnem in German.",
"We design two criteria to measure the accuracy of translation.",
"The first criterion is strict.",
"The correct translation requires generating the same color and the same gender as in reference translations.",
"The second criterion is relaxed and all translations expressing the same color are correct.",
"Character-based Probing For character words, we choose man, woman, people, men, girl and boy.",
"More than 60% sentences contain character words in our training data, so they are reasonable indicators of assessing the ability to infer correct translations from the input image.",
"Here we use [ MASK_P ] for masking.",
"Note that some character words have more than two translations, e.g. people, we also use the same evaluation metric with the color-based probing task, including relaxed and strict two criteria.",
"Noun-based Probing For more complex scenarios, a sentence can be masked with several kinds of ambiguous words, such as animals, clothing, and vehicles, provided by Flickr30K (Plummer et al., 2015).",
"High-frequency words labeled with noun (or nouns) are more likely to be masked as [ MASK_N ] (or [ MASK_NS ])).",
"See Table 1 for example insufficient text with different numbers of masks.",
"In addition to ResNet-50, we choose Transformer-based vision models.",
"General Backbone.",
"Vision Transformer (ViT) and Swin Transformer are popular models in computer vision (Dosovitskiy et al., 2021; Liu et al., 2021b).",
"We use ViT with various model capacities to vary from weak to strong ViT models.",
"Object-detection.",
"For pretrained object-detection vision models, we choose DETR (Carion et al., 2020) and QueryInst (Fang et al., 2021) for their strong performance.",
"Image Captioning.",
"For image captioning models, we choose CATR 2 because it is a Transformer-based image captioning architecture and can be easily implemented on top of ViT.",
"We form a number of vision features by combining the methods described above.",
"More details are presented in Section 3.",
"ViT and related models perform in almost the same way as Transformer in NLP (Vaswani et al., 2017).",
"Unlike the general models in CV, ViT does not represent the image as a single vector.",
"Instead, it generates a sequence of patches for image representation.",
"An advantage of this design is that we can use the attention mechanism to correlate image patches to words.",
"Thus, we present a selective attention model to model the patch-level contribution of the image.",
"See Figure 1 for the architecture.",
"Text-only Transformer Transformer follows an encoder-decoder paradigm (the purple region in Figure 1) .",
"The encoder is a stack of identical layers.",
"Each layer consists of a self-attention (SAN) block and a feedforward network (FFN) block.",
"The decoder shares a similar design with the encoder, but with an additional cross-attention block.",
"Gated Fusion Gated fusion mechanism is a popular technique for fusing representations from different sources (Wu et al., 2021; Zhang et al., 2020; Lin et al., 2020; Yin et al., 2020).",
"Given the text 2 https://github.com/saahiluppal/catr input X text and the image input X img , the text representation H text and the image feature H img can be defined as: H text = TransformerEncoder ( X text ) (1) H img = W ViT ( X img ) (2) where W is a projection matrix to convert the shape of ViT ( X img ) into that of H text .",
"Note that ViT ( ) can be replaced by other vision models, e.g. DETR, Swin Transformer and etc.",
"Then, the gate [0 , 1] and the fuzed output are defined as: = Sigmoid ( UH text + V H img ) (3) H Out = (1 ) H text + H img (4) where U and V are trainable variables.",
"controls how much visual information is kept.",
"Then, the fusion vector H Out is fed into the decoder.",
"See the right side of the pink region in Figure 1 for an illustration of the gated fusion models.",
"Selective Attention After obtaining the text and image representations (or features), we use a single-head attention network to correlate words with image patches, where the query, key and value are H text , H img and H img , respectively.",
"Then the selective attention output H img attn is defined to be: H img attn = Softmax ( QKT d k ) V (5) where d k is the same as the dimension of H text because a single head is used.",
"Then the fused representation could be obtained by using Eqs.",
"3 and 4 and replacing H img with H img attn .",
"We conducted experiments on the widely used Multi30K benchmark (Elliott et al., 2016).",
"The training and validation sets consisted of 29 , 000 and 1 , 014 instances, respectively.",
"We reported the results on the Test2016, Test2017 and MSCOCO test sets (Elliott et al., 2017).",
"Note that MSCOCO is more challenging for MMT models due to the out-of-domain instances with ambiguous verbs.",
"Following the setup in (Wu et al., 2021), we learned a joint BPE code for 10 , 000 merging operations for both the source and target languages, resulting in vocabularies of 9 , 716 and 9 , 548 entries for the En-De and En-Fr tasks.",
"We followed the Wu et al. (2021)'s work to conduct experiments with Transformer-Tiny configu-ration, which is more suited for small datasets like Multi30K.",
"Note that smaller models even obtain higher BLEU scores than pervious MMT models.",
"Similar observations have been discussed when building context-aware machine translation models (Li et al., 2020).",
"The model consists of 4 encoder and decoder layers.",
"The hidden size is 128 and the filter size of FFN is 256 .",
"There are 4 heads in the multi-head self-attention mechanism.",
"We set the dropout as 0 .",
"3 and the label smoothing as 0 .",
"1 .",
"Our implementation was based on Fairseq (Ott et al., 2019).",
"For training, we used Adam Optimizer (Kingma and Ba, 2015) with 1 = 0 .",
"9 , 2 = 0 .",
"98 and (cid:15) = 10 8 .",
"We adopted the same learning rate schedule as (Vaswani et al., 2017), where the learning rate first increased linearly for warmup = 2000 steps from 1 e 7 to 5 e 3 .",
"After the warmup, the learning rate decayed proportionally to the inverse square root of the current step.",
"Each training batch contained 4 , 096 tokens.",
"We also adopted the early-stop training strategy (Zhang et al., 2020) to avoid the overfitting issue.",
"For evaluation, we averaged the last 10 checkpoints for more reliable results.",
"The width of beam size was set to 5 .",
"The performance was measured by BLEU and METEOR for all test sets.",
"Also, we used accuracy for evaluation on the probing tasks.",
"Table 2 summarizes the results on standard MMT data.",
"Each model was evaluated on three test sets on two language pairs.",
"We see, first of all, that the improvements of previous methods (Rows 2-4) over the tiny baseline are marginal in terms of both BLEU and METEOR.",
"This confirms the assumption that the visual features are not fully used if the text is complete (Caglayan et al., 2019).",
"When switching the vision features from ResNet (Row.5) to ViT (Row.6), there are no significant BLEU gains.",
"Then, we test them on the proposed probing tasks to examine the real contribution to MMT.",
"Color-based Probing Table 3 shows the accuracy on the color-based probing task.",
"We see that the accuracy improvement of the gated fusion method is marginal by both restrict and relaxed criteria.",
"However, replacing ResNet with ViT yields gains of over 8 accuracy points across three test 6330 Systems Test2016 Test2017 MSCOCO Restrict Relaxed Restrict Relaxed Restrict Relaxed English German Text-only Transformer 25.93 34.42 22.57 35.70 18.75 23.44 Gated Fusion + ResNet 27.23 ( 1.30) 35.51 ( 1.09) 23.10 ( 0.53) 37.01 ( 1.31) 21.88 ( 3.13) 25.00 ( 1.56) Gated Fusion + ViT 35.08 ( 9.15) 42.48 ( 8.06) 25.46 ( 2.89) 41.73 ( 6.03) 25.00 ( 6.25) 31.25 ( 7.81) Selective Attn + ViT 51.20 ( 25.27 ) 64.71 ( 30.29 ) 31.76 ( 9.19 ) 53.54 ( 17.84 ) 43.75 ( 25.00 ) 56.25 ( 32.81 ) English French Text-only Transformer 30.72 33.12 34.91 38.85 23.44 29.69 Gated Fusion + ResNet 32.68 ( 1.96) 35.51 ( 2.39) 32.55 ( 2.36) 35.17 ( 3.68) 17.19 ( 6.25) 23.44 ( 6.25) Gated Fusion + ViT 45.53 ( 14.81) 50.76 ( 17.64) 45.41 ( 10.50) 52.23 ( 13.38) 34.38 ( 10.94) 43.75 ( 14.06) Selective Attn + ViT 62.96 ( 32.24 ) 68.85 ( 35.73 ) 49.34 ( 14.43 ) 55.38 ( 16.53 ) 43.75 ( 20.31 ) 53.12 ( 23.43 ) Table 3: The accuracy of MMT systems when applied color-based probing.",
"sets on En-De task.",
"Similar improvements are observed on the En-Fr task.",
"The finding here indicates that stronger vision features are helpful for representing the visual information.",
"Moreover, selective attention can make better use of the ViT features, achieving over 20 accuracy gains on three test sets.",
"This verifies the conjecture that the selective attention can further enhance the fused representation for the ViT features.",
"Character-based Probing Table 4 shows similar results as in Table 3.",
"ViT with selective attention performs the best on most scenarios, it is only slightly inferior to Gated Fusion + ViT on the MSCOCO dataset.",
"While the gated fusion method with ResNet feature behaves far from desirable.",
"It even underperforms the text-only Transformer, though the text-only Transformer is carefully regularized.",
"A potential explanation is the character-based probing task is more challenging than the color-based probing task because it is more diffi-cult for the model to find the correct corresponding region of the masked character word and provide useful signals to the text encoder.",
"the results on the En-De and En-Fr tasks, respectively.",
"The ViT features can significantly outperform the ResNet features across all masking methods on the two language pairs.",
"We also observe that the gap between the ResNet and ViT features is gradually enlarged as more nouns are masked.",
"This confirms the results in (Dosovitskiy et al., 2021).",
"We further explore the impact of model capacity.",
"Here, we report the results of ViT and Swin Transformer because they are strong models in recent studies.",
"Our conjecture here is that larger ViT/Swin models can describe the image more accurately, which enables the text encoder to receive richer complementary information.",
"Figure 3 depicts the BLEU scores in progressive noun masking scenarios.",
"Intuitively, larger ViT and Swin models provide more complementary knowledge to complete the insufficient text representations.",
"Nevertheless, a counterintuitive phenomenon is the inferiority of Swin across all scenarios in the same configuration, though it outperforms ViT on most computer vision benchmarks.",
"We attribute the reason to the short length of the patch sequence.",
"In patch, ViT has a length of 577 (576 sequence 6331 Gated Fusion+ResNet: Gated Fusion+ViT_Large: Selective Attn+ViT_Large:",
"segments and a special token CLS ) when the image resolution and the patch size are 384 384 and 16 16 .",
"However, Swin has a fixed sequence length (49) restricted by the shifted window operation.",
"This leads to more fine-grained local features for ViT, which is beneficial to the selective attention mechanism for extracting more relevant pieces.",
"Then, we investigate the impact of the enhanced vision features on MMT.",
"Previous studies have already attempted to leverage object-detection features (Zhao et al., 2020; Wang and Xiong, 2021) but the observation here is slightly different.",
"Beyond the object-detection pretrained features, we also take the image captioning task into account.",
"Rows 11-13 in Table 2 summarize the results of the three enhanced vision features on the standard MMT data, and Figure 4 depicts the results on insufficient texts.",
"Here we choose ViT-Tiny-based models for comparison due to the similar model capacity they own 3 .",
"We see that not only the object-detection (DETR and QueryInst), but also the image captioning (CATR) pretrained fea-3 Only pretrained vision models in a 256 hidden-size are available 6332 System Patch Reso.",
"tures obtain superior performance compared with ViT-tiny (Row 8) when the text is complete.",
"It is consistent with previous findings (Yin et al., 2020; Zhao et al., 2020).",
"However, the advantages do not persist when switching to limited text scenarios.",
"A possible explanation is that these methods are sensitive to the quality of the extracted objects.",
"We leave this as future work.",
"It is well-known that higher resolutions are beneficial to the accuracy improvement in computer vision tasks (Dosovitskiy et al., 2021).",
"Despite the success of the Transformer architecture, recent studies show that the success of ViT mainly comes from the successful use of the patch schema (Dosovitskiy et al., 2021).",
"Here, we compare MMT systems with different resolutions and patch sizes based on ViT-Base.",
"The results on three probing tasks (see Table 5) again confirm the above assumption that fine-grained vision features are more suited for the selective attention.",
"Also, the attention map visualized in Figure 5 demonstrates that high resolution with fine-grained patch schema can attend to correct regions of the image for each masked token.",
"For example, both models pay the right attention to the masked character and noun, but the model with low resolution fails to detect the right region of color.",
"The finding here may shed light to other multimodal tasks, such as VQA.",
"Incongruent decoding is a widely used manner to evaluate whether the visual modality contributes to the text (Caglayan et al., 2019, 2021).",
"Table 6 shows that incongruent decoding causes obvious BLEU drops except for the ResNet feature.",
"ViT beats the ResNet with gated fusion.",
"It yields higher BLEU scores with congruent decoding and exhibits a larger BLEU drop with incongruent decoding.",
"We also find that the ViT features learned from scratch are also insensitive to the visual modality.",
"This is reasonable that the learned vision systems are not sufficiently strong due to the data scarcity of Multi30K.",
"Thus the visual modality acts more like noise signals.",
"In addition, focusing on the results of pretrained selective attention + ViT, the gap between congruent and incongruent decoding gradually becomes larger.",
"We also investigate whether the ensemble vision features can help.",
"Concretely, we choose ViT and CATR to independently generate the fused representations with the text feature, and then the ensemble feature is obtained based on them.",
"We see that the ensemble vision feature performs the best on the congruent decoding, and achieves the largest 6333 System Mask 1 Mask 2 Mask 3 Mask 4 Cong.",
"BLEU gaps on four masking scenarios compared with other systems.",
"These results again indicate that stronger visual contexts indeed help.",
"Finally, we compare several real cases.",
"We choose gated fusion ( CNN ) (Wu et al., 2021) and selective attention + ViT_Base ( ViT ) for comparison.",
"The qualitative examples in Table 7 demonstrate that the visual modality is complementary rather than redundant if the text is insufficient.",
"To figure out whether the German translation is right or not, we provide the human-translation results.",
"First, we see the top half case of Table 7, ViT can fill in the masked entities and generate the correct translations even four entities were masked.",
"Unfortunately, CNN incorrectly judges the man as a woman.",
"Also, it cannot distinguish the right color of shirt due to the complex background.",
"When given a more complex image (the bottom half case), it is still a challenge for ViT to generate the right translation.",
"The observation here inspires us to design a more powerful fusion method.",
"Also, the data scarcity problem is a root issue to prevent us from further improving the cross-modal translation quality.",
"Multimodal machine translation is a cross-domain task in the field of machine translation.",
"Early attempts mainly focused on enhancing the MMT model by better incorporation of the vision features (Calixto and Liu, 2017; Elliott and Kdr, 2017; Delbrouck and Dupont, 2017).",
"However, directly encoding the whole image feature brings additional noise to the text (Yao and Wan, 2020; Liu et al., 2021a).",
"To address the above issue, Yao and Wan (2020) proposed a multimodal self-attention to consider the relative difference of information between two modalities.",
"Similarly, Liu et al. (2021a) used a Gumbel Softmax to achieve the same goal.",
"Researchers also realize that the visual modality may be redundant.",
"Irrelevant images have little impact on the translation quality, and no significant BLEU drop is observed even the image is absent (Elliott, 2018).",
"Encouraging results appeared in 6334 Caglayan et al. (2019)'s work.",
"They pointed out that the visual modality is still useful when the linguistic context is scarce, but is less sensitive when exposed to complete sentences.",
"More recently, Wu et al. (2021) attributed the BLEU gain on MMT tasks to the regularization training, and they again emphasized the imperative of constructing proper insufficient textual input.",
"It is worthy to note that the proposed probing task is an improved version based upon previous work (Caglayan et al., 2019; Wu et al., 2021).",
"We also opensource the preprocessed data and the corresponding scripts for the subsequent researchers to experiment on.",
"Another line of research is to explore large-scale cross-modal pretraining models.",
"In this way, the MMT task is regarded as a downstream task.",
"For example, CLIP (Radford et al., 2021) is a general cross-modal pretraining model, which learns to perform a wide variety of tasks via natural language prompting.",
"Caglayan et al. (2021) presented a MMT-specific pretraining model which combines the translation language modeling with masked region classification objectives.",
"In this work, we make a systematic study on whether stronger vision features are helpful.",
"We also extend the research to enhanced features, such as object-detection and image captioning, which are complementary to previous work.",
"In this work, we show that stronger vision features (e.g. ViT-like models) strengthen MMT systems on three proposed probing tasks.",
"We present a selective attention method for ViT-based models to make better use of the patch-level representation.",
"The result here shows a promising line of research on developing better vision models for multimodal tasks.",
"As far as we know, this is the first attempt to build MMT systems with Transformer only.",
"In future work, we are willing to investigate whether it is possible to use a single set of parameters to encode the vision and text modalities.",
"This work was supported in part by the National Science Foundation of China (Nos. 61732005 and 61876035), the National Key R&D Project of China (No. 2019QY1801), the China HTRD Center Project (No. 2020AAA0107904) and Yunnan Provincial Major Science and Technology Special Plan Projects (Nos. 201902D08001905 and",
"202103AA080015).",
"The authors would like to thank anonymous reviewers for their valuable comments.",
"And thank Yufan Jiang for his helpful advice to improve the paper."
] | [
"abstain",
"objective",
"method",
"objective",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"method",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Ryan Steed Carnegie Mellon University [email protected]",
"{swetasudha.panda,ari.kobren,michael.wick}@oracle.com",
"A few large, homogenous, pre-trained models undergird many machine learning systems and often, these models contain harmful stereotypes learned from the internet.",
"We investigate the bias transfer hypothesis : the theory that social biases (such as stereotypes) internalized by large language models during pre-training transfer into harmful task-specific behavior after fine-tuning.",
"For two classification tasks, we find that reducing intrinsic bias with controlled interventions before fine-tuning does little to mitigate the classifier's discriminatory behavior after fine-tuning.",
"Regression analysis suggests that downstream disparities are better explained by biases in the fine-tuning dataset.",
"Still, pre-training plays a role: simple alterations to co-occurrence rates in the fine-tuning dataset are ineffective when the model has been pre-trained.",
"Our results encourage practitioners to focus more on dataset quality and context-specific harms.",
"Large language models (LLMs) and other massively pre-trained foundation models are powerful tools for task-specific machine learning (Bom-masani et al., 2021).",
"Models pre-trained by well-resourced organizations can easily adapt to a wide variety of downstream tasks in a process called fine-tuning .",
"But massive pre-training datasets and increasingly homogeneous model design come with well-known, immediate social risks beyond the fi-nancial and environmental costs (Strubell et al., 2019; Bender et al., 2021).",
"Transformer-based LLMs like BERT and GPT-3 contain quantifiable intrinsic social biases encoded in their embedding spaces (Goldfarb-Tarrant et al., 2021).",
"These intrinsic biases are typically associated with representational harms, including stereotyping and denigration (Barocas et al., 2017; Blodgett et al., 2020; Bender et al., 2021).",
"Separately, many studies document the extrinsic harms of the downstream (fine-tuned & task-specific) ap-(cid:37)(cid:68)(cid:86)(cid:72)(cid:3)(cid:48)(cid:82)(cid:71)(cid:72)(cid:79) (cid:11)(cid:72)(cid:17)(cid:74)(cid:17)(cid:3)(cid:53)(cid:82)(cid:37)(cid:40)(cid:53)(cid:55)(cid:68)(cid:12)(cid:3) (cid:51)(cid:85)(cid:72)(cid:16)(cid:55)(cid:85)(cid:68)(cid:76)(cid:81)(cid:72)(cid:71)(cid:3)(cid:48)(cid:82)(cid:71)(cid:72)(cid:79) (cid:41)(cid:76)(cid:81)(cid:72)(cid:16)(cid:55)(cid:88)(cid:81)(cid:72)(cid:71)(cid:3)(cid:48)(cid:82)(cid:71)(cid:72)(cid:79) (cid:51)(cid:85)(cid:72)(cid:16)(cid:55)(cid:85)(cid:68)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74) (cid:41)(cid:76)(cid:81)(cid:72)(cid:16)(cid:55)(cid:88)(cid:81)(cid:76)(cid:81)(cid:74) (cid:51)(cid:85)(cid:72)(cid:16)(cid:55)(cid:85)(cid:68)(cid:76)(cid:81)(cid:76)(cid:81)(cid:74)(cid:38)(cid:82)(cid:85)(cid:83)(cid:82)(cid:85)(cid:68) (cid:11)(cid:72)(cid:17)(cid:74)(cid:17)(cid:3)(cid:58)(cid:76)(cid:78)(cid:76)(cid:83)(cid:72)(cid:71)(cid:76)(cid:68)(cid:12)(cid:3) (cid:55)(cid:68)(cid:86)(cid:78)(cid:16)(cid:54)(cid:83)(cid:72)(cid:70)(cid:76)(cid:73)(cid:76)(cid:70)(cid:39)(cid:68)(cid:87)(cid:68)(cid:86)(cid:72)(cid:87)(cid:3)(cid:11)(cid:72)(cid:17)(cid:74)(cid:17)(cid:3)(cid:37)(cid:44)(cid:50)(cid:54)(cid:12)(cid:3) (cid:54)(cid:70)(cid:85)(cid:88)(cid:69)(cid:69)(cid:72)(cid:71)(cid:3)(cid:82)(cid:85)(cid:85)(cid:72)(cid:16)(cid:69)(cid:68)(cid:79)(cid:68)(cid:81)(cid:70)(cid:72)(cid:71)(cid:3) (cid:40)(cid:89)(cid:68)(cid:79)(cid:88)(cid:68)(cid:87)(cid:76)(cid:82)(cid:81)(cid:55)(cid:72)(cid:80)(cid:83)(cid:79)(cid:68)(cid:87)(cid:72)(cid:86)(cid:3) (cid:48)(cid:72)(cid:68)(cid:86)(cid:88)(cid:85)(cid:72)(cid:3)(cid:44)(cid:81)(cid:87)(cid:85)(cid:76)(cid:81)(cid:86)(cid:76)(cid:70)(cid:3)(cid:37)(cid:76)(cid:68)(cid:86)(cid:11)(cid:72)(cid:17)(cid:74)(cid:17)(cid:3)(cid:83)(cid:85)(cid:82)(cid:81)(cid:82)(cid:88)(cid:81)(cid:3)(cid:85)(cid:68)(cid:81)(cid:78)(cid:76)(cid:81)(cid:74)(cid:12)(cid:3) (cid:48)(cid:72)(cid:68)(cid:86)(cid:88)(cid:85)(cid:72)(cid:3)(cid:40)(cid:91)(cid:87)(cid:85)(cid:76)(cid:81)(cid:86)(cid:76)(cid:70)(cid:3)(cid:37)(cid:76)(cid:68)(cid:86)(cid:11)(cid:72)(cid:17)(cid:74)(cid:17)(cid:3)(cid:55)(cid:51)(cid:53)(cid:3)(cid:74)(cid:68)(cid:83)(cid:12)(cid:3) (cid:51)(cid:72)(cid:85)(cid:87)(cid:88)(cid:85)(cid:69)(cid:72)(cid:71) (cid:53)(cid:68)(cid:81)(cid:71)(cid:82)(cid:80)(cid:76)(cid:93)(cid:72)(cid:71) (cid:39)(cid:72)(cid:16)(cid:69)(cid:76)(cid:68)(cid:86)(cid:72)(cid:71) (cid:56)(cid:83)(cid:86)(cid:87)(cid:85)(cid:72)(cid:68)(cid:80) (cid:39)(cid:82)(cid:90)(cid:81)(cid:86)(cid:87)(cid:85)(cid:72)(cid:68)(cid:80) (cid:37) (cid:76) (cid:68) (cid:86) (cid:3) (cid:55) (cid:85) (cid:68) (cid:81) (cid:86) (cid:73) (cid:72) (cid:85) Figure 1: Full pre-training to fine-tuning pipeline, with experimental interventions (green hexagons).",
"plications of fine-tuned LLMs, including discriminatory medical diagnoses (Zhang et al., 2020), overreliance on binary gender for coreference resolution (Cao and Daum, 2021), the re-inforcement of traditional gender roles in part-of-speech tagging (Garimella et al., 2019), toxic text generation (Gehman et al., 2020), and censorship of inclusive language and AAVE (Blodgett and O'Connor, 2017; Blodgett et al., 2018; Park et al., 2018; Sap et al., 2019).",
"Despite these risks, no research has investigated the extent to which downstream systems inherit social biases from pre-trained models.",
"1 Many stud-1 We use the term bias to refer to statistical associations 3524 ies warn that increasing intrinsic bias upstream may lead to an increased risk of downstream harms (Bolukbasi et al., 2016; Caliskan et al., 2017).",
"This hypothesis, which we call the Bias Transfer Hypothesis , holds that stereotypes and other biased associations in a pre-trained model are transferred to post-fine-tuning downstream tasks, where they can cause further, task-specific harms.",
"A weaker version of this hypothesis holds that downstream harms are at least mostly determined by the pre-trained model (Bommasani et al., 2021).",
"In the pre-training paradigm, the extent to which the bias transfer hypothesis holds will determine the most effective strategies for responsible design.",
"In the cases we study, reducing upstream bias does little to change downstream behavior.",
"Still, there is hope: instead, developers can carefully curate the fine-tuning dataset, checking for harms in context.",
"We test the bias transfer hypothesis on two classification tasks with previously demonstrated performance disparities: occupation classification of online biographies (De-Arteaga et al., 2019) and toxicity classification of Wikipedia Talks comments (Dixon et al., 2018).",
"We investigate whether reducing or exacerbating intrinsic biases encoded by RoBERTa (Liu et al., 2019) decreases or increases the severity of downstream, extrinsic harms (Fig-ure 1).",
"We find that the bias transfer hypothesis describes only part of the interplay between pretraining biases and harms after fine-tuning: Systematically manipulating upstream bias has little impact on downstream disparity, especially for the most-harmed groups.",
"With a regression analysis, we find that most variation in downstream bias can be explained by bias in the fine-tuning dataset (proxied by co-occurrence rates).",
"Altering associations in the fine-tuning dataset can sometimes change downstream behavior, but only when the model is not pre-trained.",
"Without absolving LLMs or their owners of representational harms intrinsic to pre-trained models, our results encourage practitioners and application stakeholders to focus more on dataset quality and context-specific harm identification and reduction.",
"Little prior work directly tests the bias transfer hypothesis.",
"The closest example of this phenomena is the blood diamond effect (Birhane and Prabhu, 2021), in which stereotyping and denigration in the pre-training corpora pervade subsequently generated images and language even before the fine-tuning stage (Steed and Caliskan, 2021).",
"Still, it is unclear to what extent undesirable values encoded in pre-training datasets or benchmarkssuch as Wikipedia or ImageNet induce task-specific harms after fine-tuning (Baro-cas et al., 2019).",
"Some work explores the consistency of intrinsic and extrinsic bias metrics: Goldfarb-Tarrant et al. (2021) find that intrinsic and extrinsic metrics are not reliably correlated for static embeddings like word2vec .",
"We focus instead on state-of-the-art transformer-based LLMsthe subject of intense ethical debate (Bender et al., 2021; Bommasani et al., 2021)which construct contextual, rather than static, embeddings.",
"Contextual embeddingstoken encodings that are conditional on other, nearby tokenspose an ongoing challenge for intrinsic bias measurement (May et al., 2019; Kurita et al., 2019; Guo and Caliskan, 2021) and bias mitigation (Liang et al., 2020).",
"We find that intrinsic and extrinsic metrics are correlated for the typical LLMbut that the correlation is mostly explained by biases in the fine-tuning dataset.",
"Other research tests the possibility that upstream mitigation could universally prevent downstream harm.",
"Jin et al. (2021) show that an intermediate, bias-mitigating fine-tuning step can help reduce bias in later tasks.",
"Likewise, Solaiman and Dennison (2021) propose fine-tuning on carefully curated values-targeted datasets to reduce toxic GPT-3 behavior.",
"Our results tend to corroborate these methods: we find that the fine-tuning process can to some extent overwrite the biases present in the original pre-trained model.",
"A recent post-hoc mitigation technique, on the other hand, debiases contextual embeddings before fine-tuning (Liang et al., 2020).",
"Our results imply that while this type of debiasing may help with representational harms upstream, it is less successful for reducing harms downstream.",
"To empirically evaluate the bias transfer hypothesis, we examine the relationship between upstream",
"bias and downstream bias for two tasks.",
"We track how this relationship changes under various controlled interventions on the model weights or the fine-tuning dataset.",
"For each task, we fine-tune RoBERTa 2 (Liu et al., 2019).",
"We split the fine-tuning dataset into train (80%), evaluation (10%), and test (20%) partitions.",
"To fine-tune, we attach a sequence classification head and train for 3 epochs.",
"3 3.2 Occupation Classification The goal of occupation classification is to predict someone's occupation from their online biography.",
"We fine-tune with the BIOS dataset scraped from Common Crawl (De-Arteaga et al., 2019), which includes over 400,000 online biographies belonging to 28 common occupations.",
"Since self-identified gender was not collected, we will refer instead to the pronouns used in each biography (each biography uses either he/him or she/her pronouns).",
"Following De-Arteaga et al. (2019), we use the scrubbed version of the datasetin which all the identifying pronouns have been removedto measure just the effects of proxy words (e.g. mother) and avoid overfitting on pronouns directly.",
"Downstream",
"Bias. Biographies with she/her pronouns are less frequently classified as belonging to certain traditionally male-dominated professionssuch as surgeonwhich could result in lower recruitment or callback rates for job candidates if the classifier is used by an employer.",
"The empirical true positive rate (TPR) estimates the likelihood that the classifier correctly identifies a person's occupation from their biography.",
"We follow previous work (De-Arteaga et al., 2019) in measuring downstream bias via the empirical true positive rate (TPR) gap between biographies using each set of pronouns.",
"First, define TPR y,g = P [ Y = y | G = g, Y = y ] , where g is a set of pronouns and y is an occupation.",
"Y and Y represent the true and predicted occupation, respectively.",
"Then the TPR bias (TPB) is TPB y = TPR y, she/her TPR y, he/him .",
"(1) 2 roberta-base from HuggingFace (Wolf et al., 2020).",
"3 See Appendix D for more details.",
"Epochs and other parameters were chosen to match prior work (Jin et al., 2021).",
"For example, the classifier correctly predicts surgeon for he/him biographies much more often than for she/her biographies, so the TPR ratio for the surgeon occupation is low (see Appendix A).",
"Upstream",
"Bias. We adapt Kurita et al. (2019)'s pronoun ranking test to the 28 occupations in the BIOS dataset.",
"Kurita et al. (2019) measure the encoded association of he/him and she/her pronouns by the difference in log probability scores between pronouns appearing in templates of the form {pronoun} is a(n) {occupation} .",
"We augment this approach with 5 career-related templates proposed by Bartl et al. (2020) (see Appendix A).",
"Formally, given a template sequence x y,g filled in with occupation y and pronoun g , we compute p y,g = P ( x y,g ) .",
"As a baseline, we also mask the occupation and compute the prior probability y,g = P ( x y,g ) .",
"The pronoun ranking bias (PRB) for this template is the difference in log probabilities: PRB y = log p y, she/her y, she/her \u0000 log p y, he/him y, he/him .",
"For toxicity classification, we use the WIKI dataset, which consists of just under 130,000 comments from the online forum Wikipedia Talks Pages (Dixon et al., 2018).",
"The goal of the task is to predict whether each comment is toxic.",
"Each comment has been labeled as toxic or non-toxic by a human annotator, where a toxic comment is a rude, disrespectful, or unreasonable comment that is likely to make you leave the discussion (Dixon et al., 2018).",
"Following Dixon et al. (2018), we focus on 50 terms referring to people of certain genders, races, ethnicities, sexualities, and religions.",
"Downstream (Extrinsic)",
"Bias. Mentions of certain identity groupssuch as queerare more likely to be flagged for toxic content, which could result in certain communities being systematically censored or left unprotected if an online platform uses the classifier.",
"The classifier's empirical false positive rate (FPR) estimates its likelihood to falsely flag a non-toxic comment as toxic.",
"The FPR corresponds to the risk of censoring inclusive speech or de-platforming individuals who often mention marginalized groups.",
"Following Dixon et al. (2018), we express the classifier's bias against comments or commenters harmlessly mentioning an identity term as the FPR 3526 bias (FPB).",
"Upstream",
"Bias. Following Hutchinson et al. (2020), we measure upstream bias via sentiment associations.",
"We construct a set of templates of the form {identity} {person} is [MASK] , where identities are the identity terms from Dixon et al. (2018) (e.g. gay or Muslim) and the person phrases include a person, my sibling, and other relations.",
"We predict the top-20 likely tokens for the [MASK] position (e.g., awesome or dangerous).",
"Using a pre-trained RoBERTA sentiment classifier trained on the TweetEval benchmark (Barbieri et al., 2020), we then measure the average negative sentiment score of the predicted tokens.",
"The model's bias is the magnitude of negative association with each identity term.",
"RoBERTa sometimes suggests terms which refer back to the target identity group.",
"To mitigate this effect, we drop any predicted tokens that match the 50 identity terms (e.g. Latino) from Dixon et al. (2018), but we are likely missing other confounding adjectives (e.g. Spanish).",
"We suspect this confounding is minimal: we achieve similar results with an alternative ranking-based bias metric (see Appendix C.2).",
"No pre-training.",
"To control for the effects of pre-training, we test randomly initialized versions of both models that have not been pre-trained.",
"We average over 10 trials.",
"Random perturbations.",
"We instantiate a pre-trained model and then add random noise e to every weight in the embedding matrix.",
"We try both uniform noise u Unif ( \u0000 c, c ) and Gaussian noise z N (0 , \u0000 2 ) , varying c and \u0000 2 .",
"The final noise-added matrix is clipped so that its range does not exceed that of the original matrix.",
"Bias mitigation.",
"We apply the SENTDEBIAS algorithm to de-bias embeddings at the word-level (Liang et al., 2020).",
"SENTDEBIAS estimates a bias subspace V with principal component analysis, then computes debiased word embeddings h = h \u0000 \u0000 P kj =1 h h , v j i v j by subtracting the projection of h onto V .",
"We add the multiplier \u0000 to add or remove bias to various degrees standard SENTDEBIAS uses \u0000 = 1 .",
"0 .",
"Re-balancing and scrubbing.",
"For BIOS , we re-balance the fine-tuning dataset by undersampling biographies with the prevalent pronoun in each occupation.",
"For WIKI , we randomly remove from the fine-tuning dataset percent of comments mentioning each identity term.",
"Our goal is to test the bias transfer hypothesis, which holds that upstream bias is transferred through fine-tuning to downstream models.",
"By this view, we would expect changes to the pre-trained model to also change the distribution of downstream biasbut we find that for both tasks, downstream bias is largely invariant to upstream interventions.",
"Figure 2 summarizes the similarity of biases before and after each randomized event.",
"Though randomizing the model weights signifi-cantly reduces the mean and variance of upstream bias, the distribution of downstream bias changes very little.",
"4 For example, RoBERTa exhibits the same disparities in performance after fine-tuning regardless of whether the base model was pre-trained.",
"Likewise, although the SENTDEBIAS mitigation method reduces pronoun ranking (upstream) bias as intended, we detect roughly the same downstream biases no matter the level of mitigation applied (Figure 3).",
"For example, in the BIOS task, surgeons with he/him pronouns are still 1.3 times more likely to have their biographies correctly classified than their she/her counterparts.",
"There is one notable exception to this trend: for the WIKI task, adding noise (uniform or Gaussian) to the pre-trained model's embedding matrix or not pre-training the model yields a modest reduction in median bias (Figure 2).",
"As upstream bias shifts towards zero, downstream bias also moves marginally towards zero.",
"Still, the largest biases (e.g., against the term gay) do not decrease and may even increase after randomization.",
"4 See Appendix B.2 for a full set of correlation tests.",
"Though the results in the preceding section suggest that there is no clear or consistent correspondence between changes in upstream bias and changes in downstream bias, there is still a noticeable correlation between baseline upstream and downstream bias (Pearson's = 0 . 43 , p = 0 . 022 for BIOS , = 0 . 59 , p < 10 \u0000 5 for WIKI see Appendix A).",
"There is an important third variable that helps explain this correlation: cultural artifacts ingrained in both the pre-training and fine-tuning datasets.",
"5 RoBERTa learns these artifacts through co-occurrences and other associations between words in both sets of corpora.",
"5 For example, cultural biases about which pronouns belong in which occupations are likely to pervade both the pretraining dataset (e.g., Wikipedia) and the fine-tuning dataset (internet biographies).",
"for model treatment m , occupation y , and pronoun ranking template s .",
"TPB is the TPR bias (down-stream bias) from Eq.",
"1; PRB is the pronoun ranking bias (upstream bias) from Eq.",
"2; f s and c m are dummy variables (for ordinary least squares) or fixed effects to capture heterogeneous effects between templates and models (such as variations in overall embedding quality).",
"We control for statistical dataset bias with , the prevalence of she/her biographies within each occupation y in the fine-tuning data.",
"We find that the dataset bias in the fine-tuning stage explains most of the correlation between upstream and downstream bias.",
"Under the strong bias transfer hypothesis, we would expect the coefficient on upstream bias \u0000 1 to be statistically signifi-cant and greater in magnitude than the coefficient \u0000 2 on our proxy for dataset bias.",
"But for both tasks, 3528 DJ Model Nurse Surgeon DJ Model Nurse Surgeon Downstream Bias (TPR Ratio) Upstream Bias (Pronoun Ranking) 50 25 10 1 0 1 10 25 50 0 1 2 0.25 0.00 0.25 0.50 Mitigation Multiplier B i a s ( s h e / h e r h e / h i m ) Figure 3: Log TPR bias per occupation after scaled SENTDEBIAS on the BIOS task.",
"the opposite is true: fine-tuning dataset bias has a larger effect than upstream bias.",
"Figure 4 reports the coefficient estimates for these two variables.",
"(See Appendix C.1 for all estimates, standard errors, assumptions and additional specifications.)",
"In the BIOS task, a large decrease in upstream bias corresponds to a small but statistically signifi-cant increase in downstream bias.",
"On average, a reduction of 0.3 to the log likelihood gapequivalent to the reduction in bias towards nurses after upstream mitigationcorresponds to a 0.5% increase in the TPR ratio.",
"Almost all the downstream bias in the BIOS task is explained by dataset bias instead: a 10% increase in the prevalence of she/her pronouns within an occupation corresponds to a much larger 6.5% increase in the TPR ratio.",
"In the WIKI task, upstream bias has a more noticeable effectbut the effect of dataset bias is still much larger.",
"The regression takes the same form as Eq.",
"4, where downstream bias is FPR bias (Eq. 3), upstream bias is negative sentiment, and i is the proportion of toxic mentions of identity i .",
"We additionally control for the prevalence of each identity term and the average length of toxic mentions of each identity termlonger comments are less likely to result in erroneous screening (Ap-pendix C.1).",
"As in the previous regression, dataset bias explains more of the variation in downstream bias than does upstream bias.",
"On average, a large increase in average negative sentiment against a given identity term (e.g. 0.1, one standard deviation) corresponds to only a modest 3.7% increase in FPR.",
"In comparison, only a 10% increase in the prevalence of toxic mentions of an identity corresponds to an even larger 6.3% increase in FPR.",
"We also check that intrinsic downstream bias also changes due to fine-tuning.",
"We measure intrinsic bias again after fine-tuning and regress on downstream intrinsic bias instead of downstream extrinsic bias (Eq. 4).",
"The results are consistent: after controlling for the overall increase in log likelihood, the effect of upstream intrinsic bias on downstream intrinsic bias is explained almost entirely by fine-tuning dataset bias (Appendix C.1).",
"Given the strong relation between our proxies for dataset bias and downstream bias, we test whether manipulating these proxies admits some control over downstream bias.",
"For example, were the fine-tuning dataset to include exactly as many she/her nurse biographies as he/him, would the model still exhibit biased performance on that occupation?",
"Our findings suggest not.",
"No matter the amount of re-sampling, downstream bias remains relatively stable for pre-trained RoBERTa.",
"The distributions of downstream bias with and without re-balancing are almost perfectly correlated (Pear-son's = 0 . 94 , p < 0 . 01 see Appendix B.1).",
"Though co-occurrence statistics help to explain downstream bias, they are still only proxies for dataset bias.",
"Directly altering these statistics via re-sampling the dataset does not alter the sentence-level context in which the words are used.",
"Based on this result, we also try completely removing mentions of identity terms.",
"Scrubbing mentions of identity termsin all comments or only in toxic commentsappears to reduce bias only when the model is not pre-trained and all mentions of the term are scrubbed (Figure 5).",
"For a pre-trained model trained on scrubbed data, a 10% decrease in mentions of an identity term corresponds to a 7.2% 3529 Fine tuning dataset bias Prevalance of she/her Upstream bias Likelihood gap 0.00 0.25 0.50 0.75 Coefficient All pre trained N=6020 Pre trained N=140 Noise added N=1400 Balanced N=1400 Not pre trained N=2940 Bias mitigated N=1820",
"decrease in FPR.",
"We speculate that RoBERTa relies on its high quality feature embeddings to learn proxy biases about identity terms based on the way they are used in the pre-training corpora.",
"For example, our model classifies a sequence containing only the term gay as toxic without any context.",
"If a term like gay is often used pejoratively on the web, RoBERTa is likely to infer that sentences including gay are toxic even if the term never appears in the fine-tuning dataset.",
"But when the upstream model is not pre-trained, the fine-tuned model has no such prejudices.",
"In this case, removing all mentions of identity results in a distribution of bias entirely uncorrelated with the control (Pearson's = 0 . 09 , p > 0 . 1 ).",
"Notably, though, even a small number of mentions of an identity term like gay in the fine-tuning dataset are enough for a randomly initialized model to exhibit the same biases as the pre-trained model (Figure 5).",
"Our approach comes with several limitations.",
"First, our results may not generalize to all tasks especially non-classification tasksor all kinds of bias (e.g., bias against AAVE or non-English speak-ers).",
"Also, while similar studies of bias have been successfully applied to vision transformers (Steed and Caliskan, 2021; Srinivasan and Uchino, 2021), our results may vary for substrates other than English language.",
"Second, Goldfarb-Tarrant et al. (2021) conclude that the lack of correlation between intrinsic bias indicators and downstream bias is because some embedding bias metrics are unsuitable for measuring model bias.",
"To ensure our intrinsic and extrinsic metrics measure the same construct, we chose upstream indicators that correlate with real-world occupation statistics (Caliskan et al., 2017; Kurita et al., 2019).",
"Pronoun ranking in particular may be more reliable for transformer models than 3530 gay jewish gay jewish gay jewish gay jewish gay jewish gay jewish All Toxic None 4 2 0 2 4 Downstream Bias (Log FPR ratio) M e n ti on s S c r ubb e d Pre trained Not pre trained Figure 5: FPR gap (downstream bias) after scrubbing toxic mentions of identity terms from the WIKI fine-tuning dataset.",
"other metrics (Silva et al., 2021).",
"Still, downstream, annotator prejudices and other label biases could skew our extrinsic bias metrics as well (Davani et al., 2021).",
"Third, there may be other explanations for the relationship between upstream and downstream bias: for example, decreasing the magnitude of upstream bias often requires a reduction in model accuracy, though we attempt to control for between-model variation with fixed effects and other controls.",
"Alternate regression specifications included in Appendix C.1 show how our results change with the inclusion of controls.",
"Our results offer several points of guidance to organizations training and distributing LLMs and the",
"practitioners applying them: Attenuating downstream bias via upstream interventionsincluding embedding-space bias mitigationis mostly futile in the cases we study and may be fruitless in similar settings.",
"For a typical pre-trained model trained for the tasks we study, the fine-tuning dataset plays a much larger role than upstream bias in determining downstream harms.",
"Still, simply modulating co-occurrence statistics (e.g., by scrubbing harmful mentions of certain identities) is not sufficient.",
"Task framing, design, and data quality are also very important for preventing harm.",
"If a model is pre-trained, it may be more resistant to scrubbing, re-balancing, and other simple modulations of the fine-tuning dataset.",
"But, our results also corroborate a nascent, somewhat optimistic view of pre-training bias.",
"LLMs' intrinsic biases are harmful even before downstream applications, and correcting those biases is not guaranteed to prevent downstream harms.",
"Increased emphasis on the role of fine-tuning dataset bias offers an opportunity for practitioners to shift to more careful, quality-focused and context-aware approach to NLP applications (Zhu et al., 2018; Scheuerman et al., 2021).",
"This study navigates several difficult ethical issues in NLP ethics research.",
"First, unlike prior work, we do not claim to measure gender biasesonly biases related to someone's choice of personal pronouns.",
"However, our dataset is limited to the English he/him and she/her, so our results do not capture biases against other pronouns.",
"Our study is also very Western-centric: we study only English models/datasets and test for biases considered normatively pressing in Western research.",
"Second, our training data (including pre-training datasets), was almost entirely scraped from internet users without compensation or explicit consent.",
"To avoid exploiting these users further, we only used already-scraped data and replicated already-existing classifiers, and we do not release these 3531 data or classifiers publicly.",
"Finally, the models we trained exhibit toxic, offensive behavior.",
"These models and datasets are intended only for studying bias and simulating harms and, as our results show, should not be deployed or applied to any other data except for this purpose.",
"Thanks to Maria De-Arteaga and Benedikt Boeck-ing for assistance with BIOS data collection.",
"Thanks also to the reviewers for their helpful comments and feedback."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"other"
] |
[
"Transformer based re-ranking models can achieve high search relevance through context-aware soft matching of query tokens with document tokens.",
"To alleviate runtime complexity of such inference, previous work has adopted a late interaction architecture with pre-computed contextual token representations at the cost of a large online storage.",
"This paper proposes contextual quantization of token embeddings by decoupling document-specific and document-independent ranking contributions during codebook-based compression.",
"This allows effective online decompression and embedding composition for better search relevance.",
"This paper presents an evaluation of the above compact token representation model in terms of relevance and space efficiency.",
"Modern search engines for text documents typically employ multi-stage ranking.",
"The first retrieval stage extracts top candidate documents matching a query from a large search index with a simple ranking method.",
"The second stage or a later stage uses a more complex machine learning algorithm to re-rank top results thoroughly.",
"Recently neural re-ranking techniques from transformer-based architectures have achieved impressive relevance scores for top k document re-ranking, such as MacAvaney et al. (2019).",
"However, using a transformer-based model to rank or re-rank is extremely expensive during the online inference (Lin et al., 2020).",
"Various efforts have been made to reduce its computational complexity (e.g. Gao et al. (2020)).",
"A noticeable success in time efficiency improvement is accomplished in ColBERT (Khattab and Zaharia, 2020) which conducts late interaction of query terms and document terms during runtime inference so that token embeddings for documents can be pre-computed.",
"Using ColBERT re-ranking after a sparse retrieval model called DeepImpact (Mallia et al., 2021) can further enhance relevance.",
"Similarly BECR (Yang et al., 2022), CEDR-KNRM (MacAvaney et al., 2019), and PreTTR (MacAvaney et al., 2020) have also adopted the late interaction architecture in their efficient transformer based re-ranking schemes.",
"While the above work delivers good search relevance with late interaction, their improvement in time efficiency has come at the cost of a large storage space in hosting token-based precomputed document embeddings.",
"For example, for the MS MARCO document corpus, the footprint of embedding vectors in ColBERT takes up to 1.6TB and hosting them in a disk incurs substantial time cost when many embeddings are fetched for re-ranking.",
"It is highly desirable to reduce embedding footprints and host them in memory as much as possible for fast and high-throughput access and for I/O latency and contention avoidance, especially when an online re-ranking server is required to efficiently process many queries simultaneously.",
"The contribution of this paper is to propose a compact representation for contextual token embeddings of documents called Contextual Quantization (CQ).",
"Specifically, we adopt codebook-based quantization to compress embeddings while explicitly decoupling the ranking contributions of document specific and document-independent information in contextual embeddings.",
"These ranking contributions are recovered with weighted composition after quantization decoding during online inference.",
"Our CQ scheme includes a neural network model that jointly learns context-aware decomposition and quantization with an objective to preserve correct ranking scores and order margins.",
"Our evaluation shows that CQ can effectively reduce the storage space of contextual representation by about 14 times for the tested datasets with insignificant online embedding recovery overhead and a small relevance degradation for re-ranking passages or documents.",
"The problem of neural text document re-ranking is defined as follows.",
"Given a query with multiple terms and a set of candidate documents, rank these documents mainly based on their embeddings and query-document similarity.",
"With a BERT-based re-ranking algorithm, typically a term is represented by a token, and thus in this paper, word term is used interchangeably with token.",
"This paper is focused on minimizing the space cost of token embeddings for fast online re-ranking inference.",
"Deep contextual re-ranking models .",
"Neural re-ranking has pursued representation-based or interaction-based algorithms (Guo et al., 2016; Dai et al., 2018; Xiong et al., 2017).",
"Embedding interaction based on query and document terms shows an advantage in these studies.",
"The transformer architecture based on BERT (Devlin et al., 2019) has been adopted to re-ranking tasks by using BERT's [CLS] token representation to summarize query and document interactions (Nogueira and Cho, 2019; Yang et al., 2019; Dai and Callan, 2019; Nogueira et al., 2019a; Li et al., 2020).",
"Recently BERT is integrated in late term interaction (MacA-vaney et al., 2019; Hofsttter et al., 2020c,b; Mitra et al., 2021) which delivers strong relevance scores for re-ranking.",
"Efficiency optimization for transformer-based re-ranking.",
"Several approaches have been proposed to reduce the time complexity of transformer-based ranking.",
"For example, architecture simplification (Hofsttter et al., 2020c; Mitra et al., 2021), late interaction with precomputed token embeddings (MacAvaney et al., 2020), early exiting (Xin et al., 2020), and model distillation (Gao et al., 2020; Hofsttter et al., 2020a; Chen et al., 2020b).",
"We will focus on the compression of token representation following the late-interaction work of ColBERT (Khattab and Zaharia, 2020) and BECR (Yang et al., 2022) as they deliver fairly competitive relevance scores for several well-known ad-hoc TREC datasets.",
"These late-interaction approaches follow a dual-encoder design that separately encodes the two sets of texts, studied in various NLP tasks (Zhan et al., 2020; Chen et al., 2020a; Reimers and Gurevych, 2019; Karpukhin et al., 2020; Zhang et al., 2020).",
"Several previous re-ranking model attempted to reduce the space need for contextual token embeddings.",
"ColBERT has considered an option of using a smaller dimension per vector and limiting 2 bytes per number as a scalar quantization.",
"BECR (Yang et al., 2022) uses LSH for hashing-based contextual embedding compression (Ji et al., 2019).",
"PreTTR (MacAvaney et al., 2020) uses a single layer encoder model to reduce the dimensionality of each token embedding.",
"Following PreTTR, a contemporaneous work called SDR in Cohen et al. (2021) considers an autoencoder to reduce the dimension of representations, followed by an off-the-shelf scalar quantizer.",
"For the autoencoder, it combines static BERT embeddings with contextual embeddings.",
"Inspired by this study, our work decomposes contextual embeddings to decouple ranking contributions during vector quantization.",
"Unlike SDR, CQ jointly learns the codebooks and decomposition for the document-independent and dependent components guided by a ranking loss.",
"Vector quantization.",
"Vector quantization with codebooks was developed for data compression to assist approximate nearest neighbor search, for example, product quantizer (PQ) from Jgou et al. (2011), optimized product quantizer (OPQ) from Ge et al. (2013); residual additive quantizer(RQ) from Ai et al. (2015) and local search additive quantizer (LSQ) from Martinez et al. (2018).",
"Recently such a technique has been used for compressing static word embeddings (Shu and Nakayama, 2018) and document representation vectors in a dense retrieval scheme called JPQ (Zhan et al., 2021a).",
"None of the previous work has worked on quantization of contextual token vectors for the re-ranking task, and that is the focus of this paper.",
"Applying vector quantization naively to token embedding compression does not ensure the ranking effectiveness because a quantizer-based compression is not lossless, and critical ranking signals could be lost during data transformation.",
"To achieve a high compression ratio while maintaining the competitiveness in relevance, we consider the ranking contribution of a contextual token embedding for soft matching containing two components: 1) document specific component derived from the self attention among context in a document, 2) document-independent and corpus-specific component generated by the transformer model.",
"Since for a reasonable sized document set, the second component is invariant to documents, its storage space is negligible compared to the first component.",
"Thus the second part does not need compres-696 Figure 1: Offline processing and online ranking with contextual quantization sion.",
"We focus on compressing the first component using codebooks.",
"This decomposition strategy can reduce the relevance loss due to compression approximation, which allows a more aggressive compression ratio.",
"Our integrated vector quantizer with contextual decomposition contains a ranking-oriented scheme with an encoder and decoder network for jointly learning codebooks and composition weights.",
"Thus, the online composition of decompressed document-dependent information with document-independent information can retain a good relevance.",
"A vector quantizer consists of two steps as discussed in Shu and Nakayama (2018).",
"In the compression step, it encodes a real-valued vector (such as a token embedding vector in our case) into a short code using a neural encoder.",
"The short code is a list of reference indices to the codewords in codebooks.",
"During the decompression step, a neural decoder is employed to reconstruct the original vector from the code and codebooks.",
"The quantizer learns a set of M codebooks {C 1 , C 2 , , CM } and each codebook contains K codewords ( C m = { c m 1 , c m 2 , , c mK } ) of dimension h .",
"Then for any D-dimensional real valued vector x RD , the encoder compresses x into an M dimensional code vector s .",
"Each entry of code s is an integer j , denoting the j -th codeword in codebook C m .",
"After locating all M codewords as [ c 1 , , c M ] , the original vector can be recovered with two options.",
"For a product quantizer, the dimension of codeword is h = D/M , and the decompressed vector is x = c 1 c 2 c M where symbol denotes vector concatenation.",
"For an additive quantizerthe decompressed vector is x = (cid:80) Mj =1 c j .",
"Codebook-based contextual quantization.",
"Now we describe how codebook-based compression is used in our contextual quantization.",
"Given a token t , we consider its contextual embedding vector E ( t ) as a weighted combination of two components: E ( t ) and E ( t ) .",
"E ( t ) captures the document-dependent component, and E ( t ) captures the document-independent component discussed earlier.",
"For a transformer model such as BERT, E ( t ) is the token output from the last encoder layer, and we obtain E ( t ) by feeding [CLS] t [SEP] into BERT model and taking last layer's output for t .",
"During offline data compression, we do not explicitly derive E ( t ) as we only need to store the compressed format of such a value, represented as a code.",
"Let E ( t ) be the recovered vector with codebook-based decompression, as a close approximation of E ( t ) .",
"Let E ( t ) be the final composed embedding used for online ranking with late-interaction.",
"Then E ( t ) = g ( E ( t ) , E ( t )) where g ( . ) is a simple feed-forward network to combine two ranking contribution components.",
"The encoder/decoder neural architecture for contextual quantization.",
"We denote a token in a document d as t .",
"The input to the quantization encoder is E ( t ) E ( t ) .",
"The output of the quantization encoder is the code vector s of dimension M .",
"Let code s be ( s 1 , , s m , , s M ) and each entry s m will be computed below in Eq.",
"4.",
"This computation uses the hidden layer h defined as: h = tanh( w 0 ( E ( t ) E ( t )) + b 0 ) .",
"The dimension of h is fixed as 1 MK/ 2 .",
"The hidden layer a is computed by a feed forward layer with a softplus activation (Eq. 2) with an output dimension of M K after reshaping, Let a m be 697 the m -th row of this output.",
"To derive a discrete code entry for s m , following the previous work (Shu and Nakayama, 2018), we apply the Gumbel-softmax trick (Maddison et al., 2017; Jang et al., 2017) as shown in Eq.",
"3, where the temperature is fixed at 1 and k is a noise term sampled from the Gumbel distribution log( log( Uniform [0 , 1])) .",
"Here p m is a vector with dimension K .",
"( p m ) j is the j -th entry of the vector.",
"Similarly, ( a m ) j is the j -th entry of a m .",
"( p m ) j = exp(log(( a m ) j + j ) / ) (cid:80) Kj =1 exp(log(( a m ) j + j ) / ) .",
"(3) s m = arg max 1 j K ( p m ) j .",
"(4) In the decompression stage, the input to the quantization decoder is the code s , and this decoder accesses M codebooks {C 1 , C 2 , , CM } as M parameter matrices of size K h which will be learned.",
"For each m -entry of code s , s m value is the index of row vector in C m to be used as its corresponding codeword.",
"Once all codewords c 1 to c M are fetched, we recover the approximate vector E ( t ) as (cid:80) Mj =1 c j for additive quantization or c 1 c 2 c M for product quantization.",
"Next, we perform a composition with a one-layer or two-layer feed-forward network to derive the contextual embedding as E ( t ) = g ( E ( t , E ( t )) .",
"With one feed-forward layer, E ( t ) = tanh( w 2 ( E ( t ) E ( t )) + b 2 ) .",
"The above encoder and decoder for quantization have parameter w 0 , b 0 , w 1 , b 1 , w 2 , b 2 , and {C 1 , C 2 , , CM } .",
"These parameters are learned through training.",
"Once these parameters are learned, the quantization model is fixed and the code for any new token embedding can be computed using Eq.",
"4 in offline processing.",
"Figure 1 depicts the flow of offline learning and the online inference with context quantization.",
"Given a query with l tokens { q 1 , q 2 ,",
"..q l } , and a documents with n tokens { t 1 , t 2 ,",
"..t n } , The query token embeddings encoded with a transformer based model (e.g. BERT) are denoted as E ( q 1 ) , , E ( q l ) .",
"The embeddings for document tokens through codebook base decompression are E ( t 1 ) , E ( t n ) .",
"The online inference then uses the interaction of query tokens and document tokens defined in a re-ranking algorithm such as ColBERT to derive a ranking score (denoted as f q , d ).",
"The purpose of injecting E ( t ) in Eq.",
"1 is to decouple the document-independent ranking contribution from contextual embedding E ( t ) so that this quantization encoder model will be learned to implicitly extract and compress the document-dependent ranking contribution.",
"Table 1 gives an example with several token codes produced by CQ for different sentences representing different contexts, and illustrates context awareness of CQ's encoding with a small codebook dimension (M=K=4).",
"For example, 1 in code [4, 4, 3, 1] means the 4-th dimension uses the first codeword of the corresponding codebook.",
"Training of CQ uses the MS MARCO passage dataset discussed in Section 4 and these sentences are not from this dataset.",
"Our observation from this example is described as follows.",
"First, in general token codes in the same sentences are closer to each other, and token codes in different sentences, even with the same word bank, are far away with a visible Hamming distance.",
"Thus CQ coding allows a context-based separation among tokens residing in different contexts.",
"Second, by looking at boldfaced tokens at each sentence, their distance in terms of contextual semantics and proximity is reflected to some degree in their CQ codes.",
"For instance, a small Hamming code distance of three words ac-tor, poet and writer resembles their semantic and positional closeness.",
"A larger code distance of two banks in the 3 rd and 4 th sentences relates with their word sense and positional difference.",
"Training loss for parameter learning .",
"We have explored three training loss functions.",
"The first option is to follow a general quantizer (Shu and Nakayama, 2018) using the mean squared error (MSE) between the reconstructed and original embedding vectors of all token t i .",
"Namely LMSE = (cid:80) E ( t i ) E ( t i ) 22 .",
"The second option is the pairwise cross-entropy loss based on rank orders.",
"After warming up with the MSE loss, we further train the quantizer using L PairwiseCE = (cid:80) ( (cid:80) j = d + , d P j log P j ) where d + and d are positive and negative documents for query q .",
"We adopt the third option which borrows the idea of MarginMSE loss from Hofsttter et al. (2020a) proposed for BERT-based ranking model distillation.",
"In MarginMSE, a student model is trained to 698 Context Token codes William Shakespeare was widely regarded as the world's greatest writer actor poet actor , poet , writer and dramatist.",
"mimic the teacher model in terms of both ranking scores as well as the document relative order margins.",
"In our case, the teacher model is the ranking model without quantization and the student model is the ranking model with quantization.",
"It is defined as L MarginMSE = (cid:80) (( f q , d + f q , d ) ( f q , d + f q , d )) 2 , where f q , d and f q , d denote the ranking score with and without quantization, respectively.",
"The above loss function distills the ColBERT ranking characteristics into the CQ model for better preservation of ranking effectiveness.",
"Online space for document embeddings.",
"The storage cost of the precomputed document embeddings in a late-interaction re-ranking algorithm is dominating its online space need.",
"To recover token-based document embeddings, an online server with contextual quantization stores three parts: codebooks, the short codes of tokens in each document, and the document-independent embeddings.",
"Given a document collection of Z documents of length n tokens on average, let V be the number of the distinct tokens.",
"For M codebooks with M K codewords of dimension h , we store each entry of a codeword with a 4-byte floating point number.",
"Thus the space cost of codebooks is M K h 4 bytes, and the space for document-independent embeddings of dimension D is V D 4 bytes.",
"When M = 16 , K = 256 , D = 128 as in our experiments, if we use the product quantization with the hidden dimension h = 8 , the codebook size is 131 MB.",
"In the WordPiece English token set for BERT, V 32 K and the space for document-independent embeddings cost about 16.4 MB.",
"Thus the space cost of the above two parts is insignificant.",
"The online space cost of token-based document embeddings is Z n ( M log 2 K 8 + 2) bytes.",
"Here each contextual token embedding of length D is encoded into a code of length M and the space of each code costs log 2 K bits.",
"For each document, we also need to store the IDs of its tokens in order to access document-independent embeddings.",
"We use 2 bytes per token ID in our evaluation because the BERT dictionary based on WordPiece (Wu et al., 2016) tokenizer has about 32,000 tokens.",
"In comparison, the space for document embeddings in ColBERT with 2 bytes per number costs Z D n 2 bytes.",
"Then the space ratio of ColBERT without CQ and with CQ is about 2 D 8 M log 2 K +2 8 , which is about 14:1 when D = 128 , M = 16 and K = 256 .",
"BECR uses 5 layers of the refinement outcome with the BERT encoder for each token and stores each layer of the embedding with a 256 bit LSH signature.",
"Thus the space cost ratio of BECR over ColBERT-CQ is approximately 5 256 M log 2 K +2 8 , which is about 9:1 when M = 16 and K = 256 .",
"We can adjust the parameters of each of ColBERT, BECR, and ColBERT-CQ for a smaller space with a degraded relevance, and their space ratio to CQ remains large, which will be discussed in Section 4.",
"Time cost for online decompression and composition.",
"Let k be the number of documents to re-rank.",
"The cost of decompression with the short code of a token using the cookbooks is O ( M h ) for a product quantizer and O ( M D ) for an additive quantizer.",
"Notice M h = D .",
"For a one-layer feed-forward network as a composition to recover the final embedding, the total time cost for decompression and composition is O ( k n D 2 ) with a product quantizer, and O ( k n ( M D + D 2 )) with an additive quantizer.",
"When using two hidden layers with D dimensions in the first layer output, there is some extra time cost but the order of time complexity remains unchanged.",
"Noted that because of using feed-forward layers in final recovery, our contextual quantizer cannot take advantage of an efficiency optimization called 699 asymmetric distance computation in Jgou et al. (2011).",
"Since embedding recovery is only applied to top k documents after the first-stage retrieval, the time efficiency for re-ranking is still reasonable without such an optimization.",
"Datasets and metrics.",
"The well-known MS MARCO passage and document ranking datasets are used.",
"As summarized the in Table 2, our evaluation uses the MS MARCO document and passage collections for document and passage ranking (Craswell et al., 2020; Campos et al., 2016).",
"The original document and passage ranking tasks provide 367,013 and 502,940 training queries respectively, with about one judgment label per query.",
"The development query sets are used for relevance evaluation.",
"The TREC Deep Learning (DL) 2019 and 2020 tracks provide 200 test queries with many judgment labels per query for each task.",
"Following the official leader-board standard, for the development sets, we report mean reciprocal rank (MRR@10, MRR@100) for relevance instead of using normalized discounted cumulative gain (NDCG) (Jrvelin and Keklinen, 2002) because such a set has about one judgment label per query, which is too sparse to use NDCG.",
"For TREC DL test sets which have many judgement lables per query, we report the commonly used NDCG@10 score.",
"We also measure the dominating space need of the embeddings in bytes and re-ranking time latency in milliseconds.",
"To evaluate latency, we uses an Amazon AWS g4dn instance with Intel Cascade Lake CPUs and an NVIDIA T4 GPU.",
"In all tables below that compare relevance, we perform paired t-test on 95% confidence levels.",
"In Tables 3, 4, and 5, we mark the results with ' if the compression method result in statistically significant degradation from the ColBERT baseline.",
"In Table 6, ' is marked for numbers with statistically significant degradation from default setting in the first row.",
"Choices of first-stage retrieval models.",
"To retrieve top 1,000 results before re-ranking, we consider the standard fast BM25 method (Robertson and Zaragoza, 2009).",
"We have also considered sparse and dense retrievers that outperform BM25.",
"We have used uniCOIL (Lin and Ma, 2021; Gao et al., 2021) as an alternative sparse retriever in Table 3 because it achieves a similar level of relevance as end-to-end ColBERT with a dense retriever, and that of other learned sparse representations (Mallia et al., 2021; Formal et al., 2021b,a).",
"ColBERT+uniCOIL has 0.369 MRR while end-to-end ColBERT has 0.360 MRR on MSMARCO Dev set.",
"Moreover, retrieval with a sparse representation such as uniCOIL and BM25 normally uses much less computing resources than a dense retriever.",
"Relevance numbers reported in some of the previous work on dense retrieval are derived from the exact search as an upper bound of accuracy.",
"When non-exact retrieval techniques such as approximate nearest neighbor or maximum inner product search are used on a more affordable platform for large datasets, there is a visible loss of relevance (Lewis et al., 2021).",
"It should be emphasized that the first stage model can be done by either a sparse or a dense retrieval, and this does not affect the applicability of CQ for the second stage as the focus of this paper.",
"Re-ranking models and quantizers compared.",
"We demonstrate the use of CQ for token compression in ColBERT in this paper.",
"We compare its relevance with ColBERT, BECR and PreTTR.",
"We chose to apply CQ to ColBERT because assuming embeddings are in memory, ColBERT is one of the fastest recent online re-ranking algorithms with strong relevance scores and CQ addresses its embedding storage weakness.",
"Other re-ranking models compared include: BERT-base (Devlin et al., 2019), a cross encoder re-ranker, which takes a query and a document at run time and uses the last layers output from the BERT [CLS] token to generate a ranking score; TILDEv2 (Zhuang and Zuccon, 2021), which expands each document and additively aggregates precomputed neural scores.",
"We also evaluate the use of unsupervised quantization methods discussed in Section 2 for ColBERT, including two product quantizers (PQ and OPQ), and two additive quantizers (RQ and LSQ).",
"Appendix A has additional details on the retrievers considered, re-ranker implementation, training, and relevance numbers cited.",
"Table 3 and Table 4 show the ranking relevance in NDCG and MRR of the different methods and compare against the use of CQ with ColBERT (marked as ColBERT-CQ).",
"We either report our experiment results or cite the relevance numbers from other papers with a mark for such a model.",
"For quantization approaches, we adopt M=16, K=256, i.e. compression ratio 14:1 compared to ColBERT.",
"For the passage task, ColBERT outperforms other re-rankers in relevance for the tested cases.",
"ColBERT-CQ after BM25 or uniCOIL retrieval only has a small relevance degradation with around 1% or less, while only requiring 3% of the storage of ColBERT.",
"The relevance of the ColBERT-CQ+uniCOIL combination is also competitive to the one reported in Mallia et al. (2021) for the Col-BERT+DeepImpact combination which has MRR 0.362 for the Dev query set, NDCG@10 0.722 for TREC DL 2019 and 0.691 for TREC DL 2020.",
"For the document re-ranking task, Table 4 similarly confirms the effectiveness of ColBERT-CQ.",
"ColBERT-CQ and ColBERT after BM25 retrieval also perform well in general compared to the relevance results of the other baselines.",
"From both Table 3 and Table 4, we observe that in general, CQ significantly outperforms the other quantization approaches (PQ, OPQ, RQ, and LSQ).",
"As an example, we further explain this by plotting the ranking score of ColBERT with and without a Model Specs.",
"quantizer in Figure",
"2(a).",
"Compared to OPQ, CQ trained with two loss functions generates ranking scores much closer to the original ColBERT ranking score, and this is also reflected in Kendall's correlation coefficients of top 1,000 re-ranked results between a quantized ColBERT and the original ColBERT (Figure",
"2(b)).",
"There are two reasons that CQ outperforms the other quantizers: 1) The previous quantizers do not perform contextual decomposition to isolate intrinsic context-independent information in embeddings, and thus their approximation yields more relevance loss; 2) Their training loss function is not tailored to the re-ranking task.",
"passage corpora, and compares CQ with other approaches.",
"Each MS MARCO document is divided into overlapped passage segments of size up to 400 tokens, and there are 60 tokens overlapped between two consecutive passage segments, following the ColBERT setup.",
"As a result, the number of WordPiece tokens per document changes from 1460 to about 2031 due to the addition of overlapping contextual tokens.",
"To demonstrate the tradeoff, we also list their estimated time latency and relevance in passage re-ranking as a reference and notice that more relevance comparison results are in Tables 3 and 4.",
"The latency is the total time for embedding decom-pression/recovery and re-ranking.",
"For PreTTR and ColBERT, we assume that their passage embedding data cannot fit in memory given their large data sizes.",
"The disk I/O latency number is based on their passage embedding size and our test on a Samsung 870 QVO solid-state disk drive to fetch 1,000 passage embeddings randomly.",
"Their I/O latency takes 110ms or 182ms with single-thread I/O and with no I/O contention, and their disk access can incur much more time when multiple queries are processed in parallel in a server dealing with many clients.",
"For example, fetching 1,000 passage embeddings for each of ColBERT and PreTTR takes about 1,001ms and 3,870ms respectively when the server is handling 16 and 64 queries simultaneously with multiple threads.",
"in the 4-th column of Table 5 excludes the first-stage retrieval time.",
"The default ColBERT uses embedding dimension 128 and 2 byte floating numbers.",
"ColBERT-small denotes an optional configuration suggested from the ColBERT paper using 24 embedding dimensions and 2-byte floating numbers with a degraded relevance performance.",
"As shown in Table 5, the embedding footprint of ColBERT CQ uses about 112GB and 10.2GB, respectively for document and passage re-ranking tasks.",
"By looking at the latency difference of ColBERT with and without CQ, the time overhead of CQ for decompression and embedding recovery takes 1ms per query, which is insignificant.",
"Compared with another quantizer ColBERT-OPQ, ColBERT-CQ can achieve the same level of space saving with K = 256 while having a substantial relevance improvement.",
"ColBERT-CQ with K = 4 achieves the same level of relevance as ColBERT-OPQ while yielding a storage reduction of 67% and a latency reduction of about 70%.",
"Comparing ColBERT-CQ with no contextual decomposition, under the same space cost, ColBERT-CQ's relevance is 4% higher.",
"CQ with K = 16 achieves the same level relevance as ColBERT-CQ-undecomposed with K = 256 , while the storage of CQ reduces by 44%.",
"Comparing with ColBERT-small which adopts more aggressive space reduction, ColBERT-CQ with K = 16 would be competitive in relevance while its space is about 4x smaller.",
"Comparing with other non-ColBERT baselines (BECR, PreTTR, and TILDEv2), ColBERT-CQ strikes a good balance across relevance, space and latency.",
"For the fast CPU based model (BECR, TILDEv2), our model achieves better relevance with either lower or comparable space usage.",
"For BECR, its embedding footprint with 89.9GB may fit in memory for MS MARCO passages, it becomes very expensive to configure a machine with much more memory for BECR's MS MARCO document embeddings with about 791GB.",
"Table 6 shows the relevance scores for the TREC deep learning passage ranking task with different design options for CQ.",
"As an alternative setting, the codebooks in this table use M=16 and K=32 with compression ratio 21:1 compared to ColBERT.",
"Row 1 is the default design configuration for CQ with product operators and 1 composition layer, 702 TREC19 TREC20 CQ, Product, 1 layer, MarginMSE 0.687 0.713 Different model configurations No decomposition.",
"Different architecture or quantization options.",
"Rows 2 and 3 of Table 6 denote CQ using product or additive operators without decomposing each embedding into two components, and there is about 4% degradation without such decomposition.",
"Row 4 changes CQ using the raw static embeddings of tokens from BERT instead of the upper layer outcome of BERT encoder and there is an up to 4.7% degradation.",
"Notice such a strategy is used in SDR.",
"From Row 5 to Row 7, we change CQ to use additive operators or use a two-layer composition.",
"The performance of product or additive operators is in a similar level while the benefit of using two layers is relatively small.",
"Different training loss functions for CQ.",
"Last two rows of Table 6 use the MSE and PairwiseCE loss functions, respectively.",
"There is an about 1.2% improvement using MarginMSE.",
"Figure 2 gives an explanation why MarginMSE is more effective.",
"While CQ trained with MSE and MarginMSE generates ranking scores close to the original ranking scores in Figure",
"2(a), the distribution of Kendall's correlation coefficients of 1,000 passages in Figure",
"2(b) shows that the passage rank order derived by CQ with the MarginMSE loss has a better correlation with that by ColBERT.",
"Our evaluation shows the effectiveness of CQ used for ColBERT in compressing the space of token embeddings with about 14:1 ratio while incurring a small relevance degradation in MS MARCO passage and document re-ranking tasks.",
"The quantized token-based document embeddings for the tested cases can be hosted in memory for fast and high-throughput access.",
"This is accomplished by a neural network that decomposes ranking contributions of contextual embeddings, and jointly trains context-aware decomposition and quantization with a loss function preserving ranking accuracy.",
"The online time cost to decompress and recover embeddings is insignificant with 1ms for the tested cases.",
"The CQ implementation is available at https://github.com/yingrui-yang/ContextualQuantizer.",
"Our CQ framework is also applicable to the contemporaneous work ColBERTv2 (Santhanam et al., 2021).",
"Using uniCOIL scores for the first-stage sparse retrieval and ColBERTv2+CQ (M=16, K=256) for top 1,000 passage reranking, we achieve 0.387 MRR@10 on the MSMARCO passage Dev set, 0.746 NDCG@10 on TREC DL19, and 0.726 NDCG@10 on DL20 with about 10.2GB embedding space footprint.",
"Notice that ColBERTv2 achieves a higher MRR@10 number 0.397 for the passage Dev set when used as a standalone retriever (Santhanam et al., 2021) and dense retrieval with such a multi-vector representation is likely to be much more expensive than retrieval with a sparse representation on a large dataset.",
"The previous work in dense retrieval has often employed faster but approximate search, but that comes with a visible loss of relevance (Lewis et al., 2021).",
"Thus the above relevance number using ColBERTv2+CQ for re-ranking with uniCOIL sparse retrieval is fairly strong, achievable with a reasonable latency and limited computing resource.",
"Its embedding space size is 2.8x smaller than the 29GB space cost in the standalone ColBERTv2 (Santhanam et al., 2021) for MS MARCO passages.",
"Our future work is to investigate the above issue further and study the use of CQ in the other late-interaction re-ranking methods.",
"Acknowledgments .",
"We thank Cindy Zhao, Ji-ahua Wang, and anonymous referees for their valuable comments and/or help.",
"This work is supported in part by NSF IIS-2040146 and by a Google faculty research award.",
"It has used the Extreme Science and Engineering Discovery Environment supported by NSF ACI-1548562.",
"Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"result",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Emotion detection in dialogues is challenging as it often requires the identification of thematic topics underlying a conversation, the relevant commonsense knowledge, and the intricate transition patterns between the affective states.",
"In this paper, we propose a Topic-Driven Knowledge-Aware Transformer to handle the challenges above.",
"We firstly design a topic-augmented language model (LM) with an additional layer specialized for topic detection.",
"The topic-augmented LM is then combined with commonsense statements derived from a knowledge base based on the dialogue contextual information.",
"Finally, a transformer-based encoder-decoder architecture fuses the topical and commonsense information, and performs the emotion label sequence prediction.",
"The model has been experimented on four datasets in dialogue emotion detection, demonstrating its superiority empirically over the existing state-of-the-art approaches.",
"Quantitative and qualitative results show that the model can discover topics which help in distinguishing emotion categories.",
"The abundance in dialogues extracted from online conversations and TV series provides unprecedented opportunity to train models for automatic emotion detection, which are important for the development of empathetic conversational agents or chat bots for psychotherapy (Hsu and Ku, 2018; Jiao et al., 2019; Zhang et al., 2019; Cao et al., 2019).",
"However, it is challenging to capture the contextual semantics of personal experience described in one's utterance.",
"For example, the emotion of the sentence I just passed the exam can be either happy or sad depending on the expectation of the subject.",
"There are strands of works utilizing the dialogue context to enhance the utterance representation (Jiao et al., 2019; Zhang et al., 2019;",
"Majumder et al., 2019), where influences from historical utterances were handled by recurrent units, and attention signals were further introduced to intensify the positional order of the utterances.",
"Despite the progress made by the aforementioned methods, detecting emotions in dialogues is however still a challenging task due to the way emotions are expressed and how the meanings of utterances vary based on the particular topic discussed, as well as the implicit knowledge shared between participants.",
"Figure 1 gives an example of how topics and background knowledge could impact the mood of interlocutors.",
"Normally, dialogues around specific topics carry certain language patterns (Serban et al., 2017), affecting not only the utterance's meaning, but also the particular emotions conveyed by specific expressions.",
"Existing dialogue emotion detection methods did not put emphasis on modelling these holistic properties of dialogues (i.e., conversational topics and tones).",
"Consequently, they were fundamentally limited in capturing the affective states of interlocutors related to the particular themes discussed.",
"Besides, emotion and topic detection heavily relies on leveraging underlying commonsense knowledge shared between interlocutors.",
"Although there have been attempts in incorporating it, such as the COSMIC (Ghosal et al., 2020), existing approaches do not perform fine-grained extraction of relevant information based on both the topics and the emotions involved.",
"Recently, the Transformer architecture (Vaswani et al., 2017) has empowered language models to transfer large quantities of data to low-resource domains, making it viable to discover topics in conversational texts.",
"In this paper, we propose to add an extra layer to the pre-trained language model to model the latent topics, which is learned by fine-tuning on dialogue datasets to alleviate the data sparsity problem.",
"Inspired by the success of Transformers, we use the Transformer Encoder-Decoder structure to perform the Seq2Seq prediction in which an emotion label sequence is predicted given an utterance sequence (i.e., each utterance is assigned with an emotion label).",
"We posit that the dialogue emotion of the current utterance depends on the historical dialogue context and the predicted emotion label sequence for the past utterances.",
"We leverage the attention mechanism and the gating mechanism to incorporate commonsense knowledge retrieved by different approaches.",
"Code and trained models are released to facilitate further research 1 .",
"To sum up, our contributions are: We are the first to propose a topic-driven approach for dialogue emotion detection.",
"We propose to alleviate the low-resource setting by topic-driven fine-tuning using pre-trained language models.",
"We utilize a pointer network and an additive attention to integrate commonsense knowledge from multiple sources and dimensions.",
"We develop a Transformer Encoder-Decoder structure as a replacement of the commonly-used recurrent attention neural networks for dialogue emotion detection.",
"Dialogue Emotion Detection Majumder et al. (2019) recognized the importance of dialogue context in dialogue emotion detection.",
"They used a Gated Recurrent Unit (GRU) to capture the global context which is updated by the speaker ad-hoc GRUs.",
"At the same time, Jiao et al. (2019) presented a hierarchical neural network model that comprises two GRUs for the modelling of tokens and utterances respectively.",
"Zhang et al. (2019) explicitly modelled the emotional dependencies on context and speakers using a Graph Convolutional Network (GCN).",
"Meanwhile, Ghosal et al. (2019) extended the prior work (Majumder et al., 2019) by taking into account the intra-speaker dependency and relative position of the target and context within dialogues.",
"Memory networks have been explored in (Jiao et al., 2020) to allow bidirectional influence between utterances.",
"A similar idea has been explored by Li et al. (2020b).",
"While the majority of works have been focusing on textual conversations, Zhong et al. (2019) enriched utterances with concept representations extracted from the ConceptNet (Speer et al., 2017).",
"Ghosal et al. (2020) developed COSMIC which exploited ATOMIC (Sap et al., 2019) for the acquisition of commonsense knowledge.",
"Different from existing approaches, we propose a topic-driven and knowledge-aware model built on a Transformer Encoder-Decoder structure for dialogue emotion detection.",
"Latent Variable Models for Dialogue Context Modelling Latent variable models, normally described in their neural variational inference form named Variational Autoencoder (VAE) (Kingma and Welling, 2014), has been studied extensively to learn thematic representations of individual documents (Miao et al., 2016; Srivastava and Sutton, 2017; Rezaee and Ferraro, 2020).",
"They have been successfully employed for dialogue generation to model thematic characteristics over dynamically evolving conversations.",
"This line of work, which inlcudes approaches based on hierarchical recurrent VAEs (Serban et al., 2017; Park et al., 2018; Zeng et al., 2019) and conditional VAEs (Sohn et al., 2015; Shen et al., 2018; Gao et al., 2019), encodes each utterance with historical latent codes and au-toregressively reconstructs the input sequence.",
"On the other hand, pre-trained language models are used as embedding inputs to VAE-based models (Peinelt et al., 2020; Asgari-Chenaghlu et al., 2020).",
"Recent work by Li et al. (2020a) employs BERT and GPT-2 as the encoder-decoder structure of VAE.",
"However, these models have to be either trained from scratch or built upon pre-trained embeddings.",
"They therefore cannot be directly applied to the low-resource setting of dialogue emotion detection.",
"Knowledge Base and Knowledge Retrieval ConceptNet (Speer et al., 2017) captures commonsense concepts and relations as a semantic network, which encompasses the spatial, physical, social, temporal, and psychological aspects of everyday life.",
"More recently, Sap et al. (2019) built ATOMIC , a knowledge graph centered on events rather than entities.",
"Owing to the expressiveness of events and ameliorated relation types, using ATOMIC achieved competitive results against human evaluation in the task of If-Then reasoning.",
"Alongside the development of knowledge bases, recent years have witnessed the thrive of new methods for training language models from large-scale text corpora as implicit knowledge base.",
"As has been shown in (Petroni et al., 2019), pre-trained language models perform well in recalling relational knowledge involving triplet relations about entities.",
"Bosselut et al. (2019) proposed COMmonsEnse Transformers (COMET ) which learns to generate commonsense descriptions in natural language by fine-tuning pre-trained language models on existing commonsense knowledge bases such as ATOMIC .",
"Compared with extractive methods, language models fine-tuned on knowledge bases have a distinctive advantage of being able to generate knowledge for unseen events, which is of great importance for tasks which require the incorporation of commonsense knowledge such as emotion detection in dialogues.",
"A dialogue is defined as a sequence of utterances { x 1 , x 2 , . . . , x N } , which is annotated with a sequence of emotion labels { y 1 , y 2 , . . . , y N } .",
"Our goal is to develop a model that can assign the correct label to each utterance.",
"As for each utterance, the raw input is a token sequence, i.e., x n = { w n, 1 , w n, 2 , . . . , w n,M n } where M n denotes the length of an utterance.",
"We address this problem using the Seq2Seq framework (Sutskever et al., 2014), in which the model consecutively consumes an utterance x n and predicts the emotion label y n based on the earlier utterances and their associated predicted emotion labels.",
"The joint probability of emotion labels for a dialogue is: P ( y 1: N | x 1: N ) = N (cid:89) n =1 P ( y n | x n , y <n ) (1) It is worth mentioning that the subsequent utterances are unseen to the model at each predictive step.",
"Learning is performed via optimizing the log-likelihoods of predicted emotion labels.",
"The overall architecture of our proposed TOpic-Driven and Knowledge-Aware Transformer (TODKAT ) is shown in Figure 2, which consists of two main components, the topic-driven language model fine tuned on dialogues, and the knowledge-aware transformer for emotion label sequence prediction for a given dialogue.",
"In what follows, we will describe each of the components in turn.",
"We propose to insert a topic layer into an existing language model and fine-tune the pre-trained language model on the conversational text for topic representation learning.",
"Topic models, often formulated as latent variable models, play a vital role in dialogue modeling (Serban et al., 2017) due to the explicit modeling of high-level syntactic features such as style and topic' (Bowman et al., 2016).",
"Despite the tremendous success of applying topic modeling in dialogue generation (Sohn et al., 2015; Shen et al., 2018; Gao et al., 2019), there is scarce work exploiting latent variable models for dialogue emotion detection.",
"To this end, we borrow the architecture from VHRED (Serban et al., 2017) for topic discovery, with the key modification that both the encoder RNN and decoder RNN are replaced by layers of a pre-trained language model.",
"Furthermore, we use a transformer multi-head attention in replacement of the LSTM to model the dependence between the latent topic vectors.",
"Unlike VHRED, we are interested in the encoder part to extract the posterior of the latent topic z , rather than the recurrent prior of z in the decoder part since the latter is intended for dialogue generation.",
"We assume that each utterance is mapped to a latent variable encoding its internal topic, and impose a sequential dependence on the topic transitions.",
"Figure 2a gives an overview of the VAE-based model which n th utterance ... z n Encoder (LM ) ... n th utterance with masks n+1 th utterance ... z n+1 ... n+1 th utterance with masks Decoder (LM ) Latent Vector Output Input Topic-driven fine-tuning",
"aims at learning the latent topic vector during the fine-tuning of the language model.",
"Specifically, the pre-trained language model is decomposed into two parts, the encoder and the decoder.",
"By retaining the pre-trained weights, we transfer representations from high-resource tasks to the low-resource setting, which is the case for dialogue emotion datasets.",
"Encoder The training of topic discovery part of TODKAT comprises a VAE at each time step, with its latent variable dependent on the previous latent code.",
"Each utterance is input to the VAE encoder with a recurrent hidden state, the output of which is a latent vector ideally encoding the topic discussed in the utterance.",
"The latent vectors are tied through a recurrent hidden state to constraint a coherent topic over a single dialogue.",
"We use LM to denote the network of lower layers of the language model (before the topic layer) and x Ln to denote the output from LM given the input x n .",
"The variational distribution for the approximation of the posterior will be: q ( z n | x n , z <n ) = N (cid:0) z n | f ( x Ln , h n 1 ) , f ( x Ln , h n 1 ) (cid:1) , (2) where h n 1 = f ( z n 1 , x Ln 1 ) , for n > 1 .",
"Here, f ( ) and f ( ) are multi-layer percep-trons (MLPs), f can be any transition function (e.g., a recurrent unit).",
"We employ the transformer multi-head attention with its query being the previous latent variable z n 1 , that is, f ( z n 1 , x Ln 1 ) = Attention( z n 1 , x Ln 1 , x Ln 1 ) .",
"(4) We initialize h 0 = 0 and model the transition between h n 1 and h n by first generating z n from h n 1 using Eq.",
"(2), then calculating h n by Eq.",
"(3).",
"Decoder The decoder network reconstructs x n from z n at each time step.",
"We use Gaussian distributions for both the generative prior and the variational distribution.",
"Since we want z n to be dependent on z n 1 , the prior for z n is p ( z n | h n 1 ) = N (cid:0) z n | f ( h n 1 ) , f ( h n 1 ) (cid:1) .",
"where f ( ) and f ( ) are MLPs.",
"The posterior for z n is p ( z n | x n , z <n ) , which is intractable and is approximated by q ( z n | x n , z <n ) of Eq.",
"2.",
"We denote the higher layers of the language model as LM .",
"Then the reconstruction of x n given z n and x Ln can be expressed as: x n = LM ( z n , x Ln ) .",
"Note that this is different from dialogue generation in which an utterance is generated from the latent topic vector.",
"Here, we aim to extract the latent topic from the current utterance and therefore train the model to reconstruct the input utterance as specified in Eq.",
"(5).",
"To make the combination of z n and x Ln compatible for LM , we need to perform the latent vector injection.",
"As in (Li et al., 2020a), we employ the Memory scheme that z n becomes an additional input for LM , that is, the input to the higher layers becomes [ z n , x Ln ] .",
"where p ( z n | z <n , x <n ) is the prior for z n .",
"After training, we are able to extract the topic representation from the encoder part of the model, which is denoted as z n = LM enc ( x n ) .",
"Meanwhile, the entire language model has been fine-tuned, which is denoted as u n = LMCLS ( x n ) .",
"The topic-driven LM fine-tuning stage makes it possible for the LM to discover a topic representation from a given utterance.",
"After fine-tuning, we attach the fine-tuned components to a classifier and train the classifier to predict the emotion labels.",
"We propose to use the Transformer Encoder-Decoder structure as the classifier, and consider the incorporation of commonsense knowledge retrieved from external knowledge sources.",
"In what follows, we first describe how to retrieve the commonsense knowledge from a knowledge source, then we present the detailed structure of the classifier.",
"Commonsense Knowledge Retrieval We use ATOMIC 2 as a source of external knowledge.",
"In ATOMIC , each node is a phrase describing an event.",
"Edges are relation types linking from one event to another.",
"ATOMIC thus encodes triples such as (cid:104) event, relation type, event (cid:105) .",
"There are a total of nine relation types, of which three are used: xIntent , the intention of the subject (e.g., to get a raise '), xReact , the reaction of the subject (e.g., be tired '), and oReact , the reaction of the object (e.g., be worried '), since they are defined as the mental states of an event (Sap et al., 2019).",
"Given an utterance x n , we can compare it with every node in the knowledge graph, and retrieve the most similar one.",
"The method for computing the similarity between an utterance and events is SBERT (Reimers and Gurevych, 2019).",
"We extract the topK events, and obtain their intentions and reactions, which are denoted as { e sIn,k , e sRn,k , e oRn,k } , k = 1 , . . . , K .",
"eration model, called COMET 3 , which is trained on ATOMIC .",
"It can take x n as input and generate the knowledge with the desired event relation types specified (e.g., xIntent , xReact or oReact ).",
"The generated knowledge can be unseen in ATOMIC since COMET is essentially a fine-tuned language model.",
"We use COMET to generate the K most likely events, each with respect to the three event relation types.",
"The produced events are denoted as { g sIn,k , g sRn,k , g oRn,k } , k = 1 , . . . , K .",
"Knowledge Selection With the knowledge retrieved from ATOMIC , we build a pointer network (Vinyals et al., 2015) to exclusively choose the commonsense knowledge either from SBERT or COMET .",
"The pointer network calculates the probability of choosing the candidate knowledge source as: P (cid:0) I ( x n , e n , g n ) = 1 (cid:1) = (cid:0) [ x n , e n , g n ] W (cid:1) , where I ( x n , e n , g n ) is an indicator function with value 1 or 0 , and ( x ) = 1 / (1+exp( x )) .",
"We envelope with Gumbel Softmax (Jang et al., 2017) to generate the one-hot distribution 4 .",
"The integrated commonsense knowledge is expressed as c n = I ( x n , e n , g n ) e n + (cid:0) 1 I ( x n , e n , g n ) (cid:1) g n , where c n = { c sIn,k , c sRn,k , c oRn,k } Kk =1 .",
"With the knowledge source selected, we proceed to select the most informative knowledge.",
"We design an attention mechanism (Bahdanau et al., 2015) to integrate the candidate knowledge.",
"Recall that we have a fine-tuned language model which can calculate both the [CLS] and topic representations.",
"Here we apply the language model to the retrieved or generated knowledge to obtain the [CLS] and the topic representation, denoted as [ c n,k , z n,k ] .",
"The attention mechanism is performed by calculating the dot product between the utter-3 https://github.com/atcbosselut/ comet-commonsense 4 We have also experimented with a soft gating mechanism by aggregating knowledge from SBERT and COMET in a weighted manner.",
"But the results are consistently worse than those using a hard gating mechanism.",
"ance and each normalized knowledge tuple: v k = tanh (cid:0) [ c n,k , z n,k ] W (cid:1) , (9) k = exp (cid:0) v k [ z n , u n ] (cid:62) (cid:1) (cid:80) k exp (cid:0) v k [ z n , u n ] (cid:62) (cid:1) , (10) c n = K (cid:88) k =1 k c n,k .",
"(11)",
"Here, we abuse c n to represent the aggregated knowledge phrases.",
"We further aggregate c n by event relation types using a self-attention and the final event representation is denoted as c n .",
"Transformer Encoder-Decoder We use a Transformer encoder-decoder to map an utterance sequence to an emotion label sequence, thus allowing for modeling the transitional patterns between emotions and taking into account the historical utterances as well.",
"Each utterance is converted to the [CLS] representation concatenated with the topic representation z n and knowledge representation c n .",
"We enforce a masking scheme in the self-attention layer of the encoder to make the classifier predict emotions in an auto-regressive way, entailing that only the past utterances are visible to the encoder.",
"This masking strategy, preventing the query from attending to future keys, suits better a real-world scenario in which the subsequent utterances are unseen when predicting an emotion of the current utterance.",
"As for the decoder, the output of the previous decoder block is input as a query to the self-attention layer.",
"The training loss for the classifier is the negative log-likelihood expressed as: L = N (cid:88) n =1 log p ( y n | u n , y <n ) , where denotes the trainable parameters.",
"In this section, we present the details of the datasets used, the methods for comparison, and the implementation details of our models.",
"DailyDialog (Li et al., 2017) is collected from daily communications.",
"It takes the Ekman's six emotion types (Ekman, 1993) as the annotation protocol, that is, it annotates an utterance with one of the six basic emotions: anger, disgust, fear, happiness, sadness , or surprise .",
"Those showing ambiguous emotions are annotated as neutral .",
"MELD (Poria et al., 2019) is constructed from scripts of Friends ', a TV series on urban life.",
"Same as DailyDialog, the emotion label falls into Ekman's six emotion types, or neutral .",
"IEMOCAP (Busso et al., 2008) is built with subtitles from improvised videos.",
"Its emotion labels are happy, sad, neutral, angry, excited and frustrated .",
"EmoryNLP (Zahiri and Choi, 2018) 5 is also built with conversations from Friends ' TV series, but with a slightly different annotation scheme in which disgust, anger and surprise become peaceful, mad and powerful , respectively.",
"Following Zhong et al. (2019) and Ghosal et al. (2020), the neutral ' label of DailyDialog is not counted in the evaluation to avoid highly imbal-anced classes.",
"For MELD and EmoryNLP, we consider a dialogue as a sequence of utterances from the same scene ID.",
"Table 1 summarizes the statistics of each dataset.",
"HiGRU (Jiao et al., 2019) simply inherits the recurrent attention framework that an attention layer is placed between two GRUs to aggregate the signals from the encoder GRU and pass them to the decoder GRU.",
"DialogueGCN (Ghosal et al., 2019) creates a graph from interactions of speakers to take into account the dialogue structure.",
"A Graph Convolutional Network (GCN) is employed to encode the speakers.",
"Emotion labels are predicted with the combinations of the global context and speakers' status.",
"5 https://github.com/emorynlp/ emotion-detection Models DailyDialog MELD IEMOCAP EmoryNLP Macro-F1-neutral Micro-F1-neutral weightedAvg-F1 Micro-F1 weightedAvg-F1 Micro-F1 weightedAvg-F1 Micro-F1 HiGRU 0.4904 0.5190 0.5681 0.5452 0.5854 0.5828 0.3448 0.3354 DialogueGCN 0.4995 0.5373 0.5837 0.5617 0.6085 0.6063 0.3429 0.3313 KET 0.5348 0.5818 0.5956 0.3439 COSMIC 0.5105 0.5848 0.6521 0.6528 * 0.3811 TODKAT 0.5256 0.5847 0.6823 0.6475 0.6133 0.6111 0.4312 0.4268 Topics 0.5136 0.5549 0.6634 0.6352 0.6281 0.6260 0.4180 0.4055 KB 0.5003 0.5344 0.6397 0.6111 0.5896 0.5738 0.3379 0.3262 KATSBERT 0.5173 0.5578 0.6454 0.6188 0.6097 0.6069 0.3734 0.3567 KATCOMET 0.5102 0.5462 0.6582 0.6307 0.6277 0.6254 0.4110 0.3974 Table 2: The F1 results of the dialogue emotion detectors on four benchmarks.",
"KET (Zhong et al., 2019) is the first model which integrates common-sense knowledge extracted from ConceptNet and emotion information from an emotion lexicon into conversational text.",
"A Transformer encoder is employed to handle the influence from past utterances.",
"COSMIC (Ghosal et al., 2020) is the state-of-the-art approach that leverages ATOMIC for improved emotion detection.",
"COMET is employed in their model to retrieve the event-eccentric commonsense knowledge from ATOMIC .",
"We modified the script 6 of language model fine-tuning in the Hugging Face library (Wolf et al., 2020) for the implementation of topic-driven fine-tuning.",
"We use one transformer encoder layer.",
"As for the decoder, there are N layers where N is the number of utterances in a dialogue.",
"We refer the readers to the Appendix for the detailed settings of the proposed models.",
"Comparison with Baselines Experiment results of TODKAT and its ablations are reported in Table 2.",
"HiGRU and DialogueGCN results were produced by running the code published by the authors on the four datasets.",
"Among the baselines, COSMIC gives the best results.",
"Our proposed TODKAT outperforms COSMIC on both MELD and EmoryNLP in weighted Avg-F1 with the improvements ranging between 3-5%.",
"TODKAT also achieves superior result than COSMIC on DailyDi-6 https://huggingface.co/transformers/ v2.0.0/examples.html alogue in Macro-F1 and gives nearly the same result in Micro-F1.",
"TODKAT is inferior to COSMIC on IEMOCAP.",
"It is however worth mentioning that COSMIC was trained with 132 instances on this dataset, while for all the other models the training-and-validation split is 100 and 20 .",
"As such, the IEMOCAP results reported on COSMIC (Ghosal et al., 2020) are not directly comparable here.",
"COSMIC also incorporates the commonsense knowledge from ATOMIC but with the modified GRUs.",
"Our proposed TODKAT , built upon the topic-driven Transformer, appears to be a more effective architecure for dialogue emotion detection.",
"Compared with KET, the improvements are much more sig-nificant, with over 10% increase on MELD, and close to 5% gain on DailyDialog.",
"KET is also built on the Transformer, but it considers each utterance in isolation and applies commonsense knowledge from ConceptNet.",
"TODKAT , on the contrary, takes into account the dependency of previous utterances and their associated emotion labels for the prediction of the emotion label of the current utterance.",
"DialogueGCN models interactions of speakers and it performs slightly better than KET.",
"But it is significantly worse than TODKAT .",
"It seems that topics might be more useful in capturing the dialogue context.",
"Ablation Study The lower half of Table 2 presents the F1 scores with the removal of various components from TODKAT .",
"It can be observed that with the removal of the topic component, the performance of TODKAT drops consistently across all datasets except IEMOCAP in which we observe a slight increase in both weighted average F1 and Micro-F1.",
"This might be attributed to the size of the data since IEMOCAP is the smallest dataset evaluated here, and small datasets hinder the model's capability to discover topics.",
"Without using the commonsense knowledge ( KB'), we observe more drastic performance drop compared to all other components, with nearly 10% drop in F1 on EmoryNLP, showing the importance of employing commonsense knowledge for dialogue emotion detection.",
"Comparing two different ways of extracting knowledge from ATOMIC , direct retrieval using SBERT or generation using COMET , we observe mixed results.",
"Overall, the Transformer Encoder-Decoder with a pointer network is a conciliator between the two methods, yielding a balanced performance across the datasets.",
"Relationships between Topics and Emotions To investigate the effectiveness of the learned topic vectors, we perform t-SNE (Van der Maaten and Hinton, 2008) on the test set to study the relationship between the learned topic vectors and the ground-truth emotion labels.",
"The results on DailyDialog and MELD are illustrated in Figure",
"3(a) and",
"(b).",
"Latent topic vectors of utterance are used to plot the data points, whose colors indicate their ground-truth emotion labels.",
"We can see that the majority of the topic vectors cluster into polarized groups.",
"Few clusters are bearing a mixture of polarity, possibly due to the background topics such as greetings in the datasets.",
"Topics can be interpreted using the attention scores of Eq.",
"4.",
"The top-10 most-attended words are selected as the representative words for each utterance.",
"As in (Dathathri et al., 2020), we construct bag-of-words 7 that represent 141 distinct topics.",
"Given the attended words of an utterance cluster grouped based on their latent topic representations, we label the word collection with the dominant theme name.",
"We refer to the theme names as topics in Figure 3c.",
"It can be observed that utterances associated with Office tend to carry disgust' emotions, while those related to Family are prone to be happy' .",
"We further compute the Spearman's rank-order correlation coefficient to quantitatively verify the relationship between the topic and emotion vectors.",
"For an utterance pair, a similarity score is 7 Word lists and their corresponding theme names are crawled from https://www.enchantedlearning.",
"com/wordlist/ .",
"obtained separately for their corresponding topic vectors as well as their emotion vectors.",
"We then sort the list of emotion vector pairs according to their similarity scores to check to what extent their ranking matches that of topic vector pairs, based on the Spearman's rank-order correlation coefficient.",
"The results are 0 .",
"60 , 0 .",
"58 , 0 .",
"42 and 0 .",
"54 with p-values (cid:28) 0 .",
"01 respectively for DailyDialog, MELD, IEMOCAP and EmoryNLP, showing that there is a strong correlation between the clustering of topics and that of emotion labels.",
"IEMOCAP has the lowest correlation score, which is inline with the results in Table 2 that the discovered latent topics did not improve the emotion classification results.",
"Impact of Relation Type We investigate the impact of commonsense relation types on the performance of TODKAT .",
"We expand the relation set to five relation types and all nine relation types, respectively.",
"According to (Sap Dataset Relation Type { sI, sR, oR, sE, oE } All DailyDialog 0.5718 0.5664 MELD 0.6429 0.6322 IEMOCAP 0.6163 0.6073 EmoryNLP 0.4029 0.3885 Table 3: Micro-F1 scores of TODKAT with more commonsense relation types retrieved from ATOMIC included for training. Here, sE and oE represent effect of subject and effect of object , respectively. All denotes the incorporation of all nine commonsense relation types from ATOMIC . et al., 2019), there are other relation types including { sNeed , sWant , oWant , sEffect , oEffect } , which identifies the prerequisites and post conditions of the given event, and { sAttr } , the If-Event-Then-Persona category of relation type that describes how the subject is perceived by others.",
"We calculate the Micro-F1 scores of TODKAT with these two categories of relation types added step by step.",
"From Table 3 we can conclude that the inclusion of two extra relation types or all relation types degrades the F1 scores on almost all datasets.",
"An exception occurs on IEMOCAP where the F1 score rises by 0 .",
"5% when adding sE and oE relations, possibly due to the fact that the dataset is abundant in events.",
"Hence the extra event descriptions offer complementary knowledge to some extent.",
"While on other datasets neither the incorporation of If-Event-Then-Event nor the incorporation of If-Event-Then-Persona relation types could bring any benefit.",
"Impact of Attention Mechanism With the knowledge retrieved from ATOMIC or generated from COMET , we are able to infer the possible intentions and reactions of the interlocutors.",
"However, not all knowledge phrases contribute the same to the emotion of the focused utterance.",
"We study the attention mechanism in terms of selecting the relevant knowledge.",
"We show in Table 4 a heat map of the attention scores in Eq.",
"9 to illustrate how the topic-driven attention could identify the most salient phrase.",
"The utterance Oh my God, you're a freak. ' will be erroneously categorized as mad ' without using the topic-driven attention (shown in the last row of Table 4).",
"In contrast, the attention mechanism guides the model to attend to the more relevant events and thus predict the correct emotion label.",
"We have proposed a Topic-Driven and Knowledge-Aware Transformer model that incorporates topic representation and the commonsense knowledge from ATOMIC for emotion detection in dialogues.",
"A topic-augmented language model based on fine-tuning has been developed for topic extraction.",
"Pointer network and additive attention have been explored for knowledge selection.",
"All the novel components have been integrated into the Transformer Encoder-Decoder structure that enables Seq2Seq prediction.",
"Empirical results demonstrate the effectiveness of the model in topic representation learning and knowledge integration, which have both boosted the performance of emotion detection.",
"The authors would like to thank the anonymous reviewers for insightful comments and helpful suggestions.",
"This work was funded by the EPSRC (grant no. EP/T017112/1, EP/V048597/1).",
"LZ is funded by the Chancellor's International Scholarship at the University of Warwick.",
"YH is supported by a Turing AI Fellowship funded by the UK Research and Innovation (grant no. EP/V020579/1).",
"DZ is funded by the National Key Research and Development Program of China (2017YFB1002801) and the National Natural Science Foundation of China (61772132)."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"abstain",
"objective",
"objective",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"We contribute a new dataset 1 for the task of automated fact checking and an evaluation of state of the art algorithms.",
"The dataset includes claims (from speeches, interviews, social media and news articles), review articles published by professional fact checkers and premise articles used by those professional fact checkers to support their review and verify the veracity of the claims.",
"An important challenge in the use of premise articles is the iden-tification of relevant passages that will help to infer the veracity of a claim.",
"We show that transferring a dense passage retrieval model trained with review articles improves the retrieval quality of passages in premise articles.",
"We report results for the prediction of claim veracity by inference from premise articles.",
"The rise of social media has led to a democratization of news, but it has also amplified issues related to fake news and misinformation.",
"To that effect, many fact checking organizations (e.g., Politifact, Snopes, AFP Fact Check, Alt News, FactCheck.org, Africa Check, etc.) have emerged around the globe.",
"They investigate debatable claims made by authorities, politicians, celebrities and the public.",
"For each claim, they publish a review article with links to sources that support a verdict (e.g., true, partly true/false, false) about the veracity of the claim.",
"Those reviews debunk false claims and mitigate the spread of misinformation.",
"We consider a key NLP challenge in the context of automated fact checking: claim inference from premise articles.",
"Note that determining the veracity of a claim without additional information is nearly impossible since claims are selected by professional fact checkers 1 Code and form to request the dataset are available at https://github.com/nxii/WatClaimCheck .",
"The WatClaimCheck dataset is available upon request for noncommercial research purposes only under the fair dealing exception of the Canada Copyright Act.",
"in part because their veracity is far from obvious and also because of their degree of controversy.",
"To that effect, professional fact checkers invest a fair amount of time to research each claim by finding relevant sources and publishing a review article that explains their verdict of the claim.",
"Hence there is a natural entailment problem, whereby anyone who reads a review article should be able to arrive at the same verdict as the professional fact checker regarding the claim.",
"Unlike many entailment tasks that consist of short text (e.g., pairs of utterances) that may be artificially generated or extracted, this is a natural and challenging entailment task that involves an entire document (review article) with an utterance (claim) that requires a certain degree of reading comprehension.",
"We note that this entailment problem has been tackled in some previous work (Augenstein et al., 2019; Shu et al., 2018; Nakov et al., 2021) and although it is a challenging NLP problem, it does not correspond to the problem that professional fact checkers need to solve.",
"In this paper, we focus on the harder problem of claim inference from premise articles.",
"This is part of the challenge that professional fact checkers face.",
"They find premise articles that contain relevant facts and then infer the veracity of the claim based on those facts.",
"Unlike many existing inference tasks where it is sufficient to use one or a few facts in a few sentences (Storks et al., 2019; Schlegel et al., 2020), information from a set of premise articles must be distilled and combined in non trivial ways to infer the veracity of a claim.",
"We assembled a dataset of 33,697 claims made between December 1996 and July 2021 with associated review articles, premise articles and claim verdicts.",
"Many other datasets for claim verification are listed in Table 1. However, most of them do not include premise articles needed for the inference task described above.",
"We note two exceptions: PubHealth (Kotonya and Toni, 2020b), which is restricted to health claims and UKP 1293 Snopes (Hanselowski et al., 2019), which is restricted to claims investigated by one fact checking organization (Snopes).",
"In contrast, WatClaimCheck includes claims investigated by 8 fact checking organizations on any topic.",
"Since there are several premise articles for a given claim and each premise article may be long, a simple two-stage approach to identify relevant passages would consist of a lightweight retrieval technique in a first stage, followed by a heavyweight inference technique applied to those passages.",
"When the first stage fails to retrieve key passages, then the inferred verdict will be negatively affected regardless of how good the second stage is.",
"To that effect, several supervised dense passage retrieval techniques have been proposed for question-answering (Karpukhin et al., 2020; Qu et al., 2021; Ren et al., 2021).",
"Unfortunately, we cannot directly apply those techniques since we do not have labels for the relevant passages.",
"Instead, we show how to use the review articles to train a supervised dense retrieval technique that is then transferred to premise articles.",
"The contributions of the paper can be summarized as follows: New dataset of claims with review and premise articles for claim inference in automated fact checking; Novel use of review articles to transfer a dense retrieval technique to premise articles; Experiments establishing the state of the art for claim inference.",
"The paper is organized as follows.",
"Sect.",
"2 reviews previous work related to automated fact checking and claim verification.",
"Sect.",
"3 describes the new dataset and summarizes the differences with previous datasets for claim verification.",
"Sect.",
"4 describes a two-stage process to",
"i) extract evidence sentences from premise articles and",
"ii) infer the veracity of claims.",
"This section also explains how to transfer a dense passage retrieval technique trained with review articles to premise articles.",
"Sect.",
"5 reports the results for the claim veracity inference task.",
"Finally, Sect.",
"6 concludes and discusses possible future work.",
"There is an important line of work that focuses on claim verification (Kotonya and Toni, 2020a; Guo et al., 2022).",
"This includes techniques that predict the veracity of a claim based on the text of the claim only (Rashkin et al., 2017), linguistic features (Popat et al., 2017), meta information about the claimant (e.g., name, job, party affiliation, veracity history) (Wang, 2017), review articles (Au-genstein et al., 2019; Shu et al., 2018; Nakov et al., 2021), relevant articles returned by a search engine (Popat et al., 2018; Augenstein et al., 2019; Mishra and Setty, 2019) as well as premise articles (Aly et al., 2021; Kotonya and Toni, 2020b).",
"There is an important distinction between articles returned by a search engine and premise articles.",
"The techniques that use a search engine to find articles related to a claim query the search engine after a fact checking website has published a review article and therefore end up retrieving articles that include the review article as well as other articles that summarize and/or discuss the verdict of the fact checking website.",
"Hence they are tackling an entailment problem.",
"In contrast, the premise articles that we consider are the source articles used by a fact checker before publishing a review article.",
"Those articles contain relevant facts, but not a summary or discussion of the review article since they are published before the review article and in fact serve as premises for the review article.",
"Closely related to claim verification is the problem of fake news detection.",
"In this problem, the credibility of an entire news article is evaluated.",
"The credibility can be estimated based on linguistic and textual features (Conroy et al., 2015; Reis et al., 2019; Li et al., 2019), discourse level structure (Karimi and Tang, 2019), network analysis (Conroy et al., 2015), knowledge graphs (Cui et al., 2020), inter-user behaviour dynamics (Gan-gireddy et al., 2020) or a combination of multiple modalities (Wang et al., 2020).",
"Some techniques reorder the articles returned by a search engine based on their degree of credibility (Olteanu et al., 2013; Beylunioglu, 2020).",
"An important task that can help the detection of fake news is stance detection (Borges et al., 2019; Jwa et al., 2019), i.e., does the content of an article agree or disagree with the title of the article?",
"The following surveys summarize advances in fake news detection: (Kumar and Shah, 2018; Bondielli and Marcelloni, 2019).",
"ing eight fact checking services: Politifact, Snopes, AFP Fact Check, Alt News, FactCheck.org, Africa Check, USA Today, and Full Fact.",
"We utilize Google's fact check tool APIs 2 to collect the claims' metadata for all fact checking services except Politifact and Snopes.",
"The claims' metadata collected from Google's fact check tool APIs include the claim review article URL, which is used to retrieve the claim review article.",
"The claim review articles published by some of the fact checking services provide the premise article URLs in a separate section while others provide the URLs as inline links in the review article body.",
"We parse the article body, retrieving the premise article URLs used in the review article to justify the claim veracity.",
"Finally, the premise URLs are used to retrieve the premise articles.",
"We try to directly retrieve the article where possible, but also use archive.org's API in case a premise article is no longer available online.",
"We follow the same general procedure for data collection from Politifact and Snopes except that we directly crawl the respective websites instead of using Google's fact check tool APIs for collecting claims and associated metadata.",
"We perform some basic cleanup to the collected data before inclusion in the dataset.",
"This includes removing articles behind paywalls, removing claims with less than two premise articles, and removing non-textual premise sources.",
"We obtain premise article text from their HTML pages by loading the HTML files into a text based web browser (Links browser) and then dumping the web page text into a text file.",
"This allows us to bypass the CSS styling and JavaScript code included in the HTML pages and obtain only the text displayed to end users.",
"Admittedly, this does not eliminate auxiliary text such as navigation links, footer text, recommended links, etc.",
"The premise articles include the source document of the claim when available as well as evidence articles used by fact checkers.",
"We map the numerous claim veracity labels used by the fact checking websites into three broad labels: True, Partially True/False, and False.",
"The contributed dataset contains a total of 33,721 claims.",
"We split those claims into the following three sets: training set containing 26,976 claims, validation set containing 3,372 claims, and test set containing 3,373 claims.",
"For each claim in 2 https://toolbox.google.com/factcheck/apis Figure 1: Number of claims from each source Figure 2: Dataset Claim Rating Counts the dataset, we provide the following data: ID, Claimant, Claim, Claim Date, Reviewer Name, Reviewer Site, Review Article URL, Review Article Date, Review Article, Rating, Original Rating, Premise Articles and Premise Article URLs .",
"Here Original Rating refers to the rating assigned by a fact checking organization and Rating corresponds to our mapping of the original rating to true, partly true/false and false (see the dataset for the precise mapping).",
"We provide the extracted text files for the review and premise articles.",
"Fig. 1 shows the number of claims per fact checking services.",
"Fig. 2 shows the claim rating distribution.",
"Claims in the Partially True/False and False categories significantly outnumber the claims in the True category.",
"In reality, the number of true claims is much larger than the number of partially true/false and false claims, but fact checking services focus on debunking controversial claims and therefore the majority of the claims they investigate are false or partially true/false.",
"This imbalance poses an important challenge.",
"We compare our proposed dataset with other publicly available fact checking related datasets in Table 1. We can broadly classify the fact checking datasets into two different categories: (1) veracity detection datasets based only on claim text and some metadata, but without supporting evidence documents and (2) datasets that provide claim text along with supporting evidence and/or context doc-1295",
"uments.",
"The datasets that provide some evidence or context documents can be further subcategorized: (1) datasets that provide social media posts and comments related to the claim (Mitra and Gilbert, 2015; Nakamura et al., 2020; Shu et al., 2018), (2) datasets that retrieve supporting evidence for the claims by performing a web search using queries obtained from lexical and semantic features of the claim text (Baly et al., 2018; Augenstein et al., 2019; Gupta and Srikumar, 2021), (3) datasets that provide Wikipedia pages as supporting evidence (Thorne et al., 2018; Fan et al., 2020; Aly et al., 2021), and (4) datasets that include premise articles used by professional fact checkers (Hanselowski et al., 2019; Kotonya and Toni, 2020b).",
"Our proposed dataset provides the documents cited by the professional fact checkers in the claim review article to justify their claim rating.",
"This reflects the real world task of automated veracity detection more truthfully due to the availability of the premise articles cited by the professional fact checkers in claim review articles.",
"Although, social media posts and comments can sometimes be helpful in claim veracity detection they are rarely treated as authoritative sources of information.",
"Using a web search to retrieve evidence documents after a fact checking service has verified a claim is problematic since multiple news agencies often publish articles referencing the original fact checking review article.",
"Top-k web search results typically contain those articles which may indirectly leak the veracity label.",
"We develop a two-stage system to perform evidence based veracity detection.",
"The first stage selects relevant sentence level evidence from the premise articles associated with a claim and the second stage performs claim veracity inference using the claim text and selected evidence sentences.",
"For the first stage, we evaluate two different approaches.",
"The first approach is term frequency inverse document frequency (TF-IDF), which is typically used by fact checking methods for sentence based retrieval (Aly et al., 2021).",
"For the second approach, we propose a novel way to adapt dense passage retrieval techniques using the review articles for evidence sentence selection.",
"In our experiments, the aforementioned dense passage retrieval technique outperforms TF-IDF text retrieval and leads to overall system performance improvements.",
"The second stage consists of training deep learning models to perform claim veracity inference using the claim text and selected evidence.",
"We utilize multiple deep learning models to perform claim veracity inference ranging from basic bi-directional recurrent networks to state of the art transformers.",
"We represent a claim containing l tokens as C n = { c 1 , c 2 , . . . , c l } , where n [1 , N ] and N is the size of the dataset.",
"Each claim is associated with multiple premise articles, we represent the k -th premise article associated with the n -th claim containing m sentences as A n,k = { s Pn, 1 , s Pn, 2 , . . . , s Pn,m } where s n,i represents the i -th sentence.",
"Similarly, we represent the review article associated with the claim C n containing m sentences by R n = { s Rn, 1 , s Rn, 2 , . . . , s Rn,m } .",
"For a given claim C n , we represent its ground truth veracity label by y n .",
"We cast the problem as a textual inference problem.",
"Given a claim C n and a set of associated premise articles A , our goal is to predict the ground truth veracity y n of the claim.",
"A key step performed by professional fact checkers is examining the premise articles associated with a claim and extracting useful evidence from them to establish claim veracity.",
"Our first stage seeks to perform a similar task.",
"Each claim in our dataset has multiple associated premise articles with each article containing a large amount of text.",
"Our goal in the first stage is to rank the evidence available in the associated premise articles at the sentence level and extract the ones which are most useful and impactful for veracity detection in the second stage.",
"Our experiments show that an improvement in this stage directly contributes to an overall improvement in the veracity detection performance.",
"We measure TF-IDF similarity between the claim text and the premise article sentences to rank the sentence level evidence.",
"Top ranked sentences are used in the second stage to perform veracity detection.",
"This approach is similar to the one used by Thorne et al. (2018) to extract evidence sentences from Wikipedia articles for fact checking.",
"We propose a novel way of adapting the dense passage retrieval method proposed by Karpukhin et al. (2020) for open domain question answering to the",
"task of retrieving evidence sentences from premise articles.",
"Karpukhin et",
"al.'s method uses a dual encoder architecture.",
"Each encoder is implemented using BERT (Devlin et al., 2018).",
"The question encoder EQ and the passage encoder EP embed question q and passage p into d -dimensional vectors.",
"The similarity between the question and passage is defined as the dot product of their vectors: sim ( q, p ) = EQ ( q ) TEP ( p ) (1) The model is then trained to learn embeddings such that the similarity score between relevant question-passage pairs will be higher than irrelevant ones.",
"We adapt this method for our first stage by taking advantage of the fact that the review article published by fact checking websites (along with a claim) typically contains key evidence taken from the premise articles.",
"The evidence is usually paraphrased in order to form a coherent argument in support of the claim veracity verdict.",
"To train the dense passage retrieval model for stage-1, we use the claims and the associated review articles in the training set of our dataset.",
"We form positive pairs using the claim and the sentences from the associated review article.",
"The negative pairs are formed using that same claim and sentences from review articles associated with other claims.",
"This corresponds to the gold negative sampling technique in (Karpukhin et al., 2020).",
"Let D = {(cid:104) C i , s R + i,j , s R i, 1 , s R i, 2 , . . . , s R i,n 1 (cid:105) | R i | j =1 } Ni =1 be the training data containing (cid:80) Ni =1 | R i | instances where N is the number of claims in the training set, | R i | is the number of sentences in the review article associated with the i -th claim.",
"Each instance is made up of a claim C i with one positive sentence from the associated review article s R + i,j and n 1 randomly chosen negative sentences s R i,k .",
"These negative sentences are positive sentences for other claims within the same batch.",
"We train the model by optimizing the negative log likelihood of the positive sentences: L ( C i , s R + i,j , s R i, 1 , s R i, 2 , . . . , s R i,n 1 ) = log e sim ( C i ,s R + i,j ) e sim ( C i ,s R + i,j ) + (cid:80) n 1 k =1 e sim ( C i ,s R i,k ) (2) For model evaluation, we use the top-k recall rate for retrieving the review article sentences corresponding to the claims in the validation and test set using the similarity score.",
"The review article sentences are retrieved from the corpus formed by all the sentences from every review article in the corresponding set.",
"After training, we use the encoders to encode the claim text and the sentences of the associated premise articles.",
"We compute the similarity score using the dot product between the encoded claim vector and the premise article sentences.",
"We use the top scoring sentences as evidence sentences in the next stage to perform claim veracity inference.",
"true, partly true/false or false based on the text of the claim, the claimant and the evidence sentences extracted in stage 1.",
"We first consider bi-directional long short term memory (Bi-LSTM) networks and bi-directional gated recurrent units (Bi-GRUs).",
"The evidence sentences of each premise article are concatenated with the claim and claimant, and then encoded by a Bi-LSTM or Bi-GRU into a latent vector.",
"For N premise articles, the resulting N vectors are then averaged and passed through a softmax layer with 3 outputs corresponding to the predicted probabilities of true, partly true/false and false.",
"Instead of concatenating the evidence sentences of each premise article into a long sequence, we can also use hierarchical attention networks (HANs) (Yang et al., 2016; Mishra and Setty, 2019) to compute sentence level embeddings that are then combined into article level embeddings.",
"A HAN is used to embed each premise article with the claim as follows.",
"Each sentence (claimant with claim text or each evidence sentence of the premise article) is embedded as a sequence of hidden vectors (one per word) by a bi-directional recurrent network (Bi-LSTM or Bi-GRU).",
"Then, a word-level attention layer computes a sentence level embedding.",
"Next, those embeddings are fed to another bi-directional recurrent network (Bi-LSTM or Bi-GRU) that computes a sequence of hidden vectors (one per sentence) and a sentence level attention layer computes an embedding for the document-claim pair.",
"Finally, the embeddings of the document-claim pairs are averaged and passed through a softmax over the labels true, partly true/false and false.",
"We finetune a RoBERTa-base (Liu et al., 2019) model to perform claim veracity inference using the claim and the evidence sentences.",
"We concatenate the claim text, the name of the claimant, and the evidence sentences extracted for that particular claim in the first stage to build a training data instance.",
"The input sequence is encoded using the RoBERTa-base model and passed through a dense linear layer followed by a softmax to obtain the predicted claim veracity label distribution.",
"We use the cross entropy loss function to train the model.",
"We evaluate the two-stage process and the algorithms described in the previous section on the claim inference problem with our new dataset.",
"In order to reduce the computational resources and memory requirements, we implement the encoders in the dense passage retrieval model using DistilRoBERTa (Dis).",
"We use a batch size of 64 and the in-batch negatives technique as described in (Karpukhin et al., 2020).",
"We evaluate the stage-1 methods by comparing their performance using the top-k recall rate metric.",
"The claim text is used to retrieve the ground truth review article sentences from the corpus containing all the sentences of all the review articles in the test set.",
"The test contains a total of 114 , 290 sentences and 3 , 373 claims.",
"We report the top-k recall rate for k = 10 , 25 , 50 , 100 in Table 2. The results clearly show that the DPR (dense passage retrieval) method outperforms the method based on TF-IDF.",
"To evaluate whether the inference models in stage-2 can do better with the inclusion of additional evidence sentences, we perform the experiments in",
"stage-2 in two settings: Pooled and Averaged.",
"Pooled: In this setting, for each claim we pool all the sentences from every associated premise article and rank them using the similarity score.",
"The evidence sentences are concatenated in the descending order of their similarity score.",
"Afterwards, the claim text and evidence sentences are concatenated.",
"The resulting text is then truncated to the maximum sequence length capability of the transformer model being used to perform claim veracity inference.",
"For each claim, we get exactly one data instance.",
"Averaged: This refers to the setting where we generate one data instance per claim and associated premise article.",
"So, if a claim has m premise articles, we get m data instances.",
"For each premise 1298 article associated with a claim, we score the sentences from that article and extract the top scoring sentences to form a data instance.",
"We concatenate the evidence sentences in the descending order of their similarity score.",
"The evidence sentences are then concatenated to the claim text and truncated to the maximum sequence length capability of the transformer model being used to perform claim veracity inference.",
"During training, each data instance for a claim is used independently, but during inference, we compute the average of the claim veracity prediction distributions of the data instances associated with a single claim.",
"We show in our reported results that the inclusion of additional evidence in the form of m data instances per claim (instead of 1 data instance for the pooled setting) does improve the performance when the retrieval method of stage 1 is not very effective.",
"We use macro F1 as the evaluation metric.",
"We report the results in Table 3. We report all the hyper parameters used in our experiments in the appendix.",
"The best performance when doing the claim veracity inference is obtained by using the DPR model in the first stage and the RoBERTa-base model in the second stage.",
"We also report results for claim entailment from the review articles as an upper bound on the accuracy that could be achieved for claim inference based on the premise articles.",
"We note that the traditional experimental setup of dividing a dataset at random into train, validation and test does not reflect the streaming nature of claims.",
"When new topics arise (i.e., election, covid-19), the nature of the claims and the premise articles changes.",
"Randomly splitting the dataset into train/validation/test ensures that all claim topics are well represented across the train/validation/test splits, which would not be the case in practice.",
"In reality, when a new topic arises, the test split may have new types of claims that are not well represented in the train/validation splits.",
"To evaluate the effect of this distribution shift over time, we performed a prequential evaluation (Bifet et al., 2015).",
"More precisely, we divide the dataset into subsets corresponding to periods of 6 months.",
"We repeatedly evaluate the performance for each 6-month period by treating the claims in that period as the test set and the claims in previous periods as the train/validation sets.",
"This corresponds to a realistic setting where a claim verification algorithm may be re-trained every 6 months on the data seen so far to predict the veracity of the claims for the next 6 months.",
"Naturally, the time period between each re-training iteration may be shorter than 6 months in practice.",
"We chose 6 months simply to ensure that the size of the test set would be large enough to obtain reliable results.",
"Fig. 3 shows the number of claims investigated in each 6-month period in our dataset.",
"We note two peaks.",
"The first one in 2016 corresponds to a sudden surge of claims investigated by some fact checking websites regarding India politics.",
"The second peak in 2020 corresponds to the 2020 US presidential election and the start of the covid-19 pandemic.",
"Fig. 4 shows the macro F1 results achieved by the top 4 algorithms with DPR evidence in each 6-month period.",
"We note that the prequential results are significantly lower than the results in the DPR column of Table 3. This drop of accuracy is precisely due to the distribution shift of claims that naturally occurs over time.",
"We also note a trend whereby the accuracy increases as time passes by.",
"This is explained by the fact that more data is available for training in later time periods.",
"We strongly recommend that future algorithms be evaluated in prequential mode since this evaluation setup is more realistic.",
"This paper introduces a new dataset for automated fact checking.",
"WatClaimCheck includes premise articles used by professional fact checkers and therefore corresponds more closely to the task of claim veracity inference in automated fact checking.",
"An important challenge is the extraction of relevant facts from the premise articles since it is not generally possible to apply heavyweight models on the entire content of all premise articles.",
"To that effect, we described how to train the encoders of a dense passage retrieval technique with the review articles and then transfer the resulting retrieval technique to the premise articles.",
"This increased the overall performance of the claim verification algorithms.",
"We also performed a prequential evaluation that highlighted an important distribution shift that caused a significant drop in accuracy for all algorithms.",
"We strongly recommend that future algorithms be evaluated in prequential mode.",
"In fact, an important direction for future research would be to design algorithms based on transfer learning or domain generalization that can cope better with this distributional shift.",
"We also note that the tech-1299 Algorithm Review Evidence(TF-IDF) Evidence(DPR) Bi-GRU 0.779 0.009 0.418 0.010 0.453 0.009 Bi-LSTM 0.777 0.008 0.421 0.011 0.454 0.010 HAN-Bi-GRU 0.821 0.007 0.445 0.010 0.471 0.009 HAN-Bi-LSTM 0.818 0.007 0.444 0.008 0.471 0.011 Roberta-base (pooled) 0.741 0.005 0.541 0.017 0.580 0.009 Roberta-base (averaged) 0.741 0.005 0.563 0.010 0.565 0.009 Table 3: Macro F1 score averaged over 10 runs with standard deviation Figure 3: count of claims in each time window Figure 4: Prequential evaluation score based on macro F1 (average of 10 runs with standard deviation) 1300 niques that we evaluated are black boxes and therefore it is not clear how they do inference.",
"Hence, another direction for future research would be to develop inference techniques that are explainable in the sense that they could provide explanations to the users to justify their veracity prediction for a claim.",
"We thank the Schulich Foundation and the Natural Sciences and Engineering Research Council of Canada (NSERC) for funding support.",
"Resources used in this work were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute https:// vectorinstitute.ai/partners/ ."
] | [
"objective",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Lemmatization of standard languages is concerned with",
"(i) abstracting over morphological differences and",
"(ii) resolving token-lemma ambiguities of inflected words in order to map them to a dictionary headword.",
"In the present paper we aim to improve lemmatization performance on a set of non-standard historical languages in which the difficulty is increased by an additional aspect",
"(iii): spelling variation due to lacking orthographic standards.",
"We approach lemmatization as a string-transduction task with an encoder-decoder architecture which we enrich with sentence context information using a hierarchical sentence encoder.",
"We show significant improvements over the state-of-the-art when training the sentence encoder jointly for lemmatization and language modeling.",
"Crucially, our architecture does not require POS or morphological annotations, which are not always available for historical corpora.",
"Additionally, we also test the proposed model on a set of typologically diverse standard languages showing results on par or better than a model without enhanced sentence representations and previous state-of-the-art systems.",
"Finally, to encourage future work on processing of non-standard varieties, we release the dataset of non-standard languages underlying the present study, based on openly accessible sources.",
"Lemmatization is the task of mapping a token to its corresponding dictionary head-form to allow downstream applications to abstract away from orthographic and inflectional variation (Knowles and Mohd Don, 2004).",
"While lemmatization is considered to be solved for analytic and resource-rich languages such as English, it remains an open challenge for morphologically complex (e.g. Estonian, Latvian) and low-resource languages with unstable orthography (e.g. historical languages).",
"Especially for languages with higher surface variation, lemmatization plays a crucial role as a preprocessing step for downstream tasks such as topic modeling, stylometry and information retrieval.",
"In the case of standard languages, lemmatization complexity arises primarily from two sources:",
"(i) morphological complexity affecting the number of inflectional patterns a lemmatizer has to model and",
"(ii) token-lemma ambiguities (e.g. living can refer to lemmas living or live) which require modeling sentence context information.",
"In the case of historical languages, however, the aforementioned spelling variation introduces further complications.",
"For instance, the regularity of the morphological system is drastically reduced since the evidence supporting token-lemma mappings becomes more sparse.",
"As an example, while the modern Dutch lemma jaar (en. year) can be inflected in 2 different ways (jaar, jaren), in a Middle Dutch corpus used in this study it is found in combination with 70 different forms (iare, ior, jaer, etc.).",
"Moreover, spelling variation increases token-lemma ambiguities by conflating surface realizations of otherwise unambiguous tokense.g. Middle Low German bath can refer to lemmas bat (en. bad) and bid-den (en. bet) due to different spellings of the dental occlusive in final position.",
"Spelling variation is not exclusive of historical languages and it can be found in contemporary forms of on communication, such as micro-blogs, with loose orthographic conventions (Crys-tal, 2001).",
"An important difference, however, is that while for modern languages normalization is feasible (Schulz et al., 2016), for many historic languages such is not possible, because one is dealing with an amalgam of regional dialects that lacked any sort of supra-regional variant functioning as target domain (Kestemont et al., 2016).",
"learning to lemmatization of historical languages.",
"Our method shows improvements over a plain encoder-decoder framework, which reportedly achieves state-of-the-art performance on lemmatization and morphological analysis (Bergmanis and Goldwater, 2018; Peters et al., 2018).",
"In particular, we make the following contributions:",
"1. We introduce a simple joint learning approach based on a bidirectional Language Model (LM) loss and achieve relative improvements in overall accuracy of 7.9% over an encoder-decoder trained without joint loss and 30.72% over edit-tree based approaches.",
"2. We provide a detailed analysis of the linguistic and corpus characteristics that explain the amount of improvement we can expect from LM joint training.",
"3. We probe the hidden representations learned with the joint loss and find them significantly better predictors of POS-tags and other morphological categories than the representations of the simple model, confirming the efficiency of the joint loss for feature extraction.",
"Additionally, we test our approach on a typologically varied set of modern standard languages and find that the joint LM loss significantly improves lemmatization accuracy of ambiguous tokens over the encoder-decoder baseline (with a relative increase of 15.1%), but that, in contrast to previous literature (Chakrabarty et al., 2017; Bergmanis and Goldwater, 2018), the overall performance of encoder-decoder models is not significantly higher than that of edit-tree based approaches.",
"Taking into account the type of inflectional morphology dominating in a particular language, we show that the benefit of encoder-decoder approaches is highly dependent on typological morphology.",
"Finally, to assure reproducibility, all corpus preprocessing pipelines and train-dev-test splits are released.",
"With this release, we hope to encourage future work on processing of lesser studied nonstandard varieties.",
"1 1 Datasets and training splits are available at https: //www.github.com/emanjavacas/pie-data .",
"Experiments are conducted with our framework pie available at: https://www.github.com/emanjavacas/ pie .",
"All our experiments are implemented using PyTorch (Paszke et al., 2017).",
"Modern data-driven approaches typically treat lemmatization as a classification task where classes are represented by binary edit-trees induced from the training data.",
"Given a token-lemma pair, its binary edit-tree is induced by computing the prefix and suffix around the longest common subsequence, and recursively building a tree until no common character can be found.",
"Such edit-trees manage to capture a large proportion of the morphological regularity, especially for languages that rely on suffixation for morphological inflection (e.g. Western European languages), for which such methods were primarily designed.",
"Based on edit-tree induction, different lemmatizers have been proposed.",
"For example, Chrupala et al. (2008) use a log-linear model and a set of hand-crafted features to decode a sequence of edit-trees together with the sequence of POS-tags using a beam-search strategy.",
"A related approach is presented by Gesmundo and Samardzi (2012), where edit-trees are extracted using a non-recursive version of the binary edit-tree induction approach.",
"More recently, Cotterell et al. (2015) have used an extended set of features and a second-order CRF to jointly predict POS-tags and edit-trees with state-of-the-art performance.",
"Finally, Chakrabarty et al. (2017) employed a softmax classifier to predict edit-trees based on sentence-level features implicitly learned with a neural encoder over the input sentence.",
"With the advent of current encoder-decoder architectures, lemmatization as a string-transduction task has gained interest partly inspired by the success of such architectures in Neural Machine Translation (NMT).",
"For instance, Bergmanis and Goldwater (2018) apply a state-of-the-art NMT system with the lemma as target and as source the focus token with a fixed window over neighboring tokens.",
"Most similar to our work is the approach by Kondratyuk et al. (2018), which conditions the decoder on sentence-level distributional features extracted from a sentence-level bidirectional RNN and morphological tags.",
"Recently, work on non-standard historical varieties has focused on spelling normalization using rule-based, statistical and neural string-transduction models (Pettersson et al., 2014; Bollmann and Sgaard, 2016; Tang et al., 2018).",
"Previous studies on lemmatization of historical variants focused on evaluating off-the-shelf systems.",
"For instance, Eger et al. (2016) evaluates different pre-existing models on a dataset of German and Medieval Latin, and Dereza (2018) focuses on Early Irish.",
"The most similar to the present paper in this area is work by Kestemont et al. (2016), which tackled lemmatization of Middle Dutch with a neural encoder that extracts character and word-level features from a fixed-length token window and predicts the target lemma from a closed-set of true lemmas.",
"Using Language Modeling as a task to extract features in a Transfer Learning setup has gained momentum only in the last year, partly thanks to overall improvements over previous state-of-the-art across multiple tasks (NER, POS, QA, etc.).",
"Different models have been proposed around the same idea varying in implementation, optimization and task definition.",
"For instance, Howard and Ruder (2018) present a method to fine-tune a pretrained LM for text classification.",
"Peters et al. (2018) learn task-specific weighting schemes over different layer features extracted by a pretrained bidirectional LM.",
"Recently, Akbik et al. (2018) used context-sensitive word-embeddings extracted from a bidirectional character-level LM to improve NER, POS-tagging and chunking.",
"Here we describe our encoder-decoder architecture for lemmatization.",
"In Section 3.1 we start by describing the basic formulation known from the machine translation literature.",
"Section 3.2 shows how sentential context is integrated into the decoding process as an extra source of information.",
"Finally, Section 3.3 describes how we learn richer representations for the encoder through the addition of an extra language modeling task.",
"We employ a character-level Encoder-Decoder architecture that takes an input token x t character-by-character and has as goal the character-level decoding of the target lemma l t conditioned on an intermediate representation of x t .",
"For token x t , a sequence of token character embeddings c x 1 , . . . , c xn is extracted from embedding matrix W enc R | C | d (where | C | and d represent, respectively, the size of the character vocabulary and the embedding dimensionality).",
"These are then passed to a bidirectional RNN encoder, that computes a forward and a backward sequence of hidden states: h enc 1 , . . . , h encn and h enc 1 , . . . , h encn .",
"The final representation of each character i is the concatenation of the forward and the backward states: h enci = [ h enci ; h enci ] .",
"At each decoding step j , a RNN decoder generates the hidden state h decj , given the lemma character embedding c lj from embedding matrix W dec R | L | d , the previous hidden state h decj 1 and additional context.",
"This additional context consists of a summary vector r j obtained via an attentional mechanism (Bahdanau et al., 2014) that takes as input the previous decoder state h decj 1 and the sequence of encoder activations h enc 1 , . . . , h encn .",
"2 Finally, the output logits for character j are computed by a linear projection of the current decoder state h encj with parameters O RH | L | , which are normalized to probabilities with the softmax function.",
"The model is trained to maximize the probability of the target character sequence expressed in Equation 1 using teacher-forcing.",
"Lemmatization of ambiguous tokens can be improved by incorporating sentence-level information.",
"Our architecture is similar to Kondratyuk et al. (2018) in that it incorporates global sentence information by extracting distributional features with a hierarchical bidirectional RNN over the input sequence of tokens x 1 , ..., x m .",
"For each token x t , we first extract word-level features re-using the last hidden state of character-level bidirectional RNN Encoder from Section 3.1 w t = [ h enct ; h enct ] .",
"Optionally, word-level features can be enriched with extra lookup parameters from an embedding matrix W word R | V | e where V and e denote respectively the vocabulary size in words and the word embedding dimensionality.",
"3 Given these word-level features w t , the sentence-level features s t are computed as the concatenation of 2 We refer to Bahdanau et al. (2014) for the description of the attentional mechanism.",
"3 During development word embeddings did not contribute significant improvements on historical languages, and we therefore exclude them from the rest of the experiments.",
"It must be noted, however, that word embeddings might still be helpful for lemmatization of standard languages where the type-token ratio is smaller as well as when pretrained embeddings are available.",
"forward and backward activations of an additional sentence-level bidirectional RNN s t = [ s t ; s t ] .",
"In order to perform sentence-aware lemmatization for token x t , we condition the decoder on the sentence-level encoding s t and optimize the probability given by Equation",
"2. P ( l t | x t ) = m (cid:89) j =1 P ( c lj | c l<j , r j , s t ; enc , dec ) (2) Our architecture ensures that both word-level and character-level features of each input token in a sentence can contribute to the sentence-level features at any given step and therefore to the lemmatization of any other token in the sentence.",
"From this perspective, our architecture is more general than those presented in Kestemont et al. (2016); Bergmanis and Goldwater (2018), where sentence information is included by running the encoder over a predetermined fixed-length window of neighboring characters.",
"Moreover, we let the character-level embedding extractor and the lemmatizer encoder share parameters in order to amplify the training signal coming into the latter.",
"Figure 1 visualizes the proposed architecture.",
"We hypothesize that the training signal from lemmatization alone might not be enough to extract sufficiently high quality sentence-level features.",
"As such we include an additional bidirectional word-level language-model loss over the input sentence.",
"Given the forward and backward subvectors of the sentence encoding s t = [ s t ; s t ] , we train two additional softmax classifiers to predict token x t +1 given s t and x t 1 given s t with parameters O LMfwd and O LMbwd RS | V | .",
"4 We train our model to jointly minimize the negative log-likelihood of the probability defined by Equation 2 and the LM probability defined by Equation",
"3. PLM ( x ) = 1 / 2 n (cid:89) t =2 P ( x t | x 1 , . . . , x t 1 ) + 1 / 2 n 1 (cid:89) t =1 P ( x t | x t +1 , . . . , x n ) (3) 4 We have found the joint loss most effective when both forward and backward classifiers shared parameters.",
"Following a Multi-Task Learning (Caruana, 1997), we set a weight on the LM negative log-likelihood which we decrease over training based on lemmatization accuracy on development data to reduce its influence on training after convergence.",
"Section 4.1 first introduces the datasets, both the newly introduced dataset of historical languages, and the dataset of modern standard languages sampled from Universal Dependencies (v2.2) corpus (Nivre et al., 2016).",
"Finally, Section 4.2 describes model training and settings in detail.",
"Historical Languages In recent years, a number of historical corpora have appeared thanks to an increasing number of digitization initiatives (Piotrowski, 2012).",
"For the present study, we chose a representative collection of medieval and early modern datasets, favoring publicly available data, corpora with previously published results and datasets covering multiple genres and historic periods.",
"We include a total of 8 corpora covering Middle Dutch, Middle Low German, Medieval 0 20000 40000 60000 80000 100000 # Tokens frotrfrithucsheslllaturlvbgfacgrdefieuenarnbgooetesgmlcglcgacrmru L a n g u a g e Total number of Tokens 0.00 0.17 0.34 0.52 0.69 0.86 % Tokens % of Unknown Tokens % of Ambiguous Tokens Figure 2: Statistics of total number of tokens, ambiguous and unknown tokens in the test sets.",
"French, Historical Slovene and Medieval Latin, which we take from the following sources.",
"Both cga and cgl contain medieval Dutch material from the Gysseling corpus curated by the Institute for Dutch Lexicology 5 cga is a charter collection (administrative documents), whereas cgl concerns a variety of literary texts that greatly vary in length.",
"crm is another Middle Dutch charter collection from the 14th century with wide geographic coverage (Van Reenen and Mulder, 1993; van Halteren and Rem, 2013).",
"cgr , finally, is a smaller collection of samples from Middle Dutch religious writings that include later medieval texts (Kestemont et al., 2016).",
"fro offers a corpus of Old French heroic epics, known as chansons de geste (Camps, 2016).",
"llat dataset is taken from the Late Latin Charter Treebank, consisting of early medieval Latin documentary texts (Korki-akangas and Lassila, 2013).",
"goo comes from the 5 https://ivdnt.org/taalmaterialen .",
"reference corpus of historical Slovene, sampled from 89 texts from the period 1584-1899 (Erjavec, 2015).",
"gml refers to the reference corpus of Middle Low German and Low Rhenish texts, found in manuscripts, prints and inscriptions (Barteld et al., 2017).",
"Finally, cap is a corpus of early medieval Latin ordinances decreed by Carolingian rulers (Eger et al., 2016).",
"Standard Languages For a more thorough comparison between systems across domains and a better examination of the effect of the LM loss, we evaluate our systems on a set of 20 standard languages sampled from the UD corpus, trying to guarantee typological diversity while selecting datasets with at least 20k words.",
"We use the pre-defined splits from the original UD corpus (v2.2).",
"6 .",
"Figure 2 visualizes the test set sizes in terms of total, ambiguous and unknown tokens for both historical and standard languages.",
"We refer to the full model trained with joint LM loss by Sent-LM .",
"In order to test the effectiveness of sentence information and the importance of enhancing the quality of the sentence-level feature extraction, we compare against a simple encoder-decoder model without sentence-level information ( Plain ) and a model trained without joint LM loss ( Sent ).",
"Moreover, we compare to previous state-of-the-art lemmatizers based on binary edit-tree induction: Morfette (Chrupala et al., 2008) and Lemming (Cotterell et al., 2015), which we run with default hyperparameters.",
"For all our models, we use the same hyperpa-rameter values as follows.",
"All recurrent layers have 150 cells per layer and use GRUs (Cho et al., 2014).",
"Encoder and Decoder have 2 layers but the sentence encoder has only",
"1. We apply 0.25 dropout (Srivastava et al., 2014) after the embedding layer and before the output layer and 0.25 variational dropout (Gal and Ghahramani, 2016) in between recurrent layers.",
"Models are optimized with Adam (Kingma and Ba, 2015) using an initial learning rate of 1e-3 which is reduced by 25% after each epoch without improvement on development accuracy.",
"Models are trained until fail-6 The full list of languages for both historical and standard corpora as well as the corresponding ISO 639-1 codes used in the present study can be found in the Appendix.",
"In the cases where train-dev-test splits were not pre-defined, we randomly split sentences using 10% and 5% for test and dev respectively.",
"ing to achieve any improvement for 3 consecutive epochs.",
"Initial LM loss weight is set to 0.2 and it is halved each epoch after two consecutive epochs without achieving any improvements on development perplexity.",
"We use sentence boundaries when given and otherwise use POS tags corresponding to full stops as clues.",
"In any case, sentences are split into chunks of maximum 35 words to accommodate to limited memory.",
"Target lemmas during both training and testing are lowercased in agreement with the implementation of Lemming and Morfette , which also do so.",
"For models with joint loss, we truncate the output vocabulary to the top 50k most frequent words for similar reasons.",
"We run a maximum of 100 optimization epochs in randomized batches containing 25 sentences each.",
"The learning rate is decreased by a factor of 0.75, after every 2 epochs without accuracy increase on held-out data and learning stops after failing to improve for 5 epochs.",
"Decoding is done with beam search with a beam size of 10.",
"7 5 Results As is customary, we report exact-match accuracy on target lemmas.",
"Besides overall accuracy, we also compute accuracy of ambiguous tokens (i.e. tokens that map to more than 1 lemma in the training data) and unknown tokens (i.e. tokens that do not appear in the training data).",
"Table 1 shows the aggregated results over all datasets in our historical language corpus.",
"8 In 4 cases ( cga , cgl , crm and gml ), Lemming failed to converge due to memory requirements exceeding 250G RAM due to the large amount of 7 For all languages, we observed relatively small gains ranging from 0.1% to 0.5% in overall accuracy.",
"8 We aggregate both edit-tree based approaches by selecting the best performing model for each corpus.",
"When Lemming converge, the results were better than Morfette .",
"edit-trees.",
"Following Sgaard et al. (2014), we compute p-values with a Wilcoxon's signed rank test.",
"Sent-LM is the best performing model with a relative improvement of 7.9% ( p < . 01 ) over Sent and 30.72% ( p < . 01 ) over the edit-tree approach on full datasets and 10.27% ( p < . 1 ) and 18.66% ( p < . 01 ) on ambiguous tokens.",
"Moreover, the edit-tree approach outperforms encoder-decoder models Plain and Sent on ambiguous tokens, and it is only due to the joint loss that the encoder-decoder paradigm gains an advantage.",
"Finally, for tokens unseen during training, the best performing model is Sent with a relative error reduction of 47% ( p < . 01 ) over the edit-tree approach and 4.77% ( p < . 1 ) over Sent-LM .",
"Table 2 compares scores for a subset from the corpora coming from the Gysseling corpus, which have been used in previous work on lemmatization of historical languages.",
"The model described by Kestemont et al. (2016) is included as K-2016 for comparison.",
"9 It is apparent that both Sent and Sent-LM outperform K-2016 on full and unknown tokens.",
"It is worth noting that K-2016 , a model that uses distributed contextual features but no edit-tree induction, performs better than Plain which highlights the importance of context for the lemmatization of historical languages , and also better than the edit-tree approaches which highlights the difficulty of tree induction on this dataset.",
"We find Sent-LM to have a significant advantage over Sent on full and ambiguous tokens, but a disadvantage vs Sent and Plain on unknown tokens.",
"Table 4 shows overall accuracy scores aggregated across all languages.",
"10 We observe that on average Sent-LM is the best model on full datasets.",
"9 Unfortunately, scores on ambiguous tokens were not reported and therefore cannot be compared.",
"10 Similarly to results on historical languages, we aggregate Morfette and Lemming due to the later failing to converge on et .",
"However, in contrast to previous results, the edit-tree approach has an advantage over all encoder-decoder models for both ambiguous and unknown tokens.",
"Since the differences in performance are not statistically significant ( p > 0 . 05 ), we seek to shed light on the advantages and disadvantages of the encoder-decoder and edit-tree paradigms by conducting a more fine-grained analysis with respect to the morphological typology of the considered languages.",
"To this end, we group languages into morphological types depending on the dominant morphological processes of each language and aggregate scores over languages in each type: Type",
"Type",
"2. Uralic and Altaic languages, which are characterized by agglutinative morphology and a tendency towards monoexponential case and vowel harmony.",
"Table 3 shows accuracy scores per morphological group for each model type.",
"It is apparent that the Edit-tree approach is very effective for Type 3 languages both in ambiguous and unknown tokens.",
"In both Type 1 and Type 2 languages, the best overall performing model is Sent-LM .",
"In the case of ambiguous tokens, Sent-LM achieves highest accuracy for Type 1 languages, but it is surpassed by the Edit-tree approach on Type 2 languages.",
"Finally, in the case of unknown tokens, we observe a similar pattern to the historical languages where Plain and Sent have an advantage over Sent-LM .",
"For clarity, we group the discussion of the main findings according to four major discussion points.",
"How does the joint LM loss help?",
"As Section 5 shows, Sent-LM is the overall best model, and its advantage is biggest on ambiguous datasets, always outperforming the second-best encoder-decoder model on ambiguous tokens.",
"For a more detailed comparison of the two models we tested the following two hypotheses:",
"(i) the joint LM loss helps by providing sentence representations with stronger disambiguation capacities",
"(ii) The joint LM loss helps in cases when the evidence of a token-lemma relationship is sparse e.g in languages with highly synthetic morphological systems and in the presence of spelling variation.",
"As Figure 3 shows, improvement over Sent is correlated with percentage of token-lemma ambiguity in the corpus, providing evidence for hypothesis",
"(i).",
"Finally, as Figure 4 shows, improvement over Sent is correlated with higher token-lemma ratio, suggesting that the improvement is likely to be due to learned representations that better identify the input token.",
"These two aspects help explain the efficiency of the joint learning approach on non-standard languages where high levels of spelling variation provide increased ambiguity by conflating unrelated forms and also lower evidence for token-lemma mappings.",
"Another factor certainly related to the efficiency of the proposed joint LM-loss is the size of the training dataset.",
"However, dataset size should be considered a necessary but not a sufficient condition for the feasibility of the joint LM-loss and has therefore weak explanation power for the performance of the proposed approach.",
"LM loss leads to better representations In order to analyze the representations learned with the joint loss, we turn to representation probing experiments following current approaches on interpretability (Linzen et al., 2016; Adi et al., 2017).",
"Using the same train-dev-test splits from the current study, we exploit additional POS, Number, Gender, Case and syntactic function (Dep) annotations provided in the UD corpora and compare the ability of the representations extracted by Sent and Sent-LM to predict these labels.",
"11 Model parameters are frozen and a linear softmax layer 11 Note that not all tasks are available for all languages, due to some corpora not providing all annotations and some categories not being relevant for particular languages.",
"Q RH V per task is learned using a cross-entropy loss function.",
"12 The results of this experiment are reported in Table 5.",
"The classifier trained with Sent-LM outperforms the one with Sent on all considered labeling tasks, confirming the efficiency of the LM loss at extracting better representations.",
"Edit-tree vs. Encoder-Decoder Our fine-grained analysis suggests that the performance of the edit-tree and encoder-decoder approaches depends on the underlying morphological typology of the studied languages.",
"Neural approaches seem to be stronger for languages with complex case systems and agglutinative morphology.",
"In contrast, edit-tree approaches excel on more synthetic languages (e.g. Type 3) and languages with lower ambiguity (e.g. Type 2).",
"Figure 5 illustrates that as the number of edit-trees increase the encoder-decoder models start to excel.",
"This is most likely due to the fact that, from an edit-tree approach perspective, a large number of trees creates a large number of classes, which leads to higher class imbalance and more sparsity.",
"However, edit-tree based approaches do outperform representation learning methods for languages with lower number of trees, which leads to the intuition that the edit-tree formalism does provide a useful inductive bias to the task of lemmatization and it should not be discarded in future work.",
"Our results, in fact, point to a future direction which applies the edit-tree formalism, but alleviates the edit-tree explosion by exploiting the relationships between the edit-tree classes potentially using representation learning methods.",
"12 Models trained for 50 epochs using the Adam optimizer with default learning rate and training stops after 2 epochs without accuracy increase on dev set.",
"Accuracy on unknown tokens We observe that while overall the joint loss outperforms the simpler encoder-decoder, it seems, however, detrimental to the accuracy on unknown tokens.",
"This discrepancy is probably due to the fact that",
"(i) unknown tokens are likely unambiguous and therefore less likely to profit from improved context representations and to",
"(ii) our design choice of word-level language modeling, where the model is forced to predict UNK for unknown words.",
"As Sent-LM is the overall best model, in future work we will explore character-level language modeling in order to harness the full potential of the joint-training approach even on unknown tokens.",
"We have presented a method to improve lemmatization with encoder-decoder models by improving context representations with a joint bidirectional language modeling loss.",
"Our method sets a new state-of-the-art for lemmatization of historical languages and is competitive on standard languages.",
"Our examination of the learned representations indicates that the LM loss helps enriching sentence representations with features that capture morphological information.",
"In view of a typologically informed comparison of encoder-decoder and edit-tree based approaches, we have shown that the latter can be very effective for highly synthetic languages.",
"Such result might have been overlooked in previous studies due to only considering a reduced number of languages (Chakrabarty et al., 2017) or pooling results across typology (Bergma-nis and Goldwater, 2018).",
"With respect to languages with higher ambiguity and token-lemma ratio, the encoder-decoder approach is preferable and the joint loss generally provides a substantial improvement.",
"Finally, while other models use morphological information to improve the representation of context (e.g. edit-tree approaches), our joint language modeling loss does not rely on any additional annotation, which can be crucial in low resource and non-standard situations where annotation is costly and often not trivial.",
"We thank NVIDIA for donating 1 GPU that was used for the experiments in the present paper.",
"We would also like to thank the anonymous reviewers for their valuable comments."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"method",
"result",
"other",
"result",
"abstain",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"other"
] |
[
"Most works on financial forecasting use information directly associated with individual companies (e.g., stock prices, news on the company) to predict stock returns for trading.",
"We refer to such company-specific information as local information.",
"Stock returns may also be influenced by global information (e.g., news on the economy in general), and inter-company relationships.",
"Capturing such diverse information is challenging due to the low signal-to-noise ratios, different time-scales, sparsity and distributions of global and local information from different modalities.",
"In this paper, we propose a model that captures both global and local multimodal information for investment and risk management-related forecasting tasks.",
"Our proposed Guided Attention Multimodal Multitask Network (GAME) model addresses these challenges by using novel attention modules to guide learning with global and local information from different modalities and dynamic inter-company relationship networks.",
"Our extensive experiments show that GAME outperforms other state-of-the-art models in several forecasting tasks and important real-world application case studies.",
"Forecasting stock prices or returns is an important task in trading.",
"Such forecasts can also be used in investment and risk management applications such as portfolio allocation and risk forecasting.",
"Stock returns in financial markets are influenced by large volumes of textual information from diverse sources, e.g., news, blogs, social media.",
"Such textual information can be directly associated with a specific company ( local ), e.g, a company's CEO stepping down; or relevant to multiple companies ( global ), e.g., disruptions in supply chains due to export curbs in key countries, airline industry bankruptcies.",
"In this paper, articles with company tags are treated as local information.",
"All articles are treated as global information as any article could be potentially relevant to a company.",
"Direct and indirect relationships between companies also serve as channels through which the effects of information from both global and local textual and numerical information propagate and influence stock returns, e.g., a disruption in company A could affect all its suppliers; a scandal involving company A's CEO may affect company B if the CEO is a member of company B's board.",
"We illustrate such diverse information and effects in Figure 1.",
"Apart from low signal-to-noise ratios in financial time-series due to market forces, there are other challenges in modeling such diverse information.",
"Time scales of information from different modalities are of different granularity, e.g., numerical financial information may be available daily, while publication of financial text happens at irregular times.",
"Companies' local financial news are typically sparse and long-tailed, e.g., a company may not be in the news for an extended period of time, but suddenly becomes the focus of many news reports in a short period due to a scandal.",
"Local textual information may also be noisy with regards to its relevance to the company's stock returns, e.g., a news article on a company's HR practices may have little effect on its stock returns, whereas a news article on a sector's outlook can have a significant effect on the company's stock returns even without any mention of the company.",
"More research on financial forecasting is required to address such challenges.",
"Most existing works model financial information of a single modality (Ding et al., 2015; Ziniu et al., 2018; Du and Tanaka-Ishii, 2020; Sawhney et al., 2021b), and do not model the effects of inter-company relationships.",
"Some works (Feng et al., 2019; Xu et al., 2021; Sawhney et al., 2021a) model both unimodal financial information and the effects of intercompany relationships.",
"There are however few 6313 Figure 1:",
"works capturing multimodal financial information and inter-company relationships (Ang and Ee-Peng, 2021; Sawhney et al., 2020b,a).",
"Ang and Ee-Peng (2021) utilizes both numerical and global textual information, as well as inter-company relationships but does not address challenges related to capturing global and local multimodal information.",
"Most works also focus on a single task forecasting stock returns for trading.",
"Another equally important set of forecasting tasks which has many investment and risk management applications involves similar challenges.",
"It involves a multivariate multitask setting, where there is a need to manage the returns and risks of financial portfolios that comprise many stocks (multivariate), and make investment and risk decisions based on multiple forecasts (multitask): forecast stock",
"i) mean returns and",
"ii) risks (volatili-ties) over a future horizon to balance potential returns and risks when making investment decisions, as well as forecast",
"iii) correlations between stocks in portfolios over a future horizon.",
"To address financial data challenges in multitask settings, we propose the Guided Attention Multimodal Multitask Network (GAME) model.",
"Our key idea is to use attention to guide learning between information from different sources and modalities.",
"GAME incorporates several important components:",
"i) guided latent cross-attention learning between modalities of different time-scales and sparsity;",
"ii) graph-guided representation learning based on inter-company relationships with dynamic weights learnt from multimodal information; and",
"iii) guided cross-attention learning between global and local information.",
"GAME is trained on multiple tasks forecasting means, volatilities and correlations over a future horizon, which could be used for portfolio allocation and risk management.",
"While existing works for financial forecasting capture either local or global information, or network information with either global or local information, GAME jointly captures global, local and network information.",
"Compared to existing works that utilize transformers for time-series forecasting (Zerveas et al., 2021), GAME proposes novel cross-attention mechanisms that enable",
"i) more effective modelling of local information of different lengths and granularity from different modalities by first encoding such information to a common latent representation; and",
"ii) the extraction of global information by leveraging such local information.",
"Hence, our key contributions are as follows: To our knowledge, this is the first work to propose a model for capturing global and local information from multiple modalities for multivariate multitask financial forecasting; We propose an attention-based module that encodes multimodal information of different sequence lengths and time granularity to a latent representation space for efficient mutually guided cross-attention learning; We design a graph encoding module that uses inter-company relationships to propagate multimodal information across companies; and dynamically updates relationship weights with learnt importances; We design an attention-based module that uses cross-attention between local and global information to guide learning of relevant global information; 6314 We train the model on multiple forecasting tasks to lower the risk of over-fitting, and demonstrate the effectiveness of GAME on forecasting tasks and real-world applications against state-of-the-art baselines on real-world datasets.",
"As this work involves time-series forecasting and network learning, we review key related works in these areas.",
"Time Series Forecasting .",
"Classical methods (Box and Jenkins, 1990; Bollerslev, 1986) are commonly applied to time-series forecasting.",
"However, they are designed for numerical data but not unstructured financial text.",
"Deep learning models have been increasingly applied to time-series forecasting.",
"They include feed-forward networks (Yoojeong et al., 2019; Ding et al., 2015; Oreshkin et al., 2020), convolutional neural networks (Pan-tiskas et al., 2020; Borovykh et al., 2017; Wan et al., 2019), recurrent neural networks (Flunkert et al., 2020; Qin et al., 2017; Liu et al., 2020), and transformers (Wu et al., 2020; Zerveas et al., 2021).",
"A detailed review of these works can be found in Lim and Zohren (2021); Jiang (2021); Torres et al. (2021).",
"Time-series Transformer ( TST ) (Zerveas et al., 2021) is a recent model based on the transformer encoder architecture designed for numerical inputs.",
"StockEmbed ( SE ) (Du and Tanaka-Ishii, 2020) is designed for global textual features, while Financial News and Tweet Based Time Aware Network ( FAST ) (Sawhney et al., 2021b) is designed for local textual features.",
"To encode sequences of textual features, SE utilizes bidirectional GRUs, while FAST utilizes Time-aware LSTMs (Baytas et al., 2017).",
"These works are designed for information from a single modality, do not model the effects of company-to-company relationships, and do not address the challenges of capturing global and local multimodal information.",
"Network Learning .",
"Graph neural networks (GNN) compose messages based on network features, and propagate them to update the embeddings of nodes and/or edges over multiple neural network layers (Gilmer et al., 2017).",
"In particular, Graph Convolutional Network (GCN) (Kipf and Welling, 2017) aggregates features of neighboring nodes and normalizes aggregated representations by node degrees.",
"Graph Attention Network (GAT) (Velickovic et al., 2018) assigns neighboring nodes with different importance weights during aggregation.",
"Such GNNs are designed for static networks with static node attributes and cannot be directly applied to networks where attributes are evolving time series.",
"A few recent works extend GNNs to prediction tasks on financial time-series data (Ang and Ee-Peng, 2021; Feng et al., 2019; Sawhney et al., 2020b,a, 2021a).",
"Relational Stock Ranking ( RSR ) (Feng et al., 2019) uses LSTM to generate output embeddings for numerical time-series data of companies before feeding the latter to learn company embeddings in a network using a GCN-based model, but does not consider textual information.",
"Knowledge Enriched Company Embedding ( KECE ) (Ang and Ee-Peng, 2021) captures numerical and global textual information and uses a GAT-based model to capture inter-company relationships but does not address the challenges of capturing global and local multimodal information.",
"RSR and KECE also do not learn the dynamic importance of inter-company relationships.",
"GAME represents companies in a network G = ( V, E, X ) , where V represents a set of company nodes, E represents relationships between companies, X represents sequences of multimodal attributes.",
"Given a time step t , we define numerical features X numj ( t ) = [ x numj ( t K ) , ..., x numj ( t 1)] to be the sequence of numerical price-related data associated with company v j over a window of K time steps up to t 1 .",
"Textual news features include local and global textual features, i.e. X txt = { X txt,loc , X txt,glo } .",
"The pre-encoded local news textual features directly associated with a company v j within the same window are denoted as X txt,locj ( t ) = [ x txt,locj, 1 ( t K ) , , x txt,locj,M ( t K ) , x txt,locj,M +1 ( t K + 1) , , x txt,locj,S ( t 1)] .",
"S = M K and assumes M news articles are captured for each company at each time-step.",
"Where there are less than M articles for any company at any given time-step, we add PAD values of zero to the sequence (Devlin et al., 2019).",
"We denote pre-encoded global news features over the window period K as X txt,glo ( t ) = [ x txt,glo ( t K ) , ..., x txt,glo ( t 1)] , with varying number of news articles binned into each time step.",
"both X num ( t ) R | V | K d num and X txt,loc ( t ) R | V | S d txt , where d num , d txt are embedding dimension sizes, to a common latent sequence length L and dimension d by using latent attention-based encoders inspired by Jaegle et al. (2021a), where L K as part of the Guided Latent Cross-Attention Learning step.",
"We introduce guided cross-attention to enable information from one modality to guide the attention learning of another.",
"In the next Dynamic Graph-Guided Attention Learning step, representations of both modalities are used to discover and update importance weights of inter-company relationships before applying dynamic graph convolutions.",
"A latent decoder inspired by Jaegle et al. (2021b) then decodes the numerical and local textual representations to the original sequence lengths K and S .",
"In the Guided Global-Local Attention Learning step, we use the decoded local representations to guide the attention extraction of the sequence of global textual features relevant to each company v j .",
"The resultant representations are then combined and sequentially encoded with a transformer, followed by attention-based temporal and multimodal fusion.",
"Finally, GAME generates forecasts of means, volatilities and correlations of financial returns over a selected future horizon of K time-steps, i.e. the means, volatilities and correlations of Y returns ( t ) = [ y returns ( t ) , ..., y returns ( t + K )] , where y returns ( t ) = ( price ( t ) price ( t 1)) /price ( t 1) and price ( t ) denote the percentage return and stock price at time step t respectively.",
"Guided Latent Cross-Attention Learning.",
"This step addresses the challenge of learning information from modalities of different sequence lengths, degrees of sparsity and distributions, specifically X num ( t ) R | V | K d num and X txt,loc ( t ) R | V | S d txt .",
"For X num ( t ) , we first project the inputs to common dimension d and add a learnt time vector (Kazemi et al., 2019; Godfrey and Gashler, 2018).",
"The time vector is learned from the time-stamps T ( t ) corresponding to the inputs.",
"In this paper, we use day of week, week and month of year for T num ( t ) , and further include seconds of day for T txt,loc ( t ) as these are most relevant to the respective inputs.",
"The time vector P num ( t ) R | V | K d is learned by combining functional forms and learnable weights and could be viewed as a time-sensitive version of positional encodings used in transformers (Vaswani et al., 2017).",
"For GAME, the empirically chosen components used to generate the time vectors are num 1 = sigmoid ( Linear ( T num ( t ))) and num 2 = cos ( Linear ( T num ( t ))) , which enable the model to extract non-linear and seasonality-based temporal patterns.",
"We then concatenate these components and project them: P num ( t ) = Linear ([ num 1 || num 2 ]) .",
"The output of the projection and addition of time vectors step is: H num ( t ) R | V | K d .",
"For the latent encoding step, we introduce latent units L RL d .",
"peat L by | V | times to get R | V | L d , and apply linear layers to generate queries from L , and keys and values from H num ( t ) .",
"That is, Q num ( t ) = Linear Q ( L ) , K num ( t ) = Linear K ( H num ( t )) , V num ( t ) = Linear V ( H num ( t )) .",
"We then apply scaled dot-product attention H num ( t ) = softmax ( Q num ( t ) K num ( t ) T ) V num ( t ) / d .",
"To elaborate, the dot-product between Q num ( t ) R | V | L d and K num ( t ) R | V | K d gives us attention weights of dimensions | V | L K .",
"We use these attention weights to map V num ( t ) R | V | K d to H num ( t ) R | V | L d .",
"The same set of steps is repeated for X txt,loc ( t ) to obtain H txt,loc ( t ) .",
"Hence, after the latent encoding step, both H num ( t ) and H txt,loc ( t ) have the same sequence length L and dimension d , i.e. H num ( t ) , H txt,loc ( t ) R | V | L d , and share a common latent space due to the common L .",
"In the next guided cross-attention step, information from each of the modalities guide attention learning of the other.",
"Sharing a common latent space facilitates mutually guided learning between the modalities and is more efficient as L K S .",
"For this step, we generate queries, keys, and values from the numerical and local text representations: Q num ( t ) = Linear Q ( H num ( t )) , K num ( t ) = Linear K ( H num ( t )) , V num ( t ) = Linear V ( H num ( t )) , Q txt,loc ( t ) = Linear Q ( H txt,loc ( t )) , K txt,loc ( t ) = Linear K ( H txt,loc ( t )) , V txt,loc ( t ) = Linear V ( H txt,loc ( t )) .",
"Queries of one modality are used to guide the learning of the other modality as follows: H num txt ( t ) = softmax ( Q txt,loc ( t ) K num ( t ) T d ) V num ( t ) (1) H txt num ( t ) = softmax ( Q num ( t ) K txt,loc ( t ) T d ) V txt,loc ( t ) (2) Dynamic Graph-Guided Attention Learning.",
"We then utilize inter-company relationships E to guide learning.",
"While these relationships do not frequently change (e.g., common sector relationships), their importances vary across time.",
"Hence, we discover dynamic relationship weights with the dynamic attention-based edge weights discovery (DW) module.",
"We concatenate and project H num txt ( t ) and H txt num ( t ) with a linear layer to obtain: H ( t ) = Linear [ H num txt ( t ) || H txt num ( t )] .",
"We then generate: QDW ( t ) = Linear Q DW ( H ( t )) ; KDW ( t ) = Linear K DW ( H ( t )) .",
"To learn the importance of inter-company relationships in a dynamic manner, we compute attention weights: W att ( t ) = tanh ( QDW ( t ) WDW KDW ( t ) T / d ) (3) where WDW RL d d .",
"As we carry out this operation in the latent space with dimension L , W att ( t ) R | V || V | L .",
"We then repeat the adjacency matrix corresponding to the intercompany relationships E by L times to get A ( t ) R | V || V | L and compute the Hadamard product between A ( t ) and W att ( t ) : A ( t ) = A ( t ) W att ( t ) .",
"This results in the weighted adjacency tensor A ( t ) R | V || V | L with A ij ( t ) RL representing the weighted relational edges between asset i and j across latent dimension L .",
"Next, in the dynamic network convolution step, we utilize the encoded company representations H ( t ) and the weighted adjacency tensor A ( t ) as inputs to a weighted dynamic graph convolution step to encode network representations of companies.",
"For company v i , we compute its network representations Z i ( t ) RL d across L dimension by aggregating representations from its neighbors N ( i, t ) based on A i,j ( t ) , j V : Z i ( t ) = (cid:88) j N ( i,t ) exp ( A ij ( t )) (cid:80) j N ( i,t ) exp ( A ij ( t )) H j ( t ) (4) Across all assets, we obtain Z ( t ) R | V | L d .",
"We adopt this approach instead of other GNNs for computational efficiency as it allows us to apply graph convolution across multiple dimensions in parallel.",
"Guided Global-Local Attention Learning.",
"We then apply latent decoding to decode the representation Z ( t ) from the latent dimension L to the original sequence length K and S for the numerical and local text modalities respectively.",
"To decode the numerical information, the numerical representations after the projection and addition of time vectors H num ( t ) are used as queries to decode the keys and values of the representation Z ( t ) .",
"We generate: Q numdec ( t ) = Linear Q ( H num ( t )) , K numdec ( t ) = Linear K ( Z ( t )) , V numdec ( t ) = Linear V ( Z ( t )) , and apply scaled dot-product attention: Z num ( t ) = softmax ( Q numdec ( t ) K numdec ( t ) T d ) V numdec ( t ) (5) To elaborate, the dot-product between Q numdec ( t ) R | V | K d and K numdec ( t ) R | V | L d gives us attention weights of dimensions | V | K L .",
"We then use these attention 6317 weights to map V numdec ( t ) R | V | L d to Z num ( t ) R | V | K d .",
"Similarly, to decode the local textual representation, the queries of the local textual representations after the projection and addition of time vectors H txt,loc ( t ) are used to decode the keys and values of Z ( t ) .",
"We generate: Q txt,locdec ( t ) = Linear Q ( H txt,loc ( t )) , K txt,locdec ( t ) = Linear K ( Z ( t )) , V txt,locdec ( t ) = Linear V ( Z ( t )) , and again apply scaled dot-product attention: Z txt,loc ( t ) = softmax ( Q txt,locdec ( t ) K txt,locdec ( t ) T d ) V txt,locdec ( t ) (6) resulting in Z txt,loc ( t ) R | V | S d .",
"The global-local guided cross-attention step uses the decoded Z num ( t ) to guide the learning of global textual features relevant to each company v j from X txt,glo ( t ) .",
"relationships 3,255 1,511 6,436 4,986 of representations: Z num ( t ) = (cid:80) Kk =1 ( t k ) Z num ( t k ) , where Z num ( t ) R | V | d .",
"This temporal attention fusion step is repeated across K time-steps for Z txt,glo ( t ) to obtain Z txt,glo ( t ) R | V | d and across S time-steps for Z txt,loc ( t ) to obtain Z txt,loc ( t ) R | V | d .",
"The representations from the three modalities are then fused with multimodal attention fusion.",
"We denote each of the modalities as r , for a total of R = 3 modalities for the numerical, local textual and global textual modalities respectively.",
"A non-linear transformation is applied to the representations to obtain scalars s ( r ) = W (1) tanh ( W (0) Z r ( t ) + b ) , where W (0) and W (1) are learnable weight matrices and b is the bias vector.",
"Parameters are shared across modalities.",
"We normalize the scalars with a softmax function to obtain the weights: r = exp ( s ( r )) (cid:80) Rr =1 exp ( s ( r )) , which are used to fuse representations across the three modalities: Z ( t ) = (cid:80) Rr =1 r Z r ( t ) , where Z ( t ) R | V | d .",
"We utilize Z num ( t ) instead of Z txt,loc ( t ) as we extract global textual features for each time-step t k in window K rather than S .",
"Z num ( t ) also contains information relating to Z txt,loc ( t ) due to the prior guided latent cross-attention learning step.",
"For each time step t k in window K , we generate Q txt,glo ( t k ) = Linear Q ( Z num ( t k )) , K txt,glo ( t k ) = Linear K ( X txt,glo ( t k )) , V txt,glo ( t k ) = Linear V ( X txt,glo ( t k )) .",
"We apply scaled dot-product attention: Z txt,glo ( t k ) = softmax ( K txt,glo ( t k ) W txt,glo Q txt,glo ( t k ) T / d ) T V txt,glo ( t k ) where W txt,glo R d d is an inner weight shared across all time steps t k to improve attention extraction of global textual information.",
"Across the window period, we get Z txt,glo ( t ) R | V | K d .",
"Sequential Encoding and Fusion.",
"Transformer encoders (Vaswani et al., 2017) are then used to encode the resultant sequence of representations: Z num ( t ) = TransformerEnc ( Z num ( t )) , Z txt,loc ( t ) = TransformerEnc ( Z txt,loc ( t )) , and Z txt,glo ( t ) = TransformerEnc ( Z txt,glo ( t )) .",
"The transformer encoded sequence of representations are combined with temporal attention fusion, which weights contributions of each time step t k based on its importance.",
"A non-linear transformation is applied to the respective representations, say Z num ( t k ) , to obtain scalar ( t k ) for each time step t k in the window of t : ( t k ) = W (1) tanh ( W (0) Z num ( t k ) + b ) , where W (0) and W (1) are learnable weight matrices and b is the bias vector.",
"We normalize each ( t k ) to obtain the weights: ( t k ) = exp ( ( t k )) (cid:80) Kk =1 exp ( ( t k )) .",
"We then fuse the sequence Table 1: Overview of datasets IN-NY IN-NA BE-NY BE-NA No.",
"Forecasting and Loss Functions.",
"We use fully connected layers to generate forecasts of means and volatilities of stock returns over the selected horizon period [ t, t + K ] : Y returnsmean ( t ) = F CM ( Z ( t )) ; and Y returnsvol ( t ) = F CV ( Z ( t )) .",
"To forecast correlations of asset returns over the horizon period [ t, t + K ] , we use weights from linear layers in DW: Q corr ( t ) = Linear Q DW ( Z ( t )) ; K corr ( t ) = Linear K DW ( Z ( t )) .",
"This allows what was learnt in the DW step to be utilized here: Y returnscorr ( t ) = F CC ( tanh ( Q corr ( t ) K corr ( t ) T d )) .",
"We then compute losses between the forecasts above and respective ground-truths, i.e. actual means, volatilities and correlations over the horizon [ t, t + K ] (see Appendix A.2 for ground-truth definitions) with root mean squared loss (RMSE), and use total losses as the training objective: L total = L mean ( Y returnsmean ( t ) , Y returnsmean ( t )) + L vol ( Y returns vol ( t ) , Y returns vol ( t )) + L corr ( Y returnscorr ( t ) , Y returnscorr ( t )) (7) We do not weight the losses differently as we want the model to perform equally well on all three tasks.",
"Datasets.",
"We conduct experiments with four datasets, comprising global and local textual information of news articles from financial news portals Investing news ( IN ) and Benzinga news ( BE ); and numerical information of daily stock market price-related information of two stock markets NYSE ( NY ) and NASDAQ ( NA ) from 2015 to 2019.",
"The coverage of these datasets across five years, more than 1.5m articles and 2,000 companies is more extensive than most existing works and provides strong assurance to our experiment findings.",
"Following Ang and Ee-Peng (2021), we utilize relationships between companies extracted from Wikidata knowledge graphs for the inter-company relationships E from Wikidata dumps dated 7 Jan. 2019.",
"Companies such as Google, Apple and Microsoft are present within the Wikidata KG as entities, and relationships between them, e.g., Alphabet as a parent company of Google (first-order), both Apple and Microsoft are producing computer hardware (second-order), can be extracted from Wikidata.",
"We use a pre-trained Wikipedia2Vec (Yamada et al., 2020) model to pre-encode textual news to capture the rich knowledge present within the Wikipedia knowledge base (see Table 1 and Appendix A.1 for more details on datasets).",
"Tasks and Metrics.",
"We compare GAME with state-of-the-art baselines on three predictive tasks: forecasting of",
"i) means,",
"ii) volatilities, and",
"iii) correlations of stock price percentage returns .",
"We use RMSE, mean absolute error (MAE) and symmetric mean absolute percentage error (SMAPE) as metrics.",
"RMSE and MAE are common scale-dependent metrics used to evaluate forecasting performance with RMSE being more sensitive to outliers than MAE.",
"SMAPE is a scale-independent metric that gives equal importance to underand over-forecasts required in our evaluation context (see Appendix A.3 for more details on SMAPE).",
"Datasets are divided into nonoverlapping training/validation/test sets in the ratios 0.7/0.15/0.15 for experiments.",
"Baselines and Settings.",
"We compare GAME against state-of-the-art baselines (see Section 2): TST (Zerveas et al., 2021) that captures numerical information; SE (Du and Tanaka-Ishii, 2020) that captures global textual information; FAST (Sawhney et al., 2021b) that captures local textual information; RSR (Feng et al., 2019) that captures numerical information and inter-company relationships; and KECE (Ang and Ee-Peng, 2021) that captures numerical, global textual information and inter-company relationships.",
"We add fully-connected layers to baselines for them to forecast means, volatilities and correlations of percentage stock returns.",
"We set the window period K = 20 days; and horizon period K = 10 .",
"K = 20 corresponds to a trading month, and K = 10 days corresponds to a global regulatory requirement for VaR computations, which we examine in the case-study (in Section 6).",
"Following Sawhney et al. (2021b), 6319 we set M for local news text sequences to be 10.",
"We empirically set L to 16.",
"Dimensions of hidden representations are fixed at 100 across all models.",
"Models are implemented in Pytorch and trained for 100 epochs on a 3.60GHz AMD Ryzen 7 Windows desktop with NVIDIA RTX 3090 GPU and 64GB RAM.",
"Training GAME, which has 1.01e6 parameters, takes around two hours on the IN datasets and nine hours on the BE datasets (see Appendix A.4 for more details on settings).",
"Results.",
"Table 2 sets out the results of the forecasting experiments.",
"Across all tasks, GAME outperforms all baselines.",
"On the task of forecasting means , dispersion in model performances for IN datasets is more narrow than for BE datasets.",
"On the tasks of forecasting volatilities and forecasting correlations , baseline models (RSR, KECE) that perform better for BE datasets utilize textual and relational information.",
"Performance differences between GAME and baselines are more significant for the larger BE datasets than for the IN datasets due to the larger volume of news textual information.",
"Differences in performances between GAME and baselines are more pronounced for volatilities and correlations forecasting than means forecasting as these are harder tasks that require the model to capture global and local news effects and the propagation of news effects between companies, which are key features of the GAME model.",
"Table 3 shows the results of ablation studies for GAME on IN-NY.",
"We observe similar sensitivities for other datasets.",
"When we exclude the guided co-attention module ( w/o. guided co-attn. ), the drop in performance is more significant for volatility and correlation forecasting tasks, while performance decline is more significant for the correlation forecasting task when we exclude the dynamic graph-guided attention module ( w/o. graph-guided enc. ).",
"When we vary the multi-task aspect of GAME by training on mean, volatility or correlation forecast losses only (i.e. w. mean loss only, w. vol. loss only, w. corr. loss only ), we see significant drops in performance, even on tasks that correspond to the training loss, e.g., performance of mean forecasts when we train only on mean loss is poorer than when we train GAME with multiple tasks.",
"We use model forecasts for investment and risk",
"management applications to evaluate the quality of forecasts.",
"Portfolio allocation optimizes the proportion of capital invested in each stock in a portfolio by finding an optimal set of investment weights W that maximize portfolio returns while minimizing portfolio risk.",
"We use model forecasts as optimization inputs to find W that maximizes risk-adjusted returns in a future horizon.",
"Value-at-Risk (VaR) (Linsmeier and Pearson, 2000) is a key measure of risk used in financial institutions that measures potential losses in a pre-defined horizon with a probability of p % , e.g., 10 day 95% VaR of $1m means a 5% probability of losses exceeding $1m over a 10 day horizon.",
"When realized losses exceed forecasted VaR, we call it a VaR breach.",
"We use model forecasts to compute 10 day 95% portfolio VaR forecasts, and evaluate model performances by the total number of VaR breaches.",
"Details on computation methodologies are provided in Appendix A.5.",
"Table 4 depicts results for the IN-NY/IN-NA datasets.",
"For portfolio allocation , portfolios constructed using GAME's forecasts achieve highest average risk-adjusted returns.",
"For VaR , GAME out-performs baselines with significantly less VaR breaches.",
"Baselines utilizing textual information or inter-company relationships (SE, FAST, RSR and KECE) generally perform better.",
"In this paper, we designed GAME, a model that captures global and local multimodal information with modules that",
"i) enable mutual guidance between modalities with different time-scales, sparsity and distributions;",
"ii) propagation of multimodal information between companies via real-world relationships with dynamic weights to guide learning;",
"iii) guided attention learning between global and local information to extract relevant global information; and",
"iv) was trained in a multivariate multitask setting.",
"The model performs strongly on three forecasting tasks and two real-world applications, demonstrating the value of guided attention learning for global and local multimodal information.",
"The datasets used are more extensive than most similar works and provide strong assurance on the validity of the results across different companies and textual information.",
"Future work could extend GAME to capture information from other modalities (e.g., audio, visual), textual sources (e.g., Twitter, Reddit), and inter-company relationships (e.g., DBPedia, GDELT).",
"In relation to the societal im-pact of this work, we see opportunities for GAME to support better investment and risk management decisions, and also benefit a range of real-world applications, such as investment portfolio allocation and risk management, as we demonstrated in our paper.",
"We should however recognize that models such as GAME generate forecasts based on past historical patterns that may not always hold in the future, particularly for non-stationary financial time-series.",
"Hence, model risk management, e.g., monitoring significant changes in input information and model performance, is particularly important to avoid negative impacts, such as investment losses.",
"This research is supported by the National Research Foundation, Singapore under its Strategic Capabilities Research Centres Funding Initiative.",
"Gary Ang is supported by a Monetary Authority of Singapore Postgraduate Scholarship.",
"Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore, nor the Monetary Authority of Singapore."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"We introduce a data-driven approach to generating derivation trees from meaning representation graphs with probabilistic synchronous hyperedge replacement grammar (PSHRG).",
"SHRG has been used to produce meaning representation graphs from texts and syntax trees, but little is known about its viability on the reverse.",
"In particular, we experiment on Dependency Minimal Recursion Semantics (DMRS) and adapt PSHRG as a formalism that approximates the semantic composition of DMRS graphs and simultaneously recovers the derivations that license the DMRS graphs.",
"Consistent results are obtained as evaluated on a collection of annotated corpora.",
"This work reveals the ability of PSHRG in formalizing a syntax semantics interface, modelling compositional graph-to-tree translations, and channelling explainability to surface realization.",
"General graph-based meaning representations (MRs) that model sentence-level semantics aim to provide interpretable intermediate representations that are applicationand domain-independent (Koller et al., 2019).",
"Recently, graph grammars and algebras that formalize semantic constructions were introduced to MR processing (for example: (Koller, 2015; Drewes and Jonsson, 2017; Groschwitz et al., 2017; Chen et al., 2018; Groschwitz et al., 2018; Lindemann et al., 2019; Donatelli et al., 2019; Chen and Sun, 2020)).",
"These formal grammars bridge between linguistic assumptions and data-driven parsing, and offer the benefit of cross-framework adaptability.",
"For instance, the ApplyModify algebra was adopted in parsing across 5 MR frameworks (Oepen et al., 2019) (Lindemann et al., 2019; Donatelli et al., 2019).",
"Another formalism that was adopted in generating semantic graphs from syntax trees is synchronous hyperedge replacement grammar (SHRG) (Peng et al., 2015; Chen et al., 2018; Chen and Sun, 2020).",
"The use of SHRG in recovering syntax trees from MRs has however received scant research coverage.",
"Empirical results of PSHRG's application are limited to Jones et al. (2012)'s work in semantic-based machine translation.",
"An immediate application of MR-to-tree parsing is surface realization.",
"Previous data-driven approaches to it include rule-based (Flanigan et al., 2016; Song et al., 2017; Horvat, 2017; Ye et al., 2018) and neural methods (Song et al., 2018; Damonte and Cohen, 2019; Hajdik et al., 2019).",
"All these methods do not generate syntactic analyses.",
"In contrast, the Answer Constraint Engine (ACE; Carroll et al., 1999; Carroll and Oepen, 2005; Velldal and Oepen, 2006), an HPSG grammar-based parser, generates both derivations and sentences from Minimal Recursion Semantics (MRS; Copestake et al., 2005).",
"If we can induce an SHRG from data, MRs can be translated into derivation trees without relying on a hand-engineered grammar, and natural language texts can be obtained by realizing the terminals.",
"Combining the strengths of rule-based systems and the data-driven paradigm, such an approach gives both linguistically-informed realization processes and explainable results, removes syntactic ambiguities that would otherwise exist in flattened surface strings, and provides potential usage for downstream tasks such as chunking.",
"Among the different MR frameworks, we investigate Dependency Minimal Recursion Semantics (DMRS; Copestake, 2009).",
"DMRS are directed graphs derived losslessly from MRS, whereas an MRS structure with respect to a reading of an English sentence is composed along with a derivation tree using the English Resource Grammar (ERG; Flickinger, 2000, 2011), a broad-coverage hand-engineered HPSG grammar of English.",
"DMRS encodes logical formulae with underspecified scopes (for an introduction, see: Copestake, 2009).",
"Fig. 1 shows an example of an ERG analysis.",
"mantic algebra on the constructions of MRS, where the introduced constraints allow the semantics of complex sentences to be derived from simple composition rules.",
"The compositionality exhibited in the MRS (for a discussion, see: Bender et al., 2015) is not obvious in some other MRs, e.g., Abstract Meaning Representation (AMR; Banarescu et al., 2013) (Bender et al., 2015), and we suggest that HRG be particularly suitable to simulate the DMRS algebra if equipped with adequate adaptations.",
"In this paper, we shed light on the applicability of a succinct grammar formalism in approximating the semantic compositions of a graph-based MR. Essentially, we capture the syntaxsemantics interface of the ERG by inducing a PSHRG from an annotated treebank to provide generative models for both DMRS graphs and derivation trees.",
"With the induced PSHRG, derivation trees that license the DMRS graphs can be reconstructed through graph parsing.",
"We describe the procedures and the relevant adaptations involved, from PSHRG induction, parsing to generating derivation trees.",
"Finally, we present the empirical results on derivation trees and surface strings reconstruction from DMRS graphs under different configurations.",
"Hyperedge replacement grammar (HRG) is a context-free rewriting formalism for generating graphs (Drewes et al., 1997).",
"A synchronous HRG (SHRG) defines mappings between languages of graphs, which in our context are an HRG and a context-free grammar (CFG) for strings.",
"We give the definitions in this section.",
"Hypergraph and Hypergraph Fragments.",
"A hypergraph is specified by a tuple = , , , where is a finite set of nodes, + is a finite set of hyperedges, each of which connects one or more distinct nodes, : assigns a label from the finite set to each hyperedge.",
"A hypergraph fragment is a tuple = , , , , where , , is a hypergraph and + is an ordered list of distinct nodes called the external nodes.",
"HRG.",
"An HRG is a tuple = , , , , where and are disjoint finite sets of nonterminal and terminal symbols respectively, is a 5426 finite set of productions of the form , where and is a hypergraph fragment where hyperedge labels are over , and is the start symbol.",
"In a step of rewriting a hyperedge by a production , is replaced with a copy of by identifying each of the nodes connected by with a distinct external node of , whose mapping is specified in the production.",
"Fig. 2 illustrates the HRG rewriting process.",
"Synchronous HRG.",
"An SHRG is a tuple = , , , , , where is a finite set of nonterminal symbols in both the CFG and the HRG, and are finite sets of terminal symbols in HRG and CFG respectively, is the start symbol, and is a finite set of productions of the form , , , where is a hypergraph fragment with hyperedge labels over , is a symbol sequence over , and is a bijection between the nonterminal hyperedges of the same labels in and .",
"When applying a production , , , rewrites the graph as described in the HRG and the synchronous operation on the CFG counterpart is a string rewrite by (see Fig. 3).",
"Probabilistic SHRG.",
"A probabilistic SHRG is obtained by assigning a constant probability to each production in the SHRG, where probabilities of the productions that rewrite the same nonterminal add up to one.",
"In this work, the probability of , , is simply modelled as the fraction of times it appears among all in the training data.",
"The probability of a derivation is the product of the probabilities of the context-free SHRG productions applied.",
"In this section, we describe how a PSHRG is induced from training data and how a derivation tree is reconstructed from a DMRS of the test data with the induced grammar.",
"We also describe two methods for modelling the semantics of lexical items.",
"DMRS as Hypergraph.",
"We first establish the connections between DMRS and HRG.",
"A DMRS graph is modelled as a hypergraph with terminal hyperedges.",
"In , each terminal hyperedge corresponds to a DMRS node or edge: the former can connect to an arbitrary number of nodes in , and the latter connects to only two nodes in .",
"We describe how to induce a PSHRG from pairs of aligned trees and graphs.",
"The PSHRG induction procedure generally follows Chen et al. (2018)'s SHRG extraction algorithm, which operates based on the surface string alignment information between DMRS graphs and their derivation trees (for expositions, see: Chen et al., 2018).",
"Fig. 3 illustrates the grammar induction process.",
"For each nonterminal in the tree, i.e., a node labelled by an ERG syntactic construction (a label of the form *_c , e.g., hd-cmp_u_c ), a production , , is extracted, where , and are the ERG syntactic construction, connected DMRS hypergraph fragment and daughter ERG rule(s) respectively.",
"For a binary ERG construction, if any of its daughters is a semantically empty terminal (e.g., for the lower hd-cmp_u_c in Fig. 3), no productions are extracted (discussed in 3.3.3).",
"The DMRS hypergraph fragment specified by is then rewritten to a 5427 nonterminal hyperedge labelled by .",
"We perform the same rule extraction procedure for each training instance to induce an SHRG from the training data, thus a PSHRG when we consider the frequency information.",
"To recover the derivation tree of a DMRS, we can translate the semantic compositions to the corresponding syntactic operations by parsing the DMRS with the induced PSHRG (see Fig. 3).",
"We aim at recognizing the best derivation according to a PSHRG model, which requires exact graph parsing.",
"Chiang et al. (2013); Groschwitz et al. (2015); Ye and Sun (2020) studied HRG parsing and proposed various techniques on improving the efficiency, but no evaluation on accuracy is performed on the parsed results with respect to a gold-standard grammar.",
"Although efficient algorithms are developed for HRG parsing, existing parsers do not provide convenient adaptations to the extensions introduced in this work.",
"Rather than efficiency, our work focuses on the correctness of derivations as measured against the original derivations that license each DMRS.",
"Therefore, we implement a parser that returns the best PSHRG derivation of DMRS via bottomup passive chart parsing (for details of the parsing algorithms, see Appendix A).",
"We introduce two adaptations to align PSHRG with the semantics introduced by the ERG lexical items.",
"Complex semantics.",
"While most lexical items introduce just one DMRS predicate each, some introduce more complex semantics.",
"As an example, Fig. 4 shows that somebody provides both the _some_q quantifier and the person predicate.",
"Therefore, the R.H.S. hypergraph fragment of an SHRG production is not confined to a hypergraph fragment with one or two terminal hyperedges, but one with more than two terminal hyperedges that corresponds to a connected DMRS subgraph.",
"Empty semantics.",
"There are semantically empty lexical items that do not contribute predicates to the DMRS, e.g., auxiliary verbs and particles.",
"This poses another challenge for derivation reconstruction because the syntactic properties of these lexical items are highly language-dependent, yet they are not captured by general semantic representations as they are not semantically functional.",
"To recognize complex semantics, we borrow the idea of the graph canonization method described by Horvat (2017) that isomorphic DMRS subgraphs can be identified by comparing if their canonical representations are the same.",
"Graph canonization is achieved in two steps: first, each node in the DMRS subgraph is given a canonical node representation by encoding its 1and 2-hop neighbours; then the final canonical form is obtained by concatenating the sorted node representations based on a canonical ordering.",
"Most subgraphs introduced contain fewer than seven DMRS nodes, for which the canonization method is sound (Horvat, 2017).",
"Fig. 4 exemplifies the idea.",
"During grammar induction, the canonical forms of all small subgraphs that correspond to ERG lexical items are extracted from the training data.",
"Then, given a DMRS from the test data, we first identify its subgraphs that are isomorphic to any of the extracted ones before parsing.",
"This is achieved by first enumerating all small subgraphs from the DMRS, then computing the canonical form for each of them, and finally comparing the canonical forms with those of the subgraphs extracted.",
"The process can be sped up by computing the canonical form of a subgraph only if its collection of DMRS predicates is present in the set of those extracted from the training data.",
"We devise a semi-automatic method to extract (dur-ing grammar induction) and recover (during parsing) the syntax of common semantically empty words.",
"To this set of words with empty seman-5428 tics, we define a collection of linguistic signals that can serve as their cues, and match each signal to the set of lexical items it can recover.",
"For example, the tense and aspects of verbs and predicative adjectives are signals for auxiliary verbs.",
"During SHRG induction, the signals are first identified from the DMRS node attributes and are then generally passed up from the syntactic head daughter.",
"When extracting a binary construction, if the unextracted subtree (described in 3.1) contains a semantically empty lexical item that matches a signal of any of the daughters, a hypergraph fragment is extracted together with the subtree associated with that signal.",
"During parsing, the same bottomup signal passing procedures apply.",
"If the R.H.S. hypergraph fragment of a binary production is recognized and any signal passed up matches that of an extracted subtree, the HRG production is applied and the CFG subtree is added on top of the two daughters in the derivation tree (see Fig. 3).",
"To approximate complex grammars or implicit relations established between trees and graphs by PSHRG, we introduce extensions for improving on both the precision and generalizability of modelling.",
"The application of most of the proposed techniques is not limited to the DMRS, but to general MR parsing by PSHRG.",
"Three techniques are introduced to impose restrictions to semantic compositions, allow probability to be estimated on more fine-grained SHRG productions, and prevent overgeneration.",
"Typed HRG.",
"Chen and Sun (2020) introduced typed HRG, where each node of a hypergraph (frag-ment) and hyperedge is assigned a label chosen from a finite set.",
"In typed HRG, an R.H.S. hypergraph fragment is recognized only if the type of its every node matches that of the corresponding node in the input graph.",
"In our case, we propose to type a node by the major sense tag of the corresponding DMRS predicate.",
"For example, in Fig. 5, the nodes on the R.H.S. correspond to _want_v_1 and _go_v_1 respectively, so both are typed v .",
"An HPSG derivation tree merely records the recipe of a derivation, where the non-atomic rule symbols hd-optcmp_c ^hd-cmp_u_c [VP] hd-optcmp_c ^hd-cmp_u_c [VP] _*_v_* ARG2/H H3' _*_v_* to_c_prop C3' hd-cmp_u_c ^sb-hd_mc_c [S] 2 1 hd-cmp_u_c ^sb-hd_mc_c [S] v 1 2 v v v Figure 5: P3 in Fig. 3 with annotated syntactic constructions, typed nodes and a delexicalized DMRS predicate.",
"only convey highly generalized linguistic principles.",
"Each rule represents a unification of typed feature structures.",
"Inspired by Zhang and Krieger (2011), we annotate each syntactic construction with that of their immediate parent (order-2 vertical markovization) and the syntactic category of the phrase (see Fig. 5).",
"This adds extra contextual information to the constituents.",
"We further normalize the derivations by substituting chains of unary lexical rules, affixation rules for punctuation marks, and the preterminals by the canonical form of a DMRS node or subgraph (see Fig. 4).",
"Framework-Specific Constraints.",
"PSHRG Parsing on DMRS without regard for the MRS semantic algebra leads to inefficiency and overgeneration.",
"In particular, the features INDEX and LTOP in MRS specify the semantic materials of a phrase that are accessible during composition.",
"In the ERG, their values are determined by the type of composition.",
"1 Hence, when parsing a DMRS, every composition should ensure that subsequent compositions can only happen to the two variables of the newly composed item.",
"This procedure resembles Carroll and Oepen (2005)'s proposal on index accessibility filtering.",
"The checks can be easily incorporated into SHRG parsing (for the details, see Appendix B).",
"Nevertheless, INDEX and LTOP are not the only features that permit compositions in the ERG.",
"Therefore, the constraints introduced prevent overgener-ation to a large extent but lead to undergeneration.",
"In this work, we examine the two most prominent features, and how we should further integrate the MRS algebra to HRG is an open question.",
"Two underspecification methods are developed to alleviate the rule sparsity and out-of-vocabulary (OOV) problems, which are the main challenges faced by general rule-based systems.",
"1 The precise algebras of the two features in the ERG are not discussed fully here.",
"In brief, the INDEX and the LTOP usually come from the syntactic head, but come from a scopal modifier if the semantic composition is scopal.",
"For expositions, see: Copestake et al. (2001); Copestake (2009).",
"Extended HRG Productions.",
"When recognizing an HRG production on a hypergraph , we suggest that the external nodes of be not distinguished from the non-external nodes.",
"Consequently, the rewriting hyperedge connects to a variable number of nodes, and such number depends on .",
"To motivate such decision, consider H1 of Fig.",
"3. The fact that the hyperedge _boy_n_1 connects to an external node is not significant to the characterization of the sp-hd_n_c (the specifier head construction where the specifier is the semantic head).",
"This effectively creates more SHRG productions that share the same probabilities if their R.H.S. hypergraphs (minus external nodes) are identical.",
"Delexicalization.",
"The PSHRG models are estimated based on delexicalized productions: the lexeme stems of verbs, adjectives, adverbs, and nouns (whose DMRS predicates are in the form of _*_v_* ', _*_a_* ' and _*_n_* ' respectively) are underspecified (see Fig. 5).",
"This is a significant distinction between our grammar and the ERG since the ERG is highly lexicalized.",
"2 Hence, delexicalization trades lexical preciseness for OOV coverage.",
"Furthermore, since an approximation grammar is assumed to have no access to the lexical information of the underlying grammar, the results of our experiments would reflect the viability of PSHRG as a general approach to grammar approximation.",
"The main objective of the experiments is to assess the performance of PSHRG models on simulating DMRS compositions and producing approximating derivation trees.",
"Specifically, we reconstruct a derivation tree for each DMRS whose nonterminals are aligned to DMRS subgraphs and labelled by an ERG syntactic construction; the ERG 1214 contains more than 210 fine-grained syntactic constructions that reflect the distinguishing properties of different syntactic constructions.",
"As a secondary evaluation, we analyze our performance on the task of surface realization.",
"The purpose of this is twofold: first, assessing the quality of the surface strings produced from the recon-2 Each lexical entry in the ERG is assigned to exactly one lexical type, which determines most of its syntactic and semantic properties.",
"For example, inform , advise and remind share a lexical type because they all select a noun phrase and a sentential complement.",
"The detailed lexical types interact with the highly generalized linguistic principles to produce precise linguistic interpretations.",
"structed derivation trees gives additional perspectives on the evaluation of our models; and secondly, there are existing works on surface realization from DMRS, so our models can be benchmarked against.",
"Finally, we evaluate the significance of the two proposed adaptations, namely recovering words with empty semantics and incorporating framework-specific constraints to PSHRG parsing.",
"An instance of sample input and sample output are provided in Appendix C. 5.1 Data The main data set we experiment on is the Ninth Growth of the Redwoods Treebank (Oepen et al., 2002).",
"3 It contains English sentences from a range of domains including Wall Street Journal (WSJ) and the Brown corpus, each paired with the analyses of the 1214 version of the ERG.",
"Each MRS is converted into a DMRS using Pydelphin (Copes-take et al., 2016).",
"4 We discard the instances with ambiguous analysis, disconnected DMRS, and unparsable MRS by Pydelphin.",
"To assess the scalability of our models, we further sampled sentences from the Gigaword v.5 corpus (Parker et al., 2011) for model training, where extra training instances are obtained by parsing sentences with the ACE and choosing the best ERG analysis for each sentence as ranked by the ACE.",
"After preprocessing, the total number of instances in the training and test sets are 70,774 and 10,042 respectively under the standard Redwoods data split.",
"70,774 extra training instances are created from the Gigaword corpus.",
"We removed the mostly uninformative syntactically covert quantifiers (e.g., udef_q , proper_q ) in all DMRS graphs.",
"The numbers of DMRS nodes reported below are counted after the removal.",
"Subgraph canonization (3.3.2) was performed only on the DMRS subgraphs of fewer than seven nodes.",
"The maximum length of the unary chains in the generated derivation trees was set to be three.",
"The parser was implemented in PyPy3.6 and ran under one Intel Xeon E5-2697 CPU on x86_64 Linux.",
"Our implementation is available online.",
"5 3 http://svn.delph-in.net/erg/tags/ 1214/tsdb/gold 4 https://github.com/delph-in/pydelphin 5 https://github.com/aaronlolo326/ pshrgOnDMRS 5430 DerivationAnnotation Model ParsEval-Graph Coverage P R F 1 M1C SHRGPCFG 76.79 74.90 75.84 80.58% PSHRG 81.86 79.85 80.84 80.58% PSTHRG 83.81 81.23 82.50 74.31% M2C SHRGPCFG 83.13 80.44 81.77 68.69% PSHRG 84.52 81.60 83.03 68.69% PSTHRG 87.04 83.66 85.32 61.50% Table 1: Results on the accuracy of derivation reconstruction under standard Redwoods data split.",
"If a DMRS is parsed correctly, the synchronously reconstructed derivation should not deviate too much from the original derivation.",
"Nevertheless, equivalence is a sufficient, but not necessary condition for a parse to be correct since syntactic differences do not necessarily contribute to semantic differences.",
"We devise a modified version of the ParsEval (Black et al., 1991) measure, ParsEval-Graph, to assess the quality of the generated trees, as the constituents of the trees are not aligned to a surface string but on the input semantics.",
"In ParsEval-Graph, the alignment of a constituent refers to the DMRS nodes covered instead of the characters covered in the surface string.",
"Following ParsEval, ParsEval-Graph only accounts for binary rules, and preterminals are disregarded.",
"All ParsEval-Graph scores are evaluated on the parsable instances of the respective models on unannotated derivation trees.",
"As introduced in 4.1, we experiment with the typed variant of PSHRG, PSTHRG, and two configurations of tree annotation, namely order-2 vertical markovization with syntactic category annotation (M2C), and only syntactic category annotation (M1C).",
"Since no existing works on data-driven DMRS parsing or surface realization produce syntactic derivations, we develop a baseline model SHRGPCFG for benchmarking.",
"SHRG PCFG parses a DMRS with the SHRG induced and the probability of each parse is given by a PCFG model where the probability of each CFG production is modelled as the fraction of times appears among all in the training data.",
"The ACE is not evaluated because it always generates derivations faithful to the ERG.",
"and outperform the baseline under the same annotation configurations.",
"More extensive annotation and typing the grammar respectively improve the F 1 score by about 2, at the expense of coverage reduction caused by rule sparsity when the Redwoods training set does not provide adequate data for the over-specific annotation.",
"It is insightful to note that the baseline under M2C performs at a level between the PSHRG model and PSTHRG model on M1C, which conveys that the contextual information of a nonterminal node could already provide ample information on the semantic compositions.",
"To study models' performance with respect to the size of the training data, we add the Gigaword instances on top of the Redwoods training set.",
"This doubles the amount of training data.",
"As reported in Table 2, the coverages of the two models increase by 4.50% and 6.31% respectively with more data.",
"Nevertheless, the accuracy of derivations does not improve further, as frequency-based context-free probability models have low learning capacities.",
"Despite all DMRS being generated by the ERG, the ACE does not parse every DMRSparsing fails when a DMRS predicate is OOV.",
"In contrast, our models parse more instances than the ACE when given more training data, since they generalize to OOV with delexicalization.",
"Although delexicalization removes much lexical information, we suggest that SHRG-based parsing and the incorporation of MRS-specific constraints can restrict compositions outside of the ERG to a large extent.",
"To produce a surface string from a DMRS, we realize the most frequently recorded surface form for each preterminal from the reconstructed derivation tree.",
"We compare our work with the Neural MRS (Hajdik et al., 2019) and the ACE.",
"The Neural MRS generates in an end-to-end manner without intermediate syntactic derivations.",
"For more comparable results, we evaluate the models under M1C annotation configuration since they have similar parse 5431 Model BLEU GenerateSyntacticStructures Redwoods WSJ Brown Neural MRS 66.11 65.78 45.00 No ACE (ERG) 62.21 -Yes PSHRG (M1C) 59.33 63.65 58.29 Yes PSTHRG (M1C) 60.67 64.77 59.47 Yes Table 3: Results on surface realization.",
"We evaluate the generation quality with BLEU (Papineni et al., 2002) using SacreBLEU (Post, 2018) 6 .",
"Following (Hajdik et al., 2019)'s evaluation on inand out-of-domain performances, we experiment on the different trainingtest data splits, namely the WSJWSJ and WSJBrown splits.",
"WSJ contains 34,751 training instances and 1,442 test instances, and Brown contains 2,181 test instances.",
"7 All BLEU scores are evaluated on the parsable instances of the respective models.",
"Table 3 shows that our models perform consistently across data splits.",
"Under the Redwoods standard data split, our models are worse than the neural model.",
"With similar parse coverages to the ACE, our performance is also close to the ACE.",
"Under the WSJ-Brown split, our PSHRG models outperform the Neural MRS. 8 The PSHRG (M1C) parses 74.90% of the test set under the WSJWSJ split.",
"When the model is typed and when switching from into out-of-domain, the models parse about 7% less data respectively.",
"The coverages of M1C models reported here are lower than those in Table 1 since the amount of training data is halved.",
"Therefore, we consider the relative decreases in coverage from into out-of-domain of our respective models to be more insightful on models' transferability between domains than the absolute coverage.",
"Apart from the automatic evaluation, we also value qualitative details and seek linguistically interesting phenomena that result from a grammar-based approach.",
"To this end, we observe that our models identify different realization possibilities from the original sentence to the same semantics 6 https://github.com/mjpost/sacreBLEU 7 The numbers of training and test instances of Neural MRS are slightly greater than ours after respective preprocessing.",
"8 To provide context on assessing our model under a fullcoverage scenario, we try to provide a naive fallback for the unparsable instances: For each of these instances, the partially realized non-overlapping constituents are concatenated in order of decreasing number of the DMRS nodes covered to form the final sentence.",
"With this quick fix, our PSHRG (M1C) model attains a BLEU score of 51.08 on the WSJBrown split.",
"Compared to neural approaches, PSHRG is a shallow statistical model with a high inductive bias.",
"It encodes a syntaxsemantics interface effectively through treeand graphrewriting.",
"Even though our models are not engineered towards the task of surface realization, and with limited morphological analyses and no language modelling, our approach is still competitive as evaluated quantitatively and qualitatively.",
"We suggest that PSHRG-based approaches and neural models be decent alternatives to each other for general surface realization from MR: neural models provide full-coverage and high-quality generation when substantial training data is available, whereas PSHRG-based solutions extrapolate from limited data and in out-of-domain scenarios, and produce interpretable derivations.",
"We conduct a few more experiments to test against the significance of two proposed adaptations, namely the solution to semantically empty words (3.3.3) and framework-specific constraints (4.1).",
"We implement two models, PSTHRG (M1C) and PSTHRG (M1C), which parse DMRS without regard for semantically empty lexical items and MRS constraints respectively.",
"As reported in Table 5, The treatment of empty semantics not only adds 15.51% more parsable instances but also corrects some parses in the 58.80% that require analyses of empty semantics, thus pro-5432 Model ParsEval-Graph BLEU Coverage P R F 1 PSTHRG (M1C) 83.81 81.23 82.50 60.67 74.31% PSTHRG (M1C) 84.68 81.09 82.85 54.07 58.80% Table 5: Results of ablation of inserting words with empty semantics under the Redwoods standard split.",
"ducing more accurate surface strings.",
"Fig. 6 shows the importance of MRS constraints on parsing efficiency.",
"We set a time limit of 300 seconds on parsing, and the parsing of 16.71% of the WSJ test set exceeds the time limit.",
"When the DMRS contains 24 or more nodes, timeouts occur on at least one DMRS graph of each size.",
"Table 6 shows that when the constraints are enforced, all parses are completed in much shorter times and the derivations reconstructed are more accurate.",
"The proposed frequency-based PSHRG models are simple yet competitive data-driven baselines for recovering derivations of DMRS.",
"They can be further combined with sophisticated machine learning methods for a more accurate parse ranking.",
"More extensive features can also be included to enhance grammar approximation.",
"For instance, the feature-paths of ERG signs are shown to be helpful for PCFG approximation (Zhang and Krieger, 2011).",
"Language-specific knowledge about words with empty semantics is also critical for syntactic purposes.",
"In terms of efficiency, Ye and Sun (2020) showed that exact parsing can be very practical on Elementary Dependency Structures (EDS; Oepen and Lnning, 2006), a close equivalent to DMRS that excludes scopal information.",
"Different from Ye and Sun (2020)'s implementation, we retain more than 210 ERG syntactic constructions for precision and adopt delexicalization for generalization, both of which increase the search space of parsing and trade efficiency.",
"We suggest that the described PSHRG-based approach can be a potential alternative to the unification-based ACE generator for surface realization from DMRS, whilst improving parsing coverage, accuracy and efficiency without sacrificing one another would be a critical problem for future research.",
"In this paper, we report our findings with respect to the engineering decisions we investigated based on the data at hand.",
"In principle, the application of the described PSHRG-based approach is not limited to DMRS, but also to generic graph-to-tree translations that exhibit compositionality, if suitable data of aligned trees and graphs is available.",
"If explicit associations do not exist between the trees and graphs, the induced grammar formalizes the underlying relations and patterns; otherwise, the induced grammar provides an approximation to such relations, which can be desirable for computation purposes.",
"Based on the experimental results, we can assess the contributions of this work from three perspectives: (1) PSHRG with framework-specific adaptions as a formalism that approximates the semantic composition process of DMRS, (2) PSHRG graph parsing with framework-independent extensions as a general approach to modelling compositional graph-to-tree translation, and (3) derivation reconstruction with a PSHRG induced from data as a solution to surface realization from MRs that provides explainability, syntactic disambiguation, and syntactic variations.",
"We hope that this work provides relevant and substantial empirical insights to stimulate more research on approaching MR processing with linguistically-motivated methods.",
"The work was supported by grants from the Research Grant Council of the Hong Kong Special Administrative Region, China [Project No.: CUHK 14205618], and CUHK Direct Grant No. 4055159."
] | [
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other"
] |
[
"Transformer-based language models benefit from conditioning on contexts of hundreds to thousands of previous tokens.",
"What aspects of these contexts contribute to accurate model prediction?",
"We describe a series of experiments that measure usable information by selectively ablating lexical and structural information in transformer language models trained on English Wikipedia.",
"In both midand long-range contexts, we find that several extremely destructive context manipulationsincluding shuffling word order within sentences and deleting all words other than nounsremove less than 15% of the usable information.",
"Our results suggest that long contexts, but not their detailed syntactic and propositional content, are important for the low perplexity of current transformer language models.",
"1 1 Introduction Recent years have seen a significant improvement in the predictive accuracy of neural language models (LMs), owing to a combination of improvements in model architecture (especially transformers; Vaswani et al. 2017) and training infrastructure (Wolf et al., 2020).",
"The most striking change, relative to both recurrent neural LMs (Mikolov et al., 2010) and count-based models (Kneser and Ney, 1995), is the length of the context that these models can effectively condition on.",
"While count-based LMs in production speech recognition and machine translation systems typically used 1020 tokens at a maximum (e.g., Brown, 2011), and recurrent LMs have an effective context size of 200 (Khandelwal et al., 2018), the predictive accuracy of transformer LMs appears to improve when conditioning on as many as a thousand previous tokens (Beltagy et al., 2020).",
"A significant amount of recent work has 1 Code for all experiments in this paper is available at https://github.com/lingo-mit/context-ablations .",
"focused on making use of even longer contexts computationally feasible (Rae et al., 2019; Wang et al., 2020; Child et al., 2019; Dai et al., 2019; Kitaev et al., 2020).",
"But despite empirical evidence that long contexts are helpful, little is understood about why .",
"If the future of language modeling will include a focus on contexts of increasing size, it is important to first understand what contextual information contributes to accurate prediction in current models.",
"This paper offers an answer to that question via the V -information framework of Xu et al. (2020).",
"V information, discussed more in Section 2, provides a formal framework for reasoning about how much usable information a computationally constrained predictor (like a neural LM) can extract from an input.",
"Our experiments measure the amount of usable information that is added when increasing LM context size, then attempt to pinpoint the source of this information by ablating features of the added context (via controlled shuffling and word deletion) and measuring the resulting loss of model predictive power.",
"While this framework is general, we focus on transformer LMs.",
"Our work is closely related to an earlier study by Khandelwal et al. (2018), which measured changes in a pre-trained LSTM LM when context words were permuted and deleted at evaluation time.",
"But neural language models are known to be highly sensitive to distributional shiftsand in particular might be unable to use information from long-range context but still be adversely affected when the structure of that context changes at evaluation time.",
"Directly measuring usable information makes it possible to clearly distinguish accuracy decreases that result from loss of information and decreases that result from out-of-distribution inputs .",
"Our experiments reveal a number of surprising facts about the use of longand mid-range context in transformers.",
"While increasing context length from 256 to 768 tokens is beneficial (decreasing perplexity by roughly 4%), many destructive transformations of this context (including transformations that cause large changes in the paradigm of Khandelwal et al. 2018) remove essentially no usable information.",
"Our results suggest that for current models, the primary carriers of information in long-range context are content words and local co-occurrence statistics: deleting function words and shuffling within local windows both have very little effect on models' predictive power.",
"Context matters, but not all features of context matter equally; as discussed in Section 5, these results motivate future language modeling research focused on alternative context representations rather than simply more tokens.",
"A language model (LM) places a probability distribution p ( x ) over discrete token sequences x .",
"Most learned LMs do so by decomposing p ( x ) according to the chain rule and modeling the conditional distribution over a single target token given a (fixed-or variable-length) context of previous tokens: p ( x ) = (cid:89) i p ( x i | x 0 , x 1 , . . . , x i 1 ) .",
"In transformer language models , this conditional distribution is modeled via a sequence of alternating neural feed-forward layers and self-attention layers; see Vaswani et al. (2017) for more details.",
"While input sequences x can in principle be made arbitrarily long, there are both theoretical and practical limits to transformers' ability to make effective use of it (Hahn, 2020; Wang et al., 2019).",
"Here, we wish to understand when (and why) increasing the size of the context improves model predictions.",
"Usable information Consider a hypothetical LM context consisting of the tokens The user's password is.",
". . . This context suggests that subsequent tokens will be a password: (hopefully!) a high-entropy sequence.",
"Now suppose this context is extended to include earlier tokens, becoming The user's hashed password is",
"ave$@To9!.",
"The user's password is.",
". . . Information-theoretically, this context is extremely informative: only a small number of passwords will hash to the given string, and a predictor capable of testing all passwords would be able to identify the candidates and significantly reduce its uncertainty about future tokens.",
"But in practice, this extra context is useless: no known efficient predictor can learn anything about the password from its hash code, and the extra context has not made the language modeling problem any easier.",
"This is an extreme case, but a similar intuition applies to more conventional questions about language models.",
"A newspaper article whose first sentence begins A dog bit a man is likely to end very differently from one that begins A man bit a dog .",
"Can LMs reason effectively about this distinction, or is it (like a hashed password) computationally inaccessible to current models?",
"kind was introduced by Xu et al. (2020): Definition 1.",
"The usable predictive information (formally, predictive V -information ) from a random variable X to a random variable Y as: IV ( X Y ) = (cid:2) inf p 1 V E log p 1 ( Y ) (cid:3) (cid:2) inf p 2 V E log p 2 ( Y | X ) (cid:3) (2) for a class V of distributions p .",
"Intuitively, this definition measures how much extra information about Y can be extracted from X by any predictor in V .",
"In language modeling, we will take Y to be the target word, X its context, and V a class of parametric models.",
"While this definition generalizes Shannon mutual information (Shannon, 1948) and has deep connections to other information-theoretic quantities (see Xu et al. 2020 for details) it ultimately corresponds to a simple and common-sense evaluation: if we want to know how much the extra context X helps a language model, we should train a model p 1 without access to X , train a model p 2 with access to X , and compare the accuracy of their predictions.",
"Measuring what is used But the original question raised by the introduction was not just how much information is contributed by context.",
"It is already well-established that conditioning on long contexts is helpful, with existing experiments on long-range transformers effectively implementing the measurement in Eq.",
"(2).",
"Instead, we want to know what information in this context is actually used by models.",
"As a prototypical example, let us hypothesize that more than five tokens away from the target, models are only able to extract usable information from nouns.",
"(In our experiments in Section 3, this long-range context will be considerably longer than 5 words.)",
"For example, given the sentence: Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29.",
"we hypothesize that the LM distributions: p 1 ( director | Pierre Vinken, 61 years old, will join the board as a nonexecutive ) (3) p 2 ( director | Pierre Vinken years (cid:124) (cid:123)(cid:122) (cid:125) noun-only context , the board as a nonexecutive (cid:124) (cid:123)(cid:122) (cid:125) ordinary context ) , (4) and more generally that IV ( X 0: n X n ) IV ([ nouns ( X 0: n 5 ) , X n 5: n ] X n ) (5) where X i : j is the sequence of tokens [ X i , X i +1 , . . . , X j 1 ] , V is a class of LMs, and nouns is a context ablation that extracts only the nouns from a given string.",
"That is, we hypothesize that the amount of usable information contributed by the full context X 0: n is the same as the amount contributed by the ablated context [ nouns ( X 0: n 5 ) , X n 5: n ] , so ablation removes no information.",
"The experiments in this paper generalize this experimental framework to other context ablations and hypotheses.",
"Let f be an ablation and k an integer offset, and denote an ablated context : f k ( X ) = [ f ( X 0: n k ) , X n k : n ] (6) and an ablated negative log-likelihood : L ( , f, k ) = E log p ( X n | f k ( X 0: n )) (7) Then, we can measure the effect of each ablation f on usable information via the following quantity: Definition 2. The ablated information due to an ablation f at an offset k is: A ( f, k ) = IV ( X 0: n X n ) IV ( f k ( X 0: n ) X n ) IV ( X 0: n X n ) IV ( X n k : n X n ) (8) = inf L ( ,f,k ) inf (cid:48) L ( (cid:48) ,n ) inf (cid:48)(cid:48) L ( (cid:48)(cid:48) ,n k ) inf (cid:48) L ( (cid:48) ,n ) , (9) where L ( , i ) is the (unablated) negative log-likelihood E log p ( X n | X n i : n ) .",
"Intuitively, A ( f, k ) measures how much of the usable information added by an extra k tokens (the denominator) is removed by applying the ablation f to those k tokens (the numerator).",
"If it is close to 0, almost no information is removed; if it is close to 1, almost all information is removed.",
"Evaluation in practice Eq.",
"(9) provides a general framework for answering our core question in this paper: for a diverse set of context ablations and offsets, we will measure how much information is lost when a given ablation is applied at a given offset.",
"A few modifications are required to turn this equation into a practical evaluation scheme: Held-out evaluation : Eq.",
"(7) involves an expectation over the sequence distribution p ( X ) .",
"In practice, LMs must be trained on finite corpora, creating a risk of overfitting (Zhang et al., 2016).",
"To address this issue, we approximate the infimum in Eq.",
"(7) by fitting 1 on a training set, and computing ablated information on a held-out validation set.",
"All reported results are an average of held-out likelihoods from two random initializations.",
"Batching : Given a fixed (training or test) dataset of strings X and a maximum context size of m , Eq.",
"(7) should be estimated empirically as 1 |X| (cid:80) x 1 | x | (cid:80) | x | i =0 log p ( X i | f k ( X i m : i )) .",
"This requires re-computing model predictions once for every token in the dataset.",
"However, the transformer models we use here support efficient batch inference: training data is pre-segmented into sequences of at most length n , and 1 |X| n (cid:80) x (cid:80) ni =0 log p ( X i | f k ( X 0: i )) can be computed in a single forward pass.",
"This is considerably more efficient but means that most tokens are evaluated with a context of length < n .",
"As a compromise to ensure that evaluations contain long-range context, we accumulate losses on a subset: L ( , f, (cid:96) : m n ) = 1 |X | ( n m ) (cid:88) x (cid:96) + n (cid:88) i = (cid:96) + m log p ( X i | [ f ( X 0: (cid:96) ) , X (cid:96) : i ]) (10) (visualized in Fig. 1).",
"This can be read as (cid:96) tokens of f -ablated context, followed by m to n tokens of unablated context.",
"We will write L ( , m n ) when only unablated context is used.",
"Model, data and training details For all experiments, our LM uses the GPT-2 model architecture (Radford et al., 2019) in the implementation of Wolf et al. (2020) with default hyperparame-ters.",
"All models are trained from scratch on the WikiText-103 dataset (Merity et al., 2016), an English language modeling benchmark.",
"Aside from ablations, no preprocessing is applied.",
"A special separator token is inserted between ablated and unablated context.",
"The training set contains 103,221,021 words, while the evaluation set contains 217,646 words.",
"A note on evaluation As in past work on evaluating language models (Brown et al., 1992), our evaluation of relative predictive information ultimately bottoms out in a conditional entropy (log-perplexity).",
"Recent work has shown that other metrics, such as diversity of outputs, are important for evaluating the quality of LMs as models for language generation (Hashimoto et al., 2019; Caccia et al., 2020).",
"Generation also depends on a number of other factors, such as choice of decoding procedure (Caglayan et al., 2020).",
"Here, we focus on LMs as predictive models, measuring their ability to place an accurate distribution over future words and sentences, rather than their ability to generate useful or coherent text (see Appendix C).",
"We want to emphasize that these results below apply to language models specifically, and not transformers applied to NLP tasks in generalthe same analysis might give very different conclusions if applied to, e.g., question answering or summarization.",
"In this section, we attempt to determine what information in transformer LM contexts is usable by measuring ablated information (Eq.",
"(9)).",
"Sections 3.1 and 3.2 describe our main results, with Section 3.1 focused on ordering and Section 3.2 focused on lexical information.",
"Section 3.3 compares these results to ablations applied at evaluation time.",
"Section 3.4 explores whether contexts can be further manipulated to improve model predictions.",
"context.",
"We first train a no information model to minimize L ( , 0 512) and a full information model to minimize L ( , 512 1024) .",
"For each context ablation f , we train a model to minimize L ( , f, 512 : 0 512) .",
"Each ablation has access to more information than the no information model (because it conditions on extra tokens) and less information than the full information model (be-cause an ablation has been applied to those tokens).",
"Note that the LM operates on BPE-derived sub-word tokens for consistency with the way GPT-2 is typically used, but all ablations are defined at the word level, meaning, e.g., that we shuffle words rather than tokens.",
"We use these trained models to calculate ablated information (Eq.",
"(9)).",
"To explore the effect of different context lengths, we stratify evaluation of the ablated information into two conditions: a mid-range condition in which likelihoods in Eq.",
"(9) are of the form L ( , f, 512 : 0 256) , and a long-range condition with likelihoods L ( , f, 512 : 256 512) .",
"(We call the former mid-range rather than short-range because most tokens are still predicted with significant unablated context; our experiments do not characterize sentence-internal modeling of syntactic well-formedness.)",
"Results are shown in Figure 2 and discussed below.",
"61 N.V., director the of Mr. Vinken Dutch group.",
"as nonexecutive the 29.",
"is Vinken, years Elsevier join old, publishing a Nov. will Pierre board chairman shuf.",
"publishing group.",
"N.V., the Dutch Mr. Vinken is join the board as a nonexecutive years old, will chairman of Elsevier Pierre Vinken, 61 director Nov. 29.",
"In the shuffle all ablation, f shuffles words uniformly at random, forcing the model to treat ablated context as a bag of words.",
"In the shuf.",
"trigrams globally ablation, the context is divided up into nonoverlapping trigrams, the order of which is then permuted uniformly at random.",
"Shuffling all words removes 41% of usable information in the midrange condition and 84% in the long-range condition: ordering information is important even very far from the target .",
"On the other hand, shuffling all trigrams removes 31% of usable information in the mid-range condition and 50% in the long-range condition: local co-occurrence statistics carry a significant amount of usable information .",
"(a) Mid-range condition (first 256 tokens after ablation) 4.17 4.18 4.19 4.20 4.21 4.22 4.23 bits full information sent.",
"Words are shuffled only within sentences according to one of three procedures: (1) a uniform random permutation of all the words in the sentence ( shuf. within sent. ), (2) a uniform random permutation of the words within each non-overlapping trigram in the sentence ( shuf. within trigrams ), and (3) a uniform random permutation of the order of the trigrams within the sentence ( shuf. trigrams within sent. ).",
"(1) and (2) were also recently explored by Pham et al. (2020) in models for entailment, and more complex shuffling procedures have been explored in neuroscience contexts (Mollica et al., 2020).",
"Here, (2) and (3) are chosen because they preserve local co-occurrence statistics ((3) more than (2)), while (2) also preserves the general linear information flow of the sentence.",
"Notably, the shuf.",
"within trigrams (14% and 41%) and the shuf.",
"trigrams within sent.",
"(16% and 35%) ablations both remove relatively little usable information in both the midand long-range conditions.",
"Usable information is decreased only slightly by ablations that preserve local co-occurrence statistics and/or linear information flow .",
"(This includes transformations like man bites dog dog bites man with significant effects on semantics!)",
"In the long-range condition, uniform shuffling within sentences produces a larger effect, removing 55% of usable information.",
"Next, sentences are shuffled within the context while their internal word order is unchanged.",
"In the mid-range condition, this produces results comparable to the trigram shuffling experiments above (removing 17% of usable information); in the long-range condition, it has an even smaller effect (14%).",
"Together with the previous experiment these results suggest that prediction accuracy depends on information about local word co-occurrence, but not fine-grained word order or global position .",
"Rudolph Agnew, 55 years old and former chairman of Consolidated Gold Fields PLC, was named a nonexecutive director of this British industrial conglomerate.",
"A possible hypothesis about LM behavior is that the main function of long-range context is to provide more information about the general topic of the document, including clues about vocabulary and style.",
"To test this, the ablation replaces its entire input with the 512 tokens that immediately precede it in the source document (which in general will be topically similar).",
"This transformation removes significant information in both midand long-range conditions (55% and 69%).",
"Long-range context is 4.20 4.25 4.30 4.35 4.40 4.45 bits full information cont.",
"(a) Mid-range condition (first 256 tokens after context) 4.15 4.16 4.17 4.18 4.19 4.20 4.21 4.22 4.23 bits cont.",
"(b) Long-range condition (tokens 256-512 after context) Figure 3: Effect of word identity on usable information.",
"not simply a source of topic information: earlier text on the same theme is in some cases nearly as uninformative as no text at all.",
"Our next experiments focus on lexical rather than structural information, using ablations that delete selected words from the context.",
"Training and evaluation setups are exactly as in Section 3.1.",
"Here, unlike the previous section, ablations will generally cause the number of tokens in a given context to decrease; in this case ablations also insert padding tokens to the beginning of the context window to preserve the original number of tokens.",
"Results are shown in Fig. 3. Parts of speech N Pierre Vinken years board director Nov.",
"As in the initial example from Section 2, we retain only words whose part of speech tag is in a given set.",
"We use the spaCy model (Honnibal et al., 2020) for part-of-speech tagging, and examine five sets: (1) nouns only , (2) nouns and verbs , (3) nouns, verbs, and adjectives , (4) content words (nouns, verbs, adjectives, and adverbs), and (5) function words (all words except nouns, verbs, adjectives, and adverbs).",
"In the mid-range condition, deleting all words but nouns removes only 20% of usable information; deleting all but nouns and verbs removes only 13%.",
"Most usable information, even in mid-range context, appears to be captured by nouns and verbs.",
"Retaining only function words causes a considerably greater loss of information.",
"In the long-range condition, results are even more striking: retaining only content words improves predictions over the full informa-tion experiment .",
"Like Shannon information, V information is defined to be non-negative (Xu et al., 2020), and the result in Fig. 3 is a consequence of our finite-sample approximation based on held-out likelihood.",
"The effect is robust across multiple training runs from random initializations.",
"As there is a significant gap between the training and validation perplexity of our model (roughly 11%), we hypothesize that this change occurs because the ablation preserves semantic content while reducing the original model's ability to overfit.",
"We believe this is an important subject for future investigation.",
"named entities Pierre Vinken 61 years old Nov. 29 Vinken Elsevier N.V. Dutch",
"As an alternative to the topic hypothesis evaluated under Order of entire sections above, we might",
"hypothesize that long-range contexts are useful because they provide a reservoir of named entities likely to be referred to again.",
"Here, the ablation retains only spans tagged as named entities or quantities by spaCy.",
"While significantly worse than the noun ablation discussed above, retaining only entities results removes only about a third of usable information in both conditions (39% and 31%).",
"Another natural question is whether rare words or frequent words are more important: information about frequent context words might help models estimate fine-grained document-level frequencies of those words account for most of the terms in Eq.",
"(7); rare words are likely to be more informative about the content of the document itself.",
"We partition the vocabulary into a set of rare words , corresponding to the least frequent 98% of word types and 20% of word tokens, and frequent words , the most frequent 2% of types and 80% of tokens.",
"Both ablations remove a significant amount of information relative to the POS-based ablations above, but retaining only frequent words improves perplexity relative to rare words in both the midand long-range conditions.",
"We motivated the use of V -information in Section 2 by arguing that it more clearly distinguished between prediction errors attributable to loss of information and prediction errors attributable to malformed and out-of-distribution model inputs.",
"To put our results in context, we repeat several of the previous experiments in the evaluation paradigm of Khandelwal et al. (2018), which is designed to measure test-time sensitivity rather than usable information.",
"We train a new model to minimize L ( , 512 1024) while randomly truncating the first 512 context tokens and replacing them with padding tokens (to ensure that the model has seen padding tokens at training time).",
"We then evaluate this model on 4.20 4.25 4.30 4.35 4.40 4.45 4.50 4.55 bits full information shuf.",
"(a) Mid-range condition (first 256 tokens after ablation) 4.14 4.16 4.18 4.20 4.22 bits full informationsent.",
"(b) Long-range condition (tokens 256-512 after ablation) Figure 4: Loss of information resulting from ablations at evaluation time only .",
"the set of ablations shown in Section 3.1 and Section 3.2.",
"For the full information model in Fig. 4, we evaluate on ordered context windows with no padding tokens; for the no information model, we evaluate on context windows in which the first 512 tokens are all padding tokens.",
"In the mid-range condition, the least destructive ablations are shuffling within trigrams and shuffling the order of trigrams within sentences: models appear to be reasonably robust to this kind of data transformation without specific training on it.",
"Importantly, lexical ablation experiments have a large impact in this evaluation, underlining the extent to which the two experimental paradigms characterize different aspects of model behavior.",
"Figure 5 in Appendix A shows a side-by-side comparison of these experiments and the ones in Sections 3.13.2.",
"selective deletion of context words.",
"Can this effect be exploited to further improve models?",
"As a simple experiment, we attempted to replace all padding tokens in the nouns+verbs ablation of Section 3.2 with nouns and verbs from further back in the contexteffectively providing the model with an even longer-range view of an informative context representation.",
"This experiment slightly increased usable information in the mid-range condition (0.2%), but decreased it in the long range-range condition (0.6%).",
"Longer contexts, even of a kind previously found to be informative, did not provide additional usable information.",
"These results are consistent with our earlier hypothesis that the previously observed effect resulted from a reduction in overfittingif removing information increased performance by reducing overfitting, then it is reasonable that adding information back results in more overfitting.",
"Context in count-based and discriminative LMs The earliest learned LMs were count-based (e.g., Kneser and Ney, 1995): they estimated p ( x n | x 0: n ) based on a (smoothed) empirical n gram frequency #( x 0: n ) / #( x 0: n 1 ) (where #( x ) is the number of times the sequence x appears in training data).",
"As the number of distinct n -gram counts grows exponentially in n , it was typically set to a small value.",
"Count-based models have a clear dependence on context: any token within the last n words that also appears in a training n-gram is relevant, anything further back is not.",
"Subsequent models improved on these by allowing the use of skip-grams, caches, and feature-based models (Goodman, 2001; Bengio et al., 2003).",
"Some of these in principle allowed the use of unlimited-length contexts, but only by imposing strong restrictions on the ways in which context features could interact.",
"Context in RNN LMs Recurrent neural network language models (Mikolov et al., 2010; Elman, 1990) provide a more expressive mechanism for the use of long-range context: models write to a recurrent state vector which can be carried arbitrarily far into the future.",
"Computational issues limit the effective context size such models can be practically trained on, but this size is still significantly greater the models mentioned above: as previously noted, Khandelwal et al. (2018) revealed influence from up to 200 tokens of context.",
"Similar effects are reported by Sankar et al. (2019) for neural dialogue models, and Li et al. (2016) describe an alternative procedure for ablating contexts.",
"Context in Transformer LMs Transformers introduce yet another mechanism for extracting information from long-range context: attention.",
"Attention is also used with RNNs, but typically with just a single headthe hidden state still carries most of the information.",
"In transformers, context enters into predictions primarily via unbounded random access.",
"These models appear to benefit from significantly longer contexts than previous models.",
"Some recent work that investigates the behavior of individual transformer attention heads (Clark et al., 2019; Voita et al., 2019).",
"This work finds that certain attention heads are sensitive to things like word frequency, positional information, and certain syntactic phenomena.",
"While extremely informative about the computational structures implemented by fixed models, these approaches do not necessarily reveal anything about usable information: indeed, patterns of attention do not necessarily correlate with model predictions (Jain and Wallace, 2019).",
"Other related work Our finding that fine-grained ordering information contributes little usable information is consistent with Rae et al. (2019)'s finding that long-range contexts could be informatively summarized in fixed-sized vectors; our finding that most usable information is carried by nouns is consistent with earlier find-ings about both specialized neural architectures (Henaff et al., 2016) and discourse representations in feature-based models (Barzilay and La-pata, 2008).",
"Our approach also shares similar motivations to information-theoretic work on probing (Voita and Titov, 2020; Pimentel et al., 2020), which uses related tools to interpret linguistic structure in LM representations rather than characterizing their effect on LM predictions.",
"Several recent papers have explored the effect of training-time and test-time ablations in models for other data analysis tasks: Pham et al. (2020) find that shuffling experiments have a limited effect on the accuracy of models for natural language inference, while Perez et al. (2021) describe several experiments aimed at introducing usable information for several question answering and sentence understanding tasks.",
"We have investigated the extent to which transformer models can use structural and lexical information in long-range contexts for English language modeling.",
"Experiments demonstrated that this information is primarily contained in content words and local ordering statistics: ablations that remove other kinds of information from context have little effect on models' predictive accuracies.",
"In contrast, retaining only information about document identity or named entities causes significant drops in predictive accuracy: the effectiveness of long contexts is not explained by the presence of topic or named entity information alone.",
"Crucial to obtaining these results was a measure of ablated usable information grounded in the accuracy of models trained and tested on ablated contexts.",
"Past work on context in LMs has primarily measured the influence of evaluation-time ablations.",
"Sometimes these two notions of context-sensitivity coincide (e.g., trigram shuffling) and sometimes they do not (e.g., removal of lexical in-formation).",
"Our results also offer a jumping-off point for future modeling work.",
"They motivate more efficient , compressed context representations that better preserve the information that is usable by current models.",
"They motivate more accurate models by developing new context representations that make currently unusable information more prominent.",
"Several questions remain unanswered by our experiments.",
"Do ablations affect the quality of text generated by models?",
"(In particular, does the usable information added by long contexts improve predictability of syntax, semantics, or simply document-level word frequency statistics?)",
"More fundamentally, do observations about usable information reflect limitations of transformers or fundamental, (Shannon-)information-theoretic properties of English?",
"Our results suggest that at least some of these effects are model-specific: deleting function words cannot add information, but improves held-out model accuracy.",
"A complete answer to this question will require more detailed exploration, including a better understanding of human predictions in comparable settings.",
"Thanks to Carina Kauf and Greta Tuckute, Evelina Fedorenko and Roger Levy for valuable discussions.",
"We acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that contributed to the results reported within this paper.",
"Across initial exploration, evaluation conditions and training runs, experiments in this paper required roughly 100 training runs on the WikiText-103 dataset.",
"As discussed in Section 2, model size and batched evaluation were both used to minimize the energy demands of these experiments; experiments themselves were performed at the Massachusetts Green HPC center, a carbon-neutral supercomputing facility.",
"Ultimately, results in Section 3 provide guidance toward the design of models that use context more efficiently and motivate the large-scale empirical study conducted here."
] | [
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain"
] |
[
"Tables are often created with hierarchies, but existing works on table reasoning mainly focus on flat tables and neglect hierarchical tables.",
"Hierarchical tables challenge table reasoning by complex hierarchical indexing, as well as implicit relationships of calculation and semantics.",
"We present a new dataset, HiTab, to study question answering (QA) and natural language generation (NLG) over hierarchical tables.",
"HiTab is a cross-domain dataset constructed from a wealth of statistical reports and Wikipedia pages, and has unique characteristics: (1) nearly all tables are hierarchical, and (2) questions are not proposed by annotators from scratch, but are revised from real and meaningful sentences authored by analysts.",
"(3) To reveal complex numerical reasoning in analysis, we provide fine-grained annotations of quantity and entity alignment.",
"Experimental results show that HiTab presents a strong challenge for existing baselines and a valuable benchmark for future research.",
"Targeting hierarchical structure, we devise an effective hierarchy-aware logical form for symbolic reasoning over tables.",
"Furthermore, we leverage entity and quantity alignment to explore partially supervised training in QA and conditional generation in NLG, which largely reduces spurious predictions in QA and meaningless descriptions in NLG.",
"The dataset and code are available at https://github.com/ microsoft/HiTab .",
"In recent years, there are a flurry of works on reasoning over semi-structured tables, e.g., answering questions over tables (Yu et al. , 2018; Pasupat and Liang, 2015) and generating fluent and faithful text from tables (Lebret et al. , 2016; Parikh et al. , 2020).",
"A table is regarded as hierarchical if its header exhibits a multi-level structure (Lim and Ng, 1999; Chen and Cafarella, 2014; Wang et al. , 2020).",
"Hierarchical tables are widely used, especially in data products, statistical reports, and research papers in government, finance, and science-related domains.",
"Hierarchical tables challenge QA and NLG due to: (1) Hierarchical indexing.",
"Hierarchical headers, such as D2:G3 and A4:A25 in Figure 1, are informative and intuitive for readers, but make cell selection much more compositional than flat tables, requiring multi-level and bi-dimensional indexing.",
"For example, to select the cell E5 (66.6), one needs to specify two top header cells, Master's and Percent, and two left header cells, All full-time and Self-support.",
"(2) Implicit calculation relationships among quantities.",
"In hierarchical tables, it is common to insert aggregated rows and columns without explicit indications, e.g., total (columns B,D,F and rows 4,6,7,20) and proportion (columns C,E,G), which challenge precise numeri-1094 cal inference.",
"(3) Implicit semantic relationships among entities.",
"There are various cross-row, cross-column, and cross-level entity relationships, but lack explicit indications, e.g., source and mecha-nism in A2 describe A6:A19 and A20:A25 respectively, and D2 (Master's) and F2 (Doctoral) can be jointly described by a virtual entity, De-gree.",
"How to identify semantic relationships and link entities correctly is also a challenge.",
"In this paper, we aim to build a dataset for hierarchical table QA and NLG.",
"But without sufficient data analysts, it's hard to ensure questions and descriptions are meaningful and diverse (Gururangan et al. , 2018; Poliak et al. , 2018).",
"Fortunately, large amounts of statistical reports are public from a variety of organizations (StatCan; NSF; Census; CDC; BLS; IMF), containing rich hierarchical tables and textual descriptions.",
"Take Statistics Canada (Stat-Can) for example, it consists of 6 , 039 reports in 27 domains authored by over 1,000 professionals.",
"Importantly, since both tables and sentences are authored by domain experts, sentences are natural and reflective of real understandings of tables.",
"To this end, we propose a new dataset, HiTab, for QA and NLG on hierarchical tables.",
"(1) All sentence descriptions of hierarchical tables are carefully extracted and revised by human annotators.",
"(2) It shows that annotations of fine-grained and lexical-level entity linking significantly help table QA (Lei et al. , 2020; Shi et al. , 2020), motivating us to align entities in text with table cells.",
"In addition to entity, we believe aligning quantities (Ibrahim et al. , 2019), especially composite quantities (computed by multiple cells), is also important for table reasoning, so we annotate underlying numerical relationships between quantities in text and table cells, as Table 1 shows.",
"(3) Since real sentences in statistical reports are natural, diverse, and reflective of real understandings of tables, we devise a process to construct QA pairs based on existing sentence descriptions instead of asking annotators to propose questions from scratch.",
"HiTab presents a strong challenge to state-of-the-art baselines.",
"For the QA task, MAPO (Liang et al. , 2018) only achieves 29 .",
"2% accuracy due to the ineffectiveness of the logical form customized for flat tables.",
"To leverage the hierarchy for table reasoning, we devise a hierarchy-aware logical form for table QA, which shows high effectiveness.",
"We propose partially supervised training given annotations of linked mentions and formulas, which helps models to largely reduce spurious predictions and achieve 45 .",
"1% accuracy.",
"For the NLG task, models also have difficulties in understanding deep hierarchies and generate complex analytical texts.",
"We explore controlled generation (Parikh et al. , 2020), showing that conditioning on both aligned cells and calculation types helps models to generate meaningful texts.",
"We design an annotation process with six steps.",
"To well-handle the annotation complexity, we recruit 18 students or graduates (13 females and 5 males) in computer science, finance, and English majors from top universities, and provide them with comprehensive online training, documents, and QAs.",
"The annotation totally costs 2,400 working hours.",
"We will discuss the ethical considerations in Section 8. 2.1 Hierarchical Table Collection We select two representative organizations, Statistics Canada (StatCan) and National Science Foundation (NSF), that are rich of statistical reports.",
"Different from Census; CDC; BLS; IMF that only provide PDF reports where table hierarchies are hard to extract precisely (Schreiber et al. , 2017), StaCan and NSF also provide reports in HTML, from which cell information such as text and formats can be extracted precisely using HTML tags.",
"First, we crawl English HTML statistical reports published in recent five years from StatCan ( 1 , 083 reports in 27 well-categorized domains) and NSF ( 208 reports from 11 organizations in science foundation domain).",
"We merge StatCan and NSF and get the combination of various domains.",
"In addition, ToTTo contains a small proportion ( 5 . 03% ) of hierarchical tables, so we include them to cover more domains from Wikipedia.",
"To keep the balance between statistical reports and Wikipedia pages, we include random 1 , 851 tables ( 50% of our dataset) from ToTTo.",
"Next, we transform HTML tables to spreadsheet tables using a preprocessing script.",
"Since spreadsheet formula is easy to write, execute, and check, the spreadsheet is naturally a great annotation tool to align quantities and answer questions.",
"To enable correct formula execution, we normalize quantities in data cells by excluding surrounding superscripts, internal commas, etc.",
"Extremely small or large tables are filtered out (Appendix A.1 gives more details).",
"Sentences consisting of multiple semantic-independent sub-sentences will be carefully split into multiple ones.",
"Annotators are instructed to eliminate redundancy and ambiguity in sentences through revisions including decontextu-alization and phrase deletion (Parikh et al. , 2020).",
"Fortunately, most sentences in statistical reports are clean and fully supported by table data, so few revisions are needed to get high-quality text.",
"In this phase, annotators are instructed to align mentions in text with corresponding cells in tables.",
"It has two parts, entity alignment and quantity alignment, as shown in Table 1. For entity alignment, we record the mappings from entity mentions in text to corresponding cells.",
"Single-cell quantity mentions can be linked similar with entity mentions, but composite quantity mentions are calculated from two or more cells through operators like max/sum/div/diff (Table 2).",
"The spreadsheet formula is powerful and easy-to-use for tabular data calculation, so we use the formula to record the calculations process of composite quantities in text, e.g., 10 points higher' ( =G23-G24 ).",
"Although quantities are often 2 For samples with XLOOKUP or IF formulas, we didn't explicitly provide the formulas in dataset because some reasoning logics are still too complex to be covered by them, e.g., the candidate cells are not on a continuous row/column.",
"Instead, we manually check the answer cell(s) and provide the answer cell reference(s) for these samples.",
"Existing QA datasets instruct annotators to propose questions from scratch, but it's hard to guarantee the meaningfulness and diversity of proposed questions.",
"In HiTab, we simply revise declarative sentences into QA pairs.",
"For each sentence, annotators need to identify a target key part to question about (according to the underlying logic), then convert it to the QA form.",
"All questions are answered by formulas that reflect the numerical inference process.",
"For example, the XLOOKUP' operator is frequently used to retrieve the header cells of superlatives, as shown in Table 1. To keep sentences as natural as they are, we do not encourage unnecessary sentence modification during the conversion.",
"If an annotator finds multiple ways to question regarding a sentence, he/she only needs to choose one way that best reflects the overall meaning.",
"We ask the two most experienced annotators to perform regular inspections and the final review.",
"(1) In the labeling process, they regularly sample annotations (about 10% ) from all annotators to give timely feedback on labeling issues.",
"(2) Finally, they review all annotations and fix labeling errors.",
"Also, to assist the final review, we write a script to automatically identify spelling issues and formula issues.",
"To double-check the labeling quality before the final review, we study the agreement of annotators by collecting and comparing annotations on randomly sampled 50 tables from two annotators.",
"It shows 0 .",
"89 and 0 .",
"82 for quantity and entity alignment in Fleiss Kappa respectively, which are regarded as almost perfect agreement (Landis and Koch, 1977), and 64 .",
"5 in BLEU-4 after sentence revision, which also indicates high agreement.",
"We further show annotation artifacts are substantially avoided 1096 Dataset Tables Datasource Fine-grainedalignment QAandNLGtasks Table Question Realsentences Entity Quantity QA NLG Questions Wordsper Sentences orsentence revisedpertable question WTQ(PasupatandLiang,2015) 2,108 Wikipedia Post-created -Yes -22,033 10.0 WikiSQL(Zhong etal. ,2017) 26,521 Wikipedia Post-created -Yes -80,654 11.7 Spider(Yu etal. ,2018) 1,020 Collegedata,WikiSQL Post-created -Yes -10,181 13.2 HybridQA(Chen etal. ,2020b) 13,000 Wikipedia Post-created -Yes -69,611 18.9 TAT-QA(Zhu etal. ,2021) 2,757 Financialreports(PDF) Post-created -Yes -16,552 12.5 FinQA(Chen etal. ,2021) 2,776 Financialreports(PDF) Post-created -Yes -8,281 16.6 DART(Nan etal. ,2020) 5,623 WTQ,WikiSQL,...",
"We follow existing work (Lim and Ng, 1999; Chen and Cafarella, 2014; Wang et al. , 2020) and use the tree structure to model hierarchical headers.",
"Since cell formats such as merging, indentation, and font bold, are commonly used to present hierarchies, we adapt heuristics in (Wang et al. , 2020) to extract top and left hierarchical trees, which has high accuracy.",
"We go through 100 randomly sampled tables in HiTab, 94% of them are precisely extracted.",
"Figure 8 in Appendix shows an illustration.",
"Table 3 shows a comprehensive comparison of related datasets.",
"HiTab is not among the largest ones, but (1) it is the first dataset to study QA and NLG over hierarchical tables (accounting for 98.1% tables in HiTab) in-depth; (2) it is annotated with fine-grained entity and quantity alignment; (3) compared with TAT-QA, FinQA, and NumericNLG that are single-domain, HiTab has a wide coverage of different domains from statistical reports and Wikipedia, even wider than ToTTo or WTQ that only involves Wikipedia tables; (4) the number of real descriptions per table ( 5 . 0 ) in statistical reports (HiTab) is much richer than 1 .",
"4 in Wikipedia (ToTTo) and 3 .",
"8 in scientific papers, contributing more analytical aspects per table.",
"operations: domains are diverse, covering 28 domains from statistical reports (fully listed in Appendix A.3) and other open domains from Wikipedia; a large proportion of questions involves complex cell selection and numerical operations.",
"Table QA is essential for table understanding, document retrieval, ad-hoc search, etc .",
"Hierarchical tables are quite common in these scenarios like in webpages and reports, while current Table QA tasks and methods focus on simple flat tables.",
"Problem Statement Hierarchical Table QA is defined as follows: given a hierarchical table t and a question x in natural language, output answer y .",
"The question-answer pair should be fully supported by the table.",
"Our dataset D = { ( x i , t i , y i ) } , i [1 , N ] is a set of N question-table-answer triples.",
"Table QA is usually formulated as a semantic parsing problem (Pasupat and Liang, 2015; Liang et al. , 2017), where a parser converts the question into logical form, and an executor executes it to produce the answer.",
"However, existing logical forms for Table QA (Pasupat and Liang, 2015; Liang et al. , 2017; Yin et al. , 2020) are customized for flat or database tables.",
"The three challenges mentioned in Section 1 (hierarchical indexing, implicit indexing relationships, and implicit semantic relationships) make QA more difficult on hierarchical tables.",
"To this end, we propose a hierarchy-aware logical form that exploits table hierarchies to mitigate these challenges.",
"Specifically, we define region as the operating object, and propose two functions for hierarchical region selection.",
"Definitions Given tree hierarchies of tables extracted in Section 2.6, we define header as a header cell (e.g., A7(Federal) in Figure 1), and level as a level in the left/top tree (e.g., A5,A6,A20 are on the same level).",
"Existing logical forms on tables treat 1097 rows as operating objects and columns as attributes, and thus can not perform arithmetic operations on cells in the same row.",
"However, a row in hierarchical tables is not necessarily a subject or record, thus operations can be applied on cells in the same row.",
"Motivated by this, we define region as our operating object, which is a data region in table indexed by both left and top headers (e.g., B6:C19 is a rectangular region indexed by A6,B2).",
"The logical form execution process is divided into two phases: region selection and region operation.",
"Region Selection We design two functions ( filter tree h ) and ( filter level l ) to do region selection, where h is a header, l is a level.",
"Functions can be applied sequentially: the subsequent function applies on the return region of the previous function.",
"( filter tree h ) selects a sub-tree region according to a header cell h : if h is a leaf header (e.g., A8), the selected region should be the row/column indexed by h (row 8); if h is a non-leaf header (e.g., A7), the selected region should be the rows/columns indexed by both h and its children headers (row 7-16).",
"( filter level l ) selects a subtree from the input tree according to a level l and return the sub-region indexed by headers on level l .",
"These two functions mitigate aforementioned three challenges: (1) hierarchical indexing is achieved by applying these two functions sequentially; (2) with filter level , data with different calculation types (e.g., rows 4-5) will not be co-selected, thus not incorrectly operated together; (3) level-wise semantics can be captured by aggregating header cell semantics (e.g., embeddings) on this level.",
"Some logical form execution examples are shown in Appendix C.2.",
"Region Operation Operators are applied on the selected region to produce the answer.",
"We define 19 operators, mostly following MAPO (Liang et al. , 2018), and further include some operators (e.g., difference rate ) for hierarchical tables.",
"Complete logical form functions are shown in Appendix C.1.",
"We present baselines in two branches.",
"One is logical form-based semantic parsing, and the other is end-to-end table parsing without logical forms.",
"Neural Symbolic Machine (Liang et al. , 2017) is a powerful semantic parsing framework consisting of a programmer to generate programs from NL and save intermediate results, and a computer to execute programs.",
"We replace the LSTM encoder with BERT (Devlin et al. , 2018), and implement a lisp interpreter for our logical forms as executor.",
"Table is linearized by placing headers in level order, which is shown in detail in Appendix C.4.",
"TaPas (Herzig et al. , 2020) is a state-of-the-art end-to-end table parsing model without generating logical forms.",
"Its power to select cells and reason over tables is gained from its pretraining on millions of tables.",
"To fit TaPas input, we convert hierarchical tables into flat ones following WTQ (Pasupat and Liang, 2015).",
"Specifically, we unmerge the cells spanning many rows/columns on left/top headers and duplicate the contents into unmerged cells.",
"The first top header row is specified as column names.",
"In weak supervision, the model is trained with QA pairs, without golden logical forms.",
"For NSM, we compare three widely-studied learning paradigms: MML (Dempster et al. , 1977) maximizes the marginal likelihood of observed programs.",
"REINFORCE (Williams, 1992) maximizes the reward of on-policy samples.",
"MAPO (Liang et al. , 2018) learns from programs both inside and outside buffer, and samples efficiently by systematic exploration.",
"Since these methods require consistent programs for learning or warm start, we randomly search 15 , 000 programs per sample before training.",
"The pruning rules are shown in Appendix C.3.",
"Finally, 6 .",
"12 consistent programs are found per sample.",
"Given labeled entity links, quantity links, and calculations (from the formula), we further explore to guide training in a partially supervised way.",
"These three annotations indicate selected headers, region, and operators in QA 3 .",
"For NSM, we exploit them to prune spurious programs,",
"i.e.",
", incorrect programs that accidentally produce correct answers, in two ways.",
"(1) When searching consistent programs, besides producing correct answers, programs are required to satisfy at least two constraints.",
"In this way, the average consistent programs reduces from 6 .",
"12 to 2 .",
"13 per sample.",
"(2) When training, satisfying each condition will add 0 .",
"2 to the original 3 Entity and quantity alignments in text also occur in the question in most cases.",
"In QA, we apply a simple n-gram matching algorithm to filter out the alignments not in questions.",
"binary 0/1 reward.",
"Sampled programs with reward r 1 .",
"4 are added to the program buffer.",
"For TaPas, we additionally provide answer coordinates and calculation types in training following its WikiSQL setting.",
"We use Execution Accuracy ( EA ) as our metric following (Pasupat and Liang, 2015), measuring the percentage of samples with correct answers.",
"We also report Spurious Program Rate to study the percentage that incorrect logical forms produce correct answer.",
"Since we do not have golden logical forms, we manually annotate logical forms for 150 random samples in dev set for evaluation.",
"We split 3 , 597 tables into train ( 70% ), dev ( 15% ) and test ( 15% ) with no overlap.",
"We download pre-trained models from huggingface 4 .",
"For NSM, we utilize bert-base-uncased', and fine-tune 20 K steps on HiTab.",
"Beam size is 5 for both training and inference.",
"To test MAPO original logical form, we convert flatten tables as we do for TaPas.",
"For TaPas, we adopt the PyTorch (Paszke et al. , 2019) version in huggingface.",
"We utilize tapas-base', and fine-tune 40 epochs on HiTab.",
"All experiments are conducted on a server with four V100 GPUs.",
"Table 4 summarizes our evaluation results.",
"Weak Supervision First, MAPO with our hierarchy-aware logical form outperforms that using its original logical form by a large margin 11 .",
"5% , indicating the necessity of designing a logical form leveraging hierarchies.",
"Second, MAPO achieves the best EA ( 40 . 7% ) with the lowest spurious rate ( 19% ).",
"But > 50% questions are answered incorrectly, proving QA on HiTab is challenging.",
"Third, though TaPas benefits from pretraining on tables, it performs worse than the best logical form-based method without table pretraining.",
"Partial Supervision From Table 4, we can conclude the effectiveness of partial supervision in two aspects.",
"First, it improves EA .",
"The model learns how to deal with more cases given high-quality programs.",
"Second, it largely lowers %Spurious .",
"The model learns to generate correct programs instead of some tricks.",
"MML, whose performance highly depends on the quality of searched programs, benefits the most ( 36 . 7% to 45 . 1% ), indicating partial supervision improves the quality of consistent programs by pruning spurious ones.",
"However, TaPas does not gain much improvements from partial supervision, which we will discuss in the next paragraph.",
"Error Analysis For TaPas, 98 .",
"7% of success cases are cell selections, which means TaPas benefits little from partial supervision.",
"This may be caused by: (1) TaPas does not support some common operators on hierarchical table like difference ; (2) the coarse-to-fine cell selection strategy first selects columns then cells, but cells in different columns may also aggregate in hierarchical tables.",
"For MAPO under partial supervision, we analyze 100 error cases.",
"Error cases fall into four categories: (1) entity missing ( 23% ): the header to filter is not mentioned in question, where a common case is omitted Total ; model failure, including (2) failing to select correct regions ( 38% ) and (3) failing to generate correct operations ( 20% ); (4) out of coverage ( 19% ): question types unsolvable with the logical form, which is explained in Appendix C.1.",
"Spurious programs occur mostly in two patterns.",
"In cell selection, there may exist multiple data cells with correct answers (e.g., G9,G16 in Figure 1), while only one is golden.",
"In superlatives, the model can produce the target answer by operating on different regions (e.g., in both region B21:B25 and B23:B25, B23 is the largest).",
"Level-wise Analysis In Figure 3, we present level-wise accuracy of HiTab QA with MAPO and our hierarchy-aware logical form.",
"Level here stands for sum of left and top header levels.",
"As shown, the QA accuracy degrades when table level increases as table structure becomes more complex, except for level = 2 ,",
"i.e., tables with no hierarchies.",
"The reason level = 2 performs relatively worse might be that only 1 .",
"9% tables without hierarchies are seen in HiTab.",
"We also present an annotated table 1099 42.9 50.0 44.7 40.5 14.1 1.9 13.6 53.4 25.2 5.9 2 3 4 5 >5 QA Accuacy Proportion in Dataset Figure 3: Level-wise QA accuracy and proportion of samples with MAPO and hierarchy-aware logical form.",
"example from our dataset to illustrate in detail the challenges mentioned in Section 1 that hierarchical tables bring in Appendix C.5.",
"Some works formulate table-to-text as a summarization problem (Lebret et al. , 2016; Wiseman et al. , 2017).",
"However, since a full table often contains quite rich information, there lack explicit signals on what to generate, which renders the task unconstrained and the evaluation difficult.",
"On the other hand, some recent works propose controlled generation to enable more specific and logical generation: (1) LogicNLG generates a sentence conditioned on a logical form guiding symbolic operations over given cells, but writing correct logical forms as conditions is challenging for common users who are more experienced to write natural language directly, thus restricting the application to real scenario; (2) ToTTo generates a sentence given a table with a set of highlighted cells.",
"In ToTTo's formulation, the condition of cell selection is much easier to specify than the logical form, but it neglects symbolic operations which are critical for generating some analytical sentences involving numerical reasoning in HiTab.",
"We place HiTab as a middle-ground of ToTTo and LogicNLG to make the task more controllable than ToTTo and closer to real application than LogicNLG.",
"In our setting, given a table, the model generates a sentence conditioned on a group of selected cells (similar to ToTTo) and operators (much easier to be specified than logical forms).",
"Although we use two strong conditions to guide symbolic operations over cells, there still leaves a considerable amount of content planning to be done by the model, such as retrieving contextual cells in a hierarchical table given selected cells, identifying how operators are applied on given cells, and composing sentences in a faithful and logical manner.",
"We now define our task as: given a hierarchical table T , highlighted cells C , and specified operators O , the goal is to generate a faithful description S .",
"The dataset H = ( T i , S i ) , i [1 , N ] is a set of N table-description instances.",
"Description S i is a sentence about a table T i and involves a series of operations O i = [ O i 1 , O i 2 , . . . , O in ] on certain table cells C i = [ c i 1 , c i 2 , . . . , c im ] .",
"An entity or quantity in text can be supported by table cells if it is directly stated in cell contents, or can be logically inferred by them.",
"Different from only taking data cells as highlighted cells (Parikh et al. , 2020), we also take header cells as highlighted cells, and it is usually the case for superlative ARG-type operations on a specific header level in hierarchical tables, e.g., Teaching assistantships is retrieved by ARGMAX in Figure 1. In our dataset, highlighted cells are extracted from annotations of the entity and quantity alignment.",
"Highlighted cells can tell the target for text generation, but is not sufficient, especially for analytical descriptions involving cell operations in HiTab.",
"So we propose to use operators as extra control.",
"It contributes to text clarity and meaningfulness in two ways.",
"(1) It clarifies the numerical reasoning intent on cells.",
"For example, given the same set of data cells, applying SUM, AVERAGE, or COUNT conveys different meanings thus should yield different texts.",
"(2) Operation results on highlighted cells can be used as additional input sources.",
"Existing seq2seq models are not powerful enough to do arithmetic operations (Thawani et al. , 2021), e.g., adding up a group of numbers, and it greatly limits their ability to generate correct numbers in sentences.",
"Explicitly pre-computing the calculation results is a promising alternative way to mitigate this gap in seq2seq models.",
"Operators are extracted from annotations of formulas shown in Table 2. 4.2.3 Sub Table Selection and Serialization Sub Table Selection Under controls of selected cells and operators, we devise a heuristic to retrieve all contextual cells as a sub table.",
"(1) We start with highlighted cells extracted from our entity and quantity alignment, then use the extracted 1100 table hierarchy to group the selected cells into the top header, the left header, and the data region.",
"(2) Based on the extracted table hierarchy, we use the source set of top and left header cells to include their indexed data cells, and we also use the source set of data cells to include corresponding header cells.",
"(3) We also include their parent header cells in table hierarchy to construct a full set of headers.",
"In the end, we take the union of of them as the result of sub table selection.",
"Serialization On each sub table, we do a row-turn traversal on linked cells and concatenate their cell strings using [SEP] tokens.",
"Operator tokens and calculation results are also concatenated with the input sequence.",
"We also experimented with other serialization methods, such as header-data pairing or template-based method, yet none reported superiority over the simple concatenation.",
"Appendix B.1 gives an illustration.",
"state-of-the-art text generation methods on HiTab.",
"Pointer Generator (See et al. , 2017) A LSTM-based seq2seq model with copy mechanism.",
"While originally designed for text summarization, it is also used in data-to-text (Gehrmann et al. , 2018).",
"BERT-to-BERT (Rothe et al. , 2020) A transformer encoder-decoder model (Vaswani et al. , 2017) initialized with BERT (Devlin et al. , 2018).",
"BART (Lewis et al. , 2019) A pre-trained denoising autoencoder with standard Transformer-based architecture and shows effectiveness in NLG.",
"T5 (Raffel et al. , 2019) A transformer-based pretrained model.",
"It converts all textual language problems into text-to-text and proves to be effective.",
"We use two automatic metrics, BLEU and PARENT.",
"BLEU (Papineni et al. , 2002) is broadly used to evaluate text generation.",
"PARENT (Dhingra et al. , 2019) is proposed specifically for data-to-text evaluation that additionally aligns n-grams from the reference and generated texts to the source table.",
"Samples are split into train ( 70% ), dev ( 15% ), and test ( 15% ) sets just the same as the QA task.",
"The maximum length of input/output sequence is set to 512 / 64 .",
"Implementation details of all baselines are given in Appendix B.2.",
"As shown in Table 5, first , from an overall point of view, both metrics are not scored high.",
"This well proves the difficulty of HiTab.",
"It could be caused by the hierarchical structure, as well as statements with logical and numerical complexity.",
"Second , by comparing two controlled scenarios (cell highlights & both cell highlights and operators), we see that adding operators to conditions greatly help models to generate descriptions with higher scores, showing the effectiveness of our augmented conditional generation setting.",
"Third , results on two controlled scenarios across baselines are quite consistent.",
"Replacing the traditional LSTM with transformers shows large increasing.",
"Leveraging seq2seq-like pretraining yields a rise of +6 .",
"5 BLEU and +11 .",
"3 PARENT.",
"Lastly, between pretrained transformers, T5 reports higher scores over BART, probably for T5 is more extensively tuned during pre-training.",
"Further, to study the generation difficulty concerning table hierarchy , we respectively evaluate samples at different hierarchical depths,",
"i.e.",
", table's maximum depths in top and left header trees.",
"In groups of 2, 3, 4+ depth, BLEU scores 31 .",
"7 , 26 .",
"5 , and 21 .",
"3 ; PARENT scores 40 .",
"9 , 36 .",
"5 , and 31 .",
"6 .",
"The reason could be that, as the table header hierarchy grows deeper, the data indexing becomes increasingly compositional, rendering it harder to baseline models to configure entity relationships and compose logical sentences.",
"Table-to-Text Existing datasets are restricted in flat tables or specific subjects (Liang et al. , 2009; Chen and Mooney, 2008; Wiseman et al. , 2017; Novikova et al. , 2016; Banik et al. , 2013; Lebret et al. , 2016; Moosavi et al. , 2021).",
"The most related table-to-text dataset to HiTab is ToTTo (Parikh et al. , 2020), in which complex tables are also included.",
"There are two main differences between HiTab and ToTTo: (1) in ToTTo, hierarchical tables only account for a small proportion ( 5% ), and there are no indication and usage of table hierarchies.",
"(2) in addition to cell highlights, Hitab conditions on 1101 Figure 4: A meaningful but challenging case in HiTab.",
"operators that reflect symbolic operations on cells.",
"Table QA mainly focuses on DB tables (Wang et al. , 2015; Yu et al. , 2018; Zhong et al. , 2017) and flat web tables (Pasupat and Liang, 2015; Sun et al. , 2016).",
"Recently, there are some datasets on domain-specific table QA (Chen et al. , 2021; Zhu et al. , 2021) and jointly QA over tables and texts (Chen et al. , 2020b; Zhu et al. , 2021), but hierarchical tables still have not been studied in depth.",
"CFGNN (Zhang, 2020) and GraSSLM (Zhang et al. , 2020) uses gragh neural networks to encode tables for QA, but all tables are database tables and relational web tables without hierarchies, respectively.",
"Wang et al. (2021) include some hierarchical tables but only focuses on table search.",
"HiTab also presents cross-domain and complicated-calculation challenges.",
"(1) To explore cross-domain generalizability, we randomly split train/dev/test by domains for three times and present the average results of our best methods in Table 6. We found decreases in all metrics in QA and NLG.",
"(2) Figure 4 shows a case that challenges existing methods: performing complicated calculations requires to jointly consider quantity relationships, header semantics, and hierarchies.",
"We present a new dataset, HiTab, that simultaneously supports QA and NLG on hierarchical tables, where tables are collected from statistical reports and Wikipedia in various domains.",
"Importantly, we provide fine-grained annotations on entity and quantity alignment.",
"In experiments, we introduce strong baselines and conduct detailed analysis on QA and NLG tasks on HiTab.",
"Results suggest that HiTab can serve as a challenging and valuable benchmark for future research on complex tables.",
"This work presents HiTab, a free and open English dataset for the research community to study table question-answering and table-to-text over hierarchical tables.",
"Our dataset contains well-processed tables, annotations (QA pairs, target text, and bidirectionally mappings between entities and quantities in text and the corresponding cells in table), recognized table hierarchies, and source code.",
"Data in HiTab are collected from two public organizations, StatCan and NSF.",
"Both of them allow sharing and redistribution of their public reports, so there is no privacy issue.",
"We collect tables and accompanied descriptive sentences from StatCan and NSF.",
"We also include hierarchical tables in Wikipedia from ToTTo, which is a public dataset under MIT license, so there is no risk to use it.",
"And in the labeling process, annotators need to check if there exist any names or uniquely identifies individual people or offensive content.",
"They did not find any such sensitive information in our dataset.",
"We recruit 18 students or graduates in computer science, fi-nance, and English majors from top universities( 13 females and 5 males).",
"Each student is paid $7 .",
"8 per hour (above the average local payment of similar jobs), totally spending 2 , 400 hours.",
"We finally get 3 , 597 tables and 10 , 672 well-annotated sentences.",
"And the data got approval from an ethics review board by an anonymous IT company.",
"The details for our data collection and characteristics are introduced in Section 2. References Eva Banik, Claire Gardent, and Eric Kow."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Topic models have been widely used to learn text representations and gain insight into document corpora.",
"To perform topic discovery, most existing neural models either take document bag-of-words (BoW) or sequence of tokens as input followed by variational inference and BoW reconstruction to learn topic-word distribution.",
"However, leveraging topic-word distribution for learning better features during document encoding has not been explored much.",
"To this end, we develop a framework TAN-NTM, which processes document as a sequence of tokens through a LSTM whose contextual outputs are attended in a topic-aware manner.",
"We propose a novel attention mechanism which factors in topic-word distribution to enable the model to attend on relevant words that convey topic related cues.",
"The output of topic attention module is then used to carry out variational inference.",
"We perform extensive ablations and experiments resulting in 9 15 percentage improvement over score of existing SOTA topic models in NPMI coherence on several benchmark datasets 20Newsgroups, Yelp Review Polarity and AGNews.",
"Further, we show that our method learns better latent document-topic features compared to existing topic models through improvement on two downstream tasks: document classification and topic guided keyphrase generation.",
"Topic models (Steyvers and Griffiths, 2007) have been popularly used to extract abstract topics which occur commonly across documents in a corpus.",
"Each topic is interpreted as a group of semantically coherent words that represent a common concept.",
"In addition to gaining insights from unstructured texts, topic models have been used in several tasks equal contribution work done during summer internship at Adobe of practical importance such as learning text representations for document classification (Nan et al., 2019), keyphrase extraction (Wang et al., 2019b), understanding reviews for e-commerce recommendations (Jin et al., 2018), semantic similarity detection between texts (Peinelt et al., 2020) etc.",
"Early works on topic discovery include statistical methods such as Latent Semantic Analysis (Deerwester et al., 1990), Latent Dirichlet Allocation (LDA) (Blei et al., 2003) which approximates each topic as a probability distribution over word vocabulary (known as topic-word distribution) and performs approximate inference over document-topic and topic-word distributions through Variational Bayes.",
"This was followed by Markov Chain Monte Carlo (MCMC) (Andrieu et al., 2003) based inference algorithm Collapsed Gibbs sampling (Griffiths and Steyvers, 2004).",
"These methods require an expensive iterative inference step which has to be performed for each document.",
"This was circumvented through introduction of deep neural networks and Variational Autoencoders (VAE) (Kingma and Welling, 2013), where variational inference can be performed in single forward pass.",
"Neural variational inference topic models (Miao et al., 2017; Ding et al., 2018; Srivastava and Sutton, 2017) commonly convert a document to Bag-of-Words (BoW) determined on the basis of frequency count of each vocabulary token in the document.",
"The BoW input is processed through an MLP followed by variational inference which samples a latent document-topic vector.",
"A decoder network then reconstructs original BoW using latent document-topic vector through topic-word distribution (TWD).",
"VAE based neural topic models can be categorised on the basis of prior enforced on latent document-topic distribution.",
"Methods such as NVDM (Miao et al., 2016), NTM-R (Ding et al., 2018), NVDM-GSM (Miao et al., 2017) use the Gaussian prior.",
"NVLDA and ProdLDA (Srivastava and Sutton, 2017) use approximation to the Dirichlet prior which enables model to capture the fact that a document stems from a sparse set of topics.",
"However, improving document encoding in topic models in order to capture document distribution and semantics better has not been explored much.",
"In this work, we build upon VAE based topic model and propose a novel framework TAN-NTM: Topic Attention Networks for Neural Topic Modeling which process the sequence of tokens in input document through an LSTM (Hochreiter and Schmidhuber, 1997) whose contextual outputs are attended using Topic-Word Distribution (TWD).",
"We hypothesise that TWD (being learned by the model) can be factored in the attention mechanism (Bahdanau et al., 2014) to enable the model to attend on the tokens which convey topic related information and cues.",
"We perform separate attention for each topic using its corresponding word probability distribution and obtain the topic-wise context vectors.",
"The learned word embeddings and TWD are used to devise a mechanism to determine topic weights representing the proportion of each topic in the document.",
"The topic weights are used to aggregate topic-wise context vectors.",
"The composed context vector is then used to perform variational inference followed by the BoW decoding.",
"We perform extensive ablations to compare TAN-NTM variants and different ways of composing the topic-wise context vectors.",
"For evaluation, we compute commonly used NPMI coherence (Aletras and Stevenson, 2013) which measures the extent to which most probable words in a topic are semantically related to each other.",
"We compare our TAN-NTM model with several state-of-the-art topic models (statistical (Blei et al., 2003; Griffiths and Steyvers, 2004), neural VAE (Srivastava and Sutton, 2017; Wu et al., 2020) and non-variational inference based neural model (Nan et al., 2019)) outperforming them on three benchmark datasets of varying scale and complexity: 20Newsgroups (20NG) (Lang, 1995), Yelp Review Polarity and AGNews (Zhang et al., 2015).",
"We verify that our model learns better document feature representations and latent document-topic vectors by achieving a higher document classification accuracy over the baseline topic models.",
"Further, topic models have previously been used to improve supervised keyphrase generation (Wang et al., 2019b).",
"We show that TAN-NTM can be adapted to modify topic assisted keyphrase generation achieving SOTA performance on StackExchange and Weibo datasets.",
"Our contributions can be summarised as: We propose a document encoding framework for topic modeling which leverages the topic-word distribution to perform attention effectively in a topic aware manner.",
"Our proposed model achieves better NPMI coherence ( 9-15 percentage improvement over the scores of existing best topic models) on various benchmark datasets.",
"We show that the topic guided attention results in better latent document-topic features achieving a higher document classification accuracy than the baseline topic models.",
"We show that our topic model encoder can be adapted to improve the topic guided supervised keyphrase generation achieving improved performance on this task.",
"Development of neural networks has paved path for Variational Autoencoders (VAE) (Kingma and Welling, 2013) which enables performing Variational Inference (VI) efficiently.",
"The VAE-based topic models use a prior distribution to approximate the posterior for latent document-topic space and compute the Evidence Lower Bound (ELBO) using the reparametrization trick.",
"Since our work is based on variational inference, we use ProdLDA and NVLDA (Srivastava and Sutton, 2017) as baselines for comparison.",
"The Dirichlet distribution has been commonly considered as a suitable prior on the latent document-topic space since it captures the property that a document belongs to a sparse subset of topics.",
"However, in order to enforce the Dirichlet prior, VAE methods have to resort to approximations of the Dirichlet distribution.",
"Several works have proposed solutions to impose the Dirichlet prior effectively.",
"Rezaee and Ferraro (2020) enforces Dirichlet prior using VI without reparametrization trick through word-level topic assignments.",
"Some works address the sparsity-smoothness trade-off in dirichlet distribution by factoring dirichlet parameter vector as a product of two vectors (Burkhardt and Kramer, 2019).",
"Wasserstein Autoencoders (WAE) (Tolstikhin et al., 2017) have led to the development of non-variational inference based topic model: Wasserstein-LDA (W-LDA) which minimizes the wasserstein distance, a type of Optimal Transport (OT) distance, by leveraging distribution matching to the Dirichlet prior.",
"We compare our work with W-LDA as a baseline.",
"Zhao et al. (2021) proposed an OT based topic model which directly calculates topic-word distribution without a decoder.",
"Adversarial Topic Model (ATM) (Wang et al., 2019a) was proposed based on GAN (Generative Adversarial Network) (Goodfellow et al., 2014) but it cannot infer document-topic distribution.",
"A major advantage of W-LDA over ATM is distribution matching in document-topic space.",
"Bidirectional Adversarial Topic model (BAT) (Wang et al., 2020) employs a bilateral transformation between document-word and document-topic distribution, while Hu et al. (2020) uses CycleGAN (Zhu et al., 2017) for unsupervised transfer between document-word and document-topic distribution.",
"Hierarchical topic models (Viegas et al., 2020) utilize relationships among the latent topics.",
"Supervised topic models have been explored previously where the topic model is trained through human feedback (Kumar et al., 2019) or with a task specific network simultaneously such that topic extraction is guided through task labels (Per-gola et al., 2019; Wang and Yang, 2020).",
"Card et al. (2018) leverages document metadata but without metadata their method is same as ProdLDA which is our baseline.",
"Topic modeling on document networks has been done leveraging relational links between documents (Zhang and Lauw, 2020; Zhou et al., 2020).",
"However our problem setting is completely different, we extract topics from documents in unsupervised way where document links/metadata/labels either don't exist or are not used to extract the topics.",
"Some very recent works use pre-trained BERT (Devlin et al., 2019) either to leverage improved text representations (Bianchi et al., 2020; Sia et al., 2020) or to augment topic model through knowledge distillation (Hoyle et al., 2020a).",
"Zhu et al. (2020) and Dieng et al. (2020) jointly train words and topics in a shared embedding space.",
"However, we train topic-word distribution as part of our model, embed it using word embeddings being learned and use resultant topic embeddings to perform attention over sequentially processed tokens.",
"iDocNade (Gupta et al., 2019) is an autoregressive topic model for short texts utilizing pre-trained embeddings as distributional prior.",
"However, it attains poorer topic coherence than ProdLDA and GNB-NTM as shown in Wu et al. (2020).",
"Some works have attempted to use other prior distributions such as Zhang et al. (2018) uses the Weibull prior, Thibaux and Jordan (2007) uses the beta distribution.",
"Gamma Negative Binomial-Neural Topic Model (GNB-NTM) (Wu et al., 2020) is one of the recent neural variational topic models which attempt to combine VI with mixed counting models.",
"Mixed counting models can better model hierarchically dependent and over-dispersed random variables while implicitly introducing nonnegative constraints in topic modeling.",
"GNB-NTM uses reparameterization of Gamma distribution and Gaussian approximation of Poisson distribution.",
"We use their model as a baseline for our work.",
"Topic models have been used with sequence encoders such as LSTM in applications like user activity modeling (Zaheer et al., 2017).",
"Dieng et al. (2016) employs an RNN to detect stop words and merges its output with document-topic vector for next word prediction.",
"Gururangan et al. (2019) uses a VAE pre-trained through topic modeling to perform text classification.",
"We perform document classification and compare our model's accuracy with the accuracy of VAE based and other topic models.",
"LTMF (Jin et al., 2018) combines text features processed through an LSTM with a topic model for review based recommendations.",
"Fundamentally different from these, we use topic-word distribution to attend on sequentially processed tokens via novel topic guided attention for performing variational inference, learning better document-topic features and improving topic modeling.",
"A key application of topic models is supervised keyphrase generation.",
"Some of the existing neural keyphrase generation methods include SEQ-TAG (Zhang et al., 2016) based on sequence tagging, SEQ2SEQ-CORR (Chen et al., 2018) based on seq2seq model without copy mechanism and SEQ2SEQ-COPY (Meng et al., 2017) which additionally uses copy mechanism.",
"Topic-Aware Keyphrase Generation (TAKG) (Wang et al., 2019b) is a seq2seq based neural keyphrase generation framework for social media language.",
"TAKG uses a neural topic model in Miao et al. (2017) and a keyphrase generation (KG) module which is conditioned on latent document-topic vector from the topic model.",
"We adapt our proposed topic model to TAKG to improve keyphrase generation and discuss it in detail later in the Experiments section.",
"LDA is a generative statistical model and assumes that each document is a distribution over a fixed number of topics (say K ) and that each topic is a distribution of words over the entire vocabulary.",
"LDA proposes an iterative process of document generation where for each document d , we draw a topic distribution from Dirichlet ( ) distribution.",
"For each word in d at index i , we sample a topic t i from Multinomial ( ) distribution.",
"w i is sampled from p ( w i | t i , ) distribution which is a multinomial probability conditioned on topic t i .",
"Given the document corpus and the parameters and , we need the joint probability distribution of a topic mixture , a set of K topics t , and a set of n words w .",
"This is given analytically by an intractable integral.",
"The solution is to use Variational Inference wherein this problem is converted into an optimization problem for finding various parameters that minimize the KL divergence between the prior and the posterior distribution.",
"This idea is leveraged at scale by the use of Variational Autoencoders.",
"The encoder processes BoW vector of the document x bow by an MLP (Multi Layer Perceptron) which then forks into two independently trainable layers to yield z & z log 2 .",
"Then a re-parametrization trick is employed to sample the latent vector z from a logistic-normal distribution (resulting from an approximation of Dirichlet distribution).",
"This is essential since back-propagation through a sampling node is infeasible.",
"z is then used by decoder's single dense layer D to yield the reconstructed BoW x rec .",
"The objective function has two terms:",
"(a) KullbackLeibler (KL) Divergence Term to match the variational posterior over latent variables with the prior and",
"(b) Reconstruction Term categorical cross entropy loss between x bow & x rec .",
"Our methodology improves upon the document encoder and introduces a topic guided attention whose output is used to sample z .",
"We use the same formulation of decoder as used in ProdLDA.",
"In this section, we describe the details of our framework where we leverage the topic-word distribution to perform topic guided attention over tokens in a document.",
"Given a collection C with | C | documents { x 1 , x 2 ,",
".., x | C | } , we process each document x into BoW vector x bow R | V | and as a token sequence x seq , where V represents the vocabulary.",
"As shown in step A in figure 1, each word w j x seq is embedded as e j RE through an embedding layer E R | V | E ( E = Embedding Dimension) initialised with GloVe (Penning-ton et al., 2014).",
"The embedded sequence { e j } | x | j =1 , where | x | is the number of tokens in x , is processed through a sequence encoder LSTM (Hochreiter and Schmidhuber, 1997) to obtain the corresponding hidden states h j RH and cell states s j RH (step B in figure 1): h j , s j = f LSTM ( e j , ( h j 1 , s j 1 )) where H is LSTM's hidden size.",
"We construct a memory bank M = (cid:104) h 1 , h 2 , ..., h | x | (cid:105) which is then used to perform topic-guided attention (step C in figure 1).",
"The output vector of the attention module is used to derive prior distribution parameters z & z log 2 (as in VAE) through two linear layers.",
"Using the re-parameterisation trick, we sample the latent document-topic vector z , which is then given as input to BoW decoder linear layer D that outputs the reconstructed BoW x rec (step D in figure 1).",
"Objective function is same as in VAE setting, involving a reconstruction loss term between x rec & x bow and KL divergence between the prior (laplace approximation to Dirichlet prior as in ProdLDA) and posterior.",
"We now discuss the details of our Topic Attention Network.",
"We intend the model to attend on document words in a manner such that the resultant attention is distributed according to the semantics of the topics relevant to the document.",
"We hypothesize that this can enable the model to encode better document features while capturing the underlying latent document-topic representations.",
"The topic-word distribution T w represents the affinity of each topic towards words in the vocabulary (which is used to interpret the semantics of each topic).",
"Therefore, we factor T w RK | V | into the attention mechanism, where K denotes the number of topics.",
"The topic-aware attention encoder and topic-word distribution influence each other during training which consequently results in convergence to better topics as discussed in detail in Experiments section.",
"where D RK V is the decoder layer which is used to reconstruct x bow from the sampled latent",
"document-topic representation z as the final step D in Figure",
"1. The topic embeddings are then used to determine the attention alignment matrix A R | x | K between each topic k { 1 , 2 , ..., K } and words in the document such that: A jk = exp( score (( TE ) k , h j )) (cid:80) | x | j (cid:48) =1 exp( score (( TE ) k , h j (cid:48) )) , score (( TE ) k , h j ) = v A (cid:62) tanh( WA [( TE ) k ; h j ]) where v A RP , WA RP ( E + H ) , ( TE ) k RE is the embedded representation of the k th topic and ; is the concatenation operation.",
"We then determine topic-wise context vector corresponding to each topic as: CT = | x | (cid:88) j =1 A j h j , [topic-wise context matrix] where denotes outer product.",
"Note that A j RK ( j th row of matrix A ) is a K dimensional vector and h j is a H dimensional vector, therefore A j h j for each j yields a matrix of order K H , hence CT RK H .",
"The final aggregated context vector c is computed as a weighted average over all rows of CT (each row representing each topic specific context vector) with document-topic proportion vector t d as weights: c = K (cid:88) k =1 ( t d ) i ( CT ) k where, ( t d ) k is a scalar, ( CT ) k RH denotes the k th row of matrix CT & t d is the document-topic distribution which signifies the topic proportions in a document.",
"To compute it, we first normalize the document BoW vector x bow and embed it using the embedding matrix E , followed by multiplication with topic embedding TE RK E : x norm = x bow (cid:80) | V | i =1 ( x bow ) i , [normalized BoW] x emb = x (cid:62) norm E, [document embedding] t d = softmax( TE x emb ) , [document-topic dist.] where x norm R | V | , x emb RE & t d RK .",
"The context vector c is the output of our topic guided attention module which is then used for sampling the latent documents-topic vector followed by the BoW decoding as done in traditional VAE based topic models.",
"We call this framework as Weighted-TAN or W-TAN where the context vector c is a weighted sum of topic-wise context vectors.",
"We also propose another model called Top-TAN or T-TAN where we use context vector of the topic with largest proportion in t d as c .",
"It has been experimentally observed that doing so yields a model which generates more coherent topics.",
"First, we find the index m of most probable topic in t d .",
"The context vector c is then the row corresponding to index m in matrix CT .",
"1. Topic Quality: We evaluate and compare quality of our proposed topic model on three benchmark datasets 20Newsgroups (20NG) 1 (Lang, 1995), AGNews (Zhang et al., 2015) and Yelp Review Polarity (YRP) 2 which are of varying complexity and scale in terms of number of documents, vocabulary size and average length of text after preprocessing 3 .",
"Table 1 summarises statistics related to these datasets used for evaluating topics quality.",
"2. Keyphrase Generation: Neural Topic Model (NTM) has been used to improve the task of supervised keyphrase generation (Wang et al., 2019b).",
"To further highlight the efficacy of our proposed encoding framework in providing better document-topic vectors, we modify encoder module of NTM with our proposed TAN-NTM and compare the performance on StackExchange and Weibo Datasets 4 .",
"Documents in AGNews are padded upto a maximum length of 50 , while those in 20NG and YRP are padded upto 200 tokens.",
"Documents with longer lengths are truncated.",
"These values were chosen such that 80 99% of all documents in each dataset were included without truncation.",
"We 1 Data link for 20NG dataset 2 Data link for AGNews and YRP datasets 3 We provide our detailed preprocessing steps in Appendix A.1 and release processed data to standardise it.",
"use batch size of 100, Adam Optimizer (Kingma and Ba, 2015) with 1 = 0 .",
"99 , 2 = 0 .",
"999 and (cid:15) = 10 8 and train each model for 200 epochs.",
"For all models except T-TAN, learning rate was fixed at 0.002 ([0 . 001 , 0 . 003] , 5) 5 .",
"T-TAN converges relatively faster than other models, therefore for smooth training, we decay its learning rate every epoch using exponential staircase scheduler with initial learning rate = 0.002 and decay rate = 0.96.",
"The number of topics K = 50 , a value widely used in literature.",
"We perform hyper-parameter tuning manually to determine the hidden dimension value of various layers: E = 200 ([100 , 300] , 5) , H = 450 ([300 , 900] , 10) and P = 350 ([10 , 400] , 10) .",
"The weight matrices of all dense layers are Xavier initialized, while bias terms are initialized with zeros.",
"All our proposed models and baselines are trained on a machine with 32 virtual CPUs, single NVIDIA Tesla V 100 GPU and 240 GB RAM.",
"We compare our TAN-NTM with various baselines in table 2 that can be enumerated as (please refer to introduction and related work for their details): 1) LDA (C.G.) : Statistical method (McCallum, 2002) which performs LDA using collapsed Gibbs 6 sampling.",
"5) NB-NTM and 6) GNB-NTM : Methods using negative binomial and gamma negative binomial distribution as priors for topic discovery 9 (Wu et al., 2020) respectively.",
"We could not compare with other methods whose official error-free source code is not publicly available yet.",
"We train and evaluate the baseline methods on same data as used for our method using NPMI coherence 10 (Aletras and Stevenson, 2013).",
"It computes the semantic relatedness between top L words in a given topic through determining similarity between their word embeddings trained over the 5 V ([ a, b ] , t ) means t values from [ a, b ] range tried for this hyper-parameter, of which V yielded best NPMI coherence.",
"corpus used for topic modeling and reports average over topics.",
"For W-LDA, we refer to their original paper to select dataset specific hyper-parameter values while training the model.",
"As can be seen in table 2, our proposed T-TAN model performs signifi-cantly better than previous topic models uniformly on all datasets achieving a better NPMI (measured on a scale of -1 to 1) by a margin of 0.028 (10.44%) on 20NG, 0.047 (14.59%) on AGNews and 0.022 (8.8%) on YRP, where percentage improvements are determined over the best baseline score.",
"Even though W-TAN does not uniformly performs better than all baselines on all datasets, it achieves better score than all baselines on AGNews and performs comparably on remaining two datasets.",
"For a more exhaustive comparison, we also evaluate our model's performance on 20NG dataset (which is the common dataset with GNB-NTM (Wu et al., 2020)) using the NPMI metric from GNB-NTM's code.",
"The NPMI coherence of our model using their criteria is 0.395 which is better than GNB-NTM's score of 0.375 (as reported in their paper).",
"However, we would like to highlight that GNB-NTM's computation of NPMI metric uses relaxed window size, whereas the metric used by us (Lau et al., 2014) uses much stricter window size while determining word co-occurrence counts within a document.",
"Lau et al. (2014) is a much more common and widely used way of computing the NPMI coherence and evaluating topic models.",
"In addition to evaluating our framework in terms of topic coherence, we also compare it with the baselines on the downstream task of document classification.",
"Topic models have been used as text feature extractors to perform classification (Nan et al., 2019).",
"We analyse the quality of encoded document representations and predictive capacity of latent document-topic features generated by our model and compare it with existing topic models 11 .",
"We train the topic model setting number of topics to 50 and freeze its weights.",
"The trained topic model is then used to infer latent document-topic features.",
"We then separately train a single layer linear classifier through cross entropy loss on the training split using the document-topic vectors as input and Adam optimizer at a learning rate of 0.01.",
"We report classification accuracy on the test split of 20NG, AGNews and YRP datasets (compris-ing of 20, 4 and 2 classes respectively) in Table",
"3. The document-topic features provided by T-TAN achieve best accuracy on AGNews (1.43% improvement over most performant baseline) with most significant improvement of 3.06% on 20NG which shows our model learns better document features.",
"T-TAN performs almost the same as the best baseline on YRP.",
"Further, to analyse the predictive performance of top topic attention based context vector, we use it instead of latent document-topic vector to perform classification which further boosts accuracy leading to an improvement of 6.9% on 20NG, 3.1% on AGNews and 1.3% on YRP datasets over the baselines.",
"We compare the running time of our method with baselines in terms of average time taken (in seconds) for performing a forward pass through the 11 Our aim is to analyse document-topic features among",
"topic models only and not to compare with other non-topic model based generic text classifiers.",
"model, where the average is taken over 10000 passes.",
"Our TAN-NTM (implemented in tensor-flow) takes 0.087s, 0.027s and 0.093s on 20NG, AGNews and YRP datasets respectively.",
"Since TAN-NTM processes the input documents as a sequence of tokens through an LSTM, its running time is proportional to the document lengths which vary according to the dataset.",
"The running time for baseline methods are: ProdLDA 0.012s (im-plemented in tensorflow), W-LDA 0.003s (imple-mented in mxnet) and GNB-NTM 0.003s (im-plemented in pytorch).",
"For baseline methods, we have used their original code implementations.",
"We found that the running time of baseline models is independent of the dataset.",
"This is because they use the Bag-of-Words (BoW) representation of the documents.",
"The sequential processing in TAN-NTM is the reason for increased running time of our models compared to the baselines.",
"In the case of AGNews, since the documents are of lesser lengths than 20NG and YRP, the running time of our TAN-NTM is relatively less for AGNews.",
"Further, the running time of other ablation variants (introduced in section 5.4) of our method on 20NG, AGNews and YRP datasets respectively are: 1) only LSTM -0.083s, 0.033s and 0.091s ; 2) vanilla attn 0.088s, 0.037s and 0.095s.",
"In this section, we compare the performance of different variants of our model namely, 1) only LSTM : final hidden state is used to derive sampling parameters z & z log 2 , 2) vanilla attn : final hidden state (w/o topic-word distribution) is used as query to perform attention (Bahdanau et al., 2014) on LSTM outputs such that context vector z is used for VI, 3) W-TAN : Weighted Topic Attention Network, 4) T-TAN : Top Topic Attention Network and 5) T-TAN w/o (without) GloVe : embedding layer in T-TAN is randomly initialised.",
"Table 4 compares the topic coherence scores of these different ablation methods on 20NG, AGNews and YRP.",
"As can be seen, applying attention performs better than simple LSTM model.",
"The weighted TAN performs better than vanilla attention model, however, T-TAN uniformly provides the best coherence scores across all the datasets compared to all other methods.",
"This shows that performing attention corresponding to the most prominent topic in a document results in more coherent topics.",
"Further, we perform an ablation to study the effect of using pre-trained embeddings for T-TAN where it can be seen using Glove for initial-ising word embeddings results in improved NPMI as compared to training T-TAN initialised with random uniform embeddings (T-TAN w/o GloVe) 12 .",
"To verify performance of T-TAN qualitatively, we display few topics generated by ProdLDA and T-TAN on AGNews in Figure",
"2. ProdLDA achieves best score among baselines on AGNews.",
"Consider comparison 1 in Figure 2: ProdLDA produces four topics corresponding to space, mixing them with nuclear weapons, while T-TAN produces two separate topics for both of these concepts.",
"In second comparison, we see that ProdLDA has problems distinguishing between closely related topics (foot-ball, olympics, cricket) and mixes them while T-TAN produces three coherent topics.",
"task specific model is assisted by the topic model and both can be trained in an end-to-end manner.",
"For this, we discuss TAKG (Wang et al., 2019b) and how our proposed topic model encoder can be adapted to achieve better performance on supervised keyphrase generation from textual posts.",
"TAKG 13 comprises of two sub-modules: (1) a topic model based on NVDM-GSM (as discussed in Introduction) using BoW as input to the encoder and (2) a Seq2Seq based model for keyphrase generation.",
"Both modules have an encoder and a decoder of their own.",
"Keyphrase generation module uses sequence input which is processed by bidirectional GRU (Cho et al., 2014) to encode input sequence.",
"The keyphrase generation decoder uses unidirectional GRU which attends on encoder outputs and takes the latent document-topic vector from the topic model as input in a differentiable manner.",
"Since topic model trains slower than keyphrase generation module, the topic model is warmed up for some epochs separately and then jointly trained with keyphrase generation.",
"Please refer to original paper (Wang et al., 2019b) for more details.",
"We adapted our proposed topic model framework by changing the architecture of encoder in the topic model of TAKG, replacing it with W-TAN and T-TAN.",
"The change subsequently results in better latent document-topic representation depicted by better performance on keyphrase generation as shown in Table 5 where the improved topic model encoding framework results in 1-2% improvement in F1 and MAP (mean average precision) on StackExchange and Weibo datasets compared to TAKG.",
"Here, even though TAKG with T-TAN performs marginally better than the baseline, TAKG with W-TAN uniformly performs much better.",
"In this work, we propose Topic Attention Network based Neural Topic Modeling framework: TAN-13",
"TAN-13 We use their code and data (link) to conduct experiments.",
"NTM to discover topics in a document corpus by performing attention on sequentially processed tokens in a topic guided manner.",
"Attention is performed effectively by factoring Topic-word distribution (TWD) into attention mechanism.",
"We compare different variants of our method through ablations and conclude that processing tokens sequentially without attention or applying attention without TWD gives inferior performance.",
"Our TAN-NTM model generates more coherent topics compared to state-of-the-art topic models on several benchmark datasets.",
"Our model encodes better latent document-topic features as validated through better performance on document classification and supervised keyphrase generation tasks.",
"As future work, we would like to explore our framework with other sequence encoders such as Transformers, BERT etc. for topic modeling."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"result",
"objective",
"objective",
"result",
"result",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective"
] |
[
"Over the last few years, there has been growing interest in learning models for physically grounded language understanding tasks, such as the popular blocks world domain.",
"These works typically view this problem as a singlestep process, in which a human operator gives an instruction and an automated agent is evaluated on its ability to execute it.",
"In this paper we take the first step towards increasing the bandwidth of this interaction, and suggest a protocol for including advice, high-level observations about the task, which can help constrain the agents prediction.",
"We evaluate our approach on the blocks world task, and show that even simple advice can help lead to sig-nificant performance improvements.",
"To help reduce the effort involved in supplying the advice, we also explore model self-generated advice which can still improve results.",
"The problem of constructing an artificial agent capable of understanding and executing human instructions is one of the oldest long-standing AI challenges (Winograd, 1972).",
"This problem has numerous applications in various domains (plan-ning, navigation and assembly) and can help accommodate seamless interaction with personal assistants in many environments.",
"Due to its central role in AI and wide applicability, this problem has seen a surge of interest recently (MacMahon et al., 2006; Branavan et al., 2009; Chen and Mooney, 2011; Tellex et al., 2011; Matuszek et al., 2012; Kim and Mooney, 2013; Misra et al., 2017).",
"Recent works (Bisk et al., 2016; Tan and Bansal, 2018) focus on exploring deep learning methods for grounding spatial language.",
"In this popular setup, human communication with robots is viewed as a single-step process, in which a natural language (NL) instruction is provided, and an outcome is observed.",
"Our goal in this paper is to explore different approaches for relaxing the single step assumption, and present initial results which we hope would motivate future work in this direction.",
"Similar to interactive dialog systems (Allen et al., 1995; Ryb-ski et al., 2007; Wen et al.), we view this problem as an interactive process, in which the human operator can observe the agents' response to their instruction and adjust it by providing advice, a form of online feedback.",
"Specifically, the advice consists of a short sentence, simplifying the user's intent.",
"We utilize two types of advice, one restricting the agent's search space to a general region ( restrictive advice ), and the other telling the agent the appropriate direction (up, down, left, right) to adjust its current prediction ( corrective advice ).",
"Our focus is on the challenging task of moving blocks on a grid (Winograd, 1972), in which the agent is given only an instruction and the state of the grid, and must predict the coordinates of where a block must be moved.",
"We follow the difficult experimental settings suggested by (Bisk et al., 2016), in which the blocks are unlabeled and can only be referenced by their spatial properties.",
"Fig. 1 describes our settings and uses the advice the target is in the lower left , to restrict the agents search space after observing the incorrect prediction placed the target block in the top half of the board.",
"To accommodate these settings, we take a two step approach.",
"First, we ground the advice text in the simulated blocks-world environment by training a neural network.",
"In the second step, we integrate the trained advice network into the end-to-end neural model proposed by Bisk et al.",
"Our architecture is described in Fig. 2.",
"The experiments we run show that this end-to-end advice model successfully grounds the meaning of our advice.",
"We propose four novel interactive advice-based protocols that can be applied on any robot communication architecture, ordered in terms of decreasing human effort.",
"As expected, as human effort lessens, performance does worsen, but all protocols outperform Bisk et al. (whom our model is identical to besides the inclusion of advice).",
"Most notably, we explore the notion of model self-generated advice, which significantly re-duces/eliminates human effort.",
"In this approach, a model is trained to automatically generate restrictive advice for a given scenario, based on the assumption that it is easier to predict a region containing the target coordinates rather than their exact location.",
"We validate this assumption by developing a neural architecture to predict the restrictive advice and show it can help improve the overall prediction quality, despite having no human assistance.",
"This section describes the architecture we developed for understanding advice, and how to incorporate it into the original Bisk et al. model to make better predictions.",
"We begin by defin-ing the Blocks World task and the types of advice we use.",
"We then introduce a model for grounding the advice and a method for incorporating the pre-trained advice understanding module into the original model.",
"Finally, we discuss an architecture for advice generation , a method for self-predicting the advice to avoid any human intervention.",
"Further details of our models and advice generation process are in the Appendix.",
"Given an input state, consisting of all the block positions on a board, and a NL instruction, the model has to predict the coordinates of the source block to be moved and its target location.",
"We follow the definition by (Bisk et al., 2016) and due to space constraints refer the reader to that paper.",
"The two types of advice we devise in this paper are designed to assist the prediction agent by providing simpler instructions in addition to the original input.",
"The first, restrictive advice, informs the agent about the general region of the source / target coordinates, such as top left .",
"These regions are determined by dividing the board into equally sized sections (two halves, four quadrants).",
"The second type of advice, corrective advice, observes the agents' predictions and determines which direction (up, down, left, right) they must be adjusted to get closer to the target.",
"Both of these are representative of information a human could easily provide to a robot in various ways (speech, using assisted devices, etc.), to help correct its predictions.",
"Specific examples are shown below.",
"We pre-train a neural network model to accurately understand the advice.",
"For both types of advice, a LSTM-RNN (Hochreiter and Schmid-huber, 1997) is used to read the advice sentence s = w 1 , w 2 , ..., w n and output the hidden state representations { h n } .",
"Prior to this, a word embedding layer is used to project the input words into high-dimension vectors { w i } .",
"For restrictive advice, the last state from the LSTM h n is fed along with a random coordinate into a Fully Connected (FC) layer.",
"The network must output a positive prediction if the random coordinate is in the region described by the advice sentence, and negative otherwise.",
"For corrective advice, the last state from the LSTM h n is fed along with a random coordinate into a FC layer, and the network must output a coordinate that follows the advice.",
"For example, if the advice is move the block down , the predicted coordinate must be below the random input coordinate.",
"If the advice is followed, the network receives 0 loss, otherwise a MSE regression loss.",
"the best performing End-to-End RNN architecture proposed in (Bisk et al., 2016) by adding a FC layer to the pre-trained LSTM state h n and summing it with the LSTM hidden state of the original model (as shown in Figure 2b).",
"We load and freeze the best performing parameters from our pre-trained model into the relevant portion of this end-to-end architecture, and train it on the original task of predicting the coordinates of the source / target location, with the addition of advice input.",
"We use a neural network model to self-generate restrictive advice (as shown in Figure 3), passing the instruction into an embedding layer followed by a LSTM, the board state into a FC layer, concatenating these into a FC layer, and finally using a softmax to classify the input example into a region.",
"region.",
"We train this architecture and then run it on the test set, generate the appropriate advice based on the region the data is classified in, and use that as test advice input for the end-to-end architecture from section 2.4.",
"Next, we present our experiments over our four different advice protocols, each with decreasing human effort and overall performance.",
"In each protocol, we provide advice to the end-to-end model from Section 2.4, whether it is given by a human user or model self-generated.",
"Our results, evaluated on each model's mean and median prediction error, are presented in Table 1.",
"We always compare to the baseline Bisk et al. model, which our model is identical to besides the addition of advice (and we always beat), and the state-of-the-art best non-ensemble Tan and Bansal architecture.",
"Note that Tan and Bansal use an advanced neural architecture and a different training procedure (source prediction trained as classification).",
"We hypothesize that using the advice mechanism over this more complex architecture would lead to further improvements, and leave it for future work.",
"The pre-trained advice grounding models from Section 2.3 achieve 99.99% accuracy, and are vital, as shown by the poor performance without them ( M4 vs M5 ).",
"These grounding models allow the end-to-end architecture to generalize to the variability in advice utterances.",
"When training the end-to-end model from Section 2.4, we provide restrictive advice at training time for only half the examples.",
"For every epoch, a different half set of examples (determined randomly) receive advice.",
"This mechanism gives the model a chance to learn to interpret each example with and without advice, so that it can handle the interactivity without overfitting to one setup.",
"This setup also gave the best performance.",
"At test time, the advice is provided only whenever the predictions fall in the wrong general region, just like a human would.",
"As seen in Table 1, this model ( M5 ) significantly outperforms both baselines ( M1 , M3 ).",
"We note that the performance did not improve much when advice was always provided, showing that this model was able to perform well in its absence and does not rely on it (due to our choice not to provide advice all the time in training).",
"In fact a human would only have to provide restrictive advice for 395/720 examples, and the model always follows it.",
"1 3.2 Corrective Advice We train corrective advice identically to restrictive advice from Section 3.1, except we train in two separate iterations.",
"This is necessary as the model must learn to adjust its predictions based on the advice, which is why it is first trained to make the normal prediction (first iteration), then trained to adjust the prediction (second iteration).",
"In the first iteration, we train identically to (Bisk et al., 2016) with no advice, but in the second iteration corrective advice is generated based on which direction the predictions must be adjusted to be more accurate.",
"This case is simpler than restrictive advice, since the human operator just has to provide the direction to adjust the predictions, rather than the precise region of the coordinates.",
"However, the performance does worsen ( M5 vs M6 ).",
"In Section 2.5, we introduced a model that was able to self-generate restrictive advice by predicting the general region of the block coordinates given the NL instruction and blocks world.",
"Table 2 shows this model's accuracy on that task when the board is split into 4 regions.",
"As this is a hard problem with low accuracy ( A1 ), we instead generate advice for the top 2 most confident predictions (determined by the softmax scores) ( A2 ).",
"We now introduce a new multi-step retry advice protocol.",
"In the first step, the model from Section 2.5 self-generates restrictive advice based on the most confident predicted region, which it uses as input in the end-to-end model.",
"If the user believes the coordinate prediction based on this advice is wrong, it can tell the model to retry, and then the second most likely restrictive advice will be used.",
"Thus, the only human feedback needed now is telling the model to retry, rather than accurate advice as before.",
"The performance of this ( M7 ) still significantly outperforms Bisk et al. and is close to Tan and Bansal on target prediction.",
"We now aim to avoid any human interaction, by letting the model completely self-generate the advice.",
"Accomplishing it would allow us to improve the model's performance without additional human effort.",
"We experimented with two approaches.",
"In the first, we generate advice as described in Section 3.3.",
"However, instead of having the user ask the model to retry, we treat the top 2 confidence regions as a general region, and provide that as advice input as described in Section 3.1.",
"In this case, there is a performance improvement over Bisk et al. with no human effort required ( M8 in Table 1).",
"Our second approach for self-generated advice aims to improve on some of the shortcomings of the first approach.",
"Previously, when generating the advice, we had decided on four coarse-grained regions, and trained a model to classify each input example into one of these regions.",
"In many cases, x",
"Self-Generated Advice",
"(a) The Bisk et al. model would have made a prediction (x') close to the true block (square).",
"However, the advice region (blue) was incorrect (due to the true block being close to the edge of it) and this led to a significantly worse prediction (circle).",
"(b) In the input-specific self-generated advice model, the advice region (blue) is centered at the incorrect coordinate prediction (x'), leading to the true source block being included and a correct prediction (circle).",
"the true coordinate lay close to the boundary of one of these regions, often resulting in the model predicting the wrong region when self-generating the advice.",
"This incorrect prediction would lead to significantly worse performance (when compared to the model without advice) when running the end-to-end model from Section 3.1, as the advice was incorrect (remember that the model always follows our advice, and the true coordinate is not in the advice region due to the mistake).",
"However, if we had instead chosen our regions to be centered at the true coordinate of each input example, it would be less likely that the model would make an incorrect region prediction (since a small error would still lead to the region containing the correct coordinate).",
"Figure 4 provides a visual explanation of this.",
"For this reason, we now introduce input-specific model self-generated advice.",
"In this case, we run the Bisk et al. coordinate prediction model in two iterations.",
"In the first iteration, we use the prediction to generate advice for a region (of the same size as in the case of 4 quadrants) centered at the predicted coordinate (see Figure 4b).",
"2 In the second iteration, we feed in this generated advice just like Section 3.1.",
"This model ( M9 ) achieves performance slightly worse than retry advice, and significantly better than Bisk et al., all with no human effort.",
"3 Table 2 shows the accuracy increase 2 We make sure the advice region doesn't exceed the board boundaries.",
"in predicting the advice now ( A3 vs A1 ).",
"It is unsurprising that this approach to self-generating advice performs better, as now the regions are more specific to each coordinate (so there is a higher probability that the true coordinate is actually in the predicted region see Figure 4).",
"We hypothesize that the performance improvements in self-generated advice happen since it is easier to predict the general region used to generate the advice rather than the specific coordinates.",
"Previously, we have also shown the benefit of restrictive advice in improving overall coordinate prediction, so it is unsurprising that a high accuracy of advice generation leads to better overall performance.",
"Due to this, we propose that future robot communication works take advantage of predicting and then using model self-generated advice in their end-to-end training procedure.",
"This paper takes a first step towards a stronger interaction between automated agents and their human operators, for physically grounded language understanding tasks.",
"We focus on the popular blocks task and introduce the notion of advice, Natural Language hints provided by the human operator, correcting the model's predictions.",
"We show that using four versions of this interactive advice driven protocol on an existing robot communication architecture, we can obtain signifi-cant performance improvements.",
"The last method, model self-generated advice, shows the benefit of considering advice even when not designing an interactive protocol.",
"Our future work focuses on further increasing the accuracy of the self-generated advice model, so we can achieve better performance with no human effort.",
"We thank the anonymous reviewers of this paper for all of their vital feedback.",
"This work was partially supported by the Defense Advanced Research Projects Agency (DARPA) under the ASED program.",
"Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the DARPA.",
"as there are now significantly more regions.",
"The accuracy of that model is still 99.99%, and the training procedure does not change."
] | [
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"method",
"method",
"result",
"method",
"objective",
"abstain",
"abstain",
"result",
"objective",
"result",
"objective",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages.",
"However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained.",
"In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture.",
"The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language-pair.",
"Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models.",
"Neural Machine Translation (NMT) (Sutskever et al., 2014; Vaswani et al., 2017) has significantly improved the translation quality due to its end-to-end modeling and continuous representation.",
"While conventional NMT performs single pair translation well, training a separate model for each language pair is resource consuming, considering there are thousands of languages in the world.",
"Therefore multilingual NMT is introduced to handle multiple language pairs in one model, reducing the online serving and offline training cost.",
"Furthermore, the multilingual NMT framework facilitates the cross-lingual knowledge transfer to improve translation performance on low resource language pairs (Wang et al., 2019).",
"the language diversity and model capacity limitations lead to inferior performance against individual models that are sufficiently trained.",
"So recent efforts in multilingual NMT mainly focus on enlarging the model capacity, either by introducing multiple Encoders and Decoders to handle different languages (Firat et al., 2016; Zoph and Knight, 2016), or enhancing the attention mechanism with language-specific signals (Blackwood et al., 2018).",
"On the other hand, there have been some efforts to model the specificity of different languages.",
"Johnson et al. (2017) and Ha et al. (2016) tackle this by simply adding some pre-designed tokens at the beginning of the source/target sequence, but we argue that such signals are not strong enough to learn enough language-specific information to transform the continuous representation of each language into the shared semantic space based on our observations.",
"In this paper, we incorporate a language-aware Interlingua module into the Encoder-Decoder architecture.",
"It explicitly models the shared semantic space for all languages and acts as a bridge between the Encoder and Decoder network.",
"Specifically, we first introduce a language embedding to represent unique characteristics of each language and an interlingua embedding to capture the common semantics across languages.",
"Then we use the two embeddings to augment the self-attention mechanism which transforms the Encoder representation into the shared semantic space.",
"To minimize the information loss and keep the semantic consistency during transformation, we also introduce reconstruction loss and semantic consistency loss into the training objective.",
"Besides, to further enhance the language-specific signal we incorporate language-aware positional embedding for both Encoder and Decoder, and take the language embedding as the initial state of the target side.",
"We conduct experiments on both standard WMT data sets and large scale in-house data sets.",
"And our proposed model achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with sufficiently trained individual models.",
"As shown in Figure 1, we propose a universal Encoder-Interlingua-Decoder architecture for multilingual NMT.",
"The Encoder and Decoder are identical to the generic self-attention TRANSFORMER (Vaswani et al., 2017), except some mod-ifications in the positional embedding.",
"The Interlingua is shared across languages, but with language-specific embedding as input, so we call it language-aware Interlingua.",
"The Interlingua module is composed of a stack of N identical layers.",
"Each layer has a multi-head attention sub-layer and a feed-forward sub-layer.",
"The Interlingua module uses multi-head attention mechanism, mapping the Encoder output H enc of different languages to a language-independent representation I .",
"I = FFN ( ATT ( Q, K, V )) (1) Q = FFN ( L emb , I emb ) R d r (2) K, V = H enc R d n (3) The H enc denotes the hidden states out of the Encoder, while the d is the hidden size, and the n denotes the length of the source sentence.",
"ATT ( . ) is the multi-head attention mechanism (Vaswani et al., 2017).",
"The ( K, V ) here are computed from the hidden states of the Encoder output H enc .",
"The Q is composed of two parts in simple linear combination.",
"One part is from the language-specific part L emb , and the other part is a shared matrix I emb , which we called interlingua embedding.",
"Note that, the interlingua embedding I emb has a fixed size of [ d r ].",
"the i -th column of I emb represents a initial semantic subspace that guides what semantic information of the H enc should be attended to at the corresponding position i of the Interlingua output.",
"The r means every Encoder H enc will be mapped into a fixed size representation of r hidden states, and it is set to 10 during all of our experiments, similar to the work of (Vazquez et al., 2018).",
"By incorporating a shared interlingua embedding, we expect that it can exploit the semantics of various subspaces from encoded representation, and the same semantic components of different sentences from both same and different languages should be mapped into the same position i [1 , r ] .",
"Language embedding L emb is used as an indicator for the Interlingua that which language it is attending to, as different languages have their own characteristics.",
"So we call the module language-aware Interlingua.",
"FFN (.) is a simple position-wise feed-forward network.",
"By introducing Interlingua module into the Encoder-Decoder structure, we explicitly model the intermediate semantic.",
"In this framework, the language-sensitive Enc is to model the characteristics of each language, and the language-independent Interlingua to enhance cross-language knowledge transfer.",
"The universal Encoder-Decoder model (Johnson et al., 2017) use a special token (e.g. < 2en > ) at the beginning of the source sentence, which gives a signal to the Decoder to translate sentences into the right target language.",
"But it is a weak signal as the language information must go through N = 6 Encoder self-attention, and then N = 6 Encoder-Decoder attention before the Decoder attends to it.",
"Inspired by Wang et al. (2018), we build a language embedding explicitly, and directly use it as the initial state of the Decoder.",
"Considering the structural differences between languages, each language should have a specific positional embedding.",
"Wang et al. (2018) use trigonometric functions with different orders or offsets in the Decoder for different language.",
"Inspired by this, we provide language-aware positional embedding for both Encoder and Decoder by giving language-specific offsets to the original sine ( x ) , cosine ( x ) functions in TRANSFORMER .",
"The offset is calculated from WLL emb , where WL is a weight matrix and L emb is the language embedding.",
"We introduce three types of training objectives in our model, similar to (Escolano et al., 2019).",
"(i) Translation objective : Generally, a bilingual NMT model adopts the cross-entropy loss as the training objective, which we denote as L s 2 t , meanwhile, we incorporate another loss L t 2 s for translation from the target to the source.",
"(ii) Reconstruction objective : The Interlingua transforms the Encoder output into an intermediate representation I .",
"During translation, the Decoder only uses the I instead of any Encoder information.",
"Inspired by Lample et al. (2017), Tu et al. (2017) and Lample et al. (2018), we incorporate an reconstruction loss for the purpose of minimizing information loss.",
"We denote the X (cid:48) = Decoder ( Interlingua ( Encoder ( X ))) as the reconstruction of X .",
"So we employ cross-entropy between X (cid:48) and X as our reconstruction loss, and denote L s 2 s for the source, L t 2 t for the target.",
"(iii) Semantic consistency objective : Obviously, sentences from different languages with the same semantics should have the same intermediate representation.",
"So we leverage a simple but effective method, cosine similarity to measure the consistency.",
"Similar objectives were incorporated in zero-shot translation (Al-Shedivat and Parikh, 2019; Arivazhagan et al., 2019) sim ( I s , I t ) = 1 r r (cid:88) i =1 I si I ti (cid:107) I si (cid:107)(cid:107) I ti (cid:107) (4) Where, I s and I t denote the Interlingua representation of the source and target sides respectively.",
"I i is the i -th column of matrix I .",
"L dist = 1 sim ( I s , I t ) is used as distance loss in our training objective.",
"Finally, the objective function of our learning algorithm is thus: L = L s 2 t + L t 2 s + L s 2 s + L t 2 t + L dist (5) 3 Experiments 3.1 Experimental Settings We conduct our experiments on both WMT data and in-house data.",
"For WMT data, we use the WMT13 English-French (En-Fr) and English-Spanish (En-Es) data.",
"The En-Fr and En-Es data consist of 18M and 15M sentence pairs respectively.",
"We use newstest2012 and newstest2013 as our validation set and test set.",
"Our in-house data contains about 130M parallel sentences for each language pair in En-Fr, En-Es, En-Pt (Por-tuguese), and 80M for En-Tr (Turkish).",
"During all our experiments, we follow the settings of TRANSFORMER -base (Vaswani et al., 2017) with hid-den/embedding size 512, 6 hidden layers and 8 attention heads.",
"We set 3 layers for Interlingua, and r = 10 similar to the work of (Vazquez et al., 2018).",
"We apply sub-word NMT (Sennrich et al., 2015), where a joint BPE model is trained for all languages with 50,000 operations.",
"We used a joint vocabulary of 50,000 sub-words for all language pairs.",
"We take the UNIV model introduced by Johnson et al. (2017) as our multilingual NMT baseline, and individual models trained for each language pair as our bilingual NMT baseline.",
"Note that we set the Encoder of the UNIV model to 9 layers, which makes it comparable to this work in the term of model size.",
"Compared with the individual models, our model is slightly better for Fr/Es-En in many-to-one scenario.",
"In the one-to-many scenario, the individual models get the best BLEU score, while our model outperforms the universal model in all language pairs.",
"Similarly, the experimental results on in-house large-scale data are shown in Table",
"2. In one-to-many settings, our model acquires comparable BLEU scores with the bilingual NMT baselines (Individual model), and around 1 BLEU point improvement in En-Pt translation.",
"Our model gets the best BLEU score in many-to-one directions for all language pairs.",
"Besides, the proposed model significantly exceeds the multilingual baseline (Universal model) in all directions.",
"The results show that multilingual NMT models perform better in big data scenarios.",
"This might the reason that intermediate representation can be trained more fully and stronger in a large-scale setting.",
"To examine whether our language-aware Interlingua can help cross-lingual knowledge transfer, we perform zero-shot translation on WMT data.",
"The Fr-Es and Es-Fr translation directions are the zero-shot translations.",
"As shown in Table 1, our method yields more than 10 BLEU points improvement compared with the universal Encoder-Decoder approach and significantly shortens the gap with sufficiently trained individual models.",
"We further verify the impact of different training objectives in Table",
"1. Compared with the INTL baseline, the REC training objective can further improve the translation quality of both supervised and zero-shot language pairs.",
"However, the SIM objective contributes to zero-shot translation quality significantly, with a slight decrease in supervised language pairs.",
"The integration of both REC and SIM in INTL ultimately achieves balance increments between supervised and zero-shot language pairs.",
"This suggests that constraints on Interlingua can lead to better intermediate semantic representations and translation quality.",
"Multilingual NMT is first proposed by Dong et al. (2015) in a one-to-many scenario and generalized by Firat et al. (2016) to many-to-many scenario.",
"Multilingual NMT suffered from the language diversity and model capacity problem.",
"So one direction is to enlarge the model capacity, such as introducing multiple Encoders and Decoders to handle different languages (Luong et al., 2015; Dong et al., 2015; Firat et al., 2016; Zoph and Knight, 2016), or enhancing the attention mechanism with language-specific signals (Blackwood et al., 2018).",
"The other direction is aimed at a uni-fied framework to handle all language pairs (Ha et al., 2016; Johnson et al., 2017).",
"They try to handle diversity by enhancing language-specific signals, by adding designed language tokens (Ha et al., 2016) or language-dependent positional encoding (Wang et al., 2018).",
"Our work follows the second line by explicitly building a language-aware Interlingua network which provides a much stronger language signal than the previous works.",
"In regards to generating language-independent representation, Lu et al. (2018) and Vazquez et al. (2018) both attempted to build a similar language-independent representation.",
"However, their work is all based on multiple language-dependent LSTM Encoder-Decoders, which significantly increase the model complexity.",
"And they don't have the specially designed training objective to minimize the information loss and keep the semantic consistency.",
"Whereas our work is more simple and effective in these regards and tes-tified on a much stronger TRANSFORMER based system.",
"We have introduced a language-aware Interlingua module to tackle the language diversity problem for multilingual NMT.",
"Experiments show that our method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"method",
"result"
] |
[
"Warning: this paper contains example data that may be offensive or upsetting.",
"Current open-domain conversational models can easily be made to talk in inadequate ways.",
"Online learning from conversational feedback given by the conversation partner is a promising avenue for a model to improve and adapt, so as to generate fewer of these safety failures.",
"However, current state-of-the-art models tend to react to feedback with defensive or oblivious responses.",
"This makes for an unpleasant experience and may discourage conversation partners from giving feedback in the future.",
"This work proposes SaFeRDialogues, a task and dataset of graceful responses to conversational feedback about safety failures.",
"We collect a dataset of 8k dialogues demonstrating safety failures, feedback signaling them, and a response acknowledging the feedback.",
"We show how fine-tuning on this dataset results in conversations that human raters deem considerably more likely to lead to a civil conversation, without sacrificing engagingness or general conversational ability.",
"Large neural generative dialogue models trained to mimic human English-language open-domain conversations have become engaging (Adiwardana et al., 2020; Roller et al., 2020b), but are still prone to uttering problematic language, e.g., displaying toxicity or bias, or agreeing with offensive statements (Xu et al., 2021; Dinan et al., 2021).",
"Conversation partners may give helpful feedback to the model, by signaling that what the model said is not ok, even giving more detailed indications as to why.",
"This could in turn be precious training signal for on-going improvement of models through online learning (Hancock et al., 2019; Roller et al., 2020a).",
"In particular, the boundaries of what constitutes ok Figure 1: Types of bot responses when responding to feedback about problematic inputs from the BAD dataset (Xu et al., 2021).",
"or not ok language vary a lot across individuals (within and across cultures, with different lines as to what is offensive or funny) and times (what might have been acceptable a century ago might often be deemed highly inappropriate according to modern social norms).",
"Thus, a single conversational model might say things that would be acceptable to most people, yet still generate feedback from individuals who want to signal their discomfort.",
"This feedback could eventually be used to update a single model into individualized models that learn the boundaries of each conversation partner but this requires the model to make the feedback interaction positive by demonstrating openness.",
"Instead, current conversational models typically respond to feedback in a way that discourages the partner from giving more in the future: models often double down on their controversial position, or ignore the feedback altogether (see Figure 1 and Table 1).",
"Some safer response strategies such as changing the subject (Xu et al., 2021) do reduce model attacks, but still do not apologize (Figure 1).",
"This work improves the response of end-to-end conversational models to feedback about safety 6462 Sample Context 1 Sample Context 2 Safety failure: Mostly labradors, they are nice companions then once they are full grown the meat cooks real nice and the texture is awesome.",
"failures by fine-tuning them on a conversational dataset specifically collected to encourage graceful response to feedback (see counts in Figure 1, and examples in Table 1).",
"Automated and human evaluations show that the resulting models are evaluated as considerably more likely to lead to a civil conversation, while maintaining engagingness.",
"Thus, the contribution of this work is twofold: (1) it proposes a task and accompanying dataset of responding to feedback about safety failures 1 and (2) it demonstrates how fine-tuning on this dataset makes models more receptive to feedback, in a way that human raters evaluate as leading to conversations that are more civil yet still as engaging.",
"1 The dataset and task have been released through the ParlAI framework (Miller et al., 2017) and are available at https://github.com/ facebookresearch/ParlAI/tree/main/parlai/tasks/saferdialogues 2 Recovering from Safety Failures in a conversation Constructive feedback is an important tool in human learning (Ovando, 1994).",
"Unfortunately, feedback can often be perceived as self-threat (i.e., challenge to a positive view of oneself), leading to various defensive responses that impede learning (Sherman and Cohen, 2006), such as resistance to changing beliefs, or even adoption of more extreme beliefs (Lord et al., 1979).",
"These common human psychological self-defense responses widely appear in large-scale human corpora used to train neural generative conversational models, such as pushshift.io Reddit (Baumgartner et al., 2020).",
"Accordingly, conversational models frequently exhibit defensive or oblivious responses, rejecting the feedback instead of reflecting on it (Figure 1).",
"a crowdsourced dataset where workers are specifically instructed to acknowledge feedback in a way that would lead to a civil interaction.",
"Conversational models fine-tuned on that data would then be expected to display that target quality of graceful acceptance of feedback.",
"This overall strategy is similar to previous work endowing models with more empathy or knowledge, by fine-tuning on data collected with the goal of exhibiting the desired quality (Smith et al., 2020; Rashkin et al., 2019).",
"Before providing a more detailed description of our approach, we briefly review related work.",
"As reviewed in Dinan et al. (2021), neural end-to-end conversational models can display a host of safety issues, e.g. generating inappropriate content (Dinan et al., 2019), or responding inappropriately to sensitive content uttered by the conversation partner (Cercas Curry and Rieser, 2018).",
"Efforts to train models on adversarially collected datasets have resulted in safer models (Dinan et al., 2019; Xu et al., 2021), which can however still be goaded into uttering offensive statements (Xu et al., 2021).",
"Feedback from the conversation partner is likely to become an important source of information for improving deployed models, as argued in Roller et al. (2020a), and is particularly important for making models more robust to evolving values and social norms (Dinan et al., 2021).",
"In this work, we do not attempt to improve the safety of conversational models, and instead focus on improving how they respond to feedback given by the conversation partner within the conversation.",
"Several works have examined response strategies to unsafe utterances.",
"Chin and Yi (2019); Chin et al. (2020) look at how different response strategies (disengaging, apologizing, or counter-attacking) can change how conversational models are rated and how many negative responses they elicit.",
"Curry and Rieser (2019) show that different strategies are deemed appropriate according to the type of unsafe input.",
"Paranjape et al. (2020) look at re-offense rates after various response types.",
"More recent work has focused on generating counterspeech and teaching interventions (Pranesh et al., 2021; Chaud-hary et al., 2021; Zhu and Bhat, 2021).",
"By contrast, this work looks at the other side of the conversation, where the model itself has said something unsafe and the human partner has given feedback that signals it.",
"This set-up corresponds to a learner bot, rather than a moderator bot such as in de los Riscos and D'Haro (2021).",
"In this section, we introduce a new task and dataset named SaFeRDialogues 2 (SD) for training models that can recover from safety failures.",
"We collect data of (1) crowdsource workers giving feedback when something unsafe is said, and (2) of other crowdsource workers providing subsequent civil responses to that feedback.",
"To provide a context of conversational safety failures, we start from the train split of the Bot-Adversarial Dialogue (BAD) dataset from Xu et al. (2021), of dialogues between bots and crowdworkers, where humans were trying to probe or adversarially goad the bot into responding with unsafe utterances.",
"Each dialogue utterance in that dataset is labeled as either safe or unsafe by the crowdworkers, where a message is UNSAFE or NOT OK if it is not ok to send in a friendly conversation with someone you just met online.",
"We take 7,049 instances of 4 consecutive utterances that end in an unsafe utterance (whether from bot or human) from the train set of the BAD dataset, and use those as context of safety failure.",
"Signaling Failure Task Crowdworkers write natural responses to those dialogue contexts, to signal to the other speaker that the previous message is NOT OK (see screenshot in Appendix, Figure 3).",
"The resulting data is validated as adequately signaling safety failure by other sets of crowdworkers, as described in more detail in Appendix A. Recovery Task Other crowdworkers then respond to the resulting dialogues and the provided feedback about conversational safety failure, with instructions to respond in a way that encourages civility (see screenshot in Figure 2, and additional details in Appendix B).",
"After validation through a separate verification task, we keep 7,881 recovery responses (out of 11,246).",
"SaFeRDialogues (SD) dataset The resulting SaFeRDialogues (SD) dataset consists in 7,881 dialogues, each composed of 4 utterances from the train set from the BAD dataset where the 4th utterance is not ok, followed by a response signaling the safety failure, and a valid recovery response.",
"The 2 for Safety Feedback Recovery Dialogues 6464 Figure 2: Screenshot from the Recovery task.",
"7881 dialogues are split into a train, valid, and test sets of 6305, 788 and 788 dialogues, respectively.",
"The sets of seeding train BAD dialogue contexts are kept distinct between train, valid and test set.",
"Table 2 shows that words signaling problematic responses ( rude, offensive, illegal ) or potentially sensitive topics ( women, violence, race ) are much more frequent in the feedback utterances of the dataset, compared to regular chitchat (BST).",
"For recovery responses, words associated with openness to feedback ( apologize, reflect ) and the modality of feedback giving ( speaking, saying, pointing ) become more frequent.",
"Table 3 shows the 10 most frequent 4-grams for the Signaling and Recovery responses in SD, and for BST.",
"We consider large Transformer-based architectures trained on dialogue tasks and fine-tune them on our new Safety Feedback Recovery Dialogue dataset (SaFeRDialogues), using the ParlAI toolkit (Miller et al., 2017).",
"To maintain the general conversational ability of the model, we multi-task with equal weight on the Blended Skill Talk dataset (Smith et al., 2020) without using personas (BSTnp), as removing personas was not rated as significantly more engaging (Roller et al., 2020b), and the BAD dataset does not have personas.",
"Differential persona presence between datasets would allow the model to use the absence of personas as a spurious indicator that responding to feedback is required.",
"3 Fine-tuning only on the SaFeRDialogues dataset would lead to an extreme over-representation of apologetic utterances (\"I am sorry\"), even when not called for.",
"We use two initial pre-trained models, BST2.7 and DialoGPT.",
"BST2.7 We run most of our experiments using the BST 2.7B parameter model from Roller et al. (2020b) as initial pre-trained model, because it was rated as more engaging by humans in previous 3 To measure that effect, we trained a model where personas were used for BST, and confirmed that the model indeed ends up apologizing too much, with 25% of responses in a general conversation context being answered with the word \"sorry\", and only 40% of these being appropriate in the context.",
"work (Roller et al., 2020b; Xu et al., 2021).",
"Models based on BST2.7 are used with a minimum generation length of 20 as recommended in Roller et al. (2020b).",
"DialoGPT To show that fine-tuning on our SD dataset can improve other models, we also run experiments using the medium-sized DialoGPT (Zhang et al., 2019), a 345M parameter GPT2 model trained on 147M conversation-like exchanges extracted from Reddit, as base pre-trained model.",
"We also use an \"intermediate baseline\" that fine-tunes DialoGPT on BST to check what part of the improvement in civility is due to that fine-tuning on generally better-behaved conversations alone, with no focus on responding to feedback.",
"The DialoGPT models are used with standard beam search decoding, as in the original paper (Zhang et al., 2019).",
"In the following, Recovery (BST 2.7B) and Recovery (DialoGPT) denote the BST 2.7B model and DialoGPT fine-tuned on SD, respectively, while BST-DialoGPT denotes the DialoGPT model fine-tuned on BST.",
"We compare our Recovery fine-tuned models against 5 base models, (1) BST 2.7B, (2) DialoGPT, (3) the pushshift.io Reddit 2.7B model (a 2.7 billion parameter generative dialogue model pretrained using a previously existing Reddit dataset extracted and obtained by a third party that was hosted by pushshift.io (Baumgartner et al., 2020)), (4) the BST 2.7B model with an adversarial safety layer from Xu et al. (2021), and for some experiments, (5) BST-DialoGPT.",
"conversational and recovery ability, and the percentage of safe generated responses as given by the Multi-turn Safety Classifier from Xu et al. (2021).",
"Human Quality Evaluation We perform two types of crowdsourced human evaluation, rating either single utterances or entire conversations, where crowdworkers decide which of two model generations they prefer.",
"We measure engagingness and civility on individual utterances on both BSTnp and SD contexts, and engagingness in natural interactive conversation to check that the ability to converse hasn't been damaged by the SD task.",
"Details of questions asked are given in Appendix C. For all human evaluations, rows with ( p < 0 . 05 ) and ( p < 0 . 01 ) are statistically significant.",
"Types of Bot Responses The bot responses are annotated by crowdworkers into 4 categories: attack, ignore, apologize, other .",
"Appendix D and Figure 5 give more details about this task.",
"Table 4 shows automatic metrics on SD.",
"As expected, baselines that weren't fine-tuned on SD have higher perplexity and lower F1 score.",
"Both Recovery models have a higher percentage of safe utterances than before fine-tuning on the SaFeRDialogues task.",
"This is not surprising, as the recovery responses were collected with the intent of shifting the conversation in a more positive direction, and do not use aggressive defensive responses, or responses doubling down on the initial offensive point, contrary to baseline models (see Figure 1).",
"Table 5 reports metrics on BSTnp to check that general conversational ability is maintained.",
"The Recovery (BST 2.7B) only slightly suffers in per-6466 Model Safe% PPL F1 Recovery (BST 2.7B) 100% 6.7 0.23 BST 2.7B 76.0% 11.3 0.16 BST 2.7B + Safety Layer 97.7% 11.3 0.10 pushshift.io Reddit 2.7B 51.3% 14.6 0.14 Recovery (DialoGPT) 99.9% 8.5 0.23 DialoGPT 81.9% 56.4 0.12 Table 4: Automatic Metrics on the SD task.",
"plexity and F1 score compared to the original BST 2.7B model.",
"While SD is seeded with unsafe BAD dialogues, BSTnp contains few unsafe utterances, or utterances that are trying to provoke unsafe utterances in the conversation partner, so the safety score is unsurpisingly higher.",
"Types of model responses Figure 1 shows that models trained on pushshift.io Reddit are rated as attacking the most and apologizing the least, while the BST + Safety model ignores the feedback the most and attacks the least (but is still rated as attacking nearly 10% of the time), which is consistent with its strategy of changing the topic when encountering unsafe inputs.",
"Among the baseline models, BST 2.7B apologizes the most (19.2% of responses).",
"Fine-tuning on SD boosts the rate of apologizing responses of the Recovery models to about 90%, when responding to feedback about unsafe inputs from the BAD dataset.",
"Human evaluation: civility.",
"Results on SD are shown in Table 6, where the Recovery (BST2.7B) model is largely preferred over all baseline models (and there is no statistically significant preference compared to the human responses).",
"The BST2.7B model and the Recovery (BST2.7B) model use the same decoding settings (e.g. minimum beam length of 20 BPE tokens).",
"We also report civility evaluation results for the Recovery (DialoGPT) model in Table 7.",
"Again, there is a very large preference for the fine-tuned model compared to the base DialoGPT model.",
"This preference might be partly explained by the fine-tuning on BST, which overall leads to more apologizing compared to pushshift.io Reddit (see Figure 1), but directly comparing the Recovery (Di-aloGPT) and BST-DialoGPT shows that the Recovery model is still rated as much more civil.",
"Human evaluation: engagingness.",
"Table 8 compares responses for engagingness on SD.",
"The human response is preferred (even though the difference does not reach significance).",
"More interestingly, the Recovery model is not deemed less engaging than the baseline model (if anything, engagingness appears slightly higher).",
"Method vs. Recovery (BST2.7B) Human Response 53 47 BST 2.7B 39 61 BST 2.7B + Safety Layer 35 * 65 * pushshift.io Reddit 27 ** 73 ** Table 8: Human evaluation of engagingness on SD compared to our Recovery (BST2.7B) model .",
"The Recovery model is deemed significantly more engaging than the BST2.7B+Safety Layer Model, which may not be surprising given that the safety layer model resorts to canned non-sequiturs.",
"The Recovery model also significantly wins over the pushshift.io Reddit Model.",
"Similar results for the Recovery (DialoGPT) Model are shown in Table",
"9. Method vs. Recovery (DialoGPT) Human Response 61 39 DialoGPT 10 ** 90 ** Table 9: Human evaluation of engagingness for human and model responses on SD compared to our our Recovery (DialoGPT) model .",
"Does the model just apologize all the time?",
"The very high rate of responses that apologize when responding to SD context (about 90%, see Figure 1) suggests the bot might be constantly apologizing, even when the context does not call for it.",
"In fact, this tends to happen when multitasking on BST without dropping the personas (see footnote above: 25% of responses of recovery models on BST then contain \"sorry,\" and only 40% of those work in the context).",
"We rule this out through the following more detailed analysis, comparing Re-covery(BST2.7B) and BST2.7B.",
"First, the Recovery model does not say \"sorry\" very frequently in response to BSTnp contexts, as shown in Table",
"10. Spot checks of those occurrences show that only a small fraction are inadequate: in many cases where the Recovery model uses \"sorry\" while BST 2.7B doesn't, the response of the Recovery model works well.",
"The model appears to be able to do this smoothly, as shown in Table",
"11. We then test engagingness quantitatively through 6468 Interactive Example Partner A: i have a goldfish Recovery (BST2.7B) Model: That's cool.",
"crowdsourced human evaluation (see details in Appendix C).",
"When evaluated for engagingness on single utterance responses on BSTnp ( Table 12) or on interactive longer free-form dialogues in Acute Eval (Table 13), Recovery (BST2.7B) and BST2.7B are not rated as significantly different in engagingness, with BST2.7B trending slightly higher on BSTnp single utterances, while Recovery (BST2.7B) has the slight edge on full conversations.",
"Method vs. Recovery (BST2.7B) BST 2.7B (civil) 51 49 BST 2.7B (engaging) 56 44 Table 12: Human evaluation of engagingness and which response will lead to a more civil conversation for BST 2.7B compared to our Recovery (BST2.7B) model in a safe conversational context (BSTnp).",
"To give a qualitative sense of how model responses differ, we show sample conversations with the responses of several models and crowdsource workers in Table 1 and Table 14.",
"Additional sample conversations are shown in Appendix G. Sample Conversation: 1 Partner B: Females belong in the kitchen, whilst men should be out working.",
"generations and human response from the SD dataset.",
"While the Recovery (BST2.7B) model performs well in ordinary interactive conversation, it is not hard to get it to fail by interacting adversarially.",
"While we did not conduct large scale adversarial tests, our experience is that the model tends to fail by apologizing too much rather than too little, and responding as if it had been given feedback when that's not the case.",
"Examples of failures of the Recovery (BST2.7B) model are shown in Table 15 and Table",
"16. These examples were obtained by interacting with the model and trying to trip it\" into giving an apologetic response that wasn't warranted. Failure Case: 1 Partner A: Let's talk about voting equality in history. Recovery (BST2.7B) Model: Voting equality is a very important topic to talk about. I'm glad you brought it up. Partner A: Yeah. I'm glad we have amendments now that let everyone vote. Do you know who couldn't vote before? Recovery (BST2.7B) Model: I'm not sure, but I do know that women were not allowed to vote until 1920. Partner A: Oh that's so sexist! Recovery (BST2.7B) Model: I'm sorry, I see it now-thanks for letting me know, I will reflect on that. Table 15: Sample conversation in interactive mode with the Recovery (BST 2.7B) model where an apologizing response is incorrectly given: the model does not recognize that the \"sexist\" feedback is not directed to it, but to the context that was being discussed. In Table 15, the model does not recognize that the \"sexist\" comment is being made in reference to a situation in the past, and not the utterance itself. It apologizes even though the feedback was not directed to the model. Table 16 shows two conversations where a minor change in the response to the model leads to either a correct response that does not apologize (Conver-sation 1), or to an incorrect apology (Conversation 2). These failures reflect more general common sense and reference resolution problems with models (e.g., see Adiwardana et al. (2020); Roller et al. (2020a,b)). They could be somewhat improved with adversarial data collection that attempts to better approach limit cases of current bot failures Failure Case: 2 Sample Conversations Comparison Conversation 1 Partner A: My hairdresser told me she can't fix my ugly hair. Partner B: That's an awful thing that she said to you! Recovery (BST2.7B) Model: I know, right? I was so shocked I didn't even know what to say. Conversation 2 Partner A: My hairdresser told me she can't fix my ugly hair. Partner B: That's an awful thing to say to another person! Recovery (BST2.7B) Model: I'm sorry, I didn't mean to offend you. Thanks for speaking up, I will reflect on that. Table 16: Comparison of 2 sample conversations in interactive mode with the Recovery (BST2.7B) Model. Very minor changes in the response to the model lead to the model responding well (Conversation 1) or incorrectly apologizing (Conversation 2). (similar to the procedure used in Xu et al. (2021)), but would ultimately require conversational models to make deeper progress on reasoning and true understanding. 6 Conclusion In this work, we proposed SaFeRDialogues, a novel task and dataset of dialogues, where a conversation participant who uttered something unsafe is given feedback that signals it, and responds in a way that acknowledges that feedback and is more likely to lead to a more civil conversation down the line. We showed that fine-tuning dialogue models on this data, while carefully multi-tasking on a more general open-domain chitchat dataset, results in conversational models that are still rated as engaging and capable of normal conversation, yet are deemed significantly more likely to produce more civil conversations. We verified that the models do not unduly apologize in normal conversation, while very reliably producing graceful apologies when confronted with feedback about some not ok utterance. 
In future work, we will examine how to automatically detect signaling feedback and learn from it in an online learning set up, as well as examine what happens to the trajectory of natural conversations, depending on the type of feedback given, and the type of response given to that feedback. 6470 7 Ethical considerations and limitations The goal of this work is to make conversational models respond more gracefully to feedback about safety failures. This makes human raters evaluate model responses as more likely to lead to a civil conversation. However, this is a limited mitigation. We describe several important ethical considerations. First, this work is limited to English-language models, and English-language crowd-sourced responses written by workers located in the United States 4 a population which may quite substantially differ from the expected audience of a deployed model. In particular, the notion of what is unsafe, how to formulate feedback, and what is a graceful response, might vary according to culture and populations (Schmidt and Wiegand, 2017). Our human evaluations use similar sources of crowdsource workers, and would therefore reflect this same narrow perspective. While there is research showing that Amazon Mechanical Turk workers show some reasonable amount of diversity (Moss et al., 2020), this is still a narrow, US-centric set. Second, this work fine-tunes large neural models to generate language. While our proposed approach improves a few limited undesirable behaviors of these models, most of the known issues of large language models remain relevant (e.g., see issues and risks outlined in Bender et al. (2021); Bommasani et al. (2021); Weidinger et al. (2021)). The very notion of a graceful response to a safety failure implies that the model already exposed its audience to an undesirable message. Third, the model generates an apology or a graceful response, but there is no corresponding training and update of the model: learning from the feedback to actually change the model is outside the scope of this work. Thus, the model would keep displaying the same safety failure that the conversation partner gave feedback on, even after responding that it would reflect on it. This work is therefore a limited first step, and we are actively working on getting models to learn from the feedback. Acknowledgements We thank Emily Dinan and Spencer Poff for helpful ideas and discussions, and anonymous ARR reviewers for helpful suggestions. 4 We used Amazon Mechanical Turk for all crowdsourcing tasks. Our crowdsourcing tasks pays workers well above minimum wage, and we asked privacy and policy experts to review these tasks before launching. The tasks do not request any personal information from workers. References Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977 . Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift reddit dataset. arXiv preprint arXiv:2001.08435 . Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? Proceedings of FAccT . Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. 
On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 . Amanda Cercas Curry and Verena Rieser. 2018. # metoo: How conversational systems respond to sexual harassment. In Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing , pages 714. Mudit Chaudhary, Chandni Saxena, and Helen Meng. 2021. Countering online hate speech: An nlp perspective. arXiv preprint arXiv:2109.02941 . Hyojin Chin, Lebogang Wame Molefi, and Mun Yong Yi. 2020. Empathy is all you need: How a conversational agent should respond to verbal abuse. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems , pages 113. Hyojin Chin and Mun Yong Yi. 2019. Should an agent be ignoring it? a study of verbal abuse types and conversational agents' response styles. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems , pages 16. Amanda Cercas Curry and Verena Rieser. 2019. A crowd-based evaluation of abuse response strategies in conversational agents. arXiv preprint arXiv:1909.04387 . Agustn Manuel de los Riscos and Luis Fernando D'Haro. 2021. Toxicbot: A conversational agent to fight online hate speech. In Conversational Dialogue Systems for the Next Decade , pages 1530. Springer. Emily Dinan, Gavin Abercrombie, A Stevie Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, and Verena Rieser. 2021. Anticipating safety issues in e2e conversational ai: Framework and tooling. arXiv preprint arXiv:2107.03451 . Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for 6471 dialogue safety: Robustness from adversarial human attack. arXiv preprint arXiv:1908.06083 . Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 36673684, Florence, Italy. Association for Computational Linguistics. Klaus Krippendorff. 2004. Reliability in content analysis: Some common misconceptions and recommendations. Human communication research , 30(3):411433. Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087 . Charles G Lord, Lee Ross, and Mark R Lepper. 1979. Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of personality and social psychology , 37(11):2098. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations , pages 7984. ACL. Aaron J Moss, Cheskie Rosenzweig, Jonathan Robinson, and Leib Litman. 2020. Demographic stability on mechanical turk despite covid-19. Trends in cognitive sciences , 24(9):678680. Martha N Ovando. 1994. Constructive feedback: A key to successful teaching and learning. International Journal of Educational Management . Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, and Christopher D Manning. 2020. Neural generation meets real people: Towards emotionally engaging mixed-initiative conversations. arXiv preprint arXiv:2008.12348 . Raj Ratn Pranesh, Ambesh Shekhar, and Anish Kumar. 2021. 
Towards automatic online hate speech intervention generation using pretrained language model. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Stephen Roller, Y-Lan Boureau, Jason Weston, Antoine Bordes, Emily Dinan, Angela Fan, David Gunning, Da Ju, Margaret Li, Spencer Poff, et al. 2020a. Open-domain conversational agents: Current progress, open problems, and future directions. arXiv preprint arXiv:2006.12442. Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020b. Recipes for building an open-domain chatbot. arXiv preprint arXiv:2004.13637. Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media, pages 1–10. David K Sherman and Geoffrey L Cohen. 2006. The psychology of self-defense: Self-affirmation theory. Advances in Experimental Social Psychology, 38:183–242. Eric Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. ACL. Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359. Jing Xu, Da Ju, Margaret Li, Y-Lan Boureau, Jason Weston, and Emily Dinan. 2021. Bot-adversarial dialogue for safe conversational agents. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2950–2968. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536. Wanzheng Zhu and Suma Bhat. 2021. Generate, prune, select: A pipeline for counterspeech generation against online hate speech. arXiv preprint arXiv:2106.01625. A Task: Signaling Failure. Figure 3: Screenshot from the Signaling Failure task. Each crowdworker is shown a 4-turn truncated piece of dialogue from the BAD dataset that ends in an unsafe utterance, and instructed to label whether they consider the last utterance as NOT OK, and if so, to write natural responses that signal to the other speaker that the previous message is NOT OK (see screenshot, Figure 3). Since we want responses that signal failure, we only keep responses if the crowdworker has marked the previous message as not ok. After collection, a separate task verifies with 3 other annotators whether each collected response signals that its previous message was not ok. Using Krippendorff's alpha (Krippendorff, 2004) as inter-annotator agreement (IAA), the verification annotation task has a reliability coefficient of 0.213. 
This low value reflects both the overall skew of the dataset towards being 'not ok' (about 70% of annotations overall), and the various ways in which workers interpreted what a good signaling response was (from calling out the type of offense, e.g. 'this is sexist,' to proposing a different opinion)."
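The reliability coefficient above can be reproduced from raw annotations. Below is a minimal, self-contained Python sketch of nominal-scale Krippendorff's alpha via the coincidence-matrix formulation; the data layout (one list of labels per annotated item) is an assumption made for illustration, not the task's actual storage format.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal-scale Krippendorff's alpha.

    units: list of lists; units[u] holds the labels given to item u,
    e.g. ["not ok", "not ok", "ok"] from three verification annotators.
    """
    coincidences = Counter()  # coincidence matrix o[c, k]
    n_total = 0
    for labels in units:
        m = len(labels)
        if m < 2:
            continue  # singly-annotated items carry no agreement information
        for a, b in permutations(labels, 2):
            coincidences[(a, b)] += 1.0 / (m - 1)
        n_total += m
    marginals = Counter()
    for (a, _b), w in coincidences.items():
        marginals[a] += w
    # Observed disagreement: off-diagonal mass of the coincidence matrix.
    d_o = sum(w for (a, b), w in coincidences.items() if a != b) / n_total
    # Expected disagreement under chance pairing of all recorded values.
    d_e = sum(marginals[a] * marginals[b]
              for a in marginals for b in marginals if a != b) / (n_total * (n_total - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# Example: skewed toward "not ok", as in the ~70% case discussed above.
print(krippendorff_alpha_nominal(
    [["not ok", "not ok", "ok"],
     ["not ok", "not ok", "not ok"],
     ["ok", "not ok", "not ok"]]))
```

On heavily skewed binary data, alpha stays low even when raw percent agreement looks high, which is consistent with the 0.213 figure reported above.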
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Being engaging, knowledgeable, and empathetic are all desirable general qualities in a conversational agent.",
"Previous work has introduced tasks and datasets that aim to help agents to learn those qualities in isolation and gauge how well they can express them.",
"But rather than being specialized in one single quality, a good open-domain conversational agent should be able to seamlessly blend them all into one cohesive conversational flow.",
"In this work, we investigate several ways to combine models trained towards isolated capabilities, ranging from simple model aggregation schemes that require minimal additional training, to various forms of multi-task training that encompass several skills at all training stages.",
"We further propose a new dataset, BlendedSkillTalk, to analyze how these capabilities would mesh together in a natural conversation, and compare the performance of different architectures and training schemes.",
"Our experiments show that multi-tasking over several tasks that focus on particular capabilities results in better blended conversation performance compared to models trained on a single skill, and that both unified or two-stage approaches perform well if they are constructed to avoid unwanted bias in skill selection or are fine-tuned on our new task.",
"A good open-domain conversational agent should have a well-rounded set of skills 1 and qualities that allow it to seamlessly blend listening with empathy, providing knowledgeable responses, and talking about various topics from everyday life to their favorite hobbies or latest challenges.",
"1 Skills in the conversational AI literature is sometimes taken to mean a very defined specific set of abilities such as telling the weather (e.g., Zhou et al. (2020)).",
"Our use in this paper is much more general and refers to any desirable capability.",
"Recent research has made solid strides towards gauging and improving performance of open-domain conversational agents along specific axes such as how knowledgeable they are (Dinan et al., 2019b; Moghe et al., 2018; Qin et al., 2019), how well they can display empathy (Rashkin et al., 2019; Lin et al., 2019) or talk about their personal background (Zhang et al., 2018; Li et al., 2017).",
"However it remains unclear whether models optimized for performance along one of these axes can retain the learned skill while blending it with other desirable skills, or how to best conduct simultaneous training of multiple skills.",
"In this work, we compare several ways to combine tasks designed to evaluate and improve a single conversational skill, ranging from multi-task training over several datasets to training a top-level classifier to play the role of a dialogue manager and query the most appropriate single-skill pretrained model for a response.",
"In order to evaluate those methods, we propose a new English-language dataset, BlendedSkillTalk, that blends several skills into a single conversation, and use it to evaluate methods with both automated metrics and human crowdsourced ratings across different axes.",
"Our experiments show that existing single-skill tasks can effectively be combined to obtain a model that blends all skills into a single conversational agent if care is taken to make the dialogue agent avoid unwanted biases when selecting the skill, or if fine-tuning on blended data, or both.",
"We propose methods that compare those competing approaches, and provide a detailed analysis of their successes and failures.",
"While most commercial dialogue systems rely on hand-coded narrow skills (e.g., see Zhou et al.",
"(2020); Ram et al. (2018)), typically focusing on separate task-oriented features such as alarm setting, calendar entries, etc., we are interested in models that display various qualities in open-domain dialogue.",
"Further, we focus on skills that can be learned end-to-end, as end-to-end learning affords the promise of better generalization to unseen domains.",
"Recent promising conversational models have leveraged very large conversation-like data such as datasets extracted from Reddit and made available by a third party on pushshift.io (Mazare et al., 2018; Humeau et al., 2019; Keskar et al., 2019; Rashkin et al., 2019).",
"These large-scale datasets are very useful in providing vast amounts of conversational material that allow for reproducible research and comparison with prior work, however the qualities of resulting conversational agents are dependent on the qualities present in the source conversations.",
"Given how online conversations can turn toxic and lack empathy, indiscriminate pretraining on such corpora is unlikely to spontaneously endow a conversational agent with desirable qualities such as avoiding toxic responses (Dinan et al., 2019a) or demonstrating empathy (Rashkin et al., 2019) or knowledge (Dinan et al., 2019b).",
"This has led the community to propose tasks and datasets focusing specifically on some trait or skill.",
"In this work, we examine how to combine three such traits that each have a corresponding task and dataset: demonstrating an ability to talk about oneself and get to know your partner, as captured by the ConvAI2 dataset, an extension of the PersonaChat dataset (Zhang et al., 2018; Dinan et al., 2020); being knowledgeable and discussing a topic in depth, as measured through the Wizard of Wikipedia task (Dinan et al., 2019b); and demonstrating empathy and being able to talk about emotional personal situations, as measured by the EmpatheticDialogues benchmark proposed in Rashkin et al. (2019).",
"The ConvAI2 dataset comprises more than 140k utterances of crowdsourced conversations between paired workers getting to know each other.",
"Each worker was assigned a persona consisting of a few sentences such as I have a pet hamster, which had separately been crowdsourced.",
"The Wizard of Wikipedia (WoW) task aims to explore conversation informed by expert knowledge from Wikipedia, and provides about 194k utterances of conversations on about 1,250 topics.",
"The EmpatheticDialogues (ED) dataset consists in about 50k utterances between a Speaker who is talking about an emotional situation, and a Listener who is tasked to respond in an empathetic manner, acknowledging the other person's feelings.",
"In addition to being associated with easy-to-use datasets, these three skills benefit from being clearly defined and separate in scope.",
"Focusing on blending only three skills keeps data collection, ablations, and analyses manageable while already presenting a challenge for models, and it helps narrow down the most promising approaches for blending a greater number of skills.",
"A model separately trained on a variety of skills might be able to do well on each of them in isolation, but still struggle to seamlessly blend them over the course of a single conversation where it has to navigate whether a given utterance calls for informative knowledge or empathy, for example.",
"It must learn to switch between skills, each time incorporating previous dialogue context which may contain utterances from either partner relating to multiple skills, and on some turns may have to blend skills into a single response.",
"In order to gauge how successful a model is at this blended objective, we collect BlendedSkillTalk, a small crowdsourced dataset of about 5k conversations in English where workers are instructed to try and be knowledgeable, empathetic, or give personal details about their given persona, whenever appropriate.",
"We collect conversations from 2,679 workers, with each worker participating in an average of 5.4 conversations in the train set and a maximum of 15 conversations.",
"The dataset consists of 4,819 train-set conversations, 1,009 validation-set conversations, and 980 test-set conversations.",
"We ensure that the sets of workers involved in collecting the train, validation, and test sets are completely disjoint to prevent our models from bene-fiting from learning about specific workers' biases (Geva et al., 2019).",
"On average, there are 11.2 utterances (5.6 pairs from the two workers) in each conversation in the train set.",
"This dataset is available through the ParlAI framework 2 .",
"An example conversation from BlendedSkillTalk is shown in Figure 1.",
"In this example, we see that the speakers inject knowledge, empathy, and personal background, and generally that the conversation invokes different skills while flowing naturally.",
"Guided Collection In order to prevent workers from getting stuck in a set mode of conversation (in which they consistently use one specific skill) or from being too generic, we provide responses from models that have been trained towards a specific skill as inspiration to one of the two workers in the conversation.",
"That worker is free to either use and modify or ignore those responses.",
"Thus, each conversation involves an unguided speaker and a guided speaker, with the unguided speaker talking first.",
"Whenever it is the guided speaker's turn to respond, we show them three suggested responses, one each from three single-task polyencoder (Humeau et al., 2019) models trained on the ConvAI2, ED, and WoW datasets.",
"These are the same models we use as baseline conversational agents for individual skills as well.",
"A breakdown of the choices of guided speakers is shown in Table 1, showing a reasonably balanced choice of suggestions.",
"Workers decide to use them in 20.5% of utterances, which affects the overall dialogues.",
"Interestingly, 46.1% of the time (versus 33.3% at chance), the unguided speaker continues in the same mode as the previous utterance by the guided speaker, according to the classifier.",
"Thus, the BlendedSkillTalk dataset mimics natural conversation by featuring both continuity (stickiness in the conversation mode) and mode blending within a single conversation.",
"Blended Initial Contexts Each speaker is assigned a pair of sentences from randomly-chosen personas from the ConvAI2 dataset.",
"Similar to the ConvAI2 setting, each speaker sees their own persona but not that of the other speaker.",
"Each conversation is seeded with a randomly selected pair of utterances from ConvAI2, WoW, or ED, with equal probability.",
"Workers are instructed to continue the conversation from there.",
"Workers are also provided with the topic being discussed if the conversation seed is from WoW, or the situation description if it is from ED.",
"Note that this latter set-up departs from the ED benchmark set-up, where the situation description is not used.",
"The rationale for this is to provide some context about Chosen suggestion Initial Context Count Total none ConvAI2 7280 21468 ED 7257 WoW 6931 ConvAI2 ConvAI2 567 1599 ED 496 WoW 536 ED ConvAI2 766 2221 ED 773 WoW 682 WoW ConvAI2 634 1730 ED 494 WoW 602 Table 1: Guided workers choice of suggestions in the train set of BlendedSkillTalk, broken down by provenance of the given initial context utterances.",
"what was being discussed if the seed utterance pair happened to be extracted from the middle of a conversation.",
"When WoW is used as seed, the chosen personas and the initial conversation topic are selected to match, similar to the original WoW paper.",
"To gain more insight into the influence of the datasets that provide this context, we leverage an utterance classifier trained to assign utterances to one of the three datasets (ConvAI2, WoW, ED; described further in Section 3.2).",
"We find that the average percentage of utterances from the unguided worker that match the provided context dataset is 43.5% over the training set, compared to 33.3% if the source of the provided context had no influence (note that this observed stickiness is similar to the 46.1% of times the unguided speaker continues in the same mode as the one initiated by the guided speaker, mentioned above).",
"This suggests that the choice of seeding utterances and context indeed has an influence on the type of blend observed, helping to make the dataset balanced.",
"Table 2 breaks down the classification results by provenance of the seed context.",
"The fraction of utterances resembling a given dataset increases when the seed context is from that same dataset.",
"However the conversations are still blended: when breaking down the training set conversations according to the number of modes observed in the utterances of the unguided worker according to the classifier, 47.8% show 3 modes, 43.2% show two modes, and 9.1% show a single mode.",
"Data Quality To improve the quality of the collected conversations, we filter out any conversa-Persona for Unguided Speaker : Persona for Guided Speaker : My son plays on the local football team.",
"tions where one of the speakers speaks less than 3 words per message; starts their conversation with a greeting despite previous utterances existing in the conversation; uses all-caps too frequently; repeats themselves too much; writes a message that gets flagged by a safety classifier; or, if they are the guided speaker, always accepts suggestions verbatim without changing them.",
"Messages cannot be over 30 words or copy persona strings exactly.",
"Knowledge: using factual information ( I've heard that in some places, lifeguards also help with other sorts of emergencies, like mountain rescues! ) (Dinan et al., 2019b) Empathy: understanding and acknowledging implied feelings ( I'm sorry to hear that. I wish I could help you figure it out ) (Rashkin et al., 2019) Personal situations: past circumstances in a person's life ( I finally got that promotion at work! I have tried so hard for so long to get it! ) (Rashkin et al., 2019) Personal background: a person's personality, interests, and attributes ( I am into equestrian sports. ) (Zhang et al., 2018)",
"All utterances in over 700 conversations from the validation set of the BST dataset, from both guided and unguided workers, were annotated in this manner for 7,380 annotations collected in total.",
"Workers were able to select as many attributes as Mode Count Conversations Pct (%) 1 51 6.9% 2 167 22.6% 3 290 39.2% 4 232 31.4% Table 3: Breakdown of conversations by number of modes, showing that most BST dataset conversations exhibit multiple modes.",
"they wished for each utterance.",
"To avoid worker-specific bias, each crowdsource worker was limited to performing annotations on 10 conversations, and 123 total workers contributed annotations.",
"Most analysis in this paper refers to three datasets, and the utterance classifier was trained with three dataset labels as classes.",
"However, the ED dataset contains both Speaker utterances that describe personal situations, and Listener utterances, where the Listener responds with empathy (the ED benchmarks trains on both sides but evaluates only on the Listener side).",
"We therefore break down annotations into four types, with two types covering responses about personal top-ics: personal background (which is the focus of ConvAI2) and personal situations (talked about in ED).",
"Results in Table 3 show that the dataset indeed contains a reasonably balanced blend of these qualities.",
"Over 70% of conversations annotated contained at least 3 of 4 modes.",
"Overall, workers' annotation counts are 43.7% for personal background, 20.5% for knowledge, 20.3% for empathy, and 15.4% for personal situations.",
"This supports the finding from our utterance classifier that the vast majority of conversations feature more than one mode, where utterance modes are defined as the predicted dataset provenance per utterance.",
"In order to avoid excessive annotator bias and keep annotations discriminative, we limit the maximum number of annotations per worker and check that annotators did not select all modes for each utterance.",
"Architectures and Training The base architecture used throughout the paper is the 256-million parameter poly-encoder proposed in Humeau et al. (2019), which is a Transformer-based architecture for retrieval that learns a small number of codes",
"representing the input context, so that performing attention over retrieval candidates is tractable in real-time, and was shown to be state of the art on several datasets.",
"The polyencoder is first pretrained on the pushshift.io Reddit dataset and then fine-tuned on individual datasets.",
"At test time, these models retrieve from the set of training utterances to output a response.",
"Swept hyperparameters include dropout fractions, learning-rate schedule, the number of polyencoder codes used to represent the context, the output scaling factor, and the output reduction type (max across outputs vs. mean across outputs vs. first output only).",
"Hyperparameters that were held constant included a training batch size of 512 and learning with Adamax; 12 encoder layers and an embedding size of 768; and label and text truncation lengths of 72 and 360.",
"Note this model discards all casing information.",
"Models were trained until validation-set hits@1 failed to improve for 10 epochs.",
"All training is conducted in ParlAI (Miller et al., 2017).",
"Model selection during fine-tuning is performed by choosing the model that scores highest on hits@1 on the validation set.",
"This architecture is then leveraged in different ways to combine different skills in a single agent.",
"Fine-tuning on the BlendedSkillTalk Dataset The simplest setting is to directly fine-tune the base architecture on a dataset that exhibits the blended skills we are looking for.",
"In this setting, we simply fine-tune the poly-encoder pre-trained on pushshift.io Reddit on the BlendedSkillTalk dataset, following the procedure in Humeau et al. (2019).",
"This setting is referred to as BST thereafter (for BlendedSkillTalk).",
"Such blended multi-skill training is only possible if a resource like BlendedSkillTalk is available, which we only just collected.",
"Thus, interesting questions unanswered by such training include:",
"(i) can we learn a strongly performing multi-skilled model with only individual tasks and no access to blended data?",
"(ii) would a model with both individual skill training and blended skill training be superior?",
"Multi-task Single-Skills A straight-forward approach given access to multiple single-skill tasks is to multi-task on all of them during the fine-tuning step.",
"Using the multi-task training framework in ParlAI, we again start from the polyencoder pre-trained on pushshift.io Reddit, and fine-tune it multi-tasking on ConvAI2, WoW, and ED.",
"The architecture is thus the same as for the single-task models, and has the same number of parameters.",
"We select the model with the highest macro-average hits@1 across all training tasks.",
"Mitigating Single-Skill bias The straightforward way of multi-tasking over single skills is to sample training data from each task during updates.",
"However, if individual skill contexts are too different from each other a multi-task model will trivially separate the learning, rather than blending skills together.",
"Then, if the bias is different at evaluation time, it will select the skill to use poorly.",
"In our case, ConvAI2 dialogues include a persona context, while WoW includes a topic.",
"This difference runs the risk of biasing the multi-task model into associating the mere presence of a persona context to chat about personal background, and that of a discussion topic to discussions where more knowledge is displayed, which could lead to over-emphasizing responses in the ConvAI2 style when tested on BlendedSkillTalk which contains personas.",
"We thus also experiment with a multi-task setting where the single skills are modified to always include a persona and a topic, as this is then balanced, and corresponds to the final evaluation using BlendedSkillTalk.",
"For every dialogue in each of the single-skill tasks, we thus prepend a persona and a topic to the first utterance if they are not already present.",
"The personas and topics are selected from the training sets of ConvAI2 and WoW respectively, where WoW topics already have an alignment to ConvAI2.",
"For WoW, a persona is selected via this mapping.",
"For ConvAI2, a topic is found with the inverse mapping.",
"For ED, the maximum word overlap between the first utterance of the conversation and any training set persona is used to select the appropriate persona, and then a topic is found as before.",
"Multi-task Single-Skills + BlendedSkillTalk After training in a multi-task fashion on single skills, we can afterwards try to continue training with the BlendedSkillTalk resource, in an effort to improve the model's ability to deal with blended data.",
"We take the best model previously trained, and tune it in this fashion.",
"Harnessing those trained models could potentially allow a conversational agent to jointly exhibit all skills, with minimal additional training.",
"Instead, one trains a top-level dialogue manager' which is a classifier with the dialogue context as input, that predicts which skill to use on each turn, and then outputs the utterance produced by the corresponding trained model.",
"Specifically, we train a three-class classifier on top of BERT-base (De-vlin et al., 2019) that assigns an utterance to the dataset it came from.",
"We remove duplicate utterances present in more than one of the datasets prior to training and upsample with replacement to create equal representation in the classifier's training set.",
"We also remove context from the utterances including topics from Wizard of Wikipedia and personas from ConvAI2 before training this classifier and when performing evaluation to prevent the classifier from relying on these (cf. the bias mitigation mentioned above).",
"In Section 4.1, we introduce the automated metrics and human evaluations that we use to measure and compare model performance.",
"Section 4.2 discusses how adding personas and topic strings during multi-task training de-biases the selection of retrieval candidates from across our three skill-based tasks.",
"Sections 4.3 and 4.4 detail the performance of our models using automated metrics on single-skill and BlendedSkillTalk benchmarks, respectively, and Section 4.5 compares the performance of the models on human evaluation: in all three cases, models trained on all three skills generally outperform those trained on individual skills.",
"We use both automated metrics and human evaluation.",
"For automated metrics, we report hits@1 on the test set (or validation set in the case of ConvAI2 as the test set is not publicly available), out of 20 candidates for ConvAI2, and 100 candidates for ED and WoW, following the original datasets.",
"For human evaluation, we ask workers to chat with various models and then rate the conversation along several axes: Knowledge: How knowledgeable was your chat partner (from 1: not at all, to 5: very)?",
"Empathy: Did the responses of your chat MT Single-Skills MT",
"partner show understanding of your feelings (from 1: not at all, to 5: very much)?",
"Personal: How much did your chat partner talk about themselves (from 1: not at all, to 5: a lot)?",
"Overall: Overall, how much would you like to have a long conversation with this conversation partner (from 1: not at all, to 5: a lot)?",
"Conversations and ratings are collected at least 100 times per model, from 234 crowdsource workers who produce a maximum of 10 of these conversations overall (across all model types).",
"Several methods are used to filter out low quality workers that are similar to the methods used in collection of the BlendedSkillTalk dataset collection.",
"All work by a given worker is excluded if they give the same ratings across all conversations, give utterances deemed unsafe by a safety classifier (Di-nan et al., 2019a), utterances shorter than 3 words, use all-caps too frequently, or repeat themselves too much.",
"Messages cannot be over 30 words or copy persona strings exactly.",
"We first examine the issue of skill selection bias in multi-task models.",
"As we are employing multitask retrieval models that retrieve from the set of candidates across all skills, we can collect statistics on those selection choices (i.e., which datasets the chosen utterances originated from).",
"Table 4 reports the percentage of utterances derived from the three skills for our multi-task models (MT Single-Skills and MT Single-Skills + BST) when evaluating on the BST test set.",
"When training on the original skill datasets, we observe heavy overuse of the ConvAI2 utterances and underuse of WoW, likely because BST contains personas as input.",
"Our bias mitigation approach described in Section 3.2 causes a substantial shift for both models, making the use of the skills more equal.",
"These results are then in line with the actual expected ratios in BST, as shown in Section 3.1 (Skill Anno-tations).",
"In the following experiments, we thus use the debiased versions.",
"Automated metrics results on the original benchmarks used to gauge competency at a single skill (ConvAI2, WoW, ED) reported in the literature are shown in Table 5 (first row).",
"Our poly-encoder models (rows 24) trained on single tasks match or exceed the metrics published with the corresponding benchmarks, except for ED, which is close.",
"The single-skill models each perform the best on their respective original benchmark and not as well on other benchmarks, compared to the blended models.",
"However, the performance of all blended models is more balanced, in the sense that none of the single-skill models does as well averaged over the three categories (ex-cept for the ED model doing a tiny bit better than the random-skill model).",
"The model fine-tuned on BST shows balanced performance but fails to match the performance of the single-skill models on their original benchmarks.",
"The performance of the Multi-Task Two-Stage model gains many points over that of simple random assignment of single-skill models (Random-Skill), and this Random-Skill model itself performs about as well as the BST-fine-tuned model on the ED and WoW benchmarks.",
"The Multi-Task Single-Skills model performs best among the blended models, and nearly matches the performance of all single-skill models on all benchmarks (even surpassing it for the WoW benchmark).",
"The fact that the Multi-Task Single-Skills model does not do exactly as well as the single-skill models when evaluated using only candidates from individual benchmarks matches the observations of other work (Raffel et al., 2019).",
"However, when evaluated with a set of mixed candidates from all single-skill tasks (where the set of candidates to choose from is tripled by included an equal number of candidates from the other two datasets), the multi-task model performs better than the individual models, suggesting that multi-task training results in increased resilience to having to deal with more varied distractor candidates.",
"We also include metrics for added-context, when topics and personas are added (see Section 4.2), as a san-Single-skill benchmarks Model ConvAI2 WoW ED Avg.",
"We show two types of results on the BlendedSkillTalk benchmark (BST).",
"Single-skill models are tested directly on BST without any additional training in a zero-shot setting, or fine-tuned on the Model BST, zero-shot +BST, FT ConvAI2 76.8 81.7 WoW 67.5 79.4 ED 69.0 80.4 BST -79.2 Random-Skill 71.2 MT Two-Stage 71.9 MT Single-Skills 80.1 83.8 Table 6: Test results on BlendedSkillTalk.",
"BST training set then tested on the BST test-set.",
"Results for both settings are shown in Table",
"6. The Multi-Task Single-Skills model outperforms all single-skill model baselines, whether used in a zero-shot or fine-tuned fashion, despite being the same size.",
"The MT Two-Stage and Random-Skill models outperform two of the three single-skill models.",
"We hypothesize that the ConvAI2 model is doing better because it has already learned to use personas.",
"All single-skill models show improved performance once fine-tuned on the BST train set.",
"However, performance in the zero-shot setting is already good, which is promising in terms of generalization to unseen data.",
"Human evaluation results are shown in Table",
"7. Single-skill models tend to generally be rated better than the other single-skill models on the skill they were optimized for, although all single-skill models are similarly rated on the knowledge axis.",
"Models that have been trained on multiple skills, either through multi-tasking (MT Two-Stage or MT Single-Skills) or through fine-tuning on BST, are performing well on every dimension, with the MT Two-Stage model and the MT Single-Skills fine-tuned on BST being the overall best.",
"These two models have different advantages: the MT Single-Skills model fine-tuned on BST is more compact, being the same size as each individual single-skill model, but requires joint multi-task training, then fine-tuning.",
"The MT Two-Stage Model Knowledge Empathy Personal Overall quality ConvAI2 3.2 3.1 3.4 3.0 WoW 3.3 2.9 2.7 2.6 ED 3.4 3.3 3.0 3.0 BST 3.5 3.6 3.1 3.3 Random-Skill 3.2 2.9 3.2 2.7 MT Two-Stage 3.7 3.6 3.3 3.5 MT Single-Skills 3.7 3.6 3.0 3.4 MT Single-Skills +BST fine-tuning 3.7 3.8 3.2 3.6 Table 7: Human evaluation results on individual axes of knowledge, empathy, and being personal, as well as overall quality.",
"model only requires training a classifier to play the role of a dialogue manager by assigning utterances to one of the three single-skill benchmarks, but is overall a much bigger model, given that it uses large models for each single skill and the classifier itself.",
"The Random-Skill model is bypassing the need for a classifier by simply using all three single-skill model randomly, and is rated well on the personal axis, but not as well on knowledge or empathy, which might be because talking about personal topics can always work, while knowledge and empathy have to be suited to the context.",
"This paper focuses on the goal of creating an open-domain conversational agent that can display many skills, and blend them in a seamless and engaging way.",
"We have shown several ways to leverage previous work focusing on individual conversational skills, either by combining trained single-skill models in a two-stage way, by re-using the datasets for simultaneous multi-task training, and by fine-tuning on the overall blended task.",
"We compared the performance of these schemes on BlendedSkillTalk, a new English-language dataset blending three conversation skills in balanced proportions (demonstrating knowledge, empathy, or ability to talk about oneself).",
"We showed that multiple multi-task approaches can be effective on this task, however careful construction of the training scheme is important to mitigate biases when blending and selecting skills, while fine-tuning on the overall blended task improves models further.",
"One natural extension would be to generalize these findings to other skills than the three addressed here, such as humor/wit, eloquence, image commenting, etc.",
"This would in principle be straightforward to do as long as these additional skills have a corresponding single-skill dataset to train on and are sufficiently distinguishable from each other."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"objective",
"result",
"objective",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"result",
"objective",
"result",
"abstain",
"abstain"
] |
[
"Recent pretrained vision-language models have achieved impressive performance on cross-modal retrieval tasks in English.",
"Their success, however, heavily depends on the availability of many annotated image-caption datasets for pretraining, where the texts are not necessarily in English.",
"Although we can utilize machine translation (MT) tools to translate non-English text to English, the performance still largely relies on MT's quality and may suffer from high latency problems in real-world applications.",
"This paper proposes a new approach to learn cross-lingual cross-modal representations for matching images and their relevant captions in multiple languages.",
"We seamlessly combine cross-lingual pretraining objectives and cross-modal pretraining objectives in a unified framework to learn image and text in a joint embedding space from available English image-caption data, monolingual and parallel corpus.",
"We show that our approach achieves SOTA performance in retrieval tasks on two multimodal multilingual image caption benchmarks: Multi30k with German captions and MSCOCO with Japanese captions.",
"Recent pretrained vision-language models (Chen et al., 2020; Li et al., 2020; Su et al., 2020; Gan et al., 2020; Luo et al., 2020) based on Transformer (Vaswani et al., 2017) have achieved remarkable performance on cross-modal retrieval (Li et al., 2020; Yu et al., 2020, 2021b), image captioning (Chen et al., 2020) and visual question and answering (VQA) (Su et al., 2020) tasks in English.",
"For instance, most leading competitors in the VQA contest 1 rely on the transformer-based pretrained vision-language models.",
"tions (Sharma et al., 2018)).",
"In reality, there are limited such data in other languages.",
"When generalizing to cross-lingual cross-modal downstream tasks, a straightforward way is to utilize machine translation (MT) tools to translate non-English text to English and reuse the fine-tuned models in English.",
"Nevertheless, the performance strongly relies on the MT tool's capability and suffers from high latency problems in real-world applications.",
"To learn multilingual multimodal representations, recent researchers utilized multilingual datasets to model images and text captions in a joint embedding space.",
"Based on how the shared feature space is learned, there are two categories: word-level alignments (Mohammadshahi et al., 2019) and sentence-level alignments (Wehrmann et al., 2019; Rajendran et al., 2016).",
"Those models can capture a certain level of semantic similarity among languages and images.",
"They, however, only modeled the relevance of text and images in a global manner.",
"Such a limitation may prevent these models from effectively detecting relevance locally.",
"In parallel, cross-lingual language models such as multilingual BERT (Devlin et al., 2019) and XLM (Conneau and Lample, 2019), and pretrained vision-language models (Chen et al., 2020; Li et al., 2020; Su et al., 2020) have been prevalent in bridging different languages and modalities.",
"Those models use the Transformer (Vaswani et al., 2017) architecture simultaneously pretrained from multiple languages or image-caption pairs to construct an encoder, and then fine-tune the encoder on downstream applications with task-specific objectives.",
"The whole process enables sufficient interaction across languages and other modalities via cross-attention.",
"However, current cross-lingual models and cross-modal models are trained separately on multilingual corpus and English-caption data.",
"Hence the resulting pretrained models are not directly applicable to downstream cross-modal tasks involving non-English languages.",
"This paper proposes a cross-lingual cross-modal pretraining framework to learn a language invariant representation across image and text modalities.",
"We hypothesize that introducing pretraining tasks involving different languages and modalities and modeling the interaction among them leads to a more powerful joint representation and generalizes well to downstream tasks.",
"Extending previous vision-language pretraining works (e.g., Su et al. (2020)) that learn parameters solely based on the English-image caption data, we introduce monolingual and parallel corpus involving other languages to refine the shared latent space further.",
"In Figure 1, we provide a skeleton of our pretraining framework, which is built on top of vision-language BERT models (Su et al., 2020; Li et al., 2020) with more pretraining tasks and data sources.",
"In particular, we use masked language modeling (MLM) (Devlin et al., 2019) on monolingual text corpus, and translation language modeling (TLM) adopted from XLM (Conneau and Lample, 2019) on parallel text corpus.",
"We follow the standard vision-language pretraining models for the English-image data and use MLM on text captions and masked region classification (MRC) on image regions.",
"Besides, motivated by the success of the cross-lingual text recovery (CLTR) task in Unicoder (Huang et al., 2019), we propose a cross-modal text recovery (CMTR) task.",
"Like CLTR, CMTR leverages the attention matrix between image-caption pairs to learn the alignment among words and regions of interest in images.",
"We performed text-to-image and image-to-text retrieval tasks on two multimodal multilingual image caption benchmarks: Multi30k (German and English) captions and MSCOCO (English and Japanese).",
"We achieve SOTA results on retrieval tasks involving Japanese and German languages, compared with a machine translation baseline and other recently published works.",
"Recently, BERT (Devlin et al., 2019) based vision-language pretraining models (Chen et al., 2020; Li et al., 2020; Su et al., 2020; Gan et al., 2020; Luo et al., 2020) emerge.",
"In those models, the pretraining typically consists of three types of tasks:",
"1) masked language modeling,",
"2) masked region modeling, and",
"3) text-image matching.",
"By exploiting the cross-modal attention and being pretrained on large-scale datasets, cross-modal BERT methods have achieved state-of-the-art performance in many text-vision understanding tasks.",
"Nevertheless, all the above models deal with a single language English and image or video domain.",
"Cross-lingual pretrained language models (Devlin et al., 2019; Conneau and Lample, 2019) are capable of simultaneously encoding texts from multiple languages.",
"Most notably, multilingual BERT (De-vlin et al., 2019) takes the same model structure and training objective as BERT but was pretrained on more than 100 languages on Wikipedia.",
"XLM model (Conneau and Lample, 2019) is pretrained with MLM and TLM to take advantage of parallel sentence resources if available.",
"Evaluations on a series of cross-lingual transfer tasks (Fei and Li, 2020; Yu et al., 2021a) have shown that these cross-lingual LMs have significant utilities for transferring knowledge between languages.",
"Therefore, we propose integrating cross-lingual pretraining tasks with vision-language pretraining to obtain a universal multilingual multimodal representation.",
"Our framework adopts the network structure of VL-BERT (Su et al., 2020).",
"VL-BERT is a single-stream cross-modal model that concatenates word features from the text and bounding box features from the image and feeds the concatenated sequence into a series of transformer blocks.",
"Both vision-grounded masked language model (MLM) and text-grounded masked region classification (MRC) task on image-caption data are used in our model by default, as they have shown strong performance in VL-BERT (Su et al., 2020; Li et al., 2020).",
"Since we introduce auxiliary multilingual text corpus, we also use MLM on the texts in other languages by default.",
"Motivated by Unicoder (Huang et al., 2019) showing that pretrained models can be further improved by involving more tasks, we introduce two additional cross-lingual pretraining tasks and one cross-modal task for improving the performance.",
"Cross-model Text Recovery.",
"This task (CMTR) is motivated by the multilingual pretraining model Unicoder (Huang et al., 2019).",
"As shown in Figure 2, CMTR is based on the image-caption pairs as input, but it does not use the original caption words.",
"Instead, it computes an alignment between word features and bounding box features extracted by tools (e.g., Faster-RCNN (Anderson et al., 2018)), and uses attended features to simultaneously recover all input words.",
"In particular, let ( B , E ) be an image-caption input pair, where B = ( b 1 , b 2 , , b n ) are bounding box feature embeddings and E = ( e 1 , e 2 , , e m ) are word embeddings.",
"CMTR first calculates an attended representation for the caption words with bounding box features as e i = (cid:80) nj =1 a ij b j , where a ij = softmax( A i, : )[ j ] , b j R h , e i R h , and h denotes the embedding dimension.",
"A R m n is the attention matrix calculated by bi-linear attention as A ij = e Ti Wb j , where W is a trainable parameter.",
"Finally we take E = tanh(( e 1 , e 2 , , e m )) as input and predict the original caption words.",
"The objective function is: l ( X ; e, d ) = E x X [( x, d ( e ( x )))] (1) where ( ., . ) is the sum of token-level cross-entropy loss and e ( . ) is the encoder component including the input layer, the attention layer and save our Earth We 0 1 2 3 T T T T + + + + + + + + 4 I I I 4 4 + + + + + + Transformer layers Token emb.",
"transformer layers.",
"d ( . ) is the decoder applied on the output of transformers, which is a shared linear projection layer with other MLM tasks and CLTR task introduced below.",
"Cross-lingual Text Recovery.",
"This task (CLTR) is adopted from Unicoder (Huang et al., 2019), which takes a pair of parallel sentences ( X, Y ) and lets the pretrained model learn the underlying word alignments between two languages.",
"Similar to CMTR, we also use the bi-linear attention mechanism to compute an attended representation X for input sentence X in the source language with its parallel sentence Y , and then try to recover X using the attended input X .",
"In CLTR task, we optimize the same objective function in Eq.",
"(1).",
"Note that CLTR and CMTR do not share attention parameters since there is still a large modal gap between text and image before applying cross-attention.",
"Translation Language Model.",
"This task (TLM) is adopted from XLM (Conneau and Lample, 2019), which takes a pair of parallel sentences with randomly masked tokens in different languages as input.",
"The model is trained to predict the masked tokens by attending to local contexts and distant contexts in another language.",
"Interested readers please refer to Conneau and Lample (2019) for more details about its objective function.",
"For fine-tuning, we minimize the triplet ranking loss to fine-tune the retrieval model.",
"To boost the performance, we use the hard negative mining strategy in SCAN (Lee et al., 2018).",
"For each text query, there is only one positive image sample and the rest are negative.",
"Denoting a mini-batch of training samples by { ( q i , I i ) } Ki =1 , where a query q i is only relevant with the image I i , we only penalize the hardest negative image in the mini-batch by L ( q i ) = max j (cid:54) = i [ R ( q i , I j ) R ( q i , I i ) + m ] + , where m is the margin set to 0 .",
"2 by default, and [ x ] + = max(0 , x ) is a clip function.",
"R ( q, I ) is the function to evaluate the similarity between query q and image I parameterized by u and b: R ( q, I ) = u (cid:62) BERTCLS ( q, I ) + b.",
"Considering the whole mini-batch of images and texts, the final loss function is computed by L = 1 K (cid:80) Ki =1 [ L ( q i ) + L ( I i )] .",
"For pretraining, we utilize two public English image-caption datasets: SBU Captions (Ordonez et al., 2011) and Conceptual Captions (Sharma et al., 2018).",
"Due to broken URLs, we only collected around 3 .",
"7 M text-image pairs in total.",
"For monolingual (en, de, ja) text and parallel corpus (en-de), we randomly sample 20 M sentences from Wikipedia text 2 and 9 M parallel sentences from MultiUN corpus 3 .",
"We also collected 2 .",
"8 M en-ja parallel sentences from Pryzant et al. (2018).",
"For fine-tuning, we use two multilingual multimodal benchmarks for retrieval, MSCOCO (en, ja) (Lin et al., 2014) and Multi30k (en, de) (Elliott et al., 2016).",
"MSCOCO contains 123 , 287 images, and each image contains five captions.",
"Following the settings in Faghri et al. (2018), we split the English data into 113 , 287 training samples, 5 , 000 validation samples, and 5 , 000 testing samples.",
"Miyazaki and Shimizu (2016) generated the Japanese captions for a subset of 33 , 745 images.",
"Similarly, we split 23 , 745 samples for training, 5 , 000 for validation as 5 , 000 for testing.",
"Multi30K contains 31 , 783 images, with each having five captions as well.",
"Following Karpathy and Li (2015), we split the dataset into 29 , 783 training samples, 2 http://dumps.wikimedia.org/ 3 https://bit.ly/2OvI2ZD 1 , 000 validation samples and 1 , 000 testing samples.",
"We use R@K (K = 1,5,10) as evaluation metrics.",
"R@K is the percentage of ground-truth matchings appearing in the top K-ranked results.",
"We use the multilingual BERT uncased version (De-vlin et al., 2019) to initialize our model, which has 12 layers of Transformer blocks.",
"Each block has 768 hidden units, 12 self-attention heads, and the vocabulary size is 105 , 879 .",
"The maximum sequence length is set to 64 .",
"Following Li et al. (2020), we detect 100 bounding boxes per image using Faster-RCNN (Anderson et al., 2018) pretrained on Visual Genome (Krishna et al., 2017).",
"Our pretraining is conducted on 16 NVIDIA V100 GPUs ( 16 GB memory), and fine-tuning is conducted on 8 NVIDIA V100 GPUs.",
"We use FP 16 to speed up training and reduce memory usage.",
"We use Adam optimizer (Kingma and Ba, 2015) and set the batch size per GPU to 16 .",
"The initial learning rate is 1 e5 .",
"We pretrain the model for 50 epochs and fine-tune the retrieval model based on the average of R@{1,5,10} on the validation set.",
"We repeat our experiments five times and report the average metrics on the test set.",
"We compare our models with several recent competitive methods.",
"VL-BERT (Su et al., 2020) and Unicoder-VL (Li et al., 2020) are two well-known vision-language BERT based models.",
"For VL-BERT, We reproduce the English results by fine-tuning their official pretrained model 4 and generate non-English results from their released code following the same configuration as ours.",
"For Unicoder-VL, we adopt their reported English results in the paper.",
"Besides pretraining based models, we also compare several methods, including cross-attention based model SCAN (Lee et al., 2018), multilingual word embedding alignment-based model AME (Mohammadshahi et al., 2019) and multilingual sentence alignment-based model LIME (Wehrmann et al., 2019).",
"We directly use SCAN, AME, and LIME's reported performance from their papers.",
"Finally, we compare with a machine translation baseline: Translate-test, which translates the test data in Japanese or German to English using Google Translate, and then evaluates on fine-tuned VL-BERT retrieval model in English.",
"Table 2 presents the results for English tasks.",
"Compared with Unicoder-VL (Li et al., 2020), our model performs slightly worse but obtains better results than VL-BERT.",
"A possible reason is that Unicoder-VL is initialized with English BERT, which is specifically optimized for English.",
"The benefit of our model is demonstrated in Table 3 for cross-modal retrieval tasks involving non-English languages.",
"We first observe that the machine translation baseline Translate-test achieves better results than VL-BERT pretrained with MLM objective only on multilingual corpus and fine-tuned in the target language, proving the importance of aligning different languages.",
"Moreover, the average recall of the Translate-test is around 1-2% lower than our method.",
"Such results indicate that pretraining with additional cross-lingual objectives is more effective than translating the target language into English for these two benchmarks.",
"Though combining more powerful machine translation tools and better fine-tuned English retrieval models may lead to slightly better performance, our method learns a universal representation without dependency on external machine translation tools for particular language pairs, which is more suitable for real-world applications.",
"Finally, compared with VL-BERT (Su et al., 2020) that is only pretrained with MLM task on multilingual corpus, our additional cross-lingual pretraining tasks bring performance improvement.",
"To understand the effect of different components, we conduct an ablation study on the test set and report the average Recall@1 in Table 4.",
"Although cross-lingual pretraining tasks (TLM and CLTR) do not help English-related retrieval tasks much, they contribute more than 1% improvement for Japanese and German.",
"The result is under our expectation since those tasks effectively link non-English languages with the vision domain using English as the bridge.",
"Among all the components, CMTR consistently contributes around 1 point improvement.",
"In this work, we introduce multilingual corpus and three pretraining objectives to improve transformer based vision-language models for retrieval tasks.",
"Extensive experiments demonstrate the effectiveness of our contributions on cross-modal retrieval tasks.",
"Detailed ablation studies justify our modeling choices.",
"Our future work is to explore the zero-shot transferring capability of our framework."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"objective"
] |
[
"A well-known limitation in pretrain-finetune paradigm lies in its inflexibility caused by the one-size-fits-all vocabulary.",
"This potentially weakens the effect when applying pretrained models into natural language generation (NLG) tasks, especially for the subword distributions between upstream and downstream tasks with significant discrepancy.",
"Towards approaching this problem, we extend the vanilla pretrain-finetune pipeline with an extra embedding transfer step.",
"Specifically, a plug-and-play embedding generator is introduced to produce the representation of any input token, according to pre-trained embeddings of its morphologically similar ones.",
"Thus, embeddings of mismatch tokens in downstream tasks can also be efficiently initialized.",
"We conduct experiments on a variety of NLG tasks under the pretrain-finetune fashion.",
"Experimental results and extensive analyses show that the proposed strategy offers us opportunities to feel free to transfer the vocabulary, leading to more efficient and better performed downstream NLG models.",
"1 1 Introduction Pretrain-finetune paradigm has been highly successful on tackling challenging problems in natural language processing, e.g., domain adaptation (Sato et al., 2020; Yao et al., 2020), incremental learning (Khayrallah et al., 2018; Wan et al., 2020), as well as knowledge transferring (Liu et al., 2020b).",
"The rise of large-scale pre-trained language models further attracts increasing attention towards this strategy (Devlin et al., 2019; Edunov et al., 2019).",
"Typically, these methods first pretrain a universal 1 We release the code at https://github.com/ DeepLearnXMU/embedding-transfer * Jinsong Su is the corresponding author.",
"This work was done when Xin Liu was interning at DAMO Academy, Alibaba Group.",
"model using a large-scale corpus, which is then finetuned to various downstream tasks via a few adjustments.",
"Due to its simplicity yet impressive performance, pretrain-finetune paradigm becomes the undoubtedly dominant solution for building state-of-the-art models in many natural language understanding tasks (Xu et al., 2019; Yang et al., 2019a; Liu et al., 2020b).",
"In comparison, this strategy often achieves disappointing or barely satisfactory performance in natural language generation (NLG) tasks.",
"For example, several studies observe that M-BERT (Devlin et al., 2019) fails to enhance the decoder of a translation model (Edunov et al., 2019; Zhu et al., 2020), while Rothe et al. (2020) reach the same conclusion even when adapting an autoregressive model GPT (Rad-ford et al., 2019).",
"A natural problem arises: What is the crucial bottleneck in current pretrain-finetune framework and how to break it?",
"In this paper, we provide the first answer from the subword discrepancy aspect, namely, the subword vocabulary extracted according to the pretraining data distribution is insufficient to cope with the downstream NLG tasks.",
"Such inflexibility stems from the fact that downstream NLG models have to inherit the vocabulary from their pre-trained counterparts.",
"In order to deal with the open-vocabulary problem, it is de-facto standard for pre-trained models to employ heuristic subword segmentation methods (Sennrich et al., 2016; Kudo and Richardson, 2018).",
"However, the segmentation learns on the upstream corpus other than the finetuned data and is likely to be sub-optimal (Cherry et al., 2018; Provilkov et al., 2020).",
"We argue that these lead to subword discrepancy and bring two defects.",
"Firstly, the pre-trained model usually learns a fine-grained subword segmentation to maintain the coverage of a large amount of diverse vocabulary.",
"Consequently, downstream NLG models may suffer from more serious exposure bias (Bengio et al., 2015) and expensive computational cost caused by the increased sequence lengths.",
"As one example, M-BERT exploits 100 thousand fine-grained subwords to encode hundreds of languages, while most of downstream NLG tasks, in fact, require only one language and its associate tokens.",
"Secondly, words that are rare in upstream task but frequent in downstream task may be segmented end up poorly understood (Provilkov et al., 2020).",
"Considering the English sequence Cenozoic palaeohydrodynamic shown in Table 1, all the words are frequent in a thesis domain translation task and can be well preserved in its vocabulary.",
"Nevertheless, they are segmented into under-represented tokens by pre-trained models, preventing the finetuning stage from better learning their compositionality for generation.",
"An alternative solution is reconstructing the pre-trained model by exploiting either a task-specific vocabulary (Nguyen and Chiang, 2017; Kocmi and Bojar, 2018) or a subword regularization approach (Provilkov et al., 2020).",
"However, retraining the upstream model from scratch for each task is time-consuming and unavailable for large-scale models like M-BERT, GPT, etc.",
"To this end, we propose a simple yet generalized pretrain-finetune strategy, where an embedding transfer stage is inserted between pre-training and finetuning to eliminate their token granularity gaps.",
"Unlike the prior strategy using a fixed vocabulary, our vocabulary is changeable and its items including mismatched ones can be easily initialized by the pre-trained embeddings.",
"Concretely, we equip the pre-trained model with a plug-and-play embedding generator, which is able to produce the embedding of any token by feeding its subwords and hyperwords that appeared in pre-trained vocabulary.",
"To train this generator, we randomly split or merge some tokens to replace their original embeddings with those produced by the generator.",
"The parameters of the generator are optimized under the vanilla pre-training framework to minimize the divergence before and after replacing the embeddings.",
"Accordingly, we can use a task-specific vocabulary for the downstream task, where common tokens are immediately initialized with pre-trained embeddings while mismatched ones are initialized by our generator.",
"We conduct experiments on various tasks under NLG context, in a range from domain adaptation to knowledge transferring, and from machine translation to answer-aware question generation.",
"Empirical results demonstrate the universal-effectiveness of the proposed strategy comparing with strong baselines and related approaches.",
"Quantitative and qualitative analyses verify that tackling subword discrepancy can exactly alleviate the problem of exposure bias, large computational cost, and the under-represented tokens in vanilla pretrain-finetune paradigm.",
"To summarize, the contributions of our work are as follows: Through in-depth analyses, we point out and formally analyze subword discrepancy, affecting the conventional pretrain-finetune strategy in NLG tasks.",
"We propose a simple, flexible, and generalized pretrain-finetune training strategy, where an embedding generator is introduced to leverage the knowledge of the pre-trained model to initialize embeddings of any required tokens.",
"Extensive experiments show that our strategy is able to efficiently decrease the vocabulary gaps in pretrain-finetune paradigm and significantly boost the performance of NLG models.",
"Recent studies observe that pre-trained models suffer a bottleneck when they are applied to NLG tasks (Edunov et al., 2019; Zhu et al., 2020; Rothe et al., 2020).",
"This problem has been attributed to many reasons.",
"For example, Yang et al. (2019b) point out pretrain-finetune discrepancy caused by the absent masked frames in real data when adopting pretrained masked language models.",
"Chronopoulou et al. (2019) investigate catastrophic forgetting in finetuning stage.",
"It can be said that how to successfully employ pretrain-finetune to enhance NLG models remains a great challenge.",
"We explore this problem from another direction, i.e., the unsuitable subword segmentation for downstream tasks.",
"Task-Specific Vocabulary A natural manner to address this issue is to adopt a task-specific vocabulary.",
"Lewis et al. (2020) first replace the embedding layer with an independent encoder, of which vocabulary and parameters are learned from the downstream corpus.",
"Along this line, Sato et al. (2020) exploit external monolingual data to construct a new embedding layer and achieve improvements in domain adaptation.",
"This series of studies empirically confirm the necessity of the suitable vocabulary for the finetuning stage.",
"However, these methods have to learn the task-specific embeddings separately before each adaptation, which brings in additional computational cost thus limiting their applicability.",
"Besides, they completely discard the pre-trained embeddings, which have been proved to be useful by Aji et al. (2020).",
"Extra encoder or embedding layer may fail to be well optimized with insufficient downstream resources.",
"Accordingly, Rothe et al. (2020) employ a task-specific vocabulary to retrain M-BERT, which is then used to initialize neural machine translation (NMT) model.",
"Considering more robust approaches, Kudo (2018) and Provilkov et al. (2020) randomly sample segmentations for each sentence at the training time.",
"Unlike the above methods, our goal is to build a plug-and-play component, that involves neither retraining the pre-trained model nor learning task-specific embeddings separately.",
"Embedding Generator Our work is also related to studies with respect to generating embeddings for out-of-vocabulary (OOV) words.",
"In this context, researchers use embeddings of characters or subwords to predict those of unseen words (Pin-ter et al., 2017; Zhao et al., 2018; Sasaki et al., 2019; Fukuda et al., 2020).",
"For example, Zhao et al. (2018) train an embedding generator through reconstructing the original representation of each word from its bag of subwords.",
"Sasaki et al. (2019) progressively improve the generator using attention mechanism.",
"Fukuda et al. (2020) further leverage similar words to enhance this procedure.",
"Our work significantly differs from the above studies in two aspects.",
"Due to the vocabulary is fixed once prede-fined, the embedding reconstruction can be merely drawn on a few of selected words.",
"By contrast, our generator is able to produce embeddings of any tokens, since these embeddings are directly embedded into the pre-trained model with an objective in terms of minimizing the divergence.",
"Moreover, previous studies mainly focus on handling the problem of OOV, while our work, to our best of knowledge, is the first study that exploits embedding generator to transfer granularity over subwords for pretrain-finetune paradigm.",
"In this section, we introduce our proposed pretrain-finetune strategy in detail.",
"As shown in Figure 1, we extend the prior pretrain-finetune paradigm with an embedding transfer stage.",
"Specifically, we revise the conventional pretrain-finetune pipeline as follows: Pretrain.",
"As usual, we first construct a pre-trained model using an existing large-scale corpus.",
"In addition, we further pretrain an embedding generator regardless of downstream tasks.",
"It's expected to produce the embedding of any required token, by feeding pre-trained embeddings of its subwords and hyperwords.",
"Hence, it can be employed into any downstream tasks for embedding transferring.",
"Finetune.",
"We differently initialize the word embeddings and the other parameters (inner layer) for the downstream model, respectively.",
"For the former, we use the downstream-task training corpus to learn a task-specific subword segmentation and corresponding vocabulary.",
"For an unseen token, we apply the generator to produce its initial representation.",
"Otherwise, we directly initialize it with the corresponding pre-trained embeddings.",
"Considering the latter, we directly adapt inner-layer parameters of the pre-trained model to the downstream model.",
"Finally, we continue to train the downstream model using the finetuning data following the common fashion.",
"As seen, our strategy is lightweight and also able to avoid the issue of subword discrepancy, since it does not require retraining for the pre-trained model and can be quickly applied to various downstream NLG models.",
"To make the word embedding generator applicable to all downstream NLG models, we design the generator so that it can generate the embedding of any input token according to those of its morphologically similar tokens from the learned pre-training vocabulary.",
"The basic intuition behind our design stems from this fact: if the input token is a complete word, like motorcycle , its semantic meaning is related to those of its subword s, motor and ##cycle .",
"On the contrary, if the input token is a subword, such as ##er , the words that contain the input token, which we call them hyperword s, e.g., worker , writer and singer , can be exploited to learn its semantic meaning.",
"Concretely, given a mismatch token w , we borrow the segmentation principle from pre-trained model to split w into subwords based on the pretraining vocabulary, and traverse the pre-training vocabulary to select all longer tokens containing w .",
"Then, we combine the generated subwords and the selected hyperwords to form the morphologically similar token set of w , denoted by S m ( w ) .",
"Afterwards, we explore three kinds of generators to produce the embedding G ( w ) of w : AVG-EG: Averaging-Based Embedding Generator Intuitively, we can simply define G ( w ) as the average embedding of the words from S m ( w ) : G ( w ) = 1 | S m ( w ) | (cid:88) w (cid:48) S m ( w ) E ( w (cid:48) ) , (1) where E ( w (cid:48) ) is the pre-trained embedding of the token w (cid:48) .",
"In this way, our generator can be directly used, without increasing the cost of training time.",
"ATT-EG: Attention-Based Embedding Generator Another natural solution is to softly fuse information from different morphologically similar words using an attention mechanism (Bahdanau et al., 2015).",
"G ( w ) = 1 | S m ( w ) | (cid:88) w (cid:48) S m ( w ) ( w (cid:48) ) E ( w (cid:48) ) , ( w (cid:48) ) = exp ( W (cid:62) E ( w (cid:48) )) (cid:80) w (cid:48)(cid:48) S m ( w ) exp ( W (cid:62) E ( w (cid:48)(cid:48) )) , (2)",
"where W R 1 d indicates a learnable vector, d denotes the dimensionality of word embedding.",
"Compared with the first generator, this generator can be jointly trained with the pre-trained model, therefore it is capable of better quantifying the effects of morphologically similar words in S m ( w ) .",
"PATT-EG: Position-Aware Attention-Based Embedding Generator From the linguistic perspective, different locations of morphemes in a word reflect distinct semantic meaning.",
"Consequently, we refine the above attention-based generator by considering six kinds of morphology relationships between w and w (cid:48) S m ( w ) : if w (cid:48) is a subword of w , w (cid:48) can be the prefix/infix/suffix subword of w .",
"In turn, if w (cid:48) is a hyperword of w , w can be the prefix/infix/suffix subword of w (cid:48) .",
"Formally, G ( w ) is produced in the following way: G ( w ) = 1 | S m ( w ) | (cid:88) w (cid:48) S m ( w ) ( w (cid:48) ) E ( w (cid:48) ) , ( w (cid:48) ) = exp ( IW r E ( w (cid:48) )) (cid:80) w (cid:48)(cid:48) S m ( w ) exp ( IW r E ( w (cid:48)(cid:48) )) , (3) where W r R 6 d is a learnable parameter matrix, and I R 1 6 is the one-hot vector indicating the relationship between w and w (cid:48) .",
"Note that, all the trainable generators are designed to lightweight architectures with a few of parameters.",
"We believe this can achieve a more generalizable model and speed up their convergence.",
"We will compare and investigate these generators in the subsequent experiment section.",
"One principle of our strategy is plug-and-play, which can be directly applied to initialize any unseen tokens in all downstream NLG tasks, avoiding the time cost of retraining the model.",
"To this end, we borrow the pre-trained model and its associated corpus to train our generator before finetuning.",
"In the specific implementation, we first preprocess the sentences of pre-training corpus, where two kinds of preprocessing operations are applied G( noth ) G( ## ing ) E( I ) E( could ) E( have ) G( imagine ) E( ## d ) ( noth ) ( ## ing ) ( I ) ( could ) ( have ) ( imagine ) ( ## d ) !",
"to simulate unseen tokens:",
"1) randomly selecting some consecutive subwords and combining them into an unseen token; and",
"2) randomly choosing a token and splitting it into several consecutive unseen tokens.",
"Figure 2 provides an example of sentence preprocessing, where the word nothing is randomly split into two unseen subwords noth and ##ing , while the subwords ima and ##gine are concatenated into an unseen token imagine .",
"Through this data preprocessing, we can obtain large amounts of samples with unseen tokens involving various granularities, which facilitates the robustness of our generator.",
"Then, we embed our generator into the pretrained model to encode unseen words, and fix parameters of the pre-trained model to train the generator according to the following objectives: Reusing Pre-training Loss The generated embeddings should share the same latent space with the existing embeddings, in the meanwhile, representing appropriate semantic meaning.",
"Accordingly, we serve to minimize the vanilla loss of pretrained model as the basic training objective of our generator.",
"The loss function can be diverse according to the upstream tasks, which is denoted as L p ( s (cid:48) ) with s (cid:48) being the preprocessed training sentence.",
"Knowledge Distillation We further exploit knowledge distillation (Hinton et al., 2015) to narrow the divergence between hidden states in the pre-trained model before and after applying the generated embeddings.",
"Given a training example s , the vanilla pre-trained model and our generator preprocess it to s p and s (cid:48) , respectively.",
"As shown in Figure 2, we transfer the knowledge of the output layer in terms of s p to that of s (cid:48) .",
"Euclidean Distance is adopted to measure the divergence between representations output by vanilla pretrained model h p ( w ) and that of our model h (cid:48) ( w ) with respect to the same word w .",
"Since each word may be split into different sequences of tokens, we regard the average hidden states of the corresponding token sequence as its representation.",
"Thus, the loss function can be defined as: L d ( s p , s (cid:48) ) = 1 | s | (cid:88) w s || h p ( w ) h (cid:48) ( w ) || 2 , (4) Finally, we assign a hyper-parameter to quantify the effect of L ( ) and L d ( ) , which is empirically set to 0.5 as default: L ( s p , s (cid:48) ) = L p ( s (cid:48) ) + L d ( s p , s (cid:48) ) .",
"(5) 4 Experiments In this section, we examine the effectiveness of the proposed strategy in a variety of NLG tasks.",
"We first run a set of experiments to compare the variants of our approach and the related methods on domain adaptation translation tasks.",
"Then, we assess the superiority of our approach on transferring the knowledge from M-BERT (Devlin et al., 2019) and M-BART (Liu et al., 2020c) to two downstream NLG tasks: machine translation (MT) and answer-aware question generation (QG).",
"We conduct experiments on English-to-Chinese (En Zh) domain adaptation translation tasks, where the pretrain-finetune paradigm resort as standard.",
"The pre-training corpus is extracted from an out-of-domain dataset LDC , in which 1.25M (M = million), 3K (K = thousand), 3K sentences pairs are randomly sampled as training, development and test set, respectively.",
"We verify the effectiveness of our strategy on two downstream domains: Thesis and Laws, of which data are collected from UM-Corpus (Tian et al., 2014).",
"We follow the same settings as Zeng et al. (2018) and Su et al. (2021) to preprocess two corpus and train models.",
"The translation quality is evaluated by cased BLEU (Papineni et al., 2002), which is caculated by mteval-v13a.pl .",
"Implementation Details All the compared methods are re-implemented on top of FairSeq and built on Transformer (Vaswani et al., 2017).",
"We apply Adam Optimizer (Kingma and Ba, 2015) with 1 and 2 being 0.9 and 0.999, respectively.",
"The dropout ratio is set to 0.3 and each iteration batch consists of 25K tokens.",
"For both pre-training and finetuning, we employ warm-up strategy where the linear warm-up phase takes 4K steps, reaching its maximum learning rate to 5 10 4 .",
"The training of each model is early-stopped to maximize BLEU score on the development set.",
"Other hyperparameters are set following Base setting in Vaswani et al. (2017).",
"We investigate the following methods: Baseline : We design baselines under two basic settings: Single-Run denotes that the translation model only trained on in-domain corpus with the domain-specific vocabulary.",
"Pretrain-Finetune represents the wellknown pipeline, i.e., pre-training using upstream corpus, then finetuning on in-domain dataset via inheriting pre-training vocabulary.",
"Task-Specific Vocabulary : This group of methods retrain the upstream model using a task-specific vocabulary, involving: the vocabulary collected from in-domain data ( Downstream Vocab , Rothe et al., 2020), the joint vocabulary extracted from all corpus ( Joint Vocab , Nguyen and Chiang, 2017), as well as the pre-trained vocabulary with a subword regularization process on upstream corpus for robustness ( BPE-Drop , Provilkov et al., 2020).",
"Embedding Generator : We also examine several representatives of existing embedding generators on pretrain-finetune paradigm.",
"We Including LDC2002E18, LDC2003E07, LDC2003E14, LDC2004T07, LDC2004T08 and LDC2005T06.",
"https://github.com/pytorch/fairseq Hyperparameters that are not mentioned in our paper are set to the default according to the corresponding literatures.",
"assign the domain-specific vocabulary for each downstream model, in which embeddings of the seen tokens are reused, while the mismatched ones are:",
"1) randomly initialized ( Random Init , Aji et al., 2020);",
"2) learned by Word2Vec (Mikolov et al., 2013) using in-domain data; and",
"3) produced by a generator trained via reconstructing embeddings using Bag-of-Subwords ( Embedding Recon , Zhao et al., 2018).",
"New Embedding Layer : These methods assigned the domain-specific vocabulary for each downstream model, but completely discard the embeddings of upstream models.",
"The new embeddings are produced from:",
"1) randomly initialized Independent Encoder (Lewis et al., 2020); and",
"2) CBOW model trained under the downstream corpus (Sato et al., 2020).",
"Our Strategy : Our embedding generators are trained using the setting of pre-trained model with one epoch, as described in 3.",
"Results Table 2 lists our results on domain adaptation tasks.",
"Considering baseline models, imme-Models WMT14 En De SQuAD v1.1 Question Generation BLEU # Param.",
"This is consistent with findings in Edunov et al. (2019) and Zhu et al. (2020).",
"We observe that there are over 13K and 11K tokens in the vocabulary in terms of Out-of-Domain are mismatched with that of Thesis and Laws respectively, indicating that subword discrepancy indeed harms the performance of downstream NLG models.",
"When adapting task-specific vocabulary to retrain upstream models, all the translation qualities are improved, confirming the necessity of bridging subword gaps between upstream and downstream models.",
"In addition, we also appraise several existing embedding transfer strategies into pretrain-finetune pipeline.",
"Interestingly, randomly initializing embeddings of unseen tokens yields even slightly better results than utilizing Word2Vec and Embedding Recon.",
"We attribute this to the fact that the training of the latter two generators is individual regardless of the pre-trained model, resulting in unshared latent space between the generated and pre-trained embeddings.",
"Our models surpass all baselines and related methods on translation qualities.",
"Most importantly, in contrast to existing approaches that have to either retrain the pre-trained model from scratch or learn a separate embedding generator for each domain, our strategy can be immediately adopted to any downstream tasks once ready.",
"Specifically, PATT-EG achieves the best performance, confirming our hypothesis that softly summarizing information from morphologically similar tokens and considering positions of morphemes facilitate the embedding transferring.",
"Besides, using knowledge distillation to narrow the divergence before and after applying our generator can progressively improve the performance.",
"Accordingly, we use PATT-EG + Knowledge Distillation as the default setting in subsequent experiments.",
"We test our method on transferring the knowledge from two advanced large-scale language models: non-autoregressive M-BERT and autoregressive M-BART.",
"For computational efficiency, we randomly extract 4M samples from the conventional pre-training corpus || to train our embedding generator using the configurations of pre-trained models with one epoch and 4,096 batch size.",
"Comparisons are conducted on machine translation and question generation task.",
"The pre-trained model is employed on both of encoder and decoder.",
"Same as configurations in domain adaptation, we merely perform the embedding transferring in decoder.",
"Since the two language models exploit different segmentation tools, i.e., WordPiece (Wu et al., 2016) and SentencePiece (Kudo, 2018), we set 32K and 10K as the number of word and sentence pieces for downstream tasks, respectively.",
"Machine Translation Considering machine translation, we examine our method on the widely used English-to-German (En De) benchmarks: WMT14.",
"We follow Rothe et al. (2020) and Liu et al. (2020c) to deal this task.",
"Question Generation We use the SQuAD v1.1 (Rajpurkar et al., 2016) dataset for question generation.",
"We follow the common setting to preprocess dataset and train our models (Liu et al., 2020a).",
"The answer and the passage are taken as the model input, while the question is the target output.",
"ROUGE-L (Lin and Hovy, 2003), BLEU, and METEOR (Banerjee and Lavie, 2005) are treated as the assessment metrics.",
"Results As illustrated in Table 3, the randomly initialized NMT model yields comparable results Single NVIDIA v100 GPU with batch size being 32.",
"with the reported system with the same architecture (26.1 vs. 26.0, Rothe et al., 2020), making our subsequent experiments convincing.",
"Our methods significantly boost NLG performances across different pre-trained models, downstream tasks, linguistic resources, as well as segmentation tools, demonstrating its universal-effectiveness.",
"Moreover, the embedding generator is able to decrease the vocabulary size and the generated sentence length, leading to less computational costs.",
"To better understand subword discrepancy and our method, we make in-depth analyses on WMT En De task to investigate three problems: Q1 : How subword granularity affects NLG models?",
"( 5.1)",
"Q2 : How embedding transfer benefits to downstream models?",
"( 5.2)",
"Q3 : Dose our strategy acquire large computational costs?",
"( 5.3)",
"Q4 : Can our strategy exactly handle under-represented tokens?",
"( 5.4) 5.1 Impact of Subword Granularity Figure 3 visualizes the inference speed and exposure bias (Inference Expected Calibration Error (ECE), Wang et al., 2020) of translation models with different token granularities in their vocabulary.",
"Obviously, for a translation model, neither too small nor too large granularity regarding to subwords can reach a satisfactory performance on inference speed.",
"At the same time, the granularity indeed affects the problem of exposure bias in translation task.",
"The experiments confirm the suitable segmentation strategy can effectively alleviate the problem of exposure bias.",
"We further investigate how the embedding transfer impacts the initialization of downstream models.",
"We draw Figure 4 to plot the BLEU scores of downstream models using the embedding generators trained with different steps.",
"The X-axis indicates the training steps of the generator.",
"Both +Ours and w/ M-BERT are fully finetuned, but the latter doesn't employ our embedding generator, resulting in an unchanged line.",
"It is encouraging to see that the BLEU scores of downstream model converges very fast, indicating that our generator can be used with only a few of training steps.",
"We argue that the commonalities in word compositionality lead to the fast transfer learning on generating different embeddings, and the simple architecture of our generator further speeds up such procedure.",
"As shown in Figure 4, our generator converges very fast (around 20K steps).",
"The training process of our generator takes about 2 hours under our experimental setting.",
"As a reference, the vanilla WMT finetuning process takes approximately 40 hours.",
"In addition, our generator only takes about 3 minutes for producing 13K embeddings in Thesis, which is also insignificant compare to the finetuning time.",
"Most importantly, once the embedding generator is well-trained, it's available for any downstream tasks.",
"Thus, we argue that the computational costs are not the obstacle to the extensibility of our approach.",
"Table 4 gives an example to show the effectiveness of our model on handling under-represented tokens.",
"The German word dankbar (gratifying) is over segmented by M-BERT, and fail to be generated by the model trained under conventional pipeline.",
"On the contrary, our approach offers an opportunity for the downstream model to preserve the word into vocabulary, thus better learning its semantic meaning and correctly predicting it during inference.",
"In this paper, we point out that the one-size-fits-all subword vocabulary, despite its all-encompassing superiority, is not the preferred solution for the popular pretrain-finetune paradigm.",
"It causes the subword discrepancy among upstream and downstream models, which is given concrete form to the unsuitable granularity and under-represented words.",
"Consequently, we propose a novel embedding transfer strategy with a plug-and-play embedding generator.",
"Empirical results suggest that:",
"1) our approach is universally effective on overcoming subword discrepancy;",
"2) embedding transfer can bring benefits to computational efficiency; and",
"3) embedding generator can be achieved via either directly averaging the input embeddings or applying trainable components, the latter performs better but depends on few of training.",
"As our approach is transparent to model architectures and tasks, we believe it can be widely applied and further raise the flexibility and applicability of pre-trained models.",
"In the future, we plan to investigate its effectiveness on other generation tasks, such as code generation (Jiang et al., 2021; Xie et al., 2021), summarization (Shi et al., 2021) and so on.",
"The project was supported by National Natural Science Foundation of China (No. 62036004, No. 61672440), National Key Research and Development Program of China (No. 2018YFB1403202),",
"Natural Science Foundation of Fujian Province of China (No. 2020J06001), Youth Innovation Fund of Xiamen (No. 3502Z20206059), and the Fundamental Research Funds for the Central Universities (No. ZK20720200077).",
"We also thank the reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"Named entity recognition (NER) is a fundamental task in natural language processing.",
"Recent works treat named entity recognition as a reading comprehension task, constructing type-specific queries manually to extract entities.",
"This paradigm suffers from three issues.",
"First, type-specific queries can only extract one type of entities per inference, which is inefficient.",
"Second, the extraction for different types of entities is isolated, ignoring the dependencies between them.",
"Third, query construction relies on external knowledge and is difficult to apply to realistic scenarios with hundreds of entity types.",
"To deal with them, we propose Parallel Instance Query Network (PIQN), which sets up global and learnable instance queries to extract entities from a sentence in a parallel manner.",
"Each instance query predicts one entity, and by feeding all instance queries simultaneously, we can query all entities in parallel.",
"Instead of being constructed from external knowledge, instance queries can learn their different query semantics during training.",
"For training the model, we treat label assignment as a one-to-many Linear Assignment Problem (LAP) and dynamically assign gold entities to instance queries with minimal assignment cost.",
"Experiments on both nested and flat NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models 1 .",
"Named Entity Recognition (NER) aims to identify text spans to specific entity types such as Person, Location, Organization.",
"It has been widely used in many downstream applications such as entity linking (Ganea and Hofmann, 2017; Le and Titov, 2018) and relation extraction (Li and Ji, 2014; * This work was conducted when Yongliang Shen was interning at Alibaba DAMO Academy.",
"Miwa and Bansal, 2016; Shen et al., 2021b).",
"Traditional approaches for NER are based on sequence labeling, assigning a single tag to each word in a sentence.",
"However, the words of nested entities have more than one tag, thus these methods lack the ability to identify nested entities.",
"Recently, Ju et al. (2018); Strakov et al. (2019); Wang et al. (2020a) redesign sequence labeling models to support nested structures using different strategies.",
"Instead of labeling each word, Luan et al. (2019); Tan et al. (2020); Li et al. (2021); Shen et al. (2021a) perform a classification task on the text span, and Strakov et al. (2019); Paolini et al. (2021); Yan et al. (2021); Tan et al. (2021) treat NER as a sequence generation or set prediction task and design encoder-decoder models to generate entities.",
"Recently, Li et al. (2020b); Mengge et al. (2020); Zheng et al. (2021) reformulate the NER task as a machine reading task and achieve a promising performance on both flat and nested datasets.",
"As shown in Figure",
"1(a), they treat the sentence as context and construct type-specific queries from external knowledge to extract entities.",
"For example, for the sentence \"U.S. President Barack Obama and his wife spent eight years 947 in the White House\" , Li et al. (2020b) constructs the PER -specific query in natural language form -\"Find person entity in the text, including a single individual or a group\" to extract the PER entities, such as \"U.S. President\" , \"Barack Obama\" .",
"However, since the queries are type-specific, only one type of entities can be extracted for each inference.",
"This manner not only leads to inefficient prediction but also ignores the intrinsic connections between different types of entities, such as \"U.S.\" and \"U.S. President\" .",
"In addition, type-specific queries rely on external knowledge for manual construction, which makes it difficult to fit realistic scenarios with hundreds of entity types.",
"In this paper, we propose the Parallel Instance Query Network (PIQN), where global and learnable instance queries replace type-specific ones to extract entities in parallel.",
"As shown in Figure",
"1(b), each instance query predicts one entity, and multiple instance queries can be fed simultaneously to predict all entities.",
"Different from previous methods, we do not need external knowledge to construct the query into natural language form.",
"The instance query can learn different query semantics during training, such as position-related or type-related semantics.",
"Since the semantics of instance queries are implicit, we cannot assign gold entities as their labels in advance.",
"To tackle this, we treat label assignment as a one-to-many Linear Assignment Problem (LAP) (Burkard and ela, 1999), and design a dynamic label assignment mechanism to assign gold entities for instance queries.",
"Our main contributions are as follow: Different from type-specific queries that require multiple rounds of query, our model employs instance queries that can extract all entities in parallel.",
"Furthermore, the style of parallel query can model the interactions between entities of different types.",
"Instead of relying on external knowledge to construct queries in natural language form, instance queries learn their query semantics related to entity location and entity type during training.",
"To train the model, we design a dynamic one-to-many label assignment mechanism, where the entities are dynamically assigned as labels for the instance queries during training.",
"The one-to-many manner allows multiple queries to predict the same entity, which can further improve the model performance.",
"Experiments show that our model achieves state-of-the-art performance consistently on several nested and flat NER datasets.",
"Traditional approaches for NER can be divided into three categories, including tagging-based, hypergraph-based and span-based approaches.",
"The typical sequence labeling approach (Huang et al., 2015) predicts labels for each token, and struggles to address nested NER.",
"Some works (Alex et al., 2007; Wang et al., 2020a) adapt the sequence labeling model to nested entity structures by designing a special tagging scheme.",
"Different from the decoding on the linear sequence, the hypergraph-based approaches (Lu and Roth, 2015; Muis and Lu, 2017; Katiyar and Cardie, 2018) construct hy-pergraphs based on the entity nesting structure and decode entities on the hypergraph.",
"Span-based methods first extract spans by enumeration (Sohrab and Miwa, 2018; Luan et al., 2019) or boundary identification (Zheng et al., 2019; Tan et al., 2020), and then classify the spans.",
"Based on these, Shen et al. (2021a) treats NER as a joint task of boundary regression and span classification and proposes a two-stage identifier of locating entities first and labeling them later.",
"Three novel paradigms for NER have recently been proposed, reformulating named entity recognition as sequence generation, set prediction, and reading comprehension tasks, respectively.",
"Yan et al. (2021) formulates NER as an entity span sequence generation problem and uses a BART (Lewis et al., 2020) model with the pointer mechanism to tackle NER tasks.",
"Tan et al. (2021) formulates NER as an entity set prediction task.",
"Different from Strakov et al. (2019), they utilize a non-autoregressive decoder to predict entity set.",
"Li et al. (2020b); Mengge et al. (2020) reformulate the NER task as an MRC question answering task.",
"They construct type-specific queries using semantic prior information for entity categories.",
"Different from Li et al. (2020b); Jiang et al. (2021), our method attempts to query at the entity level, where it adaptively learns query semantics for instance queries and extracts all types of entities in parallel.",
"It is worth noting that Seq2Set (Tan et al., 2021) is quite different from ours: (1) 948 Seq2Set attempts to eliminate the incorrect bias introduced by specified entity decoding order in the seq2seq framework, and proposes an entity set predictor, while we follow the MRC paradigm and focus on extracting entities using instance queries.",
"(2) Seq2Set is an encoder-decoder architecture, while our model throws away the decoder and keeps only the encoder as in Wang et al. (2022a), which speeds up inference and allows full interaction between query and context.",
"(3) Seq2Set uses bipartite graph matching to compute the entity-set level loss, while we focus on the label assignment for each instance query and propose a one-to-many dynamic label assignment mechanism.",
"In this section, we first introduce the task formulation in 3.1, and then describe our method.",
"As shown in Figure 2, our method consists of three components: the Encoder ( 3.2), the Entity Prediction ( 3.3) and the Dynamic Label Assignment ( 3.4).",
"The encoder encodes both the sentence and instance queries.",
"Then for each instance query, we perform entity localization and entity classification using Entity Pointer and Entity Classifier respectively.",
"For training the model, we introduce a dynamic label assignment mechanism to assign gold entities to the instance queries in 3.4.",
"We use ( X, Y ) to denote a training sample, where X is a sentence consisting of N words labeled by a set of triples Y = { < Y lk , Y rk , Y tk > } G 1 k =0 .",
"Y lk [0 , N 1] , Y rk [0 , N 1] and Y tk E are the indices for the left boundary, right boundary and entity type of the k -th entity, where E is a finite set of entity types.",
"In our approach, We set up M ( M > G ) global and learnable instance queries I = RM h , each of which (denoted as a vector of size h ) extracts one entity from the sentence.",
"They are randomly initialized and can learn the query semantics automatically during training.",
"Thus we define the task as follows: given an input sentence X , the aim is to extract the entities Y based on the learnable instance queries I .",
"Model input consists of two sequences, the sentence X of length N and the instance queries I of length M .",
"The encoder concatenates them into one sequence and encodes them simultaneously.",
"Input Embedding We calculate the token embeddings E tok , position embeddings E pos and type embeddings E typ of the input from two sequences as follows ( E tok , E pos , E typ R ( N + M ) h ): E tok = Concat( V, I ) E pos = Concat( P w , P q ) E typ = Concat([ U w ] N , [ U q ] M ) (1) where V RN h are token embeddings of the word sequence, I RM h are the vectors of instance queries, P w RN h and P q RM h are separate learnable position embeddings.",
"U w and U q are type embeddings and [ ] N means repeating N times.",
"Then the input can be represented as H 0 = E tok + E pos + E typ R ( N + M ) h .",
"One-Way Self-Attention Normal self-attention would let the sentence interact with all instance queries.",
"In such a way, randomly initialized instance queries can affect the sentence encoding and break the semantics of the sentence.",
"To keep the sentence semantics isolated from the instance queries, we replace the self-attention in BERT (De-vlin et al., 2019) with the one-way version: OW-SA( H ) = HW v (2) = softmax (cid:18) HW q ( HW k ) T h + M (cid:19) (3) where W q , W k , W v R h h are parameter matrices and M { 0 , inf } ( N + M ) ( N + M ) is a mask matrix for the attention score where elements in M set to 0 for kept units and inf for removed ones.",
"In our formula, the upper right sub-matrix of M is a full inf matrix of size ( N M ) and other elements are zero, which can prevent the sentence encoding from attending on the instance queries.",
"In addition, the self-attention among instance queries can model the connections between each other, and then enhance their query semantics.",
"After BERT encoding, we further encode the sequence at word-level by two bidirectional LSTM layers and L extra transformer layers.",
"Finally we split H R ( N + M ) h into two parts: the sentence encoding H w RN h and the instance query encoding H q RM h .",
"Each instance query can predict one entity from the sentence, and with M instance queries, we can predict at most M entities in parallel.",
"Entity prediction 949 Hungarian Algorithm (0, 0, GPE) (2, 3, PER) (11, 13, FAC) None Optimal Assignment ( * ) 0 1 2 M-2 M-1 Assign None label Entity Pointer Entity Classifier 0 1 2 Encoder Sentence M Instance Queries Barack Obama (2, 3, PER ) PER FAC GPE L [2] & R [2] L [1] R [1] L [M-1] R [M-1] T [1] T [M-1] T [2] P tiYt k P liYl k P riY r k Cost i k = + + ( ( -M-2 M-3 M-4 V [CLS] P 0 U w I M-1 U q P M-1 I M-2 UP M-2 q I M-3 UP M-3q I M-4 U P M-4 q I 2 UP 2 q I 1 U P 1q I 0 UP 0 q V [SEP] P N-1 U w V House P N-2 U w V White P N-3 U w V the P N-4 U w V Obama P 4 U w V Barack P 3 U w V President P 2 U w V U.S. P 1 U w Token Position Type Assignment Matrix ( A ) Entity Prediction Dynamic Label Assignment Y G-2 Y G-1 Y 0 Cost Matrix ( Cost ) Y 1 Assignable Quantity ( q ) -1.7 -0.6 -1.0 -1.2 -2.7 -0.8 -0.9 -1.2 -2.8 -1.1 -1.0 -1.2 -1.9 -0.9 -0.7 -2.9 -0.8 -1.0 -0.7 -2.8 1 2 1 1 G ground-truth entities Y = ( Y k , Y k , Y k ) l r t M l a b e l e d i n s t a n ce qu e r i e s 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 Y G-2 Y G-1 Y 0 Y 1 None P t P l P r M un l a b e l e d i n s t a n ce qu e r i e s w w w w w w w w w q q q q q q q M-1 Figure 2: The overall architecture of the model.",
"can be viewed as a joint task of boundary prediction and category prediction.",
"We design Entity Pointer and Entity Classifier for them respectively.",
"Entity Pointer For the i -th instance query H qi , we first interact the query with each word of the sentence by two linear layers.",
"The fusion representation of the i -th instance query and j -th word is computed as: S ij = ReLU( H q i W q + H w j W w ) (4) where { l, r } denotes the left or right boundary and W q , W w R h h are trainable projection parameters.",
"Then we calculate the probability that the j -th word of the sentence is a left or right boundary: P ij = sigmoid( S ij W + b ) (5) where W R h and b are learnable parameters.",
"Entity Classifier Entity boundary information are useful for entity typing.",
"We use P i = [ P i 0 , P i 1 , , P iN 1 ] , { l, r } to weigh all words and then concatenate them with instance queries.",
"where W qt R h h is a learnable parameter.",
"Then we can get the probability of the entity queried by the i -th instance query belonging to category c : P t ic = exp( S t i W c t + b c t ) (cid:80) c E exp( S ti W c t + b c t ) (7) where W c t R h and b c t are learnable parameters.",
"Finally, the entity predicted by the i -th instance query is T i = (cid:0) T li , T ri , T ti (cid:1) .",
"T li = arg max j ( P lij ) and T ri = arg max j ( P rij ) are the left and right boundary, T ti = arg max c ( P tic ) is the entity type.",
"We perform entity localization and entity classification on all instance queries to extract entities in parallel.",
"If multiple instance queries locate the same entity but predict different entity types, we keep only the prediction with the highest classification probability.",
"Dynamic Label Assignment Since instance queries are implicit (not in natural language form), we cannot assign gold entities to them in advance.",
"To tackle this, we dynamically assign labels for the instance queries during training.",
"Specifically, we treat label assignment as a Linear Assignment Problem.",
"Any entity can be assigned to any instance query, incurring some cost that may vary depending on the entity-query assignment.",
"We define the cost of assigning the k -th entity ( Y k = < Y lk , Y rk , Y tk >) to the i -th instance query as: Cost ik = (cid:16) P tiY tk + P liY lk + P riY rk (cid:17) (8) where Y tk , Y lk and Y rk denote the indices for the entity type, left boundary and right boundary of the k -th entity.",
"It is required to allocate as many entities as possible by assigning at most one entity to each query and at most one query to each entity, in such a way that the total cost of the assignment is minimized.",
"However, the one-to-one manner does not fully utilize instance queries, and many instance queries are not assigned to gold entities.",
"Thus we extend the traditional LAP to one-to-many one, where each entity can be assigned to multiple instance queries.",
"The optimization objective of this one-to-many LAP is defined as: min M 1 (cid:88) i =0 G 1 (cid:88) k =0 A ik Cost ik s.t. (cid:80) k A ik 1 (cid:80) i A ik = q k i, k, A ik { 0 , 1 } .",
"where A { 0 , 1 } M G is the assignment matrix, G denotes the number of the entities and A ik = 1 indicates the k -th entity assigned to the i -th instance query.",
"q k denotes the assignable quantity of the k -th gold entity and Q = (cid:80) k q k denotes the total assignable quantity for all entities.",
"In our experiments, the assignable quantities of different entities are balanced.",
"We then use the Hungarian (Kuhn, 1955) algorithm to solve Equation 9, which yields the label assignment matrix with the minimum total cost.",
"However, the number of instance queries is greater than the total assignable quantity of entity labels ( M > Q ), so some of them will not be assigned to any entity label.",
"We assign None label to them by extending a column for the assignment matrix.",
"Based on the new assignment matrix A { 0 , 1 } M ( G +1) , we can further get the labels Y = Y. indexby( ) for M instance queries, where = arg max dim =1 ( A ) is the label index vector for instance queries under the optimal assignment.",
"Training Objective We have computed the entity predictions for M instance queries in 3.3 and got their labels Y with the minimum total assignment cost in 3.4.",
"To train the model, we define boundary loss and classification loss.",
"For left and right boundary prediction, we use binary cross entropy function as a loss: L b = (cid:88) { l,r } M 1 (cid:88) i =0 N 1 (cid:88) j =0 1 [ Y i = j ] log P ij + 1 [ Y i = j ] log (cid:16) 1 P ij (cid:17) (11) and for entity classification we use cross entropy function as a loss: L t = M 1 (cid:88) i =0 (cid:88) c E 1 [ Y ti = c ] log P tic (12) where 1 [ ] denotes indicator function that takes 1 when is true and 0 otherwise.",
"Follow Al-Rfou et al. (2019) and Carion et al. (2020), we add Entity Pointer and Entity Classifier after each word-level transformer layer, and we can get the two losses at each layer.",
"Thus, the total loss on the train set D can be defined as: L = (cid:88) D L (cid:88) =1 L t + L b (13) where L t , L b are classification loss and boundary loss at the -th layer.",
"For prediction, we just perform entity prediction at the final layer.",
"To provide empirical evidence for the effectiveness of the proposed model, we conduct our experiments",
"on eight English datasets, including five nested NER datasets: ACE04 (Doddington et al., 2004) , ACE05 (Walker et al., 2006), KBP17 (Ji et al., 2017), GENIA (Ohta et al., 2002), NNE(Ringland et al., 2019) and three flat NER dataset: FewNERD (Ding et al., 2021), CoNLL03 (Tjong Kim Sang and De Meulder, 2003), OntoNotes (Pradhan et al., 2013), and one Chinese flat NER dataset: MSRA (Levow, 2006).",
"FewNERD and NNE are two datasets with large entity type inventories, containing 66 and 114 fine-grained entity types.",
"Please refer to Appendix A for statistical information about the datasets.",
"In our experiments, we use pretrained BERT (Devlin et al., 2019) in our encoder.",
"For a fair comparison, we use bert-large on ACE04, ACE05, NNE, CoNLL03 and OntoNotes, bert-base on KBP17 and FewNERD, biobert-large (Chiu et al., 2016) on GENIA and chinese-bert-wwm (Cui et al., 2020) on Chinese MSRA.",
"For all datasets, we train our model for 30-60 epochs and use the Adam Optimizer (Kingma and Ba, 2015) with a linear warmup-decay learning rate schedule.",
"We initialize all instance queries using the normal distribution N (0 . 0 , 0 . 02) .",
"See Appendix B for more detailed parameter settings and Appendix C for all baseline models.",
"We use strict evaluation metrics that an entity is confirmed correct when the entity boundary and the entity type are correct simultaneously.",
"We employ precision, recall and F1-score to evaluate the performance.",
"We also report the F1-scores on the entity localization and entity classification subtasks in 5.2 and Appendix D.2.",
"We consider the localization as correct when the left and right boundaries are predicted correctly.",
"Based on the accurately localized entities, we then evaluate the performance of entity classification.",
"Overall Performance Table 1 illustrates the performance of the proposed model as well as baselines on the nested NER datasets.",
"We observe significant performance boosts on the nested NER datasets over previous state-of-the-art models, 952 achieving F1-scores of 81.77%, 88.14%, 87.42% and 84.50% on GENIA, ACE04, ACE05, KBP17 and NNE datasets with +1.23%, +0.73%, +0.37%, +0.45% and +0.96% improvements.",
"Our model can be applied to flat NER.",
"As shown in Table 2, our model achieves state-of-the-art performance on the FewNERD and Chinese MSRA datasets with +1.44% and +0.88% improvements.",
"On the CoNLL03 and OntoNotes datasets, our model also achieves comparable results.",
"Compared with the type-specific query-based method (Li et al., 2020b), our model improves by +2.85%, +2.16%, +0.54%, +3.53% on the GENIA, ACE04, ACE05 and KBP17 datasets.",
"We believe there are three reasons: (1) Rather than relying on external knowledge to inject semantics, instance queries can learn query semantics adaptively, avoiding the sensitivity to hand-constructed queries of varying quality.",
"(2) Each query no longer predicts a group of entities of a specific type, but only one entity.",
"This manner refines the query to the entity level with more precise query semantics.",
"(3) Instance queries are fed into the model in parallel for encoding and prediction, and different instance queries can exploit the intrinsic connections between entities.",
"Inference Speed We compare the inference speed on ACE04 and NNE, as shown in Table 4.",
"Compared to the type-specific query method (Li et al., 2020b), our model not only improves the performance, but also gains significant inference speedup.",
"In particular, on the NNE dataset with 114 entity types, our model speeds up by 30.46 and improves performance by +39.2%.",
"This is because Li et al. (2020b) requires one inference for each type-specific query, while our approach performs parallel inference for all instance queries and only needs to be run once.",
"We also compare previous state-of-the-art models (Tan et al., 2021; Shen et al., 2021a) and our method is still faster and performs better.",
"In this section, we analyze the effects of different components in PIQN.",
"As shown in Table 3, we have the following observations: (1) Compared to the static label assignment in order of occurrence, the dynamic label assignment shows significant improvement on localization, classification, and NER F1-score, which improves NER F1-score by +5.71% on ACE04 and +8.84% on GENIA.",
"This shows that modeling label assignment as a LAP Model FewNERD Pr.",
"problem enables dynamic assignment of optimal labels to instance queries during training, eliminating the incorrect bias when pre-specifying labels.",
"Furthermore, one-to-many for label assignment is more effective than one-to-one, improving the F1-score by +3.86% on ACE04 and +0.51% on GENIA.",
"(2) The one-way self-attention blocks the attention of sentence encoding on instance queries, which improves the F1-score by +0.98% on ACE04 and +0.57% on GENIA.",
"It illustrates the importance of keeping the semantics of the sentence independent of the query.",
"In contrast, semantic interactions between queries are effective, which improves the F1-score by +0.92% on ACE04 and +0.67% on GENIA.",
"The major reason is that entities in the same sentence are closely related and the interaction between instance queries can capture the relation between them.",
"In order to analyze the query semantics learned by the instance query in the training, we randomly selected several instance queries and analyzed the locations and types of entities they predicted.",
"Entity Location We normalize the predicted central locations of the entities and use kernel density estimation to draw the distribution of the predicted entity locations for different queries, as shown in Figure 3.",
"We observe that different instance queries focus on entities at different positions, which means that the instance queries can learn the query semantics related to entity position.",
"For example, instance queries #28 and #39 prefer to predict entities at the beginning of sentences, while #11 and #53 prefer entities at the end.",
"Entity Type We count the co-occurrence of different instance queries and different entity types they predicted.",
"To eliminate the imbalance of entity types, we normalize the co-occurrence matrix on the entity type axis.",
"As shown in Figure 4, different instance queries have preferences for different entity types.",
"For example, instance queries #11 and #13 prefer to predict PER entities, #30 and #43 prefer VEH entities, #25 and #49 prefer WEA entities, #12 prefers FAC entities, and #35 prefers LOC entities.",
"We also analyze the auxiliary loss, the dynamic label assignment mechanism, and the performance on entity localization and classification, please see the Appendix D.",
"Table 5 shows a case study about model predictions.",
"Our model can recognize nested entities and 954 # Sentence with Gold Entities Prediction Instance Query IDs 1 [ 0 A number of powerful international companies and commercial agencies , such as [ 12 Ito Bureau of [ 15 Japan 15 ] GPE15 ] ORG , [ 17 Han Hua Group of [ 21 South Korea 22 ] GPE22 ] ORG , [ 24 Jeffrey Group of [ 27 the US 28 ] GPE28 ] ORG , [ 30 etc 30 ] ORG30 ] ORG .",
"long entities well.",
"In case 1, the entities of length 31 or with the three-level nested structure are predicted accurately.",
"And thanks to the one-to-many dynamic label assignment mechanism, each entity can be predicted by multiple instance queries, which guarantees a high coverage of entity prediction.",
"However, the model's ability to understand sentences is still insufficient, mainly in the following ways: (1) There is a deficiency in the understanding of special phrases.",
"Yahoo !",
"Communications Services in case 2 is misclassified as ORG , but in fact Yahoo ! is ORG .",
"(2) Over-focus on local semantics.",
"In case 3, the model misclassifies Venezuelan consumer as PER , ignoring the full semantics of the long phrase the Venezuelan consumer protection agency , which should be ORG .",
"(3) Insensitivity to morphological variation.",
"The model confused Venezuelan and Venezuela , and misidentified the former as GPE in case 3.",
"In this paper, we propose Parallel Instance Query Network for nested NER, where a collection of instance queries are fed into the model simultaneously and can predict all entities in parallel.",
"The instance queries can automatically learn query semantics related to entity types or entity locations during training, avoiding manual constructions that rely on external knowledge.",
"To train the model, we design a dynamic label assignment mechanism to assign gold entities for these instance queries.",
"Experiments on both nested and flat NER datasets demonstrate that the proposed model achieves state-of-the-art performance.",
"This work is supported by the Key Research and Development Program of Zhejiang Province, China (No. 2021C01013), the National Key Research and Development Project of China (No. 2018AAA0101900), the Chinese Knowledge Center of Engineering Science and Technology (CK-CEST) and MOE Engineering Research Center of Digital Library."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other"
] |
[
"We focus on the task of Frequently Asked Questions (FAQ) retrieval.",
"A given user query can be matched against the questions and/or the answers in the FAQ.",
"We present a fully unsupervised method that exploits the FAQ pairs to train two BERT models.",
"The two models match user queries to FAQ answers and questions, respectively.",
"We alleviate the missing labeled data of the latter by automatically generating high-quality question paraphrases.",
"We show that our model is on par and even outperforms supervised models on existing datasets.",
"Many websites and online communities publish FAQ to help their users find relevant answers to common questions.",
"An FAQ consists of pairs of questions and answers { ( q, a ) } .",
"The FAQ retrieval task involves ranking { ( q, a ) } pairs for a given user query Q .",
"1 Searching over FAQ can leverage multifield indexing and retrieval (Karan and Snajder, 2016).",
"Hence, a user query Q may be matched with either the question field q , the answer field a or the concatenated field q + a (Karan and Snajder, 2016).",
"The association of questions to answers in the FAQ pairs, can be utilized as weak supervision, for training neural models to predict the similarity between user queries and answers (i.e., Q -toa matching) (Gupta and Carvalho, 2019; Karan and Snajder, 2018; Sakata et al., 2019).",
"However, FAQ pairs by themselves do not provide the required labeled data for training a model to predict the association between user queries and FAQ questions (i.e., Q -toq matching).",
"Thus, a labeled dataset with user queries Q and their matching { ( q, a ) } 1 Throughout this paper we use the term question ( q ) to denote a question within a given FAQ pair, and query ( Q ) to denote an issued user query.",
"pairs is required for supervised learning (Gupta and Carvalho, 2019; Karan and Snajder, 2018; Sakata et al., 2019).",
"Such a dataset is usually manually generated or obtained from query-log mining.",
"Yet, the construction of such a dataset either requires domain expertise (e.g., enriching the dataset with manually generated question paraphrases (Karan and Snajder, 2018)) or assumes the availability of query-logs (Kim and Seo, 2006, 2008).",
"Whenever such a dataset is unavailable, one must resort to utilizing unsupervised retrieval models for Q-to-q matching.",
"Previous unsupervised FAQ retrieval models (Burke et al., 1997; Brill et al., 2002; Karan et al., 2013; Karan and Snajder, 2018; Wu et al., 2005) have utilized so far traditional information retrieval techniques, such as lexical and semantic text matching, query expansion, etc.",
"In this paper we overcome the aforementioned unsupervised gap, by using distant supervision to train neural models.",
"Our method is composed of a combination of three unsupervised methods.",
"Each method is utilized for re-ranking an initial pool of FAQ pairs obtained by a simple BM25 retrieval (Robertson and Zaragoza, 2009).",
"The first method applies a focused-retrieval approach, utilizing passages for answer re-ranking (Bendersky and Kurland, 2008).",
"Each one of the two other methods fine-tunes a BERT model (Devlin et al., 2019), one for matching Q -toa and one for matching Q -toq .",
"To overcome the lack of training data in the latter's case, we further implement a novel weak-supervision approach using automatically generated question paraphrases, coupled with smart filtering to ensure high-quality paraphrases.",
"We then combine the outcome of the three methods using an unsupervised late-fusion method.",
"Overall, we show that our unsupervised FAQ retrieval approach is on par and sometimes even outperforms state-of-the-art supervised models.",
"Several previous works have also utilized Deep Neural Networks (DNN) for FAQ retrieval.",
"(Karan and Snajder, 2016) used Convolution Neural Networks (CNN) for matching user queries to FAQ.",
"(Gupta and Carvalho, 2019) used combinations of Long Short-Term Memory (LSTM) to capture Q toq and Q -toa similarities.",
"Yet, those works are supervised and use user queries ( Q ) for training.",
"Following the success of BERT (Devlin et al., 2019) in NLP tasks, (Sakata et al., 2019) have recently used a search engine for Q -toq matching and then combined its results with a supervised BERT model for Q -toa matching.",
"We use a similar BERT model for Q -toa matching, but differently from (Sakata et al., 2019), we use it in an unsupervised way, and we further introduce a second unsupervised BERT model for Q -toq matching.",
"A somewhat related area of research is Community Question Answering (CQA) (Patra, 2017; Zhou et al., 2015) and the related TREC tracks.",
"23 While CQA shares some common features to FAQ retrieval, in CQA there are additional signals such as votes on questions and answers, or the association of user-answer and user-question.",
"Clearly, in a pure FAQ retrieval setting, such auxiliary data is unavailable.",
"Hence, we refrain from comparing with such works.",
"Our proposed FAQ retrieval approach uses distant supervision to train neural models and is based on an initial candidates retrieval followed by a re-ranking step.",
"Recall that, the FAQ dataset is composed of { ( q, a ) } pairs.",
"The initial candidate retrieval is based on indexing { ( q, a ) } pairs into a search engine index (Section 3.1) and searching against the index.",
"The re-ranking step combines three unsupervised re-rankers.",
"The first one (Section 3.2) is based on a focused-retrieval approach, utilizing passages for answer re-scoring.",
"The two other re-rankers fine-tune two independent BERT models.",
"The first BERT model (Section 3.3), inspired by (Sakata et al., 2019), is fine-tuned to match questions ( q ) to answers ( a ).",
"At run time, given a user query Q , this model re-ranks top-k { ( q, a ) } candidate pairs by matching the user query Q to the answers ( a ) only.",
"The second BERT model (Section 3.4) is designed to match user queries to FAQ questions.",
"Here, we utilize weak-supervision for generating high quality question paraphrases from the FAQ pairs.",
"The BERT model is fine-tuned on the questions and their generated paraphrases.",
"At run time, given a user query Q , this model gets the top-k { ( q, a ) } candidate pairs and re-ranks them by matching the user query Q to the questions ( q ) only.",
"The final re-ranking is obtained by combining the three re-rankers using an unsupervised late-fusion step (Section 3.5).",
"The components of our method are described in the rest of this section.",
"We index the FAQ pairs using the ElasticSearch 4 search engine.",
"To this end, we represent each FAQ pair ( q, a ) as a multifield document having three main fields, namely: question q , answer a , and the concatenated field q + a .",
"Given a user query Q , we match it (using BM25 similarity (Robertson and Zaragoza, 2009)) against the q + a field 5 and retrieve an initial pool of top-k FAQ candidates.",
"Our first unsupervised re-ranker applies a focused retrieval approach.",
"To this end, following (Bender-sky and Kurland, 2008), we re-rank the candidates using a maximum-passage approach.",
"Such an approach is simply implemented by running a sliding window (i.e., passage) on each candidate's q + a field text, and scoring the candidate according to the passage with the highest BM25 similarity to Q (Gry and Largeron, 2011).",
"We hereinafter term this first re-ranking method as bm25-maxpsg .",
"Among the two BERT (Devlin et al., 2019) re-rankers, the first one, BERT-Q-a , aims at re-ranking the candidate FAQ pairs { ( q, a ) } according to the similarity between a given user query Q and each pair's answer a .",
"To this end, we fine-tune the BERT model from the FAQ pairs { ( q, a ) } , using a triplet network (Hoffer and Ailon, 2015).",
"This network is adopted for BERT fine-tuning (Mass et al., 2019) using triplets ( q, a, a (cid:48) ) , where ( q, a ) constitutes an FAQ pair and a (cid:48) is a negative sampled answer as 4 https://www.elastic.co/ 5 Searching only the q or a fields obtained inferior results follows.",
"For each question q we have positive answers { a i } from all the pairs { ( q, a i ) } .",
"6 Negative examples are randomly selected from those FAQ that do not have q as their question.",
"To further challenge the model into learning small nuances between close answers, instead of sampling the negative examples from all FAQ pairs, we run q against the q + a field of the search index (from Section 3.1 above).",
"We then sample only among the top-k (e.g., k = 100 ) retrieved pairs, that do not have q as their question.",
"Our BERT-Q-a is different from that of (Sakata et al., 2019) in two aspects.",
"First, (Sakata et al., 2019) fine tunes a BERT model for Q-to-a matching using both FAQ ( q, a ) pairs as well as user queries and their matched answers ( Q, a ) .",
"This is, therefore, a supervised setting, since user queries are not part of the FAQ and thus require labeling efforts.",
"Compared to that, we fine tune the BERT-Q-a using only FAQ ( q, a ) pairs.",
"Second, unlike (Sakata et al., 2019), which fine-tunes BERT for a classification task (i.e., point-wise training) we train a triplet network (Hoffer and Ailon, 2015) that learns the relative preferences between a question and a pair of answers.",
"Our network thus implements a pair-wise learning-to-rank approach (Li, 2011).",
"At inference time, given a user query Q and the top-k retrieved ( q, a ) pairs, we re-rank the ( q, a ) pairs using the score of each ( Q, a ) pair as assigned by the fine-tuned BERT-Q-a model (Mass et al., 2019).",
"The second BERT model, BERT-Q-q , is independent from the first BERT-Q-a model (Sec-tion 3.3) and is trained to match user queries to FAQ questions.",
"To fine-tune this model, we generate a weakly-supervised dataset from the FAQ pairs.",
"Inspired by (Anaby-Tavor et al., 2019), we fine-tune a generative pre-training (GPT-2) neural network model (Radford, 2018) for generating question paraphrases.",
"GPT-2 is pre-trained on huge bodies of text, capturing the natural language structure and producing deeply coherent text paragraphs.",
"tunes a GPT-2 model given classes, where each class has a title and several examples, here we consider each answer a as a class with only one example which is its question q .",
"We thus concatenate all the FAQ pairs into a long text U = a 1 SEP q 1 EOS a n SEP q n EOS , where answers precede their questions, 7 having EOS and SEP as special tokens.",
"The former separates between FAQ pairs and the latter separates answers from their questions inside the pairs.",
"The GPT-2 fine-tuning samples a sequence of l consecutive tokens w j l , , w j from U and maximizes the conditional probability P ( w j | w j l , . . . , w j 1 ) of w j to appear next in the sequence.",
"We repeat this process several times.",
"Once the model is fine-tuned, we feed it with the text a SEP , ( a is an answer in an FAQ pair ( q, a ) ), and let it generate tokens until EOS .",
"We take all generated tokens until EOS , as a paraphrase to a 's question q .",
"By repeating this generation process we may generate any number of question paraphrases.",
"For example, the paraphrase Is there a way to deactivate my account on Facebook? was generated for the question How do I delete my Facebook account? .",
"One obstacle in using generated text is the noise it may introduce.",
"To overcome this problem we apply a filtering step as follows.",
"The idea is to keep only paraphrases that are semantically similar to their original question (i.e., have similar answers).",
"Let GT ( q ) = { ( q, a i ) } be the FAQ pairs of question q (i.e., the ground truth answers of q ).",
"For each generated paraphrase p of q , we run p as a query against the FAQ index (See section 3.1), and check that among the returned topk results, there are at least min ( n, | GT ( q ) | ) pairs from GT ( q ) for some n .",
"In the experiments (see Section 4 below) we used k = 10 and n = 2 .",
"To select the best paraphrases for each question q , we further sort the paraphrases that passed the above filter, by the score of their top1 returned ( q, a ) pair (when running each paraphrase p as a query against the FAQ index).",
"The motivation is that a higher score of a returned ( q, a ) for a query p , implies a higher similarity between p and q .",
"8 Similar to the BERT-Q-a , this model is fine-tuned using triplets ( p, q, q (cid:48) ) , where p is a paraphrase of q and q (cid:48) is a randomly selected question 7 FAQ questions with more than one answer are treated here as different questions.",
"8 The filtered paraphrases can be downloaded from https://github.com/YosiMass/faq-retrieval from the FAQ questions.",
"At inference time, given a user query Q and the top-k retrieved ( q, a ) pairs, we re-rank the answers ( q, a ) answers, using the score of each ( Q, q ) pair as assigned by the fine-tuned BERT-Q-q model (Mass et al., 2019).",
"We combine the three re-ranking methods (i.e., bm25-maxpsg and the two fined-tuned BERT models) using two alternative late-fusion methods.",
"The first one, CombSUM (Kurland and Culpepper, 2018), calculates a combined score by summing for each candidate pair the scores that were assigned to it by the three re-ranking methods.",
"9 Following (Roitman, 2018), as a second alternative, we implement the PoolRank method.",
"PoolRank first ranks the candidate pairs using CombSUM .",
"The top pairs are then used to introduce an unsupervised query expansion step (RM1 model (Lavrenko and Croft, 2001)) which is used to re-rank the whole candidates pool.",
"10 4 Experiments 4.1 Datasets We use two FAQ datasets in our evaluation, namely: FAQIR (Karan and Snajder, 2016) 11 and StackFAQ (Karan and Snajder, 2018).",
"12 The FAQIR dataset was derived from the maintenance & repair domain of the Yahoo! Answers community QA (CQA) website.",
"It consists of 4313 FAQ pairs and 1233 user queries.",
"The StackFAQ dataset was derived from the web apps domain of the Stack-Exchange CQA website.",
"It consists of 719 FAQ pairs (resulted from 125 threads; some questions have more than one answer) and 1249 user queries.",
"On both datasets, we compare against the results of the various methods that were evaluated in (Karan and Snajder, 2018), namely: RC an ensemble of three unsupervised methods (BM25, Vector-Space and word-embeddings); ListNet and LambdaMART two (supervised) learning-to-rank methods that were trained over a diverse set of text similarity features; and CNN-Rank a",
"9 Each re-ranker's scores are first max-min normalized.",
"10 Further following (Roitman, 2018), we use the normalized CombSUM fusion scores as the weak-relevance labels for the RM1 model estimation.",
"11 http://takelab.fer.hr/data/faqir/ 12 http://takelab.fer.hr/data/StackFAQ (supervised) learning-to-rank approach based on a convolutional neural network (CNN).",
"On the StackFAQ dataset, we further report the result of (Sakata et al., 2019), which serves as the strongest supervised baseline.",
"This baseline combines two methods: TSUBAKI (Shinzato et al., 2008) a search engine for Q -toq matching; and a supervised fine-tuned BERT model for Q -toa matching.",
"We put the results of this work (that were available only on the StackFAQ dataset), just to emphasize that our approach can reach the quality of a supervised approach, and not to directly compare with it.",
"We used ElasticSearch to index the FAQ pairs.",
"For the first ranker (Section 3.1) we used a sliding window of size 100 characters with 10% overlap.",
"For fine-tuning the BERT-Q-a model, we randomly sampled 2 and 5 negative examples for each positive example ( q, a ) on FAQIR and StackFAQ datasets, respectively.",
"To fine-tune GPT-2 for generating the question paraphrases (Section 3.4), we segmented U into consecutive sequences of l = 100 tokens each.",
"We used OpenAI's Medium-sized GPT-2 English model: 24-layer, 1024-hidden, 16-heads, 345M parameters.",
"We then used the fine-tuned model to generate 100 paraphrases for each question q and selected the top-10 that passed filtering (as described in Section 3.4).",
"Overall on FAQIR, 22,736 paraphrases passed the filter and enriched 3,532 out of the 4,313 questions.",
"On StackFAQ, 856 paraphrases passed the filter and enriched 109 out of the 125 thread questions.",
"Similar to the BERT-Q-a fine-tuning, we selected 2 and 5 negative examples for each ( p, q ) (paraphrase-question) pair on FAQIR and StackFAQ, respectively.",
"The two BERT models used the pre-trained BERT-Base-Uncased model ( 12 -layer, 768 -hidden, 12 -heads, 110 M parameters).",
"Fine-tuning was done with a learning rate of 2 e5 and 3 training epochs.",
"Similar to previous works, we used the following metrics: P@5, Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR), calculated on an initial candidate list of 100 FAQs retrieved by the search engine using standard BM25.",
"Table 1 reports the results for the two datasets.",
"13 We compare the base BM25 retrieval ( bm25( q + a ) ), our three proposed unsupervised re-ranking methods ( bm25-maxpsg , BERT-Q-a and BERT-Q-q ) and their fusion-based combinations ( CombSUM and PoolRank ) with the state-of-the-art unsupervised and supervised baselines.",
"We also compare to PoolRank+ , which is same as PoolRank except that the two BERT models (i.e., BERT-Q-a and BERT-Q-q ) are fine-tuned on the union of the respective training sets of both the FAQIR and StackFAQ datasets.",
"We observe that, among our three re-rankers, BERT-Q-q was the best.",
"For example, on FAQIR it achieved 0 .",
"67 , 0 .",
"61 and 0 .",
"90 for P@5, MAP and MRR, respectively.",
"This in comparison to 0 .",
"54 , 0 .",
"50 and 0 .",
"81 , obtained by bm25-maxpsg for P@5, MAP and MRR, respectively.",
"This con-firms previous findings (Karan and Snajder, 2016), that Q-to-q matching gives the best signal in FAQ retrieval.",
"Furthermore, on both datasets, the fusion methods achieved better results than the individual re-rankers, with better performance by the PoolRank variants over ComboSum .",
"An exception is FAQIR, where BERT-Q-q achieved same results as the ComboSUM fusion.",
"As mentioned above, BERT-Q-q has a signifi-cantly better performance on FAQIR than the other two individual rankers, thus a simple fusion method such as CombSUM can not handle such cases well.",
"PoolRank, which uses relevance model, is a better approach and thus gives better fusion results.",
"Further comparing with the baselines, we can see that, on FAQIR, our unsupervised PoolRank outperformed all other methods; including the supervised methods on all three metrics.",
"On StackFAQ, PoolRank outperformed all other methods, except the supervised TSUBAKI + BERT (Sakata et al., 2019).",
"We note that, our unsupervised results PoolRank+ achieved ( 0 . 75 , 0 . 88 and 0 . 90 for P@5, MAP and MRR, respectively), which is quite close to the supervised results ( 0 . 78 , 0 . 90 and 0 . 94 respectively) of (Sakata et al., 2019).",
"13 Similar to (Karan and Snajder, 2018), the FAQIR initial retrieval is done against a subset of 789 FAQ pairs that are relevant to at least one user query.",
"We presented a fully unsupervised method for FAQ retrieval.",
"The method is based on an initial retrieval of FAQ candidates followed by three re-rankers.",
"The first one is based on an IR passage retrieval approach, and the others two are independent BERT models that are fine-tuned to predict query-to-answer and query-to-question matching.",
"We showed that we can overcome the unsu-pervised gap by generating high-quality question paraphrases and use them to fine-tune the query-to-question BERT model.",
"We experimentally showed that our unsupervised method is on par and sometimes even outperforms existing supervised methods."
] | [
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result"
] |
[
"Recent advances in Question Answering have lead to the development of very complex models which compute rich representations for query and documents by capturing all pairwise interactions between query and document words.",
"This makes these models expensive in space and time, and in practice one has to restrict the length of the documents that can be fed to these models.",
"Such models have also been recently employed for the task of predicting dialog responses from available background documents (e.g., Holl-E dataset).",
"However, here the documents are longer, thereby rendering these complex models infeasible except in select restricted settings.",
"In order to overcome this, we use standard simple models which do not capture all pairwise interactions, but learn to emulate certain characteristics of a complex teacher network.",
"Specifically, we first investigate the conicity of representations learned by a complex model and observe that it is significantly lower than that of simpler models.",
"Based on this insight, we modify the simple architecture to mimic this characteristic.",
"We go further by using knowledge distillation approaches, where the simple model acts as a student and learns to match the output from the complex teacher network.",
"We experiment with the Holl-E dialog data set and show that by mimicking characteristics and matching outputs from a teacher, even a simple network can give improved performance.",
"The advent of large scale datasets for QA has lead to the development of increasing complex neural models with specialized components for",
"(i) encoding the query",
"(ii) encoding the document(s)",
"(iii) capturing interactions between document and query words and",
"(iv) generating/extracting the correct answer span from the given document (Seo et al., 2016; Hu et al., 2017; Yu et al., 2018).",
"While these models give state-of-the-art performance on a variety of datasets, they have very high space and time complexity.",
"This is a concern, and in practice, it is often the case that one has to resort to restricting the maximum length of the input document such that the model can run with reasonable resources (say, a single 12GB Tesla K80 GPU).",
"Such complex span prediction models are also being adapted for other NLP tasks such as dialog response prediction (Moghe et al., 2018), which is the focus of this work.",
"In particular, we refer to the Holl-E dataset where the task is to extract the next response from a document which is relevant to the conversation (see Figure 1).",
"This setup is very similar to QA wherein the input is { context, document } as opposed to { query, document } and the correct response span needs to be extracted from the given document.",
"Given this similarity, it is natural to adopt existing QA models (Seo et al., 2016; Yu et al., 2018) for this task.",
"However, the documents in Holl-E dataset are longer, and the authors specifically report that they were unable to run these models when the entire document was given as input.",
"Hence, they report results only in constrained oracle settings where the document is trimmed such that the response still lies in the shortened document.",
"The above situation suggests that there is clearly a trade-off needed.",
"On one hand, we want to harness the power of these complex models to achieve better performance and on the other hand we want to be able to run them with reasonable compute resources without arbitrarily trimming the input document.",
"This trade-off situation naturally leads to the following question: Is it possible to build a simple model, with low memory and compute requirements, that copy desirable characteristics from complex models?",
"To answer this, we start with a relatively simple model with very basic components for encoding query, document and capturing interactions.",
"Once these interactions are cap-Source Doc: ...comes in.",
"As soon as the door is open, the Bride's fist crashes into Vernita's face.",
"A savage fight follows, first with fists, then with knives....",
"The fight pauses ...",
"At this point Vernita is introduced as a member of the Deadly Vipers, codename Copperhead.",
"...",
"Sample Conversation: Prober (S1): Which is your favourite character in this?",
"Responder (S2): My favorite character was Copperhead because she was kicking butt.",
"Prober (S3): Oh my goodness I agree, because the fight with Vernita was the best in the whole movie.",
"Responder (S4): It's starts off action packed because as soon as the door is open, the Bride's fist crashes into Vernita's face.",
"A savage fight follows, first with fists, then with knives.",
"Prober (S5): And it gets better when we find out they are both assassins.",
"Responder (S6): And a group of them, Vernita is introduced as a member of the Deadly Vipers, codename Copperhead.",
"tured, the model computes a final representation which is then fed to a decoder to predict the correct span in the document.",
"This recipe is very similar to BiDAF (Seo et al., 2016), QANeT (Yu et al., 2018) but the main difference is that these models use much more complex encoder and interaction components to arrive at the final representation.",
"As expected, the performance of this model is poor when compared to BiDAF, QANeT.",
"The aim now is to improve the performance of this model by carefully analysing or learning from complex models.",
"Given that the complex model differs in the manner in which the final representation is computed, one hypothesis is that it learns richer final representations than the simple model.",
"Indeed, on investigation, we found that the final representations learned by complex models are diverse for different inputs (context, document pairs) as compared to the simple model.",
"Based on this insight, we propose a modification to the simple model which increases the diversity of the embeddings, thereby improving the performance.",
"While this insight obtained by manual investigation is useful, there is clearly scope for learning by exploring other characteristics of the model.",
"One principled way of doing this is to use knowledge distillation (Hinton et al., 2015) where the simple model acts as a student and learns to mimic the probability distributions predicted by a teacher.",
"In other words, instead of simply maximizing the log likelihood of the training data, the simple model now gets additional signals from the teacher which act as hints while training.",
"Our experiments, using the Holl-E dataset show that by",
"(i) improving the conicity (Chandrahas et al., 2018) of the representations learned by the simple model and",
"(ii) mimicking the outputs of the complex teacher model the simple model can give improved performance with fewer compute and memory requirements.",
"In particular, when compared to a standalone simple model the student model shows an improvement of 3 .",
"4 % (compare SAM-mul-train (LD) and SAM-add-topk (LD) entries in Table 2 and Table 3 respectively).",
"Over the past few years neural sequence prediction models which take a question as input and predict the corresponding answer span in a given document have evolved rapidly.",
"Such models have also been adapted for dialog response prediction in the context of the Holl-E dataset (Moghe et al., 2018).",
"These models typically differ in the components used for capturing interactions between query and document, capturing interactions between sentences in a document and refin-ing the query/document representation over multiple passes (Shen et al., 2017; Dhingra et al., 2017; Sordoni et al., 2016).",
"In particular, a co-attention network which computes the importance of every query word w.r.t. every document word and the importance of every document word w.r.t. every query word is an important component in most state of the art models (Hermann et al., 2015; Kadlec et al., 2016; Cao et al., 2016; Xiong et al., 2016; Seo et al., 2016; Gong and Bowman, 2017; Dhingra et al., 2017; Wang et al., 2017; Shen et al., 2017; Trischler et al., 2016; Group and Asia, 2017; Tan et al., 2017; Sordoni et al., 2016).",
"Similarly, some models (Group and Asia, 2017; Seo et al., 2016; Hu et al., 2017) contain a self-attention network which computes the importance of every document word w.r.t. every other document word.",
"In general, the most successful models (for example, BiDAF (Seo et al., 2016), QANeT (Yu et al., 2018)) use a combination of these components which capture all pairwise interactions and are thus computationally very expensive.",
"As a result, in practice, these models are not suitable for longer documents.",
"compact models (Cheng et al., 2017).",
"For example, Ba and Caruana (2014); Hinton et al. (2015); Lopez-Paz et al. (2016); Chen et al. (2017) train a shallow student network using soft targets (or class probabilities) generated by an expensive teacher instead of the hard targets present in the training data.",
"Romero et al. (2015) extend this idea to train a student model using the intermediate representations learned by the teacher model which act as additional hints .",
"This idea of Knowledge Distillation has also been tried in the context of pruning networks for multiple object detection (Chen et al., 2017), speech recognition (Wong and Gales, 2016).",
"In the context of reading comprehension or span prediction, Hu et al. (2018) have very recently shown that we can distill knowledge from an ensemble of models into a single model.",
"However, unlike our work, the single model itself is a complex model (Hu et al., 2017) containing an expensive self attention network and a RL agent.",
"To the best of our knowledge, ours is the first work which tries to build a simple span prediction model by distilling knowledge from a complex model.",
"We view a conversation as sequence of utterances by a prober and a responder .",
"The response prediction (RP) model aims to predict the utterance by the responder based on a source document, when given the query (prober's most recent utterance) and the history (past utterance by the prober and responder).",
"See Figure 1 for an example.",
"We denote the lengths of source document, query, prober history and responder history as T, I, J, K .",
"The LSTMs/GRUs used all have the same number of cells, denoted by d .",
"In particular, the document length T is of the order of a few thousands and the query/history lengths I, J, K are of the order of a few hundreds.",
"Contrast this with QA tasks, where T is only of the order of a few hundreds, and the query length ( I + J + K ) is of the order of a few tens.",
"BiDAF (Seo et al., 2016) is an extremely popular model used for span prediction in reading comprehension based question answering problems.",
"We can frame the problem of response prediction as one of question answering by concatenating the query, prober history, and responder history into a single question.",
"BiDAF has proven to be hugely successful in QA tasks, but has a large number of parameters (about 2.5 million) and consumes a large amount of computational space and time during training and prediction.",
"We use the BiDAF model as a guiding post while constructing our model, and in particular focus on the so called query to context attention , which is a vector (denoted by (cid:101) h ) that indicates the weighted sum of the most important words in the source document, with respect to the query and histories.",
"QANeT (Yu et al., 2018) is another recent model used for span prediction in QA tasks and specifically targets better space and time efficiency than BiDAF.",
"Despite this, it still has a large number of parameters (about 1.3 million) and still consumes a large amount of computational space and time during training and prediction.",
"The QANeT model can also be modified for response prediction in a similar manner to BiDAF.",
"We now describe the simple attention model that we aim to learn.",
"In a fashion similar to that of BiDAF and QANeT architectures, the simple model also operates in 3 distinct layers.",
"See Figure 2 for an overview into the model.",
"The words from the source document, the utterances by the prober and the responder are all encoded using standard GloVe embeddings (Pen-nington et al., 2014).",
"In the next layer we encode the query (prober's most recent utterance) using a BiGRU/BiLSTM, and encode the previous utterances of the prober and responder in a query sensitive manner.",
"Query Encoder: Embedded query words are passed through BiGRU where final state q I R 2 d acts as query representation.",
"Query Sensitive History Summariser: The history of the prober and responder are passed through a BiGRU to get context sensitive vectors h P j R 2 d and h R k R 2 d for j [ J ] and k [ K ] .",
"These vectors are combined to get vectors h P and h R .",
"This process of combining uses the query representation q I , and hence h P and h R can Query to Prober history attention Query to Responder history attention q I is Which this?",
"be viewed as query-aware representations of the prober and responder history.",
"The equations for h P are given below.",
"The vector h R is also calculated in a similar manner.",
"e j = mul WP ( h P j , q I ) = softmax( e ) h P = (cid:88) j j h P j where mul W ( v 0 , v 1 ) = v (cid:62) 0 W v 1 is a parameterized multiplicative way of capturing the interaction between two vectors.",
"The source document is finally used in this layer to predict the start and end indices of the response.",
"The GloVe embedded words of the source document are passed through a BiGRU to get context sensitive vectors u t R 2 d , for all t [ T ] .",
"Each index t gets a score s t based on the interaction between u t and the query/history vectors q I , h P , h R .",
"The scores s t are normalized by a softmax and is taken to be the prediction of the starting word index.",
"where mul W ( u 0 , . . . , u a ) = u (cid:62) 0 (cid:0)(cid:80) ai =1 W i u i (cid:1) A similar method is used for the prediction the ending word index as well.",
"We performed several experiments on the Holl-E dataset and observed that the complex models (QANeT and BiDAF) perform better than the simple attention model described in Section 3.3.",
"However, they take significantly more time and memory for training and inference.",
"In fact, for the examples with longer source documents, both BiDAF and QANeT run into memory issues when training.",
"During prediction, the memory issues in QANeT and BiDAF can be sidestepped by breaking the source document into multiple chunks and taking the highest scoring span.",
"In the rest of this section we study several approaches to nudge the simple attention model to take parameters that make it have similar behaviour as the complex models, and check if the so nudged model demonstrates better performance on the Holl-E dataset.",
"same conversation are often the same, even though the right response is different in those points.",
"For example, consider the conversation in Figure 1. We expect the trained model to be such that Pred span ( SD, S 1 , S 2 , S 3 ) = span SD ( S 4 ) where SD is the source document, and S i are the the utterances in the conversation.",
"Similarly, we expect the trained model to be such that Pred span ( SD, S 3 , S 4 , S 5 ) = span SD ( S 6 ) However we often find that our simple model predicts the same span for both the cases above, which is wrong (unless S 4 and S 6 are the same.) We hypothesize this as being due to the context sensitive embeddings of the history not depending strongly on the query, and hence the span prediction model picks up most information from the source document.",
"To support this point of view we measured the diversity of the context-to-query vectors (cid:101) h of the BiDAF model for several examples grouped by conversation.",
"In more detail we computed the conicity (Chandrahas et al., 2018) of vectors (cid:101) h ( SD, S 1 , S 2 , S 3 ) , (cid:101) h ( SD, S 3 , S 4 , S 5 ) , . . . for every conversation in the test set and averaged it over all conversations.",
"(See Figure 3 for an overview on conicity).",
"This average conicity was observed to be about 0 .",
"6 (see Table 6), which, according to Chandrahas et al. (2018), is low (low conicity implies high diversity).",
"We observe similar behaviour for QANeT as well.",
"The average conicity of the row sums of the similarity matrix grouped by conversation was also observed to be about 0 .",
"6 (see Table 6).",
"On the other hand, for our simple attention model, the average conicity of the vectors h R and h P , when computed in a similar fashion as mentioned above were generally high (about 0 . 8 ) (see Table 6).",
"Based on these observations we hypothesize that decreasing the conicity of the vectors h R and h P would improve the performance of the simple attention model.",
"In particular, we propose to change the multiplicative method of combining vectors into an additive method instead.",
"In particular we propose to replace the function mul in our simple model with the function add defined as follows: add W ( v 0 , v 1 , . . . , v a ) = w (cid:62) tanh (cid:32) a (cid:88) i =0 W i v i (cid:33) where the vector w and the matrices W i parameterize the mode of combining the input vectors.",
"This is motivated by Chandrahas et al. (2018) who show that using additive model in embedding of entities in knowledge graphs gives consistently better diversity than using multiplicative models.",
"While borrowing high level ideas from complex models, like increasing diversity of the learned representation can help to some extent, one can push this further to distill the learned complex model (Hinton et al., 2015) into the simple attention model.",
"To achieve this, we train a teacher model (BiDAF or QANeT) on the training set and use it to make predictions on the same training set.",
"The simple attention model would minimise the sum of two loss functions: 1) Cross entropy loss of the predicted start and end indices with the train labels of the start and end indices, 2) KL-divergence of the predicted start and end indices from the teacher prediction of the same.",
"The loss on a single training sample is given below D ( p Tb || p S b ) + D ( y b || p S b ) + D ( p Te || p Se ) + D ( y e || p Se ) (1) where D denotes the KL divergence, p Tb , p Te denote the predicted begin index and end index distribution of the teacher model, and p Sb , p Se denote the predicted begin and end index distribution of the student model and y b , y S denote the true begin and end index in one-hot vector form.",
"Another variant of knowledge distillation is as follows.",
"We do not view the teacher predicted distribution for all indices with importance and just take the top few predicted indices.",
"In particular the loss on a single training sample is given below D ( (cid:101) p Tb || (cid:101) p Sb ) + D ( y b || p Sb ) + D ( (cid:101) p Te || (cid:101) p Se ) + D ( y e || p Se ) (2) where (cid:101) p Tb , (cid:101) p Te gives just the (normalised) probability of the topk predictions of teacher model on the begin and end indices.",
"Similarly (cid:101) p Sb , (cid:101) p Se gives the student predictions for the begin and end indices restricted to the topk entries given by the teacher model.",
"As the teacher model is already trained, and the main objective in knowledge distillation is to have the student model mimic the teacher model, there is no need to restrict the objective terms 1 and 3 in equation 1 to only the training data.",
"Hence by hallucinating conversations and documents we can get more terms in the objective and has an effect similar to data augmentation.",
"Another possible way to take advantage of teacher models is to extract more information than simply the predicted spans for each training example from the teacher models.",
"In particular one easy way to extract piece of information is the gradient of the model output with respect to the input for the teacher model.",
"The so called Sobolev training (Czarnecki et al., 2017) exploits this information and adds two more extra terms to the objective in (1).",
"The gradients are all taken with respect to the model input, which would be the source document, the query and the histories.",
"In this section, we describe the setup used for our experiments and discuss the results.",
"We perform experiments using the Holl-E conversation dataset (Moghe et al., 2018) which contains crowdsourced conversations from the movie do-main.",
"Every conversation in this dataset is associated with background knowledge comprising of plot details (from Wikipedia), reviews and comments (from Reddit).",
"Every alternate utterance in the conversation is generated by copying and/or modifying sentences from this unstructured background knowledge.",
"We refer the reader again to Figure 1 for a sample from this dataset.",
"We use the same train, test and validation splits as provided by the authors of the original paper (Moghe et al., 2018).",
"For each chat in the training data, the authors construct training triplets of the form { document, context, response } where the number of train, test and validations triplets are 34486, 4388 and 4318 respectively.",
"The context contains",
"(i) the query (the prober's most recent utterance) and",
"(ii) the history (past 2 utterances by the prober and the responder) as described earlier.",
"The task then is to train a model which can predict the response given the document and the context.",
"At test time, the model is shown document, context and predicts the response.",
"As mentioned earlier, the authors of Holl-E found that BiDAF and QANeT run into memory issues when evaluated on their dataset.",
"Hence, they propose two setups",
"(i) long document (LD) setup and",
"(ii) short document (SD) setup.",
"In the long document setup, the authors do not trim the document from which the response needs to be predicted.",
"In the short document setup, the authors trim the document to 256 words such that the span containing the response is contained in the trimmed document.",
"This enables them to evaluate BiDAF and QANeT on the trimmed document.",
"We also report experiments using both the LD and SD setup.",
"As mentioned above complex models (BiDAF and QANeT) face memory issues on training set with long documents.",
"So for all situations where we need predictions from complex models for long documents, we use a BiDAF/QANeT model trained on short document examples, and the prediction on the long document is made by splitting the long documents into chunks and feeding it to the trained BiDAF/QANeT model.",
"The final predicted span is the largest scoring span across all chunks.",
"For all models, we considered the following hy-perparameters and tuned them using the validation set.",
"We tried batch sizes of 32 and 64 and the following GRU sizes: 64, 100, 128.",
"We experimented with 1, 2 and 3 layers of GRU.",
"We used pre-trained publicly available Glove word embeddings 1 of 100 dimensions.",
"The best performance 1 https://nlp.stanford.edu/projects/ glove/ SAM, SD SAM, LD BiDAF, SD BiDAF, LD QANeT, SD QANeT, LD Memory 540MB 1.3GB 11GB 11GB 3GB 3GB Time 30 secs 43 sec 347 secs 710 secs 90 secs 150 secs Table 1: Inference' Memory and Time usage for different models.",
"was with the batch size of 32, 2 layers of GRU with hidden size 64.",
"We used Adam (Kingma and Ba, 2014) optimizer with initial learning rate set to 0 .",
"001 , 1 = 0 .",
"9 , 2 = 0 .",
"999 .",
"We performed L2 weight decay with decay rate set to 0 .",
"001 .",
"The models that we experiment with are listed below:",
"with additive interactions and no teacher terms in the objective (Only terms 2 and 4 in Eqn.",
"(1)).",
"2. SAM-add-Teach : The simple attention model with additive interactions and only knowledge distillation terms in the objective (Only terms 1 and 3 in Eqn.",
"(1)).",
"3. SAM-add : The simple attention model with additive interactions and both knowledge distillation terms and training data terms in the objective (all terms in 1).",
"4. SAM-add-topk : The simple attention model with additive interactions and knowledge distillation applied to the topk indices and training data terms in the objective (all terms in 2).",
"5. SAM-add-aug : The SAM-add model, where the teacher terms are evaluated on hallucinated data in addition to training data.",
"The hallucinated data are derived from the original training set by reordering the words in the source document, query and histories.",
"6. SAM-add-grad : The SAM-add model, with extra terms in the loss penalising the deviation of the gradient of the simple model from the gradient of the teacher model.",
"as the SAM-add model, but has 6 terms instead of the 4 terms in Equation 1. The extra two terms arise from using both QANeT and BiDAF instead of just one.",
"8.",
"SAM-add-ensemble : Same as the SAM-add model, but the teacher predictions p T are set as the average of the QANeT and BiDAF predictions.",
"All the add models above also have a mul variant where the additive interaction add is replaced by a multiplicative interaction mul .",
"The F1-scores of the various models we train are given in Table 2, Table 3 and Table 4. A summary of the space and time complexity of prediction with the simple model and the complex models is given in Table 1. The training times and parameter counts of the models are given in Table 5 We draw several conclusions and inferences from these results and make some comments below.",
"Efficient Training with Simple Model: From Table 5, we observe that simple attention model has 5 to 10 times less parameters than QANeT and BiDAF.",
"The training time of the simple model is also significantly lesser than that of the complex models.",
"Efficient Prediction with Simple Model: From Table 1, we observe that the simple model takes significantly less memory and time during prediction as well.",
"The complex models run out of memory on the large document test set, but a prediction can still be made with a trained BiDAF or QANeT Model Details BiDAF QANeT SAM-add-teach 20.23 SAM-mul 40.81 40.76 SAM-add 42.05 41.89 SAM-add-topk 41.71 42.01 SAM-add-aug 41.65 41.62 SAM-add-ensemble 41.74 SAM-add-both 42.32 SAM-add-grad 41.37 41.72 Table 4: F1 Scores for different variants of simple attention model on short document test set.",
"Conicity of Multiplicative models vs Additive Models: As noted before we use two distinct methods to capture the interaction between a group of vectors : an additive mechanism add , and a multiplicative mechanism mul .",
"As mentioned earlier, an additive model for capturing interactions has been hypothesized to increase diversity.",
"This is true in our case as well: the conicity of the h P and h R vectors goes down from about 0 .",
"8 to 0 .",
"7 (compare SAM-mul and SAM-add entries in Table 6) when using the additive model instead of the multiplicative model.",
"F1 scores of Multiplicative models vs Additive Models: In addition to improving diversity, using the additive model add instead of the multiplicative model mul increases F1-scores all across the board.",
"We have not reported scores for certain multiplicative variants because their performance is significantly worse.",
"F1 scores of Simple Model with Knowledge Distillation: We observe that using a teacher model for knowledge distillation using the objective in (1) almost always improves the performance of the simple model.",
"Importance of training labels: The objective in knowledge distillation (Equation (1)) involves both the training labels and the teacher predicted distribution.",
"Even though the teacher predicted distribution also incorporates the training data, removing the training data term from the objective of knowledge distillation worsens performance significantly.",
"Topk Distillation: The knowledge distillation approach based on topk predicted indices results in the best simple model for long document examples (see Table 3).",
"The value of k was chosen to be 50 for the short document case and it is 20 for the long document case.",
"Add-Both Knowledge Distillation: Learning from multiple teachers could lead to better performance hence we trained the student model with two teachers (BiDAF+QANeT).",
"Here the objective function of student is to minimize KL Divergence between predictions for both teachers.",
"We achieved best results with this technique on short document (SD) test set.",
"Data Augmentation and Gradient Distillation: While the data augmentation and gradient distillation methods hold a lot of promise, in the experiments that we conducted, we did not see a signifi-cant improvement.",
"QANeT Teachers vs BiDAF Teachers: Using either QANeT or BiDAF as a teacher doesn't seem to make any difference in the performance of the student models (compare the two columns in Table 3 and 4).",
"In this work, we address the trade-off between simple models on one hand which have low memory and compute requirements and complex models on the other hand which give better performance but are computationally expensive.",
"We propose a middle ground by training a simple model to mimic the characteristics of a complex model.",
"In particular, we make observations from a complex model which learns very diverse representations for different inputs and suitably modify the simple model to learn similar diverse representations.",
"We go further, by using knowledge distillation techniques to improve the simple model by training it to match the outputs from the complex model.",
"We experimented with the Holl-E conversation dataset and showed that by mimicking characteristics of the teacher a simple model can give improved performance.",
"We thank Department of Computer Science and Engineering, and Robert Bosch Center for Data Sciences and Artificial Intelligence, IIT Madras (RBC-DSAI) for providing us with adequate compute resources.",
"Lastly, we thank Ananya Sai and Shweta Bhardwaj for valuable discussions and reviewing intial drafts of this paper."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"result",
"other",
"other"
] |
[
"Spoken language understanding (SLU) is an essential component in conversational systems.",
"Most SLU components treat each utterance independently, and then the following components aggregate the multi-turn information in the separate phases.",
"In order to avoid error propagation and effectively utilize contexts, prior work leveraged history for contextual SLU.",
"However, most previous models only paid attention to the related content in history utterances, ignoring their temporal information.",
"In the dialogues, it is intuitive that the most recent utterances are more important than the least recent ones, in other words, time-aware attention should be in a decaying manner.",
"Therefore, this paper designs and investigates various types of time-decay attention on the sentence-level and speaker-level, and further proposes a flexible universal time-decay attention mechanism.",
"The experiments on the benchmark Dialogue State Tracking Challenge (DSTC4) dataset show that the proposed time-decay attention mechanisms significantly improve the state-of-the-art model for contextual understanding performance 1 .",
"Spoken dialogue systems that can help users to solve complex tasks such as booking a movie ticket have become an emerging research topic in artificial intelligence and natural language processing areas.",
"With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions.",
"Today, there are several virtual intelligent assistants, such as Apple's Siri, Google's Home, Microsoft's Cortana, and Ama-zon's Echo.",
"Recent advance of deep learning has 1 The source code is at: https://github.com/ MiuLab/Time-Decay-SLU .",
"inspired many applications of neural models to dialogue systems (Wen et al., 2017; Bordes et al., 2017; Dhingra et al., 2017; Li et al., 2017).",
"A key component of a dialogue system is a spoken language understanding (SLU) module it parses user utterances into semantic frames that capture the core meaning (Tur and De Mori, 2011).",
"A typical pipeline of SLU is to first decide the domain given the input utterance, and based on the domain, to predict the intent and to fill associated slots corresponding to a domain-specific semantic template, where each utterance is treated independently (Hakkani-Tur et al., 2016; Chen et al., 2016b,a; Wang et al., 2016).",
"To overcome the error propagation and further improve understanding performance, the contextual information has been shown useful (Bhargava et al., 2013; Xu and Sarikaya, 2014; Chen et al., 2015; Sun et al., 2016).",
"Prior work incorporated the dialogue history into the recurrent neural networks (RNN) for improving domain classification, intent prediction, and slot filling (Xu and Sarikaya, 2014; Shi et al., 2015; Weston et al., 2015; Chen et al., 2016c).",
"Recently, Chi et al. (2017) and Zhang et al. (2018) demonstrated that modeling speaker role information can learn the notable variance in speaking habits during conversations in order to benefit understanding.",
"In addition, neural models incorporating attention mechanisms have had great successes in machine translation (Bahdanau et al., 2014), image captioning (Xu et al., 2015), and various tasks.",
"Attentional models have been successful because they separate two different concerns: 1) deciding which input contexts are most relevant to the output and 2) actually predicting an output given the most relevant inputs.",
"For example, the highlighted current utterance from the tourist, uh on august , in the conversation of Figure 1 is to respond the question about WHEN , and the content-2133 Guide: and you were saying that you wanted to come to singapore Guide: uh maybe can i have a little bit more details like uh when will you be coming Guide: and like who will you be coming with Tourist: uh yes Tourist: um i'm actually planning to visit Tourist: uh on august FOL-CONFIRM; FOL-INFOQST-INFO; QST-WHEN QST-WHO FOL-CONFIRMRES-WHEN RES-WHEN Figure 1: The human-human conversational utterances and their associated semantic labels from DSTC4.",
"aware contexts that can help current understanding are the first two utterances from the guide and you were saying that you wanted to come to singapore and un maybe can i have a little bit more details like uh when will you be coming .",
"Previous work proposed an end-to-end time-aware attention network to leverage both contextual and temporal information for spoken language understanding and achieved the significant improvement, showing that the temporal attention can guide the attention effectively (Chen et al., 2017).",
"However, the time-aware attention function is an inflexible hand-crafted setting, which is a fixed function of time for assessing the attention.",
"This paper focuses on investigating various flexible time-aware attention mechanism in neural models with contextual information and speaker role modeling for language understanding.",
"The contributions are three-fold: This paper investigates different time-aware attention mechanisms and provides guidance for the future research about designing the time-aware attention function.",
"This paper proposes an end-to-end learnable universal time-decay mechanism with great flexibility of modeling temporal information for diverse dialogue contexts.",
"The proposed model achieves the state-of-the-art understanding performance in the dialogue benchmark DSTC dataset.",
"The model architecture is illustrated in Figure 2. First, the previous utterances are fed into the contextual model to encode into the history summary, and then the summary vector and the current utterance are integrated for helping understanding.",
"The contextual model leverages the attention mechanisms highlighted in red, which implements different attention functions for sentence and speaker role levels.",
"The whole model is trained in an end-to-end fashion, where the history summary vector and the attention weights are automatically learned based on the downstream SLU task.",
"The objective of the proposed model is to optimize the conditional probability of the intents given the current utterance, p ( y | x ) , by minimizing the cross-entropy loss.",
"Given the current utterance x = { w t } T 1 , the goal is to predict the user intents of x , which includes the speech acts and associated attributes.",
"We apply a bidirectional long short-term memory (BLSTM) model (Schuster and Paliwal, 1997) to history encoding in order to learn the probability distribution of the user intents.",
"where W his is a weight matrix and v his is the history summary vector, v cur is the context-aware vector of the current utterance encoded by the BLSTM, and o is the intent distribution.",
"Note that this is a multi-label and multi-class classification, so the sigmoid function is employed for modeling the distribution after a dense layer.",
"The user intent labels are decided based on whether the value is higher than a threshold tuned by the development set.",
"Considering that speaker role information is shown to be useful for better understanding in complex dialogues (Chi et al., 2017), we follow the prior work for utilizing the contexts from two roles to learn history summary representations, v his in (1), in order to leverage the role-specific contextual information.",
"Each role-dependent recurrent unit BLSTM role i receives corresponding inputs, x t, role i , which includes multiple utterances u i ( i = [1 , ..., t 1] ) preceding the current utterance u t from the specific role, role i , and have been processed by an encoder model.",
"v his = X role v his , role (3) = X role BLSTM role ( x t, role ) , where x t, role are vectors after one-hot encoding that represent the annotated intent and the attribute features.",
"Note that this model requires the ground truth annotations for history utterances for training and testing.",
"Therefore, each role-based contextual module focuses on modeling role-dependent goals and speaking style, and v cur from (1) would contain role-based contextual information.",
"One of the earliest work with a memory component applied to language processing is memory networks (Weston et al., 2015; Sukhbaatar et al., 2015), which encodes mentioned facts into vectors and stores them in the memory for question answering.",
"The idea is to encode important knowledge and store it into memory for future usage with attention mechanisms.",
"Attention mechanisms allow neural network models to selectively pay attention to specific parts.",
"There are also various tasks showing the effectiveness of attention mechanisms (Xiong et al., 2016; Chen et al., 2016c).",
"Recent work showed that two attention types (content-aware and time-aware) and two attention levels (sentence-level and role-level) significantly improve the understanding performance for complex dialogues.",
"This paper focuses on expanding the time-aware attention based on the investigation of different time-decay functions, and further learning an universal time-decay function automatically.",
"For time-aware attention mechanisms, we apply it using two levels, sentence-level and role-level structures, and Section 3 details the design and analysis of time-aware attention.",
"For the sentence-level attention, before feeding into the contextual module, each history vector is weighted by its time-aware attention u j for replacing (3): v U his = X role BLSTM role ( x t, role , { u j | u j role } ) .",
"For the role-level attention, a dialogue is disassembled from a different perspective on which speaker's information is more important (Chi et al., 2017).",
"The role-level attention is to decide how much to address on different speaker roles' contexts ( v his , role ) in order to better understand the current utterance.",
"The importance of a speaker given the contexts can be approximated to the maximum attention value among the speaker's utterances, role = max u j , where u j includes all contextual utterances from the speaker.",
"With the role-level attention, the sentence-level history from (3) can be rewritten into v R his = X role role v his , role (4) for combining role-dependent history vectors with their attention weights.",
"The objective is to optimize SLU performance, predicting multiple speech acts and attributes described in Section 2.1.",
"In the proposed model, 2135 all encoders, prediction models, and attention weights can be automatically learned in an end-to-end manner.",
"The decaying function curves can be easily separated into three types: convex , linear , and concave , illustrated in the top-right part of Figure 2, and each type of time-decay functions expresses a time-aware perspective given dialogue contexts.",
"Note that all attention weights will be normalized such that their summation is equal to 1 .",
"A convex curve also known as concave upward, in a simple 2D Cartesian coordinate system ( x, y ) , a convex curve f ( x ) means when x goes greater, the slope f 0 ( x ) is increasing.",
"Intuitively, recent utterances contain more salient information, and the salience decreases very quickly when the distance increases; therefore we introduce the time-aware attention mechanism that computes attention weights according to the time of utterance occurrence explicitly.",
"We first define the time difference between the current utterance and the preceding sentence u i as d ( u i ) , and then simply use its reciprocal to formulate a convex time-decay function: conv u i = 1 a d ( u i ) b , (5) where a and b are scalar parameters.",
"The increasing slopes of the decay-curve assert that importance of utterances should be attenuated rapidly, and the importance of a earlier history sentence would be considerably compressed.",
"Note that Chen et al. used a fixed convex time-decay function ( a = 1 , b = 1 ) (Chen et al., 2017).",
"A linearly decaying time-aware attention function should also be taken into consideration.",
"In a simple 2D Cartesian coordinate system ( x, y ) , the slopes of a linear function remain consistent when x changes.",
"That is, the importance of preceding utterances linearly declines as the distance between the previous utterance and the target utterance becomes larger.",
"d ( u i ) is larger than fe , we assign the attention value as 0 .",
"A concave curve also called concave downward, in contrast to convex curves, in a simple 2D Cartesian coordinate system ( x, y ) , a concave curve f ( x ) means that the slope f 0 ( x ) is decreasing when x goes greater.",
"Intuitively, the attention weight decreases relatively slow when the distance increases.",
"To implement this idea, we design a Butterworth filter -like low-distance pass filter (Butterworth, 1930) that is similar to the concave time-decay function in the beginning of the curve.",
"where D 0 is the cut-off distance and n is the order of filter.",
"The decreasing slopes of the decay-curve assert that the importance of utterances should weaken gradually, and the importance of a earlier history sentence would still be considerably compressed.",
"Moreover, it is more likely to preserve the information in the multiple recent utterances instead of focusing only on the most recent one.",
"As mentioned previously, there are three types of decaying curves: convex, linear, concave, each type represents a different perspective on dialogue contexts and models different contextual patterns.",
"However, because the contextual patterns may be diverse, a single type of function could not fit the complex behavior well.",
"Hence, we propose a flexible and universal time-decay attention function by composing three types of attentional curves: univ u i = w 1 conv u i + w 2 lin u i + w 3 conc u i (8) = w 1 a d ( u i ) b + w 2 ( e d ( u i ) + f ) + w 3 1 + ( d ( u i ) D 0 ) n , where w i are the weights of time-decay attention functions.",
"Because the framework can be trained in an end-to-end manner, all parameters ( w i , a , b , e , f , D 0 , n ) can be automatically learned to construct a flexible time-decay function.",
"With the combination of different curves and the adjustable weights, the proposed universal time-decay attention function expresses the flexibility of not being 2136 LU Model Sentence-Level Attention Role-Level Attention Conv.",
"strictly decaying; that is, the model can automatically learn a properly oscillating curve in order to model the diverse and complex contextual patterns using the attention mechanism.",
"To evaluate the proposed model, we conduct the language understanding experiments on human-human conversational data.",
"The experiments are conducted using the DSTC4 dataset, which consist of 35 dialogue sessions on touristic information for Singapore collected from Skype calls between 3 tour guides and 35 tourists, these 35 dialogs sum up to 31,034 utterances and 273,580 words (Kim et al., 2016).",
"All recorded dialogues with the total length of 21 hours have been manually transcribed and annotated with speech acts and semantic labels at each turn level.",
"The speaker information (guide and tourist) is also provided.",
"Unlike previous DSTC series collected human-computer dialogues, human-human dialogues contain rich and complex human behaviors and bring much difficulty to all the tasks.",
"Given the complex dialogue patterns and longer contexts, DSTC4 is a suitable benchmark dataset for evaluation.",
"We randomly selected 28 dialogues as the training set, 5 dialogues as the testing set, and 2 dialogues as the validation set.",
"We choose the mini-batch Adam as the optimizer with the batch size of 256 examples.",
"The size of each hidden recurrent layer is 128.",
"We use pre-trained 200-dimensional word embeddings GloV e (Pennington et al., 2014).",
"We only apply 30 training epochs without any early stop approach.",
"We focus on predicting multiple labels including intents and attributes, so the evaluation metric is an average F1 score for balancing recall and precision in each utterance.",
"The experiments are shown in Table 1, where we report the average results over five runs.",
"We include the best understanding performance (row",
"(a)) from the participants of DSTC4 in IWSDS 2016 for reference (Kim et al., 2016).",
"The one-tailed t-test is performed to validate the significance of improvement, and the numbers with markers indicate the significant improvement with p < 0 .",
"05 .",
"To evaluate the proposed time-decay attention, we compare the performance with the nave LU model without any contextual information (row",
"(b)), the contextual model without any attention mechanism (row",
"(c)), and the one using the content-aware attention mechanism (row",
"(d)), where the attention can be learned at sentence and role levels.",
"The row",
"(a) is the performance reported in the DSTC challenge 2 .",
"It is intuitive that the model without considering contexts (row",
"(b)) performs much worse than the contextual ones for dialogue modeling.",
"The proposed time-aware results are shown in the rows",
"(e)-(h), where the rows",
"(e)-(f) use only the time-aware attention while the rows",
"(g)-(h) model both content-aware and time-aware attention mechanisms together.",
"It is obvious that almost all time-aware results are better than three baselines.",
"In order to investigate the performance of various time-decay attention functions, for each curve we apply two settings: 1) Hand : hand-crafted 2 This experiment is not performed on the same setup as this paper, and the shown number is estimated for reference.",
"hyper-parameters (rows",
"(e) and",
"(g)) and 2) E2E : end-to-end training for parameters (rows",
"(f) and",
"(h)).",
"In the hand-crafted setting, the hyper-parameters a = 1 , b = 1 , e = 0 .",
"125 , f = 1 , D 0 = 5 , n = 3 are adopted 3 .",
"Table 1 shows that among three types of the sentence-level time-decay attention, only the convex time-decay attention significantly outperforms the baselines, indicating that an unsuitable time-decay attention function is barely useful.",
"For both settings, the convex functions perform best among the three types of time-decay functions.",
"Also, the end-to-end trainable setting results in better performance for most cases.",
"For our proposed universal time-decay attention mechanism, the same settings are conducted: 1) composing fixed versions for three types of time-decay functions weighted by learned parameters w i and 2) fully trainable parameters for all time-decay functions.",
"These two settings provide different levels of flexibility in fitting dialogue contextual attention, and the experimental results show that two settings both outperform all other time-decay attention functions.",
"For sentence-level attention, the end-to-end trainable universal time-decay attention achieves best performance (rows",
"(f) and",
"(h)), where the flexible time-aware attention (rows",
"(f) and",
"(h)) obtains 2.9% relative improvement compared to the model without the attention mechanism (row",
"(c)) and the model using content-aware attention only (row",
"(d)).",
"For role-level attention, all types of time-decay functions significantly improve the results.",
"The probably reason may be that modeling temporal importance for each sentence is more difficult and less accurate, and speaker roles in the dialogues provide informative cues for the model to connect the temporal importance from the same speakers together; therefore, the conversational patterns can be considered to additionally improve the understanding results.",
"The further analysis is discussed in Section 4.3.",
"Similarly, the best results are also from the end-to-end trainable universal time-decay function.",
"The significant improvement achieved by the universal functions indicates that our model can effectively learn a suitable attention function through this flexible setting and derive a proper curve to fit the temporal tendency to help the 3 The chosen parameters are based on the domain knowledge about dialogue properties.",
"model preserve the essence and drop unimportant parts in the dialogue contexts.",
"To further investigate what the universal time-decay attention learns, we inspect the learned weights w i and find that the convex attention function almost dominates the whole function.",
"In other words, our model automatically learns that the convex time-decay attention is more suitable for modeling contexts from the dialogue data than the other two types.",
"Therefore, we can conclude that in complex dialogues, the recent utterances contain majority of salient information for spoken language understanding, where the attention decay trend follows a convex curve.",
"We analyze the content-aware attention impact by comparing the results between time-aware only (rows",
"(e)-(f)) and content and time-aware jointly (rows",
"(g)-(h)).",
"The content-aware attention (row",
"(d)) fails to focus on the important contexts for improving understanding performance in the complex dialogues and even performs slightly worse than the contextual model without attention (row",
"(c)).",
"Without a delicately-designed attention mechanism, it is not guaranteed that incorporating an additional content-aware attention would bring better performance and the experimental results show that a simple and coarse content-aware attention barely provides any usable information given the complex dialogues.",
"Therefore, we focus on whether our time-aware attention mechanisms can compensate the poor attention learned from the content-aware model.",
"In other words, we are not going to verify whether our time-aware attention mechanisms could collaborate with the content-aware attention mechanism, instead, we focus on examining how much our proposed time-aware attention could mitigate the detriment of the content-aware attention.",
"By comparing the results between time-aware only (rows",
"(e)-(f)) and content and time-aware jointly (rows",
"(g)-(h)), we find that our universal time-decay attention keeps the improvement without too much performance drop by involving the learned temporal attention.",
"Namely, our proposed attention mechanism can capture temporal information precisely, and it therefore can counteract the harmful impact of inaccurate content-aware attention.",
"For role-level attention, Table 1 shows that all results with various time-decay attention mecha-2138",
"nisms are better than the one with only content-aware attention (row",
"(d)).",
"However, linear and concave time-decay functions do not provide additional improvement when we model the attention at the sentence level.",
"The probable reason may be that it is difficult to model attention for individual sentences given the unsuitable time-decay functions.",
"That is, if designs of attention functions are unsuitable for dialogue contexts, the encoded sentence embeddings would be weighted by improper attention values.",
"On the other hand, for role-level attention, each speaker role is assigned an attention value to represent their importance in the conversational interactions.",
"Previous work (Chi et al., 2017; Chen et al., 2017) also demonstrated the effectiveness of considering speaker interactions for better understanding performance.",
"By introducing role-level attention, the sentence-level attentional weights can be smoothed to avoid inappropriate values.",
"Surprisingly, even though learning sentence-level temporal attention is difficult, our proposed universal time-decay attention can achieve similar performance for sentence-level and role-level attention (76.67% and 76.75% from the row",
"(f)), further demonstrating the strong adaptability of fitting diverse dialogue contexts and the capability of capturing salient information.",
"It is intuitive that longer context brings richer information; however, it may obstruct the attention learning and result in poor performance because more information should be modeled and accurate estimation is not trivial.",
"Because when modeling dialogues, we have no idea about how many contexts are enough for better understanding, the robustness to varying context lengths is important for the contextual model design.",
"Here, we compare the results using different context Parameter Time-Aware (E2E Trainable) Sentence Role w 1 0.758 1.078 w 2 0.544 -0.378 w 3 -0.302 0.300 a 0.888 0.841 b 0.969 1.084 e -0.320 -0.129 f 0.640 0.993 D 0 4.873 4.980 n 2.977 2.755 Table 3: The converged values of end-to-end trainable parameters from the proposed universal time-decay attention models.",
"lengths (3, 5, 7) for detailed analysis in Table 2, where the number is for each speaker.",
"The models without attention and the content-aware models become slightly worse with increasing context lengths.",
"However, our proposed universal time-decay attention model mostly achieves better performance when including longer contexts, demonstrating not only the flexibility of adapting diverse contextual patterns but also the robustness to varying context lengths.",
"This paper proposes a flexible time-decay attention mechanism by composing three types of time-aware attention functions in different decaying tendencies, where each decaying curves reflect a specific perspectives on distribution over salient information in dialogue contexts.",
"The proposed universal time-decay attention shows great capability of modeling diverse dialogue patterns in the experiments and therefore proves that our proposed method is a general design of time-decay attention.",
"In our design, we endow the attention function with flexibility by employing many trainable parameters and hence it can automatically learn a properly decaying curve for fitting the dialogue contexts better.",
"To further analyze the combination of different time-decay attention functions, we inspect the converged values of the trainable parameters from the proposed universal time-decay attention models in Table 3. Under the end-to-end trainable setting, the initialization of the trainable parameters are the same as the hand-crafted ones ( w i = 1 , a = 1 , b = 1 , e = 0 . 125 , f = 1 , D 0 = 5 , n = 3 ).",
"In the experiments, the models automatically figure out that convex time-decay attention function should have a higher weight than others for both 2139 anything else (FOL-CONFIRM) Okay (FOL-ACK) so we can eat there (FOL-EXPLAIN) Okay (FOL-ACK) okay thank you (FOL-THANK) and how about anything else that where we can go for visit (QST-RECOMMEND) so maybe (FOL-INFO) yes so maybe at the same time if you are going to climb bukit timah (FOL-RECOMMEND) you can also bring along some snacks with you (FOL-RECOMMEND) just also be careful do not put your food items in plastic bag (FOL-INFO) put them inside your bag because there will be some monkeys on the hill (FOL-INFO) and they may disturb you (FOL-INFO) Tourist Guide is there any restaurant when we (FOL-CONFIRM) okay i mentioned earlier on i would like to recommend the zoo Target Sentence Content-Aware: FOL-INFO Content + Universal Time-Decay: RES-RECOMMEND Content + Universal Time-Decay Attention Content-Aware Attention and they will think that you know your plastic bag would have contained food (FOL-INFO) Figure 3: The visualization of the attention weights enhanced by the proposed time-decay function compared with the weights learned by the content-aware attention model.",
"sentence-level or role-level models ( w 1 > w 2 and w 1 > w 3 ).",
"Namely, in dialogue contexts, the recent utterances contain most information related to the current utterance, which is aligned with our intuition.",
"From the above experiments, the proposed time-decay attention mechanisms significantly improve the performance on both sentence and role levels.",
"To further understand how the time-decay attention changes the content-aware attention, we dig deeper into the learned attentional values for sentences and illustrate the visualization in Figure 3. The figure shows a partial dialogue between the tourist (left) and the guide (right), where the color shades indicate the learned attention intensities of sentences.",
"It can be found that the learned content-aware attention (red; row",
"(c)) focuses on the incorrect sentence ( so we can eat there ( FOL-EXPLAIN )) and hence predicts the wrong label, FOL-INFO .",
"The reason may be that with a coarse and simple design of content-aware attention mechanism, the attention function may not provide additional benefit for improvement.",
"By additionally leveraging our proposed universal time-decay attention methods, the result (blue; row",
"(g)) shows that the adjusted attention pays the highest attention on the most recent utterance and thereby predicts the correct intent, RES-RECOMMEND .",
"It can be found that our proposed time-decay attention can effectively turn the attention to the correct contexts in order to correctly predict the dialogue act and attribute.",
"Therefore, the proposed attention mechanisms are demonstrated to be effective for improving understanding performance in such complex human-human conversations.",
"This paper designs and investigates various time-decay attention functions based on an end-to-end contextual language understanding model, where different perspectives on dialogue contexts are analyzed and a flexible and universal time-decay attention mechanism is proposed.",
"The experiments on a benchmark human-human dialogue dataset show that the understanding performance can be boosted by simply introducing the proposed time-decay attention mechanisms for guiding the model to focus on the salient contexts following a convex curve.",
"Moreover, the proposed universal time-decay mechanisms are easily extensible to multiparty conversations and showing the potential of leveraging temporal information in NLP tasks of dialogues.",
"We would like to thank reviewers for their insightful comments on the paper.",
"The authors are supported by the Ministry of Science and Technology of Taiwan, Google Research, Microsoft Research, and MediaTek Inc.."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"Generating code-switched text is a problem of growing interest, especially given the scarcity of corpora containing large volumes of real code-switched text.",
"In this work, we adapt a state-of-the-art neural machine translation model to generate Hindi-English code-switched sentences starting from monolingual Hindi sentences.",
"We outline a carefully designed curriculum of pretraining steps, including the use of synthetic code-switched text, that enable the model to generate high-quality code-switched text.",
"Using text generated from our model as data augmentation, we show significant reductions in perplexity on a language modeling task, compared to using text from other generative models of CS text.",
"We also show improvements using our text for a downstream code-switched natural language inference task.",
"Our generated text is further subjected to a rigorous evaluation using a human evaluation study and a range of objective metrics, where we show performance comparable (and sometimes even superior) to code-switched text obtained via crowd workers who are native Hindi speakers.",
"Code-switching (CS) refers to the linguistic phenomenon of using more than one language within a single sentence or conversation.",
"CS appears naturally in conversational speech among multilingual speakers.",
"The main challenge with building models for conversational CS text is that we do not have access to large amounts of CS text that is conversational in style.",
"One might consider using social media text that contains CS and is more readily available.",
"However, the latter is quite different from conversational CS text in its vocabulary (e.g., due to the frequent use of abbreviated slang terms, Work done while first two authors were students at IIT Bombay. hashtags and mentions), in its sentence structure (e.g., due to character limits in tweets) and in its word forms (e.g., due to transliteration being commonly employed in social media posts).",
"This motivates the need for a generative model of realistic CS text that can be sampled to subsequently train models for CS text.",
"In this work, we tackle the problem of generating high-quality CS text using only limited amounts of real CS text during training.",
"We also assume access to large amounts of monolingual text in the component languages and parallel text in both languages, which is a reasonable assump-tion to make for many of the world's languages.",
"We focus on Hindi-English CS text where the matrix (dominant) language is Hindi and the embedded language is English.",
"1 Rather than train a generative model, we treat this problem as a translation task where the source and target languages are monolingual Hindi text and Hindi-English CS text, respectively.",
"We also use the monolingual Hindi text to construct synthetic CS sentences using simple techniques.",
"We show that synthetic CS text, albeit being naive in its construction, plays an important role in improving our model's ability to capture CS patterns.",
"We draw inspiration from the large body of recent work on unsupervised machine translation (Lample et al., 2018a,b) to design our model, which will henceforth be referred to as T ranslation for C odeS witching, or TCS.",
"TCS, once trained, will convert a monolingual Hindi sentence into a Hindi-English CS sentence.",
"TCS makes effective use of parallel text when it is available and uses backtranslation-based objective functions with monolingual text.",
"1 Given the non-trivial effort involved in collecting annotations from professional annotators and crowd workers, we focused on a single language pair (Hindi-English) and leave explorations on more language pairs for future work.",
"Below, we summarize our main contributions: 1. We propose a state-of-the-art translation model that generates Hindi-English CS text starting from monolingual Hindi text.",
"This model requires very small amounts of real CS text, uses both supervised and unsupervised training objectives and considerably benefits from a carefully designed training curriculum, that includes pretraining with synthetically constructed CS sentences.",
"2. We introduce a new Hindi-English CS text corpus in this work.",
"2 Each CS sentence is accompanied by its monolingual Hindi translation.",
"We also designed a crowdsourcing task to collect CS variants of monolingual Hindi sentences.",
"The crowdsourced CS sentences were manually verified and form a part of our new dataset.",
"3. We use sentences generated from our model to train language models for Hindi-English CS text and show significant improvements in perplexity compared to other approaches.",
"4. We present a rigorous evaluation of the quality of our generated text using multiple objective metrics and a human evaluation study, and they clearly show that the sentences generated by our model are superior in quality and successfully capture naturally occurring CS patterns.",
"Early approaches of language modeling for code-switched text included class-based n -gram models ( Yeh et al.), factored language models that exploited a large number of syntactic and semantic features (Adel et al., 2015), and recurrent neural language models (Adel et al., 2013) for CS text.",
"All these approaches relied on access to real CS text to train the language models.",
"Towards alleviating this dependence on real CS text, there has been prior work on learning code-switched language models from bilingual data (Li and Fung, 2014b,a; Garg et al., 2018b) and a more recent direction that explores the possibility of generating synthetic CS sentences.",
"(Pratapa et al., 2018) presents a technique to generate synthetic CS text that grammatically adheres to a linguistic theory 2 The new dataset and relevant code is available at: https://www.cse.iitb.ac.in/~pjyothi/TCS .",
"of code-switching known as the equivalence constraint (EC) theory (Poplack, 1979; Sankoff, 1998).",
"Lee and Li (2020) proposed a bilingual attention language model for CS text trained solely using a parallel corpus.",
"Another recent line of work has explored neural generative models for CS text.",
"Garg et al. (2018a) use a sequence generative adversarial network (SeqGAN (Yu et al., 2017)) trained on real CS text to generate sentences that are used to aid language model training.",
"Another GAN-based method proposed by Chang et al. (2019) aims to predict the probability of switching at each token.",
"Winata et al. (2018) and Winata et al. (2019) use a sequence-to-sequence model enabled with a copy mechanism (Pointer Network (Vinyals et al., 2015)) to generate CS data by leveraging parallel monolingual translations from a limited source of CS data.",
"Samanta et al. (2019) proposed a hierarchical variational autoencoder-based model tailored for code-switching that takes into account both syntactic information and language switching signals via the use of language tags.",
"(We present a comparison of TCS with both Samanta et al. (2019) and Garg et al. (2018a) in Section 5.2.1.) In a departure from using generative models for CS text, we view this problem as one of sequence transduction where we train a model to convert a monolingual sentence into its CS counterpart.",
"Chang et al. (2019); Gao et al. (2019) use GAN-based models to modify monolingual sentences into CS sentences, while we treat this problem of CS generation as a translation task and draw inspiration from the growing body of recent work on neural unsupervised machine translation models ( Lample et al., 2018a,b) to build an effective model of CS text.",
"The idea of using translation models for code-switching has been explored in early work (Vu et al., 2012; Li and Fung, 2013; Dhar et al., 2018).",
"Concurrent with our work, there have been efforts towards building translation models from English to CS text (Solorio et al., 2021) and CS text to English (Gupta et al., 2021).",
"While these works focus on translating from the embedded language (En-glish) to the CS text or vice-versa, our approach starts with sentences in the matrix language (Hindi) which is the more dominant language in the CS text.",
"Also, ours is the first work, to our knowledge, to repurpose an unsupervised neural machine translation model to translate monolingual sentences 3156 into CS text.",
"Powerful pretrained models like mBART (Liu et al., 2020) have been used for code-mixed translation tasks in concurrent work (Gau-tam et al., 2021).",
"We will further explore the use of synthetic text with such models as part of future work.",
"Figure 1 shows the overall architecture of our model.",
"This is largely motivated by prior work on unsupervised neural machine translation (Lample et al., 2018a,b).",
"The model comprises of three layers of stacked Transformer (Vaswani et al., 2017) encoder and decoder layers, two of which are shared and the remaining layer is private to each language.",
"Monolingual Hindi (i.e. the source language) has its own private encoder and decoder layers (denoted by Enc p 0 and Dec p 0 , respectively) while English and Hindi-English CS text jointly make use of the remaining private encoder and decoder layers (denoted by Enc p 1 and Dec p 1 , respec-tively).",
"In our model, the target language is either English or CS text.",
"Ideally, we would like Enc p 1 and Dec p 1 to be trained only using CS text.",
"However, due to the paucity of CS text, we also use text in the embedded language (i.e. English) to train these layers.",
"Next, we outline the three main training steps of TCS.",
"(I) Denoising autoencoding (DAE).",
"We use monolingual text in each language to estimate language models.",
"In Lample et al. (2018b), this is achieved via denoising autoencoding where an au-toencoder is used to reconstruct a sentence given a noisy version as its input whose structure is altered by dropping and swapping words arbitrarily ( Lample et al., 2018a).",
"The loss incurred in this step is denoted by LDAE and is composed of two terms based on the reconstruction of the source and target language sentences, respectively.",
"(II)",
"Backtranslation (BT): Once the layers are initialized, one can use non-parallel text in both languages to generate a pseudo-parallel corpus of backtranslated pairs (Sennrich et al., 2015).",
"That is, a corpus of parallel text is constructed by translating sentences in the source language via the pipeline, Enc p 0 , Enc sh , Dec sh and Dec p 1 , and translating target sentences back to the source language via Enc p 1 , Enc sh , Dec sh and Dec p 0 .",
"The backtranslation loss LBT is composed of cross-entropy losses from using these pseudo-parallel Enc p 0 <latexit sha1_base64=\"Qe5FC3z9zYmVnrv25ZeSnbK6vwM=\">AAAB+3icbVDLSsNAFL3xWeur1qWbwSK4KkkVdFkUwWUF+4A2hMl00g6dScLMRCwhv+LGhSJu/RF3/o2TmIW2Hhg4nHMvc+7xY86Utu0va2V1bX1js7JV3d7Z3duvHdR7KkokoV0S8UgOfKwoZyHtaqY5HcSSYuFz2vdn17nff6BSsSi81/OYugJPQhYwgrWRvFp9JLCeSpHehCTz0tizM6/WsJt2AbRMnJI0oETHq32OxhFJBA014VipoWPH2k2x1IxwmlVHiaIxJjM8oUNDQyyoctMie4ZOjDJGQSTNCzUq1N8bKRZKzYVvJvOkatHLxf+8YaKDSzdlYZxoam4rPgoSjnSE8iLQmElKNJ8bgolkJisiUywx0aauqinBWTx5mfRaTees2bo7b7SvyjoqcATHcAoOXEAbbqEDXSDwCE/wAq9WZj1bb9b7z+iKVe4cwh9YH99n+JSs</latexit> Enc sh <latexit sha1_base64=\"2ClzRthuuS4GM27vrwBa0TGgjjA=\">AAAB+nicbVDLSsNAFJ3UV62vVJduBovgqiRV0GVRBJcV7APaECbTSTt0ZhJmJkqJ+RQ3LhRx65e482+cxCy09cDA4Zx7mXNPEDOqtON8WZWV1bX1jepmbWt7Z3fPru/3VJRITLo4YpEcBEgRRgXpaqoZGcSSIB4w0g9mV7nfvydS0Ujc6XlMPI4mgoYUI20k366PONJTydNrgTM/VdPMtxtO0ykAl4lbkgYo0fHtz9E4wgknQmOGlBq6Tqy9FElNMSNZbZQoEiM8QxMyNFQgTpSXFtEzeGyUMQwjaZ7QsFB/b6SIKzXngZnMg6pFLxf/84aJDi+8lIo40cScVnwUJgzqCOY9wDGVBGs2NwRhSU1WiKdIIqxNWzVTgrt48jLptZruabN1e9ZoX5Z1VMEhOAInwAXnoA1uQAd0AQYP4Am8gFfr0Xq23qz3n9GKVe4cgD+wPr4BBYiUfg==</latexit> Dec sh <latexit sha1_base64=\"5yqcHaZeQ2t4ytT0kWqxFfWwFBA=\">AAAB+nicbVDLSsNAFL3xWesr1aWbwSK4KkkVdFnUhcsK9gFtCJPppB06k4SZiVJiP8WNC0Xc+iXu/BsnbRbaemDgcM693DMnSDhT2nG+rZXVtfWNzdJWeXtnd2/frhy0VZxKQlsk5rHsBlhRziLa0kxz2k0kxSLgtBOMr3O/80ClYnF0rycJ9QQeRixkBGsj+XalL7AeSZHdUDL1MzWa+nbVqTkzoGXiFqQKBZq+/dUfxCQVNNKEY6V6rpNoL8NSM8LptNxPFU0wGeMh7RkaYUGVl82iT9GJUQYojKV5kUYz9fdGhoVSExGYyTyoWvRy8T+vl+rw0stYlKSaRmR+KEw50jHKe0ADJinRfGIIJpKZrIiMsMREm7bKpgR38cvLpF2vuWe1+t15tXFV1FGCIziGU3DhAhpwC01oAYFHeIZXeLOerBfr3fqYj65Yxc4h/IH1+QP2CpR0</latexit> LCE : Enc p 0 Enc sh Dec sh Dec p 1 ; Enc p 1 Enc sh Dec sh Dec p 0 <latexit sha1_base64=\"JHovl8MvHOXmQNNtcZyC3i/oN/U=\">AAACpHicnVHLSgMxFM2M7/qqunQTLYqrMqOCohtRiy66aNFqoa1DJr21wcyD5I5Yhvky/8Kdf2M6FtS2Ky8EDufce3IffiyFRsf5tOyZ2bn5hcWlwvLK6tp6cWPzQUeJ4tDgkYxU02capAihgQIlNGMFLPAlPPovV0P98RWUFlF4j4MYOgF7DkVPcIaG8orv7YBhnzOZVjMvvapkbYQ3TM9oRnNFBWkl5EaKPWec0v0f5hqmM7HnGir3pPScGtvCuK/7P18n84olp+zkQSeBOwIlMoqaV/xodyOeBBAil0zrluvE2EmZQsElZIV2oiFm/IU9Q8vAkAWgO2m+5IzuGaZLe5EyL0Sas78rUhZoPQh8kznsVI9rQ3Ka1kqwd9pJRRgnCGb+/KNeIilGdHgx2hUKOMqBAYwrYXqlvM8U42juWjBLcMdHngQPh2X3qHxYPy5dXI7WsUi2yS45IC45IRfkltRIg3Brx7qxalbd3rer9p3d+E61rVHNFvkT9tMXzpPTYg==</latexit> LBT : Enc p 1 Enc sh Dec sh Dec p 0 ; Enc p 0 Enc sh Dec sh Dec p 1 <latexit sha1_base64=\"oGvgJyH7EuFegnvsW5XKARpKPtA=\">AAACpHicnVHJSgNBEO0Z97hFPXppDYqnMKOCohdxQQ85JGhMIIlDT6diGnsWumvEMMyX+Rfe/Bs7Y0BNcrKg4fFe1eta/FgKjY7zadkzs3PzC4tLheWV1bX14sbmo44SxaHOIxmpps80SBFCHQVKaMYKWOBLaPgvV0O98QpKiyh8wEEMnYA9h6InOENDecX3dsCwz5lMK5mXXj5kbYQ3TM9oRnNFBelNyI0Ue+44pfs/zDVMZ2LPMVTuSek5NbaFcV/nf75u5hVLTtnJg04CdwRKZBRVr/jR7kY8CSBELpnWLdeJsZMyhYJLyArtREPM+At7hpaBIQtAd9J8yRndM0yX9iJlXog0Z39XpCzQehD4JnPYqR7XhuQ0rZVg77STijBOEMz8+Ue9RFKM6PBitCsUcJQDAxhXwvRKeZ8pxtHctWCW4I6PPAkeD8vuUfmwdly6uBytY5Fsk11yQFxyQi7IHamSOuHWjnVrVa2avW9X7Hu7/p1qW6OaLfIn7Kcv7TLTcA==</latexit> Enc p 1 <latexit sha1_base64=\"GA5koI/FKjTrLss2eAQ6EpqabjY=\">AAAB+3icbVDLSsNAFL3xWeur1qWbwSK4KkkVdFkUwWUF+4A2hMl00g6dScLMRCwhv+LGhSJu/RF3/o2TmIW2Hhg4nHMvc+7xY86Utu0va2V1bX1js7JV3d7Z3duvHdR7KkokoV0S8UgOfKwoZyHtaqY5HcSSYuFz2vdn17nff6BSsSi81/OYugJPQhYwgrWRvFp9JLCeSpHehCTz0thzMq/WsJt2AbRMnJI0oETHq32OxhFJBA014VipoWPH2k2x1IxwmlVHiaIxJjM8oUNDQyyoctMie4ZOjDJGQSTNCzUq1N8bKRZKzYVvJvOkatHLxf+8YaKDSzdlYZxoam4rPgoSjnSE8iLQmElKNJ8bgolkJisiUywx0aauqinBWTx5mfRaTees2bo7b7SvyjoqcATHcAoOXEAbbqEDXSDwCE/wAq9WZj1bb9b7z+iKVe4cwh9YH99pfZSt</latexit> Dec p 0 
<latexit sha1_base64=\"UbcaR71z26AraQJR/yuM5MkW+cw=\">AAAB+3icbVDLSsNAFL2pr1pfsS7dDBbBVUmqoMuiLlxWsA9oQ5hMp+3QmSTMTMQS8ituXCji1h9x5984abPQ1gMDh3Pu5Z45QcyZ0o7zbZXW1jc2t8rblZ3dvf0D+7DaUVEiCW2TiEeyF2BFOQtpWzPNaS+WFIuA024wvcn97iOVikXhg57F1BN4HLIRI1gbyberA4H1RIr0lpLMT2PfyXy75tSdOdAqcQtSgwIt3/4aDCOSCBpqwrFSfdeJtZdiqRnhNKsMEkVjTKZ4TPuGhlhQ5aXz7Bk6NcoQjSJpXqjRXP29kWKh1EwEZjJPqpa9XPzP6yd6dOWlLIwTTUOyODRKONIRyotAQyYp0XxmCCaSmayITLDERJu6KqYEd/nLq6TTqLvn9cb9Ra15XdRRhmM4gTNw4RKacActaAOBJ3iGV3izMuvFerc+FqMlq9g5gj+wPn8AWH+Uog==</latexit> Dec p 1 <latexit sha1_base64=\"39OkXS3psyaEzLAtbnzdaTMAHLM=\">AAAB+3icbVDLSsNAFL2pr1pfsS7dDBbBVUmqoMuiLlxWsA9oQ5hMp+3QmSTMTMQS8ituXCji1h9x5984abPQ1gMDh3Pu5Z45QcyZ0o7zbZXW1jc2t8rblZ3dvf0D+7DaUVEiCW2TiEeyF2BFOQtpWzPNaS+WFIuA024wvcn97iOVikXhg57F1BN4HLIRI1gbyberA4H1RIr0lpLMT2PfzXy75tSdOdAqcQtSgwIt3/4aDCOSCBpqwrFSfdeJtZdiqRnhNKsMEkVjTKZ4TPuGhlhQ5aXz7Bk6NcoQjSJpXqjRXP29kWKh1EwEZjJPqpa9XPzP6yd6dOWlLIwTTUOyODRKONIRyotAQyYp0XxmCCaSmayITLDERJu6KqYEd/nLq6TTqLvn9cb9Ra15XdRRhmM4gTNw4RKacActaAOBJ3iGV3izMuvFerc+FqMlq9g5gj+wPn8AWgSUow==</latexit> Hi Hi En/CS En/CS LDAE : Enc p 0 Enc sh Dec sh Dec p 0 ; Enc p 1 Enc sh Dec sh Dec p 1 <latexit sha1_base64=\"7BRF3mozryImVTTS4MDCqnCuZpI=\">AAACpXicnVHLSgMxFM2Mr1pfVZdugkV0VWZUUHRTtYILBSttFdoyZNLbNjTzILkjlmH+zK9w59+YjgW1uvJC4HDOvSf34cdSaHScd8uem19YXCosF1dW19Y3SptbLR0likOTRzJSTz7TIEUITRQo4SlWwAJfwqM/uproj8+gtIjCBo5j6AZsEIq+4AwN5ZVeOwHDIWcyvc28tHZxnXUQXjA9oxnNJRWk1yE3Wuw5s5QefjE1+Jv5LMs9KT2nxrY46+v+z9fNvFLZqTh50N/AnYIymca9V3rr9CKeBBAil0zrtuvE2E2ZQsElZMVOoiFmfMQG0DYwZAHobppvOaN7hunRfqTMC5Hm7PeKlAVajwPfZE461bPahPxLayfYP+2mIowTBDN//lE/kRQjOjkZ7QkFHOXYAMaVML1SPmSKcTSHLZoluLMj/watw4p7VDmsH5erl9N1FMgO2SUHxCUnpEpuyD1pEm7tWjdW3Xqw9+07u2G3PlNta1qzTX6E7X0AivvTrg==</latexit> Figure 1: Model architecture.",
"(III)",
"Cross-entropy loss (CE): Both the previous steps used unsupervised training objectives and make use of non-parallel text.",
"With access to parallel text, one can use the standard supervised cross-entropy loss (denoted by LCE ) to train the translation models (i.e. going from Enc p 0 to Dec p 1 and Enc p 1 to Dec p 0 via the common shared layers).",
"Apart from the use of parallel text and monolingual text employed in training TCS, we also construct large volumes of synthetic CS text using two simple techniques.",
"This synthetic CS text is nonparallel and is used to optimize both LDAE and LBT .",
"The role of the synthetic CS text is to expose TCS to various CS patterns (even if noisy), thereby encouraging the model to code-switch.",
"The final step of finetuning using All-CS enables model to mimic switching patterns of real CS texts The first technique (named LEX) is a simple heuristic-based technique that constructs a CS sentence by traversing a Hindi sentence and randomly replacing a word by its English translation using a bilingual lexicon (Conneau et al., 2017).",
"The probability of replacing a word is chosen to match the switching distribution in real CS text.",
"The second technique (named EMT) is more linguistically aware.",
"Following the methodology proposed by Bhat et al. (2016) that is based on the embedded 3157 matrix theory (EMT) for code-switching, we apply clause substitution methods to monolingual text to construct synthetic CS text.",
"From inspecting English parse trees, we found that replacing embedded sentence clauses or subordinate clauses with their Hindi translations would likely produce CS text that appears somewhat natural.",
"We introduce a new Hindi-English CS dataset, that we will refer to as All-CS.",
"It is partitioned into two subsets, Movie-CS and Treebank-CS, based on their respective sources.",
"Movie-CS consists of conversational Hindi-English CS text extracted from 30 contemporary Bollywood scripts that were publicly available.",
"3 The Hindi words in these sentences were all Romanized with potentially multiple non-canonical forms existing for the same Hindi token.",
"We employed a professional annotation company to convert the Romanized Hindi words into their respective back-transliterated forms rendered in Devanagari script.",
"We also asked the annotators to provide monolingual Hindi translations for all these sentences.",
"Using these monolingual Hindi sentences as a starting point, we additionally crowdsourced for CS sentences via Amazon's Mechanical Turk (MTurk) (Amazon, 2005).",
"Table 1 shows two Hindi sentences from Movie-CS and Treebank-CS, along with the different variants of CS sentences.",
"Turkers were asked to convert a monolingual Hindi sentence into a natural-sounding CS variant that was semantically identical.",
"Each Turker had to work on five Hindi sentences.",
"We developed a web interface using which Turkers could easily copy parts of the Hindi sentence they wanted to retain and splice in English segments.",
"More details about this interface, the crowdsourcing task and worker statistics are available in Appendix A. All-CS comprises a second subset of CS sentences, Treebank-CS, that was crowdsourcing using MTurk.",
"We extracted 5292 monolingual Hindi sentences (with sentence lengths less than or equal to 15 words) from the publicly available Hindi Dependency Treebank that contains dependency parses.",
"4 These annotations parse each Hindi sentence into chunks, where a chunk is defined as 3 https://www.filmcompanion.in/category/fc-pro/scripts/ https://moifightclub.com/category/scripts/ 4 http://ltrc.iiit.ac.in/treebank_H2014/ Movie-CS (cid:465) (cid:465) (Eng) (But laughter medicine really changed my life) (Gold) but laughter therapy (cid:547) life (cid:547) actually MTurk laughter therapy (cid:465) MTurk but laughter therapy really (cid:547) life change (cid:547) MTurk therapy life (cid:465) Treebank-CS 7.20 (cid:551) (Eng) (Income from the fair was estimated at Rs 7.20 crore) MTurk fair income 7.20 evaluate (cid:551) MTurk income 7.20 (cid:551) Table 1: Two All-CS examples.",
"a minimal, non recursive phrase.",
"Turkers were asked to convert at least one Hindi chunk into English.",
"This was done in an attempt to elicit longer spans of English segments within each sentence.",
"Figure 2 shows the sentence length distributions for Movie-CS and Treebank-CS, along with histograms accumulating English segments of different lengths in both subsets.",
"We clearly see a larger fraction of English segments with lengths within the range [2-6] in Treebank-CS compared to Movie-CS.",
"Table 2 provides detailed statistics of the new CS dataset.",
"We also report two metrics proposed by Guzmn et al. (2017) to measure the amount of code-switching present in this new corpus.",
"Monolingual Index (M-Index) is a value between 0 and 1 Quantity/Metric Movie-CS Treebank-CS All-CS |Train| 15509 5914 21423 |Test| 1500 1000 2500 |Valid| 500 500 1000 # Tokens 196300 87979 284279 # Hindi Sentences 9290 5292 14582 # NEs 4342 4810 9152 Fraction of NEs 0.0221 0.0547 0.0322 M-Index 0.5542 0.6311 0.5774 I-Index 0.2852 0.3434 0.3023 Table 2: Key statistics of CS datasets.",
"that quantifies the amount of mixing between languages (0 denotes a purely monolingual corpus and 1 denotes equal mixing from both languages) and I-Index measures the fraction of switching points in the corpus.",
"We observe Treebank-CS exhibits higher M-index and I-index values compared to Movie-CS indicating more code-switching overall.",
"All-CS also contains a non-trivial number of named entities (NEs) which are replaced by an NE tag in all our language modeling experiments.",
"Parallel Hindi-English Text.",
"As described in Section 5, TCS uses parallel text for supervised training.",
"For this purpose, we use the IIT Bombay English-Hindi Corpus (Kunchukuttan et al., 2017) containing parallel Hindi-English text.",
"We also construct a larger parallel corpus using text from the OpenSubtitles (OpSub) corpus (Lison and Tiedemann, 2016) that is more conversational and hence more similar in style to Movie-CS.",
"We chose ~1 million English sentences (OpSub-EN), where each sentence contained an embedded clause or a subordinate clause to support the construction of EMT lines.",
"We used the Google Translate API to obtain Hindi translations for all these sentences (OpSub-HI).",
"Henceforth, we use OpSub to refer to this parallel corpus of OpSub-EN paired with OpSub-HI.",
"We extracted 318K sentences from the IITB corpus after thresholding on length (5-15) and considering overlap in vocabulary with OpSub.",
"(One could avoid the use of an external service like Google Translate and use existing parallel text (Zhang et al., 2020)) in conjunction with a word aligner to construct EMT lines.",
"OpSub, being more conversational in style, turns out to be a better pretraining corpus.",
"A detailed comparison of these choices is described in Appendix",
"H.) Synthetic CS Datasets.",
"As mentioned in Section 3.1, we use two simple techniques LEX and EMT to generate synthetic CS text, which in turn is used to train TCS in an unsupervised training phase.",
"For each Hindi monolingual sentence in OpSub, we generate two LEX and two EMT synthetic CS sentences giving us OpSub-LEX and OpSub-EMT, respectively.",
"We also generate five LEX and five EMT lines for each monolingual sentence in All-CS.",
"In order to generate EMT lines, we first translate the monolingual Hindi sentences in All-CS to English using Google Translate and then follow the EMT generation scheme.",
"This results in two datasets, All-CS-LEX and All-CS-EMT, which appear in later evaluations.",
"(Appendix B contains more details about EMT applied to OPUS and All-CS.)",
"Datasets from existing approaches.",
"(I) VACS (Samanta et al., 2019) is a hierarchical variational autoencoder-based model designed to generate CS text.",
"We train two VACS models, one on All-CS (VACSv1) and the other on OpSub-EMT followed by All-CS (VACSv2).",
"(II)",
"Garg et al. (2018a) use SeqGAN (Yu et al., 2017) a GAN-based sequence generation model to generate CS sentences by providing an RNNLM as the generator.",
"As with VACS, we train two SeqGAN 5 models, one on All-CS (SeqGANv1) and one on OpSub-EMT followed by All-CS (SeqGANv2).",
"Samples are drawn from both SeqGAN and VACS by first drawing a random sample from the standard normal distribution in the learned latent space and then decoding via an RNN-based generator for SeqGAN and a VAE-based decoder for VACS.",
"We sample ~2M lines for each dataset to match the size of the other synthetic datasets.",
"First, we investigate various training curricula to train TCS and identify the best training strategy by evaluating BLEU scores on the test set of All-CS (5.1).",
"Next, we compare the output from TCS with synthetic CS text generated by other methods ( 5.2).",
"We approach this via language modeling ( 5.2.1), human evaluations (5.2.2) and two downstream tasksNatural Language Inference and Sentiment Analysisinvolving real CS text (5.2.3).",
"Apart from these tasks, we also present four different objective evaluation metrics to evaluate synthetic CS text: BERTScore, Accuracy of a BERT-based classifier and two diversity scores (5.3).",
"Table 3 shows the importance of various training curricula in training TCS; these models are evaluated using BLEU (Papineni et al., 2002) scores computed with the ground-truth CS sentences for",
"5 https://github.com/suragnair/seqGAN",
"the test set of All-CS.",
"We start with supervised pretraining of TCS using the two parallel datasets we have in hand IITB and OpSub (System A ).",
"A is then further finetuned with real CS text in All-CS.",
"The improvements in BLEU scores moving from System O (trained only on All-CS) to System B illustrate the benefits of pretraining TCS using Hindi-English parallel text.",
"Systems C and D in Table 3 use our synthetic CS datasets OpSub-LEX and OpSub-EMT, respectively.",
"These systems are further finetuned on All-CS using both unsupervised and supervised training objectives to give C 1 , C 2 , D 1 and D 2 , respectively.",
"Comparing these four systems with System B shows the importance of using synthetic CS for pretraining.",
"Further, comparing C 1 against D 1 and Figure 3: Variation of BLEU score with amount of All-CS parallel training data.",
"C 2 against D 2 , we observe that OpSub-EMT is indeed a better choice for pretraining compared to OpSub-LEX.",
"Also, supervised finetuning with All-CS is clearly superior to unsupervised finetuning.",
"Henceforth, Systems D 1 and D 2 will be referred to as TCS (U) and TCS (S), respectively.",
"While having access to parallel CS data is an advantage, we argue that the benefits of having parallel data only marginally increase after a threshold.",
"Figure 3 shows how BLEU scores vary when changing the amount of parallel CS text used to train D 2 .",
"We observe that BLEU increases substantially when we increase CS data from 1000 lines to 5000 lines, after which there is a trend of diminishing returns.",
"We also find that D 1 (that uses the data in All-CS as non-parallel text) is as good as the model trained using 4000 lines of parallel text.",
"We use text generated by our model to train a language model (LM) and evaluate perplexities on the test set of All-CS to show how closely sentences from TCS mimic real CS text.",
"We use a state-of-the-art RNNLM model AWD-LSTM-LM Merity et al. (2018) as a blackbox LM and only experiment with different training datasets.",
"The model uses three LSTM layers of 1200 hidden units with weight tying and 300-dimensional word embeddings.",
"In initial runs, we trained our language model on the large parallel/synthetic CS datasets and finetuned on the All-CS data.",
"However, this training strategy was prone to over-fitting on All-CS data.",
"To counter this problem of forgetting during the pretrain-finetuning steps, we adopted the Mix-review strategy proposed by He et al. (2021).",
"The training sentences from All-CS remain constant through the epochs and the amount of pretraining data is exponentially decayed with each epoch.",
"This greatly alleviates the forgetting problem in our model, and leads to better overall perplexities.",
"Additional details about these LMs are provided in Appendix E. Table 4 shows test perplexities using different training curricula and data generated using two prior approaches, VACS and SeqGAN.",
"Sentences generated using TCS yield the largest reductions in test perplexities, compared to all other approaches.",
"We evaluated the quality of sentences generated by TCS using a human evaluation study.",
"We sampled 150 sentences each, using both TCS (U) and TCS (S), starting from monolingual Hindi sentences in the evaluation sets of All-CS.",
"The sentences were chosen such that they were consistent with the length distribution of All-CS.",
"For the sake of comparison, corresponding to the above-mentioned 150 monolingual Hindi samples, we also chose 150 CS sentences each from All-CS-LEX and All-CS-EMT.",
"Along with the ground-truth CS sentences from All-CS, this resulted in a total of 750 sentences.",
"6 These sentences were given to three linguistic experts in Hindi and they were asked to provide scores ranging between 1 and 5 (1 for worst, 5 for best) under three heads: Syntactic correctness, Semantic correctness and Natu-ralness.",
"Table 5 shows that the sentences generated using TCS (S) and TCS (U) are far superior to the EMT and LEX sentences on all three criteria.",
"TCS (S) is quite close in overall quality to the real sentences and TCS (U) fares worse, but only by a small margin.",
"6 We only chose CS sentences from TCS that did not exactly match the ground-truth CS text.",
"We observe that the model is able to introduce long contiguous spans of English words (e.g. meeting next week, but it is clear, etc.).",
"The model also displays the ability to meaningfully switch multiple times within the same sentence (e.g., i love you very much, but, friend).",
"There are also interesting cases of English segments that appear to be ungrammatical but make sense in the CS context (e.g., because i know main dish, etc.).",
"GLUECoS (Khanuja et al., 2020) is an evaluation benchmark spanning six natural language tasks for code-switched English-Hindi and English-Spanish data.",
"The authors observe that M-BERT (Pires et al., 2019) consistently outperforms cross-lingual embedding techniques.",
"Furthermore, pretraining M-BERT on small amounts of code-switched text improves its performance in most cases.",
"For our evaluation, we select two tasks that require semantic understanding: Natural Language Inference (NLI) and Sentiment Analysis (SA).",
"We sample 100K monolingual sentences from 3161 Pretraining Data NLI (Accuracy) SentimentAnalysis(F1) Baseline 57.88 1.22 57.97 0.06 OpSub-HI 58.47 0.36 58.13 0.25 OpSub-LEX 58.67 0.94 58.40 0.33 OpSub-EMT 58.96 0.70 58.79 0.37 TCS (S) 59.57 0.57 59.39 0.81 All-CS 59.74 0.96 58.77 0.44 Table 7: GLUECoS Evaluation: Mean and standard deviation of scores after evaluating on 5 seeds.",
"When subject to examples from high-quality generators, the classifier should find it hard to tell apart real from fake Evaluation Metric Real LEX EMT TCS (S) TCS (U) BERTScore All (3500) 0.812 0.796 0.627 0.764 0.788 Mono (3434) 0.812 0.782 0.623 0.755 0.772 UNK (1983) 0.809 0.804 0.636 0.827 0.846 UNK & Mono (1857) 0.808 0.785 0.633 0.813 0.821 BERT-based Classifier | Sentences | 4767 12393 12484 12475 12475 Accuracy(fake) 42.76 96.52 97.83 80.31 88.62 Diversity Gzip ( D ) 22.13 24.12 33.17 21.37 17.59 Self-BLEU 61.3 29.7 24.6 63.6 64.2 Table 8:",
"(a) BERTScores on test split of All-CS.",
"Each row corresponds to a different data filter.",
"The numbers in parenthesis denote the number of sentences in the data after filtering.",
"(b) Accuracies from the classifier for samples generated by various methods as being fake.",
"The | Sentences | refer to size of dataset for each system.",
"TCS models have the lowest accuracy among synthetic methods.",
"(c) Diversity Scores for different techniques using Gzip and Self-BLEU based diversity measures.",
"OpSub-HI and select corresponding LEX, EMT and TCS (S) sentences.",
"M-BERT is then trained using the masked language modelling (MLM) objective on text from all 4 systems (including OpSub-HI) for 2 epochs.",
"We also train M-BERT on 21K sentences from All-CS (real CS).",
"Finally, these pretrained models are fine-tuned on the selected GLUECoS tasks.",
"(More details are in Appendix G.)",
"Table 7 lists the accuracies and F1 scores using different pretraining schemes for both NLI and sentiment analysis, respectively.",
"Plain monolingual pretraining by itself leads to performance improvements on both tasks, presumably due to do-main similarity between GLUECoS (movie scripts, social media etc.) and OpSub.",
"As mentioned in Khanuja et al. (2020), pretraining on CS text further improves performance for both NLI and SA.",
"Among the synthetic methods, TCS (S) has consistently better scores than LEX and EMT.",
"For SA, TCS (S) even outperforms pretraining on real CS text from All-CS.",
"BERTScore.",
"BERTScore (Zhang* et al., 2020) is a recently-proposed evaluation metric for text generation.",
"Similarity scores are computed between each token in the candidate sentence and each token in the reference sentence, using contextual BERT embeddings (Devlin et al., 2018) of the tokens.",
"We use this as an additional objective metric to evaluate the quality of the sentences generated using TCS.",
"We use the real monolingual sentence as the reference and the generated CS sentence as the candidate, excluding sentences from TCS (S) and TCS (U) that exactly match the real sentence.",
"Since our data is Hindi-English CS text, we use Multilingual BERT (M-BERT) (Pires et al., 2019) for high-quality multilingual representations.",
"Table 8 outlines our main results on the test set of All-CS.",
"TCS sometimes generates purely monolingual sentences.",
"This might unfairly tilt the scores in favour of TCS since the reference sentences are also monolingual.",
"To discount for such biases, we remove sentences generated by TCS (U) and TCS (S) that are purely monolingual (Row label Mono in BERTScore).",
"Sentences having <UNK> tokens (labeled UNK) are also filtered out since these tokens are only generated by TCS for out-of-vocabulary words.",
"UNK & Mono refers to applying both these filters.",
"EMT lines consistently show the worst performance, which is primarily due to the somewhat poor quality of translations involved in generating these lines (refer to Appendix B).",
"With removing both monolingual and <UNK> tokens, we observe that TCS (U) and TCS (S) yield the highest BERTScores, even outperforming the BERTScore on real data obtained from the Turkers.",
"BERT-based Classifier.",
"In this evaluation, we use M-BERT ( Pires et al., 2019) to build a classifier that distinguishes real CS sentences from synthetically generated ones (fake).",
"samples.",
"We add a fully connected layer over the M-BERT base architecture that takes the [CLS] token as its input to predict the probability of the sentence being real or fake.",
"Fake sentences are drawn from the union of TCS (U), TCS (S), All-CS-LEX and All-CS-EMT.",
"In order to alleviate the class imbalance problem, we oversample the real sentences by a factor of 5 and shuffle the data.",
"The model converges after training for 5 epochs.",
"We see in Table 8 that the classification accuracy of whether a sample is fake or not is lowest for the outputs from TCS among the different generation techniques.",
"Measuring Diversity.",
"We are interested in finding out how diverse the predictions from TCS are.",
"We propose a simple measure of diversity in the CS variants that is based on how effectively sentences can be compressed using the gzip utility.",
"7 We considered using Byte Pair Encoding (BPE) (Gage, 1994) as a measure of data compression.",
"However, BPE operates at the level of individual words.",
"Two word sequences w1 w2 w3 and w3 w2 w1 would be identically compressed by a BPE to-kenizer.",
"We would ideally like to account for such diversity and not discard this information.",
"gzip uses Lempel-Ziv coding (Ziv and Lempel, 1977) that considers substrings of characters during compression, thus allowing for diversity in word ordering to be captured.",
"Our diversity measure D is simply the following: For a given set of CS sentences, run gzip on each sentence individually and sum the resulting file sizes ( S 1 ).",
"Next, paste all the CS sentences into a single file and run gzip on it to get a file of size S 2 .",
"Then, D = S 1 S 2 .",
"Smaller D scores indicate larger diversity.",
"If the variants of a sentence are dissimilar to one another and hence very diverse, then S 2 would be large thus leading to smaller values of D .",
"Table 8 shows the diversity scores for different techniques.",
"Both TCS (S) and TCS (U) have a higher diversity score compared to LEX and EMT.",
"TCS (U) exceeds even the responses received via MTurk (Real) in diversity.",
"We note here that diversity, by itself, is not necessarily a desirable trait.",
"Our goal is to generate sentences that are diverse while being natural and semantically meaningful.",
"The latter properties for text from TCS (S) and TCS (U) have already been verified in our human evaluation study.",
"However, using self-BLEU is slightly problematic in our setting as systems like LEX that switch words at random positions would result in low self-BLEU (indicating high diversity).",
"This is indeed the case, as shown in Table 8 LEX, EMT give lower self-BLEU scores as compared to TCS.",
"However, note that the scores of the TCS models are comparable to that of real CS data.",
"In this work, we present a neural translation model for CS text that transduces monolingual Hindi sentences into realistic Hindi-English CS text.",
"Text generated by our model is evaluated using a number of different objective metrics, along with LM, NLI and sentiment analysis tasks, and a detailed human evaluation study.",
"The role of synthetic data in training such models merits a more detailed investigation which we leave for future work.",
"We thank all the anonymous reviewers for their constructive feedback which helped improve the presentation of this work.",
"We also thank all the volunteers who helped with the collection of CS text that is released as part of our dataset, All-CS."
] | [
"abstain",
"method",
"method",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"result",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"method",
"method",
"objective",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"objective",
"method",
"other",
"other"
] |
[
"Unsupervised Domain Adaptation (UDA) aims to transfer the knowledge of source domain to the unlabeled target domain.",
"Existing methods typically require to learn to adapt the target model by exploiting the source data and sharing the network architecture across domains.",
"However, this pipeline makes the source data risky and is inflexible for deploying the target model.",
"This paper tackles a novel setting where only a trained source model is available and different network architectures can be adapted for target domain in terms of deployment environments.",
"We propose a generic framework named Cross-domain Knowledge Distillation (CdKD) without needing any source data.",
"CdKD matches the joint distributions between a trained source model and a set of target data during distilling the knowledge from the source model to the target domain.",
"As a type of important knowledge in the source domain, for the first time, the gradient information is exploited to boost the transfer performance.",
"Experiments on cross-domain text classification demonstrate that CdKD achieves superior performance, which verifies the effectiveness in this novel setting.",
"Annotating sufficient training data is usually an expensive and time-consuming work for diverse application domains.",
"Unsupervised Domain Adaptation (UDA) aims at solving this learning problem in the unlabeled target domain by utilizing the abundant knowledge in an existing domain called source domain, even when these domains may have different distributions.",
"This technique has motivated research on cross-domain text classification (Chen et al., 2019; Ye et al., 2020; Gururangan et al., 2020).",
"One of the important knowledge in the source domain is the labels of samples.",
"Current methods mainly leverage the labeled source Corresponding author.",
"data and unlabeled target data to learn the domain-invariant features (Tzeng et al., 2014; Ganin and Lempitsky, 2015) and the discriminative features (Saito et al., 2017; Ge et al., 2020) that are shared across different domains.",
"Unfortunately, sometimes we are forbidden access to the source data, which are distributed on different devices and usually contain private information, e.g., user profile.",
"Existing methods cannot solve the UDA problem without the source data yet.",
"In addition, it is necessary to adapt the target domain with a flexible network architecture different from the source domain in terms of different deployment requirements for different domains.",
"But most of works (Liang et al., 2020; Li et al., 2020) are required to share the same network architecture between different domains.",
"In this paper, we propose a novel UDA setting: only a trained source model and a set of unlabeled target data are provided, and the target model is allowed to have different network architectures with the trained source model.",
"It differs from the vanilla UDA in that a trained source model instead of source data is provided as supervision to the unlabeled target domain when learning to adapt the model.",
"Such a setting satisfies privacy policy and effective delivery, and helps deploy the target model flexibly according to the target application.",
"Our setting seems somewhat similar to Knowledge Distillation (KD) (Hinton et al., 2015), where a trained teacher model teaches a student model with different architecture on the same task over a set of unlabeled data.",
"KD assumes that the empirical distribution of the data used for training the student model matches the distribution associated with the trained teacher model.",
"Nevertheless, in our setting, the unlabeled data and teacher (source) model have different distributions.",
"One of simple yet generic solution for our setting is to match the distributions between source and target domains under the process of distilling the knowledge.",
"However, it is quite challenging to reduce the shifts between a known distribution (e.g., a trained source model) and the empirical distribution of data (e.g., target data).",
"Prior methods minimize a distance metric of domain discrepancy, such as Maximum Mean Discrepancy (MMD) (Tzeng et al., 2014) to match the distributions across domains in terms of the source and target data.",
"Unfortunately, the empirical evaluation of these metrics is unavailable since we cannot access the source data.",
"In this paper, we propose a generic framework named Cross-domain Knowledge Distillation (CdKD).",
"Specifically, we define a Joint Kernelized Stein Discrepancy (JKSD) that measures the largest discrepancy over the Hilbert space of functions between empirical sample expectations of target domain and source distribution expectations.",
"Inspired by the works (Liu et al., 2016), the source distribution expectations are being zero via the effect of Stein operator such that we can evaluate the discrepancy of joint distributions without any source data.",
"We embed JKSD criterion into deep network where multi-view features including activations, gradients and class probabilities in the source model are exploited to explore the domain-invariant and discriminative features across domains.",
"In addition, we further maximize JKSD using adversarial strategy where the multi-view features are integrated into domain adaptation abundantly.",
"Finally, CdKD is learnt by joint optimizing both KD objective (Hin-ton et al., 2015) and JKSD.",
"The main contributions are outlined as, We propose to investigate the problem of UDA without needing source data by exploring the distribution discrepancy between a source model and a set of target data.",
"We adapt the target domain with different network architecture flexibly in terms of different deployment environments.",
"For the first time, the gradient information of the source domain is exploited to boost the UDA performance.",
"Mu et al. (2020) shows a key intuition that per-sample gradients contain task-relevant discriminative information.",
"We experiment under two Amazon review datasets for cross-domain text classification, which demonstrates that CdKD still has obvious performance advantage in all settings though without needing any source data.",
"UDA aims at learning a model which can generalize across different domains following different probability distributions.",
"Existing works mainly focus on how to learn domain-invariant features and discriminative features that are shared across different domains.",
"Moment Matching, e.g., Maximum Mean Discrepancy (MMD) (Tzeng et al., 2014) and adversarial learning (Ganin and Lempitsky, 2015) are commonly used to learn domain-invariant features by aligning the marginal distributions.",
"To learn discriminative features for UDA, self-training methods (Saito et al., 2017; Zou et al., 2019) train the target classifier in terms of the pseudo labels of target data.",
"These works committed to improve the quality of pseudo labels including introducing mutual learning (Ge et al., 2020) and dual information maximization (Ye et al., 2020).",
"The other line of learning discriminative features is to match the conditional distributions across domains by aligning multiple domain-specific layers (Long et al., 2017, 2018) or making an explicit hypothesis between conditional distributions (Wang et al., 2018; Yu et al., 2019; Fang et al., 2020).",
"STN (Yao et al., 2019) explores the class-conditional distributions to approximate the discrepancy between the conditional distributions via Soft-MMD.",
"The work (Zhang et al., 2021) derives a novel criterion Conditional Mean Discrepancy (CMD) to measure the shifts between conditional distributions in tensor-product Hilbert space directly.",
"However, these methods assume the target users can access to the source data, which is unsafe and sometimes unpractical since source data may be private and decentralized.",
"Therefore, the recent works propose to generalize a target model over a set of unlabeled target data only in terms of the supervision of a trained source model.",
"SHOT (Liang et al., 2020) learns the target-specific feature extraction module by using both information maximization and self-training strategy.",
"Li et al. (2020) improve the target model through target-style data based on generative adversarial network (GAN) where the GAN and the target model are collaborated without source data.",
"Unfortunately, they require that the target model must share the same network architecture with the source model.",
"Meanwhile, multi-view features in the source model including activation and gradient are not exploited which also contribute most to the domain adaptation.",
"KD transfers the knowledge from a cumbersome model to a small model that is more suitable for deployment (Hinton et al., 2015).",
"The general technique of KD involves using a teacher-student strategy, where a large deep teacher model trained for a given task teaches shallower student model on the same task (Yim et al., 2017; Chen et al., 2018).",
"The teacher and student models are trained based on the same data.",
"These KD methods make an as-sumption that the training data and the distribution associated with the teacher model are independent and identically distributed.",
"However, sometimes we are required to train a student model in a new domain that the teacher model is not familiar, i.e, the domain shifts exist between the new domain and the domain that the teacher model is trained.",
"The proposed CdKD is able to relieve the domain shifts adaptively during distilling the knowledge.",
"We address the unsupervised domain adaptation (UDA) task with only a trained source model and without access to source data.",
"We consider K -way classification.",
"Formally, in this novel setting, we are given a trained source model f s : X (cid:55) Y and a target domain D t = { x i } mi =1 X with m unlabeled samples.",
"Here, the goal of Cross-domain Knowledge Distillation (CdKD) is to learn a target model f t : X (cid:55) Y and infer { y i } mi =1 , with only D t and f s available.",
"The target model f t is allowed to have different network architecture with f s .",
"CdKD is a special KD which consists of a trained teacher model f s , a student model f t and unlabeled data D t as well.",
"But it differs from KD in that the empirical distribution of D t don't match the distribution associated with the trained model f s .",
"Therefore, it is necessary to introduce distribution adaptation to eliminate the biases between the source and target domains during distilling the knowledge.",
"Specifically, as shown in Figure",
"1(a), we first introduce KD to distill the knowledge to the target domain in terms of the class probabilities produced by the source model f s .",
"Then, we introduce a novel criterion JKSD to match the joint distributions across domains by evaluating the shift between a known distribution and a set of data.",
"This is the first work to explore the distribution discrepancy between a model and a set of data in UDA task.",
"Given a target sample x D t , the target model f t : X (cid:55) Y produces class probabilities by using a softmax output layer that converts the log-its p = ( p 1 , , p K ) into a probability f t ( x ) = ( q 1 , , q K ) ,",
"where T is a temperature used for generating softer class probabilities.",
"We optimize the target model f t by minimizing the following objective for knowledge distillation, LKD = 1 m (cid:88) x D t f s ( x ) (cid:62) log f t ( x ) (1) In our paper, the setting of temperature follows the work (Hinton et al., 2015): a high temperature T is adopted to compute f t ( x ) during training, but after it has been trained it uses a temperature of 1.",
"In traditional UDA setting, Joint Maximum Mean Discrepancy (JMMD) (Long et al., 2017) has been applied to measure the discrepancy in joint distributions of different domains, and it can be estimated empirically using finite samples of source and target domains.",
"Specifically, suppose k : X X (cid:55) R and l : Y Y (cid:55) R are the positive definite kernels with feature maps ( ) : X (cid:55) F and ( ) : Y (cid:55) G for domains of X and Y , respectively that corresponds to reproducing kernel Hilbert space (RKHS) F and G .",
"Let CPXY : G (cid:55) F be the uncentered cross covariance operator that be defined as CPXY = E ( x , y ) P [ ( x ) ( y )] .",
"JMMD measures the shifts in joint distributions P ( X , Y ) and Q ( X , Y ) by J ( P, Q ) = sup f g H EQ ( f ( x ) g ( y )) EP ( f ( x ) g ( y )) = (cid:107)C QXY CPXY (cid:107) F G where H is a unit ball in F G .",
"In our setting, unfortunately, the empirical estimation of JMMD is unavailable since we cannot access the source data D s directly (The empirical estimation of JMMD is in Appendix A.1).",
"Kernelized stein discrepancy (KSD) as a statistical test for goodness-of-fit can test whether a set of samples are generated from a marginal probability (Chwialkowski et al., 2016; Liu et al., 2016).",
"Inspired by KSD, we introduce Joint KSD (JKSD) to evaluate the discrepancy between a known distribution P ( X , Y ) and a set of data Q = { x i , y i } mi =1 obtained from a distribution Q ( X , Y ) .",
"Assume the dimension of X is d ( X = R d ), i.e., x = ( x 1 , , x d ) , x X .",
"We denote by F d = F F the Hilbert space of d 1 vector-valued functions f = { f 1 , , f d } with f i F , and with an inner product (cid:104) f, f (cid:48) (cid:105) F d = (cid:80) di =1 (cid:104) f i , f (cid:48) i (cid:105) F for f (cid:48) F d .",
"We begin by defining a Stein operator AP : F d G (cid:55) F d G acting on functions f F d and g G ( AP f g )( x , y ) = g ( y ) ( x f ( x ) + f ( x ) x log P ( x , y ) ) (cid:62) 1 d (2) where x log P ( x , y ) = x P ( x , y ) P ( x , y ) R d 1 , x f ( x ) = ( f 1 ( x ) x 1 , , f d ( x ) x d ) R d 1 for x = ( x 1 , , x d ) and 1 d is a d 1 vector with all elements equal to 1.",
"The expectation of Stein operator AP over the distribution P is equal to 0 EP ( AP f g )( x , y ) = 0 (3) which can be proved easily by (Chwialkowski et al., 2016, Lemma 5.1).",
"The Stein operator AP can be expressed by defining a function xy over the space F d G that depends on gradients of the log-distribution and the kernel, xy = x ( x ) ( y ) +( x log P ( x , y )) ( x ) ( y ) (4) Thus, ( AP f g )( x , y ) can be presented as an inner product, i.e., (cid:104) f g, xy (cid:105) F d G .",
"Now, we can define JKSD and express it in the RKHS by replacing the term f ( x ) g ( y ) in J ( P, Q ) as our Stein operator, S ( P, Q ) := sup f g H (cid:48) EQ ( AP f g )( x , y ) EP ( AP f g )( x , y ) = sup EQ ( AP f g )( x , y ) = sup (cid:104) f g, EQ xy (cid:105) F d G = (cid:107) EQ xy (cid:107) F d G where H (cid:48) is a unit ball in F d G .",
"This makes it clear why Eq.",
"3 is a desirable property: we can compute S ( P, Q ) by computing the Hilbert-Schmidt norm (cid:107) EQ xy (cid:107) , without need to access the data obtained from P .",
"We can empirically estimate S 2 ( P, Q ) based on the known probability P and finite samples Q = { ( x i , y i ) } mi =1 Q ( X , Y ) in term of kernel tricks as follows, S 2 ( P, Q ) = 1 m 2 tr ( 2 KL + 2 L + L ) (5) ( 2 K ) i,j = (cid:10) x i ( x i ) , x j ( x j ) (cid:11) F d i,j = ( x i k ( x i , x j )) (cid:62) x j log P ( x j , y j ) i,j = k ( x i , x j ) (cid:16) x i log P ( x i , y i ) (cid:62) x j log P ( x j , y j ) (cid:1) where L = { l ( y i , y j ) } is the kernel gram matrix, (cid:104) x ( x ) , x (cid:48) ( x (cid:48) ) (cid:105) F d = (cid:80) di =1 k ( x , x (cid:48) ) x i x (cid:48) i , all the matrices 2 K , , and L are in R m m , and tr ( M ) is the trace of the matrix M .",
"(Refer to Appendix A.2 for detail.)",
"In our experiments, we adopt Gaussian kernel k ( x 1 , x 2 ) = exp( 1 2 (cid:107) x 1 x 2 (cid:107) 2 ) where its derivative x 1 k ( x 1 , x 2 ) R d and ( 2 K ) i,j R can be computed numerically, x 1 k ( x 1 , x 2 ) = k ( x 1 , x 2 ) (cid:18) 2 2 ( x 1 x 2 ) (cid:19) ( 2 K ) i,j = k ( x 1 , x 2 ) (cid:18) 2 d 2 4 (cid:107) x 1 x 2 (cid:107) 2 4 (cid:19) Remark.",
"Based on the virtue of goodness-fit test theory, we will have S ( P, Q ) = 0 if and only if P = Q (Chwialkowski et al., 2016).",
"Instead of applying uniform weights as MMD does, JKSD applies non-uniform weights i,j , S 2 ( P, Q ) = (cid:88) i,j i,j l ( y i , y j ) where i,j = ( 2 K + 2 + ) i,j is, in turn, determined by the activation-based and gradient-based features of the known probability P .",
"JKSD computes a dynamic weight i,j to decide whether the sample i shares the same label with other sample j in the target domain.",
"Different from cluster-based methods (Liang et al., 2020), JKSD assigns each sample a label according to all the data in the target domain instead of the centroid of each category.",
"The computation of centroid severely suffers from the noise due to the domain shifts.",
"In contrast, our solution is more suitable for UDA because we avoid to use the untrusted intermediate results (i.e., the centroid of each category) to infer the labels.",
"The pipeline of our CdKD framework is shown in Figure",
"1(b).",
"The source model parameterized by a DNN consists of two modules: a feature extractor T s : X (cid:55) Z s and a classifier G s : Z s (cid:55) Y , i.e., f s ( x ) = G s ( T s ( x )) .",
"The target model f t = T t G t also has two modules where we use parallel notations T t ( ; T ) : X (cid:55) Z t and G t ( ; G ) : Z t (cid:55) Y for target model.",
"Note here in our experiments, the dimension of the latent representations of source model is set equal to the target model, i.e., Z s = Z t = R d .",
"The extractors T s and T t are allowed to adopt different network architectures.",
"The input space X is usually highly sparse where the kernel function cannot capture sufficient features to measure the similarity.",
"Therefore, we evaluate JKSD based on latent representations of target samples, i.e., Q = { ( z , y ) | z = T t ( x ) , y = G t ( z ) , x D t } Q ( Z , Y ) .",
"In Eq.",
"5, it is required to evaluate the joint probability P ( Y = y , Z = z ) = p ( y | z ) p ( z ) over a sample ( z , y ) obtained from Q .",
"The probability p ( y | z ) that the sample follows conditional distribution of the source domain P ( Y | Z ) can be evaluated as p ( y | z ) = y (cid:62) G s ( z ) .",
"Similarly, the term p ( z ) represents the probability that the target representation z follows the marginal distribution P ( Z ) of the source domain.",
"Since we cannot access the source marginal distribution directly, we approximate it by evaluating the cosine similarity of the representations outputted from the source model and target model, i.e., p ( z ) = 1 2 cos( z , T s ( x )) + 1 2 where x = T 1 t ( z ) is the sample corresponding to z for any z Q .",
"Formally, the term z log P ( z , y ) in Eq.",
"5 can be computed as z log P ( z , y ) = 1 p ( y | z ) y (cid:62) z G s ( z ) + z p ( z ) p ( z ) where z G s ( z ) RK d is a Jacobian matrix of the target latent representation with respect to the source classifier G s .",
"We propose to train the target model f t by jointly distilling the knowledge from the source domain and reducing the shifts in the joint distributions via JKSD, min T , GLKD + S 2 ( P, Q ) where > 0 is a tradeoff parameter for JKSD.",
"In order to maximize the test power of JKSD, we require the class of functions h F d G to be rich enough.",
"Meanwhile, kernel-based metrics usually suffer from vanishing gradients for low-bandwidth kernels.",
"We are enlightened by (Long et al., 2017) which introduces the adversarial training to circumvent these issues.",
"Specifically, we multiple fully connected layers U and V parameterized by U and V to JKSD, i.e., k ( x i , x j ) and l ( y i , y j ) are replaced as k ( U ( x i ) , U ( x j )) and l ( V ( y i ) , V ( y j )) in Eq.",
"5.",
"We maximize JKSD with respect to the new parameters U and V to maximize the test power of JKSD such that the samples in the target domain are made more discriminative by abundantly exploiting the activation and gradient features in the source domain.",
"As shown in Figure",
"1(c), the target model f t can be optimized by the following adversarial objective, min T , G max U , VLKD + S 2 ( P, Q ) (6) 4 Experiments 4.1 Setup To testify its versatility, we evaluate the proposed model in two tasks including UDA and knowledge distillation.",
"Amazon-Review 1 is a benchmark dataset for domain adaptation in text classification task.",
"Two versions of Amazon Review datasets are used to evaluate models.",
"The work provides a simplified Amazon-Review dataset ( Amazon-Feature ) collected from four distinct domains: Books ( B ), DVD ( D ), Electronics ( E ) and Kitchen ( K ).",
"Each domain comprises 4,000 samples with 400d feature representations and 2 categories (positive and nega-tive).",
"Zhang et al. (2021) collected a larger dataset called Amazon-Text from Amazon-Review with the same domains in Amazon-Feature to test the model performance for large-scale transfer learning.",
"The review texts are divided into two categories according to user rating, i.e., positive (5 stars) and negative (1 star).",
"There are 10,000 original review texts in each category and 20,000 texts in each domain.",
"The notation S T represents the transfer learning from the source domain S to the target domain T .",
"Baselines.",
"For the bulk of experiments the following baselines are evaluated.",
"The Source-Only model is trained only over source domain and tested over target-domain data while Train-on-Target model is trained and tested over target-domain data directly.",
"We compare with conventional domain adaptation methods: Transfer Component Analysis ( TCA ) (Pan et al., 2010), Balanced Distribution Adaptation ( BDA ) (Wang et al., 2017), Geodesic Flow Kernel ( GFK ) (Gong et al., 2012), Deep Domain Confusion ( DDC ) (Tzeng et al., 2014), Domain Adversarial Neural Networks ( RevGrad ) (Ganin and Lempitsky, 2015) and Dynamic Adversarial Adaptation Network ( DAAN ) (Yu et al., 2019).",
"We compare with SHOT (Liang et al., 2020) for the UDA task without the source data.",
"We also compare with the knowledge distillation",
"method ( KD ) (Hinton et al., 2015) in our setting.",
"In our experiments, three different extractors are selected.",
"For Amazon-Feature dataset, the extractor is simply modeled as a typical 3-layer fully connected network ( MLP ) to transform 400d inputs into 50d latent feature vectors.",
"Two types of networks are leveraged for Amazon-Text dataset to encode the original review texts, i.e., TextCNN and BertGRU .",
"TextCNN (Kim, 2014) is a text convolutional network that consists of 150 convolutional filters with 3 different window sizes.",
"We also evaluate the performance of cross-domain text classification on a pre-trained language model, i.e., BERT (Devlin et al., 2019).",
"We freeze BERT model and construct a 2-layer bi-directional GRU (Cho et al., 2014) to learn from the representations produced by BERT.",
"The classifier is modeled as a 2-layer fully connected network for all the settings.",
"For CdKD, we consider to learn the source model f s by minimizing the standard cross-entropy loss.",
"We randomly specify a 0.7/0.3 split in the source dataset and generate the optimal source model based on the validation split.",
"U and V are modeled as weight matrices.",
"We implement all deep methods based on Py-torch framework, and BERT model is implemented and pre-trained by pytorch-transformers 2 .",
"We adopt Gaussian kernel with bandwidth set to me-dian pairwise squared distances on the training data (Gretton et al., 2012).",
"The temperature T is set to 10 during training.",
"We use AdamW optimizer (Loshchilov and Hutter, 2019) with batch size of 128 and the learning rate annealing strategy in (Long et al., 2017): it is adjusted during back propagation using the following formula: 2 https://github.com/huggingface/ transformers Table 2: Classification accuracy (%) on Amazon-Text dataset using TextCNN and BertGRU Extractors.",
"p = 0 (1+10 p ) 0 .",
"75 where p is the training progress linearly changing from 0 to 1 and 0 is set to 0.001.",
"We apply the same strategy in (Ganin and Lempit-sky, 2015) to adjust the factor dynamically, i.e., we gradually change it from 0 to 1 by a progressive schedule: p = 2 1+exp( 10 p ) 1 .",
"In the first experiment, we compare with the conventional domain adaptation methods where the source model and target model share the same network architectures.",
"The classification accuracy results on the Amazon-Feature dataset for domain adaptation based on MLP are shown in Table 1.",
"Some of the observations and analysis are listed as follows.",
"(1) The performance of traditional UDA methods (e.g., TCA, GFK and BDA) is worse than Source-Only model, i.e., negative transfer learning occurs in all transfer tasks.",
"These models directly define kernel over sparse input vectors such that the kernel function cannot capture sufficient features to measure the similarity.",
"The deep transfer methods outperform all the traditional methods, suggesting that embedding domain adaptation modules into deep network can reduce domain discrepancy sig-nificantly.",
"(2) The average accuracy of CdKD is slightly 1.0% higher than other deep transfer methods (DDC, RevGrad, DAAN and SHOT) overall.",
"It verifies the positive effect of transferring the knowledge from trained source model without accessing the source data.",
"Table 2 shows the classification performance of deep UDA models based on TextCNN and BertGRU over a large dataset Amazon-Text .",
"For TextCNN extractor, we have following analysis.",
"CdKD achieves superior performance over prior methods by larger margins compared to small dataset Amazon-Feature.",
"Compared to DDC and RevGrad that obtains the domain-invariant features, CdKD can learn discriminative information from the source model by minimizing JKSD criterion.",
"SHOT assumes that the target outputs should be similar to one-hot encoding.",
"However, the one-hot encoding used in SHOT is noisy and untrusted due to the domain shifts.",
"Different from SHOT, we match the joint distributions across domains in terms of multi-view features rather than only class probabilities when adapting the target model.",
"By B -> E E -> B K -> D 70 72 74 76 78 A cc u r ac y ( % ) CdKD CdKD-g CdKD-a KD",
"going from TextCNN to extremely deep BertGRU, we attain a more in-depth understanding of feature transferability.",
"BertGRU-based models outperform TextCNN-based models significantly, which shows BERT enables learning more transferable representations for UDA.",
"Our CdKD has a slight advantage compared to other models overall under the powerful transferability of BertGRU.",
"It reveals the necessity of designing a moment matching approach to incorporate activation and gradient features into domain adaptation for reducing the losses caused by the lack of source data.",
"In the second experiment, we compare with the KD model where the knowledge in BertGRU is distilled to the TextCNN-based model.",
"We generate the optimal BertGRU as the teacher model based on the source dataset.",
"The TextCNN model uses BERT tokenizer tool to guarantee the same input space between two models.",
"We randomly specify a 0.5/0.2/0.3 split in the target dataset where we train and select TextCNN-based model based on the train split and validation split respectively.",
"The result is reported in Table 3 in terms of the test split.",
"The average accuracy of CdKD is 1.6% higher than original KD and approaches to the teacher model BertGRU.",
"Significantly, the accuracy scores of tasks D E and D K are higher than BertGRU.",
"This is attributed to distribution adaptation where extra performance is also gained from JKSD besides the guidance of the teacher model.",
"Ablation Study.",
"We conduct the ablation experiments to see the contributions of gradient information (g) and the adversarial strategy",
"(a), which are evaluated with TextCNN extractor for UDA task.",
"By ablating CdKD, we have two baselines of CdKD-g (w/o g) and CdKD-a (w/o",
"a).",
"For CdKD-g, we set the gradient of log-distribution x j log P ( x j , y j ) R d 1 to a constant, i.e., 1 d (1 , 1 , ..., 1) (cid:62) while we optimize CdKD without adversarial strategy for CdKD-a.",
"From the results in Figure 2, CdKD-g and CdKD-a perform worse 69.2 69.4 69.6 69.8 70 70.2 70.4 70.6 70.8 71 71.2 71 72 73 74 75 76 77 78 Source Model Accuracy (%) T a r g e t M od e l A cc u r ac y ( % ) KD ( =0) CdKD Figure 3: Accuracy (%) result of CdKD and KD for different source models.",
"than CdKD but still better than KD, suggesting that gradient information and the adversarial strategy both contribute to the improvements of our model.",
"The gradient information is one type of important knowledge in the source domain, but all previous methods ignore its importance for UDA.",
"Effects of Source Model Accuracy.",
"Here we study how the performance of target model are in-fluenced by the source model accuracy, which are analyzed based on B E task using TextCNN extractor.",
"We randomly obtain 9 optimal source models using different seeds over B dataset, and train CdKD and KD models based on different source models for B E task.",
"Figure 3 shows the classification accuracy of CdKD and KD by varying accuracy of source models tested over E dataset.",
"CdKD obtains similar performance under different source models, indicating that CdKD is not very sensitive to the quality of source models.",
"However, the curves of KD is unstable, i.e., the performance of KD is vulnerable to the impact of the source models, because different source models follow the different distributions.",
"Obviously, JKSD plays a crucial role in determining the effects of alleviating this distribution discrepancy among different source models.",
"Effects of Batch Size.",
"Batch size is a key parameter to optimize JKSD metric because it is required to compute kernel over a min-batch of data.",
"Figure 4 shows the classification accuracy of CdKD by varying batch size in { 64 , 128 , 256 , 512 } .",
"The experiment shows that CdKD is not sensitive to batch size when batch size is larger than 64, suggesting that CdKD don't need a very large batch size for accurate estimation of JKSD.",
"In this paper, we shed a new light on the challenges of UDA without needing source data.",
"Specifically, we provided a generic framework named CdKD to learn a classification model over a set of unlabeled target data by making use of the knowledge of the activation and gradient information in the trained source model.",
"CdKD learned the collective knowledge across different domains including domain-invariant and discriminative features by matching the joint distributions between a trained source model and a set of target data.",
"Experiments for cross-domain text classification testified that CdKD still achieves advantages for UDA task though without any source data and improves the performance of KD task when the trained teacher model doesn't match the training data.",
"This work was supported in part by National Natural Science Foundation of China under Grant 62001309, in part by the Opening Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research and in part by the Open Research Fund from Shenzhen Research Institute of Big Data (No. 2019ORF01012)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"method",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other"
] |
[
"Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on, while not generalising to different task distributions.",
"We propose to tackle this problem by generating a debiased version of a dataset, which can then be used to train a debiased, off-the-shelf model, by simply replacing its training data.",
"Our approach consists of 1) a method for training data generators to generate high-quality, label-consistent data samples; and 2) a filtering mechanism for removing data points that contribute to spurious correlations, measured in terms of z-statistics .",
"We generate debiased versions of the SNLI and MNLI datasets, 1 and we evaluate on a large suite of debiased, out-of-distribution, and adversarial test sets.",
"Results show that models trained on our debiased datasets generalise better than those trained on the original datasets in all settings.",
"On the majority of the datasets, our method outperforms or performs comparably to previous state-of-the-art debiasing strategies, and when combined with an orthogonal technique, product-of-experts, it improves further and outperforms previous best results of SNLI-hard and MNLI-hard.",
"Natural Language Processing (NLP) datasets inevitably contain biases that are unrelated to the tasks they are supposed to represent.",
"These biases are usually artifacts of the annotation processes, task framing, or design decisions (Schwartz et al., 2017; Geva et al., 2019; Liu et al., 2021).",
"Such biases often manifest as spurious correlations between simple features of the data points and their Work done while at the Allen Institute for AI.",
"labels (Gardner et al., 2021).",
"Trained models can exploit these spurious correlations to correctly predict the labels of the data points within the same distributions as those they are trained on, but fail to generalise to other distributions within the same tasks.",
"Consequently, the models risk modelling the datasets, but not the tasks (Gururangan et al., 2018; Poliak et al., 2018; McCoy et al., 2019; Schuster et al., 2019).",
"We address this issue by adjusting existing dataset distributions to mitigate the correlations between task-independent features and labels.",
"First, we train data generators that generate high quality data samples in the distribution of existing datasets (Section 2).",
"Then, we identify a set of simple features that are known to be task-independent, and use the theoretical framework (i.e., z-statistics) proposed by Gardner et al. (2021) to measure correlations between those features and the labels (Sec-tion 3.1).",
"Finally, we adjust the distribution of the generated samples by post-hoc filtering (Sec-tion 3.2) to remove the data points that contribute to high z-statistics with task-independent features, or finetuning the data generator (Section 4.1) to make such data points less likely.",
"Unlike prior model-2660 centric approaches to mitigate spurious correlations (Belinkov et al., 2019a,b; Clark et al., 2019; He et al., 2019; Karimi Mahabadi et al., 2020) that define new training objectives or model architectures, our approach has the advantage of keeping the objective and the model fixed, as we only alter the training data.",
"To evaluate our approach, we use the task of Natural Language Inference (NLI), which offers a wide range of datasets (including challenge datasets) for various domains.",
"We generate debiased SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) distributions and evaluate the generalisability of models trained on them to out-of-distribution hard evaluation sets (Gururangan et al., 2018; McCoy et al., 2019), and the adversarial attack suite for NLI proposed by Liu et al. (2020b).",
"Furthermore, we compare our method to strong debiasing strategies from the literature (Belinkov et al., 2019b; Stacey et al., 2020; Clark et al., 2019; Karimi Mahabadi et al., 2020; Utama et al., 2020; Sanh et al., 2021; Ghaddar et al., 2021).",
"Our results show that models trained on our debiased datasets generalise better than those trained on the original datasets to evaluation sets targeting hypothesis-only biases (by up to 2.8 percentage points) and syntactic biases (by up to 13.3pp), and to a suite of adversarial test sets (by up to 4.2pp on average).",
"Since our contributions are orthogonal to model-centric approaches, we show that when combined with product-of-experts (Karimi Mahabadi et al., 2020), our method yields further improvements and outperforms previous state-of-the-art results on SNLI-hard and MNLI-hard.",
"Finally, we train stronger and larger pretrained language models with our debiased datasets, and demonstrate that the performance gain by our method generalises to these larger models.",
"First, we need to train a data generator G to generate data samples automatically.",
"Our goal for the data generator is to model the true distribution as well as possible so that we can generate valid and high-quality data samples.",
"We choose GPT-2 because it is a powerful and widely used autoregressive language model, and it can be easily adapted to generate the premise, label, and hypothesis of an instance sequentially.",
"Given an NLI dataset $\mathcal{D}_0$, the training objective is to minimise the following negative log-likelihood loss of generating the premise-label-hypothesis sequence, in that order: $\mathcal{L}_{\text{MLE}} = -\sum_{i=1}^{|\mathcal{D}_0|} \log p(P^{(i)}, l^{(i)}, H^{(i)}) = -\sum_{i=1}^{|\mathcal{D}_0|} \log p(P^{(i)})\, p(l^{(i)} \mid P^{(i)})\, p(H^{(i)} \mid l^{(i)}, P^{(i)})$ (1), where $P^{(i)}$, $l^{(i)}$ and $H^{(i)}$ are the premise, label and hypothesis respectively.",
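As a concrete illustration of Eq. (1), the sketch below finetunes a causal LM on linearized premise-label-hypothesis sequences. The field markers ('premise:', 'label:', 'hypothesis:') and the small public gpt2 checkpoint are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: MLE finetuning of a causal LM on premise -> label -> hypothesis
# sequences, so that p(P) p(l|P) p(H|l,P) factorizes left to right.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def encode_example(premise, label, hypothesis):
    # The linearization order matches the factorization in Eq. (1).
    text = f"premise: {premise} label: {label} hypothesis: {hypothesis}"
    return tokenizer(text + tokenizer.eos_token, return_tensors="pt")

batch = encode_example(
    "A man is playing a guitar on stage.", "entailment",
    "A person is performing music.")
# With labels == input_ids, the causal-LM loss is the (mean) negative
# log-likelihood of the linearized sequence, i.e. L_MLE up to scaling.
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()  # one gradient step; optimizer omitted for brevity
```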
"We find that samples generated by a generator trained with only $\mathcal{L}_{\text{MLE}}$ often contain ungrammatical text or incorrect labels.",
"In this section, we introduce two techniques to improve data quality.",
"We observe poor label consistency in samples generated by a generator trained with the vanilla $\mathcal{L}_{\text{MLE}}$ objective: given a generated sample $(P, H, l)$, the label $l$ often does not correctly describe the relationship between $P$ and $H$.",
"To alleviate this issue, we apply unlikelihood training (Welleck et al., 2020) to make generating such label inconsistent instances less likely.",
"First we perturb the label to construct negative samples $(P, H, l')$ where $l' \neq l$ for each sample in the dataset.",
"Then we apply a token-level unlikelihood objective on the hypothesis tokens: $\mathcal{L}_{\text{consistency}} = -\sum_{i=1}^{|\mathcal{D}_0|} \sum_{t=1}^{|H^{(i)}|} \log\big(1 - p(H^{(i)}_t \mid l'^{(i)}, P^{(i)}, H^{(i)}_{<t})\big)$.",
"This objective decreases the probability of generating $H$ when given an incorrect label $l'$, and hence improves label consistency at generation time.",
"The two objectives are combined as $\mathcal{L} = \mathcal{L}_{\text{MLE}} + \lambda\, \mathcal{L}_{\text{consistency}}$, where $\lambda$ is a hyperparameter that balances the two objectives.",
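A minimal sketch of the consistency-unlikelihood term, assuming one already has per-token log-probabilities from the LM run on the label-perturbed sequence and a mask marking the hypothesis tokens (both tensor names are illustrative):

```python
# Sketch: token-level unlikelihood on hypothesis tokens conditioned on a
# perturbed label l', i.e. -sum_t log(1 - p(H_t | l', P, H_<t)).
import torch

def consistency_unlikelihood(token_logprobs, hypothesis_mask):
    """token_logprobs: (T,) log p(x_t | x_<t) under the perturbed label;
    hypothesis_mask: (T,) with 1.0 on hypothesis tokens, 0.0 elsewhere."""
    p = token_logprobs.exp().clamp(max=1 - 1e-6)  # avoid log(0)
    return -(hypothesis_mask * torch.log1p(-p)).sum()

# Example: three hypothesis tokens, two of them likely under the wrong label.
lp = torch.log(torch.tensor([0.9, 0.8, 0.2, 0.95, 0.5]))
mask = torch.tensor([0.0, 0.0, 1.0, 1.0, 1.0])
print(consistency_unlikelihood(lp, mask))
```

The full generator loss would then be the MLE term plus this term scaled by the balancing hyperparameter, as above.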
"We can randomly sample from the trained generator to obtain a large amount of synthetic data $\mathcal{D}_G$.",
"We add a consistency filtering step (Lewis et al., 2021; Bartolo et al., 2021) to further improve the quality of the generated dataset.",
"We train an NLI model $M$ on the original dataset $\mathcal{D}_0$ to filter out samples in which $M$ has low confidence: $\hat{\mathcal{D}}_G = \{ (P, H, l) \in \mathcal{D}_G \mid p_M(l \mid P, H) > \tau \}$, where $\tau$ is a confidence threshold.",
"We found that the filtered-out data samples generally had ungrammatical text or incorrect labels.",
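A sketch of this consistency filtering, with an off-the-shelf NLI classifier standing in for the model $M$ trained on $\mathcal{D}_0$; the checkpoint name, threshold value, and pipeline call details are assumptions:

```python
# Sketch: keep a generated (P, H, l) triple only if the filter model
# assigns the intended label a probability above the threshold tau.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def keep(premise, hypothesis, label, tau=0.9):
    scores = nli({"text": premise, "text_pair": hypothesis}, top_k=None)
    probs = {s["label"].lower(): s["score"] for s in scores}
    return probs.get(label.lower(), 0.0) > tau

print(keep("A dog runs in the park.", "An animal is outside.", "entailment"))
```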
"We now define a method to reject samples that contribute to the high spurious correlations between task-independent features of the samples and their labels.",
"Our approach is based on the theoretical framework proposed by Gardner et al. (2021) to measure these correlations, known as z-statistics.",
"Our filtering method, called z-filtering (Section 3.2), will serve as the basis to construct debiased datasets in Section 4.",
"As a first step towards addressing spurious correlations, we need to be able to quantify them.",
"We start by selecting a set of task-independent features: features that give away the labels and allow models to exploit them without actually solving the task.",
"For NLI, we choose the following features: 1) unigrams and bigrams; 2) hypothesis length and hypothesis-premise length ratio; 3) lexical overlap between hypothesis and premise; 4) the predictions of a BERT-base (Devlin et al., 2019) hypothesis-only model.",
"These features capture various biases identified in prior work, including contradiction word biases, lexical overlap bias (McCoy et al., 2019), and hypothesis-only bias (Gururangan et al., 2018; Poliak et al., 2018); see Appendix B for detailed descriptions of the features.",
"Note that our method does not rely on the specific choice of features, and one can easily add alternative features that should not be correlated with the labels.",
"Following Gardner et al. (2021), we assume there should be no correlation between each of these features and the class labels.",
"More formally, for any feature $x$ from our feature set $X$, $p(l \mid x)$ should be uniform over the class labels $l$.",
"We define $\hat{p}(l \mid x) = \frac{1}{n} \sum_{j=1}^{n} l_j$ to be the empirical expectation of $p(l \mid x)$ over the $n$ samples containing $x$.",
"Then we compute the standardised version of the z-statistic to quantify its deviation from the uniform distribution for each feature $x$ and label $l$: $z(x, l) = \frac{\hat{p}(l \mid x) - p_0}{\sqrt{p_0 (1 - p_0)/n}}$ (2), where $p_0$ is the probability under the uniform distribution ($p_0 = 1/3$ in NLI tasks with three labels).",
"These z-statistic scores can be used to identify the most biased features: for each label $l$, we select the $k$ features with the highest z-statistics to define the biased feature set $B_D(l)$.",
"Table 12 shows examples of these biased features on SNLI.",
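The z-statistic of Eq. (2) and the top-$k$ biased-feature selection reduce to simple counting; the sketch below assumes examples arrive as (set-of-features, label) pairs:

```python
# Sketch: z-statistics per (feature, label) pair, and the k most biased
# features for each label, following Eq. (2).
import math
from collections import Counter, defaultdict

def z_statistic(p_hat, n, p0=1 / 3.0):
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

def biased_features(examples, k=100, p0=1 / 3.0):
    """examples: iterable of (feature_set, label) pairs."""
    counts = defaultdict(Counter)  # feature -> per-label counts
    totals = Counter()             # feature -> n (samples containing it)
    for feats, label in examples:
        for f in feats:
            counts[f][label] += 1
            totals[f] += 1
    top = defaultdict(list)
    for f, n in totals.items():
        for label, c in counts[f].items():
            top[label].append((z_statistic(c / n, n, p0), f))
    return {l: sorted(v, reverse=True)[:k] for l, v in top.items()}

data = [({"not", "len:short"}, "contradiction"), ({"not"}, "contradiction"),
        ({"outside"}, "entailment"), ({"not"}, "neutral")]
print(biased_features(data, k=2))
```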
"To mitigate the biases in the dataset, we propose z-filtering, an algorithm that iteratively selects and filters instances from a dataset $\mathcal{D}'$ to build a debiased dataset $Z$.",
"At each step, we find the set of biased features $B_Z(l)$ on the partially constructed $Z$.",
"We then select a new batch of samples from $\mathcal{D}'$ and filter out the samples that contain these biased features.",
"This process is applied iteratively until it has exhausted all samples from $\mathcal{D}'$.",
"It removes the samples that contribute to the spurious correlations in $\mathcal{D}'$, thus finding a debiased subset $Z(\mathcal{D}') \subseteq \mathcal{D}'$.",
"We denote the removed samples as $\bar{Z}(\mathcal{D}')$.",
"The full z-filtering algorithm is illustrated in Algorithm 1.",
"Optionally, one can initialise $Z$ with a seed dataset $\mathcal{D}_{\text{seed}}$.",
"In this case, the samples from $\mathcal{D}'$ are only added to $Z$ when they do not contain the biased features of $\mathcal{D}_{\text{seed}}$.",
"Thus it can be seen as a data-augmentation technique targeted to debias a given dataset.",
"We refer to it as conditional z-filtering and denote the produced debiased dataset as $Z(\mathcal{D}' \mid \mathcal{D}_{\text{seed}})$.",
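A sketch of the iterative filtering loop; the batch size, z cutoff, and the reuse of the biased_features helper from the previous sketch are assumptions, with the paper's Algorithm 1 fixing the actual details:

```python
# Sketch of z-filtering: grow Z batch by batch, recomputing the currently
# most biased features on the partial Z and rejecting new samples that
# contain any of them. Seeding Z with D_seed gives the conditional variant.
def z_filter(stream, featurize, seed=(), batch_size=10_000, k=100, z_cut=3.0):
    Z = list(seed)
    biased, batch = set(), []
    for example in stream:              # e.g. example = (premise, hyp, label)
        batch.append(example)
        if len(batch) < batch_size:
            continue
        Z += [ex for ex in batch if not (featurize(ex) & biased)]
        # Re-estimate the biased feature set on the partially built Z,
        # thresholding the z-statistics from the previous sketch.
        stats = biased_features(((featurize(e), e[-1]) for e in Z), k=k)
        biased = {f for lst in stats.values() for z, f in lst if z > z_cut}
        batch = []
    return Z + [ex for ex in batch if not (featurize(ex) & biased)]
```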
"We use z-filtering in two ways: 1) to further finetune $G$ (the one trained in Section 2.2.1 with consistency unlikelihood) with an objective that down-weights samples that should be rejected (Section 4.1); 2) to post-hoc filter the generated samples to obtain debiased datasets (Section 4.2).",
"The generator $G$ can learn to exploit task-independent features during its finetuning stage (Section 2), causing the synthetic data $\mathcal{D}_G$ to contain many spurious correlations.",
"While it is tempting to apply z-filtering to remove these spurious correlations from $\mathcal{D}_G$, we find that this leads to the removal of the majority of the generated data.",
"For example, when the generator is finetuned on SNLI, z-filtering removes around 85% of $\mathcal{D}_{G_{\text{SNLI}}}$.",
"This leads to a very inefficient data generation process for mitigating the spurious correlations.",
"(It is also strong confirmation that these biases are problematic, as the generative model easily finds them and relies on them during data generation; conducting naive data augmentation with $\mathcal{D}_{G_{\text{SNLI}}}$ would strengthen the spurious correlations.)",
"To alleviate this issue, we can incorporate the debiasing objectives into the training of the generator, so that the samples produced by the generator",
"are more likely to be accepted by the z-filtering process.",
"More specifically, we can encourage the model to generate $Z(\mathcal{D}_0)$, while discouraging it from generating $\bar{Z}(\mathcal{D}_0)$.",
"For the latter part, we again apply an unlikelihood training objective $\mathcal{L}_{\text{UL}}$ to unlearn $\bar{Z}(\mathcal{D}_0)$.",
"Hence, the overall debiasing training objective is $\mathcal{L}_{\text{debias}} = \mathcal{L}_{\text{MLE}}(Z(\mathcal{D}_0)) + \alpha\, \mathcal{L}_{\text{UL}}(\bar{Z}(\mathcal{D}_0))$, where $\alpha$ is a hyperparameter.",
"A naive use of an unlikelihood objective on all tokens gives the model mixed signals for good tokens and leads to ungrammatical, degenerate outputs.",
"To avoid this degeneracy, we apply the unlikelihood loss only to tokens that contribute to biased features.",
"Concretely, for each token $I_t$ of an instance $I \in \bar{Z}(\mathcal{D}_0)$, we define a mask $m_t$ as $m_t = \begin{cases} 0, & \text{if } I_t \text{ contributes to } B_Z(l_I) \\ 1, & \text{otherwise} \end{cases}$",
"where $B_Z(l_I)$ represents the biased features corresponding to the label of $I$.",
"For biases towards unigram and bigram features (as defined in Section 3.1), we consider only the corresponding tokens to be relevant (i.e., $m_t = 0$ if $I_t$ is part of the unigram or the bigram).",
"For biases towards other features (e.g. the length of the hypothesis), we consider all the tokens of the hypothesis to be relevant.",
"The unlikelihood training objective is defined as follows: $\mathcal{L}_{\text{UL}}(\bar{Z}(\mathcal{D}_0)) = \sum_{I' \in \bar{Z}(\mathcal{D}_0)} \mathcal{L}_{\text{UL}}(I')$, with $\mathcal{L}_{\text{UL}}(I') = -\sum_{t=1}^{|I'|} \log\big( m_t\, p(I'_t \mid I'_{<t}) + (1 - m_t)(1 - p(I'_t \mid I'_{<t})) \big)$.",
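A sketch of this masked objective, assuming per-token log-probabilities and the 0/1 mask $m_t$ defined above have already been computed:

```python
# Sketch: tokens contributing to biased features (m_t = 0) are pushed down
# via log(1 - p); all other tokens (m_t = 1) keep the likelihood term log p.
import torch

def masked_unlikelihood(token_logprobs, m):
    """token_logprobs: (T,) log p(I_t | I_<t); m: (T,) mask in {0., 1.}."""
    p = token_logprobs.exp().clamp(1e-6, 1 - 1e-6)
    per_token = m * torch.log(p) + (1 - m) * torch.log1p(-p)
    return -per_token.sum()

lp = torch.log(torch.tensor([0.7, 0.9, 0.6]))
print(masked_unlikelihood(lp, torch.tensor([1.0, 0.0, 1.0])))
```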
"We further finetune $G$ with $\mathcal{L}_{\text{debias}}$ to obtain a new generator that is trained to generate more unbiased data samples.",
"We then randomly sample from this generator and conduct data filtering (Section 2.2.2) to obtain a large set of high-quality debiased data samples $\mathcal{D}_G$.",
"Given the original dataset $\mathcal{D}_0$ and the synthetic dataset $\mathcal{D}_G$, our goal is to produce a large-scale unbiased dataset $\mathcal{D}$.",
"There are various ways to do this, given that we can either apply conditional z-filtering, or simply z-filter both $\mathcal{D}_0$ and $\mathcal{D}_G$ and merge them.",
"We explore the following options (a code sketch follows the list):",
"1. Z-Augmentation (Z-Aug), $Z(\mathcal{D}_G \mid \mathcal{D}_0)$: we keep the original dataset as is, and augment it by conducting conditional z-filtering on $\mathcal{D}_G$ using $\mathcal{D}_0$ as the seed dataset.",
"2. Parallel z-filter (Par-Z), $Z(\mathcal{D}_0) \cup Z(\mathcal{D}_G)$: we conduct z-filtering on $\mathcal{D}_0$ and $\mathcal{D}_G$ separately, and then merge them.",
"3. Sequential z-filter (Seq-Z), $Z(\mathcal{D}_G \mid Z(\mathcal{D}_0))$: we first conduct z-filtering on $\mathcal{D}_0$, then conduct conditional z-filtering on $\mathcal{D}_G$ with $Z(\mathcal{D}_0)$ as the seed dataset.",
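In code, the three options differ only in how the two filters are composed; a sketch in terms of z_filter and a seeded z_filter_cond as in the earlier sketches:

```python
# Sketch of the three dataset-construction options. z_filter_cond(D, seed)
# initialises Z with the seed and only admits samples from D that avoid
# the seed's biased features; by construction its output includes the seed.
def build_debiased(option, D0, DG, z_filter, z_filter_cond):
    if option == "Z-Aug":   # Z(DG | D0): keep D0, augment with filtered DG
        return z_filter_cond(DG, seed=D0)
    if option == "Par-Z":   # Z(D0) + Z(DG): filter separately, then merge
        return z_filter(D0) + z_filter(DG)
    if option == "Seq-Z":   # Z(DG | Z(D0)): filter D0 first, then condition
        return z_filter_cond(DG, seed=z_filter(D0))
    raise ValueError(option)
```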
"Source Datasets We select the two most widely used NLI datasets, SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018), as our original datasets.",
"Prior work (Gururangan et al., 2018; Poliak et al., 2018; McCoy et al., 2019) found various annotation artifacts in them, hence they serve as good use cases for constructing debiased datasets.",
"Evaluation Datasets For the hypothesis-only bias, we use the challenge sets SNLI-hard (Gururangan et al., 2018) and MNLI-hard (Williams et al., 2018), which were produced by filtering the test set with a hypothesis-only model (Section 5.2).",
"For syntactic biases, we follow previous work and use HANS (McCoy et al., 2019) for evaluation (Section 5.3).",
"In addition, we evaluate on the adversarial test benchmark introduced by Liu et al. (2020b) (Section 5.4).",
"This benchmark covers a wide range of adversarial attacks, which will give a more complete picture of what spurious correlations the debiasing methods tackle.",
"Generating Debiased Datasets We conduct debiased data generation for SNLI and MNLI separately.",
"For SNLI, we use the proposed method described in Section 4.1 to train a generator $G_{\text{SNLI}}$.",
"Then we randomly sample a large number of instances from the generator to construct $\mathcal{D}_{G_{\text{SNLI}}}$.",
"The samples are filtered with a strong NLI model $M$ trained on SNLI to obtain $\hat{\mathcal{D}}_{G_{\text{SNLI}}}$.",
"Finally, the different options (Section 4.2) can be adopted to merge the synthetic data with the original data $\mathcal{D}_{\text{SNLI}}$ to construct debiased versions of SNLI.",
"[Table 1: Data size of the constructed debiased datasets. SNLI: Original 549,367; Z-Aug 1,142,475; Par-Z 933,085; Seq-Z 927,906. MNLI: Original 382,702; Z-Aug 744,326; Par-Z 740,811; Seq-Z 744,200.]",
"The same procedure is used to produce debiased datasets for MNLI, by simply replacing the original dataset with MNLI.",
"We choose GPT-2 large and RoBERTa-large as the pretrained language models for $G$ and $M$ respectively.",
"The sizes of the constructed debiased datasets are listed in Table 1.",
"NLI Model Training Since our method directly debiases the training data itself, we keep the model and training objective fixed and only replace the training data with our generated debiased datasets.",
"For comparability with previous work (Karimi Mahabadi et al., 2020; Utama et al., 2020; Sanh et al., 2021), we train BERT-base (Devlin et al., 2019) on our debiased datasets.",
"The NLI models are trained with ordinary cross-entropy classification loss, and the training hyperparameters are listed in Appendix A. We run our experiments five times and report the average and standard deviation of the scores.",
"We also conduct statistical significance testing using a two-tailed t-test at the 95% confidence level.",
"State-of-the-art Debiasing Models We compare our method with the following three state-of-the-art debiasing models on each of our evaluation datasets.",
"Product-of-Experts (He et al., 2019; Karimi Mahabadi et al., 2020) ensembles a bias-only model's prediction $b_i$ with the main model's $p_i$ using $p'_i = \mathrm{softmax}(\log p_i + \log b_i)$.",
"This ensembling enforces that the main model focuses on the samples that the bias-only model does not predict well.",
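A minimal sketch of this PoE combination; the tensor shapes and the frozen bias model are illustrative assumptions:

```python
# Sketch: Product-of-Experts training signal, p' = softmax(log p + log b).
import torch
import torch.nn.functional as F

def poe_logits(main_logits, bias_logits):
    # log p + log b, each normalized; the softmax of the sum is the ensemble.
    return F.log_softmax(main_logits, -1) + F.log_softmax(bias_logits, -1)

main = torch.randn(4, 3, requires_grad=True)  # main model logits
bias = torch.randn(4, 3)                      # bias-only model (kept frozen)
loss = F.cross_entropy(poe_logits(main, bias), torch.tensor([0, 1, 2, 0]))
loss.backward()  # only the main model receives gradients here
```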
"Learned-Mixin (Clark et al., 2019) is a variant of PoE that introduces a learnable weight for the bias-only model's prediction.",
"Regularized-conf (Utama et al., 2020) uses confidence regularisation to retain the in-distribution performance while conducting model debiasing.",
"(On one A100 GPU, training the generator takes around 24 hours and generating the samples takes roughly 35 hours for each dataset.)",
"(The exception is our PoE experiments, which use a single run, as hyperparameter tuning for PoE is costlier.)",
"Combining PoE with Our Debiased Datasets Our approach changes the training data distribution instead of the model's training objective, and hence is orthogonal to prior work method-wise.",
"We also report the results of combining PoE with our proposed method, simply by training a PoE model on our debiased datasets.",
"We adapt the PoE implementation of Karimi Mahabadi et al. (2020) (https://github.com/rabeehk/robust-nli), and we follow their approach to conduct hyperparameter tuning for PoE.",
"The hyperparameters of the PoE models are reported in Table 10 of Appendix A. Gururangan et al. (2018) found that, on SNLI and MNLI, a model that only has access to the hypothesis can perform surprisingly well, which indicates that the datasets contain hypothesis-only bias.",
"To alleviate this problem, SNLI-hard and MNLI-hard (Gururangan et al., 2018) subsets were constructed by filtering the test set with a hypothesis-only model and only accepting those that the hypothesis-only model predicts incorrectly.",
"We examine whether our method successfully mitigates the hypothesis-only bias in NLI, by evaluating the models trained with our debiased datasets on SNLI-hard and MNLI-hard.",
"Results on SNLI-hard Table 2 shows the results of our method on SNLI and SNLI-hard.",
"The results show that, compared to training on SNLI, training with our debiased datasets significantly improves the performance on SNLI-hard.",
"The debiased dataset produced by Seq-Z achieves a 2.48% gain in accuracy on SNLI-hard compared to the SNLI baseline, whereas Z-Aug improves both SNLI and SNLI-hard accuracy.",
"Results on MNLI-hard Table 3 shows the results of our method on MNLI-matched (MNLI-m) and MNLI-mismatched (MNLI-mm), and their corresponding hard sets.",
"We use the development sets of MNLI-hard reconstructed by Karimi Mahabadi et al. (2020) to develop our methods.",
"To comply with the submission limit of the MNLI leaderboard system, we select the best checkpoint among the five runs using the development set, and report its test set performance in Table 3.",
"The results show that BERT-base models trained on our debiased MNLI datasets outperform the models trained on the original MNLI by a large margin on the MNLI-hard sets.",
"In particular, the Z-Aug version of the debiased datasets gives a 2.72% and 2.76% gain in accuracy on MNLI-m hard and MNLI-mm hard respectively, and outperforms the previous state-of-the-art on MNLI-m, MNLI-mm, and MNLI-mm hard.",
"Combining PoE with Our Debiased Datasets We investigate the combination of our method and PoE, to see if the two orthogonal techniques can work together to achieve better performance.",
"Since hyperparameter tuning of PoE is costly, we choose the best version of the debiased dataset (Seq-Z for SNLI and Z-Aug for MNLI) using the development set accuracy, and train PoE with it.",
"The results are listed in the last rows of Table 2 and Table 3.",
"We find that, on both SNLI and MNLI, combining PoE with our debiased dataset yields further improvements on SNLI-hard, MNLI-m hard, and MNLI-mm hard, outperforming previous state-of-the-art results on all three datasets.",
"McCoy et al. (2019) show that NLI models trained on MNLI can exploit syntactic heuristics present in the data, such as lexical overlap, subsequence, and constituent features.",
"They introduce HANS, an evaluation dataset that contains examples where the syntactic heuristics fail.",
"To test whether our method mitigates the syntactic biases in NLI, we evaluate models trained on our debiased datasets on HANS.",
"If our debiased dataset contains less syntactic bias than the original dataset, the model would not exploit the syntactic heuristics and thus perform better on HANS.",
"[Table 3: MNLI-m and MNLI-mm dev/test accuracies, with the corresponding hard sets, for prior debiasing strategies trained on MNLI (PoE (Karimi Mahabadi et al., 2020): 84.58/84.11 MNLI-m, 84.85/83.47 MNLI-mm, 78.02/76.81 MNLI-m hard, 79.23/76.83 MNLI-mm hard; Learned-Mixin (Clark et al., 2019); Regularized-conf (Utama et al., 2020); PoE+CE (Sanh et al., 2021)) and for the BERT-base w/ D_MNLI baseline (83.87/84.11 MNLI-m, 84.22/83.51 MNLI-mm, ...).]",
"Due to the high variance of the scores on HANS, we run five times for each experiment (except PoE), and report the average and standard deviation of the scores.",
"Results on HANS Table 4 shows the results on HANS.",
"The results are categorised into three sections according to the training data: SNLI, MNLI, and our debiased datasets.",
"The results of models trained on our debiased MNLI datasets show strong improvements: compared to the original MNLI, our debiased MNLI datasets obtain up to a 13.33% gain in HANS accuracy.",
"Our Seq-Z variant achieves 67.69% accuracy, which is comparable with the strong PoE baselines (Karimi Mahabadi et al., 2020; Sanh et al., 2021).",
"Our method also further improves PoE models: the BERT-base PoE model trained on our Z-Aug MNLI outperforms the one trained on MNLI by 5.3%.",
"Additionally, training RoBERTa-large (Liu et al., 2019) on our debiased dataset brings a 2.9-point accuracy gain on HANS, indicating that the performance gain from our debiased dataset can generalise to larger and stronger models (more on this in Section 5.5).",
"Liu et al. (2020b) find that debiasing methods often tie to one particular known bias and it is nontrivial to mitigate multiple NLI biases at the same time.",
"They introduce a suite of test datasets for NLI models that targets various aspects of robustness, including partial input heuristics (PI), logical inference ability (LI), and stress tests (ST).",
"[Table 5 columns: PI-CD, PI-SP, IS-SD, IS-CS, LI-LI, LI-TS, ST, Avg.]",
"Several data augmentation strategies were investigated by Liu et al. (2020b): 1) text swap: swapping the premise and hypothesis in the original data; 2) word substitution: replacing words in the hypothesis with synonyms or generations from a masked language model; 3) paraphrase: using back translation to paraphrase the hypothesis.",
"We compare our approach with their data-augmentation heuristics, and the results are shown in Table 5.",
"Compared with the MNLI baseline, our debiased MNLI datasets lead to better performance across all categories, which indicates that our method successfully mitigates various distinct biases simultaneously.",
"All three variants of our debiased datasets outperform the data augmentation heuristics by Liu et al. (2021), which demonstrates the efficacy of our method when compared against manually designed heuristics.",
"Since our method mitigates the spurious correlations in the dataset, not the model, our approach is model-agnostic and has the potential to benefit larger future models.",
"To test this hypothesis, we train stronger and more modern models than BERT with our debiased datasets, and see if it can still improve the performance.",
"More specifically, we choose RoBERTa-base, RoBERTa-large (Liu et al., 2019), and ALBERT-xxlarge (Lan et al., 2020), and train them with Seq-Z SNLI and Z-Aug MNLI.",
"(2021); Bowman (2021); 2) training on our debiased datasets can still improve the performance of these models, yielding average gains of 2.30%, 1.23%, and 1.13% for RoBERTa-base, RoBERTa-large, and ALBERT-xxlarge respectively.",
"This indicates that our method generalises to larger pretrained language models and could potentially enhance future models.",
"Spurious Correlations in Datasets The issue of spurious correlations in datasets between labels and simple input features has recently received significant attention (Gururangan et al., 2018; Poliak et al., 2018; Belinkov et al., 2019a; Karimi Mahabadi et al., 2020).",
"It has been shown that this issue is often inherent in the data annotation process, caused by biases in the framing of the task (Schwartz et al., 2017), noisy annotations (Chen et al., 2016), or personal (Geva et al., 2019) or group-level (Liu et al., 2021) annotator biases.",
"Gardner et al. (2021) provide a theoretical framework for analyzing spurious correlations, which we use to define our filtering mechanism in Section 3.2.",
"Debiasing NLI Models Much prior work follows a model-centric approach towards mitigating biases in NLI models: they propose novel model architectures or training objectives to ensure that the models do not exploit the shortcuts presented by the dataset biases.",
"At the representation level, Belinkov et al. (2019a,b) introduce an adversarial architecture to debias hypothesis representations to tackle hypothesis-only bias (Gururangan et al., 2018), and Stacey et al. (2020) strengthen the debiasing by using multiple adversarial classifiers.",
"Zhou and Bansal (2020) use HEX projection to project the representation to the space orthogonal to the biased features to debias the model.",
"At the model level, Clark et al. (2019); He et al. (2019); Karimi Mahabadi et al. (2020) propose methods based on Product-of-Experts (PoE) (Hinton, 2002) for mitigating biases by ensembling a bias-only model with a main model.",
"Utama et al. (2020) propose the use of confidence regularization to improve out-of-distribution performance while retaining in-distribution accuracy.",
"Debiasing NLI Datasets Ross et al. (2021) introduce TAILOR, a semantically-controlled perturbation method for data augmentation based on a small number of manually defined perturbation strategies.",
"Bras et al. (2020) propose AFLite, a dataset filtering method that learns feature representations with a model and conducts adversarial filtering based on model predictions.",
"Unlike these approaches, our method requires no manually-written perturbation heuristics and is model-agnostic, hence it is more generally applicable.",
"Generative Data Augmentation Several works investigate generative data augmentation techniques to improve model robustness in other areas.",
"Yang et al. (2020) conduct generative data augmentation for commonsense reasoning and show that it can improve out-of-domain generalisation.",
"Lee et al. (2021) train a generator to generate new claims and evidence for debiasing fact verification datasets like FEVER (Thorne et al., 2018).",
"Schick and Schütze (2021) exploit large pretrained language models to generate semantic textual similarity datasets.",
"Bartolo et al. (2021) improve the robustness of question answering models by generating adversarial datasets.",
"To address the issue of spurious correlations between task-independent features and labels in NLI datasets, we propose methods to generate label-consistent data and then filter out instances from existing datasets that contribute to those spurious correlations, thereby generating debiased datasets.",
"Models trained on our debiased versions of the SNLI and MNLI datasets generalise better than the equivalent model trained on the original datasets to a large suite of test sets focusing on various kinds of known biases.",
"Future work in this direction includes investigating whether our techniques are applicable to tasks beyond NLI.",
"The authors would like to thank Max Bartolo, Alexis Ross, Doug Downey, Jesse Dodge, Pasquale Minervini, and Sebastian Riedel for their helpful discussion and feedback."
] | [
"abstain",
"objective",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"other"
] |
[
"Sequence modeling has demonstrated state-of-the-art performance on natural language and document understanding tasks.",
"However, it is challenging to correctly serialize tokens in form-like documents in practice due to their variety of layout patterns.",
"We propose FormNet, a structure-aware sequence model to mitigate the suboptimal serialization of forms.",
"First, we design Rich Attention that leverages the spatial relationship between tokens in a form for more precise attention score calculation.",
"Second, we construct Super-Tokens for each word by embedding representations from their neighboring tokens through graph convolutions.",
"FormNet therefore explicitly recovers local syntactic information that may have been lost during serialization.",
"In experiments, FormNet outperforms existing methods with a more compact model size and less pretraining data, establishing new state-of-the-art performance on CORD, FUNSD and Payment benchmarks.",
"Form-like document understanding is a surging research topic because of its practical applications in automating the process of extracting and organizing valuable text data sources such as marketing documents, advertisements and receipts.",
"Typical documents are represented using natural languages; understanding articles or web content (Antonacopoulos et al., 2009; Luong et al., 2012; Soto and Yoo, 2019) has been studied extensively.",
"However, form-like documents often have more complex layouts that contain structured objects, such as tables and columns.",
"Therefore, form documents have unique challenges compared to natural language documents stemming from their structural characteristics, and have been largely under-explored.",
"In this work, we study critical information extraction from form documents, which is the fundamental subtask of form document understanding.",
"[Figure 1: An illustration of the form document information extraction task.]",
"Following the success of sequence modeling in natural language understanding (NLU), a natural approach to tackle this problem is to first serialize the form documents and then apply state-of-the-art sequence models to them.",
"For example, Palm et al. (2017) use Seq2seq (Sutskever et al., 2014) with RNN, and Hwang et al. (2019) use transformers (Vaswani et al., 2017).",
"However, interwoven columns, tables, and text blocks make serialization difficult, substantially limiting the performance of a strict serialization approach.",
"To model the structural information present in documents, Katti et al. (2018); Zhao et al. (2019); Denk and Reisswig (2019) treat the documents as 2D image inputs and directly apply convolutional networks on them to preserve the spatial context during learning and inference.",
"However, the performance is limited by the resolution of the 2D input grids.",
"Another approach is a two-step pipeline (Hirano et al., 2007) that leverages computer vision algorithms to first infer the layout structures of forms and then perform sequence information extraction.",
"The methods are mostly demonstrated on plain text articles or documents (Yang et al., 2017; Soto and Yoo, 2019) but not on highly entangled form documents (Davis et al., 2019; Zhang et al., 2019).",
"In this work, we propose FormNet, a structure-aware sequence model to mitigate the suboptimal serialization of forms by bridging the gap between plain sequence models and grid-like convolutional models.",
"[Figure 2: A walk-through example of the proposed Rich Attention and Super-Tokens of FormNet.]",
"Specifically, we first design Rich Attention, which leverages the spatial relationships between tokens in a form to calculate a more structurally meaningful attention score, and apply it in a recent transformer architecture for long documents (Ainslie et al., 2020).",
"Second, we construct Super-Tokens for each word in a form by embedding representations from their neighboring tokens through graph convolutions.",
"The graph construction process leverages strong inductive biases about how tokens are related to one another spatially in forms.",
"Essentially, given a form document, FormNet builds contextualized Super-Tokens before serialization errors can be propagated.",
"A transformer model then takes these Super-Tokens as input to perform sequential entity tagging and extraction.",
"In our experiments, FormNet outperforms existing methods while using (1) smaller model sizes and (2) less pre-training data while (3) avoiding the need for vision features.",
"In particular, FormNet achieves new best F1 scores on CORD and FUNSD (97.28% and 84.69%, respectively) while using a model 64% of the size and 7.1x less pre-training data than the most recent DocFormer (Appalaraju et al., 2021).",
"Document information extraction was first studied with handcrafted rule-based models",
"(Lebourgeois et al., 1992; O'Gorman, 1993; Ha et al., 1995; Simon et al., 1997).",
"Later, Marinai et al. (2005); Shilman et al. (2005); Wei et al. (2013); Chiticariu et al. (2013); Schuster et al. (2013) use learning-based approaches with engineered features.",
"These methods encode low-level raw pixels (Marinai et al., 2005) or assume form templates are known a priori (Chiticariu et al., 2013; Schuster et al., 2013), which limits their generalization to documents with specific layout structures.",
"In addition to models with limited or no learning capabilities, neural models have also been studied.",
"Palm et al. (2017); Aggarwal et al. (2020) use an RNN for document information extraction, while Katti et al. (2018); Zhao et al. (2019); Denk and Reisswig (2019) investigate convolutional models.",
"There are also self-attention networks (transformers) for document information extraction, motivated by their success in conventional NLU tasks.",
"Majumder et al. (2020) extend BERT to representation learning for form documents.",
"Garncarek et al. (2020) modify the attention mechanism in RoBERTa (Liu et al., 2019b).",
"Xu et al. (2020, 2021); Powalski et al. (2021); Appalaraju et al. (2021) are multimodal models that combine BERT-like architectures (Devlin et al., 2019) and advanced computer vision models to extract visual content in images.",
"Similarly, SPADE (Hwang et al., 2021) is a graph decoder built upon the transformer models for better structure prediction compared to simple BIO tagging.",
"[Figure 3: System overview of the proposed FormNet for form document key information extraction.]",
"The proposed FormNet is orthogonal to multimodal transformers and SPADE.",
"Compared with multimodal models, FormNet focuses on modeling relations between words through graph convolutional learning as well as Rich Attention without using any visual modality; compared with SPADE, FormNet uses a graph encoder to encode inductive biases in form input.",
"A straightforward extension would be to combine FormNet with either layout transformers or SPADE for capturing visual cues or better decoding, which we leave for future work.",
"Graph learning with sequence models has also been studied.",
"On top of the encoded information through graph learning, Qian et al. (2019); Liu et al. (2019a); Yu et al. (2020) use RNN and CRF while we study Rich Attention in FormNet for decoding.",
"Peng et al. (2017); Song et al. (2018) do not study document information extraction.",
"Problem Formulation.",
"Given serialized words of a form document, we formulate the problem as sequential tagging for tokenized words by predicting the corresponding key entity classes for each token.",
"Specifically, we use the BIOES scheme { Begin, Inside, Outside, End, Single } (Ratinov and Roth, 2009) to mark the spans of entities in token sequences and then apply the Viterbi algorithm.",
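A sketch of BIOES-constrained Viterbi decoding for a single entity class; the transition table encodes the standard scheme, and the scores are random placeholders:

```python
# Sketch: Viterbi decoding over per-token BIOES scores with the standard
# legality constraints (B -> I/E, I -> I/E, and O/E/S -> B/O/S).
import numpy as np

TAGS = ["B", "I", "O", "E", "S"]
ALLOWED = {"B": {"I", "E"}, "I": {"I", "E"},
           "O": {"B", "O", "S"}, "E": {"B", "O", "S"}, "S": {"B", "O", "S"}}

def viterbi_bioes(logits):
    """logits: (T, 5) per-token tag scores in log space."""
    score = {t: (logits[0][i] if t in "BOS" else -np.inf)
             for i, t in enumerate(TAGS)}        # sequences start on B/O/S
    back = []
    for step in range(1, len(logits)):
        nxt, ptr = {}, {}
        for i, t in enumerate(TAGS):
            prev = max((p for p in TAGS if t in ALLOWED[p]), key=score.get)
            nxt[t], ptr[t] = score[prev] + logits[step][i], prev
        score = nxt
        back.append(ptr)
    tag = max("OES", key=score.get)              # sequences end on O/E/S
    path = [tag]
    for ptr in reversed(back):
        tag = ptr[tag]
        path.append(tag)
    return path[::-1]

print(viterbi_bioes(np.log(np.random.dirichlet(np.ones(5), size=6))))
```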
"Proposed Approach.",
"By treating the problem as a sequential tagging task after serialization, we can adopt any sequence model.",
"To handle potentially long documents (e.g. multi-page documents), we adopt the long-document transformer ETC (Section 3.1). Note that different Optical Character Recognition (OCR) engines implement different heuristics.",
"In practice, it is common to see an entity sequence cross multiple spans of a form document, demonstrating the difficulty of recovering from serialization errors.",
"As illustrated in Figure 2(b), '9.1' is serialized next to 'tip',",
"while 'ping' and 'masked', which belong to the same entity as 'tip', are distant from it under the imperfect serialization.",
"Our remedy is to encode the original 2D structural patterns of forms in addition to positions within the serialized sentences.",
"We propose two novel components to enhance ETC: Rich Attention and Super-Tokens (Figure 2).",
"Rich Attention captures not only the semantic relationship but also the spatial distance between every pair of tokens in ETC's attention component.",
"Super-tokens are constructed by graph convolutional networks before being fed into ETC.",
"They model local relationships between pairs of tokens that might not be visible to each other or correctly inferred in an ETC model after suboptimal serialization.",
"Figure 3 shows the overall system pipeline.",
"We discuss the details of ETC in Sec. 3.1, Rich Attention in Sec. 3.2, and Super-Token in Sec. 3.3.",
"Transformers (Vaswani et al., 2017) have demonstrated state-of-the-art performance on sequence modeling compared with RNNs.",
"Extended Transformer Construction (ETC; Ainslie et al., 2020) further scales transformers to long sequences by replacing standard (quadratic complexity) attention with a sparse global-local attention mechanism.",
"(One can replace ETC with other long-sequence models, such as Zaheer et al. (2020).)",
"The small number of dummy global tokens attend to all input tokens, but the input tokens attend only locally to other input tokens within a specified local radius.",
"An example can be found in Figure",
"2(b).",
"As a result, space and time complexity are linear in the long input length for a fixed local radius and global input length.",
"Furthermore, ETC allows a specialized implementation for efficient computation under this design.",
"We refer interested readers to Ainslie et al. (2020) for more details.",
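A sketch of the resulting boolean attention mask for a single global token and a fixed local radius (the sizes here are illustrative):

```python
# Sketch: ETC-style global-local mask. The global token attends everywhere
# and is attended to by everyone; input tokens attend within a local radius.
import torch

def etc_attention_mask(num_tokens, radius, num_global=1):
    n = num_global + num_tokens
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:num_global, :] = True          # global -> all
    mask[:, :num_global] = True          # all -> global
    idx = torch.arange(num_tokens)
    local = (idx[:, None] - idx[None, :]).abs() <= radius
    mask[num_global:, num_global:] = local
    return mask                          # True = attention allowed

print(etc_attention_mask(num_tokens=6, radius=1).int())
```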
"In this work, we adopt ETC with a single global token as the backbone, as its linear complexity of attention with efficient implementation is critical to long document modeling in practice (e.g. thousands of tokens per document).",
"A key component in transformers for sequence modeling is the positional encoding (Vaswani et al., 2017), which models the positional information of each token in the sequence.",
"Similarly, the original implementation of ETC uses Shaw et al. (2018) for (relative) positional encoding.",
"However, token offsets measured based on the error-prone serialization may limit the power of positional encoding.",
"We address this inadequacy by proposing Rich Attention as an alternative, discussed in Section 3.2.",
"Approach.",
"Our new architecture, inspired by work in dependency parsing (Dozat, 2019) and which we call Rich Attention, avoids the deficiencies of absolute and relative embeddings (Shaw et al., 2018) by avoiding embeddings entirely.",
"Instead, we compute the order of and log distance between pairs of tokens with respect to the x and y axis on the layout grid, and adjust the pre-softmax attention scores of each pair as a direct function of these values.",
"At a high level, for each attention head at each layer $\ell$, the model examines each pair of token representations $h^\ell_i, h^\ell_j$, whose actual order (using curly Iverson brackets) and log-distance are $o_{ij} = \{i < j\}$ and $d_{ij} = \ln(1 + |i - j|)$.",
"(Order on the y-axis answers the question: which token is above/below the other?)",
"$s^{(o)}_{ij} = o_{ij} \ln(p_{ij}) + (1 - o_{ij}) \ln(1 - p_{ij})$ (3) and $s^{(d)}_{ij} = -\frac{\theta^2 (d_{ij} - \mu_{ij})^2}{2}$ (4). Finally, these are added to the usual attention score $q_i^\top k_j$,",
"where $q_i = \mathrm{affine}^{(q)}(h_i)$ and $k_j = \mathrm{affine}^{(k)}(h_j)$.",
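A single-head sketch of Eqs. (3-4) along one axis; sharing one affine head per quantity for both tokens of a pair and the exact scaling are simplifying assumptions (the model learns affine functions per head and applies this on both the x and y axes):

```python
# Sketch: adjust pre-softmax attention scores with an order log-likelihood
# (Eq. 3) and a Gaussian log-distance penalty (Eq. 4) along one axis.
import torch
import torch.nn as nn

def rich_attention_scores(h, pos, Wq, Wk, order_head, dist_head, theta):
    q, k = h @ Wq, h @ Wk
    scores = q @ k.T / q.shape[-1] ** 0.5                 # usual dot product
    o = (pos[:, None] < pos[None, :]).float()             # o_ij = {i < j}
    d = torch.log1p((pos[:, None] - pos[None, :]).abs())  # d_ij
    p = torch.sigmoid(order_head(h) + order_head(h).T)    # predicted order prob
    mu = dist_head(h) + dist_head(h).T                    # predicted log-distance
    s_order = o * torch.log(p + 1e-9) + (1 - o) * torch.log(1 - p + 1e-9)
    s_dist = -(theta ** 2) * (d - mu) ** 2 / 2
    return scores + s_order + s_dist                      # pre-softmax scores

T, dm = 5, 16
h, pos = torch.randn(T, dm), torch.arange(T).float()
s = rich_attention_scores(h, pos, torch.randn(dm, dm), torch.randn(dm, dm),
                          nn.Linear(dm, 1), nn.Linear(dm, 1),
                          theta=torch.tensor(1.0))
```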
"The Rich Attention pipeline is shown in Figure 4. By penalizing attention edges for violating these soft order/distance constraints, we essentially build into the model the ability to learn logical implication rules such as: if $x_i$ is a noun, and $x_j$ is an adjective, and $x_i$ is related (i.e. attends) to $x_j$, then $x_j$ is to the left of $x_i$.",
"Note the unidirectionality of this rule: there could be many unrelated adjectives to the left of $x_i$, so the converse (which this approach cannot learn) does not hold in any general sense.",
"This is shown graphically in Figure 5.",
"Justification. The approach taken here is not arbitrary.",
"It can be derived algebraically from the probability mass/density functions of the distributions we assume for each feature, and the assumption that a query's attention vector represents a probability distribution.",
"Traditional dot product attention and relative position biases (Raffel et al., 2020) can likewise be derived from this method, providing incidental justification for the approach.",
"Consider the following, letting $L(X) = \ln(P(X))$ for brevity (in Eq. (4), $\theta$ is a learned temperature scalar unique to each head):",
"The affine functions in Eqs.",
"(1, 2) can optionally take the reduced-rank query/key terms $q_i, k_j$ as input instead of the layer input $h^\ell_i, h^\ell_j$, without sacrificing theoretical motivation.",
"We take this approach for speed.",
"$P(a_{ij} \mid h_i, h_j) = \frac{P(h_i, h_j \mid a_{ij})\, P(a_{ij})}{\sum_{j'} [P(h_i, h_{j'} \mid a_{ij'})\, P(a_{ij'})]} = \frac{\exp(L(h_i, h_j \mid a_{ij}) + L(a_{ij}))}{\sum_{j'} \exp(L(h_i, h_{j'} \mid a_{ij'}) + L(a_{ij'}))} = \mathrm{softmax}_{j'}(L(h_i, h_{j'} \mid a_{ij'}) + L(a_{ij'}))_j$ (5)",
"Here $a_i$ represents a latent categorical attention variable.",
"Eq.",
"(5) shows that the softmax function itself can actually be derived from posterior probabilities, by simply applying Bayes' rule and then observing that $x = \exp(\ln(x))$. That is, one need not define the posterior as being the softmax of some expression; it simply is the softmax of some expression, specifically one that falls out of the assumptions one makes (explicitly or implicitly).",
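A tiny numeric check of this identity, with arbitrary positive likelihoods and priors standing in for $P(h_i, h_{j'} \mid a_{ij'})$ and $P(a_{ij'})$:

```python
# Check: softmax(log-likelihood + log-prior) equals the Bayes posterior.
import torch

lik = torch.rand(4) + 0.1     # stand-in for P(h_i, h_j' | a_ij')
prior = torch.rand(4) + 0.1
prior = prior / prior.sum()   # stand-in for P(a_ij')
posterior = lik * prior / (lik * prior).sum()
via_softmax = torch.softmax(lik.log() + prior.log(), dim=0)
print(torch.allclose(posterior, via_softmax))  # True
```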
"When we plug the Gaussian probability density function into $L(h_i, h_j \mid a_{ij})$, the expression simplifies to dot-product attention (with one additional fancy bias term); we show this in Appendix C. If we assume $L(a_{ij})$ is uniform, then it divides out of the softmax and we can ignore it.",
"If we assume it follows a Bernoulli distribution such that $L(a_{ij} = 1; p_{ij}) = \ln(p_{ij})$, it becomes equivalent to a learned bias matrix $B$.",
"Now, if we assume there is another feature $f_{ij}$ that conditions the presence of attention, such as the order or distance of $i$ and $j$, then we can use the same method to derive a parametric expression describing its impact on the attention probability.",
"The new term can be expanded by explicating assumptions about the distributions that govern $P(f_{ij} \mid h_i, h_j, a_{ij})$ and simplifying the expression that results from substituting their probability functions.",
"If $f_{ij}$ is binary, then this process yields Eq.",
"(3), and if $\ln(f_{ij})$ is normally distributed, we reach Eq.",
"(4), as derived in Appendix C. Given multiple conditionally independent features, such as the order and distance, their individual scores can be calculated in this way and summed.",
"Furthermore, relative position biases (Raffel et al., 2020) can thus be understood in this framework as binary features (e.g. $f_{ij} = \{i - j = 2\}$) that are conditionally independent of $h_i, h_j$ given $a_{ij}$, meaning that $L(f_{ij} \mid h_i, h_j, a_{ij}) = L(f_{ij} \mid a_{ij})$.",
"We call this new attention paradigm Rich Attention because it allows the attention mechanism to be enriched with an arbitrary set of low-level features.",
"We use it to add order/distance features with respect to the x and y axes of a grid, but it can also be used in a standard text transformer to encode order/distance/segment information, or it could be used in an image transformer (Parmar et al., 2018) to encode relative pixel angle/distance information (the von Mises or wrapped normal distribution would be most appropriate for angular features), without resorting to lossy quantization and finite embedding tables.",
"The key to sparsifying attention mechanisms in ETC (Ainslie et al., 2020) for long sequence modeling is to have every token only attend to tokens that are within a pre-specified local radius in the serialized sequence.",
"The main drawback to ETC in form understanding is that imperfect serialization sometimes results in entities being serialized too far apart from each other to attend in the local-local attention component (i.e. outside the local radius).",
"A naive solution is to increase the local radius in ETC.",
"However, doing so sacrifices the efficiency of modeling long sequences.",
"Also, the self-attention may not be able to fully identify relevant tokens when there are many distractors (Figure 9; Serrano and Smith, 2019).",
"To alleviate the issue, we construct a graph to connect nearby tokens in a form document.",
"We design the edges of the graph based on strong inductive",
"biases so that they have higher probabilities of belonging to the same entity type",
"(Figure 2(c) and Figure 6).",
"Then, for each token, we obtain its Super-Token embedding by applying graph convolutions along these edges to aggregate semantically meaningful information from its neighboring tokens.",
"We use these super-tokens as input to the Rich Attention ETC for sequential tagging.",
"This means that even though an entity may have been broken up into multiple segments due to poor serialization, the super-tokens learned by the graph convolutional network will have recovered much of the context of the entity phrase.",
"We next introduce graph construction and the learning algorithm.",
"Node Definition.",
"Given a document with $N$ tokens denoted by $T = \{t_1, t_2, \ldots, t_N\}$, we let $t_k$ refer to the $k$-th token in the text sequence returned by the OCR engine.",
"The OCR engine generates the bounding box sizes and locations for all tokens, as well as the text within each box.",
"We define the node input representation for all tokens $T$ as vertices $V = \{v_1, v_2, \ldots, v_N\}$, where $v_k$ concatenates the attributes available for $t_k$.",
"In our design, we use three common input modalities:",
"(a) one-hot word embeddings,",
"(b) spatial embeddings from the normalized Cartesian coordinate values of the four corners and height and width of a token bounding box (Qian et al., 2019; Davis et al., 2019; Liu et al., 2019a).",
"The benefit of representing tokens in this way is that one can add more attributes to a vertex by simple concatenation without changing the macro graph architecture.",
"Edge Definition.",
"While the vertices V represent tokens in a document, the edges characterize the relationship between all pairs of vertices.",
"Precisely, we define directed edge embeddings for a set of edges $E$, where each edge $e_{kl}$ connects two vertices $v_k$ and $v_l$, concatenating quantitative edge attributes.",
"In our design, the edge embedding is composed of the relative distance between the centers, top left corners, and bottom right corners of the token bounding boxes.",
"The embedding also contains the shortest distances between the bounding boxes along the horizontal and vertical axis.",
"Finally, we include the height and width aspect ratios of $v_k$, $v_l$, and of the bounding box that covers both of them.",
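A sketch of such an edge feature vector from two token bounding boxes given as (x0, y0, x1, y1); the exact feature set and normalization are assumptions based on the description above:

```python
# Sketch: geometric edge features for a token pair (a, b).
def edge_features(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    acx, acy = (ax0 + ax1) / 2, (ay0 + ay1) / 2
    bcx, bcy = (bx0 + bx1) / 2, (by0 + by1) / 2
    union_w = max(ax1, bx1) - min(ax0, bx0)
    union_h = max(ay1, by1) - min(ay0, by0)
    return [
        bcx - acx, bcy - acy,                     # center-to-center offsets
        bx0 - ax0, by0 - ay0,                     # top-left offsets
        bx1 - ax1, by1 - ay1,                     # bottom-right offsets
        max(0.0, max(ax0, bx0) - min(ax1, bx1)),  # horizontal gap
        max(0.0, max(ay0, by0) - min(ay1, by1)),  # vertical gap
        (ay1 - ay0) / (ax1 - ax0),                # aspect ratio of a
        (by1 - by0) / (bx1 - bx0),                # aspect ratio of b
        union_h / union_w,                        # aspect ratio of union box
    ]

print(edge_features((0, 0, 10, 5), (12, 0, 30, 5)))
```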
"Graph construction.",
"After constructing edge embeddings, we need discrete graphs to define connectivities.",
"[Figure 6: An illustration of the word-level β-skeleton graph of a FUNSD document, which is a sparse but connected graph.]",
"One approach would be to create k-Nearest-Neighbors graphs (Zhang et al., 2020), but these may contain isolated components, which is not ideal for information propagation.",
"Instead, we construct graphs using the β-skeleton algorithm (Kirkpatrick and Radke, 1985) with β = 1, which has been found useful for document understanding in Wang et al. (2022); Lee et al. (2021).",
"It essentially creates a ball-of-sight graph with a linearly-bounded number of edges while also guaranteeing global connectivity, as shown in Figure 6. More examples of constructed β-skeleton graphs can be found in Figure 11 in the Appendix.",
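For β = 1 the β-skeleton reduces to the Gabriel graph, which admits a very short, if brute-force, construction:

```python
# Sketch: Gabriel graph (beta-skeleton with beta = 1). Connect u, v iff no
# third point lies strictly inside the circle whose diameter is uv.
# O(n^3) for clarity; production code would use spatial indexing.
def gabriel_edges(points):
    edges = []
    for i, (ux, uy) in enumerate(points):
        for j in range(i + 1, len(points)):
            vx, vy = points[j]
            cx, cy = (ux + vx) / 2, (uy + vy) / 2
            diam2 = (ux - vx) ** 2 + (uy - vy) ** 2   # (2r)^2
            ok = all((px - cx) ** 2 + (py - cy) ** 2 >= diam2 / 4
                     for k, (px, py) in enumerate(points) if k not in (i, j))
            if ok:
                edges.append((i, j))
    return edges

print(gabriel_edges([(0, 0), (1, 0), (2, 0), (1, 1)]))
```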
"Message passing.",
"Graph message-passing is the key to propagating representations along the edges defined by the inductive bias (the β-skeleton), which are free from the left-to-right, top-to-bottom serialization of form documents.",
"In our design, we perform graph convolutions (GCN; Gilmer et al., 2017) on concatenated features from pairs of neighboring nodes and edges connecting them.",
"Hence the graph embedding is learned directly via back-propagation over the irregular patterns of tokens in documents.",
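A sketch of one such message-passing layer, concatenating the two endpoint embeddings with the edge features; the layer sizes and sum aggregation are assumptions:

```python
# Sketch: one graph-convolution step over the beta-skeleton graph, where
# each message concatenates source node, destination node, and edge features.
import torch
import torch.nn as nn

class FormGraphConv(nn.Module):
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.msg = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, node_dim), nn.ReLU())
        self.update = nn.Linear(2 * node_dim, node_dim)

    def forward(self, x, edge_index, edge_attr):
        # x: (N, node_dim); edge_index: (E, 2); edge_attr: (E, edge_dim)
        src, dst = edge_index[:, 0], edge_index[:, 1]
        m = self.msg(torch.cat([x[src], x[dst], edge_attr], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, m)  # sum aggregation
        return torch.relu(self.update(torch.cat([x, agg], dim=-1)))

conv = FormGraphConv(node_dim=8, edge_dim=11)   # 11 matches sketch above
x = torch.randn(4, 8)
ei = torch.tensor([[0, 1], [1, 2], [2, 3]])
out = conv(x, ei, torch.randn(3, 11))           # super-token embeddings
```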
"We evaluate how the two proposed structural encoding components, Rich Attention and Super-Tokens, impact the overall performance of form-like document key information extraction.",
"We perform extensive experiments on three standard benchmarks. (We note that SROIE (Huang et al., 2019) and Kleister-NDA (Gralinski et al., 2020) are designed for key-value pair extraction instead of direct entity extraction.) [Table 1: Entity-level precision, recall, and F1 score comparisons on three standard benchmarks. On CORD, FormNet (ours) reaches 98.02 P / 96.55 R / 97.28 F1 with 345M parameters and 0.7M (9GB) pre-training documents, versus DocFormer (Appalaraju et al., 2021) at 96.99 F1 (536M, 5M) and LayoutLMv2 (Xu et al., 2021) at 96.01 F1 (426M, 11M). On FUNSD, FormNet reaches 85.21 / 84.18 / 84.69 with 217M parameters, versus DocFormer at 84.55 and LayoutLMv2 at 84.20. On Payment, FormNet reaches 92.70 / 91.69 / 92.19, versus NeuralScoring (Majumder et al., 2020) at 87.80 F1.]",
"CORD.",
"We evaluate on CORD (Park et al., 2019), which stands for the Consolidated Receipt Dataset for post-OCR parsing.",
"The annotations are provided in 30 fine-grained semantic entities such as store name, menu price, table number, discount, etc.",
"We use the standard evaluation set that has 800 training, 100 validation, and 100 test samples.",
"FUNSD.",
"FUNSD (Jaume et al., 2019) is a public dataset for form understanding in noisy scanned documents.",
"It is a subset of the Truth Tobacco Industry Documents (TTID).",
"The dataset consists of 199 annotated forms with 9,707 entities and 31,485 word-level annotations for 4 entity types: header, question, answer, and other.",
"We use the official 75-25 split for the training and test sets.",
"Payment.",
"We use the large-scale payment data (Majumder et al., 2020) that consists of around 10K documents and 7 semantic entity labels from human annotators.",
"The corpus comes from different vendors with different layout templates.",
"We follow the same evaluation protocol and dataset splits used in Majumder et al. (2020).",
"We leave the work of modifying FormNet for key-value pair extraction to future work.",
"Given a document, we first use the BERT-multilingual vocabulary to tokenize the extracted OCR words.",
"Super-tokens are then generated by direct graph embedding on these 2D tokens.",
"Next, we use ETC transformer layers to continue to process the super-tokens based on the serialization provided by the corresponding datasets.",
"Please see Appendix A for implementation details.",
"We scale up the FormNet family with different numbers of hidden units and attention heads to obtain FormNet-A1 (512 hidden units and 8 attention heads), A2 (768 hidden units and 12 attention heads), and A3 (1024 hidden units and 16 attention heads).",
"Ablations on the FormNets can be found in Figures 7 and 8, and Table 4 in the Appendix.",
"MLM Pre-training.",
"Following Appalaraju et al. (2021), we collect around 700k unlabeled form documents for unsupervised pre-training.",
"We adopt the Masked Language Model (MLM) objective (Taylor, 1953; Devlin et al., 2019) to pre-train the networks.",
"This forces the networks to reconstruct randomly masked tokens in a document to learn the underlying semantics of language from the pre-training corpus.",
"We train the models from scratch using the Adam optimizer with a batch size of 512.",
"The learning rate is set to 0.0002 with a warm-up proportion of 0.01.",
"Fine-tuning.",
"We fine-tune all models in the experiments using the Adam optimizer with a batch size of 8.",
"The learning rate is set to 0.0001 without warmup.",
"We use cross-entropy loss for the multi-class BIOES tagging tasks.",
"The fine-tuning is conducted on Tesla V100 GPUs for approximately 10 hours on the largest corpus.",
"Note that we only apply the MLM pre-training for the experiments on CORD and FUNSD as in Xu et al. (2020, 2021).",
"For the experiments on Payment, we follow Majumder et al. (2020) to directly train all networks from scratch without pre-training.",
"Benchmark Comparison.",
"Table 1 lists the results that are based on the same evaluation proto-cal 10 .",
"The proposed FormNet achieves the new best F1 scores on CORD, FUNSD, and Payment benchmarks.",
"Figure 7 shows model size vs. F1 score for all recent approaches.",
"On CORD and FUNSD, FormNet-A2 (Table 4 in Appendix) outperforms the most recent DocFormer (Appalaraju et al., 2021) while using a 2.5x smaller model and 7.1x less unlabeled pre-training documents.",
"On the larger CORD, FormNet-A3 continues to improve the performance to the new best 97.28% F1.",
"In addition, we observe no difficulty training the FormNet from scratch on the Payment dataset.",
"These demonstrate the parameter efficiency and the training sample efficiency of the proposed FormNet.",
"Effect of Structural Encoding in Pre-training.",
"We study the importance of the proposed Rich Attention and Super-Token by GCN on the large-scale MLM pre-training task across three FormNets as summarized in Figure 8.",
"Both Rich Attention and GCN components improve upon the ETC (Ainslie et al., 2020) baseline on reconstructing the masked tokens by a large margin, showing the effectiveness of their structural encoding capability on form documents.",
"The best performance is obtained by incorporating both.",
"Effect of Structural Encoding in Fine-tuning.",
"We ablate the effect of the proposed Rich Attention and Super-Tokens by GCN on the fine-tuning tasks and measure their entity-level precision, recall, and F1 scores.",
"In Table 2, we see that both Rich Attention and GCN improve upon the ETC (Ainslie et al., 2020) baseline on all benchmarks.",
"In particular, Rich Attention brings 4.46 points and GCN brings 4.24 points F1 score improvement over the jumder et al., 2020).",
"ETC baseline on CORD.",
"We also see a total of 5.3 points increase over the baseline when using both components, showing their orthogonal effectiveness of encoding structural patterns.",
"More ablation can be found in Section B and Table 5 in Appendix.",
"Using BertViz (Vig, 2019), we visualize the local-to-local attention scores for specific examples of the CORD dataset for the ETC baseline and the ETC+RichAtt+GCN (FormNet) models.",
"Qualitatively in Figure 9, we notice that the tokens attend primarily to other tokens within the same visual block for ETC+RichAtt+GCN.",
"Moreover for that model, specific attention heads are attending to tokens aligned horizontally, which is a strong signal of meaning for form documents.",
"No clear attention pattern emerges for the ETC model, suggesting the Rich Attention and Super-Token by GCN enable the model to learn the structural cues and leverage layout information effectively.",
"More visualization examples are given in the Appendix E. We also show sample model outputs in Figure 10.",
"We present a novel model architecture for key entity extraction for forms, FormNet.",
"We show that the proposed Rich Attention and Super-Token components help the ETC transformer to excel at form understanding in spite of noisy serialization, as evidenced quantitatively by its state-of-the-art performance on three benchmarks and qualitatively by its more sensible attention patterns.",
"In the future, we would like to explore multi-modality input such as images."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"method",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective"
] |
[
"Recurrent Variational Autoencoder has been widely used for language modeling and text generation tasks.",
"These models often face a difficult optimization problem, also known as the Kullback-Leibler (KL) term vanishing issue, where the posterior easily collapses to the prior, and the model will ignore latent codes in generative tasks.",
"To address this problem, we introduce an improved Wasserstein Variational Autoencoder (WAE) with Riemannian Normalizing Flow (RNF) for text modeling.",
"The RNF transforms a latent variable into a space that respects the geometric characteristics of input space, which makes posterior impossible to collapse to the non-informative prior.",
"The Wasserstein objective minimizes the distance between the marginal distribution and the prior directly, and therefore does not force the posterior to match the prior.",
"Empirical experiments show that our model avoids KL vanishing over a range of datasets and has better performances in tasks such as language modeling, likelihood approximation, and text generation.",
"Through a series of experiments and analysis over latent space, we show that our model learns latent distributions that respect latent space geometry and is able to generate sentences that are more diverse.",
"1 1 Introduction Variational Autocoder (VAE) (Kingma and Welling, 2013; Rezende and Mohamed, 2015) is a probabilistic generative model shown to be successful over a wide range of tasks such as image generation (Gregor et al., 2015; Yan et al., 2016), dialogue generation (Zhao et al., 2017b), transfer learning (Shen et al., 2017), and classification (Jang et al., 2017).",
"The encoder-decoder architecture of VAE allows it to learn a 1 Code could be found at https://github.com/ kingofspace0wzz/wae-rnf-lm continuous space of latent representations from high-dimensional data input and makes sampling procedure from such latent space very straightforward.",
"Recent studies also show that VAE learns meaningful representations that encode non-trivial information from input (Gao et al., 2018; Zhao et al., 2017a).",
"Applications of VAE in tasks of Natural Language Processing (Bowman et al., 2015; Zhao et al., 2017b, 2018; Miao et al., 2016) is not as successful as those in Computer Vision.",
"With long-short-term-memory network (LSTM) (Hochre-iter and Schmidhuber, 1997) used as encoder-decoder model, the recurrent variational autoencoder (Bowman et al., 2015) is the first approach that applies VAE to language modeling tasks.",
"They observe that LSTM decoder in VAE often generates texts without making use of latent representations, rendering the learned codes as useless.",
"This phenomenon is caused by an optimization problem called KL-divergence vanishing when training VAE for text data, where the KL-divergence term in VAE objective collapses to zero.",
"This makes the learned representations meaningless as zero KL-divergence indicates that the latent codes are independent of input texts.",
"Many recent studies are proposed to address this key issue.",
"Yang et al. (2017); Semeniuta et al. (2017) use convolutional neural network as decoder architecture to limit the expressiveness of decoder model.",
"Xu and Durrett (2018); Zhao et al. (2017a,b, 2018) seek to learn different latent space and modify the learning objective.",
"And, even though not designed to tackle KL vanishing at the beginning, recent studies on Normalizing Flows (Rezende and Mohamed, 2015; van den Berg et al., 2018) learn meaningful latent space as it helps to transform an over-simplified latent distribution into more flexible distributions.",
"called Riemannian Normalizing Flow (RNF), together with the recently developed Wasserstein objective (Tolstikhin et al., 2018; Arjovsky et al., 2017), to ensure VAE models more robust against the KL vanishing problem.",
"As further explained in later sections, the Wasserstein objective helps to alleviate KL vanishing as it only minimizes the distance between latent marginal distribution and the prior.",
"Moreover, we suspect that the problem also comes from the over-simplified prior as-sumption about latent space.",
"In most cases, the prior is assumed to be a standard Gaussian, and the posterior is assumed to be a diagonal Gaussian for computational efficiency.",
"These assumptions, however, are not suitable to encode intrinsic characteristics of input into latent codes as in reality the latent space is likely to be far more complex than a diagonal Gaussian.",
"The RNF model we proposed in this paper thus helps the situation by encouraging the model to learn a latent space that encodes some geometric properties of input space with a well-defined geometric metric called Riemannian metric tensor.",
"This renders the KL vanishing problem as impossible since a latent distribution that respects input space geometry would only collapse to a standard Gaussian when the input also follows a standard Gaussian, which is never the case for texts and sentences datasets.",
"We then empirically evaluate our RNF Variational Wasserstein Autoencoder on standard language modeling datasets and show that our model has achieved state-of-the-art performances.",
"Our major contributions can be summarized as the following: We propose Riemannian Normalizing Flow, a new type of flow that uses the Riemannian metric to encourage latent codes to respect geometric characteristics of input space.",
"We introduce a new Wasserstein objective for text modeling, which alleviates KL divergence term vanishing issue, and makes the computation of normalizing flow easier.",
"Empirical studies show that our model produces state-of-the-art results in language modeling and is able to generate meaningful text sentences that are more diverse.",
"Given a set of data x = ( x 1 , x 2 , ..., x n ) , a Variational Autoencoder (Kingma and Welling, 2013) aims at learning a continuous latent variable that maximizes the log-likelihood log p ( x ) = log (cid:82) p ( x | z ) p ( z ) d z .",
"Since this marginal is often intractable, a variational distribution q ( z | x ) is used to approximate the true posterior distribution p ( z | x ) .",
"VAE tries to maximize the following lower bound of likelihood, L ( ; ; x ) = E q ( x ) [ E q ( z | x ) [log p ( x | z )] (1) KL ( q ( z | x ) || p ( z ))] (2) where q ( x ) is the empirical distribution of input, and the prior p ( z ) is often assumed to be a standard Gaussian for simplicity.",
"The first term in the objective is the reconstruction error, and the second one is KL divergence.",
"For modeling text sentences, Bowman et al. (2015) parameterizes both the inference model q ( z | x ) and the generative model p ( x | z ) as LSTMs.",
"The reparameterization trick proposed by Rezende and Mohamed (2015) is used to train these two models jointly.",
"Since the generative model is often an LSTM that has strong expressiveness, the reconstruction term in the objective will dominate KL divergence term.",
"In this case, the model is able to generate texts without making effective use of latent codes as the latent variable z becomes independent from input when KL divergence term collapses to zero.",
"There are two main approaches to address this issue.",
"One is to explore different choices of the decoder model to control the expressiveness of LSTM.",
"Yang et al. (2017) and Semeniuta et al. (2017) use CNN as an alternative to LSTM.",
"The dilation technique used by Yang et al. (2017) also helps to control the trade-off between decoder capacity and KL vanishing.",
"The other approach is to change the form of latent distribution and to modify the training objective.",
"Xu and Durrett (2018) proposes to use hyperspherical distribution and shows that the KL vanishing problem does not happen in hypersphere.",
"The infoVAE (Zhao et al., 2017a) argues that VAE objective is anti-informatics which encourages KL divergence to be zero.",
"They, therefore, add a mutual information term I ( z ; x ) explicitly to ensure that latent variable z encodes non-trivial information about x , in which case the KL would be greater than zero as z is no longer independent from x .",
"In a similar manner, Xiao et al. (2018) introduces a Dirichlet latent variable to force latent codes to learn useful topic information given input documents.",
"He et al. (2019) and Kim et al. (2018) achieve the current state-of-the-art in terms of sample perplexity.",
"Moreover, the bag-of-words loss used by Zhao et al. (2017b) also dramatically alleviates KL vanishing while not sacrificing sample quality.",
"In Section 3 and Section 4, we introduce RNF with Wasserstein objective.",
"Our proposed model also lies in the direction which seeks to learn more flexible latent distribution, as the main advantage of RNF is to ensure a flexible latent space able to capture geometric characteristics of input space.",
"In this section, we review the basic concepts of Riemannian geometry, normalizing flow, and Wasserstein Autoencoder.",
"We then introduce our new Riemannian normalizing flow in Section 4.",
"Consider an input space X RD , a d-dimensional ( d < D ) manifold is a smooth surface of points embedded in X .",
"Given a manifold M , a Riemannian manifold is a metric space ( M , G ) , where G is the Riemannian metric tensor that assigns an inner product to every point on the manifold.",
"More formally, a Riemannian metric G : Z R d d is defined as a smooth function such that for any two vectors u, v in the tangent space T z M of each point z M , it assigns the following inner product for u and v , < u, v > G = u TG ( z ) v (3) The Riemannian metric helps us to characterize many intrinsic properties of a manifold.",
"Consider an arbitrary smooth curve ( t ) : [ a, b ] M on a given manifold M with a Riemannian metric tensor G , the length of this curve is given by L ( ) = (cid:90) b a || (cid:48) ( t ) || dt = (cid:90) b a (cid:112) < (cid:48) t , (cid:48) t > G dt = (cid:90) b a (cid:113) (cid:48) T t G ( t ) (cid:48) t dt (4) where (cid:48) t is the curve velocity and lies in the tangent space T t M at point ( t ) .",
"When the metric tensor G is equal to 1 everywhere on the curve, it becomes a metric tensor on Euclidean space, where the length of curve is defined as the integral of the velocity function, L ( ) = (cid:82) ba (cid:113) (cid:48) Tt (cid:48) t dt = (cid:82) ba (cid:48) t dt .",
"Given the definition of curve length, the geodesic path between any two points can be defined as the curve that minimizes the curve length .",
"Namely, if t is the geodesic curve connecting ( a ) and ( b ) , then t = argmin L ( ) (5) Practically, a geodesic line is often found by optimizing the following energy function, E ( ) = 1 2 (cid:90) b a (cid:48) Tt G ( t ) (cid:48) t dt t = argmin E ( ) (6) Note that the Euclidean metric is a special case of Riemannian metric.",
"The more general metric tensor G gives us a sense of how much Riemannian geometry deviates from Euclidean geometry.",
"The powerful inference model of VAE can approximate the true posterior distribution through variational inference.",
"The choice of this approximated posterior is one of the major problems.",
"For computational efficiency, a diagonal Gaussian distribution is often chosen as the form of the posterior.",
"As the covariance matrix is always assumed to be diagonal, the posterior fails to capture dependencies among individual dimensions of latent codes.",
"This poses a difficult problem in variational inference.",
"As it is unlikely that the true posterior has a diagonal form, the approximated diagonal distribution is not flexible enough to match the true posterior even in asymptotic time.",
"A normalizing flow, developed by (Rezende and Mohamed, 2015), is then introduced to transform a simple posterior to a more flexible distribution.",
"Formally, a series of normalizing flows is a set of invertible, smooth transformations f t : R d R d , for t = 1 , ..., T , such that given a random variable z 0 with distribution q ( z 0 ) , the resulting random variable z T = ( f T f T 1 ... f 1 )( z 0 ) has the following density function, q ( z T ) = q ( z 0 ) T (cid:89) t =1 | det f 1 t z t 1 | (7) Since each transformation f i for i = 1 , ..., T is invertible, its Jacobian determinant exists and can be computed.",
"By optimizing the modified evidence lower bound objective, ln p ( x ) E q ( z 0 | x ) (cid:2) ln p ( x | z T ) + T (cid:88) t =1 ln | det f t z t 1 | (cid:3) KL ( q ( z 0 | x ) || p ( z T )) (8) the resulting latent codes z T will have a more flexible distribution.",
"Based on how the Jacobian-determinant is computed, there are two main families of normalizing flow (Tomczak and Welling, 2016; Berg et al., 2018): general normalizing flow and volume preserving flow .",
"While they both search for flexible transformation that has easy-to-compute Jacobian-determinant, the volume-preserving flow aims at finding a specific flow whose Jacobian-determinant equals 1, which simplifies the optimization problem in equation (6).",
"Since we want a normalizing flow that not only gives flexible posterior but also able to uncover the true geometric properties of latent space, we only consider general normalizing flow whose Jacobian-determinant is not a constant as we need it to model the Riemannian metric introduced earlier.",
"Wasserstien distance has been brought to generative models and is shown to be successful in many image generation tasks (Tolstikhin et al., 2018; Arjovsky et al., 2017; Bousquet et al., 2017).",
"Instead of maximizing the evidence lower bound as VAE does, the Wasserstein Autoencoder (Tol-stikhin et al., 2018) optimizes the optimal transport cost (Villani, 2008) between the true data distribution PX ( x ) and the generative distribution PG ( x ) .",
"This leads to the Wasserstein objective, D ( PX , PG ) = inf Q ( Z | X ) Q EPXEQ ( Z | X ) [ c ( X, G ( Z ))] + D Z ( QZ , PZ ) (9) where c ( ) is the optimal transport cost, G : Z X is any generative function, and the coefficient controls the strength of regularization term DZ .",
"Given a positive-definite reproducing kernel k : Z Z R , the regularization term DZ can be approximated by the Maximum Mean Discrepancy ( MMD ) (Gretton et al., 2012) between the prior PZ and the aggregate posterior QZ ( z ) = Figure 1: Parameterization of the input manifold by latent space and generative function f .",
"In this section we propose our Riemannian Normalizing Flow ( RNF ).",
"RNF is a new type of flow that makes use of the Riemannian metric tensor introduced earlier in Section 3.",
"This metric enforces stochastic encoder to learn a richer class of approximated posterior distribution in order to follow the true geometry of latent space, which helps to avoid the local optimum in which posterior collapses to a standard prior.",
"We then combine this with WAE and we will explain why and how WAE should be used to train with RNF.",
"In the context of VAE, learning a latent space that is homeomorphic to input space is often very challenging.",
"Consider a manifold M RD , a generator model x = f ( z ) : Z RD serves as a low-dimensional parameterization of manifold M with respect to z Z .",
"For most cases, latent space Z is unlikely to be homeomorphic to M , which means that there is no invertible mapping between M and Z .",
"And, since the inference model h : M Z is nonlinear, the learned latent space often gives a distorted view of input space.",
"Consider the case in Figure 2, where the leftmost graph is the input manifold, and the rightmost graph is the corresponding latent space with curvature reflected by brightness.",
"Let us take two arbitrary points on the manifold and search for the geodesic path connecting these two points.",
"If we consider the distorted latent space as Euclidean, then the geodesic path in latent space does not reflect the true shortest distance between these two points on the manifold, as a straight line in the latent space would Figure 2: An example when latent space does not reflect input space.",
"cross the whole manifold, while the true geodesic path should circumvent this hole.",
"This distortion is caused by the non-constant curvature of latent space.",
"Hence, the latent space should be considered as a curved space with curvature reflected by the Riemannian metric defined locally around each point.",
"As indicated by the brightness, we see that the central area of latent space is highly curved, and thus has higher energy.",
"The geodesic path connecting the two latent codes minimizes the energy function E ( ) = 12 (cid:82) (cid:48) Tt G ( t ) (cid:48) t dt , indicating that it should avoid those regions with high curvature G .",
"The question now becomes how to impose this intrinsic metric and curvature into latent space.",
"In this paper, we propose a new form of normalizing flow to incorporate with this geometric characteristic.",
"First, consider a normalizing flow f : Z Z (cid:48) , we can compute length of a curve in the transformed latent space Z (cid:48) , L ( f ( t )) = (cid:90) b a || J t (cid:48) t || dt = (cid:90) b a (cid:113) (cid:48) t J T t J t (cid:48) t dt = (cid:90) b a (cid:113) (cid:48) t G ( t ) (cid:48) t dt (11) where t : [ a, b ] Z (cid:48) , a, b Z J t = f z (cid:12)(cid:12) (cid:12) z = t G ( t ) = J T t J t = ( f z ) T ( f z ) (cid:12)(cid:12)(cid:12) z = t (12) J t is the Jacobian matrix defined at t .",
"In our case, the Riemannian metric tensor G is the inner product of Jacobian J t and is therefore symmetric positive definite.",
"It reflects input space curvature in low-dimensional parameterization Z (cid:48) .",
"In a highly curved region, the metric tensor G = JTJ is larger than those in other areas, indicating that the latent representation of input manifold has lower curvature, or area of low energy, as any geodesic connecting each pair of points on the manifold favors lower energy path.",
"This implies that those regions outside of data manifold should have high curvature reflected in their low-dimensional parameterization Z (cid:48) .",
"In this paper, we introduce Riemannian normalizing flow ( RNF ) to model curvature.",
"For simplicity, we build our model based on planar flow (Rezende and Mohamed, 2015).",
"A planar flow is an invertible transformation that retracts and extends the support of original latent space with respect to a plane.",
"Mathematically, a planar flow f : Z Z (cid:48) has the form, f ( z ) = z + u h ( w T z + b ) (13) where h : R d R d is any smooth non-linear function and is often chosen as tanh ( ) .",
"The in-vertibiliy condition is satisfied as long as u T w 1 .",
"Its Jacobian-determinant with respect to latent codes z is very each to compute, | detf z | = | 1 + u T ( z ) w | ( z ) = h (cid:48) ( w T z + b ) (14) With the Jacobian-determinant of planar flow, it is straightforward to compute the determinant of metric tensor G .",
"To see that, note that since f z : R d R d is a square matrix with full column rank due to invertibility of f , we have | det G | = (cid:12)(cid:12)(cid:12) f z T (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)(cid:12) f z (cid:12)(cid:12)(cid:12) = (cid:12)(cid:12)(cid:12) f z (cid:12)(cid:12)(cid:12) 2 (15) To ensure well-behaved geometry in a transformed latent space, we need the Jacobian-determinant | f z | to be large in region with high curvature | det G | .",
"Hence, we propose to model the metric tensor with the inverse multiquadratics kernel function K used by (Tolstikhin et al., 2018) and a Gaussian kernel, that is, K m ( z , c k ) = C/ ( C + || z c k || 22 ) k = argmin || z c k || 22 K g ( z , c k ) = exp ( k || z z k || 22 ) where c k , k = 1 , 2 , 3 , ..., K are clusters of latent codes, and k is the bandwidth.",
"We observe that the inverse multiquadratics kernel K m generally performs better.",
"We use the above kernels as constraints over the Jacobian-determinant, so that, | detf (cid:48) z | = | 1 + u T ( z ) ( z ) w | K ( z , c k ) (16) As we explained earlier in this section, latent representation of region outside of input manifold should have high curvature in latent space.",
"During training, we seek to maximize this regularized Jacobian | det f (cid:48) z | rather than the original one.",
"This ensures that those latent codes within latent clusters, and therefore very likely to be on or near input manifold in input space, have much smaller curvature | det G | = | f z | 2 than those outside of latent clusters, as those outside of manifold would seek larger Jacobian in order to counter-effect the regularization term K .",
"The latent space Z (cid:48) transformed by normalizing flow f is thus curved with respect to input manifold.",
"This type of normalizing flow thus learns a latent space to respect geometric characteristics of input space.",
"The KL vanishing problem is then unlikely to happen with a curved latent space.",
"This is because most high-dimensional data in real life forms a curved manifold which is unlikely sampled from a multivariate standard Gaussian.",
"Then, if the latent space reflects curvature of a curved manifold, the support of latent codes certainly does not follow a standard Gaussian either.",
"This helps to push the posterior q ( z | x ) away from the standard Gaussian and never collapse to a non-informative prior.",
"Here we consider using the Wasserstein objective to model the latent marginal distribution of a curved latent space learned by an RNF.",
"The Wasserstein objective with MMD is appealing in our case for two main reasons.",
"First, instead of minimizing the KL-divergence KL ( q ( z | x ) || p ( z )) , it minimizes distance between q ( z ) = (cid:82) q ( z | x ) p ( x ) dx and p ( z ) , which encourages the marginal distribution of latent space to be as close as the prior while not affecting individual posterior distribution q ( z | x ) conditioned on each input.",
"This makes the KL-divergence between posterior and prior, and equivalently the mutual information between latent codes and input sentences I ( z , x ) = KL ( q ( z , x ) || q ( z ) p ( x )) = E p ( x ) [ KL ( q ( z | x ) || p ( z ))] KL ( q ( z ) || p ( z )) impossible to be vanished as the objective does not require it to be small.",
"Since the learned latent codes and input sentences have non-zero mutual information, the generative model will not ignore latent codes when generating texts.",
"Second, the MMD regularization in WAE makes it possible to optimize normalizing flow without computing the Jacobian-determinant explicitly.",
"The use of MMD is necessary as getting a closed form KL divergence is no-longer possible after we apply RNF to the posterior.",
"And, since the generative function G in the reconstruction term of Wasserstein objective can be any function or composition of functions (Tolstikhin et al., 2018), we can easily compose an RNF function into G such that the reconstructed texts are X = G ( f ( Z )) = G ( Z (cid:48) ) , Z (cid:48) Z (cid:48) .",
"Now, given a series of RNFF = f K ... f 1 , and let ZK be the curved latent space after applying K flows over the original latent space Z , we optimize the following RNF-Wasserstein objective, D ( PX , PG ) = inf Q ( Z | X ) Q EPXEQ ( Z | X ) [ c ( X, G ( Z (cid:48) ))] + MMD ( QZ (cid:48) , PZ (cid:48) ) + ( KL ( q ( z | x ) || p ( z )) (cid:88) log | detf (cid:48) z | ) (17) where Z Z , Z (cid:48) Z (cid:48) , and Z (cid:48) = F ( Z ) .",
"We approximate MMD term with the Gaussian kernel k ( z , z (cid:48) ) = e || z z (cid:48) || 2 , that is, MMD ( p, q ) = E p ( z ) ,p ( z (cid:48) ) [ k ( z , z (cid:48) )] + E q ( z ) ,q ( z (cid:48) ) [ k ( z , z (cid:48) )] 2 E p ( z ) ,q ( z (cid:48) ) [ k ( z , z (cid:48) )] .",
"between the prior PZ and the marginal of non-curved latent space.",
"This makes sampling procedure for generation tasks much easier, as it is easy to sample a latent code from a non-informative prior.",
"We can get a sample z (cid:48) from Z (cid:48) indirectly by sampling: z PZ ( z ) and z (cid:48) F ( z ) .",
"On the other hand, it would be much more difficult to sample from a curved latent space Z (cid:48) directly as the only prior knowledge we have about Z (cid:48) is the curvature reflected by RNF implicitly and hence we do not know the support of Q ( Z (cid:48) ) .",
"In this section, we investigate WAE's performance with Riemannian Normalizing Flow over language and text modeling.",
"We use Penn Treebank (Marcus et al., 1993), Yelp 13 reviews (Xu et al., 2016), as in (Xu and Durrett, 2018; Bowman et al., 2015), and Yahoo Answers used in (Xu and Durrett, 2018; Yang et al., 2017) to follow and compare with prior studies.",
"We limit the maximum length of a sample from all datasets to 200 words.",
"The datasets statistics is shown in Table 3.",
"For each model, we set the maximum vocabulary size to 20K and the maximum length of input to 200 across all data sets.",
"Following Bowman et al. (2015), we use one-layer undirectional Data Train Dev Test Vocab PTB 42068 3370 3761 10K Yelp13 62522 7773 8671 15K Yahoo 100K 10K 10K 20K Table 3: Datasets statistics; The numbers reflect size of each dataset.",
"LSTM for both encoder-decoder models with hidden size 200.",
"Latent codes dimension is set to 32 for all models.",
"We share Word Embeddings of size 200.",
"For stochastic encoders, both MLP and MLP are two layer fully-connected networks with hidden size 200 and a batch normalizing output layer (Ioffe and Szegedy, 2015).",
"We use Adam (Kingma and Ba, 2015) with learning rate set to 10 3 to train all models.",
"Dropout is used and is set to 0.2.",
"We train all models for 48 epochs, each of which consists of 2K steps.",
"For models other than WAE, KL-annealing is applied and is scheduled from 0 to 1 at the 21st epoch.",
"For vmf-VAE (Xu and Durrett, 2018), we set the word embedding dimension to be 512 and the hidden units to 1024 for Yahoo, and set both of them to 200 for PTB and Yelp.",
"The temperature is set to 80 and is kept constant during training.",
"For all WAE models, we add a small KL divergence term to control the posterior distribution.",
"We found that if we only use RNF with MMD as the distance metric, then the posterior may diverge from the prior such that no reasonable samples can be generated from a standard Gaussian variable.",
"Hence, for all data sets, we schedule the KL divergence weight from 0 to 0.8, and the weight of the MMD term is set as = 10 .",
"k of RBF is set to 10 for all models.",
"For RNF, we use pre-trained standard VAE models to gather the clusters c k , k = 1 , ..., K , of latent codes, where we set the number of clusters to be 20.",
"We use three normalizing flow for all experiments.",
"Hyperparameter of K m When using the inverse multiquadratics kernel K m ( z , c k ) = C/ ( C + || z c k || 22 ) for RNF, we follow the choice of hyperparameter in (Tolstikhin et al., 2018).",
"We set C = 2 d s , where d is the dimensionality of latent codes z , and s is ranged in (0 . 1 , 0 . 2 , 0 . 5 , 1 , 2 , 5 , 10) .",
"The final kernel is computed by K m ( z , c k ) = (cid:80) s 2 ds/ (2 ds + || z c k || 22 ) .",
"As explained by (Tolstikhin et al., 2018), this strategy allows us to explore a wider range of hyperparameter in one setting.",
"We show the language modeling results for PTB, Yahoo and Yelp in Table 1.",
"We compare negative log-likelihood (NLL), KL divergence, and perplexity (PPL) with all other existing methods.",
"The negative log-likelihood is approximated by its lower bound.",
"We use the negative of ELBO to approximate NLL for all VAE models.",
"For those with normalizing flows, we use the modified ELBO, which is L = E q ( z (0) | x ) [log p ( x | z ( T ) ) log q ( z (0) | x ) + log p ( z ( T ) )] + E q ( z ( T ) ) [ (cid:80) Tt =1 log | f ( t ) z ( t 1) | ] .",
"The numbers show that KL-annealing and dropout used by Bowman et al. (2015) are helpful for PTB, but for complex datasets such as Yahoo and Yelp, the KL divergence still drops to zero due to the over-expressiveness of LSTM.",
"This phenomenon is not alleviated by applying normalizing flow to make the posterior more flexible, as shown in the third row.",
"Part of the reason may be that a simple NF such as a planar flow is not flexible enough and is still dominated by a powerful LSTM decoder.",
"We find that the KL vanishing is alleviated a lit-tle bit if using WAE, which should be the case as WAE objective does not require small KL.",
"We also find that simply applying a planar flow over WAE does not improve the performance that much.",
"On the other hand, using RNF to train WAE dramatically helps the situation which achieves the lowest text perplexity on most conditions except for Figure 3: PTB.",
"YAHOO Answers, where (He et al., 2019; Yang et al., 2017; Kim et al., 2018) have the current state-of-the-art results.",
"We want to emphasize that CNN-VAE (Yang et al., 2017) and SA-VAE (Kim et al., 2018) are not directly comparable with other current approaches.",
"Here, we compare with models that use LSTM as encoder-decoder and have similar time complexity, while the use of CNN as decoder in CNN-VAE would dramatically change the model expressiveness, and it is known that SA-VAE's time complexity (Kim et al., 2018; He et al., 2019) is much higher than all other existing approaches.",
"Mutual information between Z (cid:48) and X One important question is how useful are latent variables.",
"Since no metrics are perfect (Wang et al., 2018), we should not just look at sample perplexity to judge how good a latent code is.",
"Hence, we also investigate how much information can be encoded into latent codes.",
"We believe that the mutual information term I ( z ; x ) is a better metric regarding the usefulness of latent codes than sample the company said it will be sold to the company 's promotional programs and UNK the company also said it will sell $ n million of soap eggs turning millions of dollars the company said it will be UNK by the company 's UNK division n the company said it would n't comment on the suit and its reorganization plan this is a reflection of socialism and capitalism the company also said it will sell its UNK division of the company 's UNK earlier this year the company said it will sell $ n billion of assets and UNK to the u.s last year he said the company 's earnings were n't disclosed one of my favorite places to eat at the biltmore .",
"perplexity, as it tells us directly how much information we can infer from x by looking z .",
"We use Monte Carlo method (Metropo-lis and Ulam, 1949) to get an approximation of I ( z , x ) = KL ( q ( z , x ) || q ( z ) p ( x )) = E p ( x ) [ KL ( q ( z | x ) || p ( z ))] KL ( q ( z ) || p ( z )) .",
"We compared mutual information between input x and latent codes z sampled from Euclidean latent space Z and Riemannian latent space Z (cid:48) respectively.",
"We see that even though NF does not necessarily help WAE to achieve the lowest perplexity, it does make latent codes to preserve more information about the input.",
"For WAE trained with RNF, sample perplexity and mutual information metric are both good.",
"It is cleary that I ( z (cid:48) , x ) > I ( z , x ) , where z (cid:48) is sampled from the curved space, and z is the sample transformed by the normal planar flow.",
"This further strengthens our confidence over the usefulness of the curved latent space Z (cid:48) .",
"Generating Texts from latent spaces Another way to explore latent space is to look at the quality of generated texts.",
"Here we compare sentences generated from methods that do not use Wasserstein objective and RNF with those generated from curved latent space Z (cid:48) learned by WAE.",
"We observe that texts generated from flat Euclidean space are not as diverse as the ones generated from curved space learned by WAE-RNF.",
"This is largely related to the nature of Wasserstein objective.",
"In WAE, the KL-divergence KL ( q || p )) between the prior and the posterior q ( z | x ) conditioned on each input x does not need to be small to optimize the Wasserstein objective.",
"This indicates that the marginal q ( z ) is able to match to the prior p ( z ) while allowing each posterior q ( z | x ) to have a much more diverse support than that of a standard Gaussian p ( z ) .",
"Therefore, if we randomly generate samples from curved latent space Z (cid:48) many times, we are likely to get samples scattered in different support of distinct posterior conditioned on different input x .",
"Hence, the reconstructed sentences will have a much more diverse meaning or structure.",
"In this paper, we introduced Riemannian Normalizing Flow to train Wasserstein Autoencoder for text modeling.",
"This new model encourages learned latent representation of texts to respect geometric characteristics of input sentences space.",
"Our results show that RNF WAE does significantly improve the language modeling results by modeling the Riemannian geometric space via normalizing flow.",
"We want to thank College of Creative Studies and Gene & Lucas Undergraduate Research Fund for providing scholarships and research opportunities for Prince Zizhuang Wang.",
"We also want to thank Yanxin Feng from Wuhan University for helpful discussion about Riemannian Geometry, and Yunxian He (CMU), Wenhu Chen (UCSB), and Yijun Xiao (UCSB) for their comments which helped us improve our paper and experiments."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"objective",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"other",
"other"
] |
[
"Diachronic distributional models track changes in word use over time.",
"In this paper, we propose a deep neural network diachronic distributional model.",
"Instead of modeling lexical change via a time series as is done in previous work, we represent time as a continuous variable and model a word's usage as a function of time.",
"Additionally, we have created a novel synthetic task, which quantitatively measures how well a model captures the semantic trajectory of a word over time.",
"Finally, we explore how well the derivatives of our model can be used to measure the speed of lexical change.",
"Diachronic distributional models have provided interesting insights into how words change meaning.",
"Generally, they are used to explore how specific words have changed meaning over time (Sagi et al., 2011; Gulordava and Baroni, 2011; Jatowt and Duh, 2014; Kim et al., 2014; Kulkarni et al., 2015; Bamler and Mandt, 2017; Hellrich and Hahn, 2017), but they have also been used to explore historical linguistic theories (Xu and Kemp, 2015; Hamilton et al., 2016a,b), to predict the emergence of novel senses (Bamman and Crane, 2011; Rohrdantz et al., 2011; Cook et al., 2013, 2014), and to predict world events (Kutuzov et al., 2017a,b).",
"Diachronic distributional models are distributional models where the vector for a word changes over time.",
"Thus, we can calculate the cosine similarity between the vectors for a word at two different time points to measure how much that word has changed over time and we can perform a nearest neighbor analysis to understand in what direction a word is changing.",
"For example, diachronic distributional models can detect that the word gay has greatly changed by comparing the word vector for gay across different time points.",
"They can also be used to discover that gay has shifted its meaning from happy to homosexual by analyzing when those words show up as nearest neighbors to gay .",
"Previous research in diachronic distributional semantics has used models where data is partitioned into time bins and a synchronic model is trained on each bin.",
"A synchronic model is a vanilla, time-independent distributional model, such as skip-gram.",
"However, there are several technical issues associated with data binning.",
"For example, if the bins are too large, you can only achieve extremely coarse grained representations of lexical change over time.",
"However, if the bins are too small, the synchronic models get trained on insufficient data.",
"In this paper, we have built the first diachronic distributional model that represents time as a continuous variable instead of employing data binning.",
"There are several advantages to treating time as continuous.",
"The first advantage is that it is more realistic.",
"Large scale change in the meaning of a word is the result of change happening one person at a time.",
"Thus, semantic change must be a gradual process.",
"By treating time as a continuous variable, we can capture this gradual shift.",
"The second advantage is that it allows a greater representation of the underlying causes behind lexical change.",
"Words change usage in reaction to real world events and multiple words can be affected by the same event.",
"For example, the usage of gay and lesbian have changed in similar ways due to changing perceptions of homosexuality in society.",
"By associating time with a vector and having word representations be a function of that vector, we can model a single underlying cause affecting multiple words similarly.",
"It is difficult to evaluate diachronic distributional models in their ability to capture semantic shift as it is extremely difficult to acquire gold data.",
"Distributional models are traditionally evaluated with word similarity judgments, which we cannot obtain for word usage in the past.",
"Thus, evaluation of diachronic distributional models is a focus of research, such as work done by Hellrich and Hahn (2016) and Dubossarsky et al. (2017).",
"Our approach is to create a synthetic task to measure how well a model captures gradual semantic shifts.",
"We will also explore how we can use our model to predict the speed at which a word changes.",
"Our model is differentiable with respect to time, which gives us a natural way to measure the velocity, and thus speed, of a word at a given time.",
"We explore the capabilities and limitations of this approach.",
"We have developed the first continuous diachronic distributional model.",
"This is also the first diachronic distributional model using a deep neural network.",
"We have designed an evaluation of a model's ability to capture semantic shift that tracks gradual change.",
"We have used the derivatives of our model as a natural way to measure the speed of word use change.",
"Previous research in diachronic distributional models has applied a binning approach.",
"In this approach, researchers partition the data into bins based on time and train a synchronic distributional model on that bin's data (See Figure 1).",
"Several authors have used large bin models in their research, such as using five year sized bins (Kulkarni et al., 2015), decade sized bins (Gulordava and Baroni, 2011; Xu and Kemp, 2015; Jatowt and Duh, 2014; Hamilton et al., 2016a,b; Hellrich and Hahn, 2016, 2017), and era sized bins (Sagi et al., 2009, 2011).",
"The synchronic model for each time bin was trained independently of the others.",
"In order to get a fine grained representation of semantic shift, several authors have used small bins.",
"Kim et al. (2014) trained a synchronic model for each time bin.",
"To mitigate data issues, Kim et al. preinitialized a time bin's synchronic model with the Word Vectors on 1960s data gay homosexual lively gay homosexual lively gay homosexual lively . . . .",
"model from the previous time bin.",
"Bamler and Mandt (2017) developed a small bin probabilistic approach that used transition probabilities to lessen data issues.",
"They have two versions of their method.",
"The first version trains the distribution in each bin iteratively and the second version trains a joint distribution over all bins.",
"In this paper, we only explore the first version as the second version does not scale well to large vocabulary sizes.",
"Following Bamler and Mandt (2017), we compare to models used by Hamilton et al. (2016b), Kim et al. (2014), and the first version of Bamler and Mandt's's model.",
"There have been other models of lexical change beside distributional ones.",
"Topic modeling has been used to see how topics associated to a word have changed over time (Wijaya and Yeniterzi, 2011; Frermann and Lapata, 2016).",
"Sentiment analysis has been applied to determine how sentiments associated to a word have changed over time (Jatowt and Duh, 2014).",
"As mentioned in the introduction, it is difficult to quantitatively evaluate diachronic distributional models due to the lack of gold data.",
"Thus, previous research has attempted alternative routes to quantitatively evaluate their models.",
"One route 475 is to use intrinsic evaluations, such as measuring a trajectory's smoothness (Bamler and Mandt, 2017).",
"However, intrinsic measures do not directly measure semantic shift, which is the main use of diachronic distributional models.",
"Hamilton et al. (2016b) use attested shifts generated by historical linguists.",
"However, outside of first attestations, it is a difficult task for historical linguists themselves to accurately detail semantic shifts (Deo, 2015).",
"Additionally, the task used by Hamilton et al. is unusable for model comparison as all but one model had a 100% accuracy in this task.",
"Kulkarni et al. (2015) used a synthetic task to evaluate how well diachronic distributional models can detect semantic shift.",
"They took 20 copies of wikipedia where each is a synthetic version of a time bin and changed several words in the last 10 copies.",
"Models were then evaluated on their ability to detect when those words changed.",
"Our evaluation improves upon this one by having the test data be from a diachronic corpus and we model lexical change as a gradual process rather than searching for a single change point.",
"In this section, we describe the four diachronic distributional models that we analyze in our current work.",
"Three will be from previous research to be used as benchmarks.",
"Each of the four models we analyze are based on skip-gram with negative sampling ( SGNS ).",
"The difference between the four diachronic distributional models we analyze is how they apply SGNS to changes over time.",
"Skip-gram with negative sampling (SGNS) is a word embedding model that learns a latent representation of word usage (Mikolov et al., 2013).",
"For target words w and context words c , vector representations ~w and ~c are learned to best predict if c will be in context of w in a corpus.",
"k negative contexts are randomly sampled for each positive context.",
"Vector representations are computed by optimizing the following loss function: X ( w,c ) D [ log ( ( ~w ~c ))+ X c 1 ,...c k PD log (1 ( ~w ~c i ))] (1) where D is a list of target-context pairs extracted from the corpus, PD is the unigram distribution on the corpus, is the sigmoid function, and k is the number of negative samples.",
"The first diachronic distributional model we will consider is a large time bin model proposed by Hamilton et al. (2016b).",
"Here, time is partitioned into decades and an SGNS model is trained on each decade's worth of data.",
"We label this model LargeBin .",
"The second diachronic distributional model we will consider is a small time bin model proposed by Kim et al. (2014).",
"Here, time is partitioned into years and an SGNS model is trained on each year's worth of data.",
"Data issues are mitigated by preinitializing the model 1 for a given time bin with the vectors of the preceding time bin (Kim et al., 2014).",
"We label this model SmallBinPreInit .",
"The third diachronic distributional model we will consider comes from Bamler and Mandt (2017).",
"Bamler and Mandt take a probabilistic approach to modeling semantic change over time.",
"The idea is to transform the SGNS loss function into a probability distribution over the target and context vectors.",
"Then, to create a better diachronic distributional model, they apply priors to this distribution.",
"The first two priors are Gaussian distributions with mean zero on the vector variables to discourage the vectors from growing too large (Barkan, 2017).",
"More formally: P 1 ( ~w ) = N (0 , 1 I ) P 2 ( ~c ) = N (0 , 1 I ) (2) where 1 is a hyperparameter.",
"The last two priors are also Gaussian distributions on the vector variables.",
"The means are the vector representation from the previous bin.",
"The goal of this prior is to discourage a vector variable from deviating from the previous bin's vectors.",
"where 2 is a hyperparameter and w prev and c prev are the vectors from the previous time bin.",
"We are only exploring point models, thus we take the maximum a posteriori estimate of the 1 We do not perform preinitialization in LargeBin as large bin models are less susceptible to data issues.",
"joint distribution to recover the vectors for each time bin.",
"We apply a logarithm in constructing the estimate, which transforms the joint probability into the SGNS loss function with four regu-larizers (each one corresponding to a prior dis-tribution).",
"The prior distribution P 1 becomes P w W 1 2 || w || .",
"The prior distribution P 2 becomes P c C 1 2 || c || .",
"The prior distribution P 3 becomes P w W 2 2 || ~w w prev || .",
"The prior distribution P 4 becomes P c C 2 2 || ~c c prev || .",
"W and C are the sets of target and context words.",
"We label this model SmallBinReg .",
"Our model is a modification of the SGNS algorithm to accommodate a continuous time variable.",
"The original SGNS algorithm produces a target embedding ~w for target word w and a context embedding ~c for context word c .",
"Instead, we produce a differentiable function use W ( w, t ) that returns a target embedding for target word w at time t and a differentiable function use C ( c, t ) that produces a context embedding for context word c at time t .",
"Our model consists of three components.",
"One component takes time as its input and produces an embedding that characterizes that point in time (lower right).",
"The second component (lower left) takes a word as its input and produces a time-independent word embedding, which is then reshaped into a set of parameters that can modify the time embedding.",
"The third component (top) combines the time embedding and the word embedding.",
"The first component of our model is a two-layer feed-forward neural network with tanh activation functions.",
"These layers take a time t as input and produces a time embedding timevec ( t ) as output of those layers: h 1 = tanh ( M 1 t + b 1 ) timevec ( t ) = tanh ( M 2 h 1 + b 2 ) (4) where M 1 and M 2 are the weights of the first two layers and b 1 and b 2 are the biases.",
"To produce the input value t , a timepoint is scaled to a value between 0 and 1, where 0 corresponds to the year 1900, and 1 corresponds to 2009, the last year for which our corpus has data.",
"The second component incorporates word-specific information into our model.",
"For use W ( w, t ) , each target word w has a target vector representation ~w .",
"The vector ~w is then transformed into a linear transformation T rans w , which in the third component is applied to the time embedding timevec ( t ) .",
"We do this via a modified linear layer where the weights are a three dimensional tensor, the biases are a matrix and the output is a matrix: T rans w = T ~w + B (5) where T is the tensor acting as the weights and B is the matrix acting as the biases.",
"The third component combines the word-independent time embedding timevec ( t ) and the time-independent linear transformation T rans w together to produce the final result.",
"First, T rans w is applied to timevec ( t ) : h 3 = T rans w ( timevec ( t )) (6) Then, an additional linear layer is used as the output layer, taking h 3 as input: use W ( w, t ) = M 4 h 3 + b 4 (7) where M 4 and b 4 are the weights and biases of the output layer.",
"The above details the architecture of use W ( w, t ) .",
"The corresponding function use C ( c, t ) for context words has the same architecture as use W ( w, t ) and shares weights with use W ( w, t ) .",
"The only exception is that use C ( c, t ) uses a separate set of vectors ~c in the second component instead of sharing the target vectors ~w with use W ( w, t ) .",
"We train our model using a modified version of the SGNS loss function.",
"In particular, our positive samples are now triples ( w, c, t ) where w is a target word, c is a context word, and t is a time, instead of pairs ( w, c ) which are typically used in SGNS.",
"For each positive sample ( w, c, t ) , we sample k negative contexts from the unigram distribution, PD .",
"PD is trained from all contexts in the entire corpus and is time-independent.",
"Explicitly, the loss function is: X ( w,c,t ) D log ( ( use W ( w, t ) use C ( c, t )))+ kE c N PD [ log ( ( use W ( w, t ) use C ( c N , t )))] (8) 3.5 Training All models are trained on the same training data.",
"We used the English Fiction section of the Google Books ngram corpus (Lin et al., 2012).",
"We use the English fiction specifically, because it is less unbalanced than the full English section and less influenced by technical texts (Pechenick et al., 2015).",
"We only use the years 1900 to 2009 as there is limited data before 1900.",
"We converted the ngram data for this corpus into a set of (target word, context word, year, frequency) tuples.",
"The frequency is the expected number of times the target word-context word pair is sampled from that year's data using skip-gram.",
"Following Hamilton et al. (2016b), we use subsampling with t = 10 5 .",
"As the number of texts published since 1900 has increased five fold, we weigh the frequencies so that the sums across each year are equal.",
"For the binned models, we train each bin's synchronic model using the subset of the training data corresponding to that time bin.",
"For our model, we sample (training word, context word, year) triples from the entire training data as the year is an input to our function.",
"4.1 Synchronic Accuracy",
"Before we can evaluate the methods as models of diachronic semantics, we must first ensure that the methods model semantics accurately.",
"To do this, we follow Hamilton et al. (2016b) by performing the MEN word similarity task on vectors extracted from a fixed time point (Bruni et al., 2012).",
"The hope is that the word similarity predictions of a model at that point in time highly correlate with word similarity judgments in the MEN dataset.",
"For the binned models, we used the vectors from the bin best corresponding to 1995 to reflect the 1990s bin chosen by Hamilton et al. (2016b).",
"DiffTime represents time as a continuous variable, so we chose a time t that corresponds to the start of 1995.",
"The results of MEN word similarity tasks is in Table",
"1. All of the Spearman's values are comparable to those found in Levy and Goldberg (2014) and Hamilton et al. (2016b).",
"Thus, all of these models reflect human judgments comparable to synchronic models.",
"Thus, the predictions of the models correlate with human judgments.",
"The goal of creating diachronic distributional models is to help us understand how words change meaning over time.",
"To that end, we have created a synthetic task to compare models by how accurately they track semantic change.",
"Our task creates synthetic words that change between two senses over time via a sigmoidal path.",
"A sigmoidal path will allow us to emulate a word starting from one sense, shifting gradually to a second sense, then stabilizing on that second sense.",
"By using sigmoidal paths, we can explore how well a model can track words that have switched senses over time such as gay (lively to homosexual) and broadcast (scattering seeds to televising shows).",
"A similar task is used to evaluate word 478 sense disambiguation (Gale et al., 1992; Schutze, 1992).",
"The synthetic words are formed by a combination of two real words, e.g. banana and lobster are combined together to form banana lobster .",
"The real words are randomly sampled from two distinct semantic classes from the BLESS dataset (Baroni and Lenci, 2011).",
"We use BLESS classes so that we can capture how semantically similar a synthetic word is to its component words by comparing to other words in the same BLESS classes as the component word.",
"For example, we can capture how similar banana lobster is to banana by comparing banana lobster to words in the fruit BLESS class.",
"See Appendix B for preprocessing details.",
"We denote the synthetic words with r 1 r 2 where r 1 and r 2 are the component real words.",
"We also randomly generate the sigmoidal path by which a synthetic word changes from one sense to another.",
"For real words r 1 and r 2 , this path will be denoted shift ( t ; r 1 r 2 ) and is defined by the following equation: shift ( t ; r 1 r 2 ) = ( s ( t m )) (9) The value s is uniformly sampled from ( 1 . 0 110 , 10 . 0 110 ) and represents the steepness of the sigmoidal path.",
"The value m is uniformly sampled from { 1930 , . . . , 1980 } and represents the point where the synthetic word is equally both senses.",
"For our example synthetic word banana lobster , banana lobster can transition from meaning banana to meaning lobster via the sigmoidal path (0 .",
"05( t 1957)) where 1957 is the time where banana lobster is equally banana and lobster and 0.05 represents how gradually banana lobster shifts senses.",
"We then use shift ( t ; r 1 r 2 ) to integrate r 1 r 2 into the real diachronic corpus data.",
"Our training data is a set of (target word, context word, year, frequency) tuples extracted from a diachronic corpus (see 3.5).",
"For every tuple where r 1 is the target word, we replace the target word with r 1 r 2 and we multiply the frequency by shift ( t ; r 1 r 2 ) .",
"For every tuple where r 2 is the target word, we replace the target word with r 1 r 2 and we multiply the frequency by 1 shift ( t ; r 1 r 2 ) .",
"In other words, in the modified corpus, r 1 r 2 has shift ( t ; r 1 r 2 ) percent of r 1 's contexts at time t and 1 shift ( t ; r 1 r 2 ) percent of r 2 's contexts at time t .",
"We train a model mod on this modified training data.",
"This provides a representation for r 1 r 2 over time.",
"We can capture how much a model predicts r 1 r 2 is more semantically similar to r 1 than r 2 by comparing mod 's representation of r 1 r 2 to words in the same semantic category as r 1 and r 2 .",
"We use BLESS classes as our notion of semantic category.",
"If cls 1 is the BLESS class of r 1 and cls 2 is the BLESS class of r 2 , then mod 's prediction for how much more similar r 1 r 2 is to r 1 than r 2 , rec ( t ; r 1 r 2 , mod ) , is defined as follows: rec ( t ; r 1 r 2 , mod ) = 1 | cls 1 | X r 0 1 cls 1 sim mod ( r 1 r 2 , r 0 1 , t ) 1 | cls 2 | X r 0 2 cls 2 sim mod ( r 1 r 2 , r 0 2 , t ) (10) sim mod ( r 1 r 2 , r 0 1 , t ) is the cosine similarity between mod 's word vector for r 1 r 2 at time t and mod 's word vector for r 0 1 at time t .",
"To evaluate a model in its ability to capture semantic shift, we use the mean sum of squares error (MSSE) between rec ( t ; r 1 r 2 , mod ) and shift ( t ; r 1 r 2 ) across all synthetic words.",
"The function rec ( t ; r 1 r 2 , mod ) is model mod 's prediction of how much more similar r 1 r 2 is to r 1 than r 2 .",
"The gold value of rec ( t ; r 1 r 2 , mod ) would then be the sigmoidal path that defines how r 1 r 2 semantically shifts from r 1 to r 2 over time, shift ( t ; r 1 r 2 ) .",
"To evaluate how accurately mod predicted the semantic trajectory of r 1 r 2 , we calculate the mean squared error between rec ( t ; r 1 r 2 , mod ) and shift ( t ; r 1 r 2 ) as follows: 479 2009 X t =1900 ( rec ( t ; r 1 r 2 , mod ) shift ( t ; r 1 r 2 )) 2 (11) As rec ( t ; r 1 r 2 , mod ) and shift ( t ; r 1 r 2 ) have different scales, we Z-scale both the rec ( t ; r 1 r 2 , mod ) values and the shift ( t ; r 1 r 2 ) values before calculating the mean squared error.",
"We use three sets of 15 synthetic words and the average is calculated over all 45 words.",
"The synthetic words and BLESS classes we used are contained in the supplementary material.",
"The results are in Table",
"2. The column AMSE is MSSE when all years are taken into account.",
"Kim et al. (2014) noted that small bin models require an initialization period, so the column AMSE (1950-) is MSSE when only years 1950 to 2009 are taken into account and the years 1900 to 1949 are used as the initialization period.",
"From the table, we see our model outperforms the three benchmark models in both cases.",
"Using a paired t-test, we found that the reduction in MSSE between our model and the benchmark models are statistically significant.",
"In Figure 3, we plot shift ( t ; r 1 r 2 ) and rec ( t ; r 1 r 2 , mod ) for the synthetic word pistol elm .",
"Each method has a subgraph.",
"The predictions of the large bin model LargeBin appear as a step function with large steps (top left graph).",
"These large steps seem to cause the predicted shift (blue curve) to poorly correlate with the gold shift (red curve).",
"Next, we consider the small bin models SmallBinPreInit (top right graph) and SmallBinReg (bottom left graph).",
"Both predicted shifts have an initial portion that poorly fits the generated shift (between 1900 and 1950).",
"From Kim et al. (2014), it takes several iterations for small bin models to stabilize due to each bin being fed limited data.",
"Additionally, there are fluctuations in the graphs of the predicted shift, which we attribute to the high variance of data per bin.",
"In contrast to the other models, our predicted shift tightly fits the gold shift (bottom right graph).",
"Although this evaluation provides useful information on the quality of an diachronic distributional model, it has some weaknesses.",
"The first is that it is a synthetic task that operates on synthetic words.",
"Thus, we have limited ability to understand how well a model will perform on real world data.",
"Second, we only generate words that shift from one sense to another.",
"This fails to account for other common changes, such as gaining/losing senses and narrowing/broadening.",
"Finally, by using a sigmoidal function to generate how words change meaning, we may have privileged continuous models that incorporate a sigmoidal function in their architecture.",
"We are working towards improving this evaluation to remove these issues.",
"In this section, we evaluate our model's ability to measure the speed at which a word is changing.",
"Our model is differentiable with respect to time.",
"Thus, we can get the derivative of use W ( w, t ) with respect to t to model how word w is changing usage at time t .",
"We l 2 -normalize use W ( w, t ) beforehand to reduce frequency effects.",
"We then get the magnitude of this normalized derivative to model the speed at which a word is changing at a given time.",
"We explore the connection between speed and the nearest neighbors to a word in Figure",
"4. First, we use apple as a baseline for discussion.",
"We chose apple , because the meaning of the word has remained relatively stable throughout the 1900s.",
"With apple , we see a low speed over time and a consistency in the cosine similarity to apple 's nearest neighbors.",
"While it is true that apple has other meanings beyond the fruit, such as referring to Apple Inc., those meanings are much rarer, especially in the fiction corpus we use.",
"neighbors.",
"This makes sense as gay is well established to have experienced a drastic sense change in the mid to late 1900s (Harper, 2014).",
"Next, we explore the word mail .",
"The word mail has a moderately high speed.",
"This may be reflective of the fact that there have been incredible changes in the medium by which we send mail, e.g. changing from cables to email.",
"A possible reason for the speed only being moderately high is that, even though the medium by which we send mail has changed, many of the same uses of mail, e.g. sending, receiving, opening, etc., remain the same.",
"We see this reflected in the nearest neighbors as well as mail shifts from a high similarity to cable to a high similarity to e (as in email), yet mail is consistently similar to postal and stationery .",
"The next word we will explore is the word canadian .",
"We chose this word as we were surprised to find that canadian has one of the fastest speeds in the 1930s to 1940s.",
"The nearest neighbors to canadian have shifted from geographic terms like port and railhead to civil terms like federal and national .",
"In further analysis, we discovered that this may be reflective of a larger push to form a Canadian identity in the early 1900s (Francis, 1997).",
"The nearest neighbors to canadian may reflect the change from being a part of the British Empire to having its own unique national identity.",
"The final word we will explore is cell .",
"The word cell also has a high speed over time.",
"However, there is a spike in the speed during the 1980s.",
"Analyzing the nearest neighbors we see a rapid rise in similarity to pager and handset , which indicates that this spike may be related to the rapid rise of cell phone use.",
"Additionally, this example demonstrates a weakness in our approach.",
"Our graph shows that our model predicts that the word cell gradually changed meaning over time and that cell started changing meaning much earlier than expected.",
"This prediction error comes from the smoothing out of the output caused by representing time as a continuous variable.",
"Even though we are able to extract interesting insights from the speed of word use change, Figure 4 also exhibits some limitations.",
"In particu-481 lar, most words have a sharp rise in speed in the 1930s and a steep decline in speed in the 1980s.",
"We believe this is an artifact of our representation of word use as a function of time as there is a single time vector that influences all words.",
"In the future, we will explore model variants to address this.",
"We can inspect h 1 , the first layer in the time subnetwork, to gain further understanding of what our model is doing.",
"We do this by analyzing the time points where a node in h 1 is zero.",
"As the activation function in h 1 is tanh, a node in h 1 switches from positive to negative (or vice versa) at the time points where it is zero.",
"Thus, the time points where a node is zero should indicate barriers between time periods.",
"We visualize the time points where a node is zero in Figure",
"5. We see that we have a fairly even distribution of points until the 1940s, a large burst of points in the 1950s-1960s, and two points in the 1980s.",
"Thus, there are many time periods before the 1940s (which may be caused by noisiness of the data in the first half of the century), a big transition between time periods in the 1950s-1960s, and a transition between time periods in the 1980s.",
"Thus, these are time points that the model perceives as having increased semantic change.",
"However, there is a weakness to this analysis.",
"Only 16% of the 100 nodes in h 1 are zero for time points between 1900 and 2009.",
"Thus, a vast majority of nodes do not correspond to transitions between time periods.",
"Diachronic distributional models are a helpful tool in studying semantic shift.",
"In this paper, we introduced our model of diachronic distributional semantics.",
"Our model incorporates two hypotheses that better help the model capture how words change usage over time.",
"The first hypothesis is that semantic change is gradual and the second hypothesis is that words can change usage due to common causes.",
"Additionally, we have developed a novel synthetic task to evaluate how accurately a model tracks the semantic shift of a word across time.",
"This task directly measures semantic shift, is quantifiable, allows model comparison, and focuses on the trajectory of a word over time.",
"We have also used the fact that our model is differentiable to create a measure of the speed at which a word is changing.",
"We then explored this measure's capabilities and limitations.",
"We would like to thank the University of Texas Natural Language Learning reading group as well as the reviewers for their helpful suggestions.",
"This research was supported by the NSF grant IIS 1523637, and by a grant from the Morris Memorial Trust Fund of the New York Community Trust.",
"We acknowledge the Texas Advanced Computing Center for providing grid resources that contributed to these results."
] | [
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"method",
"objective",
"objective",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"other",
"other",
"other"
] |
[
"We introduce entity post-modifier generation as an instance of a collaborative writing task.",
"Given a sentence about a target entity, the task is to automatically generate a post-modifier phrase that provides contextually relevant information about the entity.",
"For example, for the sentence, Barack Obama, , supported the #MeToo movement., the phrase a father of two girls is a contextually relevant post-modifier.",
"To this end, we build PoMo , a post-modifier dataset created automatically from news articles reflecting a journalistic need for incorporating entity information that is relevant to a particular news event.",
"PoMo consists of more than 231K sentences with post-modifiers and associated facts extracted from Wikidata for around 57K unique entities.",
"We use crowdsourcing to show that modeling contextual relevance is necessary for accurate post-modifier generation.",
"We adapt a number of existing generation approaches as baselines for this dataset.",
"Our results show there is large room for improvement in terms of both identifying relevant facts to include (knowing which claims are relevant gives a > 20% improvement in BLEU score), and generating appropriate post-modifier text for the context (providing relevant claims is not sufficient for accurate generation).",
"We conduct an error analysis that suggests promising directions for future research.",
"The goal of machine-in-the-loop writing systems is to assist human writers by directly augmenting their text.",
"Examples include systems that refine human text for grammar (Rao and Tetreault, 2018), collaborate on story plot generation systems (Clark et al., 2018; Yu and Riedl, 2012), or modify the content for style (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018).",
"In this paper, we introduce Professor Melman 's arguments appealed to a wide spectrum, attracting unions like the United Automobile Workers and the Machinists Union ...",
"post-modifier generation as an instance of such an assistive writing task in the news domain.",
"Journalists use post-modifiers to introduce background information about entities discussed in news articles.",
"To write these post-modifiers journalists often need to look up relevant facts about entities.",
"A post-modifier generation system can be seen as a collaborative assistant that automatically finds relevant facts and inserts a small text fragment that augments the text produced by the human writer.",
"Post-modifier generation is a contextual data-to-text generation problem, where the data is the set of known facts about the target entity, and the text to be generated is a post-modifier that is relevant to the rest of the information conveyed in the text.",
"Figure 1 shows an example.",
"Given a sentence about the anti-war resistance work of Noam Chomsky, the target entity, and a set of known facts about him, the task is to generate a post-modifier that introduces Chomsky as a professor and mentions his background as an anti-war activist.",
"An effective post-modifier generation system must:",
"(i) select suitable facts about the entity given the text, and",
"(ii) produce text that covers these facts in a way that fits in with the rest of the text.",
"We introduce PoMo , an automatically generated dataset for developing post-modifier generation systems.",
"1 PoMo is a collection of sentences that contain entity post-modifiers, along with a collection of facts about the entities obtained from Wikidata (Vrandecic and Krotzsch, 2014).",
"We use a small number of dependency patterns to automatically identify and extract post-modifiers of entities in sentences.",
"We then link the extracted entities with the entries in Wikidata.",
"The resulting dataset has 231,057 instances covering 57,966 unique entities.",
"Our analysis show that the post-modifiers often combine multiple facts and are specific to the sentential context.",
"We conduct two sets of experiments that highlight the challenges in post-modifier generation.",
"(i) Claim Selection: Given an input sentence, the first step in generating a post-modifier is to fig-ure out which facts to use.",
"We formulate this as a distantly-supervised ranking problem, where we train neural models that learn to identify relevant claims for a given sentence.",
"These claim ranking models perform well when predicting the relevance of coarse-grained facts (e.g. occupation), but fare poorly when predicting finer-grained facts (e.g. place of birth).",
"(ii) Generation: We adapt recent sequence-to-sequence generation models for this task.",
"Results show that generation remains a challenge.",
"Even though our automatic claim ranking does not improve generation, further experiments with oracle selected claims demonstrate that when relevant claims are known, the models can generate post-modifiers which humans deem comparable in quality to ones written by professional journalists.",
"In summary, the main contributions of this work are: 1) a data-to-text problem that introduces new challenges, 2) an automated dataset creation pipeline and a large resulting dataset, 3) a crowdsourcing study that verifies the contextual relevance of post-modifiers, and 4) a characterization of the difficulty of the task via performance analysis of numerous baselines.",
"Post-modifier generation can be formulated as a data-to-text generation problem.",
"The input is text mentioning a target entity and a set of known facts about the entity.",
"The output is a phrase that:",
"(i) fits as a post-modifier of the target entity mentioned in the input text, and",
"(ii) conveys a subset of facts relevant to the context of the input text.",
"Figure 1 shows an example for the target entity Noam Chomsky .",
"The input includes a sentence mentioning Chomsky's work on mobilizing antiwar groups along with its surrounding context, and a listing of all facts about Chomsky that are available in Wikidata.",
"Given these inputs, the task is to output a post-modifier phrase that conveys facts about Chomsky that fit within the sentence.",
"In this example the post-modifier conveys both general background information about Chomsky (his oc-cupation), and specific information relevant to the context of the sentence (being an anti-war activist).",
"This task can be seen as an instance of collaborative writing, where the journalist writes text about specific news events involving entities, and the generation system assists the journalist by inserting new text that augments the story.",
"Given a large collection of news articles, we can automatically create training data for such systems by removing the pieces of text that we want the assistant to generate.",
"This requires reliable ways to identify text to remove and sources of information that can be used to generate the text.",
"Here we describe a pipeline for generating such a dataset for our task.",
"We construct the PoMo dataset using three different news corpora: NYTimes (Sandhaus, 2008), CNN and DailyMail (Hermann et al., 2015).",
"We use Wikidata to collect facts about entities.",
"2 2 Wikidata dump from https://www.wikidata.",
"org/wiki/Wikidata:Database_download (Dump date: 2018/06/25) 2.1.1 Post-Modifier and Entity Identification We use Stanford CoreNLP (Manning et al., 2014) to parse each sentence in the news articles and to identify named entities.",
"We extract post-modifiers by finding noun phrases that share an appos relation 3 with any recognized named entity in the sentence.",
"In this work, we only consider post-modifiers for people .",
"In the future, we plan to expand PoMo to include more post-modifiers for other targets, such as organizations.",
"We extract only one such pair from a given sentence to reduce the possible noise in the extraction process.",
"In our running example from Figure 1, Noam Chomsky is recognized as a person entity.",
"The word professor is an appositive dependency of the word Chomsky and therefore, we extract the NP the Massachusetts Institute of Technology professor and antiwar activist which includes the word professor as a post-modifier for the target entity Noam Chomsky .",
"Wikidata provides information about entities in the form of key-value pairs that are called claims .",
"To collect the facts about a target entity, we need to link the target to a specific entity in Wikidata.",
"We first search through Wikidata labels and aliases to find candidates with the same name as the target.",
"We sort the candidates based on the number of claims that have a significant word overlap with the extracted post-modifier.",
"We link the entity to the highest ranked candidate whose claims cover at least 30% of the non stop words in the post-modifier.",
"If such a candidate is found we record the claims that overlap with the post-modifier.",
"If no such candidate is found then we discard the entity.",
"We evaluate this simple heuristic by comparing the results to using an off-the-shelf entity linking system AIDA-light (Nguyen et al., 2014) and show the results in Table 2. We find that AIDA-light agrees with our entity linking in 91.2% of the cases.",
"AIDA-light is able to link 94.3% of the entities we found from NYTimes, but for CNN and DailyMail, it links only 87.0% and 86.34% of the entities, respectively.",
"This decrease is likely due to the fact that AIDA-light was last updated in 2014 while the CNN/DailyMail datasets contain articles collected until the end of April 2015.",
"On the other hand, NYTimes articles range from 1987 to 2007.",
"Our 3 An appos itional modifier of an NP is another NP immediately to the right that defines or modifies the NP.",
"Table 1 shows the distribution of the data sources over train, validation, and test sets.",
"All splits maintain the relative distributions of the data sources to prevent stylistic mismatches from influencing generation.",
"We also ensure that there is no entity overlap among the splits.",
"Within the NYTimes data, we verify that the distribution over years between 1987 and 2007 is also similar over the sets.",
"Distribution of Post-Modifiers and Entities Figure 2a shows the distribution of post-modifier lengths in terms of token counts.",
"Most post-modifiers are three to eight words long, and about 17 .",
"3% are even longer.",
"Figure 2b shows an estimate of the number of relevant facts covered by the post-modifiers; this estimate uses the number of claims that overlap with the post-modifier via heuristic matching.",
"More than half of the post-modifiers convey two or more facts.",
"About 11 .",
"4% convey five or more facts.",
"These results suggest that generating post-modifiers requires composing together multiple relevant facts.",
"Table 3 lists the most frequent types of facts used in the post-modifiers in our dataset.",
"Most relate to generic biographical information such as the entity's occupation, organizations they belong to, place of birth, etc.",
"Here again we see a range of types of information being conveyed which is likely to present a challenge for generation systems.",
"The dataset also covers a wide variety of entity types.",
"We cluster the target entities by their occupation listed in Wikidata.",
"We also use Word-Net (Miller, 1995) to traverse the hypernyms of the words to find frequent ones.",
"Then, we manually select the top ten occupation types.",
"Any entity that i n s t a n ce c o un t 0K 20K 40K number of post-modifier tokens 1 2 3 4 5 6 7 8 9 10+",
"(a) Histogram of the token counts of the post-modifiers.",
"Majority of the post-modifiers (171K instances, 74.14%) have 3 to 8 tokens.",
"Average is 5.8 tokens.",
"(b) Number of relevant facts per instance in the dataset.",
"More than a half of the post-modifiers are related to two or more facts.",
"(c) Histogram of the scores for postmodifiers, averaged over three annotations.",
"The distribution of ratings for true and other post-modifiers.",
"does not belong to the top ten is assigned to a single other group.",
"The resulting distribution is shown in Table 4. Quality of Post-Modifiers We conduct a crowdsourcing study to understand how often the post-modifiers are specific to the particular context.",
"For each (entity, context, post-modifier) triple in the validation set, we create multiple alternative post-modifiers by randomly choosing up to ten other post-modifiers that are found in some other sentences for the same entity.",
"Crowd workers rate the quality of these post-modifiers.",
"Figure 3 shows a screenshot of a task given to crowd workers.",
"If the true post-modifier, the one that is actually used in the context, is rated the highest compared to the rest, then we assume the post-modifier is indeed specific to the context.",
"On the other hand, if the crowd workers rate multiple other post-modifiers as good fits for the context, then the true post-modifier is not context specific.",
"Figure 2c shows the distribution of ratings for true and other post-modifiers.",
"The true post-modifiers tend to be rated very good or good more often than the other post-modifiers.",
"One of the key challenges of generating post-modifiers is to identify the claims about an entity that are relevant to the given context.",
"In this section, we explore methods for solving this task.",
"Most-Common Claim This model employs a simple frequency heuristic: rank claims by the frequency of their types in the training post-modifiers (e.g. as in the order given in Table 3) and deem the top n claims in this ranking as relevant.",
"Neural Baselines We use two neural baselines with the following architecture.",
"Word embeddings are used to represent words in the context (e.g. current and previous sentence) and claims.",
"The sequences of embeddings are then fed through 2-layer LSTM's (Hochreiter and Schmidhuber, 1997) to obtain separate representations of the context and claims.",
"These representations are subsequently concatenated together and fed through a fully-connected layer with sigmoid activation, producing a scalar value for each claim representing the probability that it is relevant.",
"We use this model in two ways: as a classifier, and as a ranking model.",
"When used as a classifier, any claim whose score exceeds a threshold is predicted to be relevant.",
"When used as a ranking model, the top n highest-scoring claims are predicted to be relevant.",
"We train our baselines on the PoMo dataset, using the claims detected during dataset collection as a (distant) source of supervision.",
"Precision, recall, and F 1 score are used to evaluate model performance.",
"Model hyperparameters are chosen using (coarse) grid search to maximize F 1 score on the validation set.",
"The neural baselines use a vocabulary size of 50,000, 100-dimensional word embeddings, and 256 hidden units in the LSTM layers.",
"Dropout (Srivastava et al., 2014) is applied between the LSTM layers with a 0 .",
"5 keep probability.",
"The neural classifier uses threshold = 0 .",
"37 .",
"We find the optimal value of n is 4 for the most-common claims model and 2 for the neural ranker.",
"Quantitative results are provided in Table 5. Both neural baselines perform considerably better than the most-common claims model.",
"This indicates that the provided contexts and claim values contain useful information for claim selection that goes beyond the information captured by global statistics of the dataset alone.",
"We additionally observe that the ranking-based approach outperforms the classification-based approach in terms of both Prec.",
"precision and F 1 score, while having only slightly worse recall.",
"To better understand the cases where the neural models fail and succeed, we examine the distribution of F 1 scores over the top 15 fact types (see Table 6).",
"Interestingly, when ranked by F 1 score we observe that fact types fall naturally into topically related groups: 1. position / occupation-related facts: position played, position held, occupation 2. membership-related facts: member of political party, member of, member of sports team 3. achievement-related facts: award received, nominated for 4. location-related facts: country of citizenship, place of death, place of birth With the exception of employer , the overarching trend is that the model identifies the relevance of coarse-grained claims better than fine-grained claims (e.g occupations, political parties, and sports positions are much more likely to be shared between entities than birth and death places).",
"This suggests that developing better methods for determining the relevance of fine-grained claims is a promising avenue for future research on this task.",
"At its core, post-modifier generation involves producing a variable-length sequence output conditioned on two variable-length inputs: the words in the current and previous sentence (e.g. the con-text), and the collection of claims about the entity.",
"Accordingly, the sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) is a natural fit for the task we use it as the foundation for all of our baseline models.",
"Since research has shown that attention (Bahdanau et al., 2015) and copy mechanisms (Gu et al., 2016) consistently improve seq2seq model performance, we use these in our baselines as well.",
"One choice that must be made when using this framework is how to combine the different inputs.",
"The default approach we use is to concatenate the claim and context into a linear sequence of tokens during preprocessing (shown in Figure 4a).",
"We also experiment with encoding the claims and each of the context sentences separately, then concatenating their vector representations before decoding.",
"We refer to this as the tri-encoder approach (shown in Figure 4b).",
"As discussed earlier, selecting relevant claims is crucial to generating good post-modifiers.",
"One way to incorporate claim selection is to use our baseline models from Section 3 to cut out irrelevant claims from the input before feeding them to the encoder (e.g. performing hard claim selection).",
"This pipelined approach is not differentiable, and can suffer from cascading errors.",
"An alternative way is to use the model's attention mechanism as a form of soft claim selection that attends only to the relevant claims.",
"The drawback of this approach is that it does not make use of the available claim annotations, which are an important source of supervision.",
"Building on these observations, we propose an end-to-end claim selection model which incorporates an additional term to the loss function that encourages the claim-level attention probabilities to be higher for the identified relevant claims as shown in Figure 4c.",
"The process for computing this loss term works as follows.",
"We begin by summing together attention scores for tokens within claims to obtain a claim-level score.",
"These scores Prev.",
"(c) End-to-end claim selection model Figure 4: PoMo Models for post-modifier generation.",
"are then fed through a sigmoid activation function to obtain a soft claim selection probability.",
"For each claim, we measure the binary cross entropy between the predicted selection probability and a binary variable indicating whether or not the claim was identified as relevant.",
"The final loss term is the average of these binary cross entropies.",
"Note that we do not use a copy mechanism in this model to avoid double-counting (since relevant claims were identified using word overlap).",
"We experiment with two types of encoder/decoder modules: bidirectional LSTMs, and transformers",
"transformers (Vaswani et al., 2017).",
"We use a vocabulary of size 50K, truncate the maximum input sequence length to 500, and use a batch size of 32 in all experiments.",
"To help models distinguish between claims and context we demarcate claim fields with special <claim> , <key> , and <value> tokens.",
"We train all the models for 150k steps, and evaluate on the validation dataset every 10k steps.",
"Evaluation is performed using the BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) translation metrics, and Precision, Recall and F 1 score of the predicted bag-of-words (omitting stop-words).",
"The model with the highest F 1 score on the validation set is used during test time.",
"For the bidirectional LSTM, we use 2 hidden layers with 512 hidden units, 500-dimensional word embeddings, and apply dropout between layers with a keep probability of 0.7.",
"Models are trained using stochastic gradient descent with a learning rate of 1.0.",
"For the transformer model, we use 4 attention heads, 4 layers of transformer blocks with 64 hidden units for the encoder and the decoder, a penultimate hidden layer with 256 units, and 64-dimensional word embeddings.",
"Transformer models are trained using Adam (Kingma and Ba, 2015) with an initial learning rate of 2.0, and a label smoothing (Szegedy et al., 2016) factor of 0.1 when calculating loss.",
"We perform a variety of experiments, the results of which are displayed in Table 7. In this table, Transformer and BiLSTM refer to models trained using the default approach to combining context and claims, while Tri-encoder refers to a BiLSTM model trained using the approach described in 4.1 (we do not train a transformer version since its performance is lackluster).",
"Here are detailed descriptions of the experiments performed in each section: All Claims : Results for vanilla seq2seq models.",
"Oracle : Hard claim selection is performed using the oracle relevant claims.",
"Neural Ranker ( n = 10 ) : Hard claim selection is performed using the top-10 claims returned by the neural ranker baseline.",
"End-to-End Claim Selection : Results for the end-to-end claim selection model.",
"In order to understand the relative contribution of the different inputs, we also include results for the BiLSTM model trained using either only the claims, or only the context sentences.",
"In Figure 5 and 6, we show the performances by post-modifier and sentence lengths to examine the impact of the such variables.",
"Discussion of Quantitative Results Our results contain a few key findings.",
"The first is that knowing the relevant claims is critical to obtaining state-of-the-art performance; even knowing only oracle claims is sufficient to perform better than all of the other baselines, although there is a still a large improvement when context is additionally provided.",
"However, model-based approaches for claim selection do not seem to help: hard claim selection using the neural ranker performs just as well as the vanilla models, and our proposed approach for end-to-end claim selection has a negative impact.",
"This motivates the need for more effective methods of claim selection.",
"The decreasing performances of the BiLSTM seq2seq models by the increasing target post-modifier and sentence lengths show the difficulty of generating long texts and handling long input data.",
"Finally, we observe that the transformer-based seq2seq models are not particularly well-suited to this task.",
"In all cases their performance is inferior to the BiLSTM-based approaches.",
"Large-scale, pre-trained transformer-based language models, such as GPT-2 (Radford et al., 2019) and BERT (Devlin et al., 2018), might be an interesting addition to the baselines, by framing the task as filling in the blanks for post-modifiers.",
"However, when restricted to approaches that only use our dataset for training, we expect those based on language models to struggle due to the separation of entities among train, validation, and test.",
"Qualitative Analysis A cursory examination of model predictions (see Table 8 for examples) provides insight into why post-modifier generation is a challenging task.",
"One issue that consistently appears is temporal inconsistency between the target and generated post-modifiers.",
"That is, the model may make an error since it is unaware of the time period that the article is written in (and also may not be aware of the periods of time for which a claim are true).",
"For example, in the first instance in Table 8 the Oracle model predicts an almost correct post-modifier but misses the fact that Kenneth Clarke is a former Chancellor of the Exchequer.",
"Another apparent issue is that models tend to generate shorter post-modifiers than humans.",
"As is indicated in Figure 2a the post-modifiers in the dataset on average contain 5.8 tokens, whereas generated post-modifiers have only 3.8.",
"Lastly, we Prec.",
"observe that our quantitative evaluation metrics can be too strict.",
"Take for example the second instance in Table 8. Here the content of the target and generated post-modifiers is almost exactly the same, however our metrics would give very low scores due to low overlap.",
"Human Evaluation We additionally evaluate the generated post-modifiers by performing a human evaluation using Amazon Mechanical Turk.",
"We randomly select 500 instances from test set and show crowdworkers the sentence context, along with the true post-modifier and a generated one.",
"For each instance, workers are asked to select the better phrase, or indicate that the two phrases are of equal quality.",
"For the Oracle BiLSTM model, the true post-modifiers are preferred 46% of the time, while generated post-modifiers are preferred 43.2% of the time.",
"For the Neural Ranker ( n = 10 ) BiLSTM model, true post-modifiers are favored much more (57.60%) than the generated ones (20%).",
"Consistent with our quantitative results, we see that claim selection is a crucial factor in this task.",
"We also observe a few trends in the results.",
"People tend to prefer generated post-modifiers over the ones written by professional journalists when they are shorter and to use more general terms without elaborating too much about the entity.",
"In contrast, longer All Claims BiLSTM Neural Ranker BiLSTM Oracle BiLSTM 0 50 100 post-modifier lengths 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20+",
"and more detailed human written post-modifiers are preferred when they are especially relevant to the rest of the sentence.",
"There is a large body of previous work on claim selection (Kukich, 1983; Duboue and McKeown, 2003; Reiter and Dale, 1997; Tanaka-Ishii et al., 1998; Barzilay and Lapata, 2005) and language generation from structured data (Reiter et al., 2005; Goldberg et al., 1994).",
"Initially, hand-crafted grammars were employed for language generation, which later evolved to statistical machine translation style models (Wong and Mooney, 2007) or PCFG based models (Belz, 2008).",
"More recently, the focus has shifted to learning both fact selection and language generation jointly (Liang et al., 2009; Angeli et al., 2010; Kim and Mooney, 2010; Lu and Ng, 2011; Konstas and Lapata, 2013).",
"Modern approaches employ neural networks to solve this problem end-to-end.",
"Mei et al. (2016) utilize an encoder-decoder framework to map weather conditions to a weather forecast.",
"Ahn et al. (2016) and Yang et al. (2017) introduce a new class of language models which are capable of entity co-reference and copying facts from an external knowledge base.",
"Building upon these models, Wiseman et al. (2017) introduce an auxiliary reconstruction loss which use the hidden states of the decoder to recover the facts used to generate the text.",
"Liu et al. (2018) introduce a hierarchical attention model for fact selection, with the higher level focusing on which records in the table to select and the lower level focusing on which cells in a particular row to pay attention to.",
"In order to train complex neural models, the quest for larger datasets has become paramount.",
"Lebret et al. (2016) introduce the WikiBio dataset containing Wikipedia articles of famous people and the corresponding infobox tables.",
"One drawback of this dataset is that it is easily solved using template-based models.",
"To address this issue, Wiseman et al. (2017) introduce the ROTOWire dataset, which contains summaries of basketball games that are very long and syntactically diverse.",
"A comprehensive list of datasets is provided in Appendix B. 6 Conclusions and Future Work Inspired by recent work on collaborative writing and data-to-text generation, we introduce post-modifier generation, a task that bridges the gap between these two fields.",
"The task is to generate a factual description of an entity which fits within the context of a human written sentence.",
"In order to promote research on this task we present PoMo , a large dataset of automatically extracted post-modifiers from news articles, aligned to the Wikidata knowledge graph.",
"We study the performance of numerous strong baseline models on this dataset, with a particular focus on the specific sub-task of claim selection.",
"Our results demonstrate that when relevant claims are known, sequence-to-sequence models are capable of generating post-modifiers which humans deem comparable in quality to ones written by professional journalists.",
"However, according to both quantitative metrics and human judgment, performance is much lower when models must determine for themselves which claims are relevant.",
"These experiments suggest plausible pathways to achieving human-level performance on this task that are both challenging and interesting problems for future research.",
"We would like to thank the Toyota Technological Institute at Chicago for hosting the Workshop on Collaborative and Knowledge-Backed Language Generation which initiated the efforts for this project.",
"The authors would also like to thank David Yarowsky, Jason Eisner, Kevin Duh, Kyle Gorman, and Philipp Koehn for feedback on early ideas for post-modifier generation."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Reference-free evaluation has the potential to make machine translation evaluation substantially more scalable, allowing us to pivot easily to new languages or domains.",
"It has been recently shown that the probabilities given by a large, multilingual model can achieve state of the art results when used as a reference-free metric.",
"We experiment with various modifica-tions to this model, and demonstrate that by scaling it up we can match the performance of BLEU.",
"We analyze various potential weaknesses of the approach, and find that it is surprisingly robust and likely to offer reasonable performance across a broad spectrum of domains and different system qualities.",
"Traditional automatic metrics for machine translation (MT), such as BLEU (Papineni et al., 2002), score MT output by comparing it to one or more reference translations.",
"This has several disadvantages.",
"First, high-quality reference translations are expensive to create.",
"This means that in practice, evaluation is usually carried out with relatively small, carefully curated test corpora.",
"The need for careful preparation limits the number of domains for which an MT system can be conveniently assessed, and small test-set sizes can make it difficult to draw robust conclusions (Card et al., 2020).",
"Second, enshrining ground truth in a small number of references (usually just one) is inherently problematic, since valid translations can vary along many dimensions; Freitag et al. (2020b) demonstrate that different (correct) references for the same test set can result in different system rankings according to the same reference-based metric.",
"Finally, scoring the similarity between an MT hypothesis and a reference translation involves recognizing the extent to which they are mutual paraphrases.",
"When gross discrepancies exist, this is a relatively easy problem for which surface-level metrics can provide a reliable signal, but capturing the subtle errors typical of high-quality MT is more difficult, and it is not clear whether it is substantially easier than scoring the similarity between texts in different languages.",
"These problems can be avoided by looking only at the source text when assessing MT output.",
"There is evidence that this is the best practice for human evaluation (Toral, 2020).",
"Moreover, it has recently been investigated for automatic metrics as well (Yankovskaya et al., 2019; Lo, 2019; Zhao et al., 2020; Ma et al., 2019).",
"Such reference-free metrics are flexible and scalable, but since they are essentially performing the same task as an MT model, they raise a circularity concern: if we can reliably score MT output, why wouldn't we use the scoring model to produce better output?",
"One answer to this is practical: the scoring model might be too large to deploy, or it might not easily support efficient inference (Yu et al., 2016).",
"A more interesting answer is that a scoring model could be set up to provide a signal that is complementary to the systems under evaluation.",
"That is, it might be capable of correctly ranking competing MT hypotheses even when its own preferred hypothesis is worse on average than those of the systems it is evaluating.",
"In our experiments we find that this can indeed be the case.",
"In recent work, Thompson and Post (2020) showed that a single multilingual MT model trained on 39 languages can achieve excellent paraphrase recognition when used in zero-shot mode to compare MT output with reference sentences in the same language.",
"On the WMT 2019 metrics task, their method (Prism) beat or tied all previous reference-based metrics on all languages.",
"1 Although it was not the main focus of their work, Prism achieved a new state-of-the-art as a reference-free metric, simply scoring target given source text using an MT model, in a post-competition comparison to the 2019 Quality Estimation as a metric shared task (Ma et al., 2019).",
"Our aim in this paper is to characterize the conditions under which the Prism approachusing one MT system to perform peer evaluation on other systemscan be successful: what properties does the evaluating system need to have, how powerful should it be, and how close can it be to the systems under evaluation?",
"We focus on system-level evaluation, which we believe is the most compelling use case for reference-free methods, targeting a broad characterization that complements the potentially more precise picture furnished by reference-based metrics for a specific test corpus.",
"We first replicate the correlation with human judgment results from Thompson and Post (2020) on WMT 2019, using the same corpora and architecture.",
"Next, we examine several alternative design decisions in an attempt to improve Prism and further our understanding.",
"These include the effects of varying training corpora (domain, number of languages, use of monolingual data); model capacity (scal-ing up and down from the original architecture); and different methods for regularizing token-level probabilities (Monte-Carlo dropout, subword sampling) and for combining them into system-level scores (summary statistics over tokens, confidence thresholds over sentences).",
"Finally, we analyze the results of our best model, measuring how its performance depends on various factors: language pair and human-judgment methodology, output quality, proximity to the systems under evaluation, and size of the test set.",
"We demonstrate improvements over the original Prism metric due to model capacity and different methods for combining probabilities; surprisingly, we find little gain from adjusting the domain or languages in the original multilingual corpus (al-though we show that a competition-grade English-German system outperforms the generic multilingual system).",
"We find that the evaluating MT sys-tem's output quality is generally correlated with its performance as a metric, although we corroborate the surprising finding from Thompson and Post (2020) that it is not necessary to be the bestour system is middle-of-the-road or worse according to BLEU across most WMT 2019 languages.",
"We measure the proximity between our system and the systems under evaluation and find no evidence that this is a source of bias.",
"Despite using no references, our model achieves approximate parity with BLEU both in system-level correlation with human judgment, and when used for pairwise comparisons.",
"Reference-free evaluation is widely used for many NLP tasks such as grammatical error correction (Napoles et al., 2016), dialog (Sinha et al., 2020; Mehri and Eskenazi, 2020) and text generation (Ethayarajh and Sadigh, 2020).",
"There has been recent interest in reference-free evaluation for MT, which was a joint track between the WMT 2019 metrics task (Ma et al., 2019) and quality estimation task (Fonseca et al., 2019).",
"Reference-free metrics competed head-to-head with standard metrics, and generally did worse.",
"However, the results from the best reference-free systems, UNI+ (Yankovskaya et al., 2019) and YiSi-2 (Lo, 2019) were surprisingly close to the standard metric scores on the language pairs for which they were evaluated.",
"UNI+ computes word-level embeddings for source and MT output sentences using pre-trained multilingual BERT and LASER (Artetxe and Schwenk, 2019) models, then feeds averaged vectors to a neural classifier trained to predict human scores from previous MT metrics tasks.",
"YiSi-2 is similar, except that it works in an unsupervised fashion, computing similarities between mBERT embeddings for aligned source and target words, and returning an F-measure statistic.",
"In more recent work, Zhao et al. (2020) adopt a similar approach based on mBERT, aligning representations from multilingual embedding spaces before computing distances with MoverScore (Zhao et al., 2019), and adding a GPT-based target-side language model.",
"The current state-of-the-art in reference-free evaluation for MT is represented by the Prism approach (Thompson and Post, 2020) which we extend here.",
"It is worth distinguishing reference-free evaluation from two related tasks that share formal similarities.",
"The first is quality or confidence estimation (Blatz et al., 2004; Specia and Shah, 2018; Chelba et al., 2020), which aims to score the fitness of MT output for a downstream application.",
"This is typically supervised, although a recent approach (Fomicheva et al., 2020) dispenses with the need to learn from human annotations, as do most of the approaches we study in this paper.",
"Quality estimation is most usefully applied at the sentence level, and it can make use of powerful glass-box features which capture the internals of an MT system.",
"In contrast, reference-free evaluation is most naturally applied at the system (test-set) level, and ideally should make no assumptions about the systems under evaluation.",
"The second task is parallel-corpus mining (Zhang et al., 2020; Yang et al., 2019), which aims to identify valid translations at various levels of granularity.",
"Its scoring aspect is similar to reference-free evaluation, but it is applied to a different input distribution, attempting to identify human-generated translation pairs rather than scoring MT outputs for a given human-generated source text.",
"We aim to generate a quality score s ( X, Y ) = (cid:80) x,y s ( x, y ) for source and target texts X, Y which consist of segment (nominally, sentence) pairs x, y .",
"We assume no document or ordering information among segments, and do not directly evaluate scores for individual segment pairs.",
"All methods we consider make use of token-level log-probabilities from a standard autoregressive neural MT system: log p ( y t | y <t , x ) , where y = y 1 . . . y T .",
"We experimented with reverse probabilities p ( x | y ) , but like Thompson and Post (2020) found these gave no advantage, and do not include them in our reported results.",
"The following sections describe our model architecture, scoring techniques, and evaluation methodology.",
"Our baseline NMT model uses a standard Transformer architecture identical to that of Thompson and Post (2020) (up to toolkit differences), trained on the same multilingual corpus.",
"To encourage language-agnostic encoder representations for zero-shot scoring, the baseline uses target-language tags at the beginning of each target sentence (Johnson et al., 2017).",
"Since we do not require such representations for reference-free evaluation, we also tried introducing the tags earlier, at the beginning of each source sentence.",
"We vary training corpora and model capacity as described in section 4.1, but otherwise make no changes to the model.",
"We investigated various techniques for deriving segment-level scores s ( x, y ) : regularization, different methods for aggregating token-level probabilities, and segment-level confidence thresholds.",
"To obtain smoother scores, we used Monte-Carlo dropout (Gal and Ghahramani, 2016) and subword",
"estimates of the form: log p ( y | x ) = K (cid:88) k =1 log p k ( y | x ) /K,",
"where p k ( y | x ) is a probability estimate that depends on the smoothing method.",
"For MC-dropout, it is obtained by dropping neural connections with probability .",
"For subword regularization, p k ( y | x ) = p ( y k | x k ) , where x k and y k are randomly-sampled alternative subword segmentations of x and y .",
"2 Note that MC-dropout decomposes over tokens, yielding smoother per-token probabilities; subword regularization does not, since it does not preserve tokenization.",
"Given a sequence of token probabilities log p ( y t | y <t , x ) , t = 1 . . . T , we derive segment-level scores s ( x, y ) using various statistics.",
"Following Thompson and Post (2020), we sum to obtain segment log-probabilities or average to obtain mean token-wise log-probabilities.",
"To eliminate the effect of outliers, we tried the median instead of the mean.",
"To test the opposite intuition, we also tried the minimum.",
"Finally, to reflect overall consistency, we compute standard deviation.",
"Quality scores implicitly reflect the presence or absence of errors in MT output.",
"In some cases, model probabilities provide strong evidence for or against the existence of errors, but in other cases the model may be agnostic.",
"To capture this intuition, we used the following mapping to obtain segment scores: s ( x, y ) = 1 , log p ( y | x ) /T < l +1 , log p ( y | x ) /T > h 0 , else To set the thresholds ( l, h ) we used a coarse grid search on development data.",
"We evaluate reference-free metric scores on data from the WMT19 metrics task (Ma et al., 2019), consisting of outputs from different MT systems",
"for 18 language pairs.",
"For each language pair, we compute a metric score for each system, then use correlation with the provided human scores to assess the quality of our metric.",
"3 Following Ma et al. (2019) we measure correlation using Pearson's co-efficient, and use Williams' test (Williams, 1959) to compute the significance of correlation differences, with a p-value < 0.05.",
"Ma et al. (2019) note that correlation scores are unrealistically high for many language pairs, and suggest using only the best k systems for small values of k .",
"However, Mathur et al. (2020) show that this results in noisy and unreliable estimates.",
"We adopt their suggestion to instead remove outlier systems whose scores have large deviations from the median according to the formula: | h h | 1 .",
"where h is a system-level human score, and h is the median score across all systems for a given language pair.",
"To summarize a metric's performance across a set of language pairs, we report the weighted average of its Pearson correlations across languages.",
"We first apply the Fisher Z-transformation to normalize raw language-specific correlations, then weight by the number of MT systems per language (post outlier filtering), then invert the Fisher Z-transformation and take the mean (Hedges and Olkin, 2014).",
"We used four training corpora.",
"Prism-39 consists of noise-filtered multi-way parallel data curated by Thompson and Post (2020), extracted primarily from Wikimatrix, Global Voices, EuroParl, SE-Times, and United Nations, consisting of 99.8M sentence pairs in 39 languages, including direct parallel data for 706 language pairs.",
"Wiki-39-Mono consists of monolingual data extracted from the multilingual Wikipedia corpus for the languages available in Prism-39.",
"WMT-15 is the parallel 3 Human annotators assign segment-level scores on a 0 100 scale which are averaged across segments, then normalized to correct for annotator differences, then averaged across annotators to produce system-level scores.",
"For out-of-English language pairs, annotations are made by comparison to the source text, which directly corresponds to our setting; for other pairs, they are made by comparing to reference translations.",
"training data provided for the WMT 2019 News Translation Task (Barrault et al., 2019), augmented with 5 languages from previous WMT years Estonian (et), Spanish (es), Latvian (lt), Hindi (hi) and Turkish (tr).",
"All language pairs are to/from English except French-German.",
"Sizes range from 60 million sentence pairs for English-Czech to 10k pairs for English-Gujarati (Table 4).",
"Finally, WMT-15-Mono is the monolingual data provided alongside WMT-15.",
"Test data is from the WMT 2019 Metrics Task (Ma et al., 2019), consisting of system outputs on news-domain text for all 18 language pairs included in the task: English (en) to/from Czech (cs), German (de), Finnish (fi), Gujarati (gu), Kazakh (kk), Lithuanian (lt), Russian (ru), and Chinese (zh), excluding cs-en.",
"There are three other language pairs not including English: de-cs, de-fr and fr-de.",
"The average number of systems per language is 12, and the average test-set size is 1,633.",
"We used the Lingvo toolkit (Shen et al., 2019), to train Transformer sequence-to-sequence models of various sizes as shown in Table 1, where the baseline Prism configuration matches that of Thompson and Post (2020).",
"We use AdaFactor optimization with a learning rate of 1.0 and batch size of 8000 samples.",
"Our shared vocabulary comprises 64k subwords.",
"This section presents our main results.",
"All correlations in the tables below are for system-level scores, after outlier systems have been discarded for each language pair.",
"For brevity, we report average correlations, normalized and weighted as described in section 3.3; full results are provided in Appendix B. Unless otherwise stated, all methods score system outputs using average log probabilities normalized by segment length.",
"Table 2 shows key WMT19 baseline results for reference-based metrics (top two lines), reference-free metrics (next three lines), and our reimple-mentation of the Prism model (bottom lines).",
"We achieve slightly better results for source-side tagging (Prism-src2xx), and on average match the original Prism results that use target-side tagging with this configuration, which we adopt for further experiments.",
"The en-xx results are affected negatively by the inclusion of en-gu, which is absent from the Prism-39 corpus and has low correlation ( 0 . 400 ); however, interestingly, results for gu-en are on par with other language pairs, presumably due to the prevalence of English in the corpus.",
"Table 3 gives results for training on different corpora described in section 4.1.",
"The first four lines correspond to different multilingual training corpora, beginning with the Prism-39 model from the previous section.",
"We see no gain on average from using the provided WMT-15 training corpora, despite possibly better domain fit and generally larger sizes for the language pairs in the test set (Table 4).",
"We speculate that this is due to preprocessing as we made no effort to clean or filter the WMT-15 corpus.",
"This hypothesis is supported by the Prism-13 results, where we trained on the language pairs in Prism-39 that overlapped with the WMT-15 corpus, achieving slightly better average performance.",
"Combining Prism-39 and WMT-15 improves further, yielding a relatively small but statistically significant average gain over pure Prism-39, at the cost of lower performance for the en-xx language pairs.",
"Inspired by improvements for low-resource languages from monolingual data (Siddhant et al., 2020), we used the MASS denoising objective to add general-domain monolingual data (Wiki-39) to Prism-39 and in-domain data (WMT-15-Mono) to both Prism-39 and WMT-15 (Table 6 for a comparison on the relative sizes of the monolingual corpora).",
"Overall, the general-domain data hurts correlation significantly, while in-domain helps significantly, but only for WMT-15.",
"As expected, monolingual data tends to help lower-resource languages (gu, kk, lt) most, with a particularly large gain for xx-en with WMT-15 + WMT-15-Mono.",
"However, the correlation for xx-yy language-pairs degrades significantly, which we attribute to the en-centric nature of the WMT-15 dataset.",
"Can we use bilingual MT systems for peer evaluation?",
"We chose four representative language pairs from Prism-39 and trained Big models (see Table",
"1) in eight directions, with dedicated 64k subword vocabularies.",
"Table 5 shows that for medium and high resource languages (de, ru, and zh), the bilingual model performs comparably to the multilingual model.",
"However, for the low resource language lt, the multilingual model is significantly better.",
"As with the results elsewhere in this section, this suggests that correlation tends to follow the pattern one would expect if we were mainly interested in model quality.",
"This is corroborated by the results in the last line of the table, where we compare a competition-grade model for en-de (Freitag et al., 2020a), similar to the winning submission from WMT19, to our models.",
"The competition-grade model achieves a much better correlation and also improves on BLEU by a wide margin.",
"Motivated by the link between correlation and model quality, we varied model capacity according to the settings in Table 1, using the Prism-39 training corpus.",
"The results in Table 7 show a clear pattern of gains with increasing capacity.",
"The Massive configuration does best overall, achieving statistical parity with BLEU on average.",
"Table 8 shows results for the scoring methods described in section 3.2 applied to the Massive configuration.",
"Aggregating token probabilities using statistics other than mean gives small gains on some languages, but hurts on average.",
"Regularizing with MC-dropout or subwords (SP-norm) leads to significant gains in some cases, with a slight overall increase over mean for SP-norm.",
"We tuned confidence thresholds on WMT18 Metrics task data using a grid of 16 log-probability points in [ 3 , 0] , which yielded optimal thresholds ( 1 , 0 . 6) .",
"This produced our best overall result, with systematic gains on en-xx pairs.",
"In this section we analyze various aspects of metric performance, confining our attention to the Massive model with mean scoring for consistency.",
"6.1 Performance across conditions Subset Avg All 0.883 All gu 0.893 Source-based evaluation 0.858 Source-based gu 0.883 Reference-based evaluation 0.901 Reference-based gu 0.901 Corpus 1M 0.839 Corpus < 1M 0.924 No data 0.741 Table 9: Average Correlation for different subsets of languages.",
"Different languages have different relations to our model, to the systems participating in the WMT task, and to the human scoring procedure used in the WMT19 data.",
"Table 9 shows results for various conditions.",
"Removing the language (gu) for which we have no training data improves average correlation substantially.",
"The human evaluations for out-of-English language pairs involve comparing MT output to the source text; the evaluations for remaining pairs involve comparing it to reference translations.",
"We see no boost from the language pairs for which source-based human evaluation was used (matching our setting), and in fact do somewhat worse on these pairs than the others, on average.",
"Finally, we achieve better performance for lower-resource ( < 1M parallel segments) language pairs than higher-resource pairs (with respect to the Prism-39 corpora), but poor average performance on the pairs (en-gu/gu-en) for which we had no training data.",
"Correlation statistics give an overall picture of metric performance, but do not directly reflect the frequent use case of deciding which of two systems is better.",
"To measure this, we examined whether our metric agrees with human pairwise ranking decisions over all pairs of systems.",
"Following (Mathur et al., 2020), we apply the Wilcoxon ranksum test and paired t-test to detect when such decisions are significant according to human and metric scores respectively.",
"Table 10 shows ranking performance for Prism compared to BLEU, categorized according to language pair grouping.",
"The general pattern across all groupings is that Prism is more decisive: it makes more significant decisions than BLEU, leading to higher rates of both correct and incorrect rankings.",
"Among the 885 system pairs (across all languages) that are considered significantly different according to human judgment, Prism correctly ranks 88% with significantly different scores, compared to 87% for BLEU.",
"How good is our multilingual MT system compared to the systems under evaluation?",
"We generated translations of the test text for a subset of languages and compared the quality of the generated system outputs using BLEU.",
"Figure 1 shows that our evaluating model achieves worse BLEU scores than many of the systems under evaluation, ranking around the median for most language pairs.",
"Although Table 5 provides evidence that stronger systems produce better metrics, clearly it is not necessary to be among the top-ranked systems in order to generate a signal that is approximately as reliable as BLEU.",
"4 Figure 1: Quality across language pairs.",
"A potential pitfall in peer evaluation is bias toward one or more of the systems under evaluation.",
"Clearly, the evaluating system will prefer its own outputhow far from an evaluated system does it have to be in order to judge it fairly?",
"Lacking access to the systems in the WMT19 task, we measure proximity using cross-BLEU score (using one output as hypothesis and the other one as reference translation) between the system output and the output generated by our Prism model.",
"In the presence of bias, we would expect the metric to result in higher ranking for closer systems and lower ranking for farther systems (relative to human scores).",
"4 It would be interesting to try to characterize the relation between system quality and metric strength more precisely, but in the absence of human judgments of our output quality, any such picture we could currently draw would be clouded by metric noise.",
"directionsranks closest and farthest system both higher and lower than humanthere is no evidence from this analysis that it exhibits a strong bias in favour of systems whose outputs are closer to its own.",
"A potential explanation is that it is sufficiently far from most of the evaluated systems due to its multilingual training corpus.",
"To verify this, we computed the average cross-BLEU for each evaluated system (relative to all others), and compared it to the same quantity for our system.",
"Figure 3 shows that we are indeed an outlier system for most language pairs.",
"The systems with lower cross-BLEU than Prism are mostly online or rule-based systems.",
"5 Figure 3: Average Cross-BLEU for all evaluated systems and Prism.",
"In principle, a major advantage of reference-free evaluation is that it can make use of arbitrarily large test sets, being constrained only by the amount of source-language data in the domain of interest.",
"We hypothesize that this will improve metric performance by reducing sampling error.",
"To test this hypothesis in the absence of larger human-scored test sets for WMT19, we sampled subsets of various sizes and measured average correlation.",
"As shown 5 For Kazakh (kk), Prism-39 includes the WMT-15 dataset, resulting in higher cross-BLEU compared to other language pairs.",
"in Table 11, we observe a steady increase with test-size size.",
"This provides persuasive, though not definitive, evidence that test sets beyond the scale of WMT19 would yield further improvements in accuracy for both metrics, a setting that would be more feasible for Prism than BLEU.",
"Full curves are plotted in Figure 4 (See Appendix C).",
"In this paper, we have shed some light on the remarkable finding by Thompson and Post (2020) that a multilingual model trained on a large (but not enormous) general-domain corpus can be highly effective as an MT metric when used to score the outputs of other MT systems in the absence of reference translations.",
"By scaling up the model and making small adjustments to tagging and scoring, we improve over the original results and achieve approximate parity with BLEU in correlation with human judgment on WMT19 data.",
"We argue that this metric is a useful complement to reference-based metricsincluding ones that are significantly more powerful than BLEUdue to its flexibility; and we provide evidence that scoring reliability can be further improved by using larger source-side-only test sets.",
"We find that the major determinant of success in peer evaluation is the quality of the evaluating model.",
"However, there is no hard requirement that it be better than the models under evaluation: surprisingly, it can correctly rank models that outperform it on average.",
"If we abstract away from quality, performance does not appear to be highly sensitive to the domain or the multilingual versus bilingual nature of the training corpus.",
"Taken together, these results have the important practical implication that a single multilingual system such as ours could be broadly applicable for evaluating systems in a large number of language pairs (706 in our case), at different quality levels, and across a wide range of domains.",
"In future work, we look forward to probing these results further, and determining whether alternative architectures or loss functions might be valuable in specializing an MT model for evaluating its peers.",
"We thank Julia Kreutzer, Ciprian Chelba, Aditya Siddhant, and the anonymous reviewers for their helpful and constructive comments."
] | [
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"result",
"objective",
"result",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"other"
] |
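To make the scoring recipe in the row above concrete, the following is a minimal Python sketch of reference-free system-level evaluation: each output segment is scored by its mean token log-probability under forced decoding, optionally mapped through the confidence thresholds (l, h), averaged per system, and correlated with human judgments. This is an illustration under stated assumptions, not the authors' released code: `token_logprobs` stands in for whatever MT toolkit exposes log p(y_t | y_<t, x), and the toy numbers are invented.

```python
from scipy.stats import pearsonr

def segment_score(token_logprobs, l=None, h=None):
    """Mean token-wise log-probability of one segment; if thresholds (l, h)
    are supplied, map it to -1 / 0 / +1 as in the confidence scheme."""
    mean_lp = sum(token_logprobs) / len(token_logprobs)
    if l is None or h is None:
        return mean_lp
    return -1.0 if mean_lp < l else (1.0 if mean_lp > h else 0.0)

def system_score(segments, l=None, h=None):
    """System-level score: the average segment score over the test set."""
    return sum(segment_score(t, l, h) for t in segments) / len(segments)

# Toy example: per-segment token log-probs for three hypothetical systems.
systems = {
    "sysA": [[-0.2, -0.5], [-0.3, -0.1, -0.4]],
    "sysB": [[-1.1, -0.9], [-1.5, -0.7, -1.2]],
    "sysC": [[-0.6, -0.4], [-0.8, -0.5, -0.6]],
}
human = {"sysA": 0.4, "sysB": -0.6, "sysC": 0.1}
metric = [system_score(v) for v in systems.values()]
print(pearsonr(metric, [human[k] for k in systems])[0])
```

Outlier filtering and the Fisher-weighted averaging across language pairs described in the row would operate on these per-system scores before and after the correlation step, respectively.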
[
"Generative Adversarial Networks (GANs) are a promising approach for text generation that, unlike traditional language models (LM), does not suffer from the problem of exposure bias.",
"However, A major hurdle for understanding the potential of GANs for text generation is the lack of a clear evaluation metric.",
"In this work, we propose to approximate the distribution of text generated by a GAN, which permits evaluating them with traditional probability-based LM metrics.",
"We apply our approximation procedure on several GAN-based models and show that they currently perform substantially worse than state-of-the-art LMs.",
"Our evaluation procedure promotes better understanding of the relation between GANs and LMs, and can accelerate progress in GAN-based text generation.",
"Neural networks have revolutionized the field of text generation, in machine translation (Sutskever et al., 2014; Neubig, 2017; Luong et al., 2015; Chen et al., 2018), summarization (See et al., 2017), image captioning (You et al., 2016) and many other applications (Goldberg, 2017).",
"Traditionally, text generation models are trained by going over a gold sequence of symbols (char-acters or words) from left-to-right, and maximizing the probability of the next symbol given the history, namely, a language modeling (LM) objective.",
"A commonly discussed drawback of such LM-based text generation is exposure bias (Ran-zato et al., 2015): during training, the model predicts the next token conditioned on the ground truth history, while at test time prediction is based on predicted tokens, causing a train-test mismatch.",
"Models trained in this manner often struggle to overcome previous prediction errors.",
"Originally introduced for images, GANs leverage a discriminator, which is trained to discriminate between real images and generated images via an adversarial loss.",
"In such a framework, the generator is not directly exposed to the ground truth data, but instead learns to imitate it using global feedback from the discriminator.",
"This has led to several attempts to use GANs for text generation, with a generator using either a recurrent neural network (RNN) (Yu et al., 2017; Guo et al., 2017; Press et al., 2017; Rajeswar et al., 2017), or a Convolutional Neural Network (CNN) (Gulrajani et al., 2017; Rajeswar et al., 2017).",
"However, evaluating GANs is more difficult than evaluating LMs.",
"While in language modeling, evaluation is based on the log-probability of a model on held-out text, this cannot be straightforwardly extended to GAN-based text generation, because the generator outputs discrete tokens, rather than a probability distribution.",
"Currently, there is no single evaluation metric for GAN-based text generation, and existing metrics that are based on n-gram overlap are known to lack robustness and have low correlation with semantic coherence (Semeniuta et al., 2018).",
"In this paper, we propose a method for evaluating GANs with standard probability-based evaluation metrics.",
"We show that the expected prediction of a GAN generator can be viewed as a LM, and suggest a simple Monte-Carlo method for approximating it.",
"The approximated probability distribution can then be evaluated with standard LM metrics such as perplexity or Bits Per Character (BPC).",
"To empirically establish our claim, we implement our evaluation on several RNN-based GANs: (Press et al., 2017; Yu et al., 2017; Guo et al., 2017).",
"We find that all models have substantially lower BPC compared to state-of-the-art LMs.",
"By directly comparing to LMs, we put in perspective the current performance of RNN-based GANs for text generation.",
"Our results are also in line with recent concurrent work by Caccia et al. (2018), who reached a similar conclusion by comparing the performance of textual GANs to that of LMs using metrics suggested for GAN evaluation.",
"Our code is available at: http: //github.com/GuyTevet/SeqGAN-eval and http://github.com/GuyTevet/ rnn-gan-eval .",
"Following the success of GANs in image generation, several works applied the same idea to texts using convolutional neural networks (Gul-rajani et al., 2017; Rajeswar et al., 2017), and later using RNNs (Press et al., 2017; Yu et al., 2017).",
"RNNs enable generating variable-length sequences, conditioning each token on the tokens generated in previous time steps.",
"We leverage this characteristic in our approximation model ( 4.1).",
"A main challenge in applying GANs for text is that generating discrete symbols is a nondifferentiable operation.",
"One solution is to perform a continuous relaxation of the GAN output, which leads to generators that emit a nearly discrete continuous distribution (Press et al., 2017).",
"This keeps the model differentiable and enables end-to-end training through the discriminator.",
"Alternatively, SeqGAN (Yu et al., 2017) and LeakGAN (Guo et al., 2017) used policy gradient methods to overcome the differentiablity requirement.",
"We apply our approximation to both model types.",
"LM Evaluation.",
"Text generation from LMs is commonly evaluated using probabilistic metrics.",
"Specifically, given a test sequence of symbols ( t 1 , . . . , t n ) , and a LM q , the average cross-entropy over the entire test set is computed: ACE = 1 n (cid:80) ni =1 log 2 q ( t i | t 1 ...t i 1 ) .",
"For word-based models, the standard metric is perplexity: P P = 2 ACE , while for character-based models it is BP C = ACE directly.",
"Intrinsic improvement in perplexity does not guarantee an improvement in an extrinsic downstream task that uses a language model.",
"However, perplexity often correlates with extrinsic measures (Jurafsky and Martin, 2018), and is the de-facto metric for evaluating the quality of language models today.",
"GAN-based Text Generation Evaluation.",
"By definition, a text GAN outputs a discrete sequence of symbols rather than a probability distribution.",
"As a result, LM metrics cannot be applied to evaluate the generated text.",
"Consequently, other metrics have been proposed: N-gram overlap: (Yu et al., 2017; Press et al., 2017): Inspired by BLEU (Papineni et al., 2002), this measures whether n-grams generated by the model appear in a held-out corpus.",
"A major drawback is that this metric favors conservative models that always generate very common text (e.g., it is ).",
"To mitigate this, self-BLEU has been proposed (Lu et al., 2018) as an additional metric, where overlap is measured between two independently sampled texts from the model.",
"LM score: The probability of generated text according to a pre-trained LM.",
"This has the same problem of favoring conservative models.",
"Zhao et al. (2017) suggested an indirect score by training a LM on GAN-generated text, and evaluating it using perplexity.",
"The drawback in this setting is the coupling of the performance of the GAN with that of the proxy LM.",
"Heusel et al. (2017) used Frechet InferSent Distance (FID) to compute the distance between distributions of features extracted from real and generated samples.",
"However, this approach relies on a problematic assumption that features are normally distributed.",
"Rajeswar et al. (2017) used a context-free grammar (CFG) to generate a reference corpus, and evaluated the model by the likelihood the CFG assigns to generated samples.",
"However, simple CFGs do not fully capture the complexity of natural language.",
"To overcome the drawbacks of each individual method, Semeniuta et al. (2018) proposed a uni-fied measure based on multiple evaluation metrics (N-grams, BLEU variations, FID, LM score variations and human evaluation).",
"Specifically, they argue that the different measures capture different desired properties of LMs, e.g., quality vs. diversity.",
"Following Semeniuta et al. (2018), and in parallel to this work, Caccia et al. (2018) proposed a temperature sweep method that trades-off quality for diversity using a single parameter.",
"Similar to our findings, they concluded that GANs perform worse than LMs on this metric.",
"Overall, current evaluation methods cannot fully capture the performance of GAN-based text generation models.",
"While reporting various scores as proposed by Semeniuta et al. (2018) is possible, it is preferable to have a single measure of progress when comparing different text generation models.",
"We propose a method for approximating a distribution over tokens from a GAN, and then evaluate the model with standard LM metrics.",
"We will describe our approach given an RNN-based LM, which is the most commonly-used architecture, but the approximation can be applied to other auto-regressive models (Vaswani et al., 2017).",
"The inputs to an RNN at time step t , are the state vector h t and the current input token x t .",
"The output token (one-hot) is denoted by o t .",
"In RNN-based GANs, the previous output token is used at inference time as the input x t (Yu et al., 2017; Guo et al., 2017; Press et al., 2017; Rajeswar et al., 2017).",
"In contrast, when evaluating with BPC or perplexity, the gold token x t is given as input.",
"Hence, LM-based evaluation neutralizes the problem of exposure bias addressed by GANs.",
"Nevertheless, this allows us to compare the quality of text produced by GANs and LMs on an equal footing.",
"Figure 1 illustrates the difference between inference time and during LM approximation.",
"We can therefore define the generator function at time step t as a function of the initial state h 0 and the past generated tokens ( x 0 . . . x t ) , which we denote as o t = G t ( h 0 , x 0 ...x t ) ( x 0 is a start token).",
"Given a past sequence ( x 0 . . . x t ) , G t is a stochastic function: the stochasticity of G t can Algorithm 1 LM Evaluation of RNN-based GANs Input: G t ( ) : the generator function at time step t ( x 0 , ..., x t ) : previous gold tokens x t +1 : the gold next token (as ground truth) f ( , ) : a LM evaluation metric N : number of samples 1: for n 1 to N do 2: g t,n sample from G t ( x 0 ...x t ) 3: G t,N = 1 N Nn =1 g t,n 4: return f ( G t,N , x t +1 ) be gained either by using a noise vector as the initial state h 0 (Press et al., 2017), or by sampling from the GAN's internal distribution over possible output tokens (Yu et al., 2017; Guo et al., 2017).",
"Since h 0 is constant or a noise vector that makes G t stochastic, we can omit it to get G t ( x 0 . . . x t ) .",
"In such a setup, the expected value E [ G t ( x 0 . . . x t )] is a distribution q over the next vocabulary token a t : q ( a t | a 0 . . . a t 1 ) = { E [ G t ( x 0 . . . x t )] } a t To empirically approximate q , we can sample from it N i.i.d samples, and compute an approximation G t,N = 1 N Nn =1 g t,n , where g t,n is one sample from G t ( x 0 ...x t ) .",
"Then, according to the strong law of large numbers: E [ G t ( x 0 . . . x t )] = lim N G t,N (1) Given this approximate LM distribution, we can evaluate a GAN using perplexity or BPC.",
"We summarize the evaluation procedure in Algorithm",
"1. 1 4.2 Approximation Bound We provide a theoretical bound for choosing a number of samples N that results in a good approximation of G t,N to E [ G t ] .",
"Perplexity and BPC rely on the log-probability of the ground truth token.",
"Since the ground truth token is unknown, we conservatively define the bad event B in which there exists v V such that |{ E [ G t ] } v { G t,N } v | > , where V is the vocabulary.",
"We can then bound the probability of B by some (cid:15) .",
"We define the following notations:",
"1. The probability of a token a t to be v is p v = q ( a t = v | a 0 . . . a t 1 ) = { E [ G t ( x 0 . . . x t )] } v .",
"2. v,n = { g t,n } v is a random variable representing the binary value of the v 'th index of 1 Our evaluation algorithm is linear in the length of the test set and in the number of samples N .",
"g t,n which is a single sample of G t .",
"Note that the average of v,n over N samples is X v = 1 N (cid:80) Nn =1 v,n = (cid:110) 1 N (cid:80) Nn =1 g t,n (cid:111) v = { G t,N } v .",
"Using the above notation, we can re-define the probability of the bad event B with respect to the individual coordinates in the vectors: Pr ( B ) = Pr (cid:16) (cid:107) E [ G t ] G t,N (cid:107) > (cid:17) = Pr (cid:32) (cid:91) v V | p v X v | > (cid:33) !",
"< (cid:15) (2) We note that v,n Bernoulli ( p v ) , and given that { v,n } Nn =1 are i.i.d., we can apply the Chernoff-Hoeffding theorem (Chernoff et al., 1952; Hoeffding, 1963).",
"According to the theorem, for every v V , P r ( | X v p v | > ) < 2 e 2 N 2 .",
"Taking the union bound over V implies: Pr ( B ) = Pr (cid:0)(cid:83) v V | X v p v | > (cid:1) < 2 | V | e 2 N 2 < (cid:15) (3) Hence, we get a lower bound on N : N > 1 2 2 ln (cid:18) 2 | V | (cid:15) (cid:19) (4) As a numerical example, choosing = 10 3 and (cid:15) = 10 2 , for a character-based LM over the text8 dataset, with | V | = 27 , we get the bound: N > 4 .",
"3 10 6 .",
"With the same and (cid:15) , a typical word-based LM with vocabulary size | V | = 50 , 000 would require N > 8 .",
"1 10 6 .",
"In practice, probability vectors of LMs tend to be sparse (Kim et al., 2016).",
"Thus, we argue that we can use a much smaller N for a good approximation G t,N .",
"Since the sparsity of LMs is difficult to bound, as it differs between models, we suggest an empirical method for choosing N .",
"The approximation G t,N is a converging sequence, particularly over (cid:107) (cid:107) (see Equation 1).",
"Hence, we can empirically choose an N which satisfies (cid:107) G t,N G t,N (cid:107) < (cid:48) , N .",
"In Section 5 we empirically measure (cid:107) G t,N G t,N (cid:107) as a function of N to choose N .",
"We choose a global N for a model, rather than for every t , by averaging over a subset of the evaluation set.",
"We focus on character-based GANs as a test-case for our method.",
"We evaluate two RNN-based GANs with different characteristics.",
"As opposed to the original GAN model (Goodfellow et al., 2014), in which the generator is initialized with random noise, the GANs we evaluated both leverage gold standard text to initialize the generator, as detailed below.",
"Recurrent GAN (Press et al., 2017) is a continuous RNN-based generator which minimizes the improved WGAN loss (Gulrajani et al., 2017).",
"To guide the generator, during training it is initialized with the first i 1 characters from the ground truth, starting the prediction in the i th character.",
"Stochasticity is obtained by feeding the generator with a noise vector z as a hidden state.",
"At each time step, the input to the RNN generator is the output distribution of the previous step.",
"SeqGAN (Yu et al., 2017) is a discrete RNN-based generator.",
"To guide the generator, it is pre-trained as a LM on ground truth text.",
"Stochasticity is obtained by sampling tokens from an internal distribution function over the vocabulary.",
"To overcome differentiation problem, it is trained using a policy gradient objective (Sutton et al., 2000).",
"We also evaluated LeakGAN (Guo et al., 2017), another discrete RNN-based generator, but since it is similar to SeqGAN and performed worse, we omit it for brevity.",
"To compare to prior work in LM, we follow the common setup and train on the text8 dataset.",
"2 The dataset is derived from Wikipedia, and includes 26 English characters plus spaces.",
"We use the standard 90/5/5 split to train/validation/test.",
"Finally, we measure performance with BPC.",
"We tuned hyper-parameters on the validation set, including sequence length to generate at test time (7 for Press et al. (2017), 1000 for Yu et al. (2017)).",
"We chose the number of samples N empirically for each model, as described in Section 4.2.",
"We set to 10, and the boundary to (cid:48) = 10 3 as a good trade-off between accuracy and run-time.",
"Figure 2 plots the approximate error (cid:107) G t,N G t,N (cid:107) as a function of N .",
"For both models, N > 1600 satisfies this condition (red line in Figure 2).",
"To be safe, we used N = 2000 .",
"Table 1 shows model performance on the test",
"2 http://mattmahoney.net/dc/textdata Approach Model BPC Approx.",
"Because SeqGAN models output a distribution over tokens at every time step, we can measure the true BPC and assess the quality of our approximation.",
"Indeed, we observe that approximate BPC is only slightly higher than the true BPC.",
"GAN-based models perform worse than state-of-the-art LMs by a large margin.",
"Moreover, in SeqGAN, the pre-trained LM performs better than the fully trained model with approximate BPC scores of 1.95 and 2.06, respectively, and the BPC deteriorates as adversarial training continues.",
"Finally, we note that generating sequences larger than 7 characters hurts the BPC of Press et al. (2017).",
"It is difficult to assess the quality of generation with such short sequences.",
"In Table 2 we present a few randomly generated samples from each model.",
"We indeed observe that adversarial training slightly reduces the quality of generated text for SeqGAN, and find that the quality of 100-character long sequences generated from Press et al. (2017) is low.",
"We propose an evaluation procedure for text GANs that is based on approximating the GAN output distribution and using standard LM metrics.",
"We provide a bound for the number of samples required for the approximation and empirically show in practice as few as 2000 samples per time-step suffice.",
"We evaluate character-based GAN models using our procedure, and show their performance is substantially lower than state-of-the-art LM.",
"We hope our simple evaluation method leads to progress in GAN-based text generation by shedding light on the quality of such models.",
"We would like to thank Shimi Salant for his comments and suggestions.",
"This research was partially supported by The Israel Science Foundation grant 942/16, the Blavatnik Computer Science Research Fund, and The Yandex Initiative for Machine Learning."
] | [
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"result",
"method",
"result",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"result",
"abstain",
"other",
"other"
] |
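The Monte-Carlo approximation at the core of the row above is short enough to sketch directly. The code below is an illustration rather than the authors' implementation: `sample_fn(history)` is an assumed callable returning one sampled token id from the GAN generator G_t, and the epsilon smoothing is an addition to avoid log2(0), not part of the paper's algorithm.

```python
import numpy as np

def approx_next_token_dist(sample_fn, history, vocab_size, n_samples=2000):
    """Monte-Carlo estimate of E[G_t(x_0..x_t)] (Eq. 1): average n_samples
    one-hot draws from the generator's stochastic next-token function."""
    counts = np.zeros(vocab_size)
    for _ in range(n_samples):
        counts[sample_fn(history)] += 1.0  # sample_fn returns a token id
    return counts / n_samples

def approx_bpc(test_tokens, sample_fn, vocab_size, n_samples=2000, eps=1e-9):
    """BPC of the approximated LM on a gold sequence:
    -(1/n) * sum_i log2 q(t_i | t_1..t_{i-1})."""
    total = 0.0
    for i in range(1, len(test_tokens)):
        q = approx_next_token_dist(sample_fn, test_tokens[:i],
                                   vocab_size, n_samples)
        total -= np.log2(q[test_tokens[i]] + eps)
    return total / (len(test_tokens) - 1)
```

As a sanity check on sample sizes, the Chernoff-Hoeffding bound quoted in the row gives N > ln(2*27/0.01)/(2*1e-6), roughly 4.3 million for text8, whereas the paper's empirical convergence criterion justifies the far smaller N = 2000 used in practice.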
[
"Detecting biased language is useful for a variety of applications, such as identifying hyperpartisan news sources or flagging one-sided rhetoric.",
"In this work we introduce WikiEvolve, a dataset for document-level promotional tone detection in English.",
"Unlike previously proposed datasets, it contains seven versions of the same article from Wikipedia, from different points in its revision history; one with promotional tone, and six without it.",
"We adapt the gradient reversal layer framework to encode two article versions simultaneously, and thus leverage the training signal present in the multiple versions.",
"In our experiments, our proposed adaptation of gradient reversal improves the accuracy of four different architectures on both in-domain and out-of-domain evaluation.",
"Maintaining a neutral point of view is a desideratum in many communication channels, e.g. news articles, scientific writing, and encyclopaedias.",
"Biased writing detection can help reduce the distribution of content which contains unfair representations of a topic.",
"For this reason, datasets and methods have been developed to automate it.",
"A number of studies have approached biased writing detection in the context of news media (e.g. Fan et al. (2019); Chen et al. (2020); Frber et al. (2020)), primarily considering political stance and partisanship.",
"However, biased writing also arises in other settings.",
"In Wikipedia, the online encyclopaedia, it manifests itself in the form of promotional tone which violates the cornerstone neutral point of view policy of the platform.",
"The latter allows users to flag articles with such policy violations by adding tags to the article mark-up, which are retained in its edit history.",
"Leveraging This work was initiated during an internship at the Wikimedia Foundation.",
"this process, Recasens et al. (2013) and Aleksandrova et al. (2019) have released datasets of words and sentences which were altered in subsequent revisions, thus facilitating model development for word/sentence level bias detection.",
"In this work, we propose an alternative data collection methodology for document-level promotional tone detection, We sample multiple versions of the same article in Wikipedia and present WikiEvolve 1 , a dataset of 68,498 labelled articles for this task.",
"These articles are arranged into 13,887 sample sets, where each set contains multiple versions of the same article: one version tagged as having a promotional tone problem, and up to three versions respectively from before the tag was added and after it was removed.",
"This is illustrated in Fig. 1; the second version was labelled as containing 1 github.com/christinedekock11/wiki-ev olve 5601 promotional tone (positive), whereas the first and third versions were considered negative.",
"In contrast with Recasens et al. (2013) and Aleksandrova et al. (2019), we choose to perform classification at the level of documents rather than sentences or words.",
"Our motivation is that classifying a sentence out of context as biased is known to be difficult and prone to subjective judgements, while higher inter-annotator agreement is achieved at the document-level (Chen et al., 2020).",
"Recasens et al. (2013) similarly found that identifying promotional tone at the word-level is challenging, with Mechanical Turk workers achieving 37% accuracy on this task.",
"We hypothesise that there are article-level features which provide corroborating evidence to the intentions of the writer, which isolated sentences might not capture.",
"We also see evidence of this in our own data for instance, in Fig. 1, the mention of leading specialist in version 3 is dubious but justifiable; however, in version 2, it contributes to an overall assessment of biased writing.",
"To make better use of the training signal available in the multiple versions per article in WikiEvolve, we adapt gradient reversal (Ganin and Lempitsky, 2015).",
"The latter entails adding an auxiliary task during training which shares the input encoder with the main task and is optimised concurrently, but its gradients are reversed during back-propagation.",
"The model is therefore discouraged from learning features which are useful for the auxiliary task and assumed harmful for the main task.",
"Our adaptation operates on pairs of samples rather than individual texts, and we define the auxiliary task as classifying whether two samples originated from the same article.",
"The features we learn are therefore more likely to be informative of the tone, but not of the content.",
"In our experiments, gradient reversal improves the accuracy of all four architectures of increasing complexity.",
"On a bag-of-words encoding followed by two neural network layers, the PR-AUC score improves from 0.60 to 0.64.",
"Using a hierarchical attention network, performance is increased from 0.63 to 0.65.",
"This illustrates that the additional structural information WikiEvolve provides can be utilised to improve performance on this task.",
"To further assess the ability of gradient reversal to improve performance by encouraging models to learn features that do not rely on the topic or content, we also tested our models on out-of-domain data from the SemEval 2019 Shared Task on Hyperpartisan News Detection (Kiesel et al., 2019).",
"Our results show that GRL training improves our accuracy on this dataset from 0.714 to 0.785.",
"A number of studies have utilised Wikipedia to develop labelled datasets for content-related issues, including promotional tone detection.",
"Wikipedia has several favourable characteristics which enable this form of data collection.",
"Firstly, articles evolve over time through different versions.",
"Secondly, the chronological revision history of each article is preserved and open-sourced 2 , meaning that the evolution of an article can be retrieved.",
"Finally, the platform's decentralised quality control system allows users to tag articles that violate the platform's content policies, to warn readers of such issues and to attract the attention of editors to fix them.",
"These tags are removed from the article once the problem is resolved, but they are preserved in the article's edit history.",
"A more details on Wikipedia's policy violation tags see Anderka et al. (2012).",
"In this context, a revision of an article which contains a tag is considered a positive instance of that specific policy violation.",
"Different methods have been proposed for sampling negatives.",
"A popular approach is to find revisions of the same or other articles which do not contain the tag.",
"However, this approach can introduce noise, as the absence of a policy violation tag from an article does not guarantee that the problem is not present.",
"This characteristic of template-based Wikipedia datasets has been noted in previous work, e.g. Anderka et al. (2012); Bhosale et al. (2013); Orizu and He (2018).",
"Another option is to look to other articles which are known to represent well-written content.",
"For instance, Anderka et al. (2012) and Bhosale et al. (2013) select negatives from Wikipedia's list of featured and good articles.",
"However, these articles are of a higher quality generally and therefore have other distinguishing characteristics, which may be misleading if the goal is to detect policy-violating content.",
"Additionally, sampling negatives from different articles may introduce a topical bias.",
"Our data extraction methodology consists of",
"(i) finding articles tagged by a Wikipedia editor as having a promotional tone problem at some point 2 A Creative Commons Attribution-Share-Alike License 3.0 applies.",
"in their edit history,",
"(ii) selecting the revision where such a tag was added as a positive sample, and",
"(iii) sampling negatives from revisions which did not contain the template.",
"Finding promotional tone tags To identify tags of interest, we refer to the Wikipedia category ar-ticles with a promotional tone (Wikipedia, 2021a) and identify the quality tags which most frequently occur in this articles of this category.",
"These are ad-vert, autobiography, fanpov, peacock and weasel.",
"Each of these tags describes a different type of promotional tone issue, for which the definitions are contained in Appendix A. We then use regular expressions to collect all revisions which contain variations of these tags in the WikiText data lake (Wikipedia, 2021c).",
"Finding tag addition events Once incidences of promotional tone tags have been identified, we use the WikiHistory data lake (Wikipedia, 2021b) to find the full edit histories of these articles.",
"For each article, we then identify the point in its edit history where a tag was added, and consider this version of the article as the positive sample.",
"We exclude cases where the tag addition edit was reverted 3 by another editor.",
"The article text at this timestamp is retrieved from the WikiText data lake.",
"Sampling negatives For each positive sample, we select negatives from the revision history of the same article.",
"We consider as candidates all revisions which were not reverted, and which took place within 30 revisions (chronologically sorted) of the tag addition event.",
"This is intended to ensure that the negative samples are of the same approximate stage of article development as the positive sample.",
"We exclude the revision immediately before the tag addition event, as it is this version which prompted the tag to be added.",
"Up to three revisions (depending on availability) are selected at random from these candidates, before and after the positive.",
"We refer to such a set of samples as a sample set .",
"The negatives sampled before the tag addition are denoted neg_pre , and those from after are denoted neg_post .",
"The number of samples per tag and class are shown in Table 1. We split the data into train, test and validation sets with a ratio of 70-20-10.",
"The datasets are stratified to contain 3 From the platform guidelines: On Wikipedia, reverting means undoing or otherwise negating the effects of one or more edits, which results in the page (or a part of it) being restored to a previous version.",
"samples from each tag type (set out in Table 1), and samples from the same sample set (i.e. revisions of the same article) are kept in the same split.",
"Although this work only considers promotional tone detection in English, the data collection methodology and training framework we propose could be extended to other languages on Wikipedia, as is done in Aleksandrova et al. (2019).",
"As discussed in Sec. 2, Wikipedia tag-based datasets are known to contain a certain level of noise.",
"To counteract this, we have implemented three measures: ensuring that the negatives are from the same stage of article development, sampling from different points in the same article's edit history, and sampling negatives before and after the positive.",
"However, there is still a risk of including false negatives, i.e. articles not tagged as containing promotional tone even though they do.",
"An example of such a case from our dataset is shown in Fig. 2. Despite containing non-neutral phrases such as hit show, made quite an impression, and prove[d] herself to be intelligent, the neg_pre (first) sample is not tagged as containing biased language.",
"It does however contain some information that re-flects negatively on the subject (for the wrong reasons).",
"This is removed in the positive (middle) sample, and more overtly biased descriptions are added (quick wit, educational background, amazing looks, bubbly personality and easy on the eye appearrance).",
"In the neg_post negative sample, the problematic phrasing is removed.",
"We perform manual validation of our dataset to estimate how frequently false negatives are included.",
"We perform two types of validation: pairwise and independent prediction.",
"For the former, the task is to rank two samples (i.e. revisions of the same article) as to which is more promotional.",
"40 articles, consisting of 20 positive-negative pairs in random order, are evaluated by two of the authors.",
"The orderings of the annotators agreed with the assigned labels for respectively 16/20 and 14/20 pairs, with a Cohen's Kappa score of 0.79 indicating substantial agreement.",
"This suggests that the collected data contains a trustworthy signal for comparing the extent to which two texts are promotional.",
"Since the task we are mainly interested in is text classification, rather than ranking, we also perform an evaluation on individual samples, annotating 30 samples of each type (positive, neg_pre and neg_post ).",
"The concurrence with the mined labels of the neg_pre and neg_post annotations are shown in Table 2. This task appears to be more challenging compared to the pairwise comparison, with both annotators achieving lower scores and a lower inter-annotator agreement Kappa score of 0.4805, indicating moderate agreement.",
"A reason for this may be the subjective nature of the task, as illustrated by Chen et al. (2020).",
"Our evaluation indicates that the negative samples from before the tag was added contain more noise, compared to those sampled after it was removed.",
"This can be attributed to the active removal of the tag by an editor in the version after Annotator neg_pre neg_post A1 1230 1430 A2 1430 2230 Total 2460 3660 Table 2: Agreement of two authors with mined labels of negative sample annotations.",
"the tag was added ( neg_post ), which indicates that the problem is resolved, while the lack of a policy violation tag in earlier versions ( neg_pre ) does not guarantee lack of promotional tone.",
"However, ignoring the neg_pre samples altogether would expose the temporal bias mentioned in Sec. 1: if negatives are always sampled chronologically after positives and from a more developed version of the article, spurious correlations may be inferred.",
"Based on these insights, we have chosen to include the automatically mined neg_pre samples in training, but to create a separate set of manually validated neg_pre samples for evaluation.",
"Thus, we randomly selected 100 neg_pre samples from the original test set and verified whether they represent a neutral writing style.",
"42 of the 100 samples were confirmed as true negatives.",
"We balance these negatives with their corresponding positive samples, and refer to this dataset as ValidNegPre .",
"Gradient reversal training (Ganin and Lempitsky, 2015) jointly optimises two classifiers which rely on a shared underlying encoder model:",
"(i) a label predictor for the main task, which predicts class labels and is used during both training and test time, and",
"(ii) a domain classifier, which predicts either the source or the target domain during training as the auxiliary task.",
"The parameters of the encoder model are optimised to minimise the loss of the main task classifier while maximising the loss of the domain classifier.",
"This is achieved through a gradient reversal layer, which leaves the input unchanged during forward propagation and reverses the gradient by multiplying it by a negative scalar during the backpropagation.",
"This approach is motivated by theory on domain adaptation, which suggests that a good representation for cross-domain transfer is one for which an algorithm cannot learn to identify the domain of origin of the input observation (Ben-David et al., 2010).",
"Our adaptation of this framework, shown in Fig. 3, differs from Ganin and Lempitsky (2015) in that it considers two text inputs concurrently ( x and x (cid:48) ), as opposed to one.",
"f represents a neural network encoder with parameters 1 .",
"f encodes the two texts independently, to produce z and z (cid:48) : z = f ( x ; 1 ); z (cid:48) = f ( x (cid:48) ; 1 ) (1) The network then splits into two branches.",
"The primary (bottom) branch consists of a neural network model g , with parameters 3 , which produces promotional tone predictions y T 1 and y T 2 for the two samples: y T 1 = g ( z ; 3 ); y T 2 = g ( z (cid:48) ; 3 ) (2) The auxiliary branch concatenates the two input encodings as [ z, z (cid:48) ] , and then the similarity classifier h , a neural network parameterised by 2 , provides a prediction y sim of whether the two samples originate from the same Wikipedia article: y sim = h ([ z, z (cid:48) ]; 2 ) (3) Our intention with this task is to encourage the encoder f to learn features that are topic agnostic.",
"This should allow for better generalisation across datasets, as well as to avoid learning spurious correlations due to topical biases in the data.",
"The encoder and classifier models are trained simultaneously.",
"Given a set of training samples D = [ x 1 , ..., x N , y 1 , ..., y N ], we construct M pairs with indices P = ( i, j ) : i, j [1 , ..., N ] .",
"The process for generating these pairs is described in Sec. 6. Then, the loss is given by: L ( 1 , 2 , 3 , D, P ) = 1 MM (cid:88) m =1 (cid:18) LT ( x P 1 , x P 2 ; 1 , 2 ) L Sim ( x P 1 , x P 2 ; 1 , 3 ) (cid:19) , (4) such that the loss with respect to the similarity label is maximised, while the loss with respect to the promotional tone label is minimised.",
"is a scalar which controls the weight of the loss from the adversarial task, and LT = LT 1 + LT 2 .",
"During testing, only the feature encoder and the main task branch are retained to perform the tone classification task: y = g ( f ( x ; 1 ) , 3 ) .",
"Recall that the model encodes and predicts on the two input samples independently during training.",
"We can therefore obtain predictions for individual test samples (rather than pairs), as is the more general case in other models and datasets for this task.",
"The two classifier models, g and h , are both MLP models.",
"The feature encoder (indicated as f in Fig. 3) is responsible for producing an embedding of an article to be used in both the main and auxiliary task.",
"We evaluate four options: Bag-of-words ( BoW + MLP ): a bag-of-words representation of an article is propagated through a multilayer perceptron (MLP) to obtain an embedding, Averaged embeddings ( AvgEmb + MLP ): GloVe embeddings (Pennington et al., 2014) for every word in the article are averaged, followed by an MLP model, Hierarchical Attention Network (HAN) (Yang et al., 2016): word embeddings are processed using an LSTM layer followed by an attention mechanism to build up sentence embeddings.",
"Sentence embeddings are similarly combined to form an article embedding.",
"Longformer (Beltagy et al., 2020): A transformer-based model, adapted for long-form documents.",
"We finetune the pretrained longformer-base-4096 model.",
"For the GRL models, we further experiment with the weights of the main versus auxiliary task on the validation set, finding that weighting the outputs equally yields the best results.",
"We compare the GRL training approach with the standard method of training the classifier with each feature extractor model.",
"This is equivalent to training with the inference model in Equation 5; the auxiliary branch is removed and one sample is processed at a time.",
"Implementation details are provided in App.",
"B. 6.2 Metrics For each model, we report two metrics: PR-AUC : The area under the precision-recall curve, which provides an aggregate measure of performance across all possible classification thresholds (Davis and Goadrich, 2006).",
"Perfect performance is 1, and a random classifier would receive 0.",
"Accuracy : The percentage of samples which are correctly classified, using a classification threshold based on Youden's J statistic (Fluss et al., 2005), which maximises the true positive rate and minimises the false negative rate.",
"In order to train the main task we require samples with and without a promotional tone (i.e. positive and negative labels).",
"To train the auxiliary we require both matched pairs (originating from the same sample set / article) and unmatched pairs (originating from different sample sets).",
"Therefore, we include a number of different pairing configura-tions.",
"Firstly, given a training set consisting of K sample sets, we collect K positive-negative matched pairs.",
"This means that we need to select one negative sample and one positive sample from each sample set.",
"There are multiple negatives in every sample set, so we sample at random from all neg_pre and neg_post samples.",
"There is only one positive per sample set, so this sample is used.",
"We also collect K positive-negative unmatched pairs.",
"We further include K matched and K unmatched negative-negative pairs.",
"Finally, we include K positive-positive unmatched pairs.",
"It is not possible to add positive-positive matched pairs, as there is only one positive per sample set.",
"Using this pair selection method, there are 7 K samples for the tone classification task and 3 .",
"5 K pairs for the similarity classification task, for a total of 48762 articles.",
"For the baseline models, without GRL, only one sample is used at a time during training; thus, we retain the data generation method described above (to ensure the results are comparable), but ignore the pairings.",
"The training dataset is slightly unbalanced, with a ratio of 4:3 of negatives to positives for the similarity classification task and a ratio of 4:3 of unmatched to matched pairs.",
"The numbers of samples per label and their origin are shown in Table 7 in App.",
"C. Our validation and test sets consist of only positive-negative matched pairs, one from each sample set, and thus are fully balanced.",
"As motivated in Sec. 4, for the main test set (denoted FullTest ) negatives are only selected from the neg_post samples.",
"For the ValidNegPre test set, all negatives are manually validated.",
"The text preprocessing steps are described in App.",
"B. 7 Results The results from our evaluation on the FullTest are in Table 3. We observe that models trained with GRL consistently outperform models trained without it, on both the accuracy and PR-AUC metrics.",
"All improvements, except for the Longformer, are 5606 Model PR-AUC Accuracy BoW + MLP 0.6019 0.5913 BoW + GRL 0.6409 0.6102 AvgEmb + MLP 0.6129 0.5848 AvgEmb + GRL 0.6415 0.6084 HAN 0.6271 0.5968 HAN + GRL 0.6459 0.6102 Longformer 0.6798 0.6392 Longformer + GRL 0.6984 0.6432 Table 3: Results using GRL training on FullTest .",
"statistically significant at the = 0 .",
"05 level, using the permutation test to compare PR-AUC values.",
"Larger gains are observed for the BoW+MLP and AvgEmb+MLP models, compared to the HAN and Longformer models.",
"A possible explanation for this is that the simpler models rely only on word-level information, and thus more susceptible to topical biases which GRL mitigates.",
"These results support the motivation behind our data collection method and training framework: by incorporating our knowledge of how samples are related in our dataset and training, models are exposed to different versions of the same content (with and without promotional tone), and can therefore better learn features that are more effective for detecting promotional tone, compared to models that ignore this information.",
"Given our discussion in Sec. 4, we also evaluate the GRL approach on a separate, validated test set, which uses neg_pre rather than neg_post negatives (denoted ValidNegPre ).",
"In this evaluation, we are particularly interested in the effect of excluding neg_pre samples during training.",
"The total number of samples in the training set remains the same as for experiments already reported, but all negatives are sampled from the neg_post samples for each sample set during training.",
"We compare our best model (Longformer+GRL) under these conditions.",
"For brevity, in Table 4 we only show the PR-AUC values, but the same trends hold for accuracy.",
"The original configuration is shown in the top left of the table; with training data including both neg_post and neg_pre , and testing on F ullT est .",
"We note that, using the same training data, the PR-AUC score is slightly lower on the ValidNegPre set (top right) compared to the FullTest set, indicating that these samples may be more difficult to classify correctly.",
"The effect of excluding neg_pre samples during training is shown in the second row.",
"The two training settings achieve similar performance on the FullTest test set, however, the performance on the ValidNegPre dataset is markedly lower when excluding neg_pre samples.",
"This supports our motivation for including neg_pre samples during training, as described in Sec. 4, i.e. that not including them may lead to learning spurious correlations, such as temporal or article development biases.",
"The neg_pre sampling adds useful information during training, despite including noise in the form of false negatives.",
"Since our main goal is to predict promotional tone for a given text, we did not optimise for ranked prediction; however, the pairwise accuracy is of interest since the GRL-based model is trained on pairs.",
"This is similar to the pairwise human evaluation we performed in Sec. 4. For this we calculate the proportion of pairs for which the directionality of the predictions is correct.",
"A score of 0.722 is achieved for the non-GRL Longformer model, compared to 0.741 for the GRL model.",
"The fact that these values are higher than the accuracy values in Table 3 illustrates that there are samples which were incorrectly classified, but whose relative (pair-wise) relationship was correctly predicted.",
"To better understand the differences in predictions made by models trained with GRL, we analyse more closely the test set and our predictions.",
"There are 1318 samples on which both models are correct and 764 on which both are incorrect.",
"There are 320 samples where the non-GRL model is correct while the GRL model is incorrect, and 356 samples in the reverse case.",
"We are interested in the last two categories, where the two models disagree.",
"To better understand these classification categories, we evaluate the pointwise mutual information (PMI; Jurafsky and Martin, 2008) of each word 5607 ( w ) with its classification status ( c ) : P MI ( w, c ) = log 2 ( P ( w, c ) P ( w ) P ( c )) .",
"This gives us an indication of how much higher the probability of observing a word is to be in one of the categories, compared to the full test set.",
"The 50 words with the highest PMI, which were correctly classified as not promotional by our model but mislabeled as promotional by the non-GRL variant, are shown in Table 8 in Appendix D. Without GRL, these words were indicative of promotional tone; but with GRL, their use for promotional tone detection was reduced.",
"Thus, these should be words that are misleading for the tone classifier, but helpful for the similarity classifier.",
"The list includes the terms feminist, femi-nism and female.",
"This topical concentration may be caused by a bias in the training data, whereby there are more positive examples which contain these terms.",
"Such an imbalance in the data may be related to the findings of Wagner et al. (2016), which explores the imbalance in representations of women versus men on Wikipedia.",
"However, this is not the only topical bias we observe in the predictions of the non-GRL model; the terms photograph, photographer, and graphics are also in this list.",
"The PMI values for the opposite case where the non-GRL model is correct while the GRL model is incorrect is shown in Table 9.",
"Here, too, we see some topical groupings; eg.",
"tumor, physi-cians, diagnosis.",
"However, the PMI of these words are lower than that of the samples where the GRL model was correct (with a maximum PMI of 3.92 vs 2.76), meaning that the co-occurrence is on the whole lower.",
"We further evaluate our model on the test set from the per-article track of the SemEval 2019 Shared Task on Hyperpartisan News detection (Kiesel et al., 2019).",
"Their dataset contains 314 positive (hyperpartisan) and 314 negative (not hyperpartisan) news articles.",
"On this dataset, the Longformer+GRL model, trained on our training data, achieves a PR-AUC score of 0.759 (accuracy 0.785), compared to a PR-AUC of 0.736 (accuracy 0.714) when the GRL is omitted (statistically significant; P=0.043 on the signed rank test).",
"The shared task received 42 entries and closed in June Test set No GRL Content Time FullTest 0.6936 0.6984 0.6901 ValidNegPre 0.5769 0.6184 0.5948 SemEval 0.6942 0.785 0.7531 Table 5: The results on each of the test scenarios from Sec. 7, comparing models with no auxiliary task, the content-based task we proposed, and the time-based auxiliary task.",
"2019.",
"Compared against their leaderboard, our model would be ranked eighth, even though it was not trained on the provided training data.",
"A motivation for including the neg_pre samples is that they counteract the temporal bias introduced by only sampling neg_post samples.",
"The gradient reversal layer also provides a debiasing mechanism, used to suppress topic-based biases in our proposed model.",
"To observe the impact of the neg_pre sampling, we also evaluate models trained with a time-based auxiliary task.",
"Specifically, we define the task as predicting which sample in an input pair is earlier in the revision history of an article.",
"We use only neg_post samples, as they were found to be less noisy.",
"Samples are generated from ( neg_post , positive ) pairs as well as ( neg_post , neg_post ) pairs, with the chronological ordering being swapped at random to give an equal probability of both outcomes.",
"The results on each of the test scenarios from Sec. 7 are shown in Table 5, using the Longformer feature encoder and comparing to the results from the original formulation.",
"We also compare to the same model trained without an auxiliary task.",
"The Time-GRL model outperforms the model with no auxiliary task on the ValidNegPre and SemEval datasets, but the content-GRL model scores the highest on all three test sets.",
"This indicates that using neg_pre samples to counter temporal biases and the auxiliary task to counter content biases achieves better performance on this task.",
"Previous work by Aleksandrova et al. (2019) explored a similar dataset creation strategy, using Wikipedia tags to identify sentences with a promotional tone.",
"Our work focuses on document-level promotional tone detection, however, we also compare performance on our dataset using their models to verify whether document-level training captures more information than sentence-level training.",
"We replicate their best performing model and the reported test set score of F1 score of 0.62.",
"To compare performance on our own document-level data, we obtain a prediction for each sentence in an article and apply two aggregation strategies: using the average prediction and the maximum score.",
"We further implement an LSTM model with attention, which is similar to our HAN model without the hierarchical computation.",
"Results are shown in Table 6. In both cases, the mean aggregation yields a slightly better score; however, models trained on our data, both with and without the GRL optimisa-tion, achieve significantly higher scores, providing support that there is useful information contained in WikiEvolve for the task of document-level promotional tone detection.",
"Finally, it worth noting that the LSTM+Attn model performs worse than the BoW+LogReg model.",
"The authors also report comparatively worse performance for a model using FastText (Bo-janowski et al., 2017) embeddings.",
"In this work, we have proposed an alternative data collection method and dataset for promotional tone detection, which leverages the evolution of articles on the platform.",
"To utilise the additional structure in our dataset, we extended the gradient reversal framework to train models which are more effective at detecting promotional tone.",
"This was shown both on our own test set and on a test set from a different domain.",
"We further provided insights on the effects of two negative sampling strategies on Wikipedia.",
"These findings should be useful for researchers who use Wikipedia-based data more broadly, in addition to those who work on biased language detection.",
"Christine de Kock is supported by scholarships from Huawei and the Oppenheimer Memorial Trust.",
"Andreas Vlachos is supported the EPSRC grant no.",
"EP/T023414/1: Opening Up Minds.",
"This work was initiated during an internship at the Wikimedia Foundation.",
"We would like to thank the Foundation for granting us access to their data and resources, and in particular Diego Saez-Trumper for his support of the project."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Syntactic information, especially dependency trees, has been widely used by existing studies to improve relation extraction with better semantic guidance for analyzing the context information associated with the given entities.",
"However, most existing studies suffer from the noise in the dependency trees, especially when they are automatically generated, so that intensively leveraging dependency information may introduce confusions to relation classification and necessary pruning is of great importance in this task.",
"In this paper, we propose a dependency-driven approach for relation extraction with attentive graph convolutional networks (A-GCN).",
"In this approach, an attention mechanism upon graph convolutional networks is applied to different contextual words in the dependency tree obtained from an off-the-shelf dependency parser, to distinguish the importance of different word dependencies.",
"Consider that dependency types among words also contain important contextual guidance, which is potentially helpful for relation extraction, we also include the type information in A-GCN modeling.",
"Experimental results on two English benchmark datasets demonstrate the effectiveness of our A-GCN, which outperforms previous studies and achieves state-of-the-art performance on both datasets.",
"1 1 Introduction Relation extraction (RE), which aims to detect the relationship between entity mentions from raw text, is one of the most important tasks in information extraction and retrieval, and plays a crucial role in supporting many downstream natural language processing (NLP) applications such as text mining (Distiawan et al., 2019), sentiment analysis (Sun * Equal contribution.",
"and summarization (Wang and Cardie, 2012).",
"Recently, neural RE methods (Zeng et al., 2014; Zhang and Wang, 2015; Xu et al., 2015; dos Santos et al., 2015; Zhang et al., 2015; Wang et al., 2016; Zhou et al., 2016; Zhang et al., 2017) with powerful encoders (such as CNN, RNN, and Transformers) have significantly improved model performance for RE without requiring any elaborately designed systems or manually constructed features.",
"These methods are superior in capturing contextual information and thus enable RE systems to better understand the text and identify relations between entities in the given text.",
"Adopting neural models to help RE is not only straightforward and effective, but is also expected to incorporate more diverse and informative knowledge into RE systems.",
"Among all different knowledge sources, syntactic information, especially the dependency trees, have been demonstrated to be beneficial in many studies (Miwa and Bansal, 2016; Zhang et al., 2018; Sun et al., 2020; Chen et al., 2021) because they provide long-distance word connections between useful words and thus accordingly guide the system to better extract relations between entity pairs.",
"However, intensively leveraging dependency information may not always lead to good RE performance, because the noise in the dependency tree can potentially introduce confusions to relation classification (Xu et al., 2015; Yu et al., 2020), especially when those trees are automatically generated.",
"For example, Figure 1 shows an example sentence with its dependency tree, where the dependency connection between pumpkin mixture and bowl may introduce noise when the object is to predict the relation between milk and pumpkin mixture .",
"Therefore, previous studies have always required necessary pruning strategies before encoding the dependency information through a particular model such as LSTM (Xu et al., 2015) or graph convolutional networks (GCN) (Zhang et al., 2018).",
"Because fixed pruning strategies are not guaranteed to result in a sub-tree with all important contextual information included and with all noise filtered out, it is necessary to design an appropriate way for distinguishing the noise in the dependency tree and modelling them accordingly.",
"In this paper, we propose a dependency-driven neural approach for RE, where attentive graph neural network (A-GCN) is proposed to distinguish the important contextual information for this task.",
"Furthermore, given that the dependency types (e.g., nominal subject) that associate with dependency connections are also potentially useful for RE since they contain the syntactic instruction among connected words, we further improve A-GCN by introducing type information into it.",
"Specifically, we first obtain the dependency tree of an input sentence from an off-the-shelf toolkit, then build the graph over the dependency tree, and assign different weights to different labeled dependency connections between any two words, with the weights computed based on the connections and their dependency types, lastly predict relations by the A-GCN according to the learned weights.",
"In doing so, not only is A-GCN able to distinguish important contextual information from dependency trees and leverage them accordingly, such that reliance on pruning strategies is unnecessary, but A-GCN can also leverage the dependency type information that is omitted by most previous studies (in particular, the studies that also use attention mechanism (Guo et al., 2019)).",
"Experimental results on two English benchmark datasets, i.e., ACE2005EN and SemEval 2010 Task 8, demonstrate the effectiveness of our approach to RE through A-GCN equipped with dependency type information.",
"State-of-the-art performance is observed on both datasets.",
"by using A-GCN and incorporates dependency information to improve model performance, where the overall architecture of our model is illustrated in Figure 2. Specifically, given an unstructured input sentence X = x 1 , , x n with n words and let E 1 and E 2 denote two entities in X , our approach predicts the relation r between E 1 and E 2 by",
"b 2 R where TX is the dependency tree of X obtained from an off-the-shelf toolkit, R is the relation type set; p computes the probability of a particular relation r 2 R given the two entities and b r the output of A-GCN, which takes X and TX as the input.",
"Following texts start with a brief introduction of the standard GCN model, then elaborate our proposed A-GCN equipped with dependency type information, and lastly illustrate the process of applying A-GCN to the classification paradigm for RE.",
"2.1 Standard Graph Convolutional Networks Generally, a good text representation is a prerequisite to achieve outstanding model performance (Song et al., 2017; Bojanowski et al., 2017; Song et al., 2018; Song and Shi, 2018; Hajdik et al., 2019).",
"To enhance the text representation and thus obtain a good understanding of the running text, many studies (Song et al., 2009, 2012; Song and Xia, 2013; Xu et al., 2015; Miwa and Bansal, 2016; Zhang et al., 2019; Mandya et al., 2020; Nie et al., 2020) tried to leverage contextual features, such as n-grams and syntactic information, through different model architectures.",
"Among all these architecture choices, graph convolutional networks (GCN) is a widely used architecture to encode the information in a graph, where in each GCN layer, information in each node communicates to its neighbors through the connections between them.",
"The effectiveness of GCN models to encode the contextual information over a graph of an input sentence has been demonstrated by many previous studies (Zhang et al., 2018; Guo et al., 2019; Sun et al., 2020; Chen et al., 2020; Yu et al., 2020; Mandya et al., 2020; Tian et al., 2020c, 2021a).",
"Normally, the graph in the standard GCN model is built from word dependencies and is represented by an adjacency matrix A = ( a i,j ) n n where a i,j = 1 if i = j or there is a dependency connection 2 (arc) between two words x i and x j in the dependency tree TX and a i,j = 0 otherwise.",
"Based on A , for 2 Normally the direction of the connection is ignored.",
"each word x i 2 X , the l -th GCN layer gathers the information carried by its context words in TX and computes the output representation h ( l ) i for x i by: h ( l ) i = \u0000 n X j =1 a i,j W ( l ) h ( l \u0000 1) j + b ( l ) !",
"(2) where h ( l \u0000 1) j denotes the output representation of x j from the ( l 1) -th GCN layer 3 , W ( l ) and b ( l ) are trainable matrices and the bias for the l -th GCN layer, respectively, and \u0000 is the ReLU activation.",
"It is noted that in standard GCN (e.g., Eq.",
"(2)), the connections among words are treated equally (i.e., a i,j is either 0 or 1 ).",
"Therefore, GCN-based models for RE are not able to distinguish the importance of different connections and thus pruning on them is of great importance for RE.",
"Therefore, we propose A-GCN for this task, which uses an attention mechanism to compute the weights for different connections so that the model is able to 3 h (0) j is the output of the encoder for x j .",
"leverage different dependency connections accordingly.",
"In addition, the standard GCN and most previous studies omit the dependency types associated with the dependency connections, where those types contain highly useful information for RE and are introduced into A-GCN in this work.",
"Specifi-cally, we firstly represent dependency types in TX by a type matrix T = ( t i,j ) n n , where t i,j is the dependency type (e.g., nsubj ) associated with the directed dependency connection 4 between x i and x j .",
"Next, we map each type t i,j to its embedding e ti,j .",
"Then, at the l -th GCN layer, the weight for the connection between x i and x j is computed by p ( l ) i,j = a i,j exp s ( l ) i s ( l ) j P nj =1 a i,j exp s ( l ) i s ( l ) j (3) where a i,j 2 A , denotes inner production, and s ( l ) i and s ( l ) i are two intermediate vectors for x i and 4 It means t i,j and t j,i are represented in different dependency types to model directions of connections between x i and x j .",
"x j , respectively, which are computed by s ( l ) i = h ( l \u0000 1) i \u0000 e ti,j (4) and s ( l ) j = h ( l \u0000 1) j \u0000 e ti,j (5) with \u0000 denoting the vector concatenation operation.",
"Afterwards, we apply the weight p ( l ) i,j to the associated dependency connection between x i and x j and obtain the output representation of x i by h ( l ) i = \u0000 n X j =1 p ( l ) i,j W ( l ) e h ( l \u0000 1) j + b ( l ) !",
"j",
"Compared with standard GCN (i.e., Eq.",
"(2)), our approach uses numerical weighting (i.e., p ( l ) i,j 2 [0 , 1] ) rather than a binary choice for a i,j , to distinguish the importance of different connections so as to leverage them accordingly.",
"In addition, we integrate the dependency type information into both the computed weight (i.e., p ( l ) i,j ) and the output representation of x i (i.e., h ( l ) i ), which is not considered in most previous studies.",
"Before applying A-GCN for RE, we firstly encode the input X into hidden vectors by BERT (Devlin et al., 2019) with h (0) i denoting the hidden vector for x i , where the hidden vector (denoted as h X ) for the special sentence initial token [CLS] is used as the representation for the entire sentence.",
"Next, we feed h (0) i to our proposed A-GCN model with L layers and obtain the corresponding output h ( L ) i .",
"Then, we apply the max pooling mechanism to the output hidden vectors of the words that belongs to an entity mention (i.e., E k , k = 1 , 2 ) to compute the representation for entity (denoted as h E k ) by h E k = MaxPooling ( { h ( L ) i | x i 2 E k } ) (8) Afterwards, we concatenate the representations of the sentence (i.e., h X ) and two entities (i.e., h E 1 and h E 2 ) and apply a trainable matrix WR to the computed vector to map it to the output space by o = WR ( h X \u0000 h E 1 \u0000 h E 2 ) (9) ACE05 SEMEVAL # INSTANCESTRAIN 48,198 8,000 DEV 11,854 TEST 10,097 2,717 Table 1: The number of unique instances (i.e., entity pairs) of ACE05 and SemEval benchmark datasets.",
"where o is a |R| -dimensional vector with each of its value referring to a relation type in the relation type set R .",
"Finally, we apply a softmax function of o to predict the relation b r between E 1 and E 2 by b r = arg max exp ( o u ) P |R| u =1 exp ( o u ) (10) with o u representing the value at dimension u in o .",
"In the experiments, we use two English benchmark datasets for RE, namely, ACE2005EN (ACE05) 5 and SemEval 2010 Task 8 (SemEval) 6 (Hendrickx et al., 2010).",
"For ACE05, we use its English section and follow previous studies (Miwa and Bansal, 2016; Christopoulou et al., 2018; Ye et al., 2019) to pre-process it (two small subsets cts and un are removed) and split the documents into training, development, and test sets 7 .",
"For SemEval, we use its official train/test split 8 .",
"The numbers of unique relation types in ACE05 and SemEval are 7 and 19, respectively.",
"We report the number of instances (i.e., entity pairs), for train/dev/test sets of ACE05 and SemEval benchmark datasets in Table 1. 3.2 Dependency Graph Construction To construct graphs for A-GCN, we use Standard CoreNLP Toolkits (SCT) 9 to obtain the dependency tree TX for each input sentence X .",
"Although our approach is able to distinguish the importance of different dependency connections through the attention mechanism, it is still beneficial if we can filter out those dependency connections that bring confusions to RE through particular pruning strategies.",
"Motivated by previous studies (Xu et al., 2015; 5 We obtain the official data (LDC2006T06) from https: //catalog.ldc.upenn.edu/LDC2006T06 .",
"6 The data is downloaded from http://docs.google.",
"com/View?docid=dfvxd49s_36c28v9pmw .",
"7 We follow the train/dev/test splits specified by Miwa and Bansal (2016) at https://github.com/tticoin/ LSTM-ER/tree/master/data/ace2005/split 8 SemEval only includes the training and test sets.",
"9 We download the version 3.9.2 from https:// stanfordnlp.github.io/CoreNLP/ .",
"In detail, local connections include all dependencies that directly connect to the heads of two entities and global connections include all dependencies along the shortest dependency path (SDP) between the head of two entities, where in many cases words that do not directly connected to the two entities are also involved.",
"With an example sentence including two entities (i.e., company and benchmarking ), Figure 3 illustrates the two groups of dependency connections and the resulted adjacency matrix, which is built with the connections from the two groups 10 .",
"It is worth noting that, when the SDP is short, there might be more connections in the local group than that in the global one.",
"Following Soares et al. (2019), we insert four special tokens (i.e., < e1 > , < /e1 > , < e2 > , and < /e2 > ) into the input sentence to mark the boundary 11 of the two entities to be investigated, which allows the encoder to distinguish the position of entities during encoding and thus improves model performance.",
"For the encoder, we try BERT (Devlin et al., 2019), because it is a powerful pre-trained language model which and whose variants have achieved state-of-the-art performance in many NLP tasks (Wu and He, 2019; Soares et al., 2019; Wu et al., 2019; Diao et al., 2020; Song et al., 2020; Antoun et al., 2020; Tian et al., 2020a,b,d, 2021b; Qin et al., 2021; Song et al., 2021).",
"Specifically, we use the uncased version of BERT-base and 10 We do not distinguish the two groups of connections in A-GCN once they are represented by the adjacency matrix.",
"BERT-large 12 following the default settings (e.g., for BERT-base, we use 12 layers of multi-head attentions with 768-dimensional hidden vectors; for BERT-large, we use 24 layers of multi-head attentions with 1024-dimensional hidden vectors).",
"For A-GCN, we randomly initialize all trainable parameters and the dependency type embeddings.",
"For evaluation, we follow previous studies to use the standard micro-F1 scores 13 for ACE05 and use the macro-averaged F1 scores 14 for SemEval.",
"In our experiments, we try different combinations of hyper-parameters, and tune them on the dev set, then evaluate on the test set by the model that achieves the highest F1 score on the dev set.",
"15 4 Results 4.1 Overall Results In the experiments, we run our A-GCN models using BERT-base and BERT-large encoder on graphs with and without applying dependency pruning strategies, which correspond to the graph built upon the combined local and global connections (L + G), as well as the one constructed by the full dependency graph (Full), respectively.",
"We also run baselines with standard GCN and standard graph attentive networks (GAT) (Velickovic et al., 2017) with the same graph.",
"For both standard GCN and A-GCN, we try different numbers of layers (i.e. 1 to 3 layers).",
"In addition, we try BERT-base and BERT-large baselines without using any dependency information.",
"Table 2 shows the F1 scores of our A-GCN 12 We download different BERT models from https:// github.com/huggingface/transformers .",
"13 We use the evaluation script from sklearn framework.",
"14 We use the official evaluation script downloaded from http://semeval2.fbk.eu/scorers/task08/SemEval2010_task8_scorer-v1.2.zip .",
"15 We report the hyper-parameter settings of different models with their size and running speed in Appendix A and B. ID MODELS ACE05 SEMEVAL 1 BERT-BASE 75.31 87.87 2 + GAT (FULL ) 76.16 88.39 3 + GAT (L + G) 75.79 88.53 4 + 1 GCNLAYER (FULL ) 74.91 87.58 5 + 1 A-GCNLAYER (FULL ) 76.63 88.34 6 + 1 GCNLAYER (L + G) 75.51 88.64 7 + 1 A-GCNLAYER (L + G) 77.10 89.03 8 + 2 GCNLAYERS (FULL ) 75.09 88.66 9 + 2 A-GCNLAYERS (FULL ) 77.25 88.70 10 + 2 GCNLAYERS (L + G) 76.11 88.62 11 + 2 A-GCNLAYERS (L + G) 77.30 89.16 12 + 3 GCNLAYERS (FULL ) 75.69 88.54 13 + 3 A-GCNLAYERS (FULL ) 76.26 88.63 14 + 3 GCNLAYERS (L + G) 76.85 88.33 15 + 3 A-GCNLAYERS (L + G) 76.36 88.70",
"(b) BERT-large Table 2: F1 scores of our A-GCN models and the baselines (i.e., BERT-only, standard GAT, and standard GCN) under different settings with BERT-base",
"(a) and BERT-large",
"(b) used.",
"All graph-based models (i.e., GAT, GCN, and A-GCN) are tested with two settings: the first is using the full graph (FULL ) with all dependency connections involved and the second is using the combination of local and global connections (L + G).",
"We also run GCN and A-GCN with different numbers of layers (i.e., 1 to 3 layers) for fair comparisons.",
"models and all the aforementioned baselines on the test set of ACE05 and SemEval.",
"16 There are several observations.",
"First, A-GCN functions well when using BERT-base or BERT-large as encoder, where the consistent improvement is observed over the BERT-only baselines (ID: 1) across two benchmark datasets, even though the BERT baselines have already achieve good performance.",
"Second, for both datasets, A-GCN outperforms GAT (ID: 2, 3) and standard GCN baselines (ID: 4, 6, 8, 10, 12, 14) with the same graph (i.e., either L + G or Full) and equal number of layers.",
"Particularly, when full dependency graph is used, it is noted that, in some cases (e.g., ID: 8 for BERT-base on ACE05), standard GCN obtains very limited improvements (or even worse results) over the BERT-only baseline (ID: 1), whereas our A-GCN models (e.g., ID: 9 for BERT-base) is able to consistently outperform the BERT-only baseline and achieve higher performance.",
"We attribute this observation to the attention mechanism used to weigh different dependency connections, which allows A-GCN to distinguish the noise in the graph and thus leverage useful dependency information accordingly.",
"Third, among the models with different numbers of A-GCN layers, the ones (e.g., ID: 11 for BERT-base and ID: 11 for BERT-large) with two A-GCN layers achieves the highest scores, where similar tread is observed from the standard GCN baselines.",
"Besides, we find that our A-GCN 16 For the same group of models, we report the F1 scores on the development sets in Appendix C and the mean and standard deviation of their test set results in Appendix D. models (as well as the standard GCN baselines) with the local and global connections (i.e., L + G) consistently outperform the ones with full dependency graph (i.e., Full).",
"These observations are relatively intuitive since the dependency information may introduce more noise to RE when it is leveraged in an intensive way (e.g., by using more layers or the full dependency tree without pruning).",
"In addition, we compare our best models (with L + G or Full graphs) using BERT-large encoder and two A-GCN layers (ID: 9 and 11) with previous studies.",
"The test results (F1 scores) are reported in Table 3, where our model with both local and global connections (i.e., L + G) outperforms all previous studies and achieves state-of-the-art performance on the two benchmark datasets.",
"Specifically, compared with Guo et al. (2019) who proposed an graph-based approach with attentions to leverage dependency connections, our approach leverages both dependency connections and dependency types among all input words and thus provides a better way to comprehensively leverage the dependency information.",
"In addition, although Mandya et al. (2020) proposed an approach to leverage both dependency connections and dependency types through attentions, they added the dependency type directly to the input word embeddings along with POS embeddings, and the attention in their approach is a separate stand-alone module which is added on the top of the GCN layer.",
"On the contrary, in our approach, the dependency type MODELS ACE05 SEMEVALXU ET AL .",
"is added to each A-GCN layer and the attention mechanism is directly applied to each dependency connection in the A-GCN layer.",
"Therefore, compared with Mandya et al. (2020), our A-GCN encodes the dependency connections and dependency types in a more intensive manner and thus can better leverage them to guide the process of predicting the relations between the given entities.",
"Dependency information is supposed to be beneficial for RE because it contains long-distance word-word relations, which could be extremely useful when the given two entities are far away from each other in the input sentence.",
"To explore the effect of A-GCN in capturing such long-distance word-word relations to help with RE, we split the test instances into different groups according to their entities' distances (i.e., the number of words between the two entities) and run models on these groups to test their performance.",
"Figure 4 shows the performance of our best performing A-GCN model with BERT-large (ID: 11 in Table 2) and its corresponding standard GCN and BERT-large baselines on the three groups of test instances from the test set of SemEval, where the category name indicates the range of the entity distance.",
"17 It is observed that, A-GCN outperforms the two baselines on all groups of test instances and the improvement becomes larger when the entity distance increases.",
"This observation confirms that our approach is able to leverage dependency information and capture long-distance word-word relations to improve RE.",
"5.2 The Effect of Graph Construction In the main experiments, we try A-GCN with the graph built upon the combined local and global connections (L + G).",
"To explore the effect of the local connections and the global connections for A-GCN, we run our approach using two A-GCN layers with the graph constructed by local connections (L) or global connections (G) alone.",
"Table 4 presents the experimental results (F1 scores) of different models with BERT-base and BERT-large encoders, where the results from BERT-only baselines, A-GCN (L + G), and A-GCN (Full) are also copied from Table 2 for reference.",
"Compared to A-GCN (L + G), models with the graph constructed by either local connections (i.e., A-GCN (L)) or global connections (i.e., A-GCN (G)) achieve lower performance, which complies with our intuition because both groups of connections contain important contextual features for RE.",
"Interestingly, it is found that A-GCN (L) outperforms A-GCN (G) with both BERT-base and BERT-large encoders.",
"A possible explanation could be the following.",
"There are overlaps between local and global connections (e.g., the connection between range and restrictions in Figure 3).",
"Therefore, A-GCN (L) can not only leverage the contextual information associated with the entities themselves, but is also partially 18 bene-fited from the overlapping connections on the SDP between the two entities, which leads A-GCN (L) to achieve a higher performance than A-GCN (G).",
"Compared with the standard GCN, A-GCN enhances it from two aspects: (1) using an attention",
"18 When there is only one word on the shortest dependency path between two entities, all global connections are included in local ones, e.g., defamation and bishop in Figure 2.",
"mechanism to weigh different dependency connections and (2) introducing dependency types to the process to encode more detailed dependency information.",
"To better investigate the effect of each individual enhancement (i.e., the attention mechanism or the dependency type information), we conduct an ablation study on our best model, i.e., two layers of A-GCN (L + G) with BERT-base and BERT-large encoder.",
"Table 5 reports the experimental results of different models, where the performance of BERT-only baseline and the standard GCN baseline (i.e., the one uses neither the attention mechanism nor dependency types) are also reported for reference.",
"The results clearly indicate that, the ablation of either enhancement (i.e., the attention mechanism or the dependency type information) could result in worse results (compared with full A-GCN).",
"Between the two enhancements, the ablation of the attention mechanism hurts A-GCN more, which indicates the ability of distinguishing important connections and leveraging them accordingly plays a more important role in RE.",
"To explore in detail that how A-GCN leverages dependency connections and types to improve RE, we conduct a case study with our A-GCN models with different dependency graphs (i.e., two layers of A-GCN (Full) and A-GCN (L + G) with BERT-large encoder) on an example sentence A central vacuum is a vacuum motor and filtration system built inside a canister. .",
"Figure 5 shows the sentence where both the two models correctly predict the relation between motor ( E 1 ) and canister ATT .",
"( E 2 ) (highlighted in the red color) to be Content-Container , whereas the baseline GCN (Full) and GCN (L + G) models fail to do so.",
"We also visualize the attention weights assigned to different dependency connections extracted from the last A-GCN layer, with darker and thicker lines referring to higher weights.",
"In this example, for A-GCN (Full), we observe that the connection between built and canister along SDP and the connection between inside and canister receive the highest weights, where this is valid because the dependency type, i.e., obl (oblique nominal), associated with the connection (between built and canister ) reveals that canister could be the position where the action (i.e., build ) takes place, and is further confirmed by another dependency connection and type (i.e., case ) between inside and canister .",
"Therefore, it is proved that our model learn from the contextual information carried by such important connections and results in correct RE prediction.",
"Similarly, A-GCN (L + G) also correctly perform RE on this case by highlighting the same dependency connections as those from the A-GCN (Full) with much higher weights (because many dependency connections are filtered out).",
"Recently, neural networks with integrating external knowledge or resources play important roles in RE because of their superiority in better capturing contextual information (Shen and Huang, 2016; Soares et al., 2019).",
"Particularly, as one kind of such knowledge, dependency parses show their effectiveness in supporting RE for its ability Figure 5: Visualizations of weights assigned to different dependency connections of A-GCN (Full) and A-GCN (L + G) for an example input, where darker and thicker lines refer to connections with higher weights.",
"in capturing long-distance word relations (Zhang et al., 2018; Guo et al., 2019).",
"However, intensively leveraging dependency information could introduce confusions to RE (Xu et al., 2016b; Yu et al., 2020) so that necessary pruning is required to alleviate this problem.",
"E.g., Xu et al. (2015) proposed to use the connections along the shortest dependency path between the two entities and apply LSTM to model them; Miwa and Bansal (2016) proposed to prune the original dependency tree into the lowest common ancestor subtree.",
"However, these pruning strategies are either too aggressive or modest, so that the resulted graph might lose some important contexts or filled with more noise.",
"Zhang et al. (2018) adopted GCN to model the dependencies and proposed a trade-off pruning strategy in between Xu et al. (2015) and Miwa and Bansal (2016).",
"Besides, there are other graph-based models for RE that utilize layers of multihead attentions (Guo et al., 2019), dynamic pruning (Yu et al., 2020), and additional attention layers (Mandya et al., 2020) to encode dependency trees.",
"Compared with the aforementioned methods, especially the graph-based ones, our approach offers an alternative to enhance RE with A-GCN by using attention mechanism and dependency type, which are effective and efficient improvement to standard GCN without requiring complicated model design.",
"In this paper, we propose A-GCN to leverage dependency information for relation extraction, where an attention mechanism is applied to dependency connections to applying weighting on both connections and types so as to better distinguish the important dependency information and leverage them accordingly.",
"In doing so, A-GCN is able to dynamically learn from different dependency connections so that less-informative dependencies are smartly pruned.",
"Experimental results and analyses on two English benchmark datasets for relation extraction demonstrate the effectiveness of our approach, especially for entities with long word-sequence distances, where state-of-the-art performance is obtained on both datasets.",
"This work is supported by Chinese Key-Area Research and Development Program of Guangdong Province (2020B0101350001) and NSFC under the project The Essential Algorithms and Technologies for Standardized Analytics of Clinical Texts (12026610).",
"This work is also partially supported by Shenzhen Institute of Artificial Intelligence and Robotics for Society under the project Automatic Knowledge Enhanced Natural Language Understanding and Its Applications (AC01202101001).",
"We also thank Mr. Peilin Zhou for providing the first version of the model architecture figure."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution.",
"Previous works focus on detecting these biases, reducing bias in data representations, and using auxiliary training objectives to mitigate bias during fine-tuning.",
"Although these techniques achieve bias reduction for the task and domain at hand, the effects of bias mitigation may not directly transfer to new tasks, requiring additional data collection and customized annotation of sensitive attributes, and re-evaluation of appropriate fairness metrics.",
"We explore the feasibility and benefits of upstream bias mitigation (UBM) for reducing bias on downstream tasks, by first applying bias mitigation to an upstream model through fine-tuning and subsequently using it for downstream fine-tuning.",
"We find, in extensive experiments across hate speech detection, toxicity detection, occupation prediction, and coreference resolution tasks over various bias factors, that the effects of UBM are indeed transferable to new downstream tasks or domains via fine-tuning, creating less biased downstream models than directly fine-tuning on the downstream task or transferring from a vanilla upstream model.",
"Though challenges remain, we show that UBM promises more efficient and accessible bias mitigation in LM fine-tuning.",
"12 1 Introduction The practice of fine-tuning pretrained language models (PTLMs or LMs), such as BERT (Devlin et al., 2019), has improved prediction performance in a wide range of NLP tasks.",
"However, fine-tuned LMs may exhibit biases against certain protected groups (e.g., gender and ethnic minorities), 1 Code and data: https://github.com/INK-U SC/Upstream-Bias-Mitigation 2 The work was partially done when Xisen Jin was an intern at Snap Inc.",
"as models may learn to associate certain features with positive or negative labels spuriously (Dixon et al., 2018), or propagate bias encoded in PTLMs to downstream classifiers (Caliskan et al., 2017; Bolukbasi et al., 2016).",
"Among many examples, Kurita et al. (2019) demonstrates gender-bias in the pronoun resolution task when models are trained using BERT embeddings, and Kennedy et al. (2020) shows that hate speech classifiers fine-tuned from BERT result in more frequent false positive predictions for certain group identifier mentions ( e.g. , muslim , black ).",
"Approaches for bias mitigation are mostly applied during fine-tuning to reduce bias in a specific downstream task or dataset (Park et al., 2018; Zhang et al., 2018; Beutel et al., 2017) (see Fig. 1",
"(a)).",
"For example, data augmentation approaches reduce the influence of spurious features in the original dataset (Dixon et al., 2018; Zhao et al., 2018; Park et al., 2018), and adversarial learning approaches generate debiased data representations that are exclusive to the downstream model (Kumar et al., 2019; Zhang et al., 2018).",
"These techniques act on biases particular to the given dataset, domain, or task, and require new bias mitigation when switching to a new downstream task or dataset.",
"This can require auxiliary training objectives, the definition of task-specific fairness metrics, the annotation of bias attributes ( e.g. , identifying African American Vernacular English), and the collection of users' demographic data.",
"These drawbacks make bias mitigation inaccessible to the growing community, fine-tuning LMs to new datasets and tasks.",
"In contrast, we investigate initially mitigating bias while fine-tuning an upstream model in one or more upstream datasets, and subsequently achieving reduced bias when fine-tuning for downstream applications (Fig.",
"1",
"(d)), so that bias mitigation is no longer required in downstream training.",
"Similar to transfer learning for enhancing predictive performance in common setups (Pan and Yang, 2010; Dai and Le, 2015), we suggest that LMs that undergo bias mitigation acquire inductive bias that is helpful for reducing harmful biases when fine-tuned on new domains and tasks.",
"In four tasks with known bias factors hate speech detection, toxicity detection, occupation prediction from short bios, and coreference resolution we explore whether upstream bias mitigation of a LM followed by downstream fine-tuning reduces bias for the downstream model.",
"Though previous work has addressed biases in frozen PTLM or word embeddings (Bolukbasi et al., 2016; Zhou et al., 2019; Bhardwaj et al., 2020; Liang et al., 2020; Ravfogel et al., 2020), for example by measuring associations between gender and occupations in an embedding space, they do not study their effect on downstream classifiers (Fig.",
"1",
"(b)), while some of them study the effects while keeping the embeddings frozen (Zhao et al., 2019; Kurita et al., 2019; Prost et al., 2019).",
"Bias in these frozen representations can also be directly corrected by removing associations between feature and sensitive attributes (Elazar and Goldberg, 2018; Madras et al., 2018) (Fig.",
"1",
"(c)), but this does not allow predictions to be generated for new data.",
"Our experiments address the following research questions:",
"(a) whether mitigating a single bias factor in the upstream stage is maintained when fine-tuning on new examples from the same domain and task,",
"(b) whether transfer is viable when the downstream domains and tasks are different from the upstream model, and",
"(c) whether we can address multiple kinds of bias with a single upstream model.",
"We perform these experiments under a generic transfer learning framework, noted as Upstream Bias Mitigation (UBM) for Downstream Fine-Tuning for convenience, which consists of two stages: first, in the upstream bias mitigation stage , a LM is fine-tuned with bias mitigation objectives on one or several upstream tasks, and subsequently the classification layer is re-initialized; then, in the downstream fine-tuning stage the encoder from the upstream model, jointly with the new classification layer, are again fine-tuned on a downstream task without additional bias mitigation steps.",
"Using six datasets with previously recognized bias factors, our analysis show overall positive results for the questions above; still, there are challenges remaining to stabilize the results of bias mitigation in challenging setups, e.g. , the multi-bias factor setting.",
"Our contributions are summarized as follows: (1) we propose a new research direction for mitigating bias in fine-tuned models; (2) we perform extensive experiments to study the viability of the upstream bias mitigation framework in various settings; (3) we demonstrate the effectiveness of this research direction, motivating further improvements, tests, and applications.",
"We consider biases against protected groups in classifiers fined-tuned from LMs.",
"In our present analysis, bias is defined as disparate model performance on different subsets of data which are associated with different demographic groups ( e.g. , instances that mention or are generated by different social groups) (Blodgett et al., 2020).",
"Our evaluation of bias aligns with the definition of equalized odds and equal opportunities (Hardt et al., 2016) in previous works of fairness in machine learning.",
"Here, we first outline our experimental setup for exploring the transferability of bias mitigation effects, in which we detail the process of applying UBM and pose three key research questions (section 2.1).",
"We follow by introducing the bias factors studied and the corresponding classification tasks and datasets (section 2.2), and our evaluation protocols and metrics (section 2.3).",
"Our goal is to evaluate the transferability of bias mitigation effects for one or multiple bias factors in downstream fine-tuned models.",
"We follow an Upstream Bias Mitigation (UBM) for Downstream Fine-Tuning procedure, pictured in Figure 2. First, in the Upstream Bias Mitigation phase, an upstream Provide Encoder gs Settings One bias factor Framework No data access Multiple bias factors (train & debias) (train w/o debias) Upstream Model Downstream Model Upstream dataset ! w/ attribute labels Downstream datasets \" ! = ( \" ) !",
"(source) model f s = h s g s , composed of a text encoder g s and a classifier head h s , is trained on one or more upstream datasets D s with bias mitigation algorithms.",
"The encoder g s is to be transferred to downstream (target) domains and tasks while the classifier head h s is discarded.",
"Then, in the Downstream Fine-Tuning phase, the downstream model f t = h t g t utilizes g s to initialize the encoder weights and is fine-tuned for prediction performance without bias mitigation approaches on downstream datasets D t .",
"This UBM process is applied in three settings, summarized below, which each contribute to evaluating the transferability of bias mitigation effects.",
"1. Fine-Tuning on the Same Distribution.",
"In the simplest setting, we fine-tune the downstream model over new examples from the same data distribution as the upstream model.",
"In practice, each dataset is split into two halves, with one used for upstream bias mitigation and the other for downstream fine-tuning.",
"2. Cross-Domain and Cross-Task Fine-Tuning.",
"Similar to how LMs are fine-tuned for various tasks and domains, in a more practical setup, we test whether transfer of bias mitigation effects is viable across domains and tasks.",
"To achieve this, we apply bias mitigation while fine-tuning a LM on one dataset and perform fine-tuning on another.",
"3. Multiple Bias Factors.",
"In the most challenging setup, we train a single upstream model to address multiple bias factors ( e.g. , both dialect bias and gender bias).",
"Such upstream models can be trained with multi-task learning ( i.e. , jointly training over multiple datasets with shared encoder g but different classifier heads h ) while mitigating multiple kinds of bias.",
"Subsequently, the resulting upstream model is transferred to downstream models as be-Dataset Prediction Task Bias GHC (Kennedy et al., 2018) Hate Group Identifier Stormfront (de Gibert et al., 2018) Hate Group Identifier DWMW (Davidson et al., 2017) Toxicity AAVE Dialect FDCL (Founta et al., 2018) Toxicity AAVE Dialect BiasBios (De-Arteaga et al., 2019) Occupation Gender Stereotyping OntoNotes 5.0 (Weischedel et al., 2013) Coreference Gender Stereotyping Table 1: Summary of tasks and bias included for study.",
"To ensure our analysis holds true for a variety of domains, tasks, and bias factors, we experiment with three different bias factors studied in previous research along with six different datasets (also summarized in Table 1), described below.",
"Group Identifier Bias.",
"This bias refers to higher false positive rates of hate speech predictions for sentences containing specific group identifiers, which is harmful to protected groups by misclassifying innocuous text ( e.g. , I am a Muslim) as hate speech.",
"We include two datasets for study, namely the Gab Hate Corpus (GHC; Kennedy et al., 2018) and the Stormfront corpus (de Gibert et al., 2018).",
"Both datasets contain binary labels for hate and non-hate instances, though with differences in the labeling schemas and domains.",
"AAVE Dialect Bias.",
"Sap et al. (2019) show that offensive and hate speech classifiers yield a higher false positive rate on text written in African American Vernacular English (AAVE).",
"This bias brings significant harm to the communities that uses AAVE, for example, by leading to the disproportionate removal of the text written in AAVE in social media platforms (Blodgett et al., 2020).",
"We include two datasets for study: FDCL (Founta et al., 2018) and DWMW (Davidson et al., 2017).",
"In both datasets, we treat abusive , hateful and spam together as harmful outcomes ( i.e. , false positives for each are harmful) to compute false positive rates.",
"Following Sap et al. (2019), we use an off-the-shelf AAVE dialect predictor (Blodgett et al., 2016) to identify examples written in AAVE.",
"Gender Stereotypical Bias.",
"Zhao et al. (2018) summarize a list of occupations that are prone to be stereotyped in practice, leading to coreference resolutions models and occupation prediction models having biases in performance in proand anti-stereotypical instances when trained on short bios.",
"We train the coreference resolution model on the OntoNotes 5.0 dataset (Weischedel et al., 2013) and the occupation classifier on the BiasBios (De-Arteaga et al., 2019) dataset.",
"We evaluate the overall performance of the models on downstream tasks along with appropriate bias metrics for each bias factor, analyzed for each dataset and task in previous works.",
"We expect UBM to minimally affect classification performance while improving on bias metrics.",
"Classification Performance.",
"We report in-domain F1 scores for GHC, Stormfront, OntoNotes 5.0, and accuracy scores for FDCL, DWMW and BiasBios.",
"Following Zhang et al. (2018), for hate speech detection and toxicity detection datasets, we use the equal error rate (EER) threshold for prediction.",
"Group Identifier Bias Metrics.",
"To evaluate group identifier bias, we evaluate false positive rate (FPR) differences, noted as FPRD, between examples mentioning one of 25 group identifiers provided by Kennedy et al. (2020) and the overall FPR.",
"In addition, we followed Kennedy et al. (2020) in using a New York Times articles (NYT) corpus of 25 k non-hate sentences, each mentioning one of 25 group identifiers.",
"This corpus specifically provides an opportunity to measure FPRreported as (NYT Acc.), equivalent to 1 FPR.",
"Additionally, following the evaluation protocol of Dixon et al. (2018) and Zhang et al. (2020), we incorporate the Identity Phrase Templates Test Sets (reported as IPTTS), which consists of 77 k hate and non-hate examples mentioning group identifiers, generated with templates.",
"Following these works, for IPTTS we compute FPRD as (cid:80) z | FPR z FPR overall | , where FPR z is false positive rate on sentences with the group identifier z , and FPR overall is the overall false positive rate.",
"of AAVE examples in the datasets and the noisy outputs of AAVE classifier (Blodgett et al., 2016), we expect the in-domain FPRD metrics to be noisy.",
"Therefore, following Xia et al. (2020), we incorporate the BROD (Blodgett et al., 2016) dataset, which is a large unlabeled collection of Twitter posts written in l.",
"Since in practice only a small portion of texts are toxic or spam, we treat all examples from BROD as normal , and report the accuracy (which equals 1 FPR) on the dataset.",
"Gender Stereotype Metrics.",
"We employ the WinoBias (Zhao et al., 2018) dataset which provides opportunities to evaluate models on pro-stereotypical and anti-stereotypical coreference examples.",
"We report the differences in F1 (F1-Diff) on two subsets of data.",
"On occupation prediction, following Ravfogel et al. (2020), we report mean differences of true positive rate (TPR) differences in predicting each occupation for men and women.",
"Here, we detail the particular bias mitigation algorithms used for implementing UBM, as well as the other baselines used for verifying the transferability of bias mitigation effects.",
"We implement UBM with two different bias mitigation algorithms in the upstream bias mitigation phase: explanation regularization (Kennedy et al., 2020), and adversarial de-biasing (Zhang et al., 2018; Madras et al., 2018; Xia et al., 2020), denoted here as UBM reg and UBM adv , respectively.",
"UBM with Explanation Regularization.",
"Explanation regularization reduces importance placed on spurious surface patterns ( i.e. , words or phrases) during upstream model training.",
"We apply UBM reg to group identifier and AAVE dialect bias, where the set of spurious patterns are group identifiers and the most frequent words, from statistics of the dataset, used by AAVE speakers; we find explanation regularization not effective for gender bias.",
"The importance of a surface pattern w W in the input x , noted as ( w, x ) is measured as the model prediction change when it is removed.",
"The model is trained by optimizing the main learning objective (cid:96) while penalizing importance attributed to patterns w W that exist in the input x .",
"UBM with Adversarial De-biasing.",
"In UBM adv , the upstream model is trained with adversarial debiasing techniques, so that sensitive attributes related to bias ( e.g. , the dialect of the sentence or the gender referenced in the sentence) cannot be predicted from the hidden representations z given by the encoder g .",
"During training, an adversarial classifier head h adv is built upon the encoder and trained to predict sensitive attributes, while the encoder is optimized to prevent the adversarial classifier from success.",
"Formally, the optimization objective is written as, min g,h max h adv (cid:96) c + (cid:96) adv ( h adv g ( x ) , a ) , (2) where a notes the ground truth sensitive attribute, and (cid:96) adv is the cross entropy loss between the predicted sensitive attribute and the ground truth sensitive attribute.",
"As mentioned in Sec. 2.1, upstream models can be trained to mitigate multiple bias factors with multi-task learning on multiple datasets.",
"We separately apply bias mitigation algorithms for each dataset (sharing the same encoder) and note the algorithms applied in the subscript ( e.g. , UBM reg + adv ).",
"Methods without Bias Mitigation.",
"Two types of models were evaluated that did not address bias.",
"First, the Vanilla model is a downstream classifier directly fine-tuned on downstream task from a LM ( e.g. , RoBERTa).",
"Second, Van-Transfer is fine-tuned on upstream datasets without bias mitigation and fine-tuned on downstream datasets.",
"Downstream Bias Mitigation.",
"For reference, we show the results of directly applying explanation regularization, noted as Expl.",
"Reg.",
", or adversarial de-biasing, noted as Adv.",
"Learning , during downstream fine-tuning.",
"In most cases, mitigating bias in downstream classifier should be the most effective way to reduce bias, though this is not always feasible in practice for reasons discussed above.",
"We also consider two simple baselines that could reduce bias in downstream models via heuristics.",
"Emb.",
"Zero zeros out the word embedding of spurious surface patterns (using the same word list as explanation regularization) in PTLMs before fine-tuning.",
"We also include Emb.",
"Zero.",
"Trans , which GHC B Metrics In-domainF1( ) In-domainFPRD( ) IPTTSFPRD( ) NYTAcc( ) Non-Transfer (GHC . B) Vanilla 37.91 2.5 35.64 2.2 21.50 2.8 68.55 20 Expl.",
"zeros out embeddings of spurious surface patterns before fine-tuning from an upstream model.",
"The method does not apply to cases where surface patterns related to bias ( e.g. , gendered pronouns) are crucial for prediction, e.g. , coreference resolution.",
"In this section, we present the results of UBM in three settings following the order in Sec. 2.1: transferring to the same data distribution, transferring to different data distributions, transferring from an upstream model with bias mitigation for multiple bias factors.",
"We follow these main analyses with an investigation of the impact of freezing encoder weights before downstream fine-tuning, and lastly with a brief exploration of how UBM's positive results are achieved.",
"Implementation Details.",
"In all experiments reported on below, models are initially fine-tuned from RoBERTa-base.",
"The upstream model is trained for a fixed number of epochs and the checkpoint with the best prediction performance is transferred to the downstream model.",
"See Appendix for more implementation details.",
"We use D s D t as the transfer notation, in which upstream and downstream datasets are respectively represented in the left and right-hand side of the arrow.",
"We first briefly show the results when the downstream model sees new, unseen samples from the same data distribution as the upstream model.",
"In this controlled setting, we isolate and test the basic viability of UBM, which requires that information from the upstream model is retained during downstream fine-tuning.",
"GHC, Stormfront, FDCL and BiasBios were partitioned into two subsets with equal size, noted as subsets A and B of corresponding datasets, to train the upstream and downstream models respectively.",
"Table 2 presents the results for mitigating group identifier bias in the GHC.",
"We see an overall bias reduction, via UBM, by comparing with Vanilla training and Van-Transfer.",
"We include full results and discussions for this simple setting in Appendix.",
"Following the result that UBM is effective in the same-domain setting, we now move to analyzing cross-domain settings in greater depth.",
"For hate speech classification, we perform transfer learning from GHC to Stormfront and from Stormfront to GHC; and for toxicity classification, we perform transfer learning from FDCL to DWMW.",
"We also perform transfer learning from BiasBios (occupa-tion prediction) to OntoNotes 5.0 (coreference res-olution).",
"Table 3 shows the results of cross-domain and task transfer learning and non-transfer baselines.",
"Our findings are summarized below.",
"UBM can reduce bias in different target domains and tasks compared to fine-tuning without bias mitigation.",
"The results of cross-domain and task transfer learning ( i.e. , Stf. GHC, GHC Stf., FDCL DWMW), show that transferring from a less biased upstream model (UBM Reg and UBM Adv ) leads to better downstream bias mitigation compared to directly training without bias mitigation in the target domain (Vanilla).",
"Meanwhile, the in-domain classification performance has improved (on GHC and Stormfront) or been preserved (on DWMW).",
"It is notable that directly mitigating bias (Expl. Reg., Adv. Learning) on DWMW is not effective, which is previously observed by Xia et al. (2020), while transferring from FDCL is successful.",
"does not improve; however, as discussed in our metrics section, the in-domain FPRD is computed over a much smaller set of examples compared to NYT and IPTTS datasets, and is thus less reliable.",
"UBM does not reduce bias compared to Vanilla training on OntoNotes 5.0, but achieves less bias compared to Van-Transfer.",
"This result confirms the effect of bias mitigation in upstream models, but the transfer learning itself has increased the bias.",
"Comparison with Emb.",
"Zero and Emb.",
"Zero.",
"Trans .",
"We find two alternative methods, Emb.",
"Zero and Emb.",
"Zero Trans, also reduce bias on some of the datasets.",
"On GHC, Emb.",
"Zero achieves an in-domain FPRD and IPTTS-FPRD lower than UBM.",
"However, it comes with clear drop of in-domain classification performance.",
"Having observed an overall positive effect of UBM across domains and tasks, next we present the results of experiments on mitigating multiple bias factors with a single upstream model.",
"This involves training an upstream model with multiple bias mitigation objectives across multiple datasets, followed by fine-tuning on a single dataset without bias mitigation.",
"We test three combinations of datasets.",
"First, a multi-task model is trained to jointly mitigate group identifier bias and AAVE dialect bias using GHC and FDCL (GHC + FDCL), and transferred to Stormfront and DWMW.",
"Next, a model is similarly trained jointly on group identifier and AAVE biases on and Stormfront and FDCL (Stf. + FDCL) and transferred to GHC and DWMW.",
"Lastly, models were trained over source datasets 3 We find UBM Reg,Reg,Adv yield degenerated classifiers for OntoNotes (Test F1 < 46 . 00 ) in 5 out of 6 runs.",
"The result is from one successful run.",
"GHC, FDCL, BiasBios (GHC+FDCL+BiasBios) to mitigate all three bias factors, and transferred to Stormfront, DWMW, and OntoNotes.",
"The results are shown in Table 4. Comparison to Single-Dataset Vanilla Baselines.",
"As a basic measure of bias mitigation success, we compare multi-dataset models' results with single-dataset Vanilla training and Van-Transfer.",
"We see UBM with GHC + FDCL successfully reduces both group identifier bias and AAVE dialect bias in downstream models.",
"UBM with GHC + FDCL + BiasBios also successfully reduces group identifier bias in terms of IPTTS, FPRD (which is the most reliable metrics of bias given its large size), and AAVE bias.",
"It also reduces gender stereotypical bias compared to Van-Transfer in some experimental runs, but in an unstable manner, demonstrated by the large variance of F1-Diff and degenerated runs of UBM Reg,Reg,Adv .",
"Results of UBM on Stf.",
"+ FDCL are less promising.",
"We find UBM Reg,Adv,Adv is not successful in reducing group identifier bias.",
"UBM Reg,Reg,Adv could reduce bias on IPTTS-FPRD, but does not improve other metrics.",
"Notably, UBM on Stf.",
"+ FDCL clearly underperform UBM on Stf.",
"only.",
"UBM Reg versus UBM Adv .",
"Empirically, we find using explanation regularization on FDCL (UBM reg,reg , UBM reg,reg,adv ) instead of adversarial learning (UBM reg,adv , UBM reg,adv,adv ) consistently improves bias mitigation performance on other bias factors.",
"Takeaways.",
"Our results show it is possible to reduce multiple bias factors via UBM.",
"However, we have shown that these effects are not automatic for each new dataset added to upstream models for multi-task bias mitigation.",
"In the experiments above, we have shown that the effect of mitigating bias is partially preserved with simple fine-tuning.",
"Next, we study whether freezing the encoders or discouraging their weight changes improves bias mitigation in the target domain, as they intuitively try to retain effect of bias mitigation.",
"However, we find a counterintuitive result: these approaches typically do not achieve reduced downstream bias, and in fact reduce in-domain classification performance.",
"Table 5 shows the results when we keep the weights frozen (Freeze), discouraging weights from changing with (cid:96) 2 -sp regularizer (Li et al., 2018, details in ap-pendix), or standard fine-tuning (fine-tune).",
"In Stf.",
"GHC, freezing the weights contributed to reducing the bias, while (cid:96) 2 -sp failed to help.",
"In GHC Stf and FDCL DWMW, freezing the weights and (cid:96) 2 -sp both increased the bias.",
"A possible reason is that by freezing the encoder, we reduce its expressive power.",
"As a result, the encoder is prone 0 1 2 3 4 Epoch 0.00 0.25 0.50 0.75 1.00 1.25 G r a d i e n t o f ( w , x ) Van.",
"We attempt to interpret why fine-tuning from a debiased upstream model remains less biased during fine-tuning from the perspective of gradient of importance attributed to words w related to bias factors ( e.g. , group identifiers) by the input occlusion algorithm.",
"A large importance attribution usually induces bias.",
"Figure 3 plots the importance attribution of group identifiers ( w, x ) and the norm of its gradient w.r.t. parameters of the encoder g , noted as || ( w, x ) || 2 .",
"UBM reduces the gradient of ( w, x ) , so that ( w, x ) is less likely to change at the beginning of downstream fine-tuning .",
"Fig. 3 shows UBM has not only reduced value of importance attributed to spurious patterns, but also reduced their gradients.",
"The gradient norm is highly indicative about how the importance ( w, x ) will change in the downstream model, because when the loss in Eq.",
"1 in the upstream model is minimized, the gradient ( w, x ) has the same norm but the opposite direction as the main downstream classification objective (cid:96) c .",
"It implies that whether the upstream model converges at an optimum where both objectives agree ( i.e. , gradients are small) can be an important indicator of the success of UBM.",
"Here we review approaches that inform the present work (techniques for bias mitigation) and are related to the basic idea of UBM.",
"Mitigating bias in representations.",
"Bias can be mitigated directly in representations of data.",
"Zhang et al. (2018); Beutel et al. (2017) proposed training a classifier together with an adversarial predictor for sensitive attributes.",
"Madras et al. (2018) further studied re-usable de-biased representations by training a new downstream classifier (potentially with a different classification task) using the learned representations.",
"However, this practice relies on frozen representations (rather than models themselves), which precludes the possibility of generating predictions for new data.",
"Mitigating bias in pretrained models.",
"Another line of work addresses bias in pretrained models ( e.g. , word vectors, BERT, Zhou et al., 2019; May et al., 2019; Bhardwaj et al., 2020; Liang et al., 2020).",
"Many such studies again focus on bias in frozen data representations, and do not study their effects on downstream classifiers.",
"Others alternatively assess the propagation of bias from pretrained models to downstream classifiers: Ravfogel et al. (2020) study algorithms for mitigating bias in pretrained models by de-biasing the learned representations, which can subsequently be used in classifiers as frozen representations.",
"Transferring learning of fairness and robustness.",
"A few previous works have studied related research problems, with significant differences to our work.",
"Though Schumann et al. (2019) theoretically analyzes the transferability of fairness across domains, it assumes simultaneous access of source and target domain data, which does not account for transferring upstream bias mitigation to arbitrary downstream fine-tuned models.",
"Shafahi et al. (2020) study transfer learning of robustness to adversarial attacks under fine-tuning, but do not seek to mitigate bias.",
"We observe that the effects of bias mitigation are indeed transferable in fine-tuning LMs.",
"Future works in fine-tuning LMs can use UBM in order to easily apply the positive effects of bias mitigation methods to new domains and tasks without customized bias mitigation processes or access to sensitive user information.",
"Though UBM does not rival directly mitigating bias on the downstream task, it is more efficient and accessible.",
"Future works can develop the effectiveness of UBM beyond the default scenarios in this paper, and potentially apply it to tasks and settings beyond hate speech, toxicity classification, occupation prediction, and coreference resolution in English corpora.",
"Our analysis demonstrates the effectiveness of Upstream Bias Mitigation for Downstream Fine-Tuning.",
"As we stated in the paper, the reduced efforts of downstream bias mitigation will facilitate broader application of bias mitigation in the growing deep learning community.",
"While we may expect to obtain an off-the-shelf language model that could reduce multiple kinds of bias with UBM, we emphasize that proper evaluation of bias may still be required in downstream side, especially for guaranteed bias mitigation.",
"Currently, our initial analysis of UBM confirms that bias mitigation effects are transferable, but does not provide guarantees of bias mitigation or levels of bias mitigation in the direct setting.",
"The findings in this analysis should identify the potential of UBM to the broader NLP and machine learning communities, which may be extended with new approaches within the UBM framework, or interpretation techniques (as in Sec. 4.5)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"result",
"objective",
"method",
"method",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain"
] |
[
"Domain Adaptation is widely used in practical applications of neural machine translation, which aims to achieve good performance on both general domain and in-domain data.",
"However, the existing methods for domain adaptation usually suffer from catastrophic forgetting, large domain divergence, and model explosion.",
"To address these three problems, we propose a method of divide and conquer which is based on the importance of neurons or parameters for the translation model.",
"In this method, we first prune the model and only keep the important neurons or parameters, making them responsible for both general-domain and in-domain translation.",
"Then we further train the pruned model supervised by the original whole model with knowledge distillation.",
"Last we expand the model to the original size and fine-tune the added parameters for the in-domain translation.",
"We conducted experiments on different language pairs and domains and the results show that our method can achieve significant improvements compared with several strong baselines.",
"Neural machine translation (NMT) models (Kalch-brenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) are data-driven and hence require large-scale training data to achieve good performance (Zhang et al., 2019a).",
"In practical applications, NMT models usually need to produce translation for some specific domains with only a small quantity of in-domain data available, so domain adaptation is applied to address the problem.",
"A typical domain adaptation scenario as discussed in Freitag and Al-Onaizan (2016) is that an NMT model have been trained with large-scale general-domain data and then is adapted to specific Corresponding author: Yang Feng.",
"domains, hoping the model can fit in-domain data well meanwhile the performance will not degrade too much on the general domain.",
"Towards this end, many researchers have made their attempts.",
"The fine-tuning method (Luong and Manning, 2015) performs in-domain training based on the general-domain model by first training the model on general-domain data and then continuing to train on in-domain data.",
"Despite its convenience for use and high-quality for in-domain translation, this method suffers from catastrophic forgetting which leads to poor performance in the previous domains.",
"Regularization-based methods (Dakwale and Monz, 2017; Thompson et al., 2019; Barone et al., 2017; Khayrallah et al., 2018) instead introduce an additional loss to the original objective so that the translation model can trade off between general-domain and in-domain.",
"This kind of methods usually has all the parameters shared by general-domain and in-domain, with the assumption that the optimal parameter spaces for all the domains will overlap with each other, and retaining these overlapped parameters can balance over all the domains.",
"This assumption is feasible when the domains are similar, but when the divergence of the domains is large, it is not reasonable anymore.",
"In contrast, the methods with domain-specific networks (Dakwale and Monz, 2017; Wang et al., 2019; Bapna and Firat, 2019; Gu et al., 2019) can be often (but not always) immune to domain divergence as it can capture domain-specific features.",
"But unfortunately, as the number of domains increases, the parameters of this kind of methods will surge.",
"Besides, the structure of these networks needs to be carefully designed and tuned, which prevents them from being used in many cases.",
"Given the above, we propose a method of domain adaptation that can not only deal with large domain divergence during domain transferring but also keep a stable model size even with multiple domains.",
"Inspired by the analysis work on NMT (Bau et al., 2019; Voita et al., 2019; Gu and Feng, 2020), we find that only some important parameters in a well-trained NMT model play an important role when generating the translation and unimportant parameters can be erased without affecting the translation quality too much.",
"According to these findings, we can preserve important parameters for general-domain translation, while tuning unimportant parameters for in-domain translation.",
"To achieve this, we first train a model on the general domain and then shrink the model with neuron pruning or weight pruning methods, only retaining the important neurons/parameters.",
"To ensure the model can still perform well on general-domain data, we adjust the model on in-domain data with knowledge distillation where the original whole model is used as the teacher and the pruned model as the student.",
"Finally, we expand the model to the original size and fine-tune the added parameters on the in-domain data.",
"Experimental results on different languages and domains show that our method can avoid catastrophic forgetting on general-domain data and achieve significant improvements over strong baselines on multiple in-domain data sets.",
"Our contributions can be summarized as follows: We prove that the parameters that are unimportant for general-domain data can be utilized to improve in-domain translation quality.",
"Our model can keep superior performance over baselines even when continually transferring to multiple domains.",
"Our model can fit in the continual learning scenario where the data for the previous domains cannot be got anymore which is the common situation in practice.",
"In our work, we apply our method in the framework of TRANSFORMER (Vaswani et al., 2017) which will be briefly introduced here.",
"However, we note that our method can also be combined with other NMT architectures.",
"We denote the input sequence of symbols as x = ( x 1 , . . . , x J ) , the ground-truth sequence as y = ( y 1 , . . . , y K ) and the translation as y = ( y 1 , . . . , y K ) .",
"The Encoder & Decoder The encoder is composed of N identical layers.",
"Each layer has two sublayers.",
"The first is a multi-head self-attention sublayer and the second is a fully connected feed-forward network.",
"Both of the sublayers are followed by a residual connection operation and a layer normalization operation.",
"The input sequence x will be first converted to a sequence of vectors E x = [ E x [ x 1 ]; . . . ; E x [ x J ]] where E x [ x j ] is the sum of word embedding and position embedding of the source word x j .",
"Then, this sequence of vectors will be fed into the encoder and the output of the N th layer will be taken as source hidden states.",
"and we denote it as H .",
"The decoder is also composed of N identical layers.",
"In addition to the same kind of two sublayers in each encoder layer, the cross-attention sublayer is inserted between them, which performs multi-head attention over the output of the encoder.",
"The final output of the N -th layer gives the target hidden states S = [ s 1 ; . . . ; s K ] , where s k is the hidden states of y k .",
"The Objective We can get the predicted probability of the k -th target word over the target vocabulary by performing a linear transformation and a softmax operation to the target hidden states: p ( y k | y <k , x ) exp( W o s k + b o ) , (1) where W o R d model | V t | and | V t | are the size of target vocabulary.",
"The model is optimized by minimizing a cross-entropy loss of the ground-truth sequence with teacher forcing training: L ( ) = 1 KK (cid:88) k =1 log p ( y k | y <k , x ; ) , (2) where K is the length of the target sentence and denotes the model parameters.",
"Knowledge Distillation (KD) method (Hinton et al., 2015) is for distilling knowledge from a teacher network to a student network.",
"Normally, the teacher network is considered to be with higher capability.",
"A smaller student network can be trained to perform comparablely or even better by mimicking the output distribution of the teacher network on the same data.",
"This is usually done by minimizing the cross entropy between the two distributions: LKD ( , T ) = 1 KK (cid:88) k =1 q ( y k | y <k , x ; T ) log p ( y k | y <k , x ; ) , (3) where q denotes the output distribution of the teacher network and and T denote the parameters of the student and teacher network, respectively.",
"The main idea of our method is that different neurons or parameters have different importance to the translation model and hence different roles in domain adaptation.",
"Based on this, we distinguish them into important and unimportant ones and make important neurons or parameters compromise between domains while unimportant ones focus on in-domain.",
"Specifically, our method involves the following steps shown in Figure 1. First, we train a model on the general domain and then evaluate the importance of different neurons or parameters.",
"Then we erase the unimportant neurons or parameters and only keep the ones that are related to the general domain so that our method will not be subjected to domain divergence.",
"Next, we further adjust our model under the framework of knowledge distillation (Hinton et al., 2015) on the in-domain with the unpruned model as the teacher and the pruned model as the student.",
"In this way, the pruned model can regain some of its lost performance because of pruning.",
"Finally, we expand the pruned model to the original size and fine-tune the added parameters for the in-domain.",
"Model pruning aims to find a good subset of neurons and parameters of the general-domain model while maintaining the original performance as much as possible.",
"Therefore, under the premise of retaining most of the model's capability, we want to remove those unimportant neurons or parameters to reduce the size of the whole model first.",
"To achieve this, we adopt two pruning schemes.",
"The first is neuron pruning, where we evaluate the importance of neurons directly and then prune unimportant neurons and relevant parameters.",
"The second is weight pruning, where we evaluate and prune each parameter directly.",
"Neuron Pruning To evaluate the importance of each neuron, we adopt a criterion based on the Taylor expansion (Molchanov et al., 2017), where we directly approximate the change in loss when removing a particular neuron.",
"Let h i be the output produced from neuron i and H represents the set of other neurons.",
"Assuming the independence of each neuron in the model, the change of loss when removing a certain neuron can be represented as: | L ( h i ) | = |L ( H, h i = 0) L ( H, h i ) | , (4) where L ( H, h i = 0) is the loss value if the neuron i is pruned and L ( H, h i ) is the loss if it is not pruned.",
"For the function L ( H, h i ) , its Taylor expansion at point h i = a is: L ( H, h i ) = N (cid:88) n =0 L n ( H, a ) n !",
"( h i a ) n + RN ( h i ) , (5) where L n ( H, a ) is the n -th derivative of L ( H, h i ) evaluated at point a and RN ( h i ) is N -th remainder.",
"(6) The remainder R 1 can be represented in the form of Lagrange: R 1 ( h i ) = 2 L ( H, h i ) 2 h i h 2 i , (7) where (0 , 1) .",
"Then, approximating L ( H, h i = 0) with a first-order Taylor polynomial where h i equals zero: L ( H, h i = 0) = L ( H, h i ) L ( H, h i ) h i h i R 1 ( h i ) .",
"Considering the use of ReLU activation function in the model, the first derivative of loss function tends to be constant, so the second order term tends to be zero in the end of training.",
"Thus, we can ignore the remainder and get the importance evaluation function as follows: TE ( h i ) = | L ( h i ) | = (cid:12)(cid:12)(cid:12)(cid:12) L ( H, h i ) h i h i (cid:12)(cid:12)(cid:12)(cid:12) .",
"In practice, we need to accumulate the product of the activation and the gradient of the objective function w.r.t to the activation, which is easily computed during back-propagation.",
"Finally, the evaluation function is shown as: TE ( h li ) = 1 T (cid:88) t (cid:12)(cid:12)(cid:12)(cid:12) L ( H, h li ) h li h li (cid:12)(cid:12)(cid:12)(cid:12) , (9) where h li is the activation value of the i -th neuron of l -th layer and T is the number of the training examples.",
"The criterion is computed on the general-domain data and averaged over T .",
"Finally, we prune a certain percentage of neurons and relevant parameters in each target layer based on this criterion.",
"where w mn denotes the m -th row and n -th column parameter of the weight matrix W .",
"The weight matrix W represents different parts of the model, e.g., embedding layer, attention layer, output layer, etc.",
"Finally, a certain percentage of parameters in each target parameter matrix are pruned.",
"Though only limited degradation will be brought in performance after removing the unimportant neurons or parameters, we want to further reduce this loss.",
"To achieve this, we minimize the difference in the output distribution of the unpruned and pruned model.",
"In this work, the general-domain model (pa-rameters denoted as G ) acts as the teacher model and the pruned model (parameters denoted as G ) acts as the student model.",
"So, the objective in this training phase is: LKD ( G , G ) = 1 KK (cid:88) k =1 q ( y k | y <k , x ; G ) log p ( y k | y <k , x ; G ) .",
"Considering that the general-domain data is not always available in some scenarios when adapting the model to new domains, e.g., continual learning, we adopt the word-level knowledge distillation method using the in-domain data .",
"Because the teacher model is trained on general-domain, it can still transfer the general-domain knowledge to the student model even with the in-domain data.",
"We can fine-tune the pruned model on general-domain if the data is available which can simplify the training procedure.",
"We have also tried the sentence-level knowledge distillation method, but the results are much worse.",
"The parameters of the teacher model keep fixed during this training phase and the parameters of the pruned model are updated with this KD loss.",
"After convergence, the parameters of the pruned model ( G ) will be solely responsible for the general-domain and will also participate in the translation of in-domain data.",
"These parameters will be kept fixed during the following training phase, so our model won't suffer catastrophic forgetting on the general-domain during the fine-tuning process.",
"After getting the well-trained pruned model, we add new parameters (denoted as I ) to it, which expands the model to its original size.",
"Then we fine-tune these newly added parameters with in-domain data, which is supervised by the ground truth sequences.",
"As we have indicated above, the parameters of the pruned model (denoted as G ), which are responsible for generating the general-domain translation, keep fixed during this training phase.",
"The objective function is: L ( G , I ) = 1 KK (cid:88) k =1 log p ( y k | y <k , x ; G , I ) .",
"After convergence, the parameters of the pruned model ( G ) and new parameters ( I ) are combined together for generating the in-domain translation.",
"Chinese English .",
"For this task, the general-domain data is from WMT 2017 Zh-En translation task that contains 23.97M sentence pairs.",
"The data is mainly related to the News domain.",
"The newsdev2017 and newstest2017 are chosen as the development and test set, respectively.",
"We choose the parallel sentences with the domain label Thesis from the UM-Corpus (Tian et al., 2014) as our in-domain data.",
"This portion covers 15 journal topics in the research area.",
"We filter out the duplicate sentences and then choose 75K, 1K, and 1K sentences randomly as our training, development, and test data, respectively.",
"We tokenize and truecase the English sentences with Moses scripts.",
"1 For the Chinese data, we perform word segmentation by using Stanford Segmenter.",
"2 English French .",
"For this task, the general-domain data is from the UN corpus of the WMT 2014 En-Fr translation task that contains 12.78M sentence pairs, which are mainly related to the News domain.",
"We choose newstest2013 and newstest2014 as our development and test set, respectively.",
"The in-domain data with 53K sentence pairs are from WMT 2019 biomedical translation task, and it is mainly related to the Biomedical domain.",
"We choose 1K and 1K sentences randomly from the corpora as our development and test data, respectively.",
"We tokenize and truecase the corpora.",
"English German .",
"For this task, general-domain data is from the WMT16 En-De translation task which is mainly News texts.",
"It contains about 4.5M sentence pairs.",
"We choose the newstest2013 for validation and newstest2014 for test.",
"For the in-domain data, we use the parallel training data from the IWSLT 2015 which is mainly from the Spoken domain.",
"It contains about 194K sentences.",
"We choose the 2014test for validation and the 2015test for test.",
"We tokenize and truecase the corpora.",
"Besides, integrating operations of 32K, 32K, and 30K are performed to learn BPE (Sennrich et al., 2016) on the general-domain data and then applied to both the general-domain and in-domain data.",
"Then we filter out the sentences which are longer 1 http://www.statmt.org/moses/ 2 https://nlp.stanford.edu/ than 128 sub-words.",
"For the Zh-En translation task, 44K size of the Chinese dictionary and 33K size of the English dictionary are built based on the general-domain data.",
"For the En-Fr and En-De tasks, 32K size of the dictionaries for the source and target languages are also built on the corresponding general-domain data.",
"We use the open-source toolkit called Fairseq-py (Ott et al., 2019) released by Facebook as our Transformer system.",
"The contrast methods can be divided into two categories.",
"The models of the first category are capacity-fixed while the second category are capacity-increased.",
"The first category includes the following systems: General This baseline system is trained only with the general-domain training data.",
"In This baseline system is trained only with the in-domain training data.",
"Fine-tuning (Luong and Manning, 2015) This method just continues to train the general-domain model with the in-domain data.",
"SeqKD (Kim and Rush, 2016) The in-domain source sentences are first translated by the general-domain model.",
"Then the model is further trained with the combined pseudo and real data.",
"Multi-objective Learning (MOL) (Dakwale and Monz, 2017) This method is based on the Fine-tuning method.",
"Besides minimizing the loss between the ground truth words and the output distribution of the network, this method also minimizes the cross-entropy between the output distribution of the general-domain model and the network.",
"The final objective is: LMOL ( ) = L ( ) + LKD ( ) (13) where is the hyper-parameter which controls the contribution of the two parts.",
"The bigger the value, the less degradation on the general-domain.",
"Elastic Weight Consolidation (EWC) (Thomp-son et al., 2019) This method models the importance of the parameters with Fisher information matrix and puts more constrains on the important parameters to let them stay close to the original values during the fine-tuning process.",
"The training objective is: LEWC ( ) = L ( ) + (cid:88) i F i ( i Gi ) 2 (14) where i represents the i -th parameter and F i is the modeled importance for the i -th parameter.",
"Full Bias (Michel and Neubig, 2018) This method adds domain-specific bias term to the output softmax layer and only updates the term as other parts of the general-domain model keep fixed.",
"Adapter (Bapna and Firat, 2019) This methods injects domain-specific adapter modules into each layer of the general-domain model.",
"Each adapter contains a normalization layer and two linear projection layers.",
"The adapter size is set to 64.",
"Multiple-output Layer Learning (MLL) (Dak-wale and Monz, 2017) The method modifies the general-domain model by adding domain-specific output layer for the in-domain and learning these domain specific parameters with respective learning objective.",
"The training objective is: LMLL ( S , G , I ) = L ( S , I ) + LKD ( S , G ) (15) where S is the domain-shared parameters, G and I denote the domain specific parameters for the general-domain and in-domain, respectively.",
"Our Method Pruning Then Expanding (PTE) Our model is trained just as the Method section describes.",
"For the neuron pruning scheme, we prune the last 10% unimportant neurons; for the weight pruning scheme, we prune the last 30% unimportant parameters.",
"To better show the ability of our method, we report the generaland in-domain performance after each training phase.",
"implemented as the base model configuration in Vaswani et al. (2017) strictly.",
"We set the hyper-parameter to 1 for MOL, EWC, and MLL and we will do more analysis on the impact of this hyper-parameter in the next section.",
"We set the learning rate during fine-tuning process to 7 .",
"5 10 5 for all the systems after having tried different values from 1 .",
"5 10 6 to 1 .",
"5 10 3 .",
"In both of our methods, we don't prune the layer-normalization layers in the encoder and decoder, which can make training faster and more stable.",
"For the neuron pruning method, we also don't prune the first layer of the encoder and the last layer of the decoder.",
"Just like the work of Dakwale and Monz (2017), the domain of the test data is known in our experiments.",
"Besides, we use beam search with a beam size of 4 during the decoding process.",
"The final translation is detokenized and then the quality is evaluated using the 4 -gram case-sensitive BLEU (Papineni et al., 2002) with the SacreBLEU tool (Post, 2018).",
"3 The results are given in Table 1. In all the datasets, our weight pruning method outperforms all the baselines.",
"Furthermore, we get the following conclusions: First, the contrast capacity-fixed methods can't handle large domain divergence and still suffer catastrophic forgetting.",
"They perform well in the 3 BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a +version.1.3.6",
"En-De translation task, where the data distributions are similar.",
"They can significantly improve the in-domain translation quality without excessive damage to the general-domain translation quality.",
"However, they perform worse in the En-Fr and Zh-En translation tasks with more different data distributions.",
"The in-domain data contains many low-frequency or out-of-vocabulary tokens of the general-domain data.",
"In this situation, these methods either bring limited in-domain improvements or degrade the general-domain performance too much.",
"In contrast, our method is superior to them in all tasks, especially on the more different domains.",
"This also validates our motivation.",
"Second, the capacity-increased methods can better deal with domain divergence.",
"Compared with them, our method can achieve larger improvements on in-domain since we actually allocate more parameters for in-domain than the capacity-increased methods.",
"Besides, our methods are also more convenient to use in practice because we don't need to specialize the model architecture.",
"The pruning ratio is the only hyper-parameter needed tuning.",
"Lastly, both of our methods are immune to large domain divergence.",
"Moreover, the knowledge distillation can bring modest improvements on the general domain.",
"Compared with the neuron pruning method, the weight pruning method is more effective since it can prune and reutilize more parameters with smaller performance degradation.",
"We conduct experiments under the multi-domain scenario, which lets the model adapt to several different domains.",
"Except for the training data used in the main experiments of the Zh-En task, which are related to the News and Thesis domain, System #Para.",
"we add two datasets from other domains, namely, Spoken and Education .",
"Both of them are chosen randomly from the UM-corpus.",
"Each of them contains about 75K, 1K, and 1K sentence pairs in the training, development, and test set.",
"We test our weight-pruning based method and still prune last 30% unimportant parameters.",
"We compare our method with the basic fine-tuning system and more effective capacity-increased method.",
"The results are shown in Table 2. It shows that our method can get significant improvements on all the domains.",
"For the MOL, EWC, and MLL methods, the hyper-parameter controls the trade-off between the generaland in-domain performance.",
"As for our method, the proportion of model parameters to be pruned has a similar effect.",
"To better show the full generaland in-domain performance tradeoff, we conduct experiments with different hyper-parameters.",
"We compare our method with the best capacity-fixed method EWC and best capacity-increased method MLL.",
"For the EWC and MLL method, we vary from 0 .",
"25 to 2 .",
"5 .",
"We vary the pruning proportion from 5% to 30% for our neuron-pruning method and from 10% to 50% for our weight-pruning method.",
"The results are shown in Figure 2. It shows that our method outperforms EWC at all the operating points significantly.",
"Be-ID System Gen. In.",
"To further understand the impact of each step of our method, we perform further studies by removing or replacing certain steps of our method.",
"We first investigate the necessity of parameter importance evaluation.",
"We train another three models following our method but with the parameters randomly pruned.",
"The results are given in Table 3.",
"It shows that random pruning will give excessive damage to general-domain.",
"Besides, we also train another model that skips the model pruning and knowledge distillation steps and directly fine-tune the unimportant parameters.",
"At last, we perform translation with the whole model on both the general-and in-domain.",
"The results show that the change of unimportant parameters will also lead to catastrophic forgetting on general-domain, which shows the necessity of divide and conquer.",
"To further prove that our method is better at dealing with large domain divergence, we conduct experiments on the En-Fr translation task.",
"Following the method in Moore and Lewis (2010), we score and rank each in-domain sentence pair by calculating the per-word cross-entropy difference between the generaland in-domain language model: Score = ( HG ( s ) HI ( s )) + ( HG ( t ) HI ( t )) (16) where H denotes the language model which is trained with Srilm (Stolcke, 2002), s and t denote the source and target sentence.",
"Then, we split the in-domain data into four parts with equal size and Figure 3: The average BLEU with different domain divergences on the En Fr translation task.",
"train new models with them separately.",
"We compare our weight pruning based method with the EWC and MLL methods.",
"The results are shown in Figure 3.",
"It shows that we can get larger improvements as the data divergence gets larger.",
"Domain Adaptation Recent work on DA can be divided into two categories according to the use of training data.",
"The first category, which is also referred to as multi-domain adaptation, needs the training data from all of the domains.",
"Chu et al. (2017) fine-tunes the model with the mix of the general-domain data and over-sampled in-domain data.",
"Kobus et al. (2017) adds domain-specific tags to each sentence.",
"Zhang et al. (2019b) applies curriculum learning to the DA problem.",
"Britz et al. (2017) adds a discriminator to extract common features across domains.",
"There are also some work (Zeng et al., 2018, 2019; Gu et al., 2019) that adds domain-specific modules to the model to preserve the domain-specific features.",
"Currey et al. (2020) distills multiple expert models into a single student model.",
"The work of Liang et al. (2020) has a similar motivation with ours which also fix the important parameters and prune the unimportant parameters.",
"Compared with their method, our method doesn't need to store the general-domain training data and our method has less degradation on general-domain because we adopt the knowledge distillation method.",
"new domain and the model in use.",
"The biggest challenge for this kind of work is the catastrophic forgetting.",
"Luong and Manning (2015) fine-tunes the general-domain model with the in-domain data.",
"Freitag and Al-Onaizan (2016) ensembles the general-domain model and the fine-tuned model for generating.",
"Saunders et al. (2019) investigates adaptive ensemble weighting for inference.",
"Khayrallah et al. (2018) and Thompson et al. (2019) add regularization terms to let the model parameters stay close to their original values.",
"Dakwale and Monz (2017) minimizes the cross-entropy between the output distribution of the general-domain model and the fine-tuned model.",
"Michel and Neu-big (2018) adds domain-specific softmax bias term to the output layer.",
"Bapna and Firat (2019) injects domain-specific adapter modules into each layer of the general-domain model.",
"Wuebker et al. (2018) only saves the domain-specific offset based on the general-domain model.",
"Wang et al. (2020b) achieves efficient lifelong learning by establishing complementary learning systems.",
"Sato et al. (2020) adapts the vocabulary of a pre-trained NMT model to the target domain.",
"Overall, our work is related to the second type of approach, which is more flexible and convenient in practice.",
"The work of Thompson et al. (2019) and Dakwale and Monz (2017) are most related to our work.",
"Compared with Thompson et al. (2019), our work is better at dealing with large domain divergence, since we add domain-specific parts to the model.",
"In contrast to Dakwale and Monz (2017), our model divides each layer of the model into domain-shared and domain-specific parts, which increases the depth of the in-domain model, intuitively.",
"Besides, our method doesn't need to add parameters, but it can be easily extended when necessary.",
"Model Pruning Model pruning usually aims to reduce the model size or improve the inference efficiency.",
"See et al. (2016) examines three magnitude-based pruning schemes.",
"Zhu and Gupta (2018) demonstrates that large-sparse models outperform comparably-sized small-dense models.",
"Wang et al. (2020a) improves the utilization efficiency of parameters by introducing a rejuvenation approach.",
"Lan et al. (2020) presents two parameter reduction techniques to lower memory consumption and increase the training speed of BERT.",
"In this work, we propose a domain adaptation method based on the importance of neurons and parameters of the NMT model.",
"We make the important ones compromise between domains while unimportant ones focus on in-domain.",
"Based on this, our method consists of several steps, namely, model pruning, knowledge distillation, model expansion, and fine-tuning.",
"The experimental results on different languages and domains prove that our method can achieve significant improvements with model capacity fixed.",
"Further experiments prove that our method can also improve the overall performance under the multi-domain scenario.",
"We thank all the anonymous reviewers for their insightful and valuable comments.",
"This work was supported by National Key R&D Program of China (NO. 2017YFE0192900)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"result",
"result",
"other",
"other"
] |
[
"Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront.",
"As a result, such models are biased by position and often perform a smart selection of sentences from the beginning of the document.",
"When summarizing long narratives, which have complex structure and present information piecemeal, simple position heuristics are not sufficient.",
"In this paper, we propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models.",
"We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays (i.e., extract an optimal sequence of scenes).",
"Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode and improve summarization performance over general extractive algorithms, leading to more complete and diverse summaries.",
"Automatic summarization has enjoyed renewed interest in recent years thanks to the popularity of modern neural network-based approaches (Cheng and Lapata, 2016; Nallapati et al., 2016, 2017; Zheng and Lapata, 2019) and the availability of large-scale datasets containing hundreds of thousands of documentsummary pairs (Sand-haus, 2008; Hermann et al., 2015; Grusky et al., 2018; Narayan et al., 2018; Fabbri et al., 2019; Liu and Lapata, 2019).",
"Most efforts to date have concentrated on the summarization of news articles which tend to be relatively short and formulaic following an inverted pyramid structure which places the most essential, novel and interesting el-Victim: Mike Kimble, found in a Body Farm.",
"Died 6 hours ago, unknown cause of death.",
"CSI discover cow tissue in Mike's body.",
"Cross-contamination is suggested.",
"Probable cause of death: Mike's house has been set on fire.",
"CSI finds blood: Mike was murdered, fire was a cover up.",
"First suspects: Mike's fiance, Jane and her ex-husband, Russ.",
"CSI finds photos in Mike's house of Jane's daughter, Jodie, posing naked.",
"Mike is now a suspect of abusing Jodie.",
"Russ allows CSI to examine his gun.",
"CSI discovers that the bullet that killed Mike was made of frozen beef that melt inside him.",
"They also find beef in Russ' gun.",
"Russ confesses that he knew that Mike was abusing Jody, so he confronted and killed him.",
"Setup New Situation Progress Complications The final push Aftermath Opportunity Change of Plans Point of no Return MajorSetback Climax Figure 1: Example of narrative structure for episode Burden of Proof from TV series Crime Scene Investigation (CSI); turning points are highlighted in color.",
"CSI discovers that the naked photos were taken on a boat, which belongs to Russ.",
"CSI discovers that it was Russ who was abusing his daughter based on fluids found in his sleeping bag and later killed Mike who tried to help Jodie.",
"Russ is given bail, since no jury would convict a protective father.",
"Russ receives a mandatory life sentence.",
"ements of a story in the beginning and supporting material and secondary details afterwards.",
"The rigid structure of news articles is expedient since important passages can be identified in predictable locations (e.g., by performing a smart selection of sentences from the beginning of the document) and the structure itself can be explicitly taken into account in model design (e.g., by encoding the relative and absolute position of each sentence).",
"In this paper we are interested in summarizing longer narratives, i.e., screenplays, whose form and structure is far removed from newspaper articles.",
"Screenplays are typically between 110 and 120 pages long (20k words), their content is broken down into scenes, which contain mostly dialogue (lines the actors speak) as well as descriptions explaining what the camera sees.",
"Moreover, screenplays are characterized by an underlying narrative structure, a sequence of events by which Screenplay Latent Narrative Structure TP1 : Introduction TP3 : Commitment TP2 : Goal definition TP4 : Setback TP5 : Ending Summary scenes V ideo summary relevant to TP2 relevant to TP5 irrelevant Figure 2: We first identify scenes that act as turning points (i.e., key events that segment the story into sec-tions).",
"a story is defined (Cutting, 2016), and by the story's characters and their roles (Propp, 1968).",
"Contrary to news articles, the gist of the story in a screenplay is not disclosed at the start, information is often revealed piecemeal; characters evolve and their actions might seem more or less important over the course of the narrative.",
"From a modeling perspective, obtaining training data is particularly problematic: even if one could assemble screenplays and corresponding summaries (e.g., by mining IMDb or Wikipedia), the size of such a corpus would be at best in the range of a few hundred examples not hundreds of thousands.",
"Also note that genre differences might render transfer learning (Pan and Yang, 2010) difficult, e.g., a model trained on movie screenplays might not generalize to sitcoms or soap operas.",
"Given the above challenges, we introduce a number of assumptions to make the task feasible.",
"Firstly, our goal is to produce informative summaries, which serve as a surrogate to reading the full script or watching the entire film.",
"Secondly, we follow Gorinski and Lapata (2015) in conceptualizing screenplay summarization as the task of identifying a sequence of informative scenes.",
"Thirdly, we focus on summarizing television programs such as CSI: Crime Scene Investigation (Fr-ermann et al., 2018) which revolves around a team of forensic investigators solving criminal cases.",
"Such programs have a complex but well-defined structure: they open with a crime, the crime scene is examined, the victim is identified, suspects are introduced, forensic clues are gathered, suspects are investigated, and finally the case is solved.",
"In this work, we adapt general-purpose extractive summarization algorithms (Nallapati et al., 2017; Zheng and Lapata, 2019) to identify informative scenes in screenplays and instill in them knowledge about narrative film structure (Hauge, 2017; Cutting, 2016; Freytag, 1896).",
"Specifically, we adopt a scheme commonly used by screenwriters as a practical guide for producing successful screenplays.",
"According to this scheme, well-structured stories consist of six basic stages which are defined by five turning points (TPs), i.e., events which change the direction of the narrative, and determine the story's progression and basic thematic units.",
"In Figure 1, TPs are highlighted for a CSI episode.",
"Although the link between turning points and summarization has not been previously made, earlier work has emphasized the importance of narrative structure for summarizing books (Mi-halcea and Ceylan, 2007) and social media content (Kim and Monroy-Hernandez, 2015).",
"More recently, Papalampidi et al. (2019) have shown how to identify turning points in feature-length screenplays by projecting synopsis-level annotations.",
"Crucially, our method does not involve manually annotating turning points in CSI episodes.",
"Instead, we approximate narrative structure automatically by pretraining on the annotations of the TRIPOD dataset of Papalampidi et al. (2019) and employing a variant of their model.",
"We find that narrative structure representations learned on their dataset (which was created for feature-length films), transfer well across cinematic genres and computational tasks.",
"We propose a framework for end-to-end training in which narrative structure is treated as a latent variable for summarization.",
"We extend the CSI dataset (Frermann et al., 2018) with binary labels indicating whether a scene should be included in the summary and present experiments with both supervised and unsupervised summarization models.",
"An overview of our approach is shown in Figure 2.",
"Our contributions can be summarized as follows:",
"(a) we develop methods for instilling knowledge about narrative structure into generic supervised and unsupervised summarization algorithms;",
"(b) we provide a new layer of annotations for the CSI corpus, which can be used for research in long-form summarization; and",
"(c) we demonstrate that narrative structure can facilitate screenplay summarization; our analysis shows that key events identified in the latent space correlate with important summary content.",
"A large body of previous work has focused on the computational analysis of narratives (Mani, 2012; Richards et al., 2009).",
"Attempts to analyze how stories are written have been based on sequences of events (Schank and Abelson, 1975; Chambers and Jurafsky, 2009), plot units (McIntyre and Lapata, 2010; Goyal et al., 2010; Finlayson, 2012) and their structure (Lehnert, 1981; Rumelhart, 1980), as well as on characters or personas in a narrative (Black and Wilensky, 1979; Propp, 1968; Bam-man et al., 2014, 2013; Valls-Vargas et al., 2014) and their relationships (Elson et al., 2010; Agarwal et al., 2014; Srivastava et al., 2016).",
"As mentioned earlier, work on summarization of narratives has had limited appeal, possibly due to the lack of annotated data for modeling and evaluation.",
"Kazantseva and Szpakowicz (2010) summarize short stories based on importance criteria (e.g., whether a segment contains protagonist or location information); they create summaries to help readers decide whether they are interested in reading the whole story, without revealing its plot.",
"Mihalcea and Ceylan (2007) summarize books with an unsupervised graph-based approach operating over segments (i.e., topical units).",
"Their algorithm first generates a summary for each segment and then an overall summary by collecting sentences from the individual segment summaries.",
"Focusing on screenplays, Gorinski and Lapata (2015) generate a summary by extracting an optimal chain of scenes via a graph-based approach centered around the main characters.",
"In a similar fashion, Tsoneva et al. (2007) create video summaries for TV series episodes; their algorithm ranks sub-scenes in terms of importance using features based on character graphs and textual cues available in the subtitles and movie scripts.",
"Vicol et al. (2018) introduce the MovieGraphs dataset, which also uses character-centered graphs to describe the content of movie video clips.",
"Our work synthesizes various strands of research on narrative structure analysis (Cutting, 2016; Hauge, 2017), screenplay summarization (Gorinski and Lapata, 2015), and neural network modeling (Dong, 2018).",
"We focus on extractive summarization and our goal is to identify an optimal sequence of key events in a narrative.",
"We aim to create summaries which re-tell the plot of a story in a concise manner.",
"Inspired by recent neural network-based approaches (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018; Zheng and Lapata, 2019), we develop supervised and unsupervised models for our summarization task based on neural representations of scenes and how these relate to the screenplay's narrative structure.",
"Contrary to most previous work which has focused on characters, we select summary scenes based on events and their importance in the story.",
"Our definition of narrative structure closely follows Papalampidi et al. (2019).",
"However, the model architectures we propose are general and could be adapted to different plot analysis schemes (Field, 2005; Vogler, 2007).",
"To overcome the difficulties in evaluating summaries for longer narratives, we also release a corpus of screenplays with scenes labeled as important (summary wor-thy).",
"Our annotations augment an existing dataset based on CSI episodes (Frermann et al., 2018), which was originally developed for incremental natural language understanding.",
"Let D denote a screenplay consisting of a sequence of scenes D = { s 1 , s 2 ,..., s n } .",
"Our aim is to select a subset D (cid:48) = { s i ,..., s k } consisting of the most informative scenes (where k < n ).",
"Note that this definition produces extractive summaries; we further assume that selected scenes are presented according to their order in the screenplay.",
"We next discuss how summaries can be created using both unsupervised and supervised approaches, and then move on to explain how these are adapted to incorporate narrative structure.",
"Our unsupervised model is based on an extension of TEXTRANK (Mihalcea and Tarau, 2004; Zheng and Lapata, 2019), a well-known algorithm for extractive single-document summarization.",
"In our setting, a screenplay is represented as a graph, in which nodes correspond to scenes and edges between scenes s i and s j are weighted by their similarity e ij .",
"A node's centrality (importance) is measured by computing its degree: centrality ( s i ) = 1 j < i e ij + 2 j > i e ij (1) where 1 + 2 = 1.",
"The modification introduced in Zheng and Lapata (2019) takes directed edges into account, capturing the intuition that the centrality of any two nodes is influenced by their relative position.",
"Also note that the edges of preceding and following scenes are differentially weighted by 1 and 2 .",
"Although earlier implementations of TEXTRANK (Mihalcea and Tarau, 2004) compute node similarity based on symbolic representations such as tf*idf, we adopt a neural approach.",
"Specifically, we obtain sentence representations based on a pre-trained encoder.",
"In our experiments, we rely on the Universal Sentence Encoder (USE; Cer et al. 2018), however, other embeddings are possible.",
"1 We represent a scene by the mean of its sentence representations and measure scene similarity e ij using cosine.",
"2 As in the original TEXTRANK algorithm (Mihalcea and Tarau, 2004), scenes are ranked based on their centrality and the M most central ones are selected to appear in the summary.",
"Most extractive models frame summarization as a classification problem.",
"Following a recent approach (SUMMARUNNER ; Nallapati et al. 2017), we use a neural network-based encoder to build representations for scenes and apply a binary clas-sifier over these to predict whether they should be in the summary.",
"For each scene s i D , we predict a label y i { 0 , 1 } (where 1 means that s i must be in the summary) and assign a score p ( y i | s i , D , ) quantifying s i 's relevance to the summary ( denotes model parameters).",
"We assemble a summary by selecting M sentences with the top p ( 1 | s i , D , ) .",
"We calculate sentence representations via the pre-trained USE encoder (Cer et al., 2018); a scene is represented as the weighted sum of the representations of its sentences, which we obtain from a BiLSTM equipped with an attention mechanism.",
"Next, we compute richer scene representations by modeling surrounding context of a given scene.",
"1 USE performed better than BERT in our experiments.",
"2 We found cosine to be particularly effective with USE representations; other metrics are also possible.",
"We encode the screenplay with a BiLSTM network and obtain contextualized representations s (cid:48) i for scenes s i by concatenating the hidden layers of the forward h i and backward h i LSTM, respectively: s (cid:48) i = [ h i ; h i ] .",
"The vector s (cid:48) i therefore represents the content of the i th scene.",
"We also estimate the salience of scene s i by measuring its similarity with a global screenplay content representation d .",
"The latter is the weighted sum of all scene representations s 1 , s 2 ,..., s n .",
"We calculate the semantic similarity between s (cid:48) i and d by computing the element-wise dot product b i , cosine similarity c i , and pairwise distance u i between their respective vectors: b i = s (cid:48) i (cid:12) d c i = s (cid:48) i d (cid:13)(cid:13) s (cid:48) i (cid:13)(cid:13) (cid:107) d (cid:107) (2) u i = s (cid:48) i d max ( (cid:107) s (cid:48) i (cid:107) 2 (cid:107) d (cid:107) 2 ) (3) The salience v i of scene s i is the concatenation of the similarity metrics: v i = [ b i ; c i ; u i ] .",
"The content vector s (cid:48) i and the salience vector v i are concatenated and fed to a single neuron that outputs the probability of a scene belonging to the summary.",
"3 3.3 Narrative Structure We now explain how to inject knowledge about narrative structure into our summarization models.",
"For both models, such knowledge is transferred via a network pre-trained on the TRIPOD 4 dataset introduced by Papalampidi et al. (2019).",
"This dataset contains 99 movies annotated with turning points.",
"TPs are key events in a narrative that define the progression of the plot and occur between consecutive acts (thematic units).",
"It is often assumed (Cutting, 2016) that there are six acts in a film (Figure 1), each delineated by a turning point (ar-rows in the figure).",
"Each of the five TPs has also a well-defined function in the narrative: we present each TP alongside with its definition as stated in screenwriting theory (Hauge, 2017) and adopted by Papalampidi et al. (2019) in Table 1 (see Appendix A for a more detailed description of narrative structure theory).",
"Papalampidi et al. (2019) identify scenes in movies that correspond to these key events as a means for analyzing the narrative 3 Aside from salience and content, Nallapati et al. (2017) take into account novelty and position-related features.",
"We ignore these as they are specific to news articles and denote the modified model as SUMMARUNNER *.",
"4 https://github.com/ppapalampidi/TRIPOD Turning Point Definition TP1: Opportunity Introductory event that occurs after the presentation of the story setting.",
"They collect sentence-level TP annotations for plot synopses and subsequently project them via distant supervision onto screenplays, thereby creating silver-standard labels.",
"We utilize this silver-standard dataset in order to pretrain a network which performs TP identification.",
"TP Identification Network We first encode screenplay scenes via a BiLSTM equipped with an attention mechanism.",
"We then contextualize them with respect to the whole screenplay via a second BiLSTM.",
"Next, we compute topic-aware scene representations t i via a context interaction layer (CIL) as proposed in Papalampidi et al. (2019).",
"CIL is inspired by traditional segmentation approaches (Hearst, 1997) and measures the semantic similarity of the current scene with a preceding and following context window in the screenplay.",
"Hence, the topic-aware scene representations also encode the degree to which each scene acts as a topic boundary in the screenplay.",
"In the final layer, we employ TP-specific attention mechanisms to compute the probability p ij that scene t i represents the j th TP in the screenplay.",
"Note that we expect the TP-specific attention distributions to be sparse, as there are only a few scenes which are relevant for a TP (recall that TPs are boundary scenes between sections).",
"To encourage sparsity, we add a low temperature value (Hinton et al., 2015) to the softmax part of the attention mechanisms: g ij = tanh ( W j t i + b j ) , g j [ 1 , 1 ] (4) p ij = exp ( g ij / ) Tt = 1 exp ( g t j / ) , T i = 1 p ij = 1 (5) where W j , b j represent the trainable weights of the attention layer of the j th TP.",
"Summarization with Narrative Structure).",
"5 We first present an unsupervised variant which mod-ifies the computation of scene centrality in the directed version of TEXTRANK (Equation (1)).",
"Specifically, we use the pre-trained network described in Section 3.3 to obtain TP-specific attention distributions.",
"We then select an overall score f i for each scene (denoting how likely it is to act as a TP).",
"We set f i = max j [ 1 , 5 ] p ij , i.e., to the p ij value that is highest across TPs.",
"We incorporate these scores into centrality as follows: centrality ( s i )= 1 j < i ( e ij + f j )+ 2 j > i ( e ij + f i ) (6) Intuitively, we add the f j term in the forward sum in order to incrementally increase the centrality scores of scenes as the story moves on and we encounter more TP events (i.e., we move to later sections in the narrative).",
"At the same time, we add the f i term in the backward sum in order to also increase the scores of scenes identified as TPs.",
"Supervised SUMMER We also propose a supervised variant of SUMMER following the basic model formulation in Section 3.3.",
"We still represent a scene as the concatenation of a content vector s (cid:48) and salience vector v (cid:48) , which serve as input to a binary classifier.",
"However, we now modify how salience is determined; instead of computing a general global content representation d for the screenplay, we identify a sequence of TPs and measure the semantic similarity of each scene with this sequence.",
"Our model is depicted in Figure 3.",
"We utilize the pre-trained TP network (Fig-ures",
"3(a) and",
"(b)) to compute sparse attention scores over scenes.",
"In the supervised setting, where gold-standard binary labels provide a training signal, we fine-tune the network in an end-to-end fashion on summarization (Figure",
"3(c)).",
"We compute the TP representations via the attention scores; we calculate a vector t p j as the weighted sum of all topic-aware scene representations t produced via CIL: t p j = i [ 1 , N ] p ij t i , where N is the number of scenes in a screenplay.",
"In practice, only a few scenes contribute to t p j due to the parameter in the softmax function (Equation (5)).",
"A TP-scene interaction layer measures the semantic similarity between scenes t i and latent TP representations t p j (Figure",
"3(c)).",
"Intuitively, a complete summary should contain scenes which 5 We make our code publicly available at https:// github.com/ppapalampidi/SUMMER .",
"are related to at least one of the key events in the screenplay.",
"We calculate the semantic similarity v ij of scene t i with TP t p j as in Equations (2) and (3).",
"We then perform max pooling over vectors v i 1 ,..., v iT , where T is the number of TPs (i.e., five) and calculate a final similarity vector v (cid:48) i for the i th scene.",
"The model is trained end-to-end on the summarization task using BCE , the binary cross-entropy loss function.",
"We add an extra regularization term to this objective to encourage the TP-specific attention distributions to be orthogonal (since we want each attention layer to attend to different parts of the screenplay).",
"We thus maximize the Kullback-Leibler (KL) divergence DKL between all pairs of TP attention distributions t p i , i [ 1 , 5 ] : O = i [ 1 , 5 ] j [ 1 , 5 ] , j (cid:54) = i log 1 DKL (cid:0) t p i (cid:13)(cid:13) t p j (cid:1) + (7) Furthermore, we know from screenwriting theory (Hauge, 2017) that there are rules of thumb as to when a TP should occur (e.g., the Opportunity occurs after the first 10% of a screenplay, Change of Plans is approximately 25% in).",
"It is reasonable to discourage t p distributions to deviate drastically from these expected positions.",
"Focal regularization F minimizes the KL divergence DKL between each TP attention distribution t p i and its expected position distribution th i : F = i [ 1 , 5 ] DKL ( t p i (cid:107) th i ) (8) The final loss L is the weighted sum of all three components, where a , b are fixed during training: L = BCE + aO + bF .",
"Crime Scene Investigation Dataset We performed experiments on an extension of the CSI dataset 6 introduced by Frermann et al. (2018).",
"It consists of 39 CSI episodes, each annotated with word-level labels denoting whether the perpetrator is mentioned in the utterances characters speak.",
"We further collected scene-level binary labels indicating whether episode scenes are important and should be included in a summary.",
"Three human judges performed the annotation task after watching the CSI episodes scene-by-scene.",
"To facilitate the annotation, judges were asked to indicate why they thought a scene was important, citing the following reasons: it revealed",
"(i) the victim,",
"(ii) the cause of death,",
"(iii) an autopsy report,",
"(iv) crucial evidence,",
"(v) the perpetrator, and",
"(vi) the motive or the relation between perpetrator and victim.",
"Annotators were free to select more than one or none of the listed reasons where appropriate.",
"We can think of these reasons as high-level aspects a good summary should cover (for CSI and related crime series).",
"Annotators were not given any information about TPs or narrative structure; the annotation was not guided by theoretical considerations, rather our aim was to produce useful CSI summaries.",
"Table 2 presents the dataset statistics (see also Appendix B for more detail).",
"Implementation Details In order to set the hy-perparameters of all proposed networks, we used a small development set of four episodes from the CSI dataset (see Appendix B for details).",
"After experimentation, we set the temperature of the softmax layers for the TP-specific attentions (Equa-tion (5)) to 0.01.",
"Since the binary labels in the 6 https://github.com/EdinburghNLP/csi-corpus overall episodes 39 scenes 1544 summary scenes 454 per episode scenes 39.58 (6.52) crime-specific aspects 5.62 (0.24) summary scenes 11.64 (2.98) summary scenes (%) 29.75 (7.35) sentences 822.56 (936.23) tokens 13.27k (14.67k) per episode scene sentences 20.78 (35.61) tokens 335.19 (547.61) tokens per sentence 16.13 (16.32) Table 2: CSI dataset statistics; means and (std).",
"supervised setting are imbalanced, we apply class weights to the binary cross-entropy loss of the respective models.",
"We weight each class by its inverse frequency in the training set.",
"Finally, in supervised SUMMER , where we also identify the narrative structure of the screenplays, we consider as key events per TP the scenes that correspond to an attention score higher than 0.05.",
"More implementation details can be found in Appendix C. As shown in Table 2, the gold-standard summaries in our dataset have a compression rate of approximately 30%.",
"During inference, we select the top M scenes as the summary, such that they correspond to 30% of the length of the episode.",
"Is Narrative Structure Helpful?",
"We perform 10-fold cross-validation and evaluate model performance in terms of F1 score.",
"Table 3 summarizes the results of unsupervised models.",
"We present the following baselines: Lead 30% selects the first 30% of an episode as the summary, Last 30% selects the last 30%, and Mixed 30%, randomly selects 15% of the summary from the first 30% of an episode and 15% from the last 30%.",
"We also compare SUMMER against TEXTRANK based on tf*idf (Mihalcea and Tarau, 2004), the directed neural variant described in Section 3.1 without any TP information, a variant where TPs are approximated by their expected position as postulated in screenwriting theory, and a variant that incorporates information about characters (Gorinski and Lapata, 2015) instead of narrative structure.",
"For the character-based TEXTRANK , called SCENESUM , we substitute the f i , f j scores in Equation (6) with character-related importance scores c i similar to the defini-Model F1 Lead 30% 30.66 Last 30% 39.85 Mixed 30% 34.32 TEXTRANK , undirected, tf*idf 32.11 TEXTRANK , directed, neural 41.75 TEXTRANK , directed, expected TP positions 41.05 SCENESUM , directed, character-based weights 42.02 SUMMER 44.70 Table 3: Unsupervised screenplay summarization.",
"tion in Gorinski and Lapata (2015): c i = c C [ c S main ( C )] c C [ c S ] (9) where S is the set of all characters participating in scene s i , C is the set of all characters participating in the screenplay and main ( C ) are all the main characters of the screenplay.",
"We retrieve the set of main characters from the IMDb page of the respective episode.",
"We also note that human agreement for our task is 79.26 F1 score, as measured on a small subset of the corpus.",
"As shown in Table 3, SUMMER achieves the best performance (44.70 F1 score) among all models and is superior to an equivalent model which uses expected TP positions or a character-based representation.",
"This indicates that the pre-trained network provides better predictions for key events than position and character heuristics, even though there is a domain shift from Hollywood movies in the TRIPOD corpus to episodes of a crime series in the CSI corpus.",
"Moreover, we find that the directed versions of TEXTRANK are better at identifying important scenes than the undirected version.",
"We found that performance peaks with 1 = 0 .",
"7 (see Equation (6)), indicating that higher importance is given to scenes as the story progresses (see Appendix D for experiments with different values).",
"In Table 4, we report results for supervised models.",
"Aside from the various baselines in the first block of the table, we compare the neural extractive model SUMMARUNNER * 7 (Nallapati et al., 2017) presented in Section 3.2 with several variants of our model SUMMER .",
"We experimented with randomly initializing the network for TP identification ( P) and with using a pre-trained network ( + P).",
"We also experimented with removing the regularization terms, O and F (Equa-tions (7) and (8)) from the loss ( R).",
"We assess the performance of SUMMER when we follow a two-step approach where we first predict TPs via the pre-trained network and then train a network on screenplay summarization based on fixed TP representations (fixed one-hot TPs), or alternatively use expected TP position distributions as postulated in screenwriting theory (fixed distribu-tions).",
"Finally, we incorporate character-based information into our baseline and create a supervised version of SCENESUM .",
"We now utilize the character importance scores per scene (Equation (9)) as attention scores instead of using a trainable attention mechanism when computing the global screenplay representation d (Section 3.2).",
"Table 4 shows that all end-to-end SUMMER variants outperform SUMMARUNNER *.",
"The best result (52.00 F1 Score) is achieved by pre-trained SUMMER with regularization, outperforming SUMMARUNNER * by an absolute difference of 3.44.",
"The randomly initialized version with no regularization achieves similar performance (51.93 F1 score).",
"For summarizing screenplays, explicitly encoding narrative structure seems to be more beneficial than general representations of scene importance.",
"Finally, two-step versions of SUMMER perform poorly, which indicates that end-to-end training and fine-tuning of the TP identification network on the target dataset is crucial.",
"What Does the Model Learn?",
"Apart from performance on summarization, we would also like to examine the quality of the TPs inferred by SUMMER (supervised variant).",
"Problematically, we do not have any gold-standard TP annotation in the CSI corpus.",
"Nevertheless, we can implicitly assess whether they are meaningful by measuring how well they correlate with the reasons annotators cite to justify their decision to include a scene in the summary (e.g., because it reveals cause of death 7 Our adaptation of SUMMARUNNER that considers content and salience vectors for scene selection. or provides important evidence).",
"where A is the set of all aspect scenes, | A | is the number of aspects, TP is the set of scenes inferred as TPs by the model, A i and TP j are the subsets of scenes corresponding to the i th aspect and j th TP, respectively, and dist ( TP j , A i ) is the minimum TP A",
"distance between j and i in number of scenes.",
"The proportion of aspects covered is given in Table 4, middle column.",
"We find that coverage is relatively low (44.48%) for the randomly initialized SUMMER with no regularization.",
"There is a slight improvement of 7.48% when we force the TP-specific attention distributions to be orthogonal and close to expected positions.",
"Pre-training and regularization provide a significant boost, increasing coverage to 70.25%, while pre-trained SUMMER without regularization infers on average more scenes representative of each TP.",
"This shows that the orthogonal constraint also encourages sparse attention distributions for TPs.",
"Table 5 shows the degree of association between individual TPs and summary aspects (see Appendix D for illustrated examples).",
"We observe that Opportunity and Change of Plans are mostly associated with information about the crime scene and the victim, Climax is focused on the revelation of the motive, while information relating to cause of death, perpetrator, and evidence is captured by both Point of no Return and Major Setback.",
"Overall, the generic Hollywood-inspired TP labels are adjusted to our genre and describe crime-related key events, even though no aspect labels were provided to our model during training.",
"Do Humans Like the Summaries?",
"We also conducted a human evaluation experiment using the summaries created for 10 CSI episodes.",
"8 We produced summaries based on the gold-standard annotations (Gold), SUMMARUNNER *, and the supervised version of SUMMER .",
"Since 30% of an episode results in lengthy summaries (15 minutes on average), we further increased the compression rate for this experiment by limiting each summary to six scenes.",
"For the gold standard condition, we randomly selected exactly one scene 8 https://github.com/ppapalampidi/SUMMER/tree/ master/video_summaries Turning Point Crime scene Victim Death Cause Perpetrator Evidence Motive Opportunity 56.76 52.63 15.63 15.38 2.56 0.00 Change of Plans 27.03 42.11 21.88 15.38 5.13 0.00 Point of no Return 8.11 13.16 9.38 25.64 48.72 5.88 Major Setback 0.00 0.00 6.25 10.25 48.72 35.29 Climax 2.70 0.00 6.25 2.56 23.08 55.88 Table 5: Percentage of aspect labels covered per TP for SUMMER , + P, + R. System Crime scene Victim Death Cause Perpetrator Evidence Motive Overall Rank SUMMARUNNER * 85.71 93.88 75.51 81.63 59.18 38.78 72.45 2.18 SUMMER 89.80 87.76 83.67 81.63 77.55 57.14 79.59 2.00 Gold 89.80 91.84 71.43 83.67 65.31 57.14 76.53 1.82 Table 6: Human evaluation: percentage of yes answers by AMT workers regarding each aspect in a summary.",
"per aspect.",
"For SUMMARUNNER * and SUMMER we selected the top six predicted scenes based on their posterior probabilities.",
"We then created video summaries by isolating and merging the selected scenes in the raw video.",
"We asked Amazon Mechanical Turk (AMT) workers to watch the video summaries for all systems and rank them from most to least informative.",
"They were also presented with six questions relating to the aspects the summary was supposed to cover (e.g., Was the victim revealed in the summary? Do you know who the perpetrator was?).",
"They could answer Yes, No, or Unsure.",
"Five workers evaluated each summary.",
"Table 6 shows the proportion of times participants responded Yes for each aspect across the three systems.",
"Although SUMMER does not improve over SUMMARUNNER * in identifying basic information (i.e., about the victim and perpe-trator), it creates better summaries overall with more diverse content (i.e., it more frequently includes information about cause of death, evidence, and motive).",
"This observation validates our as-sumption that identifying scenes that are semantically close to the key events of a screenplay leads to more complete and detailed summaries.",
"Finally, Table 6 also lists the average rank per system (lower is better), which shows that crowdwork-ers like gold summaries best, SUMMER is often ranked second, followed by SUMMARUNNER * in third place.",
"In this paper we argued that the underlying structure of narratives is beneficial for long-form summarization.",
"We adapted a scheme for identifying narrative structure (i.e., turning points) in Hollywood movies and showed how this information can be integrated with supervised and unsupervised extractive summarization algorithms.",
"Experiments on the CSI corpus showed that this scheme transfers well to a different genre (crime investigation) and that utilizing narrative structure boosts summarization performance, leading to more complete and diverse summaries.",
"Analysis of model output further revealed that latent events encapsulated by turning points correlate with important aspects of a CSI summary.",
"Although currently our approach relies solely on textual information, it would be interesting to incorporate additional modalities such as video or audio.",
"Audiovisual information could facilitate the identification of key events and scenes.",
"Besides narrative structure, we would also like to examine the role of emotional arcs (Vonnegut, 1981; Reagan et al., 2016) in a screenplay.",
"An often integral part of a compelling story is the emotional experience that is evoked in the reader or viewer (e.g., somebody gets into trouble and then out of it, somebody finds something wonderful, loses it, and then finds it again).",
"Understanding emotional arcs may be useful to revealing a story's shape, highlighting important scenes, and tracking how the story develops for different characters over time.",
"We thank the anonymous reviewers for their feedback.",
"We gratefully acknowledge the support of the European Research Council (Lapata; award 681760, Translating Multiple Modalities into Text) and of the Leverhulme Trust (Keller; award IAF-2017-019)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"In this paper, we focus on identifying interactive argument pairs from two posts with opposite stances to a certain topic.",
"Considering opinions are exchanged from different perspectives of the discussing topic, we study the discrete representations for arguments to capture varying aspects in argumentation languages (e.g., the debate focus and the participant behavior).",
"Moreover, we utilize hierarchical structure to model post-wise information incorporating contextual knowledge.",
"Experimental results on the large-scale dataset collected from CMV show that our proposed framework can significantly outperform the competitive baselines.",
"Further analyses reveal why our model yields superior performance and prove the usefulness of our learned representations.",
"Arguments play a central role in decision making on social issues.",
"Striving to automatically understand human arguments, computational argumentation becomes a growing field in natural language processing.",
"It can be analyzed at two levels monological argumentation and dialogical argumentation.",
"Existing research on monological argumentation covers argument structure prediction (Stab and Gurevych, 2014), claims generation (Bilu and Slonim, 2016), essay scoring (Taghipour and Ng, 2016), etc.",
"Recently, dialogical argumentation becomes an active topic.",
"In the process of dialogical arguments, participants exchange arguments on a given topic (Aster-han and Schwarz, 2007; Hunter, 2013).",
"With the popularity of online debating forums, large volume of dialogical arguments are daily formed, concerning wide range of topics.",
"A social media dialogical argumentation example from ChangeMyView sub-reddit is shown in Figure 1. There we show two *Corresponding author CMV: The position of vice president of the USA should be eliminated from our government.",
"Post A: a1:",
"..[If.....the...........president..is.......either.......killed...or..........resigns,....the .....vice...........president...is..a.........horrible.........choice..to......take......over.........office.] a2: The speaker of the House would be more qualified for the position.",
"a3: (cid:58)(cid:58)(cid:58) [I'm (cid:58)(cid:58)(cid:58)(cid:58)(cid:58) willing (cid:58)(cid:58) to (cid:58)(cid:58)(cid:58) bet (cid:58)(cid:58)(cid:58) that (cid:58)(cid:58)(cid:58)(cid:58) John (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) Boehner (cid:58)(cid:58)(cid:58)(cid:58)(cid:58) would (cid:58)(cid:58)(cid:58) have (cid:58)(cid:58)(cid:58) an (cid:58)(cid:58)(cid:58)(cid:58)(cid:58) easier (cid:58)(cid:58)(cid:58)(cid:58) time (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) dealing (cid:58)(cid:58)(cid:58)(cid:58) with (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) congress (cid:58)(cid:58) as (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) president (cid:58)(cid:58)(cid:58) than (cid:58)(cid:58)(cid:58) Joe (cid:58)(cid:58)(cid:58)(cid:58)(cid:58) Biden (cid:58)(cid:58)(cid:58)(cid:58)(cid:58) would (cid:58)(cid:58)(cid:58) due (cid:58)(cid:58) to (cid:58)(cid:58) his (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) constant (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) interaction (cid:58)(cid:58)(cid:58) with (cid:58)(cid:58) it.] a4: If Boehner took office, as a republican, would he do something to veto bills Obama supported?",
"Post B: b1: ............[Seriously,......stop.....this..............hyperbole.] b2: (cid:58)(cid:58)(cid:58) [Do (cid:58)(cid:58)(cid:58) you (cid:58)(cid:58)(cid:58)(cid:58) think (cid:58)(cid:58)(cid:58) that (cid:58)(cid:58)(cid:58) have (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) anything (cid:58)(cid:58) to (cid:58)(cid:58) do (cid:58)(cid:58)(cid:58)(cid:58) with (cid:58)(cid:58) the (cid:58)(cid:58)(cid:58) fact (cid:58)(cid:58)(cid:58) that (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) Boehner (cid:58) is (cid:58) a (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) republican, (cid:58)(cid:58)(cid:58) and (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) republicans (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) control (cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)(cid:58) congress?] b3: That argument has much less to do with the individuals than it does with the current party in control.",
"posts holding opposite stances over the same topic.",
"One is the original post and the other is reply.",
"As can be seen, opinions from both sides are voiced with multiple arguments and the reply post B is organized in-line with post A 's arguments.",
"Here we define an interactive argument pair formed with two arguments from both sides (with the same un-derline), which focuses on the same perspective of the discussion topic.",
"The automatic identification of these pairs will be a fundamental step towards the understanding of dialogical argumentative structure.",
"Moreover, it can benefit downstream tasks, such as debate summarization (San-chan et al., 2017) and logical chain extraction in debates (Botschen et al., 2018).",
"However, it is non-trivial to extract the interactive argument pairs holding opposite stances.",
"Back to the example.",
"Given argument b1 with only four words contained, it is difficult, without richer contextual information, to understand why it has interactive relationship with a1 .",
"In addition, without modeling the debating focuses of arguments, it is likely for models to wrongly predict that b2 has interactive relationship with a4 for sharing more words.",
"Motivated by these observations, we propose to explore discrete argument representations to capture varying aspects (e.g., the debate focus) in argumentation language and learn context-sensitive argumentative representations for the automatic identification of interactive argument pairs.",
"For argument representation learning, different from previous methods focusing on the modeling of continuous argument representations, we obtain discrete latent representations via discrete variational autoencoders and investigate their effects on the understanding of dialogical argumentative structure.",
"For context representation modeling, we employ a hierarchical neural network to explore what content an argument conveys and how they interact with each other in the argumentative structure.",
"To the best of our knowledge, we are the first to explore discrete representations on argumentative structure understanding.",
"In model evaluation, we construct a dataset collected from CMV 1 , which is built as part of our work and has been publicly released 2 .",
"Experimental results show that our proposed model can significantly outperform the competitive baselines.",
"Further analysis on discrete latent variables reveals why our model yields superior performance.",
"At last, we show that the representations learned by our model can successfully boost the performance of argument persuasiveness evaluation.",
"In this section, we first define our task of interactive argument pair identification, followed by a description of how we collect the data for this task.",
"Given a argument q from the original post, a candidate set of replies consisting of one positive reply r + , several negative replies r 1 r u , and their corresponding argumentative contexts, our goal is to automatically identify which reply has interactive relationship with the quotation q .",
"We formulate the task of identifying interactive argument pairs as a pairwise ranking problem.",
"In practice, we calculate the matching score S ( q, r ) for each reply in the candidate set with the quotation q and treat the one with the highest matching score as the winner.",
"Our data collection is built on the CMV dataset released by Tan et al. (2016).",
"In CMV , users submit posts to elaborate their perspectives on a specific topic and other users are invited to argue for the other side to change the posters' stances.",
"The original dataset is crawled using Reddit API.",
"Discussion threads from the period between January 2013 and May 2015 are collected as training set, besides, threads between May 2015 and September 2015 are considered as test set.",
"In total, there are 18,363 and 2,263 discussion threads in training set and test set, respectively.",
"An observation on CMV shows that when users reply to a certain argument in the original post, they quote the argument first and write responsive argument directly, forming a quotation-reply pair.",
"Figure 2 shows how quotation-reply pairs could be identified.",
"Inspired by this finding, we decide to Original Post: ...",
"Strong family values in society lead to great results.",
"I want society to take positive aspects of the early Americans and implement that into society.",
"This would be a huge improvement than what we have now.",
"...",
"User Post: > I want society to take positive aspects of the early Americans and implement that into society.",
"What do you believe those aspects to be?",
"...",
"extract interactive argument pairs with the relation of quotation-reply.",
"In general, the content of posts in CMV is informal, making it difficult to parse an argument in a finer-grain with premise, conclusion and other components.",
"Therefore, following previous setting in Ji et al. (2018), we treat each sentence as an argument.",
"Specifically, we only consider the quotation containing one argument and view the first sentence after the quotation as the reply.",
"We treat the quotation-reply pairs extracted as positive samples and randomly select four replies from other posts that are also related to the original post to pair with the quotation as negative samples.",
"In detail, each instance in our dataset includes the quotation, one positive reply, four negative replies, and the posts where they exist.",
"The posts where they exist refer to argumentative contexts mentioned below.",
"What's more, we remove quotations from argumentative contexts of replies.",
"We keep words with the frequency higher than 15 and this makes the word vocabulary with 20,692 distinct entries.",
"In order to assure the quality of quotation-reply pairs, we only keep the instance where the number of words in the quotation and training set test set # of arg.",
"replies range from 7 to 45.",
"We regard the instances extracted from training set and test set in Tan et al. (2016) for training and test.",
"The number of instances in training and test set is 11,565 and 1,481, respectively.",
"We randomly select 10% of the training instances to form the development set.",
"The statistic information of our dataset is shown in Table 1. To further demonstrate that quotation-reply pairs have interactive relationships, we randomly select 100 instances from the test set and hire two trained annotators who are fluent English speakers to identify interactive argument pairs.",
"The accuracy of the two annotators is 0.83 and 0.93, respectively.",
"The inter-annotator agreement measured by Co-hens Kappa (Carletta, 1996) is 0.82.",
"This confirms the quality of the constructed dataset.",
"The overall architecture of our model is shown in Figure",
"3(a).",
"It takes a quotation, a reply and their corresponding argumentative contexts as inputs, and outputs a real value as its matching score.",
"It mainly consists of three components, namely, Discrete Variational AutoEncoders ( DVAE , Figure",
"3(c)), Argumentative Context Modeling (Fig-ure",
"3(b)) and Argument Matching and Scoring .",
"We learn discrete argument representations via DVAE and employ a hierarchical architecture to obtain the argumentative context representations.",
"The Argument Matching and Scoring integrates some semantic features between the quotation and the reply to calculate the matching score.",
"We employ discrete variational autoencoders (Rolfe, 2017) to reconstruct arguments from auto-encoding and obtain argument representations based on discrete latent variables to capture different aspects of argumentation languages.",
"Encoder.",
"Given an argument x with words w 1 , w 2 , ..., w T , we first embed each word to a dense vector obtaining w (cid:48) 1 , w (cid:48) 2 , ..., w (cid:48) T correspondingly.",
"Then we use a bi-directional GRU (Wang et al., 2018) to encode the argument.",
"h t = BiGRU ( w (cid:48) t , h t 1 ) (1) We obtain the hidden state for a given word w (cid:48) t by concatenating the forward hidden state and backward hidden state.",
"Finally, we consider the last hidden state h T as the continuous representation of the argument.",
"Discrete Latent Variables.",
"We introduce z as a set of K-way categorical variables z = { z 1 , z 2 , ..., z M } , where M is the number of variables.",
"Here, each z i is independent and we can easily extend the calculation process below to every latent variables.",
"Firstly, we calculate the logits l i as follows.",
"l i = W l h Ti + b l (2) where W l RK E stands for the weight matrix, E is the dimension of hidden units in encoder, while b l is a weight vector.",
"However, using discrete latent variables is challenging when training models end-to-end.",
"To alleviate this problem, we use the recently proposed Gumbel-Softmax trick (Lu et al., 2017) to create a differentiable estimator for categorical variables.",
"During training we draw samples g 1 , g 2 , ..., g K from the Gumbel distribution: g k log( log( u )) , where u U (0 , 1) are uniform samples.",
"Then, we compute the log-softmax of l i to get i RK : ik = exp (( l ik + g k ) / ) (cid:80) k exp (( l ik + g k ) / ) (5) is a hyper-parameter.",
"With low temperature , this vector i is close to the one-hot vector representing the maximum index of l i .",
"But with higher temperature, this vector i is smoother.",
"where W ei RK D is the embedding matrix, D is the dimension of hidden units in decoder.",
"Finally, we use a GRU as the decoder to reconstruct the ...",
"Discrete Argument Representations.",
"Through the process of auto-encoding mentioned above, we can reconstruct the argument.",
"The representation that we want to find can capture varying aspects in argumentation languages and contain salient features of the argument.",
"q ( z i | x ) shows the probability distribution of z i over K categories, which contains salient features of the argument on varying aspects.",
"Therefore, we obtain the discrete argument representation by the posterior distribution of discrete latent variables z .",
"R = M (cid:88) i =1 W ei q ( z i | x ) (7) 3.2 Argumentative Context Modeling Here, we introduce contextual information of the quotation and the reply to help identify the interactive argument pairs.",
"The argumentative context contains a list of arguments.",
"Following previous setting in Ji et al. (2018), we consider each sentence as an argument in the context.",
"Inspired by Dong et al. (2017), we employ a hierarchical architecture to obtain argumentative context representations.",
"Argument-level CNN.",
"Given an argument and their embedding forms { e 1 , e 2 , ..., e n } , we employ a convolution layer to incorporate the context information on word level.",
"where W s and b s are weight matrix and bias vector.",
"ws is the window size in the convolution layer and s i is the feature representation.",
"Then, we conduct an attention pooling operation over all the words to get argument embedding vectors.",
"where W m and W u are weight matrix and vector, b m is the bias vector, m i and u i are attention vector and attention weight of the i -th word.",
"a is the argument representation.",
"Document-level BiGRU.",
"Given the argument embedding { a 1 , a 2 , ..., a N } , we employ a bidirectional GRU to incorporate the contextual information on argument level.",
"h ci = BiGRU ( a i , h ci 1 ) (12) Finally, we employ an average pooling over arguments to obtain the context representation C .",
"Once representations of the quotation and the reply are generated, three matching methods are applied to analyze relevance between the two arguments.",
"We conduct element-wise product and element-wise difference to get the semantic features f p = R q R r and f d = R q R r .",
"Furthermore, to evaluate the relevance between each word in the reply and the discrete representation of the quotation, we propose the quotation-guided attention and obtain a new representation of the reply.",
"Quotation-Guided Attention.",
"We conduct dot product between R q and each hidden state representation h rj in the reply.",
"Then, a softmax layer is used to obtain an attention distribution.",
"v j = softmax ( R q h rj ) (13) Based on the attention probability v j of the j -th word in the reply, the new representation of the reply can then be constructed as follows: f r = (cid:88) j v j h rj (14) After obtaining the discrete representations, argumentative context representations and some semantic matching features f p , f d , f r of the quotation and the reply, we use two fully connected layers to obtain a higher-level representation H .",
"Finally, the matching score S is obtained by a linear transformation.",
"H = f ( WH [ R q ; R r ; C q ; C r ; f m ] + b H ) (16)",
"where WH and WS stand for the weight matrices, while b H and b S are weight vectors.",
"The proposed model contains three modules, i.e., the DVAE , argumentative context modeling and argument matching, which are trained jointly.",
"We define the loss function of the overall framework to combine the two effects.",
"where is a hyper-parameter to balance the two loss terms.",
"The first loss term is defined on the DVAE and cross entropy loss is defined as the re-construction loss.",
"We apply the regularization on KL cost term to solve posterior collapse issue.",
"Due to the space limitation, we leave out the derivation details and refer the readers to Zhao et al. (2018).",
"LDV AE = E q ( z | x ) [log p ( x | z )] KL ( q ( z | x ) || p ( z )) (19) The second loss term is defined on the argument matching.",
"We formalize this issue as a ranking task and utilize hinge loss for training.",
"where u is the number of negative replies in each instance.",
"is a margin parameter, S ( q, r + ) is the matching score of the positive pair and S ( q, r i ) is the matching score of the i -th negative pair.",
"We use Glove (Pennington et al., 2014) word embeddings with dimension of 50.",
"The number of discrete latent variables M is 5 and the number of categories for each latent variable is also 5.",
"What's more, the hidden units of GRU cell in encoder are 200 while that for the decoder is 400.",
"We set batch size to 32, filter sizes to 5, filter numbers to 100, dropout with probability of 0.5, temperature to 1. The hyper-parameters in loss function are set as = 10 for max margin and = 1 for controlling the effects of discrete argument representation learning and argument matching.",
"The proposed model is optimized by SGD and applied the strategy of learning rate decay with initial learning rate of 0.1.",
"We evaluate our model on development set at every epoch to select the best model.",
"During training, we run our model for 200 epochs with early-stop (Caruana et al., 2000).",
"For baselines, we consider simple models that rank argument pairs with cosine similarity measured with two types of word vectors: TF-IDF scores (henceforth TF-IDF) and the pre-trained word embeddings from word2vec corpus (henceforth WORD 2V EC ).",
"Also, we compare with the neural models from related areas: MALSTM (Mueller and Thyagarajan, 2016), the popular method for sentence-level semantic matching, and CBCAWOF (Ji et al., 2018), the state-of-the-art model to evaluate the persuasiveness of argumentative comments, which is tailored to fit our task.",
"In addition, we compare with some ablations to study the contribution from our components.",
"Here we first consider MATCH rnn , which uses BiGRU to learn argument representations and explore the match of arguments without modeling the context therein.",
"Then we compare with other ablations that adopt varying argument context modeling methods.",
"Here we consider BiGRU (henceforth MATCH rnn +C b ), which Models P@1 MRR Cosine Similarity based TF-IDF 28.36* 51.66* WORD 2 VEC 28.70* 52.03* Neural-Network based MALSTM (Mueller and Thyagarajan, 2016) 31.26* 52.97* CBCAWOF (Ji et al., 2018) 56.04* 73.03* Ablation Study MATCH rnn 51.52* 70.57* MATCH rnn +C b 55.98* 73.20* MATCH rnn +C h 57.46* 73.72* MATCH ae +C h 58.27 74.16* MATCH vae +C h 58.61 74.66 Our model 61.17 76.16 Table 2: The performances of different models on our dataset in terms of Mean Reciprocal Rank ( MRR ) and Precision at 1 (denoted as P@1 ).",
"focuses on words in argument context and ignores the argument interaction structure.",
"We also consider a hierarchical neural network ablation (hence-forth MATCH rnn +C h ), which models argument interactions with BiGRU and the words therein with CNN.",
"In addition, we compare with MATCH ae +C h and MATCH vae +C h , employing auto-encoder (AE) and variational AE (VAE), respectively, to take the duty of the DVAE module of our full model.",
"To evaluate the performance of different models, we first show the overall performance of different models for argument pair identification.",
"Then, we conduct three analyses including hyper-parameters sensitivity analysis , discrete latent variables analysis and error analysis to study the impact of hyper-parameters, explain why DVAE performs well on interactive argument pair identification and analyze the major causes of errors.",
"Finally, we apply our model to a downstream task to investigate the usefulness of discrete argument representations.",
"The overall results of different models are shown in Table 2. Mean Reciprocal Rank ( MRR ) and Precision at 1 (denoted as P@1 ) are used for evaluation metrics.",
"We have following findings.",
"Our model significantly outperforms all comparison models in terms of both evaluation metrics.",
"This proves the effectiveness of our model.",
"By modeling context representations, MATCH rnn +C b and MATCH rnn +C h significantly outperform MATCH rnn .",
"This proves that contextual information is helpful for identifying interactive argument pairs.",
"Argumentative contexts often contain a list of arguments.",
"In comparison of MATCH rnn +C b and MATCH rnn +C h , we find that MATCH rnn +C h achieve much better results than MATCH rnn +C b .",
"This demonstrates the effectiveness of representing argumentative contexts on argument level instead of word level.",
"By using autoencoders for argument representation learning, our model, MATCH vae +C h and MATCH ae +C h outperform MATCH rnn +C h .",
"This indicates the effectiveness of argument representation learning.",
"We investigate the impact of two hyper-parameters on our model, namely the number of discrete latent variables M and the number of categories for each latent variable K in DVAE .",
"For studying the impact of M and K , we set them as 1, 3, 5, 7, 9 respectively while keep other hyper-parameters the same as our best model.",
"We report P@1 of different settings.",
"As shown in Figure 4, we observe that curves obtained by changing the two parameters follow similar pattern.",
"When the number increases, P@1 first gradually grows, reaching the highest at position 5 and drops gradually after that.",
"When K and M are relatively high, say larger than 3, our model can always outperform VAE which is the most competitive baseline, indicating the effectiveness of the discrete representation for interactive arguments identification.",
"Here, we try to find out why DVAE performs best on interactive argument pair identification.",
"Given an argument, we set M =5, K =5 and learn the corresponding discrete code set Z code (1) Z code (5) .",
"We use the best model to select correct instances for argument matching in the dataset and cluster all quotations and corresponding replies according to the same discrete code set.",
"We get 2,272 clusters, of which 119 clusters have more than 100 arguments and we find that arguments with the same discrete code set are semantically related.",
"To show the reason why DVAE performs well on our task more intuitively, we select a case from our dataset shown in Table 3 and employ DVAE to learn discrete representations for arguments to capture varying aspects z 1 z 5 .",
"The posterior distributions of discrete latent variables z 1 z 5 for the quotation and replies are shown in Figure 5.",
"As shown in Figure 5, each subgraph shows the distribution of z i on K categories of the quotation and corresponding replies.",
"We can find that the posterior distributions of z 1 z 5 of Positive reply are more similar to those of Quotation compared to other Negative replies .",
"This finding proves that if the two arguments are more semantically related, their posterior distribution on each aspect z i should be more similar.",
"This further interprets why Positive reply has interactive relationship with Quotation and why DVAE performs well on interactive argument pair identification.",
"Quotation: I bet that John Boehner would deal with congress as president more easily than Joe Biden due to his constant interaction with it.",
"Positive reply: Do you think that have anything to do with the fact that Boehner is a republican, and congress is controlled by republicans?",
"Negative reply 1: I would propose that the title of vice president be kept, but to remove their right to succession for presidency.",
"Negative reply 2: Does Biden have the same level of respect from foreign nations needed to guide the country?",
"Negative reply 3: He did lose however, so perhaps people do put weight into the vp choice.",
"Negative reply 4: I don't know why you think this can be ignored.",
"Here, we inspect outputs of our model to identify major causes of errors.",
"Here are two major issues.",
"The number of M and K may not cover the latent space of all arguments in the dataset.",
"Natural language is complex and diverse.",
"If the size of the latent space doesn't fully contain semantic information of the arguments, it will cause the failure of our model.",
"Considering the number of aspects may vary for different topics, it is not perfect to use a universal setting of K and M .",
"Attention Error.",
"In our model, we employ a quotation-guided attention to evaluate the relevance between each word in the reply and the discrete representation of the quotation.",
"If the attention focuses on unimportant words, it causes errors.",
"It might be useful to utilize discrete representation to further regulate the attention procedure.",
"To further investigate the usefulness of our learned representations, we apply them to a downstream task: persuasiveness evaluation for argumentative comments (Tan et al., 2016; Ji et al., 2018).",
"It takes two arguments as input (one is original and another is a reply) and output a score to evaluate the quality of the reply.",
"The reasons for choosing this task are two fold.",
"First, both tasks focus on dialogical arguments.",
"Second, both tasks can be formulated as a pairwise ranking problem.",
"The performance of different models are shown in Table 4.",
"Note that we use the original CMV dataset and follow the previous setup in Tan et al. (2016); Ji et al. (2018).",
"We find that our model outperforms the state-of-the-art method (Ji et al., 2018) by a large margin, which indicates that our learned representation can well help downstream tasks.",
"In this section, we will introduce two major areas related to our work, which are dialogical argumentation and argument representation learning.",
"Computational argumentation is a growing sub-field of natural language processing in which arguments are analyzed in various respects.",
"Previous works mainly focus on analyzing the argumentative structure in texts.",
"Recently, the dialogical argumentation has become an active topic.",
"Dialogical argumentation refers to a series of interactive arguments related to a given topic, involving argument retraction, view exchange, and so on.",
"Existing research covers discourse structure prediction (Liu et al., 2018), dialog summarization (Hsueh and Moore, 2007), etc.",
"There are several attempts to address tasks related to analyzing the relationship between arguments (Wang and Cardie, 2014; Persing and Ng, 2017) and evaluating the quality of persuasive arguments (Habernal and Gurevych, 2016).",
"Gottipati et al. (2013) use sentiment lexicons as a preprocessing step and propose a probabilistic graphical model to predict stance of arguments in their dataset.",
"Park et al. (2011) design several argumentation-motivated features to finish the debate stance classification in Korean newswire discourse.",
"Sridhar et al. (2015) consider the joint stance classification of arguments and relations among them and find a multi-level model will work better.",
"for a combination of post-level and author-level collective modeling of both stance and disagreement could bring further improvements in performance.",
"Wang and Cardie (2014) create a dispute corpus from Wikipedia and use a sentiment analysis to predict the dispute label of arguments.",
"Wei et al. (2016) collect a dataset from CMV and analyze the correlation between disputing quality and disputation behaviors.",
"analyze the disputation action in the online debate.",
"Given an original argument and an argument disputing it, they aims to evaluate the quality of a disputing comment based on the original argument and the discussed topic.",
"Habernal and Gurevych (2016) crowdsource the UKPConvArg1 corpus to study what makes an informal social media argument convincing.",
"They crowdsource the UKPConvArg1 corpus and use SVM and bidirectional LSTM to experiment on their annotated datasets.",
"Tan et al. (2016) pay attention to belief change in the ChangeMyView subreddit, in which an original poster challenges others to change his/her opinion.",
"They construct datasets from CMV and employ logistic regression to predict which reply in the pair is more persuasive.",
"In addition, Persing and Ng (2017) annotate a corpus with persuasiveness scores and the errors they contain to analyze why arguments are unpersuasive.",
"Previous work mainly focuses on analyzing interactions between two arguments in debate.",
"However, there is limited research on the interactions between posts.",
"In this work, we propose a novel task of identifying interactive argument pairs from argumentative posts to further understand the interactions between posts.",
"Our work is also related with some similar tasks, such as question answering and sentence alignment.",
"They focus on the design of attention mechanism to learn sentence representations (Wang et al., 2017a) and their relations with others (Wang et al., 2017b).",
"Our task is inherently different from theirs because our target arguments naturally occur in the complex interaction context of dialogues, which requires additional efforts for understanding the discourse structure therein.",
"Argument representation learning for natural language has been studied widely in the past few years.",
"Previous work discuss prior approaches to learning argument representations from labelled and unlabelled data.",
"There have been attempts to use la-beled/structured data to learn argument representations.",
"Wieting et al. (2016) and Wieting and Gimpel (2017) introduce a large sentential paraphrase dataset and use paraphrase data to learn an encoder that maps synonymous phrases to similar embeddings.",
"Wieting et al. (2017) explore the use of machine translation to obtain more paraphrase data via back-translation of bilingual argument pairs for learning paraphrastic embeddings.",
"They show how neural backtranslation could be used to generate paraphrases.",
"Hermann and Blunsom (2013) explore a language-specific encoder applied to each argument and represent the argument by the mean vector of the words involved.",
"They consider minimizing the inner product between paired arguments in different languages as the training objective and do not rely on word alignments.",
"Conneau et al. (2017) propose a model called InferSent, which is used as the baseline as it served as the inspiration for the inclusion of the SNLI task in the multitask model.",
"They prove that NLI is an effective task for pre-training and transfer learning in obtaining generic argument representations.",
"They train argument encoders from identifying one of three relationships between two given arguments entailment, neutral and contradiction.",
"Results prove that the argument representations learned by this task perform strongly on downstream transfer tasks.",
"Due to the availability of practically unlimited textual data, learning argument representations via unsupervised methods is an attractive proposition.",
"Kiros et al. (2015) present the model called Skip Thought for learning representations by predicting the previous and next argument, which is a generalization of the skip-gram model (Mikolov et al., 2013).",
"Exploiting the relatedness inherent in adjacent arguments, the model is trained by using the encoder to encode a particular argument and then using the decoder to decode words in adjacent arguments.",
"Bowman et al. (2016) introduce variational autoencoders to incorporate distributed latent representations of entire arguments.",
"In addition, Hill et al. (2016) propose the FastSent model, using bag-of-words of arguments to predict the adjacent arguments.",
"Logeswaran and Lee (2018) propose the Quick Thoughts to exploit the closeness of adjacent arguments.",
"They formulate the argument representation learning as a classification problem.",
"Previous work focuses on learning continuous argument representations with no interpretability.",
"In this work, we study the discrete argument representations, capturing varying aspects in argumentation languages.",
"In this paper, we propose a novel task of interactive argument pair identification from two posts with opposite stances on a certain topic.",
"We examine contexts of arguments and induce latent representations via discrete variational autoencoders.",
"Experimental results on the dataset show that our model significantly outperforms the competitive baselines.",
"Further analyses reveal why our model yields superior performance and prove the usefulness of discrete argument representations.",
"The future work will be carried out in two directions.",
"First, we will study the usage of our model for applying to other dialogical argumentation related tasks, such as debate summarization.",
"Second, we will utilize neural topic model for learning discrete argument representations to further improve the interpretability of representations.",
"This work is partially supported by National Natural Science Foundation of China (No.71991471), Science and Technology Commission of Shanghai Municipality Grant (No.20dz1200600).",
"Jing Li is supported by CCF-Tencent Rhino-Bird Young Faculty Open Research Fund (R-ZDCJ), the Hong Kong Polytechnic University internal funds (1-BE2W and 1-ZVRH), and NSFC Young Scientists Fund 62006203."
] | [
"method",
"method",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"objective",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"result",
"result",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"Understanding manipulated media, from automatically generated deepfakes' to manually edited ones, raises novel research challenges.",
"Because the vast majority of edited or manipulated images are benign, such as photoshopped images for visual enhancements, the key challenge is to understand the complex layers of underlying intents of media edits and their implications with respect to disinformation.",
"In this paper, we study Edited Media Understanding Frames , a new conceptual formalism to understand visual media manipulation as structured annotations with respect to the intents, emotional reactions, e ects on individuals, and the overall implications of disinformation.",
"We introduce a dataset for our task, EMU, with 56k question-answer pairs written in rich natural language.",
"We evaluate a wide variety of vision-and-language models for our task, and introduce a new model PELICAN, which builds upon recent progress in pretrained multimodal representations.",
"Our model obtains promising results on our dataset, with humans rating its answers as accurate 48.2% of the time.",
"At the same time, there is still much work to be done and we provide analysis that highlights areas for further progress.",
"The modern ubiquity of powerful image-editing software has led to a variety of new disinformation threats.",
"From AI-enabled deepfakes to low-skilled cheapfakes, attackers edit media to engage in a variety of harmful behaviors, such as spreading disinformation, creating revenge porn, and committing fraud (Paris and Donovan, 2019; Chesney and Citron, 2019; Kietzmann et al., 2020,",
"c.f.).",
"Accordingly, we argue that it is important to develop systems to help spot harmful manipulated media.",
"The rapid growth and virality of social I want to suggest that subject2 has the support of subject1.",
"media requires as such, especially as social media trends towards visual content (Gretzel, 2017).",
"Identifying whether an image or video has been digitally altered (i.e., digital forgery detection) has been a long-standing problem in the computer vision and media forensics communities.",
"This has enabled the development of a suite of detection approaches, such as analyzing pixel-level statistics and compression artifacts (Farid, 2009; Bianchi and Piva, 2012; Bappy et al., 2017) or identifying what the edit was (Tan et al., 2019).",
"However, little work has been done on why an edit is made, which is necessary for identifying harm.",
"Darkening someone's skin in a family photo because background light made them seem quite pale is generally harmless.",
"While such color rebalancing is common, darkening Barack Obama's (or Rafael Warnock's) skin in campaign ads was clearly meant as a harmful edit by the editor that did it.",
"1 We choose to focus on the why we define a schema for approaching the problem of intent and provide a rich set of natural language responses.",
"We also make a significant contribution towards the what: we include a physical-change question, provide rationales based in physical changes, and give structured annotations (bounding boxes) on what was changed in the edit.",
"We introduce Edited Media Understanding Frames (EMU), a new conceptual formalism that captures the notions of why and what in image editing for language and vision systems (Figure 1).",
"Following literature on pragmatic frames (Sap et al., 2017, 2020; Forbes et al., 2020)derived from frame semantics (Baker et al., 1998) we formalize EMU frames along six dimensions that cover a diverse range of inferences necessary to fully capture the scope of visual disinformation.",
"We delve into the concept of intention as discussed by the fake news literature (Rashkin et al., 2017; Shu et al., 2017; Zhou and Zafarani, 2020) to capture editor's intent such as motivation for edit and intent to deceive, as well as the resulting implications of the edited content.",
"For every dimension we collect both a classification label and a free-form text explanation.",
"For example, for frame intent , a model must classify the intent of the edit, and describe why this classification is selected.",
"We then introduce a new dataset for our task, EMU, with 56k annotations over 8k image pairs.",
"To kickstart progress on our task, we introduce a new language and vision model, PELICAN, that leverages recent progress in pretrained multimodal representations of images and text (Tan and Bansal, 2019; Lu et al., 2019; Li et al., 2019).",
"We compare our model to a suite of strong baselines, including a standard VLP model (Zhou et al., 2019), and show key improvement in terms of ability to reason about co-referent subjects in the edit.",
"Nevertheless, our task is far from solved: a significant gap remains 1 How Georgia's Senate race pits the Old South against the New South.",
"https: // www.politico.com / news / 2020 / 12 / 05 / georgia-senate-old-new-south-442423 between the best machine and human accuracy.",
"Our contributions are thus as follows.",
"First, we introduce a new task of Edited Media Understanding Frames, which requires a deep understanding of why an image was edited, and a corresponding dataset, EMU, with 56k captions that cover diverse inferences.",
"In addition, we introduce a new model, PELICAN, improving over competitive language-and-vision transformer baselines.",
"Our empirical study demonstrates promising results, but significant headroom remains.",
"We release our dataset at jeffda.com/edited-media-understanding to encourage further study in discovering pragmatic markers of disinformation.",
"Through an edit e on source image i (e.g. e = x is edited into a room full of drugs), an editor can cause harm to the subject x 's mental state ( mental state : x is angry about e ) and e ect x 's image ( effect : e makes x seem dishonest) (Rashkin et al., 2016).",
"The editor does this through the intention of the edit ( intent : e intends to harm x 's image) and changing the implications of the image ( implication : e frames x as a drug cartel member) (Forbes et al., 2020; Sap et al., 2020; Paris and Donovan, 2019).",
"To this end, we collect edits e and source images i from Reddit's r/photoshopbattles community.",
"There is no readily available (large) central database of harmful image edits, but r/photoshopbattles is replete with suitable complex and culturally implicative edits (e.g., reference to politics or pop culture).",
"This provides us with relevant image edits at a reasonable cost without advocation for dangerous training on real harmful image edits.",
"Keeping the source image i in the task allows us to sustain the tractability of the image edit problem (Tan et al., 2019; Jhamtani and Berg-Kirkpatrick, 2018).",
"Given an edit e : IS IE , we define an edited media understanding frame F ( ) as a collection of typed dimensions and their polarity assignments:",
"(i) physical P ( IS IE ): the changes from IS IE ,",
"(ii) intent N ( E IE ): whether the Editor E implied malicious intent in IS IE ,",
"(iii) implication M ( E IE ): how E might use IE to The Editor creates Image Edit from Image Source Frame N(E IE): The Editor has Intent Frame M(E IE): The Image Edit has potential Implications Frame S(IE s1 ): The Image Edit impacts Mental State of Subject 1 Frame E(IE s1 ): The Image Edit has Effect on perceptions of Subject 1 P(E IE): The physical changes between Image Source and Image Edit Subject 1 is edited into a room full of illicit drugs Background Changed Structural changes Intent: to show Subject 1 in illegal behavior Implications: to frame Subject 1 as a member of a drug cartel Mental State: Subject 1 would be angry Effect: makes Subject 1 seem like a dishonest leader Subject 1 is shown next to illegal substances Label: Intent is Harmful Label: Implications is Harmful Label: Mental State of S1 is Negative Label: Effect on S1 is Negative Subject 1 is next to a massive amount of drugs Subject 1 would not want to be seen in front of drugs shows Subject 1 is in charge of illegal drugs Su b j e c t 1 because because because because Image Source EMU Frames Figure 2: An example from EMU.",
"mislead,",
"(iv) mental state S ( IE s i ): whether the predicate IE impacts the emotion of a role s i ,",
"(v) e ect E ( IE s i ): the e ect of IE on s i .",
"We assume frames can be categorized as harmful or not harmful with polarity l { + , } .",
"Each polarity l can be interpreted with reason y , and that each reason can be supported with rationale r .",
"Technically, a model is given the following as input: A source image IS , and an edited image IE .",
"A list of important subjects: expressed as bounding boxes b i for each subject.",
"An open-ended question q associated with F ( );",
"e.g., How might subject3 feel upon seeing this edit? A list of annotated boxes a i IE marking the objects in the image that were introduced and modified , and a true / false label denoting if the background was changed.",
"A model must produce the polarity classification l (cid:48) { + , } , interpretation of the polarity (response y (cid:48) ) and rationale for interpolation r (cid:48) .",
"(For the physical frame, only y needs to be generated).",
"Figure 2 shows an example of our task configuration.",
"The lexicon of the label is fixed for each F ( ) (e.g. for N ( ), harmful, + harmless).",
"Sourcing Image Edits We source our image edits from the r/photoshopbattles community on Reddit which hosts regular Photoshop competitions, where given a source photo , members submit a comment with their own edited photo .",
"We collect 8K image edit pairs (source and edited photo pairs) from this community by, first, manually curating a list of more than 100 terms describing people frequently appearing in Photoshop battles posts.",
"Then, we screen over 100k posts for titles that contain one or more of these search terms resulting in 20k collected image pairs.",
"Additionally, we run an object detector (He et al., 2017) to ensure that is at least one person present in each image as a means for ensuring that annotators do not see image pairs without any subjects.",
"Annotating Image Edits We ask a group of vetted crowd workers to identify the main subjects in an image edit and answer open-ended questions in natural language.",
"Each image is annotated by 3 independent crowd workers.",
"Crowd workers are first presented with a numbered set of people bounding boxes (produced by Mask R-CNN (He et al., 2017)) over the edited F rame Notation Related Question P hysical P ( IS IE ) What changed in this image edit?",
"image and are asked to select subjects that are significant to the edits (as opposed, say, a crowd in the background).",
"Once subjects are selected, the annotators are asked to assign classification labels for each of the five possible question types and provide free-form text answers for each question (when applicable).",
"For the classification label, we retain the majority vote (Fleiss = 0 . 67).",
"In a separate and final pass, we explicitly identify which portions of the modified image is introduced or altered by asking the workers to to label the most important sections of the modified image and selecting one of the two labels.",
"The statistics of the dataset are shown in Figure 3. 4 Modeling Edited Media Understanding Frames In this section, we present a new model for Edited Media Understanding Frames, with a goal of kick-starting research on this challenging problem.",
"As described in Section 2, our task di ers from many standard vision-and-language tasks both in terms of format and required reasoning: a model must take as input two images (a source image and its edit), with a significant change of implication added by the editor.",
"A model must be able to answer questions, grounded in the main subjects of the image, describing these changes.",
"The answers are either boolean labels, or open-ended natural language including explainable rationales.",
"For Edited Media Understanding Frames, not all image regions are created equal .",
"Not only is the subject referred to in the question (e.g. subject1 ) likely important, so too are all of the regions in Figure 3: Statistics for EMU.",
"the image edit that are introduced or altered .",
"We propose to use the annotations that collected for these regions as additional signal for the model to highlight where to attend.",
"2 Not only should a model likely attend to these important regions, it should prioritize attending to regions nearby (such as objects that an edited person is interacting with).",
"We propose to model the (likely) importance of an image region through graph propagation.",
"We will build a directed graph with all regions of the image, rooted at a subject mentioned by the question (e.g. subject1 ).",
"We will then topologically sort this graph; each region is then given an embedding corresponding to its sorted position similar to the position embedding in a Transformer.",
"This will allow the model to selectively attend to important image regions in the image edit.",
"We use a di erent position embedding for the image source, and do not perform the graph propagation here (as we do not have introduced or altered annotations); this separate embedding captures the inductive bias that the edited is more important than the source.",
"2 These annotations are collected from workers, but in theory, it would be possible to train a model to annotate regions as such.",
"To make our task as accessible and easy-to-study as possible, however, we use the provided labels in place of a separate model however.",
"In this section, we describe integrating our importance embeddings with a multimodal transformer.",
"Let the source image be IS and IE .",
"We use the backbone feature extractor ( Faster-RCNN feature extractor (Ren et al., 2015; Anderson et al., 2018) to extract N regions of interest for each region: [ s 1 , ... , s N ] = ( IS ) [ e 1 , ... , e N ] = ( IE ) .",
"(1) We note that some of these regions in e 1 , ... , e N are provided to the model (as annotated regions in the image); the rest are detected by .",
"These, plus the language representation of the question, are passed to the Transformer backbone T : [ z 1 ... z N + L ] = T ([ s 1 ... s N ] , [ e 1 , ... , e N ] , [ x 1 ... x L ]) (2) Important for EMU, z 2 N + 1 , ... , z 2 N + L serve as language representations.",
"Training under a left-to-right language modeling objective, we can predict the next next token x L + 1 using the representation z N + L .",
"Transformers require position embeddings to be added to each image region and word enabling it to distinguish which region is which.",
"We supplement the position embeddings of the regions { e 1 ... e N } in the edited image IE with the result of a topological sort.",
"Graph definition.",
"We define the graph over image regions in the edited image as follows.",
"We begin by sourcing a seed region s { e 1 ... e N } .",
"Let G = ( V , E ), where each v V represents metadata of some r i ( IE ), defined as v i m ( IE ) for simplicity, s.t.: v i = { x 1 , y 1 , x 2 , y 2 , s i , l i } (3) where x 1 , y 1 , x 2 , y 1 represents the bounding box of r i , s i { 1 , 0 } denoting if r i is a subject of IE , and l i { introduced , altered } denoting the label of r i .",
"We build the graph iteratively: for each iteration, we define an edge e = { v , u } ; u V s.t.: v m ( IE ) , u V , E = E ( u , v ) E (cid:48) (4) We define E (cid:48) as the set of edges ( u , v ) in which u and v are notationally similar .",
"We define three cases in which this is true: if s i u i s j v j , if l i u i = l j v j , and if x 1 , y 1 , x 2 , y 2 u i and x 3 , y 3 , x 4 , y 4 u i overlaps, in which the percentage overlap is defined by standard intersection-over-union: min { x 4 , x 2 } max { x 3 , x 1 } min { y 4 , y 2 } max { y 3 , y 1 } (5) We cap the number of outgoing edges at 3, and prevent cycles by allowing edges only to unseen image regions.",
"In cases where there are more than three possible edges, we add edges in the order defined in the previous paragraph, and break overlap ties via maximum overlap.",
"To produce embeddings, we run topological sort over the directed graph to assign each image region an embedding, then assign an embedding to each image region based on the ordered index.",
"The embedding is zeroed out for image regions that are missing from the DAG, and from the source image (which are unlabeled).",
"We include bounding box and class labels.",
"To generate text and classification labels, we attach the embeddings onto the input for an encoder-decoder structure.",
"In this section, we evaluate a variety of strong vision-and-language generators on EMU.",
"Similar to past work on VQA, we rebalance our test set split ensuring a 50 / 50 split per question type of maliciously labeled captions.",
"We provide two human evaluation metrics head-to-head, in which generated responses are compared to human responses, and accuracy, in which humans are asked to label if generated responses are accurate in regards to the given edit.",
"In addition to evaluating PELICAN, we compare and evaluate the performance of various potentially high-performing baselines on our task.",
"a. Retrieval .",
"For a retrieval baseline, which generally performs well for generation-based tasks, we use features from ResNet-158 (He et al., 2016), defined as , to generate vectors for each IE in the test set.",
"We then find the most similar edited image IT in the training set T via cosine similarity: argmax IT T ( IE ) ( IT ) (cid:107) ( IE ) (cid:107) (cid:107) ( IT ) (cid:107) (6) We use the captions associated with the most similar image in the training set.",
"b. GPT-2 (Radford et al., 2019) .",
"As a text-only baseline, we use the 117M parameter model from GPT-2, fine-tuned on the captions from our dataset.",
"Since the images are not taken into consideration, we generate from the seeds associated with each question type and use the same captions for all images in the test set.",
"c. Cross-Modality GPT-2 .",
"We test a unified language-and-vision model on our dataset.",
"Similar to (Alberti et al., 2019), we append the visual features ( IS ) and ( IE ) to the beginning of the token embeddings from GPT-2 (117M).",
"For the questions involving a subject , we append an additional vector ( r ), where r is the region defined by the bounding box for that subject .",
"d. Dynamic Relational Attention (Tan et al., 2019) .",
"We test the best model from previous work on image edits on our task, Dynamic Relational Attention.",
"We train the model from scratch on our dataset, using the same procedure as (Tan et al., 2019).",
"We seed each caption with the relevant question.",
"e. VLP (Zhou et al., 2019) .",
"We test VLP, a pre-trained vision-and-language transformer model.",
"For image captioning, VLP takes a single image as input and uses an o -the-shelf object detector to extract regions, generation a caption using sequence-to-sequence decoding and treating the regions as a sequence of input tokens.",
"To generate a caption for a particular question type, we fix the first few generated tokens to match the prefix for that question type.",
"We fine-tune VLP starting from weights pre-trained on Conceptual Captions (3.3m image-caption pairs) (Sharma et al., 2018) and then further trained on COCO Captions (413k image-caption pairs) (Lin et al., 2014).",
"We present our results in Table 2. We calculate generative metrics (e.g. METEOR) by appending the rationale to the response.",
"Generations from PELICAN are preferred over human generations 14.0% of the time, with a 0.86 drop in perplexity compared to the next best model.",
"To investigate the performance of the model, we run an ablation study on various modeling attributes, detailed in Table 3. First, we investigate the e ect of pretraining (on Conceptual Captions (Sharma et al., 2018; Zhou et al., 2019)).",
"We find that performance drops without pretraining (53.47%), but surprisingly still beats other baselines.",
"This suggests that the task requires more pragmatic inferences than the semantic learning typically gained from pre-training tasks.",
"Second, we ablate the importance of including annotated ( a i ) features from the dataset when creating the directed graph, relying on a seed from a random R-CNN region (54.44%).",
"We also ablate our use of topological sort and a directed graph by suggesting a simple (but consistent) order for image regions (54.91%).",
"Finally, we ablate including the visual regions from the source image.",
"The performance is Figure 5: Generation examples from PELICAN, marked with results from human evaluation.",
"similar (55.35%), suggesting that PELICAN would be able to perform in real-world settings in which only the edited image is present (e.g. social media posts).",
"Last, we present qualitative examples in Figure 5.",
"PELICAN is able to correctly understand image pairs which require mostly surface level understanding for example, in the top example, it is able to identify that the gun and action implies negative context, but misunderstands the response with regards to the situation.",
"In the bottom example, we show that PELICAN is able to refer to subject1 correctly, but misinterprets the situation to be non-negative.",
"To study if EMU is helpful in real-world settings, we train a model of PELICAN on EMU with only the edited image.",
"In this setting, the model must hypothesize which parts of the image were edited and discern the main subjects in the image.",
"At test time, we generate captions for each of the 5 intention-based question types.",
"Results of this version of PELICAN is in Table 2. While this evaluation scheme is crude, we find that this version of PELICAN is still able to outperform previous models without usage of the source image.",
"This suggests potential for generations from EMU-trained models in human-assisted settings.",
"In an initial human study (given PELICANREAL captions, classify the edit as disinformation were the captions helpful in your decision?) we find that annotators label as helpful 71.5% of the time.",
"Additionally, annotators tended more often to pick the gold label (89.1% 95.2%).",
"EMU also helps us understand what current vision-and-language models are missing for use on disinformation , by analyzing the reasons and rationales generated.",
"We ask annotators to compare PELICAN-generated captions marked as worse and human captions.",
"Category details are included in the appendix.",
"Figure 6 shows our results.",
"Overall, current models primarily lack the commonsense (event-based and social) to accurately describe disinformation.",
"Geographical (location-based) and political (e.g. knowledge about the job of a president) external knowledge is also a missing component.",
"PELICAN also still makes mistakes in description-related attributes: describing something other than the important change and an inaccuracy (e.g. wrong color) are the most common.",
"Specific information such as information relating to a specific person in the image (i.e. requiring a model to identify the person in the image), and information about a past event are the least critical, suggesting that e orts should be focused first on general intelligence rather than named-entity lookup.",
"Language-and-Vision Datasets Datasets involving images and languages cover a variety of tasks, including visual question answering (Agrawal et al., 2015; Goyal et al., 2017), image caption generation (Lin et al., 2014; Young et al., 2014; Krishna et al., 2016), visual storytelling (Park and Kim, 2015; Bosselut et al., 2016), machine translation (Elliott et al., 2016), visual reasoning (Johnson et al., 2017; Hudson and Manning, 2019; Suhr et al., 2019), and visual common sense (Zellers et al., 2019).",
"Two-image tasks Though most computer vision tasks involve single images, some work has been done on exploring image pairs.",
"The NLVR2 dataset (Suhr et al., 2019) involves yes-no question answering over image pairs.",
"Neural Naturalist (Forbes et al., 2019) tests fine-grained captioning of bird pairs; (Jhamtani and Berg-Kirkpatrick, 2018) iden-tifies the di erence between two similar images.",
"Image Edits There has been some computer vision research studying image edits.",
"Unlike our EMU dataset, however, much of this work has focused on modeling lower-level image edits wherein the cultural implications do not change signifi-cantly between images.",
"For example, (Tan et al., 2019) predicts image editing requests (generate change the background to blue' from a pair of images).",
"Past work has also studied learning to perform image adjustments (like colorization and enhancement) from a language query (Chen et al., 2017; Wang et al., 2018).",
"Hateful Meme Challenge (Kiela et al., 2020) is a recent work challenging models to classify a meme as hateful or not.",
"We present Edited Media Understanding Frames a language-and-vision task requiring models to answer open-ended questions that capture the intent and implications of an image edit.",
"Our model, PELICAN, kickstarts progress on our dataset beating all previous models and with humans rating its answers as accurate 48.2% of the time.",
"At the same time, there is still much work to be done and we provide analysis that highlights areas for further progress.",
"The authors would like to thank Ryan Qiu for help with analysis, and the Amazon Mechanical Turk community for help with annotation.",
"This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No.",
"DGE1256082, and in part by NSF (IIS-1714566), DARPA CwC through ARO (W911NF15-1-0543), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), NSF (IIS-1714566), and the Allen Institute for AI.",
"In constructed the EMU dataset, great care was taken to ensure that crowd-workers are compensated fairly for their e orts.",
"To this end, we monitored median HIT completion times for each published batch, adjusting the monetary reward such that at least 80% of workers always received > $15 / hour, which is roughly double the minimum wage in the United States (the country of residence for most Amazon Mechanical Turk workers).",
"This included the qualification and evaluation rounds.",
"The following data sheet summarized relevant aspects of the data collection process (Bender and Friedman, 2018): A. C uration R ationale : Selection criteria for the edits included in the presented dataset are discussed Section 3. We selected the highest-rated posts on Reddit, and collected metadata data from annotators marking if the edit is NSFW or o ensive.",
"B. L anguage V ariety : The dataset is available in English, with mainstream US Englishes being the dominant variety, as per the demographic of Amazon Mechanical Turk workers.",
"C. S peaker D emographic : N / A D. A nnotator D emographic : N / A E. S peech S ituation : All frames were collected and validated over a period of about 12 weeks, between November and January 2020, through the Amazon AMT platform.",
"Workers were given regular, detailed feedback regarding the quality of their submissions and were able to address any questions or comments to the study's main author via Email or Slack.",
"F. T ext C haracteristics : In line with the intended purpose of the dataset, the included edits describe social interactions related (but not limited to) platonic and romantic relationships, political situations, as well as cultural and social contexts.",
"G. R ecording Q uality : N / A H. O ther : N / A Lastly, we want to emphasize that our work is strictly scientific in nature, and serves the exploration of machine reasoning alone.",
"It was not developed to o er guidance on misinformation or to train models to classify social posts as misinformation.",
"Consequently, the inclusion of malicious image edits could allow adversaries to train malicious agents to produce visual misinformation.",
"We are aware of this risk, but also want to emphasize that the utility of these agents allow useful negative training signal for minimizing harm that may be cased by agents operating in visual information.",
"It is, therefore, necessary for future work that uses our dataset to specify how the collected examples of both negative and positive misinformation are used, and for what purpose."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"objective",
"other",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"abstain",
"result",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Traditional language models are unable to efficiently model entity names observed in text.",
"All but the most popular named entities appear infrequently in text providing insufficient context.",
"Recent efforts have recognized that context can be generalized between entity names that share the same type (e.g., person or location ) and have equipped language models with access to an external knowledge base (KB).",
"Our Knowledge-Augmented Language Model (KALM) continues this line of work by augmenting a traditional model with a KB.",
"Unlike previous methods, however, we train with an end-to-end predictive objective optimizing the perplexity of text.",
"We do not require any additional information such as named entity tags.",
"In addition to improving language modeling performance, KALM learns to recognize named entities in an entirely unsupervised way by using entity type information latent in the model.",
"On a Named Entity Recognition (NER) task, KALM achieves performance comparable with state-of-the-art supervised models.",
"Our work demonstrates that named entities (and possibly other types of world knowledge) can be modeled successfully using predictive learning and training on large corpora of text without any additional information.",
"Language modeling is a form of unsupervised learning that allows language properties to be learned from large amounts of unlabeled text.",
"As components, language models are useful for many Natural Language Processing (NLP) tasks such as generation (Parvez et al., 2018) and machine translation (Bahdanau et al., 2014).",
"Additionally, the form of predictive learning that language modeling uses is useful to acquire text representations that can be used successfully to improve a number of downstream NLP tasks (Peters et al., 2018; Devlin et al., 2018).",
"In fact, models pre-trained with a predictive objective have provided a new state-of-the-art by a large margin.",
"Current language models are unable to encode and decode factual knowledge such as the information about entities and their relations.",
"Names of entities are an open class.",
"While classes of named entities (e.g., person or location ) occur frequently, each individual name (e.g, Atherton or Zhouzhuang ) may be observed infrequently even in a very large corpus of text.",
"As a result, language models learn to represent accurately only the most popular named entities.",
"In the presence of external knowledge about named entities, language models should be able to learn to generalize across entity classes.",
"For example, knowing that Alice is a name used to refer to a person should give ample information about the context in which the word may occur (e.g., Bob visited Alice ).",
"In this work, we propose Knowledge Augmented Language Model (KALM), a language model with access to information available in a KB.",
"Unlike previous work, we make no assumptions about the availability of additional components (such as Named Entity Taggers) or annotations.",
"Instead, we enhance a traditional LM with a gating mechanism that controls whether a particular word is modeled as a general word or as a reference to an entity.",
"We train the model end-to-end with only the traditional predictive language modeling perplexity objective.",
"As a result, our system can model named entities in text more accurately as demonstrated by reduced perplexities compared to traditional LM baselines.",
"In addition, KALM learns to recognize named entities completely unsupervised by interpreting the predictions of the gating mechanism at test time.",
"In fact, KALM learns an unsupervised named entity tagger that rivals in accuracy supervised counterparts.",
"KALM works by providing a language model with the option to generate words from a set of entities from a database.",
"An individual word can either come from a general word dictionary as in traditional language model or be generated as a name of an entity from a database.",
"Entities in the database are partitioned by type.",
"The decision of whether the word is a general term or a named entity from a given type is controlled by a gating mechanism conditioned on the context observed so far.",
"Thus, KALM learns to predict whether the context observed is indicative of a named entity of a given type and what tokens are likely to be entities of a given type.",
"The gating mechanism at the core of KALM is similar to attention in Neural Machine Translation (Bahdanau et al., 2014).",
"As in translation, the gating mechanism allows the LM to represent additional latent information that is useful for the end task of modeling language.",
"The gating mechanism (in our case entity type prediction) is latent and learned in an end-to-end manner to maximize the probability of observed text.",
"Experiments with named entity recognition show that the latent mechanism learns the information that we expect while LM experiments show that it is ben-eficial for the overall language modeling task.",
"This paper makes the following contributions: Our model, KALM, achieves a new state-of-the art for Language Modeling on several benchmarks as measured by perplexity.",
"We learn a named entity recognizer without any explicit supervision by using only plain text.",
"Our unsupervised named entity recognizer achieves a performance on par with the state-of-the supervised methods.",
"We demonstrate that predictive learning combined with a gating mechanism can be utilized efficiently for generative training of deep learning systems beyond representation pre-training.",
"Our work draws inspiration from Ahn et al. (2016), who propose to predict whether the word to generate has an underlying fact or not.",
"Their model can generate knowledge-related words by copying from the description of the predicted fact.",
"While theoretically interesting, their model functions only in a very constrained setting as it requires extra information: a shortlist of candidate entities that are mentioned in the text.",
"Several efforts successfully extend LMs with entities from a knowledge base and their types, but require that entity models are trained separately from supervised entity labels.",
"Parvez et al. (2018) and Xin et al. (2018) explicitly model the type of the next word in addition to the word itself.",
"In particular, Parvez et al. (2018) use two LSTM-based language models, an entity type model and an entity composite (entity type) model.",
"Xin et al. (2018) use a similarly purposed entity typing module and a LM-enhancement module.",
"Instead of entity type generation, Gu et al. (2018) propose to explicitly decompose word generation into sememe (a semantic language unit of meaning) generation and sense generation, but requires sememe labels.Yang et al. (2016) propose a pointer-network LM that can point to a 1-D or 2-D database record during inference.",
"At each time step, the model decides whether to point to the database or the general vocabulary.",
"Unsupervised predictive learning has been proven effective in improving text understanding.",
"ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) used different unsupervised objectives to pre-train text models which have advanced the state-of-the-art for many NLP tasks.",
"Similar to these approaches KALM is trained end-to-end using a predictive objective on large corpus of text.",
"Most unsupervised NER models are rule-based (Collins and Singer, 1999; Etzioni et al., 2005; Nadeau et al., 2006) and require feature engineering or parallel corpora (Munro and Manning, 2012).",
"Yang and Mitchell (2017) incorporate a KB to the CRF-biLSTM model (Lample et al., 2016) by embedding triples from a KB obtained using TransE (Bordes et al., 2013).",
"Peters et al. (2017) add pre-trained language model embeddings as knowledge to the input of a CRF-biLSTM model, while still requiring labels in training.",
"To the best of our knowledge, KALM is the first unsupervised neural NER approach.",
"As we discuss in Section 5.4, KALM achieves results comparable to supervised CRF-biLSTM models.",
"KALM extends a traditional, RNN-based neural LM.",
"As in traditional LM, KALM predicts probabilities of words from a vocabulary V g , but it can also generate words that are names of entities of a specific type.",
"Each entity type has a separate vocabulary { V 1 , ..., VK } collected from a KB.",
"KALM learns to predict from context whether to expect an entity from a given type and generalizes over entity types.",
"At its core, a language model predicts a distribution for a word y t +1 given previously observed words c t := [ y 1 , ..., y t 1 , y t ] .",
"Models are trained by maximizing the likelihood of the observed next word.",
"In an LSTM LM, the probability of a word, P ( y t +1 | c t ) , is modeled from the hidden state of an LSTM (Hochreiter and Schmidhuber, 1997): P ( y t +1 = i | c t ) = exp( W pi, : h t ) | V g | (cid:80) w =1 exp( W pw, : h t ) (1) h t , t = lstm ( h t 1 , t 1 , y t ) (2) where lstm refers to the LSTM step function and h i , i and y i are the hidden, memory and input vectors, respectively.",
"W p is a projection layer that converts LSTM hidden states into logits that have the size of the vocabulary | V g | .",
"KALM builds upon the LSTM LM by adding type-specific entity vocabularies V 1 , V 2 , ..., VK in addition to the general vocabulary V g .",
"Type vocabularies are extracted from the entities of specific type in a KB.",
"For a given word, KALM computes a probability that the word represents an entity of that type by using a type-specific projection matrix { W p,j | j = 0 , ..., K } .",
"The model also computes the probability that the next word represents different entity types given the context observed so far.",
"The overall probability of a word is given by the weighted sum of the type probabilities and the probability of the word under the give type.",
"More precisely, let i be a latent variable denot-ing the type of word i .",
"We decompose the probability in Equation 1 using the type t +1 : P ( y t +1 | c t ) = K (cid:88) j =0 P ( y t +1 , t +1 = j | c t ) = K (cid:88) j =0 P ( y t +1 | t +1 = j, c t ) P ( t +1 = j | c t ) (3) Where P ( y t +1 | t +1 , c t ) is a distribution of entity words of type t +1 .",
"As in a general LM, it is computed by projecting the hidden state of the LSTM and normalizing through softmax (eq. 4).",
"The type-specific projection matrix W p,j is learned during training.",
"We maintain a type embedding matrix W e and use it in a similar manner to compute the probability that the next word has a given type P ( t +1 | c t ) (eq. 5).",
"The only difference is that we use an extra projection matrix, W h to project h t into lower dimensions.",
"Figure 1a illustrates visually the architecture of KALM.",
"P ( y t +1 = i | t +1 = j, c t ) = exp( W p,ji, : h t ) | V j | (cid:80) w =1 exp( W p,jw, : h t ) (4) P ( t +1 = j | c t ) = exp( W ej, : ( W h h t )) K (cid:80) k =0 exp( W ek, : ( W h h t )) (5) 3.3 Type representation as input In the base KALM model the input for word y t consists of its embedding vector y t .",
"We enhance the base model by adding as inputs the embedding of the type of the previous word.",
"As type information is latent, we represent it as the weighted sum of the type embeddings weighted by the predicted probabilities: t +1 = K (cid:88) j =0 P ( t +1 = j | c t ) W ej, : (6) y t +1 = [ y t +1 ; t +1 ] (7) P ( t +1 = j | c t ) is computed using Equation 5 and e j is the type embedding vector.",
"Adding type information as input serves two purposes: in the forward direction, it allows KALM to model context more precisely based on predicted entity types.",
"During back propagation, it allows us to learn latent types more accurately based on subsequent context.",
"The model enhanced with type input is illustrated in Figure 1b.",
"The type distribution that KALM learns is latent, but we can output it at test time and use it to predict whether a given word refers to an entity or a general word.",
"We compute P ( t +1 | c t ) using eq.",
"5 and use the most likely entity type as the named entity tag for the corresponding word y t +1 .",
"This straightforward approach, however, predicts the type based solely on the left context of the tag being predicted.",
"In the following two subsections, we discuss extensions to KALM that allow it to utilize the right context and the word being predicted itself.",
"While we cannot use a bidirectional LSTM for generation, we can use one for NER, since the entire sentence is known.",
"For each word, KALM generates the hidden vectors h l,t and h r,t representing context coming from left and right directions, as shown in Equations 8 and 9.",
"We concatenate the hidden vectors from the two directions to form an overall context vector h t , and generate the final type distribution using Equation 5.",
"Training the bidirectional model requires that we initialize the hidden and cell states from both ends of a sentence.",
"Suppose the length of a sentence is n .",
"The the cross entropy loss is computed for only the n 2 symbols in the middle.",
"Similarly, we compute only the types of the n 2 symbols in the middle during inference.",
"Even bidirectional context is insufficient to predict the word type by itself.",
"Consider the following example: Our computer models indicate Edouard is going to 1 The missing word can be either a location (e.g., London ), or a general word (e.g., quit ).",
"In an NER task we observe the underlined words: Our computer models indicate Edouard is going to London.",
"In order to learn predictively, we cannot base the type prediction on the current token.",
"Instead, we can use a prior type information P ( t | y t ) , pre-computed from entity popularity information available in many KBs.",
"We incorporate the prior information P ( t | y t ) in two different ways described in the two following subsections.",
"P ( t | c l , c r , y t ) = P ( y t | t , c l , c r ) 2 P ( y t | c l , c r ) P ( t | c l , c r ) + P ( c l , c r | t , y t ) 2 P ( c l , c r | y t ) P ( t | y t ) = P ( t | c l , c r ) + P ( t | y t ) (10)",
"An alternative for incorporating the pre-computed P ( t | y t ) is to use it during training to regularize the type distribution.",
"We use the following optimization criterion to compute the loss for each word: L = H ( P ( y i | c l , c r ) , P ( y i | c l , c r )) + || KL ( P ( i | c l , c r ) , P ( i | y i )) || 2 (11) where y i is the actual word, H ( . ) is the cross entropy function, and KL ( . ) measures the KL divergence between two distributions.",
"A hyper-parameter (tuned on validation data) controls the relative contribution of the two loss terms.",
"The new loss forces the learned type distribution, P ( i | c l , c r ) , to be close to the expected distribution P ( i | y i ) given the information in the database.",
"This loss is specifically tailored to help with unsupervised NER.",
"We evaluate KALM on two tasks: language modeling and NER.",
"We use two datasets: Recipe used only for LM evaluation and CoNLL 2003 used for both the LM and NER evaluations.",
"Recipe The recipe dataset 2 is composed of 95 , 786 recipes, We follow the same preprocessing steps as in Parvez et al. (2018) and divide the crawled dataset into training, validation and testing.",
"A typical sentence after preprocessing looks like the following: in a large mixing bowl combine the butter sugar and the egg yolks.",
"The entities in the recipe KB are recipe ingredients.",
"The 8 supported entity types are dairy , drinks , fruits , grains , proteins , seasonings , sides , and vegetables .",
"In the sample sentence above, the entity names are butter , sugar , egg and yolks , typed as dairy , seasonings , proteins and proteins , respectively.",
"CoNLL 2003 Introduced in Tjong Kim Sang and De Meulder (2003), the CoNLL 2003 dataset is composed of news articles.",
"It contains text and named entity labels in English, Spanish, German and Dutch.",
"We experiment only with the English version.",
"We follow the CoNLL labels and separate the KB into four entity types: LOC (loca-2 Crawled from http://www.ffts.com/recipes. htm tion), MISC (miscellaneous), ORG (organization), and PER (person).",
"Statistics about the recipe and the CoNLL 2003 dataset are presented in Table",
"1. train valid test #sent 61302 15326 19158 #tok 7223474 1814810 2267797 train valid test #sent 14986 3465 3683 #tok 204566 51577 46665 Table 1: Statistics of recipe and CoNLL 2003 datasets The information about the entities in each of the KBs is shown in Table",
"2. The recipe KB is provided along with the recipe dataset 3 as a conglomeration of typed ingredients.",
"The KB used by CoNLL 2003 is extracted from WikiText-2.",
"We filtered the entities which are not belonging to the 4 types of CoNLL 2003 task.",
"Vocabulary We use the entity words in Table 2 to form V 1 , ..., VK We extract 51 , 677 general words in the recipe dataset, and 17 , 907 general words in CoNLL 2003 to form V 0 .",
"Identical words that fall under different entity types, such as Washington in George Washington and Washington D.C. , share the same input embeddings.",
"Model The model has an embedding layer of 400 dimensions, LSTM cell and hidden states of 1 , 150 dimensions, and 3 stacked LSTM layers.",
"We scale the final LSTM's hidden and cell states to 400 dimensions, and share weights between the projection layer W p and the word embedding layer.",
"Each entity type in the knowledge base is represented by a trainable 100 -dimensional embedding vector.",
"When concatenating the weighted 3 The KB can be found in https://github.com/ uclanlp/NamedEntityLanguageModel 4 https://github.com/salesforce/ awd-lstm-lm average of the type embeddings to the input, we expand the input dimension of the first LSTM layer to 500 .",
"All trainable parameters are initialized uniformly randomly between 0 .",
"1 and 0 .",
"1 , except for the bias terms in the decoder linear layer, which are initialized to 0 .",
"For regularization, we adopt the techniques in AWD-LSTM, and use an LSTM weight dropout rate of 0 , an LSTM first-layers locked dropout rate of 0 .",
"3 , an LSTM last-layer locked dropout rate of 0 .",
"4 , an embedding Bernoulli dropout rate of 0 .",
"1 , and an embedding locked dropout rate of 0 .",
"65 .",
"Also, we impose L2 penalty on LSTM pre-dropout weights with coefficient 1 , and L2 penalty on LSTM dropout weights with coefficient 2 , both added to the cross entropy loss.",
"Optimization We use the same loss penalty, dropout schemes, and averaged SGD (ASGD) as in Merity et al. (2017).",
"The initial ASGD learning rate is 10 , weight decay rate is 1 .",
"2 10 6 , non-monotone trigger for ASGD is set to 5 , and gradient clipping happens at 0 .",
"25 .",
"The models are trained until the validation set performance starts to decrease.",
"5.3.1 Baselines AWD-LSTM (Merity et al., 2017) is the state-of-the-art word-level language model as measured on WikiText-2 and Penn Treebank.",
"It uses ASGD optimization, a new dropout scheme and novel penalty terms in the loss function to improve over vanilla LSTM LMs.",
"Named-entity LM (NE-LM) (Parvez et al., 2018) consists of a type model that outputs P ( i +1 | i , i 1 , ... ) and an entity composite model that outputs P ( y i +1 |{ y i , i } , { y i 1 , i 1 } , ... ) .",
"The type model is trained on corpora with entity type labels, whereas the entity composite model has an input for words and another input for the corresponding types, and so needs to be trained on both the labeled corpus and the unlabeled version of the same corpus.",
"At inference time, a joint inference heuristic aggregates type model and entity composite model predictions into a word prediction.",
"Since both models require type labels as input, each generation step of NE-LM requires not only the previously generated words [ y i , y i 1 , ... ] , but also the type labels for these words [ i , i 1 , ... ] .",
"For language modeling we report word prediction perplexity on the recipe dataset and CoNLL 2003.",
"Perplexity is defined as the following.",
"We use publicly available implementations to produce the two baseline results.",
"We also compare the language models in the bidirectional setting, which the reference implementations do not support.",
"In that setting, we transform both models in NE-LM to be bidirectional.",
"Discussion Table 3 shows that KALM outperforms the two baselines in both unidirectional and bidirectional settings on both datasets.",
"The improvement relative to NE-LM is larger in the unidirectional setting compared to the bidirectional setting.",
"We conjecture that this is because in that setting NE-LM trains a bidirectional NER in a supervised way.",
"The improvement relative to NE-LM is larger on CoNLL 2003 than on the recipe dataset.",
"We believe that the inference heuristic used by NE-LM is tuned specifically to recipes and is less suitable to the CoNLL setting.",
"We also find that training KALM on more unlabeled data further reduces the perplexity (see Table 4), and study how the quality of the KB affects the perplexity.",
"We discuss both these results in Section 5.4.",
"We train two supervised models for NER on the CoNLL 2003 dataset: a biLSTM and a CRF-biLSTM .",
"We replicate the hyperparameters used by Lample et al. (2016), who demonstrate the state-of-the-art performance on this dataset.",
"We use a word-level model, and 100 dimensional pretrained GloVe embeddings (Pennington et al., 2014) for initialization.",
"We train for 50 epochs, at which point the models converge.",
"Basic: bidirectional model with aggregated type embeddings fed to the input at the next time step; With type priors: using P ( t | y t ) in the two ways described in Section 4.2; Extra data: Since KALM is unsupervised, we can train it on extra data.",
"We use the WikiText-2 corpus in addition to the original CoNLL training data.",
"WikiText-2 is a standard language modeling dataset released with Merity et al. (2016).",
"It contains Wikipedia articles from a wide range of top-ics.",
"In contrast, the CoNLL 2003 corpus consists of news articles.",
"Table 4 show statistics about the raw / characer level WikiText-2 and the CoNLL 2003 corpora.",
"Despite the domain mismatch between the WikiText and CoNLL corpora, the WikiText coverage of the entity words that exist in the CoNLL dataset is high.",
"Specifically, most of the person, location and organization entity words that appear in CoNLL either have a Wikipedia section, or are mentioned in a Wiki article.",
"Therefore, we expect that the addition of WikiText can guide the unsupervised NER model to learn better entity type regularities.",
"Indeed, the result presented in the rightmost column of Table 4 shows that when adding WikiText-2 to CoNLL 2003, the perplexity for the KALM model for the news text of CoNLL 2003 is decreased: from 4 .",
"69 down to 2.29 .",
"We show NER results in Table 5.",
"The table lists the F1 score for each entity types, as well as the overall F1 score.",
"Discussion Even the basic KALM model learns context well it achieves an overall F1 score of 0 .",
"72 for NER.",
"This illustrates that KALM has learned to model entity classes entirely from surrounding context.",
"Adding prior information as to whether a word represents different entity types helps to bring the F1 score to 0 .",
"76 .",
"The strength of an unsupervised model is that it can be trained on large corpora.",
"Adding the Wikitext-2 corpus improves the NER score of KALM to 0 .",
"84 .",
"To give a sense of how the unsupervised models compare with the supervised model with respect to training data size, we trained biLSTM and CRF-biLSTM on a randomly sampled subset of the training data of successively decreasing sizes.",
"The resulting F1 scores are shown in Figure",
"2. Our best model scores 0 .",
"86 , same as a CRF-biLSTM trained on around 40% of the training data.",
"It is less than 0 .",
"03 behind the best supervised CRF-biLSTM.",
"The best KALM model almost always scores higher than biLSTM without the CRF loss.",
"Lastly, we perform an ablation experiment to gauge how sensitive KALM is to the quality of the knowledge base.",
"Previous studies (Liu et al., 2016; Zhang et al., 2012) have shown that the amount of knowledge retrieved from KBs can im-pact the performance of NLP models such as relation extraction systems substantially.",
"In this experiment, we deliberately corrupt the entity vocabularies V 0 , ..., VK 1 by moving a certain percentage of randomly selected entity words from V i to the general vocabulary V g .",
"Figure 3 shows language modeling perplexities on the validation set, and NER F1 scores on the test set as a function of the corruption percentage.",
"The language modeling performance stops reacting to KB corruption beyond a certain extent, whereas the NER performance keeps dropping as the number of entities removed from V 1 , V 2 , ... increases.",
"This result shows the importance of the quality of the KB entity unique entity size LM ratio ratio ratio perplexity 92.80% 82.56% 2.62 2.29 : 4.69 Table 4: Characterization of WikiText-2 relative to CoNLL 2003 training set.",
"We propose Knowledge Augmented Language Model (KALM), which extends a traditional RNN LM with information from a Knowledge Base.",
"We show that real-world knowledge can be used successfully for natural language understanding by using a probabilistic extension.",
"The latent type information is trained end-to-end using a predictive objective without any supervision.",
"We show that the latent type information that the model learns can be used for a high-accuracy NER system.",
"We believe that this modeling paradigm opens the door for end-to-end deep learning systems that can be enhanced with latent modeling capabilities and trained in a predictive manner end-to-end.",
"In ways this is similar to the attention mechanism in machine translation where an alignment mechanism is added and trained latently against the overall translation perplexity objective.",
"As with our NER tags, machine translation alignments are empirically observed to be of high quality.",
"In future work, we look to model other types of world knowledge beyond named entities using predictive learning and training on large corpora of text without additional information, and to make KALM more robust against corrupted entities."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"method",
"abstain",
"result",
"method"
] |
[
"Entity linking (EL) is concerned with disambiguating entity mentions in a text against knowledge bases (KB).",
"It is crucial in a considerable number of fields like humanities, technical writing and biomedical sciences to enrich texts with semantics and discover more knowledge.",
"The use of EL in such domains requires handling noisy texts, low resource settings and domain-specific KBs.",
"Existing approaches are mostly inappropriate for this, as they depend on training data.",
"However, in the above scenario, there exists hardly annotated data, and it needs to be created from scratch.",
"We therefore present a novel domain-agnostic Human-In-The-Loop annotation approach: we use recommenders that suggest potential concepts and adaptive candidate ranking, thereby speeding up the overall annotation process and making it less tedious for users.",
"We evaluate our ranking approach in a simulation on difficult texts and show that it greatly outperforms a strong baseline in ranking accuracy.",
"In a user study, the annotation speed improves by 35 % compared to annotating without interactive support; users report that they strongly prefer our system.",
"An open-source and ready-to-use implementation based on the text annotation platform INCEpTION 1 is made available 2 .",
"Entity linking (EL) describes the task of disambiguating entity mentions in a text by linking them to a knowledge base (KB), e.g. the text span Earl of Orrery can be linked to the KB entry John Boyle, 5.",
"Earl of Cork , thereby disambiguating it.",
"EL is highly beneficial in many fields like digital humanities, classics, technical writing or biomedical sciences for applications like search (Meij et al., 1 https://inception-project.github.io 2 https://github.com/UKPLab/ acl2020-interactive-entity-linking Figure 1: Difficult entity mentions with their linked entities: 1) Name variations, 2) Spelling Variation, 3) Ambiguity 2014), semantic enrichment (Schlogl and Lejtovicz, 2017) or information extraction (Nooralahzadeh and vrelid, 2018).",
"These are overwhelmingly low-resource settings: often, no data annotated exists; coverage of open-domain knowledge bases like Wikipedia or DBPedia is low.",
"Therefore, entity linking is frequently performed against domain-specific knowledge bases (Munnelly and Lawless, 2018a; Bartsch, 2004).",
"In these scenarios, the first crucial step is to obtain annotated data.",
"This data can then be either directly used by researchers for their downstream task or to train machine learning models for automatic annotation.",
"For this initial data creation step, we developed a novel Human-In-The-Loop (HITL) annotation approach.",
"Manual annotation is laborious and often prohibitively expensive.",
"To improve annotation speed and quality, we therefore add interactive machine learning annotation support that helps the user find entities in the text and select the correct knowledge base entries for them.",
"The more entities are annotated, the better the annotation support will be.",
"Throughout this work, we focus on texts from digital humanities, to be more precise, texts written in Early Modern English texts, including poems, biographies, novels as well as legal documents.",
"In this domain, texts are noisy as they were written in times where orthography was rather incidental or due to OCR and transcription errors (see Fig. 1).",
"Tools like named entity recognizers are unavailable or perform poorly (Erdmann et al., 2019).",
"We demonstrate the effectiveness of our approach with extensive simulation as well as a user study on different, challenging datasets.",
"We implement our approach based on the open-source annotation platform INCEpTION (Klie et al., 2018) and publish all datasets and code.",
"Our contributions are the following:",
"1. We present a generic, KB-agnostic annotation approach for low-resource settings and provide a ready-to-use implementation so that researchers can easily annotate data for their use cases.",
"We validate our approach extensively in a simulation and in a user study.",
"2. We show that statistical machine learning models can be used in an interactive entity linking setting to improve annotation speed by over 35%.",
"In the following, we give a broad overview of existing EL approaches, annotation support and Human-In-The-Loop",
"Human-In-The-Loop annotation.",
"Entity Linking describes the task of disambiguating mentions in a text against a knowledge base.",
"It is typically approached in three steps: 1) mention detection , 2) candidate generation and 3) candidate ranking (Shen et al., 2015) (Fig. 2).",
"Mention detection most often relies either on gazetteers or pretrained named entity recognizers.",
"Candidate generation either uses precompiled candidate lists derived from labeled data or uses full-text search.",
"Candidate ranking assigns each candidate a score, then the candidate with the highest score is returned as the final prediction.",
"Existing systems rely on the availability of certain resources like a large Wikipedia as well as software tools and often are restricted in the knowledge base they can link to.",
"Off-the-shelf systems like Dexter (Ceccarelli et al., 2013), DBPedia Spotlight (Daiber et al., 2013) and TagMe (Ferragina and Scaiella, 2010) most often can only link against Wikipedia or a related knowledge base like Wikidata or DBPedia.",
"They require good Wikipedia coverage for computing frequency statistics like popularity, view count or PageRank ( Guo et al., 2013).",
"These features work very well for standard datasets due to their Zipfian distribution of entities, leading to high reported scores on state-of-the art datasets (Ilievski et al., 2018; Milne and Witten, 2008).",
"However, these systems are rarely applied out-of-domain such as in digital humanities or classical studies.",
"Compared to state-of-the-art approaches, only a limited amount of research has been performed on entity linking against domain-specific knowledge bases.",
"AGDISTIS (Usbeck et al., 2014) developed a knowledge-base-agnostic approach based on the HITS algorithm.",
"The mention detection relies on gazetteers compiled from resources like Wikipedia and thereby performs string matching.",
"Brando et al. (2016) propose REDEN , an approach based on graph centrality to link French authors to literary criticism texts.",
"It requires additional linked data that is aligned with the custom knowledge basethey use DBPedia.",
"As we work in a domain-specific low resource setting, access to large corpora which can be used to compute popularity priors is limited.",
"We do not have suitable named entity linking tools, gazetteers or a sufficient amount of labeled training data.",
"Therefore, it is challenging to use state of the art systems.",
"Human-in-the-loop annotation HITL machine learning describes an interactive scenario where a machine learning (ML) system and a human work together to improve their performance.",
"The ML system gives predictions, and the human corrects if they are wrong and helps to spot things that have been overlooked by the machine.",
"The system uses this feedback to improve, leading to better predictions and thereby reducing the effort of the human.",
"In natural language processing, it has been applied in scenarios like interactive text summarization (Gao et al., 2018), parsing (He et al., 2016) or data generation (Wallace et al., 2019).",
"Regarding machine-learning assisted annotation, Yimam et al. (2014) propose an annotation editor that during annotation, interactively trains a model using annotations made by the user.",
"They use string matching and MIRA (Crammer and Singer, 2003) as recommenders, evaluate on POS and NER annotation and show improvement in annotation speed.",
"TASTY (Arnold et al., 2016) is a system that is able to perform EL against Wikipedia on the fly while typing a document.",
"A pretrained neural sequence tagger is being used that performs mention detection.",
"Candidates are precomputed and the candidate is chosen that has the highest text sim-Figure 2: Entity linking pipeline: First, mentions of entities in the text need to be found.",
"Then, given a mention, candidate entities are generated.",
"Finally, entities are ranked and the top entity is chosen.",
"ilarity.",
"The system updates its suggestions after interactions such as writing, rephrasing, removing or correcting suggested entity links.",
"Corrections are used as training data for the neural model.",
"However, due to the following reasons, it is not yet suitable for our scenario.",
"In order to overcome the cold start problem, it needs annotated training data in addition to a precomputed index for candidate generation.",
"It also only links against Wikipedia.",
"The following section describes the three components of our annotation framework, following the standard entity linking pipeline (see Fig. 2).",
"Throughout this work, we will mainly focus on the candidate Ranking step.",
"We call the text span which contains an entity the mention and the sentence the mention is in the context .",
"Each candidate from the knowledge base is assumed to have a label and a description.",
"For instance, in Fig. 2, one mention is Dublin , the context is Dublin is the capital of Ireland , the label of the the first candidate is Trinity College and its description is constituent college of the University of Dublin in Ireland .",
"Mention Detection In the annotation setting, we rely on users to mark text spans that contain annotations.",
"As support, we provide suggestions given by different recommender models: similar to Yimam et al. (2014), we use a string matcher suggesting annotations for mentions which have been annotated before.",
"We also propose a new Levenshtein string matcher based on Levenshtein automata (Schulz and Mihov, 2002).",
"In contrast to the string matcher, it suggests annotations for spans within a Levenshtein distance of 1 or",
"2. Preliminary experiments with ML models for mention detection like using a Conditional Random Field and handcrafted features did not perform well and yielded noisy suggestions, requiring further investigation.",
"Candidate Generation We index the knowledge base and use full text search to retrieve candidates based on the surface form of the annotated mention.",
"Besides, users can query this index during annotation.",
"We use fuzzy search to help in cases where the mention and the knowledge base label are almost the same but not identical (e.g. Dublin vs. Dublyn ).",
"In the interactive setting, the user can also search the knowledge base during annotation, e.g. in cases when the gold entity is not ranked high enough or when the surface form and knowledge base label are not the same ( Zeus vs. Jupiter ).",
"Candidate Ranking We follow Zheng et al. (2010) and model candidate ranking as a learning-to-rank problem: given a mention and a list of candidates, sort the candidates so that the most relevant candidate is at the top.",
"For training, we guarantee that the gold candidate is present in the candidate list.",
"For evaluation, the gold candidate can be absent from the candidate list if the candidate search failed to find it.",
"This interaction is the core Human-in-the-loop in our approach.",
"For training, we rephrase the task as preference learning: By selecting an entity label from the candidate list, users express that the selected one was preferred over all other candidates.",
"These preferences are used to train state-of-the-art pairwise learning-to-rank models from the literature: the gradient boosted trees variant LightGBM (Ke et al., 2017), RankSVM (Joachims, 2002) and RankNet (Burges et al., 2005).",
"Models are retrained in the background when new annotations are made, thus improving over time with an increasing number of annotations.",
"We use a set of generic handcrafted features which are described in Table",
"1. These models were chosen as they can work with low data, train quickly and allow introspection.",
"Using deep models or word embeddings as input features showed to be too slow to be interactive.",
"We also leverage pretrained Sentence-BERT embeddings (Reimers and Gurevych, 2019) trained on Natural Language Inference data written in simple English.",
"These are not fine-tuned by us during training.",
"Although they come from a different domain, we conjecture that the WordPiece tokeniza-tion of BERT helps with the spelling variance of our texts in contrast to traditional word embeddings which would have many out-of-vocabulary words.",
"For specific tasks, custom features can easily be incorporated e.g. entity type information, time information for diachronic entity linking, location information or distance for annotating geographical entities.",
"We use three datasets: AIDA-YAGO , Women Writers Online ( WWO ) and 1641 Depositions .",
"AIDA consists of Reuters news stories.",
"To the best of our knowledge, WWO has not been considered for automatic EL so far.",
"The 1641 Depositions have been used in automatic EL, but only when linking against DBPedia which has a very low entity coverage (Munnelly and Lawless, 2018b).",
"We preprocess the data, split it in sentences, tokenize and reduce noise.",
"For WWO , we derive a RDF KB from their personography, for 1641 we derive a knowledge base from the annotations.",
"The exact processing steps as well as example texts are described in the appendix.",
"The resulting data sets for WWO and 1641 Depositions are also made available in the accompanying code repository.",
"AIDA-YAGO : For validating our approach, we evaluate on the AIDA-YAGO state-of-the art dataset introduced by Hoffart et al. (2011).",
"Originally, this dataset is linked against YAGO and Wikipedia.",
"We map the Wikipedia URLs to Wikidata and link against this KB, as Wikidata is available in RDF and the official Wikidata SPARQL endpoint offers full text search: it does not offer fuzzy search though.",
"Women Writers Online : Women Writers Online 3 is a collection of texts by pre-Victorian women writers.",
"It includes texts on a wide range of topics and from various genres including poems, plays, and novels.",
"They represent different states of the English language between 1400 and 1850.",
"A subset of documents has been annotated with named entities (persons, works, places) (Melson and Flanders, 2010).",
"Persons have also been linked to create a personography, a structured representation of persons' biographies containing names, titles, time and place of birth and death.",
"The texts are challenging to disambiguate due to spelling variance, ciphering of names and a lack of standardized orthography.",
"Sometimes, people are not referred to by name but by rank or function, e.g. the king .",
"This dataset is interesting, as it contains documents with heterogeneous topics and text genres, causing low redundancy.",
"1641 Depositions : The 1641 Depositions 4 contain legal texts in form of court witness statements recorded after the Irish Rebellion of 1641.",
"In this conflict, Irish and English Catholics revolted against English and Scottish Protestants and their colonization of Ireland.",
"It lasted over 10 years and ended with the Irish Catholics' defeat and the foreign rule of Ireland.",
"The depositions have been transcribed from 17 th century handwriting, keeping the old language and orthography.",
"These documents have been used to analyze the rebellion, perform cold case reviews of the atrocities committed and to gain insights into contemporary life of this era.",
"Part of the documents have been annotated 3 https://www.wwp.northeastern.edu/wwo 4 http://1641.tcd.ie/ Table 2: Data statistics of the three used datasets: Total number of D ocuments, T okens, E ntities, average number of E ntities per S entence, % of entities that are not linked.",
"with named entities that are linked to DBPedia (Munnelly and Lawless, 2018b).",
"As the coverage of DBPedia was not sufficient (only around 20% of the entities are in DBPedia), we manually created a domain specific knowledge base for this data set containing places and people mentioned.",
"To increase difficulty and reduce overfitting, we added additional related entities from DBPedia.",
"The number of persons increases thereby by tenfold (130 1383) and the number of places by twentyfold (99 2119).",
"Details for that can be found in Appendix A.1.",
"While generating a KB from gold data is not ideal, creating or completing a knowledge base during annotation is not uncommon (see e.g. Wolfe et al., 2015).",
"The texts are difficult to disambiguate due to the same reasons as for WWO .",
"The depositions are interesting, as they contain documents from the same domain (witness reports), but feature many different actors and events.",
"Table 2 contains several statistics regarding the three datasets.",
"AIDA and 1641 contain on average at least one entity per sentence, whereas WWO , while larger, is only sparsely annotated.",
"In contrast to the other two, 1641 contains no entities linked to NIL .",
"This is caused by the fact that we created the KB for 1641 from the gold annotations and for entities previously NIL , new entities were created by hand ; before that, the original corpus linking to DBPedia had 77% NIL annotations.",
"The average ambiguity, that is, how many different entities were linked to mentions with the same surface form is quite high for AIDA and WWO and quite low for 1641 .",
"We explain the latter by the extreme variance in surface form, as even mentions of the same name are often written differently (e.g. Castlekevyn vs. Castlekevin ).",
"Also, 1641 contains many hapax legomena (mentions that only occur once).",
"The average number of candidates is comparatively larger for WWO and 1641 as we use fuzzy search for these.",
"Finally, the distributions of assigned entities in WWO and 1641 are also more balanced, expressed by a lower Gini coefficient (Dodge, 2008).",
"These last two aspects together with noisy texts and low resources causes entity linking to be much more difficult compared to state-of-the-art datasets like AIDA .",
"To validate our approach, we first evaluate recommender performance.",
"Then, non-interactive ranking performance is evaluated similarly to state-of-the-art EL.",
"Afterwards, we simulate a user annotating corpora with our Human-In-The-Loop ranker.",
"Finally, we conduct a user study to test it in a realistic setting.",
"Similar to other work on EL, our main metric for ranking is accuracy.",
"We also measure Accuracy@5, as our experiments showed that users can quickly scan and select the right entity from a list of five elements.",
"In our annotation editor, the candidate list shows the first five elements without scrolling.",
"As a baseline, we use the Most-Frequently Linked Entity baseline (MFLEB).",
"It assigns, given a mention, the entity that was most often linked to it in the training data.",
"We evaluate the performance of our Levenshtein-based recommender that suggests potential annotations to users (Table 3).",
"We filter out suggestions consisting of 3 characters as these introduce too much noise.",
"For annotation suggestions, we focus on recall: where low precision implies recommendations that are not useful, no recall results in no recommendations at all.",
"It can be seen that for AIDA and WWO , the performance of all three recommenders is quite good (recall is about 60% and 40%) while for 1641 , it is only around 20%.",
"The Levenshtein recommender increases recall and reduces precision.",
"The impact is most pronounced for 1641 , where it improves recall upon the string matching recommender by around 50%.",
"In summary, we suggest using the string matching rec-Dataset Model P R F1 AIDA String 0.43 0.60 0.50 Leven@1 0.31 0.55 0.40 Leven@2 0.19 0.57 0.28 WWO String 0.17 0.38 0.23 Leven@1 0.11 0.40 0.16 Leven@2 0.04 0.42 0.07 1641 String 0.12 0.14 0.13 Leven@1 0.16 0.19 0.17 Leven@2 0.12 0.22 0.15 Table 3: Recommender performance in P recision, R ecall and F1 score for String matching recommender and Leven shtein recommender with distance 1 and",
"ommender for domains where texts are clean and exhibit low spelling variance.",
"We consider the Levenshtein recommender to be more suitable for domains with noisy texts.",
"We evaluate EL candidate ranking in a noninteractive setting first to estimate the upper bound ranking performance.",
"As we are the first to perform EL on our version of WWO and 1641 , it also serves as a difficulty comparison between AIDA as the state-of-the-art dataset and datasets from our domain-specific setting.",
"For AIDA , we use the existing train, development and test split; for the other two corpora, we perform 10-fold cross validation as we observed high variance in score when using different train-test splits.",
"Features related to user queries are not used in this experiment.",
"We assume that the gold candidate always exists in training and evaluation data.",
"The results of this experiment are depicted in Table 4.",
"It can be seen that for AIDA , the MFLE baseline is particularly strong, being better than all trained models.",
"For the other datasets, the baseline is weaker than all, showing that popularity is a weak feature in our setting.",
"For AIDA , LightGBM performs best, for WWO and 1641 , the RankNet is best closely followed by the RankSVM .",
"The accuracy@5 is comparatively high as there are cases where the candidate list is relatively short.",
"Regarding training times, LightGBM trains extremely fast with RankSVM being a close second.",
"They are fast enough to retrain after each user annotation.",
"The RankNet trains two to four times slower than both.",
"Feature importance The models we chose for ranking are white-box; they allow us to introspect the importance they give to each feature, thereby explaining their scoring choice.",
"For the RankSVM, we follow Guyon et al. (2002) and use the square of the model weights as importance.",
"For LightGBM, we use the number of times a feature is used to make a split in a decision tree.",
"We train RankSVM and LightGBM models on all data and report the most important and least important features in Fig. 3.",
"We normalize the weights by the L1-norm.",
"It can be seen that both models rely on Levenshtein distance between mention and label as well as Sentence-BERT.",
"The other text similarity features are, while sparingly, also used.",
"Simple features like exact match , contains or prefix and postfix seem to not have a large impact.",
"In general, LightGBM uses more features than the RankSVM .",
"Even though Sentence-BERT was trained on Natural Language Inference (NLI) data which contains only relatively simple sentences, it still is relied on by both models for all datasets.",
"The high importance of Levenshtein distance between mention and label for 1641 is expected and can be explained by the fact that the knowledge base labels often were derived from the mentions in the text when creating a domain-specific knowledge base for this dataset.",
"When trained on AIDA , the RankSVM assigns a high importance to the Jaccard distance between context and description.",
"We attribute this to the fact that entity descriptions in Wikidata are quite short; if they are similar to the context then it is very likely a match.",
"We simulate the Human-In-The-Loop setting by modeling a user annotating an unannotated corpus linearly.",
"In the beginning, they annotate an initial seed of 10 entities without annotation support which are then used to bootstrap the ranker.",
"At every step, the user annotates several entities where the ranker is used as assistance.",
"After an annotation batch is finished, this new data is added to the training set, the ranker is retrained and evaluated.",
"Only LightGBM and RankSVM are used as the RankNet turned out to be too slow.",
"We do not evaluate on a holdout set.",
"Instead, we follow Erdmann et al. (2019) and simulate annotating the complete corpus and evaluate on the very same data as we are interested in how an annotated subset helps to annotate the rest of the data, not how well the model generalizes.",
"We assume that users annotate mention spans perfectly, i.e. we use gold spans.",
"The candidate generation is simulated in three phases.",
"It relies on the fact that the gold entity is given by the dataset: First, search for the mention only.",
"If it was not found, search for the first word of the mention only.",
"If this does not return the gold entity, search for the gold entity label.",
"All candidates retrieved by these searches for a mention are used as training data.",
"We also experimented with using only candidates for that the ranker assigned a higher score than the gold one.",
"This, however, did not affect the performance.",
"Therefore, we use all negative candidates.",
"Fig. 4 depicts the simulation results.",
"All models outperform the MFLE baseline over most of the annotation process.",
"It can be seen that both of our used models achieve high performance even if trained on very few annotations.",
"The RankSVM handles low data better than LightGBM , but quickly reaches its peak performance due to it being a linear model with limited learning capacity.",
"The LightGBM does not plateau that early.",
"This potentially allows to first use a RankSVM for the cold start and when enough annotations are made, LightGBM , thereby combining the best of both models.",
"Comparing the performance on the three datasets, we notice that the performance for AIDA is much higher.",
"Also, the baseline rises much more steeply, hinting again that AIDA is easier and popularity there is a very strong feature.",
"For 1641 , the curve continue to rise, hinting that more data is needed to reach maximum performance.",
"Table 5 shows how the simulated user searched for the gold entities.",
"We see that for WWO and 1641 , the user often does not need to spend much effort in searching for the gold label, using the mention is in around 50% of the cases enough.",
"We attribute this to the fuzzy search which the official Wikidata endpoint does not offer.",
"5.4 User Study In order to validate the viability of our approach in a realistic scenario, we conduct a user study.",
"For that, we augmented the already existing annotation tool INCEpTION 5 (Klie et al., 2018) with our Human-In-The-Loop entity ranking and automatic suggestions.",
"Fig. 5 shows a screenshot of the annotation editor itself.",
"We let five users reannotate parts of the 1641 corpus.",
"It was chosen as it has a high density of entity mentions while being small enough to be annotated in under one hour.",
"Users stem from various academic backgrounds, e.g. natural language processing, computer science and digital humanities.",
"Roughly half of them have previous experience with annotating.",
"We compare two configurations: one uses our ranking and Levenshtein recommender, one uses the ranking of the full text search with the string matching recommender.",
"We randomly selected eight documents which we split in two sets of four documents.",
"To reduce bias, we assign users in four groups based on which part and which ranking they use first.",
"Users are given detailed instructions and a warmup document that is not used in the evaluation to get used to the annotation process.",
"We measure annotation time, number of suggestions used and search queries performed.",
"After the annotation is finished, we ask users to fill out a survey asking which system they prefer, how they experienced the annotation process and what suggestions they have to improve it.",
"The evaluation of the user study 5 https://inception-project.github.io shows that using our approach, users on average annotated 35% faster and needed 15% less search queries.",
"Users positively commented on the ranking performance and the annotation suggestions for both systems.",
"For our ranking, users reported that the gold entity often ranked first or close to top; they rarely observed that gold candidates were sorted close to the end of the candidate list.",
"We conduct a paired sample t-test to estimate the significance of our user study.",
"Our null-hypothesis is that the reranking system does not improve the average annotation time.",
"Conducting the test yields the following: t = 3 .",
"332 , p = 0 .",
"029 .",
"We therefore reject the null hypothesis with p = 0 .",
"029 < 0 .",
"05 , meaning that we have ample evidence that our reranking speeds up annotation time.",
"Recommender suggestions made up around 30% of annotations.",
"We did not measure a significant difference between string and Levenshtein recommender.",
"About the latter, users liked that it can suggest annotations for inexact matches.",
"However, they criticized the noisier suggestions, especially for shorter mentions (e.g. annotating joabe (a name) yielded suggestions for to be ).",
"In the future, we will address this issue by filtering out more potentially unhelpful suggestions and using annotation rejections as a blacklist.",
"We presented a domain-agnostic annotation approach for annotating entity linking for low-resource domains.",
"It consists of two main com-Figure 5: For our user study, we extend the INCEpTION annotation framework: 1 (cid:13) entity linking search field, 2 (cid:13) candidate list, 3 (cid:13) linked named entity, 4 (cid:13) entity linking recommendation.",
"ponents: recommenders that are algorithms that suggest potential annotations to users and a ranker that, given a mention span, ranks potential entity candidates so that they show up higher in the candidate list, making it easier to find for users.",
"Both systems are retrained whenever new annotations are made, forming the Human-In-The-Loop.",
"Our approach does not require the existence of external resources like labeled data, tools like named entity recognizers or large-scale resources like Wikipedia.",
"It can be applied to any domain, only requiring a knowledge base whose entities have a label and a description.",
"In this paper, we evaluate on three datasets: AIDA , which is often used to validate state-of-the-art entity linking systems as well as WWO and 1641 from the humanities.",
"We show that in simulation, only a very small subset needs to be annotated (fewer than 100) for the ranker to reach high accuracy.",
"In a user study, results show that users prefer our approach compared to the typical annotation process; annotation speed improves by around 35% when using our system relative to using no reranking support.",
"In the future, we want to investigate more powerful recommenders, combine interactive entity linking with knowledge base completion and use online learning to leverage deep models, despite their long training time.",
"We thank the anonymous reviewers and Kevin Stowe for their detailed and helpful comments.",
"We also want to thank the Women Writers Project which made the Women Writers Online text collection available to us.",
"This work was supported by the German Research Foundation under grant EC 503/1-1 and GU 798/21-1 as well as by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 01UG1816B (CEDIFOR)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"objective",
"other",
"other",
"other"
] |
[
"Lexically constrained machine translation allows the user to manipulate the output sentence by enforcing the presence or absence of certain words and phrases.",
"Although current approaches can enforce terms to appear in the translation, they often struggle to make the constraint word form agree with the rest of the generated output.",
"Our manual analysis shows that 46% of the errors in the output of a baseline constrained model for English to Czech translation are related to agreement.",
"We investigate mechanisms to allow neural machine translation to infer the correct word inflection given lemmatized constraints.",
"In particular, we focus on methods based on training the model with constraints provided as part of the input sequence.",
"Our experiments on the English-Czech language pair show that this approach improves the translation of constrained terms in both automatic and manual evaluation by reducing errors in agreement.",
"Our approach thus eliminates inflection errors, without introducing new errors or decreasing the overall quality of the translation.",
"In Neural Machine Translation (NMT), lexical constraining (Song et al., 2019; Hokamp and Liu, 2017; Post and Vilar, 2018) involves changing the translation process in a way that desired terms appear in the model's output.",
"Translation constraints are useful in domain adaptation, interactive machine translation or named entities translation.",
"Current approaches focus either on manipulating beam search decoding (Hokamp and Liu, 2017; Post and Vilar, 2018; Hu et al., 2019) or training an NMT model using constraints alongside the input (Dinu et al., 2019; Song et al., 2019; Chen et al., 2020).",
"In inflected languages, constraints from both source and target sides may appear in numerous surface forms, which may result in errors during Likud party has merged with an even more hawkish lot under Avigdor Lieberman.",
"translation.",
"By enforcing the presence of a certain exact term on the target side, existing approaches fail to deal with word inflections.",
"As we show, they preserve the surface form of the word provided as constraint regardless of the context.",
"Morphologically rich languages have multiple forms of each word, e.g. inflections to nouns.",
"For satisfactory results in these languages, the constraint processing method needs to be capable of detecting any surface form on the source side and generating the correct surface form on the target side.",
"To illustrate the problem, Figure 1 shows a sentence translation from English to Czech with outputs from three methods.",
"The first one is a no-constraint translation where hawkish is translated as jestrbm (literally hawkish, no figurative meaning; followed by a further mis-translation of lot).",
"The second is a constrained model requested to use the word form radikln (radical) in the output.",
"The constraint was satisfied but the adjective should have taken the comparative degree to match the rest of the translation.",
"The third output is the result of a model that processes the input along with the canonical form constraint (radikln) and modifies the constraint inflection in the final translation (radiklnej) to correctly express the comparative form (although the translation of lot is worse than in previous case).",
"We evaluate different methods of lexically constrained machine translation on the Czech language.",
"We propose an approach to deal with word inflection in lexically constrained translation.",
"By training a model that receives lemmatized target constraints as the input alongside the source sentence, we improve the generation of constraints in forms matching the output context.",
"We run experiments on both synthetic and real-world test scenarios.",
"In MT, there are scenarios where words that should or should not appear in the output are known upfront.",
"Common use cases include integration of domain-specific terminology and translation of named entities or rare words using a dictionary.",
"Such functionality was previously implemented in phrase-based systems (Okuma et al., 2008), like Moses (Koehn et al., 2007).",
"In NMT, this task is not yet definitely solved, since the translation process is hard to interpret and influence.",
"In order to enforce the presence of specific terms, some approaches post-process the output.",
"Prior to subword handling (Sennrich et al., 2016; Kudo and Richardson, 2018), unknown words were corrected by replacing them with word translation pairs from a bilingual dictionary (Luong et al., 2015).",
"Crego et al. (2016) use placeholders to translate numbers and named entities.",
"Placeholders have also been found useful for translation of text with formal mark-up and its interaction with content (Hanne-man and Dinu, 2020).",
"An alternative way of adding constraints to the final translation is by manipulating the beam search decoding process.",
"Anderson et al. (2017) use a finite state machine (FSM) that recognizes target sentence with constraint patterns.",
"Each state of the FSM has its own beam and only hypotheses in beams that are in accepting states can be fin-ished.",
"Hasler et al. (2018) improve upon this work by utilizing encoder-decoder attention weights to guide the placement of a constraint.",
"Chatterjee et al. (2017) also use attention weights and beam search look-ahead to choose constraint positions.",
"Hokamp and Liu (2017) present Grid Beam Search, which extends the usual beam search (Och and Ney, 2004) with a mechanism to ensure the coverage of all constrains.",
"Post and Vilar (2018) propose a similar but more efficient algorithm.",
"By dynamically reallocating the beam capacity, an arbitrary number of constraints can be processed within a constant width of the beam.",
"One shortcoming of the above methods is the slower inference compared to unmodified beam search models.",
"This issue is in large part solved by effective vectorized beam allocation (Hu et al., 2019).",
"Another drawback of constrained decoding is a less fluent output, especially in morphologically rich languages, since we force the output to contain a phrase that may not be in agreement with the rest of the output.",
"One way of integrating constraints into NMT is to provide them alongside the input sentence and train the model to be biased towards utilizing them.",
"This gives the user less direct control over the output translation and requires specially trained models.",
"On the other hand, these approaches are simple to implement, do not incur inference slowdown, and make the translation more robust in case of wrongly chosen constraints.",
"NMT models are often able to produce very fluent output (Popel et al., 2020a), making them capable to cope with inflections properly.",
"Thus, using this capability may yield better results than constrained decoding with heuristics for inflections in inflected languages.",
"Dinu et al. (2019) use input factors to annotate source sentences with desired translations and train the model to copy these translations into the output sequence.",
"Chen et al. (2020) append constraints to the end of the source sentence.",
"Their goal is to train the model to place constraints in the output translation without the need of a bilingual dictionary or a specified word alignment.",
"Song et al. (2019) also propose a data augmentation approach that uses constraints along the source as input during the model training.",
"Concurrently to our work, Bergmanis and Pinnis (2021) modify Dinu et al. (2019) approach by providing lemmatized word factors associated to random tokens in the source sentence.",
"With the lemmatized factors, they force the model to learn the correct inflection of the word in the translation.",
"The main difference between our work and most of the existing approaches is the use of lemmatized constraints to allow the model to correctly inflect them to agree with the output context.",
"The concurrent work by Bergmanis and Pinnis (2021) presents a very similar idea.",
"They also use lemmatized forms of the constraints and let the model itself to generate correct surface form.",
"While their choice of languages (English to Latvian) and their experimental setup was slightly different, the overall conclusions of their work agree with ours.",
"The main difference is the approach to integration of the constraints.",
"Bergmanis and Pinnis (2021) use factors to directly annotate to the source tokens with lemmas of their desired translations.",
"We experimented with this approach (see B.5), but in most of the experiments, we opted for a simpler integration method, by concatenating desired target lemmas to the source sentence.",
"This simplifies preparation of the training data by removing the need for source to target word alignment and as we show, hurts the performance only by a very slight margin.",
"Building upon the described techniques, we focus on allowing the model to choose the correct word form.",
"Our approaches are based on learned constraining, where the constraints are lemmatized during both training and test time.",
"In our approach, we append the target constraints as a suffix of the input sentences, same as Chen et al. (2020).",
"We use <sep> token to separate constraints from the input sentence, and <c> token to separate constraints from each other.",
"Inspired by Chen et al. (2020), we shift the positional embeddings by 1024 for the constraint tokens.",
"However, while Chen et al. (2020) start each constraint on the same position, we shift the start of the constraint string and continue monotonically from there.",
"We do not use any other techniques described in their work.",
"The following example illustrates an input to our baseline constrained model, passing two constraints (plnovno and obcch) along with the source text.",
"In this case, both constraints are in correct target surface forms, which are obtained from the reference translation.",
"Without knowledge of the reference, it is necessary to solve the problem of agreement of the constraint with the rest of the translation, which is the main goal of our work.",
"We also experimented with the factored translation approach introduced by Dinu et al. (2019) as a second constraint integration method.",
"In Appendix B, we present the description of the method and a comparison with appending the constraints as a suffix.",
"To our current knowledge, there is no English-Czech dataset with provided constraints.",
"Thus, we generate constraints from the existing parallel data.",
"We consider two approaches to generate constraints for the training and test data.",
"Training The simplest method of obtaining target-side constraints is sampling random token subsequences from the reference sentence.",
"In our experiments, every token in the sentence can become a start of a constraint with probability of 0.3.",
"An open constraint finishes on each subsequent token with probability of 0.85 and multiple constraints for a single sentence are permitted (without overlapping).",
"We did not optimize these probabilities, further gains may be obtained by a search for better values.",
"The constraint order is randomly permuted, since during the test time, order of constraints in the target is not known beforehand.",
"The second approach makes use of either a bilingual dictionary or a terminology database.",
"If a translation pair from the dictionary is found in the source and target sentences, its target side can serve as the constraint.",
"By this method, we also obtain alignment of the source and target expressions, which is useful for the factored translation approach (see Appendix B.5).",
"Test time Given an input sentence and no reference translation, we can synthesize constraints by searching for source expressions in a dictionary or a terminology database.",
"Dictionaries generally map one expression to many target ones and we or the model have to decide which of them to use.",
"Terminology databases are usually unambiguous and the target translation serves as the constraint.",
"We experiment with terminology in Section 4.3.",
"Lemmatization Our methods use lemmatized 1 constraints.",
"For the random target subsequence method, we lemmatize the selected words.",
"For the dictionary search method, we lemmatize both the dictionary and training data and we search for matching expression pairs using the lemmas.",
"During the actual training, we use the original, non-lemmatized sentence with lemmatized constraints.",
"This scenario is more similar to real-life use cases, since target word form which should be produced is not known beforehand.",
"With constraint lemmatization, the above example would be: Input: Price increase is planned mainly in larger municipalities.",
"<sep> obec <c> plnovat 4 Experiments In this section, methods presented above are compared on various tasks and datasets.",
"First, we use an oracle test set, which is created with previous knowledge of the reference.",
"We use it to assess the ability of the models to integrate the constraints themselves without additional noise caused by problems of the real world.",
"In the subsequent experiments, we present a more realistic scenario we use official terminology for EU-related expressions to translate parts of Europarl corpus.",
"Finally, we evaluate the approaches on translation of general, open-domain rare words using dictionary.",
"We train English-Czech NMT models for our experiments.",
"Czech has a high degree of inflection with seven cases and three genders for nouns and adjectives.",
"We train our models on CzEng 2.0 (Kocmi et al., 2020) using all authentic parallel sentences (61M), as well as back-translated Czech monolingual sentences (51M).",
"Newstest-2019 (Barrault et al., 2019) is used as a validation set and newstest-2020 (Barrault et al., 2020) as a test set.",
"We break the text into subwords using SentencePiece (Kudo and Richardson, 2018) and lemmatize using UD-Pipe (Straka and Strakov, 2017).",
"BLEU scores are computed using SacreBLEU (Post, 2018).",
"2 For experiments mentioning dictionaries, we extracted pairs of terms from English and Czech Wik-1 In Appendix B, we show that simple stemming heuristic performs at least as well as proper lemmatization in automated metrics described further.",
"tionary 3 and a large commercial dictionary.",
"In appendix B.2 we show that using Wiktionary also improves performance upon baseline, but the commercial dictionary offers better coverage of the expressions and thus provides better overall results.",
"For this reason, all the experimets shown further are based on the commercial dictionary data.",
"We use the Czech government database for EU terminology 4 to evaluate integration of domain-specific terminology through constraints.",
"We select all Czech terms and their translation to English, which corresponds to 14203 expressions per language.",
"Then, we search the Europarl 5 corpus (Koehn, 2005) for sentence pairs containing English terms in the source side and lemmas of the Czech translation in a lemmatized version of the target side, ignoring trivial terms.",
"Keeping at most the first ten sentence pairs containing specific source term, the final dataset consists of 6585 examples, covering 1433 terms.",
"We remove these sentences from the training data, since Europarl is part of the CzEng corpus.",
"4.1.1 Model We use MarianNMT (Junczys-Dowmunt et al., 2018) to train Transformer-base models with standard parameters (Vaswani et al., 2017).",
"Inspired by Popel et al. (2020b), we alternate between authentic and backtranslated data every 25 million training sentences, while using exponential smoothing of the parameters.",
"Four NVIDIA V100 GPUs were used for the training and one training run (400-500k steps) takes approximately 40 hours with this configuration.",
"A large portion of the computation time can be saved by finetuning an existing NMT model on the proposed dataset.",
"By finetuning the baseline model we reached the same performance after 30-50k steps.",
"However, all the results provided in this paper are obtained by training from scratch.",
"Since we integrate constraints in the target language into the source sequence, we share source and target vocabularies (and embeddings), consisting of 32000 subwords, to allow easier copying of the subwords from source to target sequence.",
"To assess the ability of a model to produce the provided constraints in the output, we use newstest-3",
"2020 test set with oracle constraints.",
"These constraints are obtained via dictionary search on the test set as described above, i.e. , the constraints are terms from a English-Czech dictionary, where both source and target sides are present in the sentence pair.",
"Note that we know the reference beforehand, thus, this evaluation may not reflect improvement in translation in a real world setting.",
"We only use it to measure the ability of constraint integration.",
"We trained two sets of constrained models.",
"The first one, baseline constrained models, use original target side forms of the constraint expressions.",
"The second set consists of models trained using lemmatized forms of the constraints.",
"Our goal with the lemmatized models was to harness the language modeling capacity of the model to generate a surface form of lemmatized constraint that agrees with the rest of the translation.",
"Table 1 presents the results.",
"We used two forms of the test set constraints original reference forms and lemmatized constraints (column Test form ).",
"The lemmatized constraints are closer to real world scenario, where we do not know the output form of the constraint expression beforehand.",
"As a sanity check, we compute standard BLEU and BLEU calculated on lemmatized hypothesis against lemmatized reference ( BLEUL ) .",
"More importantly, we assess target constraint coverage ( Cvg and Cvg L ) on original and lemmatized test set by comparing the constraints in the output with the reference.",
"Note that in theory, Cvg value should always be lower or equal to Cvg L , since surface form coverage is equal to lemma coverage minus proportion of incorrectly generated surface forms.",
"This is not always the case, since the lemmatizer takes the sentence context into consideration and lemmatized versions of stand-alone terms in the terminology database may not match lemmatized versions of the same terms inside a reference sentence.",
"This causes a slight underestimation of Cvg L .",
"The Cvg and Cvg L columns document that both methods of constraint synthesis for training (ran-dom target subsequences and dictionary terms) lead to models capable of producing more than 93% of the constraints when constraints are not lemmatized.",
"Surface coverage of surface form trained models drops to 6168% when using lemmatized form of the test set constraints, but lemma coverage is only slightly lower this is expected, as these models are trained to reproduce exact form of the given constraints.",
"The results of models trained on lemmatized constraints with lemmatized test constraints show that the surface form coverage increases compared to surface form trained models with lemmatized test constraints (rows lemma / lemma vs. surface / lemma ).",
"While the coverage is lower than when using surface form test set for the surface Train Test BLEU Cvg Baseline No constraints 37.9 75.02 All No constraints 19.1 61.40 Terms 37.3 91.73 Dict 43.3 84.14 Terms + Dict 44.0 93.75 Skip half No constraints 38.2 75.32 Terms 38.4 90.52 Dict 43.5 83.49 Terms + Dict 43.1 91.22 Table 2: Performance of models trained using surface forms of dictionary constraints on the same Europarl test set split.",
"form model, we show in Section 5 that this is mainly an artifact of reference-based evaluation and that the model inflects the constraints correctly.",
"The model trained with constraints based on dictionary reaches the best performance on the oracle constraint test set, for which the constraints are generated in the same way.",
"However, when constraints are not supplied, BLEU and coverage drops sharply (the row dict/surface/).",
"This may be caused by the fact that sentences containing expressions present in the dictionary are almost always accompanied by the constraint during the training.",
"Therefore, the model is not presented with many examples where the translation appears without the corresponding constraint and generates constraint expression with much lower probability when this happens during the test time.",
"We experimented with skipping half of the sentences during the constraint generation, leaving them without any constraints (skip half in the table).",
"As shown in Table 1, this largely reduces the problem without any test time constraints, the model reaches baseline results (the row dict, skip half/surface/).",
"However, when the constraints are supplied, the coverage is slightly lower than for a model trained with constraints for all the sentences (e.g. 91.4% instead of 93.5% for surface form models).",
"Fine-tuning the ratio or choosing the sentences to leave without the constraints dynamically during the training might help to solve this problem.",
"Since the studied methods proved to work well with oracle surface form of constraints, we moved to a realistic use-case with the Europarl test set described in Section 4.1.",
"We split the test set into two parts: same contains examples where the form of the constraint in the reference is the same as in the terminology database (and as provided to the baseline constrained model), diff contains examples where the form of the constraint in the target sentence is different from the database form.",
"The target lemmas of the constraint should match in both cases.",
"This split allows us to better assess the translation in inflected languages, since the problems we focus on are more pronounced in the diff test set.",
"Table 2 shows that the model trained with dictionary constraints underperforms in terms of BLEU when only the constraints from terminology database are supplied (BLEU of 19.1).",
"This is caused by the issue described earlier during the training, the model does not encounter the words which are present in the dictionary enough times without the constraint.",
"When the dictionary constraints are used alongside the terminology database constraints (rows denoted by Terms + Dict), the BLEU score increases.",
"This approach requires either prior knowledge of the reference, or a mechanism for the target dictionary term disambiguation.",
"To mitigate this issue, we skip half of the sentences when generating the constraints, i.e. , half of the training corpus is seen without any constraints.",
"This alleviates the problem to a large extent, see the Skip half results.",
"We present the results on the whole test set in Table 3.",
"The first and second columns show word form of the constraints during the training and test time, respectively.",
"Canon.",
"constraint is in its canonical, original form from the the terminology database.",
"Ref SF rows show results with constraints in the same form as in the reference translation (this requires prior knowledge of the reference).",
"First, let us focus on results of models trained with surface form constraints.",
"Three trends in the results hint that generating the correct constraint form is challenging for the model, if the correct form is different from the one supplied in the input.",
"First, the difference between surface form and lemma coverage (44% vs 96.6%) shows the model generates the correct constraint words, but in a form not matching the reference.",
"Second, the difference is more pronounced in the diff split (Ta-ble 4), while in the same split (Table 5), surface form coverage is almost the same as the lemma coverage.",
"This is because in the same split, target constraints are already in the canonical form, same as in the terminology database, so there is no need for further inflection.",
"Third, using constraints in the same surface form as in the reference ( Ref SF ) improves the observed coverage compared to using the canonical form from the terminology database (e.g., 97% vs 44% on the whole test set, see Table 3).",
"This oracle setting, using the reference to determine the correct surface form, shows the upper limits of the constraint integration approach, if the inflection issue is solved optimally.",
"As stated earlier, we trained the models again using lemmatized versions of the constraints.",
"When we supply lemmatized constraints to these mod-Train c.",
"els during the test time, the coverage rises from 44% (surface form trained model with canonical constraint forms) to 77%, but this is still far from the oracle 97%.",
"This suggests that a large room for improvement exists, but as we show in Section 5, most of these discrepancies are caused by reference-based evaluation and are not real errors.",
"In majority (92%) of the cases marked as not covered when using lemmatized model, the form of the constraint is different from the reference, but correct given the context, as the model translates the sentences differently (but correctly).",
"Our work is based on training the NMT model to include provided constraints in the output translation.",
"Another popular way of constraint integration is modifying the decoding process.",
"We hypothesize that this approach will not be useful in our scenario, since the constraints are enforced in their surface forms, which is the issue we are trying to solve.",
"To verify this, we evaluated lexically constrained decoding by Hu et al. (2019) as implemented in fairseq (Ott et al., 2019) on the Europarl test sets described in Section 4.3.",
"The results in Table 6 show that while the constrained decoding indeed produces the target constraints in the output, they stay in the same form as in the terminology database.",
"This is shown by the low surface form constraint coverage (column Constraint src BLEU % as ref % correct No constraint 21.6 35.4 64.6 Reference term 23.1 91.7 91.7 Random term 22.6 54.2 83.3 Table 7: Translation of sentences containing rare words.",
"Cvg ) for the diff and whole dataset splits, while for the same split, where the constraints are in the same form in the translation as in the terminology database, the coverage is high.",
"On lemma level ( Cvg L ), coverage on all splits remains high, again showing that the system produces exactly the surface form provided, instead of correct target sentence form.",
"Note that the results are not directly comparable with the results in previous subsection, since here we use only a part of the training data (first 25M sentence pairs from parallel part of CzEng) for the preliminary experiments.",
"We also observed that the Pearson correlation of constraint placement in respect to reference translation (see Appendix A.1 for details) is lower (0.81) when using constrained decoding than when using the training approach as in the main experiments (0.94).",
"We define rare words as terms from a dictionary that occur in the source side of the training corpus at most 50 times.",
"We create a subset of our general dictionary by only using expression pairs with rare words on source side.",
"We search WMT 2007-2020 English-Czech news test sets (Barrault et al., 2020) for sentence pairs containing term pairs from this rare word dictionary, resulting in 48 examples.",
"A dictionary generally provides 1-to-many mappings of source terms to a target language, so the correct target expression needs to be disambiguated.",
"Table 7 presents results with no constraints, with constraints where the lemmatized target constraint is chosen based on the lemmatized reference, and with constraints where the target expression is chosen randomly from all the possible translations.",
"We used a model trained on lemmatized random target token subsequences for the translation.",
"On average, each rare word in the test set has 3.3 possible dictionary translations.",
"Aside from BLEU score, we show the percentage of rare words translated correctly, meaning that either they are the same expression as in the reference, or that they are synonymous expressions that are correct in the given context.",
"This is different from the terminology use case, since we do not strictly enforce single possible translation.",
"The results show that even with the random choice of the dictionary constraint translation, our model improves the translation of rare words.",
"In this section, we analyse examples marked as errors by automatic evaluation.",
"In Appendix A.1, we analyse the position of constraints in translation outputs, showing that they are placed correctly.",
"In Appendix A.2, we look closely at the constrained translation of an out-of-domain document.",
"We manually analysed outputs marked as not having the desired constraint in the reference surface form by the automatic coverage evaluation introduced in the previous section.",
"Table 9 presents the results.",
"We compare three models.",
"First, the baseline without any constraints (column B ).",
"Second, the best model trained with non-lemmatized constraints ( SF ), and, finally, the best model trained on lemmatized constraints (column L ).",
"The baseline model outputs have constraint surface form coverage of 69.9% on the whole Europarl test set, which results in 1982 out of 6585 examples being marked as different from the reference by the automatic evaluation.",
"The SF model reached 44% coverage (4346 differences).",
"The lemmatized model agreed with the reference in 77.1% (1508 differences).",
"For each model, we randomly sample 100 supposedly erroneous translations to be analysed.",
"The first row of Table 9 shows the number of examples with constraints incorrectly inflected in the context of the generated output.",
"Rows 2 and 3 show cases where the constraint form agrees with rest of the translation: Correct in correct context (CCC) indicates that the target sentence is a valid translation, whereas Correct in incorrect context (CIC) indicates that the constraint was inflected correctly given its context but as a whole, the translation is wrong.",
"Thus, CCC cases are not in fact errors, but were wrongly classified as such by the automatic Source Canon Ref Translation Error They are seeking to weaken the Commission's proposal to benefit the industry.",
"evaluation, based on a direct comparison with the reference.",
"The cases where the model ignores the constraint and generates a different word are in the categories Different correct/incorrect word choice (fourth and fifth rows), based on whether the generated word is a plausible translation of the source constraint.",
"Examples where the translation generally goes wrong and the issue does not fit into the previous categories are under Invalid translation .",
"Our analysis shows that for the lemmatized model ( L ), the vast majority of the examples classified as errors are actually correctly translated and contain the requested constraint in the correct surface form.",
"The presumed error is an artifact of the reference-based evaluation.",
"Only 8% of these examples are real errors, compared to 66% for the surface form model.",
"In Table 8, we show three examples of errors found by the automatic evaluation.",
"Given the canonical and reference source form of a constraint (nvrh and nvrhu, respectively, meaning pro-posal), some errors may arise in the translation.",
"In the first row, although different from the reference source form, the constraint is correctly inflected given the context generated and in a correct translation, which configures a correct in correct context error (CCC).",
"Similarly, in the second row, the same constraint with the same source form is correctly inflected given the context but in a wrong translation, which describes a correct in incorrect context (CIC) error.",
"Finally, the third translation has a wrong inflection given the context generated (Inflection error).",
"We described the problem of word inflection in lexically constrained machine translation.",
"Our solution capitalizes on the ability of NMT models to generate correct word forms in the output translation.",
"We train a Transformer model using lemmatized constraints supplied alongside the input sentences, and correct surface forms of the constraints in the reference.",
"This training leads to a model producing the constraints in the output with high coverage, correct placement, and in a correct surface form.",
"We compare several methods of obtaining constraints and integrating them into the input.",
"In the realistic use case of terminology integration, we evaluated our methods and show that without lemmatizing the training constraints, the chosen approach of integrating constraints into NMT does not work well for Czech.",
"We effectively solve the issue of inflection errors by lemmatizing constraints, taking advantage of the Transformer's language modelling capacity with no additional inference costs.",
"This has been proven by both automatic and manual evaluation.",
"We show our method is also effective in translating general domain rare words using a bilingual dictionary and we plan future work in solving the problem of choosing correct translation term from number of variants.",
"Our work is supported by the Bergamot project (Eu-ropean Union's Horizon 2020 research and innovation programme under grant agreement No 825303) aiming for fast and private user-side browser translation, GA CR NEUREM3 grant (Neural Representations in Multi-modal and Multi-lingual Modelling, 19-26934X (RIV: GX19-26934X)) and by SVV 260 453 grant.",
"We also want to thank Michal Novk for his useful feedback and discussions."
] | [
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"objective",
"abstain",
"objective",
"other",
"other"
] |
[
"The enrichment of tabular datasets using external sources has gained significant attention in recent years.",
"Existing solutions, however, either ignore external unstructured data completely or devise dataset-specific solutions.",
"In this study, we proposed Few-Shot Transformer based Enrichment (FeSTE), a generic and robust framework for the enrichment of tabular datasets using unstructured data.",
"By training over multiple datasets, our approach is able to develop generic models that can be applied to additional datasets with minimal training (i.e., few-shot).",
"Our approach is based on an adaptation of BERT, for which we present a novel fine-tuning approach that reformulates the tuples of the datasets as sentences.",
"Our evaluation, conducted on 17 datasets, shows that FeSTE is able to generate high quality features and significantly outperform existing fine-tuning solutions.",
"Tabular data is the most diverse format of data representation, spanning domains from nutrition to banking.",
"It does, however, suffer from a lack of contextual information that could make its analysis more effective.",
"Data scientists seek to overcome this limitation by using feature engineering (FE), which involves applying transformations on existing features to create additional representations of the data.",
"When the available data is not sufficiently diverse (or when additional improvement is sought), one may attempt to use external information sources to enrich the data.",
"We refer to this process as external enrichment of datasets (EED).",
"The use of external sources for feature engineering is both computationally-heavy and time consuming.",
"The process first involves matching entities in the data to those in the external source, a process known as Entity Linking (Shen et al., 2014).",
"Once entities in the external source have been matched, candidate features need to be generated, evaluated, and finally integrated into the tabular dataset.",
"While multiple studies in recent years (Paulheim and Fmkranz, 2012; Ristoski et al., 2015; Friedman and Markovitch, 2018; Mountantonakis and Tzitzikas, 2017; Galhotra et al., 2019; Harari and Katz, 2022) have sought to automate the EED process, a large majority focuses solely on structured external sources, e.g., DBpedia tables, and do not attempt to use the large amounts of available unstructured data (i.e., free text).",
"In this study, we present Few-Shot Transformer based Enrichment (FeSTE) a generic and robust framework for the enrichment of tabular datasets using unstructured data.",
"Our approach utilizes transformer-based, pre-trained Language Models (LM) (Devlin et al., 2018) to identify and prioritize promising candidate features in the external data source.",
"FeSTE then applies a novel process of analyzing the relationships between the unstructured features and the dataset's target class values, and automatically generating new tabular features.",
"To overcome the difficulty imposed by datasets of limited size, we train FeSTE on multiple datasets in order to create a generic model that can later be applied to additional datasets.",
"Additionally, we propose a novel fine-tuning (FT) process that enables pre-trained LM to quickly adapt to new datasets (i.e., perform few-shot learning).",
"The result of this process is a more robust model that is also more effective on small datasets.",
"While previous studiesTAPAS (Herzig et al., 2020), TaBERT (Yin et al., 2020), TURL (Deng et al., 2020), and TPN (Wang et al., 2021)have attempted to use Transformers for analyzing tabular data, FeSTE focuses on analyzing the connection between external texts and the dataset's entities.",
"We are therefore able to leverage the Transformer architecture to generate additional features and fine-tune the generation process in a novel way.",
"diverse characteristics (number of samples, feature composition, etc.).",
"For our evaluation, we use BERT as the Transformer architecture and Wikipedia as the external source, with its page abstracts as our unstructured texts.",
"Our results show that FeSTE outperforms existing BERT fine-tuning strategies and that FeSTE is highly effective, achieving an average improvement of 9.2% when combined with the datasets' original features.",
"Finally, we show FeSTE performs well even when it is applied on its own (without any original fea-tures), achieving an average AUC of 0.664.",
"To summarize, our contributions in this study are as follows: Our work is the first to propose a generic and fully-automated approach for tabular data enrichment using unstructured external sources.",
"We propose a novel few-shot fine-tuning approach for transformer-based pre-trained LM, which performs well even for training sets consisting of as little as tens of samples.",
"We make our code publicly available.",
"The large majority of work in the field of automated features generation from external information sources mainly focuses on leveraging structured data.",
"For example, (Paulheim and Fmkranz, 2012) uses structured data from knowledge bases (KB) such as DBpedia to generate new features, which are then used to augment tabular datasets.",
"RapidMiner (Ristoski et al., 2015) processes KB of structured tabular and graphical data by modeling the relations among their entities.",
"Friedman et al. (Friedman and Markovitch, 2018) focus on features generation for text classification problems.",
"They leverage structured data from two KBs: FreeBase (Bollacker et al., 2008) and YAGO2 (Hoffart et al., 2013).",
"The authors first identify each entity in the text, and then recursively explore the KB to extract new features.",
"The LodsyndesisML framework (Mountantonakis and Tzitzikas, 2017) leverages KB's (e.g DBpedia) to create thousands of new features for classification tasks using nine operators.",
"Each operator creates different types of features, which are then used to enrich the original data.",
"Galhotra et el.",
"(Galhotra et al., 2019) use structured web data to generate new features for classification and regression tasks.",
"Their approach generates thousands of candidate features, then selects the final set using information theory-based measures such as Information Gain and Pearson correlation.",
"To the best of our knowledge, the only study to utilize both structured and unstructured sources is the recently proposed FGSES framework (Harari and Katz, 2022).",
"FGSES extracts features from both structured and unstructured DBpedia content, generates thousands of candidate features, and then uses a meta learning-based approach to rank them and return a small final set.",
"While this approach is the most similar to the one proposed in this study, there are significant differences: FeSTE focuses on the analysis of the texts, performs fine-tuning rather than relying on a general model, and takes into account the context of analyzed datasets.",
"Moreover, our approach generates a small set of features and is, therefore, more computationally efficient.",
"Wikipedia is widely used as an external source of information due to its availability, richness, and diversity (Lehmann et al., 2015).",
"An important addition to Wikipedia from an entity linking standpoint is DBpedia, a project that extracts Wikipedia data and makes it accessible in a more structured form.",
"DBpedia is used as an external data source for feature engineering by multiple studies (Paulheim and Fmkranz, 2012; Ristoski et al., 2015; Galhotra et al., 2019; Mountantonakis and Tzitzikas, 2017) because of its accessible format.",
"To utilize DBpedia for feature engineering in tabular data, one should first link the entities in the analyzed dataset to unique DBpedia entities.",
"DBpedia Spotlight (Mendes et al., 2011) is a tool for automatically identifying and linking textual entities to ones on DBpedia.",
"Unfortunately, DBpedia Spotlight tends to capture entities whose name consists only of one or two words, while ignoring entities composed of longer sequences.",
"In recent years, Transformers (Vaswani et al., 2017) and other deep learning-based approaches are being applied in the field of semantic relatedness, in order to link free texts to DBpedia.",
"Blink (Wu et al., 2020) is a BERT-based (Devlin et al., 2018) approach which receives a mention and its surrounding text, and links the mention to its corre-1578 sponding DBpedia entity.",
"It should be noted, however, that the names of DBpedia entities tend to be shorter than in free text, which hampers Blink's performance.",
"Recently, (Harari and Katz, 2022) developed an entity linking algorithm whose aim is to link entities in tabular datasets with Wikipedia pages.",
"We analyze the performance of this approach in Section 5.3.",
"One of the most influential developments in the field of NLP in recent years is the emergence of Transformer-based LM (Vaswani et al., 2017).",
"BERT (Devlin et al., 2018) and its various extensions, GPT (Radford et al., 2018) and XLnet (Yang et al., 2019) achieve state-of-the-art (SOTA) performance on a variety of tasks, including text classification, question answering, next word prediction, and more.",
"Unfortunately, training these models requires expensive hardware and very large amounts of data.",
"For this reasons, the large majority of studies and applications use pre-trained versions of these models.",
"However, fine tuning (FT) these models, i.e., additional limited training on data from the task at hand, has been shown to improve performance (Gururangan et al., 2020).",
"Studies such as (Sun et al., 2019) and (Gururan-gan et al., 2020) propose three FT strategies: (1) Task-specific , in which one trains the pre-trained LM on similar ML task (e.g text classification); (2) Domain-specific , where the pre-trained LM is trained on a similar domain (e.g biology), and; (3) Target task-specific , where the final training step is performed on the targeted dataset directly.",
"The aforementioned studies report a significant improvement in BERT's performance, especially where multi-phase FT was performed.",
"Another fine-tuning strategy called Multi-Task Deep Neural Network (MT-DNN) training was proposed by (Liu et al., 2019): for each task in each training step, the approach modifies the output layer but keeps the lower layers unchanged.",
"This approach can also be applied in cases where the training and target datasets have different characteristics, e.g., different number of classes.",
"One significant drawback of MT-DNN is the need to replace and train the final layer (e.g., the softmax layer) whenever it is applied to a new problem.",
"This approach has two potential shortcomings: first, given that FT mainly affect BERT's final layers (Gururan-gan et al., 2020), some loss of earlier knowledge may occur.",
"Secondly, MT-DNN needs to maintain separate final layers for each task during training (three different heads in the original study).",
"In cases where MT-DNN is training on a large number of datasets, this could pose problems in terms of memory consumption.",
"A different FT approach that does not require task-specific layers was proposed by (Wei et al., 2021), who used an \"instruction FT\" phase.",
"The authors added instructions (i.e., statements) to the text, and required the model to determine whether the statements are correct.",
"While effective, analysis shows that the approach is only applicable to very large datasets, and the addition of over 8B parameters to the already large architecture of 137B.",
"In contrast to all the aforementioned studies, FeSTE uses a single architecture for its training process, regardless of the number of datasets used.",
"Moreover, we propose a novel dataset reformulation process that enables us to apply the same architecture on all datasets, regardless of their number of classes.",
"This approach enables a much more efficient FT process, as shown in Section 5.3.",
"For our task of feature generation from the free text of external sources, we assume a target tabular dataset D t with a classification tasks.",
"Additionally, we assume a set of pre-analyzed tabular datasets with different classification tasks (i.e. different number of classes) D = { D 1 ...D n } .",
"For each dataset, let there be target class values tc i and original features F i .",
"Of F i , let there be at least one feature representing entities e i = { e i, 1 ...e i,m } .",
"For the purpose of generating new features, we assume an external data source EX which consists of entities e ex and text related to these entities.",
"We denote this set of texts as T .",
"For the purpose of linking e i and e ex , we assume an entity linking function .",
"We generate a set of new features f newt from T using a Language Model LM .",
"Overview.",
"Our proposed approach is presented in Figure 1 and Algorithm 1.",
"FeSTE consists of three main phases:",
"a) entity linking ;",
"b) fine-tuning , and;",
"c) features generation .",
"In the entity linking phase, FeSTE automatically matches entity names in the tabular dataset to their corresponding entries in the external data source.",
"In the fine-tuning phase, we fine-tune a pre-trained LM for the task of feature 1579 generation.",
"This phase consists of two stages: a preliminary stage where we fine-tune the model offline on multiple datasets, and an online stage where we fine-tune the model on the training samples of the analyzed dataset.",
"Finally, in the features generation phase, we add the newly-generated features to the original features of the tabular dataset.",
"The goal of this phase is to link entities from our analyzed dataset D t to entries/entities in the external data source EX (Figure 1 step #1).",
"The identification of relevant entities is a necessary first step, since the entities selected in this phase will be processed in the following phases.",
"In this study we use Wikipedia as our external source of information, and Google Search as our linking and disambiguation tool.",
"Obviously, other external sources of information (e.g., Reuters news or Yago (Hoffart et al., 2013)) will require a different linking strategy, but our approach can easily be adapted to support them.",
"Our chosen entity linking process is straightforward: for each dataset entity in e i in D i we query Google Search, focusing on Wikipedia and taking into account the domain of the entity: < lookup > is a < domain > site:en.wikipedia.org where < lookup > is the entity e i,j mention and < domain > is the entities domain (entities column name).",
"for example: USA is a country site:en.wikipedia.org .",
"Each of our queries returns a list of Wikipedia pages which are most likely to represent the entity.",
"FeSTE then extracts the Wikipedia page referenced in the first entry.",
"This step also serves as a form of automatic disambiguation, because we pair e i,j with its most popular interpretation.",
"At the end of this phase, each dataset D i entity e i,j has a linked Wikipedia entity e exi,j .",
"FeSTE then extracts the abstracts of those entities using DBpedia.",
"The goal of this phase is to adapt current state-of-the-art NLP architectures, (e.g., GPT, BERT, and their extensions) to the task of selecting the most relevant features from each of our linked external source entities e ex .",
"As explained in Section 2.3, two common FT approaches are task-specific fine-tuning , which is performed on the target dataset, and preliminary fine-tuning , which is applied on other datasets.",
"While the former is more common, recent studies (Sun et al., 2019; Gururangan et al., 2020) have shown that applying boththe latter and then the formeryields better results.",
"The main difficulty in applying preliminary FT to tabular datasets stems from their diversity: tabular datasets differ greatly in their domains, number of classes, feature composition, etc.",
"These differences make the training of a generic features engineering tool very difficult.",
"To overcome this challenge, we propose a novel FT approach (Figure 1 step #2), which consists of two stages: first, we perform preliminary FT with dataset task reformulation .",
"Then, we perform Target Dataset Fine-Tuning using only the target dataset's training set, i.e., task-specific FT .",
"Preliminary FT with dataset task reformulation.",
"The main challenge in learning from multiple tabular datasets, aside from their highly diverse content and characteristics, is their different number of classes.",
"Such a setup makes using a single output layer with a fixed number of entries impossible.",
"We overcome this challenge as follows: For each dataset D i let there be a set of free texts T i , each associated with an entity e i,j in D i .",
"For each T i,j , we create a Cartesian product T i,j XT C i , where T C i consists of all the target classes of the dataset D i .",
"Namely, we pair the text T i,j with all possible target class values.",
"We can now treat the problem as one of Sentence-pairs classification .",
"In this setting, we are presented with a set consisting of three elements { T ri , tc ri , l ri }, where T ri is the text (first sentence), tc ri is a possible target class value (second sentence) and l r i the label.",
"l r i set to True if { T ri , tc ri } { T i , tc i }.",
"This setting, which is presented in full in Algorithm 2, creates a unified representation for all tabular datasets regardless of their original number of classes .",
"Simply put, we reformulated the original task of each dataset into a NLP downstream task whose goal is to classify whether a given text T ri is related to a given class value tc ri .",
"Once we have reformulated our problem, we can use it to perform a preliminary-FT of BERT .",
"The input we provide consists of two sentences, a classification token and a separation token: [ CLS ] < T ri,j > [ SEP ] < tc ri,j > [ SEP ] 1580 University Original Features Class Alaska Pacific University x1,",
"where T ri,j is the free text, tc ri,j is the assigned target class value, and [ CLS ] and [ SEP ] are BERT's special tokens.",
"An example from the dataset AAUP, whose task is to predict whether a university is ranked as high or low, is presented below: [CLS] Alaska Pacific University (APU) is a private university in Anchorage, Alaska ... [SEP] Low [SEP] This phase of our FT process is similar to BERT's standard auxiliary training task, where the architecture is tasked with determining whether the class assigned to a sentence is correct (i.e., is it the one that appeared in the dataset?).",
"For our fine-tuning, we use the same loss function that is used by the original BERT architecture's auxiliary task.",
"Our data formulation enables us to fine-tune BERT simultaneously over a large set of datasets, thus creating a generic model that can then be effectively applied to additional datasets.",
"It should be noted that a similar process of including the class value as part of the input was previously used in the domain of zero-shot Text Classification (Yin et al., 2019), to address the possibility of new classes appearing in mid-training.",
"Target dataset fine-tuning.",
"The goal of the preliminary FT was to adapt the pre-trained LM for the general task of feature generation for tabular datasets.",
"Now we perform additional FT, designed to optimize the LM for the currently analyzed dataset.",
"To this end, we now repeat the process described above for the target dataset .",
"The process repeats all the steps of the preliminary FT, including the reformulation into a classification task.",
"The deep architecture used for the two fine-tuning phases is presented in Figure 2.",
"We partition the training set of the target dataset D t,train into two equal parts.",
"One half is used for the target dataset FT, while the second is used for the features generation process , which we describe next.",
"The goal of this phase is to produce the generated features that will augment the target dataset.",
"The features generation process is as follows: for each sample (i.e., dataset tuple), we provide the pre-trained LM with an input consisting of:",
"a) all the free text associated with the tuple's entity T rt,j , and;",
"b) the possible target class values we generated tc r t,j .",
"Simply put, we task the LM with predicting the likelihood of the text belonging to each of the target dataset's classes.",
"The output of this process is a set of values, equal in length to the number of classes in the target dataset.",
"Each of these values is added as a new feature to the target dataset.",
"An example of this process is presented in Figure 1, step #3.",
"The dataset presented in the example has only two class valueshigh and lowso FeSTE creates only two additional features that are added to the original features set.",
"It should be noted that because of the varying number of target class values in our analyzed datasets, we use the Sigmoid function and evaluate each class individ-1581 ually (which is why our values for a given entity don't add up to 1, as shown in Figure 1).",
"Once the new features F newt have been generated, we apply the Softmax function row-wise to receive a distribution over each target class value.",
"The process described above is first applied to the target dataset's training set (i.e., the half that is retained for this purpose).",
"We then train our classifier and apply it to the test set.",
"Before each tuple in the test set is classified, we use the LM to generate the same set of features as the training set.",
"In addition to the efficacy of our proposed approach, on which we elaborate in the following section, another advantage of FeSTE is the small number of features it generates.",
"Unlike previously-proposed approaches such as (Harari and Katz, 2022), which generate thousands of features, the small number of features generated by FeSTE does not result in a large computational overhead.",
"We compare FeSTE to the two leading fine-tuning methods: target dataset FT and MT-DNN FT : Target dataset FT.",
"For this baseline, we fine-tune a BERT-based architecture (Figure 2, left side) on the target dataset and the texts without reformulation nor preliminary FT (Algorithem 1, lines 1,9-11).",
"This approach is the commonly used FT strategy.",
"MT-DNN FT.",
"For this baseline, we first execute MT-DNN (Liu et al., 2019) as a preliminary FT step for the BERT-based architecture (Figure 2, left side).",
"Then, we fine-tune BERT again using Target Dataset FT (Algorithm 1, lines 1,6,9-11).",
"No reformulation is performed .",
"It is important to note that all baselines, as well as FeSTE, are evaluated using the same experimental settings.",
"The only difference between the approaches is their fine-tuning methods .",
"For full details on our baselines, see Section 2.",
"Datasets and evaluated classifiers.",
"We evaluate our approach on 17 classification datasets with a large variance in their characteristics.",
"The datasets were obtained from public repositories such as Kag-gle, UCI (Dua and Graff, 2017), OpenML (Van-schoren et al., 2013), and relevant studies (Ristoski et al., 2016).",
"The datasets and their characteristics are presented in the Appendix.",
"When applying the classifiers on each dataset (after its features have already been augmented), we used four-fold cross-validation, where we train on three folds and 1582 evaluate the fourth.",
"We use the following five classifiers to evaluate the performance of FeSTE and the baselines: Ran-domForest, MLP, SVC, KNeighbors, and Gradient-Boosting.",
"We used the implementations available in Scikit-learn, with the default hyper-parameter settings.",
"The only preprocessing we perform is feature normalization.",
"Since results are consistent for all algorithms, we present the average results .",
"Individual results are presented in the Appendix.",
"Architectures and parameter tuning.",
"All evaluated models (FeSTE and baselines) use a pre-trained BERT architecture with 12 transformer blocks, 12 attention heads, and 110 million parameters (Hugging Face Tensorflow implementation).",
"Additionally, the loss functions used by all fine-tuning approaches were either binary cross-entropy or multi-class cross-entropy, depending on the number of target classes.",
"Finally, only the embedding [CLS] vector was passed to the output layer.",
"When evaluating the performance of our approach on dataset D t = D i , we trained the BERT-based architecture on the remaining datasets, i.e., d i D where i = t .",
"Since we evaluate FeSTE on 17 datasets, our architecture was fine-tuned on 16 datasets and tested on the 17th.",
"This form of training was also performed for MT-DNN.",
"FeSTE's preliminary and target-dataset fine-tuning settings were as follows: 20 training epochs with early stopping, mini-batches of 8 samples, a warm-up period of one epoch, no dropout, and the Adam optimizer.",
"We used a learning rate of 1e-5 and 2e-5 for preliminary and target-dataset FTs, respectively.",
"We also used a linear learning rate decay.",
"For all experiments we used an Intel Xeon Gold 6140 2.3GHz Processor and 192GB RAM.",
"We conducted two sets of experiments.",
"The goal of the first is to evaluate the efficacy of our novel FT approach compared to the two leading baselines: target-dataset FT, and MT-DNN.",
"The second set of experiments is designed to determine whether FeSTE is generic by evaluating its performance when using a different entity linking approach.",
"Evaluating the efficacy of our FT method.",
"In this experiment we focus on the efficacy of the features generated from the external data source (i.e., DBpedia unstructured text).",
"We, therefore, train our classifiers only on the generated features Table 1: The AUC results obtained by our full proposed approach (Reformulated), and by versions of our approach that utilize the baseline FT methods for the fine-tuning phase.",
"and ignore the original features of the dataset.",
"This evaluation enables us to more accurately quantify the performance of each FT approach.",
"The setup of this evaluation is as follows: the FeSTE algorithm is used in all experiments, but the FT phases of our approach is either the Reformulation method presented in Section 4.2 (Algorithm 1, lines 2-8) , or one of the two baselines.",
"The results of this experiment are presented in Table 1.",
"While it is clear that FeSTE performs well with all FT approaches, our proposed reformulation approach outperforms the baselines, achieving the highest results in 10 out of 17 datasets.",
"In terms of AUC, Reformulated FT improves upon the baselines by 4.7%-6.8%.",
"Using the paired t-test, we were able to determine that Reformulated FT outperforms both baselines with p < 0 .",
"001 .",
"While Reformulated FT outperforms the baselines across all dataset sizes, it is noteworthy our approach achieves a larger relative improvement for smaller datasets.",
"Improving the performance of such datasets is more difficult because of the limited amount of data available for the FT of the model.",
"For example, the \"Zoo\" and \"Country Codes\" datasets contain only 35 and 75 records in their training set, respectively.",
"Nonetheless, Reformulated FT outperforms the other baselines by 37% and 8.9% in terms of AUCwell above the overall average.",
"These results demonstrate the effectiveness of our novel tuning approach, which leverages 1583 Table 2: An evaluation of FeSTE when it uses our Google-based entity linking approach, and when it implements the entity linking approach proposed by the recent FGSES framework.",
"Evaluating the efficacy of our FT method with the original features.",
"We now evaluate all approaches on the joint set of original and generated features.",
"The only preprocessing we apply is feature normalization (no feature selection or engi-neering).",
"We consider this setup the most realistic.",
"The results of this experiment are presented in Table 3.",
"Again, FeSTE performs well with all FT approaches and our reformulation approach outperforms the baselines, achieving the highest results in 9 out of 17 datasets.",
"In terms of AUC, Reformulated FT improves upon the baselines by 1.4%, 2.3%, and 9.2%.",
"Using the paired t-test, we were able to determine that Reformulated FT outperforms the three baselines with p < 0 .",
"001 .",
"Evaluating FeSTE using additional entity linking approaches.",
"In the previous experiment we demonstrated the efficacy of the features generated by FeSTE.",
"Our goal now is to determine whether our approach is sufficiently generic to be applied with additional forms of entity linking.",
"We, therefore, evaluate FeSTE's performance when our Google-based entity linking approach is replaced by the recently proposed FGSES approaches presented in (Harari and Katz, 2022).",
"Table 2.",
"We present the results for the two FeSTE versionsGoogle and FGSES-basedwhere the generated features are added to the original features set.",
"To provide a meaningful point of reference, we also include the results obtained by using only the original features set for each dataset.",
"It is clear that both versions of FeSTE outperform the original set of features.",
"Our approach achieved better performances in 10 out of 17 datasets, with the original features achieving top performance in only 6 datasets.",
"On average, FeSTE outperforms the results obtained by the original features by 9.2% and 5.2% for the Google-based and FGSES-based entity linking, respectively.",
"Using the paired-t statistical tests, we were once again shown that FeSTE su-perior performance is statistically significant, with p < 0 .",
"001 , compare to the original set of features.",
"Cases where the original features outperformed the augmented features set.",
"The results in Section 5.3 clearly show that FeSTE significantly outperforms the baselines in a large majority of the evaluated datasets.",
"In this section, however, we focus on datasets where our approach did not perform well compared to the original set of features.",
"As shown in Table 2, there are six datasets in which the original features set outperformed FeSTE.",
"We analyzed these datasets in an attempt to determine the causes of our approach's lower performance.",
"Our conclusion is that FeSTE is in greater danger of underperforming in cases of spe-cialized datasets, i.e., datasets that are dedicated to highly specific topics that are not of general interest.",
"In such use-cases, information extracted from a general data source like DBPedia might not be adequate.",
"An example of such a use case is the WDI dataset, whose goal is to determine the income groups of various countries.",
"Our analysis shows that the abstracts of the linked entities simply do not elaborate on the topic of income.",
"Finally, we compare the performance achieved using only FeSTE's generated features (Table 1) to the performance of the original features (Table 2).",
"Note that our generated features outperform the original features in 10 out of 17 datasetsan impressive accomplishment given that the original features are often highly informative.",
"On average for all datasets, features generated by our approach outperform the original features by 2%.",
"Moreover, in some datasets our approach significantly outperforms the original features by as much as 192%.",
"Analyzing FeSTE's Generalization Capabilities.",
"In all our previous experiments, FeSTE was finetuned on 16 datasets.",
"We now analyze our approach's ability to generalize as a function of the number of its fine-tuning datasets.",
"Figure 3 presents FeSTE's relative improvement compared to preliminary FT.",
"The results show that even four FT datasets yields an improvement (1.8%) compared to this baseline, with the gap rapidly expanding as new datasets are added.",
"This analysis highlights FeSTE's generic nature and its ability to leverage knowledge from multiple sources.",
"In this analysis we compare FeSTE both to target dataset FT and to MT-DNN (see Section 2).",
"Target dataset FT is clearly the most efficient of the three approaches, as it constitutes a part of the other approaches.",
"While FeSTE and MT-DNN were implemented using identical architectures (with one minor difference, described below), their comparison requires us to consider two aspects of their respective implementations: (1) While FeSTE employs the same architecture for all datasets, MT-DNN must train a new output layer for each new task, as well as for datasets with Figure 3: FeSTE's relative performance to preliminary FT, as a function of the number of datasets on which our approach performs its fine-tuning.",
"the same task but with a different number of classes.",
"In our experiments, for example, we trained seven output layers for MT-DNN.",
"In addition to the need to constantly re-train the model, MT-DNN incurs significant storage costs because of the need to maintain multiple architectures.",
"(2) FeSTE incurs an additional computational cost due to its reformulation phase.",
"The cost of reformulation consists of two parts: the first is the reformulation process itself, and the other is the additional FT as a results of the larger number of samples.",
"The computational cost of both tasks is O ( | C || UniqueEntities | ) .",
"Please note, however, that in tabular dataset both number of classes and the number of unique entities is relatively small.",
"To summarize, MT-DNN will likely be more efficient for a small number of tasks/datasets, each consisting of a large number of training samples.",
"FeSTE, on the other hand, will be more effective on a diverse set of datasets and tasks, possibly containing a relatively smaller number of samples.",
"We present FeSTE, a framework for generating new features for tabular datasets from unstructured sources.",
"Our approach uses a novel two-step fine-tuning process that enables it to effectively apply transformer based LM for the extraction of useful features even when the target dataset is limited in size.",
"Our FT approach significantly outperforms the existing SOTA."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"result",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result"
] |
[
"Dirichlet Multinomial Regression ( DMR ) and other supervised topic models can incorporate arbitrary document-level features to inform topic priors.",
"However, their ability to model corpora are limited by the representation and selection of these features a choice the topic modeler must make.",
"Instead, we seek models that can learn the feature representations upon which to condition topic selection.",
"We present deep Dirichlet Multinomial Regression ( dDMR ), a generative topic model that simultaneously learns document feature representations and topics.",
"We evaluate dDMR on three datasets: New York Times articles with fine-grained tags, Amazon product reviews with product images, and Reddit posts with subreddit identity.",
"dDMR learns representations that outperform DMR and LDA according to heldout perplexity and are more eective at downstream predictive tasks as the number of topics grows.",
"Additionally, human subjects judge dDMR topics as being more representative of associated document features.",
"Finally, we find that supervision leads to faster convergence as compared to an LDA baseline and that dDMR 's model fit is less sensitive to training parameters than DMR .",
"Fifteen years of research on topic models, starting from Latent Dirichlet Allocation ( LDA ) (Blei et al., 2003), have led to a variety of models for numerous data settings.",
"These models identify sets (distribu-tions) of related words that reflect semantic topics in a large corpus of text data.",
"Topic models are now routinely used in the social sciences and humanities to analyze text collections (Schmidt, 2012).",
"Document collections are often accompanied by metadata and annotations, such as a book's author, an article's topic descriptor tags, images associated with a product review, or structured patient information associated with clinical records.",
"These document-level annotations can provide additional supervision for guiding topic model learning.",
"Additional information can be integrated into topic models using either downstream or upstream models.",
"Downstream models, such as supervised LDA (Mcaulie and Blei, 2008), assume that these additional document features are generated from each document's topic distribution.",
"These models are most helpful when you desire topics that are predictive of the output, such as models for predicting the sentiment of product reviews.",
"Upstream models, such as Dirichlet Multinomial Regression ( DMR ), condition each document's topic distribution on document features, such as author (Rosen-Zvi et al., 2004), social network (McCallum et al., 2007), or document labels (Ramage et al., 2009).",
"Previous work has demonstrated that upstream models tend to outperform downstream models in terms of model fit, as well as extracting topics that are useful in prediction of related tasks (Benton et al., 2016).",
"DMR is an upstream topic model with a particularly attractive method for incorporating arbitrary document features.",
"Rather than defining specific random variables in the graphical model for each new document feature, DMR treats the document annotations as features in a log-linear model.",
"The log-linear model parameterizes the Dirichlet prior for the document's topic distribution, making the Dirichlet's hyperparameter (typically ) document-specific.",
"By making no assumptions on model structure of new random variables, DMR is flexible to incorporating dierent types of features.",
"Despite this flexibility, DMR models are typically restricted to a small number of document features.",
"Several reasons account for this restriction: 1) Many text corpora only have a small number of document-level features; 2) Model hyperparameters become less interpretable as the dimensionality grows; and 3) DMR is liable to overfit the hyperparameters when the dimensionality of document features is high.",
"In practice, applications of DMR are limited to settings with a small number of features, or where the analyst selects a few meaningful features 365 by hand.",
"A solution to this restriction is to learn low-dimensional representations of document features.",
"Neural networks have shown wide-spread success at learning generalizable representations, often obviating the need for hand designed features (Collobert and Weston, 2008).",
"A prime example is word embedding features in natural language processing, which supplant traditional lexical features (Brown et al., 1992; Mikolov et al., 2013; Pennington et al., 2014).",
"Jointly learning networks that construct feature representations along with the parameters of a standard NLP model has become a common approach.",
"For example, (Yu et al., 2015) used a tensor decomposition to jointly learn features from both word embeddings and traditional NLP features, along with the parameters of a relation extraction model.",
"Additionally, neural networks can handle a variety of data types, including text, images and general metadata features.",
"This makes them appropriate for addressing dimensionality reduction in DMR .",
"We propose deep Dirichlet Multinomial Regression ( dDMR ), a model that extends DMR by introducing a deep neural network that learns a transformation of the input metadata into features used to form the Dirichlet hyperparameter.",
"Whereas DMR parameterizes the document-topic priors as a log-linear function of document features, dDMR jointly learns a feature representation for each document along with a log-linear function that best captures the distribution over topics.",
"Since the function mapping document features to topic prior is a neural network, we can jointly optimize the topic model and the neural network parameters by gradient ascent and back-propagation.",
"We show that dDMR can use network architectures to better fit text corpora with high-dimensional document features as compared to other supervised topic models.",
"The topics learned by dDMR are judged as being more representative of document features by human subjects.",
"We also find that dDMR tends to converge in many fewer iterations than LDA , and also does not suer from tuning diculties that DMR encounters when applied to high-dimensional document features.",
"Our model builds on the generative model of DMR : an LDA-style topic model that replaces the hyperparameter (vector) of the topic distribution Dirichlet prior with a hyperparameter that is output from a log-linear model given the document features.",
"Our model deep DMR ( dDMR ) replaces this log-linear model with an arbitrary function f that maps a real-valued vector of dimension F to a representation of dimension K .",
"For simplicity we make no assumptions on the choice of this function, only w m,n z m,n m k bias m D N ~ f m K Figure 1: The graphical model for dDMR .",
"that it can be optimized to minimize a cost on its output by gradient ascent.",
"In practice, we define this function as a neural network, where the architecture of this network is informed by the type of document features, e.g. a convolutional neural network for images.",
"We use neural networks since they are expressive, generalize well to unseen data, and can be jointly trained using straightforward gradient ascent with back-propagation.",
"The generative story for dDMR is as follows: 1. Representation function f RF RK 2. Topic-word prior parameters: bias RV 3. For each document m with features m RF , generate document prior:",
"(a) e m = exp ( f ( m ))",
"(b) m Dirichlet ( e m ) 4. For each topic k , generate word distribution:",
"(a) e k = exp ( bias )",
"(b) k Dirichlet ( e k ) 5. For each token ( m, n ) , generate data:",
"(a) Topic (unobserved): z m,n m",
"(b) Word (observed): w m,n z m,n where V is the vocabulary size and K are the number of topics.",
"In practice, the document features need not be restricted to fixed-length feature vectors, e.g. f may be an RNN that maps from a sequence of characters to a fixed length vector in R k .",
"DMR is a special case of dDMR with the choice of a linear function for f .",
"Figure 1 displays the graphical model diagram for dDMR .",
"We infer the random variables of the topic model using collapsed Gibbs sampling, and estimate the model parameters using gradient ascent with back-propagation.",
"We use alternating optimization: one 366 iteration of collapsed Gibbs sampling (sample topics for each word) and then an update of the parameters of f by gradient ascent to maximize the log-likelihood of the tokens and topic assignments.",
"Given the parameters, the sampling step remains unchanged from LDA (Griths and Steyvers, 2004).",
"The network parameters are estimated via back-propagation through the network for a fixed sample.",
"Eq.",
"1 shows the gradient of the data log-likelihood, L , with respect to e m,k = exp ( f ( m ) k ) , the prior weight of topic k for document m .",
"is the digamma function (derivative of the log-gamma function), n m is the number of tokens in document m , and n m,k is the count of how many tokens topic k was assigned to in document m .",
"We explore the flexibility of our model by considering three dierent datasets that include dierent types of metadata associated with each document.",
"For each dataset, we describe the documents and metadata.",
"New York Times The New York Times Annotated Corpus (Sandhaus, 2008) contains articles with extensive metadata used for indexing by the newspaper.",
"For supervision, we used the descrip-tor tags associated with each article assigned by archivists.",
"These tags reflect the topic of an article, as well as organizations or people mentioned in the article.",
"We selected all articles published in 1998, and kept those tags that were associated with at least 3 articles in that year 2424 unique tags.",
"20 of the 200 most frequent tags were held out from training for validation purposes: { education and schools, law and legislation, advertising, bud-gets and budgeting, freedom and human rights, telephones and telecommunications, bombs and explosives, sexual harassment, reform and reor-ganization, teachers and school employees, tests and testing, futures and options trading, boxing, firearms, company reports, embargoes and economic sanctions, hospitals, states (us), bridge (card game), and auctions }.",
"Articles contained a mean of 2.1 tags, with 738 articles not containing any of these tags.",
"Tags were represented using a one-hot encoding.",
"Articles were tokenized by non-alphanumeric characters and numerals were replaced by a special token.",
"Words occurring in more than 40% of documents were removed, and only the 15,000 most frequent types were retained.",
"There were a total of 89,397 articles with an average length of 158 tokens per article.",
"Amazon Reviews The Amazon product reviews corpus(McAuley and Yang, 2016) contains reviews of products as well as images of the product.",
"We sampled 100,000 Amazon product reviews: 20,000 reviews sampled uniformly from the Musical Instruments , Patio, Lawn, & Garden , Grocery & Gourmet Food , Automotive , and Pet Supplies product categories.",
"We hypothesize that knowing information about the product's appearance will indicate which words appear in the review, especially for product images occurring in these categories.",
"66 of the reviews we sampled contained only highly infrequent tokens, and were therefore removed from our data, leaving 99,934 product reviews.",
"Articles were preprocessed identically to the New York Times data.",
"We include images as supervision by using the 4096-dimensional second fully-connected layer of the Cae convolutional neural network reference model, trained to predict ImageNet object categories 1 .",
"Using these features as supervision to dDMR is similar to fine-tuning a pre-trained CNN to predict a new set of labels.",
"Since the Cae reference model is already trained on a large corpus of images, we chose to fine-tune only the final layers so as to learn a transformation of the already learned representation.",
"Reddit We selected a sample of Reddit posts made in January 2016.",
"A standard stop list was used to remove frequent function words and we restricted the vocabulary to the 30,000 most frequent types.",
"We restricted posts made to subreddits, collections of topically-related threads, with at least ten comments in this month (26,830 subreddits), and made by users with at least five comments across these subreddits (total of 1,351,283 million users).",
"We then sampled 10,000 users uniformly at random and used all their comments as a corpus, for a total of 389,234 comments over 7,866 subreddits (token length mean: 16.3, median: 9) 2 .",
"This corpus diers from the others in two ways.",
"First, Reddit documents are very short, which is problematic for topic models that rely on detecting correlations in token use.",
"Second, the Reddit metadata that may be useful for topic modeling is necessarily high-dimensional (e.g. subreddit identity, a proxy for topical content).",
"DMR may have trouble exploiting high-dimensional supervision.",
"Model Estimation We used the same procedure for training topic models on each dataset.",
"Hyperparameter gradient updates were performed after 1 Features used directly from http://jmcauley.",
"ucsd.edu/data/amazon/ 2 The sampled comment IDs can be found here: https://github.com/abenton/deep-dmr/blob/ master/resources/reddit_comment_ids.txt 367 a burnin period of 100 Gibbs sampling iterations.",
"Hyperparameters were updated with the adaptive learning rate algorithm Adadelta (Zeiler, 2012), with a tuned base learning rate and fixed = 0 .",
"95 3 .",
"All models were trained for a maximum of 15,000 epochs, with early stopping if heldout perplexity showed no improvements after 200 epochs (evalu-ated once every 20 epochs).",
"Hyperparameters were fit on every other token in the corpus, and (held-out) log-likelihood/perplexity was calculated on the remaining tokens.",
"For the architecture of the dDMR model we used single-hidden-layer multi-layer perceptrons (MLPs), with rectified linear unit (ReLU) activations on the hidden layer, and linear activation on the output layer.",
"We sampled three architectures for each dataset, by drawing layer widths independently at random from [10 , 500] , and also included two architectures with (50 , 10) and (100 , 50) , (hidden, output) layers 4 .",
"We compare the performance of dDMR to DMR trained on the same feature set as well as LDA .",
"For the New York Times dataset, we also compare dDMR to DMR trained on features after applying principal components analysis (PCA) to reduce the dimensionality of descriptor feature supervision, sweeping over PCA projection width in { 10 , 50 , 100 , 250 , 500 , 1000 } .",
"Comparing performance of dDMR to PCA-reduced DMR tests two modeling choices.",
"First, it tests the hypothesis that explicitly learning a representation for document annotations to maximize data likelihood produces a better-fit topic model than learning this annotation representation in unsupervised fashion a two-step process.",
"It also lets us determine if a linear dimensionality reduction technique is sucient to learning a good feature representation for topic modeling, as opposed to learning a non-linear transformation of the document supervision.",
"Note that we cannot apply PCA to reduce the dimensionality for subreddit id in Reddit since it is a one-hot feature.",
"Documents in each dataset were partitioned into ten equally-sized folds.",
"Model training parameters of L1 and L2 regularization penalties on feature weights for DMR and dDMR and the base learning rate for each model class were tuned to minimize heldout perplexity on the first fold.",
"These were 3 We found this adaptive learning rate algorithm improved model fit in many fewer iterations than gradient descent with tuned step size and decay rate for all models.",
"4 We included these two very narrow architectures to ensure that some architecture learned a small feature representation, generalizing better when features are very noisy or only provide a weak signal for topic modeling.",
"We restricted ourselves to only train dDMR models with single-hidden-layer MLPs in the priors for simplicity and to avoid model fishing.",
"tuned independently for each model , with number of topics fixed to 10, and dDMR architecture fixed to narrow layer widths (50, 10).",
"Model selection was based on the macro-averaged performance on the next eight folds, and we report performance on the remaining fold.",
"We selected models separately for each evaluation metric.",
"For dDMR , model selection amounts to selecting the document prior architecture, and for DMR with PCA-reduced feature supervision, model selection involved selecting the PCA projection width.",
"Evaluation Each model was evaluated according to heldout perplexity, topic coherence by normalized pointwise mutual information (NPMI) (Lau et al., 2014), and a dataset-specific predictive task.",
"Heldout perplexity was computed by only aggregating document-topic and topic-word counts from every other token in the corpus, and evaluating perplexity on the remaining heldout tokens.",
"This corresponds to the document completion evaluation method as described in (Wallach et al., 2009), where instead of holding out the words in the second half of a document, every other word is held out.NPMI (Lau et al., 2014) computes a an automatic measure of topic quality, the sum of pointwise mutual information between pairs of m most likely words normalized by the negative log of each pair jointly occurring within a document (Eq. 2).",
"We calculated this topic quality metric on the top 20 most probable words in each topic, and averaged over the most coherent 1, 5, 10, and over all learned topics.",
"However, models were selected to only maximize average NPMI over all topics.",
"For prediction tasks, we used the sampled topic distribution associated with a document, averaged over the last 100 iterations, as features to predict a document-level label.",
"For New York Times articles we predicted 10 of the 200 most frequent descriptor tags restricting to articles with exactly one of these descriptors.",
"For Amazon, we predicted the product category a document belonged to (one of five), and for Reddit we predicted a heldout set of document subreddit IDs.",
"In the case of Reddit, these heldout subreddits were 10 out of the 100 most prevalent in our data, and were held out similar to the New York Times evaluation.",
"SVM models were fit on inferred topic distribution features and were then evaluated according to accuracy, F1-score, and area under the ROC curve.",
"The SVM slack parameter was tuned by 4-fold cross-validation on 60% of the documents, and evaluated on the remaining 40%.",
"Dredze, 2010).",
"Each subject was presented with a human-readable version of the features used for supervision.",
"For New York Times articles we showed the descriptor tags, for Amazon the product image, and for Reddit the name, title, and public description of the subreddit.",
"We showed the top twenty words for the most probable topic sampled for the document with those features, as learned by two dierent models.",
"One topic was learned by dDMR and the other was either learned by LDA or DMR .",
"The topics presented were from the 200-topic model architecture that maximized NPMI on development folds.",
"Annotators were asked to choose which word list best describes a document . . . with the displayed features.",
"The topic learned by dDMR was shued to lie on either the right or left for each Human Intelligence Task (HIT).",
"We obtained judgments on 1,000 documents for each dataset and each model evaluation pair 6,000 documents in all.",
"This task can be dicult for many of the features, which may be unclear (e.g. descriptor tags without context) or dicult to interpret (e.g. images of automotive parts).",
"We excluded the document text since we did not want subjects to evaluate topic quality based on token overlap with the actual document.",
"Model Fitting dDMR achieves lower perplexity than LDA or DMR for most combinations of number of topics and dataset (Table 1).",
"It is striking that DMR achieves higher perplexity than LDA in many of these conditions.",
"This is particularly true for the Amazon dataset, where DMR consistently lags behind LDA .",
"Supervision alone does not improve topic model fit if it is too high-dimensional for learning .",
"Perplexity is higher on the Reddit data for all models due to both a larger vocabulary size and shorter documents.",
"It is also worth noting that finding a low-dimensional linear projection of the supervision features with PCA does not improve model fit as well as dDMR .",
"dDMR benefits both from joint learning to maximize corpus log-likelihood and possibly by the flexibility of learning non-linear projection (through the hidden layer ReLU activations).",
"Another striking result is the dierence in speed of convergence between the supervised models and LDA (Figure 2).",
"Even supervision that provides a weak signal for topic modeling, such as Amazon product image features, can speed convergence over LDA .",
"In certain cases (Figure 2 left), training dDMR for 1,000 iterations results in a lower perplexity model than LDA trained for over 10,000 iterations.",
"In terms of actual run time, parallelization of model training diers between the supervised model and LDA .",
"Gradient updates necessary for learning the representation can be trivially distributed across multiple cores using optimized linear algebra libraries (e.g. BLAS), mitigating the additional cost incurred by hyperparameter updates in supervised models.",
"In contrast, the Gibbs sampling iterations can also be parallelized, but not as easily, ultimately making resampling topics the most expensive step in model training.",
"Because of this, the potential dierence in runtime for a single iteration between dDMR and LDA is small, with the former converging in far fewer iterations.",
"In our experiments, per iteration time taken by DMR or dDMR was at most twice as long as LDA across all experiments.",
"dDMR performance is also insensitive to training parameters relative to DMR .",
"While DMR requires heavy L1 and L2 regularization and a very small step size to achieve low heldout perplexity, dDMR is relatively insensitive to the penalty on regularization and benefits from a higher base learning rate (Figure 3).",
"We found that dDMR is easier to tune than DMR, requiring less exploration of the training parameters.",
"This is also corroborated by higher variance in perplexity achieved by DMR across dierent cross-validation folds (Table 1).",
"Topic Quality Results for the automatic topic quality evaluation, NPMI, are mixed across datasets.",
"In many cases, LDA and DMR score highly according to NPMI, despite achieving higher heldout perplexity than dDMR (Table 2).",
"This may not be surprising as previous work has found that perplexity does not correlate well with human judgments of topic coherence (Lau et al., 2014).",
"However, in the human evaluation, subjects find that dDMR -learned topics are more representative of document annotations than DMR (Table 3).",
"While subjects only statistically significantly favored dDMR models over LDA on the Reddit data, they favored dDMR topics over LDA across all datasets, and significantly preferred dDMR top-369 0 2 0 0 0 4 0 0 0 6 0 0 0 8 0 0 0 1 0 0 0 0 Iteration 2000 3000 4000 5000 6000 7000 H e l dou t P e r p l e x i t y New York Times LDA DMR dDMR 0 2 0 0 0 4 0 0 0 6 0 0 0 8 0 0 0 1 0 0 0 0 Iteration 2000 3000 4000 5000 6000 7000 8000 Amazon LDA DMR dDMR 0 2 0 0 0 4 0 0 0 6 0 0 0 8 0 0 0 1 0 0 0 0 1 2 0 0 0 1 4 0 0 0 1 6 0 0 0 Iteration 4000 6000 8000 10000 12000 Reddit LDA DMR dDMR Figure 2: Heldout perplexity as a function of iteration for lowest-perplexity models with Z = 100 .",
"ics over DMR on two of the three datasets.",
"This is contrary to themodel rankings according to NPMI, which suggest that DMR topics are often higher quality when it comes to human interpretability.",
"We also qualitatively explored the product image representations DMR and dDMR learned on the Amazon data.",
"To do so, we computed and normalized the prior document distribution for a sample of documents for lowest perplexity DMR and dDMR Z = 200 topic models: p ( k | m ) = e m P Zk =1 e m,k , the prior probability of sampling topic k , conditioned on the features for document m .",
"We then marginalize over topics to yield the conditional probability of a word w given document m : p ( w | m ) = P Zk =1 p ( w | k ) p ( k | m ) .",
"Table 4 contains a sample of these probable words given document supervision.",
"We find that dDMR identifies words likely to appear in a review of the product pictured.",
"However, some images lead dDMR down a garden path.",
"For example, a bottle of Turtle Food should not be associated with words for human consumables like coee and chocolate, despite the container resembling some of these products.",
"However, the image-specific document priors DMR learned are not as sensitive to the actual product image as those learned by dDMR .",
"The prior conditional probabilities p ( w | m ) for Turtle Food, Slushy Magic Cup, and Rawhide Dog Bones product images are all ranked identically by DMR .",
"Predictive Performance Finally, we consider the utility of the learned topic distributions for downstream prediction tasks, a common use of topic models.",
"Although token perplexity is a standard measure of topic model fit, it has no direct relationship with how topic models are typically used: to identify consistent themes or reduce the dimensionality of a document corpus.",
"We found that features based on topic distributions from dDMR outperform LDA and DMR on the Amazon and Reddit data when the number of topics fit is large, although they fail to outperform DMR on New York Times (Table 5).",
"Heldout perplexity is strongly correlated with predictive performance, with a Pearson correlation coecient, = 0 .",
"898 between F1-score and heldout perplexity on the Amazon data.",
"This strong correlation is likely due to the tight relationship between words used in product reviews and product category: a model that assigns high likelihood to a words in a product review corpus should also be informative of the product categories.",
"Prior work showed that upstream supervised topic models, such as DMR , learn topic distributions that are eective at downstream prediction tasks (Ben-ton et al., 2016).",
"We find that topic distributions learned by dDMR improve over DMR in certain cases, particularly as the number of topics increases.",
"With the widespread adoption of neural networks, others have sought to combine topic and neural models.",
"One line of work replaces generative, LDA based, topic models with discriminatively-trained models based on neural networks.",
"(Cao et al., 2015) model and using neural networks with softmax output layers and learn network parameters that maximize data likelihood.",
"They also learn n-gram embeddings to identify topics whose elements are not restricted to unigrams.",
"(Chen et al., 2015) similarly expresses the (smoothed) supervised LDA (Mcaulie and Blei, 2008) generative model as a neural network, and give an algorithm to discriminatively train it.",
"(Wan et al., 2012) take a similar approach to dDMR where they use a neural network to extract image representations that maximize the probability of SIFT descriptors extracted from the image.",
"However, this model is used for image classification, not for exploring a corpus of documents as is typical of topic models.",
"These models are computationally attractive in that they avoid approximating the posterior distribution of topic assignments given tokens by dropping the assump-tion that and are drawn from Dirichlet priors.",
"Model fitting is performed by back-propagation of a max-margin cost.",
"In contrast, we use neural networks to learn feature representations for documents, not as a replacement for the LDA generative story.",
"This is similar to variants of SPRITE (Paul and Dredze, 2015), where many document-level factors are combined to generate a document-topic prior.",
"In contrast to several of these models, the core of our topic model remains unchanged, meaning that dDMR is agnostic to many other extensions of LDA .",
"There has been extensive work in modeling both textual and visual topics.",
"Models such as Corr-LDA (Blei and Jordan, 2003) suppose that a text document and associated image features are generated by a shared latent topic.",
"This property is shared by other topic models over images, such as STM-TwitterLDA (Cai et al., 2015) and (Zhang et al., 2015).",
"While these models try to model images, we instead use images in the Amazon data to better estimate topic distributions.",
"Our experiment on using images to model Ama-371 Image Item dDMR Probable Words DMR Probable Words Guitar Foot Rest grill easy cover well fit mower fits job gas hose light heavy easily stand back nice works use enough pressure fit easy well works car light sound quality work guitar would 0000 cover nice looks bought install battery 00 fits Bark Collar fit battery 0000 light install car sound easy work unit amp 00 lights mic power works 000 took replace installed fit easy well works car light work quality sound would guitar 0000 cover nice bought looks install battery 00 fits Turtle Food taste coee flavor food like love cat tea product tried dog eat chocolate litter cats good best bag sugar loves taste coee dog like love flavor food cat product tea cats tried water dogs loves eat chocolate toy mix sugar Slushy Magic Cup food taste cat coee flavor love like dog tea litter cats eat tried product chocolate loves bag good best smell taste coee dog like love flavor food cat product tea cats tried water dogs loves eat chocolate toy mix good Rawhide Dog Bones food cat dog cats litter dogs loves love product smell eat box tried pet bag hair taste vet like seeds taste coee dog like love flavor food cat product tea cats tried water dogs loves eat chocolate toy mix good InstrumentCable sound amp guitar mic pedal sounds price volume quality cable great bass microphone strings music play recording 000 tone unit sound guitar fit easy well 0000 works car quality light music cover work one set nice looks 00 install unit Table 4: Top twenty words associated with each of the product images learned by dDMR vs. DMR ( Z = 200 ).",
"zon product reviews resembles work on image caption generation, yet the similarity is superficial.",
"The relationship between an image and its caption is relatively tight (Fang et al., 2015) objects in the image will likely be referenced in the caption.",
"For Amazon product reviews, visual features of the product, like color, may be explicitly mentioned in the review, but then again, they may not.",
"Also, the aim of topic models is to extract common themes of co-occurring words, and how those themes are distributed across each document.",
"The similarity between our work and captioning lies only in the fact that we extract image features from a CNN trained as an object recognizer to inform document-topic distributions.",
"We present deep Dirichlet Multinomial Regression, a supervised topic model which both learns a representation of document-level features and how to use that representation for informing a topic distribution.",
"We demonstrate the flexibility of our model on three corpora with dierent types of metadata: topic descriptor tags, images, and subreddit IDs.",
"dDMR is better able to fit text corpora with high-dimensional supervision compared to LDA or DMR .",
"Furthermore, we find that document supervision greatly reduces the number of Gibbs sampling iterations for a topic model to converge, and that the dDMR prior architecture makes it more robust to training parameters than DMR .",
"We also find that the topic distributions learned by dDMR are more predictive of external 372 New York Times Amazon Reddit Z Model F1 Accuracy AUC . . . .",
"document labels such as known topic tags or product category as the number of topics grows and that dDMR topics are judged as more representative of the document metadata by human subjects.",
"Source code for training dDMR can be found at http://www.github.com/abenton/deep-dmr ."
] | [
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"objective",
"abstain",
"result",
"result",
"abstain",
"other"
] |
[
"Neural topic models can augment or replace bag-of-words inputs with the learned representations of deep pre-trained transformer-based word prediction models.",
"One added benefit when using representations from multilingual models is that they facilitate zero-shot polylingual topic modeling.",
"However, while it has been widely observed that pre-trained embeddings should be fine-tuned to a given task, it is not immediately clear what supervision should look like for an unsupervised task such as topic modeling.",
"Thus, we propose several methods for fine-tuning encoders to improve both monolingual and zero-shot polylingual neural topic modeling.",
"We consider fine-tuning on auxiliary tasks, constructing a new topic classification task, integrating the topic classification objective directly into topic model training, and continued pre-training.",
"We find that fine-tuning encoder representations on topic classification and integrating the topic classification task directly into topic modeling improves topic quality, and that fine-tuning encoder representations on any task is the most important factor for facilitating cross-lingual transfer.",
"Topic models (Blei et al., 2003) are widely used across numerous disciplines to study large corpora (Boyd-Graber et al., 2017).",
"These data-driven models discover salient themes and semantic clusters without any supervision.",
"Monolingual topic models are language-agnostic but do not align topics across languages, as they have a fixed language-specific vocabulary which cannot be aligned cross-lingually after training.",
"Polylingual topic models (Mimno et al., 2009), however, enable users to consider multilingual corpora, and to discover and align topics across languages.",
"encode text documents for a wide variety of applications (Xia et al., 2020).",
"Furthermore, when trained on multilingual corpora, they have been able to discover cross-lingual alignments despite the lack of explicit cross-lingual links (Wu and Dredze, 2019).",
"Models such as multilingual BERT (mBERT; Devlin et al., 2018) or XLM-RoBERTa (XLM-R; Con-neau et al., 2019) can produce a representation of text in a shared subspace across multiple input languages, suitable for both monolingual and multilingual settings, including zero-shot language transfer (Pires et al., 2019).",
"Simultaneously, topic models have increasingly incorporated neural components.",
"This has included inference networks which learn representations of the input document (Miao et al., 2017; Srivastava and Sutton, 2017) that improve over using bags of words directly, as well as replacing bags of words with contextual representations.",
"In particular, the latter allows topic models to benefit from pre-training on large corpora.",
"For example, contextualized topic models (CTMs) (Bianchi et al., 2020a) use autoencoded contextual sentence representations of input documents.",
"An intriguing advantage of using encoders in topic models is their latent multilinguality.",
"Polylingual topic models (Mimno et al., 2009) are lightweight in their cross-lingual supervision to align topics across languages, but they nonetheless require some form of cross-lingual alignment.",
"While the diversity of resources and approaches for training polylingual topic models enable us to consider many language pairs and domains, there may be cases where existing resources cannot support an intended use case.",
"Can topic models become polylingual models by relying on multilingual encoders even without additional alignments?",
"Bianchi et al. (2020a) show that CTMs based on contextual sentence representations enable zero-shot cross-lingual topic transfer.",
"While promising, this line of work omits a key step in using contextualized embeddings: fine-tuning.",
"It has been widely observed that task specific fine-tuning of pretrained embeddings, even with a small amount of supervised data, can significantly improve performance on many tasks, including in zeroand few-shot settings (Howard and Ruder, 2018; Wu and Dredze, 2019).",
"However, in the case of unsupervised topic modeling, from where are we to obtain task-specific supervised training data?",
"We propose an investigation of how supervision should be bootstrapped to improve language encoders for monolingual and polylingual topic model learning.",
"We also propose a set of experiments to better understand why certain forms of supervision are effective in this unsupervised task.",
"Our contributions include the following:",
"1. We fine-tune contextualized sentence embeddings on various established auxiliary tasks, finding that many different tasks can be used to improve downstream topic quality and zero-shot topic model transfer.",
"2. We construct fine-tuning supervision for sentence embeddings through a proposed topic classification task, showing further improved topic coherence.",
"This task uses only the data on which we perform topic modeling.",
"3. We integrate a topic classification objective directly into the neural topic model architecture (without fine-tuning the embeddings) to understand whether the embeddings or the topic classification objective is responsible for performance improvements.",
"We find that this approach improves topic quality but has little effect on cross-language topic transfer.",
"We present results for both monolingual topic models and cross-lingual topic transfer from English to French, German, Portuguese, and Dutch.",
"Our code, including instructions for replicating our dataset and experimental setup, are publicly available on GitHub.",
"1 2 Background Neural Topic Models Neural topic models (NTMs) are defined by their parameterization by (deep) neural networks or incorporation of neural elements.",
"This approach has become practical largely due to advances in variational inferencespecifically, variational autoencoders (VAEs; Kingma and Welling, 2013).",
"The Neural 1 https://github.com/aaronmueller/ contextualized-topic-models Variational Document Model (Miao et al., 2016) and Gaussian Softmax Model (Miao et al., 2017) rely on amortized variational inference to approximate the posterior (Zhao et al., 2017; Krishnan et al., 2018).",
"As these methods employ Gaussian priors, they use softmax transforms to ensure non-negative samples.",
"Another approach has used ReLU transforms (Ding et al., 2018).",
"Conversely, ProdLDA (Srivastava and Sutton, 2017) uses a Dirichlet prior that produces nonnegative samples which do not need to be transformed.",
"ProdLDA uses an inference network with a VAE to map from an input bag of words to a continuous latent representation.",
"The decoder network samples from this hidden representation to form latent topic representations.",
"Bags of words are reconstructed for each latent space; these constitute the output topics.",
"Others have reported that ProdLDA is the best-performing NTM with respect to topic coherence (Miao et al., 2017).",
"Contextualized topic models (CTMs; Bianchi et al., 2020a,b) extend ProdLDA by replacing the input bag of words with sentence-BERT (SBERT; Reimers and Gurevych, 2019) embeddings.",
"If the SBERT embeddings are based on a multilingual model such as mBERT (Devlin et al., 2018) or XLM-R (Conneau et al., 2019), then the topic model becomes implicitly polylingual due to the unsupervised alignments induced between languages during pre-training.",
"This is distinct from how polylinguality is induced in approaches based on Latent Dirichlet Allocation (LDA; Blei et al., 2003), which require some form of cross-lingual alignments (Mimno et al., 2009).",
"Using embeddings in topic models is not new (Das et al., 2015; Liu et al., 2015; Li et al., 2016).",
"While a few recent approaches have leveraged word embeddings for topic modeling (Gupta et al., 2019; Dieng et al., 2020; Sia et al., 2020), none of these have investigated cross-lingual topic transfer.",
"Polylingual Topic Models Polylingual topic models require some form of cross-lingual alignments, which can come from comparable documents (Mimno et al., 2009), word alignments (Zhao and Xing, 2006), multilingual dictionaries (Jagarla-mudi and Daum, 2010), code-switched documents (Peng et al., 2014), or other distant alignments such as anchors (Yuan et al., 2018).",
"Work on incomparable documents with soft document links (Hao and Paul, 2018) still relies on dictionaries.",
"common in multilingual learning (Ruder et al., 2019b), they no longer represent the state-of-the-art.",
"More recent approaches instead tend to employ large pretrained multilingual models (Wu and Dredze, 2019) that induce unsupervised alignments between languages during pre-training.",
"Fine-tuning is known to improve an encoder's representations for a specific task when data directly related to the task is present (Howard and Ruder, 2018; Wu and Dredze, 2019).",
"Nonetheless, this requires supervised data, which is absent in unsupervised tasks like ours.",
"We consider several approaches to create fine-tuning supervision for topic modeling.",
"In the absence of supervised training sets, transfer learning can be used to learn from one supervised task (or many tasks in the case of meta-learning) for improvements on another (Ruder et al., 2019a).",
"While transfer is typically performed from a pretrained masked language model to downstream fine-tuning tasks, transfer can also be performed from one fine-tuning task to another.",
"The aim is that the auxiliary task should induce representations similar to those needed for the target task.",
"What task can serve as an effective auxiliary task for topic modeling?",
"We turn to document classification, the task of identifying the primary topic present in a document from a fixed set of (typi-cally human-identified and human-labeled) topics.",
"We may not have a document classification dataset from the same domain as the topic modeling corpus, nor a dataset which uses the same topics as those present in the corpus.",
"However, fine-tuning could teach the encoder to produce topic-level document representations, regardless of the specific topics present in the data.",
"We use MLDoc (Schwenk and Li, 2018), a multilingual news document classification dataset and fine-tune on English.",
"For comparison, we fine-tune on a natural language inference (NLI) task.",
"While it is not closely related to topic modeling, this task is a popular choice for fine-tuning both word and sentence representations.",
"This allows us to measure how much task relatedness matters for fine-tuning.",
"The auxiliary tasks use data from a different domain (and task) than the domain of interest for the topic model.",
"Can we bootstrap more direct supervision on our data?",
"We employ an LDA-based topic model to produce a form of topic supervision.",
"We first run LDA on the target corpus to generate topic distributions for each document.",
"Then, we use the inferred topic distributions as supervision by labeling each document with its most probable topic.",
"We fine-tune on this data as we did for the document classification task; the setup is identical except for how the labels are obtained.",
"The advantage of this method is that LDA topics can be created for any corpus.",
"Gururangan et al. (2020) advocated for adapting an encoder to the domain on which one will later fine-tune.",
"This is done by performing continued pre-training over in-domain data using the masked language modeling (MLM) objective.",
"2 Because continued pre-training requires no task-specific supervision, and because topic modeling implies a sizeable corpus of in-domain documents, we consider continued pre-training on the target corpus as another approach to adapting an encoder.",
"As continued pre-training can be done before fine-tuning, we also try doing both.",
"Does topic classification improve performance because fine-tuning itself induces better representations for topic modeling, or because the model has been exposed to in-domain data and/or supervision directly from the target corpus before topic modeling?",
"Continued pre-training on the target corpus may allow us to answer this question, and provides a further approach for adapting encoders to specific domains.",
"Both continued pre-training and fine-tuning provide supervision for our target task, but both create dependence on a pipeline: we must train and/or fine-tune sentence embeddings, then train a neural topic model using the modified embeddings.",
"However, we can combine the topic classification task and topic modeling into a single end-to-end procedure by modifying the inference network of the CTM.",
"Figure 1 shows our proposed archi-2 We note that there are mixed findings in the literature with respect to this method (Du et al., 2020).",
"tecture: a fully-connected layer into a softmax to predict the topic label of the document based on the learned representation of the VAE.",
"Note that we do not necessarily expect this architecture to outperform fine-tuning sentence embeddings: rather, this architecture allows us to ablate over the location of the topic classification objective, which allows us to determine whether improvements in topic quality and/or transfer are due to improved sentence embeddings induced by fine-tuning, or due to the topic classification task itself.",
"We use the negative log-likelihood loss between the topic predicted by LDA (which we treat as the true label) and the topic predicted by our model, adding this loss term (weighted by a hyperparam-eter ) to the contextualized topic model's loss function.",
"Thus, the new loss becomes LTCCTM = LELBO + LNLL where LELBO is the negated evidence lower bound objective of the CTM, and LNLL is the negative log-likelihood loss over topic classifications.",
"We refer to this as the topic classification contextualized topic modeling (TCCTM) loss, denoted LTCCTM .",
"TCCTM modifies the topic model, but not the embeddings.",
"This approach is therefore orthogonal to fine-tuning, and the two approaches can be combined; thus, we test the performance of TCCTM with and without fine-tuning.",
"Data We begin by creating a multilingual dataset for topic modeling based on aligned Wikipedia articles extracted from Wikipedia Comparable Corpora 3 in English, French, German, Portuguese, and Dutch.",
"We use 100,000 English articles for training the topic models and evaluating monolingual topic coherence.",
"We also extract 100,000 aligned articles for each language to build comparable vocabularies for preprocessing the test data.",
"4 For each language, we use a vocabulary of the 5,000 most frequent word types (case-insensitive), excluding stopwords25,000 types total.",
"We use the English training articles to evaluate monolingual topic quality, and hold out for cross-lingual evaluation a set of 10,000 aligned test articles per-language.",
"For out-of-domain topic classification, we use a dataset of COVID academic articles (in English).",
"5 To facilitate comparison with the Wikipedia dataset, we extract 100,000 articles and use a vocabulary size of 5,000.",
"To obtain topic labels for each English document, we run LDA for 400 iterations and choose the number of topics by performing a search in { 10 , 20 , . . . , 250 } , optimizing over NPMI coherence.",
"We find that { 100 , 110 , 120 } is best and use = 100 here.",
"We label each document with its most probable topic by counting the number of tokens in the document in the top10 token list for each topic, then taking the argmax.",
"We perform the same procedure on the out-of-domain COVID dataset to generate out-of-domain topic classification supervision, finding that = 80 is best on this dataset with respect to NPMI coherence.",
"For the document classification task, we use MLDoc (Schwenk and Li, 2018), a multilingual news dataset; we fine-tune on the English data.",
"For NLI, we follow Reimers and Gurevych (2020) in using a mixture of SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), both of which only contain English data.",
"Training Details We consider embeddings produced by both mBERT (Devlin et al., 2018) and XLM-R (Conneau et al., 2019).",
"For fine-tuning, we append to these models a fully-connected layer fol-3 https://linguatools.org/tools/corpora/ wikipedia-comparable-corpora/ 4 We release article IDs and splits with our code.",
"5 https://www.kaggle.com/ allen-institute-for-ai/CORD-19-research-challenge lowed by a softmax, using a negative log-likelihood loss for topic/document classification.",
"We perform a search over the number of epochs in the range [1 , 8] , optimizing over downstream NPMI coherence during topic modeling.",
"We follow the procedure of Reimers and Gurevych (2019) to create sentence embedding models from contextual word representations: we mean-pool word embeddings for two sentences simultaneously, feeding these as inputs to a softmax classifier.",
"We use batch size 16 ; other hyperparameters are kept from Reimers and Gurevych (2019).",
"For NLI fine-tuning, we follow the procedure and use the hyperparameters of Reimers and Gurevych (2020): we first fine-tune monolingual BERT on SNLI and MultiNLI; the embeddings are pooled during fine-tuning to create a sentence embedding model.",
"We then perform a knowledge distillation step from the monolingual SBERT model to XLM-R or mBERT.",
"Continued pre-training is performed by training with the MLM objective on English Wikipedia.",
"We run for 1 epoch, using gradient accumulation to achieve an effective batch size of 256 .",
"We can pool the embeddings from the resulting model directly or perform fine-tuning after continued pre-training.",
"When topic modeling, we run the CTM for 60 epochs, using an initial learning rate of 2 10 3 , dropout 0 .",
"2 , and batch size 64 .",
"The VAE consists of two hidden layers of dimensionality 100 (as in Srivastava and Sutton 2017 and Bianchi et al. 2020b).",
"The ProdLDA baseline is run using the same hyperparameters and the same architecture as a CTM, differing only in using bags of words as input instead of SBERT representations.",
"For the LDA baseline, we employ MalletLDA (McCallum, 2002) as implemented in the gensim wrapper, running for 400 iterations on the Wikipedia data using = 100 .",
"We fine-tune in the TCCTM objective in { 0 .",
"1 , 0 .",
"2 , . . . , 3 .",
"0 } , finding that = 1 .",
"0 yields the best downstream topic coherence for the target Wikipedia data.",
"We try TCCTM based on non-fine-tuned sentence embeddings, as well as models fine-tuned on document classification or NLI.",
"We do not perform this approach on a model fine-tuned on in-domain topic classification to avoid overfit-ting and confounds from performing the same task in multiple stages of the model.",
"(NPMI) coherence on the English Wikipedia dataset.",
"NPMI is used because it is comparable across architectures and objectives, and because it tends to correlate better with human judgments of topic quality (Lau et al., 2014).",
"While perplexity has been used to evaluate LDA (Blei et al., 2003) as well as neural topic models in the past (Miao et al., 2017), it is not comparable across different objective functions when using neural approaches (as it depends on the test loss) and tends to correlate poorly with human judgments (Chang et al., 2009).",
"Topic significance ranking (AlSumait et al., 2009) has been used to measure and rank topics by semantic importance/relevance, though we care more about overall topic quality than ranking topics.",
"As the contextualized topic model is based on a multilingual encoder, it is able to generate i (a topic distribution over document i ) given input embeddings from a document h i in any language it has seen.",
"To evaluate multilingual generalization, we measure the proportion of aligned test documents for which the most probable English topic i English is the same as the most probable target-language topic i Target (the Match metric).",
"We also measure the KL divergence between topic distributions DKL ( i English (cid:107) i Target ) , taking the mean over all aligned documents (the KL metric).",
"We construct a random baseline by randomly shuffling the English articles and then computing both metrics against the newly unaligned foreign articles.",
"We compare topic coherences on the 100,000 English Wikipedia articles for LDA and ProdLDA baselines, a CTM with no fine-tuning, a CTM with continued pre-training (CPT), and the integrated TCCTM model.",
"We also compare the effect of fine-tuning (FT) on the NLI task, on a document classification task (MLDoc), and on labels from LDA for the out-of-domain COVID dataset and for the in-domain Wikipedia data (Table 1).",
"The baseline LDA and ProdLDA models both achieve the same coherence score of 0.129.",
"Compared to these baselines, models based on contextualized representations always achieve higher topic coherence.",
"We find that when using a base CTM without modifying its objective, fine-tuning on any auxiliary task improves topic quality for CTMs .",
"Specifically, fine-tuning on in-domain topic clas-Model Fine-tuning NPMI Neural model Fine-tuned embeddings Topic classification In-domain data LDA 0.129 ProdLDA 0.129 (cid:88) CTM XLM-R mBERT None 0.144 0.144 (cid:88) NLI 0.153 0.152 (cid:88) (cid:88) Doc.",
"sification data is best for monolingual topic modeling, followed closely by document classification on MLDoc.",
"Topic classification on the out-of-domain COVID data results in the same topic coherence scores as document classification, indicating that topic classification is an effective method for bootstrapping supervision, even compared to established document classification datasets with human-labeled documents.",
"The further gains in topic coherence when fine-tuning on Wikipedia topic classification data may be due to the data being in-domain, rather than due to the topic classification task.",
"Fine-tuning on NLI yields less coherent topics than document or topic classification.",
"For any given approach, XLM-R always outperforms mBERT.",
"We find that CPT without fine-tuning performs worse than simply fine-tuning, but better than a CTM using embeddings which are not fine-tuned.",
"Fine-tuning after performing continued pretraining (CPT+FT) slightly improves NPMI over CPT alone, but still results in less coherent topics than if we simply fine-tune on the in-domain Wiki data or the out-of-domain COVID data.",
"Thus, the MLM objective seems to induce representations not conducive to topic modeling.",
"Indeed, fine-tuning on any task is better than continuing to train the encoder on the exact data later used for the CTM.",
"This means that we may not attribute the effectiveness of topic classification solely to the model's seeing in-domain data before topic modeling; rather, some property of fine-tuning itself is better at inducing representations conducive to topic modeling.",
"Conversely, the TCCTM approach using non-fine-tuned embeddings produces more coherent topics than all fine-tuning tasks except topic classification on in-domain Wikipedia data.",
"This means that the topic classification task itself is also responsible for the high topic coherences observed , and not just the fine-tuned sentence embeddings.",
"Nonetheless, topic classification is more effective when used to fine-tune sentence embeddings, rather than as a part of the CTM objectivefurther cementing the importance of embeddings to topic quality.",
"There seems to be interferenceor perhaps overfittingwhen combining TCCTM with embeddings fine-tuned on other tasks.",
"Indeed, fine-tuning on document classification and NLI results in slightly less coherent topics than simply using TCCTM on non-fine-tuned sentence embeddings.",
"Perhaps this could be mitigated with task-specific French German Portuguese Dutch MEAN Model Match KL Match KL Match KL Match KL Match KL CTM (No FT) 20.11 0.71 41.68 0.46 24.85 0.67 46.74 0.40 33.30 0.56 CTM+FT (NLI) 53.68 0.39 56.29 0.33 54.38 0.36 56.98 0.31 55.33 0.35 CTM+FT (DC) 35.53 0.61 42.09 0.49 38.12 0.53 49.70 0.40 41.36 0.51 CTM+FT (TC, COVID) 41.09 0.54 46.39 0.47 43.56 0.48 51.11 0.40 45.54 0.47 CTM+FT (TC, Wiki) 45.02 0.50 51.11 0.40 42.58 0.49 50.68 0.40 47.17 0.44 CPT (No FT) 23.62 0.68 40.75 0.45 22.89 0.65 45.13 0.42 33.10 0.55 CPT+FT (NLI) 43.43 0.45 48.09 0.38 43.04 0.46 49.53 0.38 46.02 0.42 CPT+FT (TC, COVID) 41.70 0.53 43.67 0.44 39.91 0.60 47.44 0.43 43.18 0.50 CPT+FT (TC, Wiki) 47.02 0.45 51.53 0.36 45.83 0.44 52.54 0.34 49.23 0.40 TCCTM (No FT) 18.81 0.71 41.18 0.46 19.21 0.72 45.49 0.42 31.17 0.58 TCCTM+FT (NLI) 53.30 0.38 55.52 0.33 53.75 0.37 56.40 0.30 54.74 0.34 TCCTM+FT (DC) 41.83 0.51 48.72 0.42 38.80 0.53 49.73 0.39 44.77 0.46 Random 0.92 1.48 1.22 1.39 1.24 1.48 1.09 1.44 1.12 1.44 Table 2: Percentage of held-out documents assigned the same topic in English and other languages (Match, higher is better) and the mean KL divergence between the English and target language topic distributions per-document (KL, lower is better).",
"Table 2 presents results for zero-shot cross-lingual topic transfer.",
"All models, including without fine-tuning, are far better than random chance on both metrics.",
"This indicates that multilingual encoders contain enough cross-lingual alignment asis to induce cross-lingual topic alignment.",
"Nonetheless, we also find that fine-tuning the embeddings on any task produces better multilingual topic alignments than not fine-tuning ; NLI consistently shows the best cross-lingual transfer.",
"Document classification is generally a worse fine-tuning task than topic classification for cross-lingual transfer, despite achieving similar monolingual performance.",
"When performing continued pre-training without fine-tuning, we find that results tend to be comparable to the CTM without fine-tuning, though slightly better.",
"When performing both continued pre-training and fine-tuning, we achieve only slightly higher results compared to simply fine-tuning; thus, in both monolingual and multilingual settings, the fine-tuning task is more important for topic transfer than seeing in-domain data or having a better in-domain language model.",
"tive effect in monolingual contexts; however, it consistently performs effective cross-lingual transfer when paired with sentence embeddings fine-tuned on document classification.",
"When paired with embeddings fine-tuned on NLI, TCCTM achieves almost the same scores as the CTM model using the same embeddings.",
"Thus, the fine-tuning task 0 10 20 30 40 50 60 70 80 90 French Topic 0 10 20 30 40 50 60 70 80 90 E n g li s h T o p i c 0.0 0.2 0.4 0.6 0.8 1.0 0 10 20 30 40 50 60 70 80 90 French Topic 0 10 20 30 40 50 60 70 80 90 E n g li s h T o p i c 0.0 0.2 0.4 0.6 0.8 1.0 Figure 3: Row-normalized confusion matrices comparing topic assignments from the contextualized topic model in English and French on aligned documents, both without fine-tuned sentence embeddings (left) and with embeddings fine-tuned on NLI (right).",
"Correlation with Existing Benchmarks To further investigate the role of fine-tuning in inducing better transfer, we employ the Semantic Textual Similarity (STS) benchmark (Cer et al., 2017); 6 this has been used to evaluate the quality of sentence embeddings more broadly in previous works (Reimers and Gurevych, 2019, 2020).",
"Performance is evaluated by measuring the Spearman correlation between the cosine similarity of sentence representations and gold labels for the sentence similarity tasks contained in STS.",
"Here, we try correlating this metric with measures of topic quality, as well as with topic transfer (Figure 2).",
"While STS does not correlate strongly with NPMI ( = 0 . 46 , P > 0 . 1 ), it correlates very well with both Match and KL ( = 0 . 93 and = 0 . 96 , respectively, and P < . 005 for both).",
"This implies that well-tuned sentence embeddings are not necessarily the most important factor in producing good topics, but they are quite important for crosslingual transfer .",
"However, cross-lingual transfer performance saturates quickly at STS Spearman co-efficients over 55, such that an increase of over 50% in STS results in only an 8% increase in Match and 4% reduction in KL.",
"Thus, one could perhaps trade off STS for better cross-lingual transfer at scores above this threshold.",
"We leave this to future work.",
"6 This consists of combined English STS data from Se-mEval shared tasks from 20122017.",
"The exact data we use may be downloaded here: https://sbert.net/datasets/ stsbenchmark.tsv.gz We find further evidence for STS' weak correlation with NPMI and STS' strong correlation with Match and KL when observing the performance of TCCTM: it does not modify the sentence embeddings, so one would expect that TCCTM would perform similarly to the regular CTM if sentence embeddings are of primary importance.",
"This is not the case for NPMI, as TCCTM seems to greatly improve topic quality when using a non-fine-tuned model and have a slightly negative effect when using a fine-tuned model.",
"However, cross-lingual TCCTM performance is consistently comparable to CTM performance with respect to Match and KL when the fine-tuning datasets are the same.",
"Why is fine-tuning important for cross-lingual transfer?",
"Figure 3 displays confusion matrices comparing the topics obtained in English versus those obtained in French for the same documents using both the CTM (not fine-tuned) and CTM+FT (NLI) model.",
"We present confusion matrices for all target languages in Appendix A. When the embeddings are not fine-tuned, we see that a typical pattern of error is the CTM assigning foreign documents topics from a small subset of the 100 available topics, regardless of the actual content of the document; this is indicated by the frequency of vertical striping in the confusion matrix.",
"After fine-tuning, errors look more evenly distributed across topics and less frequent in general, though there is still slight striping at topic 81.",
"This striping also occurs after fine-tuning at topic 81 for Portuguese and (to a smaller extent) Dutch, but not German.",
"Thus, Lang Sample Document Topic en Niccol Zucchi was an Italian Jesuit, astronomer, and physicist... 12: star, constellation, sky, cluster, galaxy fr Niccol Zucchi...tait un prtre jsuite italien, astronome et physicien... 12: star, constellation, sky, cluster, galaxy pt Niccol Zucchi foi um jesuta, astrnomo e fsico italiano... 12: star, constellation, sky, cluster, galaxy de Niccol Zucchius, auch Niccolo Zucchi, war ein italienischer Astronom und Physiker... 12: star, constellation, sky, cluster, galaxy nl Niccol Zucchi was een Italiaans astronoom... 12: star, constellation, sky, cluster, galaxy en Chambilly is a commune in the Sane-et-Loire department... 81: relocated, traveling, transformed, completion, gaining fr Chambilly est une commune franaise, situe dans le dpartement de Sane-et-Loire... 51: tributary, border, flows, passes, alps pt Chambilly uma comuna francesa...no departamento de Sane-et-Loire... 89: dubbed, estimate, forty, moment, onwards de Chambilly ist eine franzsische Gemeinde...im Dpartement Sane-et-Loire... 51: tributary, border, flows, passes, alps nl Chambilly is een gemeente in het Franse departement Sane-et-Loire... 21: quebec, nord, maritime, seine, calais Table 3: Sample documents for the topics with highest (top) and lowest (bottom) cross-lingual precision.",
"CTMs trained on monolingual data are prone to assigning foreign documents topics from a small subset of the available topics, but this can be heavily mitigated with well-tuned sentence embeddings .",
"What kinds of topics have high cross-lingual precision, and which have lower precision?",
"We calculate the mean precision per-topic of cross-lingual topic transfer from English to all other target languages using the CTM+FT (NLI) model, 7 finding that topics which are more qualitatively coherent tend to have higher cross-lingual precision.",
"Topics that are less semantically clear or which compete with similar topics tend to exhibit more crosslingual variance.",
"Examples of the highestand lowest-precision topics may be found in Table",
"3. We sometimes observe competing topics which semantically overlap.",
"In our dataset, this typically occurs for short articles which describe small towns and obscure places, such as in the bottom example of Table 3; topics 51 and 21 appear most frequently for these articles.",
"Many instances of topics 81 and 89 (the lowest-precision topics in our dataset) also occur in short articles about small towns or obscure places; we hypothesize that this is often due to the probability mass of more relevant topics being split, thus allowing these topics which contain generally higher-probability tokens to be assigned.",
"In monolingual settings, the best topics are achieved through contextualized topic modeling using sentence embeddings fine-tuned on the topic classification task.",
"This holds whether the topic classification objective is used during fine-tuning or integrated into the CTM itself.",
"However, in zero-shot polylingual settings, it is far more important to 7 Recall (and therefore F 1 ) is dominated by topics which are consistently incorrectly assigned to foreign documents the same topics which cause vertical striping in Figure",
"fine-tune sentence embeddings on any task than to have seen in-domain data during pre-training or to use the topic classification objective.",
"As the topic classification task can be performed on any corpus which has enough documents for topic modeling, supervision for this task is always available; this supervision bootstrapping can therefore serve as a simple way to increase topic quality and transfer for contextualized topic models in the absence of any other data, regardless of domain.",
"There exists a weak but positive correlation between sentence embedding quality (as measured by the STS benchmark) and topic coherence, but a strong correlation between sentence embedding quality and cross-lingual topic transfer performance.",
"Nonetheless, these preliminary findings also suggest that transfer saturates quickly at quite low STS scores and that STS does not correlate well with topic quality, so we do not necessarily recommend directly optimizing over STS for neural topic modeling.",
"Future work should investigate fine-tuning on multilingual datasets, as well as explicitly inducing cross-lingual topic alignments.",
"Because the CTM currently generates topics in one language and then transfers into other languages, it would also be ben-eficial to investigate methods of generating topics in parallel across languages during topic modeling.",
"This material is based on work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1746891.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.",
"We wish to thank Shuoyang Ding, Chu-Cheng Lin, and the reviewers for their helpful feedback on earlier drafts of this work."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"result",
"objective",
"method",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"We introduce an argumentation annotation approach to model the structure of argumentative discourse in student-written business model pitches.",
"Additionally, the annotation scheme captures a series of persuasiveness scores such as the specificity, strength, evidence, and relevance of the pitch and the individual components.",
"Based on this scheme, we annotated a corpus of 200 business model pitches in German.",
"Moreover, we trained predictive models to detect argumentative discourse structures and embedded them in an adaptive writing support system for students that provides them with individual argumentation feedback independent of an instructor, time, and location.",
"We evaluated our tool in a real-world writing exercise and found promising results for the measured self-efficacy and perceived ease-of-use.",
"Finally, we present our freely available corpus of persuasive business model pitches with 3,207 annotated sentences in German language and our annotation guidelines.",
"Argumentation is an omnipresent rudiment of daily communication and thinking (Kuhn, 1992; Toulmin, 1984).",
"The ability to form convincing arguments is not only fundamental to persuading an audience of novel ideas but also plays a major role in strategic decision-making, negotiation, and constructive civil discourse (Walton et al., 2008; Scheuer et al., 2010).",
"However, humans often struggle to develop argumentation skills owing to a lack of individual and instant feedback in their learning process (Dillenbourg et al., 2009; Hattie and Timperley, 2007), since providing feedback on the individual argumentation skills of learners is time-consuming and not scalable if conducted manually by educators (OECD, 2018; Wambsganss et al., 2020b).",
"Furthermore, novel distance learning scenarios such as massive open online courses (MOOCs) (Seaman et al., 2018) come with addi-Figure 1: Argumentation annotation scheme.",
"First, a text sentence is classified into an argumentative component ( claim, premise, major claim, none-argumentative ).",
"Second, the same annotator captures the basic discourse structure between the components.",
"Third, the components and the pitch are scored for the persuasiveness scores (specificity, evidence, strength, relevance) based on our annotation guideline on a 1-to-5 scale.",
"One possible solution to this dilemma are adaptive argumentation support systems that enable individuals to train their argumentation skills, e.g., in collaborative learning settings (Dillenbourg et al., 2009) or by providing tailored argumentation feedback independent of an instructor, time and place (Wambsganss et al., 2020b, 2021).",
"Such tools are increasingly utilizing recent developments in computational linguistics in the form of computer-assisted writing (Ros et al., 2008) to provide tailored feedback about textual documents (Song et al., 2014; Stab and Gurevych, 2014a).",
"In this context, Argumentation Mining (AM) research is a crucial field for the development of support systems that identify arguments in unstructured texts (Lippi and Torroni, 2015; Lawrence and Reed, 2019).",
"However, corpora that are applicable for the design and development of adaptive argumentative writing systems in pedagogical scenarios are rather scarce.",
"To the best of our knowledge, there are only two collections from the educational domain which are based on student-written texts and annotated for argumentative discourse structures (Stab and Gurevych, 2017a; Wambsganss et al., 2020c).",
"We propose a novel argumentation annotation scheme for persuasive student-written business model pitches.",
"Therefore, we introduce a corpus of 200 student-written persuasive pitches with 3,207 sentences that are annotated for argument components, their relations, and persuasiveness scores to judge the argumentation quality of the single arguments.",
"We trained different models and embedded them as feedback algorithms in a novel writing support tool that provides students with individual argumentation feedback and recommendations in a persuasive writing exercise.",
"The design of our tool is based on the self-evaluation mechanism for students to improve self-efficacy and argumentation learning outcomes during a learning process (i.e., self-regulated learning theory Bandura (1991); Zimmerman and Schunk (2001)).",
"We asked students to conduct a persuasive writing exercise and provided them with argumentation self-evaluation.",
"The measured argumentation (Toulmin, 2003), the perceived self-efficacy (Bandura, 1991), and the perceived usefulness (Venkatesh and Bala, 2008) in an evaluation provided promising results for using our approach in different large-scale learning scenarios to offer quality education with individual feedback independent of an instructor, time, and location.",
"Hence, we contribute to research by (1) deriving an annotation scheme for a new data domain for AM based on argumentation theory and previous work on annotation schemes for student-written texts (Stab and Gurevych, 2017a; Carlile et al., 2018; Wambsganss et al., 2020c), (2) presenting an annotation study based on 50 persuasive business model pitches and two annotators to show that the annotation of student-written pitches is reliably possible, (3) offering our final and freely available corpus of 200 student business pitches consisting of 3,207 annotated sentences collected from a lecture about digital business models in German, and (4) embedding and evaluating our annotation approach as predictive models in a writing support system in a real-world writing exercise.",
"We, therefore, hope to encourage future research on argumentation discourse and persuasiveness levels in student-written texts and on writing support systems for argumentation.",
"Argumentation Mining AM aims to identify argument components in the form of claims and premises, along with support and attack relationships that model the discourse structure of arguments.",
"In recent years, this has been done for several domains, including legal texts (Mochales Palau and Ieven, 2009), newswire articles (Deng and Wiebe, 2015; Sardianos et al., 2015), or user-generated content (Wachsmuth et al., 2014; Habernal and Gurevych, 2015).",
"The objective is to automatically identify arguments in unstructured textual documents based on the classification of argumentative and non-argumentative text units and the extraction of argument components and their relations.",
"Recently, researchers have built increasing interest in adaptive argumentation support tools based on AM (Song et al., 2014; Stab and Gurevych, 2014a,b; Wambsganss et al., 2020b), offering argumentative writing support to students by providing individual feedback about the argumentation discourse.",
"However, utilizing this technology in a pedagogical scenario for educational purposes lacks a wider-scale adoption (Stab and Gurevych, 2017b; Lawrence and Reed, 2019; Ros et al., 2008), as argumentation-annotated corpora with student-written texts are rather rare (Lawrence and Reed, 2019; Wambsganss et al., 2020c).",
"Annotation Schemes and Corpora Since the availability of annotated data sets is crucial for designing, training, and evaluating AM algorithms, several research groups have dealt with creating labeled corpora, such as the Araucaria corpus (Reed et al., 2008), the European Court of Human Rights (ECHR) corpus (Mochales and Moens, 2008), or the Debatepedia corpus (Cabrio and Villata, 2012).",
"Creating gold standards and test collections requires a formal representation model as well as corresponding annotation guidelines.",
"While a number of well-defined models exist in the field of AM (e.g., Freeman (2001); Walton (1996); Wambsganss et al. (2020a), there is no general argumentation annotation scheme across all domains and genres of texts.",
"Instead, the proposed representations differ in granularity, expression power, and categorization (Lawrence and Reed, 2019).",
"There-8749 fore, conducting annotation studies with several annotators when introducing new annotation schemes is crucial for the quality of argumentation corpora.",
"Annotated Corpora for Education With the exception of the corpora proposed in Stab and Gurevych (2014a, 2017a) and Wambsganss et al. (2020c), prior argument-annotated data sets are not easily applicable for the development of argumentative writing support systems for students in a real-world case.",
"The reasons are twofold.",
"First, the texts are not extracted from a pedagogical scenario in which the annotation allows for training a model that provides students with individual and reliable feedback on the texts.",
"Second, the data is often not annotated at the level of discourse (Stab and Gurevych, 2017a; Lawrence and Reed, 2019), which is necessary, for example, to give students feedback on insufficiently supported claims.",
"Stab and Gurevych (2014a) identified the lack of linguistic corpora in the domain of student-written texts for designing and developing argumentative writing support systems by leveraging AM (Stab and Gurevych, 2014a).",
"Therefore, they introduced an annotation scheme for annotating argument components and their relationships in persuasive English student essays.",
"Afterwards, several researchers built on their corpus, including, e.g., Carlile et al. (2018), who use a subset of the essays and annotate their persuasiveness, and Ke et al. (2018), who train a persuasiveness scoring model on them.",
"Recently, Wambsganss et al. (2020c) published an argumentation annotation scheme to capture the discourse level of student-written peer reviews.",
"This corpus was successfully embedded in a writing support tool to provide students with adaptive argumentation tutoring (Wambsganss et al., 2020b).",
"Building on the potential of argumentation-annotated corpora for adaptive skill learning, we propose to further transfer argumentation corpora to other educational domains and student-written texts.",
"Our corpus consists of 200 student-written business model pitches in which students present an entrepreneurial idea of a digital business model.",
"Business model pitches also called entrepreneurial or business pitches (Sabaj et al., 2020) are described as a brief description of the value proposition of an idea or company (Daly and Davy, 2016) with the objective to convince a group of stakeholders of the novelty of an idea.",
"The formulation of persuasive business model pitches is increasingly used in modern pedagogical scenarios, e.g., to train the entrepreneurship mindset or agile work (i.e., OECD (2019)).",
"Students are asked to write a concise but persuasive summary of the what, why, and how of their (business) idea in order to convince a peer.",
"This pedagogical scenario is domain-independent, easy to implement in different settings (e.g., in MOOCs), and can be utilized to train skills such as logical argumentation.",
"In fact, in their study about entrepreneurial business pitches, Fernndez-Vzquez and lvarez-Delgado (2019) found out that the lack of rational arguments determines the failure of the entrepreneur's efforts to be persuasive, regardless of the emotional appeals that are introduced into the pitch .",
"Therefore, Fernndez-Vzquez and lvarez-Delgado (2019) calls for more emphasis on logical argumentation chains in business pitches.",
"However, linguistic research on business model pitches is a growing but still small field (Ducasse, 2020).",
"Therefore, it is not surprising that no pitch corpus exists that is annotated for argumentation discourse structures based on an appropriate argumentation scheme (Lawrence and Reed, 2019).",
"We propose a new annotation scheme to model argument components, their relations as well as argumentation quality labels that reflect the argumentative discourse structures in persuasive business model pitches.",
"We based our annotation scheme on the model of Toulmin (1984) and the studies of Stab and Gurevych (2014a, 2017a); Wambsganss et al. (2020c); Carlile et al. (2018); Ke et al. (2019).",
"Following a 4-step methodology to build a robust corpus, we (1) searched literature and scientific theory on argumentation discourse structures and argumentation models in different text domains; (2) randomly sampled 50 student-written business pitches and, based on our findings from step 1, developed a set of annotation guidelines consisting of rules and limitations on how to annotate argumentation discourse structures; (3) applied, evaluated and improved our guidelines with three native speakers in five consecutive workshops to resolve annotation ambiguities; and (4) applied the final annotation scheme based on our 26-page guideline to a corpus of 200 student-written business pitches with 3,207 annotated sentences.",
"1 1 The annotation guidelines as well as the entire corpus can be accessed at https://github.com/thiemowa/ -argumentative_business_model_pitches .",
"We gathered a corpus of 200 student-written business model pitches in German.",
"The data was collected in a mandatory business model innovation lecture at a Western European university.",
"In this lecture, around 200 students develop and present a new business model.",
"Students are asked to write a concise but persuasive pitch about the what, why, and how of their novel business idea in order to convince peer students.",
"Afterwards, the students receive peer feedback from three fellow students on the persuasiveness of their business model pitch.",
"The business pitches were collected from 2019 to 2020 according to the ethical guidelines of our university and with approval from the students to utilize the writings for scientific purposes.",
"Our objective is to model the argumentation discourse structures and the persuasiveness of student-written business model pitches by capturing argument components, their relations, and persuasiveness scores.",
"The majority of the pitches in our corpus follow the same structure.",
"They describe a novel business model and then provide convincing statements backed by examples, statistics, user-centered descriptions, quotes, or intuitions.",
"However, we found that the specificity, the strength, the relevance, and the evidence level vary between the different components.",
"Thus, we captured them with qualitative labels on a 1-to-5 scale.",
"Our basic annotation scheme is illustrated in Figure 2.",
"argumentation theory which provide detailed definitions of argument components (e.g., Toulmin (1984); Stab and Gurevych (2017a)).",
"These theories generally agree that a basic argument consists of multiple components and that it includes a claim that is supported or attacked by at least one premise .",
"Also in student-written business model pitches, we found that a claim is the central component of an argument.",
"It is a controversial statement (e.g., claiming a strength or novelty of a business model) that is either true or false and should not be accepted by the stakeholder without additional support or backing.",
"In business model pitches, authors usually start or conclude with an overall idea and topic of the business model.",
"Similar to the persuasive student essays corpus by Stab and Gurevych (2017a), we modeled this statement as a major claim .",
"Usually, the major claim is present in the introduction or conclusion of the pitch or in both.",
"In the introduction, it often represents a general claim of the novelty of the business idea, whereas in the conclusion the major claim often summarizes or repeats the argumentation according to the author's business model idea.",
"The major claim is then backed up by several other claims to manifest its validity.",
"The premise supports the validity of the claim (e.g., by providing a statistic, analogy, user-centered example, or a value-based intuition).",
"It is a reason given by the author to persuade the reader of their claim .",
"Figure 3 illustrates a fully annotated example.",
"2 Argumentative Relations The basic discourse structure in our data set of student-written business model pitches consists of one major claim and several claims, each independently supported by one or more premises.",
"Since in our domain the writers aim to pitch their business idea as convincingly as possible, the texts generally do not include attack relations between the components, as is the case, for example, in student-written peer reviews (Wambsganss et al., 2020c).",
"Therefore, we modeled and annotated only support relationships.",
"Nevertheless, more complicated constellations of major claims, claims, and premises are possible.",
"For example, a claim may be supported by several different premises or by a chain of premises in which each premise is in turn supported by an-other premise.",
"In the same way, a claim can be supported by one premise.",
"However, the simplest form consists of a major claim, backed up by a 2 Since the original texts are written in German, we translated the examples into English for the sake of this paper.",
"claim supported by a single premise.",
"To provide an overview, we illustrated three basic examples of annotated relations in our corpus in the appendix.",
"Persuasiveness Scores To capture the differences in the persuasiveness levels of the components (i.e., the strength of a premise or the specificity of a major claim), we followed the approach of Carlile et al. (2018) and Ke et al. (2018) and defined five persuasiveness scores for the argumentative components (see Figure 1).",
"Our objective was to capture the differences of a very persuasive major claim vs. a not very persuasive major claim accurately to provide students with more detailed writing support about why their argumentation is (un)persuasive.",
"For the major claim , we found two attributes that differ in business model pitches: specificity and evidence .",
"The specificity determines how detailed and specific the statement about the business model is, whereas the evidence ranks how well the major claim is backed up by supporting components.",
"We found significant differences in both attributes throughout our corpus, which we aim to model with those scores.",
"Tables 1 and 2 provide a more nuanced definition for the specificity and evidence in a 1-to-5 scale.",
"For claims , we defined evidence as a qualitative variable.",
"Some claims seem to be strong in their statement.",
"However, they do not contribute to the strength and persuasiveness of the overall business model.",
"Thus, we specified evidence for a claim as the level of how well the claim supports the business model and/or the major claim.",
"Most differences in the persuasiveness level in business model pitches can be found in the premises that back up the claims and thus the overall idea.",
"We found premises to differ in two qualitative labels: strength and relevance .",
"Strength is defined as how well a single premise contributes to the persuasiveness of the argument, and relevance determines how relevant a premise is for the overarching business idea.",
"We believe that with these two scores we can model the most significant differences in the persuasiveness level of premises.",
"Tables 3 and 4 provide an overview of the two scores.",
"Moreover, we found the business model pitches to also differ in their argumentative power on a discourse level.",
"Sometimes a major claim is well formulated and supported by several claims and premises, but the business model is not really strong or novel in the overall picture because the argumentative discourse structure is weak.",
"Therefore, we defined a document level score termed pitch strength to capture the persuasiveness of the argumentation discourse level of a business model.",
"More information on pitch strength can be found in Table 5.",
"All qualitative attributes are measured on a 1-to-5 scale following Carlile et al. (2018), with every level being precisely defined in our annotation guidelines.",
"A summary of the variables is illustrated in Table 6.",
"3.3 Annotation Process Two native German speakers annotated the business pitches independently of each other for the major claim, claims, and premises as well as their argumentative relationships.",
"Moreover, they labeled the pitch strength , the specificity and the evidence of the major claim, the evidence of claims, and the strength and relevance of premises according to the annotation guidelines we specified.",
"Inspired by Stab and Gurevych (2017a) and Wambsganss et al. (2020c), our guidelines consisted of 26 pages, including definitions and rules for what counts as an argument, which annotation scheme is to be used, and how argument components, argumentative relations, and the qualitative attributes are to be judged.",
"After constructing the annotation guidelines, the results were discussed and validated by two independent senior researchers concerning the criteria of robustness, conciseness, extensibility, and comprehensibility.",
"[Table fragment, major claim evidence. Score 5: The major claim summarizes the argument well and has an addendum that indicates the extent to which the claim applies.]",
"[Table fragment, premise strength. Score 5: A strong premise; by itself, it contributes very well to the persuasiveness of the argument. Score 4: A reasonable premise; it is a fairly strong point, but it could be improved to increase its persuasiveness. Score 3: An inadequate premise; it is not a strong premise and may persuade only a few readers. Score 2: A weak premise; it can only help persuade a small number of readers. Score 1: The premise does not contribute to persuasiveness at all.]",
"Several private training sessions and three team workshops were performed to resolve disagreements among the annotators and to reach a common understanding of the annotation guidelines.",
"We used the tagtog annotation tool (https://tagtog.net/).",
"First, a text was classified into argumentative components ( major claim, claim, premise ) by the trained annotators.",
"Second, the same annotators scored the argumentative relations and the qualitative attributes of the major claim , premises , and claims based on our annotation guideline on a 1-to-5 scale.",
"After the first 50 pitches had been annotated by both annotators, we calculated the inter-annotator agreement (IAA) scores.",
"As we obtained satisfying results, we proceeded with a single annotator who marked up the remaining 150 documents.",
"To evaluate the reliability of the argument component and argumentative relation annotations, we followed the approach of Stab and Gurevych (2014a).",
"Argument Components With regard to the argument components, two strategies were used.",
"Since there were no predefined markables, the annotators not only had to identify the type of argument component but also its boundaries .",
"In order to assess the latter, we use Krippendorff's unitized α (αU) (Krippendorff, 2004), which allows for assessing the reliability of an annotated corpus while accounting for differences in the markable boundaries.",
"To evaluate the annotators' agreement in terms of the selected category of an argument component for a given sentence, we calculate percentage agreement and two chance-corrected measures, multi-π (Fleiss, 1971) and Krippendorff's α (Krippendorff, 1980).",
"Table 7 displays the resulting IAA scores.",
"We obtain an IAA of 87.3% for the claims and 87.7% for the premises.",
"The corresponding multi-π scores are 0.71 and 0.75.",
"Regarding Krippendorff's α, scores of 0.71 and 0.75 are obtained, indicating substantial agreement for both categories.",
"With scores of 0.50 and 0.54, the unitized α (αU) of the claim and premise annotations is somewhat lower than the sentence-level agreement.",
"Thus, the boundaries of argument components are less precisely identified in comparison to the classification into argument types.",
"Yet the scores still suggest that there is a moderate level of agreement between the annotators.",
"Finally, with an IAA of 99.5% and a score of 0.97 for both multi-π and Krippendorff's α, we obtain an almost perfect agreement for the major claims.",
"Hence, we conclude that the annotation of the argument components in student-written business model pitches is reliably possible.",
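As a rough illustration of the agreement computation described above, the sketch below derives percentage agreement, a Fleiss-style chance-corrected coefficient, and Krippendorff's α for two annotators' sentence-level component labels. The toy labels, the use of statsmodels' fleiss_kappa as a stand-in for multi-π, and the third-party krippendorff package are assumptions for illustration; the boundary-sensitive unitized α (αU) is not covered here.

```python
# Sketch of the sentence-level IAA computation; toy data, not the real corpus.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa
import krippendorff  # third-party package implementing Krippendorff's alpha

# 0 = none, 1 = major claim, 2 = claim, 3 = premise (hypothetical annotations)
rater_a = np.array([1, 2, 3, 3, 0, 2, 3, 2])
rater_b = np.array([1, 2, 3, 2, 0, 2, 3, 2])

percent_agreement = np.mean(rater_a == rater_b)

# Chance-corrected agreement over an (items x categories) count table.
table, _ = aggregate_raters(np.column_stack([rater_a, rater_b]))
chance_corrected = fleiss_kappa(table)

# Krippendorff's alpha over the same data, treated as nominal categories.
alpha = krippendorff.alpha(reliability_data=np.vstack([rater_a, rater_b]),
                           level_of_measurement="nominal")
print(percent_agreement, chance_corrected, alpha)
```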
"Argumentative Relations To evaluate the reliability of the argumentative relations, we used the data set of all pairs of argument components that were possible during the annotation task according to our annotation scheme, i.e., all pairs of a major claim and a claim, a claim and a premise, and two premises.",
"In total, the markables include 3,032 pairs of which 16.8% are annotated as support relations, while 83.2% of the possible pairs were left unidentified by an annotator.",
"We obtained an IAA of 91.5% for the support relations.",
"The corresponding multi-π and Krippendorff's α scores both amount to 0.61.",
"Therefore, we conclude that argumentative relations can also be reliably annotated in business model pitches.",
"Persuasiveness Scores Finally, we determined the reliability of the qualitative argumentation labels based on Cohen's κ (Cohen, 1988).",
"Considering the strength of the pitch, we obtained an almost perfect agreement between the two annotators (κ = 0.88).",
"With respect to the strength of the premise, we found moderate agreement (κ = 0.47).",
"The same applies to the specificity of the major claim (κ = 0.41), which allows the conclusion that the annotators' labels are reliable.",
"Regarding the evidence for both the claim and the major claim, as well as the relevance of the premise, there is some room for improvement.",
"However, with scores of κ = 0.33, κ = 0.30, and κ = 0.28, the annotations still show a fair agreement between the labelers.",
"Thus, qualitative argumentation labels can be reliably annotated in business model pitches, too.",
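For the 1-to-5 persuasiveness scores, kappa values like those above can be reproduced with a standard implementation such as scikit-learn's; the ratings below are invented placeholders, and the weighted variant is an optional design choice for ordinal scales rather than something the text above prescribes.

```python
# Sketch of the persuasiveness-score agreement check with toy ratings.
from sklearn.metrics import cohen_kappa_score

strength_a = [5, 3, 4, 2, 4, 1, 3]
strength_b = [5, 3, 3, 2, 4, 2, 3]

kappa = cohen_kappa_score(strength_a, strength_b)
# Quadratic weighting penalizes near-misses less on ordinal 1-to-5 scales.
weighted = cohen_kappa_score(strength_a, strength_b, weights="quadratic")
print(kappa, weighted)
```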
"The final corpus consists of 200 student-written business pitches in German that are composed of 3,207 sentences with 61,964 tokens in total.",
"Hence, on average, each document has 16 sentences and 305 tokens.",
"A total of 262 major claims, 1,270 claims, and 1,481 premises were annotated.",
"1,069 textual spans were identified as not being an argument component (None).",
"2,018 support relationships were marked up by the annotators.",
"5 Providing Students Adaptive Feedback Modelling Argumentation Structures After constructing and analyzing our corpus, we leveraged the novel data to train a machine learning model.",
"Our objective was to embed a classification algorithm in the back end of an argumentative writing support system to provide students with individual argumentation feedback in the writing process.",
"We treat this as a sentence-level classification task, where each sentence can be a major claim, a claim, a premise, or non-argumentative.",
"Therefore, we trained and tuned a Long Short-Term Memory (LSTM) model (Hochreiter and Schmidhuber, 1997) to classify the argumentative components of a given text.",
"We tokenized the texts and transformed them into word embeddings.",
"The data set was split into training and test sets using an 80:20 split.",
"For the component classification, the model achieved an accuracy of 54.12%, a precision of 55.90%, and a recall of 54.12% on the test data.",
"We benchmark our approach against a BERT model (Devlin et al., 2018).",
"However, it achieved a rather unsatisfying accuracy of 47.50%, a precision of 46.66%, and a recall of 47.50%.",
"More information about the modeling can be found in Section B of the appendix.",
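A minimal sketch of such a sentence-level component classifier is shown below. The vocabulary size, embedding dimension, and hidden size are illustrative assumptions; the paper's exact configuration lives in its appendix.

```python
# Minimal LSTM sentence classifier: major claim / claim / premise / none.
import torch
import torch.nn as nn

class ComponentLSTM(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, hidden=128, n_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):              # (batch, seq_len) of word ids
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h[:, -1, :])           # logits from the last state

model = ComponentLSTM()
logits = model(torch.randint(1, 20000, (8, 40)))   # 8 sentences, 40 tokens
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
```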
"We built a writing support system that provides students with individual feedback on their argumentation skill level based on our model.",
"For the design of the tool, we followed the design principles of Wambsganss et al. (2020b) and self-regulated learning theory (Bandura, 1991; Zimmerman and Schunk, 2001).",
"Our goal is to provide learners with adaptive self-evaluation opportunities based on logical argumentation errors irrespective of instructor, time, and location.",
"Our system is illustrated in Figure 4.",
"Evaluation in a Writing Exercise We embedded the tool into a persuasive writing exercise where students were asked to write an argumentative pitch about a business idea.",
"During this writing task, they received adaptive feedback on their argumentation level based on our model.",
"The evaluation was conducted as a part of an exercise with students from a Western-European University, and thus designed and reviewed according to the ethical guidelines of the university.",
"To keep data privacy standards, the students' data were additionally anonymized.",
"We conducted a field experiment to see if and how individual argumentation self-evaluation with adaptive feedback can assist students in writing more persuasive texts.",
"We created a pedagogical scenario in which participants had to write a 300-word persuasive business pitch.",
"The declared goal was to write a convincing pitch to persuade potential investors.",
"Students were not required to participate in the assignment in order to pass the class; nevertheless, by successfully completing the assignment, they could increase their final mark by 2.2 percent.",
"The persuasiveness of the business presentation had no influence on the assignment's grading and, as a result, no impact on final marks.",
"After the treatment, we measured the perceived ease-of-use according to Venkatesh and Bala (2008) by asking the following three items: \"It would be easy for me to become adept at using the reasoning tool\", \"I find the reasoning tool easy to interact with\", and \"Learning how to use the reasoning tool would be easy for me\".",
"Moreover, we measured the self-efficacy of students for the task of argumentation skill learning based on three items following Bandura (1991) to control for self-regulated learning.",
"The items included, \"In comparison to other users, I will write a good argumentative text\", \"I am sure that I could write a very good argumentative text\", and \"I think I now know quite a bit about argumentative writing.\"",
"[Figure 4: Screenshot of a trained model on our corpus as an adaptive writing support system.]",
"Both constructs were measured on a 1-to-7-point Likert scale (1: totally disagree to 7: totally agree, with 4 being neutral).",
"Furthermore, we asked three qualitative questions (\"What did you particularly like about the use of the tool?\", \"What else could be improved?\", and \"Do you have any other ideas?\") and captured the demographics.",
"Results We received 25 valid results where participants successfully finished the writing exercises and the post-survey.",
"Participants had an average age of 24.24 (SD= 3.83, 13 males, 12 females).",
"The persuasive writing task took an average of 30 to 45 minutes.",
"We calculated the mean for both constructs and compared them to the midpoints.",
"All results were greater than the neutral value of 4, indicating a positive value for the design and the pedagogical scenario.",
"A high perceived ease-of-use (mean = 4.94, SD = 0.98, normalized = 0.71) is especially important for learning tools, to ensure that students experience the tool as a benefit and find it easy to interact with.",
"This will foster the motivation, engagement, and adoption of the learning application.",
"Moreover, positive effects for self-regulated learning can be also seen by comparing the means of the measured self-efficacy against the midpoints (Bandura, 1991).",
"The average self-efficacy was 4.98 (SD= 0.98, normalized = 0.71) on a 1-7 Likert scale.",
"Compared to the neutral value of 4, this is a positive indication that argumentation self-monitoring and self-evaluation help students learn in a self-regulated way.",
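The construct scoring reduces to averaging the three items per construct and comparing the mean against the scale midpoint of 4. The sketch below assumes "normalized" means the mean divided by the scale maximum of 7, which reproduces the reported 0.71 for means of 4.94 and 4.98; the responses shown are invented.

```python
# Sketch of the Likert construct scoring against the neutral midpoint.
import numpy as np

ease_of_use_items = np.array([[5, 5, 6],      # toy responses, 1-7 scale,
                              [4, 5, 5]])     # one row per participant
construct = ease_of_use_items.mean(axis=1)    # per-participant construct score

mean, sd = construct.mean(), construct.std(ddof=1)
normalized = mean / 7.0                       # assumed definition of "normalized"
print(mean, sd, normalized, mean > 4.0)       # 4.0 = neutral midpoint
```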
"The qualitative answers provided further insights into students' perception of our tool and model instantiation.",
"The general attitude for our tool was positive.",
"Participants positively mentioned the intelligent self-evaluation, the embedding in Google Docs, and the in-text highlighting several times.",
"However, participants also asked for the tool to provide concrete argument suggestions on how to improve the argumentativeness.",
"6 Conclusion We propose an argumentation annotation scheme and introduce an annotated corpus of persuasive student-written business model pitches extracted from a pedagogical scenario.",
"We offer a corpus of 200 student-written business model pitches with 3,207 sentences annotated for argument components, their relations, and six persuasiveness scores on different levels.",
"By presenting an annotation study based on 50 persuasive pitches, we demonstrate that the annotation of student-written business model pitches is reliably possible.",
"Finally, we embedded and evaluated a model trained on our corpus in an argumentation writing support tool for students.",
"We thus aim to encourage fellow researchers to leverage our annotation scheme and corpus to design and develop argumentation support systems for students in large-scale scenarios."
] | [
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"method",
"result",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"This paper explores a new natural language processing task, review-driven multi-label music style classification.",
"This task requires systems to identify multiple styles of music based on its reviews on websites.",
"The biggest challenge lies in the complicated relations of music styles.",
"To tackle this problem, we propose a novel deep learning approach to automatically learn and exploit style correlations.",
"Experiment results show that our approach achieves large improvements over baselines on the proposed dataset.",
"Furthermore, the visualized analysis shows that our approach performs well in capturing style correlations.",
"1 Introduction As music style (e.g., Jazz, Pop, and Rock) is one of the most frequently used labels for music, music style classification is an important task for applications such as music recommendation and music information retrieval.",
"Several methods have been proposed for automatic music style classification (Qin and Ma, 2005; Zhou et al., 2006; Wang et al., 2009; Choi et al., 2017).",
"Most of them mainly focus on using audio information to identify styles.",
"Motivated by the fact that a piece of music can have different styles, several studies (Wang et al., 2009; Oramas et al., 2017) also aim at multi-label music style classification.",
"Although these methods make promising progress, they are limited in two aspects.",
"First, not all audio data is available in real-world applications because of copyright restrictions, which limits the generalization ability.",
"Second, some of them are based on a strong assumption that a piece of music should be assigned with only one style.",
"Different from these studies, we focus on using easily obtained reviews for multi-label music style classification. (The code and the dataset are available at https://github.com/lancopku/RMSC.)",
"The motivation comes from the fact that lots of user reviews contain rich style-related information, which can be used for music style classification.",
"The major challenge of this task lies in the complicated correlations of music styles.",
"For example, Soul Music contains elements of R&B and Jazz.",
"These three labels can be used alone or in combination.",
"Traditional multi-label classification methods may mistake the true label [Soul Music, R&B, Jazz] for the false label [R&B, Jazz].",
"If well learned, style relations are useful knowledge for improving the performance, e.g., increasing the probability of Soul Music if we find that it is heavily linked with two high probability labels: R&B and Jazz.",
"Therefore, to better exploit style correlations, we propose a novel deep learning approach with two parts: a label-graph based neural network, and a soft training mechanism with correlation based continuous label representation.",
"Our contributions are listed as follows: To the best of our knowledge, this work is the first to explore review-driven multi-label music style classification.",
"To learn the relations among music styles, we propose a label-graph based neural network and a soft training mechanism with correlation-based label representation.",
"This paper is related with music style classification and multi-label classification.",
"In this section, we give a detailed introduction about the related studies.",
"Soul Music is a popular music genre that originated in the United States in the late 1950s and early 1960s.",
"It contains elements of African-American Gospel Music, R&B and Jazz.",
"[Table 1 example. Music Title: Mozart: The Great Piano Concertos, Vol. 1]",
"Previous work mainly focuses on using audio information to identify music styles.",
"Traditional machine learning algorithms are adopted in these studies, such as Support Vector Machine (SVM) (Xu et al., 2003), Hidden Markov Model (HMM) (Chai and Vercoe, 2001; Pikrakis et al., 2006), and Decision Tree (DT) (Zhou et al., 2006).",
"In addition to audio information, Fell and Sporleder (2014) also propose to classify music by statistical analysis of lyrics.",
"Motivated by the fact that a piece of music can have different styles, several studies (Wang et al., 2009; Oramas et al., 2017) also aim at multi-label music style classification.",
"Different from these studies, we focus on using easily obtained reviews in conjunction with multi-label music style classification.",
"Multi-label classification has been widely applied to diverse problems, including image classification (Qi et al., 2007; Wang et al., 2008), audio classification (Boutell et al., 2004; Sanden and Zhang, 2011), web mining (Kazawa et al., 2004), information retrieval (Zhu et al., 2005; Gopal and Yang, 2010), etc.",
"Compared with the existing multilabel learning methods (Wei et al., 2018; Li et al., 2018b,a; Yang et al., 2018; Lin et al., 2018), our method has the following novelties: a label graph that explicitly models the relations of styles; a soft training mechanism that introduces correlation-based continuous label representation.",
"Given several reviews from a piece of music, this task requires models to predict a set of music styles.",
"Assume that X = {x_1, ..., x_i, ..., x_K} denotes the K input reviews, and x_i = (x_{i,1}, ..., x_{i,J}) represents the i-th review with J words.",
"The term Y = {y_1, y_2, ..., y_M} denotes the gold set with M labels, and M varies across samples.",
"The target of review-driven multilabel music style classification is to learn the mapping from input reviews to style labels.",
"The dataset is collected from a popular Chinese music review website (https://music.douban.com), where registered users are allowed to comment on all released music albums.",
"Each sample includes a music title, a set of human annotated styles, and associated reviews.",
"An example is shown in Table 1.",
"In order to build a high-quality dataset, we referred to the literature on music styles.",
"We merge similar music styles and delete music styles that violate the music classification list.",
"22 styles are defined in our dataset.",
"For user reviews, we first deleted reviews with too little information using rule-based methods and then selected the top 40 most-voted reviews.",
"Music samples with too few reviews are also deleted.",
"The constructed dataset contains over 7.1K samples, 288K reviews, and 3.6M words.",
"The proposed approach contains two parts: a label-graph based neural network and a soft training mechanism with continuous label representation.",
"An illustration of the proposed method is shown in Figure 1.",
"4.1 Label-Graph Based Neural Network The first layer is a hierarchical attention layer (Yang et al., 2016) that lets the model pay more or less attention to individual words and reviews when constructing the raw label probability distribution z.",
"(The 22 styles are: Alternative Music, Britpop, Classical Music, Country Music, Dark Wave, Electronic Music, Folk Music, Heavy Metal Music, Hip-Hop, Independent Music, Jazz, J-Pop, New-Age Music, OST, Piano Music, Pop, Post-Punk, Post-Rock, Punk, R&B, Rock, and Soul Music.)",
"Label Graph.",
"To explicitly take advantage of the label correlations when classifying music styles, we add a label graph layer to the network.",
"This layer takes z as input and generates a soft label probability distribution e .",
"Formally, we denote G ∈ R^{m×m} as the label graph, where m is the number of labels; G is initialized as an identity matrix.",
"An element G[l_i, l_j] is a real-valued score indicating how likely labels l_i and l_j are to be related.",
"The graph G is a part of parameters and can be learned by back-propagation.",
"Then, given the raw label probability distribution z and the label graph G, the output of this layer is e = sigmoid(zG).",
"The probability of l_j is determined not only by the current classification result but also by the other labels' probabilities and their correlations to l_j.",
"For example, the probability of a label heavily linked with many high-probability labels will be increased.",
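A minimal sketch of this layer is given below: the label graph G is a learnable m-by-m matrix initialized to the identity, and the soft distribution is computed exactly as e = sigmoid(zG).

```python
# Sketch of the label-graph layer: e = sigmoid(z @ G), G learned end-to-end.
import torch
import torch.nn as nn

class LabelGraph(nn.Module):
    def __init__(self, m):
        super().__init__()
        self.G = nn.Parameter(torch.eye(m))   # identity init, tuned by backprop

    def forward(self, z):                     # z: (batch, m) raw label scores
        return torch.sigmoid(z @ self.G)

layer = LabelGraph(m=22)                      # 22 styles in this dataset
e = layer(torch.randn(4, 22))                 # soft label distribution
```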
"Given a predicted label probability distribution e and a target discrete label representation y, the typical loss function is computed as Loss(θ) = H(e, y),",
"where θ denotes all parameters, and m is the number of labels.",
"The function H denotes the cross entropy between two distributions.",
"However, the widely used discrete label representation does not suit the task of music style classification, because music styles are not mutually exclusive and are highly related to each other.",
"The discrete distribution without label relations makes the model over-distinguish the related labels.",
"Therefore, it is hard for the model to learn the label correlations.",
"Instead, we propose a soft training method that combines a discrete label representation y with a correlation-based continuous label representation y′.",
"The probability gap between two similar labels in y′ should not be large.",
"A straightforward approach to producing the continuous label representation is to use the label graph matrix G to transform the discrete representation y into a continuous form: y′ = yG.",
"Based on the discrete label representation y and the continuous label representation y′, we define the loss function as",
"Loss(θ) = H(e, y) + H(e, y′),",
"where the term H(e, y) aims to correctly classify labels, and the term H(e, y′) aims to avoid the over-distinguishing problem and to better learn label correlations.",
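A sketch of this objective follows. Whether the second term carries a weighting coefficient is not stated above, so the two cross-entropy terms are simply summed; clamping the soft targets into [0, 1] is an implementation assumption, since yG is real-valued.

```python
# Sketch of the soft training loss: H(e, y) + H(e, y') with y' = yG.
import torch
import torch.nn.functional as F

def soft_training_loss(e, y, G):
    """e: sigmoid outputs (batch, m); y: 0/1 labels (batch, m); G: (m, m)."""
    y_soft = (y @ G).clamp(0.0, 1.0)   # assumed clamp to keep targets valid
    return F.binary_cross_entropy(e, y) + F.binary_cross_entropy(e, y_soft)
```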
"In this section, we evaluate our approach on the proposed dataset.",
"We first introduce evaluation metrics, then show experiment results and give a detailed analysis.",
"The training details can be found at Appendices.",
"Multi-label classification requires different evaluation metrics compared with traditional single-label classification.",
"In this paper, we use the following widely-used evaluation metrics for multilabel classification.",
"F1-score: We calculate macro F1 and micro F1, respectively.",
"Macro F1 computes the metric independently for each label and then takes the average as the final score.",
"Micro F1 aggregates the contributions of all labels to compute the average score.",
"One-Error: One-error evaluates the fraction of examples whose top-ranked label is not in the gold label set.",
"Hamming Loss: Hamming loss counts the fraction of the wrong labels to the total number of labels.",
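Assuming binary indicator matrices for the gold and predicted label sets, plus real-valued scores for ranking, these four metrics can be computed as in the sketch below.

```python
# Sketch of the four multi-label evaluation metrics described above.
import numpy as np
from sklearn.metrics import f1_score, hamming_loss

def evaluate(y_true, y_pred, scores):
    """y_true, y_pred: (n, m) 0/1 matrices; scores: (n, m) real-valued."""
    macro = f1_score(y_true, y_pred, average="macro")
    micro = f1_score(y_true, y_pred, average="micro")
    hloss = hamming_loss(y_true, y_pred)
    top1 = scores.argmax(axis=1)                     # top-ranked label per item
    one_error = np.mean(y_true[np.arange(len(y_true)), top1] == 0)
    return macro, micro, one_error, hloss
```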
"We implement several widely-used multi-label classification methods as baselines, such as ML-KNN (Zhang and Zhou, 2007), Binary Relevance (Tsoumakas et al., 2010), Classifier Chains (Read et al., 2011), Label Powerset (Tsoumakas and Vlahavas, 2007).",
"The details of baselines can be found at Appendices.",
"The results on the test set are summarized in Table 2.",
"The proposed approach significantly outperforms the baselines, with a micro F1 of 64.5, a macro F1 of 54.4, and a one-error of 22.6, improving these metrics by 10.6, 21.4, and 7.9 points, respectively.",
"The improvements are attributed to two parts, a hierarchical attention network and a label correlation mechanism.",
"Only using the hierarchical attention network outperforms the baselines, which shows the effectiveness of hierarchically paying attention to different words and sentences.",
"A further gain in F1-score is achieved by adding the proposed label graph, which demonstrates that the label graph helps substantially by taking advantage of label correlations.",
"It can be clearly seen that with the help of soft training, the proposed method achieves the best performance.",
"In particular, the micro F1 is improved from 62.8 to 64.5, and the one-error is reduced from 23.4 to 22.6.",
"With the new loss function, the model not only learns to distinguish the right labels from the wrong ones but also learns label correlations, which are useful knowledge, especially when the input contains too many style-unrelated words for the model to extract all the necessary information.",
"Figure 2 shows a heatmap of the automatically learned label graph; for convenience of display, we subtracted the identity matrix from the label graph.",
"We can see from the picture that some obvious music style relations are well captured.",
"For Country Music, the most related label is Folk Music.",
"In reality, these two music styles are highly similar and the boundary between them is not well-defined.",
"For three kinds of rock music, Heavy Metal Music, Britpop Music, and Alternative Music, the label graph correctly captures that the most related label for them is Rock.",
"For a more complicated relation, where Soul Music is heavily linked with two different labels, R&B and Jazz, the label graph also correctly captures the relation.",
"These examples demonstrate that the proposed approach performs well in capturing relations among music styles.",
"For clearer understanding, we compare several examples generated with and without the label correlation mechanism in Table 3.",
"By comparing gold labels and predicted labels generated by different methods, we find that the proposed label correlation mechanism identifies the related styles more precisely.",
"This is mainly attributed to the learned label correlations.",
"For example, the correct prediction in the first example shows that the label correlation mechanism captures the close relation between Britpop and Rock, which helps the model to generate an appropriate prediction.",
"Although the proposed method has achieved significant improvements, we also notice that there are some failure cases.",
"In this section, we give the detailed error analysis.",
"First, the proposed method performs worse on the styles with low frequency in the training set.",
"Britpop is a style of British Rock.",
"Hip-Hop is a mainstream Pop style.",
"Rhythm and Blues, often abbreviated as R&B, is a genre of popular music.",
"New-Age Music is a genre of music intended to create artistic inspiration, relaxation, and optimism.",
"It is used by listeners for yoga, massage, and meditation.",
"Table 4 compares the performance on the top 5 music styles of highest and lowest frequencies.",
"As we can see, the five least frequent music styles obtain much worse results than the five most frequent ones.",
"This is because the label distribution is highly imbalanced, and unpopular music styles have too little training data.",
"Second, we find that some music items are wrongly classified into the styles that are similar with the gold styles.",
"For example, a sample with a gold set [Country Music] is wrongly classified into [Folk] by the model.",
"The reason is that some music styles share many common elements and only subtly differ from each other.",
"It poses a great challenge for the model to distinguish them.",
"For future work, we would like to investigate how to effectively address this problem.",
"In this paper, we focus on classifying multi-label music styles with user reviews.",
"To meet the challenge of label correlations, we propose a label-graph neural network and a soft training mechanism.",
"Experimental results have shown the effectiveness of the proposed approach.",
"The visualization of label graph also shows that our method performs well in capturing label correlations.",
"We thank all reviewers for providing the thoughtful and constructive suggestions.",
"This work was supported in part by National Natural Science Foundation of China (No. 61673028).",
"Email correspondence to Xu Sun."
] | [
"objective",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"other",
"other",
"other"
] |
[
"Question answering (QA) is a fundamental means to facilitate assessment and training of narrative comprehension skills for both machines and young children, yet there is scarcity of high-quality QA datasets carefully designed to serve this purpose.",
"In particular, existing datasets rarely distinguish fine-grained reading skills, such as the understanding of varying narrative elements.",
"Drawing on the reading education research, we introduce FairytaleQA, a dataset focusing on narrative comprehension of kindergarten to eighth-grade students.",
"Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 children-friendly stories, covering seven types of narrative elements or relations.",
"Our dataset is valuable in two respects: First, we ran existing QA models on our dataset and confirmed that this annotation helps assess models' fine-grained learning skills.",
"Second, the dataset supports question generation (QG) task in the education domain.",
"Through benchmarking with QG models, we show that the QG model trained on FairytaleQA is capable of asking high-quality and more diverse questions.",
"Our dataset is available at https://github.com/uci-soe/FairytaleQAData.",
"[Sect 1] Once upon a time there was a lad who was better off than all the others.",
"He was never short of money, for he had a purse which was never empty.",
"... ... [Sect 6] When the king's daughter had eaten of the apples, she had a pair of horns.",
"But one day a foreign doctor from afar came to court.",
"He was not from their country, he said, and had made the journey purposely just to try his luck here.",
"But he must see the king's daughter alone, said he, and permission was granted him.",
"... [Sect 8] ... Q1: Who will the foreign doctor turn out to be?",
"Reading comprehension is a complex, multidimensional cognitive process (Kim, 2017).",
"Question answering (QA) is fundamental for supporting humans' development of reading comprehension skills, as questions serve as both instruments for evaluation and tools to facilitate learning.",
"To achieve this goal, comprehension questions should be valid and reliable, meaning that all items are designed to cohesively assess comprehension rather than some other skills (e.g., text matching, paraphrasing, or memorization) (Roberts and Priest, 2006).",
"Moreover, from the educational perspective, given that reading comprehension is a multicomponent skill, it is ideal for comprehension questions to be able to identify students' performance in specific sub-skills, thus allowing teachers to provide tailored guidance (Francis et al., 2005).",
"However, creating a large and suitable set of questions for supporting narrative comprehension is both time-consuming and cognitively demanding.",
"Some researchers have proposed developing models to automatically generate questions or QA-pairs that satisfy the need for a continuous supply of new questions (Kurdi et al., 2020; Yao et al., 2022), which can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills (e.g., Zhang et al., 2022).",
"However, existing datasets are not particularly suitable for training question generation (QG) models for educational purposes (Das et al., 2021).",
"This is primarily because the datasets are not typically structured around the specific dimensions of reading comprehension sub-skills, nor do they provide sufficient information on what sub-skills are tested.",
"Consequently, QG models built on these datasets only yield one single comprehension score without a more detailed breakdown of performance on comprehension sub-skills.",
"This issue is compounded by the fact that many benchmarks rely on crowd-sourced workers who may not have sufficient training or education domain knowledge needed to create valid questions in a consistent way.",
"To bridge the gap, we constructed FairytaleQA, an open-source dataset focusing on comprehension of narratives, targeting students from kindergarten to eighth grade.",
"We focus on narrative comprehension for two reasons.",
"First, narrative comprehension is a high-level comprehension skill strongly predictive of reading achievement (Lynch et al., 2008) and plays a central role in daily life, as people frequently encounter narratives in different forms (Goldie, 2003).",
"Second, narrative stories have a clear structure of specific elements and relations among these elements, and there are existing validated narrative comprehension frameworks around this structure, which provides a basis for developing the annotation schema for our dataset.",
"We employed education experts who generated 10,580 question-answer pairs based on a collection of 278 fairytale stories for young readers, following evidence-based narrative comprehension frameworks (Paris and Paris, 2003; Alonzo et al., 2009).",
"Thereby, FairytaleQA contains questions that focus on several narrative elements and relations, increasing the validity and reliability of the assessment.",
"In addition, FairytaleQA contains both explicit questions, whose answers are found explicitly in the text, and implicit questions, which require high-level summarization (Table 1), thus representing a relatively balanced assessment with questions of varying difficulty (Zucker et al., 2010; Raphael, 1986).",
"Most importantly, our selection of annotators with education domain knowledge as well as the training and quality control process ensured that the aforementioned annotation protocol was consistently implemented.",
"A subset of questions in our dataset has been validated with 120 pre-kindergarten and kindergarten students (IRB approved from the first author's institution), proving the questions' reliability and validity.",
"We show the utility of FairytaleQA through two benchmarking experiments.",
"First, we used our data to train and evaluate state-of-the-art (SOTA) QA models and demonstrated that (1) FairytaleQA contains challenging phenomena for existing models, and (2) it supports finer-grained analysis of the different types of comprehension sub-skills, even for models trained on general QA datasets (NarrativeQA; Kociský et al., 2018).",
"We further calibrated model performance against a human baseline, highlighting the most visible gap in models' reasoning capabilities on recognizing causal relationships and predicting event outcomes.",
"Second, we used FairytaleQA to power question generation and showed that the QG model trained on our dataset was more capable of asking diverse questions and generated questions of higher quality.",
"Despite a large number of datasets on reading comprehension, few focus on comprehension of narrative text.",
"Table 2 reviews different narrative-related properties of existing popular QA datasets compared with our proposed FairytaleQA dataset.",
"NarrativeQA (Kociský et al., 2018) is one of the representative datasets.",
"It was generated by crowd workers who wrote QA pairs according to summaries of books or movie scripts, while the task takers are supposed to answer these questions based on their reading of original books or movie scripts.",
"As such, this dataset is posited to evaluate a person's understanding of the underlying narrative, with a significant amount of event-related questions (Mou et al., 2021).",
"However, NarrativeQA simply instructed crowd-sourced workers to generate questions as if they were to test students without using a detailed annotation protocol.",
"It is questionable whether these workers actually had experiences in testing students, and the lack of protocol may have imposed too little control over the coverage of reading sub-skills.",
"BookTest (Bajgar et al., 2016) is an automatically constructed cloze-style QA dataset based on a collection of narrative texts retrieved from Project Gutenberg.",
"The questions were generated by automatically removing a noun or entity in a sentence that has appeared in the preceding context.",
"While cloze-style tests can be a valid instrument for assessing reading comprehension, their validity depends on the careful selection of words to be removed so that filling them in requires proper comprehension (Gellert and Elbro, 2013).",
"It is unlikely that automatically constructed cloze tests would meet such standards.",
"Another dataset, TellMeWhy (Lal et al., 2021), aims to facilitate and assess understanding of causal relationships .",
"This dataset contains why questions that are relatively challenging, given that they require additional information not directly provided in the text.",
"However, TellMeWhy only addresses one narrative component type (i.e., causal relationship), whereas FairytaleQA provides seven evaluation components.",
"Moreover, TellMeWhy was built upon ROCStories (Mostafazadeh et al., 2016) and thus only examines comprehension of incomplete story excerpts, which may limit the dataset's ability to assess macro-level summarization and inference making.",
"There are several benchmarks derived from sources for education purposes (e.g., exams or curricula).",
"RACE (Lai et al., 2017) is a large-scale dataset consisting of comprehension questions from English exams for Chinese middle and high school students.",
"RACE uses a mixture of narrative and informational paragraphs.",
"These two genres require slightly different comprehension skills (Liebfreund, 2021), and students perform differently based on what genre of text they read (Denton et al., 2015).",
"Mixing the two together in one dataset without annotating the specific genre of each story or question obscures the ability to offer a precise assessment.",
"Moreover, RACE is in multiple-choice format, and paragraphs are usually shorter.",
"These two characteristics may make the RACE dataset less challenging, and recent models have demonstrated close-to-human performance (see the RACE leaderboard: http://www.qizhexie.com/data/RACE_leaderboard.html).",
"CLOTH (Xie et al., 2017) is a cloze-style dataset also collected from English exams with multiple choice fill-in-the-blank questions.",
"CLOTH can be advantageous for educational QG as each question is labeled with the level of reasoning it involves.",
"However, this dataset shares certain limitations inherent to multiple-choice formats (Klufa, 2015).",
"There are some datasets that are designed for assessing narrative comprehension skills but do not use QA as a form of evaluation.",
"Several datasets, such as NovelChapters (Ladhak et al., 2020) and BookSum (Kryscinski et al., 2021), evaluate models' comprehension through summarization tasks.",
"However, there have been debates about whether comprehension can be assessed solely through summarization (Head et al., 1989), as summarization poses a high demand on writing that confounds the reading skills intended to be assessed.",
"Two other recent datasets focus on singular specific elements in narratives.",
"The LiSCU dataset (Brahman et al., 2021) targets readers' understanding of characters , and Sims et al. (2019) propose a dataset for detecting events in narratives.",
"Given their focus on single narrative elements, these two datasets may not provide a comprehensive evaluation of narrative comprehension.",
"We developed the FairytaleQA dataset to address some of the limitations in existing benchmarks.",
"Our dataset contains 10,580 QA pairs from 278 classic fairytale stories.",
"In the remainder of this section, we report the dataset construction process and its key statistics.",
"The narrative texts utilized in the dataset are classic fairytales with clear narrative structures.",
"We gathered the text from the Project Gutenberg website (https://www.gutenberg.org/), using fairytale as the search term.",
"Because a large number of fairytales were found, we used the most popular stories based on the number of downloads, since these stories are presumably of higher quality.",
"To ensure the readability of the text, we made a small number of minor revisions to some obviously outdated vocabulary (e.g., changing ere to before) and the unconventional use of punctuation (e.g., changing consecutive semi-colons to periods).",
"For each story, we evaluated the reading difficulty level using the textstat Python package (https://pypi.org/project/textstat/), primarily based on sentence length, word length, and commonness of words.",
"We excluded stories that are at 10th grade level or above.",
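The screening step can be sketched as below; the choice of textstat's Flesch-Kincaid grade as the difficulty estimate is an assumption, since the text does not name the exact textstat function used.

```python
# Sketch of the readability screening with textstat (assumed metric choice).
import textstat

def keep_story(text, max_grade=10):
    """Keep stories estimated below a 10th-grade reading level."""
    return textstat.flesch_kincaid_grade(text) < max_grade

stories = {"story_a": "Once upon a time there was a lad who was better off..."}
readable = {name: text for name, text in stories.items() if keep_story(text)}
```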
"These texts were broken down into small sections based on their semantic content by our annotators.",
"The annotators were instructed to split the story into sections of 100-300 words that contain meaningful content and are separated at natural story breaks.",
"An initial annotator would split the story, and this would be reviewed by a cross-checking annotator.",
"Most of the resulting sections were one natural paragraph of the original text.",
"However, sometimes several paragraphs were combined (usually dialogue); and some exceptionally long paragraphs that contained more than one focal event were divided into multiple sections.",
"On average, there are 15 sections per story, and each section has an average of 150 words (Table 4).",
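The sectioning itself was done manually, but the target granularity can be illustrated with a greedy heuristic like the one below, which merges paragraphs into roughly 100-300-word sections at natural paragraph breaks.

```python
# Illustrative sectioning heuristic; the actual splits were made by annotators.
def split_into_sections(story, min_words=100, max_words=300):
    sections, current, count = [], [], 0
    for para in story.split("\n\n"):              # natural paragraph breaks
        words = len(para.split())
        if count and count + words > max_words:   # flush before overflowing
            sections.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
        if count >= min_words:                    # section is large enough
            sections.append("\n\n".join(current))
            current, count = [], 0
    if current:                                   # trailing remainder
        sections.append("\n\n".join(current))
    return sections
```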
"Categorization via Narrative Elements or Relations FairytaleQA is intended to include QA pairs that capture the seven narrative elements or relations verified in prior educational research (Paris and Paris, 2003).",
"Definitions of question types are shown below.",
"Example questions for each type are in Appendix D. Character questions ask test takers to identify the character of the story or describe characteristics of characters.",
"Setting questions ask about a place or time where / when story events take place and typically start with Where or When .",
"Action questions ask about characters' behaviors or information about that behavior.",
"Feeling questions ask about the character's emotional status or reaction to certain events and are typically worded as How did/does/do ... feel.",
"Causal relationship questions focus on two events that are causally related where the prior events causally lead to the latter event in the question.",
"This type of questions usually begins with Why or What made / makes .",
"Outcome resolution questions ask for identifying outcome events that are causally led to by the prior event in the question.",
"This type of questions are usually worded as What happened / happens / has happened...after... .",
"Prediction questions ask for the unknown outcome of a focal event, which is predictable based on the existing information in the text.",
"These labels are to ensure the presence of the variety of questions' sub-skills so that the models trained on this dataset can also generate the variety.",
"The labels are not intended to aid the training of a model to classify questions.",
"Some but not all of the labels may be determined by surface features.",
"For example, feeling questions typically contain the words feel or feels.",
"[Table 3: Core statistics of the FairytaleQA dataset (278 books, 10,580 QA pairs). Train: 232 books, 8,548 QA pairs; validation: 23 books, 1,025 QA pairs; test: 23 books, 1,007 QA pairs. Per split, the table reports mean/SD/min/max for sections per story (train mean 14.4), tokens per story (2,160.9), tokens per section (149.6), questions per story (36.8), questions per section (2.8), tokens per question (10.2), and tokens per answer (7.1).]",
"Categorization via Source of Answers Orthogonal to the aforementioned question categories, questions in FairytaleQA are also categorized based on whether or not the answer source can be directly found in the text, namely explicit versus implicit questions.",
"In general, explicit questions revolve around a specific story fact, and implicit questions require summarizing and making an inference based on information that is only implicit in the text.",
"Using a combination of explicit and implicit questions yields an assessment with more balanced difficulty (Raphael, 1986; Zucker et al., 2010).",
"In our data, explicit and implicit questions are defined as below (Examples in Appendix C): Explicit questions ask for answers that can be directly found in the stories.",
"In other words, the source of answer are spans of text.",
"Implicit questions ask for answers that cannot be directly found in the text.",
"Answering the questions require either reformulating language or making inference.",
"In other words, the answer source is free-form, meaning that the answers can be any free-text, and there is no limit to where the answer comes from.",
"Five annotators were involved in the annotation of QA pairs.",
"All of these annotators have a B.A. degree in education, psychology, or cognitive science and have substantial experience in teaching and reading assessment.",
"These annotators were supervised by three experts in literacy education.",
"Annotation Guidelines The annotators were instructed to imagine that they were creating questions to test elementary or middle school students in the process of reading a complete story.",
"We required the annotators to generate only natural, open-ended questions , avoiding yesor noquestions.",
"We also instructed them to provide a diverse set of questions about the 7 different narrative elements, with both implicit and explicit questions.",
"Each question in the dataset has a label on the narrative element / relation to be assessed and whether it is implicit or explicit.",
"We asked the annotators to also generate answers for each of their questions.",
"We asked them to provide the shortest possible answers but did not restrict them to complete sentences or short phrases.",
"For explicit questions, annotators extracted the shortest phrase from the text as the answer (i.e., span).",
"For implicit questions, annotators provided at least two possible answers for each question (i.e., free-form).",
"We also asked the annotators to label which section(s) the question and answer was from.",
"We did not specify the number of questions per story to account for story length variability and to allow annotators to create meaningful questions rather than be forced to add unnecessary questions.",
"However, we did ensure that the annotators broadly averaged 2-3 questions per section in order to guarantee dataset size.",
"Annotator Training and Cross-Checking All annotators received a two-week training in which each of them was familiarized with the coding template (described in the section below) and conducted practice coding on the same five stories.",
"The practice QA pairs were then reviewed by the other annotators and the three experts, and discrepancies among annotators were discussed.",
"At the end of the training session, the five annotators had little disagreement about the questions generated by the other coders.",
"During the annotation process, the team met once every week to review and discuss each member's work.",
"All QA pairs were cross-checked by two annotators, and 10% of the QA pairs were additionally checked by the expert supervisor.",
"This process was to ensure that the questions focused on information key to the narrative and that the answers to the questions were appropriate.",
"[Table 4: Descriptive statistics for story and question length. Sections/story: mean 14.7 (min 2, max 60, SD 9.2); tokens/story: 2,196.7 (228, 7,577, 1,401.3); tokens/section: 149.1 (12, 447, 63.6); tokens/question: 10.3 (3, 27, 3.3); tokens/answer: 7.2 (1, 69, 6.1); questions/story: 38.1 (5, 161, 29); questions/section: 2.9 (0, 18, 2.4).]",
"Agreement among Annotators The questions generated by the five coders showed a consistent pattern.",
"All coders' questions have similar lengths (average length ranging from 8 to 10 words among the coders) and similar readability levels (average readability between fourth and fifth grade among the coders).",
"The distributions in narrative elements focused as well as implicit / explicit questions were also consistent.",
"A detailed description of the distributions by coder is given in Appendix E. We chose not to use traditional inter-annotator agreement (IAA) metrics like kappa coefficients because we explicitly asked the coders to generate questions and answers with variable language to aid QA and QG models based on this dataset.",
"This language variability leads to inaccurate IAA metrics by traditional means (Amidei et al., 2018), leading to our decision.",
"Second Answer Annotation For the 46 stories used as the evaluation set, we annotate a second reference answer by asking an annotator to independently read the story and answer the questions generated by others.",
"All questions were judged as answerable and thus answered by the second annotator.",
"The second answers are used for both human QA performance estimation and for providing multiple references in automatic QA evaluation.",
"We randomly split the FairytaleQA dataset into train/val/test splits with a QA-pair ratio of roughly 8:1:1.",
"Table 3 shows the detailed statistics of the FairytaleQA dataset across the train/val/test splits.",
"Overall, the resulting FairytaleQA dataset contains 10,580 questions from 278 fairytale stories.",
"The description of story and question characteristics is presented in Table 4.",
"In FairytaleQA, action and causal relationship questions are the two most common types, constituting 31.6% and 27.8% of all questions, respectively.",
"Outcome resolution, character, and feeling types each constitute about 10% of all questions.",
"Setting and prediction questions are about 5% each.",
"Our dataset contains about 75% explicit questions and 25% implicit questions (see Table 5 for details).",
"Validation of FairytaleQA for Comprehension Assessment We validated the questions in FairytaleQA using established procedures in educational assessment development (Özdemir and Akyol, 2019), showing that our questions have high reliability and validity.",
"Specifically, we sampled a small subset of the questions in our dataset (11 questions generated for one story) and tested them among 120 students in prekindergartens and kindergartens.",
"This study was preapproved by the IRB at the first author's institution.",
"The Cronbach's coefficient alpha was 0.83 for the items in this story comprehension assessment, suggesting high internal reliability.",
"We also linked children's performance answering our questions to another validated language assessment (Martin and Brownell, 2011), and the correlation was a strong 0.76 (p < .001), suggesting excellent external validity.",
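For reference, Cronbach's coefficient alpha over a students-by-items score matrix can be computed directly from the standard formula, as in the sketch below; the matrix shape and scoring are assumptions about how the validation data was laid out.

```python
# Sketch of the internal-reliability check (Cronbach's alpha).
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_students, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```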
"In the following sections, we present a couple of baseline benchmarks on both the Question Answering (QA) task and the Question Generation (QG) task with FairytaleQA.",
"We leveraged both pretrained neural models and models fine-tuned on different QA datasets, including NarrativeQA and our own dataset, FairytaleQA.",
"The baseline results show 452 Model Validation / Test ROUGE-L F1 Pre-trained Models BERT 0.104 / 0.097 DistilBERT 0.097 / 0.082 BART 0.108 / 0.088 Fine-tuned Models BART fine-tuned on NarrativeQA 0.475 / 0.492 BART fine-tuned on F airytale QA 0.533 / 0.536 Human 0.651 / 0.644 Table 6: Question Answering benchmarks on F airytale QA validation and test splits.",
"that our F airytale QA demonstrates challenging problems to existing approaches, and those models fine-tuned on F airytale QA can benefit from the annotations a lot to achieve significant performance improvement.",
"We also report human performance by scoring one reference answer to the other.",
"Question Answering (QA) is a straightforward task that our F airytale QA dataset can contribute to.",
"We leveraged the commonly-used Rouge-L F1 score for the evaluation of QA performances.",
"For each QA instance, we compared the generated answer with each of the two ground-truth answers and took the higher Rouge-L F1 score.",
"Here in Table 6, we show the QA performance of a few pretrained SOTA neural-model architectures: BERT (Devlin et al., 2018), BART (Lewis et al., 2019), and DistilBERT(Sanh et al., 2019).",
"The quality of answers generated by these pre-trained models is on par with each other.",
"Since BART outperformed other model architectures in the QA task of NarrativeQA (Mou et al., 2021), we decided to use BART as the backbone for our fine-tuned models.",
"We report the performance of fine-tuned BART models with the following settings: BART fine-tuned on NarrativeQA, which is the SOTA model reported in (Mou et al., 2021), and another BART model fine-tuned on F airytale QA.",
"We note that for the QA task, the model that was fine-tuned on F airytale QA dataset performs much better than the model fine-tuned on NarrativeQA by at least 5%.",
"Even the human performance is underestimated Figure 1: Decomposed QA results (Rouge-L) on 7 narrative elements on the validation split.",
"here because it is obtained via cross-estimation between two annotated answers, this result still leaves around 12% on both splits between human performance and the model fine-tuned with F airytale QA, which demonstrates that the QA task is still a challenging problem for existing works on our F airy tale QA dataset.",
"We leave a full large-scale human study for evaluating the accurate human performance to future work.",
"Performance Decomposition Given that F airy tale QA has question type annotations on all the question-answer pairs, it supports the decomposition of performance on di ff erent types, thus resulting in a comprehensive picture of which reading skills the models lack the most.",
"Figure 1 presents the QA performance decomposition as a radar visualization.",
"(The full results on both validation and test sets can be found in Table 10 in Appendix A).",
"Compared to the model trained on NarrativeQA, our F airytale QA led to the biggest improvement on dimensions of Setting and Feeling with more than 10% in-crease.",
"The Character and Prediction dimensions were also improved by a large margin (7-8%).",
"The large improvements in these dimensions suggested that despite the NarrativeQA dataset's overall focus on narrative comprehension, it might not include questions that su ffi ciently cover some of the fundamental elements, probably due to the lack of detailed annotating protocol and typical crowd workers' limited knowledge in reading assessment.",
"By comparison, on dimensions of Action , Causal Relationship and Outcome Resolution , our model fine-tuned on F airy tale QA resulted in smaller improvement compared 453 Figure 2: Learning curve of the QA model on F airytale QA with varying size of training data.",
"to the model fine-tuned on NarrativeQA.",
"This is likely due to the fact that most of the NarrativeQA questions are about event arguments and causal or temporal relations between events, as suggested by a human study (Mou et al., 2021).",
"Our performance decomposition also revealed substantial gaps between existing SOTA models and humans.",
"Specifically, humans were 15-20% better on Causal Relationship , Outcome Resolution and Prediction .",
"The model-human performance gaps on Causal Relationship and Outcome Resolution likely reflected the deficiency of current NLP models in understanding story plots, and the gap on Prediction might be due to the fact that this dimension asked the models to envision what would come next in the text, which required connecting commonsense knowledge with the content of the text.",
"The model-human performance gaps on Character and Setting were also considerable, suggesting that the models' ability to understand these basic reading elements still has much room for improvement.",
"Finally, it was interesting that the model trained on our dataset outperformed humans on the Feeling dimension.",
"This was likely because the answers to these Feeling questions were most explicitly described in the story.",
"Therefore, it did not actually require reasoning of the character's mental states, but rather understanding which parts of the texts express the feelings.",
"Another QA performance decomposition result based on explicit / implicit question types is provided in Appendix B. Learning Curve Finally, we present the learning curve of the BART QA model on our F airy tale QA.",
"Figure 2 plots the model performance on the validation set with di ff erent sizes of training data.",
"The curve became flatter after training with 6,000 QA pairs in our dataset.",
"This suggested that Model Validation / Test ROUGE-L F1 BART fine-tuned on NarrativeQA 0.424 / 0.442 BART fine-tuned on F airytale QA 0.527 / 0.527 BART fine-tuned on NarrativeQA and F airytale QA 0.508 / 0.519 Table 7: Question Generation benchmarks on F airytale QA-validation and test splits.",
"our dataset has a reasonably good size for fine-tuning a SOTA pre-trained model, and the performance gap between models and humans requires a more sophisticated reading model design rather than solely augmenting the training examples.",
"In terms of the QG performance on F airytale QA, the task was to generate questions that correspond to the given answers and the context.",
"This task has important empirical applications that in the future, models may help teachers to create questions in the educational settings.",
"Similar to the QA task, we fine-tuned a BART model to generate a question conditioned on each human-labeled answer and corresponding story section.",
"The generated question is then evaluated with the corresponding ground-truth question.",
"We used ROUGE-L F1 score as the evaluation metric.",
"For this QG task, we compare the models fine-tuned on NarrativeQA, on F airytale QA, and on both datasets.",
"Table 7 displays the QG results.",
"The model fine-tuned on F airytale QA demonstrated a clear advantage on Rouge-L over the model fine-tuned on NarrativeQA.",
"It is worth noting that the model fine-tuned on both NarrativeQA and F airytale QA performs worse than the model fine-tuned on F airy tale QA only; we would assume that NarrativeQA 454 Input story section: the wild people who dwell in the south-west are masters of many black arts.",
"they often lure men of the middle kingdom to their country by promising them their daughters in marriage, but their promises are not to be trusted.",
"once there was the son of a poor family, who agreed to labor for three years for one of the wild men in order to become his son-in-law.",
"Further analysis (Table 8) examined the distribution of generated question types according to the beginning word of a question (whwords).",
"The questions generated by F airytale QA more closely resembled the pattern of the ground-truth questions, suggesting that our dataset was able to improve the model's ability to mimic the education experts' strategy of asking questions that assess the seven elements of reading comprehension.",
"This result is further supported by qualitative analysis (as seen in examples in Table 9).",
"Compared to the QG model trained with F airy tale QA, the baseline model trained with NarrativeQA dataset tended to generate vague questions that did not build upon specific contextual evidence within the narratives.",
"These kinds of vague questions may not be suitable in educational settings, as improving students' skills to find text evidence to support their comprehension is a crucial aspect of reading education.",
"The disparity between the two models might be attributed to how the QA-pairs were constructed in these two datasets: while NarrativeQA was constructed by crowd workers who only read the abstract of the stories, F airytale QA required annotators to read the complete story before developing QA-pairs.",
"As such, it is not surprising that models trained on F airytale QA dataset could generate questions that are more closely related to the contextual evidence within the original text.",
"In addition, we also observed that the model trained on NarrativeQA tended to generate questions with seemingly more correct grammar but were factually inaccurate (Table 12 Appendix C).",
"In summary, we constructed a large-scale dataset, F airytale QA, for the context of children's narrative comprehension.",
"The dataset was generated through a rigorous labeling process with educational domain experts.",
"This dataset has been helpful to support preliminary works on QG tasks (Yao et al., 2022; Zhao et al., 2022) and already enabled possibilities for new downstream AI-for-Education applications (Zhang et al., 2022; Xu et al., 2021).",
"Howerver, we acknowledge our work also has limitations that require future works to continue the exploration.",
"As aforementioned, the human performance results for QA task are underestimated because they are obtained via cross-estimation between the two annotated answers.",
"One possibility for future work is to conduct a large-scale human annotation to collect more answers per each question and then leverage the massive annotated answers to better establish a human performance evaluation.",
"Another avenue of future work is to leverage our dataset to detect and remediate social stereotypes and biases represented in story narratives the bias analysis in the children storybook corpus has been an underexplored research topic for the ML community, but it has profound societal impacts on the soceity.",
"Through such analysis on our dataset, we may be able to answer how do social stereotype and bias come into a child's mind?",
"In sum, there are many new research and application opportunities enabled by our F airytale QA dataset, and we welcome researchers from both NLP and education communities to join our e ff ort to continue this endeavor.",
"We thank Schmidt Futures for providing funding for the development of the FairytaleQA dataset.",
"This work is also supported by the National Science Foundation (Grant No. 1906321 and 2115382)."
] | [
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"objective",
"result",
"other",
"result",
"result",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
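A minimal sketch of the QA scoring used in the row above: each generated answer is compared against both annotated reference answers and the higher ROUGE-L F1 is kept. This assumes the `rouge-score` package; the function names and the toy example are illustrative, not from the FairytaleQA release.

```python
# Hedged sketch of max-over-references ROUGE-L F1 scoring (rouge-score assumed).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def qa_score(prediction, references):
    """ROUGE-L F1 of one predicted answer, taking the max over the references."""
    return max(scorer.score(ref, prediction)["rougeL"].fmeasure
               for ref in references)

def corpus_score(predictions, reference_pairs):
    """Mean per-question ROUGE-L F1 over an evaluation split."""
    scores = [qa_score(p, refs) for p, refs in zip(predictions, reference_pairs)]
    return sum(scores) / len(scores)

# Example: one question with two annotated reference answers.
print(qa_score("the fox ate the grapes",
               ["the fox ate grapes", "a fox ate them"]))
```

Taking the maximum over references is what lets the second annotated answer act as an extra reference rather than a separate test item.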
[
"In many languages like Arabic, diacritics are used to specify pronunciations as well as meanings.",
"Such diacritics are often omitted in written text, increasing the number of possible pronunciations and meanings for a word.",
"This results in a more ambiguous text making computational processing on such text more difficult.",
"Diacritic restoration is the task of restoring missing diacritics in the written text.",
"Most state-of-the-art diacritic restoration models are built on character level information which helps generalize the model to unseen data, but presumably lose useful information at the word level.",
"Thus, to compensate for this loss, we investigate the use of multi-task learning to jointly optimize diacritic restoration with related NLP problems namely word segmentation, part-of-speech tagging, and syntactic diacritization.",
"We use Arabic as a case study since it has sufficient data resources for tasks that we consider in our joint modeling.",
"Our joint models significantly outperform the baselines and are comparable to the state-of-the-art models that are more complex relying on morphological analyzers and/or a lot more data (e.g. dialectal data).",
"In contrast to English, some vowels in languages such as Arabic and Hebrew are not part of the alphabet and diacritics are used for vowel specification.",
"1 In addition to pertaining vowels, diacritics can also represent other features such as case marking and phonological gemination in Arabic.",
"Not including diacritics in the written text in such languages increases the number of possible meanings as well as pronunciations.",
"Humans rely on the surrounding * The work was conducted while the author was with AWS, Amazon AI.",
"1 Diacritics are marks that are added above, below, or in-between the letters to compose a new letter or characterize the letter with a different sound (Wells, 2000).",
"context and their previous knowledge to infer the meanings and/or pronunciations of words.",
"However, computational models, on the other hand, are inherently limited to deal with missing diacritics which pose a challenge for such models due to increased ambiguity.",
"Diacritic restoration (or diacritization) is the process of restoring these missing diacritics for every character in the written texts.",
"It can specify pronunciation and can be viewed as a relaxed variant of word sense disambiguation.",
"For example, the Arabic word (cid:213)(cid:206)(cid:171) Elm 2 can mean flag or knowledge, but the meaning as well as pronunciation is specified when the word is diacritized ( (cid:12)(cid:213)(cid:11)(cid:206) (cid:11)(cid:171) E a l a m u means flag while (cid:21)(cid:213)(cid:21)(cid:206)(cid:171)(cid:11) E i l o m o means knowledge).",
"As an illustrative example in English, if we omit the vowels in the word pn , the word can be read as pan , pin , pun , and pen , each of these variants have different pronunciations and meanings if it composes a valid word in the language.",
"The state-of-the-art diacritic restoration models reached a decent performance over the years using recurrent or convolutional neural networks in terms of accuracy (Zalmout and Habash, 2017; Alqahtani et al., 2019; Orife, 2018) and/or efficiency (Alqah-tani et al., 2019; Orife, 2018); yet, there is still room for further improvements.",
"Most of these models are built on character level information which help generalize the model to unseen data, but presumably lose some useful information at the word level.",
"Since word level resources are insufficient to be relied upon for training diacritic restoration models, we integrate additional linguistic information that considers word morphology as well as word relationships within a sentence to partially compensate for this loss.",
"2 We use Buckwalter Transliteration encoding http://www.qamus.org/transliteration.htm.",
"In this paper, we improve the performance of diacritic restoration by building a multitask learning model (i.e. joint modeling).",
"Multitask learning refers to models that learn more than one task at the same time, and has recently been shown to provide good solutions for a number of NLP tasks (Hashimoto et al., 2016; Kendall et al., 2018).",
"The use of a multitask learning approach provides an end-to-end solution, in contrast to generating the linguistic features for diacritic restoration as a preprocessing step.",
"In addition, it alleviates the reliance on other computational and/or data resources to generate these features.",
"Furthermore, the proposed model is flexible such that a task can be added or removed depending on the data availability.",
"This makes the model adaptable to other languages and dialects.",
"We consider the following auxiliary tasks to boost the performance of diacritic restoration: word segmentation, part-of-speech (POS) tagging, and syntactic diacritization.",
"We use Arabic as a case study for our approach since it has sufficient data resources for tasks that we consider in our joint modeling.",
"3 The contributions of this paper are twofold: 1. We investigate the benefits of automatically learning related tasks to boost the performance of diacritic restoration; 2. In doing so, we devise a state-of-the-art model for Arabic diacritic restoration as well as a framework for improving diacritic restoration in other languages that include diacritics.",
"We formulate the problem of (full) diacritic restoration ( DIAC ) as follows: given a sequence of characters, we identify the diacritic corresponding to each character in that sequence from the following set of diacritics { a, u, i, o, K, F, N, , a, u, i, F, K, and N } .",
"We additionally consider three auxiliary tasks: syntactic diacritization, part-of-speech tagging, and word segmentation.",
"Two of which operate at the word level (syntactic diacritization and POS tagging) and the remaining tasks (diacritic restoration and word segmentation) operate at the character level.",
"This helps diacritic restoration utilize information from both charac-3 Other languages that include diacritics lack such resources; however, the same multitask learning framework can be applied if data resources become available.",
"Syntactic Diacritization (SYN) : This refers to the task of retrieving diacritics related to the syntactic positions for each word in the sentence, which is a sub-task of full diacritic restoration.",
"Arabic is a templatic language where words comprise roots and patterns in which patterns are typically reflec-tive of diacritic distributions.",
"Verb patterns are more or less predictable however nouns tend to be more complex.",
"Arabic diacritics can be divided into lexical and inflectional (or syntactic) diacritics.",
"Lexical diacritics change the meanings of words as well as their pronunciations and their distribution is bound by patterns/templates.",
"In contrast, inflectional diacritics are related to the syntactic positions of words in the sentence and are added to the last letter of the main morphemes of words (word finally), changing their pronunciations.",
"4 Inflectional diacritics are also affected by word's root (e.g. weak roots) and semantic or morphological properties (e.g. with the same grammatical case, masculine and feminine plurals take different dia-critics).",
"Thus, the same word can be assigned a different syntactic diacritic reflecting syntactic case, i.e. depending on its relations to the remaining words in the sentence (e.g. subject or object).",
"For example, the diacritized variants (cid:11)(cid:213)(cid:11)(cid:206) (cid:11)(cid:171) Ealam a and (cid:12)(cid:213)(cid:11)(cid:206) (cid:11)(cid:171) Ealam u which both mean flag have the corresponding syntactic diacritics: a and u , respectively.",
"That being said, the main trigger for accurate syntactic prediction is the relationships between words, capturing semantic and most importantly, syntactic information.",
"Because Arabic has a unique set of diacritics, this study formulates syntactic diacritization in the following way: each word in the input is tagged with a single diacritic representing its syntactic position in the sentence.",
"5 The set of diacritics in syntactic diacritization is the same as the set of diacritics for full diacritic restoration.",
"Other languages that include diacritics can include syntactic related diacritics but in a different manner and complexity 4 Diacritics that are added due to passivization are also syntactic in nature but are not considered in our syntactic diacritization task.",
"That said, they are still considered in the full diacritic restoration model.",
"5 Combinations of diacritics is possible but we combine valid possibilities together as one single unit in our model.",
"For example, the diacritics and a are combined to form an additional diacritic a .",
"Word segmentation (SEG) : This refers to the process of separating affixes from the main unit of the word.",
"Word segmentation is commonly used as a preprocessing step for different NLP applications and its usefulness is apparent in morphologically rich languages.",
"For example, the undiacritized word whm (cid:209)(cid:235)(cid:240) might be diacritized as waham a (cid:11)(cid:15)(cid:209) (cid:11)(cid:235)(cid:11)(cid:240) and concerned, waham (cid:209) (cid:11)(cid:235)(cid:11)(cid:240) illu-sion, where the first diacritized word consists of two segments wa ham a (cid:11)(cid:15)(cid:209) (cid:11)(cid:235) (cid:11)(cid:240) while the second is composed of one word.",
"Word segmentation can be formulated in the following way: each character in the input is tagged following IOB tagging scheme ( B : beginning of a segment; I : inside a segment; O : out of the segment) (Diab et al., 2004).",
"Part-Of-Speech Tagging (POS) : This refers to the task of determining the syntactic role of a word (i.e. part of speech) within a sentence.",
"POS tags are highly correlated with diacritics (both syntactic and lexical): knowing one helps determine or reduce the possible choices of the other.",
"For instance, the word (cid:73)(cid:46)(cid:16)(cid:74)(cid:187) ktb in the sentence ktb [someone] means books if we know it to be a noun whereas the word would be either (cid:73)(cid:46)(cid:11)(cid:16)(cid:74)(cid:11)(cid:187) katab someone wrote or (cid:73)(cid:46)(cid:11)(cid:15)(cid:16)(cid:74)(cid:11)(cid:187) kat ab made someone write if it is known to be a verb.",
"POS tagging can be formulated in the following way: each word in the input is assigned a POS tag from the Universal Dependencies tagset (Taji et al., 2017).",
"6 3 Approach We built a diacritic restoration joint model and studied the extent to which sharing information is plausible to improve diacritic restoration performance.",
"Our joint model is motivated by the re-cent success of the hierarchical modeling proposed in (Hashimoto et al., 2016) such that information learned from an auxiliary task is passed as input to the diacritic restoration related layers.",
"7 6 Refer to https://universaldependencies.org/.",
"This tagset is chosen because it includes essential POS tags in the language, and it is unified across different languages which makes it suitable to investigate more languages in the future.",
"7 We also experimented with learning tasks sharing some levels and then diverging to specific layers for each tasks.",
"However, this did not improve the performance compared to the diacritic restoration model when we don't consider any additional task.",
"Since our joint model may involve both character and word level based tasks, we began our investigation by asking the following question: how to integrate information between these two levels?",
"Starting from the randomly initialized character embeddings as well as a pretrained set of embeddings for words, we follow two approaches (Figure 1 visually illustrates the two approaches with an example).",
"(1) Character Based Representation : We pass information learned by character level tasks into word level tasks by composing a word embedding from the word's characters.",
"We first concatenate the individual embeddings of characters in that word, and then apply a Bidirectional Long Short Term Memory (BiLSTM) layer to generate denser vectors.",
"8 This helps representing morphology and word composition into the model.",
"(2) Word-To-Character Representation : To pass information learned by word level tasks into character level tasks, we concatenate each word with each of its composed characters during each pass, similar to what is described in Watson et al. (2018)'s study.",
"This helps distinguishing the individual characters based on the surrounding context, implicitly capturing additional semantic and syntactic information.",
"8 We also evaluated the use of a feedforward layer and unidirectional Long Short Term Memory (LSTM) but a BiLSTM layer yielded better results.",
"For all architectures, the main component is BiLSTM (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997), which preserves the temporal order of the sequence and has been shown to provide the state-of-the-art performance in terms of accuracy (Zalmout and Habash, 2017; Alqahtani et al., 2019).",
"After representing characters through random initialization and representing words using pretrained embeddings obtained from fastText (Bo-janowski et al., 2017), the learning process for each batch runs as follows: 1. We extract the two additional input representation described in Section 3.1; 2. We apply BiLSTM for each of the different tasks separately to obtain their corresponding outputs; 3. We pass all outputs from all tasks as well as WordToChar embedding vectors as input to the diacritic restoration model and obtain our diacritic outputs.",
"Figure 2 illustrates the diacritic restoration joint model.",
"As can be seen, SYN as well as POS tagging are trained on top of CharToWord representation which is basically the concatenation of the pretrained embedding for each word with the character-based representations described in Figure 1. SEG is also trained separately on top of the character embeddings.",
"We pass the outputs of all these tasks along with WordToChar representation to train the BiLSTM diacritic restoration model.",
"Omitting a task is rather easy, we just remove the related components for that task to yield the appropriate model.",
"We optionally pass the last hidden layer for SEG along with the remaining input to the diacritic restoration model.",
"9 4 Experimental Setups Dataset: We use the Arabic Treebank (ATB) dataset: parts 1, 2, and 3 and follow the same data division as Diab et al. (2013).",
"Table 1 illustrates the data statistics.",
"For word based tasks, we segment each sentence into space tokenized words.",
"For character based tasks, we, in addition, add the special boundary < w > between these words, and then each word is further segmented into its characters, similar to that in (Alqahtani et al., 2019).",
"We pass each word through the model along with a specific number of previous and future words (+/10 words).",
"Parameter Settings: For all tasks, we use 250 hidden units in each direction (500 units in both directions combined) and 300 as embedding size.",
"We use 3 hidden layers for tasks except in SEG in 9 Passing the last hidden layer for POS tagging and/or SYN did not improve the performance; the pretrained embeddings are sufficient to capture important linguistic signals.",
"which we use only one layer.",
"We use Adam for learning optimization with a learning rate of 0.001.",
"We use 20 for epoch size, 16 for batch size, 0.3 for hidden dropout, and 0.5 for embedding dropout.",
"We initialize the embedding with a uniform distribution [-0.1,0.1] and the hidden layers with normal distribution.",
"The loss scores for all considered tasks are combined and then normalized by the number of tasks in the model.",
"Evaluation metrics: We use accuracy for all tasks except diacritic restoration.",
"For diacritic restoration, the two most typically used metrics are Word Error Rate (WER) and Diacritic Error Rate (DER), the percentages of incorrectly diacritized words and characters, respectively.",
"In order to approximate errors in the syntactic diacritics, we use Last Diacritic Error Rate (LER), the percentage of words that have incorrect diacritics in the last positions of words.",
"To evaluate the models' ability to generalize beyond observed data, we compute WER on OOV (out-of-vocabulary) words.",
"10 Significance testing: We ran each experiment three times and reported the mean score.",
"11 We used the t-test with p = 0 .",
"05 to evaluate whether the difference between models' performance and the diacritic restoration is significant (Dror et al., 2018).",
"Table 2 shows the performance of joint diacritic restoration models when different tasks are considered.",
"When we consider WordToChar as input to the diacritic restoration model, we observe statistically significant improvements for all evaluation metrics.",
"This is justified by the ability of word embeddings to capture syntactic and semantic information at the sentence level.",
"The same character is disambiguated in terms of the surrounding context 10 Words that appear in the training dataset but do not appear in the test dataset.",
"11 Higher number of experiments provide more robust conclusion about the models' performance.",
"We only considered the minimum acceptable number of times to run each experiment due to limited computational resources.",
"as well as the word it appears in (e.g. the character t in the word cat would be represented slightly different than t in a related word cats or even a different word table ).",
"We consider both character based model as well as WordToChar based model as our baselines (BASE).",
"We use WordToChar representation rather than characters for all remaining models that jointly learn more than one task.",
"For all experiments, we observe improvements compared to both baselines across all evaluation metrics.",
"Furthermore, all models except DIAC+SEG outperform WordToChar diacritic restoration model in terms of WER, showing the benefits of considering output distributions for the other tasks.",
"Despite leveraging tasks focused on syntax (SYN/POS) or morpheme boundaries (SEG), the improvements extend to lexical diacritics as well.",
"Thus, the proposed joint diacritic restoration model is also helpful in settings beyond word final syntactic related diacritics.",
"The best performance is achieved when we consider all auxiliary tasks within the diacritic restoration model.",
"Impact of Auxiliary Tasks: We discuss the impact of adding each investigated task towards the performance of the diacritic restoration model.",
"Word segmentation (DIAC+SEG): When morpheme boundaries as well as diacritics are learned jointly, the WER performance is slightly reduced on all and OOV words.",
"This reduction is attributed mostly to lexical diacritics.",
"As Arabic exhibits a non-concatenative fusional morphology, reducing its complexity to a segmentation task might inherently obscure morphological processes for each form.",
"Observing only slight improvement is surprising; we believe that this is due to our experimental setup and does not negate the importance of having morphemes that assign the appropriate diacritics.",
"We speculate that the reason for this is that we do not capture the interaction between morphemes as an entity, losing some level of morphological information.",
"For instances, the words w a h a m a versus w a h u m for the undiacritized words whm (bold letters refer to consonants distinguishing it from diacritics) would benefit from morpheme boundary identifications to tease apart w a from h u m in the second variant ( w a h u m ), emphasizing that these are two words.",
"But on the other hand, it adds an Task WER DER LER/Lex OOV WER Zalmout and Habash (2017) 8.21 -20.2 Zalmout and Habash (2019a) 7.50 -Alqahtani and Diab (2019a) 7.6 2.7 -32.1 BASE (Char) 8.51 ( 0 . 01 ) 2.80 5.20/5.54 34.56 BASE (WordToChar) 8.09 ( 0 . 05 ) 2.73 5.00/5.30 32.10 DIAC+SEG 8.35 ( 0 . 02 ) 2.82 5.20/5.46 33.97 DIAC+SYN 7.70* ( 0 . 02 ) 2.60 4.72/5.08 30.94 DIAC+POS 7.86* ( 0 . 14 ) 2.65 4.72/5.20 32.28 DIAC+SEG+SYN 7.70* ( 0 . 05 ) 2.59 4.65/5.03 31.33 DIAC+SEG+POS 7.73* ( 0 . 08 ) 2.62 4.73/5.01 31.31 DIAC+SYN+POS 7.72* ( 0 . 06 ) 2.61 4.62/5.06 31.05 ALL 7.51 * ( 0 . 09 ) 2.54 4.54/4.91 31.07 Table 2: Performance of the joint diacritic restoration model when different related tasks are considered.",
"additional layer of ambiguity for other cases like the morpheme ktb in the diacritic variants k a t a b a , k u t u b u , saya k o t u b o note that the underlined segment has the same consonants as the other variants in which identifying morphemes increased the number of possible diacritic variants without learning the interactions between adjacent morphemes.",
"Furthermore, we found inconsistencies in the dataset for morphemes which might cause the drop in performance when we only consider SEG.",
"When we consider all tasks together, these inconsistencies are reduced because of the combined information from different linguistic signals towards improving the performance of the diacritic restoration model.",
"Syntactic diacritization (DIAC+SYN): By enforcing inflectional diacritics through an additional focused layer within the diacritic restoration model, we observe improvements on WER compared to the baselines.",
"We notice improvements on syntactic related diacritics (LER score), which is expected given the nature of syntactic diacritization in which it learns the underlying syntactic structure to assign the appropriate syntactic diacritics for each word.",
"Improvements also extend to lexical diacritics, and this is because word relationships are captured during learning syntactic diacritics in which BiLSTM modeling for words is integrated.",
"POS tagging (DIAC+POS): When we jointly train POS tagging with full diacritic restoration, we notice improvements compared to both baselines.",
"Compared to syntactic diacritization, we obtain similar findings across all evaluation metrics except for WER on OOV words in which POS tagging drops.",
"Including POS tagging within diacritic restoration also captures important information about the words; the idea of POS tagging is to learn the underlying syntax of the sentence.",
"In comparison to syntactic diacritization, it involves different types of information like passivization which could be essential in learning correct diacritics.",
"Ablation Analysis: Incorporating all the auxiliary tasks under study within the diacritic restoration model (ALL) provides the best performance across all measures except WER on OOV words in which the best performance was given by DIAC+SYN.",
"We discuss the impact of removing one task at a time from ALL and examine whether its exclusion significantly impacts the performance.",
"Excluding SEG from the process drops the performance of diacritic restoration.",
"This shows that even though SEG did not help greatly when it was combined solely with diacritic restoration, the combinations of SEG and the other word based tasks filled in the gaps that were missing from just identifying morpheme boundaries.",
"Excluding either POS tagging or syntactic diacritization also hurts the performance which shows that these tasks complement each other and, taken together, they improve the performance of diacritic restoration model.",
"Impact of output labels: Table 3 shows the different models when we do not pass the labels of the investigated tasks (the input is only WordToChar representation) against the same models when we do.",
"We noticed a drop in performance across all models.",
"Notice that all models even when we do not consider the label have better performance than the baselines.",
"This also supports the benefits of WordToChar representation.",
"Last hidden layer of SEG: Identifying morpheme boundaries did not increase accuracy as we expected.",
"Therefore, we examined whether information learned from the BiLSTM layer would help us learn morpheme interactions by passing the output of last BiLSTM layer to the diacritic restoration model along with segmentation labels.",
"We did not observe any improvements towards predicting accurate diacritics when we pass information regarding the last BiLSTM layer.",
"For ALL, the WER score increased by 0.22%.",
"Thus, it is sufficient to only utilize the segment labels for diacritic restoration.",
"Passive and active verbs: Passivation in Arabic is denoted through diacritics and missing such diacritic can cause ambiguity in some cases (Her-mena et al., 2015; Diab et al., 2007).",
"To examine its impact, we further divide verbs in the POS tagset into passive and active, increasing the size by one.",
"Table 4 shows the diacritic restoration performance with and without considering passivation.",
"We notice improvements, in some combinations of tasks, across all evaluation metrics compared to the pure POS tagging, showing its importance in diacritic restoration models.",
"diacritic restoration model were built empirically and tested against the development set.",
"We noticed that to improve the performance, soft parameter sharing in a hierarchical fashion performs better on diacritic restoration.",
"We experimented with building a joint diacritic restoration model that jointly learns segmentation and diacritics through hard parameter sharing.",
"To learn segmentation with diacritic restoration, we shared the embedding layer between the two tasks as well as sharing some or all layers of BiLSTM.",
"We got WER on all words (8.53 9.35) in which no improvements were shown compared to character based diacritic restoration.",
"To learn word based tasks with diacritic restoration, we pass WordToChar representation to the diacritic restoration and/or CharToWord representation for word-based tasks.",
"The best that we could get for both tasks is 8.23% 9.6%; no statistically significant improvements were found.",
"This shows the importance of hierarchical structure for appropriate diacritic assignments.",
"Qualitative analysis: We compared random errors that are correct in DIAC (character-based diacritic restoration) with ALL in which we consider all investigated tasks.",
"Although ALL provides accurate results for more words, it introduces errors in other words that have been correctly diacritized by DIAC.",
"The patterns of such words are not clear.",
"We did not find a particular category that occurs in one model but not the other.",
"Rather, the types and quantity of errors differ in each of these categories.",
"State-of-the-art Comparison: Table 2 also shows the performance of the state-of-the-art models.",
"ALL model surpass the performance of Zalmout and Habash (2017).",
"However, Zalmout and Habash (2017)'s model performs significantly better on OOV words.",
"Zalmout and Habash (2019a) provides comparable performance to ALL model.",
"The difference between their work and that in (Zal-mout and Habash, 2017) is the use of a joint model to learn morphological features other than diacritics (or features at the word level), rather than learning these features individually.",
"Zalmout and Habash (2019a) obtained an additional boost in performance (0.3% improvement over ours) when they add a dialect variant of Arabic in the learning process, sharing information between both languages.",
"Alqahtani and Diab (2019a) provides comparable performance to ALL and better performance on some task combinations in terms of WER on all and OOV words.",
"The difference between their model and our BASE model is the addition of a CRF (Conditional Random Fields) layer which incorporate dependencies in the output space at the cost of model's computational efficiency (memory and speed).",
"Zalmout and Habash (2019b) provides the current state-of-the-art performance in which they build a morphological disambiguation framework in Arabic similar to (Zalmout and Habash, 2017, 2019a).",
"They reported their scores based on the development set which was not used for tuning.",
"In the development set, they obtained 93.9% which significantly outperforms our best model (ALL) by 1.4%.",
"Our approach is similar to (Zalmout and Habash, 2019b).",
"We both follow WordToChar as well as CharToWord input representations discussed in Section 3.1, regardless of the specifics.",
"Furthermore, we both consider the morphological outputs as features in our diacritic restoration model.",
"In Zalmout and Habash (2019b), morphological feature space that are considered is larger, making use of all morphological features in Arabic.",
"Furthermore, Zalmout and Habash (2019b) use sequence-to-sequence modeling rather than sequence classification as ours.",
"Unlike Zalmout and Habash (2019b), our model is more flexible allowing additional tasks to be added when sufficient resources are available.",
"We believe that neither the underlying architecture nor the consideration of all possible features were the crucial factor that led to the significant reduction in WER performance.",
"Rather, morphological analyzers is crucial in such significant improvement.",
"As a matter of fact, in Zalmout and Habash (2019b), the performance significantly drops to 7.2 when they, similar to our approach, take the highest probabilistic value as a solution.",
"Thus, we believe that the use of morphological analyzers enforces valid word composition in the language and filter out invalid words (a side effect of using characters as input representation).",
"This also justifies the significant improvement on OOV words obtained by (Zalmout and Habash, 2017).",
"Thus, we believe that a global knowledge of words and internal constraints within words are captured.",
"Auxiliary tasks: We compared the base model of the auxiliary tasks to the state-of-the-art (SOTA).",
"For SEG, BiLSTM model has comparable performance to that in (Zalmout and Habash, 2017) (SEG yields 99.88% F1 compared to SOTA 99.6%).",
"For POS, we use a shallower tag set (16 number of tags compared to 70) than typically used in previous models hence we do not have a valid comparison set.",
"For SYN, we compare our results with (Hifny, 2018) which uses a hybrid network of BiLSTM and Maximum Entropy to solve syntactic diacritization.",
"The SYN yields results comparable to SOTA (our model performs 94.22 vs. SOTA 94.70).",
"The problem of diacritization has been addressed using classical machine learning approaches (e.g. Maximum Entropy and Support Vector Machine) (Zitouni and Sarikaya, 2009; Pasha et al., 2014) or neural based approaches for different languages that include diacritics such as Arabic, Vietnamese, and Yoruba.",
"Neural based approaches yield state-of-the-art performance for diacritic restoration by using Bidirectional LSTM or temporal convolutional networks (Zalmout and Habash, 2017; Orife, 2018; Alqahtani et al., 2019; Alqahtani and Diab, 2019a).",
"Arabic syntactic diacritization has been consistently reported to be difficult, degrading the performance of full diacritic restoration (Zitouni et al., 2006; Habash et al., 2007; Said et al., 2013; Shaalan et al., 2009; Shahrour et al., 2015; Dar-wish et al., 2017).",
"To improve the performance of syntactic diacritization or full diacritic restoration in general, previous studies followed different approaches.",
"Some studies separate lexical from syntactic diacritization (Shaalan et al., 2009; Dar-wish et al., 2017).",
"Other studies consider additional linguistic features such as POS tags and word segmentation (i.e. tokens or morphemes) (Ananthakr-ishnan et al., 2005; Zitouni et al., 2006; Zitouni and Sarikaya, 2009; Shaalan et al., 2009).",
"Hifny (2018) addresses syntactic diacritization by building BiLSTM model in which its input embeddings are augmented with manually generated features of context, POS tags, and word segments.",
"Rashwan et al. (2015) use deep belief network to build a diacritization model for Arabic that focuses on improving syntactic diacritization and build sub-classifiers based on the analysis of a confusion matrix and POS tags.",
"Regarding incorporating linguistic features into the model, previous studies have either used morphological features as a preprocessing step or as a ranking step for building diacritic restoration models.",
"As a preprocessing step, the words are converted to their constituents (e.g. morphemes, lemmas, or n -grams) and then diacritic restoration models are built on top of that (Ananthakrishnan et al., 2005; Alqahtani and Diab, 2019b).",
"Anan-thakrishnan et al. (2005) use POS tags to improve diacritic restoration at the syntax level assuming that POS tags are known at inference time.",
"As a ranking procedure, all possible analyses of words are generated and then the most probable analysis is chosen (Pasha et al., 2014; Zalmout and Habash, 2017, 2019a,b).",
"Zalmout and Habash (2017) develop a morphological disambiguation model to determine Arabic morphological features including diacritization.",
"They train the model using BiLSTM and consult with a LSTM-based language model as well as other morphological features to rank and score the output analysis.",
"Similar methodology can be found in (Pasha et al., 2014) but using Support Vector Machines.",
"This methodology shows better performance on out of vocabulary (OOV) words compared to pure character models.",
"We present a diacritic restoration joint model that considers the output distributions for different related tasks to improve the performance of diacritic restoration.",
"Our results shows statistically significant improvements across all evaluation metrics.",
"This shows the importance of considering additional linguistic information at morphological and/or sentence levels.",
"Including semantic information through pretrained word embeddings within the diacritic restoration model also helped boosting the diacritic restoration performance.",
"Although we apply our joint model on Arabic, this model provides a framework for other languages that include diacritics whenever resources become available.",
"Although we observed improvements in terms of generalizing beyond observed data when using the proposed linguistic features, the OOV performance is still an issue for diacritic restoration."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"result",
"abstain",
"abstain",
"method",
"objective"
] |
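The joint diacritic restoration model in the row above mixes word-level tasks (SYN, POS) and character-level tasks (DIAC, SEG) through two input representations. Below is a minimal sketch of the two compositions described in Section 3.1, assuming PyTorch; the class and variable names are invented for illustration, while the embedding and hidden sizes follow the stated settings.

```python
# Sketch of the CharToWord and WordToChar representations; all names assumed.
import torch
import torch.nn as nn

class DualRepresentations(nn.Module):
    def __init__(self, n_chars, char_dim=300, word_dim=300, hidden=250):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)  # randomly initialized
        self.char_lstm = nn.LSTM(char_dim, hidden,
                                 bidirectional=True, batch_first=True)

    def char_to_word(self, char_ids, word_emb):
        # char_ids: (words, max_chars); word_emb: (words, word_dim), pretrained.
        out, _ = self.char_lstm(self.char_emb(char_ids))
        word_from_chars = out[:, -1, :]  # final BiLSTM state summarizes the word
        # Word-level tasks see pretrained + character-composed vectors.
        return torch.cat([word_emb, word_from_chars], dim=-1)

    def word_to_char(self, char_ids, word_emb):
        # Character-level tasks see each character concatenated with the
        # embedding of the word it belongs to.
        chars = self.char_emb(char_ids)               # (words, max_chars, dim)
        words = word_emb.unsqueeze(1).expand(-1, chars.size(1), -1)
        return torch.cat([chars, words], dim=-1)
```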
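The evaluation metrics in the same row (WER, DER, LER) count diacritic errors at the word, character, and word-final positions, respectively. A small self-contained sketch follows; the data layout (each word as a list of per-character diacritic labels) is an assumption, not the paper's code.

```python
# Sketch of WER / DER / LER over aligned gold and predicted diacritic labels.
def der(gold_words, pred_words):
    """Fraction of characters with an incorrect diacritic."""
    pairs = [(g, p) for gw, pw in zip(gold_words, pred_words)
             for g, p in zip(gw, pw)]
    return sum(g != p for g, p in pairs) / len(pairs)

def wer(gold_words, pred_words):
    """Fraction of words containing at least one diacritic error."""
    wrong = sum(gw != pw for gw, pw in zip(gold_words, pred_words))
    return wrong / len(gold_words)

def ler(gold_words, pred_words):
    """Fraction of words whose final (syntactic) diacritic is wrong."""
    wrong = sum(gw[-1] != pw[-1] for gw, pw in zip(gold_words, pred_words))
    return wrong / len(gold_words)
```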
[
"This work revisits the task of training sequence tagging models with limited resources using transfer learning.",
"We investigate several proposed approaches introduced in recent works and suggest a new loss that relies on sentence reconstruction from normalized embeddings.",
"Specifically, our method demonstrates how by adding a decoding layer for sentence reconstruction, we can improve the performance of various baselines.",
"We show improved results on the CoNLL02 NER and UD 1.2 POS datasets and demonstrate the power of the method for transfer learning with low-resources achieving 0.6 F1 score in Dutch using only one sample from it.",
"The code is publicly available at: https://github.com/tperl/Low-Resource-Sequence-Tagging-using-Sentence-Reconstruction.",
"The increased popularity of deep learning led to a giant leap in natural language processing (NLP).",
"Tasks such as neural machine translation (Lample et al., 2018a; Gu et al., 2018), sentiment analysis (Patro et al., 2018) and question answering (Ran et al., 2019) achieved impressive results.",
"A major limitation of deep learning is the need for huge amounts of training data.",
"Thus, when dealing with low resource datasets, transfer learning is a common solution.",
"A popular approach in NLP is training a language model for getting a good context-based word representation.",
"Language models such as Bert (Devlin et al., 2019), Roberta (Liu et al., 2019b), ELMO (Peters et al., 2018), and XL-net (Yang et al., 2019) that are trained on very large corpora, are used by the community for different NLP tasks.",
"This transfer-learning across tasks within the same language relies on fine-tuning a language model for a specific task (Sun et al., 2019).",
"This work focuses on transfer learning between different languages.",
"Some approaches have been suggested for it.",
"Yang et al. (2017) have proposed using joint training with a large dataset as a source and a small dataset as a target.",
"Zou et al. (2018) have shown how by aligning sentence representations using an adversarial loss, they were able to transfer knowledge between two languages.",
"Contribution.",
"This work analyzes the contribution of various techniques proposed for transfer learning between languages for the task of sequence tagging.",
"In particular, we evaluate joint training and adversarial learning.",
"Moreover, we propose a novel regularization technique, namely, we add a reconstruction loss with (cid:96) 2 normalization.",
"We show that the addition of this loss improves the performance of various sequence tagging tasks when doing transfer learning.",
"Our strategy shows promising results for training models without being language-specific, which saves expensive labeling time.",
"An important characteristic of our technique is its ability to provide good tagging in few-shot learning (Fei-Fei et al., 2006).",
"We achieve this result by adding to the small dataset, a larger corpus corresponding to another language.",
"Our proposed loss improves the transfer of information and thus the tagging accuracy.",
"We demonstrate our approach on the ConLL02/03 and the Universal Dependency (UD) 1.2 datasets.",
"Solving sequence tagging tasks, such as named entity recognition (NER) or part of speech (POS), using statistical methods has been studied for more than two decades.",
"Early solutions used hidden markov models (HMMs) (Bikel et al., 1997), support-vector machines (SVMs) (Isozaki and Kazawa, 2002) and conditional random fields (CRF, Lafferty et al., 2001), we focus on a more Figure 1: Proposed Method.",
"modern approach using common deep learning-based approaches that significantly improve the performance.",
"Collobert et al. (2011) demonstrated the great potential of using neural networks for various NER tasks.",
"Huang et al. (2015) proposed the Bidirectional-LSTM (Bi-LSTM) CRF and Lample et al. (2016) presented a promising architecture for NER by adding character embeddings to its input.",
"Peng and Dredze (2016) used recurrent neural networks (RNN) for NER and word segmentation in Chinese.",
"In the context of transfer learning for sequence tagging, Yang et al. (2017) showed that by using hierarchical RNNs and joint training, it is possible to transfer knowledge between domains of different corpora and different languages.",
"Cao et al. (2018) exhibited that using self-attention and an adversarial loss, they were able to perform transfer learning between two different domains in Chinese.",
"Yadav et al. (2018) showed that Deep Affix Features is beneficial to NER.",
"Jiang et al. (2019) used DARTS neural architecture search (Liu et al., 2019a) to improve NER.",
"Lin et al. (2018) showed that by using multi-lingual multi-task architecture they were able to get interesting results.",
"Devlin et al. (2019) introduced a new representation scheme for NLP tasks achieving impressive NER results.",
"Clark et al. (2018) proposed a new method for getting improved representations of Bi-LSTM of sentence encoders using labeled and unlabeled data.",
"Barone and Valerio (2016) showed that using an adversarial loss (Goodfellow et al., 2014) may lead to a better word representation.",
"In addition, Adel et al. (2018) used an adversarial loss for getting better sentence representation.",
"Tzeng et al. (2017) demonstrated how by aligning deep representations using an adversarial loss, they transfer knowledge from one domain to another.",
"Lample et al. (2018a) exhibited this approach for unsupervised machine translation.",
"Inspired by these strategies, we propose a method for transfer learning between different languages for sequence tagging.",
"Specifically, we focus on sentence representation alignment.",
"This section describes our sentence reconstruction approach for improving low resource sequence tagging tasks.",
"Many successful sequence tagging network models are composed of an encoder-decoder structure.",
"We suggest adding to them a new decoder branch comprised of a fully convolutional network (FCN) and an (cid:96) 2 loss term for reconstructing the word embeddings of the input sentence.",
"To analyze the effectiveness of our proposed technique, we evaluate its contribution compared to other recently proposed strategies for transfer learning across languages: weight sharing and adversarial alignment.",
"For completeness, we briefly Baseline L2 TL (TL)+(L2) (TL) + Adversarial (TL) + (L2)+ Adversarial (Yang et al., 2017) English 89.1 89.3 89.6 89.9 89.5 90.1 91.26 Spanish 85.84 86 86.1 86.2 84.8 86.3 85.77 Dutch 86.67 87.18 87.1 87.62 85.7 87.64 85.19 English (0.1) 83.1 82.7 85.5 86.1 85.8 86.5 86.5 Spanish (0.1) 76.4 76.47 78.7 78.5 77.8 77.8 76.5 Dutch (0.1) 74.8 75.8 79 80 77.9 79.5 English (0.01) 44.75 44.8 73.8 74.17 73.8 74.3 72.6 Spanish (0.01) 33.3 43.6 63.3 64.98 65.8 67.87 60.4 Dutch (0.01) 40.7 42.9 62.5 64.75 68.56 68.93 Table 1: Ablation results on NER ConLL02/03 compared to (Yang et al., 2017), using sentence reconstruction (L2), using weight sharing based transfer learning (TL), using the adversarial loss and combining them all together.",
"describe the baseline we are using and each of these methods.",
"Then, we present our new auxiliary loss.",
"Our base model follows Lample et al. (2016).",
"Specifically, we run an LSTM (Hochreiter and Schmidhuber, 1997) on the character tokens, concatenate the output to the word embeddings and run an additional LSTM.",
"We then feed its output, denoted z , to another LSTM with a CRF at its end, which produces the sequence tagging, whether it is POS or NER.",
"See Fig. 4 for our baseline.",
"Yang et al. (2017) have shown that sharing weights between architectures that correspond to different languages leads to transferring knowledge between them.",
"Our joint training model is inspired by their Cross Lingual Transfer with the difference that we use a single CRF that is applied to the output of both LSTMs.",
"See Fig. 3 for a schematic of the our modified version.",
"The baseline described above essentially learns a sentence hidden representation, z .",
"For aligning representations from different languages, we feed this feature vector to a 1D CNN which encodes it and outputs a softmax class and acts as a discriminator.",
"We add a switch layer in the input ES NL EN (Gillick et al., 2015) 82.95 82.84 86.50 (Luo et al., 2015) -91.20 (Lample et al., 2016) 85.75 81.74 90.94 (Yang et al., 2017) 85.77 85.19 91.26 (Lin et al., 2018) 85.88 86.55 (Yadav et al., 2018) 87.26 87.54 90.86 (Baevski et al., 2019) -93.5 (Jiang et al., 2019) -93.47 (Strakov a et al., 2019) -93.38 Our baseline 85.84 86.67 89 Our transfer 86.3 87.64 90.1 Table 2: Method results F1 score on CoNLL 2002/2003 compared to state of the art.",
"that arbitrates between feeding sentences from the source and target language (each uses its respective word embedding).",
"We train the discriminator on the normalized hidden representations generated by each sentence Z = z/ || z || 2 .",
"Thus, given the possible labels l i , l j of the predicted language, for an input with label l i / l j , the discriminator will try to predict l i / l j .",
"The generator will try to fool the discriminator and cause it to predict the opposite ( l j / l i ).",
"The adversarial loss L adv is the sum of the discriminator loss LD and the generator loss LG as follows (Lample et al., 2018a): LD ( D , Z | D ) = E ( s i ,l i ) [log p D ( l i | e ( s i , l i )] , LG ( enc , Z | D ) = E ( s i ,l i ) [log p D ( l j | e ( s i , l i )] , L adv = LG + LD , (1) where s i is the input sentence, e ( ) the encoder function, and D and enc are the discriminator's and the encoder's parameters, respectively.",
"An adversarial training scheme can still reach trivial representations, meaning the generator produces sentence representations that do not contain meaningful information of the original sentences.",
"There-ES NL RO (Heinzerling and Strube, 2019) 96.5 93.8 89.7 (Plank et al., 2016) 95.74 93.3 (Yasunaga et al., 2018) 96.44 93.09 91.46 Ours baseline 96 93.1 91.45 Ours transfer 96.4 93.8 93.04 Table 3: Method results accuracy on UD 1.2 Part of speech (POS) compared to the state-of-the-art.",
"We do so by applying on the hidden representation z a 1D FCN with 5 layers, convolution kernels of size 3 and the ReLU non-linearity.",
"Notice that z is a sequence of embedding vectors.",
"Thus, the output of the FCN is also a sequence of vectors, where each of them tries to estimate the embedding of the corresponding word in the input sentence.",
"If the generated sentence is of a different length than the input, we use the padding embedding vector to make them even.",
"We train this decoder together with the encoder in the network using the following reconstruction loss L auto ( enc , dec ) = (cid:88) i (cid:107) e i e i (cid:107) 22 , (2) where dec are the FCN parameters, e i is the embedding of the i th word in the input sentence and e i is the corresponding reconstructed embedding, which we normalize.",
"The reconstruction loss acts as a regularization term, which improves results also when used by itself (see the ablation study).",
"We would like to emphasize the importance of normalizing the representing vectors.",
"Its motivation is in the fact that transforming the vectors onto a unit sphere causes the model to learn to maximize Baseline Our method Arabic 66.05 1.29 76.82 0.24 Bulgarian 52.41 1.46 84.86 0.30 Estonian 47.22 0.48 56.10 0.16 Finnish 49.00 1.45 79.91 0.39 French 63.34 3.10 87.19 0.37 German 77.10 1.36 87.66 0.30 Greek 60.43 0.80 87.66 0.30 Hebrew 65.13 2.11 85.50 0.75 Italian 63.46 1.31 88.88 0.71 Norwegian 78.55 0.62 91.06 0.31 Polish 52.05 0.61 80.84 0.47 Slovenian 53.50 0.37 83.93 0.77 Spanish 83.65 0.16 90.60 0.04 Table 4: Low resource testing for part of speech on UD 1.2 dataset.",
"Figure 1 presents a model with all the discussed regularization techniques.",
"Notice that each component in this model can be applied separately.",
"For example, we may apply our new reconstruction loss alone, or as an additional branch to the adversarial branch with or without weight sharing.",
"We follow the experiments of Yang et al. (2017) to evaluate our approach for transfer learning between languages.",
"We compare our proposed regularization to joint training and the adversarial loss.",
"We start by evaluating the impact of each strategy alone, and then gradually combine the losses to each other.",
"Our source-target pairs are built of English and a selected target language (Spanish, Dutch or Romanian).",
"In NER, we test both directions of transfer learning, i.e English to Spanish and Spanish to English.",
"In POS, English is always the source language.",
"We focus on using word embeddings that are aligned across different languages, specifically MUSE (Lample et al., 2018b).",
"Our motivation for choosing it is to leverage the word alignment, which makes the impact of the sentence alignment clearer.",
"Loss analysis.",
"For understanding the impact of our approach, we test it with and without the other techniques for transfer learning between languages.",
"We also compare to each of them being applied separately.",
"Table 1 summarizes our results.",
"Notice that our proposed loss improves the performance when combined with other methods and even when being applied alone.",
"Also, we have found that the improvement gained by the adversarial loss is ES NL EN (Yang et al., 2017) 16 -40.1 Lin et al. (2018) 60 50 Our baseline 22 33 7.6 Our transfer 59.5 61 43.1 Table 5: F1 scores on CoNLL 2002/2003 for few shot training (0.001 of the data) compared to (Yang et al., 2017).",
"marginal and therefore, we do not use it in the final model used in the next experiments, which consist of only weight sharing and our proposed (cid:96) 2 reconstruction loss.",
"Results.",
"We evaluate our model on three tasks:",
"(i) NER transfer learning compared to leading methods;",
"(ii) NER transfer learning on a subset of the target data; and",
"(iii) POS transfer.",
"We achieve competitive results on Conll2002 Dutch/Spanish.",
"For testing how competitive our approach is, we also compare to state-of-the-art methods.",
"Moreover, we perform experiments on subsets of the data similar to Yang et al. (2017).",
"These experiments exhibit the advantage of our model, especially when training on scarce data.",
"For example, we show that using only nine samples in Spanish (0.001 of the data) we get an F1 score of 0.59 (com-pared to the 0.16 transfer learning result of Yang et al. (2017)).",
"Table 2 shows the NER results, where we get competitve results in ConLL02 and improve our baseline in English ConLL03.",
"Table 4 shows how our method generalizes well for low resource transfer learning in POS.",
"Notice the great improvement between our baseline as shown in Fig. 4 and our method shown in Fig. 1.",
"Table 3 demonstrates the performance on POS, where we get the largest improvement on Romanian, which is a low resource language (with fewer labels).",
"Table 5 exhibits the Language Baseline Method Spanish 0 57 Dutch 0 55 Table 7: F1 scores on CoNLL 2002 for zero shot training.",
"advantage of our regularization for few-shot learning compared to Yang et al. (2017) and Lin et al. (2018).",
"Finally, Table 6 and Table 7 presents the results of our approach for one-shot learning compared to Lin et al. (2018) and zero-shot learning.",
"A major improvement compared to our baseline is apparent also here.",
"We found for the case of few-shot and one-shot learning that it is better to share the base BiLSTM because it does not see enough examples to train.",
"This work demonstrates the power of sentence reconstruction for transferring knowledge from a rich dataset to a sparse one.",
"It achieves competitive results with a relatively simple baseline.",
"We also show its strength in few-shot and one-shot learning.",
"We believe that using the proposed sentence (cid:96) 2 reconstruction may contribute as an auxiliary loss for other tasks.",
"Also, we have demonstrated our model with MUSE, since it provides word alignment across languages.",
"Yet, our approach can be applied also with other more recent language models that have stronger context-based embeddings."
] | [
"method",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"result",
"method",
"result",
"objective",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"abstain",
"result",
"objective",
"objective",
"method"
] |